So far I've only really used normal servers and clients, but now customers are asking about terminal servers, and I'd like to know the pros and cons of using them instead of an "old-fashioned" client-server network.
Some things I can guess: easier administration (you don't need to install/update Office and other software on 20 computers, only on the server).
Easier backup (no need to backup client computers).
And I'd guess it would be hard (or even impossible) to connect and use local hardware (like USB devices) with a terminal server?
We've used both environments (we're a public school system). When I started here we were running several terminal servers for teachers and students; now we run fat clients and role-specific servers (no terminal servers for users).
You have fewer machines to back up and monitor. Desktops are easily swapped and can be run on the cheap. All you need is a thin client or a system capable of running the terminal services client; you hardly need a gig of RAM and a hundred-gig hard drive for that. We were running PII and PIII systems with barely enough RAM to run Windows 98 comfortably as clients; students were sticking gum in them and jamming them with papers, and when a client died, we just swapped it out with another cheap spare, no special software or custom configuration necessary.
Cons? You have twenty users on a system. If the system reboots, dies, or has a hardware issue, twenty users are kicked offline, instantly. And users don't understand terminals to begin with. They don't care why something's weird; they just know something isn't working. One switch goes wonky or one server goes wonky and you've taken out a large number of users.
You need reliable network infrastructure for this to work. The users need a good path from desktop to server. If the client machine, a switch, a cable, or the server dies, their whole computing platform dies. With fat clients, you could have a server failure and users could still do something else (i.e., the mail server is down, but users can still work from their home directories or local files and just be irked that mail is down, rather than losing the whole system).
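One practical consequence is that you end up monitoring the desktop-to-server path itself, not just the server. A minimal sketch of a reachability probe you could run from a client site (Python; the default RDP port 3389 is an assumption, swap in whatever your terminal service listens on):

```python
import socket

def rdp_reachable(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the terminal server succeeds
    within `timeout` seconds; False on refusal, timeout, or DNS failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A probe like this only proves the TCP path is up; it won't catch an overloaded session host, but it does distinguish "server problem" from "switch/cable problem" quickly.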
There is a layer of complexity added. You brought up USB; add things like sound, etc. Again, I think this has improved since our time, but it's still another place where things can go wrong, and it's not like sitting down at a fat client to troubleshoot. You're trying to redirect devices over the network on a server that is handling connections from another dozen users. Weird stuff happens. In IT, intentionally introducing complexity is bad.
We had to move because over time we were getting more and more people who HAD to have something installed that other people didn't need, and for licensing it could only be used by a small group. Or the software was made with Macromedia Director (ugh) and didn't "quite" work right (graphics refresh was "off", animations were choppy...). Or we had people running software that was just bloated and bogged down the servers. Or we had people who had to use CDs for a presentation or material and couldn't access them via the terminals (again, this may have improved). Eventually we were putting in special workstations for certain tasks (log in once to run Photoshop, use the terminal shortcut to get to Office...) and finally it was too much of a burden to dual-support labs that ran XYZ and terminals that supported ABC. We had too many diverse needs.
On the cost side, the equation is not as straightforward as it might seem. You still have to pay licences for the clients, and even if you now only have one server to manage instead of a bunch of workstations, the job is more difficult and requires more qualifications...
If you have applications that weren't written for high-latency client-server situations, running them on a regular fat client in a remote office against a central server may result in severe performance issues.
One example would be running an Access database form from a file share that isn't locally replicated. Switching to a central terminal server environment in that case will boost application performance, since the client side then runs on a high-performance terminal server with a high-capacity, low-latency connection to the application server or resource. Many older line-of-business applications built with similar technology respond much quicker if the client-side part runs close to the server side.
And as long as the load isn't sustained and ruining the experience for other users, a number of older applications can actually respond a lot faster for end users, not only due to lower latency to resources but because there's room for more burst performance on a high-speed server (screen refresh may not be fluid, but fluid animation and a quick result on, say, a customer search form are two very different things). Much as Chopper said, going TS is sometimes an easy way to fix old stuff. There are other ways to do it, like replicating file resources, using BranchCache functionality, or switching to web applications - or even siloing the individual applications that would gain from this onto terminal servers and serving them to fat clients as seamless applications.
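The latency effect is easy to see with back-of-the-envelope arithmetic: a chatty application pays one network round trip per small query, so total time is dominated by round trips times RTT. A sketch in Python (the round-trip count and RTT figures below are illustrative assumptions, not measurements):

```python
def total_time_ms(round_trips: int, rtt_ms: float, server_ms: float = 0.0) -> float:
    """Wall-clock time for a chatty operation: one network round trip
    per small query, plus any server-side processing time."""
    return round_trips * rtt_ms + server_ms

# Hypothetical form load that issues 200 small queries:
wan = total_time_ms(200, 40.0)  # fat client in a branch office, ~40 ms RTT
lan = total_time_ms(200, 0.5)   # terminal server next to the data, ~0.5 ms RTT
# wan == 8000.0 ms, lan == 100.0 ms
```

The 80x difference comes entirely from moving the chatty client-side code next to the data; the application itself is unchanged, which is why TS can "fix" old software without touching it.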
Serving terminal server sessions to users on the road can also provide performance boosts. Trying to use a laptop and VPN on a train, over mobile broadband, to access a shared Access form or an always-online line-of-business client-server app on a central server will most likely be infuriatingly slow and unreliable. Replace it with a terminal session and it will probably be very zippy as long as the connection doesn't die completely. And if it does die, the session state is preserved when the connection resumes (as long as the configured auto-logoff thresholds aren't met) and the user can continue where they left off.
A thin client is a computer that runs from resources stored on a central server instead of a local hard drive. Thin clients work by connecting remotely to a server-based computing environment where most applications, sensitive data, and memory are stored.
Thin clients can also be simpler to manage, since upgrades, security policies, and more can be managed in the data center instead of on the endpoint machines. This leads to less downtime, increasing productivity among IT staff as well as endpoint machine users.
With shared terminal services, all users at thin client stations share a server-based operating system and applications. Users of a shared services thin client are limited to simple tasks on their machine like creating folders, as well as running IT-approved applications.
A browser-based approach to using thin clients means that an ordinary device connected to the internet carries out its application functions within a web browser instead of on a remote server. Data processing is done on the thin client machine, but software and data are retrieved from the network.
Hello - Our current EG remote users are using the client over a VPN, but we're finding it isn't as fast as it should be. I've seen other discussions here in line with the steps we are taking to set up a remote desktop environment to keep the client local to the server. Is anyone using Windows terminal server, and do you have any sizing recommendations for the remote desktop server? Or do you know of any specific requirements? We're moving forward and it's a small group, but instead of guessing RAM, cores, etc. for 10-15 users, I was trying to find a guideline of specs needed per user, or someone already doing it who could share what works for them. Thanks!
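There's no universal per-user figure, but rough session-host capacity estimates are usually built the same way: an OS baseline, a per-user working set, and headroom on top. A sketch with purely illustrative numbers (the 4 GB baseline and 2 GB per user below are assumptions to show the arithmetic; measure an actual session running your EG client and substitute real figures):

```python
def session_host_ram_gb(users: int,
                        base_os_gb: float = 4.0,
                        per_user_gb: float = 2.0,
                        headroom: float = 1.25) -> float:
    """Rough RAM estimate for a remote desktop session host:
    OS baseline + per-user working set, scaled by a headroom factor."""
    return (base_os_gb + users * per_user_gb) * headroom

session_host_ram_gb(15)  # (4 + 15*2) * 1.25 = 42.5 GB
```

The same baseline-plus-per-user shape works for CPU cores; the per-user numbers are what you have to establish empirically for your workload.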
A terminal server, also known as a communication server, is a hardware device or server that provides terminals -- such as PCs, printers and other devices -- with a common connection point to a local area network (LAN) or wide area network (WAN). The terminals connect to the terminal server from their RS-232C or RS-423 serial ports. The other side of the terminal server connects through network interface cards (NICs) to a LAN, usually Ethernet or token ring; through modems to the dial-in/out WAN; or to an X.25 network or a 3270 gateway. Different makes of terminal servers offer different kinds of interconnection, and some can be ordered in different configurations based on customer needs.
Some terminal servers can be shared by hundreds of terminals. The terminals can be PCs, terminals that emulate 3270s, printers, or other devices with the RS-232/423 interface. Terminals can use TCP/IP for a Telnet connection to a host, LAT to a Digital Equipment Corporation (DEC) host, or TN3270 for a Telnet connection to an IBM host with 3270 applications. With some terminal servers, a given terminal user can have multiple host connections to different kinds of host operating systems, such as UNIX, IBM and DEC.
When a user needs to interact with a session through keyboard, mouse or touch inputs, those inputs are made within the RDP client. The RDP client then transmits the inputs to the terminal server for processing. The terminal server is also responsible for performing all graphical rendering, although it is the RDP client that actually makes the session visible to the user.
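That division of labor can be sketched as a toy model (plain Python, not the actual RDP protocol): the client holds no application state, it only forwards input and displays whatever frame the server renders.

```python
from dataclasses import dataclass

@dataclass
class TerminalServer:
    """Toy model: the server owns all application state and rendering."""
    text: str = ""

    def handle_input(self, key: str) -> str:
        self.text += key      # process the input server-side
        return self.render()  # send back the updated "frame"

    def render(self) -> str:
        return f"[screen] {self.text}"

@dataclass
class RdpClient:
    """Toy model: the client only transmits input and shows frames."""
    server: TerminalServer
    frame: str = ""

    def press(self, key: str) -> None:
        # forward the keystroke, receive the rendered frame for display
        self.frame = self.server.handle_input(key)

client = RdpClient(TerminalServer())
for key in "hi":
    client.press(key)
# client.frame == "[screen] hi"
```

This is why thin clients can be so modest in hardware terms: the client's only jobs are shipping input events one way and pixels the other.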