In-Depth
The Changing Challenges of End-User Computing
As fragmented endpoint devices meet increasingly fragmented data, applications, services and storage, new ways of thinking about how to tie it all together are emerging.
- By Dan Kusnetzky
- 08/14/2017
Working in a cloud computing environment can challenge previous notions of end-user computing (EUC) and create chaos for enterprise IT planners and administrators. There are so many permutations and combinations of computing environments, devices, OSes and applications to support that enterprises often find themselves walking a tightrope over an abyss. Some have tried limiting support to specific devices or OSes, only to find that their end-user community rebelled. End users and customers want to choose the platforms, devices and applications themselves.
Enterprise IT simply doesn't have the funding, resources or time to develop and execute comprehensive test plans for each and every combination of EUC environment it now faces. By the time one environment has been completely tested, something in the stack of software and hardware changes: new devices that must be tested, new versions of the OS or new applications. Once on this merry-go-round, enterprise IT wants to get off and use a "wayback machine" to take a one-way trip to the past.
If we step back from today's challenges for a moment to examine how EUC was handled in the past, it's easy to see why IT wants to stop for a breather.
Time-Sharing
In the beginning of interactive computing, nearly all of an application's components resided back on the server. End users accessed the entire computing environment using a block-mode or character-cell terminal. The UI, application logic, data management software and storage were all under the control of a single OS executing on a single server. This was called "time-sharing" because the processing time and memory of a single processor were shared among all users and, in many cases, the background batch processing as well. This environment was fairly straightforward to manage. After all, everything resided in one place, the same OS was supporting all tasks and one set of tools could monitor and manage the entire computing environment.
Enter the PC
As PCs became economically available, applications and data started to be hosted in several places. Some applications resided completely on these new PCs, and some resided completely back in the datacenter.
Enterprise applications continued to reside on a server in the enterprise datacenter, and end users accessed these applications through a terminal. More advanced users installed terminal emulators on their PCs so that they could use applications such as word processing locally and still access the enterprise applications executing on the server.
Enterprise IT was still able to deal with enterprise applications using a single set of tools, and could monitor the entire computing environment fairly easily. They often didn't consider the applications and data residing on the PCs as being their responsibility.
Over time, as PCs became more powerful, smaller and less costly, enterprise applications were increasingly accessed using PCs and more and more enterprise data was hosted on remote PCs. Many of these PCs had become "luggable," and thus could be used in places other than enterprise offices. This meant that enterprise applications were being accessed from enterprise offices as before, but something new was added: end users were accessing enterprise applications and data from hotels, airports and customers' offices.
Enterprise IT found it increasingly difficult to monitor where and how critical applications and data were being used, and who was using them; enterprise security became a greater concern as a result.
Distributed Computing and Disaggregation
As PCs and their laptop cousins became more powerful and less costly, end users started using them to support more and more applications.
Enterprise IT developers saw that there was a positive side to this transition, in that the enterprise applications could be segmented. Part of the application could be hosted on the remote PCs, and the remainder could stay back in the datacenter. They liked this because PC computing was less costly than server computing, and also because the expensive servers could be used to support more end users.
At first, only the UI moved to the PCs. Later, the UI and part of the application processing moved. Still later, the UI and the majority of the application moved out to live next to the end users.
As networks became faster and less costly, some IT developers segmented applications more finely. The UI and some application logic remained out by the end user, but now, what was executing on the server could be segmented and portions of the work hosted on smaller, less-expensive (but still fast) PC servers.
The back-end processing might have been segmented so that application components, such as credit-card processing, managing customer data and other important functions could each be hosted on their own PC server, close to the EUC community being supported, while the back-end mainframe continued to host enterprise batch operations and databases. Today we describe this as "moving the processing to the edge."
Enterprise IT liked the increased processing power that could be applied to a task, but soon complained that it was now supporting end-user devices, departmental and business unit servers, and enterprise servers. Each of these devices was likely to be using a different OS and different programming languages, and to have its own tools to monitor and manage the work it was doing.
The industry started to hear pundits telling enterprises that increased complexity would increase overall costs and offer greater opportunities for human or machine error to disrupt enterprise processing; they suggested that IT planners consider other ways to develop and deploy enterprise applications.
Complexity Reigns Supreme
We're now living through trends that are forcing some of these applications back onto the server. This time, however, the server is in a cloud services provider's datacenter, and the device next to the user might be a smartphone, tablet, laptop or, yes, even a PC. The actual application, its data and other components are hosted elsewhere. The application itself has been broken down into services and microservices, each of which might be hosted on separate machines somewhere in the network.
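To make that disaggregation concrete, here's a minimal sketch, in Python using only the standard library, of an application split into two tiny services, with a front-end piece that does nothing but call them and stitch their answers together. The service names, ports and payloads are made up for illustration; a real deployment would host each service on its own machine or container rather than in local threads.

```python
# Minimal sketch (hypothetical service names and ports) of an application
# split into small services, using only the Python standard library.
# Each "service" answers on its own port; the front end only aggregates.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_service(payload):
    """Build a tiny HTTP handler that always returns the given JSON payload."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(payload).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):
            pass  # keep the demo quiet
    return Handler

def start(port, payload):
    """Run one stand-in 'back-end component' in a background thread."""
    server = HTTPServer(("127.0.0.1", port), make_service(payload))
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Two hypothetical back-end components, each on its own "server."
start(8001, {"service": "payments", "status": "authorized"})
start(8002, {"service": "customers", "name": "Example Corp"})

def front_end(order_id):
    """The piece living near the end user: it only calls remote services."""
    payment = json.load(urllib.request.urlopen("http://127.0.0.1:8001/"))
    customer = json.load(urllib.request.urlopen("http://127.0.0.1:8002/"))
    return {"order": order_id, "payment": payment, "customer": customer}

if __name__ == "__main__":
    print(front_end(42))
```

The point of the exercise is that each piece can now be hosted, scaled and moved independently, which is exactly where the complexity described next comes from.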
While the disaggregation of the application made it possible for the enterprise to improve performance by applying more processing power and improve application resilience through redundancy, the levels of complexity were getting out of hand.
Throw in the fact that many of the layers of technology may no longer reside in the enterprise's own datacenter or be managed by enterprise IT, and enterprise IT planners soon began to think that maybe they had gone too far in breaking things down into components.
Re-Aggregation
The perceived chaotic computing environment has forced many enterprises to consider how they can regain control. IT administrators want to be able to monitor and control applications. They now need to know where the data resides in order to comply with government regulations, and they need to be able to assure high levels of application reliability. Application and data security has become extremely important.
These forces, combined with a relentless need to reduce IT costs, have driven enterprises to start bringing functions back together into a single cabinet again so that they can be managed and controlled as they were in the past.
Enter Virtualization
As is typical in the IT industry, new buzzwords and catchphrases were announced so that it appeared we were all moving forward, rather than trying to recapture the past. We've seen vendors use terms such as "hyper-converged" to describe a systems design that brings application logic, database management, storage management and network management back into a single cabinet. How this migration was being done, however, wasn't a blast from the past; each function was being encapsulated into its own virtual environment.
Virtualized functions, regardless of whether they're access, application control, processing, networking or storage, can be agile and mobile. They're logically isolated from one another, but can still reside on systems housed in the same cabinet. They can move from place to place in the datacenter, from datacenter to datacenter, or even out into the datacenter of a cloud services provider, manually or automatically, based on performance criteria or detected outages in systems, memory, storage or networks.
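What "manually or automatically, based on performance criteria or detected outages" might look like can be boiled down to a toy placement rule. The sketch below is purely illustrative: the host names, the latency threshold and the single migrate decision are hypothetical stand-ins, not any vendor's orchestration API.

```python
# Illustrative sketch only: the "functions can move" idea reduced to a toy
# placement rule. Host names, the latency threshold and the migration
# decision are hypothetical stand-ins, not any vendor's actual API.
from dataclasses import dataclass
from typing import List

@dataclass
class Host:
    name: str
    healthy: bool
    latency_ms: float

LATENCY_LIMIT_MS = 50.0  # assumed performance criterion

def pick_host(current: Host, candidates: List[Host]) -> Host:
    """Stay put unless the current host is down or too slow; otherwise
    pick the healthy candidate with the lowest latency."""
    if current.healthy and current.latency_ms <= LATENCY_LIMIT_MS:
        return current
    usable = [h for h in candidates if h.healthy]
    return min(usable, key=lambda h: h.latency_ms) if usable else current

if __name__ == "__main__":
    on_prem = Host("on-prem-rack-3", healthy=True, latency_ms=120.0)  # degraded
    options = [
        Host("second-datacenter", healthy=True, latency_ms=35.0),
        Host("cloud-provider-east", healthy=True, latency_ms=60.0),
    ]
    target = pick_host(on_prem, options)
    if target is not on_prem:
        print(f"migrate virtual function: {on_prem.name} -> {target.name}")
```

In a real environment, monitoring and orchestration tooling would apply this kind of decision continuously, which is where the management questions later in this article come from.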
Here's how virtualization tools are driving this migration:
- Access virtualization makes it possible for enterprises to deploy the UI in a different place and on a different machine than the rest of the application. The UI could be projected onto smartphones, tablets, laptops or PCs. What OS each device supports may no longer be a concern. As long as the virtualization technology has been ported to that environment, access from that environment is straightforward. The applications now reside in either the enterprise's or a cloud services provider's datacenter, and can be managed there.
- Application virtualization has also played a role. That technology made it possible for applications to be encapsulated and delivered to remote devices, and just as quickly removed from those devices when the application was no longer needed. This approach was a bit trickier because the application had to be written to run in the target computing environment. The use of interpreted or incrementally compiled application environments, such as Java or Python, has even removed that OS restriction (a short sketch follows this list).
- Processing virtualization enables complete desktop computing environments to reside in the network and still be accessible using access virtualization (also known as virtual desktop infrastructure, or VDI).
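To illustrate the point in the application virtualization item above, here's a small Python fragment that runs unchanged on Windows, macOS or Linux because the interpreter, not the application, absorbs the platform differences. The file name and report text are just examples.

```python
# A small illustration of why interpreted environments loosen the OS tie:
# the same script runs unchanged on Windows, macOS or Linux because the
# runtime, not the application, deals with platform differences.
import platform
import tempfile
from pathlib import Path

def save_report(text: str) -> Path:
    """Write a scratch file using OS-neutral abstractions (no hard-coded
    path separators or per-platform branches needed)."""
    target = Path(tempfile.gettempdir()) / "euc_report.txt"  # example name
    target.write_text(text, encoding="utf-8")
    return target

if __name__ == "__main__":
    where = save_report(f"Generated on {platform.system()} {platform.release()}")
    print(f"Report written to {where}")
```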
Moving to the Edge
To improve performance, services providers and system vendors are discussing the next step in this process. They're increasingly suggesting that data and applications be moved to the edge of the network. The goal is to reduce latency while maintaining the centralized management and control profile that has always been part of the mainframe experience.
What does this really mean when the end user might see a UI that has been projected to the local device across the network; some of the application logic resides on a nearby server; some of the services the application invokes are distributed across the Internet; and the data being processed is somewhere else entirely? It can either mean little to nothing, because the end user just accesses the application and all the complexity is hidden; or it can mean headaches of a new order.
Managing the Environment Is an Increasing Concern
The vendors are telling end users to "pay no attention to the man behind the curtain." When things go smoothly, that suggestion can work well. If something goes wrong, however, the story can be quite different.
Finding out where the application and data are, who manages them, and how a problem can be identified and resolved is increasingly difficult in such a complex, distributed, virtualized environment. For example, an application can be nearby in the morning, move to another datacenter later to address an outage the end user is totally unaware of, and end up in the datacenter of a cloud services provider after that.
Here are some of the key challenges posed by the environment toward which we're rapidly moving:
- Who's managing the entire computing environment?
- Who's responsible for addressing issues when they're detected?
- How's the data made secure in flight from system to system, or when it finally settles down in a database somewhere?
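The in-flight half of that last question at least has a well-understood mechanism: transport encryption. Below is a minimal Python sketch (the URL is a placeholder) of a client that encrypts its traffic with TLS and refuses servers whose certificates can't be verified. Securing data at rest, and proving where it rests, is a separate and harder problem.

```python
# Minimal sketch of protecting data in flight with TLS, using only the
# Python standard library. The URL is a placeholder, not a real endpoint.
import ssl
import urllib.request

# A default context verifies the server's certificate and hostname,
# so data is encrypted and the client knows who it's talking to.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older protocols

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url, context=context) as response:
        return response.read()

if __name__ == "__main__":
    print(fetch("https://example.com/")[:60])
```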
At the moment, there are few overall standards for implementation or management. That means that EUC still has a lot of potential—and a lot of room for improvement.