A surprising number of companies have yet to fully exploit one of the most flexible software hosting strategies -- application virtualization. That's understandable, though: it may be the most difficult of these cost-saving frameworks to grasp.
Most software is either installed on the user's own system or executed from a server in a client/server model. Client/server applications, however, can have performance issues, and as configuration and registry parameters expand and possible conflicts with local libraries and middleware become more likely, local software installation becomes overly complex. For many companies, simply keeping all of their software requirements in order on the systems that run desktop applications is a job for a full support team.
Application virtualization eliminates these issues with a sort of client/server, two-step process. That's the good news. The bad news is that the process isn't easily understood. Part of the reason is that, like most terms in the IT industry, "application virtualization" is overloaded.
In its most general sense, a virtualized application would be expected to behave like a virtual server -- hosted on any platform. In practice, however, that's not possible because of middleware. Applications run on an operating system and hardware platform, but they also employ other system software tools for communications, database access and the GUI. An application written for Windows, for example, depends on a set of shared libraries called Dynamic Link Libraries (DLLs). Windows DLLs won't load natively on Linux, so running the application there means emulating that environment, and you'd most likely face both compatibility problems and considerable performance issues.
Most application virtualization today is based on a two-element model with a client or target system and a host system. The host system is used to run the application and build what is sometimes called an "application package" through a process called "sequencing". This creates a series of "machine images" of application components as they're loaded and run, in such a way that registry and configuration parameters are already processed when the image is created. The application images are then "streamed" to the client system when the application is used.
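The sequencing idea -- run the installer on a host, capture everything it changes, and bundle those changes into a package -- can be sketched in a few lines. This is a conceptual simulation only; the function and field names are illustrative and don't correspond to any real sequencing tool's API.

```python
# Conceptual sketch of "sequencing": capture the files and registry-style
# settings an installer touches, then bundle only those deltas into a
# self-contained package that can later be streamed to clients.
# All names are illustrative, not a real sequencer API.

def snapshot(filesystem, registry):
    """Record the current state of the (simulated) system."""
    return dict(filesystem), dict(registry)

def sequence(app_name, install, filesystem, registry):
    """Run the installer while diffing system state into a package."""
    fs_before, reg_before = snapshot(filesystem, registry)
    install(filesystem, registry)              # run the installer
    fs_after, reg_after = snapshot(filesystem, registry)

    return {
        "app": app_name,
        # Only files and settings the installer added or changed go in;
        # registry/configuration work is thus done once, at package time.
        "files": {p: c for p, c in fs_after.items()
                  if fs_before.get(p) != c},
        "settings": {k: v for k, v in reg_after.items()
                     if reg_before.get(k) != v},
    }

# A toy "installer" that drops one file and one registry value.
def toy_installer(filesystem, registry):
    filesystem["C:/Program Files/Toy/toy.exe"] = b"binary"
    registry["HKLM/Software/Toy/Version"] = "1.0"

pkg = sequence("Toy", toy_installer, filesystem={}, registry={})
```

The key point the sketch illustrates is that the package contains only the deltas the installer produced, already resolved, which is why the client never has to process the installation itself.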
Since these images are already independent of the registry/configuration parameters, all you need on the client is a compatible set of application programming interfaces (APIs) that provide the machine image with what it thinks are the correct operating system and middleware interfaces. This is why, in theory, you could stream applications to incompatible software and even hardware; as long as the client system can emulate the execution environment of the host system, everything should run. In most cases, though, the hardware platform (x86) and software OS (Windows) need to be the same for maximum reliability.
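The client-side role described above -- a layer that answers the application's lookups from the streamed package first, falling back to the real system only for things the package doesn't carry -- can be sketched as below. The class and key names are hypothetical, chosen for illustration rather than taken from any vendor's runtime.

```python
# Minimal sketch of the client-side virtualization layer: it sits between
# the streamed application and the local system, serving settings from the
# package so the app "thinks" it is installed normally. Illustrative only.

class VirtualEnvironment:
    def __init__(self, package, local_registry):
        self.package = package
        self.local_registry = local_registry

    def read_setting(self, key):
        # Package-private settings win, which is what isolates the app
        # from conflicting or corrupted local configuration data.
        if key in self.package["settings"]:
            return self.package["settings"][key]
        return self.local_registry.get(key)

env = VirtualEnvironment(
    package={"settings": {"HKLM/Software/Toy/Version": "1.0"}},
    local_registry={
        "HKLM/Software/Toy/Version": "2.0-broken",  # local conflict ignored
        "HKLM/Software/OS/Locale": "en-US",         # genuine system setting
    },
)
```

Here the streamed application sees its own packaged version value rather than the conflicting local one, while ordinary system settings still resolve normally -- the isolation behavior the next paragraph describes.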
The big benefit of application virtualization is that it eliminates the common problem of configuration incompatibilities, particularly with registry or configuration data. Most companies have had issues with applications and their configuration variables that were incompatible or downright destructive, and trying to figure out what applications will run together can be a major issue. Application virtualization can also help with system protection because contaminated system library files aren't used -- the streamed application is self-contained.
The negative side of this process is the difficulty in sequencing applications, which can range from slightly inconvenient to highly complex. If you don't expect to use application virtualization widely, sequencing an application probably isn't worth the effort. Another issue to consider is that many applications either won't sequence at all or won't run if sequenced and streamed. It's best to have a complete inventory of applications you want to virtualize and check both their compatibility with sequencing and their "stream-ability" before you commit to the concept.
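That inventory screen amounts to a simple filter over test results. The sketch below assumes you've already trialed each application with your sequencing tool and recorded whether it sequenced and whether the streamed copy ran; the flag names and the sample apps are hypothetical.

```python
# Illustrative pre-deployment screen: keep only applications that both
# sequenced successfully and ran correctly when streamed. The flags
# stand in for results of real testing; nothing here is a vendor API.

inventory = [
    {"name": "OfficeSuite",      "sequences": True,  "streams": True},
    {"name": "LegacyDriverTool", "sequences": False, "streams": False},
    {"name": "CustomCRM",        "sequences": True,  "streams": False},
]

def virtualization_candidates(apps):
    """Return names of apps that pass both checks."""
    return [a["name"] for a in apps if a["sequences"] and a["streams"]]

candidates = virtualization_candidates(inventory)
```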
Industry experience with virtualized applications across versions of an operating system is very limited, so you should also be wary of the coming Windows 7 deployment. It's important to know just how the new OS will be supported by your virtualization tools; developers have had the pre-release versions long enough to provide some experience with the transition. To be safe, you should probably assume that any applications sequenced and streamed under earlier versions of any OS may have to be resequenced for the new version, and that the host and target/client OSs will have to be the same.
Application virtualization is a potentially powerful tool that is becoming more mature, functional and useful every day. It should be an important part of any enterprise's arsenal of operations-savings strategies, and it is something to track carefully as the technology evolves.
ABOUT THE AUTHOR:
Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is a member of the IEEE, ACM, Telemanagement Forum, and the IPsphere Forum, and is the publisher of Netwatcher, a journal in advanced telecommunications strategy issues.