Understand the principles of data residency
IT administrators should assess data residency on a case-by-case basis. That means understanding how the concept has changed and how it works in the cloud.
There is a general principle that systems architects should keep applications and data colocated.
With on-premises end-user computing deployments involving multiple sites, that often results in remote desktops or data duplication. Remote desktops keep the data and applications together at the main site while workers use local devices for access. Data duplication, using techniques such as a Distributed File System or extended SQL Server clustering, can also provide data colocation -- or a managed cache of the data -- with the applications and users at the remote sites.
Cloud-based deployments follow this same principle of data residency. If IT moves the applications to a cloud-hosted infrastructure -- whether it's web apps, Remote Desktop Session Host, VDI or desktops as a service -- the tendency is to move all of the data with them.
But organizations should not automatically take that principle for granted. Instead, they should look at each case individually when it comes to determining data residency.
The economics of data
This principle dates back to a 2003 Microsoft Research paper, "Distributed Computing Economics," which concludes that one should "put the computation near the data." That conclusion rested on the assumption that the cost of sufficient network bandwidth outweighed the cost of storage, however, and those data center costs have since changed.
Client-server computing is based on the idea that the application is split into two pieces: a local front end on the desktop, which provides the feel of a desktop application, tied to a back end that contains both the actual application logic and the data. Even 20 years ago, sufficient bandwidth generally existed to support the small amount of communication traffic needed between the two portions of the application.
As more affordable bandwidth became available, desktop remoting technologies became more popular. A single generic client at the remote site could connect to many different applications running in the main data center. Benefits around centralization, management and speed of delivery of new and updated applications paid for the extra bandwidth costs. With bandwidth this affordable, it is no longer necessarily more cost-effective to put all of the data and all of the applications in the same data center.
Cloud data residency
When it comes to cloud-based applications, it is not always necessary to move the data into the cloud just because the application moves there.
Back in 2003, the cost of additional storage was decreasing more quickly than the cost of faster networking, but that trend has since reversed. Reasonably cheap, high-capacity and reliable bandwidth is now available from many company sites to cloud-hosting sites. Perhaps more important, cloud storage pricing no longer covers just storing the bits: moving the data, such as for backup and redundancy, can be very expensive. As a result, organizations might consider keeping some data on premises.
IT must keep in mind these issues around data residency when contemplating new deployments. Somebody has to do the math to ensure that data and applications reside in the location that makes the most economic and user-friendly sense.
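Doing that math can start as a simple monthly cost comparison between the two placements. The Python sketch below uses entirely hypothetical figures for data volume, storage prices, data-transfer charges and a site-to-cloud link; the point is the shape of the comparison, and real provider quotes would replace every number.

```python
# Back-of-the-envelope comparison of two data residency options for a
# cloud-hosted application. Every price below is an illustrative
# assumption, not a real provider rate.

DATA_TB = 50              # working data set size, in terabytes (assumed)
EGRESS_TB_PER_MONTH = 10  # data pulled back out of the cloud monthly (assumed)

# Option A: move the data into the cloud alongside the application.
CLOUD_STORAGE_PER_TB = 23.0   # $/TB-month cloud storage (assumed)
CLOUD_EGRESS_PER_TB = 90.0    # $/TB outbound data transfer (assumed)
cost_cloud = (DATA_TB * CLOUD_STORAGE_PER_TB
              + EGRESS_TB_PER_MONTH * CLOUD_EGRESS_PER_TB)

# Option B: keep the data on premises and pay for amortized storage
# hardware plus a dedicated link to the cloud-hosted application.
ONPREM_STORAGE_PER_TB = 10.0  # $/TB-month amortized hardware (assumed)
LINK_PER_MONTH = 600.0        # $/month site-to-cloud circuit (assumed)
cost_onprem = DATA_TB * ONPREM_STORAGE_PER_TB + LINK_PER_MONTH

print(f"Cloud-resident data: ${cost_cloud:,.0f}/month")
print(f"On-premises data:    ${cost_onprem:,.0f}/month")
```

Under these made-up numbers the on-premises option wins, but a workload with little outbound data movement could easily tip the other way, which is exactly why each deployment needs its own calculation.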