The world is entering the age of hyperconnectivity, where devices and information systems communicate constantly, sharing data among numerous applications programmed to do everything from safeguarding our homes to running oil rigs.
This world of hyperconnectivity is awash in data.
IDC's Global DataSphere Forecast, 2021-2025 predicted that global data creation and replication will grow from 64.2 zettabytes of data in 2020 to 181 zettabytes in 2025.
More and more of that data creation will happen at the very end of computing networks, thanks to the rapid growth of IoT and its collection of connected endpoint devices.
And the amount of data that's processed at the edge of those computing networks is expected to grow just as rapidly.
Figures from Gartner, a tech research and advisory firm, confirm this dramatic shift. The firm found that approximately 10% of enterprise-generated data was created and processed outside of traditional centralized data centers and the cloud in 2018. But it predicted that 75% of data will be processed at the edge by 2025.
What is edge computing?
Edge computing is -- as the name so succinctly says -- computing power that exists on the edge of a connected ecosystem. It's positioned physically close to the endpoint devices, such as sensors or mobile phones, that are generating the data.
The role of edge computing is to ingest data generated by nearby endpoint devices, apply local processing -- often analytics or machine learning -- to analyze that data, and then direct an action in response to that analysis.
Edge computing is an alternative to sending endpoint-generated data to centralized servers -- whether on premises or, more likely, in the cloud -- for processing.
This edge computing capability is commonly housed in purpose-built devices, such as IoT gateways, but it can sometimes be housed in the endpoints themselves.
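The ingest-analyze-act pattern described above can be sketched in a few lines. This is a minimal illustration, not a real gateway API: the function names, readings and threshold are all hypothetical.

```python
# Minimal sketch of an edge gateway's ingest-analyze-act loop.
# All names and values here are illustrative, not a real device API.

def analyze(reading: float, threshold: float = 75.0) -> str:
    """Local analysis step: decide an action without a cloud round trip."""
    return "shut_down" if reading > threshold else "ok"

def process_batch(readings):
    """Ingest readings from nearby endpoints and return one action each."""
    return [analyze(r) for r in readings]

# Example: temperature readings arriving from nearby sensors.
actions = process_batch([62.1, 80.4, 70.0])
```

The key point is that the decision is made on the device itself; only the resulting action (or a summary) ever needs to leave the edge.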
Benefits of edge computing
True to its name, edge computing takes compute out of an enterprise's core data center and places it close to endpoint devices where data is being generated, which brings several key benefits, such as:
1. Improved speed/reduced latency
By its definition and design, edge computing eliminates the need to move data from endpoints to the cloud and back again. Decreasing that travel shaves time off the entire process -- often milliseconds, sometimes whole seconds. That might not seem like much, but travel time -- known as latency -- is a critical consideration in a connected world where real-time decision-making capabilities are necessary for proper functioning of the endpoint devices.
For example, autonomous vehicles, industrial and manufacturing IoT deployments and medical use cases all require machines to analyze data and return instructions nearly instantaneously in order to function safely.
2. Improved security and privacy protections
Edge computing can provide enhanced security and more privacy protections because it keeps data close to the edge and thus out of centralized servers. Edge devices are still vulnerable to being hacked, particularly if they're not adequately protected. However, edge devices hold very limited amounts of data and often not complete data sets that could be used by hackers.
On the other hand, endpoint data stored in centralized servers tends to be combined with other data points that together create a more complete collection of information that hackers could use for nefarious purposes. Consider, for example, edge computing in a healthcare setting. Sensors collect a patient's vital signs, which are then analyzed by an edge computing device. That device only holds those readings.
However, if the endpoint sensors send the data back to centralized servers where it's stored with other information, including personally identifiable information about the patient, and that information is hacked, then that patient's privacy is compromised.
3. Savings/reduced operational costs
Although data storage costs have dropped significantly in the past decade or so, the cost of moving data around is on the rise as the volume of it increases. Experts expect connectivity costs to continue climbing as the volume of data spikes. They also expect that users will need to implement more bandwidth to handle the load, further driving up the price tag.
Edge computing can help keep costs in check, or at least from climbing as high as they could, by reducing the amount of data being moved back and forth to the cloud.
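One common way edge deployments cut data-transfer volume is by aggregating raw readings locally and shipping only a compact summary to the cloud. The sketch below assumes hypothetical sensor readings and field names; it simply shows many values collapsing into one small record.

```python
# Sketch: aggregate raw sensor readings at the edge so only a small
# summary record travels to the cloud. Field names are illustrative.

def summarize(readings):
    """Reduce N raw readings to one summary record for upload."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [20.1, 20.3, 19.8, 20.0]   # four raw readings collected locally
payload = summarize(raw)          # one small record leaves the edge
```

The ratio matters at scale: a sensor emitting readings every second produces 86,400 values a day, while an hourly summary is 24 records.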
4. Reliability and resiliency
Edge computing continues to operate even when communication channels are slow, intermittently available or temporarily down. For example, an energy company with edge computing deployments on an oil rig doesn't have to constantly rely on an available satellite connection to relay data back to a data center for processing; it can opt instead to move only the necessary processed information from the edge back to its data center when the connection is available.
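The oil-rig scenario above amounts to a store-and-forward pattern: process locally, buffer the results, and flush them only when the link is up. A minimal sketch of that idea, with an invented class name and a simple boolean standing in for real connectivity checks:

```python
from collections import deque

class StoreAndForward:
    """Sketch: buffer processed results locally and flush when connected.
    The class and its interface are illustrative, not a real library."""

    def __init__(self):
        self.buffer = deque()

    def record(self, result):
        """Keep a processed result locally while the link may be down."""
        self.buffer.append(result)

    def flush(self, link_up: bool):
        """Return (and clear) buffered results only when the link is up."""
        if not link_up:
            return []
        sent = list(self.buffer)
        self.buffer.clear()
        return sent
```

Because processing never depends on the link, the rig keeps operating through outages; only the upload is deferred.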
Edge computing further enhances resiliency by removing the single point of failure that centralized servers represent: a failure at one edge device won't affect the performance of other edge devices in the ecosystem, thereby improving the reliability of the entire connected environment.
As with cloud computing, organizations can add edge devices as their uses expand, deploying and managing only what they need. Additionally, endpoint hardware and edge devices often cost less than adding more computing resources within a centralized data center -- thereby making it more efficient for organizations to scale at the edge.
The future of edge computing
Edge computing will not replace the need for centralized servers and cloud computing. Rather, it will work in conjunction with those elements to create a hyperconnected world.
Experts expect that computing capabilities will continue to be split between the edge and the core, with individual use cases along with connectivity, cost and latency considerations determining when edge computing should be used over centralized computing resources.