There are three places where most businesses tend to deploy and manage their own applications and services:
On premises, where data centers house multiple racks of servers, with the resources to power and cool them and dedicated network connectivity.
Colocation facilities, where customer equipment is housed in a fully managed building and power, cooling and connectivity are provided as services.
Cloud service providers, where customer infrastructure can be virtualized to some extent and services and applications are delivered on a usage basis, allowing operations to be accounted for as operational expenses rather than capital expenses (opex vs. capex).
Edge computing architects seek to add a fourth category to this list: one that takes advantage of the portability of containerized facilities housing smaller, more modular servers to shorten the distance between the point where network functionality is processed and the point where it is consumed. If their plans come to fruition, they seek to accomplish the following:
Potential benefits of edge computing
Minimal latency. The problem with cloud computing services today is latency: the round trip to a distant data center is too slow for time-sensitive workloads, especially artificial intelligence workloads. This makes the cloud unusable for applications such as real-time forecasting of securities markets or piloting autonomous vehicles.
Processors located in small data centers closer to where they are used could open up new markets for IT services that cloud providers have so far been unable to address. In an IoT scenario, where clusters of stand-alone data collection devices are widely distributed, placing processors closer to those clusters could dramatically improve processing time, making real-time analysis possible at a much more granular level.
Simplified maintenance. For a business that has no difficulty sending a fleet of maintenance vehicles into the field, micro data centers (µDCs) are designed for maximum accessibility, modularity and a reasonable degree of portability. They are compact enclosures, some small enough to fit in the back of a pickup truck, that can hold just enough servers to host critical functions and can be deployed closer to their users.
Conceivably, for a building that currently houses, powers, and cools its data center in its basement, replacing all of that with three or four µDCs somewhere in the parking lot might actually be an improvement.
Cheaper cooling. For large data centers, the monthly cost of the electricity used for cooling can easily exceed the cost of the electricity used for processing. The ratio of a facility's total power consumption to the power consumed by its IT equipment alone is called power usage effectiveness (PUE). It is the benchmark measure of data center efficiency (although surveys in recent years have shown that some IT operators do not know what the ratio actually means).
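As an illustrative sketch of the metric (the figures below are hypothetical, not drawn from any survey cited here), PUE is simply total facility power divided by the power that reaches the IT equipment:

```python
# Hypothetical example: power usage effectiveness (PUE) is total facility
# power divided by the power consumed by IT equipment alone; a value of
# 1.0 would mean every watt drawn goes to computing rather than overhead.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the facility's power usage effectiveness."""
    return total_facility_kw / it_equipment_kw

# Assumed figures: the facility draws 1,000 kW overall, of which
# 550 kW powers servers, storage and network gear; the rest goes
# mostly to cooling and power conditioning.
print(round(pue(1000, 550), 2))  # → 1.82
```

A facility where cooling costs rival processing costs, as described above, would show a PUE approaching 2.0.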
Theoretically, it may be less expensive for a business to cool and condition several small data center spaces than a single large one. Additionally, owing to the particular way some utility companies handle billing, the cost per kilowatt may be lower for the same server racks when they are hosted in multiple small facilities rather than one large one.
A white paper published in 2017 by Schneider Electric [PDF] assessed the costs of building traditional data centers and micro data centers. It found that while a company might incur just under $7 million in capital expenditure to build a traditional 1 MW facility, it would spend just over $4 million to establish 200 micro facilities of 5 kW each (the same 1 MW of aggregate capacity).
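The arithmetic behind that comparison can be sketched as follows; the dollar amounts are the approximate figures quoted above ("just under" and "just over"), rounded to whole millions:

```python
# Back-of-the-envelope cost-per-kilowatt comparison using the approximate
# figures cited from the Schneider Electric white paper.
traditional_cost_usd = 7_000_000   # ~$7M for one traditional 1 MW facility
micro_cost_usd       = 4_000_000   # ~$4M for 200 micro facilities of 5 kW each

traditional_capacity_kw = 1_000    # 1 MW
micro_capacity_kw       = 200 * 5  # same aggregate capacity: 1 MW

print(traditional_cost_usd / traditional_capacity_kw)  # → 7000.0 ($/kW)
print(micro_cost_usd / micro_capacity_kw)              # → 4000.0 ($/kW)
```

On these rough numbers, the distributed approach costs roughly $4,000 per kilowatt of capacity versus roughly $7,000 for the monolithic build.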
An ecological benefit? There has always been a certain ecological appeal to the idea of distributing computing power to customers over a larger geographic area, rather than centralizing that power in gigantic facilities linked by fiber optics.
The initial marketing of edge computing rested on the common-sense impression that smaller facilities consume less energy, even collectively. But it is difficult to know whether that holds up scientifically. A 2018 study by researchers at the Technical University of Košice, Slovakia [PDF], using simulated edge computing deployments in an IoT scenario, concluded that the energy efficiency of edge computing depends almost entirely on the accuracy and efficiency of the computations performed there. Inefficient computations, they found, would actually have their overheads magnified by such a deployment.
While all of this may sound like too complex a system to be feasible, bear in mind that in its current form, the public cloud computing model may not be viable in the long term. Under that model, subscribers would continue to push applications, data streams and content streams through pipes connected to mega data centers whose service areas span entire states, provinces and countries.