What Does “Refining” Mean in Edge Computing?

In any telecommunications network, the edge is the farthest reach of the network’s facilities and services toward its customers. In the context of edge computing, the edge is the point where servers can deliver functionality to customers as quickly as possible.

With regard to the internet, data is collected from multiple servers and conveyed to data centers for processing. CDNs speed up this process by acting as “pump stations” for users. The typical lifecycle of a network service involves this “round trip,” in which data is effectively extracted, shipped, refined, and reshipped. And, as with any process that involves logistics, transportation takes time.
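To make that transport cost concrete, here is a minimal Python sketch that times a TCP connection to two hosts – one standing in for a nearby CDN node, the other for a distant origin. The hostnames are placeholders invented for this example, not real infrastructure.

```python
# Minimal sketch: measure how long it takes to open a TCP connection,
# as a rough stand-in for network "transportation" time.
import socket
import time

def tcp_round_trip(host: str, port: int = 443) -> float:
    """Return the time (in seconds) needed to open a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start

# Hypothetical endpoints: a CDN node close to the user, a faraway origin.
for host in ("cdn-node.example.net", "origin-datacenter.example.com"):
    try:
        print(f"{host}: {tcp_round_trip(host) * 1000:.1f} ms")
    except OSError as err:
        print(f"{host}: unreachable ({err})")
```

Against real endpoints, the nearby node would typically show a connection time several times lower than the distant origin, which is the entire premise of pushing services toward the edge.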

This simplified diagram from NTT shows CDN servers inserting themselves between the point of data access and the users. From the perspective of data or content producers, as opposed to delivery players, CDNs sit near the end of the supply chain – in fact, the penultimate step before the data reaches the end user.

Over the past decade, major CDN vendors have started to introduce computing services that reside at the point of delivery. Imagine a gas station that could be its own refinery, and you get the idea. The value proposition of these services depends on the perception that CDNs sit not at the center but at the periphery of the system, which allows certain data to bypass the need for long-distance transport.
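As a toy illustration of the “gas station as its own refinery” idea, the sketch below shows an edge node that processes (“refines”) requests locally and only pays the long-haul transport cost on a cache miss. All names are illustrative, and the `.upper()` call is a deliberately trivial stand-in for real processing.

```python
# Toy edge node: serve and process data locally; only fall back to the
# distant core when the data has never been seen before.
from typing import Callable

class EdgeNode:
    def __init__(self, fetch_from_core: Callable[[str], bytes]):
        self._cache: dict[str, bytes] = {}
        self._fetch_from_core = fetch_from_core  # slow, long-distance path

    def handle(self, key: str) -> bytes:
        if key not in self._cache:
            # Cache miss: pay the long-haul transport cost once...
            raw = self._fetch_from_core(key)
            # ...then "refine" at the edge so later requests stay local.
            self._cache[key] = raw.upper()  # stand-in for real processing
        return self._cache[key]

# Usage: the second request never leaves the edge.
node = EdgeNode(fetch_from_core=lambda key: f"payload for {key}".encode())
print(node.handle("/video/manifest"))
print(node.handle("/video/manifest"))  # served locally, no round trip
```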

The trend towards decentralization

If CDNs have not yet proven the effectiveness of edge computing, they have at least demonstrated its business value: companies pay to have certain data processed before it reaches the center, or “core,” of the network.

“We’ve had a pretty long period of centralization,” said Matt Baker, senior vice president of Dell Technologies. “As we seek to deliver more and more real-time digital experiences through our digital transformation initiatives, the ability to maintain this highly centralized approach to IT is starting to fracture.”

Edge computing is touted as one of the lucrative new markets made possible by 5G technology. For the transition from 4G to 5G to be economically feasible for many telecom companies, the new generation must open up new revenue channels. 5G requires a vast new network of (ironically) wired fiber-optic connections to give transmitters and base stations instant access to digital data (the “backhaul”).

Therefore, a new category of IT service providers has the opportunity to deploy multiple micro data centers (µDCs) adjacent to radio access networks (RANs), perhaps next to, or sharing a building with, telecom operators’ base stations. These data centers could offer cloud computing services to selected customers at competitive rates and with features comparable to those of large-scale cloud providers such as Amazon, Microsoft Azure, and Google Cloud Platform.

Ideally, perhaps after a decade of evolution, edge computing would bring fast service to customers located near their base stations. Huge fiber-optic pipes would be needed to provide the necessary backhaul, but revenue from edge computing services could, in theory, finance their construction and make them profitable.

Service level objectives

Ultimately, the success or failure of edge computing data centers will be determined by their ability to meet service level objectives (SLOs): the expectations of paying customers, as codified in their service contracts. If an edge deployment isn’t significantly faster than a large-scale deployment, then edge computing is dead.
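As a rough sketch of what checking such an objective could look like in practice, the Python below computes a 95th-percentile response time from a sample of request latencies and compares it to a contract target. The 20 ms threshold and the sample values are invented for illustration; real SLO figures are set per contract.

```python
# Minimal SLO check: compare the 95th-percentile response time of a
# latency sample against a contractually agreed target.
import statistics

def p95(samples_ms: list[float]) -> float:
    """95th-percentile latency (last of 19 cut points when n=20)."""
    return statistics.quantiles(samples_ms, n=20)[-1]

SLO_TARGET_MS = 20.0  # hypothetical edge SLO, chosen for this example
observed = [8.2, 9.1, 11.4, 12.0, 13.3, 14.8, 15.5, 17.9, 18.6, 24.2]

latency = p95(observed)
status = "met" if latency <= SLO_TARGET_MS else "violated"
print(f"p95 = {latency:.1f} ms -> SLO {status}")
```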

“What are we interested in? It’s application response time,” said Tom Gillis, senior vice president of VMware. “If we can characterize how the application responds, and look at the individual components that work to provide that response, we can actually begin to create a self-healing infrastructure.”

Reducing latency and improving processing speed should work in favor of SLOs. Some also point out how distributing resources widely over an area contributes to service redundancy and even business continuity – guarding against outages which, at least until the pandemic, were seen as one- or two-day events followed by periods of recovery.

But there will be balancing factors, including upkeep and maintenance. A typical Tier 2 data center can, in emergency circumstances (such as a pandemic), be maintained by as few as two people on site, with support staff off site. A µDC is designed to operate without constant on-site maintenance. Its built-in monitoring functions continuously send telemetry to a central hub, which could, in theory, be in the public cloud. As long as a µDC meets its SLOs, there is no need to dispatch maintenance personnel to it.
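A minimal sketch of such telemetry, assuming a hypothetical central-hub endpoint and made-up payload fields, might look like this:

```python
# Sketch: a µDC pushes telemetry to a central hub instead of being
# staffed on site. URL and payload fields are assumptions for this example.
import json
import time
import urllib.request

HUB_URL = "https://hub.example.com/telemetry"  # hypothetical central hub

def send_heartbeat(site_id: str, temp_c: float, p95_latency_ms: float) -> None:
    payload = json.dumps({
        "site": site_id,
        "timestamp": time.time(),
        "temperature_c": temp_c,           # environmental reading
        "p95_latency_ms": p95_latency_ms,  # SLO-relevant metric
    }).encode()
    req = urllib.request.Request(
        HUB_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()  # central hub acknowledges receipt

# In practice this would run on a timer; shown once here (commented out
# because the hub URL above is fictional).
# send_heartbeat("udc-042", temp_c=24.5, p95_latency_ms=12.7)
```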

This is where the viability of the edge computing model has yet to be thoroughly tested. Under a typical data center vendor contract, an SLO is often measured by how quickly the vendor’s staff can resolve an outstanding issue. In general, resolution times can stay low when personnel do not have to reach the site by road. If the edge deployment model is to compete with the colocation deployment model, its automated remediation capabilities will have to be very good indeed.
