The notion of edge computing typically conjures an image of a device in a factory someplace, providing rudimentary computing and data collection to support a piece of manufacturing equipment. Perhaps it keeps the factory temperature and humidity optimized for the manufacturing process.
Those who deal with edge computing these days understand that what was considered “edge” just a few years ago has morphed into something a bit more involved. Here are the emerging edge computing architecture patterns that I’m seeing:
The new edge hierarchy. No longer are edge devices connected directly to some centralized system, such as one residing in the cloud. They connect to other edge devices that may connect to larger edge devices that ultimately connect to a centralized system or cloud.
This means that we’re leveraging very small, underpowered devices at the true edge, such as a thermostat on the wall. That device connects to a local server in the physical building, which is also considered an edge device, in a one-to-many configuration (one edge server to many thermostats). Another edge server then aggregates the data from many buildings and finally transmits it to the public cloud, where the edge-based data is stored and analyzed and the results are returned down the hierarchy.
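The hierarchy described above can be sketched in a few lines of code. This is a minimal illustration only; the class names (Thermostat, BuildingEdgeServer, RegionalEdgeServer) and the idea of averaging readings are my hypothetical stand-ins for whatever aggregation a real deployment would do.

```python
# Minimal sketch of a three-tier edge hierarchy: leaf devices report,
# building servers aggregate locally, a regional server summarizes
# further, and only the summaries ever reach the cloud.
from statistics import mean

class Thermostat:
    """Leaf edge device: too underpowered to process, it only reports."""
    def __init__(self, reading_c: float):
        self.reading_c = reading_c

    def report(self) -> float:
        return self.reading_c

class BuildingEdgeServer:
    """One-to-many: one building server serves many thermostats."""
    def __init__(self, thermostats: list[Thermostat]):
        self.thermostats = thermostats

    def aggregate(self) -> float:
        # Local processing: send a summary upstream, not raw readings.
        return mean(t.report() for t in self.thermostats)

class RegionalEdgeServer:
    """Aggregates many buildings before anything reaches the cloud."""
    def __init__(self, buildings: list[BuildingEdgeServer]):
        self.buildings = buildings

    def to_cloud(self) -> dict[int, float]:
        # One summarized value per building leaves the region.
        return {i: b.aggregate() for i, b in enumerate(self.buildings)}

# Usage: two buildings, three thermostats each.
building_a = BuildingEdgeServer([Thermostat(20.5), Thermostat(21.0), Thermostat(19.5)])
building_b = BuildingEdgeServer([Thermostat(22.0), Thermostat(22.5), Thermostat(23.0)])
region = RegionalEdgeServer([building_a, building_b])
payload = region.to_cloud()  # two numbers go upstream, not six raw readings
```

The point of the sketch is the data-volume math: six raw readings collapse to two values before the cloud ever sees them, which is exactly why the layering pays for its own complexity.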
Although this seems like we’re killing ourselves with complexity by adding layers to the edge architecture, the motivations are pragmatic. There is no need to send all of the data to the centralized storage and processing system in the cloud when it can be processed better and more cheaply by edge servers that are closer to the devices, especially when the collection devices (such as the thermostats) are not powerful enough to do any real processing.
The advantages here are better performance and resiliency; the data never has to leave the building. The architecture is also much more agile: you can repurpose each edge device without forcing changes to the centralized data storage and processing systems in the cloud.
Autonomous edge data movement. This edge architecture, simply put, allows data to flow from edge device to edge device, as well as to the back-end system in the cloud, using autonomous AI-based agents charged with relocating data based on predefined rules. This is typically for storage and processing. Data from an edge device may be moved to another edge device or to a centralized server based on what needs to be done to the data.
This has a clear advantage. It avoids saturating edge devices, which typically don’t have a lot of storage. Many edge storage devices use only one to three percent of their capacity; others hover around 90 percent, which is scary.
If the data does not need to be transmitted to the centralized processing systems (typically in a public cloud) and can be stored locally more efficiently, then edge architects will find autonomous edge data movement compelling, compared to ongoing upgrades to all edge devices and edge servers.
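The “predefined rules” driving this movement can be as simple as a placement function. The following is a hypothetical sketch: the rule set, the thresholds, and the function name are my assumptions for illustration, not any product’s actual policy engine.

```python
# Minimal sketch of rule-based data placement across edge tiers.
# Thresholds and tier names here are illustrative assumptions.

def choose_destination(record_size_bytes: int,
                       local_free_bytes: int,
                       needs_central_analytics: bool) -> str:
    """Decide where a data record should live, per predefined rules."""
    if needs_central_analytics:
        # Only data the back end actually needs goes to the cloud.
        return "cloud"
    if record_size_bytes < local_free_bytes * 0.5:
        # Plenty of headroom: keep it on the originating device.
        return "local"
    # Device is filling up: relocate to a less-saturated neighbor
    # rather than upgrading the device or pushing everything upstream.
    return "peer-edge"

# Usage: a small record on a roomy device stays put.
print(choose_destination(1_000, 1_000_000, False))   # local
print(choose_destination(1_000, 1_000_000, True))    # cloud
print(choose_destination(900, 1_000, False))         # peer-edge
```

An autonomous agent would evaluate rules like these continuously, which is how the one-percent-full and 90-percent-full devices mentioned above end up balanced without manual intervention.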
The role of the cloud within these emerging architectures is to provide command and control, not just to be a place for processing. The clear pattern has been to avoid sending data to the back end if it can be avoided. But there has to be a centralized “big brain” for all of this to work, and automation should live in a central, configurable space, putting volatility into its own domain.
Although these architectures are rare today, I’m seeing more and more of them as enterprises attempt to move to edge computing in innovative ways adapted for their specific use and geographical distribution. Edge is likely to become even more complex, and the patterns will expand. Keeping the data away from the cloud seems counterintuitive, but we’re leveraging the cloud to be the master of all edge devices that will end up storing even more data.