Edge computing is getting a great deal of attention now, and for good reason. Cloud architecture requires that some processing be placed as close as possible to the point of data consumption. Think of the computing systems in your car, industrial robots, and now full-blown connected mini clouds such as Microsoft's Azure Stack and AWS Outposts; all are examples of edge computing.
The architectural approach to edge computing—and IoT (Internet of Things), for that matter—is the creation of edge computing replicants in the public clouds. You can think of these as clones of what exists on the edge computing device or platform, allowing you to sync changes and manage configurations on “the edge” centrally.
The trouble with this model is that it's static. The processing and data are tightly coupled to the public cloud or an edge platform. Those processes and data stores typically don't move, although data is transmitted and received. This is a classic distributed architecture.
The trouble with the classic approach is that processing and I/O load requirements sometimes expand to 10 times the normal load. Edge devices are typically underpowered, given that their mission is fairly well defined, and edge applications are built to match the resources available on the edge device or platform. However, as edge devices become more popular, the load on them will grow, and they will more frequently hit a ceiling they can't handle.
The answer is the dynamic migration of processing and data storage from an edge device to the public cloud. Because a replicant already exists on the public cloud provider, that should be less of a problem. You will need to sync the data as well as the application and configuration, so that at any moment one can take over for the other (active/active).
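A minimal sketch of that active/active sync, assuming a simple last-write-wins scheme over timestamped entries (the store shape and the `sync` function are illustrative, not any particular product's API): each side keeps a key/value store, and after a sync pass both sides hold the newest copy of every entry, so either can take over.

```python
# Hypothetical last-write-wins sync between an edge node's state and its
# cloud replicant. Each entry is (value, timestamp); the newer value wins
# on both sides, leaving the two stores identical (active/active).

def sync(edge_state, cloud_state):
    """Merge two {key: (value, timestamp)} stores in place."""
    for key in set(edge_state) | set(cloud_state):
        e = edge_state.get(key)
        c = cloud_state.get(key)
        if e is None or (c is not None and c[1] > e[1]):
            edge_state[key] = c   # cloud copy is newer (or edge lacks it)
        elif c is None or e[1] > c[1]:
            cloud_state[key] = e  # edge copy is newer (or cloud lacks it)

# The edge has a fresher sensor reading; the cloud has a newer config.
edge = {"temp": (21.5, 100.0)}
cloud = {"temp": (20.0, 90.0), "config": ("v2", 95.0)}
sync(edge, cloud)
# Both stores now hold the newer temp reading and the config entry.
```

A real deployment would use vector clocks or a replication protocol rather than raw timestamps, but the shape of the problem is the same: both replicas must converge so failover in either direction is safe.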
The idea here is to keep things as simple as you can. When the edge device lacks the processing power a specific use case needs, the processing shifts from the edge to the cloud, where CPU and storage resources are almost unlimited and processing can scale. Afterward, processing returns to the edge device with up-to-date, synced data.
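The burst decision itself can be as simple as a threshold check. The following sketch assumes hypothetical names (`EDGE_CAPACITY`, `run_on_edge`, `run_in_cloud`); it only illustrates the routing logic, not a real orchestration API.

```python
# Hypothetical burst logic: when measured load exceeds what the edge
# device can sustain, processing shifts to the cloud replicant; once the
# load drops back under the threshold, it returns to the edge.

EDGE_CAPACITY = 100  # requests/sec the edge device can handle (assumed)

def place_workload(current_load, run_on_edge, run_in_cloud):
    """Route processing based on load; scale-out lives in the cloud."""
    if current_load > EDGE_CAPACITY:
        return run_in_cloud(current_load)  # burst: near-unlimited resources
    return run_on_edge(current_load)       # normal case: stay local

# Usage with stub handlers, simulating a 10x spike over normal load:
target = place_workload(
    1000,
    run_on_edge=lambda n: "edge",
    run_in_cloud=lambda n: "cloud",
)
```

In practice the trigger would come from telemetry (CPU, queue depth, I/O wait) and the handoff would rely on the active/active sync described above, but the decision point reduces to this kind of capacity test.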
Some ask the logical question: why not just keep the processing and the data in the cloud and skip the edge device altogether? Edge is an architectural pattern that is still needed, with processing and data storage placed close to the point of origin. A dynamic distributed architecture leverages centralized processing as needed, dynamically, providing the architectural advantage of scalability without the loss of needed edge functionality.
A little something to add to your bag of cloud architecture tricks.