EDGE COMPUTING ARCHITECTURE

The edge computing architecture identifies the key layers of the edge: the device edge (which includes the edge devices), the local edge (which includes the application and network layers), and the cloud edge. But we’re just getting started: the next article in this series will dive deeper into the different layers and the tools that developers need to implement an edge computing architecture.


Edge computing consists of three main nodes:

  1. Device edge, where the edge devices sit

  2. Local edge, which includes the infrastructure that supports both application and network workloads

  3. Cloud, the nexus of your environment, where workloads and management come together
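As a mental model, the three nodes and the outward flow of workloads can be sketched in a few lines of Python (all names here are illustrative, not part of any real product or API):

```python
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    """One node in the edge architecture (illustrative model)."""
    name: str
    layer: str                      # "device-edge", "local-edge", or "cloud"
    workloads: list = field(default_factory=list)

    def deploy(self, workload: str) -> None:
        """Assign a workload to this node."""
        self.workloads.append(workload)

# The three main nodes described above (names are invented).
camera = EdgeNode("factory-camera-01", "device-edge")
local_edge = EdgeNode("plant-server", "local-edge")
cloud = EdgeNode("central-cloud", "cloud")

# Workloads originate at the cloud node and are pushed outward.
cloud.deploy("model-registry")
local_edge.deploy("complex-video-analytics")
camera.deploy("realtime-defect-detector")
```

The point of the sketch is only the topology: every node can host workloads, and the cloud acts as the nexus from which workloads are distributed.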

Figure 1 represents an architecture overview of these details with the local edge broken out to represent the workloads.

Figure 1. Edge computing architecture overview


Each of these nodes is an important part of the overall edge computing architecture.

  • Device Edge The actual devices running on-premises at the edge, such as cameras, sensors, and other physical devices that gather data or interact with it. Simple edge devices gather or transmit data, or both; more complex edge devices have the processing power to perform additional activities. In either case, it is important to be able to deploy and manage the applications on these edge devices, such as specialized video analytics, deep-learning AI models, and simple real-time processing applications. IBM’s approach (in its IBM Edge Computing solutions) is to deploy and manage containerized applications on these edge devices.
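The distinction between simple and complex edge devices can be made concrete with a small sketch: a hypothetical device-edge manager that offers each device only the containerized apps its capabilities allow. The device names, app names, and capability flags here are all invented for illustration.

```python
# Hypothetical device-edge manager: each device advertises whether it has
# local processing power, and receives only the apps it can actually run.
DEVICES = {
    "sensor-7": {"can_process": False},   # simple: only gathers/transmits data
    "camera-3": {"can_process": True},    # complex: has local compute
}

APPS = {
    "telemetry-forwarder": {"needs_processing": False},
    "video-analytics": {"needs_processing": True},
}

def apps_for(device: str) -> list:
    """Return the apps deployable to a device, given its capabilities."""
    caps = DEVICES[device]
    return sorted(name for name, req in APPS.items()
                  if caps["can_process"] or not req["needs_processing"])
```

Under this model, a simple sensor receives only the lightweight forwarder, while a camera with local compute can also take the analytics workload.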

  • Local Edge The systems running on-premises or at the edge of the network. The edge network layer and edge cluster/servers can be separate physical or virtual servers in various physical locations, or they can be combined in a hyperconverged system. There are two primary sublayers in this architecture layer. The components required to manage the applications in these layers, as well as the applications running on the device edge, reside here.

1. Application layer: Applications that cannot run at the device edge because the footprint is too large for the device will run here. Example applications include complex video analytics and IoT processing.

2. Network layer: Physical network devices will generally not be deployed here due to the complexity of managing them. The entire network layer is mostly virtualized or containerized. Examples include routers, switches, or any other network components that are required to run the local edge.
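The placement decision implied by the application layer can be sketched as a simple footprint check. The 512 MB device budget below is an invented threshold, not any real product's policy:

```python
def placement(app_mem_mb: int, device_mem_mb: int = 512) -> str:
    """Place an app at the device edge if its footprint fits the device;
    otherwise fall back to the local edge's application layer.
    The 512 MB device budget is an invented threshold."""
    return "device-edge" if app_mem_mb <= device_mem_mb else "local-edge"
```

For example, a small real-time processing app (`placement(128)`) fits on the device, while a heavyweight video-analytics workload (`placement(4096)`) falls back to the local edge.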

  • Cloud This architecture layer is generically referred to as the cloud, but it can run on-premises or in the public cloud. This layer is the source of the workloads (applications that handle processing not possible at the other edge nodes) and of the management layers. Workloads include application and network workloads that are deployed to the different edge nodes by the appropriate orchestration layers.

Figure 2 illustrates a more detailed architecture that shows which components are relevant within each edge node. Duplicate components such as Industry Solutions/Apps exist in multiple nodes because certain workloads might be better suited to either the device edge or the local edge, and other workloads might be moved dynamically between nodes under certain circumstances, either under manual control or through automation. It is important to manage workloads as discrete units: the less discrete they are, the more limited we are in how we can deploy and manage them.
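The automated movement of workloads between nodes described above could look roughly like the following sketch; the load threshold and workload names are hypothetical.

```python
def rebalance(assignments: dict, device_load: float,
              threshold: float = 0.8) -> dict:
    """If the device edge is overloaded, move its workloads to the local
    edge; otherwise leave placements unchanged. The 0.8 threshold and the
    all-or-nothing move are invented simplifications."""
    if device_load <= threshold:
        return dict(assignments)
    return {name: ("local-edge" if node == "device-edge" else node)
            for name, node in assignments.items()}

before = {"video-analytics": "device-edge", "mgmt-agent": "cloud"}
after = rebalance(before, device_load=0.95)   # overloaded: analytics moves
```

Treating each workload as a discrete unit is what makes a function like this possible at all; workloads entangled with their node cannot be relocated this way.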

While the focus of this article has been on application and analytics workloads, it should also be noted that network function is a key set of capabilities that should be incorporated into any edge strategy, and thus into our edge architecture. The adoption of tools should also take into account the need to handle application and network workloads in tandem.
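One way to handle application and network workloads in tandem is to declare both in a single manifest that one deployment flow processes. The sketch below uses invented names and is not a real deployment format:

```python
# A single manifest covering application and network workloads together,
# so one toolchain manages both kinds in tandem (all names invented).
MANIFEST = [
    {"name": "video-analytics", "kind": "application", "target": "local-edge"},
    {"name": "virtual-router", "kind": "network", "target": "local-edge"},
    {"name": "defect-detector", "kind": "application", "target": "device-edge"},
]

def by_target(manifest: list) -> dict:
    """Group workload names by the edge node they deploy to."""
    out: dict = {}
    for item in manifest:
        out.setdefault(item["target"], []).append(item["name"])
    return out
```

Grouping by target node this way lets the same orchestration step deliver a virtual router and an analytics app to the local edge in one pass, rather than through separate application and network toolchains.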