CLOSE

Platform enabled ecosystem

Kubernetes On The Edge

Definition

Fit of Kubernetes to the requirements of edge deployment

Kubernetes is the de-facto standard for orchestrating containerized workloads. Originally built for the Public Cloud, Kubernetes is increasingly used for all other deployment targets as well, including the edge.

Kubernetes’ main strengths are declaratively defining, implementing, and supervising a desired deployment state for a set of containers across multiple nodes, distributing load across nodes, and managing upgrades and other changes using advanced deployment patterns.
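The declarative model can be illustrated with a minimal Deployment-style manifest, here built as a plain Python dict so the sketch stays self-contained (the field layout follows the Kubernetes apps/v1 API; the app name and image are placeholders):

```python
def deployment_manifest(name: str, image: str, replicas: int) -> dict:
    """Build a minimal Deployment-style manifest as a plain dict.

    The structure mirrors the Kubernetes apps/v1 Deployment API;
    name and image are illustrative placeholders.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,          # desired state: how many pods
            "selector": {"matchLabels": labels},
            "template": {                  # pod template stamped out per replica
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("edge-app", "registry.example/edge-app:1.0", 3)
```

The operator declares only the target state (three replicas of one image); the control plane continuously works to make the cluster match it.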

These capabilities are a good fit for some core requirements of edge deployment: typically, an edge environment consists of a large number of nodes with a similar or identical target deployment state, and requires central declaration and supervision, including the ability to manage changes.

However, there are also edge-specific challenges for which native Kubernetes has no good solution yet.

  • The term ‘edge’ is not firmly defined in terms of resources; it is better understood as a continuum ranging from ‘edge clusters’ through ‘thick’ multi-core devices to ‘thin’ resource-constrained devices. An edge deployment may target a single point or several points on this continuum, all of which must be handled efficiently.

  • For some ‘thin edge’ deployment targets, native Kubernetes’ resource footprint is too large. To alleviate this, lightweight Kubernetes distributions such as K3s and MicroK8s have been created. However, their benefits are often limited, particularly with regard to run-time memory, which is often the most critical resource.

  • Coming from the Public Cloud, Kubernetes assumes network access to be ubiquitous and takes HTTPS over TCP as a given. At the edge, network connectivity is often intermittent, expensive, of low bandwidth, or even intentionally disabled. Most importantly, the worker node agent (kubelet) needs to robustly maintain a target deployment state even during outages of the connectivity to the control plane.

  • Finally, some requirements are not edge-specific but are nevertheless very typical for edge deployments:

    • Node Management
    • Application management following the GitOps approach
    • IoT device management
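Maintaining a target state, as demanded of the kubelet above, is at its core a reconciliation loop: compare desired with actual state and derive corrective actions. A toy sketch (the replica-count model and action names are invented for illustration; real controllers watch the API server and act on cluster objects):

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to drive `actual` toward `desired`.

    Toy model of a Kubernetes-style control loop: keys are workload
    names, values are replica counts.
    """
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(("scale_up", name, want - have))
        elif have > want:
            actions.append(("scale_down", name, have - want))
    for name, have in actual.items():
        if name not in desired:
            # workload no longer declared -> remove it
            actions.append(("delete", name, have))
    return actions
```

Because the loop only needs the locally stored desired state and the locally observed actual state, it can keep running while the connection to the control plane is down, which is exactly the robustness property edge deployments require.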

Key patterns of using Kubernetes for edge deployment

Kubernetes can be used on the edge in several deployment patterns. If resources allow, a full, ‘regular’ Kubernetes cluster can of course be installed on the edge.

In the typical scenario of many nodes running identical workloads, with limited resources and independent operation (no shared load via ‘Ingress’ and load balancing), there are three fundamental deployment patterns:

  1. Single Cluster: The edge nodes are worker nodes of a single, large cluster. The master node (or nodes, if clustered) runs at the ‘thick’ end of the edge, or in the Cloud. The workload is distributed to all nodes (DaemonSet).
  2. Many Clusters: The edge nodes form many small clusters, to the extreme of single-node clusters (the workload running directly on a single master node). The high number of clusters is managed with a multi-cluster management tool. Examples of such tools (or product suites with related capabilities) are SUSE Rancher, Platform9, VMware Tanzu, Canonical Juju, Google Anthos, and Mirantis Kubernetes Engine.
  3. Seed - Shoot: The edge nodes are worker nodes of many inner (‘shoot’) clusters. The control planes of those clusters run as workload on the worker nodes of an outer (‘seed’) cluster. A tool provides management capabilities on top of the seed cluster. The prime example of such a tool is Gardener. Kubermatic is a multi-cluster manager supporting a variety of deployment patterns, including this one.
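Pattern 1 relies on a DaemonSet, which places exactly one pod on every matching node instead of maintaining a replica count. A minimal manifest, again sketched as a plain Python dict (the node label used to restrict scheduling to edge nodes is hypothetical):

```python
def daemonset_manifest(name: str, image: str) -> dict:
    """Build a minimal DaemonSet-style manifest as a plain dict.

    A DaemonSet has no replica count: the scheduler runs one pod
    on each node that matches the (hypothetical) edge node label.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "DaemonSet",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    # hypothetical label marking edge worker nodes
                    "nodeSelector": {"node-role.example.com/edge": "true"},
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }
```

Adding or removing an edge node automatically adds or removes a pod, which matches the “identical workload on every node” scenario described above.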

Edge Deployment Patterns (simplified)

Proprietary agents for edge nodes

kubelet is the primary ‘agent’ of Kubernetes, running on each node. Some solutions replace it with a component specifically designed to meet edge challenges, at the price of a more proprietary (but still Kubernetes-based) solution.

KubeEdge (CNCF ‘incubating’ since September 2020) is an open-source solution with the purpose of extending the Kubernetes ecosystem from Cloud to edge. It requires a regular Kubernetes cluster in the Cloud, where apps can be orchestrated, monitored, etc. with standard Kubernetes means. On the edge nodes:

  • The lightweight component edged replaces kubelet, using ~30 MB of RAM
  • Edge to Cloud connectivity is WebSockets based
  • The target state is enforced independently of Cloud connectivity; it is stored on the edge in a so-called ‘Device Twin’.
  • IoT device management capabilities are also provided
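The autonomy principle behind the ‘Device Twin’ (persist the last desired state locally, so enforcement survives uplink outages) can be sketched as follows. This is a toy model under invented names, not KubeEdge’s actual implementation:

```python
import json

class EdgeCache:
    """Toy model of edge autonomy: the last desired state received
    from the cloud is persisted locally, so the node can keep
    enforcing it while the uplink is down. Illustrative only --
    KubeEdge stores this kind of data in its local metadata store.
    """

    def __init__(self, path: str):
        self.path = path

    def on_cloud_update(self, desired: dict) -> None:
        # called whenever the cloud connection delivers a new target state
        with open(self.path, "w") as f:
            json.dump(desired, f)

    def desired_state(self) -> dict:
        # read back the cached state; works with or without connectivity
        with open(self.path) as f:
            return json.load(f)
```

The local reconciliation loop reads `desired_state()` on every cycle; a connectivity outage merely means `on_cloud_update()` is not called for a while.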

KubeEdge is an interesting option for many scenarios as it overcomes a number of edge specific challenges.

The IBM Edge Application Manager (IEAM) also introduces a proprietary component on the edge node, the IEAM Agent. A core feature is node management: nodes are declaratively specified, as are workloads, and an ‘Agreement Bot’ automatically decides which pod to deploy on which node based on a matching algorithm. This allows separating the roles of ‘node manager’ and ‘application manager’. It suits environments with constantly changing node and application sets and a complex relationship between them. Another strength of IEAM is its support of AI/ML on the edge, with tooling that manages the AI/ML models.
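The matching idea can be illustrated with a toy constraint matcher. Property names and the data model are invented for illustration; IEAM’s actual node/deployment policies and agreement protocol are considerably richer:

```python
def matching_nodes(workload: dict, nodes: list) -> list:
    """Return the names of nodes whose advertised properties satisfy
    all of a workload's constraints.

    Toy model of declarative node/workload matching: a constraint is
    satisfied when the node advertises exactly the same value.
    """
    return [
        node["name"]
        for node in nodes
        if all(node["properties"].get(key) == value
               for key, value in workload["constraints"].items())
    ]

# hypothetical node and workload descriptions
nodes = [
    {"name": "cam-01", "properties": {"gpu": True, "arch": "arm64"}},
    {"name": "kiosk-7", "properties": {"gpu": False, "arch": "amd64"}},
]
workload = {"name": "object-detect", "constraints": {"gpu": True}}
```

Because nodes and workloads are described independently and only meet in the matcher, node managers and application managers can work without coordinating each deployment by hand.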

Technology Evaluation

A central objective of Kubernetes is to provide single endpoints to an application (‘Service’, ‘Ingress’) and to hide the distributed nature of the underlying computing. This is in sharp contrast to edge computing, where the different edges are typically neither hidden from the outside nor working together (although use cases for that are conceivable). Workloads can be distributed to every edge node (using ‘DaemonSet’), but beyond that, the pattern of a Kubernetes control plane spanning edges is of very limited value.

Kubernetes is a mature product for its original deployment targets. For the edge though, Kubernetes’ native features, related CNCF initiatives, and commercial offerings including multi-cluster managers are still evolving. The products/solutions come with a large variety of deployment patterns, feature sets, degrees of vendor lock-in, maturity, and cost.

Avoiding vendor lock-in is always good advice in evolving markets, but it is also a bit simplistic. For Kubernetes on the edge, companies should identify the specific requirements of their target edge environment, shortlist solutions covering these (avoid looking for an ultimate solution), and then choose the least proprietary approach among them. For example, choose KubeEdge for thin edge nodes where independent operation is a must; or choose Kubermatic, as it supports a variety of deployment architectures, some with minimal vendor lock-in.

Market Outlook

Custom Resource Definitions (CRDs) will play a central role in the further evolution of Kubernetes in general, and of Kubernetes on the edge in particular. They allow arbitrary objects to be specified and managed ‘Kubernetes style’: workloads, e.g. with the Work API (major potential for edge, but not yet mature), bare metal (Tinkerbell), IoT devices, and more. Kubernetes will evolve from an orchestrator of containers into an orchestrator of everything, the operating system of the web.
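A CRD registers a new object kind with the API server. A minimal definition for a hypothetical ‘EdgeNode’ kind could look as follows, again sketched as a plain dict (the field layout follows apiextensions.k8s.io/v1; the group name is invented):

```python
def edge_node_crd() -> dict:
    """Minimal CustomResourceDefinition for a hypothetical 'EdgeNode'
    kind. Field layout follows apiextensions.k8s.io/v1; the group
    'edge.example.com' is invented for illustration.
    """
    return {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "CustomResourceDefinition",
        # by convention the name is <plural>.<group>
        "metadata": {"name": "edgenodes.edge.example.com"},
        "spec": {
            "group": "edge.example.com",
            "scope": "Cluster",
            "names": {"kind": "EdgeNode", "plural": "edgenodes",
                      "singular": "edgenode"},
            "versions": [{
                "name": "v1alpha1",
                "served": True,
                "storage": True,
                "schema": {"openAPIV3Schema": {
                    "type": "object",
                    "properties": {"spec": {"type": "object"}},
                }},
            }],
        },
    }
```

Once such a definition is applied, EdgeNode objects can be created, listed, and watched with the same tooling as built-in resources, which is exactly what makes CRDs the extension path for edge concerns.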

For many objects, we expect de-facto standard CRDs to evolve, such as an ‘EdgeNode’ resource that is as common as, say, an ‘Ingress’ resource today. This will shift the management of these objects back into native Kubernetes. We also expect Kubernetes to improve with regard to handling multiple clusters, supporting different protocols, etc. In other words, in the long term we expect Kubernetes itself to cover many of the requirements that we consider edge-specific challenges today.

  • Böhm, S. & Wirtz, G., ‘Profiling Lightweight Container Platforms: MicroK8s and K3s in Comparison to Kubernetes’, Proceedings of the 13th European Workshop on Services and their Composition (ZEUS 2021), pp. 65-73, http://ceur-ws.org/Vol-2839/paper11.pdf