One of the interesting aspects of moving to a top-down, application-centric way of working is rethinking how we do networking. Much as the application model first abstracted away physical infrastructure with virtualization and is now using Kubernetes and similar orchestration tools to abstract away the underlying virtual machines, networking is moving away from general-purpose routed protocol stacks to software-driven networking that uses common protocols to implement application-specific network functions.
We can see how networking is evolving with Windows Server 2022’s introduction of SMB over QUIC as an alternative to general-purpose VPNs for file sharing between on-premises Azure Stack systems and the Azure public cloud. Similarly, in Kubernetes, we’re seeing technologies such as service mesh provide an application-defined networking model, one that deploys the mesh alongside your distributed application as part of the application definition rather than as a network the application merely uses.
A new networking layer: application-defined networking
This application-driven networking is a logical extension of much of the software-defined networking model that underpins the public cloud. However, instead of requiring deep understanding of networking and, more importantly, network hardware, it’s a shift to a higher-level approach where a network is automatically deployed from intents expressed as policies and rules. The shift away from both the virtual and the physical is essential when we’re working with dynamically self-orchestrating applications that scale up and down on demand, with instances across multiple regions and geographies all part of the same application.
It’s still early days for application-driven networking, but we’re seeing tools appear in Azure as part of its Kubernetes implementation. One option is the Open Service Mesh, of course, but there’s another set of tools that helps manage the network security of our Kubernetes applications: Network Policy. This helps manage connectivity between the various components of a Kubernetes application, handling traffic flow between pods.
Network policies in Azure Kubernetes Service
AKS (Azure Kubernetes Service) offers network policy support through two routes: its own native tool or the community-developed Calico. The second option is perhaps the more interesting, as it gives you a cross-cloud tool that can work not only with AKS, but also with your own on-premises Kubernetes, Red Hat’s OpenShift, and many other Kubernetes implementations.
Calico is managed by Kubernetes security and management company Tigera. It is an open source implementation of the Kubernetes network policy specification, handling connectivity between workloads and enforcing security policies on those connections, adding its own extensions to the base Kubernetes functions. It’s designed to work using different data planes, from eBPF on Linux to Windows Host Networking. This approach makes it ideal for Azure, which offers Kubernetes support for both Linux and Windows containers.
Setting up network policy in AKS is important. By default, all pods in a cluster can send traffic to, and receive traffic from, any other pod. Although this isn’t inherently insecure, it does open up your cluster to the possibility of compromise: pods containing back-end services will accept connections from any workload that can reach them. Implementing a network policy allows you to ensure that those back-end services are only accessible by front-end systems, reducing risk by controlling traffic.
Whether using the native service or Calico, AKS network policies are YAML documents that define the rules used to route traffic between pods. You can make those policies part of the overall manifest for your application, defining your network with your application definition. This allows the network to scale with the application, adding or removing pods as AKS responds to changes in load (or if you’re using it with KEDA [Kubernetes-based Event-Driven Autoscaling], as your application responds to events).
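As a sketch of what such a policy document looks like, here is a standard Kubernetes NetworkPolicy that locks a back-end tier down to traffic from the front end. The labels (app: myapp, tier: frontend/backend) and the port are hypothetical placeholders for your own application’s manifest:

```yaml
# Illustrative policy: only pods labeled tier: frontend in the same
# namespace may reach the back-end pods, and only on TCP port 80.
# All other ingress to the back end is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp
      tier: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: myapp
              tier: frontend
      ports:
        - protocol: TCP
          port: 80
```

Because this is part of your application manifest, the rule follows the pods: as AKS scales the front end up or down, any pod carrying the matching labels is automatically covered.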
Using Calico in Azure Kubernetes Service
Choosing a network policy tool must be done at cluster creation; you can’t change the tool you’re using once it’s been deployed. There are differences between the AKS native implementation and its Calico support. Both implement the Kubernetes specification, and both run on Linux AKS clusters, but only Calico has support for Windows containers. It’s important to note that although Calico will work in AKS, there’s no official Azure support for Calico beyond the existing community options.
Getting started with Calico in AKS is relatively simple. Your cluster will use the Azure Container Networking plug-in, which can host either the native AKS network policy or Calico. First, set up your virtual network with any subnets you plan to use. Once you have this in place, all you need to do is use the Azure command line to create an AKS cluster, setting your network policy to “calico” rather than “azure.” This enables Calico support on both Linux and Windows node pools. If you’re using Windows, make sure to register Calico support using the EnableAKSWindowsCalico feature flag from the Azure CLI.
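The steps above might look something like the following from the Azure CLI. The resource group, cluster name, and VNet details are placeholders; check the current AKS documentation for the state of the Windows feature flag before relying on it:

```shell
# Register the Windows Calico feature flag (only needed for Windows node pools),
# then refresh the provider registration.
az feature register --namespace Microsoft.ContainerService \
  --name EnableAKSWindowsCalico
az provider register --namespace Microsoft.ContainerService

# Create the cluster with the Azure Container Networking plug-in
# and Calico as the network policy engine. Names are hypothetical.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-policy calico
```

Note that the --network-policy choice is fixed at creation; as described earlier, you can’t switch between “azure” and “calico” on a running cluster.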
The Calico team recommends installing the calicoctl management tool in your cluster. There are several different options for installation: running binaries under Windows or Linux or adding a Kubernetes pod to your cluster. This last option is probably best for working with AKS as you can then mix and match Windows and Linux pods in your cluster and manage both from the same Kubernetes environment.
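A rough sketch of the in-cluster installation route, following the pattern in the Calico documentation; the manifest URL and version are assumptions, so check the current Calico release for the correct path:

```shell
# Deploy calicoctl as a pod in the cluster (version number is illustrative).
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calicoctl.yaml

# Run calicoctl through kubectl exec; an alias keeps the commands short.
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
calicoctl version
```

Running calicoctl this way means the same management workflow covers both your Linux and Windows node pools from a single Kubernetes environment.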
Building and deploying Calico network policies
You’ll create Calico network policies using YAML, setting policies for pods with specific roles. These roles are applied as pod labels when creating the pod, and your rules will need a selector to attach your policy to the pods that match your app and role labels. Once you’ve created a policy, use kubectl to apply it to your cluster.
Rules are easy enough to define. You can set ingress policies so that specific pods only receive traffic from pods matching another selector pattern. This way you can ensure that your application back end only receives traffic from your front end, and that your data service only responds when addressed by your back end. The resulting simple set of ingress rules ensures isolation between application tiers as part of your application definition. Other options allow you to define rules for namespaces as well as roles, ensuring separation between production and test pods.
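A sketch of such tier isolation using Calico’s own policy resource (projectcalico.org/v3), which extends the base Kubernetes specification with label-selector expressions and explicit actions. The role labels, namespace, and port here are illustrative, not taken from any real application:

```yaml
# Illustrative Calico policy: pods labeled role: database in the
# production namespace accept TCP 5432 only from role: backend pods.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: database-allow-backend
  namespace: production
spec:
  selector: role == 'database'
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: role == 'backend'
      destination:
        ports:
          - 5432
```

Because the policy lives in the production namespace and selects on role labels, an identically labeled pod in a test namespace is untouched, which is one way to keep production and test traffic separated.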
Calico gives you fine-grained control over your application network policy. You can manage ports, specific application endpoints, protocols, and even IP versions. Your policies can be applied to a specific namespace or globally across your Kubernetes instance. Rules are set for ingress and egress, allowing you to control the flow of traffic in and out of your pods, with policies denying all traffic apart from what is specifically allowed. With Calico, there’s enough flexibility to quickly build complex network security models with a handful of simple YAML files. Just create the YAML you need and use calicoctl to apply your rules.
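To illustrate the global, deny-by-default end of that spectrum, here is a hedged sketch of a Calico GlobalNetworkPolicy following the default-deny pattern from the Calico documentation. The namespace exclusions are assumptions; adjust them to whatever system namespaces your cluster relies on:

```yaml
# Illustrative cluster-wide default: with both types listed and no
# allow rules, all ingress and egress is denied unless another policy
# explicitly permits it. System namespaces are excluded so cluster
# services keep working.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  namespaceSelector: projectcalico.org/name not in {'kube-system', 'calico-system'}
  types:
    - Ingress
    - Egress
```

With a default-deny baseline like this in place, the per-tier allow rules shown earlier become the complete description of your application’s permitted traffic.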
Application-driven networking is an important concept that allows application development teams to control how their code interacts with the underlying network fabric. Like storage and, thanks to tools like Kubernetes, compute, networking becomes a fabric that can be controlled simply, at the level of individual connections. Networking teams no longer have to configure application networks; all they need to do is help define VNets and then leave the application-level policies to the application teams.
If we’re to build flexible, modern applications, we need to take advantage of tools such as Calico, allowing our networking to be as portable as our code and as flexible and scalable. It may be a change in how we think about networks, but it’s an essential one to support modern application infrastructures.