Networking
SDN
Software-Defined Networking (SDN) routes traffic across a virtualized network. It is a broad concept that is not specific to Kubernetes (unlike CNI).
Through its notion of network plugins, Kubernetes can be used with a vast number of SDN solutions. The best known are:
- Calico (L3), used by GKE, EKS, AKS, …
- OpenShiftSDN
- ACI plugin for Kubernetes
- Flannel (can run in L2)
- Canal (Flannel with Calico)
For a more detailed description of Flannel, Calico, and other network providers, see Deep Into Kubernetes Networking.
CNI
The Container Network Interface (CNI) is under the governance of the CNCF. It is responsible for connecting a container's network with that of the host. Kubernetes natively supports this model.
Kubernetes networking has one important fundamental design property:
Every pod must be able to communicate with every other pod in the cluster without NAT, using a unique IP. A node must also be able to communicate with any Pod without the use of NAT.
This principle gives every Pod in the cluster a unique, first-class identity. Pods communicate like VMs: the receiving side sees this unique identity/IP.
Containers inside a pod share the same network namespace and communicate over localhost.
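As a minimal sketch (the names and images are illustrative), the two containers in the Pod below share one network namespace, so the curl sidecar reaches nginx simply at http://localhost:80:

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo   # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: probe
    image: curlimages/curl
    # both containers share the Pod's network namespace,
    # so "localhost" here is the nginx container
    command: ["sh", "-c", "while true; do curl -s http://localhost:80/ > /dev/null && echo reached web over localhost; sleep 10; done"]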
This network scheme is not implemented by Kubernetes itself but through a plugin architecture: the Container Network Interface (CNI) model.
The specification requires that providers implement their plugin as a binary executable that the container engine invokes.
IPAM plugin
The CNI plugin provides IP address management for the Pods and builds routes for the virtual interfaces. To do this, it interfaces with an IPAM plugin that is also part of the CNI specification. The IPAM plugin must likewise be a single executable that the CNI plugin consumes. Its role is to provide the CNI plugin with the gateway, IP subnet, and routes for the Pod.
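To make this concrete, here is a minimal sketch of a CNI network configuration (per the specification these are JSON files, typically placed under /etc/cni/net.d/ on each node). It wires the reference bridge plugin to the host-local IPAM plugin; the network name, subnet, gateway, and routes are illustrative values:

{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "gateway": "10.244.1.1",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}

The container engine invokes the bridge binary, which in turn calls the host-local binary to obtain an IP, gateway, and routes for the Pod's interface.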
Services
A Service exposes a single, constant IP address through which clients can connect to a set of pods. Services operate at the transport layer (Layer 4: TCP/UDP).
Each service receives a unique virtual IP called the ClusterIP, which has no visibility outside of the cluster. Routing traffic to this IP is the job of kube-proxy, which is installed on every node and achieves it through iptables or other means, depending on the kube-proxy mode in use.
ClusterIP is also the default type of a service. To make the service available to the external world, you need to specify another type of service such as NodePort, LoadBalancer, or ExternalName.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: ClusterIP (1)
(1) The type can also be NodePort, LoadBalancer, or ExternalName.
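For example, switching the type to NodePort (a sketch; the service name and nodePort value are illustrative) exposes the same set of pods on a static port of every node:

kind: Service
apiVersion: v1
metadata:
  name: my-service-nodeport
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80          # ClusterIP port, still reachable inside the cluster
    targetPort: 9376  # port the Pod containers listen on
    nodePort: 30080   # must fall within the default 30000-32767 range

Clients can then reach the service at any-node-ip:30080, and kube-proxy forwards the traffic to the selected Pods.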
Ingress
Because a LoadBalancer service works only at Layer 4, every service requires its own load balancer with its own public IP address. The Kubernetes resource that handles load balancing at Layer 7 (HTTP) is called an Ingress.
An Ingress requires only one load balancer (NGINX, HAProxy, Traefik, Amazon ALB, …), even when providing access to dozens of services.
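As a minimal sketch (the host name is hypothetical), the Ingress below routes HTTP traffic for myapp.example.com to the my-service defined above; an Ingress controller such as NGINX must be running in the cluster for the rule to take effect:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # the ClusterIP service defined earlier
            port:
              number: 80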