- Key components
- Pod creation and IP allocation with routing
- Dataplane options in Calico
- Routing options
- IP pool and IP address management in Calico
- Network policy using Calico
- BGP peering in Calico
In a plain Kubernetes installation, there are three types of networking to keep in mind as key takeaways:
- node IP addresses
- service IP addresses
- pod IP addresses
CNI is used for pod networking. It defines how a container runtime communicates with plugins to create networks for containers.
Calico is one of the plugins that implements the CNI spec.
Why Calico? It is one of the most prominent plugins and already has over a million downloads.
Components of Calico:
Felix: creates the proper routes for pods on a node and implements network policy at the node level via iptables rules
Bird: a BGP daemon that distributes a pod's route from one node to the other nodes
CNI plugin: creates the network interface for a pod
IPAM plugin: manages IP address allocation
What happens when a pod gets created?
The API server receives the request, and the kubelet asks the container runtime to create the pod's network namespace.
The runtime (via CRI) calls the CNI plugin, which creates a virtual ethernet (veth) pair: one end is placed in the pod's namespace and the other end stays in the host namespace.
The host routing table then needs to be updated so the host knows about the new pod.
Felix adds a route to the host routing table to enable communication with the pod's IP address.
Bird, using BGP, then advertises this routing information to all the other nodes.
Calico keeps pod creation events, their metadata, and IP details in a datastore (the Kubernetes API server by default, or etcd directly).
This info is used when implementing network policies for the respective pods.
Calico allocates an IP block to each node; a /26 subnet is the most common block size.
Calico supports multiple dataplanes.
The iptables-based dataplane is the default and the most battle-tested one.
Calico also offers an eBPF dataplane for its performance advantages.
The eBPF dataplane can replace kube-proxy.
VPP is another dataplane, and the most recent.
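When Calico is managed by the Tigera operator, the dataplane is selected on the `Installation` resource. A minimal sketch of switching to the eBPF dataplane (assuming an operator-managed install with the default `Installation` name):

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    # Iptables is the default; BPF enables the eBPF dataplane
    linuxDataplane: BPF
```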
Calico routing modes:
Routing or encapsulation modes.
Calico commonly uses IP-in-IP mode.
When a pod reaches a pod on another node, Calico adds an additional IP header (IP-in-IP) that carries the source and destination node IP details.
If our cluster nodes sit on the same layer-2 VLAN, we don't need IP-in-IP encapsulation, because the physical network configuration handles the same functionality.
Both IP-in-IP and VXLAN encapsulation are supported by Calico.
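Encapsulation is configured per IP pool. A sketch of a pool with IP-in-IP enabled (the CIDR and pool name are assumed examples, not fixed Calico values):

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  blockSize: 26        # per-node block size (64 addresses)
  ipipMode: Always     # set to Never on a flat layer-2 network
  vxlanMode: Never     # or use VXLAN instead of IP-in-IP
  natOutgoing: true
```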
IP pool and IP address management:
Calico creates a default IP pool and allocates an IP block with a /26 subnet to each node in the cluster.
So 64 IPs per node for pods.
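The block-size arithmetic can be checked with Python's `ipaddress` module (the `10.244.1.0/26` block here is an assumed illustration, not a fixed Calico value):

```python
import ipaddress

# An example /26 block such as Calico might assign to one node
block = ipaddress.ip_network("10.244.1.0/26")

print(block.num_addresses)  # 64 addresses per /26 block
```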
Once the 64 IP addresses are used up, Calico allocates a new block to the exhausted node.
If a node does not have enough IPs, it can borrow IPs from another node's block that has free addresses.
We can define IP pools on a per-node or per-namespace basis.
If a pod needs a specific IP, that is also possible using an annotation.
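A sketch of both ideas: a pool scoped to labeled nodes, and a pod requesting a fixed address via annotation (the pool name, CIDRs, label, and image are assumed examples):

```yaml
# An IP pool assigned only to nodes labeled rack=rack-1
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: rack-1-pool
spec:
  cidr: 10.245.0.0/24
  blockSize: 26
  nodeSelector: rack == "rack-1"
---
# A pod requesting a specific address from a Calico pool
apiVersion: v1
kind: Pod
metadata:
  name: static-ip-pod
  annotations:
    cni.projectcalico.org/ipAddrs: '["10.245.0.10"]'
spec:
  containers:
    - name: app
      image: nginx
```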
Network policy implementation:
Calico has its own CRDs; we can create network policies using them.
We can create namespace-level and global network policies using Calico.
We can set order numbers on network policies: policies with a lower order value are evaluated first, and those with a higher order value are evaluated later.
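A sketch of a Calico `NetworkPolicy` using the `order` field (the names, namespace, labels, and port are assumed examples):

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: demo
spec:
  order: 100                      # lower order = evaluated earlier
  selector: app == "backend"      # pods this policy applies to
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == "frontend"
      destination:
        ports: [8080]
```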
We can also configure pod access explicitly using Calico.
We can also log pod-to-pod access activity using Calico via syslog.
We can even create policies based on Kubernetes service accounts.
In a cluster with Istio, we can create layer-7 policies using Calico.
Host-level firewalls can be managed with Calico by enabling host endpoints when implementing Calico.
We can select nodes or pods using labels when creating network policies in Calico.
Calico uses a failsafe-ports configuration to avoid accidentally blocking essential traffic (such as SSH or the API server).
By default, service IPs and pod IPs are reachable only inside the Kubernetes cluster.
Calico's BGP peering lets us peer with BGP-capable routers so that pods become reachable from outside the cluster too.
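A sketch of a global `BGPPeer` resource for peering with an upstream router (the peer IP and AS number are assumed examples):

```yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rack-router
spec:
  peerIP: 192.168.1.1   # address of the external router
  asNumber: 64512       # the router's AS number
```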
With the help of Calico, MetalLB can be used to advertise load-balancer, service, and pod IP addresses outside the cluster.
Default-deny network policies are a recommended best practice to prevent unnecessary access and privileges.
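A sketch of a cluster-wide default-deny using a `GlobalNetworkPolicy`; note that `all()` here matches every workload, so in practice you would typically exclude system namespaces such as kube-system before applying it:

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: all()   # matches all workloads; scope this down in real clusters
  types:
    - Ingress
    - Egress
```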
With host endpoints attached, Calico can implement separate network policies for node-level communication, for example from specific nodes to components like etcd.
Calico supports encrypting pod traffic using WireGuard.
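WireGuard encryption is toggled in the Felix configuration. A minimal sketch, assuming the default cluster-wide `FelixConfiguration` resource:

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  wireguardEnabled: true   # encrypt inter-node pod traffic with WireGuard
```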
"Certified Calico Operator: Level 1" is a certification provided by Tigera, the company behind Calico.
MetalLB works in two modes: layer-2 and BGP.
Calico supports dual stack: IPv4 and IPv6.
Calico vs Cilium:
Calico is the older project.
Cilium is built on eBPF, so it is the more advanced one.
Cilium has strong potential thanks to its community support.
Hubble UI is a good observability feature in Cilium.