I wrote an article about using an eBPF-based service mesh in EKS (eBPF with AWS). The main benefit is higher-performance networking through a shorter network path. But how does it work? And how is it done otherwise? Let's talk about it.
How is it done traditionally?
Traditional Kubernetes service networking relies on kube-proxy, which runs on every node and routes requests from the node to the pods. It also provides the service abstraction and load balancing across all the pods that back a service. It uses Linux networking rules and updates them based on the pods deployed on the node: when a new service is created, kube-proxy updates iptables rules to redirect traffic from the service's ClusterIP to the backend pods.
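Conceptually, what those iptables rules do is a destination NAT: a virtual ClusterIP is rewritten to one randomly chosen backend pod IP. The sketch below models that behavior in Python for illustration only; the service name and IPs are made up, and real kube-proxy uses the iptables `statistic` module for the random pick.

```python
import random

# Hypothetical Service: one ClusterIP fronting three backend pods.
# All addresses here are invented for illustration.
services = {
    "10.96.0.10": ["10.244.1.5", "10.244.2.7", "10.244.3.9"],
}

def dnat(dest_ip):
    """Mimic kube-proxy's iptables DNAT: rewrite a Service ClusterIP
    to one randomly selected backend pod IP."""
    backends = services.get(dest_ip)
    if backends is None:
        return dest_ip  # not a Service IP; leave the packet untouched
    return random.choice(backends)

print(dnat("10.96.0.10"))  # one of the three pod IPs above
```

Every packet to the ClusterIP goes through this rewrite, which is why the rule set has to be kept in sync with the pods that currently exist.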
So this works — what's the issue?
It's the network path and the latency that comes with it, plus the iptables updates needed every time pods are deployed or removed. eBPF allows packet processing directly in the Linux kernel, eliminating the per-packet NAT traversal of iptables chains. eBPF can also optimize the service network and enable direct pod-to-pod communication without intermediate NAT processing.
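The performance difference is easy to see in a toy model: iptables rules are matched by walking a chain sequentially, while an eBPF datapath keeps the service table in a kernel hash map and resolves a backend in a single lookup. This is a simplification for illustration, not real datapath code.

```python
# Toy model: 1000 services, each ClusterIP mapping to one pod IP.
rules = [("10.96.0.%d" % i, "10.244.0.%d" % i) for i in range(1000)]

def iptables_lookup(dest):
    """iptables-style matching: scan rules in order, O(n) in rule count."""
    for cluster_ip, pod_ip in rules:
        if dest == cluster_ip:
            return pod_ip
    return None  # no rule matched

# eBPF-style: the service table is a hash map, O(1) average lookup.
bpf_map = dict(rules)

def ebpf_lookup(dest):
    return bpf_map.get(dest)

# Both resolve the same backend; only the cost per packet differs.
print(iptables_lookup("10.96.0.999"), ebpf_lookup("10.96.0.999"))
```

The linear scan also has to be rebuilt as pods churn, whereas a map entry can be updated in place — which is the operational half of the argument above.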
But what about TLS?
Applications can offload their network encryption requirements to the service mesh or the underlying network. Simply using transparent encryption ensures that traffic is encrypted within the cluster. Both IPsec and WireGuard are supported by Kubernetes networking layers such as Cilium and Calico. There is also the option to use TLS for the initial certificate exchange and endpoint authentication; this way identity is tied to the application rather than the node.
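As a concrete example, Cilium exposes transparent encryption as a Helm values toggle. The fragment below is a sketch of that configuration; the exact keys should be checked against the documentation for the Cilium version you deploy.

```yaml
# values.yaml fragment (sketch) -- enables WireGuard-based
# transparent encryption for node-to-node pod traffic in Cilium.
encryption:
  enabled: true
  type: wireguard   # "ipsec" is the other supported option
```

With this enabled, workloads need no code changes: encryption happens below them, in the datapath.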