A few weeks ago, when we introduced Dapr, we also discussed its overlapping capabilities with a service mesh, although Dapr itself is not a service mesh. As mentioned in a previous blog post, service meshes have become a pivotal component of modern cloud-native applications in recent years, providing critical features like traffic management, security, and observability. Traditional service meshes rely on sidecar proxies, which can introduce complexity and performance overhead. Enter Cilium with its sidecar-less service mesh capabilities, leveraging eBPF for efficient, kernel-level networking. Coupled with Dapr, a powerful distributed application runtime, this integration promises high performance, enhanced security, and simplified operations.
The Need for Sidecar-Less Service Mesh
As discussed in this article, Cilium initially started as an eBPF-powered Container Network Interface (CNI) for Kubernetes. Over time, Cilium evolved to become a versatile tool for networking within Kubernetes. In July 2022, the first release of the Cilium Service Mesh was launched, introducing a sidecar-less service mesh that leverages eBPF and operates within the Linux kernel.
Source: https://isovalent.wpengine.com/wp-content/uploads/2022/07/1.12@2x-4.png
This service mesh aims to utilize eBPF extensively, focusing on lower-level networking (up to layer 4) to deliver its capabilities. However, for a fully-featured service mesh, higher-level functionalities such as service invocation rely on layer 7. Typically, these require a proxy to implement features like retries and circuit breaking.
When considering Dapr and its layer 7 functionalities, it traditionally runs in a sidecar-based model. While that is the default, there is an alternative approach that aligns with the sidecar-less architecture.
Introducing Dapr Shared
Dapr Shared provides an alternative deployment model for Dapr. While the default approach still utilizes sidecars, Dapr Shared allows users to easily deploy Dapr runtimes (Daprd) either as a DaemonSet (one instance per node) or as a Deployment (one per cluster) inside Kubernetes. This approach offers the significant benefit of decoupling, to some extent, the lifecycle of a Daprd instance from the lifecycle of the actual application. You might wonder: why is this beneficial? There are several scenarios where this approach is advantageous:
- Function as a Service (FaaS) Integration: In FaaS, an instance of the actual function might only exist for a few seconds during its execution. However, you may still want to utilize PubSub for asynchronous messaging.
- Sidecar-Free Service Mesh Integration: When using Dapr primarily for its building blocks, similar to the FaaS example, decoupling the Daprd lifecycle from its application is beneficial. By controlling the number of Daprd instances ("proxies"), this approach integrates well with sidecar-less service meshes.
By utilizing Dapr Shared in these cases, we can still leverage all the building blocks and advanced functionalities such as mTLS in conjunction with service invocation.
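To make this tangible: no matter where Daprd runs, applications talk to it through the same HTTP API. A minimal service invocation sketch against a shared instance could look like this (the host name and method path are assumptions for illustration; with a sidecar it would simply be localhost:3500):

curl http://whoami-dapr.default.svc.cluster.local:3500/v1.0/invoke/whoami.default/method/hello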
Our Demo Case & Architecture
With the theory covered, it's time to get practical. We'll set up a simple yet powerful architecture to demonstrate how Dapr and Cilium Service Mesh can work together.
In this setup, we're embracing the cutting edge. Cilium will handle the general networking layer and we'll also use its Gateway API implementation to manage the north-south traffic flowing into our demo cluster. For east-west traffic, we'll leverage the Cilium GAMMA implementation, enhanced with Dapr Shared runtimes, to ensure comprehensive integration.
By incorporating these technologies, we'll be able to add resiliency and other advanced features to our setup. This architecture will showcase the seamless collaboration between Dapr and Cilium, providing robust and scalable networking solutions for Kubernetes environments.
Requirements
If you want to follow along, you'll need access to a Kubernetes cluster. This cluster should have Cilium version 1.16 installed with the following configurations:
- Kube proxy replacement OR
- NodePort Services
- GatewayAPI enabled
- Hubble and Hubble UI accessible
You can follow the official documentation to install Cilium.
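For reference, a Helm-based installation with these options enabled could look roughly like this (a sketch, not a complete installation guide; note that the Gateway API CRDs must be present in the cluster before enabling the feature):

helm upgrade --install cilium cilium/cilium \
  --version 1.16.0 \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set gatewayAPI.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true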
Set up Gateway API and a Gateway
As previously mentioned, we will use GatewayAPI to route traffic into our cluster. This can be easily accomplished by defining a Gateway resource as shown below:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: default-gateway
  namespace: default
spec:
  gatewayClassName: cilium
  listeners:
  - allowedRoutes:
      namespaces:
        from: All
    name: default-http
    port: 80
    protocol: HTTP
With this resource in place and Cilium correctly installed, you can verify that the Gateway has been successfully programmed and has an external IP by running the following command:
kubectl get gateway default-gateway
This should output something like:
NAME              CLASS    ADDRESS          PROGRAMMED   AGE
default-gateway   cilium   18.192.114.200   True         27h
Now that we have the Gateway set up, we can proceed to get a dummy app running.
Dummy app and simple routing
To test our setup, we'll deploy the whoami image created by Traefik Labs. This app displays HTTP request headers and other related information, which is useful for validating our configuration. Let's start by deploying and making the dummy app accessible.
Create a ServiceAccount, Service, Deployment, and HTTPRoute for the whoami app (the Deployment references a whoami ServiceAccount, so we create one alongside):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: whoami
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
      version: v1
  template:
    metadata:
      labels:
        app: whoami
        version: v1
    spec:
      serviceAccountName: whoami
      containers:
      - image: traefik/whoami
        imagePullPolicy: IfNotPresent
        name: whoami
        ports:
        - containerPort: 80
        env:
        - name: WHOAMI_NAME
          value: "v1"
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: whoami-route
spec:
  parentRefs:
  - name: default-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: whoami
      port: 80
After deploying these resources, you can validate the setup using a simple curl command:
curl -v http://18.192.114.200
The expected output should be similar to:
* Trying 18.192.114.200:80...
* Connected to 18.192.114.200 (18.192.114.200) port 80
> GET / HTTP/1.1
> Host: 18.192.114.200
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 200 OK
< date: Tue, 30 Jul 2024 14:16:41 GMT
< content-length: 350
< content-type: text/plain; charset=utf-8
< x-envoy-upstream-service-time: 6
< server: envoy
<
Name: v1
Hostname: whoami-v1-cd9b4bdf7-ht6dc
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.101
IP: fe80::e0d3:4fff:fe30:95ef
RemoteAddr: 10.42.1.170:35911
GET / HTTP/1.1
Host: 18.192.114.200
User-Agent: curl/8.4.0
Accept: */*
X-Envoy-Internal: true
X-Forwarded-For: 10.42.1.82
X-Forwarded-Proto: http
X-Request-Id: bc928d69-06e2-427b-89f1-fab7c8bfb7e9
* Connection #0 to host 18.192.114.200 left intact
This confirms that the whoami app is accessible and that the GatewayAPI is routing traffic correctly into our cluster.
Install Dapr runtime
Next, we need to install the Dapr runtime in our Kubernetes cluster. This can be quickly done using the official Helm chart. If you haven't added the Dapr Helm repository yet, run helm repo add dapr https://dapr.github.io/helm-charts/ followed by helm repo update first. Then execute the following command to install Dapr:
helm upgrade --install dapr dapr/dapr \
--version=1.13 \
--namespace dapr-system \
--create-namespace \
--wait
A successful installation output looks like this:
Release "dapr" has been upgraded. Happy Helming!
NAME: dapr
LAST DEPLOYED: Tue Jul 30 16:09:45 2024
NAMESPACE: dapr-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Dapr: High-performance, lightweight serverless runtime for cloud and edge
Your release is named dapr.
To get started with Dapr, we recommend using our quickstarts:
https://github.com/dapr/quickstarts
For more information on running Dapr, visit:
https://dapr.io
To verify that all Dapr components are running correctly, check the pods in the dapr-system namespace:
kubectl get pods -n dapr-system
A typical output should show all Dapr components in a running state:
NAME READY STATUS RESTARTS AGE
dapr-operator-c449df54-px8q2 1/1 Running 0 25h
dapr-placement-server-0 1/1 Running 0 25h
dapr-sentry-774c876df5-h2nhh 1/1 Running 0 25h
dapr-sidecar-injector-78769c6d9-zh2jw 1/1 Running 0 25h
With Dapr successfully installed and all components healthy, we can proceed to install our shared instances.
Deploy Dapr Shared Instances
Now, let’s set up our Dapr shared instances. We need to deploy two instances: one for the ingress (Cilium Gateway) and one for the dummy whoami app. The ingress instance will be deployed as a DaemonSet, while the whoami instance will be deployed as a Deployment.
helm upgrade --install ingress-shared oci://registry-1.docker.io/daprio/dapr-shared-chart \
  --set shared.appId=ingress \
  --set shared.strategy=daemonset \
  --set shared.remoteURL=cilium-gateway-default-gateway.default.svc.cluster.local \
  --set shared.remotePort=80 \
  --set shared.daprd.mtls.enabled=true \
  --namespace default

helm upgrade --install whoami-shared oci://registry-1.docker.io/daprio/dapr-shared-chart \
  --set shared.appId=whoami \
  --set shared.strategy=deployment \
  --set shared.remoteURL=whoami.default.svc.cluster.local \
  --set shared.remotePort=80 \
  --set shared.daprd.mtls.enabled=true \
  --namespace default
After running these commands, you can verify that the Dapr shared instances are running correctly by using:
kubectl get pods
You should see output similar to this:
NAME READY STATUS RESTARTS AGE
ingress-shared-dapr-shared-chart-bfnxc 1/1 Running 0 29s
ingress-shared-dapr-shared-chart-lg62n 1/1 Running 0 21s
whoami-shared-dapr-shared-chart-69fd8658db-kmzph 1/1 Running 0 26s
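Besides the pods, the chart also creates a Kubernetes Service per instance (the ingress instance's Service, ingress-dapr, is referenced in the next step; the naming appears to follow an <appId>-dapr convention). You can list them like this:

kubectl get svc | grep dapr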
With these shared instances up and running, we have successfully established a robust setup for handling application traffic and service invocation.
Daprise things!
Now comes the fun part: "Daprise" our setup! We'll integrate Dapr with both the Cilium Gateway and the whoami app to leverage Dapr’s capabilities fully. To ensure all requests flow through the Dapr shared instance dedicated to the gateway, we'll configure the HTTPRoute to target this Dapr instance. Dapr expects a specific URL format for service invocation, so we’ll use the URLRewrite filter in Gateway API to adjust the request paths accordingly.
Update the previous HTTPRoute resource with the following configuration:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: whoami-route
spec:
  parentRefs:
  - name: default-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /v1.0/invoke/whoami.default/method//
    backendRefs:
    - name: ingress-dapr
      port: 3500
How did we Daprise it?
- filters: The URLRewrite filter rewrites the URL to match Dapr’s service invocation format. (and yes, the // is correct)
- backendRefs: Points to the ingress-dapr instance which handles requests for the gateway.
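In effect, every external path is mapped onto Dapr's invocation API. For instance, a hypothetical request to /api/users would be rewritten before it reaches Daprd:

# incoming request at the gateway (path is hypothetical)
GET /api/users
# path after the URLRewrite filter, as Daprd receives it
GET /v1.0/invoke/whoami.default/method/api/users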
To verify everything is working correctly, perform a curl request to the external IP of your gateway:
curl -v http://18.192.114.200
You should see output similar to this:
* Trying 18.192.114.200:80...
* Connected to 18.192.114.200 (18.192.114.200) port 80
> GET / HTTP/1.1
> Host: 18.192.114.200
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-length: 719
< content-type: text/plain; charset=utf-8
< date: Tue, 30 Jul 2024 14:38:18 GMT
< traceparent: 00-00000000000000000000000000000000-0000000000000000-00
< x-envoy-upstream-service-time: 18
< server: envoy
<
Name: v1
Hostname: whoami-v1-cd9b4bdf7-4tmvr
IP: 127.0.0.1
IP: ::1
IP: 10.42.1.30
IP: fe80::7f:66ff:fe14:15ad
RemoteAddr: 10.42.0.60:55178
GET / HTTP/1.1
Host: whoami.default.svc.cluster.local:80
User-Agent: curl/8.4.0
Accept: */*
Accept-Encoding: gzip
Dapr-Callee-App-Id: whoami
Dapr-Caller-App-Id: ingress
Forwarded: for=10.42.0.131;by=10.42.0.131;host=ingress-shared-dapr-shared-chart-bfnxc
Traceparent: 00-00000000000000000000000000000000-0000000000000000-00
X-Envoy-Internal: true
X-Envoy-Original-Path: /
X-Forwarded-For: 10.42.1.110
X-Forwarded-For: 10.42.0.131
X-Forwarded-Host: ingress-shared-dapr-shared-chart-bfnxc
X-Forwarded-Proto: http
X-Request-Id: 5556df41-f78f-4c98-9526-f5025a787a1e
* Connection #0 to host 18.192.114.200 left intact
From the headers, you can see Dapr-Callee-App-Id and Dapr-Caller-App-Id, indicating that traffic is correctly routed through the Dapr instances. This setup ensures that connections are mTLS-encrypted on the transport level by default, adding an extra layer of security to our communication. We've successfully integrated Dapr with Cilium's service mesh, leveraging its advanced capabilities to enhance our application's networking and service invocation.
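If you have the Dapr CLI available, you can additionally confirm that mTLS is enabled in the cluster (the -k flag targets Kubernetes):

dapr mtls -k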
Implement Traffic Shifting Using GAMMA
One feature that Dapr doesn't provide but Cilium does is traffic shifting. Fortunately, with our integration, we can leverage Cilium's GAMMA (Gateway API for Mesh Management and Administration) implementation to enable this capability. GAMMA extends the Gateway API to support east-west traffic management within the cluster, complementing its use for north-south traffic. To set up traffic shifting, we'll need to create an HTTPRoute configuration that facilitates load balancing between different versions of our app. By leveraging GAMMA, you can shift traffic between different versions of your application, making it easier to deploy new features or perform canary releases with minimal disruption.

First, we'll deploy a new version of the whoami app and create additional services for it. Here's the YAML configuration for deploying whoami version 2 and setting up the necessary services:
apiVersion: v1
kind: Service
metadata:
  name: whoami-v1
  labels:
    app: whoami
    version: v1
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: whoami
    version: v1
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-v2
  labels:
    app: whoami
    version: v2
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: whoami
    version: v2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
      version: v2
  template:
    metadata:
      labels:
        app: whoami
        version: v2
    spec:
      serviceAccountName: whoami
      containers:
      - image: traefik/whoami
        imagePullPolicy: IfNotPresent
        name: whoami
        ports:
        - containerPort: 80
        env:
        - name: WHOAMI_NAME
          value: "v2"
Next, we configure an HTTPRoute to enable traffic shifting between whoami-v1 and whoami-v2. The HTTPRoute will route a percentage of traffic to each version of the app based on the defined weights.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: whoami-route-service
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: whoami
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: whoami-v1
      port: 80
      weight: 50
    - name: whoami-v2
      port: 80
      weight: 50
The important piece here is the parentRefs section. While in the traditional Gateway API setup we use it to bind the HTTPRoute to a Gateway, here we bind it against the whoami Service, enabling the GAMMA case. Additionally, the weight defined on each backendRef lets us adjust how the traffic should be shifted. To ensure traffic shifting is working correctly, you can perform several curl requests and check the responses to verify that traffic is distributed between whoami-v1 and whoami-v2 as specified.
curl http://18.192.114.200
Name: v1
Hostname: whoami-v1-cd9b4bdf7-4tmvr
IP: 127.0.0.1
IP: ::1
IP: 10.42.1.30
IP: fe80::7f:66ff:fe14:15ad
RemoteAddr: 10.42.0.60:55178
GET / HTTP/1.1
Host: whoami.default.svc.cluster.local:80
User-Agent: curl/8.4.0
Accept: */*
Accept-Encoding: gzip
Dapr-Callee-App-Id: whoami
Dapr-Caller-App-Id: ingress
Forwarded: for=10.42.0.131;by=10.42.0.131;host=ingress-shared-dapr-shared-chart-bfnxc
Traceparent: 00-00000000000000000000000000000000-0000000000000000-00
X-Envoy-Internal: true
X-Envoy-Original-Path: /
X-Forwarded-For: 10.42.1.110
X-Forwarded-For: 10.42.0.131
X-Forwarded-Host: ingress-shared-dapr-shared-chart-bfnxc
X-Forwarded-Proto: http
X-Request-Id: e9d8ab85-8ae7-46a9-8b44-42240d1d6833
---
curl http://18.192.114.200
Name: v2
Hostname: whoami-v2-766bb5fbd5-bfx9j
IP: 127.0.0.1
IP: ::1
IP: 10.42.1.181
IP: fe80::d441:87ff:fea1:d123
RemoteAddr: 10.42.1.27:32774
GET / HTTP/1.1
Host: whoami.default.svc.cluster.local:80
User-Agent: curl/8.4.0
Accept: */*
Accept-Encoding: gzip
Dapr-Callee-App-Id: whoami
Dapr-Caller-App-Id: ingress
Forwarded: for=10.42.0.131;by=10.42.0.131;host=ingress-shared-dapr-shared-chart-bfnxc
Traceparent: 00-00000000000000000000000000000000-0000000000000000-00
X-Envoy-Internal: true
X-Envoy-Original-Path: /
X-Forwarded-For: 10.42.1.64
X-Forwarded-For: 10.42.0.131
X-Forwarded-Host: ingress-shared-dapr-shared-chart-bfnxc
X-Forwarded-Proto: http
X-Request-Id: f1df7207-098c-4a98-8a97-858ce1f9c0ed
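Instead of eyeballing individual responses, you can also sample the distribution with a small shell loop (assuming the gateway address from above); with a 50/50 weighting, the counts should come out roughly equal:

for i in $(seq 1 20); do curl -s http://18.192.114.200 | grep '^Name:'; done | sort | uniq -c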
You should see responses from both versions of the app based on the configured traffic weights. Additionally, we can consult the Hubble flow logs to check the actual connections happening.
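One way to tail them is the Hubble CLI (flags may differ slightly between versions):

hubble observe --namespace default --follow

The captured flows look like this: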
Jul 29 15:18:13.883: kube-system/svclb-cilium-gateway-default-gateway-f455e04d-jljl8:54047 (ingress) -> default/ingress-shared-dapr-shared-chart-g2596:3500 (ID:33809) http-request FORWARDED (HTTP/1.1 GET http://whoami.cloud-native.rocks/v1.0/invoke/whoami.default/method/something)
Jul 29 15:18:13.884: 10.42.1.170:39105 (ingress) <> default/ingress-shared-dapr-shared-chart-g2596:3500 (ID:33809) to-overlay FORWARDED (TCP Flags: ACK, PSH)
Jul 29 15:18:13.884: 10.42.1.170:39105 (ingress) -> default/ingress-shared-dapr-shared-chart-g2596:3500 (ID:33809) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 29 15:18:13.890: default/ingress-shared-dapr-shared-chart-g2596:55194 (ID:33809) -> default/whoami-shared-dapr-shared-chart-crmxz:50002 (ID:14274) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 29 15:18:13.890: default/ingress-shared-dapr-shared-chart-g2596:55194 (ID:33809) <- default/whoami-shared-dapr-shared-chart-crmxz:50002 (ID:14274) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 29 15:18:13.891: default/whoami-shared-dapr-shared-chart-crmxz:46028 (ID:14274) -> default/whoami-v1-cd9b4bdf7-gxkvt:80 (ID:26920) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 29 15:18:13.891: default/whoami-shared-dapr-shared-chart-crmxz:46028 (ID:14274) <- default/whoami-v1-cd9b4bdf7-gxkvt:80 (ID:26920) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 29 15:18:13.893: 10.42.1.170:39105 (ingress) <> default/ingress-shared-dapr-shared-chart-g2596:3500 (ID:33809) to-overlay FORWARDED (TCP Flags: ACK)
Jul 29 15:18:13.895: kube-system/svclb-cilium-gateway-default-gateway-f455e04d-jljl8:54047 (ingress) <- default/ingress-shared-dapr-shared-chart-g2596:3500 (ID:33809) http-response FORWARDED (HTTP/1.1 200 15ms (GET http://whoami.cloud-native.rocks/v1.0/invoke/whoami.default/method/something))
Jul 29 15:18:14.405: kube-system/svclb-cilium-gateway-default-gateway-f455e04d-jljl8:54048 (ingress) -> default/ingress-shared-dapr-shared-chart-zxv74:3500 (ID:33809) http-request FORWARDED (HTTP/1.1 GET http://whoami.cloud-native.rocks/v1.0/invoke/whoami.default/method/something)
Jul 29 15:18:14.406: 10.42.1.170:38391 (ingress) -> default/ingress-shared-dapr-shared-chart-zxv74:3500 (ID:33809) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 29 15:18:14.414: default/ingress-shared-dapr-shared-chart-zxv74:55876 (ID:33809) -> default/whoami-shared-dapr-shared-chart-g572c:50002 (ID:14274) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 29 15:18:14.414: default/whoami-shared-dapr-shared-chart-g572c:58854 (ID:14274) -> default/whoami-v2-766bb5fbd5-jvtcd:80 (ID:2184) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 29 15:18:14.414: default/ingress-shared-dapr-shared-chart-zxv74:55876 (ID:33809) <- default/whoami-shared-dapr-shared-chart-g572c:50002 (ID:14274) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 29 15:18:14.415: default/whoami-shared-dapr-shared-chart-g572c:58854 (ID:14274) <- default/whoami-v2-766bb5fbd5-jvtcd:80 (ID:2184) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
This log shows that traffic gets routed to whoami-v1 as well as to whoami-v2. This is also visible in the Hubble UI.
Access Control with Network Policies
With Cilium handling the traffic flow to Dapr, we can leverage Cilium’s network policies to configure access control. Unlike Dapr’s access control, which operates at layer 7 (application layer), Cilium’s approach works at much lower levels of the network stack, providing more granular control earlier in the traffic flow.
Here’s how to configure access control using Cilium’s CiliumNetworkPolicy:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: "allow-ingress-shared-to-whoami"
spec:
  endpointSelector:
    matchLabels:
      dapr.io/app-id: whoami
  ingress:
  - fromEndpoints:
    - matchLabels:
        dapr.io/app-id: ingress
    toPorts:
    - ports:
      - port: "50002"
        protocol: TCP
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-ingress-to-ingress-shared"
spec:
  endpointSelector:
    matchLabels:
      dapr.io/app-id: ingress
  ingress:
  - fromEntities:
    - ingress
    toPorts:
    - ports:
      - port: "3500"
        protocol: TCP
Those two policies will restrict the traffic flow direction in our cluster. They ensure that only the cilium-gateway is allowed to connect to the ingress-dapr instance. Similarly, the ingress-dapr instance is the only entity permitted to communicate with the whoami-dapr instance. This setup guarantees that Dapr shared instances are the sole communicators with each other, enhancing security and control. Cilium applies network policies at layers 3 and 4, offering granular control over traffic before it even reaches the application layer. This reduces unnecessary traffic and enhances performance by blocking unauthorized access early in the network stack.
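To see the policies in action, you can attempt a connection from a pod that matches neither selector and watch Hubble for drops. A sketch (the Service name assumes the <appId>-dapr convention mentioned earlier):

kubectl run tester --rm -it --restart=Never --image=curlimages/curl -- \
  curl -m 2 http://whoami-dapr.default.svc.cluster.local:3500/v1.0/invoke/whoami.default/method/

hubble observe --namespace default --verdict DROPPED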
Bonus Point 1 - Local Redirect Policy
Cilium 1.16 introduces a beta feature known as Local Redirect Policy. This feature allows you to redirect traffic to local endpoints on the same node, rather than routing it cluster-wide. This can be particularly useful for optimizing traffic flows and ensuring that certain connections remain local.
apiVersion: "cilium.io/v2"
kind: CiliumLocalRedirectPolicy
metadata:
name: "redirect-ingress"
spec:
redirectFrontend:
serviceMatcher:
serviceName: ingress-dapr
namespace: default
redirectBackend:
localEndpointSelector:
matchLabels:
dapr.io/app-id: ingress
toPorts:
- port: "3500"
protocol: TCP
We will use this policy to ensure that traffic from the cilium-gateway to its Dapr shared instance remains local to the node. This approach keeps the connection on the same node, ensuring that mTLS is established and maintained before the traffic leaves the node. Note that since this is a beta feature, you need to enable it in Cilium. Refer to the Cilium documentation for guidance on enabling this feature.
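For a Helm-based installation, enabling it boils down to a single value (shown as a sketch; verify the exact flag against the documentation of your Cilium version):

helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set localRedirectPolicy=true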
Bonus Point 2 - mTLS and strong identities
Another advantage of integrating Cilium into your setup is the option to use Cilium's mTLS capabilities instead of Dapr's. Cilium provides mTLS through either WireGuard or IPsec tunnels between nodes, offering distinct benefits compared to Dapr's approach of using a bidirectional gRPC connection with mTLS.
- Kernel-Level Enforcement: Cilium's mTLS can be enforced directly in the Linux kernel, leveraging WireGuard or IPsec. This approach generally provides better performance compared to Dapr's user-space implementation of mTLS through gRPC.
- Broad Encryption Coverage: Cilium ensures that all service-to-service connections are encrypted, not just those that traverse Dapr. This offers a more comprehensive security model.
Once mTLS is enabled in Cilium itself, you can enforce it by attaching an authentication block to a CiliumNetworkPolicy:

authentication:
  mode: "required"

Cilium will then check for matching strong identities together with mTLS validation.
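To illustrate, the whoami policy from above could be extended like this (a sketch; note that Cilium's mutual authentication additionally requires SPIRE, enabled for example via authentication.mutual.spire.enabled=true at installation time):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "require-auth-ingress-to-whoami"
spec:
  endpointSelector:
    matchLabels:
      dapr.io/app-id: whoami
  ingress:
  - fromEndpoints:
    - matchLabels:
        dapr.io/app-id: ingress
    authentication:
      mode: "required"
    toPorts:
    - ports:
      - port: "50002"
        protocol: TCP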
Conclusion
In summary, the integration of Dapr with Cilium's sidecar-less service mesh opens up a new realm of possibilities for microservices architecture. By leveraging Dapr Shared, we can decouple the lifecycle of Dapr runtimes from the applications, allowing for more flexible deployments. The combination of Dapr's robust service invocation and Cilium's efficient, eBPF-powered networking and service mesh capabilities enables highly performant, secure, and resilient microservice interactions.
This demonstration showcases how these technologies can be harmonized to create a cutting-edge infrastructure that simplifies service management while enhancing scalability and reliability. Integrating Dapr and Cilium represents a significant step forward in building the next generation of distributed applications.