Building your own serverless functions with k3s and OpenFaaS on Raspberry Pi

6.8.2019 | 18 minutes of reading time

In recent years, new programming paradigms have emerged – moving from monolithic architectures towards microservices and now serverless functions. As a result, less code needs to be deployed, and updating an application becomes easier and faster because only a part of it has to be built and deployed. When serverless functions are mentioned, AWS Lambda quickly comes to mind, as it is the most prominent player in the serverless world. Other cloud providers have similar offerings, for example Google Cloud Functions or Azure Functions. Using them, however, puts us into a vendor lock-in. This isn't necessarily a problem if your entire infrastructure is hosted on one of those platforms. If you want to stay independent of cloud providers, though, an open-source solution that can be deployed on your own infrastructure is beneficial. Enter OpenFaaS, an open-source serverless framework that runs on Kubernetes or Docker Swarm.

In this blog post, we will focus on setting up a Raspberry Pi cluster, using Ansible for reproducible provisioning, k3s as a lightweight Kubernetes distribution, and OpenFaaS to run serverless functions.

A notable mention is the blog post Will it cluster? k3s on your Raspberry Pi by OpenFaaS founder Alex Ellis. It covers the manual installation of k3s and OpenFaaS on a Raspberry Pi cluster as well as the deployment of a microservice.

Building the Raspberry Pi cluster

For this cluster, we use four Raspberry Pi 3s, a USB power supply, an ethernet switch, a USB ethernet adapter, and some cables:

4x Raspberry Pi 3
4x SD card (min. 8GB, 32GB for master)
1x TeckNet® 50W 10A 6-Port USB PSU
1x D-Link GO SW-5E Ethernet switch
5x 0.25m Ethernet cables
4x Anker 0.3m USB cable
1x Ugreen USB to ethernet adapter
1x USB power cable for ethernet switch
(1x 16×2 LCD display)

Plug everything together and the final result looks like this:

The printable STL files for the case are available in the repository, in case you want to print it yourself.

One note on the ethernet connection: connect all Raspberries to the ethernet switch and plug the USB ethernet adapter into the Pi that will be our master node. In our setup, all Raspberries are in their own network and are accessed via the master node. The advantage of this is that only one device is connected to the outside world, which makes the cluster portable: the internal IPs don't change when we connect it to another network. The external IP of the master node will also be displayed on the 16×2 LCD display, but more on that later. The architecture will look like this:

Next, we have to prepare the SD cards. Flash Raspbian Buster Lite onto all SD cards and enable ssh by default by putting an empty file named `ssh` on the boot partition.

Provisioning the cluster with Ansible

In this part of the blog post, we will provision our cluster using the automation tool Ansible. This gives us a reproducible setup for the cluster in case we want to add a new node or reset it. Lots of useful information can be found on the Ansible homepage. For provisioning a Raspberry Pi cluster, Larry Smith Jr. has a repository with lots of helpful tasks.

Ansible allows you to write your infrastructure configuration and provisioning in YAML files. We can define one or multiple hosts in groups, set configurations for all hosts or a subset, and run tasks on them. Those tasks are then executed locally or via ssh on a remote host. We can attach conditionals to tasks so they only run when needed, and Ansible takes care of not executing a task twice if it was previously executed.

Inventory and configuration files

First, we define our inventory. An inventory holds a list of the hosts we want to connect to and a range of variables for all, one, or a subset of hosts. Read more about inventories in the Ansible documentation. We create an ansible.cfg file in a new folder, which will hold all our Ansible files:

[defaults]
inventory = ./inventory/hosts.inv ①
host_key_checking = false ②

This way, we tell Ansible to use the inventory at ./inventory/hosts.inv ① and to disable host key checking ②. The latter is needed because, when connecting through a jumphost, we cannot interactively approve the key.

Note: A jumphost is a computer on a network that is used to access machines in a separate network.

Next we build our hosts.inv file:

[k3s_rpi:children] ①
k3s_rpi_master
k3s_rpi_worker

[k3s_rpi_master]
k3s-rpi1 ansible_host= ②

[k3s_rpi_worker]
k3s-rpi2 ansible_host=192.168.50.201 ③
k3s-rpi3 ansible_host=192.168.50.202
k3s-rpi4 ansible_host=192.168.50.203

We first define a group called k3s_rpi which contains all nodes ①. The master node must have the external IP set under which we can reach it from our host machine ②. The workers get IPs in the 192.168.50.x range, which are used inside the cluster network ③. Because we cannot access our worker nodes directly, we have to configure a jumphost and set an ssh proxy command in inventory/group_vars/k3s_rpi_worker/all.yml. All variables in this folder will be used for hosts in the k3s_rpi_worker group:

ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q {{ rpi_username }}@{{ jumphost_ip }}"'
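
Expanded with concrete values, this makes ssh tunnel through the master node to reach a worker. A small Python sketch assembling the effective command line (the external IP here is a hypothetical example):

```python
# What Ansible effectively runs to reach a worker through the jumphost.
rpi_username = "pi"
jumphost_ip = "192.168.1.23"   # hypothetical external IP of the master node
worker_ip = "192.168.50.201"   # internal IP of a worker

# The inner ssh connects to the jumphost and forwards to the target (%h:%p).
proxy_command = f"ssh -W %h:%p -q {rpi_username}@{jumphost_ip}"
argv = ["ssh", "-o", f"ProxyCommand={proxy_command}", f"{rpi_username}@{worker_ip}"]

print(argv)
```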

The variables rpi_username and jumphost_ip are defined in inventory/group_vars/all/all.yml, the variables file valid for all hosts:

# Use python3 ①
ansible_python_interpreter: /usr/bin/python3

# Defines jumphost IP address to use as bastion host to reach isolated hosts
jumphost_name: "{{ groups['k3s_rpi_master'][0] }}"
jumphost_ip: "{{ hostvars[jumphost_name].ansible_host }}"

# Defines IPTABLES rules to define on jumphost ②
jumphost_iptables_rules:
  - chain: POSTROUTING
    jump: MASQUERADE
    out_interface: "{{ external_iface }}"
    source: "{{ dhcp_scope_subnet }}.0/24"
    state: present
    table: nat
  - chain: FORWARD
    ctstate: RELATED,ESTABLISHED
    in_interface: "{{ external_iface }}"
    jump: ACCEPT
    out_interface: "{{ cluster_iface }}"
    state: present
    table: filter
  - chain: FORWARD
    in_interface: "{{ cluster_iface }}"
    jump: ACCEPT
    out_interface: "{{ external_iface }}"
    state: present
    table: filter

# Default raspberry pi login ③
rpi_username: pi
rpi_password: raspberry

# Defines the ansible user to use when connecting to devices
ansible_user: "{{ rpi_username }}"

# Defines the ansible password to use when connecting to devices
ansible_password: "{{ rpi_password }}"

# Defines DHCP scope subnet mask ④

# Defines DHCP scope address ④
# Important: set the range to exactly the number of pi's in the cluster.
# It also has to match the hosts in the host.inv file!
dhcp_master: "{{ dhcp_scope_subnet }}.200"
dhcp_scope_start_range: "{{ dhcp_scope_subnet }}.201"
dhcp_scope_end_range: "{{ dhcp_scope_subnet }}.203"

# Defines dhcp scope subnet for isolated network ⑤
dhcp_scope_subnet: 192.168.50

master_node_ip: "{{ dhcp_master }}"

cluster_iface: eth0
external_iface: eth1

Here, all the major configuration of our cluster is defined, e.g. the Python interpreter ① that is used to install Python packages, iptables rules ②, the default Raspberry Pi login ③, and the network configuration for our internal network ④. The dhcp_scope_subnet ⑤ defines our subnet and therefore the IP addresses our Raspberry Pis will receive. Be careful: if you change this value, you have to change the hosts.inv file accordingly.
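To make the address scheme concrete, here is a small Python sketch that expands the templated values above the way Jinja2 would (values taken from this all.yml; the helper code is ours, not Ansible's):

```python
# Mirrors the Jinja2 expansion of the DHCP variables in all.yml.
dhcp_scope_subnet = "192.168.50"

dhcp_master = f"{dhcp_scope_subnet}.200"             # static IP of the master node
dhcp_scope_start_range = f"{dhcp_scope_subnet}.201"  # first worker IP
dhcp_scope_end_range = f"{dhcp_scope_subnet}.203"    # last worker IP

# The DHCP range must cover exactly the worker hosts listed in hosts.inv.
start = int(dhcp_scope_start_range.rsplit(".", 1)[1])
end = int(dhcp_scope_end_range.rsplit(".", 1)[1])
worker_ips = [f"{dhcp_scope_subnet}.{octet}" for octet in range(start, end + 1)]

print(dhcp_master)  # 192.168.50.200
print(worker_ips)   # ['192.168.50.201', '192.168.50.202', '192.168.50.203']
```

Three worker addresses for three worker Pis – if you add a node, both the range and hosts.inv have to grow together.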

Okay, now that we have our basic configuration set, we can start provisioning our tasty Pis. 🙂 We define our tasks in playbooks. Each playbook has a set of tasks for a specific part of the cluster setup, which we will explore in the next sections.

Master node and network setup

The playbook network.yml contains all tasks used to set up a DHCP server using dnsmasq, plus the iptables rules that are necessary for our workers to access the internet via the master node. We also configure the dhcpcd daemon to use a static IP on the eth0 interface, which is connected to the cluster network.

- hosts: k3s_rpi_master
  remote_user: pi
  become: True
  gather_facts: True

  tasks:
    - name: Install dnsmasq and iptables persistence ①
      apt:
        name: "{{ packages }}"
      vars:
        packages:
        - dnsmasq
        - iptables-persistent
        - netfilter-persistent

    - name: Copy dnsmasq config ②
      template:
        src: "dnsmasq.conf.j2"
        dest: "/etc/dnsmasq.conf"
        owner: "root"
        group: "root"
        mode: 0644

    - name: Copy dhcpcd config ②
      template:
        src: "dhcpcd.conf.j2"
        dest: "/etc/dhcpcd.conf"
        owner: "root"
        group: "root"
        mode: 0644

    - name: Restart dnsmasq
      service:
        name: dnsmasq
        state: restarted
      become: true

    - name: Configuring IPTables ③
      iptables:
        table: "{{ item['table']|default(omit) }}"
        chain: "{{ item['chain']|default(omit) }}"
        ctstate: "{{ item['ctstate']|default(omit) }}"
        source: "{{ item['source']|default(omit) }}"
        in_interface: "{{ item['in_interface']|default(omit) }}"
        out_interface: "{{ item['out_interface']|default(omit) }}"
        jump: "{{ item['jump']|default(omit) }}"
        state: "{{ item['state']|default(omit) }}"
      become: true
      register: _iptables_configured
      tags:
        - rpi-iptables
      with_items: "{{ jumphost_iptables_rules }}"

    - name: Save IPTables ③
      command: service netfilter-persistent save
      when: _iptables_configured['changed']

  post_tasks:
    - name: Reboot after cluster setup ④
      reboot:

We first install the necessary packages ①, fill the configuration templates and copy them to our master ②, configure and save the IPTables rules so our workers can access the internet via the master node ③, and finally, reboot the master to apply all configurations ④.

The configuration for dnsmasq is short and easy to understand. We just tell it which interface to run the DHCP server on (eth0) and the IP range for the clients. More options are described in the dnsmasq documentation.

interface={{ cluster_iface }}
dhcp-range={{ dhcp_scope_start_range }},{{ dhcp_scope_end_range }},12h
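Rendered with the values from all.yml, this template produces a concrete dnsmasq.conf. As a rough illustration of the substitution (a plain string replace standing in for Jinja2, not Ansible's actual templating engine):

```python
# The dnsmasq.conf.j2 template as a plain string.
template = (
    "interface={{ cluster_iface }}\n"
    "dhcp-range={{ dhcp_scope_start_range }},{{ dhcp_scope_end_range }},12h\n"
)

# Values taken from inventory/group_vars/all/all.yml.
variables = {
    "cluster_iface": "eth0",
    "dhcp_scope_start_range": "192.168.50.201",
    "dhcp_scope_end_range": "192.168.50.203",
}

rendered = template
for name, value in variables.items():
    rendered = rendered.replace("{{ " + name + " }}", value)

print(rendered)
# interface=eth0
# dhcp-range=192.168.50.201,192.168.50.203,12h
```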

After executing this playbook with ansible-playbook playbooks/network.yml, all our nodes should have an internal IP in the 192.168.50.x range. Now we can start bootstrapping all nodes, installing necessary packages, and so on.


Bootstrapping the nodes

In this section, we bootstrap our cluster by installing necessary packages, securing access to the nodes, setting hostnames, and updating the operating system, as well as enabling unattended upgrades.

First, we have to create a new ssh-keypair on the master node to be able to shell into the workers without a password:

- hosts: k3s_rpi_master
  remote_user: "{{ rpi_username }}"
  gather_facts: True

  tasks:
    - name: Set authorized key taken from file ①
      authorized_key:
        user: pi
        state: present
        key: "{{ lookup('file', '/home/amu/.ssh/id_rsa.pub') }}"

    - name: Generate RSA host key ②
      command: "ssh-keygen -q -t rsa -f /home/{{ rpi_username }}/.ssh/id_rsa -C \"\" -N \"\""
      args:
        creates: /home/{{ rpi_username }}/.ssh/id_rsa

    - name: Get public key ③
      shell: "cat /home/{{ rpi_username }}/.ssh/id_rsa.pub"
      register: master_ssh_public_key

The first step adds the public key of our host machine to the master node ① so we can authenticate via ssh. If you haven't generated one yet, you can do so via ssh-keygen. Next, we create a keypair on the master node ② if it doesn't exist yet and store the public key in a host variable called master_ssh_public_key ③. Host variables are only directly accessible on the host they are registered on, but we can fetch them and add them to our workers:

- hosts: k3s_rpi_worker
  remote_user: pi
  become: True
  gather_facts: True

  tasks:
    - set_fact: ①
        k3s_master_host: "{{ groups['k3s_rpi_master'][0] }}"

    - set_fact: ②
        master_ssh_public_key: "{{ hostvars[k3s_master_host]['master_ssh_public_key'] }}"

    - name: Set authorized key taken from master ③
      authorized_key:
        user: pi
        state: present
        key: "{{ master_ssh_public_key.stdout }}"

First, we define a variable k3s_master_host which contains the hostname of our master node, k3s-rpi1 ①. Next, we get the public key from the host variable we previously registered and store it in a variable called master_ssh_public_key ②. Now we can access the stdout of the cat command from the previous part, which contains the public key, and use the authorized_key module to add it to the authorized keys on our worker nodes ③. This is also the point where host key verification would have failed when Ansible tries to connect to the workers, as we cannot interactively approve it.
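Conceptually, the authorized_key module performs an idempotent append: add the key only if it is missing, and report whether anything changed. A minimal Python sketch of that behavior (our own stand-in, not the module's implementation):

```python
from pathlib import Path

def add_authorized_key(authorized_keys: Path, public_key: str) -> bool:
    """Append public_key to authorized_keys unless it is already present.

    Returns True if the file was changed (mirrors Ansible's 'changed' state).
    """
    public_key = public_key.strip()
    existing = authorized_keys.read_text().splitlines() if authorized_keys.exists() else []
    if public_key in existing:
        return False  # already authorized -> nothing to do
    with authorized_keys.open("a") as f:
        f.write(public_key + "\n")
    return True

keys_file = Path("/tmp/authorized_keys_demo")
keys_file.unlink(missing_ok=True)  # start fresh for the demo
key = "ssh-rsa AAAAB3Nza... pi@k3s-rpi1"  # the master's public key (shortened)

print(add_authorized_key(keys_file, key))  # True  -> key was added
print(add_authorized_key(keys_file, key))  # False -> second run changes nothing
```

This is exactly why rerunning the playbook is safe: the second run reports "ok" instead of "changed".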

For unattended upgrades, we use the role jnv.unattended-upgrades, which we install via ansible-galaxy install jnv.unattended-upgrades.

- hosts: all
  remote_user: "{{ rpi_username }}"
  become: True
  gather_facts: True

  roles:
  - role: jnv.unattended-upgrades ①
    unattended_origins_patterns: ②
      - 'origin=Raspbian,codename=${distro_codename},label=Raspbian'

We import the role ① and configure the pattern for the unattended updates service ②.

The following tasks are used for general configuration of the nodes and are executed before the role for enabling unattended upgrades:

    - name: Change pi password ①
      user:
        name: pi
        password: "{{ lookup('password', '{{ playbook_dir }}/credentials/{{ inventory_hostname }}/pi.pass length=32 chars=ascii_letters,digits') }}"

    - name: Put pi into sudo group
      user:
        name: pi
        append: yes
        groups: sudo

    - name: Remove excessive privilege from pi ②
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: '^#?pi'
        line: '#pi ALL=(ALL) NOPASSWD:ALL'
        validate: 'visudo -cf %s'

    - name: Set hostname ③
      hostname:
        name: "{{ inventory_hostname }}"

    - name: Set timezone
      copy:
        content: "Europe/Berlin\n"
        dest: /etc/timezone
        owner: root
        group: root
        mode: 0644
        backup: yes

    - name: Add IP address of all hosts to all hosts
      template:
        src: "hosts.j2"
        dest: "/etc/hosts"
        owner: "root"
        group: "root"
        mode: 0644

    - name: Disable password authentication ④
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PasswordAuthentication'
        line: 'PasswordAuthentication no'
        state: present
        backup: yes

    - name: Expand filesystem ⑤
      shell: "raspi-config --expand-rootfs >> .ansible/sd-expanded"
      args:
        creates: .ansible/sd-expanded

    - name: Update system ⑥
      apt:
        cache_valid_time: 3600
        update_cache: yes
        upgrade: safe

    - name: Install some base packages ⑦
      apt:
        name: "{{ packages }}"
      vars:
        packages:
        - vim
        - aptitude
        - git

We start by changing the password of the pi user ①, remove some excessive privileges from the user ②, set the hostname ③, timezone, and hosts file, disable password authentication ④, expand the file system ⑤, update the system ⑥, and install some base packages ⑦. The changed passwords are stored under playbooks/credentials.
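The password lookup used in step ① generates a random password once and caches it in the credentials file, so reruns keep the same password. Roughly, it behaves like this Python sketch (a simplified stand-in for Ansible's lookup plugin):

```python
import secrets
import string
from pathlib import Path

def lookup_password(cache_file: Path, length: int = 32) -> str:
    """Return the cached password, generating and storing it on first use."""
    if cache_file.exists():
        return cache_file.read_text().strip()
    alphabet = string.ascii_letters + string.digits  # chars=ascii_letters,digits
    password = "".join(secrets.choice(alphabet) for _ in range(length))
    cache_file.parent.mkdir(parents=True, exist_ok=True)
    cache_file.write_text(password)
    return password

# Mirrors {{ playbook_dir }}/credentials/{{ inventory_hostname }}/pi.pass
pass_file = Path("/tmp/credentials/k3s-rpi1/pi.pass")
pass_file.unlink(missing_ok=True)  # start fresh for the demo

first = lookup_password(pass_file)
second = lookup_password(pass_file)
print(len(first))       # 32
print(first == second)  # True -> reruns reuse the stored password
```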

Lastly, we restart all nodes. Due to the network topology of the cluster, we restart the workers first and the master afterwards; otherwise the Ansible playbook would fail because it cannot reach the workers while the master is rebooting:

- hosts: k3s_rpi_worker
  remote_user: "{{ rpi_username }}"
  gather_facts: True
  become: True

  tasks:
    - name: Reboot after bootstrap
      reboot:

- hosts: k3s_rpi_master
  remote_user: "{{ rpi_username }}"
  gather_facts: True
  become: True

  tasks:
    - name: Reboot after bootstrap
      reboot:

k3s and OpenFaaS

Our nodes are now set up and bootstrapped, so we can install the k3s Kubernetes distribution, the Kubernetes dashboard, and OpenFaaS.

On the master node, we install the k3s server and bind it to the master's address so that we can access it with kubectl from our local machine.

- hosts: k3s_rpi_master
  remote_user: pi
  become: True
  gather_facts: True

  tasks:
    - name: Install / upgrade k3s on master node ①
      shell: "curl -sfL | INSTALL_K3S_EXEC=\"server --bind-address\" sh -"

    - name: Get token from master ②
      shell: "cat /var/lib/rancher/k3s/server/node-token"
      register: k3s_node_token

Installing is done with a simple curl command and takes a minute or so ①. Now we have a running single-node Kubernetes cluster on our master. 🙂 But we want to add our worker nodes too, so we save the node token needed for joining the cluster in a variable ②. Next, we install the k3s agent on the worker nodes and join them to the cluster:

- hosts: k3s_rpi_worker
  remote_user: pi
  become: True
  gather_facts: True

  tasks:
    - set_fact: ①
        k3s_master_host: "{{ groups['k3s_rpi_master'][0] }}"

    - set_fact: ②
        k3s_master_token: "{{ hostvars[k3s_master_host]['k3s_node_token'].stdout }}"

    - name: Install / upgrade k3s on worker nodes and connect to master ③
      shell: "curl -sfL | K3S_URL=https://{{ master_node_ip }}:6443 K3S_TOKEN={{ k3s_master_token }} sh -"

We first get the hostname of our master node ① and retrieve the token from it ②. Installing and joining is also done with a single curl command ③. We pass the master IP and the token to the install script, and it takes care of installing the agent and joining the cluster. After a few minutes, we should see the nodes popping up in sudo k3s kubectl get nodes on the master node. We have our Kubernetes cluster running on our Raspberry Pis! 🙂

Now we want to deploy the Kubernetes dashboard to our cluster:

- hosts: k3s_rpi_master
  remote_user: pi
  become: True
  gather_facts: True

  tasks:
    - name: Make sure destination dir exists ①
      become: False
      file:
        path: /home/{{ rpi_username }}/kubedeployments
        state: directory

    - name: Copy dashboard admin file ②
      become: False
      copy:
        src: files/dashboard-admin.yaml
        dest: /home/{{ rpi_username }}/kubedeployments/dashboard-admin.yaml
        owner: "{{ rpi_username }}"
        group: "{{ rpi_username }}"
        mode: '0644'

    - name: Apply dashboard admin ③
      shell: "k3s kubectl apply -f /home/{{ rpi_username }}/kubedeployments/dashboard-admin.yaml"

    - name: Install dashboard ④
      shell: "k3s kubectl apply -f"

    - name: Get dashboard token ⑤
      shell: "k3s kubectl -n kube-system describe secret $(k3s kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token: | cut -d':' -f2 | xargs"
      register: dashboard_token

    - debug: ⑥
        msg: "{{ dashboard_token.stdout }}"

    - name: Save dashboard token to credentials/dashboard_token ⑦
      become: False
      local_action: copy content={{ dashboard_token.stdout }} dest={{ playbook_dir }}/credentials/dashboard_token

First, we create a folder kubedeployments on the master node ①. We copy the dashboard-admin.yaml file from playbooks/files/, which is needed to access the dashboard ②. Then we can apply this file ③ as well as the dashboard resource ④. To access the dashboard, we have to get the token: we grep the secret from the cluster ⑤ and print it in Ansible ⑥. For later access, we also store it in the playbooks/credentials/dashboard_token file on the local machine ⑦.
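The shell pipeline in step ⑤ boils down to "find the token: line and trim it". The same extraction in Python, run against a shortened sample of kubectl describe secret output (the token value here is made up):

```python
# Shortened sample output of: kubectl -n kube-system describe secret admin-user-...
sample = """Name:         admin-user-token-abc12
Namespace:    kube-system
Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     526 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIs.fake.token
"""

def extract_token(describe_output: str) -> str:
    """Equivalent of: grep token: | cut -d':' -f2 | xargs"""
    for line in describe_output.splitlines():
        if line.startswith("token:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no token line found")

print(extract_token(sample))  # eyJhbGciOiJSUzI1NiIs.fake.token
```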

To connect to the Kubernetes cluster from our local machine, we copy the generated Kubernetes config file from the master to our local machine ① and fix up the IP address of the master node ②. If we copy this file to ~/.kube/config, we can access the cluster with kubectl from our local machine. Run kubectl proxy and open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. We should be able to log in with the token obtained previously and see our Kubernetes cluster.

    - name: Download kubernetes config ①
      fetch:
        src: /etc/rancher/k3s/k3s.yaml
        dest: "{{ playbook_dir }}/credentials/k3s.yaml"
        flat: yes

    - name: Set correct IP in downloaded kubernetes config ②
      become: False
      local_action:
        module: lineinfile
        dest: "{{ playbook_dir }}/credentials/k3s.yaml"
        regexp: "^    server"
        line: "    server: https://{{ jumphost_ip }}:6443"
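
The lineinfile task replaces the server: line in the fetched kubeconfig, which normally points at 127.0.0.1. The same edit as a Python sketch on a trimmed sample (the IP is a hypothetical example for the master's external address):

```python
import re

# Trimmed sample of the k3s.yaml generated on the master node.
k3s_yaml = """apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
"""

jumphost_ip = "192.168.1.23"  # hypothetical external IP of the master node

# Replace the whole indented "server:" line, like the lineinfile regexp '^    server'.
fixed = re.sub(
    r"^    server.*$",
    f"    server: https://{jumphost_ip}:6443",
    k3s_yaml,
    flags=re.MULTILINE,
)

print(fixed)
```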

To install OpenFaaS, we have to clone the repository containing the charts and apply two resource files ①. The first one creates the namespaces in our cluster ②, the second one installs all services for the armhf architecture in our cluster ③.

    - name: Clone OpenFaaS kubernetes charts ①
      git:
        repo:
        dest: /home/{{ rpi_username }}/faas-netes

    - name: Install OpenFaaS
      shell: |
        k3s kubectl apply -f /home/{{ rpi_username }}/faas-netes/namespaces.yml ②
        k3s kubectl apply -f /home/{{ rpi_username }}/faas-netes/yaml_armhf ③

Opening http://master_node_ip:31112 should show the OpenFaaS dashboard:

Now we can deploy our first function! OpenFaaS has an integrated function store, meaning that we can deploy pre-built functions to our cluster. Just click on Deploy New Function, select nodeinfo, and hit deploy. This function returns some information about one of our nodes:

Now we can also start developing our own functions. We'll talk about this in a later blog post; if you are curious, you can read the official OpenFaaS documentation.

LCD display monitoring

We also want to add an LCD display showing some information about our cluster, for example the external IP, how many k3s nodes are available, and how many functions are deployed in OpenFaaS. For this, we connect a 16×2 LCD with an i2c interface to our Raspberry Pi. It has four pins: we connect 5V to pin 2 or 4, GND to pin 6, and the i2c data lines SDA to pin 3 and SCL to pin 5. A schematic of the pin layout of the Pi can be found online.

Now we have to enable i2c on our master node to be able to communicate with the display. We create a new playbook file lcd.yml:

- hosts: k3s_rpi_master
  remote_user: "{{ rpi_username }}"
  gather_facts: True
  become: True

  tasks:
    - name: Check if i2c is enabled ①
      shell: "raspi-config nonint get_i2c"
      register: i2c_disabled

    - name: Enable i2c ②
      shell: "raspi-config nonint do_i2c 0"
      when: i2c_disabled.stdout == "1"

    - name: Reboot after enabling i2c ③
      when: i2c_disabled.stdout == "1"
      reboot:

We first check if i2c is already enabled ①. If not, we enable it ② and restart our Pi ③. To control the display, we use a small Python script, so we need some dependencies installed to access i2c and the k3s cluster:

    - name: Install python pip, smbus and i2c-tools
      apt:
        name: "{{ packages }}"
      vars:
        packages:
        - python3-pip
        - python3-smbus
        - i2c-tools

    - name: Install kubernetes python package
      pip:
        name: kubernetes
        executable: pip3

In order to have access to our cluster, we have to set up the Kubernetes config. This is essentially the same as before, but this time we copy the file locally on the master:

    - name: Copy kube config and fix ip
      shell: "cp /etc/rancher/k3s/k3s.yaml /home/{{ rpi_username }}/.kube/config && chown {{ rpi_username }} /home/{{ rpi_username }}/.kube/config && sed -i 's/[0-9]\\{1,3\\}\\.[0-9]\\{1,3\\}\\.[0-9]\\{1,3\\}\\.[0-9]\\{1,3\\}/{{ master_node_ip }}/' /home/{{ rpi_username }}/.kube/config"

    - name: Create k3s_lcd directory
      become: False
      file:
        path: /home/{{ rpi_username }}/k3s_status_lcd
        state: directory

Lastly, we copy the script files ①, install the systemd service ② and a shutdown script ③, and start and enable the service ④:

    - name: Copy k3s_status_lcd files ①
      become: False
      copy:
        src: "{{ item }}"
        dest: /home/{{ rpi_username }}/k3s_status_lcd
        owner: "{{ rpi_username }}"
        group: "{{ rpi_username }}"
        mode: '0644'
      with_fileglob:
        - ../../k3s_status_lcd/*

    - name: Install k3s-status service ②
      template:
        src: "../../k3s_status_lcd/k3s-status.service.j2"
        dest: "/etc/systemd/system/k3s-status.service"
        owner: "root"
        group: "root"
        mode: 0644

    - name: Install k3s-shutdown script ③
      template:
        src: "../../k3s_status_lcd/"
        dest: "/lib/systemd/system-shutdown/"
        owner: "root"
        group: "root"
        mode: 0744

    - name: Start k3s-status service ④
      systemd:
        state: restarted
        enabled: yes
        daemon_reload: yes
        name: k3s-status

We also have a shutdown script that is executed right before the Raspberry Pi turns off or reboots ③. It is placed in the /lib/systemd/system-shutdown/ folder, and systemd runs it right before shutting down. This way, we know when it's safe to unplug the cluster. 🙂
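The heart of the status script is squeezing the cluster state into two 16-character lines. A simplified sketch of that formatting (the real script in the repository also talks to i2c and the Kubernetes API; the layout and values here are illustrative):

```python
def lcd_lines(ip: str, nodes_ready: int, nodes_total: int, functions: int) -> list:
    """Build the two lines for a 16x2 display, truncated/padded to 16 chars."""
    lines = [
        f"IP {ip}",
        f"N {nodes_ready}/{nodes_total} FN {functions}",
    ]
    return [line[:16].ljust(16) for line in lines]

# Example values as they might be read from k3s and the OpenFaaS gateway.
top, bottom = lcd_lines("192.168.1.23", 4, 4, 1)
print(repr(top))     # 'IP 192.168.1.23 '
print(repr(bottom))  # 'N 4/4 FN 1      '
```

Truncating and padding to exactly 16 characters matters: leftover characters from a previous, longer message would otherwise stay visible on the display.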

The source files for the status LCD can be found in the repository.

And this is how it looks when the Raspberry Pi boots:


In this blog post, we built the hardware for a cluster of four Raspberry Pis and provisioned it using Ansible, with k3s as a lightweight Kubernetes distribution and OpenFaaS to run serverless functions. We also added a status LCD to show the current state of the cluster and the functions running on it. If you don't want to execute all Ansible playbooks sequentially, you can run the deploy.yml playbook, which executes all previously mentioned playbooks in order. After waiting a few minutes, we have a fully configured Kubernetes cluster running OpenFaaS on our Raspberry Pis!

In the next post, we’ll dive deeper into OpenFaaS and how to develop and deploy custom functions on it.

Links for further information:
k3s – Lightweight Kubernetes
OpenFaaS – Serverless Functions, Made Simple
Will it cluster? k3s on your Raspberry Pi
Ansible – Raspberry Pi Kubernetes Cluster
GOTO 2018 • Serverless Beyond the Hype • Alex Ellis
