
Scaling Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Spring Cloud Netflix and Docker Compose

29.5.2017 | 25 minutes of reading time

Provision a Docker Windows Container with Ansible? No problem! But wasn't Docker meant for more than one container?! Don't we want to have many of these tiny buckets, scaling them as we need?! And what about this Spring Cloud Netflix thingy? Isn't this the next logical step for our Spring Boot apps?

Running Spring Boot Apps on Windows – Blog series

Part 1: Running Spring Boot Apps on Windows with Ansible
Part 2: Running Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Packer, Vagrant & Powershell
Part 3: Scaling Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Spring Cloud Netflix and Docker Compose
Part 4: Taming the Hybrid Swarm: Initializing a Mixed OS Docker Swarm Cluster running Windows & Linux Native Containers with Vagrant & Ansible

Docker was built for more – and so was Spring Boot

We achieved some really cool goals throughout the last posts – like provisioning Windows with Ansible, using native Docker Windows Containers to run our Spring Boot apps on, getting a completely automated build of our Vagrant Windows box with Packer, and doing health checks without a working localhost loopback in place. And it really had its obstacles to get there.

However, until now we've only run one Docker Windows Container. This is, after all, not the final aim of Docker, which enables us to run lots of containers on a single machine or virtualized host. The same is true for Spring Boot, which is a perfect match for building a microservice architecture when it comes to Java. So, as promised in the last post, we should take a step further! We'll have a look at how to provision some more Docker Windows Containers with Ansible!

But guess what – when it comes to scaling Docker Windows Containers, the already sparse documentation becomes nearly non-existent! There's a small hint inside the Windows Container Networking documentation about Docker Compose and Service Discovery, the first linking to a TechNet blog post about how to scale out your multi-service container application on Windows. That's all – and I found myself really lost, because not only was the information so fragmentary, the described steps also didn't work for me out of the box… But hey, that's where this blog post will hopefully come to the rescue 😉 Let's get this right!

Before we start to use Docker Compose on Windows, we'll need a more complex example app. This means more apps than one! You've surely heard of Spring Cloud! It's the Spring folks' answer to all the obstacles you'll have to overcome if you're going to build distributed systems.

Example Apps with Spring Boot & Spring Cloud Netflix

There are some posts and articles around telling us about Spring Cloud or Spring Cloud Netflix – just give it a Google search. But to me they seem to get stuck at an explanatory level. They describe all these nice technologies – and that's it. Therefore I don't want to introduce all the components of Spring Cloud – all those articles (or simply the docs on projects.spring.io/spring-cloud) are a better source for that.

The documentation about Spring Cloud Netflix also seems to be quite sparse at first – a similarity to Docker Compose on Windows 🙂 But don't get stuck like me: use the latest Spring Cloud Release Train version to find the current docs, like in this link: cloud.spring.io/spring-cloud-static/Dalston.RELEASE. The latest Release Train right now is Dalston (the names are just London Tube stations in alphabetical order). And if you're looking for the complete source of configuration parameters – it's no ancient wisdom (as some Stack Overflow Q&As could be interpreted). The only thing you have to do is scroll down to the Appendix: Compendium of Configuration Properties.

I wanted to focus only on those few projects we'll need to show the collaboration of applications in a kind of microservice deployment. At the same time, these projects represent a well-known and working setup from our customers' projects, which is a good basis for you to start from. With that in mind I created an example project that contains several Spring Boot apps. As usual, things should be 100% comprehensible on your machine. Let's have a look at the core applications inside this project:

logo sources: Spring Cloud icon, Spring Boot logo, Netflix OSS logo

What do we have here? First there's an edge service, which is the central entry point to our services. We use Zuul from the Spring Cloud Netflix stack here. It acts as a proxy that provides us with dynamic routes to our services (there are also many more features).

Dynamic routing is a really cool feature! But what does it mean? On a higher level, we don't have to tell our proxy manually about all service routes. It's the other way round – all our services register their specific routes themselves. As all the Spring Cloud components heavily rely on each other, Zuul uses Eureka in this scenario – another Spring Cloud Netflix tool. Eureka acts as a central service registry, with which all our services register. Zuul then obtains all the registered instances from Eureka, which we implemented in the service registry project. Having all the example applications fired up locally, you're able to see all the registered routes if you point your browser to Zuul at http://localhost:8080/routes.

I found dynamic routing to be a must-have feature of Zuul. But getting to know how to configure it properly is not the easiest path you can choose. Normally routes in Zuul are defined explicitly in the application.yml. But that isn't what we're able to use in our scenario with Docker on Windows. If you want to dive deeper into how to configure Zuul together with Eureka so that dynamic routing kicks in, have a look at the zuul-edgeservice's and eureka-serviceregistry's application.ymls.
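Just to give a rough idea of what this looks like, here's a minimal sketch of a Zuul configuration that relies on Eureka for its routes. The property names are standard Spring Cloud Netflix ones, but the host name in defaultZone is only an assumption based on this post's service names – the example project's actual application.ymls remain the reference. With @EnableZuulProxy and a Eureka client on the classpath, Zuul automatically creates a route for every service registered in Eureka, so no explicit zuul.routes entries are needed:

spring:
  application:
    name: zuul-edgeservice

server:
  port: 8080

eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-serviceregistry:8761/eureka/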

Besides those services that serve more technical tasks, we also have two functional services available. The weatherservice shows more enterprisey habits. It uses the cxf-spring-boot-starter to easily provide a weather forecast web service. I borrowed it from this blog series. It intentionally provides a SOAP web service to show that the power of Spring Cloud is not restricted to new hype technologies and can easily be adapted for more old-school use cases as well. You'd be surprised how many of those cases a consultant encounters in the real world…

But enough old school! The weatherservice also uses a backend called weatherbackend with some incredibly complex ( 🙂 ) logic to provide some really badly needed information about the weather. Coming from the Spring world, a first attempt to call the weatherbackend from inside the weatherservice would maybe involve the well-known Spring RestTemplate or an easier-to-read framework like rest-assured. But Spring Cloud Netflix has something for us here too: the declarative REST client Feign. And because Feign adds discovery awareness, it'll look up the weatherbackend instances with the help of our Eureka service registry. So there's no need to manually configure a host and port here, which I think is really cool!
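To make that tangible, here's a small sketch of what such a discovery-aware Feign client can look like. The interface name and the String payloads are made up for illustration (the real client in the weatherservice project works with proper domain objects); the /general/outlook path and the POST method are taken from the weatherbackend logs shown later in this post:

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// "weatherbackend" is the logical service name resolved via Eureka –
// no host or port is configured anywhere in this client.
@FeignClient("weatherbackend")
public interface WeatherBackendClient {

    @RequestMapping(method = RequestMethod.POST, value = "/general/outlook")
    String generateGeneralOutlook(@RequestBody String weatherJson);
}

The calling application only needs @EnableFeignClients on a configuration class and can then have this interface injected like any other Spring bean.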

Apart from that, there are some more Spring Cloud frameworks running behind the scenes – e.g. Ribbon and Hystrix. Ribbon is used nearly every time services have to be called. It adds nice features like caching and client-side load balancing, and also provides Zuul and Feign with the ability to use a dynamic server list (ribbon-eureka) for their HTTP calls. Hystrix is also used for nearly every HTTP call – it adds latency and fault tolerance by stopping cascading failures, providing fallbacks and isolation through circuit breakers. My colleagues Felix Braun and Benjamin Wilms gave some great talks and wrote blog posts about their experiences with Hystrix in real-world projects (Hystrix introduction and Hystrix & dynamic configuration with Archaius, sorry, German only).

And finally there's also a simple client app that is able to call our microservices through the edge service. But more on that later.

If you want to get your hands on these example applications, it's always a good idea to eliminate complexity. Consider starting simple and firing up all the Spring Boot apps inside your IDE (e.g. with IntelliJ's cool new Run Dashboard for Spring Boot). If that's working fine, go ahead and bring Docker and finally Ansible into the game. I experienced strange behavior on all levels – and it's always good to know that the simple things really work. And as another note: if you're on a Mac, even a simple localhost lookup can take too long, which will cause your Spring Cloud apps not to register properly with Eureka and lead to other strange errors!
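If you prefer a plain terminal over the IDE, a local run could look roughly like this – the jar paths follow the Maven build of the example project as referenced later in the Ansible playbook; start the service registry first and use a separate terminal per app:

java -jar eureka-serviceregistry/target/eureka-serviceregistry-0.0.1-SNAPSHOT.jar
java -jar zuul-edgeservice/target/zuul-edgeservice-0.0.1-SNAPSHOT.jar
java -jar weatherbackend/target/weatherbackend-0.0.1-SNAPSHOT.jar
java -jar weatherservice/target/weatherservice-0.0.1-SNAPSHOT.jar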

Now that we have a more complex application in place, let's have a look at how to use Docker Compose on Windows.

Docker Compose – Scaling Docker Windows Containers

Apart from the sparse documentation, I was again quite impressed by the partnership of Docker Inc. and Microsoft: Docker Compose now also natively supports managing Docker Windows Containers! And as this is the simplest way to start with more than one Docker container, I chose it as the basis for this blog post. Future posts about Docker Swarm and Kubernetes cannot be ruled out. But it's always a good idea to start with some basics and then dive deeper into a topic. And as "Compose is a tool for defining and running multi-container Docker applications" – and really easy to use as well – it seems the perfect starting point for us.

With Docker Compose, everything starts with a docker-compose.yml file. It's quite easy to read and looks like this:

version: '3.1'

services:

  weatherbackend:
    build: ./weatherbackend
    ports:
      - "8090"
    tty:
      true
    restart:
      unless-stopped

  weatherservice:
    build: ./weatherservice
    ports:
      - "8095:8095"
    tty:
      true
    restart:
      unless-stopped

networks:
  default:
    external:
      name: "nat"

Mostly nothing new for those who have used Compose before. The file starts with a version header that defines the Compose file format version used. With 3.1 we use a fairly current version here, which I would recommend for an up-to-date Docker installation like the one our preparing Ansible playbook prepare-docker-windows.yml arranges. As this post builds upon the last blog post Running Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Packer, Vagrant & Powershell and the findings there, feel free to give it a few minutes if you haven't read it before.

Docker Compose introduces the concept of Services (don't mix them up with Services in Docker Swarm), which is one abstraction level higher than a common Docker container. At the beginning you could still put it on the same level as a container: it also has a build directory it reads a Dockerfile from, or it directly defines the Docker image it should pull. A Service can also have a port binding with the keyword ports and is able to pretend to be a real tty (pseudo-tty). We also define the restart policy unless-stopped, so that all our containers will be fired up again after a reboot. If you use the port mapping shown in the weatherservice service above, you'll get a 1:1 mapping between Service and container – because this host port can only be mapped once. If you don't use this port binding to the host, you're able to scale your Service later on.

The last bit of the docker-compose.yml is somewhat Windows-specific – in the sense that you wouldn't define this piece in a simple beginner's Compose file. But we need it here to connect our Docker network to the standard Windows nat network – which you can easily inspect with a docker network inspect nat. And that's all.

A simple docker-compose up will fire up all your Services – and you're no longer stuck with the naive approach I started with after my first Docker experiences: building, starting, stopping and removing multiple Docker containers one by one – which is inevitable if you don't have something like Docker Compose. And scaling your Docker Compose Services is also really easy. A simple docker-compose scale weatherbackend=3 will fire up two additional weatherbackend Services!

Be aware of the service discovery issue that could frustrate your Docker Compose experience: remember to place the temporary workaround (already fixed for Windows 10 in this update) inside your Dockerfiles, as already mentioned in the previous blog post:

# A 'Temporary workaround for Windows DNS client weirdness' randomly found at https://github.com/docker/labs/blob/master/windows/windows-containers/MultiContainerApp.md
# Without this, DNS-based service discovery between the containers doesn't work reliably
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
RUN set-itemproperty -path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name ServerPriorityTimeLimit -Value 0 -Type DWord

Docker Compose and Ansible – who'll take the lead?

Bringing Docker Compose and Ansible together, we find ourselves in an interesting situation: a docker-compose.yml is able to hold quite similar information to an ansible-playbook.yml. That leads us to the question: who holds the core information about our app if we want to use both? We have to make an architectural decision.

I decided to put Ansible in the lead here, for several reasons. The first: inside our projects we use Ansible as the core Continuous Delivery glue for all our apps. That means Ansible isn't limited to this use case here, and Docker Compose wasn't designed to handle all the other cases. Additionally, Ansible might someday outlive Docker Compose if we use another technology in the future – like Docker Swarm or Kubernetes. And the last thing: the following approach uses Ansible as the central source of truth, but at the same time enables us to use Docker Compose at the machine level as we're used to – especially to scale our Services on demand.

Joining the Power of Docker Compose and Spring Cloud Netflix on Windows with Ansible

Before going into the details, let's briefly recap this blog post's setup with the help of a small architectural sketch. In the end, we want all our applications leveraging Spring Cloud Netflix to be run on Docker Windows Containers by Ansible:

logo sources: Windows icon, Docker logo, Ansible logo, Packer logo, Vagrant logo, VirtualBox logo, Spring Cloud icon, Spring Boot logo, Netflix OSS logo

Now we should have everything in place to give our Ansible playbook a first run. We'll build upon the knowledge from the last blog post, where we used Packer.io, Vagrant and Ansible to prepare a Windows Server 2016 box to run Spring Boot apps inside Docker Windows Containers. As this article should be 100% comprehensible, be sure you went through steps 0 to 2 from this GitHub repository as described in the previous blog post! You should also have the example applications' repository cloned and built with a mvn clean package.

Having a running Docker installation on Windows Server 2016 (or Windows 10) and the example applications in place, we're able to dig deeper into the playbooks inside the project step3-multiple-spring-boot-apps-docker-compose. Let's start with the central ansible-windows-docker-springboot.yml:

---
- hosts: "{{host}}"
  vars:
    base_path: "C:\\springboot"
    services:
      - name: zuul-edgeservice
        path_to_jar: "../../cxf-spring-cloud-netflix-docker/zuul-edgeservice/target/zuul-edgeservice-0.0.1-SNAPSHOT.jar"
        port: 8080
        map_to_same_port_on_host: true
        service_registry_name: eureka-serviceregistry

      - name: eureka-serviceregistry
        path_to_jar: "../../cxf-spring-cloud-netflix-docker/eureka-serviceregistry/target/eureka-serviceregistry-0.0.1-SNAPSHOT.jar"
        port: 8761
        map_to_same_port_on_host: true
        service_registry_name: eureka-serviceregistry-second

      - name: eureka-serviceregistry-second
        path_to_jar: "../../cxf-spring-cloud-netflix-docker/eureka-serviceregistry/target/eureka-serviceregistry-0.0.1-SNAPSHOT.jar"
        port: 8761
        service_registry_name: eureka-serviceregistry

      - name: weatherbackend
        path_to_jar: "../../cxf-spring-cloud-netflix-docker/weatherbackend/target/weatherbackend-0.0.1-SNAPSHOT.jar"
        port: 8090
        service_registry_name: eureka-serviceregistry-second

      - name: weatherservice
        path_to_jar: "../../cxf-spring-cloud-netflix-docker/weatherservice/target/weatherservice-0.0.1-SNAPSHOT.jar"
        port: 8095
        service_registry_name: eureka-serviceregistry

  tasks:
  - name: Create base directory C:\springboot, if not there
    win_file: path={{base_path}} state=directory

  - name: Preparing the Spring Boot App's Files for later docker-compose run
    include: spring-boot-app-prepare.yml
    with_items: "{{ vars.services }}"

  - name: Run all Services with Docker Compose
    include: docker-compose-run-all-services.yml

  - name: Do healthchecks for all services
    include: spring-boot-app-health-check.yml
    with_items: "{{ vars.services }}"

The main playbook starts with the variable definition section. We use it here to define our Spring Boot & Cloud applications, which will be Docker Compose Services at the same time. Some of those configuration parameters are quite obvious: name, path_to_jar and port should be self-explanatory.

The map_to_same_port_on_host option is more interesting. When it is set, the port configured inside the container will also be mapped to the host (we'll see how that works later on). If not, Docker Compose will still map the port to the host, but using a randomly picked port number. The latter enables us to use our desired docker-compose scale weatherbackend=3, which isn't possible for the services with map_to_same_port_on_host set.

The last parameter is service_registry_name. Its usage is also quite obvious: it defines the Eureka service registry's DNS alias. Isn't that one always the same? Why do we need a configuration option here? Because we want Eureka in a peer-aware setup. This means that we use two Eureka server instances to showcase a more resilient and available setup with multiple instances. We therefore define two service registry services/applications: eureka-serviceregistry and eureka-serviceregistry-second. As both are provided by one project, we need to set eureka.client.registerWithEureka: true and the opposite Eureka instance in the eureka.client.serviceUrl.defaultZone property inside the eureka-serviceregistry's application.yml. The opposite host name – eureka-serviceregistry-second in eureka-serviceregistry and eureka-serviceregistry in eureka-serviceregistry-second – is set via an environment variable in the Dockerfile.
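Pieced together from the properties just mentioned, the relevant part of such a peer-aware registry configuration looks roughly like the following sketch – registry.host is later filled by the REGISTRY_HOST environment variable from the Dockerfile template, and the repository's actual application.yml remains the reference:

spring:
  application:
    name: eureka-serviceregistry

server:
  port: 8761

eureka:
  client:
    registerWithEureka: true
    serviceUrl:
      defaultZone: http://${registry.host:localhost}:8761/eureka/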

The second part of the main playbook is filled with four tasks, where the first one simply creates the base directory where all the magic will happen 🙂 The second task includes the spring-boot-app-prepare.yml to prepare the Dockerfiles and jars of all our applications. The last two tasks then use Docker Compose to run all our services (docker-compose-run-all-services.yml) and Ansible's win_uri module to health check them afterwards (spring-boot-app-health-check.yml). As those tasks are quite interesting in detail, let's have a more detailed look at them now.

Preparing our Apps for Docker Compose

The second task uses the spring-boot-app-prepare.yml playbook and is quite simple:

---
  - name: Defining needed variables
    set_fact:
      spring_boot_app:
        name: "{{ item.name }}"
        port: "{{ item.port }}"
        jar: "{{ item.path_to_jar }}"
        registry_name: "{{ item.service_registry_name }}"

  - name: Preparing the following Spring Boot App's Files for docker-compose run
    debug:
      msg: "Processing '{{spring_boot_app.name}}' with port '{{ spring_boot_app.port }}'"

  - name: Create directory C:\springboot\spring_boot_app.name, if not there
    win_file: path={{base_path}}\\{{spring_boot_app.name}} state=directory

  - name: Template and copy Spring Boot app's Dockerfile to directory C:\springboot\spring_boot_app.name
    win_template:
      src: "templates/Dockerfile-SpringBoot-App.j2"
      dest: "{{base_path}}\\{{spring_boot_app.name}}\\Dockerfile"

  - name: Copy Spring Boot app's jar-File to directory C:\springboot\spring_boot_app.name
    win_copy:
      src: "{{spring_boot_app.jar}}"
      dest: "{{base_path}}\\{{spring_boot_app.name}}\\{{spring_boot_app.name}}.jar"

After some variable definitions for better readability and a debug output to let the user know which of the apps is being processed, we use the win_file module to create an application-specific directory. Then we template and copy the Dockerfile and the application's jar file into the created directory – using win_template and win_copy. The most interesting part here is the Dockerfile template Dockerfile-SpringBoot-App.j2 itself:

#jinja2: newline_sequence:'\r\n'
FROM springboot-oraclejre-nanoserver:latest

MAINTAINER Jonas Hecht

ENV REGISTRY_HOST {{spring_boot_app.registry_name}}
ENV SPRINGBOOT_APP_NAME {{spring_boot_app.name}}

# Expose the apps Port
EXPOSE {{spring_boot_app.port}}

# Add Spring Boot app.jar to Container
ADD {{spring_boot_app.name}}.jar app.jar

# Fire up our Spring Boot app by default
CMD ["java.exe", "-jar app.jar --server.port={{spring_boot_app.port}}"]

We didn't go into much detail on that one in the previous blog post. The focus there was more on the creation of the Spring Boot base image springboot-oraclejre-nanoserver for Docker Windows Containers, which you can see being used in the FROM instruction. The ENV instructions define the environment variables REGISTRY_HOST and SPRINGBOOT_APP_NAME. They are loaded into the ${registry.host} and ${springboot.app.name} variables of every application (defined in their application.yml). E.g. in the weatherbackend's application.yml this ensures the correct registration of the app with the Eureka service registry:

eureka:
  client:
    serviceUrl:
      defaultZone: http://${registry.host:localhost}:8761/eureka/

The Dockerfile template also defines the open port of our application via an EXPOSE and adds our application's jar file to the Docker build context. The last CMD instruction fires up our Spring Boot application nearly as we're used to with a java -jar – just with a slightly different syntax, passing the defined and exposed port as the server.port property.
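The SPRINGBOOT_APP_NAME variable is picked up the same way via Spring Boot's relaxed binding. One plausible wiring – purely illustrative here, the example project's application.yml files are the authoritative reference – is to use it for the application's name, so every container registers with Eureka under its Compose service name:

spring:
  application:
    name: ${springboot.app.name:weatherbackend}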

Running our Apps with Docker Compose

Now that all our applications are prepared for Docker Compose, we should finally be able to fire up a docker-compose up, right?! Well – nearly. First of all we need a valid docker-compose.yml. As we decided to put Ansible in the lead, this also turned into a template. So the third task's playbook docker-compose-run-all-services.yml starts with templating the docker-compose.j2:

#jinja2: newline_sequence:'\r\n'
version: '3.2'

services:

{% for service in vars.services %}
  {{ service.name }}:
    build: ./{{ service.name }}
{% if service.map_to_same_port_on_host is defined %}
    ports:
      - "{{ service.port }}:{{ service.port }}"
{% else %}
    ports:
      - "{{ service.port }}"
{% endif %}
    tty:
      true
    restart:
      unless-stopped
{% endfor %}

networks:
  default:
    external:
      name: "nat"

There should be similarities to the common docker-compose.yml we've seen earlier. But besides the needed first line telling Ansible & Jinja not to turn this file into just one line, the services block looks quite exceptional. The instruction {% for service in vars.services %} tells Jinja2 to repeat the following block for every service we have in our central ansible-windows-docker-springboot.yml. It results in a docker-compose.yml entry like this:

zuul-edgeservice:
  build: ./zuul-edgeservice
  ports:
    - "8080:8080"
  tty:
    true

And here you already see the strength of this approach: we indeed put Ansible in the lead to hold the core information about our overall application. But the result on the Windows Server is just an ordinary docker-compose.yml – which lets us use all the nice command line tools of Docker Compose, e.g. docker-compose up. There's one thing left: {% if service.map_to_same_port_on_host is defined %} will map our exact port to the host's port. If map_to_same_port_on_host isn't set, we make use of a nice feature of Docker Compose here. The resulting entry (e.g. ports: - "8090" for the weatherbackend) will tell Docker to also map the defined port to the host, but to pick a random host port. That frees us to scale our service with the mentioned docker-compose scale weatherbackend=3. It uses the Docker run option --publish-all or just -P behind the scenes.
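If you later need to know which random host port an instance actually got, Docker Compose can tell you. A quick sketch, assuming the project name springboot and the weatherbackend's container port 8090 (add --index=2 to ask for the second scaled instance):

docker-compose --project-name springboot port weatherbackend 8090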

OK, enough about the docker-compose.yml templating. Let's go back to the third task's playbook docker-compose-run-all-services.yml:

---
  - name: Template docker-compose.yml to directory C:\spring-boot
    win_template:
      src: "templates/docker-compose.j2"
      dest: "{{base_path}}\\docker-compose.yml"

  - name: Stop all Docker containers (if there)
    win_shell: docker-compose --project-name springboot stop
    args:
      chdir: "{{base_path}}"
    ignore_errors: yes

  - name: Remove all Docker containers (if there)
    win_shell: docker-compose --project-name springboot rm -f
    args:
      chdir: "{{base_path}}"
    ignore_errors: yes

  - name: (Re-)Build all Docker images
    win_shell: docker-compose build
    args:
      chdir: "{{base_path}}"
    ignore_errors: yes

  - name: Run all Docker containers
    win_shell: docker-compose --project-name springboot up -d
    args:
      chdir: "{{base_path}}"
    ignore_errors: yes

We already talked about the first win_template. After the docker-compose.yml is created there, we first stop and remove all Docker containers – just to start clean – and then build and run all of them afterwards (everything done with the help of the win_shell module). That's all at a high level here.

But you've already seen it: there's an --project-name springboot spread all over the nice docker-compose CLI commands. That's because later on we need to know our containers' names to be able to run health checks against them. As Docker Compose generates container names like GeneratedContainerNameBeginning_serviceName_number, we wouldn't be able to do that – or we would sacrifice our ability to scale our applications by using the container_name option, which is a bad idea in a blog post about scaling apps with Ansible & Docker on Windows 🙂 But there's help! With docker-compose --project-name we are able to set the GeneratedContainerNameBeginning exactly. And that's all we need!

Healthchecking our many Spring Boot apps inside Docker Windows Containers

This brings us to the last task of our central ansible-windows-docker-springboot.yml: doing health checks with our spring-boot-app-health-check.yml:

---
  - name: Defining needed variables
    set_fact:
      spring_boot_app:
        name: "{{ item.name }}"
        port: "{{ item.port }}"

  - name: Obtain the Docker Container's internal IP address (because localhost doesn't work for now https://github.com/docker/for-win/issues/458)
    win_shell: "docker inspect -f {% raw %}'{{ .NetworkSettings.Networks.nat.IPAddress }}' {% endraw %} springboot_{{spring_boot_app.name}}_1 {{ '>' }} container_ip.txt"

  - name: Get the Docker Container's internal IP address from the temporary txt-file (we have to do this because of templating problems, see http://stackoverflow.com/a/32279729/4964553)
    win_shell: cat container_ip.txt
    register: win_shell_txt_return

  - name: Define the IP as variable
    set_fact:
      docker_container_ip: "{{ win_shell_txt_return.stdout.splitlines()[0] }}"

  - name: Wait until our Spring Boot app is up & running
    win_uri:
      url: "http://{{ docker_container_ip }}:{{spring_boot_app.port}}/health"
      method: GET
      headers:
        Accept: application/json
    register: health_result
    until: health_result.status_code == 200
    retries: 30
    delay: 20
    ignore_errors: yes

We already discussed the obstacles of health checking without a working localhost loopback in the previous blog post. But there are some slight enhancements needed here when scaling applications. As already mentioned in the task before, we need to know the container's name to obtain its internal IP address. Because we fired up our containers with a docker-compose --project-name springboot up -d, we're now able to run docker inspect -f {% raw %}'{{ .NetworkSettings.Networks.nat.IPAddress }}' {% endraw %} springboot_{{spring_boot_app.name}}_1 successfully to get the needed IP address.

And there's a second gotcha. We need to tell our win_uri module to use an explicit HTTP header with headers: Accept: application/json. Why? Because from Spring Boot 1.5.x on, the Spring Boot Actuator's Content-Type is something like application/vnd.spring-boot.actuator.v1+json when calling http://appname:port/health without the correct Accept header. As much as I love those "in a perfect world" answers, I also have to encourage you to read about "well-written clients" – which I assume 90% of the world's HTTP clients are not 🙂 (including most browsers, which have no clue about that strange Content-Type and refuse to render it correctly). But knowing that, our health check will run fine again!
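If you want to verify this manually from a PowerShell on the Windows box, a quick sketch could look like this – the container IP placeholder stands for the address obtained via docker inspect as in the playbook above, and Invoke-RestMethod sends the Accept header and parses the JSON response:

Invoke-RestMethod -Uri "http://<container-ip>:8090/health" -Headers @{ Accept = "application/json" }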

The final step: Run and test it!

I assume you did the Packer build, the vagrant init windows_2016_docker_virtualbox.box and the vagrant up inside step0-packer-windows-vagrantbox, and prepared your machine with an ansible-playbook -i hostsfile prepare-docker-windows.yml --extra-vars "host=ansible-windows-docker-springboot-dev" inside the step1-prepare-docker-windows directory. If you have any questions about those preparation steps, I encourage you to give the previous blog post a short read!

There was only one small change compared to the previous article – but that one is relevant for our showcase here: we want our client application weatherclient to be able to access our Windows host running with Vagrant inside VirtualBox. Therefore we add a port forwarding configuration to VirtualBox using a simple line in our Vagrantfile template:

config.vm.network "forwarded_port", guest: 8080, host: 48080, host_ip: "127.0.0.1", id: "edgeservice"

Because of this, it's a good idea to run the Packer build again. Alternatively (e.g. if your coffee machine doesn't work at the moment) you could also configure the port forwarding manually in your VirtualBox network settings.

Now we're there! Just fire the main playbook at our running Windows Server 2016 Docker Vagrant box with the following command inside the step3-multiple-spring-boot-apps-docker-compose directory:

ansible-playbook -i hostsfile ansible-windows-docker-springboot.yml --extra-vars "host=ansible-windows-docker-springboot-dev"

Once the playbook has run through successfully, we can watch what's going on inside our containers. Open a few PowerShell windows on the Windows box and follow the logs of our services:

docker logs springboot_zuul-edgeservice_1 -f
docker logs springboot_weatherservice_1 -f
docker logs springboot_weatherbackend_1 -f

Although I'm already itching to extend my showcase with an Elastic Stack to free me from having to look into every single container's logfiles, this should give us the insights we need here. Feel free to enter a zip code (I recommend 99423!) and hit the "Try it out!" button! If everything went fine, this should bring up a response code 200 alongside a valid response. But the contents of our PowerShell windows are far more interesting than this simple REST GUI's output. Let's have a look at the zuul-edgeservice's logs:

[...]: POST request to http://localhost:48080/api/weatherservice/soap/Weather

Really nice! Our edge service seems to have been called successfully 🙂 The weatherservice's logs also look quite chatty – it's a SOAP service implementation after all, using the cxf-spring-boot-starter:

[...] : 000 >>> Inbound Message: <soap:Envelope [...]<ForecastRequest><ZIP>99423</ZIP>[...]</soap:Envelope>
[...] : Transformation of incoming JAXB-Bind Objects to internal Model
[...] : Call Backend with internal Model
[...] : Calling weatherbackend with Feign: '192.168.109.247', '8090', 'UP', 'http://192.168.109.247:8090/'
[...] : Transformation internal Model to outgoing JAXB-Bind Objects
[...] : 000 >>> Outbound Message: <soap:Envelope [...] <ForecastResult>[...]<ns2:Temperatures><ns2:MorningLow>0°</ns2:MorningLow><ns2:DaytimeHigh>90°</ns2:DaytimeHigh></ns2:Temperatures>[...]</soap:Envelope>

We're also able to spot the Feign-based call to the weatherbackend, which itself is invoked as well:

[...] : Called Backend
[...] : Request for /general/outlook with POST

We've made it 🙂 Every application is called and our setup seems to work out well. I think there's only one thing left…

The second final step: Scaling our functional services

We've finally reached the state where we can scale our functional services – they don't map a fixed port to the host and therefore should be scalable through the Docker Compose CLI. So let's do it! Open a PowerShell again and fire up a

docker-compose scale weatherbackend=3 weatherservice=2
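To see what Compose just did, you can list the project's containers right away – the scaled instances show up with increasing index numbers at the end of their names:

docker-compose --project-name springboot ps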

Now let's open Eureka again. After a few seconds, we should be able to see our new scaled instances:

And voilà – there are our new weatherservice and weatherbackend instances!

We've done it – again!

Now we're able to leverage the power that Docker provides us with! We're running multiple applications in multiple Docker containers. With the help of Spring Cloud Netflix we implemented a single entry point (edge service) that provides us with dynamic routes from a central but resilient service registry, with which all our services register. We additionally leverage the built-in resilience patterns of Spring Cloud Hystrix and use the discovery-aware REST client Feign.

And with Docker Compose we're able to easily manage and scale our services on the Windows Docker host, without giving away the advantages of Ansible as our core Continuous Delivery glue for all our apps. Summing it all up: we've put together Spring Boot apps with Spring Cloud Netflix support on Docker Windows Containers, orchestrated by Ansible. Really cool!

What's left? Oh, I "hate" that question 🙂 There's always something left. I see tools like Docker Swarm or Kubernetes longing to be used in this scenario. An Elastic Stack would also be nice to have as a central log monitoring portal. And there's also Microsoft Azure, which brings all the concepts shown in this blog series into a cloud infrastructure. So as always: stay tuned!
