Vert.x in a real-world use-case

17.7.2014 | 6 minutes of reading time

Vert.x is an event-driven, non-blocking, polyglot application platform. In certain ways it is quite comparable to platforms such as NodeJS, only Vert.x runs on the JVM. The first time I encountered Vert.x was during Devoxx 2012. The platform not only sounded very interesting, but after performance tests appeared, it also proved to be quite fast. For some reason, however, I simply never found the time to give Vert.x a proper go.

And still, I always ask myself what to build with such a platform. Playing around, following the examples and building a simple web server is one thing. But I always try to find a real-world use case, to properly learn about the advantages and disadvantages.
Enter May 2014, the ShipIt day at my current customer (which I talked about here). At this project we are about to build a new application that should pass messages between systems, transform certain messages and take care of some non-functional tasks like logging and authentication. This sounded like an ideal candidate to give Vert.x a try, and our goal was born.


The first setup with Vert.x was very easy. You need to install a separate container and the instructions are clearly explained on the website. After fifteen minutes all team members had the container running with the demo app. What helped here is that besides deploying Vert.x modules, Vert.x also supports directly running a single Java class (or JavaScript or any other language for that matter).

All that these classes need to do is extend the Verticle abstract class. These verticles are the unit of work inside Vert.x. They are started when deployed, and will keep on running until the container is stopped. Special threads inside the Vert.x container execute a verticle. A single (instance of a) verticle will always be executed by the same thread, but a single thread may handle the work for multiple verticles. This is also the reason that all long-running work done by a verticle must be non-blocking; it would otherwise block every other verticle assigned to that thread.
Should you need to do any blocking work such as database queries or heavy computations, you can create certain “worker verticles” for this. They will be executed separately in their own thread-pool.
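This threading model can be pictured in plain Java, without Vert.x involved: a single-threaded executor plays the role of a verticle's event loop, and a separate pool plays the role of the worker verticle pool. The class and method names below are ours, purely for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EventLoopSketch {

    // Runs two tasks on a single-threaded "event loop" and reports whether
    // both were executed by the same thread - with a single-threaded
    // executor, they always are, just like tasks of one verticle.
    public static boolean tasksShareThread() throws Exception {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        try {
            String first = eventLoop.submit(() -> Thread.currentThread().getName()).get();
            String second = eventLoop.submit(() -> Thread.currentThread().getName()).get();
            return first.equals(second);
        } finally {
            eventLoop.shutdown();
        }
    }

    // Blocking work goes to a separate "worker pool", so the event loop
    // thread is never held up by it.
    public static String blockingOnWorkerPool() throws Exception {
        ExecutorService workerPool = Executors.newFixedThreadPool(2);
        try {
            return workerPool.submit(() -> {
                Thread.sleep(50); // stand-in for a blocking database call
                return "done";
            }).get();
        } finally {
            workerPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(tasksShareThread());     // prints "true"
        System.out.println(blockingOnWorkerPool()); // prints "done"
    }
}
```

This is of course only an analogy; Vert.x manages these threads for you and multiplexes many verticles over a small number of event-loop threads.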

After installation of the Vert.x container, the next step meant setting up a Maven project, to build and package our own module for deployment. A Maven Archetype is provided to make this very easy. Only some cleanup is necessary afterwards, to remove classes from unused languages. Now, actual work can start.

Implementing the Flow

The first ‘flow’, or piece of functionality, that we wanted to implement fetches data from one system using SOAP/HTTP, and forwards this data to another system, again using SOAP/HTTP. Because the SOAP messages are so simple, we decided to directly use HTTP POST with the correct message body. The entire flow needs to be triggered by a timer, running every ten minutes or so.

For all these interfaces, Vert.x provides simple objects and methods that are all invoked in a non-blocking fashion. Basically, for every call you need to provide a Handler class that will be called when an answer is received.
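The Handler pattern boils down to a single-method callback interface. A minimal plain-Java imitation (the names here are ours, not the actual Vert.x API) shows the shape of such a call:

```java
public class HandlerSketch {

    // A minimal imitation of the Vert.x Handler<T> callback interface.
    interface Handler<T> {
        void handle(T event);
    }

    // A fake non-blocking "client": instead of returning the answer, it
    // takes a Handler that is called back once the answer is available.
    // (In real Vert.x the call returns immediately and the handler fires
    // later on the event loop; here we invoke it synchronously.)
    static void fetchData(String url, Handler<String> responseHandler) {
        responseHandler.handle("response for " + url);
    }

    public static void main(String[] args) {
        final StringBuilder received = new StringBuilder();
        fetchData("/fetch/data", new Handler<String>() {
            public void handle(String event) {
                received.append(event);
            }
        });
        System.out.println(received); // prints "response for /fetch/data"
    }
}
```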

Let’s start with the timer. In the example below you see a simple Verticle that will be started automatically by the container. From there, a periodic timer is started that triggers every minute, and will call the ‘handle’ method.

public class TriggerVerticle extends Verticle {

  public void start() {
    final Logger log = container.logger();

    // timeout set to 1 minute
    final long periodicTimerId = vertx.setPeriodic(60_000, new Handler<Long>() {
      public void handle(final Long timerID) {
        log.info("Trigger Data Fetch");
      }
    });

    log.info("TriggerVerticle started");
  }
}

We can now integrate this with the HTTP client that should fetch the data from the server (which is a POST call because of SOAP). The code for the client is shown here separated from the timer above:

final HttpClient client = vertx.createHttpClient()
    .setHost("localhost")
    .setPort(8080);

final HttpClientRequest request ="/fetch/data",
    new HttpResponseHandler());
request.exceptionHandler(new Handler<Throwable>() {
  public void handle(final Throwable throwable) {
    log.error("Exception when trying to invoke server", throwable);
  }
});

// Needed because you can write to the Request object before actual invocation
request.end(SOAP_REQ_MSG);

// ...etc

private class HttpResponseHandler implements Handler<HttpClientResponse> {

  public void handle(final HttpClientResponse httpClientResponse) {
    log.info("Got a response: " + httpClientResponse.statusCode());

    if (httpClientResponse.statusCode() == 200) {
      // Only post message for 200 - OK
      httpClientResponse.bodyHandler(new Handler<Buffer>() {
        public void handle(Buffer body) {
          // The entire response body has been received
          log.info("The total body received was " + body.length() + " bytes. Forwarding msg");
          vertx.eventBus().publish(AppStarter.QUEUE_POST_DATA, body);
        }
      });
    }
  }
}

From the example above, the combination of and request.end() might be confusing. The reason is that does not actually send anything; it gives us a request object that we can first use to e.g. set headers or, in our case, add an exception handler. Only upon request.end() is the actual HTTP request fired.
Because the response might contain a large body, again a Handler object is needed to read from the buffer. Here we immediately publish the Buffer object on the event bus.
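This write-then-end lifecycle can be mimicked in a few lines of plain Java (again with illustrative names, not the real Vert.x API): the request object merely stages headers and body until end() is called.

```java
import java.util.ArrayList;
import java.util.List;

public class DeferredRequestSketch {

    // Imitates the lifecycle of an HttpClientRequest: headers and body can
    // be staged freely; nothing goes over the wire until end() is invoked.
    static class Request {
        private final List<String> headers = new ArrayList<>();
        private final StringBuilder body = new StringBuilder();
        private boolean sent = false;

        Request putHeader(String name, String value) {
            headers.add(name + ": " + value);
            return this;
        }

        Request write(String chunk) {
            body.append(chunk);
            return this;
        }

        // Only here is the "request" actually assembled and sent.
        String end() {
            sent = true;
            return String.join("\n", headers) + "\n\n" + body;
        }

        boolean isSent() { return sent; }
    }

    public static void main(String[] args) {
        Request request = new Request();
        request.putHeader("Content-Type", "text/xml");
        request.write("<soap:Envelope/>");
        System.out.println(request.isSent()); // prints "false" - nothing sent yet
        String wire = request.end();
        System.out.println(request.isSent()); // prints "true"
        System.out.println(wire.contains("<soap:Envelope/>")); // prints "true"
    }
}
```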

Finally, on the other side, we need to receive the message and post it to another HTTP server. Because the original response is still a Buffer, we can write its contents directly to the new request; it is only necessary to set the content-length header manually.
The response handler is omitted below because it is similar to the example above.

public class PostVerticle extends Verticle {

  public void start() {
    final Logger log = container.logger();

    final HttpClient client = vertx.createHttpClient()
      .setHost("localhost")
      .setPort(8081);

    vertx.eventBus().registerHandler(AppStarter.QUEUE_POST_DATA, new Handler<Message<Buffer>>() {
      public void handle(final Message<Buffer> message) {
        log.info("Received msg, forwarding to other side");

        final HttpClientRequest request ="/post/data",
          new MyResponseHandler(message));
        request.putHeader("Content-Length", Integer.toString(message.body().length()));
        request.write(message.body());
        request.end();
      }
    });
  }
}



What we liked:

  • Programming model – verticles
  • Reactive / event-driven
  • Simple modules / configuration
  • Event bus
  • Fast and lightweight
  • Adequate documentation

Missing for our project:

  • Non-reliable event processing (no transactions)
  • Automatic re-deploy not for production
  • Unclear module management
  • Limited availability of ‘connectors’ (SOAP, JMS, …)
  • Yet another container

The code examples on the website, explaining parts of the Vert.x API, looked almost too simple. But after some struggling while building our first verticles, we found out it really is that simple. All code runs inside a verticle, and as long as you use the client and server instances as provided by Vert.x, they will automatically be closed and cleaned up when your verticle stops.
This made programming modules for the Vert.x platform really easy and fun. Communication between verticles via the event bus is also simple and just works. There was some obscurity about how to handle big messages and transport them from one verticle to another, but it seems the event bus is the way to go in this case as well.

For us, the reason not to use Vert.x mostly came down to the non-reliable event processing and having to use yet another container. Many companies are now writing applications based on Spring or Java EE, and have components or pieces of code that they can easily re-use in another project. Because right now Vert.x provides no integration with any of these containers (and also because the number of available connectors is somewhat limited), all these components would need to be re-written.
Regarding the non-reliable event processing: in our current applications, messages are fetched from a queue and passed on to other systems. When something goes wrong inside the application and a message is not properly ‘signed off’, it will re-appear on the queue. This even happens when, for example, the application crashes. Without support for transactions, this functionality is not available on the Vert.x event bus. And for the system we needed to write, the risk of possibly losing messages was too high.
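The queue semantics we were missing can be sketched in plain Java: a message stays ‘in flight’ until it is acknowledged, and is put back on the queue when processing fails. This is a deliberate simplification of what a transactional JMS broker does for you, with names of our own choosing:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class AckQueueSketch {

    // A toy at-least-once queue: take() hands out a message, but only
    // ack() removes it for good; nack() puts it back at the front of the
    // queue, like an unacknowledged JMS message reappearing after a crash.
    static class AckQueue<T> {
        private final Deque<T> pending = new ArrayDeque<>();
        private T inFlight;

        void publish(T msg) { pending.addLast(msg); }

        T take() {
            inFlight = pending.pollFirst();
            return inFlight;
        }

        void ack() { inFlight = null; } // processing succeeded: message is gone

        void nack() {                   // processing failed: redeliver later
            if (inFlight != null) {
                pending.addFirst(inFlight);
                inFlight = null;
            }
        }

        int size() { return pending.size(); }
    }

    public static void main(String[] args) {
        AckQueue<String> queue = new AckQueue<>();
        queue.publish("soap-msg-1");

        queue.take();
        queue.nack();                     // simulate a failed forward
        System.out.println(queue.size()); // prints "1" - message survived

        queue.take();
        queue.ack();                      // forward succeeded
        System.out.println(queue.size()); // prints "0"
    }
}
```

On the Vert.x event bus, by contrast, a message that has been delivered to a handler is simply gone once the handler fails, which is exactly the behaviour we could not accept.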
