Gatling Load Testing Part 1 – Using Gatling

20.6.2017 | 18 minutes of reading time

Gatling is a Scala-based load testing tool developed by Gatling Corp. The tool itself is open source and can be found on GitHub. On top of the open-source part, an enterprise edition exists.

Load tests in Gatling are written in Scala. The API for writing those tests makes heavy use of the builder pattern and fluent interfaces. This might be a question of personal preference, but in my opinion this approach fits quite well, especially because no detailed Scala knowledge is necessary in order to write Gatling load tests. Therefore, Java developers should not be afraid of using Gatling.

A single load test in Gatling is called a scenario. Roughly, a scenario can be divided into three parts:

  1. General configuration (protocol, server address, encoding …)
  2. Steps to execute (open webpage, click this, enter that …)
  3. Scenario configuration (no. of total users, users over time …)

The different parts will be explained in more detail in the following sections. But the possibilities for reusing different parts across tests should already be obvious.

Gatling currently provides support for the HTTP protocol (including WebSockets and SSE) and JMS. Extending this functionality will be the topic of the next blog post. For the following example we will rely on HTTP requests because they are the easiest to understand.

Test Scenario

Within this scenario, we will make use of the website the Gatling team provides for testing Gatling. The website allows testing the basic HTTP actions. I do not want to repeat the tutorial from the Gatling website here; therefore we will do some things differently:

  • Firstly, we will not use the recorder. It might be convenient, but we will stick to code here. If you want to know more about the recorder, see here.
  • Secondly, Gatling shall be included within a regular project. This means we create a Scala project using the “Simple Build Tool” (SBT). If you are a Java developer, do not worry: the usage is not complicated and the SBT parts are kept short.

Project Set-Up

As usual, you can find the whole project on GitHub.

So, let’s start with a basic SBT file (build.sbt):

lazy val root = project
  .in(file("."))
  .settings(
    name := "gatling-example",
    scalaVersion := "2.11.8",
    version := "0.1.0-SNAPSHOT",
    libraryDependencies ++= Seq(
      "io.gatling.highcharts" % "gatling-charts-highcharts" % "2.2.1",
      "io.gatling" % "gatling-test-framework" % "2.2.1"
    )
  ).enablePlugins(GatlingPlugin)

We defined a project in the current directory and gave it a name, a Scala version, and a project version. Additionally, the Gatling dependencies have been added. Next to the build.sbt file, we need the project directory and, inside it, a file that sets the SBT version.

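By SBT convention, this file is project/build.properties. A minimal sketch — the exact version number here is an assumption, use whichever SBT version matches your installation:

```properties
sbt.version=0.13.11
```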

Finally, we add the file plugins.sbt in the project directory containing the following line:

addSbtPlugin("io.gatling" % "gatling-sbt" % "2.2.1")

Using the SBT plugin, the tests can be run as part of the SBT build. The directory structure described above (build.sbt plus the project directory with its two files) should exist now. This is all that is needed to start writing some Gatling tests. A simple call to sbt gatling:test should show that everything works as expected.

Simple Test

The simplest HTTP test one can come up with is probably opening a web page and checking that some content is displayed. So, let’s do that.
If it does not exist yet, please create a src/test/scala directory and use whichever package you prefer.

Every class has to extend io.gatling.core.scenario.Simulation in order to be recognized by Gatling. Additionally, the imports

import io.gatling.core.Predef._
import io.gatling.http.Predef._

are recommended. A Gatling module (here: core and HTTP) generally defines a class called Predef, which represents the central access point to that library. E.g. if we take a look at the io.gatling.http.Predef class, we can see that it just defines two types and extends io.gatling.http.HttpDsl, which provides the HTTP methods we need.
So, let’s start with the first part, the general configuration. For now we will keep it simple:

private val httpConfig = http.baseURL("")

http is a method provided by the HttpDsl and our starting point. There are a lot more options provided by the HTTP module to simulate a browser as precisely as possible, but we only need the base URL. Please note that the base URL is fixed for this configuration; all navigation will be relative to it. The second part is also quite easy:

private val scn = scenario("SimpleSimulation").
  exec(http("open").
    get("/")).
  pause(1)

Here, we define the actual scenario, i.e. we state what shall happen and in which order. The scenario is given a name, which will appear in the results later. The exec() method takes the concrete actions in order to execute them. The pause at the end is not mandatory. You will find many examples online that have pauses between different logical steps. In more elaborate tests it could be used to represent the time a user takes to think after seeing a new webpage. Here, it rather serves as a visual separator. The “1” stands for one second in this test, but any scala.concurrent.duration.Duration can be used.
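The duration handling can be sketched with plain scala.concurrent.duration values, independent of Gatling:

```scala
import scala.concurrent.duration._

// pause(1) means one second; any FiniteDuration works as well:
val short: FiniteDuration = 500.milliseconds
val long: FiniteDuration = 2.minutes

// Durations convert cleanly between units:
println(short.toMillis) // 500
println(long.toSeconds) // 120
```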
Lastly, we have to tell Gatling how many users we want to simulate:

setUp(scn.inject(atOnceUsers(1))).protocols(httpConfig)

We told Gatling to use the previously defined scenario and to start it with one user. The scenario will use the HTTP configuration we defined before. And that’s it.

Now that we have our (simple) simulation: how to run it? Basically, there are three options:

Running in the IDE

Although I suppose this is not directly intended by the Gatling people, there is a possibility to run your simulations directly from your IDE. In order to do so, a run configuration pointing to Gatling’s main class has to be created. This class contains the main method that starts Gatling. Additionally, we have to tell Gatling where to find the class files and which simulation to start. This information is passed as program arguments. We need three parameters:

  • -s de.codecentric.gatling.example.SimpleSimulation
  • -sf src/test/scala
  • -bf target/scala-2.11/test-classes or target/scala-2.11/gatling-classes

The -s option stands for “simulation”, -sf for “simulations folder”, i.e. the sources of the simulations, and -bf for “binaries folder”. Those are actually Gatling’s command line parameters. In IntelliJ the configuration could look like this:

The -bf option is a little bit tricky. If you do not have the Gatling plugin activated in your SBT project, the simulation classes will be compiled into test-classes. If the plugin is active, the simulations are being placed in gatling-classes. So be careful which directory you choose. For the example project it is gatling-classes.

Running with SBT

Due to the SBT plugin we already added to our project, this is the easiest way to run the simulation. Simply type

sbt gatling:test

and the simulation will start. If there is more than one simulation in the project, SBT will run all of them. Alternatively,

sbt gatling:testOnly

followed by the name of a simulation will run only that one.

Running in the Terminal

You might ask how you would start SBT if not from the terminal 😉 Still, there is yet another way to run your simulations. For this option, you have to download the Gatling bundle. After unzipping the bundle, you will see that it contains a lib directory. In this directory, we place the (test) JAR of the project. Since we will not use any src/main classes, it is sufficient to create a JAR containing only the test classes. You might use SBT or your IDE to create it. For SBT, the line

publishArtifact in (Test, packageBin) := true

in the build.sbt file allows creating a test JAR with the command sbt test:package. After copying the JAR into the Gatling lib directory and calling

/bin/ -s de.codecentric.gatling.example.SimpleSimulation

the simulation should run. In my case, running without the -s parameter did not list my simulation class. Maybe that is related to the simulation being packaged in a JAR and not as a class file in the user-files/simulations directory.

You might ask why this complicated way to run the simulation is needed. We have a nice and comfy SBT plugin and can even run the simulation in our IDE. So why download a bundle, package a test JAR and use a script? Well, you might have noticed that the first two ways of running the simulation are limited to a single machine. This way is required for distributed execution of simulations. We will come to that later.
Also, this way might be better suited for your CI system. The JAR that already passed the unit and integration tests is pushed to the next phase: the performance tests. Using the Gatling bundle, there is no need to rebuild the JAR. An IDE hopefully does not appear in your CI pipeline, and besides that, this step is independent of Scala and SBT; it only requires (plain, old) Java.


Whichever way you choose to execute the tests, a results directory should have appeared. Within this directory another directory with the name of the scenario and a timestamp should be present. And lastly, within that one, an index.html file. This webpage contains all of the data that was collected by Gatling during the simulation, presented in a nice way. All that you need to know about the graphics is written in the Gatling documentation.

Because the example only executed a single operation, there is not much to see. Gatling presents two executed operations because the original request (we called it “open”) was redirected. Hopefully, the request was successful on your computer.

After having established the basics, let’s see if we can do a little bit better on the test.

Complex Test

A “complex” test is mostly defined by the reader of the test. For some it might appear easy to understand, for others it is difficult. Within this section, our simple test shall be extended by some nice Gatling functionalities. I especially want to point out some pitfalls I wish I had known about when I started working with Gatling.

Feeders
The first thing we want to use in our more complex test is a Feeder. As the name suggests, it feeds some data to the scenario. A feeder is necessary because the scenario you define is fixed in the way you write it. For example,

private val addComputer = scenario("Add Computer").
  exec(http("create new computer").
    post("/computers").
    formParamMap(Map("name" -> "Codecentric Machine")))

performs a simple POST to create a new computer in the database. The problem here is that when using more than one simulated user, every one of them would create a computer with the same name. In the case of the computers database, duplicated names are allowed. But what if they were not? Then the first POST would be successful and the following ones would fail. That is where a feeder comes in handy:
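The collision is easy to see in plain Scala (the name is taken from the example above; no Gatling code involved):

```scala
// Ten virtual users, all sending the same hard-coded form value:
val submittedNames = Seq.fill(10)("Codecentric Machine")

// Only one distinct name arrives at the server. If names had to be
// unique, nine of the ten POSTs would fail.
println(submittedNames.distinct.size) // 1
```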

private val numberFeeder = for (x <- 0 until 10) yield Map("veryImportantId" -> x)

private val addComputer = scenario("Add Computer").
  feed(numberFeeder).
  exec(http("create new computer").
    post("/computers").
    formParamMap(Map("name" -> "Codecentric Machine ${veryImportantId}")))

At first, we created a feeder. In order to give the value it feeds a name, we have to create several maps that each contain a single value, where the key is the name (the Java developers hopefully excuse my use of Scala’s yield). Of course, for counting from 0 to 9 we could have used a simple loop, but that would be too easy. Next, we added the feeder to the scenario with the feed() method. Gatling provides an implicit conversion from an IndexedSeq of Maps to a FeederBuilder. Within the scenario we can use the value by its key from the map. For that purpose, Gatling has its own small EL.
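What the for/yield actually produces can be checked in plain Scala, without any Gatling code:

```scala
// One single-entry Map per virtual user; the key is the name the
// scenario will later use via ${veryImportantId}:
val numberFeeder = for (x <- 0 until 10) yield Map("veryImportantId" -> x)

assert(numberFeeder.size == 10)
assert(numberFeeder.head == Map("veryImportantId" -> 0))
```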

Be careful when using the ${…} notation that your IDE does not automagically transform the string into a Scala interpolated string. It has to stay a normal string.

If you run the simulation now, you will see that only a single computer is created. That is because only one user is simulated. Let’s change that value to 11. (Note: when starting the example from the IDE, do not forget to change the class name in the run configuration. A blog author who does not want to be mentioned by name forgot to do that at first…)
Now you should be presented with an exception, because the feeder ran out of values. Unless defined differently, a feeder provides each value once, in order. If there are more users than values, it will simply crash. There are four different ways to provide values; take whichever fits your needs.
I want to point out another thing. Consider that we want ten users to create the computers including the id and then ten users to query the newly created computers. This could look like this:

private val numberFeeder = for (x <- 0 until 10) yield Map("veryImportantId" -> x)

private val addComputer = scenario("Add Computer").
  feed(numberFeeder).
  exec(http("create new computer").
    post("/computers").
    formParamMap(Map("name" -> "Codecentric Machine ${veryImportantId}")))

private val checkComputer = scenario("Check Computer").
  feed(numberFeeder).
  exec(http("request computer").
    get("/computers?f=Codecentric Machine ${veryImportantId}").
    check(css("a:contains('Codecentric Machine ${veryImportantId}')", "href")))

setUp(
  addComputer.inject(atOnceUsers(10)),
  checkComputer.pause(4).inject(atOnceUsers(10))
).protocols(httpConfig)

Please note that the same numberFeeder is used twice. Additionally, in checkComputer, I use a CSS check I shamelessly copied from the Gatling example. Also, when setting up the scenarios, I added a four second delay to the second one in order to give the creation of the computers a head start. Again, this should crash: although the numberFeeder is being implicitly converted, it is the same feeder both times. But what can we do if we do not want to define numberFeeder, numberFeeder2, … each time we would like to use the same values? Copying and pasting the feeder code would be horrible and error-prone. A simple trick here is to force the creation of a new object each time. Can you already guess what we could use? Exactly, the iterator method. Each iterator is an independent object and Gatling again provides an implicit conversion. Therefore, by changing the code slightly, we can reuse the feeder across scenarios:

private val numberFeeder = for (x <- 0 until 10) yield Map("veryImportantId" -> x)

private val addComputer = scenario("Add Computer").
  feed(numberFeeder.iterator).
  exec(http("create new computer").
    post("/computers").
    formParamMap(Map("name" -> "Codecentric Machine ${veryImportantId}")))

private val checkComputer = scenario("Check Computer").
  feed(numberFeeder.iterator).
  exec(http("request computer").
    get("/computers?f=Codecentric Machine ${veryImportantId}").
    check(css("a:contains('Codecentric Machine ${veryImportantId}')", "href")))

setUp(
  addComputer.inject(atOnceUsers(10)),
  checkComputer.pause(4).inject(atOnceUsers(10))
).protocols(httpConfig)

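The iterator trick itself can be verified in plain Scala, independent of Gatling (the values here are just an illustration):

```scala
val numberFeeder = for (x <- 0 until 3) yield Map("veryImportantId" -> x)

// Each call to .iterator returns an independent object, so two
// scenarios can each consume the full value set without interfering:
val first = numberFeeder.iterator.toList
val second = numberFeeder.iterator.toList

assert(first == second)
assert(first.size == 3)
```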
Having feeders, you might ask where a feeder stores its values. That is why we take a look at the session next.

Session Manipulation

A session is an interesting thing in Gatling, because that is where each individual virtual user can store its “personal” data. Like a browser session, each one has its own. Simply put, the session is a key-value map. It offers some more features, but we do not need them right now. Gatling itself places some information for every simulated user in the session, and feeders use it, too. In the previous example, the feeder placed the value under the key veryImportantId in the session. If you are curious what is present in the session, add this line to your scenario (e.g. at the end):

exec(s => { s.attributes.foreach(println(_)); s })

In my case, three keys were present: veryImportantId, gatling.http.cookies and gatling.http.referer.

Using this exec() line, you can basically manipulate the session in any way you want, or perform some additional validation on things stored in it. Keep in mind that when changing something inside the session, the manipulated one has to be returned. Calling set() on a session returns a new one; a session itself is immutable. So, what could you use the session for? I used it for an async operation: a callback was placed inside the session, and a long enough pause was placed in the scenario. If the callback had not logged a successful response by the time the pause ended, it was cancelled and an error was logged for the operation.
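The “immutable: setting a value returns a new object” contract can be illustrated with a plain immutable Map. This is not Gatling’s actual Session class, just the same idea:

```scala
// A stand-in for the session: a plain immutable key-value map.
val session: Map[String, Any] = Map("veryImportantId" -> 0)

// "Setting" a value does not change the original -- it returns a copy:
val updated = session + ("callback" -> "pending")

assert(session.size == 1) // the original is unchanged
assert(updated.size == 2) // the update lives in the returned copy
```

This is why a session-manipulating function must return the updated session: the original object never changes.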

Combining Parts
Lastly, I would like to show how different parts can be combined so that reuse becomes easier. Every exec() or feed() call can be stored in a variable. This allows for a nice combination of the different parts. If you have checked out the example project from GitHub, this is now placed in FinalSimulation. Firstly, opening a webpage might be common enough, so we extract it:

private val openMainPage = exec(http("open").
  get("/")).
  pause(1)

Next is the addition of a computer:

private val addComputer = feed(numberFeeder.iterator).
  exec(http("create new computer").
    post("/computers").
    formParamMap(Map("name" -> "Codecentric Machine ${veryImportantId}"))
  )

This is very similar to the previous simulation, but this time the scenario() part is missing. As you can guess, searching for a computer is the next part:

private val checkComputer = feed(numberFeeder.iterator).
  exec(http("request computer").
    get("/computers?f=Codecentric Machine ${veryImportantId}").
    check(css("a:contains('Codecentric Machine ${veryImportantId}')", "href"))
  )

And now, we combine the different parts into a single scenario:

setUp(scenario("FinalSimulation").
  exec(openMainPage, addComputer, checkComputer).
  inject(atOnceUsers(numberUsers))
).protocols(httpConfig)

As you can see, we are free to combine the different parts in a scenario in any order we like. Additionally, we are free to introduce a pause between the different parts:

setUp(scenario("FinalSimulation").
  exec(openMainPage).
  exec(addComputer).
  pause(1).
  exec(checkComputer).
  inject(atOnceUsers(numberUsers))
).protocols(httpConfig)

It is still the same simulation as before (the only thing that changed is that the same users that did the creation perform the check), but in my opinion this is more readable. There is no need to know exactly what the individual parts do to get an overview of the scenario. Open main page, pause, add computer, pause and check computer is quite descriptive.

After running the tests, you can take a look at the results again. They should look familiar by now.

As mentioned before, and as you might have noticed by now, the tests all run on your machine, i.e. a single machine. This might not be the best option if you intend to simulate many, many users. The commercial version of Gatling offers a way to run your simulation in a distributed fashion, but if you do not want to or cannot pay, there is still a way to perform a distributed test.

Distributed Test

Luckily for us, the Gatling documentation mentions a way to run the simulation in a distributed fashion. Still, the documentation is quite short and the provided shell script is quite inflexible. Therefore I want to go into more detail here. For the distributed simulation, your local computer will serve as coordinator. We will start two Docker containers (everything has to be done with Docker these days, right? ;)), each one containing the Gatling bundle and serving as a worker.

The Dockerfile

The Dockerfile is nothing special. The majority is copied from the Docker SSH example. Only one line had to be changed from

RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

to

RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
The Script

The shell script that drives the distributed run starts with a parameter check and some configuration:

if [ -z "$1" ]
  then
    echo "Must provide param for test class"
    exit 1
fi

#Assuming same user name for all hosts

#Remote hosts list
HOSTS=( localhost:32782 localhost:32783 )

#Simulation options

#Assuming all Gatling installation in same path (with write permissions)

#Change to your simulation class name

This is just the basic setup. The simulation name has to be provided as a parameter, therefore $1 is being checked. Because Docker is running locally, the SSH ports of the hosts have to be made explicit in the HOSTS array. The GATHER_REPORTS_DIR is named like this because the results from the other machines will be placed there before combining them into one report. The next part copies the JAR and cleans up the old results:

echo "Starting Gatling cluster run for simulation: $SIMULATION_NAME"

echo "Cleaning previous runs from localhost"

for HOST in "${HOSTS[@]}"
do
  echo "Copying simulation JARs to host: $HOST"
  IFS=: read -r address port <<< "$HOST"
  scp -i id_rsa -P $port $GATLING_LIB_DIR/gatling-example_2.11-0.1.0-SNAPSHOT-tests.jar $USER_NAME@$address:/$GATLING_LIB_DIR
done

for HOST in "${HOSTS[@]}"
do
  echo "Cleaning previous runs from host: $HOST"
  IFS=: read -r address port <<< "$HOST"
  ssh -n -f -i id_rsa $USER_NAME@$address -p $port "sh -c 'rm -rf $GATLING_REPORT_DIR'"
done

Next to deleting the old results, the JAR is being copied into the lib directory. The last part is the most interesting one:

for HOST in "${HOSTS[@]}"
do
  echo "Running simulation on host: $HOST"
  IFS=: read -r address port <<< "$HOST"
  ssh -n -f -i id_rsa $USER_NAME@$address -p $port "sh -c 'nohup /$GATLING_RUNNER -nr -s $SIMULATION_NAME > gatling-run.log 2>&1 &'"
done

$GATLING_RUNNER -nr -s $SIMULATION_NAME > gatling-run-localhost.log

echo "Gathering result file from localhost"
ls -t $GATLING_REPORT_DIR | head -n 1 | xargs -I {} mv ${GATLING_REPORT_DIR}{} ${GATLING_REPORT_DIR}report
cp ${GATLING_REPORT_DIR}report/simulation.log ${GATHER_REPORTS_DIR}simulation.log

for HOST in "${HOSTS[@]}"
do
  echo "Gathering result file from host: $HOST"
  IFS=: read -r address port <<< "$HOST"
  ssh -n -f -i id_rsa $USER_NAME@$address -p $port "sh -c 'ls -t /$GATLING_REPORT_DIR | head -n 1 | xargs -I {} mv /${GATLING_REPORT_DIR}{} /${GATLING_REPORT_DIR}report'"
  scp -i id_rsa -P $port $USER_NAME@$address:/${GATLING_REPORT_DIR}report/simulation.log ${GATHER_REPORTS_DIR}simulation-${HOST}.log
done

for HOST in "${HOSTS[@]}"
do
  echo "Gathering run log file from host: $HOST"
  IFS=: read -r address port <<< "$HOST"
  scp -i id_rsa -P $port $USER_NAME@$address:gatling-run.log ./gatling-run-${HOST}.log
done

echo "Aggregating simulations"
$GATLING_RUNNER -ro reports

So, we connect to every host and start the Gatling runner in the background. Additionally, the output is collected in a file. The -nr option tells Gatling not to create any report, because we want to do that later when we have collected all results. Then we do basically the same on the local machine.
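The host:port splitting used throughout the script is plain bash (the <<< here-string requires bash, not a POSIX sh):

```shell
HOST="localhost:32782"

# Split "host:port" into two variables, as the script does before ssh/scp:
IFS=: read -r address port <<< "$HOST"

echo "$address"  # localhost
echo "$port"     # 32782
```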

After that, the newest entry in the results directory is moved to report (ls -t $GATLING_REPORT_DIR | head -n 1 | xargs -I {} mv ${GATLING_REPORT_DIR}{} ${GATLING_REPORT_DIR}report). The same is done on the remote machines. For the remote machines, the simulation logs are also copied to the local computer, appending the remote host name each time in order to avoid duplicate names. Finally, the command $GATLING_RUNNER -ro reports creates a single report including all the log files present in the directory.
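The “newest directory becomes report” step can be exercised in isolation with two hypothetical run directories (names here are made up for the demo):

```shell
# Two fake Gatling result directories; make run-old look old so that
# run-new is the most recent entry:
mkdir -p results/run-old results/run-new
touch -t 202001010000 results/run-old

# The script's rename step: newest entry (by mtime) becomes "report".
ls -t results | head -n 1 | xargs -I {} mv results/{} results/report

ls results
```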

When you run the script locally, it should produce some output telling you what it is doing. If everything goes well, a results directory appears under gatling-charts-highcharts-bundle-2.2.5 and, within it, reports/index.html. Since three machines executed the same simulation, the numbers should be multiplied by three.

And that’s it: that’s how you can execute your simulation in cluster mode. It is not completely convenient, and the implicit assumptions limit the portability, but feel free to make recommendations.

Now that you know the Gatling basics and can execute your simulations in different ways, what’s left?

Following Post

In the next blog post we will write our own Gatling module/protocol, like HTTP or JMS. If you ever want to test something that is not yet supported by Gatling, that could come in very handy for you. The next post will also involve more Scala code.

Until then, feel free to mention improvements or your own experiences. If you have any problems with the example do not hesitate to complain 😉
