The agile software methodology is becoming increasingly mainstream. As a recent Dutch survey [AS] shows, nearly all respondents are familiar with the agile methodology, and its usage for software development within their organizations is very high.
What it also shows, however, is that only a third of the organizations have 'frequent' release cycles of less than one month. So while the development teams might frequently release working software, it rarely reaches production, and thus adds no actual value for the customer.
Continuous Delivery aims to solve this by creating an automated build pipeline, in which new software builds automatically progress down the line as long as all tests succeed. Any build that reaches the end of the pipeline can be put into production automatically or at the push of a button.
To make this build pipeline work, several things are necessary: the process must be automated as much as possible, it must be fully reproducible, and it must provide fast feedback. If these requirements are fulfilled, a fast and stable build pipeline can be created, delivering software of high quality.
But therein also lies the biggest problem: testing. To get the required feedback and confidence in software quality, many tests are needed to achieve full code coverage. Performance testing is even more important, and a harder area to cover.
With agile development, new features come fast and in small increments. But this also means that the performance impact of these features is often very small and hardly measurable.
Tools such as JMeter [JM] or Grinder [GR] are great for putting load on your system and measuring performance from outside the application, as a black box, but that is simply insufficient to measure the small fluctuations caused by new features. Tooling such as AppDynamics [AD] can provide this insight, because it monitors inside the application in a fine-grained way. Let me explain why this is necessary.
During the capacity test phase in the continuous delivery pipeline, you want to match the production environment as closely as possible. That means using production-like hardware, production-like load, and testing the entire end-to-end application. As is probably familiar, modern applications consist of many separate components, and during development, every one of these components might have changed.
These changes may have had a positive or negative impact on performance, on the component itself and/or on the application as a whole. When measuring from outside the application, performance improvements might be cancelled out by regressions elsewhere, eventually showing more or less the same measurements. (And even if you do see degrading performance, which of the components is the 'bad guy'?)
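To illustrate how this cancellation happens, here is a small Python sketch with hypothetical per-component latency changes between two builds (the component names and numbers are invented for illustration, not real measurements):

```python
# Hypothetical latency changes (ms) per component between two builds.
component_delta_ms = {
    "frontend": -40,       # e.g. a new cache made the frontend faster
    "order-service": +35,  # e.g. a new feature slowed order processing
    "database": +5,
}

# A black-box tool like JMeter only sees the end-to-end change:
end_to_end_delta = sum(component_delta_ms.values())
print(end_to_end_delta)  # 0 ms: the order-service regression is masked

# Per-component (white-box) monitoring still exposes the regression:
regressions = {name: d for name, d in component_delta_ms.items() if d > 0}
print(regressions)  # {'order-service': 35, 'database': 5}
```

Measured from the outside, this build looks unchanged; measured per component, the regression in the order service is immediately visible.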
Because AppDynamics lets you monitor specific business transactions or look at the performance of specific tiers, these internal fluctuations can be discovered, leading to early feedback about the quality of the application.
The same holds true for errors or other problems occurring inside the application. With its extensive monitoring, AppDynamics can discover almost everything going wrong, whereas JMeter can only look at the HTTP response code to determine whether a problem has occurred.
Now how do we integrate capacity testing into the continuous delivery pipeline? One of the better-known build tools is Jenkins CI [JE], which can be extended with several plugins. With some configuration, Jenkins can be set up to create a complete build pipeline (for example, as explained here [CD]) for bringing software releases to production.
Recently, an AppDynamics plugin [AP] for Jenkins has been released, which uses the RESTful API of AppDynamics to fetch measurements.
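To give an impression of what such a fetch looks like, here is a minimal Python sketch that builds a query against the AppDynamics `metric-data` REST resource. The controller host, application name and metric path are hypothetical placeholders; the parameter names follow the documented REST API, but treat the details as an assumption and check them against your controller version:

```python
from urllib.parse import urlencode

def metric_data_url(controller, app, metric_path, duration_mins):
    """Build a URL for the AppDynamics metric-data REST resource.
    All arguments are caller-supplied; nothing is fetched here."""
    params = urlencode({
        "metric-path": metric_path,
        "time-range-type": "BEFORE_NOW",   # look back from 'now'
        "duration-in-mins": duration_mins, # e.g. the load-test duration
        "output": "JSON",
    })
    return f"{controller}/controller/rest/applications/{app}/metric-data?{params}"

url = metric_data_url(
    "https://appd.example.com",  # hypothetical controller host
    "webshop",                   # hypothetical application name
    "Overall Application Performance|Average Response Time (ms)",
    15,
)
print(url)
```

An authenticated HTTP GET on this URL returns the metric values for the chosen time range; the Jenkins plugin performs this call for you after the load test has run.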
Integrating into the Build Pipeline
The plugin can be configured to grab the measurements after a load generator has run, giving an overview of how the system performed during the capacity test.
Thresholds can also be specified, so that Jenkins automatically fails the build as soon as performance has degraded too much. This gives you the quick feedback you need for the build pipeline to be successful and deliver high-quality software into production.
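Conceptually, such a threshold check is very simple. This Python sketch shows the idea; the 20% tolerance and the numbers are illustrative assumptions, not plugin defaults:

```python
def build_passes(baseline_ms, measured_ms, max_degradation=0.20):
    """True when the measured average response time stays within the
    allowed degradation over the last good build's baseline."""
    return measured_ms <= baseline_ms * (1 + max_degradation)

print(build_passes(250, 260))  # True: 260 ms is within 20% of 250 ms
print(build_passes(250, 320))  # False: 320 ms breaches the 300 ms limit

# In a Jenkins step, a script like this would exit non-zero on a failed
# check; Jenkins treats the non-zero exit code as a failed build.
```

The plugin wires exactly this kind of comparison into the build result, so a performance regression stops the pipeline just like a failing unit test would.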