
The state of APIOps and the deployment of API definitions

12.10.2022 | 6 minutes of reading time

Having shown in one of my posts on Medium, and also in my last post here on the blog, that API design is not an easy task and involves a lot of work, I'm moving on to another complicated area of APIs: APIOps and, in more detail, the deployment of API definitions. Some readers may wonder why, as there is already a post of mine on this blog about APIOps from January 2021. Since that post was written at a different level of depth, it is time to revisit the implementation of APIOps.

APIOps, what does that mean exactly?

APIOps denotes a process that prepares API definitions for deployment. This process is supposed to ensure that the definitions are valid and tested against the API guidelines.

For APIOps, it is basically irrelevant which CI/CD tool supports the process. For the following discussion, the push of an API definition to the Git repository represents the starting point. With APIOps, we have exactly one goal in terms of deployment: to distribute the API definition artifact to different components, or to provision components based on that artifact.

CI of APIs

We can achieve this with Continuous Integration (CI) as part of APIOps: we first build the API definition, which means resolving the references within it into a single document, and then validate and test the result.

Build

An API definition can be composed of different sub-documents by using the $ref property. For the following steps it is important to have a merged API definition in which the respective references are resolved. This can easily be achieved with the join command of redocly-cli, as shown below.
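
A minimal invocation could look like this; the file names are placeholders. Note that redocly-cli also offers a bundle command, which resolves the $refs of a single root document into one file:

# Merge the sub-documents into a single definition
npx @redocly/cli join main.yaml paths.yaml -o dist/api.yaml

# Alternative: resolve all $refs of one root document into a single file
npx @redocly/cli bundle main.yaml -o dist/api.yaml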

Validate

In the Validate stage, the terms linting and validation are often used. Therefore, I would like to clarify the difference between a linter and a validator in advance. A linter looks for suspicious or dangerous code and checks the code against design guidelines. A linter must be able to parse the code, which means it also validates against a language specification. This means that, to some extent, a linter is a validator with additional functions. A validator makes sure the code conforms to the language specification; it doesn't care about style or logic.
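
A small example to make this tangible: the following definition is valid OpenAPI, so a pure validator accepts it. A linter equipped with the ruleset shown below, however, would flag it, because the version "v1" violates the semantic versioning rule (the API name is invented for illustration):

openapi: 3.0.3
info:
  title: Orders API   # hypothetical API, for illustration only
  version: v1         # passes a validator, but is rejected by the linter rule below
paths: {}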

The step "Validate" can now be done, supported by different tools. Currently Spectral as well as redocly-cli are very visible here. Using redocly-cli has proven to be a good recommendation from the project experience. The supplied rules react more exactly to violations against the OpenAPI specification. With both tools, it is basically a matter of creating rule sets, which can still be extended with your own written functions. Which tool you choose is again a matter of taste. I generally use both in my projects, since I find it easier to create rules in Spectral.

extends: spectral:oas
rules:
  contact-properties: info
  include-title:
    description: Info section has to include a title as identifying name of the API.
    given: $.info
    severity: error
    then:
      field: title
      function: truthy
  include-version:
    description: Info section has to include a version following semantic rules.
    given: $.info
    severity: error
    then:
      field: version
      function: truthy
  valid-semantic-version:
    description: Versions are restricted to the format <MAJOR>.<MINOR>.<PATCH>, e.g. 1.0.0. See https://developer.docs.company.net/api/guidelines.html#semantic-versioning for details.
    given: $.info
    severity: error
    then:
      field: version
      function: pattern
      functionOptions:
        match: ^([0-9]+)\.([0-9]+)\.([0-9]+)$

Within the CI pipeline, the tools are used via their respective command line interfaces (CLI). However, in order to validate an API definition, corresponding guidelines for the design of APIs are needed. On the web we find a lot of them, so please adapt them instead of just copying them ;). Once a set of rules for linting and validation has been derived from the guidelines, an API definition can be validated accordingly. Exactly this result decides the progress of the pipeline, and thus whether the definition meets the requirements for good APIs within the organization or the company. Good API definitions deserve to be tested afterwards ;).
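
Within a pipeline job, the calls could look like this (the file names are assumptions):

# Lint the merged definition with Spectral, using the ruleset shown above
npx @stoplight/spectral-cli lint dist/api.yaml --ruleset .spectral.yaml

# Lint and validate the same definition with redocly-cli
npx @redocly/cli lint dist/api.yaml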

Test

When testing APIs, a distinction needs to be made between three different types of tests: Contract, Content, and Variation. All three have different goals. Contract testing is about validating the API definition from the consumer's point of view, whereas Content testing focuses on validating the response values. Both approaches test for success, the so-called "happy path". But what if exactly this is not desired and we want to check whether the expected responses are also returned in error cases? Then we need to take a look at Variation testing.

In projects it has proven very useful not to write these test procedures as an individual test suite based on frameworks like mocha and chai, but to describe the respective tests in a configuration. Within the pipeline, Portman is used for testing, since it fulfills exactly these requirements:

{
  "version": 1.0,
  "tests": {
    "contractTests": [],
    "contentTests": [],
    "variationTests": []
  },
  "globals": {
    "stripResponseExamples": true
  }
}
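
To give an idea of what such a configuration can contain, here is a sketch of a filled contractTests section. The check names follow the Portman documentation; the wildcard in openApiOperation targets all operations of the definition:

{
  "version": 1.0,
  "tests": {
    "contractTests": [
      {
        "openApiOperation": "*::/*",
        "statusSuccess": { "enabled": true },
        "contentType": { "enabled": true },
        "schemaValidation": { "enabled": true }
      }
    ]
  }
}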

By using Portman, a Postman collection is created directly after a successfully executed contract test, and this collection is itself part of the tested artifacts. In the previous Validate block, an API definition was created in which all references are resolved. Since this is the basis for the contract test, two tested artifacts can now be delivered to interested consumers. In addition to the types of tests shown, performance tests are also of interest for APIs. After all, it is important for the respective provider to know how resiliently an API behaves under load. For this purpose, the created Postman collection can be reused: with the help of postman-to-k6, it is possible to create a k6 script from a collection and execute it with k6. In a following post I will go into more detail about k6, hence only this short summary for now.
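
Put together, the test stage could run like this (the file names are assumptions):

# Generate a Postman collection with the configured tests injected
npx @apideck/portman -l dist/api.yaml -c portman-config.json -o collection.json

# Convert the collection into a k6 script and execute it as a load test
npx @apideck/postman-to-k6 collection.json -o k6-script.js
k6 run k6-script.js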

CD of APIs

The final block of an APIOps pipeline is the deployment to the corresponding target environments. The first question here is whether the API definition can simply be pushed towards the target environment, i.e. whether the target environment is able to read and process the OpenAPI specification. Frequently, OpenAPI extensions are used for this purpose (see the sketch after this paragraph). However, it may also be that the target environments themselves are to be rolled out on the basis of Configuration as Code (CoC), up to Infrastructure as Code (IaC). In the context of deploying API definitions, IaC does not make things easier in terms of API architecture, since the target environments are infrastructure themselves. At this point, however, I would like to conclude the topic of deployment for now. Due to this complexity, we can see that this phase of an APIOps pipeline must always be considered in relation to the architecture and also the infrastructure.
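
As an illustration: a hypothetical vendor-specific x- property that a gateway could evaluate during deployment. The property name x-gateway-rate-limit is invented; real gateways define their own extensions:

openapi: 3.0.3
info:
  title: Orders API      # hypothetical API, for illustration only
  version: 1.0.0
paths:
  /orders:
    get:
      operationId: listOrders
      x-gateway-rate-limit: 100   # invented extension; actual names depend on the product
      responses:
        "200":
          description: A list of orders.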

Conclusion

By now it should be clear that APIOps is not a trivial process that can be set up with a flick of the wrist. Instead, it has many facets that cannot be covered by a single tool or platform. A short list of the tools used:

  • redocly CLI
  • Stoplight Spectral
  • Portman
  • postman-to-k6
  • k6

The ideas around APIOps must evolve from within a team. With APIOps, such a team also delivers a segment of the "Golden Path" that many developers desire. This is where APIOps also addresses the much-hyped Developer Experience. But APIOps is not finished with this post, because so far we only have the classic REST(ful) APIs in mind, and we have not looked at security issues yet. So there are still doors to open. Perhaps you already have ideas on how you want to tackle these topics? Or do you have alternative tools in your toolbox? I appreciate your comments.

