
Elasticsearch Cluster in a Jiffy

5.2.2016 | 4 minutes of reading time

The setup of an Elasticsearch cluster can differ greatly depending on the scenario. In order to quickly deliver visible, individually customized results to our customers, we have automated the installation process for Elasticsearch clusters and can now bring up a local demo cluster at the flick of a switch.

A local cluster in less than 30 minutes

An Elasticsearch cluster can be customized for a specific scenario in many different ways, e.g. with different plugins, security settings or load-balancing components. Since our customers differ widely in what they consider the most important requirements, there is no "one and only" cluster setup we could bring along as a ready-made package. Nevertheless, in order to provide an individually customized solution to our customers as early as possible, we have automated the installation process and made it flexibly configurable in one go. Setting up a fully customized cluster on our notebook should take no more than 30 minutes.

From notebook to production

The installation process for a full ELK cluster is indeed sophisticated, but it is not at the center of our customers' attention, so it should be handled as quickly and smoothly as possible. We therefore designed the installation process in a way that allows us to transfer a prepared cluster from our local development environment onto a production server.

Local default configuration with Vagrant

Let's start by bringing up a simple cluster with Vagrant. We need the following packages to be installed:

  • lxc
  • VirtualBox (I’m using v4.3.36)
  • Vagrant (I’m using v1.7.4) with the “hosts” plugin (vagrant plugin install vagrant-hosts)
  • java keytool (contained e.g. in openjdk-7-jre)

Attention: The Vagrantfile is designed for Debian-based systems; on other platforms you might need to adjust the network settings.
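
On a Debian-based host, the prerequisites can be installed roughly as follows; the package names are assumptions and may vary with your distribution, and VirtualBox and Vagrant can also be installed from their websites to get the versions mentioned above:

sudo apt-get install lxc virtualbox openjdk-7-jre
vagrant plugin install vagrant-hosts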

Now we clone the elastic project from https://github.com/tobiasschaber/elastic , change into the elastic directory and modify the file "prepare-ssl.sh". We need to adjust the ELKINSTALLDIR path so that it points to the elastic directory we just cloned:
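
The adjusted line could look like this; the path is hypothetical, use the location of your own clone:

ELKINSTALLDIR=/home/user/elastic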

Here we go! Just type the following commands:
./prepare-ssl.sh (confirm once with ENTER)
vagrant up elkdata1 elkmaster1 elkclient1

A nice setup with little effort

While the cluster is installing, I'll explain what is currently happening: first, we executed a shell script which creates some SSL artifacts and places them in the "ssl" directory. If we later perform the installation on a production server, we just have to replace these self-generated artifacts with "real" ones, which will then be rolled out instead.
After that, Vagrant starts three VMs, an Elasticsearch setup with a master, a client and a data node, which includes SSL encryption and the license, watcher and shield plugins, plus Kibana with HTTPS and the marvel and timelion plugins.
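
To give an idea of what prepare-ssl.sh roughly does for you: a self-signed keystore for a node can be created with the java keytool like this (alias, filename and passwords here are hypothetical examples, not the values the script actually uses):

keytool -genkeypair -alias elkmaster1 -keyalg RSA -keysize 2048 \
  -dname "CN=elkmaster1" -validity 365 \
  -keystore ssl/elkmaster1.jks -storepass changeit -keypass changeit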

Kibana will not start automatically, so we have to log in to the client node via SSH:
vagrant ssh elkclient1
and start the Kibana service:
sudo systemctl start kibana

Kibana is now reachable at https://10.0.3.131:5601/ (login: esadmin/esadmin)
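
To quickly verify from the host that Kibana answers over HTTPS, curl can be used; the -k flag skips certificate verification, which is necessary because of the self-signed certificate:

curl -k -u esadmin:esadmin https://10.0.3.131:5601/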

A little more is not difficult – adding Logstash

As a central component of the ELK stack, we now add a Logstash node to our cluster:
vagrant up logstash1
After booting this Vagrant box, an SSL-connected Logstash node with a working minimal configuration is available. We can try it out by logging into the Logstash machine via SSH:
vagrant ssh logstash1
and writing something into the file "/tmp/input":
echo "Hello World" >> /tmp/input

After a short time, this message should appear in the Kibana index "default-*".
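
For reference, the Logstash pipeline behind this behaves roughly like the following sketch; the host address and index pattern are assumptions derived from the Kibana index above, the actual configuration is rolled out by the installation:

input {
  # read everything that is appended to the test file
  file {
    path => "/tmp/input"
    start_position => "beginning"
  }
}
output {
  # ship to the SSL-enabled cluster; address and index name are assumptions
  elasticsearch {
    hosts => ["10.0.3.131:9200"]
    ssl => true
    index => "default-%{+YYYY.MM.dd}"
  }
}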

Migration to production servers

For the installation process, we currently rely on Puppet. However, Vagrant does not execute the Puppet modules directly; instead, we encapsulated the installation behind a shell script. The advantage is that later it is enough to execute the appropriate shell script on all nodes to install the complete production cluster; no full Puppet infrastructure is required.
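
Such a wrapper essentially boils down to a masterless puppet apply call, roughly like the following sketch; the paths and manifest name are hypothetical:

#!/bin/sh
# apply the local Puppet modules without a Puppet master
puppet apply --modulepath=/opt/elastic/modules /opt/elastic/manifests/site.pp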

Even more capabilities – installation by configuration

At this point, those who have enough RAM in their notebooks can add further nodes to their Vagrant cluster (elkmaster2, elkdata2+3, elkclient2). It is also possible (even if it is a little harder) to replace the Logstash node with a combination of shipper and indexer nodes and place a Redis cluster between them, including SSL encryption. For this, some changes have to be made to the Hiera files, which are located in the "hiera" directory. The file "common.yaml" controls the configuration for all nodes; additionally, there are server-specific config files in the "nodes" directory.
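
Adding the additional nodes is again a single command; the node names are taken from the text above, with elkdata3 assumed analogous to elkdata2:

vagrant up elkmaster2 elkdata2 elkdata3 elkclient2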

In the Hiera files we can adjust the configuration for the whole cluster. Here are some examples; a sketch of such settings follows the list:

  • SSL with HTTPS yes/no
  • Elasticsearch/Kibana/Plugin version
  • List of elasticsearch and kibana plugins to be installed
  • Authentication
  • Redis incl. SSL yes/no
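
As a rough sketch, such settings in common.yaml could look like the following; all key names and values here are hypothetical and have to be checked against the actual Hiera files in the repository:

# common.yaml - cluster-wide settings (keys are illustrative)
elasticsearch::version: "2.1.1"
elasticsearch::plugins:
  - license
  - watcher
  - shield
kibana::https: true
redis::enabled: false
redis::ssl: false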

Almost every feature of our cluster can be configured here. With this setup, it is now possible to ramp up a cluster with a very broad range of options in less than 30 minutes, production-ready.
