2020/07/17 • 5 min read

Healthy recipe for a DevOps Dashboard

The adoption of CI/CD pipelines has made delivering code changes part of our daily routine. This makes traceability even more important. We verify and test each code change all the way to production, but we still want to keep track of what is happening to the system. How can we achieve this? One solution is a dashboard where all of these daily changes can be seen quickly and easily!

 

At La Redoute we work with Jenkins as our CI/CD tool and we already have a delivery dashboard in place.

Figure 1: Delivery Dashboard example

However, we are starting to move to GitLab for both source control and pipeline definition. So, in this article, we will go over the steps to create a deployment dashboard that incorporates information from both CI/CD tools.

Figure 2: Event information flow

Ingredients

The list may vary depending on the recipe. For this one in particular, here are the ingredients we will use:

 

Job events

This is the main ingredient. We will collect as many job events as we can from each pipeline run on our existing CI/CD tools.

For this dashboard, we will only use deployment events. However, this doesn’t mean the remaining events (builds, packaging, testing, …) are not important. All job events should be collected and made available so that they can serve different purposes. For instance, a Delivery dashboard.

 

3 Kafka Topics

Since we are using 2 CI/CD tools, we will use 1 topic to hold the Jenkins job events (and call it jenkins.job) and another for GitLab (let’s call it gitlab.job). These 2 topics will contain the job event information with no format change: the message you get is the message you produce.

We will use a 3rd topic (let’s call this one software.release.lifecycle) to merge the events from both tools. Here, the events will be in a standard format, regardless of their origin.

 

1 Jenkins Script

This is the script we already have running in Jenkins, producing events to the jenkins.job kafka topic. It is a solution we implemented a while back and it is still working today. We could have used a different approach here, but we opted to keep the script.

 

1 Standard Event Format

We need to create a standard format so that the events from the existing CI / CD tools can be put together. This means taking common information and placing it under the same names. For example:

– type of job (build, testing, deployment, …)

– component

– job result

– when the job started

– the amount of time it took

– who triggered the job

– commit change that triggered the job
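As an illustration, a deployment event in this standard format could look like the following. The field names here are our sketch for this article, not an exact schema:

```python
# Hypothetical example of a job event in the standard format.
# Field names and values are illustrative; adapt them to your own schema.
standard_event = {
    "type": "deployment",            # build, testing, deployment, ...
    "component": "checkout-service",
    "result": "success",
    "started_at": "2020-07-17T10:15:00Z",
    "duration_seconds": 84,
    "triggered_by": "jane.doe",
    "commit": "3f9c2ab",
    "source": "gitlab",              # which CI/CD tool produced the event
}
```

Keeping a `source` field makes it easy to tell Jenkins and GitLab events apart later, even though they share the same format.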

 

4 Microservices

The 1st microservice will handle the job events from GitLab (let’s call it gitlabjobmanager). This component receives the information through a POST endpoint and produces a message to the gitlab.job kafka topic. As said before, no transformation is done to the message, besides the addition of a header with the mandatory information for La Redoute events.

The 2nd microservice will consume the events from the gitlab.job kafka topic, format the event into the standard format and produce the result into the software.release.lifecycle kafka topic (we will call this one gitlabjobformatter).

The 3rd microservice will consume the events from the jenkins.job kafka topic. It will format the event into the same standard format used for GitLab events and produce the result into the same software.release.lifecycle kafka topic (let’s call it jenkinsjobformatter).

The 4th microservice will consume the events from the software.release.lifecycle kafka topic (let’s call this one releasearchiver) and add them to our next ingredient:
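To give an idea of the formatting step, here is a minimal sketch of what gitlabjobformatter could do: map a GitLab job-event payload onto standard field names. The GitLab field names below (`build_stage`, `build_status`, …) follow a typical job-event payload, but check what your GitLab version actually sends; the standard field names are hypothetical.

```python
def to_standard_event(gitlab_event: dict) -> dict:
    """Map a GitLab job-event payload to our standard event format.

    Both sides are illustrative: verify the payload fields against your
    GitLab version and adapt the output schema to your own standard.
    """
    return {
        "type": gitlab_event.get("build_stage"),        # e.g. deploy
        "component": gitlab_event.get("project_name"),
        "result": gitlab_event.get("build_status"),
        "started_at": gitlab_event.get("build_started_at"),
        "duration_seconds": gitlab_event.get("build_duration"),
        "triggered_by": (gitlab_event.get("user") or {}).get("name"),
        "commit": (gitlab_event.get("commit") or {}).get("sha"),
        "source": "gitlab",                             # tag the origin
    }

# Example payload, truncated to the fields used above
example = {
    "build_stage": "deploy",
    "project_name": "checkout-service",
    "build_status": "success",
    "build_started_at": "2020-07-17 10:15:00 UTC",
    "build_duration": 84.0,
    "user": {"name": "jane.doe"},
    "commit": {"sha": "3f9c2ab"},
}
print(to_standard_event(example)["result"])  # success
```

jenkinsjobformatter would do the equivalent mapping for the Jenkins payload, setting `"source": "jenkins"` instead.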

 

1 ElasticSearch Index

The index on ElasticSearch will hold the formatted job event information and will be the source for our delivery dashboard.
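A possible mapping for that index, again with hypothetical field names, could look like the sketch below. Using `keyword` fields keeps values unanalyzed, which makes filtering and grouping in Grafana straightforward; `date` lets the dashboard use the start time as the time field.

```python
# Hypothetical ElasticSearch index mapping for the formatted job events.
# Field names mirror the standard event format; adjust to your schema.
job_events_mapping = {
    "mappings": {
        "properties": {
            "type":             {"type": "keyword"},
            "component":        {"type": "keyword"},
            "result":           {"type": "keyword"},
            "started_at":       {"type": "date"},
            "duration_seconds": {"type": "float"},
            "triggered_by":     {"type": "keyword"},
            "commit":           {"type": "keyword"},
            "source":           {"type": "keyword"},
        }
    }
}
```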

 

1 Grafana

Grafana is where we create our delivery dashboard. This requires a datasource pointing to the ElasticSearch index containing the information we want to use.

 

Preparation

We already have all the ingredients on the table. The Jenkins events are being produced to the jenkins.job kafka topic, all 4 of our microservices are in place, and our ElasticSearch index is ready. Finally, we have Grafana ready for our delivery dashboard creation:

 

1) Add GitLab Webhook

First, we need to connect GitLab to our 1st microservice, gitlabjobmanager.

  • Open GitLab and navigate to a project. Choose Webhooks from the Settings menu option:
Figure 3: Navigating to Webhooks settings
  • Setting up the webhook:
    • Set gitlabjobmanager microservice URL, including the post method
    • A secret token can be specified so the service can validate the message origin
    • Choose the type of trigger for the events to receive. In this case, Job events
    • Specify if SSL verification should be used or not
    • Add Webhook
Figure 4: Configuring a webhook
    • Test the webhook by going to the bottom of the page: click on the “Test” dropdown and select the type of event you want to test

      Figure 5: Webhook successfully added

Note: Make sure your service is accessible from outside your internal network. Otherwise, GitLab won’t be able to reach it.

Important: It is also possible to add the webhook at the group level, which is what we’ve done. It is configured the same way as for a project.
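On the receiving side, gitlabjobmanager should check the incoming request before producing anything to Kafka. GitLab echoes the configured secret in the X-Gitlab-Token header and names the trigger in X-Gitlab-Event. A minimal validation sketch (the function and token are ours, not a GitLab API):

```python
import hmac

SECRET_TOKEN = "change-me"  # the secret token configured on the webhook

def accept_webhook(headers: dict) -> bool:
    """Validate an incoming GitLab webhook before processing it.

    Checks the shared secret (constant-time comparison) and that the
    trigger is a job event.
    """
    token_ok = hmac.compare_digest(
        headers.get("X-Gitlab-Token", ""), SECRET_TOKEN)
    is_job_event = headers.get("X-Gitlab-Event") == "Job Hook"
    return token_ok and is_job_event

print(accept_webhook({"X-Gitlab-Token": "change-me",
                      "X-Gitlab-Event": "Job Hook"}))  # True
```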

 

2) Check job events collection

Make sure the GitLab job event information is reaching the gitlab.job kafka topic whenever a pipeline is triggered. At this point, we know the events from Jenkins should already be available.

Once the job events are being received, confirm that the messages arriving on the software.release.lifecycle kafka topic have the format you expect and that you can tell which CI/CD tool sent each event.

Finally, verify that the formatted job events are being replicated to the ElasticSearch index.
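A quick sanity check you could run against consumed messages, assuming the hypothetical standard field names from earlier, might look like this:

```python
# Minimal sanity check for formatted job events; field and source names
# are illustrative and should match your own standard format.
REQUIRED_FIELDS = {"type", "component", "result", "started_at", "source"}
KNOWN_SOURCES = {"jenkins", "gitlab"}

def check_standard_event(event: dict) -> list:
    """Return a list of problems found in a formatted job event."""
    problems = [f"missing field: {name}"
                for name in sorted(REQUIRED_FIELDS - event.keys())]
    if event.get("source") not in KNOWN_SOURCES:
        problems.append(f"unknown source: {event.get('source')}")
    return problems
```

An empty result means the event is ready for the releasearchiver to index.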

 

3) Create the delivery dashboard

Presentation is very important! So all we need now is to carefully plate the information!

We will do that by creating a dashboard on Grafana. This is just a simple presentation example, but you can get fancier!

Note that you will need an ElasticSearch datasource defined in Grafana so you can get the information on your dashboard.

  • Open Grafana and add a new dashboard
Figure 6: New dashboard on Grafana
  • On an empty dashboard, you get a new panel where you can choose Convert to row
Figure 7: Adding a row
    • The row will hold the panels you are going to create later. You can change its name by moving the mouse over the row and clicking on the settings icon:
Figure 8: Accessing row settings
    • Change the Title as needed
Figure 9: Changing row settings
  • Go to the dashboard settings
Figure 10: Accessing dashboard settings
  • Add the variables you need for the dashboard.
Figure 11: Adding new variable
    • Datasource
      • Depending on the number of panels you are going to add, it is good to define a datasource variable that all panels can use. This way, if you need to change the datasource, you will not need to change it panel by panel, but only on the variable.
Figure 12: Adding new datasource variable
    • Other variables
      • Click on New
Figure 13: Adding a second variable
      • For example, Environment:
Figure 14: Adding an environment variable
      • You can add other variables, like a component filter
  • Add new panel
Figure 15: Adding new panel
    • Choose Visualization

      Figure 16: Choosing visualization
    • Choose Graph

      Figure 17: Adding a graph
    • Go to the Query tab
      • Specify your datasource variable and enter your query. For example, filtering by environment

        Figure 18: Adding a query to the graph
    • Go to the General tab and specify the title and Description as needed

      Figure 19: Graph settings
    • Move panel to row and adjust size of the panel as needed

      Figure 20: Moving the graph
  • Add new panel

    Figure 21: Adding new panel
    • Choose Visualization

      Figure 22: Choosing visualization
    • Choose Table

      Figure 23: Adding a table
    • Specify your datasource variable and enter your query. Use the same query you’ve used for the Graph.

      Figure 24: Adding a query to the table
    • Go to the Visualization tab and click on the + button to add the columns you want to display. You can change the columns style according to the information being displayed.

      Figure 25: Adding columns to the table
    • As before, you can go to the General tab and specify a title and description

      Figure 26: Table settings
  • Move panel to row and adjust size of the panel as needed

    Figure 27: Moving the table
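For reference, the query shared by both panels can stay very simple. With an ElasticSearch datasource, Grafana accepts Lucene-style query strings; assuming the indexed events carry `type` and `environment` fields and the dashboard defines an Environment variable, it could be something like:

```python
# Hypothetical Lucene-style query for both panels; field names and the
# $Environment variable must match your own index and dashboard setup.
query = "type:deployment AND environment:$Environment"
```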

 

Bon appétit!

That’s it! Just don’t forget to save your new dashboard and it is good to go!

A simple Deployment Dashboard to keep track of the system changes that are happening right now!
