This repository has been archived by the owner on Jul 17, 2024. It is now read-only.

5G EVE - WP4 - Monitoring Dockerized environment for testing Monitoring & Data Collection tools

This repository gathers the tools used to test the Monitoring & Data Collection features of the 5G EVE project, within the WP4 scope, based on the Elastic Stack. To simulate a complete workflow, it also includes tools related to other components of the 5G EVE platform: the Data Shippers (based on Filebeat), which collect metrics from the monitored components deployed in the site facilities, and the Data Collection Manager (based on Kafka), which gathers the metrics collected by the Data Shippers and delivers them, following a publish-subscribe paradigm, to other components interested in that data.

The toolchain for the demo is depicted in the following diagram:

Toolchain for the demo

And the architecture used for the demo is the following:

Demo architecture

Note that:

  • This demo was carried out using two different VMs: one holding the filebeat repository and the other holding the kafka-elk repository. However, both can run on the same server (physical or virtual) if needed. In any case, you have to replace the <KAFKA_IP> string present in both the filebeat.yml and docker-compose.yml files with the IP of the server hosting the kafka-elk repository.
  • The process of obtaining the .txt file ingested by Filebeat is out of the scope of this demo, as depicted in the architecture diagram.
  • The server(s) used for the demo need(s) to have Docker and Docker Compose installed.
  • Be sure to follow the recommendations provided by Elastic regarding Running Elasticsearch from the command line in production mode.
  • This demo is a Docker-based adaptation of the following tutorial: Deploying Kafka with the ELK Stack. Although that tutorial uses a slightly different environment, it is a useful reference if you are starting with this set of tools.
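The placeholder replacement can be done with sed. The sketch below runs on a throwaway file so it is easy to try; the `hosts:` line is illustrative, not the repository's actual file content. Run the same sed command against kafka-elk/docker-compose.yml and filebeat/config/filebeat.yml.

```shell
# Replace the <KAFKA_IP> placeholder with the real server address.
KAFKA_IP=192.168.4.20                      # example address; use your own
printf 'hosts: ["<KAFKA_IP>:9092"]\n' > /tmp/kafka-ip-demo.yml
sed -i "s/<KAFKA_IP>/${KAFKA_IP}/g" /tmp/kafka-ip-demo.yml
cat /tmp/kafka-ip-demo.yml                 # hosts: ["192.168.4.20:9092"]
```

Note that `sed -i` with no suffix argument is the GNU sed form; on BSD/macOS sed use `sed -i ''`.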

Test stages

The different test stages are presented below, together with the commands used in each stage. Note that the complete demo is also available as a set of recorded videos in the /resources/videos folder.

1. Cleaning up the scenario

# in the server which will hold the kafka-elk repository
cd kafka-elk
sudo docker-compose down
sudo docker volume prune # if you want to start the demo from zero, deleting all the data saved in Elasticsearch

# in the server which will hold the filebeat repository
cd filebeat
sudo docker container prune

2. Building the Docker images

# in the server which will hold the kafka-elk repository
cd kafka-elk
# DO NOT FORGET TO CHANGE <KAFKA_IP> STRING IN docker-compose.yml FILE
sudo docker-compose build

# in the server which will hold the filebeat repository
cd filebeat
# DO NOT FORGET TO CHANGE <KAFKA_IP> STRING IN config/filebeat.yml FILE
sudo docker build -t filebeat .

3. Deploying the scenario

# in the server which will hold the kafka-elk repository
cd kafka-elk
sudo docker-compose up
# in a new terminal
sudo docker exec -it kafka /bin/bash
# within the kafka container terminal
kafka-topics.sh --list --zookeeper 192.168.4.20:2181 # replace 192.168.4.20 with your <KAFKA_IP>; topictest will be printed
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topictest # consume the messages sent to Kafka

# in the server which will hold the filebeat repository
cd filebeat
sudo docker run filebeat

# JSON strings will be sent from the filebeat container to the kafka container, with the following format:
# {"metrics_series": 1, "resource_uuid": "agv1", "-1": -1, "metrics_data": "-1.15", "metric_name": "deviation", "time": "71.321902"}
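Each of those messages is a single-line JSON object, so individual fields can be pulled straight out of the consumer output for quick checks. A minimal sketch with grep, using the sample line shown above:

```shell
# Extract the metrics_data field from one JSON message.
msg='{"metrics_series": 1, "resource_uuid": "agv1", "-1": -1, "metrics_data": "-1.15", "metric_name": "deviation", "time": "71.321902"}'
echo "$msg" | grep -o '"metrics_data": "[^"]*"'
# "metrics_data": "-1.15"
```

For anything beyond a quick check, a proper JSON tool such as jq is preferable to grep.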

4. Presenting the metrics in Kibana

After logging in to Kibana with a browser (URL: http://<KAFKA_IP>:5601), the first step is to define the index pattern where the data is being received and saved; this pattern matches the index written by the Logstash pipeline.

(screenshot: defining the index pattern in Kibana)

Then, you can check that data has been received correctly in Elasticsearch.

(screenshot: data received in Elasticsearch)

Finally, you can play with the data by generating different graphs. For example, the following picture displays the time vs. metrics_data graph (average plus lower and upper standard deviation).

(screenshot: time vs. metrics_data graph in Kibana)
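The same aggregation Kibana draws (average plus/minus one standard deviation) can be reproduced by hand, which is handy for sanity-checking a graph. A small sketch with awk over made-up sample values (not taken from the demo data):

```shell
# Compute average and one-standard-deviation bounds over a list of values.
printf '%s\n' 2 4 4 4 5 5 7 9 | awk '
  { sum += $1; sumsq += $1 * $1; n++ }
  END {
    mean = sum / n
    sd = sqrt(sumsq / n - mean * mean)   # population standard deviation
    printf "avg=%.4f lower=%.4f upper=%.4f\n", mean, mean - sd, mean + sd
  }'
# avg=5.0000 lower=3.0000 upper=7.0000
```

To match Kibana's sample standard deviation exactly you would divide by n - 1 instead of n.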

Copyright

This work has been done by Telcaria Ideas S.L. for the 5G EVE European project under the Apache 2.0 License.
