This demo uses Kibana to visualize data collected with AppScope, an open source, runtime-agnostic instrumentation utility for any Linux command or application.
Contents
- Prerequisites
- Overview
- Cribl Stream configuration
- Preparation
- Using the demo
- Cleaning up after a session
For this demo environment, you will need Docker, `bash`, `openssl`, and `curl`.
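As a quick sanity check (a hypothetical helper, not part of the demo scripts), you can verify that each prerequisite is on your `PATH`:

```shell
# Report whether each prerequisite listed above is installed.
for cmd in docker bash openssl curl; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```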
This demo environment uses:
- AppScope to instrument applications running in the demo environment.
- Cribl Stream as an agent.
- Elasticsearch to store data.
- Kibana to visualize metrics and events.
By default, the services will be available at the following URLs:
Service | URL |
---|---|
Cribl Stream | http://localhost:9000 |
Elasticsearch | http://localhost:9200 |
Kibana | http://localhost:5601 |
If you want to modify these settings, edit the `.env` file.
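For example, a `.env` along these lines would remap the host ports. Only `CRIBL_HOST_PORT` is confirmed by the `docker-compose.yml` excerpt later in this document; the other variable names are assumptions, so check your `.env` file for the actual names:

```shell
# Hypothetical .env overrides for the host-side ports.
# CRIBL_HOST_PORT appears in docker-compose.yml; verify the other names in .env.
CRIBL_HOST_PORT=9000
ELASTIC_HOST_PORT=9200
KIBANA_HOST_PORT=5601
```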
The diagram below depicts the demo cluster.
The diagram below depicts the Cribl Stream configuration.
The demo provides three interfaces, each of which runs in its own Docker container:
- `appscope01` for scoping an entire bash session, using a TCP connection.
- `appscope02` for scoping an individual command, using a TCP connection.
- `appscope01_tls` for scoping an individual command, using a TLS-secured connection.
You can opt to use a fourth interface, which simply runs AppScope on the host in the usual way (without running a Docker container).
If this is your desired option, you must configure a Cribl Stream port to receive data from AppScope. Note that Cribl Stream runs in its own Docker container, named `cribl01`.
To do this, edit `docker-compose.yml` to match the following:
cribl01:
...
ports:
- "${CRIBL_HOST_PORT:-9000}:9000"
- 10090:10090
- 10091:10091
This exposes ports `10090` and `10091` to the host. AppScope (running on the host) can use these ports to send data:
- For TCP: `tcp://127.0.0.1:10090`
- For TLS: `tcp://127.0.0.1:10091`
For TLS setup details, note that the CA certificate file `domain.crt` is generated automatically by the `./start.sh` script.
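Scoping a command on the host might then look like the sketch below. The `SCOPE_CRIBL*` environment variable names are taken from AppScope's documentation, but confirm them against the docs for your AppScope version before relying on them:

```shell
# Plain TCP: point AppScope at the port exposed by cribl01.
SCOPE_CRIBL=tcp://127.0.0.1:10090 ldscope curl -s https://example.com

# TLS: enable TLS and trust the CA certificate generated by ./start.sh.
SCOPE_CRIBL=tcp://127.0.0.1:10091 \
SCOPE_CRIBL_TLS_ENABLE=true \
SCOPE_CRIBL_TLS_CA_CERT_PATH=./domain.crt \
ldscope curl -s https://example.com
```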
To build the demo:
./start.sh
To confirm that everything works correctly:
docker ps
You should see results similar to this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
76af61bde578 cribl/scope:1.1.0 "bash" 4 seconds ago Up 2 seconds appscope01_tls
1282f039efd9 cribl/scope:1.1.0 "bash" 4 seconds ago Up 2 seconds appscope02
b7611e8bdfe9 docker.elastic.co/elasticsearch/elasticsearch:7.17.0 "/bin/tini -- /usr/l…" 4 seconds ago Up 2 seconds 9200/tcp, 9300/tcp es03
c8e5d96b909f cribl/cribl:3.5.1 "/sbin/entrypoint.sh…" 4 seconds ago Up 2 seconds 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp cribl01
4a4da91d6562 docker.elastic.co/elasticsearch/elasticsearch:7.17.0 "/bin/tini -- /usr/l…" 4 seconds ago Up 2 seconds 9200/tcp, 9300/tcp es02
f171487fbf47 docker.elastic.co/kibana/kibana:7.17.0 "/bin/tini -- /usr/l…" 4 seconds ago Up 2 seconds 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp kib01
b5137275cb38 docker.elastic.co/elasticsearch/elasticsearch:7.17.0 "/bin/tini -- /usr/l…" 4 seconds ago Up 2 seconds 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp es01
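Once the containers are up, you can also probe the service endpoints from the URL table above (this assumes the stack is running and the default ports are unchanged):

```shell
# Elasticsearch cluster health (returns JSON with a "status" field).
curl -s http://localhost:9200/_cluster/health

# Kibana status endpoint.
curl -s http://localhost:5601/api/status

# Cribl Stream UI; any HTTP status code here means the server is listening.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9000
```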
This section covers how to run the `appscope01`, `appscope02`, and `appscope01_tls` containers. No special instructions are needed for the fourth option, running AppScope on the host in the usual way.
Connect to the `appscope01` container:
docker-compose run appscope01
Every command that you run in the bash session will be scoped.
Connect to the `appscope02` container and run the desired command:
docker exec -it appscope02 bash
ldscope <command>
Connect to the `appscope01_tls` container and run the desired command:
docker exec -it appscope01_tls bash
ldscope <command>
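For instance, inside either container you might scope a one-off command like this (assuming `curl` is present in the image; any Linux command works in its place):

```shell
# Scope a single curl invocation; AppScope emits the resulting HTTP and
# network events to the configured Cribl Stream destination.
ldscope curl -s https://example.com
```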
To clean up the demo environment:
./stop.sh
By default, Elasticsearch stores its data in `/Elasticsearch/data`, using a Docker volume.
To clean it up:
docker volume prune
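If you want to see what `docker volume prune` would remove before running it, you can list the dangling volumes first (a standard Docker CLI flag):

```shell
# List volumes not referenced by any container; prune removes exactly these.
docker volume ls -f dangling=true
```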
To clean up Cribl data (i.e., data from Cribl Stream) that the Elasticsearch backend has stored:
- Open the Kibana console.
- In your Cribl Stream Elasticsearch Destination, note the value under General Settings > Index or data stream. Substitute this value for `<index_or_data_stream>` in the example below, and run the query:
DELETE <index_or_data_stream>
This sends a request to the Elasticsearch delete index API; the Kibana console forwards the request to Elasticsearch.
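The same delete can be issued from the host shell without going through Kibana. Substitute the placeholder with the actual value from your Cribl Stream Destination:

```shell
# Delete the index or data stream directly via the Elasticsearch REST API.
# Replace <index_or_data_stream> with your Destination's configured value.
curl -X DELETE "http://localhost:9200/<index_or_data_stream>"
```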