The largest part of the work for reviewing a datasource plugin is setting up the datasource. This is a guide for setting up the different databases needed for testing.
## Akumuli

There is a test server for Akumuli at http://206.189.27.155:8181/ and this is the only setting (the URL field) needed to create a datasource connection.
## Ambari Metrics

There are a lot of steps (it takes about 30 minutes to install everything) and it requires Vagrant and VirtualBox. Follow the instructions here.

I had to manually start the Ambari services after they failed to start the first time when using the cluster wizard.

The URL to log in to the Ambari website is http://c6801.ambari.apache.org:8080 (the c68 prefix comes from using CentOS 6.8), but the URL for the config in Grafana should be http://c6801.ambari.apache.org:6188 with basic auth, user `admin` and password `admin`.
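To sanity-check the connection outside Grafana, a curl against the Metrics Collector should return metric metadata. This is a sketch assuming the collector's usual `/ws/v1/timeline/metrics/metadata` REST path:

```bash
# Should return JSON metadata for the collected metrics
# (path assumed from the Ambari Metrics Collector REST API).
curl -u admin:admin http://c6801.ambari.apache.org:6188/ws/v1/timeline/metrics/metadata
```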
## Atlas

To get Atlas running with some sample data (instructions from here):

```bash
curl -LO https://github.com/Netflix/atlas/releases/download/v1.5.3/atlas-1.5.3-standalone.jar
curl -Lo memory.conf https://raw.githubusercontent.com/Netflix/atlas/v1.5.x/conf/memory.conf
java -jar atlas-1.5.3-standalone.jar memory.conf
curl -Lo publish-test.sh https://raw.githubusercontent.com/Netflix/atlas/v1.5.x/scripts/publish-test.sh
chmod 755 publish-test.sh
./publish-test.sh
```
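Once the publish script has run for a bit, you can check that data is arriving by listing the known metric names through the tags API; a minimal sketch, assuming memory.conf's default port of 7101:

```bash
# Lists the metric names Atlas currently knows about.
curl http://localhost:7101/api/v1/tags/name
```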
## Cognite

- Sign up for an account for their publicdata project. Follow the steps here: https://doc.cognitedata.com/quickstart
- After creating the API key, use Project: `publicdata` and the API key created in the previous step.
## ClickHouse

Option 1:
- Use the docker image on DockerHub
- Start the server:
docker run -d --name test-clickhouse-server --ulimit nofile=262144:262144 yandex/clickhouse-server
- Connect to it with the client:
docker run -it --rm --link test-clickhouse-server:clickhouse-server yandex/clickhouse-client --host clickhouse-server
- Create the database:
CREATE DATABASE IF NOT EXISTS test
USE test
- Create the table:
CREATE TABLE test.timeseries (EventDate Date, EventTime DateTime, Name String, Value Int32) ENGINE = MergeTree() ORDER BY EventTime
- Insert some data - vary the series name and value in this query (a loop for bulk-inserting points is sketched after this list):
INSERT INTO test.timeseries(*) VALUES(now(), now(), 'test1', 10)
- Query in Grafana:
$columns( Name, sum(Value) c) FROM test.timeseries
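To fill the table with enough points to draw a graph, the insert can be looped from the shell. A minimal sketch, assuming the `test-clickhouse-server` container from the steps above; the series name, spacing, and value range are arbitrary:

```bash
# Insert 20 points, 30 seconds apart, with random values for series 'test1'.
for i in $(seq 1 20); do
  docker run -i --rm --link test-clickhouse-server:clickhouse-server \
    yandex/clickhouse-client --host clickhouse-server --query \
    "INSERT INTO test.timeseries VALUES (today(), now() - $i*30, 'test1', $((RANDOM % 50)))"
done
```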
Option 2:
- Use the Clickhouse playground - credentials and url here: https://clickhouse.tech/docs/en/getting-started/playground/
- Create a query for the `hits_100m_obfuscated` table in the `datasets` database using the wizard in the query editor, and select a time range within the year 2013 (2013-06-29 to 2013-08-01)
## Consul

- Initial agent:
docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
- Add more agents with:
docker run -d -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.2
- Create a test key value (key: test3, value: 40):
curl -X PUT -d @- http://172.17.0.2:8500/v1/kv/test3 <<< 40
- Fill in a dummy value for the token on the Grafana config page.
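To verify the key was written, you can read it back through the same KV API; Consul returns the value base64-encoded. A quick check, assuming `jq` and `base64` are available:

```bash
# Fetch the key and decode its base64-encoded value (should print 40).
curl -s http://172.17.0.2:8500/v1/kv/test3 | jq -r '.[0].Value' | base64 -d
```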
## DarkSky

See the docs: https://grafana.com/plugins/andig-darksky-datasource
## DeviceHive

- https://playground.devicehive.com/
- Sign up
- Create a device with curl (the curl command is on the playground)
- Copy the access token for the data source
- Use the admin console on the playground to send commands. Name: `test`, Parameters: `{"test": 1}` or `{"test2": 1}` (a curl alternative is sketched below)
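As an alternative to the admin console, commands can be inserted through the REST API. This is a hypothetical sketch: the device id and access token placeholders are whatever you created above, and the `/api/rest` base path is assumed for the playground:

```bash
# Insert a command named 'test' for the device (id and token are placeholders).
curl -X POST 'https://playground.devicehive.com/api/rest/device/<device-id>/command' \
  -H 'Authorization: Bearer <access-token>' \
  -H 'Content-Type: application/json' \
  -d '{"command": "test", "parameters": {"test": 1}}'
```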
## GLPI

Use the following docker-compose file:
```yaml
glpi-md:
  image: mariadb
  restart: always
  ports:
    - "32806:3306"
  environment:
    MYSQL_DATABASE: glpi
    MYSQL_ROOT_PASSWORD: password
    MYSQL_USER: glpi
    MYSQL_PASSWORD: password
  volumes:
    - glpi-db:/var/lib/mysql
    - glpi-dblog:/var/log/mysql
    - glpi-dbetc:/etc/mysql
glpi:
  image: fjudith/glpi
  restart: always
  ports:
    - "32706:80"
  volumes:
    - glpi-files:/var/www/html/files
    - glpi-plugins:/var/www/html/plugins
  links:
    - glpi-md:mysql
```
- Then browse to http://localhost:32706 to start the installation wizard. Instructions for that are here if you get stuck.
- In the step with three fields (database, user, and password), the database host should be `localhost` (or the IP address of the mariadb container started with docker-compose, e.g. 172.17.0.3). User: `glpi`, Password: `password`.
- Then log in to the GLPI portal with user `glpi` and password `glpi`.
- The REST API needs to be enabled in the Setup -> General section.
- An API client needs to be enabled as well (the first time, you need to check the regenerate checkbox for the token).
- For the config page in Grafana, you will need the app token that was generated in the previous step; the client token can be generated in the Administration -> Users section. Find the glpi user, navigate to the Settings section, find the API token field, check the regenerate checkbox and save.
- Finally, I have only ever got this working using Access mode Browser and a CORS plugin in Chrome to get around the CORS issues. Then create a ticket and it should show up in Grafana.
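To check the tokens outside Grafana, the GLPI REST API has an `initSession` endpoint that accepts both; a sketch assuming the docker setup above (replace the two token placeholders):

```bash
# Should return a JSON session_token if both tokens are valid.
curl -H 'App-Token: <app-token>' \
  -H 'Authorization: user_token <api-token>' \
  http://localhost:32706/apirest.php/initSession
```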
## Instana

```bash
cd data/plugins/instana-datasource
docker-compose up mountebank
```

- Create a datasource for Instana in Grafana with url http://localhost:8010. For the API key, fill in the INSTANA_API_TOKEN value from the local.env file; currently the value is `valid-api-token`.
- Create a new dashboard with a graph panel.
- In the Query field, write `filler` and then select something from the dropdowns.
## Logz.io

- Make sure you have an account that can create API tokens (not included in the free version).
- Create an API key: Tools -> API Tokens.
- Upload some sample apache log data. Get the account token from the File Upload page: https://app.logz.io/#/dashboard/data-sources/File-Upload (you have to wait a minute or two before the uploaded data shows up):
  curl http://logzio-elk.s3.amazonaws.com/apache-daily-access.log --output apache-daily-access.log && curl -T apache-daily-access.log https://listener.logz.io:8022/file_upload/<ACCOUNT-TOKEN>/apache-access
- Use the API token to configure the datasource in Grafana. The index name and daily pattern fields don't seem to matter - they can be anything.
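As a rough check that the upload worked, the Logz.io search API can be queried with the same API token; a sketch, assuming the `/v1/search` endpoint and that file uploads are indexed under the `apache-access` type used in the upload URL:

```bash
# Search for one uploaded document (the API token is a placeholder).
curl -s -X POST 'https://api.logz.io/v1/search' \
  -H 'X-API-TOKEN: <API-TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{"size": 1, "query": {"query_string": {"query": "type:apache-access"}}}'
```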
## OGC SensorThings (Gost)

- Use the following docker-compose file to start a SensorThings server (the server implementation is called Gost) with a Postgres database that has GIS extensions installed: https://raw.githubusercontent.com/gost/docker-compose/master/docker-compose.yml (from this repo)
- Test that it is working. The following command should return a JSON response with an empty array and not an error:
  curl http://localhost:8080/v1.0/Things
- Create some data using Postman. Gost has a collection of HTTP commands for Postman that you can use to create sensors, things and data. Import this collection into Postman and run some POST commands to create datapoints (a plain curl alternative is sketched after this list).
- Create a datasource in Grafana and set this value in the url field: http://localhost:8080/v1.0
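If you would rather skip Postman, a minimal Thing can be created with curl; a sketch following the SensorThings `POST /Things` shape (the name and description values are arbitrary):

```bash
# Create a Thing; Gost should respond with the new entity, including its id.
curl -X POST 'http://localhost:8080/v1.0/Things' \
  -H 'Content-Type: application/json' \
  -d '{"name": "test-thing", "description": "Thing for datasource testing"}'
```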
## PagerDuty

- You will need access to our internal PagerDuty account.
- Create an API key at https://xxx.pagerduty.com/api_keys
- Use that key in the datasource config (a curl check is sketched below).
- If you want to filter by service id, the service ids can be found in PagerDuty by looking at the URL for a service. E.g. P8CXWJA is the service id for https://xxx.pagerduty.com/services/P8CXWJA
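To confirm the key works before configuring Grafana, the REST API can be queried directly; a sketch using the v2 REST API headers (the key is a placeholder):

```bash
# List the services visible to the API key.
curl -s 'https://api.pagerduty.com/services' \
  -H 'Authorization: Token token=<API-KEY>' \
  -H 'Accept: application/vnd.pagerduty+json;version=2'
```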
## Prometheus AlertManager

Grafana has a docker block for Prometheus that includes everything needed to test this datasource. In the Grafana source root folder:

```bash
cd docker
./create_docker_compose.sh prometheus
docker-compose up
```

- Create a datasource in Grafana with url http://127.0.0.1:9093
- Fill in the severity level fields
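To have something to query, a test alert can be pushed straight to the Alertmanager API; a sketch assuming the docker block exposes Alertmanager on port 9093:

```bash
# Fire a dummy alert with a severity label the datasource can filter on.
curl -X POST 'http://127.0.0.1:9093/api/v1/alerts' \
  -H 'Content-Type: application/json' \
  -d '[{"labels": {"alertname": "TestAlert", "severity": "critical"}}]'
```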
## Skydive

- Use docker compose with the Skydive docker-compose file and run docker-compose up
- Navigate to http://localhost:8082
- You should see one node - your machine. Expand to see all the network interfaces by clicking on the node and then using the expand button (bottom left of the diagram). Choose any interface and then, on the right hand side, create a capture. Give the capture a name and choose an interface. After clicking the create button, you should get a query like `G.V().Has('TID', '86adb4c4-0f05-56ee-6ea8-aa1578df14f6')`
- Copy the query into the Skydive datasource in Grafana.
## QuasarDB

QuasarDB runs in insecure mode, which allows for anonymous login, and secure mode, which requires a username and secret key. A complete environment can be built with docker using the following command:

docker build -t qdb-grafana-example-docker https://github.com/kontrarian/qdb-grafana-example-docker.git

The updated dist/ folder is included in the plugin git repository, so it can be cloned directly to the Grafana plugins directory to install it.

Insecure mode:

docker run -it -p 40080:40080 qdb-grafana-example-docker

This will start the QuasarDB db server and REST server in the background, log you into bash and print the server url to the console. In the plugin configuration set the URL to http://127.0.0.1:40080 and the Access to Server. Auth can be skipped; toggle off Use Secured Cluster under QuasarDB Details.

Example query:

select avg(low), max(high) from btcusd in $__range group by $__interval
Secure mode:

docker run -it -p 40493:40493 --env QDB_SECURITY=true qdb-grafana-example-docker

This will start the QuasarDB db server and REST server in the background, log you into bash and print the server url and user credentials to the console. In the plugin configuration set the URL to https://127.0.0.1:40493 and the Access to Server. Under Auth, ensure Skip TLS Verify is checked, as this demo uses self-signed TLS certificates. Under QuasarDB Details toggle on Use Secured Cluster and enter the User name and User secret that were printed to the console when you ran the docker container.
## TDengine

docker run -p 6020:6020 tdengine/tdengine:latest

- Url: http://localhost:6020
- No user or password needed (a curl check is sketched below).
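To check that the REST endpoint is up, you can POST a statement to `/rest/sql`; a sketch assuming TDengine's default `root`/`taosdata` credentials inside the container:

```bash
# Should return a JSON result set listing the databases.
curl -u root:taosdata -d 'show databases;' http://localhost:6020/rest/sql
```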
## Thruk

- Url: https://demo.thruk.org/demo/thruk/
- Basic Auth with user `admin` and password `admin`
## Warp10

- Start the docker container for Warp10:
docker run --name warp10 -p 8080:8080 -p 8081:8081 -d -i warp10io/warp10:latest
- To get tokens to read and write (from the Worf component):
docker exec --user warp10 -t -i warp10 warp10-standalone.sh worf test 31536000000
- It should return something like:
{"read":{"token":"8rJ46zGxQkTLQsBu39SUUrANJmjHB__qJkHjDM8IsWsf8XAi6P03EI1e6ve5NqbzrC81uIwiB6S9JgI9bNtR2PEwD7qpNR9pYA4U29H3HiER37DNIOyvP.","tokenIdent":"2cfb976558c726f7","ttl":31536000000,"application":"test","applications":["test"],"owners":["932cd87c-bb00-4127-b75e-07fedbd12fa3"],"producer":"932cd87c-bb00-4127-b75e-07fedbd12fa3","producers":[]},"write":{"token":"UYWwdR4S6at_NpxlD_tsN99rAj5H_6yBZ5JhSJ5oTlaoJXmEYPMjPWQuD6Zs6ZENVaRacl2lgfSkI595gSETveyDQAaxPqzeAcQXfWBkKt7","tokenIdent":"f9211941b17d9b81","ttl":31536000000,"application":"test","owner":"932cd87c-bb00-4127-b75e-07fedbd12fa3","producer":"932cd87c-bb00-4127-b75e-07fedbd12fa3"}}
- Create a file with test data (warp10.txt), one datapoint per line:

```
1538250207762000/51.501988:0.005953/ some.sensor.model.humidity{xbeeId=XBee_40670F0D,moteId=53,area=1} 79.16
1538250237727000/51.501988:0.005953/ some.sensor.model.humidity{xbeeId=XBee_40670F0D,moteId=53,area=1} 75.87
1538250267504000/51.501988:0.005953/ some.sensor.model.humidity{xbeeId=XBee_40670F0D,moteId=53,area=1} 74.46
1538250267504000/51.501988:0.005953/ some.sensor.model.humidity{xbeeId=XBee_40670F0D,moteId=53,area=1} 73.55
1538250297664000/51.501988:0.005953/ some.sensor.model.humidity{xbeeId=XBee_40670F0D,moteId=53,area=1} 72.30
1538250327765000/51.501988:0.005953/ some.sensor.model.humidity{xbeeId=XBee_40670F0D,moteId=53,area=1} 70.73
1538250327765000/51.501988:0.005953/ some.sensor.model.humidity{xbeeId=XBee_40670F0D,moteId=53,area=1} 69.50
1538250357724000/51.501988:0.005953/ some.sensor.model.humidity{xbeeId=XBee_40670F0D,moteId=53,area=1} 68.24
1538250387792000/51.501988:0.005953/ some.sensor.model.humidity{xbeeId=XBee_40670F0D,moteId=53,area=1} 66.66
1538250387792000/51.501988:0.005953/ some.sensor.model.humidity{xbeeId=XBee_40670F0D,moteId=53,area=1} 65.73
```
- Take the write token from the json result and use it to create test data (run from the directory containing warp10.txt):
  curl -H 'X-Warp10-Token: your_write_token' --data-binary @warp10.txt 'http://localhost:8080/api/v0/update'
- Use the read token in the Warp10 query editor (a curl FETCH check is sketched after this list).
- Example raw query for Worldmap:

```
'[ { "key": "amsterdam", "latitude": 52.3702, "longitude": 4.8952, "name": "Amsterdam", "value": 9 }, { "key": "charleroi", "latitude": 50.4108, "longitude": 4.4446, "name": "Charleroi", "value": 6 }, { "key": "frankfurt", "latitude": 50.110924, "longitude": 8.682127, "name": "Frankfurt", "value": 9 }, { "key": "london", "latitude": 51.503399, "longitude": -0.119519, "name": "London", "value": 12 }, { "key": "paris", "latitude": 48.864716, "longitude": 2.349014, "name": "Paris", "value": 15 } ]' JSON->
```
- Example query for test data file:
  - Metric name: some.sensor.model.humidity
  - Label key: moteId
  - Label value: 53
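To verify the read token and the uploaded datapoints without Grafana, a WarpScript FETCH can be run through the /exec endpoint; a sketch assuming the tokens and test data from the steps above (a negative count fetches the most recent points regardless of age):

```bash
# Fetch the 10 most recent points of the humidity series (read token is a placeholder).
curl --data-binary "[ 'your_read_token' 'some.sensor.model.humidity' { 'moteId' '53' } NOW -10 ] FETCH" \
  'http://localhost:8080/api/v0/exec'
```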