The k-Weatherlink jobs allow you to scrape all sorts of data from sensors connected to stations. The data are stored in a MongoDB database, more precisely in 2 collections:
- the `observations` collection stores the observed data from the sensors (more information here)
- the `stations` collection stores the stations data
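
Once the jobs have run, the stored documents can be inspected with any MongoDB client. Below is a minimal sketch using pymongo, assuming the default `DB_URL`; the document schema is not specified in this README, so the queries rely only on generic MongoDB features:

```python
# Minimal sketch: inspect the two collections produced by the jobs.
# Assumes the default DB_URL (mongodb://127.0.0.1:27017/weatherlink).
from pymongo import MongoClient

client = MongoClient("mongodb://127.0.0.1:27017")
db = client["weatherlink"]

# List the stations scraped so far
for station in db["stations"].find():
    print(station)

# Fetch the most recently inserted observation (by insertion order)
latest = db["observations"].find_one(sort=[("_id", -1)])
print(latest)
```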
The project consists of 2 jobs:
- the `stations` job scrapes the stations data associated with your account, according to a specific cron expression (by default, every day at midnight)
- the `observations` job scrapes the last (current) observation of the stations above, according to a specific cron expression (by default, every 5 minutes)
The data you can retrieve, and the frequency at which you can retrieve them, are limited by your relationship with the station and by your subscription; see more here.
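
In standard cron syntax, the stated defaults correspond to the following expressions (shown for illustration; this README does not describe how to override them):

```
0 0 * * *     # every day at midnight (stations job)
*/5 * * * *   # every 5 minutes (observations job)
```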
The `stations` job relies on the following environment variables:

| Variable | Description |
|---|---|
| `DB_URL` | The database URL. The default value is `mongodb://127.0.0.1:27017/weatherlink`. |
| `API_KEY` | The WeatherLink API key used for authentication. |
| `API_SECRET` | The WeatherLink API secret used to sign requests. |
| `DEBUG` | Enables debug output. Set it to `krawler*` to enable full output. By default it is undefined. |
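
To give an idea of how the `API_KEY`/`API_SECRET` pair is used, here is an illustrative Python sketch of the HMAC-SHA-256 request signing historically documented for the WeatherLink v2 API; check the current WeatherLink API documentation before relying on it:

```python
# Illustrative only: sign a WeatherLink v2 /stations request with HMAC-SHA-256.
# The exact signing rules are defined by the WeatherLink API documentation.
import hashlib
import hmac
import time

API_KEY = "your-api-key"        # placeholder
API_SECRET = "your-api-secret"  # placeholder

t = int(time.time())
# Concatenate parameter names and values, sorted by parameter name
payload = f"api-key{API_KEY}t{t}"
signature = hmac.new(
    API_SECRET.encode("utf-8"),
    payload.encode("utf-8"),
    hashlib.sha256,
).hexdigest()

url = f"https://api.weatherlink.com/v2/stations?api-key={API_KEY}&t={t}&api-signature={signature}"
print(url)
```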
The `observations` job relies on the following environment variables:

| Variable | Description |
|---|---|
| `DB_URL` | The database URL. The default value is `mongodb://127.0.0.1:27017/weatherlink`. |
| `TTL` | The observations time-to-live. It must be expressed in seconds; the default value is `604800` (7 days). |
| `API_KEY` | The WeatherLink API key used for authentication. |
| `API_SECRET` | The WeatherLink API secret used to sign requests. |
| `DATA_TYPE` | The data types to retrieve (e.g. `1,11,13,4,15`). The default value is everything (`1` to `27`). |
| `TIMEOUT` | The maximum duration of the job. It must be expressed in milliseconds; the default value is `1800000` (30 minutes). |
| `DEBUG` | Enables debug output. Set it to `krawler*` to enable full output. By default it is undefined. |
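
As an illustration, the observations job could be configured like this before launch (shell-style assignments; every value below is an example, not a recommendation):

```sh
export DB_URL=mongodb://127.0.0.1:27017/weatherlink
export TTL=259200          # keep observations for 3 days (3 * 86400 s)
export DATA_TYPE=1,11,13   # restrict scraping to a subset of data types
export TIMEOUT=600000      # abort the job after 10 minutes
export DEBUG="krawler*"    # full debug output
```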
We personally use Kargo to deploy the service.
Please refer to the contribution section for more details.
This project is sponsored by Kalisio.
This project is licensed under the MIT License - see the license file for details.