A CLI for building a Zendro sandbox.
For a quick installation, run: `npm install -g Zendro-dev/zendro`.
However, if you would like to customize your Zendro CLI, you can set it up as follows:
```shell
$ git clone https://github.com/Zendro-dev/zendro.git
$ cd zendro
$ npm install
$ npm link
```
For example, you can customize the version of each repository by editing the `zendro_dependencies.json` file in your local Zendro CLI repository.
Start a new application:
- copy the starter pack into the folder `<your_application_name>`
- clone the `single-page-app`, `graphql-server` and `graphiql-auth` repositories from GitHub (versions: see `zendro_dependencies.json`)
- install packages for each repository
- create templates of environment variables for each repository
- `-d` or `--dockerize`: keep Dockerfiles (default: `false`). If set to `true`, the Dockerfiles are kept; by default, the Dockerfiles and `initUserDb.sh` are removed
- show a welcome interface
- hints: edit the config files if necessary. Two example files illustrate the configuration of all supported storage types with the Docker setup, namely `./config/data_models_storage_config_example.json` and `./docker-compose-dev-example.yml`
- without Docker setup: `./graphql-server/config/data_models_storage_config.json`
- with Docker setup: `./config/data_models_storage_config.json`
- `./graphql-server/.env`
- SPA in development mode: `./single-page-app/.env.development`
- SPA in production mode: `./single-page-app/.env.production`
- GraphiQL in development mode: `./graphiql-auth/.env.development`
- GraphiQL in production mode: `./graphiql-auth/.env.production`
Generate code for graphql-server.
- `-f` or `--data_model_definitions`: input directory or a JSON file (default: current directory path + "/data_model_definitions")
- `-o` or `--output_dir`: output directory (default: current directory path + "/graphql_server")
- `-m` or `--migrations`: generate migrations (default: `false`). If set to `true`, migrations are generated
- unknown options are allowed
Dockerize Zendro App with example docker files.
- `-u` or `--up`: start the Docker services (the host of Postgres is set to `sdb_postgres`)
- `-d` or `--down`: stop the Docker services
- `-p` or `--production`: start or stop the GQS and SPA in production mode
Start Zendro service.
- default: start all services
- `-p` or `--production`: start the GQS and SPA in production mode
- start specific services using the following abbreviations:
- `gqs`: graphql-server
- `spa`: single-page-app
- `giql`: graphiql
Stop Zendro service.
- default: stop all services
- `-p` or `--production`: stop the GQS and SPA in production mode
- stop specific services using the same abbreviations
Generate migration code for graphql-server.
- `-f` or `--data_model_definitions`: input directory or a JSON file (default: current directory path + "/../data_model_definitions")
- `-o` or `--output_dir`: output directory (default: current directory path + "/migrations")
Note: all generated migrations are stored in a directory called `migrations`.
Execute migrations that were generated after the last executed migration.
Note: the last executed migration is recorded in `zendro_migration_state.json`, and the log of migrations is kept in `zendro_migration_log.json`.
Drop the last executed migration.
Parse a file and upload parsed records to graphql-server.
- `-f` or `--file_path`: file path. Supported file formats: CSV, XLSX, JSON
- `-n` or `--model_name`: model name
- `-s` or `--sheet_name`: sheet name for an XLSX file. By default, the first sheet is processed
- `-r` or `--remote_server`: upload to a remote server (default: `false`)
Download records into a file.
- `-f` or `--file_path`: file path
- `-n` or `--model_name`: model name
- `-r` or `--remote_server`: download from a remote server (default: `false`)
Set up a sandbox with default data models and SQLite
- -d or --dockerize: Keep Docker files (default: false).
Create empty or default plots.
- `-p` or `--default_plots`: create default plots (default: `false`)
- `-f` or `--plot_name`: customized plot name
- `-t` or `--type`: the visualization library (options: "plotly", "d3")
- `-m` or `--menu`: the location of the plot menu (options: "none", "top", "left")
- `-n` or `--menu_item_name`: the item name in the plot menu (default: the plot name)
Hints: the meaning of the options for "menu" (`-m`):
- "top": navigation to the plot is placed in the top menu bar under the menu item name
- "left": a sub-menu for the plot is generated in the left menu under the menu item name
- "none": no navigation
- By default, three associated data models are used for this sandbox, namely city, country and river, together with a default SQLite database.
- Execute `zendro set-up -d <name>`, set NEXTAUTH_SECRET to your expected secret, and modify other environment variables if necessary in the following config files:
  - SPA in development mode: `./single-page-app/.env.development`
  - SPA in production mode: `./single-page-app/.env.production`
  - GraphiQL in development mode: `./graphiql-auth/.env.development`
  - GraphiQL in production mode: `./graphiql-auth/.env.production`
If you would like to upload a file to a remote server, copy the template `.env.migration.sample` to a new file `.env.migration` and modify the relevant environment variables.
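For example, a minimal `.env.migration` could hold the credentials of the user that requests a token from Keycloak. The values below are the default sandbox credentials mentioned elsewhere in this document; replace them with your own:

```
# Hypothetical excerpt of zendro/.env.migration (see .env.migration.sample
# for the full list of variables, including the remote server address)
MIGRATION_USERNAME=zendro-admin
MIGRATION_PASSWORD=admin
```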
- Execute `zendro dockerize -u -p`; the Zendro instance then starts in production mode.
- Execute `zendro dockerize -d -p -v`; the Zendro instance then stops and all volumes are removed.
- Create a new application (here called `test`) and keep the Docker files (`-d`) by executing `zendro new -d test`. If you want to modify some environment variables, edit the relevant files, which are also listed in the console output:
- without Docker setup: `./graphql-server/config/data_models_storage_config.json`
- with Docker setup: `./config/data_models_storage_config.json`
- `./graphql-server/.env`
- SPA in development mode: `./single-page-app/.env.development`
- SPA in production mode: `./single-page-app/.env.production`
- GraphiQL in development mode: `./graphiql-auth/.env.development`
- GraphiQL in production mode: `./graphiql-auth/.env.production`
Note: if you would like to upload a file to a remote server, copy the template `.env.migration.sample` to a new file `.env.migration` and modify the relevant environment variables. By default, SQLite3 is used for data storage. If you want to use other storage types, you can reuse parts of the two example files that illustrate the configuration of all supported storage types with the Docker setup, namely `./config/data_models_storage_config_example.json` and `./docker-compose-dev-example.yml`.
- `cd test`
- Add JSON files with the model definitions to the `./data_model_definitions` folder and generate the graphql-server (GQS) code and migrations by executing `zendro generate -m`.
- If you prefer a local setup with Keycloak, you can start all services by executing `zendro start`. An example configuration file for Keycloak is `./test/env/keycloak.conf`. By default, Keycloak uses an H2 database to store its information. Moreover, the default database for records in Zendro is a SQLite3 database; its configuration is in the file `./graphql-server/config/data_models_storage_config.json`. If you would like to add other storage types, edit this file. If you would like to use production mode, add the `-p` option.
- Stop all running services by executing `zendro stop`. If you would like to stop production mode, add the `-p` option.
- If you don't have a local setup with Keycloak, you can try Zendro by dockerizing the example Zendro app. The command is `zendro dockerize -u`; if you would like to use production mode, execute `zendro dockerize -u -p`. The default username is `zendro-admin` and the corresponding password is `admin`.
- When you want to stop the Docker services, execute `zendro dockerize -d`. If your services are running in production mode, execute `zendro dockerize -d -p`.
If a user has new data model definitions, the Zendro CLI makes it convenient to handle migrations. The following procedure shows how to generate, perform or drop migrations:
- In the `graphql-server` folder, execute `zendro migration:generate -f <data_model_definitions>`. The migrations are automatically generated in the `/graphql-server/migrations` folder. By default, every migration file has two functions, namely `up` and `down`: the `up` function creates a table, and the `down` function deletes the existing table. Furthermore, it is possible to customize the migration functions.
- In the `graphql-server` folder, it is possible to perform newly generated migrations, i.e. those generated after the last executed migration, by executing `zendro migration:up`. After that, the last executed migration and the migration log are updated.
- In the `graphql-server` folder, the last executed migration can be dropped by executing `zendro migration:down`. This updates the latest successful migration and adds the drop operation to the migration log. If some records and associations remain in the table, an error is thrown by default. To forcefully drop the table in spite of remaining records, set the environment variable `DOWN_MIGRATION` to `true` in the `/graphql-server/.env` file and re-execute the down-migration.
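A generated migration file can be sketched roughly as follows. This is a hedged illustration, not the exact generated code: the model name `movie` follows the examples in this document, while the SQL statements and the `query` method on the storage handler are assumptions made for a SQL-backed model.

```javascript
// Hypothetical sketch of a migration file in /graphql-server/migrations.
// Both functions receive the default `zendro` argument.
const migration = {
  // `up` creates the table for the model
  up: async (zendro) => {
    const handler = await zendro.models.movie.storageHandler; // storage access
    await handler.query(
      "CREATE TABLE IF NOT EXISTS movies (movie_id TEXT PRIMARY KEY, title TEXT)"
    );
  },
  // `down` deletes the existing table
  down: async (zendro) => {
    const handler = await zendro.models.movie.storageHandler;
    await handler.query("DROP TABLE IF EXISTS movies");
  },
};

module.exports = migration;
```

Customizing a migration means editing these two functions, e.g. adding seed data in `up` or guarding the destructive statement in `down`.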
Note: all `up` and `down` functions receive a default argument called `zendro`. It provides access to the different APIs in the Zendro layers (resolvers, models, adapters) and enables GraphQL queries. At the model and adapter level, `zendro` can also access the storage handler, which interacts with the corresponding database management system. The following are some examples for the model `movie` and the adapter `dist_movie_instance1`:
```javascript
await zendro.models.movie.storageHandler;
await zendro.models.movie.countRecords();
await zendro.adapters.dist_movie_instance1.storageHandler;
await zendro.adapters.dist_movie_instance1.countRecords();
```
At the resolver level, the `zendro` argument exposes the corresponding API functions, e.g. `readOneMovie`, `countMovies` and so on. These functions expect a `context`, which needs to be provided as in the example below; it includes an event emitter that collects any occurring errors. See the following example of using the `countMovies` API via `zendro`:
```javascript
const {
  BenignErrorArray,
} = require("./graphql-server/utils/errors.js");
let benign_errors_arr = new BenignErrorArray();
let errors_sink = [];
let errors_collector = (err) => {
  errors_sink.push(err);
};
benign_errors_arr.on("push", errors_collector);
const res = await zendro.resolvers.countMovies(
  { search: null },
  {
    request: null, // by default the token is null
    acl: null,
    benignErrors: benign_errors_arr, // collect errors
    recordsLimit: 15,
  }
);
```
Moreover, it is possible to execute GraphQL queries or mutations via the `execute_graphql(query, variables)` function. Specifically, the `query` argument is the query string, and the `variables` argument provides dynamic values for that query. By default, queries are executed without a token; however, in a distributed setup with ACL rules, a token is necessary for sending queries. To obtain that token from Keycloak, the `MIGRATION_USERNAME` and `MIGRATION_PASSWORD` environment variables are needed. The function can then be used as follows:
```javascript
await zendro.execute_graphql("{ countMovies }");
```
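To illustrate how the `variables` argument carries dynamic values, the following sketch substitutes a mock object for the real `zendro`; the query text and its pagination argument are hypothetical and only show the calling convention:

```javascript
// Mock stand-in for the real zendro object, recording what
// execute_graphql receives (a real call returns the GraphQL response).
let captured = null;
const zendro = {
  execute_graphql: async (query, variables) => {
    captured = { query, variables };
    return { data: null };
  },
};

// Dynamic values go into the second argument instead of being
// interpolated into the query string. The query below is hypothetical.
zendro.execute_graphql(
  "query ($limit: Int!) { moviesConnection(pagination: { first: $limit }) { edges { node { movie_id } } } }",
  { limit: 15 }
);
```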
Data to populate each model in your schema must be in a separate CSV file, following the format requirements below:
- Column names in the first row must correspond to model attributes. For associations, a column name has the form `add<associationName>`, e.g. `addCountries` for the association name `countries`.
- Empty values should be represented as `"NULL"`.
- All fields should be quoted with `"`. However, if the field delimiter and array delimiter do not occur in fields of String type, i.e. the characters can be split without ambiguity, no quotes are necessary. For example, if the field delimiter is `,` and a String field contains `Zendro, excellent!`, then without quotation marks this field would be split into two fields; in such cases String fields must be quoted.
- Default configuration: LIMIT_RECORDS=10000, RECORD_DELIMITER="\n", FIELD_DELIMITER=",", ARRAY_DELIMITER=";". These values can be changed in the config file for environment variables.
- Date and time formats must follow the RFC 3339 standard.
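As a worked example of the rules above, the following sketch builds a small CSV for a hypothetical model `movie` with an association `countries`: every field is quoted, empty values are written as `"NULL"`, the array delimiter `;` separates multiple foreign keys inside `addCountries`, and the date follows RFC 3339. The tiny parser only handles fully quoted lines and is purely for illustration:

```javascript
// Hypothetical CSV content for a model `movie` (attribute names are
// assumptions): quoted fields, "NULL" for empty values, ";" as the
// array delimiter inside the association column addCountries.
const csv =
  '"movie_id","release_date","synopsis","addCountries"\n' +
  '"m1","2024-05-01T10:00:00Z","A mockingbird, singing","c1;c2"\n' +
  '"m2","NULL","NULL","c3"\n';

// Because every field is quoted, the comma inside "A mockingbird, singing"
// does not split the field. A minimal parser for fully quoted lines:
const parseLine = (line) =>
  line.split('","').map((f) => f.replace(/^"|"$/g, ""));

const rows = csv.trim().split("\n").map(parseLine);
```

Note how `rows[1][2]` comes back as the full string `A mockingbird, singing`; without the quotes it would have been split into two fields.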
There are two ways to upload a file via the Zendro CLI:
- If the Zendro instance is on your local machine, you can go directly into the `graphql-server` folder and execute `zendro bulk-create -f <filename> -n <modelname> -s <sheetname>`, e.g. `zendro bulk-create -f ./country.csv -n country`. Three formats are supported, namely CSV, XLSX and JSON. The parameter `sheetname` is only used for XLSX files; if it is empty, the records in the first sheet are imported by default. The default configuration for delimiters and the record limit can be found in `graphql-server/.env`.
- If you want to upload a file to a remote Zendro server, this is also possible via the Zendro CLI. All configuration can be modified in the file `zendro/.env.migration`. After the configuration, you can execute `zendro bulk-create -f <filename> -n <modelname> -s <sheetname> -r`, e.g. `zendro bulk-create -f ./country.csv -n country -r`.
Note: if the validation of records fails, a log file is stored in the folder of the uploaded file, with a name like `errors_<uuid>.log`.
In general, it is possible to download all data in CSV format in two ways, either using the Zendro CLI or the Zendro single-page app. Every attribute is quoted to avoid ambiguity and to enable seamless integration with the Zendro bulk-creation functionality. Column names for foreign keys have the form `add<associationName>`. For example, if there is an association named `countries` that includes a foreign key called `country_id`, then the column name for `country_id` should be `addCountries`.
- If the Zendro instance is installed locally, the user can execute the following command in the `graphql-server` folder: `zendro bulk-download -f <filename> -n <modelname>`. To configure the delimiters (`ARRAY_DELIMITER`, `FIELD_DELIMITER` and `RECORD_DELIMITER`) and the record limit (`LIMIT_RECORDS`), set the corresponding environment variables in `graphql-server/.env`.
- If the Zendro instance is accessible remotely, modify the `zendro/.env.migration` configuration file to point to the remote Zendro instance. After that, execute `zendro bulk-download -f <filename> -n <modelname> -r` to download the records to CSV.
It is possible to generate default or empty plots via CLI.
When a user wants to create an empty plot, the plot name (`-f`) and the visualization library (`-t`) must be specified. The other options, the location of the plot menu (`-m`) and the item name in the plot menu (`-n`), are optional. For example, an empty plot named `barchart` using the `plotly` library, located in the top menu bar, can be generated by executing the following command in the `single-page-app` folder:
`zendro create-plot -f barchart -t plotly -m top`
Then the user can customize the data processing in the file `single-page-app/src/pages/barchart.tsx`. In the `fetchData` function, the user can pass the required query as an argument to the `zendro.request` function, process the response `res` into the desired format, and assign it to the `data` variable. The user can then pass `data` as a parameter to the `<PlotlyPlot/>` component. Besides, the layout and other plot parameters can be set in `single-page-app/src/zendro/plots/barchart.tsx`.
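The processing step described above can be sketched as follows; the response shape, model and attribute names are assumptions, and in the real page `res` would come from awaiting `zendro.request` with your query:

```javascript
// Hypothetical response as it might be returned for a query sent via
// zendro.request inside fetchData (model and attributes are assumptions).
const res = {
  countries: [
    { name: "Mexico", population: 126 },
    { name: "Germany", population: 83 },
  ],
};

// Reshape the records into the x/y arrays a Plotly bar chart expects;
// this `data` object is what would be passed to the <PlotlyPlot/> component.
const data = {
  x: res.countries.map((c) => c.name),
  y: res.countries.map((c) => c.population),
};
```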
Similarly, if the user wants to generate a plot named `circle` with the `d3` library in the left menu, the following command should be executed in the `single-page-app` folder:
`zendro create-plot -f circle -t d3 -m left`
The processed `data` can be passed as a parameter to the `<D3Plot/>` component in the file `single-page-app/src/pages/circle.tsx`. Meanwhile, the plot can be customized in the corresponding `single-page-app/src/zendro/plots/circle.tsx` file.
By executing the following command in the `single-page-app` folder, three default plots are generated:
`zendro create-plot -p`
- scatter-plot
Only numerical attributes can be selected in the scatter plot, and the attribute for the Y-axis must be specified. If no attribute has been selected for the X-axis, the Y-axis values are used for generating the plot. Besides, different modes can be selected for the plot, namely `lines`, `markers` and `lines+markers`.
- rain-cloud-plot
Only numerical attributes can be selected in the rain-cloud plot. It is possible to visualize multiple attributes within one plot, so the user needs to specify `The number of numerical attributes`; the corresponding selectors are then rendered, and the user can select the necessary attributes from different data models. Besides, the user can choose the `Direction` of the plot, i.e. whether it is rendered horizontally (default) or vertically. Moreover, the user can specify the `Tickangle` and use the `Autoscale` button in the plot for a better alignment. Apart from that, the `Span mode` (default: soft) can be selected for the plot; a detailed explanation of the span mode can be found in the Plotly documentation.
- boxplot
The setup of a boxplot is very similar to that of a rain-cloud plot, except that there is no `Span mode` option for a boxplot.
Zendro is the product of a joint effort between the Forschungszentrum Jülich, Germany and the Comisión Nacional para el Conocimiento y Uso de la Biodiversidad, México, to generate a tool that allows efficiently building data warehouses capable of dealing with diverse data generated by different research groups in the context of the FAIR principles and multidisciplinary projects. The name Zendro comes from the words Zenzontle and Drossel, which are Mexican and German words denoting a mockingbird, a bird capable of “talking” different languages, similar to how Zendro can connect your data warehouse from any programming language or data analysis pipeline.
Francisca Acevedo¹, Vicente Arriaga¹, Katja Dohm³, Constantin Eiteneuer², Sven Fahrner², Frank Fischer⁴, Asis Hallab², Alicia Mastretta-Yanes¹, Roland Pieruschka², Alejandro Ponce¹, Yaxal Ponce², Francisco Ramírez¹, Irene Ramos¹, Bernardo Terroba¹, Tim Rehberg³, Verónica Suaste¹, Björn Usadel², David Velasco², Thomas Voecking³, Dan Wang²
1. CONABIO - Comisión Nacional para el Conocimiento y Uso de la Biodiversidad, México
2. Forschungszentrum Jülich - Germany
3. auticon - www.auticon.com
4. InterTech - www.intertech.de
Asis Hallab and Alicia Mastretta-Yanes coordinated the project. Asis Hallab designed the software. Programming of code generators, the browser based single page application interface, and the GraphQL application programming interface was done by Katja Dohm, Constantin Eiteneuer, Francisco Ramírez, Tim Rehberg, Veronica Suaste, David Velasco, Thomas Voecking, and Dan Wang. Counselling and use case definitions were contributed by Francisca Acevedo, Vicente Arriaga, Frank Fischer, Roland Pieruschka, Alejandro Ponce, Irene Ramos, and Björn Usadel. User experience and application of Zendro on data management projects was carried out by Asis Hallab, Alicia Mastretta-Yanes, Yaxal Ponce, Irene Ramos, Verónica Suaste, and David Velasco. Logo design was made by Bernardo Terroba.