From bf4b915986f4a3905cba2089d4a91a9f395d3ab8 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 13:29:51 +0100
Subject: [PATCH 01/24] Update monitoring_overview.md
---
docs/book/monitoring/monitoring_overview.md | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/docs/book/monitoring/monitoring_overview.md b/docs/book/monitoring/monitoring_overview.md
index d64b864a7d..18b1831b80 100644
--- a/docs/book/monitoring/monitoring_overview.md
+++ b/docs/book/monitoring/monitoring_overview.md
@@ -3,23 +3,25 @@ description: How Evidently ML Monitoring works.
---
# How It Works
-ML monitoring helps you track data and ML model performance over time, identify issues, and receive alerts.
+ML monitoring helps track data and ML model performance over time, identify issues, and get alerts.
* **Instrumentation**: You use the open-source Evidently Python library to collect metrics and generate JSON `snapshots` containing data summaries, metrics, and test results.
* **Snapshot Storage**: You save `snapshots` in Evidently Cloud or in a local or remote workspace.
-* **Monitoring Service**: You visualize metrics from `snapshots` on a Dashboard in the Evidently Cloud web app or a self-hosted UI service.
+* **Monitoring Service**: You visualize data from `snapshots` on a Dashboard in the Evidently Cloud web app or a self-hosted UI service.
-The evaluation functionality relies on Evidently `Reports` and `Test Suites` available in the open-source Python library. You can use all 100+ metrics and tests on data quality, data and prediction drift, model quality (classification, regression, ranking, LLMs, NLP models), etc. You can also add custom metrics.
+The evaluation functionality relies on the open-source Evidently `Reports` and `Test Suites`. You can use 100+ Metrics and Tests on data quality, data and prediction drift, model quality (classification, regression, ranking, LLMs, NLP models), etc. You can also add custom metrics.
![](../.gitbook/assets/cloud/cloud_service_overview-min.png)
-By default, Evidently Cloud does not store raw data or model inferences. `Snapshots` contain data aggregates (e.g., distribution summaries) and metadata with test results. This hybrid architecture helps avoid data duplication and preserves its privacy.
+{% hint style="info" %}
+**Data privacy.** By default, Evidently does not store raw data or model inferences. `Snapshots` contain data aggregates (e.g., distribution summaries) and metadata with test results. This hybrid architecture helps avoid data duplication and preserves its privacy.
+{% endhint %}
# Deployment Options
-* **Evidently Cloud (Recommended)**: This is the easiest way to start with ML monitoring without the need to manage infrastructure. Snapshots and the UI service are hosted by Evidently. Evidently Cloud includes support, a scalable backend, and premium features such as in-built alerting, user management, and UI features like visual dashboard design.
-* **Self-hosted ML Monitoring**: You can also self-host an open-source dashboard service, suitable for proof of concept, small-scale deployments, or teams with advanced infrastructure knowledge.
-* **Self-hosted Enterprise Deployment**: For a scalable self-hosted version of Evidently Platform with support, contact us for a [demo of Evidently Enterprise](https://www.evidentlyai.com/get-demo) which can be hosted in your private cloud or on-premises.
+* **Evidently Cloud (Recommended)**: This is the easiest way to start, with UI service and snapshots hosted by Evidently. Evidently Cloud includes support, a scalable backend, and premium features such as built-in alerting, user management, and visual dashboard design.
+* **Self-hosted ML Monitoring**: In this case, you host the open-source UI dashboard service and manage the infrastructure and storage on your own. Recommended as proof of concept, small-scale deployments, or for teams with advanced infrastructure knowledge.
+* **Self-hosted Enterprise Deployment**: For a scalable self-hosted version of Evidently Platform with support, contact us for a [demo of Evidently Enterprise](https://www.evidentlyai.com/get-demo). Evidently Enterprise can be hosted in your private cloud or on-premises.
# Deployment architecture
From 4c974076aa3ab578199aa9aeafe3d121515df93a Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 20:09:29 +0100
Subject: [PATCH 02/24] Update monitoring_overview.md
---
docs/book/monitoring/monitoring_overview.md | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/docs/book/monitoring/monitoring_overview.md b/docs/book/monitoring/monitoring_overview.md
index 18b1831b80..b172e6b11b 100644
--- a/docs/book/monitoring/monitoring_overview.md
+++ b/docs/book/monitoring/monitoring_overview.md
@@ -3,51 +3,51 @@ description: How Evidently ML Monitoring works.
---
# How It Works
-ML monitoring helps track data and ML model performance over time, identify issues, and get alerts.
+ML monitoring helps track data and model performance over time, identify issues, and get alerts.
* **Instrumentation**: You use the open-source Evidently Python library to collect metrics and generate JSON `snapshots` containing data summaries, metrics, and test results.
* **Snapshot Storage**: You save `snapshots` in Evidently Cloud or in a local or remote workspace.
* **Monitoring Service**: You visualize data from `snapshots` on a Dashboard in the Evidently Cloud web app or a self-hosted UI service.
-The evaluation functionality relies on the open-source Evidently `Reports` and `Test Suites`. You can use 100+ Metrics and Tests on data quality, data and prediction drift, model quality (classification, regression, ranking, LLMs, NLP models), etc. You can also add custom metrics.
+The evaluation functionality relies on the open-source Evidently `Reports` and `Test Suites`. You can use 100+ Metrics and Tests on data quality, data and prediction drift, model quality (classification, regression, ranking, LLMs, NLP models), and add custom metrics.
![](../.gitbook/assets/cloud/cloud_service_overview-min.png)
{% hint style="info" %}
-**Data privacy.** By default, Evidently does not store raw data or model inferences. `Snapshots` contain data aggregates (e.g., distribution summaries) and metadata with test results. This hybrid architecture helps avoid data duplication and preserves its privacy.
+**Data privacy.** By default, Evidently does not store raw data or model inferences. Snapshots contain data aggregates (e.g., distribution summaries) and metadata with test results. This hybrid architecture helps avoid data duplication and preserves its privacy.
{% endhint %}
# Deployment Options
* **Evidently Cloud (Recommended)**: This is the easiest way to start, with UI service and snapshots hosted by Evidently. Evidently Cloud includes support, a scalable backend, and premium features such as built-in alerting, user management, and visual dashboard design.
-* **Self-hosted ML Monitoring**: In this case, you host the open-source UI dashboard service and manage the infrastructure and storage on your own. Recommended as proof of concept, small-scale deployments, or for teams with advanced infrastructure knowledge.
+* **Self-hosted ML Monitoring**: Best for a proof of concept, small-scale deployments, or teams with advanced infrastructure knowledge. In this case, you must host the open-source UI dashboard service and manage the data storage on your own.
* **Self-hosted Enterprise Deployment**: For a scalable self-hosted version of Evidently Platform with support, contact us for a [demo of Evidently Enterprise](https://www.evidentlyai.com/get-demo). Evidently Enterprise can be hosted in your private cloud or on-premises.
# Deployment architecture
-For initial exploration, you can send individual snapshots ad hoc. For production monitoring, you can orchestrate batch evaluation jobs, or send data directly from live ML services.
+You can start by sending snapshots ad hoc. For production monitoring, you can orchestrate batch evaluation jobs or send live data directly from ML services.
## Batch
You can run monitoring jobs using a Python script or a workflow manager tool like Airflow.
-If you already have batch data pipelines, you can add a monitoring or validation step directly when you need it. Say, you have a batch ML model and score new data once per day. Every time you generate the predictions, you can capture a snapshot with the input data summary, data quality metrics, and prediction drift checks. Once you get the true labels, you can compute the model performance and log the model quality metrics to update the performance dashboard.
+You can add a monitoring or validation step to an existing batch pipeline. Say you generate predictions daily: on every run, you can capture a snapshot with the input data summary and check for data quality and prediction drift. Once you get the true labels, you can compute the model quality and add model quality metrics to the Dashboard.
![](../.gitbook/assets/monitoring/monitoring_batch_workflow_min.png)
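As an illustration, here is a minimal sketch of such a daily monitoring job. It assumes a local Workspace (created as described later in these docs), an existing Project, and placeholder file paths for the inference logs; adapt the metrics and storage to your pipeline.

```python
import pandas as pd

from evidently.metric_preset import DataDriftPreset, DataQualityPreset
from evidently.report import Report
from evidently.ui.workspace import Workspace

# Connect to a local Workspace and an existing Project (ID and paths are placeholders)
ws = Workspace.create("evidently_ui_workspace")
project = ws.get_project("PROJECT_ID")

# Load the current batch of inference data and the reference dataset
batch = pd.read_parquet("inference_logs/2024-05-15.parquet")
reference = pd.read_parquet("reference.parquet")

# Profile the input data and check for data and prediction drift
daily_report = Report(metrics=[DataQualityPreset(), DataDriftPreset()])
daily_report.run(reference_data=reference, current_data=batch)

# Send the snapshot to the Project: it becomes a new point on the monitoring Dashboard
ws.add_report(project.id, daily_report)
```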
-This approach also allows you to run tests for your batch pipelines or during CI/CD. For example, you can validate data at ingestion or model after retraining, and implement automatic actions based on the validation results.
+You can also run tests during CI/CD (e.g., after model retraining) and implement automatic actions based on the validation results.
-If you store your data (model inference logs) in a data warehouse, you can design separate monitoring jobs. For example, you can set up a script that would query the data and compute snapshots on a regular cadence, e.g., hourly, daily, weekly, or after new data or labeles are added.
+You can design separate monitoring jobs if you store your data (model inference logs) in a data warehouse. For example, you can set up a script to query the data and compute snapshots on a regular cadence, e.g., hourly, daily, weekly, or after new data or labels are added.
## Near real-time
-If you have a live ML service, you can send the data and predictions for near real-time monitoring. In this case, you must deploy and configure the **Evidently collector service** and send inferences from your ML service to the self-hosted collector.
+If you have a live ML service, you can deploy and configure the **Evidently collector service**. You will then send the incoming data and predictions for near real-time monitoring.
Evidently Collector will manage data batching, compute `Reports` or `Test Suites` based on the configuration, and send them to the Evidently Cloud or to your designated workspace.
![](../.gitbook/assets/monitoring/monitoring_collector_min.png)
-If you receive delayed ground truth, you can also later compute and log the model quality to the same project via a batch workflow. You can run it as a separate process or monitoring job.
+If you receive delayed ground truth, you can also later compute and log the model quality to the same Project via a batch workflow. You can run it as a separate process or monitoring job.
![](../.gitbook/assets/monitoring/monitoring_collector_delayed_labels_min.png)
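As a rough sketch of such a delayed-labels job: it assumes a `labeled_data.parquet` file that joins predictions with the true labels, placeholder column names, and the same Workspace and Project handles as in the batch sketch above; the `timestamp` argument mirrors the custom-timestamp option described in the snapshots section.

```python
from datetime import datetime

import pandas as pd

from evidently import ColumnMapping
from evidently.metric_preset import ClassificationPreset
from evidently.report import Report
from evidently.ui.workspace import Workspace

ws = Workspace.create("evidently_ui_workspace")
project = ws.get_project("PROJECT_ID")

# Predictions joined with the ground truth labels (placeholder path and column names)
labeled_batch = pd.read_parquet("labeled_data.parquet")
column_mapping = ColumnMapping(target="label", prediction="predicted_label")

# Compute model quality and backdate the snapshot to the period the labels refer to
quality_report = Report(
    metrics=[ClassificationPreset()],
    timestamp=datetime(2024, 5, 14),
)
quality_report.run(
    reference_data=None,
    current_data=labeled_batch,
    column_mapping=column_mapping,
)

# Log it to the same Project as the near real-time snapshots
ws.add_report(project.id, quality_report)
```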
From 542fb17ffffd85457412b1955c599d4af074284f Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 20:28:38 +0100
Subject: [PATCH 03/24] Update workspace.md
---
docs/book/monitoring/workspace.md | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/docs/book/monitoring/workspace.md b/docs/book/monitoring/workspace.md
index f51d32a1d0..466200a778 100644
--- a/docs/book/monitoring/workspace.md
+++ b/docs/book/monitoring/workspace.md
@@ -1,19 +1,19 @@
---
-description: Set up an Evidently Cloud account or self-hosted workspace.
+description: Connect to Evidently Cloud or self-hosted Workspace.
---
# What is a Workspace?
You need a Workspace to organize your data and Projects.
-* In Evidently Cloud, your account is your workspace. As simple as that!
-* In self-hosted deployments, a workspace means a remote or local directory where you store the snapshots. The Monitoring UI will read the data from this source.
+* In Evidently Cloud, your account is your Workspace. As simple as that!
+* In self-hosted deployments, a Workspace is a remote or local directory where you store the snapshots. The Monitoring UI will read the data from this source.
# Evidently Cloud
If you do not have one yet, create an [Evidently Cloud account](https://app.evidently.cloud/signup).
-**Get the API token**. You will use it to connect to the Evidently Cloud workspace from your Python environment. Use the "key" sign in the left menu to get to the token page, and click "generate token." Save it in a temporary file since it won't be visible once you leave the page.
+**Get the API token**. You will use it to connect to the Evidently Cloud Workspace from your Python environment. Use the "key" sign in the left menu to get to the token page, and click "generate token." Save it in a temporary file since it won't be visible once you leave the page.
**Connect to the workspace**. To connect to the Evidently Cloud workspace, you must first [install Evidently](../installation/install-evidently.md).
@@ -40,7 +40,7 @@ url="https://app.evidently.cloud")
## Local Workspace
In this scenario, you generate and store the snapshots and run the Monitoring UI on the same machine.
-To create a local workspace and assign a name:
+To create a local Workspace and assign a name:
```python
ws = Workspace.create("evidently_ui_workspace")
@@ -54,9 +54,9 @@ You can pass a `path` parameter to specify the path to a local directory.
## Remote Workspace
-In this scenario, after generating the snapshots, you will send them to the remote server. You must run the Monitoring UI on the same remote server, so that it directly interfaces with the filesystem where the snapshots are stored.
+In this scenario, you send the snapshots to a remote server. You must run the Monitoring UI on the same remote server. It will directly interface with the filesystem where the snapshots are stored.
-To create a remote workspace (UI should be running at this address):
+To create a remote Workspace (UI should be running at this address):
```python
workspace = RemoteWorkspace("http://localhost:8000")
@@ -75,13 +75,13 @@ You can pass the following parameters:
## Remote snapshot storage
-In the examples above, you store the snapshots and run the UI on the same server. Alternatively, you can store snapshots in a remote data store (such as an S3 bucket). In this case, the Monitoring UI service will interface with the designated data store to read the snapshot data.
+In the examples above, you store the snapshots and run the UI on the same server. Alternatively, you can store snapshots in a remote data store (such as an S3 bucket). The Monitoring UI service will interface with the designated data store to read the snapshot data.
To connect to data stores, Evidently uses `fsspec`, which allows accessing data on remote file systems via a standard Python interface.
-You can verify supported data stores in the [Fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#built-in-implementations](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations).
+You can verify supported data stores in the Fsspec documentation: [built-in implementations](https://filesystem-spec.readthedocs.io/en/latest/api.html#built-in-implementations) and [other implementations](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations).
-For example, to read snapshots from an S3 bucket (in this example we have MinIO running on localhost:9000), you must specify environment variables:
+For example, to read snapshots from an S3 bucket (with MinIO running on localhost:9000), you must specify environment variables:
```
FSSPEC_S3_ENDPOINT_URL=http://localhost:9000/
From 68ffaf36463bb01dd1d763ddc6826e95dd4c40bd Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 20:32:39 +0100
Subject: [PATCH 04/24] Update workspace.md
---
docs/book/monitoring/workspace.md | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/docs/book/monitoring/workspace.md b/docs/book/monitoring/workspace.md
index 466200a778..8409756b7a 100644
--- a/docs/book/monitoring/workspace.md
+++ b/docs/book/monitoring/workspace.md
@@ -15,7 +15,7 @@ If you do not have one yet, create an [Evidently Cloud account](https://app.evid
**Get the API token**. You will use it to connect to the Evidently Cloud Workspace from your Python environment. Use the "key" sign in the left menu to get to the token page, and click "generate token." Save it in a temporary file since it won't be visible once you leave the page.
-**Connect to the workspace**. To connect to the Evidently Cloud workspace, you must first [install Evidently](../installation/install-evidently.md).
+**Connect to the Workspace**. To connect to the Evidently Cloud Workspace, you must first [install Evidently](../installation/install-evidently.md).
```python
pip install evidently
@@ -49,7 +49,7 @@ ws = Workspace.create("evidently_ui_workspace")
You can pass a `path` parameter to specify the path to a local directory.
{% hint style="info" %}
-**Code example** [Self-hosting tutorial](../get-started/tutorial-monitoring.md) shows a complete Python script to create and populate a local workspace.
+**Code example**: The [Self-hosting tutorial](../get-started/tutorial-monitoring.md) shows a complete Python script to create and populate a local Workspace.
{% endhint %}
## Remote Workspace
@@ -111,11 +111,11 @@ evidently ui --workspace . /workspace
evidently ui --workspace ./workspace --port 8080
```
-To view the Evidently interface, go to URL http://localhost:8000 or a different specified port in your web browser.
+To view the Evidently interface, go to http://localhost:8000 (or the port you specified) in your web browser.
-## [DANGER] Delete workspace
+## [DANGER] Delete Workspace
-If you want to delete an existing workspace (for example, an empty or a test workspace), run the command from the Terminal:
+To delete a Workspace (for example, an empty or a test Workspace), run the command from the Terminal:
```
cd src/evidently/ui/
@@ -128,5 +128,4 @@ rm -r workspace
# What’s next?
-Regardless of the workspace type (cloud, local, or remote), you can use the same methods to create and manage Projects. Head to the next section to see how.
-
+After you set up a Workspace, you can add and manage Projects. Head to the [next section to see how](add_project.md).
From 9fe51911ff5fd469b883bbc1cf0a6d46f53d837f Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 20:33:26 +0100
Subject: [PATCH 05/24] Update workspace.md
---
docs/book/monitoring/workspace.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/book/monitoring/workspace.md b/docs/book/monitoring/workspace.md
index 8409756b7a..e027a6a850 100644
--- a/docs/book/monitoring/workspace.md
+++ b/docs/book/monitoring/workspace.md
@@ -1,5 +1,5 @@
---
-description: Connect to Evidently Cloud or self-hosted Workspace.
+description: Connect to the Evidently Cloud or a self-hosted Workspace.
---
# What is a Workspace?
From b818f1454386fabd152fa93f2b79ecbaf46cbe65 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 20:46:46 +0100
Subject: [PATCH 06/24] Update add_project.md
---
docs/book/monitoring/add_project.md | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/docs/book/monitoring/add_project.md b/docs/book/monitoring/add_project.md
index fd1c27ec2a..c55811e2a2 100644
--- a/docs/book/monitoring/add_project.md
+++ b/docs/book/monitoring/add_project.md
@@ -1,16 +1,16 @@
---
-description: How to create a Project for your monitoring use case.
+description: Organize your data in a Project.
---
# What is a Project?
-A Project helps gather all Reports and Test Suites associated with the same use case. Each Project has a dedicated monitoring dashboard and snapshot storage.
+A Project helps gather all Reports and Test Suites related to the same use case. Each Project has a dedicated monitoring Dashboard and snapshot storage.
{% hint style="info" %}
-**Should you have one Project for one ML model?** You will often create one project per ML model or dataset, but this is not a strict rule. For example, you can log the performance of a champion and challenger models to the same Project. Or, store data on related models (such as demand forecasting models by country) in one Project and use tags to organize them. You can also set up your monitoring for any data pipeline or dataset.
+**Should you have one Project for one ML model?** You will often create one Project per ML model or dataset, but this is not a strict rule. For example, you can log data from champion/challenger models or from related models in one Project and use Tags to organize them.
{% endhint %}
-Once you create a Project, you can connect to it from your Python environment to send the data or edit the dashboards. In Evidently Cloud, you can work both via API and a graphic user interface.
+Once you create a Project, you can connect via Python to send data or edit the Dashboard. In Evidently Cloud, you can also use the web interface.
# Create a Project
@@ -50,7 +50,7 @@ After creating a Project, you can click to open a Dashboard. Since there's no da
Team management is a Pro feature available in the Evidently Cloud.
{% endhint %}
-You can associate a Project with a particular Team, such as a "Marketing team" for related ML models. A Project inside the Team will be visible to all Team members.
+You can associate a Project with a particular Team: it becomes visible to all Team members.
You must create a Team before adding a Project. Navigate to the “Teams” section in the left menu, and add a new one. You can add other users to this Team at any point after creating it.
@@ -58,7 +58,7 @@ You must create a Team before adding a Project. Navigate to the “Teams” sect
{% tab title="API" %}
-After creating the team, copy the `team_ID` from the team page. To add a Project to a Team, reference the team_id when creating the Project:
+After creating the Team, copy the `team_id` from the Team page. To add a Project to a Team, reference the `team_id` when creating the Project:
```
project = ws.create_project("Add your project name", team_id="TEAM ID")
@@ -84,7 +84,7 @@ Click on the “plus” sign on the home page and type your Project name and des
## Connect to a Project
-To connect to an existing Project from your Python environment (for example, if you first created the Project in the UI and now want to send data to it), use the `get_project` method.
+To connect to an existing Project from Python, use the `get_project` method.
```python
project = ws.get_project("PROJECT_ID")
@@ -92,7 +92,7 @@ project = ws.get_project("PROJECT_ID")
## Save changes
-After you make any changes to a Project via API (such as editing description or adding new monitoring panels), you must use the `save()` command:
+After making changes to the Project (such as editing the description or adding monitoring Panels), always use the `save()` command:
```python
project.save()
@@ -147,9 +147,9 @@ Each Project has the following parameters.
| `name: str` | Project name. |
| `id: UUID4 = Field(default_factory=uuid.uuid4)` | Unique identifier of the Project. Assigned automatically. |
| `description: Optional[str] = None` | Optional description. Visible when you browse Projects. |
-| `dashboard: DashboardConfig` | Configuration of the Project dashboard. It describes the monitoring Panels that appear on the dashboard.
**Note**: Explore the [Dashboard Design](design_dashboard_api.md) section for details. There is no need to explicitly pass `DashboardConfig` as a parameter if you use the `.dashboard.add_panel` method to add Panels. |
-| `date_from: Optional[datetime.datetime] = None` | Start DateTime of the monitoring dashboard. By default, Evidently shows data for all available periods based on the snapshot timestamps.
You can set a specific date or a relative DateTime. For example, to refer to the last 30 days:
`from datetime import datetime, timedelta`
`datetime.now() + timedelta(-30)`
When you view the dashboard, the data will be visible from this start date. You can switch to other dates in the interface. |
-| `date_to: Optional[datetime.datetime] = None` | End datetime of the monitoring dashboard.
Works the same as above. |
+| `dashboard: DashboardConfig` | Dashboard configuration that describes the composition of monitoring Panels.
**Note**: See [Dashboard Design](design_dashboard_api.md) for details. You don't need to explicitly pass `DashboardConfig` if you use the `.dashboard.add_panel` method to add Panels. |
+| `date_from: Optional[datetime.datetime] = None` | Start DateTime of the monitoring Dashboard. By default, Evidently shows data for all available periods based on the snapshot timestamps.
You can set a different DateTime. E.g., to refer to the last 30 days:
`from datetime import datetime, timedelta`
`datetime.now() + timedelta(-30)`|
+| `date_to: Optional[datetime.datetime] = None` | End DateTime of the monitoring Dashboard.
Works the same as above. |
# What’s next?
From b93a795ae7e3a322190ba527333a92208eea39f9 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 20:47:43 +0100
Subject: [PATCH 07/24] Update add_project.md
---
docs/book/monitoring/add_project.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/book/monitoring/add_project.md b/docs/book/monitoring/add_project.md
index c55811e2a2..6862ef2715 100644
--- a/docs/book/monitoring/add_project.md
+++ b/docs/book/monitoring/add_project.md
@@ -42,7 +42,7 @@ After creating a Project, you can click to open a Dashboard. Since there's no da
{% endtabs %}
-**Project ID**. Once you run `create_project`, you will see the Project ID. You can later use it to reference the Project. You can also copy the Project ID directly from the UI: it appears above the monitoring dashboard.
+**Project ID**. Once you run `create_project`, you will see the Project ID. You can later use it to reference the Project. You can also copy the Project ID directly from the UI: it appears above the monitoring Dashboard.
## Add a Team Project
@@ -155,4 +155,4 @@ Each Project has the following parameters.
Once you create or connect to a Project, you can:
* [Send snapshots](snapshots.md) using the `add_report` or `add_test_suite` methods.
-* Configure the monitoring dashboard in the [user interface](add_dashboard_tabs.md) or via the [Python API](design_dashboard_api.md).
+* Configure the monitoring Dashboard in the [user interface](add_dashboard_tabs.md) or via the [Python API](design_dashboard_api.md).
From cd34f49f574471e97e79595795c945b7f39bf636 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 20:48:13 +0100
Subject: [PATCH 08/24] Update add_project.md
---
docs/book/monitoring/add_project.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/book/monitoring/add_project.md b/docs/book/monitoring/add_project.md
index 6862ef2715..7dde90872e 100644
--- a/docs/book/monitoring/add_project.md
+++ b/docs/book/monitoring/add_project.md
@@ -1,5 +1,5 @@
---
-description: Organize your data in a Project.
+description: Create a Project for your use case.
---
# What is a Project?
From 2d0b4ae18fa789041a175b306abca09d228e469e Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 20:49:21 +0100
Subject: [PATCH 09/24] Update add_project.md
---
docs/book/monitoring/add_project.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/book/monitoring/add_project.md b/docs/book/monitoring/add_project.md
index 7dde90872e..b825647b48 100644
--- a/docs/book/monitoring/add_project.md
+++ b/docs/book/monitoring/add_project.md
@@ -1,5 +1,5 @@
---
-description: Create a Project for your use case.
+description: Set up a Project for your use case.
---
# What is a Project?
From 8e2b2a9777ddade7c89a6a0c981dcbd41a92e97b Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 21:08:12 +0100
Subject: [PATCH 10/24] Update snapshots.md
---
docs/book/monitoring/snapshots.md | 45 +++++++++++++++----------------
1 file changed, 21 insertions(+), 24 deletions(-)
diff --git a/docs/book/monitoring/snapshots.md b/docs/book/monitoring/snapshots.md
index e5d7221194..b0ed6fdff4 100644
--- a/docs/book/monitoring/snapshots.md
+++ b/docs/book/monitoring/snapshots.md
@@ -1,20 +1,20 @@
---
-description: How to send snapshots for data and ML monitoring.
+description: Run evaluations and send the results.
---
To visualize data in the Evidently ML monitoring interface, you must capture data and model metrics as Evidently JSON `snapshots`.
# What is a snapshot?
-Evidently `snapshot` is a JSON file summarizing data and model performance for a specific period. Each snapshot includes metrics, data summaries, test results, and supporting render data. You choose which metrics to include when creating a snapshot. This determines what you can later plot on the monitoring dashboard or alert on.
+`Snapshots` are JSON summaries of data and model performance for a given period. They contain metrics, data summaries, test results, and supporting render data. You pick what exactly goes in a `snapshot`: this also determines what you can alert on.
-By sending multiple snapshots to the Project (e.g., hourly, daily, or weekly), you create a data source for monitoring panels. You can plot trends over time by parsing values from individual snapshots. You can also view individual snapshots for each period.
+By sending multiple `snapshots` to the Project (e.g., hourly, daily, or weekly), you create a data source for monitoring Panels. You can plot trends over time by parsing values from individual snapshots.
You can:
-* Send the snapshots sequentially on a schedule (e.g., send a data summary every hour or every day).
-* Send one-off snapshots after specific evaluations (e.g., send results of CI/CD checks).
-* Backdate your snapshots (e.g., to log model quality after you get the labels).
-* Log multiple snapshots for the same period (e.g., for shadow and production models).
+* Send the snapshots sequentially (e.g., hourly or daily data summaries).
+* Send one-off snapshots after specific evaluations (e.g., results of CI/CD checks).
+* Backdate your snapshots (e.g., log model quality after you get the labels).
+* Add multiple snapshots for the same period (e.g., for shadow and production models).
{% hint style="info" %}
**Snapshots vs. Reports.** The snapshot functionality is directly based on the Evidently Reports and Test Suites. Put simply, a snapshot is a JSON "version" of the Evidently Report or Test Suite.
@@ -24,13 +24,13 @@ You can:
Here is the general workflow.
-**1. Connect to a Project**. Connect to a [Project](add_project.md) in your workspace where you want to send the snapshots.
+**1. Connect to a [Project](add_project.md)** in your workspace where you want to send the snapshots.
```python
project = ws.get_project("PROJECT_ID")
```
-**2. Define and compute a snapshot**. Define an Evidently Test Suite or Report as usual:
+**2. Define and compute a snapshot**.
* Create a `Report` or `Test Suite` object. Define the `metrics` or `tests`.
* Pass the `current` dataset you want to evaluate or profile.
* Optional: pass the `column_mapping` to define the data schema. (Required for model quality or text data checks to map target, prediction, text columns, etc.).
@@ -40,7 +40,7 @@ project = ws.get_project("PROJECT_ID")
For monitoring, you can also add `tags` and `timestamp` to your snapshots.
{% hint style="info" %}
-**New to Evidently?** Check the [Reports and Test Suites tutorial](../get-started/tutorial.md)) and a related [docs section](../tests-and-reports/) for end-to-end examples. Browse [Presets](../presets/all-presets.md), [Metrics](../reference/all-metrics.md) and [Tests](../reference/all-tests.md) to see available checks.
+**New to Evidently?** Check the [Reports and Tests Tutorial](../get-started/tutorial.md) and a related [docs section](../tests-and-reports/) for end-to-end examples. Browse [Presets](../presets/all-presets.md), [Metrics](../reference/all-metrics.md) and [Tests](../reference/all-tests.md) to see available checks.
{% endhint %}
**3. Send the snapshot**. After you compute the Report or Test Suite, use the `add_report` or `add_test_suite` methods to send them to a corresponding Project in your workspace.
@@ -59,7 +59,7 @@ data_report.run(reference_data=None, current_data=batch1)
ws.add_report(project.id, data_report)
```
-**Send a Test Suite**. To create and send Test Suite with data drift checks, passing both current and reference datasets:
+**Send a Test Suite**. To create and send a Test Suite with data drift checks, passing current and reference data:
```python
drift_checks = TestSuite(tests=[
@@ -69,19 +69,19 @@ drift_checks.run(reference_data=reference_batch, current_data=batch1)
ws.add_test_suite(project.id, drift_checks)
```
-**Send a snapshot**. The `add_report` or `add_test_suite` methods generate snapshots automatically. But if you already have a snapshot (e.g., a previously saved Report), you can load it into Python and send it to your workspace:
+**Send a snapshot**. The `add_report` or `add_test_suite` methods generate snapshots automatically. If you already have a snapshot (e.g., a previously saved Report), you can load it into Python and add it to your Project:
```
ws.add_snapshot(project.id, snapshot.load("data_drift_snapshot.json"))
```
{% hint style="info" %}
-**Snapshot size**. Ensure that a single upload to Evidently Cloud does not exceed 50 GB for free trial users or 500 GB for users on the Pro plan. Note that this limitation applies to the size of the resulting JSON, not the dataset itself. For example, a data drift report for 50 columns and 10,000 rows of current and reference data results in a snapshot of approximately 1MB. (For 100 columns x 10,000 rows: ~ 3.5MB; for 100 columns x 100,000 rows: ~ 9MB). However, the size varies depending on the metrics or tests used.
+**Snapshot size**. A single upload to Evidently Cloud should not exceed 50MB for free trial users or 500MB for the Pro plan. This limitation applies to the size of the resulting JSON, not the dataset itself. For example, a data drift report for 50 columns and 10,000 rows of current and reference data results in a snapshot of approximately 1MB. (For 100 columns x 10,000 rows: ~ 3.5MB; for 100 columns x 100,000 rows: ~ 9MB). However, the size varies depending on the metrics or tests used.
{% endhint %}
## Add timestamp
-Each `snapshot` is associated with a single timestamp. By default, Evidently will assign the `datetime.now()` timestamp using the Report/Test Suite computation time based on the user time zone.
+Each `snapshot` is associated with a single timestamp. By default, Evidently assigns `datetime.now()` as the timestamp, based on the Report/Test Suite computation time in the user time zone.
You can also add your own timestamp:
@@ -112,13 +112,13 @@ Since you can assign arbitrary timestamps, you can log snapshots asynchronously
You can include `tags` and `metadata` in snapshots. This is optional but useful for search and data filtering for monitoring Panels.
Examples of when to use tags include:
-* You have production/shadow or champion/challenger models and want to visualize them separately on a dashboard.
+* You have production/shadow or champion/challenger models.
* You compute snapshots with different reference datasets (for example, to compare distribution drift week-by-week and month-by-month).
* You have data for multiple models of the same type inside a Project.
* You capture snapshots for multiple segments in your data.
-* You want to tag individual Reports in a Project, such as a datasheet card for the training dataset, a model card, etc.
+* You want to tag individual Reports in a Project, e.g., a datasheet card or a model card.
-**Custom tags**. Pass any custom tags as a list:
+**Custom tags**. Pass any custom Tags as a list:
```python
data_drift_report = Report(
@@ -143,7 +143,7 @@ data_drift_report = Report(
)
```
-**Default metadata**. You can also use built-in metadata fields `model_id`, `reference_id`, `batch_size`, `dataset_id`:
+**Default metadata**. Use built-in metadata fields `model_id`, `reference_id`, `batch_size`, `dataset_id`:
```python
data_drift_report = Report(
@@ -157,7 +157,7 @@ data_drift_report = Report(
)
```
-**Add tags to existing Reports.**. You can also add tags to a previously generated Report or Test Suite:
+**Add Tags to existing Reports**. You can add Tags to a previously generated Report or Test Suite:
```python
data_summary_report.tags=["training_data"]
@@ -165,7 +165,7 @@ data_summary_report.tags=["training_data"]
# Delete snapshots
-To delete snapshots in the workspace `ws`, pass the Project ID and snapshot ID. You can verify the ID of the snapshot on the Report or Test Suite page.
+To delete snapshots in the Workspace `ws`, pass the Project ID and snapshot ID. You can see the snapshot ID on the Report or Test Suite page.
```python
ws.delete_snapshot(project_id, snapshot_id)
@@ -173,7 +173,4 @@ ws.delete_snapshot(project_id, snapshot_id)
# What's next?
-Now that you've sent data to the Project, you can design monitoring panels. Check the next [section](design_dashboard.md.md) to learn more.
-{% endhint %}
-
-
+Once you've sent data to the Project, you can [add monitoring Panels and Tabs](design_dashboard.md).
From 6f6e5b310a68a37c7ba277619826748d151318c9 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 21:16:31 +0100
Subject: [PATCH 11/24] Update add_dashboard_tabs.md
---
docs/book/monitoring/add_dashboard_tabs.md | 24 +++++++++++-----------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/docs/book/monitoring/add_dashboard_tabs.md b/docs/book/monitoring/add_dashboard_tabs.md
index 04453ced7e..e8cd971c87 100644
--- a/docs/book/monitoring/add_dashboard_tabs.md
+++ b/docs/book/monitoring/add_dashboard_tabs.md
@@ -1,5 +1,5 @@
---
-description: How to get a pre-built monitoring dashboard using templates.
+description: Get a pre-built monitoring Dashboard using templates.
---
# What is a Dashboard?
@@ -13,21 +13,21 @@ Each Project has a monitoring Dashboard to visualize metrics and test results ov
**Data source**. To populate the Dashboard, you must send the relevant data inside the snapshots. The Panels will be empty otherwise. Read more about [sending snapshots](snapshots.md).
{% endhint %}
-You choose how exactly to organize your Dashboard and which values to plot. By default, the Dashboard for a new Project is empty.
+Initially, the Dashboard for a new Project is empty. You can organize it and select values to plot.
-For both Evidently Cloud and open-source, you can define the composition of monitoring Panels via API. This is great for version control.
+For both Evidently Cloud and open-source, you can define monitoring Panels via API. This is great for version control.
In Evidently Cloud, you can also:
-* Get pre-built Dashboards for Data Quality, Data Drift, etc.
-* Add and modify Panels directly in the user interface.
+* Get pre-built Dashboards.
+* Add Panels directly in the user interface.
* Add multiple Tabs on the Dashboard to logically group the Panels.
-# Pre-built dashboards
+# Pre-built Dashboards
{% hint style="success" %}
Dashboard templates are a Pro feature available in the Evidently Cloud.
{% endhint %}
-Starting with template Dashboard Tabs is convenient: you get a set of monitoring Panels out of the box without adding them individually.
+Template Tabs include a pre-set combination of monitoring Panels, so you don't have to add them one by one.
To use a template:
* Enter the “Edit” mode by clicking on the top right corner of the Dashboard.
@@ -36,15 +36,15 @@ To use a template:
Optionally, give a custom name to the Tab.
-You can choose between the following options:
+You have the following options:
| Tab Template | Description | Data source |
|---|---|---|
-| Columns | Shows column values (e.g., mean, quantiles) over time for categorical and numerical columns. | Capture the `DataQualityPreset()` or `ColumnSummaryMetric()` for individual columns. |
-| Data Quality | Shows data quality metrics (e.g., missing values, duplicates) over time for the complete dataset and results of Data Quality Tests. | For the Metric Panels, capture the `DataQualityPreset()` or `DatasetSummaryMetric()`. For the Test Panel, include any individual Tests from Data Quality or Data Integrity groups.|
-| Data Drift | Shows the share of drifting features over time, and the results of Column Drift Tests. | For the Metric Panel, capture the `DataDriftPreset()` or `DataDriftTestPreset()`. For the Test Panel, include individual `TestColumnDrift()` or `DataDriftTestPreset()`. |
+| Columns | Plots column distributions over time for categorical and numerical columns. | `DataQualityPreset()` or `ColumnSummaryMetric()` for individual columns. |
+| Data Quality | Shows dataset quality metrics (e.g., missing values, duplicates) over time and results of Data Quality Tests. | For the Metric Panels: `DataQualityPreset()` or `DatasetSummaryMetric()`. For the Test Panel: any individual Tests from Data Quality or Data Integrity groups.|
+| Data Drift | Shows the share of drifting features over time, and the results of Column Drift Tests. | For the Metric Panel: `DataDriftPreset()` or `DataDriftTestPreset()`. For the Test Panel: `DataDriftTestPreset()` or individual `TestColumnDrift()` Tests. |
# What’s next?
-* See available individual [monitoring Panels types](design_dashboard.md).
+* See available [monitoring Panels types](design_dashboard.md).
* How to add [custom monitoring Panels and Tabs to your dashboard](design_dashboard_api.md).
From c5ec44a0dd09483a6000eb66d42a297b25619015 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 21:27:53 +0100
Subject: [PATCH 12/24] Update design_dashboard.md
---
docs/book/monitoring/design_dashboard.md | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/docs/book/monitoring/design_dashboard.md b/docs/book/monitoring/design_dashboard.md
index 2d8d44c59d..15381590a3 100644
--- a/docs/book/monitoring/design_dashboard.md
+++ b/docs/book/monitoring/design_dashboard.md
@@ -6,16 +6,14 @@ description: Overview of the available monitoring Panel types.
A monitoring Panel is an individual plot or counter on the Monitoring Dashboard.
-You can add multiple Panels and organize them by **Tabs**. There are several Panel types to choose from, and you can customize titles and legends.
+You can add multiple Panels and organize them by **Tabs**. You can choose from Metric, Distribution, and Test Panels, and customize titles and legends.
When adding a Panel, you point to the **source Metric or Test** and the value (`field_path`) inside it. Evidently will pull selected value(s) from all snapshots in the Projects and add them to the Panel.
-You can use **Tags** to filter data from specific snapshots. For example, you can plot the accuracy of Model A and Model B next to each other. To achieve this, add relevant tags when creating a snapshot.
-
-Broadly, there are Metric, Distribution, and Test Panels. This page details the panel types.
+You can use **Tags** to filter data from specific snapshots. For example, you can plot the accuracy of Model A and Model B next to each other. To achieve this, add relevant Tags when creating a snapshot.
{% hint style="info" %}
-**How to add Panels**. Check the next [docs section](design_dashboard_api.md).
+**How to add Panels**. This page explains the Panel types. Check the next section on [adding Panels](design_dashboard_api.md).
{% endhint %}
# Metric Panels
@@ -66,7 +64,7 @@ Class `DashboardPanelTestSuiteCounter`
| Panel Type| Example |
|---|---|
-|Shows a counter of Tests with selected status (pass, fail). |![](../.gitbook/assets/monitoring/panel_tests_counter_example.png)|
+|Shows a counter of Tests with selected status. |![](../.gitbook/assets/monitoring/panel_tests_counter_example.png)|
## Test plot
Class `DashboardPanelTestSuite`.
@@ -77,12 +75,12 @@ Class `DashboardPanelTestSuite`.
|Aggregated plot: `TestSuitePanelType.AGGREGATE`. Only the total number of Tests by status is visible. |![](../.gitbook/assets/monitoring/panel_tests_aggregated_hover_example.png)|
# Distribution Panel
-Class `DashboardPanelDistribution`. Shows a distribution of values over time. For example, if you capture Data Quality or Data Drift Reports that include histograms for categorical values, you can visualize the distribution over time.
-
-![](../.gitbook/assets/monitoring/distribution_panels.png)
+Class `DashboardPanelDistribution`. Shows a distribution of values over time. For example, if you capture Data Quality or Data Drift Reports that include histograms for categorical values, you can plot how the frequency of categories changes.
You can create distribution plots from either Reports or Test Suites.
+![](../.gitbook/assets/monitoring/distribution_panels.png)
+
| Panel Type| Example |
|---|---|
|Stacked bar chart: `HistBarMode.STACK`. Shows absolute counts.|![](../.gitbook/assets/monitoring/panel_dist_stacked_2-min.png)|
@@ -91,10 +89,9 @@ You can create distribution plots from either Reports or Test Suites.
|Stacked bar chart: `HistBarMode.RELATIVE`. Shows relative frequency (percentage).|![](../.gitbook/assets/monitoring/panel_dist_relative-min.png)|
{% hint style="info" %}
-**What is the difference between a Distribution Panel and a Histograms Plot?**. A Histogram Plot displays the distribution of individual values from all snapshots over a period. Each source snapshot must contain a **single value** of the type (e.g., a "number of drifting features"). The Plot shows how frequently each of the numbers occurs. A Distribution Panel, in contrast, shows changes in distribution over time. Each source snapshot must contain a **distribution histogram** (e.g., a histogram of categorical values). The Plot shows how these distributions change within the selected timeframe.
+**What is the difference between a Distribution Panel and a Histogram Plot?** A Histogram Plot shows the distribution of values from all snapshots. Each source snapshot contains a **single value** (e.g., a "number of drifting features"). A Distribution Panel shows how a distribution changes in time. Each source snapshot contains a **histogram** (e.g. frequency of different categories).
{% endhint %}
-
# What's next?
How to add [monitoring Panels and Tabs](design_dashboard_api.md).
From 851d44d897af73a5161c99ce4210c4ebfff55afb Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 21:28:20 +0100
Subject: [PATCH 13/24] Update design_dashboard.md
---
docs/book/monitoring/design_dashboard.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/book/monitoring/design_dashboard.md b/docs/book/monitoring/design_dashboard.md
index 15381590a3..56457560a3 100644
--- a/docs/book/monitoring/design_dashboard.md
+++ b/docs/book/monitoring/design_dashboard.md
@@ -89,7 +89,7 @@ You can create distribution plots from either Reports or Test Suites.
|Stacked bar chart: `HistBarMode.RELATIVE`. Shows relative frequency (percentage).|![](../.gitbook/assets/monitoring/panel_dist_relative-min.png)|
{% hint style="info" %}
-**What is the difference between a Distribution Panel and a Histogram Plot?** A Histogram Plot shows the distribution of values from all snapshots. Each source snapshot contains a **single value** (e.g., a "number of drifting features"). A Distribution Panel shows how a distribution changes in time. Each source snapshot contains a **histogram** (e.g. frequency of different categories).
+**What is the difference between a Distribution Panel and a Histogram Plot?** A Histogram Plot shows the distribution of values from all snapshots. Each source snapshot contains a **single value** (e.g., a "number of drifting features"). A Distribution Panel shows how a distribution changes over time. Each source snapshot contains a **histogram** (e.g. frequency of different categories).
{% endhint %}
# What's next?
From 08d94286400c6a25b34fc9f4baccc3c7d39f8682 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 22:03:15 +0100
Subject: [PATCH 14/24] Update design_dashboard_api.md
---
docs/book/monitoring/design_dashboard_api.md | 121 +++++++++----------
1 file changed, 60 insertions(+), 61 deletions(-)
diff --git a/docs/book/monitoring/design_dashboard_api.md b/docs/book/monitoring/design_dashboard_api.md
index cc86d060bc..ed2ef30229 100644
--- a/docs/book/monitoring/design_dashboard_api.md
+++ b/docs/book/monitoring/design_dashboard_api.md
@@ -1,5 +1,5 @@
---
-description: How to add monitoring Panels to the Dashboard.
+description: Design your own Dashboard with custom Panels.
---
For a quick start, we recommend using [pre-built Tabs](add_dashboard_tabs.md).
@@ -17,14 +17,14 @@ You can also explore the [source code](https://github.com/evidentlyai/evidently/
You can add monitoring Panels using the Python API or the Evidently Cloud user interface.
Here is the general flow:
-* Define the **Panel type**: metric Counter, metric Plot, Distribution, Test Counter, or Test Plot. (See [Panel types](design_dashboard.md)).
+* Define the **Panel type**: Counter, Plot, Distribution, Test Counter, or Test Plot. (See [Panel types](design_dashboard.md)).
* Specify panel **title** and **size**.
-* Add optional **Tags** to filter data. If no Tag is specified, the Panel will use data from all snapshots in the Project.
-* Select **parameters** based on the Panel type, e.g., aggregation level. (See the section on Parameters below).
+* Add optional **Tags** to filter data. Without Tags, the Panel will use data from all Project snapshots.
+* Select Panel **parameters**, e.g., aggregation level.
* Define the **Panel value(s)** to show:
* For Test Panels, specify `test_id`.
- * For Metric and Distribution Panels, specify `metric_id` and `field_path`. (See the section on `PanelValue` below).
-* Pass arguments (`test_args` or `metric_args`) to identify the exact value when they repeat inside the snapshot. For instance, to plot the mean value of a given column, you need the column name as an argument. (See the section on `PanelValue` below).
+ * For Metric and Distribution Panels, specify `metric_id` and `field_path`.
+* Pass `test_args` or `metric_args` to identify the exact value when they repeat in a snapshot. For instance, to plot the mean value of a given column, pass the column name as an argument (see the sketch below).
This page explains each step in detail.
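For instance, here is a minimal sketch of a metric Panel that plots the mean of the "age" column, assuming you have already connected to a `project` as shown below and that the Project contains snapshots with `ColumnSummaryMetric` for that column. The `metric_args` key and `field_path` shown here are illustrative; the sections below explain how to find the right values.

```python
from evidently.renderers.html_widgets import WidgetSize
from evidently.ui.dashboards import (
    DashboardPanelPlot,
    PanelValue,
    PlotType,
    ReportFilter,
)

project.dashboard.add_panel(
    DashboardPanelPlot(
        title="Age: mean value",
        filter=ReportFilter(metadata_values={}, tag_values=[]),  # no Tag filter: use all snapshots
        values=[
            PanelValue(
                metric_id="ColumnSummaryMetric",
                metric_args={"column_name.name": "age"},  # metric argument to pick the column
                field_path="current_characteristics.mean",  # value inside the MetricResult
                legend="mean(age)",
            ),
        ],
        plot_type=PlotType.LINE,
        size=WidgetSize.HALF,
    )
)
project.save()
```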
@@ -49,7 +49,6 @@ Some tips:
{% endtab %}
{% tab title="API - Metric Panel" %}
-Each Project has a `DashboardConfig` that describes the composition of the monitoring Panels. You can edit it remotely.
**Connect to a Project**. Load the latest dashboard configuration into your Python environment.
@@ -57,7 +56,7 @@ Each Project has a `DashboardConfig` that describes the composition of the monit
project = ws.get_project("YOUR PROJECT ID HERE")
```
-**Add a new Panel**. Use the `add_panel` method. Specify the Panel type, name, Panel Value, etc. You can add multiple Panels: they will appear in the listed order. Save the configuration with `project.save()`. Here is an example:
+**Add a new Panel**. Use the `add_panel` method and pass the parameters. You can add multiple Panels: they will appear in the listed order. Save the configuration with `project.save()`.
```python
project.dashboard.add_panel(
@@ -84,14 +83,13 @@ Go back to the web app to see the Dashboard. Refresh the page if needed.
{% endtab %}
{% tab title="API - Test Panel" %}
-Each Project has a `DashboardConfig` that describes the composition of the monitoring Panels. You can edit it remotely.
**Connect to a Project**. Load the latest dashboard configuration into your Python environment.
```python
project = ws.get_project("YOUR PROJECT ID HERE")
```
-**Add a new Test Panel**. Use the `add_panel` method, and set `include_test_suites=True`. Specify the Panel type, name, Tests to include, aggregation level, etc. You can add multiple Panels: they will appear in the listed order. Save the configuration with `project.save()`. Here is an example:
+**Add a new Test Panel**. Use the `add_panel` method, set `include_test_suites=True` and pass the parameters. You can add multiple Panels: they will appear in the listed order. Save the configuration with `project.save()`.
```python
project.dashboard.add_panel(
@@ -123,17 +121,20 @@ Go back to the web app to see the Dashboard. Refresh the page if needed.
Multiple Tabs are a Pro feature available in the Evidently Cloud.
{% endhint %}
-By default, all new Panels appear on the same monitoring Dashboard. You can create named Tabs to organize them.
+By default, all Panels you add appear on a single monitoring Dashboard. You can create Tabs to organize them.
{% tabs %}
{% tab title="UI" %}
+
Enter the "edit" mode on the Dashboard (top right corner) and click "add Tab". To create a custom Tab, choose an “empty” tab and give it a name.
-Proceed with adding Panels to this Tab as usual.
+Proceed with adding Panels to this Tab as usual.
+
{% endtab %}
-{% tab title=”API" %}
+{% tab title="API" %}
+
**Connect to a Project**. Load the latest Dashboard configuration into your Python environment.
```python
@@ -191,7 +192,7 @@ To delete the Tabs or Panels in the UI, use the “Edit” mode and click the
# Panel parameters
{% hint style="success" %}
-**Panel types**. To preview all Panel types, check the previous [docs section](design_dashboard.md. This page details the parameters and API.
+**Panel types**. To preview all Panel types, check the previous [docs section](design_dashboard.md). This page details the parameters and API.
{% endhint %}
Class `DashboardPanel` is a base class. Its parameters apply to all Panel types.
@@ -209,11 +210,11 @@ See usage examples below together with panel-specific parameters.
| Parameter | Description |
|---|---|
-| `value: Optional[PanelValue] = None` | Specifies the value to display in the Counter. If empty, you will get a text-only panel.
*Refer to the Panel Value section below for examples.* |
+| `value: Optional[PanelValue] = None` | Specifies the value to display. If empty, you get a text-only panel.
*Refer to the Panel Value section below for examples.* |
| `text: Optional[str] = None` | Supporting text to display on the Counter. |
-| `agg: CounterAgg`
**Available:**
`SUM`, `LAST`, `NONE` | Data aggregation options:
`SUM`: Calculates the sum of values from all snapshots (or filtered by tags).
`LAST`: Displays the last available value.
`NONE`: Reserved for text panels. |
+| `agg: CounterAgg`
**Available:**
`SUM`, `LAST`, `NONE` | Data aggregation options:
`SUM`: Calculates the value sum (from all snapshots or filtered by Tag).
`LAST`: Displays the last available value.
`NONE`: Reserved for text panels. |
-See examples.
+See examples:
{% tabs %}
@@ -235,7 +236,7 @@ project.dashboard.add_panel(
{% tab title="Value sum" %}
-**Panel with a sum of values**. To create a panel that sums up the number of rows over time.
+**Panel with a sum of values**. To create a Panel that sums up the number of rows over time:
```python
project.dashboard.add_panel(
@@ -267,14 +268,13 @@ project.dashboard.add_panel(
| `values: List[PanelValue]` | Specifies the value(s) to display in the Plot.
The field path must point to the individual **MetricResult** (e.g., not a dictionary or a histogram).
If you pass multiple values, they will appear together, e.g., as separate lines on a Line plot, bars on a Bar Chart, or points on a Scatter Plot.
*Refer to the Panel Value section below for examples.* |
| `plot_type: PlotType`
**Available:** `SCATTER`, `BAR`, `LINE`, `HISTOGRAM` | Specifies the plot type: scatter, bar, line, or histogram. |
+See examples:
{% tabs %}
{% tab title="Single value" %}
-See examples.
-
-**Single value on a Plot**. To plot MAPE over time in a line plot.
+**Single value on a Plot**. To plot MAPE over time in a line plot:
```python
project.dashboard.add_panel(
@@ -296,9 +296,9 @@ project.dashboard.add_panel(
{% endtab %}
-{% tab title=”Multiple values" %}
+{% tab title="Multiple values" %}
-**Multiple values on a Plot**. To plot MAPE and reference MAPE on the same plot.
+**Multiple values on a Plot**. To plot MAPE and reference MAPE on the same plot:
```python
project.dashboard.add_panel(
@@ -335,7 +335,7 @@ project.dashboard.add_panel(
| `value: PanelValue` | Specifies the distribution to display on the Panel.
The `field_path` must point to a histogram.
*Refer to the Panel Value section below for examples.* |
| `barmode: HistBarMode`
**Available:** `STACK`, `GROUP`, `OVERLAY`, `RELATIVE` | Specifies the distribution plot type: stacked, grouped, overlay or relative. |
-**Example**: to plot the distribution of the "education" column over time using STACK plot:
+**Example**. To plot the distribution of the "education" column over time using a STACK plot:
```python
p.dashboard.add_panel(
@@ -360,15 +360,15 @@ p.dashboard.add_panel(
|---|---|
| `test_filters: List[TestFilter]=[]`
`test_id: test_id`
`test_arg: List[str]`|Test filters select specific Test(s). Without a filter, the Panel considers the results of all Tests.
You must reference a `test_id` even if you used a Preset. You can check the Tests included in each Preset [here](https://docs.evidentlyai.com/reference/all-tests).|
| `statuses: List[statuses]`
**Available**:
`TestStatus.ERROR`, `TestStatus.FAIL`, `TestStatus.SUCCESS`, `TestStatus.WARNING`, `TestStatus.SKIPPED`| Status filters select Tests with a specific outcomes. (E.g., choose the FAIL status to display a counter of failed Tests). Without a filter, the Panel considers Tests with any status.|
-|
`agg: CounterAgg`
**Available**:
`SUM`, `LAST` | Data aggregation options:
`SUM`: Calculates the sum of Test results from all snapshots (or filtered by tags).
`LAST`: Displays the last available Test result. |
+|
`agg: CounterAgg`
**Available**:
`SUM`, `LAST` | Data aggregation options:
`SUM`: Calculates the sum of Test results from all snapshots (or filtered by Tags).
`LAST`: Displays the last available Test result. |
See examples.
{% tabs %}
-{% tab title="Show latesr" %}
+{% tab title="Show latest" %}
-**Last Tast**. To display the result of the last Test only. All Tests are considered.
+**Last Test**. To display the result of the latest Test in the Project:
```python
project.dashboard.add_panel(
@@ -381,9 +381,9 @@ project.dashboard.add_panel(
{% endtab %}
-{% tab title=”Filter by Test and Status" %}
+{% tab title="Filter by Test and Status" %}
-**Filter by Test ID and Status**. To display the number of failed Tests and errors for a specific Test (Number of unique values in the column “age”).
+**Filter by Test ID and Status**. To display the number of failed Tests and errors for a specific Test (number of unique values in the column "age"):
```python
project.dashboard.add_panel(
@@ -403,19 +403,18 @@ project.dashboard.add_panel(
`DashboardPanelTestSuite` shows Test results over time.
-
| Parameter | Description |
|---|---|
-| `test_filters: List[TestFilter]=[]`
`test_id: test_id`
`test_arg: List[str]`|Test filters select specific Test(s). Without a filter, the Panel shows the results of all Tests in the Project.
You must reference a `test_id` even if you used a Preset. You can check the Tests included in each Preset [here](https://docs.evidentlyai.com/reference/all-tests).|
-| `statuses: List[statuses]`
**Available**:
`TestStatus.ERROR`, `TestStatus.FAIL`, `TestStatus.SUCCESS`, `TestStatus.WARNING`, `TestStatus.SKIPPED`| Status filters select Tests with specific outcomes. Without a filter, the Panel plots all Test statuses over time.|
+| `test_filters: List[TestFilter]=[]`
`test_id: test_id`
`test_arg: List[str]`|Test filters select specific Test(s). Without a filter, the Panel shows the results of all Tests.
You must reference a `test_id` even if you used a Preset. Check the [Preset composition](https://docs.evidentlyai.com/reference/all-tests).|
+| `statuses: List[statuses]`
**Available**:
`TestStatus.ERROR`, `TestStatus.FAIL`, `TestStatus.SUCCESS`, `TestStatus.WARNING`, `TestStatus.SKIPPED`| Status filters select Tests with specific outcomes. Without a filter, the Panel shows all Test statuses.|
| `panel_type=TestSuitePanelType`
**Available**:
`TestSuitePanelType.DETAILED`
`TestSuitePanelType.AGGREGATE`| Defines the Panel type. **Detailed** shows individual Test results. **Aggregate** (default) shows the total number of Tests by status.|
-|
`time_agg: Optional[str] = None`
**Available**:
`1H`, `1D`, `1W`, `1M` (see [period aliases](https://pandas.pydata.org/docs/user_guide/timeseries.html#timeseries-period-aliases))| Groups all Test results within a defined period (e.g., 1 DAY).|
+|
`time_agg: Optional[str] = None`
**Available**:
`1H`, `1D`, `1W`, `1M` (see [period aliases](https://pandas.pydata.org/docs/user_guide/timeseries.html#timeseries-period-aliases))| Groups all Test results in a period (e.g., 1 DAY).|
{% tabs %}
{% tab title="Detailed Tests" %}
-**Detailed Tests** To show the detailed results (any status) of all individual Tests, with daily level aggregation.
+**Detailed Tests**. To show the results of all individual Tests with daily-level aggregation:
```python
project.dashboard.add_panel(
@@ -431,9 +430,9 @@ project.dashboard.add_panel(
{% endtab %}
-{% tab title=”Aggregated Tests" %}
+{% tab title="Aggregated by Status" %}
-**Aggregated Tests by Status**. To show the total number of failed Tests (status filter), with daily level aggregation.
+**Aggregated by Status**. To show the total number of failed Tests (status filter) with daily-level aggregation:
```
project.dashboard.add_panel(
@@ -449,9 +448,9 @@ project.dashboard.add_panel(
{% endtab %}
-{% tab title=”Filtered by Test ID" %}
+{% tab title="Filtered by Test ID" %}
-**Filtered by Test ID**. To show the results (any status) of specified Tests (on constant columns, missing values, empty rows), with daily level aggregation.
+**Filtered by Test ID**. To show all results for a specified list of Tests (on constant columns, missing values, empty rows) with daily-level aggregation:
```
project.dashboard.add_panel(
@@ -473,7 +472,7 @@ project.dashboard.add_panel(
{% endtab %}
-{% tab title=”Filtered with Test Args" %}
+{% tab title="Filtered by Args" %}
**Filtered by Test ID and Test Args**. To show the results of individual column-level Tests with daily aggregation, you must use both `test_id` and `test_arg` (column name):
@@ -497,15 +496,14 @@ project.dashboard.add_panel(
{% endtabs %}
-# Panel value
+# Panel Value
To define the value to show on a Metric Panel (Counter, Distribution, or Plot), you must pass the `PanelValue`. This includes source `metric_id`, `field_path` and `metric_args`.
| Parameter | Description |
|---|---|
-| `metric_id` | A metric ID that corresponds to the Evidently Metric logged inside the snapshots. You must specify the `metric_id` even if you use Test Suites.
-A metric ID corresponding to the Evidently Metric in a snapshot.
Note that if you used a Metric Preset, you must still reference a `metric_id`. You can check the Metrics included in each Preset [here](https://docs.evidentlyai.com/reference/all-metrics).
If you used a Test Suite but want to plot individual values from it on a Metric Panel, you must also reference the `metric_id` that the Test relies on.|
-| `field_path` | The path to the specific computed Result inside the Metric. You can provide either a complete field path or a `field_name`. For Counters and Plot, the `field_path` must point to a single value. For the Distribution Panel, the `field_path` must point to a histogram.|
+| `metric_id` | The metric ID that corresponds to the Evidently Metric in a snapshot.
Note that if you used a Metric Preset, you must still reference a `metric_id`. Check the Metric [Preset composition](https://docs.evidentlyai.com/reference/all-metrics).
If you used a Test Suite but want to plot individual values from it on a Metric Panel, you must also reference the `metric_id` that the Test relies on.|
+| `field_path` | The path to the computed Result inside the Metric. You can provide a complete field path or a `field_name`. For Counter and Plot, the `field_path` must point to a single value. For the Distribution Panel, the `field_path` must point to a histogram.|
| `metric_args` (optional) | Use additional arguments (e.g., column name, text descriptor, drift detection method) to identify the exact value when it repeats inside the same snapshot.|
| `legend` (optional) | Value legend to show on the Plot.|
@@ -529,7 +527,7 @@ In this example, you pass the exact name of the field.
{% endtab %}
-{% tab title=”Complete field path" %}
+{% tab title="Complete field path" %}
**Complete field path**. To include the `current.share_of_missing_values` available inside the `DatasetMissingValueMetric()`:
@@ -552,7 +550,7 @@ See examples using different `metric_args`:
{% tab title="Column names" %}
-**Column names as arguments**. To display the mean values of target and prediction over time in a line plot.
+**Column names as arguments**. To show the mean values of target and prediction on a line plot:
```python
values=[
@@ -573,7 +571,7 @@ values=[
{% endtab %}
-{% tab title=”Descriptors" %}
+{% tab title="Descriptors" %}
**Descriptors as arguments**. To specify the text descriptor (share of out-of-vocabulary words) using `metric_args`:
@@ -590,7 +588,7 @@ values=[
{% endtab %}
-{% tab title=”Metric parameters" %}
+{% tab title="Metric parameters" %}
**Metric parameters as arguments**. To specify the `euclidean` drift detection method (when results from multiple methods logged inside a snapshot) using `metric_args`:
@@ -607,26 +605,27 @@ values=[
{% endtabs %}
-
### How to find the field path?
-Let's take an example of `DataDriftPreset()`. It contains two Metrics: `DatasetDriftMetric()` and `DataDriftTable()`. (Check the composition of Presets [here](https://docs.evidentlyai.com/reference/all-metrics). You can point to any of them as a `metric_id`, depending on what you’d like to plot.
+Let's take an example of `DataDriftPreset()`. It contains two Metrics: `DatasetDriftMetric()` and `DataDriftTable()`. (Check the [Preset composition](https://docs.evidentlyai.com/reference/all-metrics).)
+
+You can point to any of them as a `metric_id`, depending on what you’d like to plot.
![](../.gitbook/assets/monitoring/field_path.png)
-Most Metrics contain multiple measurements inside (MetricResults) and some render data. To point to the specific value, use the `field path`. To find available fields in the chosen Metric, you can explore the contents of the individual snapshot or use automated suggestions in UI or Python.
+Most Metrics contain multiple measurements inside (MetricResults) and some render data. To point to the specific value, use the `field_path`.
+
+To find available fields in the chosen Metric, you can explore the contents of the individual snapshot or use automated suggestions in the UI or Python.
{% tabs %}
{% tab title="Open the snapshot" %}
-**Option 2**. Explore the contents of the snapshot, Metric or Test and find the relevant keys.
-
-Each snapshot is a JSON file. You can open an existing snapshot and see available fields. (You can download an individual Report as JSON from the UI or open it in Python).
+Each snapshot is a JSON file. You can download it from the UI or open it in Python to see the available fields.
Alternatively, you can [generate a Report](../tests-and-reports/get-reports.md) with the selected Metrics on any test data. Get the output as a Python dictionary using `as_dict()` and explore the keys with field names.
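+
+For instance, a minimal sketch of this approach (the `reference_df` and `current_df` DataFrames are placeholders for any data you have at hand):
+
+```python
+from evidently.report import Report
+from evidently.metrics import DatasetDriftMetric
+
+# Run the Metric on any test data to inspect its output structure
+report = Report(metrics=[DatasetDriftMetric()])
+report.run(reference_data=reference_df, current_data=current_df)
+
+# The keys of the "result" dictionary are the field names you can reference
+report.as_dict()["metrics"][0]["result"].keys()
+```
+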
-Here is a partial example of the contents of DatasetDriftMetric():
+Here is a partial example of the contents of `DatasetDriftMetric()`:
```python
'number_of_columns': 15,
@@ -635,11 +634,13 @@ Here is a partial example of the contents of DatasetDriftMetric():
'dataset_drift': False,
```
-Once you identify the value you’d like to plot (e.g., `number_of_drifted_columns`), pass it as the `field_path` to the `PanelValue` parameter. Include the `DatasetDriftMetricz as the `metric_id`.
+Once you identify the value you’d like to plot (e.g., `number_of_drifted_columns`), pass it as the `field_path` to the `PanelValue` parameter. Include the `DatasetDriftMetric` as the `metric_id`.
+
+Other Metrics and Tests follow the same logic.
{% endtab %}
-{% tab title=”Python autocomplete" %}
+{% tab title="Python autocomplete" %}
You can use autocomplete in interactive Python environments (like Jupyter notebook or Colab) to see available fields inside a specific Metric. They appear as you start typing the `.fields.` path for a specific Metric.
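+
+For example, a sketch of picking a field this way (the exact import path for `PanelValue` and the field shown here are illustrative and may differ between versions):
+
+```python
+from evidently.metrics import DatasetDriftMetric
+from evidently.ui.dashboards import PanelValue
+
+# Typing "DatasetDriftMetric.fields." in a notebook suggests the available fields
+value = PanelValue(
+    metric_id="DatasetDriftMetric",
+    field_path=DatasetDriftMetric.fields.share_of_drifted_columns,
+    legend="Share of drifted columns",
+)
+```
+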
@@ -649,14 +650,12 @@ You can use autocomplete in interactive Python environments (like Jupyter notebo
{% endtab %}
-{% tab title=”Suggestions in UI" %}
+{% tab title="Suggestions in UI" %}
-When working in the Evidently Cloud, you can see available fields in the drop-down menu when adding the new Panel.
+When working in the Evidently Cloud, you can see available fields in the drop-down menu as you add a new Panel.
{% endtab %}
{% endtabs %}
-Other Metrics and Tests follow the same logic. Note that there is some data inside the snapshots that you cannot currently plot on a monitoring dashboard (for example, render data or dictionaries). You can only plot values that exist as individual data points or histograms.
-
-
+Note that there is some data inside the snapshots that you cannot currently plot on a monitoring Dashboard (for example, render data or dictionaries). You can only plot values that exist as individual data points or histograms.
From 3447170bbcf02140662b12965c041ab271980df5 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 22:06:31 +0100
Subject: [PATCH 15/24] Update alerting.md
---
docs/book/monitoring/alerting.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/book/monitoring/alerting.md b/docs/book/monitoring/alerting.md
index 2a09e4a526..c3ebc9a4f9 100644
--- a/docs/book/monitoring/alerting.md
+++ b/docs/book/monitoring/alerting.md
@@ -6,7 +6,7 @@ description: How to send alerts.
Built-in alerting is a Pro feature available in the Evidently Cloud.
{% endhint %}
-To enable alerts, open the Project and navigate to the "Alerts" section in the left menu. To enable alerts, you must set:
+To enable alerts, open the Project and navigate to the "Alerts" section in the left menu. You must set:
* A notification channel.
* An alert condition.
From e2f65ac0fede33823413e0867361858f1aaaec66 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 22:10:56 +0100
Subject: [PATCH 16/24] Update alerting.md
---
docs/book/monitoring/alerting.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/book/monitoring/alerting.md b/docs/book/monitoring/alerting.md
index c3ebc9a4f9..33009bb951 100644
--- a/docs/book/monitoring/alerting.md
+++ b/docs/book/monitoring/alerting.md
@@ -1,5 +1,5 @@
---
-description: How to send alerts.
+description: Get notifications in Slack or email.
---
{% hint style="success" %}
@@ -24,7 +24,7 @@ You can choose between the following options:
If you use Test Suites, you can tie alerting to the failed Tests in a Test Suite. Toggle this option on the Alerts page. Evidently will set an alert to the defined channel if any of the Tests fail.
{% hint style="info" %}
-**How to avoid alert fatigue?** When you create a Test Suite, you can [mark certain conditions as Warnings](../tests-and-reports/custom-test-suite.md) using the `is_critical` parameters. This helps distinguish between critical failures that trigger alerts (set `is_critical` as `True`; default) and non-critical ones for which no alerts will be generated (set `is_critical` as `False`).
+**How to avoid alert fatigue?** When you create a Test Suite, you can [mark certain conditions as Warnings](../tests-and-reports/custom-test-suite.md) using the `is_critical` parameter. Set it to `False` for non-critical checks to avoid triggering alerts.
{% endhint %}
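+
+For example, a minimal sketch of a Test Suite that keeps one check as a non-alerting warning (the specific Tests and conditions here are illustrative):
+
+```python
+from evidently.test_suite import TestSuite
+from evidently.tests import TestNumberOfEmptyRows, TestShareOfMissingValues
+
+test_suite = TestSuite(tests=[
+    TestShareOfMissingValues(lte=0.05),              # critical (default): a failure can trigger an alert
+    TestNumberOfEmptyRows(eq=0, is_critical=False),  # warning only: no alert on failure
+])
+```
+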
## Custom conditions
From 4fce77259afe22e426e91a69e007a93d32c8f25d Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 22:12:17 +0100
Subject: [PATCH 17/24] Update snapshots.md
---
docs/book/monitoring/snapshots.md | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/docs/book/monitoring/snapshots.md b/docs/book/monitoring/snapshots.md
index b0ed6fdff4..b3950cfeba 100644
--- a/docs/book/monitoring/snapshots.md
+++ b/docs/book/monitoring/snapshots.md
@@ -45,6 +45,10 @@ For monitoring, you can also add `tags` and `timestamp` to your snapshots.
3. **Send the snapshot**. After you compute the Report or Test Suite, use the `add_report` or `add_test_suite` methods to send them to a corresponding Project in your workspace.
+{% hint style="info" %}
+**Collector service.** To compute snapshots in near real-time, you can configure a [collector service](collector_service.md).
+{% endhint %}
+
# Send snapshots
**Send a Report**. To create and send a Report with data summaries for a single dataset `batch1`:
From 7a4c112b5454335d996236305ea8d527068d043f Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 22:17:31 +0100
Subject: [PATCH 18/24] Update collector_service.md
---
docs/book/monitoring/collector_service.md | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/docs/book/monitoring/collector_service.md b/docs/book/monitoring/collector_service.md
index f3cd5751cf..3a63a5e7bd 100644
--- a/docs/book/monitoring/collector_service.md
+++ b/docs/book/monitoring/collector_service.md
@@ -6,15 +6,15 @@ description: Send data in near real-time.
In this scenario, you deploy an **Evidently Collector** service for near real-time monitoring.
-Evidently Collector is a service that allows you to collect online events into batches, create Reports or Test Suites over batches of data, and save them as `snapshots` into the `workspace`.
+Evidently Collector is a service that allows you to collect online events into batches, create `Reports` or `TestSuites` over batches of data, and save them as `snapshots` to your Workspace.
-You will need to POST the predictions from the ML service to the Evidently Collector service. You can POST data on every prediction or batch them. The Evidently collector service will perform asynchronous computation of monitoring snapshots based on the provided config.
+You will need to POST the predictions from the ML service to the Evidently Collector service. You can POST data on every prediction or batch them. The Evidently collector service will perform asynchronous computation of monitoring snapshots based on the provided configuration.
You can also pass the path to the optional reference dataset.
![](../.gitbook/assets/monitoring/monitoring_collector_min.png)
-If you receive delayed ground truth, you can also later compute and log the model quality to the same project. You can run it as a separate process or batch monitoring job.
+If you receive delayed ground truth, you can later compute and log the model quality to the same Project. You can run it as a separate process or a batch job.
![](../.gitbook/assets/monitoring/monitoring_collector_delayed_labels_min.png)
@@ -32,7 +32,7 @@ You can choose either of the two options:
* Create configuration via code, save it to a JSON file, and run the service using it.
* Run the service first and create configuration via API.
-The collector service can simultaneously run multiple “collectors” that compute and save snapshots to different workspaces or projects. Each one is represented by a `CollectorConfig` object.
+The collector service can simultaneously run multiple “collectors” that compute and save snapshots to different Workspaces or Projects. Each one is represented by a `CollectorConfig` object.
## `CollectorConfig` Object
@@ -41,7 +41,7 @@ You can configure the following parameters:
| Parameter | Type | Description |
|-----------------|------------------|--------------------------------------------------------------------------------------------------|
| `trigger` | `CollectorTrigger`| Defines when to create a new snapshot from the current batch. |
-| `report_config` | `ReportConfig` | Configures the contents of the snapshot: Report or TestSuite computed for each batch of data. |
+| `report_config` | `ReportConfig` | Configures the contents of the snapshot: `Report` or `TestSuite` computed for each batch of data. |
| `reference_path` | Optional[str] | Local path to a *.parquet* file with the reference dataset. |
| `cache_reference` | bool | Defines whether to cache reference data or re-read it each time. |
| `api_url` | str | URL where the Evidently UI Service runs and snapshots will be saved to. For Evidently Cloud, use `api_url="https://app.evidently.cloud"`|
@@ -65,8 +65,8 @@ report_config = ReportConfig.from_test_suite(test_suite)
## CollectorTrigger
Currently, there are two options available:
-* `IntervalTrigger`: triggers the snapshot calculation each interval seconds
-* `RowsCountTrigger`: triggers the snapshot calculation every time the configured number of rows has been sent to the collector service
+* `IntervalTrigger`: triggers the snapshot calculation at set intervals (in seconds).
+* `RowsCountTrigger`: triggers the snapshot calculation when a specific row count is reached.
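+
+For instance, a rough sketch of configuring each trigger (treat the import path and the `interval` / `rows_count` parameter names as assumptions to verify against your Evidently version):
+
+```python
+from evidently.collector.config import IntervalTrigger, RowsCountTrigger
+
+# Compute a snapshot every 5 minutes
+trigger = IntervalTrigger(interval=300)
+
+# Or compute a snapshot once 500 rows have been sent to the collector
+# trigger = RowsCountTrigger(rows_count=500)
+```
+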
**Note**: we are also working on `CronTrigger` and other triggers. Would you like to see additional scenarios? Please open a GitHub issue with your suggestions.
From cfacbaa5b79f5254798ae9071668508b7dfa9341 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Wed, 15 May 2024 22:59:16 +0100
Subject: [PATCH 19/24] Update tutorial-cloud.md
---
docs/book/get-started/tutorial-cloud.md | 113 ++++++++++++------------
1 file changed, 58 insertions(+), 55 deletions(-)
diff --git a/docs/book/get-started/tutorial-cloud.md b/docs/book/get-started/tutorial-cloud.md
index df2f53d019..dfc67c7c9e 100644
--- a/docs/book/get-started/tutorial-cloud.md
+++ b/docs/book/get-started/tutorial-cloud.md
@@ -1,15 +1,15 @@
---
-description: Get started with Evidently Cloud. Run checks and customize a dashboard in 15 minutes.
+description: Get started with Evidently Cloud. Run checks and customize a Dashboard in 15 minutes.
---
In this tutorial, you'll set up production data and ML monitoring for a toy ML model. You'll run evaluations in Python and access a web dashboard in Evidently Cloud.
The tutorial consists of three parts:
* Overview of the architecture (2 min).
-* Launching a pre-built demo dashboard (2-3 min).
+* Launching a pre-built demo Dashboard (2-3 min).
* Setting up monitoring for a new toy dataset (10 min).
-You'll need basic knowledge of Python. Once you connect the data, you can continue working in the web interface.
+You'll need basic knowledge of Python. Once you connect the data, you can continue in the web interface.
{% hint style="success" %}
**Want a very simple example first?** Check this [Evidently Cloud "Hello World"](quickstart-cloud.md) instead.
@@ -38,9 +38,9 @@ Evidently supports over 100 pre-built Metrics and Tests. You can also add custom
The monitoring setup consists of two components:
* **Open-source Evidently Python library**. You perform evaluations in your environment. Each run produces a JSON `snapshot` with statistics, metrics, or test results for a specific period. You then send these `snapshots` to Evidently Cloud using an API key.
-* **Evidently Cloud web app**. After sending the data, you can access it in the Evidently Cloud UI. You can view individual evaluation results, build dashboards with trends over time, and set up alerts to notify on issues.
+* **Evidently Cloud web app**. After sending the data, you can access it in the Evidently Cloud UI. You can view individual evaluation results, build a Dashboard with trends over time, and set up alerts to notify on issues.
-You can run batch monitoring jobs (e.g., hourly, daily, weekly) or use Evidently Collector for near real-time checks. This tutorial shows a simple batch workflow. You can later explore alternative deployment architectures.
+You can run batch monitoring jobs (e.g., hourly, daily, weekly) or use Evidently Collector for near real-time checks. This tutorial shows a batch workflow.
![](../.gitbook/assets/cloud/cloud_service_overview-min.png)
@@ -48,9 +48,9 @@ You can run batch monitoring jobs (e.g., hourly, daily, weekly) or use Evidently
**Data security by design**. By default, Evidently Cloud does not store raw data or model inferences. Snapshots contain only data aggregates (e.g., histograms of data distributions, descriptive stats, etc.) and metadata with test results. This hybrid architecture helps avoid data duplication and preserves its privacy.
{% endhint %}
-# Demo dashboard
+# Demo Dashboard
-Let's quickly look at an example monitoring dashboard.
+Let's quickly look at an example monitoring Dashboard.
## 1. Create an account
@@ -62,13 +62,15 @@ After logging in, click on "Generate Demo Project". It will create a Project for
![](../.gitbook/assets/cloud/generate_demo_project.png)
-It'll take a few moments to populate the data. In the background, Evidently will run the code to generate Reports and Test Suites for 20 days. Once it's ready, open the Project to see a monitoring dashboard with multiple Tabs that show data quality, data drift, and model quality over time.
+It'll take a few moments to populate the data. In the background, Evidently will run the code to generate Reports and Test Suites for 20 days. Once it's ready, open the Project to see a monitoring Dashboard.
+
+Dashboard Tabs will show data quality, data drift, and model quality over time.
![](../.gitbook/assets/cloud/demo_dashboard.gif)
You can customize the choice of Panels and Tabs for your Project – this is just an example.
-You can also see individual snapshots if you navigate to the "Reports" or "Test Suites" section using the left menu. They display the performance on a given day and act as a data source for the monitoring panels.
+You can also see individual snapshots if you navigate to the "Reports" or "Test Suites" section using the left menu. They display the performance on a given day and act as a data source for the monitoring Panels.
Now, let's see how you can create something similar for your dataset!
@@ -78,13 +80,13 @@ You'll use a toy dataset to mimic a production ML model. You'll follow these ste
* Prepare a tabular dataset.
* Run data quality and data drift Reports in daily batches.
* Send them to Evidently Cloud.
-* Get dashboards to track metrics over time.
+* Get a Dashboard to track metrics over time.
* (Optional) Add custom monitoring panels.
* (Optional) Run Test Suites for continuous testing.
-In this example, you'll track data quality and drift. ML monitoring often starts here because true labels for assessing model quality come with a delay. Until then, you can only monitor the incoming data and predictions.
+In this example, you'll track data quality and drift. ML monitoring often starts here because true labels for assessing model quality come with a delay. Until then, you monitor the incoming data and predictions.
-However, the core workflow tutorial covers works for any evaluation. You can later expand it to monitor ML model quality and text-based LLM models.
+However, the core workflow this tutorial covers will work for any evaluation. You can later expand it to monitor ML model quality and text-based LLM models.
To complete the tutorial, use the provided code snippets or run a sample notebook.
@@ -130,7 +132,7 @@ from evidently.test_preset import DataDriftTestPreset
from evidently.tests.base_test import TestResult, TestStatus
```
-**Optional**. Import the components to design monitoring panels via API. This is entirely optional: you can also add the panels using the UI.
+**Optional**. Import the components to design monitoring Panels via API. This is entirely optional: you can also add the Panels using the UI.
```python
from evidently import metrics
@@ -163,7 +165,7 @@ adult_prod = adult[adult.education.isin(["Some-college", "HS-grad", "Bachelors"]
**What is a reference dataset?** You need one to evaluate distribution drift. Here, you compare the current data against a past period, like an earlier data batch. You must provide this reference to compute the distance between two datasets. A reference dataset is optional when you compute descriptive stats or model quality metrics.
{% endhint %}
-Here is how the dataset looks. This could resemble a binary classification use case, with "class" being the prediction column.
+Preview the dataset. It resembles a binary classification use case with "class" as the prediction column.
![](../.gitbook/assets/cloud/data_preview-min.png)
@@ -171,7 +173,7 @@ Here is how the dataset looks. This could resemble a binary classification use c
Now, let's start monitoring!
-**Get the API token**. To connect to Evidently Cloud, you'll need an access token. Use the "key" sign in the left menu to get to the token page, and click "generate token." Copy and paste it into a temporary file since it won't be visible once you leave the page.
+**Get the API token**. To connect to Evidently Cloud, you need an access token. Use the "key" sign in the left menu to get to the token page, and click "generate token."
To connect to the Evidently Cloud workspace, run:
@@ -192,7 +194,7 @@ Click on the “plus” sign on the home page and type your Project name and des
After creating a Project, click its name to open the Dashboard. Since there's no data yet, it will be empty.
-To send data to this Project, you'll need to connect to it from your Python environment using `get_project` method. You can find your Project ID above the monitoring dashboard.
+To send data to this Project, you'll need to connect to it from your Python environment using the `get_project` method. You can find your Project ID above the monitoring Dashboard.
```python
project = ws.get_project("PROJECT_ID")
@@ -213,23 +215,23 @@ project.save()
{% endtabs %}
{% hint style="info" %}
-**What is a Project?** Projects help organize monitoring for different use cases. Each project has a shared dashboard and alerting. You can create a Project for a single ML model or dataset or put related models together. For example, you can group shadow and production models and use tags to distinguish them within the Project.
+**What is a Project?** Projects help organize monitoring for different use cases. Each Project has a shared Dashboard and alerting. You can create a Project for a single ML model or dataset or put related models together and use Tags to distinguish them.
{% endhint %}
## 4. Run first evaluation
-To send snapshots to the Project, you must compute them using the Evidently Python library. Here's the process:
+To send snapshots, first compute them using the Evidently Python library. Here's the process:
* Prepare the data batch to evaluate.
-* Create a Report or Test Suite object.
-* Define metrics or tests to include.
+* Create a `Report` or `TestSuite` object.
+* Define `Metrics` or `Tests` to include.
* Pass optional parameters, like data drift detection method or test conditions.
-* Compute and send the snapshot.
+* Compute and send the snapshot to the Project.
{% hint style="info" %}
**What are Reports and Test Suites?** These are pre-built evaluations available in the open-source Evidently Python library. They cover 100+ checks for data quality, data drift, and model quality. You can check out the [open-source Evidently Tutorial](https://docs.evidentlyai.com/get-started/tutorial) for an introduction. A `snapshot` is a "JSON version" of a Report or Test Suite.
{% endhint %}
-Let’s start with data quality and drift checks using a preset metric combination. This helps observe how model inputs and outputs are changing. For each batch of data, you'll generate:
+Let’s start with data quality and drift checks using `Presets`. This will help observe how model inputs and outputs are changing. For each batch of data, you'll generate:
* **Data Quality Preset**. It captures stats like feature ranges and missing values.
* **Data Drift Preset**. This compares current and reference data distributions. You will use PSI (Population Stability Index) method, with a 0.3 threshold for significant drift.
@@ -247,7 +249,7 @@ data_report.run(reference_data=adult_ref, current_data=adult_prod.iloc[0 : 100,
```
{% hint style="info" %}
-**Defining the current dataset.** To specify which dataset you're evaluating, you pass it as the `current_dataset` inside the `run` method. In our example, we used a slice function `adult_prod.iloc[0 : 100, :]` to select the first 100 rows from the `adult_prod` dataset. In practice, you can simply pass your data batch: `current_data=your_batch_name`.
+**Defining the dataset.** To specify the dataset to evaluate, you pass it as the `current_dataset` inside the `run` method. Our example uses a slice function `adult_prod.iloc[0 : 100, :]` to select 100 rows from the `adult_prod` dataset. In practice, simply pass your data: `current_data=your_batch_name`.
{% endhint %}
To send this Report to the Evidently Cloud, use the `add_report` method.
@@ -260,7 +262,7 @@ You can now view the Report in the Evidently Cloud web app. Go to the "Reports"
## 5. Send multiple snapshots
-For production use, you can run evaluations like this on a schedule – for example, daily, hourly, or weekly – each time passing a new batch of data. Once you have multiple snapshots in the Project, you can observe trends over time on a monitoring dashboard.
+In production, you can run evaluations on a schedule (e.g., daily or hourly) each time passing a new batch of data. Once you have multiple snapshots in the Project, you can plot trends on a monitoring Dashboard.
To simulate production use, let’s create a script to compute multiple Reports, taking 100 rows per "day":
@@ -299,11 +301,11 @@ We use the script only to imitate multiple batch checks. In real use, you should
-Once you run the script, you will compute and send 10 daily snapshots. Navigate to the "Reports" section in the UI to view them.
+Run the script to compute and send 10 daily snapshots. Go to the "Reports" section to view them.
![](../.gitbook/assets/cloud/view-reports-min.gif)
-However, each such Report is static. To see trends over time, you need a monitoring dashboard!
+However, each such Report is static. To see trends, you need a monitoring Dashboard!
{% hint style="info" %}
**Want to reuse this script for your data?** If you try replacing the toy dataset for your data, increasing the `i`, or adding more metrics, it's best to send Reports one by one instead of running a script. Otherwise, you might hit Rate Limits when sending many Reports together. For free trial users, the limit on the single data upload is 50MB; for paying users, it is 500MB. Snapshot size varies based on metrics and tests included.
@@ -311,29 +313,29 @@ However, each such Report is static. To see trends over time, you need a monitor
## 6. Add monitoring Tabs
-Monitoring Dashboard helps observe trends. It pulls selected values from individual Reports to show them over time. You can add multiple individual monitoring panels and organize them using Tabs.
+Monitoring Dashboard helps observe trends. It pulls selected values from individual Reports to show them over time. You can add multiple monitoring Panels and organize them by Tabs.
-For a simple start, you can use Tab templates, which are pre-built combinations of monitoring panels. You can choose:
+For a simple start, you can use Tab templates, which are pre-built combinations of monitoring Panels:
* **Data Quality Tab**: displays data quality metrics (nulls, duplicates, etc.).
* **Columns Tab**: shows descriptive statistics for each column over time.
-* **Data Drift Tab**: illustrates the share of drifting features over time.
+* **Data Drift Tab**: shows the share of drifting features over time.
-To add these pre-built Tabs, enter "Edit" mode in the top right corner of the Dashboard, and click the plus sign to add a new Tab. Then, choose the Tab template. You will then observe how the data in the Reports changed over 10 days.
+To add pre-built Tabs, enter "Edit" mode in the top right corner of the Dashboard. Click the plus sign to add a new Tab and choose the template.
## 7. Add custom Panels [OPTIONAL]
-You can also add individual monitoring panels one by one. You can:
+You can also add individual monitoring Panels one by one. You can:
* Add them to an existing or a new Tab.
-* Choose the panel type, including Line Plot, Bar Plot, Histogram, Counter, etc.
-* Customize panel name, legend, etc.
+* Choose the Panel type, including Line Plot, Bar Plot, Histogram, Counter, etc.
+* Customize Panel name, legend, etc.
{% hint style="info" %}
-**You can only view values stored inside snapshots.** In our example, these are values related to data drift and quality. You can't see model quality metrics yet, since there is no such data in a snapshot. If you add a panel for model quality metrics, it will be empty. To populate, you must add more snapshots, for example, with `ClassificationPreset()`.
+**You can only view values stored inside snapshots.** In our example, they relate to data drift and quality. You can't see model quality metrics yet, since there is no such data in the snapshots. If you add a model quality Panel, it will be empty. To populate it, add more snapshots, for example, with `ClassificationPreset()`.
{% endhint %}
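+
+For example, once you have a labeled batch, a sketch of such a snapshot (`labeled_batch` is a hypothetical DataFrame with both target and prediction columns):
+
+```python
+from evidently.metric_preset import ClassificationPreset
+
+quality_report = Report(metrics=[ClassificationPreset()])
+quality_report.run(reference_data=adult_ref, current_data=labeled_batch)
+ws.add_report(project.id, quality_report)
+```
+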
-Say, you want to add a new “Summary” Tab and add a couple of panels:
-* inferences over time.
-* share of drifting features over time.
+Say, you want to add a new “Summary” Tab and add a couple of Panels to show:
+* Inferences over time.
+* The share of drifting features over time.
You can add panels both in the UI or using the Python API.
@@ -351,13 +353,13 @@ Choose a Panel type - for example, LINE or BAR plot, and add your legend.
{% tab title="API" %}
-**Connect to a Project**. If you've made changes to the dashboard in the UI since creating the Project (such as adding Tabs), use the "get_project" method to load the latest dashboard configuration into your Python environment. This ensures that the new dashboard code won't overwrite Panels previously added in the UI.
+**Connect to a Project**. If you've edited the Dashboard in the UI since creating the Project (like adding Tabs), use `get_project` to load the latest configuration. This ensures you won't overwrite existing Panels.
```python
project = ws.get_project("YOUR PROJECT ID HERE")
```
-**Add new panels**. Use the `add_panel` method. You can specify the Panel name, legend, plot type, destination Tab, etc. After implementing the changes, save the configuration with `project.save()`.
+**Add new Panels**. Use the `add_panel` method. You can specify the Panel name, legend, plot type, destination Tab ("Summary"), etc. After implementing the changes, save the configuration with `project.save()`.
```python
project.dashboard.add_panel(
@@ -374,6 +376,7 @@ project.dashboard.add_panel(
plot_type=PlotType.LINE,
size=WidgetSize.FULL,
),
+ tab="Summary"
)
project.dashboard.add_panel(
DashboardPanelPlot(
@@ -394,7 +397,7 @@ project.dashboard.add_panel(
project.save()
```
-Return to the Evidently Cloud web app to view the dashboards you created. Refresh the page if necessary.
+Return to the Evidently Cloud web app to view the Dashboards. Refresh the page if necessary.
{% endtab %}
@@ -406,7 +409,7 @@ Return to the Evidently Cloud web app to view the dashboards you created. Refres
## 8. Monitor Test runs [OPTIONAL]
-You just created a dashboard to track individual metric values over time. Another option is to conduct your evaluations as Tests and monitor their outcomes.
+You just created a Dashboard to track individual metric values. Another option is to run your evaluations as Tests and track their outcomes.
![](../.gitbook/assets/cloud/toy_test_dashboard-min.png)
@@ -418,7 +421,7 @@ To do this, use Test Suites instead of Reports. Each Test in a Test Suite checks
Let’s create a Test Suite that includes:
* **Data Drift Test Preset**. It will generate a data drift check for all columns in the dataset using the same PSI method with a 0.3 threshold.
-* **Individual data quality tests**. They will check for missing values, empty rows, columns, duplicates, and constant columns. You can set test conditions using parameters like `eq` (equal) or `lte` (less than or equal). If no condition is set, Evidently will auto-generate them based on the reference data and heuristics.
+* **Individual data quality tests**. They will check for missing values, empty rows, columns, duplicates, and constant columns. You can set test conditions using parameters like `eq` (equal) or `lte` (less than or equal). If you don't specify the conditions, Evidently will auto-generate them based on the reference data.
Here's a script that again simulates generating Test Suites for 10 days in a row:
@@ -448,16 +451,16 @@ for i in range(0, 10):
ws.add_test_suite(project.id, test_suite)
```
-To visualize the results, let’s add a new dashboard Tab ("Data tests") and test-specific monitoring Panels.
+To visualize the results, add a new Dashboard Tab ("Data tests") and test-specific monitoring Panels.
{% tabs %}
{% tab title="UI" %}
-Enter the “edit” mode on a Dashboard, and click the “add Tab” and “add Panel” buttons. When creating a Panel, choose the “Test Plot” panel type, with a "detailed" option and 1D (daily) aggregation level.
+Enter the Dashboard “edit” mode, and click the “add Tab” and “add Panel” buttons. Choose the “Test Plot” Panel type with a "detailed" option and 1D (daily) aggregation level.
You can add:
-* One panel to display all column drift checks over time. Choose the `TestColumnDrift` test for `all' columns.
-* One panel for dataset-level data quality checks. Choose the `TestNumberOfConstantColumns`, `TestShareOfMissingValues`, `TestNumberOfEmptyRows`, `TestNumberOfEmptyColumns`, `TestNumberOfDuplicatedColumns` from the dropdown.
+* One Panel with all column drift checks. Choose the `TestColumnDrift` test for `all` columns.
+* One Panel with dataset-level data quality checks. Choose the `TestNumberOfConstantColumns`, `TestShareOfMissingValues`, `TestNumberOfEmptyRows`, `TestNumberOfEmptyColumns`, `TestNumberOfDuplicatedColumns` from the dropdown.
{% endtab %}
@@ -503,7 +506,7 @@ project.save()
{% endtabs %}
-You'll see dashboards with Test results over time in the new Tab. Head to the "Test Suites" section in the left menu for individual Test Suites. This helps debug Test outcomes.
+You'll see Dashboards with Test results over time in the new Tab. Head to the "Test Suites" section in the left menu for individual Test Suites. This helps debug Test outcomes.
![](../.gitbook/assets/cloud/view-tests-min.gif)
@@ -511,24 +514,24 @@ You'll see dashboards with Test results over time in the new Tab. Head to the "T
When to use Test Suites?
-You can choose between Reports and Test Suites or both in combination. Test Suites are useful for:
-* **Monitoring multiple conditions at once**. Bundling checks in a Test Suite helps reduce alert fatigue and ease condition setup. For example, you can quickly check if all columns in a dataset are within a defined min-max range.
+You can choose between Reports and Test Suites or use both. Test Suites are useful for:
+* **Monitoring multiple conditions at once**. Bundling checks in a Test Suite helps reduce alert fatigue and simplify configuration. For example, you can quickly check if all columns in a dataset are within a defined min-max range.
* **Batch testing scenarios** like comparing new vs. old models in CI/CD or validating the quality of input data batch.
* **Using Test results outside Evidently Cloud**. For instance, you can stop your pipeline if data quality tests fail.
However, Test Suites require you to define pass or fail conditions upfront. If you only want to plot metrics, you can start with Reports instead.
-Note that if you use Test Suites, you can also plot the individual metric values (e.g., nulls over time) in addition to the Test-specific panels.
+Note that if you use Test Suites, you can still plot the individual values (e.g., nulls over time) in addition to the Test-specific panels.
## What's next?
-To go through all the steps in more detail, read to the complete [Monitoring User Guide](https://docs.evidentlyai.com/user-guide/monitoring/monitoring_overview). Here are some of the things you might want to explore next:
+To go through all the steps in more detail, refer to the complete [Monitoring User Guide](https://docs.evidentlyai.com/user-guide/monitoring/monitoring_overview). Here are some of the things you might want to explore next:
-* **Customize your evaluations**. Check available [Presets](https://docs.evidentlyai.com/presets), [Metrics](https://docs.evidentlyai.com/reference/all-metrics), and [Tests](https://docs.evidentlyai.com/reference/all-tests) to see other evaluations you can run.
-* **Build your batch or real-time workflow**. For batch evaluations, you can run regular monitoring jobs - for example, using a tool like Airflow or a script to orchestrate them. If you have a live ML service, you use [Evidently collector service](https://docs.evidentlyai.com/user-guide/monitoring/collector_service) to collect incoming production data and manage the computation of Reports and Test Suites following your configuration.
-* **Add alerts**. You enable email, Slack, or Discord alerts for Test Suites failures or when specific metrics are out of bounds.
-* **Use Tags**. You can add metadata or tags to your snapshots. For instance, indicate shadow and production models and build individual monitoring panels.
+* **Customize your evaluations**. Browse the available [Presets](https://docs.evidentlyai.com/presets), [Metrics](https://docs.evidentlyai.com/reference/all-metrics), and [Tests](https://docs.evidentlyai.com/reference/all-tests) to see other checks you can run.
+* **Build your batch or real-time workflow**. For batch evaluations, you can run regular monitoring jobs - for example, using a tool like Airflow or a script to orchestrate them. If you have a live ML service, you can use the [Evidently collector service](https://docs.evidentlyai.com/user-guide/monitoring/collector_service) to collect incoming production data and manage the computations.
+* **Add alerts**. You can enable email, Slack, or Discord [alerts](https://docs.evidentlyai.com/user-guide/monitoring/alerting) when Tests fail or specific values are out of bounds.
+* **Use Tags**. You can add Metadata or Tags to your snapshots and filter monitoring Panels. For instance, build individual monitoring Panels for two model versions.
Need help? Ask in our [Discord community](https://discord.com/invite/xZjKRaNp8b).
From 6d1a89d57766b7fa61ed24a633ee3551b715669e Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Thu, 23 May 2024 16:46:40 +0100
Subject: [PATCH 20/24] Update quickstart-cloud.md
---
docs/book/get-started/quickstart-cloud.md | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/docs/book/get-started/quickstart-cloud.md b/docs/book/get-started/quickstart-cloud.md
index 52e3c2269e..4d4eab53ee 100644
--- a/docs/book/get-started/quickstart-cloud.md
+++ b/docs/book/get-started/quickstart-cloud.md
@@ -4,13 +4,17 @@ description: ML Monitoring “Hello world.” From data to dashboard in a couple
# 1. Create an account
-If not already, [sign up for an Evidently Cloud account](https://app.evidently.cloud/signup).
+If not already, [sign up for an Evidently Cloud account](https://app.evidently.cloud/signup). Upon registration, click on the "plus" sign in the UI to create a Team. For example, "Personal" team.
-# 2. Get an access token
+# 2. Copy the Team ID
+
+Go to the [Teams page](https://app.evidently.cloud/teams), then copy and save the Team ID.
+
+# 3. Get an access token
Click on the left menu with a key sign, select "personal token," generate and save the token.
-# 3. Install the Python library
+# 4. Install the Python library
Install the Evidently Python library. You can run this example in Colab or another Python environment.
@@ -29,19 +33,19 @@ from evidently.report import Report
from evidently.metric_preset import DataQualityPreset
```
-# 4. Create a new Project
+# 5. Create a new Project
-Connect to Evidently Cloud using your access token and create a Project.
+Connect to Evidently Cloud using your access token and Team ID and create a Project.
```python
-ws = CloudWorkspace(token="YOUR_TOKEN_HERE", url="https://app.evidently.cloud")
+ws = CloudWorkspace(token="YOUR_TOKEN_HERE", team_ID="YOUR_TEAM_ID_HERE", url="https://app.evidently.cloud")
project = ws.create_project("My test project")
project.description = "My project description"
project.save()
```
-# 5. Collect metrics
+# 6. Collect metrics
Import the demo "adult" dataset as a pandas DataFrame.
From 360e58b07471fff727995e646a2765224928e2aa Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Thu, 23 May 2024 16:46:50 +0100
Subject: [PATCH 21/24] Update quickstart-cloud.md
---
docs/book/get-started/quickstart-cloud.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/docs/book/get-started/quickstart-cloud.md b/docs/book/get-started/quickstart-cloud.md
index 4d4eab53ee..9b16202f23 100644
--- a/docs/book/get-started/quickstart-cloud.md
+++ b/docs/book/get-started/quickstart-cloud.md
@@ -4,7 +4,9 @@ description: ML Monitoring “Hello world.” From data to dashboard in a couple
# 1. Create an account
-If not already, [sign up for an Evidently Cloud account](https://app.evidently.cloud/signup). Upon registration, click on the "plus" sign in the UI to create a Team. For example, "Personal" team.
+If not already, [sign up for an Evidently Cloud account](https://app.evidently.cloud/signup).
+
+Upon registration, click on the "plus" sign in the UI to create a Team. For example, a "Personal" team.
# 2. Copy the team ID
From 8bbd4027e5a1325ff79346a91d7aef3164192172 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Thu, 23 May 2024 16:57:32 +0100
Subject: [PATCH 22/24] Update tutorial-cloud.md
---
docs/book/get-started/tutorial-cloud.md | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/docs/book/get-started/tutorial-cloud.md b/docs/book/get-started/tutorial-cloud.md
index dfc67c7c9e..7cbe522af3 100644
--- a/docs/book/get-started/tutorial-cloud.md
+++ b/docs/book/get-started/tutorial-cloud.md
@@ -56,9 +56,13 @@ Let's quickly look at an example monitoring Dashboard.
If you do not have one yet, [create an Evidently Cloud account](https://app.evidently.cloud/signup).
-## 2. View a demo project
+## 2. Create a Team
-After logging in, click on "Generate Demo Project". It will create a Project for a toy regression model that forecasts bike demand.
+Upon registration, click on the "plus" sign in the UI to create a Team. For example, a "Personal" team.
+
+## 3. View a demo project
+
+Click on "Generate Demo Project" inside your new Team. It will create a Project for a toy regression model that forecasts bike demand.
![](../.gitbook/assets/cloud/generate_demo_project.png)
@@ -175,12 +179,15 @@ Now, let's start monitoring!
**Get the API token**. To connect to Evidently Cloud, you need an access token. Use the "key" sign in the left menu to get to the token page, and click "generate token."
+**Team ID**. Go to the [Teams page](https://app.evidently.cloud/teams), then copy and save the Team ID.
+
To connect to the Evidently Cloud workspace, run:
```python
ws = CloudWorkspace(
token="YOUR_TOKEN_HERE",
-url="https://app.evidently.cloud")
+url="https://app.evidently.cloud",
+team_id="YOUR_TEAM_ID_HERE")
```
Now, you need to create a new Project. You can do this programmatically or in the UI.
From 695104dd5c4b9482a713ee5f45b70baa8e180ba1 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Thu, 23 May 2024 17:05:23 +0100
Subject: [PATCH 23/24] Update workspace.md
---
docs/book/monitoring/workspace.md | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/docs/book/monitoring/workspace.md b/docs/book/monitoring/workspace.md
index e027a6a850..41a54c25da 100644
--- a/docs/book/monitoring/workspace.md
+++ b/docs/book/monitoring/workspace.md
@@ -13,6 +13,8 @@ You need a workspace to organize your data and Projects.
If you do not have one yet, create an [Evidently Cloud account](https://app.evidently.cloud/signup).
+**Create a Team**. In the UI, create a new Team - for example, "Personal". Copy the Team ID from the [Teams page](https://app.evidently.cloud/teams).
+
**Get the API token**. You will use it to connect with Evidently Cloud Workspace from your Python environment. Use the "key" sign in the left menu to get to the token page, and click "generate token." Save it in a temporary file since it won't be visible once you leave the page.
**Connect to the Workspace**. To connect to the Evidently Cloud Workspace, you must first [install Evidently](../installation/install-evidently.md).
@@ -21,14 +23,15 @@ If you do not have one yet, create an [Evidently Cloud account](https://app.evid
pip install evidently
```
-Then, run imports and pass your API token to connect:
+Then, run imports and pass your API token and Team ID to connect:
```python
from evidently.ui.workspace.cloud import CloudWorkspace
ws = CloudWorkspace(
token="YOUR_TOKEN_HERE",
-url="https://app.evidently.cloud")
+url="https://app.evidently.cloud",
+team_id="YOUR_TEAM_ID")
```
{% hint style="info" %}
From 6d6f533d60780fd450d48d09e5beaaaf649ca361 Mon Sep 17 00:00:00 2001
From: elenasamuylova <67064421+elenasamuylova@users.noreply.github.com>
Date: Thu, 23 May 2024 17:07:09 +0100
Subject: [PATCH 24/24] Update quickstart-cloud.md
---
docs/book/get-started/quickstart-cloud.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/book/get-started/quickstart-cloud.md b/docs/book/get-started/quickstart-cloud.md
index 9b16202f23..68d658c042 100644
--- a/docs/book/get-started/quickstart-cloud.md
+++ b/docs/book/get-started/quickstart-cloud.md
@@ -40,7 +40,7 @@ from evidently.metric_preset import DataQualityPreset
Connect to Evidently Cloud using your access token and Team ID and create a Project.
```python
-ws = CloudWorkspace(token="YOUR_TOKEN_HERE", team_ID="YOUR_TEAM_ID_HERE", url="https://app.evidently.cloud")
+ws = CloudWorkspace(token="YOUR_TOKEN_HERE", team_id="YOUR_TEAM_ID_HERE", url="https://app.evidently.cloud")
project = ws.create_project("My test project")
project.description = "My project description"