Commit

Merge pull request #735 from reportportal/develop

Release
Vadim73i committed May 3, 2024
2 parents f62d02e + 8ba6ea7 commit 10fe98c
Showing 106 changed files with 587 additions and 291 deletions.
13 changes: 12 additions & 1 deletion apis/service-uat.yaml
@@ -10,7 +10,18 @@ info:
url: http://www.apache.org/licenses/LICENSE-2.0
version: 5.11.0
servers:
- url: //demo.reportportal.io/uat
- url: '{protocol}://{authority}/uat'
description: ReportPortal UAT server
variables:
protocol:
default: https
description: Protocol
enum:
- http
- https
authority:
description: Host name and port (if needed) of Report Portal server
default: demo.reportportal.io
tags:
- name: auth-configuration-endpoint
description: Auth Configuration Endpoint
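For reference, here is a minimal sketch (not part of this commit) of how the templated server URL above resolves when the default variable values are used; the helper name and the plain string substitution are illustrative assumptions, not ReportPortal code.

```ts
// Resolve an OpenAPI server URL template such as '{protocol}://{authority}/uat'
// by substituting the declared variables with their default values.
const template = '{protocol}://{authority}/uat';
const defaults: Record<string, string> = {
  protocol: 'https',                 // enum: http | https
  authority: 'demo.reportportal.io', // host name and optional port
};

function resolveServerUrl(url: string, variables: Record<string, string>): string {
  return url.replace(/\{(\w+)\}/g, (_, name) => variables[name] ?? `{${name}}`);
}

console.log(resolveServerUrl(template, defaults)); // https://demo.reportportal.io/uat
```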
2 changes: 1 addition & 1 deletion docs/FAQ/index.md
Expand Up @@ -65,7 +65,7 @@ ReportPortal has a lot of widgets to visualize test results and understand the s

**14. Can ReportPortal aggregate performance test results?**

We do not support direct integration with performance testing frameworks, but as a workaround you can import performance test results in JUnit format into ReportPortal. Further information on this topic can be found [here](https://github.com/reportportal/reportportal/issues/1820).
We do not support direct integration with performance testing frameworks, but as a workaround you can [import performance test results](https://github.com/reportportal/reportportal/issues/1820) in JUnit format into ReportPortal.

**15. Does ReportPortal have integration with Jira?**

10 changes: 6 additions & 4 deletions docs/analysis/AutoAnalysisOfLaunches.mdx
@@ -21,6 +21,8 @@ An auto-analyzer is presented by a combination of several services: OpenSearch,
* Analyzer train instance is responsible for training models for Auto-analysis and ML suggestions functionality.
* Metrics gatherer calculates metrics about the analyzer usage and requests deletion of custom models if metrics go down.

*You have the option to disable the Analyzer by removing the Analyzer, Analyzer train, and Metrics gatherer services from the installation.*

There are several ways to use an analyzer in our test automation reporting dashboard:

* Use the ReportPortal Analyzer: **manual** (analysis is switched on manually only for the chosen launch) or **auto** (analysis is switched on automatically after the launch finishes);
@@ -29,13 +31,13 @@ There are several ways to use an analyzer in our test automation reporting dashb

* Do not use any Analyzers at all and do an analytical routine by yourself;

## ReportPortal Analyzer. How to install

Add info about OpenSearch, Analyzer service (two instances – Analyzer and Analyzer train), Metrics gatherer in the docker-compose file as mentioned [here](https://github.com/reportportal/reportportal/blob/release/24.1/docker-compose.yml).
:::important
The Auto Analyzer service is a part of the ReportPortal bundle.
:::

## ReportPortal Analyzer. How the Auto-Analysis is working

ReportPortal's auto-analyzer allows users to reduce the time spent on test execution investigation by analyzing test failures in automatic mode. For that reason, you can deploy the ReportPortal with a service Analyzer by adding info about this service in a docker-compose file. The default analysis component is running along with OpenSearch which is used for test logs indexing.
ReportPortal's Auto Analyzer allows users to reduce the time spent on test execution investigation by analyzing test failures in automatic mode. The default analysis component is running along with OpenSearch which is used for test logs indexing.
To use Auto-Analysis effectively, you should go through several stages.

### Create an analytical base in the OpenSearch
14 changes: 8 additions & 6 deletions docs/api/versioned_sidebars/api-sidebars.ts
@@ -7,14 +7,15 @@ const apiSidebars: SidebarsConfig = {
serviceApi: [
{
type: 'html',
defaultStyle: true,
defaultStyle: false,
value: versionSelector(serviceApiVersions),
className: 'version-button',
},
{
type: 'html',
defaultStyle: true,
defaultStyle: false,
value: versionCrumb(`v5.11`),
className: 'version-crumb',
},
{
type: 'category',
@@ -32,14 +33,15 @@
'service-api-5.10': [
{
type: 'html',
defaultStyle: true,
defaultStyle: false,
value: versionSelector(serviceApiVersions),
className: 'version-button'
className: 'version-button',
},
{
type: 'html',
defaultStyle: true,
value: versionCrumb(`v5.10`)
defaultStyle: false,
value: versionCrumb(`v5.10`),
className: 'version-crumb',
},
{
type: 'category',
14 changes: 8 additions & 6 deletions docs/api/versioned_sidebars/uat-sidebars.ts
@@ -7,14 +7,15 @@ const uatSidebars: SidebarsConfig = {
serviceUat: [
{
type: 'html',
defaultStyle: true,
defaultStyle: false,
value: versionSelector(serviceUatVersions),
className: 'version-button',
},
{
type: 'html',
defaultStyle: true,
defaultStyle: false,
value: versionCrumb(`v5.11`),
className: 'version-crumb',
},
{
type: 'category',
@@ -32,14 +33,15 @@
'service-uat-5.10': [
{
type: 'html',
defaultStyle: true,
defaultStyle: false,
value: versionSelector(serviceUatVersions),
className: 'version-button'
className: 'version-button',
},
{
type: 'html',
defaultStyle: true,
value: versionCrumb(`v5.10`)
defaultStyle: false,
value: versionCrumb(`v5.10`),
className: 'version-crumb',
},
{
type: 'category',
125 changes: 86 additions & 39 deletions docs/dashboards-and-widgets/ComponentHealthCheck.mdx
@@ -1,45 +1,72 @@
---
sidebar_position: 23
sidebar_position: 24
sidebar_label: Component health check
---

# Component health check

Shows the passing rate of the application components which are indicated by the specified attributes.
The widget shows the passing rate of the application components, indicated by the attributes specified for test cases.

:::note
For using this widget you need to report (or add manually) attributes to test items.
:::
<MediaViewer src="https://youtu.be/T98iy0mJk0s" alt="Component Health Check Video" type="video" />

**Widget's parameters:**

- Filter
- Parameters: All launches/ Latest launches
- The min allowable passing rate for the component: Possible value from 50 - 100%. Default value 100%.
- The min allowable passing rate for the component: Possible values from 50 - 100%. The default value is 100%.
- Attribute key for the first level (mandatory)
- Attribute key for the 2-10 levels (optional)
- Attribute key for the 2nd – 10th levels (optional)

<MediaViewer src={require('./img/widget-types/ComponentHealthCheckCreation.png')} alt="Component Health Check Creation" />

**Widget view**


> **Use case:**
>
> **Situation:** As a Project Manager or Test Lead, I want to see the most unstable place in my product ( application).
> **Situation:**
> As a Project Manager or Test Lead, I want to identify the most unstable areas in my product (application).
>
> **Solution:** All test cases in my project in ReportPortal have attributes. For example `function: (order, team, configure, administrative)`, `type: (backend, API, Unit, UI)`, ...., `market state: (open, close)`, `role: (ProjectManager, Member, Admin)` and other. The attributes can be different and dependent on your project needs.
> **Solution:**
> Let’s build Component Health Check Widget based on a particular filter for the launches and the following attributes for test cases:
>
> 1st level: key: function, possible values: order, team, configure, administrate<br />
> 2nd level: key: type, possible values: backend, API, Unit, UI<br />
> 3rd level: key: market state, possible values: open, close<br />
> 4th level: key: role, possible values: Project Manager, Member, Admin
>
> A user can create a Component Health Check Widget and set attribute key = `function` for the 1st level, for the 2nd -`type` and the 3rd - `market state`
>
> So that a user will see on the first level several groups: order, team, configure, administrative. All groups will contain only
> test cases with an attribute that contains attribute key `function`. Each group has been grouped by attribute value: order, team,
> configure, administrative.
> If a user clicks on the group `function: order`, the system will show the second level of the widget. All test items on the second
> level will contain the attribute `function: order` and attributes that contain attribute key: `type`. And these items will be
> grouped by attribute values: backend, API, Unit, UI.
> The same logic will be applied for the next levels.
> On the first level of the widget, the user will only see the test cases with attribute key **‘function’**.
> The test cases will be grouped by attribute value: order, team, configure, administrate.
> By clicking on one of the groups (e.g., order), the user will see the second level of the widget.
> It will contain only test cases that have attribute **function:order** and attribute key **‘type’**.
> Test cases will be grouped by the attribute values available for attribute key **‘type’**, which are: backend, API, Unit, UI.
1st LEVEL

<MediaViewer src={require('./img/widget-types/ComponentHealthCheckFirstLevel.png')} alt="First level of Component Health Check widget in our qa metrics dashboard" />

2nd LEVEL

<MediaViewer src={require('./img/widget-types/ComponentHealthCheckSecondLevel.png')} alt="Second level of Component Health Check widget in our test reporting tool" />

**Widget view**

The Component Health Check widget is multi-level (up to 10 levels) with the ability to drill down to the list of test cases included in the corresponding group at each attribute key level.

**‘ALL LAUNCHES’ option**

**For the first level**, the system applies the chosen filter to all the launches in ReportPortal and analyzes the last 600 launches
from the filter. After combining all the test cases from these launches, the system searches for the test cases
with the attribute key specified for the first widget level (e.g., attribute key **‘function’**) and groups the found test cases
around unique attribute values (order, team, configure, administrate). The system then calculates the passing rate for each group.

**For the second level**, the system again analyzes the 600 launches from the filter and searches
for the test cases with the 1st level attribute key plus the value of the chosen group (e.g., **function:order**)
and also the attribute key specified for the second widget level (e.g., **‘type’**).
The found test cases are grouped around unique attribute values (backend, API, Unit, UI).
The system again calculates the passing rate for each group.

The same flow is applied to the other levels of the widget.
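As a rough illustration of the grouping and passing-rate calculation described above, the sketch below (not part of the commit) groups test cases by one attribute key and computes a passing rate per group; the type shape and helper name are assumptions, and details such as the 600-launch limit and filter handling are left out.

```ts
// Hypothetical shape of a reported test case; only the fields needed here.
interface TestCase {
  status: 'PASSED' | 'FAILED' | 'SKIPPED';
  attributes: Record<string, string>; // e.g. { function: 'order', type: 'API' }
}

// Group test cases by the value of one attribute key (one widget level)
// and compute the passing rate of each group.
function healthCheckLevel(cases: TestCase[], attributeKey: string) {
  const groups = new Map<string, TestCase[]>();
  for (const tc of cases) {
    const value = tc.attributes[attributeKey];
    if (value === undefined) continue; // only cases that carry the key are counted
    if (!groups.has(value)) groups.set(value, []);
    groups.get(value)!.push(tc);
  }
  return Array.from(groups.entries()).map(([value, items]) => ({
    group: value, // e.g. 'order', 'team', 'configure', 'administrate'
    total: items.length,
    passingRate: Math.round(
      (items.filter(i => i.status === 'PASSED').length / items.length) * 100,
    ),
  }));
}
```

The second level would first narrow the input to test cases carrying the chosen first-level pair (e.g. `function: order`) and then apply the same helper with the next attribute key.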

<MediaViewer src={require('./img/widget-types/ComponentHealthCheckScheme1.png')} alt="All launches in ReportPortal" />

@@ -59,23 +86,43 @@ For using this widget you need to report (or add manually) attributes to test it

<MediaViewer src={require('./img/widget-types/ComponentHealthCheckScheme9.png')} alt="Group by unique attribute with attributes key for 2 level" />

**Widget level**
Each level shows all available attributes with corresponded to his level attribute key.
For each level system analyze the last 600 launches.
**‘LATEST LAUNCH’ option**

**For the first level**, the flow is almost the same as for the ‘ALL LAUNCHES’ option.
However, after the test cases are grouped around unique values for the attribute key, the system keeps only the test cases
from the latest execution of each launch in the selection.
For example, if you have Launch A with executions #1 and #2 and Launch B with executions #1 and #2
and they correspond to the applied filter, then building the widget based on the ‘LATEST LAUNCHES’ parameter
will take into account only the test cases from Launch A execution #2 and Launch B execution #2.
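A similarly hedged sketch of the ‘latest executions’ selection described above; the launch fields are assumptions used only for illustration.

```ts
// Hypothetical launch shape: a name plus an execution (re-run) number.
interface Launch {
  name: string;   // e.g. 'Launch A'
  number: number; // execution #1, #2, ...
}

// Keep only the highest-numbered execution of each launch name, so that
// 'Launch A' #2 and 'Launch B' #2 survive while the #1 executions are dropped.
function latestExecutions(launches: Launch[]): Launch[] {
  const latest = new Map<string, Launch>();
  for (const launch of launches) {
    const current = latest.get(launch.name);
    if (!current || launch.number > current.number) {
      latest.set(launch.name, launch);
    }
  }
  return Array.from(latest.values());
}
```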

**Widget section**
The widget has two sections: Passed and Failed
**Failed section has:** all groups (test cases with the same attribute) which have passing rate less than passing rated which has been specified on widget wizard

**Passed section has:** all groups which have a passing rate higher than passing rated which has been specified on widget wizard
The widget is divided into two sections: Passed and Failed.

**The Failed section** includes all groups (test cases with the same attribute) that have a passing rate lower than the passing rate specified in the widget wizard.

**The Passed section** includes all groups that have a passing rate higher than the rate specified in the widget wizard.

<MediaViewer src={require('./img/widget-types/ComponentHealthCheckPassedFailed.png')} alt="Component Health Check widget: Passed and Failed sections" />

Each group on the widget has a name which equals to attribute value, passing rate = passed test cases with attribute / total test cases with attribute
number of test cases with attribute
link to the widget list view: Filter list view + test method: Test + status: Passed, Failed, Skipped, Interrupted, InProgress; the number of items is equal to the number of Test cases in the widget
a color line which depends on passing rate (see section Widget legend)
Widget legend
Each group on the widget has:

- **name**, which is equivalent to attribute value;
- **passing rate**, calculated as (the number of passed test cases in the group)/(total number of test cases in the group);
- **the number of test cases** in the group;
- **a color line**, which depends on the passing rate (see section Widget legend)

Users can drill down to view the list of test cases included in the group, filtered by:

- **test method:** Test
- **status:** Passed, Failed, Skipped, Interrupted, In Progress
- **attributes** (key=Key for corresponding level, value=group name)

:::note
Each subsequent level should contain the attributes of previous levels.
:::

Widget legend has two lines: Passed and Failed
**Widget legend** consists of two lines: Passed and Failed

**Failed**

@@ -86,25 +133,25 @@ The failed line has four colors:
- strong red
- dark red

And have values - less than specified on widget wizard -1
It represents values that are less than the rate specified in the widget minus 1.

**Passed**

The passing line has only two colors:

- slightly green
- green = Passed
- green

And have values - from specified on widget wizard to 100%. Depends on this color scheme each group on the widget has its own color.
It has values from the rate specified in the widget wizard to 100%. Depending on this color scheme, each group on the widget has its own color.

Let's say we set 'The min allowable passing rate for the component' to be 90%.

- passed green: groups which have passing rate 100%.
- slightly green: groups which passing rate from 99 - specified on widget wizard.
- light red: from 3* (90% - 1)/4 to (90% - 1)
- strong red: from (90% - 1)/2 to 3* (90% - 1)/4
- regular red: from (90% - 1)/4 to 2*(90% - 1)/4
- dark red: 0 - ((90% - 1)/4 -1)
passed green: groups with a passing rate of 100%.<br />
slightly green: groups with a passing rate from the value specified in the widget wizard up to 99%.<br />
light red: from 3 * (90% - 1)/4 to (90% - 1)<br />
strong red: from (90% - 1)/2 to 3 * (90% - 1)/4<br />
regular red: from (90% - 1)/4 to 2 * (90% - 1)/4<br />
dark red: 0 to ((90% - 1)/4 - 1)
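Read as formulas, the thresholds above could be computed as in the following sketch (illustrative only, not ReportPortal code); `minRate` stands for ‘The min allowable passing rate for the component’ and the band names follow the text.

```ts
// Map a group's passing rate to a legend band, given the min allowable
// passing rate configured in the widget wizard (e.g. 90).
type Band =
  | 'green'          // 100%
  | 'slightly green' // minRate .. 99
  | 'light red'      // 3*(minRate-1)/4 .. (minRate-1)
  | 'strong red'     // (minRate-1)/2 .. 3*(minRate-1)/4
  | 'regular red'    // (minRate-1)/4 .. (minRate-1)/2
  | 'dark red';      // 0 .. (minRate-1)/4

function legendBand(passingRate: number, minRate: number): Band {
  const failedCeiling = minRate - 1; // upper bound of the red bands
  if (passingRate === 100) return 'green';
  if (passingRate >= minRate) return 'slightly green';
  if (passingRate >= (3 * failedCeiling) / 4) return 'light red';
  if (passingRate >= failedCeiling / 2) return 'strong red';
  if (passingRate >= failedCeiling / 4) return 'regular red';
  return 'dark red';
}

// With minRate = 90: 95% -> 'slightly green', 70% -> 'light red',
// 50% -> 'strong red', 30% -> 'regular red', 10% -> 'dark red'.
console.log(legendBand(70, 90)); // 'light red'
```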

<MediaViewer src={require('./img/widget-types/ComponentHealthCheckView.png')} alt="Data visualization in test automation: Component Health Check " />

2 changes: 1 addition & 1 deletion docs/dashboards-and-widgets/CumulativeTrendChart.mdx
@@ -1,5 +1,5 @@
---
sidebar_position: 21
sidebar_position: 22
sidebar_label: Cumulative trend chart
---

@@ -1,5 +1,5 @@
---
sidebar_position: 17
sidebar_position: 18
sidebar_label: Different launches comparison chart
---

2 changes: 1 addition & 1 deletion docs/dashboards-and-widgets/FailedCasesTrendChart.mdx
@@ -1,5 +1,5 @@
---
sidebar_position: 15
sidebar_position: 16
sidebar_label: Failed cases trend chart
---

2 changes: 1 addition & 1 deletion docs/dashboards-and-widgets/FlakyTestCasesTableTop50.mdx
@@ -1,5 +1,5 @@
---
sidebar_position: 20
sidebar_position: 21
sidebar_label: Flaky test cases table (TOP-50)
---

@@ -1,5 +1,5 @@
---
sidebar_position: 11
sidebar_position: 12
sidebar_label: Investigated percentage of launches
---

@@ -1,5 +1,5 @@
---
sidebar_position: 8
sidebar_position: 9
sidebar_label: Launch execution and issue statistic
---

2 changes: 1 addition & 1 deletion docs/dashboards-and-widgets/LaunchStatisticsChart.mdx
@@ -1,5 +1,5 @@
---
sidebar_position: 5
sidebar_position: 6
sidebar_label: Launch statistics chart
---

2 changes: 1 addition & 1 deletion docs/dashboards-and-widgets/LaunchesDurationChart.mdx
@@ -1,5 +1,5 @@
---
sidebar_position: 7
sidebar_position: 8
sidebar_label: Launches duration chart
---

2 changes: 1 addition & 1 deletion docs/dashboards-and-widgets/LaunchesTable.mdx
@@ -1,5 +1,5 @@
---
sidebar_position: 12
sidebar_position: 13
sidebar_label: Launches table
---

2 changes: 1 addition & 1 deletion docs/dashboards-and-widgets/ManageWidgets.mdx
@@ -1,5 +1,5 @@
---
sidebar_position: 4
sidebar_position: 5
sidebar_label: Manage Widgets
---

@@ -1,5 +1,5 @@
---
sidebar_position: 14
sidebar_position: 15
sidebar_label: Most failed test-cases table (TOP-50)
---

@@ -1,5 +1,5 @@
---
sidebar_position: 22
sidebar_position: 23
sidebar_label: Most popular pattern table (TOP-20)
---
