Merge pull request #692 from reportportal/develop
Release 24.1
Vadim73i committed Mar 5, 2024
2 parents cd0ae87 + 6a7b549 commit 6817ab5
Showing 89 changed files with 1,438 additions and 356 deletions.
2 changes: 1 addition & 1 deletion docs/FAQ/index.md
@@ -13,7 +13,7 @@ All test results and testing data reside in-house, within your instance of Repor

**2. Assuming ReportPortal locally caches logs to understand their content, where are these stored, and what are the associated retention policies?**

ReportPortal utilizes PostgreSQL for its database, MinIO and the local system for file storage, and Elasticsearch for log indexing and ML processes.
ReportPortal utilizes PostgreSQL for its database, MinIO and the local system for file storage, and OpenSearch for log indexing and ML processes.

Retention policies can be set and adjusted within the application on a per-project basis.

144 changes: 67 additions & 77 deletions docs/analysis/AutoAnalysisOfLaunches.mdx

Large diffs are not rendered by default.

42 changes: 42 additions & 0 deletions docs/analysis/ImmediateAutoAnalysis.mdx
@@ -0,0 +1,42 @@
---
sidebar_position: 2
sidebar_label: Immediate Auto-Analysis
---

# Immediate Auto-Analysis

In the realm of software development, quick issue detection is a critical aspect that directly impacts the quality of your products. Following modern trends, ReportPortal introduces a new feature – Immediate Auto-Analysis (Immediate AA). Starting from version 24.1, Auto-Analysis can be started via API after individual test cases finish, before the whole Launch is done, thereby accelerating test failure triage.

With Immediate AA, you no longer need to wait for the Launch to finish before starting the analysis of failed tests. For instance, if your Launch contains 2000 tests and 16 of them have already failed, Immediate AA promptly marks these issues for you on the fly.

To initiate Immediate AA, you need to specify the following parameters in the **attributes** section **for each step on its start or finish** when reporting:

<MediaViewer src={require('./img/ImmediateAA.png')} alt="Parameters to initiate Immediate AA in our test automation dashboard" />

If the value for **“immediateAutoAnalysis”** is set to **“false”**, Immediate AA will not work.

If the “immediateAutoAnalysis” attribute is not specified, Immediate AA will not work either.

:::important
The “immediateAutoAnalysis” attribute can only be applied at the step level. It is essential to send the log, as the Analyzer operates based on the log.
:::
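
For illustration, here is a minimal, assumption-based sketch of passing such an attribute when starting a step directly through the ReportPortal reporting API with Python's `requests` library. The instance URL, project name, token, and launch UUID are placeholders, and in most setups your test framework agent builds this request for you; treat it as a sketch rather than required client code.

```python
# A minimal, assumption-based sketch: starting a step with the "immediateAutoAnalysis"
# attribute through the ReportPortal reporting API. URL, project, token and launch UUID
# are placeholders; in most setups your test framework agent builds this request.
import time
import requests

RP_URL = "https://your-reportportal-instance"   # placeholder instance URL
PROJECT = "my_project"                          # placeholder project name
TOKEN = "your-api-token"                        # placeholder API token
LAUNCH_UUID = "launch-uuid"                     # UUID of the already started launch

payload = {
    "name": "Login test",
    "type": "step",
    "launchUuid": LAUNCH_UUID,
    "startTime": int(time.time() * 1000),
    "attributes": [
        # This attribute triggers Immediate Auto-Analysis for the step;
        # "system": True keeps the technical attribute out of the UI.
        {"key": "immediateAutoAnalysis", "value": "true", "system": True},
    ],
}

response = requests.post(
    f"{RP_URL}/api/v2/{PROJECT}/item",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
step_uuid = response.json().get("id")  # UUID of the created step

# Remember to report at least one error log for this step: the Analyzer
# operates on logs, so a step without logs will not be analyzed.
```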

Immediate AA will work whenever this attribute is present, regardless of whether Auto-Analysis is enabled or disabled in Project Settings. If some items have already been analyzed by Immediate AA, Auto-Analysis and Manual Analysis on Launch finish will skip those previously analyzed items.

Immediate AA, like Auto-Analysis on Launch finish, is based on the following options:

* All previous launches
* Current and all previous launches with the same name
* All previous launches with the same name
* Only previous launch with the same name
* Only current launch

You can select the required option in the Project settings for Auto-Analysis.

It is important to highlight that Immediate AA works only for defect types from the “To Investigate” group. If you report a Launch with a failed step, and this step has another defect type (“Product Bug”, etc.), then the step will not be analyzed by Immediate AA.

This way, Immediate AA allows you to detect issues early and enhances testing performance.

<MediaViewer src="https://youtu.be/YR2lYtpukks" alt="Immediate Auto-Analysis: how it works" type="video" />

<MediaViewer src="https://youtu.be/kZ--1DFJGYg" alt="Immediate Auto-Analysis feature in our test automation reporting dashboard" type="video" />
46 changes: 46 additions & 0 deletions docs/analysis/ImmediatePatternAnalysis.mdx
@@ -0,0 +1,46 @@
---
sidebar_position: 8
sidebar_label: Immediate Pattern Analysis
---

# Immediate Pattern Analysis

In the modern software world, quick issue detection is a necessity. ReportPortal, as a progressive test automation dashboard, follows today’s faster development methods. Starting from version 24.1, Pattern Analysis can be started via API after individual test cases finish, before the whole Launch is done. Immediate Pattern Analysis (PA) significantly speeds up failure triage.

Previously, Pattern Analysis couldn’t be started before the Launch finished. Since some launches last up to 12 hours, or even a whole day, users couldn’t start test failure analysis for a long time, which slowed down issue spotting. With Immediate Pattern Analysis, you can begin looking at your test results much faster, which is especially valuable for large Launches.

To initiate Immediate PA, fulfill the following conditions:

1. PA rule should be created.

2. PA rule should be enabled.

3. When reporting, you need to specify the following parameters in the **attributes** section **for each step on its start or finish**:

<MediaViewer src={require('./img/ImmediatePA1.png')} alt="Parameters to initiate Immediate PA in our test reporting tool" />

If the value for **“immediatePatternAnalysis”** is set to **“false”**, Immediate PA will not work.

If the “immediatePatternAnalysis” attribute is not specified, Immediate PA will not work either.

The “system” parameter determines whether the “immediatePatternAnalysis” attribute is displayed in the UI. If **“system”** is set to **“true”**, the attribute is not displayed in the UI; if **“system”** is set to **“false”**, it is.

<MediaViewer src={require('./img/ImmediatePA2.png')} alt="immediatePatternAnalysis attribute with system parameter set to false" />

You can provide this attribute at the start of the step or at the finish. You can also set one value at the start and another at the finish, in which case the last value takes effect.
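
As a hedged illustration of this last-value-wins behavior, the sketch below finishes a failed step with the “immediatePatternAnalysis” attribute and hides it from the UI via `"system": true`. The URL, project, token, and UUIDs are placeholders, and the payload follows the standard finish-test-item request; your agent may expose this differently.

```python
# A hedged sketch: finishing a failed step with the "immediatePatternAnalysis"
# attribute. "system": True hides the attribute in the UI. URL, project, token
# and UUIDs are placeholders.
import time
import requests

RP_URL = "https://your-reportportal-instance"   # placeholder
PROJECT = "my_project"                          # placeholder
TOKEN = "your-api-token"                        # placeholder
LAUNCH_UUID = "launch-uuid"
STEP_UUID = "step-uuid"                         # UUID returned when the step was started

payload = {
    "launchUuid": LAUNCH_UUID,
    "endTime": int(time.time() * 1000),
    "status": "failed",
    "attributes": [
        # Sent on finish, this value overrides any value sent on start.
        {"key": "immediatePatternAnalysis", "value": "true", "system": True},
    ],
}

requests.put(
    f"{RP_URL}/api/v2/{PROJECT}/item/{STEP_UUID}",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
```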

:::note
Immediate PA will work whenever this attribute is present, regardless of whether Auto Pattern Analysis is enabled or disabled. Items already analyzed by Immediate Pattern Analysis will be skipped by Auto Pattern Analysis and Manual Pattern Analysis on Launch finish.
:::

:::important
Use a STRING rule instead of a REGEX rule wherever possible to speed up Pattern Analysis processing in the database. STRING patterns complete the analysis faster than REGEX and reduce the database workload.
:::
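
The reasoning behind this advice can be illustrated outside the database as well: a STRING rule is essentially a substring containment check, while a REGEX rule has to run a regular-expression engine over every log message. The snippet below is only an illustration of that difference, not ReportPortal’s internal matching code.

```python
# Illustrative only: a STRING rule is a plain substring check, while a REGEX rule
# runs a regular-expression engine over every log message.
import re

log_line = "java.lang.AssertionError: expected [200] but found [500]"

# STRING-style pattern: simple containment check, cheap to evaluate.
string_hit = "AssertionError" in log_line

# REGEX-style pattern: the regex engine has to scan and evaluate the expression.
regex_hit = re.search(r"java\.lang\.\w+Error", log_line) is not None

print(string_hit, regex_hit)  # True True -- the same hit, but the STRING check is cheaper
```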

Apart from the timing of execution, Immediate PA differs from PA on Launch finish in that Immediate PA works for any issue type, whereas PA on Launch finish works for "To Investigate" items only. This means that you can report certain items, such as Automation Bug or System Issue, and specify parameters for launching Immediate PA.

<MediaViewer src={require('./img/ImmediatePA3.png')} alt="Item reported as Automation Bug with immediatePatternAnalysis attribute" />

Overall, Immediate PA helps to catch issues early and improves testing quality.

<MediaViewer src="https://youtu.be/i9LYeiRXSxA" alt="Immediate Pattern Analysis: how it works" type="video" />
8 changes: 4 additions & 4 deletions docs/analysis/MLSuggestions.md
@@ -5,7 +5,7 @@ sidebar_label: ML Suggestions

# ML Suggestions

ML suggestions functionality is based on previously analyzed results (either manually or via Auto-analysis feature) using Machine Learning. The functionality is provided by the Analyzer service in combination with ElasticSearch.
ML suggestions functionality is based on previously analyzed results (either manually or via Auto-analysis feature) using Machine Learning. The functionality is provided by the Analyzer service in combination with OpenSearch.

This analysis suggests the most similar previously analyzed items to the current test item. You can interact with this functionality in several ways:
* Choose one of the suggested items if you see that the reason for the current test item is similar to the suggested one. When you choose the item and apply changes to the current item, the following test item characteristics will be copied from the chosen test item:
@@ -17,7 +17,7 @@ This analysis hints what are the most similar analyzed items to the current test

## How the ML suggestions functionality is working

ML Suggestions searches for similar previously analyzed items to the current test item, so it requires an analytical base saved in Elasticsearch. ML suggestions takes into account all user-investigated, auto-analyzed items or items chosen from ML suggestions. While the analytical base is growing ML suggestions functionality will have more examples to search by and suggest you the best options.
ML Suggestions searches for similar previously analyzed items to the current test item, so it requires an analytical base saved in OpenSearch. ML Suggestions takes into account all user-investigated items, auto-analyzed items, and items chosen from ML suggestions. As the analytical base grows, the ML suggestions functionality has more examples to search by and can suggest the best options.

ML suggestions analysis is run every time you enter the "Make decision" editor. ML suggestions are run for all test items, no matter what defect type they currently have. This functionality processes only test items with logs (log level >= 40000).

@@ -31,13 +31,13 @@ The request for the suggestions part looks like this:
* analyzerConfig;
* logs = List of log objects (logId, logLevel, message)

The Analyzer preprocesses log messages from the request for analysis: extracts error message, stacktrace, numbers, exceptions, urls, paths, parameters and other parts from text to search for the most similar items by these parts in the analytical base. We make several requests to the Elasticsearch to find similar test items by all the error logs.
The Analyzer preprocesses log messages from the request for analysis: it extracts the error message, stacktrace, numbers, exceptions, urls, paths, parameters and other parts from the text to search for the most similar items by these parts in the analytical base. We make several requests to OpenSearch to find similar test items by all the error logs.

:::note
When a test item has several error logs, we will use the log with the highest score as a representative of this test item.
:::
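
As a rough illustration of such a similarity request (not the Analyzer’s actual query; the index name and field are assumptions), a query against OpenSearch could look like this with the `opensearch-py` client:

```python
# A rough, illustrative sketch of a similarity query with the opensearch-py client.
# This is NOT the Analyzer's actual query; the index name and field are assumptions.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

def find_similar_logs(index: str, error_message: str, size: int = 10):
    """Return up to `size` of the most similar previously analyzed logs."""
    body = {
        "size": size,  # the Analyzer keeps the 10 highest-scoring candidates per query
        "query": {
            "more_like_this": {
                "fields": ["message"],   # assumed field with the preprocessed log text
                "like": error_message,
                "min_term_freq": 1,
                "min_doc_freq": 1,
            }
        },
    }
    return client.search(index=index, body=body)["hits"]["hits"]
```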

The ElasticSearch returns to the service Analyzer 10 logs with the highest score for each query and all these candidates will be processed further by the ML model. The ML model is an XGBoost model which features (about 40 features) represent different statistics about the test item, log message texts, launch info and etc, for example:
OpenSearch returns to the Analyzer service the 10 logs with the highest score for each query, and all these candidates are processed further by the ML model. The ML model is an XGBoost model whose features (about 40 features) represent different statistics about the test item, log message texts, launch info, etc., for example:
* the percent of selected test items with the following defect type
* max/min/mean scores for the following defect type
* cosine similarity between vectors, representing error message/stacktrace/the whole message/urls/paths and other text fields
Binary file added docs/analysis/img/BaseForAnalysis.jpg
Binary file added docs/analysis/img/ImmediateAA.png
Binary file added docs/analysis/img/ImmediatePA1.png
Binary file added docs/analysis/img/ImmediatePA2.png
Binary file added docs/analysis/img/ImmediatePA3.png
31 changes: 0 additions & 31 deletions docs/dashboards-and-widgets/FlakyTestCasesTableTop20.mdx

This file was deleted.

45 changes: 45 additions & 0 deletions docs/dashboards-and-widgets/FlakyTestCasesTableTop50.mdx
@@ -0,0 +1,45 @@
---
sidebar_position: 20
sidebar_label: Flaky test cases table (TOP-50)
---

# Flaky test cases table (TOP-50)

Shows the TOP-50 most flaky test cases within the specified previous launches.

A test case is displayed in the table if its status has changed at least once from Passed to Failed or from Failed to Passed in the specified previous launches.

**Widget's parameters:**

- Launches count: 2-100. The default value is 30.

- Launch name: required field.

- Include Before and After methods: optional.

**Widget view**

The widget has a table view with the following data displayed:

- Test Case - link to the Step level of the last launch.
- Switches - the number of status switches found for the test case.
- % of Switches - the percentage of actual switches out of the possible switches.
- Last switch - date and time of the last run in which the test item switched its status, displayed in “time ago” format (i.e. “10 minutes ago”).

On mouse hover, the system displays the exact start time.

:::note
In the “Switches” column, only Passed and Failed statuses are displayed (Passed - green, Failed - red).
:::

:::important
The number of status switches of a test case with the same uniqueID is displayed in the format **N from M**, where:

**N** is the number of status changes.

**M** is the number of all possible changes of the item in the selection (number of item runs = number of test executions minus the number of executions with status Skipped, minus 1).

On mouse hover, a tooltip appears: “N status changes from M possible times”. A worked sketch of this calculation follows below.
:::
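
Below is a small illustrative sketch (not the widget’s actual code) of how N, M, and the percentage of switches can be derived from a sequence of run statuses:

```python
# Illustrative sketch of the "N from M" calculation for one test case.
# Skipped executions are excluded before counting the possible switches.
def flaky_switches(statuses):
    """statuses: run results in chronological order, e.g. ["passed", "failed", ...]."""
    considered = [s for s in statuses if s != "skipped"]
    possible = max(len(considered) - 1, 0)  # M: executions minus Skipped, minus 1
    switches = sum(
        1 for prev, curr in zip(considered, considered[1:]) if prev != curr
    )                                       # N: actual Passed <-> Failed switches
    percent = round(100 * switches / possible) if possible else 0
    return switches, possible, percent

n, m, pct = flaky_switches(["passed", "failed", "skipped", "passed", "passed"])
print(f"{n} from {m} ({pct}% of switches)")  # 2 from 3 (67% of switches)
```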

<MediaViewer src={require('./img/widget-types/FlakyTestCasesTableWidget.png')} alt="Data visualization in test automation: Flaky Test Cases Table Widget" />
58 changes: 33 additions & 25 deletions docs/dashboards-and-widgets/PassingRatePerLaunch.mdx
@@ -5,50 +5,58 @@ sidebar_label: Passing rate per launch

# Passing rate per launch

Shows the percentage ratio of Passed test cases to Total test cases for the last run of selected launch.

:::note
Total test cases = Passed + Not Passed, while Not Passed = Failed + Skipped + Interrupted

Thus, Passing rate = Passed / (Passed + Failed + Skipped + Interrupted)
:::
Shows the percentage ratio of Passed test cases to Total test cases, including or excluding Skipped ones, for the selected launch.

**Widget's parameters:**

- Launch Name: the name of any finished launch

- Mode: Bar View/Pie View

- Ratio based on: Total test cases (Passed/Failed/Skipped) / Total test cases excluding Skipped
- Widget name: any text

- Description: any text

Please find below an example of configuration:
:::note
During the setup process, you can choose whether to consider Skipped items using the radio button.
:::

<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch1.png')} alt="Configuration Passing Rate Per Launch Widget" />
**Passing rate calculation including Skipped items**

As you can see, this widget was built based on the test results of the last run of the Daily Smoke Suite:
* Total test cases = Passed + Not Passed, while Not Passed = Failed + Skipped + Interrupted.
* Thus, Passing rate = Passed / (Passed + Failed + Skipped + Interrupted).

<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch2.png')} alt="Last run of the Daily Smoke Suite" />
**Passing rate calculation excluding Skipped items**

**Widget view**
* Total test cases = Passed + Failed, while Failed = Failed + Interrupted.
* Thus, Passing rate = Passed / (Passed + Failed + Interrupted).
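
As a quick illustrative check of both formulas (the counts below are made up):

```python
# Illustrative check of both passing-rate formulas (the counts are made up).
passed, failed, skipped, interrupted = 80, 12, 5, 3

# Including Skipped items: Total = Passed + Failed + Skipped + Interrupted
rate_including_skipped = passed / (passed + failed + skipped + interrupted)

# Excluding Skipped items: Total = Passed + Failed + Interrupted
rate_excluding_skipped = passed / (passed + failed + interrupted)

print(f"{rate_including_skipped:.1%}")  # 80.0%
print(f"{rate_excluding_skipped:.1%}")  # 84.2%
```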

Please find below an example of configuration:

<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch1.png')} alt="Configuring the Passing rate per launch widget in our test report dashboard" />

<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch2.png')} alt="Creating a Passing rate per launch widget" />

The widget can be displayed in two options as shown on the pictures below:

Bar View
**Bar View**

<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch3.png')} alt="Dashboard to manage test results: Passing Rate Per Launch Bar View" />
<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch3.png')} alt="Passing rate per launch. Bar view" />

Pie View
**Pie View**

<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch4.png')} alt="Passing Rate Per Launch Pie View" />
<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch4.png')} alt="Passing rate per launch. Pie view" />

The tooltip on mouse hover over chart area shows the quantity of Passed/Failed test cases and percentage ratio of Passed/Failed test cases to Total cases for the last run.
As you can see, this widget was built based on the test results of the last run of the Daily Smoke Suite.

<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch5.png')} alt="Percentage ratio for the last run" />
An example of Passing rate per launch widget including Skipped items:

The widget has clickable sections. When you click on a specific section in the widget, the system forwards you to the launch view for appropriate selection.
<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch5.png')} alt="Passing rate per launch widget including Skipped items" />

:::note
The widget doesn't contain 'IN PROGRESS" launches.
:::
An example of Passing rate per launch widget excluding Skipped items:

<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch6.png')} alt="Passing rate per launch widget excluding Skipped items" />

The tooltip on mouse hover over the chart area shows the number of test cases and the ratio of Passed/Not Passed to Total test cases, either including Skipped items (Passed, Failed, Skipped) or excluding Skipped items, depending on the selected option.

The Passing rate per launch widget has clickable sections. When you click on the Not Passed pie/bar element, the system redirects you to the test item view. If the widget was built with the option “Total test cases (Passed/Failed/Skipped)”, tests with statuses Failed, Interrupted, and Skipped are displayed. If the widget was built with the option “Total test cases excluding Skipped”, tests with statuses Failed and Interrupted are displayed.

<MediaViewer src={require('./img/widget-types/PassingRatePerLaunch7.png')} alt="Redirect to the test item view" />
