test(load-test): Performance tests for create dialog and search #1331

Merged
merged 23 commits into main from performance/create_and_search_dialog on Oct 31, 2024

Conversation

@dagfinno dagfinno (Collaborator) commented Oct 22, 2024

Description

See commits for details

Related Issue(s)

Verification

  • Your code builds clean without any errors or warnings
  • Manual testing done (required)
  • Relevant automated test added (if you find this hard, leave it and we'll help out)

Documentation

  • Documentation is updated (either in the docs directory, Altinnpedia, or a separate linked PR in altinn-studio-docs, if applicable)

Summary by CodeRabbit

  • New Features

    • Introduced new GitHub Actions workflows for K6 performance testing, allowing manual execution with customizable parameters.
    • Added performance testing scripts for end-user interactions and dialog creation using the K6 framework.
  • Bug Fixes

    • Improved handling of authorization headers in request parameter functions.
  • Chores

    • Updated .gitignore to exclude sensitive files and generated token files.
    • Added a script for generating tokens for service owners and end users.

@dagfinno dagfinno requested review from a team as code owners October 22, 2024 12:48
coderabbitai bot (Contributor) commented Oct 22, 2024


Walkthrough

This pull request introduces two new GitHub Actions workflows for K6 performance testing, dispatch-k6-performance.yml and workflow-run-k6-performance.yml. These workflows allow for manual triggering and configuration of performance tests through user-defined parameters. Additionally, it adds a Bash script for token generation, new performance testing scripts, and updates to the .gitignore file to enhance security by excluding sensitive files. Modifications to existing functions ensure proper handling of authorization headers in requests.

Changes

  • .github/workflows/dispatch-k6-performance.yml: New workflow added for manual K6 performance test execution with configurable inputs.
  • .github/workflows/performance-workflows/workflow-run-k6-performance.yml: New workflow added for K6 performance tests, triggered by workflow_call with required inputs.
  • .gitignore: Added entries to ignore .secrets, **/.endusers-with-tokens.csv, and **/.serviceowners-with-tokens.csv.
  • tests/k6/common/request.js: Updated getEnduserRequestParams and getServiceOwnerRequestParams to conditionally handle the Authorization header.
  • tests/k6/tests/enduser/performance/simple-search.js: New performance testing script for end-user interactions with K6, including dynamic user data handling.
  • tests/k6/tests/scripts/generate_tokens.sh: New Bash script for generating tokens for service owners and end users based on API environment.
  • tests/k6/tests/serviceowner/performance/create-dialog.js: New performance testing script for creating dialog entries in a service owner context.
  • tests/k6/tests/serviceowner/testdata/01-create-dialog.js: Updated function to accept an optional endUser parameter for flexibility in dialog creation.
  • tests/k6/tests/serviceowner/performance/createremove-no-delay.js: Updated setup function to standardize the casing of the headers property.


Suggested reviewers

  • oskogstad
  • arealmaas


coderabbitai bot left a comment

Actionable comments posted: 11

🧹 Outside diff range and nitpick comments (11)
.github/workflows/performance-workflows/workflow-run-k6-performance.yml (2)

5-23: LGTM: Well-defined inputs for K6 performance tests.

The inputs cover essential parameters for configuring K6 performance tests, and marking them as required ensures all necessary information is provided.

Remove trailing spaces from lines 8 and 23 to improve code cleanliness:

-        type: string   
+        type: string
-        type: string    
+        type: string
🧰 Tools
🪛 yamllint

[error] 8-8: trailing spaces

(trailing-spaces)


[error] 23-23: trailing spaces

(trailing-spaces)


40-55: LGTM: Well-structured job steps for running K6 tests.

The job steps are logically ordered and cover all necessary actions for running K6 tests. The use of latest action versions and proper environment variable setup is commendable.

Remove trailing spaces from lines 47 and 53 to improve code cleanliness:

-        ./tests/k6/tests/scripts/generate_tokens.sh ./tests/k6/tests/performancetest_data ${{ inputs.tokens }} 
+        ./tests/k6/tests/scripts/generate_tokens.sh ./tests/k6/tests/performancetest_data ${{ inputs.tokens }}
-        TOKEN_GENERATOR_PASSWORD: ${{ secrets.TOKEN_GENERATOR_PASSWORD }}  
+        TOKEN_GENERATOR_PASSWORD: ${{ secrets.TOKEN_GENERATOR_PASSWORD }}
🧰 Tools
🪛 yamllint

[error] 47-47: trailing spaces

(trailing-spaces)


[error] 53-53: trailing spaces

(trailing-spaces)

.github/workflows/dispatch-k6-performance.yml (1)

5-42: Input parameters are well-defined, but consider these minor improvements:

  1. The description for the 'tokens' input could be clearer. Consider rewording it to something like: "Tokens to generate: for create dialog, search, both, or none".

  2. For the 'vus' (Virtual Users) input, consider adding a maximum value to prevent accidental overload of the system. For example:

vus:
  description: 'Number of Virtual Users (VUs)'
  required: true
  type: number
  default: 10
  maximum: 100  # or an appropriate maximum value
  3. For the 'duration' input, consider adding a pattern to ensure valid time formats are entered. For example:
duration:
  description: 'Duration of test (e.g., 30s, 1m, 10m)'
  required: true
  default: 1m
  type: string
  pattern: '^(\d+[smh])+$'

These changes would enhance input validation and clarity.

tests/k6/common/request.js (1)

36-43: Approved with suggestions for improvement

The changes to getEnduserRequestParams function improve its robustness and flexibility. However, there are a couple of points to consider:

  1. The use of params.Headers (capital H) is inconsistent with the rest of the code which uses params.headers (lowercase h). Consider changing this for consistency.

  2. It would be beneficial to add comments explaining the purpose of these changes, especially the logic for handling the overridden Authorization header.

Consider applying the following changes:

 function getEnduserRequestParams(params = null, tokenOptions = null) {
     params = params || {};
-    const headers = params.Headers || {};
+    const headers = params.headers || {}; // Changed to lowercase 'headers' for consistency
+    // Check if Authorization header is overridden in the params
     const hasOverridenAuthorizationHeader = headers.Authorization !== undefined;
     let defaultParams = {
         headers: {
             'Accept': 'application/json',
             'User-Agent': 'dialogporten-k6',
+            // Use overridden Authorization if provided, otherwise generate a new token
             'Authorization': hasOverridenAuthorizationHeader ? headers.Authorization : 'Bearer ' + getEnduserTokenFromGenerator(tokenOptions)
         }
     }
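
For orientation, here is a minimal usage sketch of the new behaviour, assuming the lowercase headers fix above is applied and that getEnduserRequestParams is (or can be) exported from the common request module. The import path, target URL, and the enduser.token field are illustrative, taken to come from the token CSV files this PR generates:

import http from 'k6/http';
// Assumes getEnduserRequestParams is exported; the relative path is illustrative.
import { getEnduserRequestParams } from '../../common/request.js';

export default function (enduser) {
    // enduser.token is assumed to hold a pre-generated token from
    // .endusers-with-tokens.csv, so the token generator is not called per request.
    const params = getEnduserRequestParams({
        headers: { Authorization: 'Bearer ' + enduser.token }
    });
    http.get('https://dialogporten.example.invalid/api/v1/enduser/dialogs', params);
}
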
.gitignore (2)

382-382: Good security practice for test data.

Ignoring **/.endusers-with-tokens.csv is a good practice to prevent committing sensitive test data. The ** ensures this file is ignored in any directory.

Consider adding a comment above this line to explain its purpose, e.g.:

# Ignore files containing test tokens

This will help other developers understand why these files are being ignored.


383-383: Good security practice for test data.

Ignoring **/.serviceowners-with-tokens.csv is consistent with the previous entry and maintains good security practices for test data.

Consider grouping this with the previous entry under a single comment:

# Ignore files containing test tokens
**/.endusers-with-tokens.csv
**/.serviceowners-with-tokens.csv

This improves readability and maintainability of the .gitignore file.

tests/k6/tests/serviceowner/testdata/01-create-dialog.js (1)

4-4: Approve the function signature change with a minor suggestion.

The updated function signature enhances flexibility by allowing an optional endUser parameter. This change is well-implemented and maintains backward compatibility.

Consider adding a brief comment explaining the purpose of the endUser parameter for improved code clarity.

You could add a comment like this:

/**
 * Generates test data for creating a dialog.
 * @param {string} [endUser] - The end user's SSN. If not provided, uses the default end user SSN.
 * @returns {Object} The dialog creation test data.
 */
export default function (endUser = getDefaultEnduserSsn()) {
  // ... rest of the function
}

Also applies to: 7-7

tests/k6/tests/enduser/performance/simple-search.js (1)

8-10: Add error handling for CSV parsing and file access.

Currently, the script assumes that the CSV file exists and is correctly formatted. Consider adding error handling to manage cases where the file is missing or contains malformed data to prevent runtime errors.
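
A hedged sketch of what such guarding could look like, using the papaparse helper from jslib and k6's SharedArray; the PR's script presumably already loads the CSV, so only the added error checks are the point here:

import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';
import { SharedArray } from 'k6/data';

const filename = '.endusers-with-tokens.csv';

// SharedArray runs this loader once in the init context and shares the result
// across VUs, so failing fast here aborts the test before any load is generated.
const endUsers = new SharedArray('endUsers', function () {
    let csv;
    try {
        csv = open(filename);
    } catch (e) {
        throw new Error(`Could not open ${filename}; run generate_tokens.sh first`);
    }
    const parsed = papaparse.parse(csv, { header: true, skipEmptyLines: true });
    if (parsed.errors.length > 0 || parsed.data.length === 0) {
        throw new Error(`Malformed or empty CSV in ${filename}`);
    }
    return parsed.data;
});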

tests/k6/tests/serviceowner/performance/create-dialog.js (3)

37-37: Use const instead of var for variable declaration

It's a good practice to use const for variables that are not reassigned, enhancing code readability and preventing unintentional mutations.

Apply this diff:

-export function createDialog(serviceOwner, endUser) {  
-    var paramsWithToken = {
+export function createDialog(serviceOwner, endUser) {  
+    const paramsWithToken = {

45-45: Use const instead of let for unchanging variables

Since r is not reassigned after initialization, using const is preferable.

Apply this diff:

 describe('create dialog', () => {
-    let r = postSO('dialogs', dialogToInsert(endUser.ssn), paramsWithToken);  
+    const r = postSO('dialogs', dialogToInsert(endUser.ssn), paramsWithToken);  
     expect(r.status, 'response status').to.equal(201);
 });

20-24: Review the thresholds configuration for meaningful assertions

The threshold 'max>=0' for 'http_req_duration{scenario:default}' will always pass since response times cannot be negative. This might not provide valuable insights into performance.

Consider setting realistic thresholds to monitor performance effectively. For example:

 thresholds: {
-    'http_req_duration{scenario:default}': [`max>=0`],
+    'http_req_duration{scenario:default}': [`p(95)<500`],

This sets a threshold where 95% of requests should complete within 500ms.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 6b926f4 and d8e238b.

⛔ Files ignored due to path filters (2)
  • tests/k6/tests/performancetest_data/endusers-staging.csv is excluded by !**/*.csv
  • tests/k6/tests/performancetest_data/serviceowners-staging.csv is excluded by !**/*.csv
📒 Files selected for processing (8)
  • .github/workflows/dispatch-k6-performance.yml (1 hunks)
  • .github/workflows/performance-workflows/workflow-run-k6-performance.yml (1 hunks)
  • .gitignore (1 hunks)
  • tests/k6/common/request.js (1 hunks)
  • tests/k6/tests/enduser/performance/simple-search.js (1 hunks)
  • tests/k6/tests/scripts/generate_tokens.sh (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
  • tests/k6/tests/serviceowner/testdata/01-create-dialog.js (1 hunks)
🧰 Additional context used
🪛 yamllint
.github/workflows/performance-workflows/workflow-run-k6-performance.yml

[error] 8-8: trailing spaces

(trailing-spaces)


[error] 23-23: trailing spaces

(trailing-spaces)


[error] 47-47: trailing spaces

(trailing-spaces)


[error] 53-53: trailing spaces

(trailing-spaces)

🔇 Additional comments (13)
.github/workflows/performance-workflows/workflow-run-k6-performance.yml (3)

1-4: LGTM: Workflow name and trigger are well-defined.

The workflow name clearly describes its purpose, and using workflow_call as the trigger allows for modular and reusable CI/CD setups.


24-32: LGTM: Appropriate use of secrets for sensitive information.

The workflow correctly defines required secrets for authentication and K6 cloud integration, ensuring secure handling of sensitive information.


34-39: LGTM: Job definition and permissions are well-configured.

The job is appropriately named and uses the latest Ubuntu runner. The permissions granted are necessary for updating PR status and comments.

.github/workflows/dispatch-k6-performance.yml (2)

1-4: LGTM: Workflow name and trigger are well-defined.

The workflow name clearly describes its purpose, and the manual trigger (workflow_dispatch) is appropriate for performance tests, allowing controlled execution.


1-60: Overall, the workflow is well-structured and functional.

This new workflow file effectively sets up a manually triggered K6 performance test with configurable parameters. The use of a reusable workflow for the actual test execution is a good practice for maintaining separation of concerns.

The suggestions provided in the review aim to:

  1. Improve input parameter descriptions and validation.
  2. Enhance the readability and maintainability of the job configuration.

These changes, while not critical, would further improve the robustness and user-friendliness of the workflow.

.gitignore (2)

379-379: Excellent addition for security!

Adding .secrets to .gitignore is a crucial security measure. It prevents accidental commits of sensitive information, which is especially important for performance tests that may require authentication tokens.


379-383: Overall, excellent security-focused changes.

These additions to .gitignore are crucial for the performance testing changes mentioned in the PR. They effectively prevent sensitive data (secrets and test tokens) from being accidentally committed. This aligns well with best practices for security and the PR objectives.

Consider adding comments as suggested in the previous review comments to improve documentation for future developers.

tests/k6/tests/serviceowner/testdata/01-create-dialog.js (2)

Line range hint 1-283: Overall assessment: Changes are well-implemented and improve flexibility.

The modifications to this file are minimal but impactful. They enhance the flexibility of the createDialog function while maintaining backward compatibility. The changes align well with the performance testing context mentioned in the PR description.

Key points:

  1. The function signature update is implemented correctly.
  2. Backward compatibility is maintained through the use of a default parameter.
  3. The change is localized and doesn't introduce breaking changes to the existing code structure.

These changes contribute positively to the overall goal of implementing performance tests for create dialog functionality.


4-7: Verify usage of this function in other parts of the codebase.

The change to the function signature is well-implemented and localized. To ensure full utilization of this enhancement:

  1. Review any code that directly calls this function to see if it can benefit from passing a specific endUser.
  2. Update relevant documentation or comments in other files that describe the usage of this function.

This will help maintain consistency across the codebase and take full advantage of the added flexibility.

To assist in this verification, you can run the following script:

✅ Verification successful

Usage of create-dialog function is properly localized.

The changes to the create-dialog function signature are confined to specific test files. No impacts on other parts of the codebase were found.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Find all occurrences of the create-dialog function usage

# Search for import statements of this file
echo "Searching for import statements:"
rg -n "import.*01-create-dialog"

# Search for function calls
echo "\nSearching for function calls:"
rg -n "createDialog\s*\("

Length of output: 2713

tests/k6/tests/enduser/performance/simple-search.js (3)

6-10: Ensure secure handling of tokens and sensitive data.

Storing tokens in a CSV file may pose security risks if unauthorized access occurs. Consider using secure methods for storing and accessing tokens, such as environment variables or secret management services.

[security]


36-37: Sanitize user input when constructing query parameters.

When building the defaultFilter, ensure that enduser.ssn is properly validated and sanitized, even in testing environments, to prevent potential injection issues.

[security]
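
As a sketch of such a guard (the helper name and its usage are hypothetical, not code from this PR):

// Hypothetical guard: accept only 11-digit numeric SSNs before they are
// interpolated into the search filter, even though the CSV holds test data.
const SSN_PATTERN = /^\d{11}$/;

export function assertValidSsn(ssn) {
    if (!SSN_PATTERN.test(ssn)) {
        throw new Error('Refusing to use a malformed SSN in the search filter');
    }
    return ssn;
}

// Usage: build defaultFilter with assertValidSsn(enduser.ssn) instead of the raw value.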


6-6: Verify the file path and visibility of the end users CSV file.

Using a filename starting with a dot (.endusers-with-tokens.csv) indicates a hidden file in Unix-like systems. Ensure that the file is accessible when the test runs and that the path is correct.

Run the following script to verify the existence and accessibility of the CSV file:

tests/k6/tests/serviceowner/performance/create-dialog.js (1)

45-45: ⚠️ Potential issue

Ensure secure handling of sensitive personal information

The ssn (social security number) of the end user is being used. It's crucial to handle PII securely to comply with data protection regulations and prevent leakage.

Confirm that:

  • The endUser.ssn data is handled securely and only used where necessary.
  • No PII is logged or exposed in test outputs or logs.
  • Proper anonymization or obfuscation is applied if required (a hypothetical masking sketch follows below).
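
A minimal masking sketch, purely illustrative and not part of the PR:

// Hypothetical helper: mask all but the last two digits of an SSN before it
// appears in any log or error message emitted by the tests.
export function maskSsn(ssn) {
    if (typeof ssn !== 'string' || ssn.length < 3) {
        return '***';
    }
    return '*'.repeat(ssn.length - 2) + ssn.slice(-2);
}

// e.g. console.error(`create dialog failed for end user ${maskSsn(endUser.ssn)}`);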

Resolved review threads:
  • tests/k6/tests/scripts/generate_tokens.sh (5 threads, outdated)
  • tests/k6/tests/enduser/performance/simple-search.js (2 threads, outdated)
  • tests/k6/tests/serviceowner/performance/create-dialog.js (2 threads, outdated)
Co-authored-by: Ole Jørgen Skogstad <skogstad@softis.net>
coderabbitai bot left a comment

Actionable comments posted: 4

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between bb982c2 and 9f55f7f.

📒 Files selected for processing (1)
  • tests/k6/tests/scripts/generate_tokens.sh (1 hunks)
🧰 Additional context used
🪛 Shellcheck
tests/k6/tests/scripts/generate_tokens.sh

[warning] 88-88: This $? refers to echo/printf, not a previous command. Assign to variable to avoid it being overwritten.

(SC2320)

🔇 Additional comments (1)
tests/k6/tests/scripts/generate_tokens.sh (1)

9-45: LGTM! Well-structured argument validation.

The implementation includes clear usage information, proper argument validation, and appropriate environment mapping.

Review threads:
  • tests/k6/tests/scripts/generate_tokens.sh (4 threads, 2 outdated)
@dagfinno dagfinno changed the title Performance tests for create dialog and search Test: Performance tests for create dialog and search Oct 23, 2024
@dagfinno dagfinno changed the title Test: Performance tests for create dialog and search test(load-test): Performance tests for create dialog and search Oct 23, 2024
coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Outside diff range and nitpick comments (2)
tests/k6/tests/serviceowner/performance/createremove-no-delay.js (2)

Line range hint 13-28: Enhance error handling for performance metrics.

The current implementation could benefit from additional error handling and metrics collection for performance analysis.

Consider adding metrics tracking:

import { Counter } from 'k6/metrics';

const errors = new Counter('errors');

// In your test function
export default function(paramsWithToken) {
    let dialogId = null;
    describe('create dialog', () => {
        let r = postSO('dialogs', dialogToInsert(), paramsWithToken);   
        if (r.status !== 201) {
            errors.add(1);
            console.error(`Failed to create dialog: ${r.status} ${r.body}`);
        }
        expect(r.status, 'response status').to.equal(201);
        dialogId = r.json();
    });

    describe('remove dialog', () => {
        if (dialogId) {
            let r = purgeSO('dialogs/' + dialogId, paramsWithToken);   
            if (r.status !== 204) {
                errors.add(1);
                console.error(`Failed to remove dialog: ${r.status} ${r.body}`);
            }
            expect(r.status, 'response status').to.equal(204);
        }
    });
}

Additional load test scenarios needed for comprehensive performance testing

The current performance tests only implement basic scenarios with configurable VUs (virtual users) and iterations, but lack specific load patterns. The test files show:

  • createremove-no-delay.js: Basic create/remove operations
  • create-dialog.js: Single dialog creation with random users
  • simple-search.js: Basic search operation

The codebase needs the following load test scenarios (a minimal k6 stages sketch follows this list):

  • Stress test with ramping VUs using k6's stages configuration
  • Spike test configuration for sudden load bursts
  • Soak test for extended duration testing
  • Scenarios combining multiple operations (create, search, remove) under various load patterns
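
By way of illustration, a hedged sketch of how such profiles could be expressed with k6's stages option; the profile names, stage values, and placeholder URL are assumptions, not part of this PR:

import http from 'k6/http';

// Select a profile with e.g. `k6 run -e PROFILE=spike script.js`; the stage
// values below are illustrative starting points, not tuned targets.
const profiles = {
    stress: [
        { duration: '2m', target: 10 },
        { duration: '5m', target: 50 },
        { duration: '2m', target: 100 },
        { duration: '2m', target: 0 },
    ],
    spike: [
        { duration: '30s', target: 5 },
        { duration: '30s', target: 150 },
        { duration: '1m', target: 5 },
    ],
    soak: [
        { duration: '2m', target: 20 },
        { duration: '1h', target: 20 },
        { duration: '2m', target: 0 },
    ],
};

export const options = {
    stages: profiles[__ENV.PROFILE || 'stress'],
};

export default function () {
    // Placeholder request; real scenarios would reuse the create/search/remove
    // helpers introduced in this PR.
    http.get('https://example.invalid/health');
}
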
🔗 Analysis chain

Line range hint 1-28: Verify test coverage across different load scenarios.

The current test focuses on basic create/remove operations. For comprehensive performance testing, consider different load patterns.

Let's check if there are other performance test variations:

Consider adding the following test scenarios if not already present:

  1. Stress test with gradually increasing load
  2. Spike test for sudden bursts of traffic
  3. Soak test for long-duration stability

Let me check the content of these performance test files to understand the existing load patterns.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for different performance test scenarios
# Expected: Find variations of load tests (e.g., spike tests, stress tests)

echo "Searching for performance test files..."
fd . tests/k6/tests/serviceowner/performance -e js

echo "Checking for load test configurations..."
rg -l "options.*vus|stages" tests/k6/tests/

Length of output: 509


Script:

#!/bin/bash
# Check the content of performance test files to understand load patterns
echo "Content of create-dialog.js:"
cat tests/k6/tests/serviceowner/performance/create-dialog.js

echo -e "\nContent of simple-search.js:"
cat tests/k6/tests/enduser/performance/simple-search.js

Length of output: 3723

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between a09f2fa and 22d55b8.

📒 Files selected for processing (4)
  • tests/k6/common/request.js (2 hunks)
  • tests/k6/tests/enduser/performance/simple-search.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/createremove-no-delay.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/k6/common/request.js
  • tests/k6/tests/serviceowner/performance/create-dialog.js
🧰 Additional context used
🔇 Additional comments (2)
tests/k6/tests/serviceowner/performance/createremove-no-delay.js (1)

7-9: LGTM! Property name follows JavaScript conventions.

The change from 'Headers' to 'headers' aligns with JavaScript naming conventions and maintains consistency.

tests/k6/tests/enduser/performance/simple-search.js (1)

20-27: LGTM! Clean implementation of test scenarios.

The function properly handles both single and multiple user scenarios, with appropriate user selection logic.

coderabbitai bot left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 22d55b8 and a62d4aa.

📒 Files selected for processing (1)
  • tests/k6/tests/scripts/generate_tokens.sh (1 hunks)
🧰 Additional context used
🪛 Shellcheck
tests/k6/tests/scripts/generate_tokens.sh

[warning] 67-67: This $? refers to echo/printf, not a previous command. Assign to variable to avoid it being overwritten.

(SC2320)


[warning] 89-89: This $? refers to echo/printf, not a previous command. Assign to variable to avoid it being overwritten.

(SC2320)

🔇 Additional comments (3)
tests/k6/tests/scripts/generate_tokens.sh (3)

9-45: LGTM! Well-structured argument validation and environment mapping.

The implementation includes thorough argument validation, clear usage information, and proper environment mapping with appropriate error handling.


47-50: LGTM! File paths are correctly constructed.


1-7: 🛠️ Refactor suggestion

Add script safety measures and secure cleanup.

While the environment variable checks are good, consider adding these safety measures:

  1. Enable strict error handling
  2. Add secure cleanup of sensitive files
 #!/bin/bash
+set -euo pipefail
+
+# Cleanup function to securely remove sensitive files
+cleanup() {
+    rm -f "$serviceowner_tokenfile" "$enduser_tokenfile"
+}
+
+# Set up trap to ensure cleanup on exit
+trap cleanup EXIT INT TERM
 
 # Check if required environment variables are set

Likely invalid or redundant comment.

Review threads:
  • tests/k6/tests/scripts/generate_tokens.sh (2 threads)
coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (1)
tests/k6/tests/enduser/performance/simple-search.js (1)

1-6: Consider version pinning and sensitive data handling improvements.

  1. Pin the k6-utils version more specifically to ensure test stability:
-import { randomItem } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';
+import { randomItem } from 'https://jslib.k6.io/k6-utils/1.2.0/index.min.js';
  2. Consider moving the sensitive data path to an environment variable or a configuration file for better security and flexibility (see the sketch below).
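
A minimal sketch of the environment-variable approach; ENDUSERS_CSV is a hypothetical variable name and the fallback file name is taken from this PR:

import { SharedArray } from 'k6/data';
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';

// Pass the path at runtime, e.g.
//   k6 run -e ENDUSERS_CSV=/secure/path/endusers.csv simple-search.js
// and fall back to the file name used in this PR when the variable is unset.
const filename = __ENV.ENDUSERS_CSV || '.endusers-with-tokens.csv';

const endUsers = new SharedArray('endUsers', () =>
    papaparse.parse(open(filename), { header: true, skipEmptyLines: true }).data
);
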
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between a62d4aa and ff4b5e8.

📒 Files selected for processing (1)
  • tests/k6/tests/enduser/performance/simple-search.js (1 hunks)
🧰 Additional context used
🔇 Additional comments (2)
tests/k6/tests/enduser/performance/simple-search.js (2)

33-40: LGTM! Well-structured test execution logic.

The code properly handles both single and multiple virtual user scenarios, with good randomization for multiple VUs.


25-31: ⚠️ Potential issue

Define specific performance thresholds for meaningful test results.

The threshold configuration is still using empty arrays. Define specific thresholds to ensure the performance tests have clear pass/fail criteria:

 export let options = {
     summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)', 'count'],
     thresholds: {
-        'http_req_duration{name:simple search}': [],
-        'http_reqs{name:simple search}': [],
+        'http_req_duration{name:simple search}': [
+            'p(95) < 2000',  // 95% of requests should complete within 2s
+            'max < 5000'     // No request should take longer than 5s
+        ],
+        'http_reqs{name:simple search}': [
+            'count > 0',     // Ensure requests are being made
+            'rate > 0.95'    // Requests should arrive at more than 0.95 per second
+        ],
     },
 };

Likely invalid or redundant comment.

@dagfinno dagfinno merged commit b3a01e8 into main Oct 31, 2024
23 checks passed
@dagfinno dagfinno deleted the performance/create_and_search_dialog branch October 31, 2024 10:14