test: add more performance tests #1373

Merged
merged 43 commits into from
Nov 6, 2024

Conversation

@dagfinno dagfinno (Contributor) commented Nov 1, 2024

Description

Added more performance tests and made some improvements to the GitHub workflow for performance tests.

Related Issue(s)

Verification

  • Your code builds clean without any errors or warnings
  • Manual testing done (required)
  • Relevant automated test added (if you find this hard, leave it and we'll help out)

Documentation

  • Documentation is updated (either in the docs directory, Altinnpedia, or a separate linked PR in altinn-studio-docs, if applicable)

Summary by CodeRabbit

  • New Features

    • Introduced performance testing scripts for end-user search and GraphQL search functionalities.
    • Added a tagging system for performance tests with a new required input parameter.
    • Enhanced dialog creation and removal performance tests with improved data handling.
    • Implemented a new function for generating default thresholds for performance tests.
    • Added new performance testing scenarios for creating dialogs and executing searches.
  • Bug Fixes

    • Updated performance test configurations to ensure correct environment settings.
  • Refactor

    • Reorganized performance test scripts to utilize newly created utility functions for better maintainability.
    • Removed outdated performance testing scripts to streamline the testing framework.
  • Documentation

    • Updated workflow configurations for clarity and improved execution context.

@dagfinno dagfinno requested review from a team as code owners November 1, 2024 12:19
coderabbitai bot (Contributor) commented Nov 1, 2024

📝 Walkthrough

The pull request introduces significant updates to K6 performance testing workflows and scripts. The workflow configuration is modified to change default parameters and add new input requirements. Several new files are created for testing GraphQL and end-user functionalities, while existing scripts are updated to improve data handling and error management. Additionally, the removal of outdated performance scripts is noted. Overall, these changes enhance the structure and flexibility of performance testing across various scenarios.

Changes

File Path / Change Summary

  • .github/workflows/dispatch-k6-performance.yml
    - Updated environment default value from 'staging' to 'yt01'.
    - Added required input parameter tag with default value 'Performance test'.
    - Changed vus default value from 10 to 1.
    - Updated testSuitePath options and removed old options.
    - Updated run-name to include dynamic inputs.
  • tests/k6/common/config.js
    - Added new base URL constants for various environments.
    - Updated baseUrls.v1 to use these constants and added a new graphql property.
    - Introduced baseUrlGraphql constant.
  • tests/k6/common/request.js
    - Added import for baseUrlGraphql.
    - Introduced new function postGQ for making GraphQL requests.
  • tests/k6/common/testimports.js
    - Added new export for postGQ from ./request.js.
  • tests/k6/tests/enduser/performance/simple-search.js
    - Removed the entire performance testing script.
  • tests/k6/tests/performancetest_data/01-create-dialog.js
    - Introduced a new function for generating dialog objects for performance testing, including validation and cleanup logic.
  • tests/k6/tests/performancetest_data/graphql-search.js
    - Added new function getGraphqlParty for constructing GraphQL query strings.
  • tests/k6/tests/scripts/generate_tokens.sh
    - Modified handling of the API_ENVIRONMENT variable to assign "yt01" for the "performance" case.
  • tests/k6/tests/serviceowner/performance/create-dialog.js
    - Updated imports and restructured data handling.
    - Replaced the thresholds property with a call to getDefaultThresholds.
    - Removed the previous implementation of the createDialog function.
  • tests/k6/tests/serviceowner/performance/create-remove-dialog.js
    - Enhanced imports and updated the performance testing framework to include dynamic thresholds and conditional execution logic.
  • .github/workflows/workflow-run-k6-performance.yml
    - Added environment property to the k6-test job to utilize the specified environment directly.
  • tests/k6/tests/enduser/performance/enduser-search.js
    - Introduced a new performance testing script for end-user search functionality with dynamic thresholds.
  • tests/k6/tests/graphql/performance/graphql-search.js
    - Added a new performance testing script for GraphQL search functionality with flexible execution paths.
  • tests/k6/tests/performancetest_common/createDialog.js
    - Introduced createDialog and createAndRemoveDialog functions for dialog creation and deletion tests.
  • tests/k6/tests/performancetest_common/getDefaultThresholds.js
    - Added a new function to create a default thresholds configuration for K6 performance tests.
  • tests/k6/tests/performancetest_common/readTestdata.js
    - Introduced functionality for loading and parsing CSV data files for service owners and end users.
  • tests/k6/tests/performancetest_common/simpleSearch.js
    - Introduced functions for testing dialog retrieval and search operations, including error handling and response validation.
  • tests/k6/tests/scenarios/performance/create-dialog-and-search.js
    - Introduced performance testing scenarios for creating dialogs and executing searches with structured options and metrics.

Possibly related PRs


@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 26

🧹 Outside diff range and nitpick comments (7)
tests/k6/tests/enduser/performance/graphql-search.js (3)

8-8: Consider using path.join for file path construction.

The hardcoded relative path could be fragile if the file structure changes. Consider using a more robust path construction method.

-const filenameEndusers = '../../performancetest_data/.endusers-with-tokens.csv';
+const filenameEndusers = path.join(__dirname, '../../performancetest_data/.endusers-with-tokens.csv');

36-43: Add JSDoc documentation for the default function.

The user selection logic should be documented to make the behavior clear for other developers.

+/**
+ * Main test function that executes GraphQL search tests.
+ * For single VU/iteration tests, uses the first user from the dataset.
+ * For multiple VUs/iterations, randomly selects users to simulate concurrent access.
+ */
 export default function() {

53-53: Remove commented out code.

Dead code should be removed to maintain code cleanliness.

-        //let r = ('dialogs' + defaultFilter, paramsWithToken);
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (2)

7-8: Consider making file paths more robust

The relative paths to CSV files might be fragile if the test is run from different working directories. Consider using __dirname or a similar mechanism to ensure path resolution is consistent.

-const filenameServiceowners = '../../performancetest_data/.serviceowners-with-tokens.csv';
-const filenameEndusers = `../../performancetest_data/endusers-${__ENV.API_ENVIRONMENT}.csv`;
+const filenameServiceowners = `${__dirname}/../../performancetest_data/.serviceowners-with-tokens.csv`;
+const filenameEndusers = `${__dirname}/../../performancetest_data/endusers-${__ENV.API_ENVIRONMENT}.csv`;

30-37: Verify array access safety

Direct array access without bounds checking could fail if the CSV files are empty or malformed. While this aligns with the "fail fast" approach, consider adding a debug log for easier troubleshooting:

 export default function() {
+    console.debug(`Loaded ${serviceOwners.length} service owners and ${endUsers.length} end users`);
     if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
       createDialog(serviceOwners[0], endUsers[0]);
     }
tests/k6/tests/scripts/generate_tokens.sh (2)

Line range hint 64-65: Consider making the token generator URL configurable.

The token generator URLs are currently hardcoded. Consider moving these to environment variables or a configuration file to:

  • Improve flexibility for different environments
  • Follow security best practices by not hardcoding service URLs
  • Enable easier updates if the service endpoint changes

Example implementation:

+ # Read token generator URL from environment variable with a default fallback
+ TOKEN_GENERATOR_BASE_URL=${TOKEN_GENERATOR_BASE_URL:-"https://altinn-testtools-token-generator.azurewebsites.net"}
+
- url="https://altinn-testtools-token-generator.azurewebsites.net/api/GetEnterpriseToken?org=$org&env=$env&scopes=$scopes&orgno=$orgno&ttl=3600"
+ url="$TOKEN_GENERATOR_BASE_URL/api/GetEnterpriseToken?org=$org&env=$env&scopes=$scopes&orgno=$orgno&ttl=3600"

Also applies to: 89-90


Line range hint 64-65: Consider making token TTL configurable.

The token Time-To-Live (TTL) is hardcoded to 3600 seconds. For performance testing scenarios, different durations might be needed.

Example implementation:

+ # Default TTL of 1 hour if not specified
+ TOKEN_TTL=${TOKEN_TTL:-3600}
+
- url="$TOKEN_GENERATOR_BASE_URL/api/GetEnterpriseToken?org=$org&env=$env&scopes=$scopes&orgno=$orgno&ttl=3600"
+ url="$TOKEN_GENERATOR_BASE_URL/api/GetEnterpriseToken?org=$org&env=$env&scopes=$scopes&orgno=$orgno&ttl=$TOKEN_TTL"

Also applies to: 89-90

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between a1df0ff and fb08882.

⛔ Files ignored due to path filters (2)
  • tests/k6/tests/performancetest_data/endusers-yt01.csv is excluded by !**/*.csv
  • tests/k6/tests/performancetest_data/serviceowners-yt01.csv is excluded by !**/*.csv
📒 Files selected for processing (12)
  • .github/workflows/dispatch-k6-performance.yml (3 hunks)
  • tests/k6/common/config.js (2 hunks)
  • tests/k6/common/request.js (2 hunks)
  • tests/k6/common/testimports.js (1 hunks)
  • tests/k6/tests/enduser/performance/graphql-search.js (1 hunks)
  • tests/k6/tests/enduser/performance/simple-search.js (2 hunks)
  • tests/k6/tests/performancetest_data/01-create-dialog.js (1 hunks)
  • tests/k6/tests/performancetest_data/graphql-search.js (1 hunks)
  • tests/k6/tests/scripts/generate_tokens.sh (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-dialog-and-search.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-dialog.js (2 hunks)
  • tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
🧰 Additional context used
📓 Learnings (1)
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1331
File: tests/k6/tests/serviceowner/performance/create-dialog.js:28-33
Timestamp: 2024-10-23T11:32:22.074Z
Learning: In k6 performance tests, when data arrays such as `serviceOwners` and `endUsers` might be empty, prefer to let the test crash rather than adding checks to log error messages continuously.
🔇 Additional comments (16)
tests/k6/common/testimports.js (2)

17-18: LGTM! Clean addition of the GraphQL request function export.

The new export follows the established naming convention and is properly grouped with other request-related functions.


18-18: Verify GraphQL request implementation and usage.

Let's verify the implementation of postGQ and its usage in performance tests.

✅ Verification successful

Let me gather more information about the GraphQL implementation and its usage pattern.


Let me check one final aspect - the actual GraphQL query structure from the data file.


GraphQL implementation and usage verified successfully

The verification shows a well-structured GraphQL implementation:

  • postGQ function is properly implemented in request.js with correct content-type headers
  • GraphQL endpoint is correctly configured via baseUrlGraphql
  • The query structure in graphql-search.js follows GraphQL best practices, fetching dialog data with proper error handling
  • Test expectations validate both HTTP status and JSON response structure
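For context, a helper like postGQ in a k6 setup typically boils down to a JSON POST against the GraphQL endpoint. The sketch below only illustrates that shape, assuming k6's http module and the baseUrlGraphql constant mentioned above; the exact signature lives in request.js and may differ.

import http from 'k6/http';
import { baseUrlGraphql } from './config.js';

// Minimal sketch (assumed, not copied from request.js): wrap a GraphQL query
// in a JSON body and POST it with the correct content type.
export function postGQ(query, paramsWithToken) {
    const params = Object.assign({}, paramsWithToken);
    params.headers = Object.assign({}, params.headers, { 'Content-Type': 'application/json' });
    return http.post(baseUrlGraphql, JSON.stringify({ query: query }), params);
}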
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the implementation and usage of postGQ function

# Test 1: Check the implementation in request.js
echo "Checking postGQ implementation:"
rg -A 5 "postGQ.*=.*function" ./tests/k6/common/request.js

# Test 2: Find where postGQ is being used
echo -e "\nChecking postGQ usage:"
rg "postGQ" --type js

# Test 3: Verify GraphQL endpoint configuration
echo -e "\nChecking GraphQL endpoint configuration:"
rg "baseUrlGraphql" ./tests/k6/common/

Length of output: 1050


Script:

#!/bin/bash
# Check the GraphQL query structure and implementation details

# Test 1: Check the GraphQL query structure
echo "Checking GraphQL query structure in graphql-search.js:"
rg -A 10 "getGraphqlParty" ./tests/k6/tests/enduser/performance/graphql-search.js

# Test 2: Check the configuration setup
echo -e "\nChecking GraphQL configuration:"
rg -A 5 'baseUrls\[.*\]\["graphql"\]' ./tests/k6/common/config.js

# Test 3: Check error handling in postGQ
echo -e "\nChecking error handling in postGQ:"
rg -A 10 "function postGQ" ./tests/k6/common/request.js

# Test 4: Check test setup and expectations
echo -e "\nChecking test setup and expectations:"
rg -B 2 -A 5 "postGQ.*paramsWithToken" ./tests/k6/tests/enduser/performance/graphql-search.js

Length of output: 1984


Script:

#!/bin/bash
# Check the GraphQL query definition

# Test 1: Check the GraphQL query definition
echo "Checking GraphQL query definition:"
rg -A 10 "export.*getGraphqlParty" ./tests/k6/tests/performancetest_data/graphql-search.js

# Test 2: Check for any other GraphQL query patterns
echo -e "\nChecking other GraphQL patterns:"
rg "query.*{" ./tests/k6/tests/performancetest_data/graphql-search.js

Length of output: 763

tests/k6/tests/serviceowner/performance/create-dialog.js (2)

4-4: LGTM: Import path aligns with performance test data structure.

The updated import path maintains consistency with other imports and properly separates performance test data.


45-45: Verify the updated dialogToInsert parameter usage across test files.

The addition of endUser.resource parameter enhances the test coverage. However, we should ensure consistent usage across all test files.

Let's verify the usage of dialogToInsert across the codebase:

✅ Verification successful

Let me gather more information about the function definition since the previous ast-grep pattern didn't yield results.


Let's try one more search pattern to find the function definition:


Let's check the actual function definition in the test data files:


The updated dialogToInsert parameter usage is correct and consistent

The verification shows that:

  1. The performance test files correctly use both parameters (endUser.ssn, endUser.resource)
  2. Other test files use the default parameter which internally sets serviceResource and uses the default SSN, maintaining backward compatibility
  3. Both implementations (performancetest_data/01-create-dialog.js and testdata/01-create-dialog.js) handle the parameters appropriately
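With that verified, a typical call site in the performance tests reads roughly as follows. This is a sketch; the real call lives in the create-dialog performance scripts and may differ in detail.

import dialogToInsert from '../../performancetest_data/01-create-dialog.js';

// The end user's ssn becomes the dialog party and the resource becomes the serviceResource.
const dialogPayload = dialogToInsert(endUser.ssn, endUser.resource);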
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any inconsistent usage of dialogToInsert function
# Expected: All calls should use both ssn and resource parameters

# Search for dialogToInsert function calls
echo "Checking dialogToInsert usage patterns:"
rg -A 1 "dialogToInsert\(" tests/

# Search for the function definition to verify parameter expectations
echo -e "\nChecking dialogToInsert function definition:"
ast-grep --pattern 'export default function dialogToInsert($_) {
  $$$
}'

Length of output: 5870


Script:

#!/bin/bash
# Search for dialogToInsert function definition with different patterns
echo "Searching for dialogToInsert function definition:"
rg -A 5 "function dialogToInsert" tests/

# Search for the file that might export dialogToInsert
echo -e "\nSearching for files that might export dialogToInsert:"
rg "export.*dialogToInsert" tests/

Length of output: 311


Script:

#!/bin/bash
# Search for files containing dialogToInsert
echo "Finding files that might contain dialogToInsert:"
fd dialogToInsert tests/

# Search for any import statements related to dialogToInsert
echo -e "\nSearching for dialogToInsert imports:"
rg "import.*dialogToInsert" tests/

# Search for the function with a broader pattern
echo -e "\nSearching with broader pattern:"
rg -l "dialogToInsert" tests/

Length of output: 3773


Script:

#!/bin/bash
# Check both possible locations of the function definition
echo "Checking performancetest_data definition:"
cat tests/k6/tests/performancetest_data/01-create-dialog.js

echo -e "\nChecking testdata definition:"
cat tests/k6/tests/serviceowner/testdata/01-create-dialog.js

Length of output: 23722

tests/k6/tests/performancetest_data/graphql-search.js (1)

1-70: Consider performance testing best practices.

As this query is used in performance tests, consider:

  1. Adding directives for query caching/optimization
  2. Including error scenarios in performance tests
  3. Documenting expected performance characteristics

Let's verify the query usage in performance tests:

Consider adding GraphQL specific performance optimizations:

  1. Add @cached directive for appropriate fields
  2. Consider implementing query batching for performance tests
  3. Document expected query complexity and performance characteristics in comments
.github/workflows/dispatch-k6-performance.yml (4)

29-33: LGTM! Well-structured tag parameter addition.

The new tag parameter is well-defined with appropriate defaults and description, improving test run identification and filtering capabilities.


55-55: LGTM! Improved run name format.

The updated run-name format provides better visibility and traceability by including the tag, VUs count, and duration.


13-13: Verify the environment change impact.

The default environment has been changed from 'staging' to 'yt01', and the environment options have been updated. Please ensure:

  1. All team members are aware of this change
  2. CI/CD pipelines and documentation are updated accordingly
  3. 'yt01' is a valid and accessible environment
#!/bin/bash
# Search for any references to the old environment names in workflow files and docs
echo "Searching for references to 'staging' environment..."
rg -l 'staging' --type yaml --type md

echo "Checking other workflow files for environment dependencies..."
fd -e yml -e yaml . .github/workflows/ --exec grep -l 'environment:'

Also applies to: 18-18


51-51: Verify new test file implementations.

Two new test paths have been added. Please ensure:

  1. The test files exist and are properly implemented
  2. They follow the same patterns and best practices as existing tests

Also applies to: 53-53

✅ Verification successful

Let me try a different approach to verify the existence and content of these test files.


Let me check the content of these files to verify their implementation.


Test files are properly implemented and follow existing patterns

Both new test files exist and are correctly implemented following the established patterns:

  1. create-remove-dialog.js extends the existing create-dialog.js pattern with additional cleanup functionality
  2. graphql-search.js follows similar k6 test structure with proper imports, options configuration, and test scenarios

Key verification points:

  • Both files have correct k6 imports and exports
  • They use shared test utilities and common imports
  • Follow consistent error handling and authentication patterns
  • Include proper performance metrics and thresholds
  • Maintain consistent code structure with other performance tests
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if the new test files exist and verify their content
echo "Checking for new test files..."
for file in "tests/k6/tests/serviceowner/performance/create-remove-dialog.js" "tests/k6/tests/enduser/performance/graphql-search.js"; do
    if [ -f "$file" ]; then
        echo "Found $file"
        # Look for essential k6 imports and exports
        rg "import {.*} from 'k6'" "$file"
        rg "export (default|const) " "$file"
    else
        echo "Missing file: $file"
    fi
done

Length of output: 1690


Script:

#!/bin/bash
# Check if the files exist
echo "Checking create-remove-dialog.js..."
ls -l tests/k6/tests/serviceowner/performance/create-remove-dialog.js 2>/dev/null

echo -e "\nChecking graphql-search.js..."
ls -l tests/k6/tests/enduser/performance/graphql-search.js 2>/dev/null

# If files exist, check their content
echo -e "\nChecking k6 imports and exports..."
rg -l "import.*from 'k6'" tests/k6/tests/serviceowner/performance/create-remove-dialog.js tests/k6/tests/enduser/performance/graphql-search.js 2>/dev/null

# Look at similar existing k6 test files for pattern comparison
echo -e "\nListing existing k6 test files for reference..."
fd ".*\.js$" tests/k6/tests/ -t f

Length of output: 2391


Script:

#!/bin/bash
# Check content of create-remove-dialog.js
echo "Content of create-remove-dialog.js:"
cat tests/k6/tests/serviceowner/performance/create-remove-dialog.js

echo -e "\nContent of graphql-search.js:"
cat tests/k6/tests/enduser/performance/graphql-search.js

# Check if they follow similar patterns as existing performance tests
echo -e "\nComparing with existing performance test structure:"
cat tests/k6/tests/serviceowner/performance/create-dialog.js

Length of output: 6891

tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)

10-17: LGTM! Efficient data loading implementation

The use of SharedArray with CSV parsing is well-implemented and follows the established pattern of letting tests fail fast on empty data rather than adding continuous checks.
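For reference, the SharedArray-plus-papaparse pattern praised here generally looks like the sketch below (the file name is taken from the surrounding diff; the rest is illustrative). SharedArray parses the CSV once in the init context and shares the result across all VUs instead of re-reading it per VU.

import { SharedArray } from 'k6/data';
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';

// Parse once during init; every VU reads from the same shared array.
const serviceOwners = new SharedArray('serviceOwners', function () {
    return papaparse.parse(open('../../performancetest_data/.serviceowners-with-tokens.csv'),
        { header: true, skipEmptyLines: true }).data;
});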

tests/k6/common/config.js (1)

19-26: ⚠️ Potential issue

Verify the production GraphQL endpoint URL pattern.

The production GraphQL endpoint URL follows a different pattern than other environments:

  • Other environments: .../dialogporten/graphql
  • Production: .../dialogporten/api/v1/graphql

This inconsistency might cause issues in production.

✅ Verification successful

Let me gather more specific information about the production endpoint pattern by checking the backend configuration.


Let me check the base URL configuration and routing in the backend code to understand if this pattern is intentional.


Based on the evidence gathered, I can now provide a final response:

The production GraphQL endpoint URL pattern is correct and intentional

The /api/v1/ segment in the production URL is the correct pattern as it follows API versioning conventions. The other environments (test, staging, etc.) are development/testing environments where the simplified URL pattern is acceptable.

Evidence:

  • The backend configuration in appsettings.prod.json shows the base URI is configured as https://platform.altinn.no/dialogporten
  • The /api/v1/ segment is consistently used in production for versioned APIs, as seen in other production URLs (e.g., platform.altinn.no/authentication/api/v1/)
  • The GraphQL endpoint is properly mapped in the backend code through standard ASP.NET Core routing
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify GraphQL endpoint patterns across the codebase
# Expected: Find any other GraphQL endpoint references to confirm the correct pattern

# Search for GraphQL endpoint patterns
echo "Searching for GraphQL endpoint patterns in the codebase..."
rg -i "graphql" --type js --type ts --type json

Length of output: 3595


Script:

#!/bin/bash
# Check the GraphQL endpoint configuration in backend
cat src/Digdir.Domain.Dialogporten.GraphQL/appsettings.prod.json

Length of output: 2384


Script:

#!/bin/bash
# Search for API routing or endpoint configuration in the backend
rg -g "*.cs" -i "graphql|endpoint.*config" src/Digdir.Domain.Dialogporten.GraphQL/

Length of output: 16283

tests/k6/tests/scripts/generate_tokens.sh (1)

31-31: LGTM! Environment mapping for 'yt01' added.

The addition of the "yt01" environment case aligns with the workflow changes and ensures consistent environment handling across the testing framework.

tests/k6/common/request.js (2)

2-2: LGTM! Import follows established patterns.

The addition of baseUrlGraphql maintains consistency with existing base URL imports.


129-133: Verify GraphQL query usage patterns across tests.

Let's ensure the new GraphQL functionality is being used consistently across test files.

✅ Verification successful

Let me check the actual usage patterns in these files to ensure consistency.


GraphQL implementation is properly integrated and consistently used

The verification shows:

  • postGQ is correctly imported and used in the performance test
  • The helper is properly exported in testimports.js
  • GraphQL endpoints are well-configured across environments in config.js
  • No direct GraphQL endpoint usage bypassing the postGQ helper was found
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for GraphQL query usage patterns
echo "Checking GraphQL query usage patterns..."
rg -l "postGQ" tests/k6/

# Check for potential direct GraphQL endpoint usage that should use postGQ
echo "Checking for direct GraphQL endpoint usage..."
rg -l "graphql" tests/k6/ | grep -v "request.js"

Length of output: 453


Script:

#!/bin/bash
# Check the implementation in graphql-search.js
echo "=== GraphQL Search Implementation ==="
rg "postGQ" tests/k6/tests/enduser/performance/graphql-search.js -B 2 -A 2

# Check how it's imported/used in testimports.js
echo -e "\n=== Test Imports Usage ==="
rg "postGQ" tests/k6/common/testimports.js -B 2 -A 2

# Check config.js for any GraphQL endpoint configurations
echo -e "\n=== GraphQL Configuration ==="
rg "graphql" tests/k6/common/config.js -B 2 -A 2

Length of output: 1796

tests/k6/tests/serviceowner/performance/create-dialog-and-search.js (1)

74-79: ⚠️ Potential issue

Fix incorrect access to VUs and iterations in options object

Similar to the createDialogs function, the simpleSearches function is accessing options.vus and options.iterations which are undefined at the global level when multiple scenarios are used. This can cause the condition to fail unexpectedly.

Apply this diff to correct the access:

 export function simpleSearches() {
-  if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
+  const scenarioOptions = options.scenarios.simple_search;
+  if (
+    (scenarioOptions.vus === undefined || scenarioOptions.vus === 1) &&
+    (scenarioOptions.iterations === undefined || scenarioOptions.iterations === 1)
+  ) {
     simpleSearch(endUsers[0]);
   } else {
     simpleSearch(randomItem(endUsers));
   }
 }
tests/k6/tests/enduser/performance/simple-search.js (1)

125-130: 🛠️ Refactor suggestion

Add Error Handling in getUrl Function

The getUrl function currently assumes that all requests will be successful and return valid JSON. For more robust code, include error handling for unexpected responses.

Enhance the function as follows:

export function getUrl(url, paramsWithToken) {
    let r = getEU(url, paramsWithToken);
+   if (r.status !== 200) {
+       console.error(`Request to ${url} failed with status ${r.status}`);
+       return;
+   }
    expectStatusFor(r).to.equal(200);
    expect(r, 'response').to.have.validJsonBody();
    return r;
}

This addition logs an error message when the response status is not 200 and prevents further processing on invalid responses.

Likely invalid or redundant comment.

Resolved review threads (outdated):
  • tests/k6/tests/enduser/performance/graphql-search.js (2 threads)
  • tests/k6/tests/enduser/performance/simple-search.js (5 threads)
@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between fb08882 and cb037e4.

📒 Files selected for processing (1)
  • tests/k6/common/config.js (2 hunks)
🔇 Additional comments (3)
tests/k6/common/config.js (3)

1-6: LGTM: Base URL constants are well-structured

The base URL constants follow security best practices by using HTTPS and maintain a consistent naming convention.


64-65: Add validation for GraphQL URLs

The previous review comment about missing GraphQL URL validation is still applicable. This validation is important for preventing runtime errors.


8-10: Consider versioning consistency for GraphQL path

While the endUser and serviceOwner paths include version information (api/v1/), the GraphQL path doesn't follow this pattern. Consider aligning the versioning strategy for consistency and future compatibility.
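As a concrete illustration of the validation requested above, a small guard in config.js could look like this sketch (the function name and where it is called from are assumptions, not part of the PR):

// Hypothetical fail-fast check for a missing or malformed base URL.
function assertValidUrl(name, url) {
    if (typeof url !== 'string' || !/^https?:\/\//.test(url)) {
        throw new Error(`Invalid base URL for ${name}: ${url}`);
    }
    return url;
}

// e.g. wrap the GraphQL base URL lookup for the selected environment:
// export const baseUrlGraphql = assertValidUrl('graphql', baseUrls.v1.graphql[__ENV.API_ENVIRONMENT]);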

Resolved review thread: tests/k6/common/config.js
@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)
tests/k6/tests/serviceowner/performance/create-dialog.js (1)

22-22: LGTM! Consider documenting threshold values.

The standardization of thresholds using getDefaultThresholds improves consistency. However, it would be helpful to document the actual threshold values for better visibility.

Consider adding a comment above this line documenting the specific threshold values being applied.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 74d678c and 8ff74c2.

📒 Files selected for processing (2)
  • tests/k6/tests/serviceowner/performance/create-dialog.js (2 hunks)
  • tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/k6/tests/serviceowner/performance/create-remove-dialog.js
🔇 Additional comments (3)
tests/k6/tests/serviceowner/performance/create-dialog.js (3)

6-7: LGTM! Import changes improve organization and reusability.

The changes enhance the test structure by:

  • Centralizing threshold management through getDefaultThresholds
  • Moving test data to a more appropriately named directory

43-43: Verify test data contains required resource field.

The addition of endUser.resource parameter needs verification to ensure all test data files contain this field.

#!/bin/bash
# Description: Verify resource field in test data
# Expected: Find resource field in CSV headers

echo "Checking resource field in enduser CSV files..."
head -n 1 tests/k6/tests/performancetest_data/endusers-*.csv

echo "Checking dialogToInsert function signature..."
ast-grep --pattern 'dialogToInsert = function($ssn, $resource)' tests/k6/tests/performancetest_data/01-create-dialog.js

12-18: LGTM! Verify the exported arrays usage.

The export of shared arrays improves modularity. However, let's verify if these exports are being used by other test files.

✅ Verification successful

Let me try a different search strategy to verify the usage of these exported arrays.


Exports are actively used across multiple test files

The exported arrays are indeed being used extensively:

  • serviceOwners is used in 3 performance test files
  • endUsers is used in 6 test files across performance, enduser, and common modules
  • The file itself is imported by 19 different test files

The export of these shared arrays is well-justified as they serve as common test data across multiple test scenarios.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if the exported arrays are used in other test files
# Expected: Find references to these exports in other test files

echo "Checking usage of exported arrays..."
rg -l "from.*create-dialog.*" tests/k6/tests/

Length of output: 618


Script:

#!/bin/bash
# Search for direct references to serviceOwners and endUsers arrays
echo "Searching for serviceOwners usage..."
rg "serviceOwners" tests/k6/tests/ -l

echo -e "\nSearching for endUsers usage..."
rg "endUsers" tests/k6/tests/ -l

# Also check for any imports from this file
echo -e "\nSearching for imports from create-dialog..."
rg "import.*from.*create-dialog" tests/k6/tests/ -l

Length of output: 1955

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
tests/k6/tests/performancetest_data/01-create-dialog.js (2)

1-1: Simplify the import statement.

The import statement can be simplified by removing the unnecessary {default as ...} syntax since it's already a default import.

-import {default as createDialogPayload} from "../serviceowner/testdata/01-create-dialog.js"
+import createDialogPayload from "../serviceowner/testdata/01-create-dialog.js"

43-54: Enhance URN handling and error handling.

While the input validation is good, there are a few improvements that could be made:

  1. Use template literals for URN construction
  2. Extract URN prefixes as constants
  3. Add error handling for createDialogPayload
+const URN_RESOURCE_PREFIX = 'urn:altinn:resource:';
+const URN_PERSON_PREFIX = 'urn:altinn:person:identifier-no:';

 export default function (endUser, resource) {
     if (!endUser?.match(/^\d{11}$/)) {
         throw new Error('endUser must be a 11-digit number');
     }
     if (!resource?.trim()) {
         throw new Error('resource is required');
     }
-    let payload = createDialogPayload();
-    payload.serviceResource = "urn:altinn:resource:" +resource;
-    payload.party = "urn:altinn:person:identifier-no:" + endUser;
+    
+    let payload;
+    try {
+        payload = createDialogPayload();
+    } catch (error) {
+        throw new Error(`Failed to create dialog payload: ${error.message}`);
+    }
+    
+    payload.serviceResource = `${URN_RESOURCE_PREFIX}${resource.trim()}`;
+    payload.party = `${URN_PERSON_PREFIX}${endUser}`;
     return cleanUp(payload);
 }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 8ff74c2 and 423962d.

📒 Files selected for processing (3)
  • tests/k6/tests/performancetest_common/common.js (1 hunks)
  • tests/k6/tests/performancetest_data/01-create-dialog.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-dialog.js (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/k6/tests/performancetest_common/common.js
  • tests/k6/tests/serviceowner/performance/create-dialog.js
🔇 Additional comments (1)
tests/k6/tests/performancetest_data/01-create-dialog.js (1)

5-34: LGTM! Clean implementation with proper error handling.

The cleanUp function is well-implemented with:

  • Input validation
  • Immutable operations using spread operator
  • Proper handling of optional fields using optional chaining
  • Clear activity transformation logic

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 10

🧹 Outside diff range and nitpick comments (15)
tests/k6/tests/performancetest_common/getDefaultThresholds.js (3)

1-8: Enhance documentation with examples and threshold explanations.

The documentation is well-structured but could be more helpful with:

  • Usage examples showing typical counter/label combinations
  • Explanation of why empty threshold arrays are used (they make labels visible in k6 reports)
  • Description of the default http_req_failed threshold

Here's a suggested improvement:

 /**
  * Creates default thresholds configuration for K6 tests.
+ * Empty threshold arrays are used to make labels visible in k6 reports without enforcing specific thresholds.
  * @param {string[]} counters - Array of counter names
  * @param {string[]} labels - Array of label names
  * @returns {Object} Threshold configuration object
  * @throws {Error} If inputs are invalid
+ * 
+ * @example
+ * // Returns thresholds for duration and request counts for 'search' operations
+ * getDefaultThresholds(['http_req_duration', 'http_reqs'], ['search'])
+ * 
+ * // The returned object will include:
+ * // - http_req_failed: ['rate<0.01'] // Ensures less than 1% of requests fail
+ * // - http_req_duration{name:search}: [] // Makes duration metrics visible
+ * // - http_reqs{name:search}: [] // Makes request count metrics visible
  */

11-13: Enhance input validation.

While type checking is good, consider adding more thorough validation:

  • Check for non-empty arrays
  • Validate that array elements are non-empty strings
  • Validate counter/label names for k6 syntax compatibility

Here's a suggested improvement:

 export function getDefaultThresholds(counters, labels) {
     if (!Array.isArray(counters) || !Array.isArray(labels)) {
         throw new Error('Both counters and labels must be arrays');
     }
+    if (counters.length === 0 || labels.length === 0) {
+        throw new Error('Both counters and labels must be non-empty arrays');
+    }
+    if (!counters.every(c => typeof c === 'string' && c.trim().length > 0) ||
+        !labels.every(l => typeof l === 'string' && l.trim().length > 0)) {
+        throw new Error('All counters and labels must be non-empty strings');
+    }

14-16: Consider making the failure threshold configurable.

The hardcoded failure rate threshold of 0.01 might not suit all test scenarios. Consider making it configurable:

Here's a suggested improvement:

+const DEFAULT_FAILURE_RATE = 0.01;
+
-export function getDefaultThresholds(counters, labels) {
+export function getDefaultThresholds(counters, labels, failureRate = DEFAULT_FAILURE_RATE) {
     // ... validation code ...
     let thresholds = {
-        http_req_failed: ['rate<0.01']
+        http_req_failed: [`rate<${failureRate}`]
     };
tests/k6/tests/serviceowner/performance/create-dialog.js (1)

Line range hint 14-20: Simplify the VU/iterations condition check.

The current implementation correctly handles different test scenarios, but the condition check could be more concise.

Consider this simplified version:

-    if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
+    if ((options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1) {
       createDialog(serviceOwners[0], endUsers[0]);
     }
     else {
         createDialog(randomItem(serviceOwners), randomItem(endUsers));
     }
tests/k6/tests/graphql/performance/graphql-search.js (3)

6-9: Consider defining GraphQL-specific performance thresholds.

While using default thresholds is a good start, GraphQL searches might have different performance characteristics compared to other operations. Consider defining specific thresholds based on your performance requirements for GraphQL searches.

Example modification:

 export let options = {
     summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)', 'count'],
-    thresholds: getDefaultThresholds(['http_req_duration', 'http_reqs'],['graphql search'])
+    thresholds: {
+        'http_req_duration{type:graphql search}': ['p(95)<500', 'p(99)<1000'],  // Adjust these values
+        'http_reqs{type:graphql search}': ['rate>100'],  // Adjust based on your requirements
+    }
 };

11-21: Simplify the execution path condition.

The current condition for determining single vs multi-user mode could be more readable.

Consider this cleaner approach:

 export default function() {
     if (!endUsers || endUsers.length === 0) {
         throw new Error('No end users loaded for testing');
     }
-    if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
+    const isSingleUserMode = (options.vus || 1) === 1 && (options.iterations || 1) === 1;
+    if (isSingleUserMode) {
         graphqlSearch(endUsers[0]);
     }
     else {
         graphqlSearch(randomItem(endUsers));
     }
 }

23-23: Remove extra empty lines at the end of the file.

Clean up the file by removing the extra empty lines at the end.

tests/k6/tests/performancetest_common/readTestdata.js (1)

1-2: Consider hosting PapaParse locally.

While using jslib.k6.io is convenient, consider bundling PapaParse with your test code to:

  • Eliminate external dependencies during test execution
  • Ensure consistent test execution in air-gapped environments
  • Reduce network-related test variability
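If this suggestion is adopted, the change is essentially swapping the remote import for a copy checked into the repository, roughly as follows (the vendored path is an assumption):

// Remote import, resolved from jslib.k6.io at run time:
// import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';

// Vendored copy committed to the repo, so tests run without network access:
import papaparse from './vendor/papaparse-5.1.1/index.js';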
tests/k6/tests/enduser/performance/enduser-search.js (2)

5-17: Improve readability of operations array.

While the configuration is technically correct, the array formatting could be improved for better readability.

Consider this formatting:

    thresholds: getDefaultThresholds(
      ['http_req_duration', 'http_reqs'],
      [
        'enduser search',
        'get dialog',
        'get dialog activities',
        'get dialog activity',
        'get seenlogs',
        'get seenlog',
        'get transmissions',
        'get transmission',
        'get labellog'
      ]
    )

5-17: Consider environment-specific thresholds and documentation.

For a search performance test, consider the following improvements:

  1. Make thresholds configurable based on environment (dev/staging/prod) as search performance can vary significantly
  2. Add comments documenting expected performance baselines and any known bottlenecks
  3. Consider adding gradual ramp-up periods for VUs to better simulate real-world load patterns

Example configuration with environment-specific thresholds:

const ENV = __ENV.TEST_ENV || 'dev';
const thresholdConfigs = {
  dev: { p95: 500, p99: 1000 },
  staging: { p95: 300, p99: 600 },
  prod: { p95: 200, p99: 400 }
};

export let options = {
  // ... existing options ...
  scenarios: {
    ramp_up: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '30s', target: 10 },
        { duration: '1m', target: 10 },
        { duration: '30s', target: 0 }
      ]
    }
  }
};
tests/k6/tests/scenarios/performance/create-dialog-and-search.js (2)

1-6: Consider version pinning for local imports.

While the k6-utils import is properly versioned, the local imports lack version specifications. This could lead to compatibility issues if the imported modules are updated.

Consider using a similar versioning approach for local imports:

-import { enduserSearch } from '../../performancetest_common/simpleSearch.js';
+import { enduserSearch } from '../../performancetest_common/simpleSearch.js?v=1.0.0';

8-39: Add documentation for test configuration.

The configuration lacks comments explaining the purpose of each scenario and the expected performance characteristics.

Consider adding JSDoc comments:

+/**
+ * Test configuration for dialog creation and search performance scenarios.
+ * @property {Object} scenarios - Test scenarios configuration
+ * @property {Object} scenarios.create_dialogs - Creates dialogs with constant VUs
+ * @property {Object} scenarios.simple_search - Performs searches with constant VUs
+ */
 export const options = {
tests/k6/tests/performancetest_common/createDialog.js (1)

7-13: Enhance JSDoc documentation for better clarity.

The function documentation could be more comprehensive. Consider adding:

  • Return value documentation
  • More specific parameter type information
  • Example usage
 /**
  * Creates a dialog.
+ * 
+ * @param {Object} serviceOwner - The service owner object containing token information.
+ * @param {string} serviceOwner.token - The authentication token for the service owner.
+ * @param {Object} endUser - The end user object containing identification information.
+ * @param {string} endUser.ssn - The social security number of the end user.
+ * @param {string} endUser.resource - The resource identifier for the end user.
+ * @returns {void}
+ * 
+ * @example
+ * createDialog({
+ *   token: 'bearer-token'
+ * }, {
+ *   ssn: '12345678901',
+ *   resource: 'resource-id'
+ * });
  */
.github/workflows/dispatch-k6-performance.yml (1)

29-33: Enhance tag parameter configuration

The tag parameter configuration has a few potential improvements:

  1. The parameter is marked as required but has a default value, which is contradictory
  2. The description could better explain the tag's purpose and format expectations
      tag:
-        description: 'tag the performance test'
+        description: 'Label to identify the performance test run (e.g., "API-v1-Search", "Baseline")'
        required: true
-        default: 'Performance test'
        type: string
tests/k6/tests/performancetest_common/simpleSearch.js (1)

16-28: Add error handling in 'retrieveDialogContent' function

The retrieveDialogContent function may not handle cases where items[0] does not have an id or when subsequent requests fail. This could lead to unhandled exceptions.

Consider adding additional checks and error handling to ensure robustness.
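One way to add the checks suggested here is sketched below. The function's real signature and helpers in simpleSearch.js may differ; this only shows the kind of guard intended.

// Hypothetical guard: bail out early if the search returned no usable dialog id,
// instead of letting the follow-up requests throw.
export function retrieveDialogContent(response, paramsWithToken) {
    const items = response.json('items');
    if (!items || items.length === 0 || !items[0].id) {
        console.error('enduser search returned no dialog id; skipping content retrieval');
        return;
    }
    const dialogId = items[0].id;
    // ... issue the follow-up requests (activities, seenlogs, transmissions, labellog) using dialogId ...
}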

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between cbadc0b and 088fc3c.

📒 Files selected for processing (11)
  • .github/workflows/dispatch-k6-performance.yml (3 hunks)
  • tests/k6/tests/enduser/performance/enduser-search.js (1 hunks)
  • tests/k6/tests/enduser/performance/simple-search.js (0 hunks)
  • tests/k6/tests/graphql/performance/graphql-search.js (1 hunks)
  • tests/k6/tests/performancetest_common/createDialog.js (1 hunks)
  • tests/k6/tests/performancetest_common/getDefaultThresholds.js (1 hunks)
  • tests/k6/tests/performancetest_common/readTestdata.js (1 hunks)
  • tests/k6/tests/performancetest_common/simpleSearch.js (1 hunks)
  • tests/k6/tests/scenarios/performance/create-dialog-and-search.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
💤 Files with no reviewable changes (1)
  • tests/k6/tests/enduser/performance/simple-search.js
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/k6/tests/serviceowner/performance/create-remove-dialog.js
🧰 Additional context used
📓 Learnings (2)
tests/k6/tests/graphql/performance/graphql-search.js (2)
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/enduser/performance/graphql-search.js:37-40
Timestamp: 2024-11-04T13:50:00.958Z
Learning: In `tests/k6/tests/enduser/performance/graphql-search.js`, the team prefers to validate only the HTTP status and JSON format of GraphQL responses, without adding specific validations of response fields.
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/performancetest_data/graphql-search.js:3-4
Timestamp: 2024-11-04T12:58:36.855Z
Learning: In `tests/k6/tests/performancetest_data/graphql-search.js`, direct string interpolation in GraphQL queries is acceptable, and hardcoded URN formats are preferred for the performance tests.
tests/k6/tests/performancetest_common/getDefaultThresholds.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/enduser/performance/simple-search.js:11-20
Timestamp: 2024-11-04T07:06:39.379Z
Learning: In `tests/k6/tests/performancetest_common/common.js`, the function `getDefaultThresholds` may intentionally create empty threshold arrays to make labels visible in k6 reports.
🔇 Additional comments (16)
tests/k6/tests/performancetest_common/getDefaultThresholds.js (1)

17-21: LGTM! The empty threshold arrays are intentional.

The empty threshold arrays for counter-label combinations are correctly implemented. As confirmed by the learning note, this makes the labels visible in k6 reports without enforcing specific thresholds.
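Putting the pieces quoted in this review together, the whole function is small; the following is a sketch consistent with the snippets above (the exact key format in the repository may differ):

export function getDefaultThresholds(counters, labels) {
    if (!Array.isArray(counters) || !Array.isArray(labels)) {
        throw new Error('Both counters and labels must be arrays');
    }
    // Keep the overall failure-rate threshold, and add an empty threshold per
    // counter/label combination so the labels show up in the k6 summary.
    let thresholds = {
        http_req_failed: ['rate<0.01'],
    };
    for (const counter of counters) {
        for (const label of labels) {
            thresholds[`${counter}{name:${label}}`] = [];
        }
    }
    return thresholds;
}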

tests/k6/tests/serviceowner/performance/create-dialog.js (3)

4-6: LGTM! Good modularization of test utilities.

The refactoring improves code organization by:

  • Centralizing threshold configuration
  • Standardizing dialog creation
  • Consolidating test data management

5-5: Verify the imported createDialog implementation.

Ensure the imported createDialog from performancetest_common maintains the same functionality as the removed implementation.

#!/bin/bash
# Description: Compare old and new createDialog implementations
# Expected: Similar functionality with improved structure

# Check the new implementation
ast-grep --pattern 'export function createDialog($_) {
  $$$
}'

# Look for any direct usage of the old implementation
rg -B 5 "function createDialog" "tests/k6/tests/"

10-10: Verify threshold configuration parameters.

The threshold configuration uses standard metrics but should be validated:

  • 'http_req_duration': Ensures response time thresholds
  • 'http_reqs': Monitors request throughput
  • 'create dialog': Specific to this operation
✅ Verification successful

Threshold configuration is correctly implemented and consistent

The threshold configuration is properly implemented using the standardized getDefaultThresholds utility:

  • The function is defined in tests/k6/tests/performancetest_common/getDefaultThresholds.js and used consistently across all performance tests
  • It sets appropriate default failure threshold for http_req_failed at rate < 1%
  • The metrics (http_req_duration, http_reqs) and label (create dialog) match the pattern used in other similar tests
  • The implementation follows the same configuration pattern as other performance tests like create-remove-dialog, graphql-search, and enduser-search
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify threshold configuration consistency across performance tests
# Expected: Similar threshold patterns for similar operations

# Check threshold configuration patterns across performance tests
rg -A 3 "getDefaultThresholds" "tests/k6/tests/"

# Verify the implementation of getDefaultThresholds
ast-grep --pattern 'export function getDefaultThresholds($_) {
  $$$
}'

Length of output: 1217


Script:

#!/bin/bash
# Let's try a different approach to find the threshold configuration

# Search for the getDefaultThresholds implementation
rg -l "getDefaultThresholds" tests/k6/

# Look for the file content where getDefaultThresholds is defined
rg -A 10 "function getDefaultThresholds" tests/k6/

# Check other performance test configurations for comparison
rg "thresholds:" tests/k6/tests/ -A 2

Length of output: 3038

tests/k6/tests/graphql/performance/graphql-search.js (2)

1-4: LGTM! Well-structured imports with version pinning.

The imports are well-organized, with version-pinned CDN imports for third-party modules and clear relative paths for local modules.


4-4: Verify the GraphQL search implementation.

Let's ensure the imported graphqlSearch function is properly implemented.

✅ Verification successful

Let me verify the implementation of the helper functions used within graphqlSearch.


GraphQL search implementation is properly integrated and implemented

The verification shows a well-structured implementation:

  • graphqlSearch function properly handles authentication with Bearer token
  • postGQ helper correctly manages GraphQL requests with proper content type
  • getGraphqlParty contains a comprehensive GraphQL query for dialog search with proper fields
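To make the verified flow concrete, the shape of such a graphqlSearch helper is roughly as follows. This is a sketch assuming the shared test imports and the field names mentioned elsewhere in this review (token and ssn on the end-user record); the actual helper in the repository may differ in detail.

import { postGQ, expect, expectStatusFor } from '../../common/testimports.js';
import { getGraphqlParty } from '../performancetest_data/graphql-search.js';

export function graphqlSearch(endUser) {
    // Tag the request so the 'graphql search' threshold label picks it up.
    const paramsWithToken = {
        headers: { Authorization: 'Bearer ' + endUser.token },
        tags: { name: 'graphql search' },
    };
    const r = postGQ(getGraphqlParty(endUser.ssn), paramsWithToken);
    // Per the team's preference noted above, only status and JSON validity are checked.
    expectStatusFor(r).to.equal(200);
    expect(r, 'response').to.have.validJsonBody();
}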
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the implementation of graphqlSearch function
# Expected: Find the implementation with proper error handling and GraphQL query execution

# Search for the graphqlSearch function implementation
ast-grep --pattern 'export function graphqlSearch($_) {
  $$$
}'

# Backup: Search using ripgrep if the function is not exported
rg -A 10 'function graphqlSearch'

Length of output: 2302


Script:

#!/bin/bash
# Search for postGQ and getGraphqlParty functions
ast-grep --pattern 'function postGQ($_) {
  $$$
}'

ast-grep --pattern 'function getGraphqlParty($_) {
  $$$
}'

# Backup search with ripgrep
rg -A 5 'function (postGQ|getGraphqlParty)'

Length of output: 7048

tests/k6/tests/performancetest_common/readTestdata.js (2)

4-7: 🛠️ Refactor suggestion

Improve path handling and add error handling.

The current implementation has several potential issues:

  1. Relative path could break if tests are run from different directories
  2. No explicit error handling for missing files
  3. The .serviceowners-with-tokens.csv appears to contain sensitive data

Consider these improvements:

-const filenameServiceowners = '../performancetest_data/.serviceowners-with-tokens.csv';
+const BASE_PATH = __ENV.TEST_DATA_PATH || '../performancetest_data';
+const filenameServiceowners = `${BASE_PATH}/.serviceowners-with-tokens.csv`;
 export const serviceOwners = new SharedArray('serviceOwners', function () {
+  if (!open(filenameServiceowners)) {
+    throw new Error(`Service owners data file not found: ${filenameServiceowners}`);
+  }
   return papaparse.parse(open(filenameServiceowners), { header: true, skipEmptyLines: true }).data;
 });

Also, ensure that:

  1. The CSV file containing tokens is properly secured
  2. Tokens in the file are rotated regularly
  3. The file is excluded from version control

17-20: 🛠️ Refactor suggestion

Add data validation and security measures for token data.

The end users with tokens data should be validated and properly secured.

Add validation and use the refactored loadCsvData function:

-const filenameEndusersWithTokens = '../performancetest_data/.endusers-with-tokens.csv';
-export const endUsersWithTokens = new SharedArray('endUsersWithTokens', function () {
-  return papaparse.parse(open(filenameEndusersWithTokens), { header: true, skipEmptyLines: true }).data;
-});
+const filenameEndusersWithTokens = `${BASE_PATH}/.endusers-with-tokens.csv`;
+export const endUsersWithTokens = loadCsvData(filenameEndusersWithTokens, 'endUsersWithTokens');
+
+// Validate that required fields are present
+endUsersWithTokens.forEach((user, index) => {
+  if (!user.token || !user.id) {
+    throw new Error(`Invalid user data at index ${index}: missing required fields`);
+  }
+});

Let's verify the security of token files:

tests/k6/tests/scenarios/performance/create-dialog-and-search.js (1)

26-37: Review and clean up threshold operations list.

The thresholds array includes operations like 'get labellog', 'get seenlogs', etc., which don't appear to be used in this test file. This could make maintenance more difficult and potentially mask performance issues.
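One way to keep the list aligned with the scenario is to build it only from the operation labels this file actually uses; the import path and labels below are assumptions, not the file's real list.

import { getDefaultThresholds } from '../../performancetest_common/getDefaultThresholds.js'; // assumed path

// Illustrative only - restrict thresholds to the operations exercised by this scenario.
const operationLabels = ['create dialog', 'enduser search', 'graphql search']; // assumed labels
export const options = {
    thresholds: getDefaultThresholds(['http_req_duration', 'http_reqs'], operationLabels),
};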

tests/k6/tests/performancetest_common/createDialog.js (1)

5-5: Verify test data generation implementation.

Let's verify the implementation of dialogToInsert to ensure consistent test data generation.

✅ Verification successful

Let me try a different approach to locate and verify the implementation.


Let me check the test data generation source that's being imported.


Test data generation implementation is consistent and well-structured

The implementation follows a clean pattern where:

  • Base test data is generated in serviceowner/testdata/01-create-dialog.js
  • Performance test data in performancetest_data/01-create-dialog.js extends it with proper cleanup and validation
  • The implementation ensures consistent test data generation (see the sketch after this list) with:
    • Input validation for end user and resource
    • Proper cleanup of sensitive data (e.g., actorId)
    • Consistent structure for dialog payload
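As a rough illustration of that wrapping pattern; the field names, the base module's export shape, and the cleanup step below are assumptions rather than the actual payload.

// Hypothetical sketch - the real generator in performancetest_data/01-create-dialog.js is richer.
import { default as baseDialog } from '../serviceowner/testdata/01-create-dialog.js'; // assumed export

export function dialogToInsert(endUser, resource) {
    if (!endUser || !resource) {
        throw new Error('dialogToInsert requires both an end user and a resource');
    }
    const dialog = baseDialog();                                      // assumed to return a fresh payload
    dialog.serviceResource = 'urn:altinn:resource:' + resource;       // assumed URN format
    dialog.party = 'urn:altinn:person:identifier-no:' + endUser.ssn;
    delete dialog.activities;                                         // drop fields that carry actorId etc.
    return dialog;
}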
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for dialogToInsert implementation
echo "Searching for dialogToInsert implementation..."
rg -A 10 "dialogToInsert.*=.*function" tests/k6/tests/performancetest_data/

# Verify test data structure
echo "Verifying test data structure..."
ast-grep --pattern 'export default function dialogToInsert($_) { $$$ }'

Length of output: 329


Script:

#!/bin/bash
# Check the content of the data file
echo "Checking content of create-dialog.js..."
cat tests/k6/tests/performancetest_data/01-create-dialog.js

# Search for any usage patterns
echo -e "\nSearching for usage patterns..."
rg -l "dialogToInsert" tests/k6/tests/

Length of output: 2782


Script:

#!/bin/bash
# Check the source test data implementation
echo "Checking source test data implementation..."
cat tests/k6/tests/serviceowner/testdata/01-create-dialog.js

# Check one usage example
echo -e "\nChecking a usage example..."
cat tests/k6/tests/performancetest_common/createDialog.js

Length of output: 13852

.github/workflows/dispatch-k6-performance.yml (3)

55-55: LGTM! Improved run name format

The new run name format including tag, VUs, and duration improves test run identification and observability.


37-37: Verify the impact of reduced default VUs

Changing the default from 10 to 1 virtual user significantly affects the performance test characteristics. While this might be suitable for initial testing, it may not provide realistic performance metrics under normal load conditions.
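If heavier load is wanted regardless of the dispatch input, the test itself could declare it; the values below are purely illustrative, and CLI flags or workflow inputs would still override them.

// Illustrative only - a heavier built-in default for load-oriented runs.
export const options = {
    vus: 10,
    duration: '5m',
};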

#!/bin/bash
# Description: Check VU usage in existing test files

# Test: Look for VU-related configurations in k6 tests
echo "Checking VU configurations in k6 test files..."
rg -B2 -A2 "vus:|scenarios:|stages:" "tests/k6/tests/"

51-53: Verify new test file implementations

The addition of new test paths is aligned with the PR objectives. Let's verify that these files exist and are properly implemented.

#!/bin/bash
# Description: Verify existence and implementation of new test files

# Test 1: Check if files exist
echo "Checking existence of new test files..."
for file in "tests/k6/tests/serviceowner/performance/create-remove-dialog.js" \
            "tests/k6/tests/enduser/performance/enduser-search.js" \
            "tests/k6/tests/graphql/performance/graphql-search.js"; do
    if [ -f "$file" ]; then
        echo "Found: $file"
    else
        echo "Missing: $file"
    fi
done

# Test 2: Check for k6 test structure in these files
echo "Checking k6 test structure..."
rg "export default function|export function setup" "tests/k6/tests/"
tests/k6/tests/performancetest_common/simpleSearch.js (3)

84-85: Verify that 'json' is an array before checking its length

In the getContentChain function, on lines 84-85, you retrieve the JSON response and check if it has a length:

let json = d.json();
if (json.length > 0) { ... }

Ensure that json is an array to prevent runtime errors if it's an object.

Consider adding a check to confirm json is an array:

 let json = d.json();
-if (json.length > 0) { ... }
+if (Array.isArray(json) && json.length > 0) { ... }

38-38: Confirm validity of the end user's token in the Authorization header

On line 38, the Authorization header is set:

Authorization: "Bearer " + enduser.token,

Ensure that enduser.token contains a valid and non-expired token to authenticate requests properly.

Consider adding token validation before making the request.
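A lightweight guard along these lines could fail fast before the request is sent; the helper name is hypothetical and it assumes the token is a JWT.

import encoding from 'k6/encoding';

// Hypothetical helper - throws if the token is missing or its JWT exp claim has passed.
// e.g. call assertValidToken(enduser.token) before building paramsWithToken.
function assertValidToken(token) {
    if (!token) {
        throw new Error('End user token is missing');
    }
    const payload = JSON.parse(encoding.b64decode(token.split('.')[1], 'rawurl', 's'));
    if (payload.exp && payload.exp * 1000 < Date.now()) {
        throw new Error('End user token has expired');
    }
}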


43-43: ⚠️ Potential issue

Handle PII securely when constructing 'defaultParty'

On line 43, you're constructing the defaultParty string using the end user's SSN:

let defaultParty = "urn:altinn:person:identifier-no:" + enduser.ssn;

Ensure that the SSN is handled securely and is not logged or exposed in any way to prevent PII leakage.

Please confirm that this complies with data privacy policies.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

🧹 Outside diff range and nitpick comments (6)
tests/k6/tests/serviceowner/performance/create-dialog.js (2)

1-8: Enhance test documentation with more details.

While the basic run command is documented, it would be helpful to add:

  • Description of different test scenarios (single vs multiple iterations)
  • Available parameters and their impact
  • Expected outcomes or performance targets

Add this documentation above the imports:

 /**
  * Performance test for creating a dialog
  * Run: k6 run tests/k6/tests/serviceowner/performance/create-dialog.js --vus 1 --iterations 1
+ * 
+ * Parameters:
+ * - vus: Number of virtual users (default: 1)
+ * - iterations: Number of iterations per VU (default: 1)
+ * 
+ * Scenarios:
+ * 1. Single iteration (vus=1, iterations=1):
+ *    Uses fixed serviceOwner[0] and endUser[0] for consistent baseline testing
+ * 2. Multiple iterations:
+ *    Randomly selects serviceOwner and endUser for load testing
+ * 
+ * Performance targets:
+ * - See thresholds in getDefaultThresholds for specific metrics
  */

11-12: Improve clarity of threshold configuration parameters.

The parameters passed to getDefaultThresholds could be more descriptive to better convey their purpose.

Consider this refactoring:

-    thresholds: getDefaultThresholds(['http_req_duration', 'http_reqs'],['create dialog'])
+    thresholds: getDefaultThresholds(
+      ['http_req_duration', 'http_reqs'], // metrics to track
+      ['create dialog']                   // operation labels
+    )
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (2)

1-4: Enhance test documentation with more details.

Consider adding more information to the documentation:

  • Purpose and scope of the test
  • Description of test scenarios
  • Available parameters and their impact
  • Expected outcomes and thresholds
 /** 
  * Performance test for creating and removing a dialog
+ * 
+ * Purpose:
+ *   Validates performance of dialog creation and removal operations
+ * 
+ * Parameters:
+ *   --vus: Number of virtual users (default: 1)
+ *   --iterations: Number of test iterations (default: 1)
+ * 
+ * Scenarios:
+ *   - Single user: Uses first service owner and end user
+ *   - Multiple users: Randomly selects service owners and end users
+ * 
  * Run: k6 run tests/k6/tests/serviceowner/performance/create-remove-dialog.js --vus 1 --iterations 1
  */

18-25: Simplify conditions and add array access validation.

While the logic is correct, consider these improvements:

  1. Simplify the single user condition
  2. Add validation for array access
 export default function() {
+    if (!serviceOwners.length || !endUsers.length) {
+        throw new Error('No test data available');
+    }
+
-    if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
+    const isSingleUserMode = (options.vus || 1) === 1 && (options.iterations || 1) === 1;
+    if (isSingleUserMode) {
       createAndRemoveDialog(serviceOwners[0], endUsers[0]);
     }
     else {
         createAndRemoveDialog(randomItem(serviceOwners), randomItem(endUsers));
     }
   }
tests/k6/tests/performancetest_common/readTestdata.js (1)

9-10: Consider local dependencies for performance testing reliability.

Using a CDN-hosted dependency (PapaParse) in performance tests could introduce network-related variability. Additionally, the k6 import lacks version pinning.

Consider:

  1. Bundling PapaParse locally with your test code
  2. Pinning any remote jslib imports to an exact version (built-in modules such as k6/data are versioned with the k6 binary itself and cannot be pinned in the import path)
tests/k6/tests/performancetest_common/simpleSearch.js (1)

20-32: Consider optimizing multiple sequential API calls.

The function makes several sequential API calls which could impact performance test results. Consider:

  1. Implementing parallel requests with http.asyncRequest() and Promise.all() where order doesn't matter (Promise.all over k6's synchronous http calls will not actually run them in parallel)
  2. Adding error handling for individual API calls
  3. Adding timing metrics for each type of request

Example optimization:

async function retrieveDialogContent(response, paramsWithToken) {
    const items = response.json().items;
    if (!items?.length) return;
    
    const dialogId = items[0].id;
    if (!dialogId) return;
    
    const requests = [
        getContent(dialogId, paramsWithToken, 'get dialog'),
        getContentChain(dialogId, paramsWithToken, 'get dialog activities', 'get dialog activity', '/activities/'),
        getContentChain(dialogId, paramsWithToken, 'get seenlogs', 'get seenlog', '/seenlog/'),
        getContent(dialogId, paramsWithToken, 'get labellog', '/labellog'),
        getContentChain(dialogId, paramsWithToken, 'get transmissions', 'get transmission', '/transmissions/')
    ];
    
    try {
        await Promise.all(requests);
    } catch (error) {
        console.error(`Failed to retrieve dialog content: ${error.message}`);
    }
}
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 088fc3c and 5d13c27.

📒 Files selected for processing (8)
  • tests/k6/tests/graphql/performance/graphql-search.js (1 hunks)
  • tests/k6/tests/performancetest_common/createDialog.js (1 hunks)
  • tests/k6/tests/performancetest_common/getDefaultThresholds.js (1 hunks)
  • tests/k6/tests/performancetest_common/readTestdata.js (1 hunks)
  • tests/k6/tests/performancetest_common/simpleSearch.js (1 hunks)
  • tests/k6/tests/scenarios/performance/create-dialog-and-search.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • tests/k6/tests/graphql/performance/graphql-search.js
  • tests/k6/tests/performancetest_common/createDialog.js
  • tests/k6/tests/performancetest_common/getDefaultThresholds.js
  • tests/k6/tests/scenarios/performance/create-dialog-and-search.js
🧰 Additional context used
📓 Learnings (1)
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/serviceowner/performance/create-remove-dialog.js:26-50
Timestamp: 2024-11-04T13:04:34.896Z
Learning: In `tests/k6/tests/serviceowner/performance/create-remove-dialog.js`, the team has decided not to add a teardown function using k6's `teardown` functionality for cleanup in case of test failures.
🔇 Additional comments (5)
tests/k6/tests/serviceowner/performance/create-dialog.js (1)

Line range hint 14-21: Add data validation and simplify conditions.

The test execution logic could benefit from some improvements:

  1. Validate that serviceOwners and endUsers arrays are not empty
  2. Simplify the condition check for single iteration mode

Consider this refactoring:

 export default function() {
+    if (!serviceOwners.length || !endUsers.length) {
+        throw new Error('Test data arrays cannot be empty');
+    }
-    if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
+    const isSingleIteration = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1;
+    if (isSingleIteration) {
       createDialog(serviceOwners[0], endUsers[0]);
     }
     else {
         createDialog(randomItem(serviceOwners), randomItem(endUsers));
     }
   }

Let's verify the test data loading:

tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)

10-16: Well-structured test configuration!

The configuration demonstrates good practices:

  • Comprehensive summary statistics including critical percentiles (see the example below)
  • Reusable threshold configuration through getDefaultThresholds
  • Clear separation of concerns between stats and thresholds
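For reference, the pattern described above typically boils down to something like this; the import path, the exact percentile list, and the operation labels are assumptions.

import { getDefaultThresholds } from '../../performancetest_common/getDefaultThresholds.js'; // assumed path

// Illustrative configuration following the described pattern.
export const options = {
    summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)', 'count'],
    thresholds: getDefaultThresholds(['http_req_duration', 'http_reqs'], ['create dialog', 'remove dialog']),
};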
tests/k6/tests/performancetest_common/readTestdata.js (1)

1-52: Verify data usage in performance tests.

Let's verify how this test data is being used in the performance test scripts.

✅ Verification successful

Shared test data arrays are properly integrated in performance tests

The verification shows that the shared test data arrays are correctly imported and utilized across multiple performance test scenarios:

  • All test files properly import the required data arrays from readTestdata.js
  • Each scenario appropriately uses the data for its specific test case:
    • Create/remove dialog tests use serviceOwners and endUsers
    • Search tests use endUsersWithTokens
    • All tests handle both single-user and multi-user scenarios correctly

No issues found with the test data implementation or its usage patterns.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check usage of the exported data arrays in performance test files

echo "Checking usage of exported data arrays in performance tests..."
rg -l "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/

echo "\nChecking for any hardcoded test data that should use these shared arrays..."
rg -l "\.csv" tests/k6/tests/ | grep -v "readTestdata.js"

Length of output: 813


Script:

#!/bin/bash
# Let's check how these shared arrays are actually used in the test files

echo "Checking usage in create-dialog-and-search.js..."
rg -A 5 "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/scenarios/performance/create-dialog-and-search.js

echo "\nChecking usage in enduser-search.js..."
rg -A 5 "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/enduser/performance/enduser-search.js

echo "\nChecking usage in create-remove-dialog.js..."
rg -A 5 "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/serviceowner/performance/create-remove-dialog.js

echo "\nChecking usage in create-dialog.js..."
rg -A 5 "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/serviceowner/performance/create-dialog.js

echo "\nChecking usage in graphql-search.js..."
rg -A 5 "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/graphql/performance/graphql-search.js

Length of output: 3429

tests/k6/tests/performancetest_common/simpleSearch.js (2)

1-11: LGTM! Imports are well-organized and provide necessary utilities.

The imports provide a good foundation for testing utilities, including test assertions, request helpers, and UUID generation.


117-130: 🛠️ Refactor suggestion

Add GraphQL-specific error handling and response validation.

The GraphQL search implementation should handle GraphQL-specific errors and validate the response structure.

export function graphqlSearch(enduser) {
    if (!enduser?.token) {
        throw new Error('Invalid enduser object: missing token');
    }
    
    let paramsWithToken = {
        headers: {
            Authorization: "Bearer " + enduser.token,
            traceparent: uuidv4()
        },
        tags: { name: 'graphql search' }
    };
    
    describe('Perform graphql dialog list', () => {
        let r = postGQ(getGraphqlParty(enduser.ssn), paramsWithToken);
        expectStatusFor(r).to.equal(200);
        expect(r, 'response').to.have.validJsonBody();
        
        // Validate GraphQL response
        const body = r.json();
        expect(body, 'graphql response').to.not.have.property('errors');
        expect(body, 'graphql data').to.have.property('data');
    });
}
⛔ Skipped due to learnings
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/enduser/performance/graphql-search.js:37-40
Timestamp: 2024-11-04T13:50:00.958Z
Learning: In `tests/k6/tests/enduser/performance/graphql-search.js`, the team prefers to validate only the HTTP status and JSON format of GraphQL responses, without adding specific validations of response fields.

sonarqubecloud bot commented Nov 5, 2024

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (1)
tests/k6/tests/performancetest_common/simpleSearch.js (1)

1-129: Consider implementing a metrics collection strategy.

The file implements various search operations but lacks a consistent strategy for performance metrics collection. Consider:

  1. Creating a centralized metrics collection utility
  2. Defining standard performance metrics across all operations
  3. Implementing proper error tracking and reporting

Would you like assistance in designing a metrics collection strategy that aligns with k6's best practices?
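One possible direction, sketched with k6's built-in custom metrics; the module layout, metric names, and labels are illustrative only.

import { Trend, Counter } from 'k6/metrics';

// Illustrative central metrics module for the search helpers.
const searchDuration = new Trend('dialog_search_duration', true);
const searchErrors = new Counter('dialog_search_errors');

export function trackSearch(response, label) {
    searchDuration.add(response.timings.duration, { name: label });
    if (response.status !== 200) {
        searchErrors.add(1, { name: label });
    }
}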

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 5d13c27 and a864d86.

📒 Files selected for processing (1)
  • tests/k6/tests/performancetest_common/simpleSearch.js (1 hunks)
🔇 Additional comments (1)
tests/k6/tests/performancetest_common/simpleSearch.js (1)

1-10: LGTM! Well-structured imports and documentation.

The imports are well-organized and the k6-utils version is consistently set to 1.4.0 as previously suggested.

@dagfinno dagfinno merged commit 70a63cd into main Nov 6, 2024
24 of 26 checks passed
@dagfinno dagfinno deleted the performance/create-tests branch November 6, 2024 06:15