test: add more performance tests #1373
Conversation
📝 Walkthrough
The pull request introduces significant updates to the K6 performance testing workflows and scripts. The workflow configuration is modified to change default parameters and add new input requirements. Several new files are created for testing GraphQL and end-user functionality, while existing scripts are updated to improve data handling and error management. Outdated performance scripts have been removed. Overall, these changes improve the structure and flexibility of performance testing across various scenarios.
Actionable comments posted: 26
🧹 Outside diff range and nitpick comments (7)
tests/k6/tests/enduser/performance/graphql-search.js (3)
8-8: Consider using path.join for file path construction.
The hardcoded relative path could be fragile if the file structure changes. Consider using a more robust path construction method.
-const filenameEndusers = '../../performancetest_data/.endusers-with-tokens.csv';
+const filenameEndusers = path.join(__dirname, '../../performancetest_data/.endusers-with-tokens.csv');
36-43: Add JSDoc documentation for the default function.
The user selection logic should be documented to make the behavior clear for other developers.
+/**
+ * Main test function that executes GraphQL search tests.
+ * For single VU/iteration tests, uses the first user from the dataset.
+ * For multiple VUs/iterations, randomly selects users to simulate concurrent access.
+ */
 export default function() {
53-53: Remove commented out code.
Dead code should be removed to maintain code cleanliness.
- //let r = ('dialogs' + defaultFilter, paramsWithToken);
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (2)
7-8: Consider making file paths more robust
The relative paths to CSV files might be fragile if the test is run from different working directories. Consider using __dirname or a similar mechanism to ensure path resolution is consistent.

-const filenameServiceowners = '../../performancetest_data/.serviceowners-with-tokens.csv';
-const filenameEndusers = `../../performancetest_data/endusers-${__ENV.API_ENVIRONMENT}.csv`;
+const filenameServiceowners = `${__dirname}/../../performancetest_data/.serviceowners-with-tokens.csv`;
+const filenameEndusers = `${__dirname}/../../performancetest_data/endusers-${__ENV.API_ENVIRONMENT}.csv`;
30-37: Verify array access safety
Direct array access without bounds checking could fail if the CSV files are empty or malformed. While this aligns with the "fail fast" approach, consider adding a debug log for easier troubleshooting:

 export default function() {
+    console.debug(`Loaded ${serviceOwners.length} service owners and ${endUsers.length} end users`);
     if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
         createDialog(serviceOwners[0], endUsers[0]);
     }
tests/k6/tests/scripts/generate_tokens.sh (2)
Line range hint 64-65: Consider making the token generator URL configurable.
The token generator URLs are currently hardcoded. Consider moving these to environment variables or a configuration file to:
- Improve flexibility for different environments
- Follow security best practices by not hardcoding service URLs
- Enable easier updates if the service endpoint changes
Example implementation:
+ # Read token generator URL from environment variable with a default fallback
+ TOKEN_GENERATOR_BASE_URL=${TOKEN_GENERATOR_BASE_URL:-"https://altinn-testtools-token-generator.azurewebsites.net"}

- url="https://altinn-testtools-token-generator.azurewebsites.net/api/GetEnterpriseToken?org=$org&env=$env&scopes=$scopes&orgno=$orgno&ttl=3600"
+ url="$TOKEN_GENERATOR_BASE_URL/api/GetEnterpriseToken?org=$org&env=$env&scopes=$scopes&orgno=$orgno&ttl=3600"

Also applies to: 89-90
Line range hint 64-65: Consider making token TTL configurable.
The token Time-To-Live (TTL) is hardcoded to 3600 seconds. For performance testing scenarios, different durations might be needed.
Example implementation:
+ # Default TTL of 1 hour if not specified
+ TOKEN_TTL=${TOKEN_TTL:-3600}

- url="$TOKEN_GENERATOR_BASE_URL/api/GetEnterpriseToken?org=$org&env=$env&scopes=$scopes&orgno=$orgno&ttl=3600"
+ url="$TOKEN_GENERATOR_BASE_URL/api/GetEnterpriseToken?org=$org&env=$env&scopes=$scopes&orgno=$orgno&ttl=$TOKEN_TTL"

Also applies to: 89-90
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (2)
- tests/k6/tests/performancetest_data/endusers-yt01.csv is excluded by !**/*.csv
- tests/k6/tests/performancetest_data/serviceowners-yt01.csv is excluded by !**/*.csv
📒 Files selected for processing (12)
- .github/workflows/dispatch-k6-performance.yml (3 hunks)
- tests/k6/common/config.js (2 hunks)
- tests/k6/common/request.js (2 hunks)
- tests/k6/common/testimports.js (1 hunks)
- tests/k6/tests/enduser/performance/graphql-search.js (1 hunks)
- tests/k6/tests/enduser/performance/simple-search.js (2 hunks)
- tests/k6/tests/performancetest_data/01-create-dialog.js (1 hunks)
- tests/k6/tests/performancetest_data/graphql-search.js (1 hunks)
- tests/k6/tests/scripts/generate_tokens.sh (1 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog-and-search.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog.js (2 hunks)
- tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
🧰 Additional context used
📓 Learnings (1)
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1331
File: tests/k6/tests/serviceowner/performance/create-dialog.js:28-33
Timestamp: 2024-10-23T11:32:22.074Z
Learning: In k6 performance tests, when data arrays such as `serviceOwners` and `endUsers` might be empty, prefer to let the test crash rather than adding checks to log error messages continuously.
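To make that convention concrete, here is a minimal hedged sketch of the fail-fast pattern; the file path and logging are illustrative, not taken from the PR:

// Sketch of the fail-fast convention: no emptiness guard on the shared arrays,
// so an empty or malformed CSV makes the very first iteration throw.
import { SharedArray } from 'k6/data';
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';

const serviceOwners = new SharedArray('serviceOwners', () =>
    papaparse.parse(open('../../performancetest_data/.serviceowners-with-tokens.csv'),
        { header: true, skipEmptyLines: true }).data);

export default function () {
    // serviceOwners[0] is undefined when the CSV is empty; touching .token then
    // throws a TypeError and fails the test immediately, which is the point.
    const owner = serviceOwners[0];
    console.log(`first owner token length: ${owner.token.length}`);
}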
🔇 Additional comments (16)
tests/k6/common/testimports.js (2)
17-18: LGTM! Clean addition of the GraphQL request function export.
The new export follows the established naming convention and is properly grouped with other request-related functions.
18-18: Verify GraphQL request implementation and usage.
Let's verify the implementation of postGQ and its usage in performance tests.
✅ Verification successful
Let me gather more information about the GraphQL implementation and its usage pattern.
Let me check one final aspect - the actual GraphQL query structure from the data file.
GraphQL implementation and usage verified successfully
The verification shows a well-structured GraphQL implementation:
- The postGQ function is properly implemented in request.js with correct content-type headers
- The GraphQL endpoint is correctly configured via baseUrlGraphql
- The query structure in graphql-search.js follows GraphQL best practices, fetching dialog data with proper error handling
- Test expectations validate both HTTP status and JSON response structure
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the implementation and usage of postGQ function
# Test 1: Check the implementation in request.js
echo "Checking postGQ implementation:"
rg -A 5 "postGQ.*=.*function" ./tests/k6/common/request.js
# Test 2: Find where postGQ is being used
echo -e "\nChecking postGQ usage:"
rg "postGQ" --type js
# Test 3: Verify GraphQL endpoint configuration
echo -e "\nChecking GraphQL endpoint configuration:"
rg "baseUrlGraphql" ./tests/k6/common/
Length of output: 1050
Script:
#!/bin/bash
# Check the GraphQL query structure and implementation details
# Test 1: Check the GraphQL query structure
echo "Checking GraphQL query structure in graphql-search.js:"
rg -A 10 "getGraphqlParty" ./tests/k6/tests/enduser/performance/graphql-search.js
# Test 2: Check the configuration setup
echo -e "\nChecking GraphQL configuration:"
rg -A 5 'baseUrls\[.*\]\["graphql"\]' ./tests/k6/common/config.js
# Test 3: Check error handling in postGQ
echo -e "\nChecking error handling in postGQ:"
rg -A 10 "function postGQ" ./tests/k6/common/request.js
# Test 4: Check test setup and expectations
echo -e "\nChecking test setup and expectations:"
rg -B 2 -A 5 "postGQ.*paramsWithToken" ./tests/k6/tests/enduser/performance/graphql-search.js
Length of output: 1984
Script:
#!/bin/bash
# Check the GraphQL query definition
# Test 1: Check the GraphQL query definition
echo "Checking GraphQL query definition:"
rg -A 10 "export.*getGraphqlParty" ./tests/k6/tests/performancetest_data/graphql-search.js
# Test 2: Check for any other GraphQL query patterns
echo -e "\nChecking other GraphQL patterns:"
rg "query.*{" ./tests/k6/tests/performancetest_data/graphql-search.js
Length of output: 763
tests/k6/tests/serviceowner/performance/create-dialog.js (2)
4-4: LGTM: Import path aligns with performance test data structure.
The updated import path maintains consistency with other imports and properly separates performance test data.
45-45: Verify the updated dialogToInsert parameter usage across test files.
The addition of the endUser.resource parameter enhances the test coverage. However, we should ensure consistent usage across all test files.
Let's verify the usage of dialogToInsert across the codebase:
✅ Verification successful
Let me gather more information about the function definition since the previous ast-grep pattern didn't yield results.
Let's try one more search pattern to find the function definition:
Let's check the actual function definition in the test data files:
The updated dialogToInsert parameter usage is correct and consistent
The verification shows that:
- The performance test files correctly use both parameters (endUser.ssn, endUser.resource)
- Other test files use the default parameter, which internally sets serviceResource and uses the default SSN, maintaining backward compatibility
- Both implementations (performancetest_data/01-create-dialog.js and testdata/01-create-dialog.js) handle the parameters appropriately
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any inconsistent usage of dialogToInsert function
# Expected: All calls should use both ssn and resource parameters
# Search for dialogToInsert function calls
echo "Checking dialogToInsert usage patterns:"
rg -A 1 "dialogToInsert\(" tests/
# Search for the function definition to verify parameter expectations
echo -e "\nChecking dialogToInsert function definition:"
ast-grep --pattern 'export default function dialogToInsert($_) {
$$$
}'
Length of output: 5870
Script:
#!/bin/bash
# Search for dialogToInsert function definition with different patterns
echo "Searching for dialogToInsert function definition:"
rg -A 5 "function dialogToInsert" tests/
# Search for the file that might export dialogToInsert
echo -e "\nSearching for files that might export dialogToInsert:"
rg "export.*dialogToInsert" tests/
Length of output: 311
Script:
#!/bin/bash
# Search for files containing dialogToInsert
echo "Finding files that might contain dialogToInsert:"
fd dialogToInsert tests/
# Search for any import statements related to dialogToInsert
echo -e "\nSearching for dialogToInsert imports:"
rg "import.*dialogToInsert" tests/
# Search for the function with a broader pattern
echo -e "\nSearching with broader pattern:"
rg -l "dialogToInsert" tests/
Length of output: 3773
Script:
#!/bin/bash
# Check both possible locations of the function definition
echo "Checking performancetest_data definition:"
cat tests/k6/tests/performancetest_data/01-create-dialog.js
echo -e "\nChecking testdata definition:"
cat tests/k6/tests/serviceowner/testdata/01-create-dialog.js
Length of output: 23722
tests/k6/tests/performancetest_data/graphql-search.js (1)
1-70: Consider performance testing best practices.
As this query is used in performance tests, consider:
- Adding directives for query caching/optimization
- Including error scenarios in performance tests
- Documenting expected performance characteristics
Let's verify the query usage in performance tests:
Consider adding GraphQL-specific performance optimizations:
- Add a @cached directive for appropriate fields
- Consider implementing query batching for performance tests (see the sketch after this list)
- Document expected query complexity and performance characteristics in comments
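On the batching point specifically, a hedged sketch of what query batching could look like with k6's built-in http.batch; the endpoint URL and query string below are placeholders, not values from this PR:

import http from 'k6/http';

const baseUrlGraphql = 'https://localhost/dialogporten/graphql'; // placeholder endpoint
const query = '{ __typename }';                                  // placeholder query

export default function () {
    const request = {
        method: 'POST',
        url: baseUrlGraphql,
        body: JSON.stringify({ query }),
        params: { headers: { 'Content-Type': 'application/json' } },
    };
    // http.batch issues the requests concurrently within one iteration,
    // which can reduce per-query overhead in a performance test.
    const responses = http.batch([request, request, request]);
    responses.forEach((r) => {
        if (r.status !== 200) {
            console.error(`batched GraphQL query failed with status ${r.status}`);
        }
    });
}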
.github/workflows/dispatch-k6-performance.yml (4)
29-33: LGTM! Well-structured tag parameter addition.
The new tag parameter is well-defined with appropriate defaults and description, improving test run identification and filtering capabilities.
55-55: LGTM! Improved run name format.
The updated run-name format provides better visibility and traceability by including the tag, VUs count, and duration.
13-13: Verify the environment change impact.
The default environment has been changed from 'staging' to 'yt01', and the environment options have been updated. Please ensure:
- All team members are aware of this change
- CI/CD pipelines and documentation are updated accordingly
- 'yt01' is a valid and accessible environment
#!/bin/bash
# Search for any references to the old environment names in workflow files and docs
echo "Searching for references to 'staging' environment..."
rg -l 'staging' --type yaml --type md
echo "Checking other workflow files for environment dependencies..."
fd -e yml -e yaml . .github/workflows/ --exec grep -l 'environment:'
Also applies to: 18-18
51-51: Verify new test file implementations.
Two new test paths have been added. Please ensure:
- The test files exist and are properly implemented
- They follow the same patterns and best practices as existing tests
Also applies to: 53-53
✅ Verification successful
Let me try a different approach to verify the existence and content of these test files.
Let me check the content of these files to verify their implementation.
Test files are properly implemented and follow existing patterns
Both new test files exist and are correctly implemented following the established patterns:
- create-remove-dialog.js extends the existing create-dialog.js pattern with additional cleanup functionality
- graphql-search.js follows a similar k6 test structure with proper imports, options configuration, and test scenarios
Key verification points:
- Both files have correct k6 imports and exports
- They use shared test utilities and common imports
- Follow consistent error handling and authentication patterns
- Include proper performance metrics and thresholds
- Maintain consistent code structure with other performance tests
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if the new test files exist and verify their content
echo "Checking for new test files..."
for file in "tests/k6/tests/serviceowner/performance/create-remove-dialog.js" "tests/k6/tests/enduser/performance/graphql-search.js"; do
if [ -f "$file" ]; then
echo "Found $file"
# Look for essential k6 imports and exports
rg "import {.*} from 'k6'" "$file"
rg "export (default|const) " "$file"
else
echo "Missing file: $file"
fi
done
Length of output: 1690
Script:
#!/bin/bash
# Check if the files exist
echo "Checking create-remove-dialog.js..."
ls -l tests/k6/tests/serviceowner/performance/create-remove-dialog.js 2>/dev/null
echo -e "\nChecking graphql-search.js..."
ls -l tests/k6/tests/enduser/performance/graphql-search.js 2>/dev/null
# If files exist, check their content
echo -e "\nChecking k6 imports and exports..."
rg -l "import.*from 'k6'" tests/k6/tests/serviceowner/performance/create-remove-dialog.js tests/k6/tests/enduser/performance/graphql-search.js 2>/dev/null
# Look at similar existing k6 test files for pattern comparison
echo -e "\nListing existing k6 test files for reference..."
fd ".*\.js$" tests/k6/tests/ -t f
Length of output: 2391
Script:
#!/bin/bash
# Check content of create-remove-dialog.js
echo "Content of create-remove-dialog.js:"
cat tests/k6/tests/serviceowner/performance/create-remove-dialog.js
echo -e "\nContent of graphql-search.js:"
cat tests/k6/tests/enduser/performance/graphql-search.js
# Check if they follow similar patterns as existing performance tests
echo -e "\nComparing with existing performance test structure:"
cat tests/k6/tests/serviceowner/performance/create-dialog.js
Length of output: 6891
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)
10-17: LGTM! Efficient data loading implementation
The use of SharedArray with CSV parsing is well-implemented and follows the established pattern of letting tests fail fast on empty data rather than adding continuous checks.
tests/k6/common/config.js (1)
19-26: Verify the production GraphQL endpoint URL pattern.
The production GraphQL endpoint URL follows a different pattern than other environments:
- Other environments: .../dialogporten/graphql
- Production: .../dialogporten/api/v1/graphql
This inconsistency might cause issues in production.
✅ Verification successful
Let me gather more specific information about the production endpoint pattern by checking the backend configuration.
Let me check the base URL configuration and routing in the backend code to understand if this pattern is intentional.
Based on the evidence gathered, I can now provide a final response:
The production GraphQL endpoint URL pattern is correct and intentional
The /api/v1/ segment in the production URL is the correct pattern, as it follows API versioning conventions. The other environments (test, staging, etc.) are development/testing environments where the simplified URL pattern is acceptable.
Evidence:
- The backend configuration in appsettings.prod.json shows the base URI is configured as https://platform.altinn.no/dialogporten
- The /api/v1/ segment is consistently used in production for versioned APIs, as seen in other production URLs (e.g., platform.altinn.no/authentication/api/v1/)
- The GraphQL endpoint is properly mapped in the backend code through standard ASP.NET Core routing
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify GraphQL endpoint patterns across the codebase
# Expected: Find any other GraphQL endpoint references to confirm the correct pattern
# Search for GraphQL endpoint patterns
echo "Searching for GraphQL endpoint patterns in the codebase..."
rg -i "graphql" --type js --type ts --type json
Length of output: 3595
Script:
#!/bin/bash
# Check the GraphQL endpoint configuration in backend
cat src/Digdir.Domain.Dialogporten.GraphQL/appsettings.prod.json
Length of output: 2384
Script:
#!/bin/bash
# Search for API routing or endpoint configuration in the backend
rg -g "*.cs" -i "graphql|endpoint.*config" src/Digdir.Domain.Dialogporten.GraphQL/
Length of output: 16283
tests/k6/tests/scripts/generate_tokens.sh (1)
31-31: LGTM! Environment mapping for 'yt01' added.
The addition of the "yt01" environment case aligns with the workflow changes and ensures consistent environment handling across the testing framework.
tests/k6/common/request.js (2)
2-2: LGTM! Import follows established patterns.
The addition of baseUrlGraphql maintains consistency with existing base URL imports.
129-133: Verify GraphQL query usage patterns across tests.
Let's ensure the new GraphQL functionality is being used consistently across test files.
✅ Verification successful
Let me check the actual usage patterns in these files to ensure consistency.
GraphQL implementation is properly integrated and consistently used
The verification shows:
- postGQ is correctly imported and used in the performance test
- The helper is properly exported in testimports.js
- GraphQL endpoints are well-configured across environments in config.js
- No direct GraphQL endpoint usage bypassing the postGQ helper was found
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for GraphQL query usage patterns
echo "Checking GraphQL query usage patterns..."
rg -l "postGQ" tests/k6/
# Check for potential direct GraphQL endpoint usage that should use postGQ
echo "Checking for direct GraphQL endpoint usage..."
rg -l "graphql" tests/k6/ | grep -v "request.js"
Length of output: 453
Script:
#!/bin/bash
# Check the implementation in graphql-search.js
echo "=== GraphQL Search Implementation ==="
rg "postGQ" tests/k6/tests/enduser/performance/graphql-search.js -B 2 -A 2
# Check how it's imported/used in testimports.js
echo -e "\n=== Test Imports Usage ==="
rg "postGQ" tests/k6/common/testimports.js -B 2 -A 2
# Check config.js for any GraphQL endpoint configurations
echo -e "\n=== GraphQL Configuration ==="
rg "graphql" tests/k6/common/config.js -B 2 -A 2
Length of output: 1796
tests/k6/tests/serviceowner/performance/create-dialog-and-search.js (1)
74-79: Fix incorrect access to VUs and iterations in options object
Similar to the createDialogs function, the simpleSearches function accesses options.vus and options.iterations, which are undefined at the global level when multiple scenarios are used. This can cause the condition to fail unexpectedly.
Apply this diff to correct the access:
export function simpleSearches() {
- if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
+ const scenarioOptions = options.scenarios.simple_search;
+ if (
+ (scenarioOptions.vus === undefined || scenarioOptions.vus === 1) &&
+ (scenarioOptions.iterations === undefined || scenarioOptions.iterations === 1)
+ ) {
simpleSearch(endUsers[0]);
} else {
simpleSearch(randomItem(endUsers));
}
}
tests/k6/tests/enduser/performance/simple-search.js (1)
125-130: 🛠️ Refactor suggestion
Add error handling in the getUrl function
The getUrl function currently assumes that all requests will be successful and return valid JSON. For more robust code, include error handling for unexpected responses.
Enhance the function as follows:
export function getUrl(url, paramsWithToken) {
let r = getEU(url, paramsWithToken);
+ if (r.status !== 200) {
+ console.error(`Request to ${url} failed with status ${r.status}`);
+ return;
+ }
expectStatusFor(r).to.equal(200);
expect(r, 'response').to.have.validJsonBody();
return r;
}
This addition logs an error message when the response status is not 200 and prevents further processing on invalid responses.
Likely invalid or redundant comment.
Actionable comments posted: 1
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- tests/k6/common/config.js (2 hunks)
🔇 Additional comments (3)
tests/k6/common/config.js (3)
1-6: LGTM: Base URL constants are well-structured
The base URL constants follow security best practices by using HTTPS and maintain a consistent naming convention.
64-65: Add validation for GraphQL URLs
The previous review comment about missing GraphQL URL validation is still applicable. This validation is important for preventing runtime errors.
8-10: Consider versioning consistency for GraphQL path
While the endUser and serviceOwner paths include version information (api/v1/), the GraphQL path doesn't follow this pattern. Consider aligning the versioning strategy for consistency and future compatibility.
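For illustration, a hedged sketch of what an aligned configuration could look like; the exact shape of config.js and the yt01 host below are assumptions, with only the production URLs taken from the verification above:

// Sketch only: give the GraphQL path the same api/v1 versioning as the REST paths.
const endUserPath = 'dialogporten/api/v1/enduser/';
const graphqlPath = 'dialogporten/api/v1/graphql/'; // versioned, matching the prod pattern

export const baseUrls = {
    yt01: {
        enduser: `https://platform.yt01.altinn.cloud/${endUserPath}`, // host assumed
        graphql: `https://platform.yt01.altinn.cloud/${graphqlPath}`,
    },
    prod: {
        enduser: `https://platform.altinn.no/${endUserPath}`,
        graphql: `https://platform.altinn.no/${graphqlPath}`,
    },
};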
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
tests/k6/tests/serviceowner/performance/create-dialog.js (1)
22-22: LGTM! Consider documenting threshold values.
The standardization of thresholds using getDefaultThresholds improves consistency. However, it would be helpful to document the actual threshold values for better visibility. Consider adding a comment above this line documenting the specific threshold values being applied.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (2)
- tests/k6/tests/serviceowner/performance/create-dialog.js (2 hunks)
- tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/k6/tests/serviceowner/performance/create-remove-dialog.js
🔇 Additional comments (3)
tests/k6/tests/serviceowner/performance/create-dialog.js (3)
6-7: LGTM! Import changes improve organization and reusability.
The changes enhance the test structure by:
- Centralizing threshold management through getDefaultThresholds
- Moving test data to a more appropriately named directory
43-43: Verify test data contains required resource field.
The addition of the endUser.resource parameter needs verification to ensure all test data files contain this field.
#!/bin/bash
# Description: Verify resource field in test data
# Expected: Find resource field in CSV headers
echo "Checking resource field in enduser CSV files..."
head -n 1 tests/k6/tests/performancetest_data/endusers-*.csv
echo "Checking dialogToInsert function signature..."
ast-grep --pattern 'dialogToInsert = function($ssn, $resource)' tests/k6/tests/performancetest_data/01-create-dialog.js
12-18: LGTM! Verify the exported arrays usage.
The export of shared arrays improves modularity. However, let's verify if these exports are being used by other test files.
✅ Verification successful
Let me try a different search strategy to verify the usage of these exported arrays.
Exports are actively used across multiple test files
The exported arrays are indeed being used extensively:
- serviceOwners is used in 3 performance test files
- endUsers is used in 6 test files across performance, enduser, and common modules
- The file itself is imported by 19 different test files
The export of these shared arrays is well-justified as they serve as common test data across multiple test scenarios.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if the exported arrays are used in other test files
# Expected: Find references to these exports in other test files
echo "Checking usage of exported arrays..."
rg -l "from.*create-dialog.*" tests/k6/tests/
Length of output: 618
Script:
#!/bin/bash
# Search for direct references to serviceOwners and endUsers arrays
echo "Searching for serviceOwners usage..."
rg "serviceOwners" tests/k6/tests/ -l
echo -e "\nSearching for endUsers usage..."
rg "endUsers" tests/k6/tests/ -l
# Also check for any imports from this file
echo -e "\nSearching for imports from create-dialog..."
rg "import.*from.*create-dialog" tests/k6/tests/ -l
Length of output: 1955
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
tests/k6/tests/performancetest_data/01-create-dialog.js (2)
1-1: Simplify the import statement.
The import statement can be simplified by removing the unnecessary {default as ...} syntax, since it's already a default import.

-import {default as createDialogPayload} from "../serviceowner/testdata/01-create-dialog.js"
+import createDialogPayload from "../serviceowner/testdata/01-create-dialog.js"
43-54: Enhance URN handling and error handling.
While the input validation is good, there are a few improvements that could be made:
- Use template literals for URN construction
- Extract URN prefixes as constants
- Add error handling for createDialogPayload
+const URN_RESOURCE_PREFIX = 'urn:altinn:resource:';
+const URN_PERSON_PREFIX = 'urn:altinn:person:identifier-no:';
 export default function (endUser, resource) {
     if (!endUser?.match(/^\d{11}$/)) {
         throw new Error('endUser must be a 11-digit number');
     }
     if (!resource?.trim()) {
         throw new Error('resource is required');
     }
-    let payload = createDialogPayload();
-    payload.serviceResource = "urn:altinn:resource:" + resource;
-    payload.party = "urn:altinn:person:identifier-no:" + endUser;
+    let payload;
+    try {
+        payload = createDialogPayload();
+    } catch (error) {
+        throw new Error(`Failed to create dialog payload: ${error.message}`);
+    }
+
+    payload.serviceResource = `${URN_RESOURCE_PREFIX}${resource.trim()}`;
+    payload.party = `${URN_PERSON_PREFIX}${endUser}`;
     return cleanUp(payload);
 }
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
- tests/k6/tests/performancetest_common/common.js (1 hunks)
- tests/k6/tests/performancetest_data/01-create-dialog.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog.js (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/k6/tests/performancetest_common/common.js
- tests/k6/tests/serviceowner/performance/create-dialog.js
🔇 Additional comments (1)
tests/k6/tests/performancetest_data/01-create-dialog.js (1)
5-34: LGTM! Clean implementation with proper error handling.
The cleanUp function is well-implemented with:
- Input validation
- Immutable operations using spread operator
- Proper handling of optional fields using optional chaining
- Clear activity transformation logic
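Purely to illustrate the pattern being praised, a hedged sketch; apart from actorId (whose removal is noted later in this review), the field names are assumptions:

// Sketch of the reviewed shape: validate input, copy immutably with spread,
// use optional chaining for optional fields, and transform activities.
function cleanUp(payload) {
    if (!payload) {
        throw new Error('payload is required');
    }
    return {
        ...payload, // shallow immutable copy instead of mutating the argument
        activities: payload.activities?.map((activity) => ({
            ...activity,
            // strip the sensitive actorId while keeping the rest of performedBy
            performedBy: { ...activity.performedBy, actorId: undefined },
        })) ?? [],
    };
}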
Actionable comments posted: 10
🧹 Outside diff range and nitpick comments (15)
tests/k6/tests/performancetest_common/getDefaultThresholds.js (3)
1-8: Enhance documentation with examples and threshold explanations.
The documentation is well-structured but could be more helpful with:
- Usage examples showing typical counter/label combinations
- Explanation of why empty threshold arrays are used (they make labels visible in k6 reports)
- Description of the default http_req_failed threshold
Here's a suggested improvement:
 /**
  * Creates default thresholds configuration for K6 tests.
+ * Empty threshold arrays are used to make labels visible in k6 reports without enforcing specific thresholds.
  * @param {string[]} counters - Array of counter names
  * @param {string[]} labels - Array of label names
  * @returns {Object} Threshold configuration object
  * @throws {Error} If inputs are invalid
+ *
+ * @example
+ * // Returns thresholds for duration and request counts for 'search' operations
+ * getDefaultThresholds(['http_req_duration', 'http_reqs'], ['search'])
+ *
+ * // The returned object will include:
+ * // - http_req_failed: ['rate<0.01']     // Ensures less than 1% of requests fail
+ * // - http_req_duration{name:search}: [] // Makes duration metrics visible
+ * // - http_reqs{name:search}: []         // Makes request count metrics visible
  */
11-13: Enhance input validation.
While type checking is good, consider adding more thorough validation:
- Check for non-empty arrays
- Validate that array elements are non-empty strings
- Validate counter/label names for k6 syntax compatibility
Here's a suggested improvement:
 export function getDefaultThresholds(counters, labels) {
     if (!Array.isArray(counters) || !Array.isArray(labels)) {
         throw new Error('Both counters and labels must be arrays');
     }
+    if (counters.length === 0 || labels.length === 0) {
+        throw new Error('Both counters and labels must be non-empty arrays');
+    }
+    if (!counters.every(c => typeof c === 'string' && c.trim().length > 0) ||
+        !labels.every(l => typeof l === 'string' && l.trim().length > 0)) {
+        throw new Error('All counters and labels must be non-empty strings');
+    }
14-16: Consider making the failure threshold configurable.
The hardcoded failure rate threshold of 0.01 might not suit all test scenarios. Consider making it configurable:
Here's a suggested improvement:
+const DEFAULT_FAILURE_RATE = 0.01;
+
-export function getDefaultThresholds(counters, labels) {
+export function getDefaultThresholds(counters, labels, failureRate = DEFAULT_FAILURE_RATE) {
     // ... validation code ...
     let thresholds = {
-        http_req_failed: ['rate<0.01']
+        http_req_failed: [`rate<${failureRate}`]
     };

tests/k6/tests/serviceowner/performance/create-dialog.js (1)
Line range hint 14-20: Simplify the VU/iterations condition check.
The current implementation correctly handles different test scenarios, but the condition check could be more concise.
Consider this simplified version:
-    if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
+    if ((options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1) {
         createDialog(serviceOwners[0], endUsers[0]);
     } else {
         createDialog(randomItem(serviceOwners), randomItem(endUsers));
     }

tests/k6/tests/graphql/performance/graphql-search.js (3)
6-9: Consider defining GraphQL-specific performance thresholds.
While using default thresholds is a good start, GraphQL searches might have different performance characteristics compared to other operations. Consider defining specific thresholds based on your performance requirements for GraphQL searches.
Example modification:
 export let options = {
     summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)', 'count'],
-    thresholds: getDefaultThresholds(['http_req_duration', 'http_reqs'],['graphql search'])
+    thresholds: {
+        'http_req_duration{type:graphql search}': ['p(95)<500', 'p(99)<1000'], // Adjust these values
+        'http_reqs{type:graphql search}': ['rate>100'], // Adjust based on your requirements
+    }
 };
11-21: Simplify the execution path condition.
The current condition for determining single vs multi-user mode could be more readable.
Consider this cleaner approach:
 export default function() {
     if (!endUsers || endUsers.length === 0) {
         throw new Error('No end users loaded for testing');
     }
-    if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
+    const isSingleUserMode = (options.vus || 1) === 1 && (options.iterations || 1) === 1;
+    if (isSingleUserMode) {
         graphqlSearch(endUsers[0]);
     } else {
         graphqlSearch(randomItem(endUsers));
     }
 }
23-23: Remove the extra empty lines at the end of the file.
tests/k6/tests/performancetest_common/readTestdata.js (1)
1-2: Consider hosting PapaParse locally.
While using jslib.k6.io is convenient, consider bundling PapaParse with your test code (a sketch follows the list below) to:
- Eliminate external dependencies during test execution
- Ensure consistent test execution in air-gapped environments
- Reduce network-related test variability
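A hedged sketch of the bundled alternative; the ./vendor path is hypothetical:

// Instead of fetching PapaParse from jslib.k6.io at runtime:
// import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';
// vendor the library next to the tests and import it locally.
import papaparse from './vendor/papaparse-5.1.1.js'; // hypothetical local copy
import { SharedArray } from 'k6/data';

export const endUsers = new SharedArray('endUsers', () =>
    papaparse.parse(open('../performancetest_data/endusers-yt01.csv'),
        { header: true, skipEmptyLines: true }).data);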
tests/k6/tests/enduser/performance/enduser-search.js (2)
5-17: Improve readability of operations array.
While the configuration is technically correct, the array formatting could be improved for better readability.
Consider this formatting:
thresholds: getDefaultThresholds(
    ['http_req_duration', 'http_reqs'],
    [
        'enduser search',
        'get dialog',
        'get dialog activities',
        'get dialog activity',
        'get seenlogs',
        'get seenlog',
        'get transmissions',
        'get transmission',
        'get labellog'
    ]
)
5-17: Consider environment-specific thresholds and documentation.
For a search performance test, consider the following improvements:
- Make thresholds configurable based on environment (dev/staging/prod) as search performance can vary significantly
- Add comments documenting expected performance baselines and any known bottlenecks
- Consider adding gradual ramp-up periods for VUs to better simulate real-world load patterns
Example configuration with environment-specific thresholds:
const ENV = __ENV.TEST_ENV || 'dev';
const thresholdConfigs = {
    dev: { p95: 500, p99: 1000 },
    staging: { p95: 300, p99: 600 },
    prod: { p95: 200, p99: 400 }
};
export let options = {
    // ... existing options ...
    scenarios: {
        ramp_up: {
            executor: 'ramping-vus',
            startVUs: 0,
            stages: [
                { duration: '30s', target: 10 },
                { duration: '1m', target: 10 },
                { duration: '30s', target: 0 }
            ]
        }
    }
};
1-6: Consider version pinning for local imports.
While the k6-utils import is properly versioned, the local imports lack version specifications. This could lead to compatibility issues if the imported modules are updated.
Consider using a similar versioning approach for local imports:
-import { enduserSearch } from '../../performancetest_common/simpleSearch.js';
+import { enduserSearch } from '../../performancetest_common/simpleSearch.js?v=1.0.0';
8-39: Add documentation for test configuration.
The configuration lacks comments explaining the purpose of each scenario and the expected performance characteristics.
Consider adding JSDoc comments:
+/**
+ * Test configuration for dialog creation and search performance scenarios.
+ * @property {Object} scenarios - Test scenarios configuration
+ * @property {Object} scenarios.create_dialogs - Creates dialogs with constant VUs
+ * @property {Object} scenarios.simple_search - Performs searches with constant VUs
+ */
 export const options = {
7-13: Enhance JSDoc documentation for better clarity.
The function documentation could be more comprehensive. Consider adding:
- Return value documentation
- More specific parameter type information
- Example usage
 /**
  * Creates a dialog.
+ *
+ * @param {Object} serviceOwner - The service owner object containing token information.
+ * @param {string} serviceOwner.token - The authentication token for the service owner.
+ * @param {Object} endUser - The end user object containing identification information.
+ * @param {string} endUser.ssn - The social security number of the end user.
+ * @param {string} endUser.resource - The resource identifier for the end user.
+ * @returns {void}
+ *
+ * @example
+ * createDialog({
+ *     token: 'bearer-token'
+ * }, {
+ *     ssn: '12345678901',
+ *     resource: 'resource-id'
+ * });
  */

.github/workflows/dispatch-k6-performance.yml (1)
29-33: Enhance tag parameter configuration
The tag parameter configuration has a few potential improvements:
- The parameter is marked as required but has a default value, which is contradictory
- The description could better explain the tag's purpose and format expectations
 tag:
-  description: 'tag the performance test'
+  description: 'Label to identify the performance test run (e.g., "API-v1-Search", "Baseline")'
   required: true
-  default: 'Performance test'
   type: string
16-28: Add error handling in the retrieveDialogContent function
The retrieveDialogContent function may not handle cases where items[0] does not have an id, or when subsequent requests fail. This could lead to unhandled exceptions. Consider adding additional checks and error handling to ensure robustness.
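One possible shape for those guards, as a hedged sketch; the follow-up getContent/getContentChain calls discussed later in this review are elided:

// Sketch: defensive checks before fanning out per-dialog requests.
export function retrieveDialogContent(response, paramsWithToken) {
    const items = response.json().items;
    if (!Array.isArray(items) || items.length === 0) {
        console.error('search returned no items; skipping content retrieval');
        return;
    }
    const dialogId = items[0].id;
    if (!dialogId) {
        console.error('first search item has no id; skipping content retrieval');
        return;
    }
    // ...continue with the getContent/getContentChain calls shown further down.
}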
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (11)
- .github/workflows/dispatch-k6-performance.yml (3 hunks)
- tests/k6/tests/enduser/performance/enduser-search.js (1 hunks)
- tests/k6/tests/enduser/performance/simple-search.js (0 hunks)
- tests/k6/tests/graphql/performance/graphql-search.js (1 hunks)
- tests/k6/tests/performancetest_common/createDialog.js (1 hunks)
- tests/k6/tests/performancetest_common/getDefaultThresholds.js (1 hunks)
- tests/k6/tests/performancetest_common/readTestdata.js (1 hunks)
- tests/k6/tests/performancetest_common/simpleSearch.js (1 hunks)
- tests/k6/tests/scenarios/performance/create-dialog-and-search.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
💤 Files with no reviewable changes (1)
- tests/k6/tests/enduser/performance/simple-search.js
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/k6/tests/serviceowner/performance/create-remove-dialog.js
🧰 Additional context used
📓 Learnings (2)
tests/k6/tests/graphql/performance/graphql-search.js (2)
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/enduser/performance/graphql-search.js:37-40
Timestamp: 2024-11-04T13:50:00.958Z
Learning: In `tests/k6/tests/enduser/performance/graphql-search.js`, the team prefers to validate only the HTTP status and JSON format of GraphQL responses, without adding specific validations of response fields.
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/performancetest_data/graphql-search.js:3-4
Timestamp: 2024-11-04T12:58:36.855Z
Learning: In `tests/k6/tests/performancetest_data/graphql-search.js`, direct string interpolation in GraphQL queries is acceptable, and hardcoded URN formats are preferred for the performance tests.
tests/k6/tests/performancetest_common/getDefaultThresholds.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/enduser/performance/simple-search.js:11-20
Timestamp: 2024-11-04T07:06:39.379Z
Learning: In `tests/k6/tests/performancetest_common/common.js`, the function `getDefaultThresholds` may intentionally create empty threshold arrays to make labels visible in k6 reports.
🔇 Additional comments (16)
tests/k6/tests/performancetest_common/getDefaultThresholds.js (1)
17-21: LGTM! The empty threshold arrays are intentional.
The empty threshold arrays for counter-label combinations are correctly implemented. As confirmed by the learning note, this makes the labels visible in k6 reports without enforcing specific thresholds.
tests/k6/tests/serviceowner/performance/create-dialog.js (3)
4-6: LGTM! Good modularization of test utilities.
The refactoring improves code organization by:
- Centralizing threshold configuration
- Standardizing dialog creation
- Consolidating test data management
5-5: Verify the imported createDialog implementation.
Ensure the imported createDialog from performancetest_common maintains the same functionality as the removed implementation.
#!/bin/bash
# Description: Compare old and new createDialog implementations
# Expected: Similar functionality with improved structure
# Check the new implementation
ast-grep --pattern 'export function createDialog($_) {
$$$
}'
# Look for any direct usage of the old implementation
rg -B 5 "function createDialog" "tests/k6/tests/"
10-10: Verify threshold configuration parameters.
The threshold configuration uses standard metrics but should be validated:
- 'http_req_duration': Ensures response time thresholds
- 'http_reqs': Monitors request throughput
- 'create dialog': Specific to this operation
✅ Verification successful
Threshold configuration is correctly implemented and consistent
The threshold configuration is properly implemented using the standardized getDefaultThresholds utility:
- The function is defined in tests/k6/tests/performancetest_common/getDefaultThresholds.js and used consistently across all performance tests
- It sets an appropriate default failure threshold for http_req_failed at rate < 1%
- The metrics (http_req_duration, http_reqs) and label (create dialog) match the pattern used in other similar tests
- The implementation follows the same configuration pattern as other performance tests like create-remove-dialog, graphql-search, and enduser-search
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify threshold configuration consistency across performance tests
# Expected: Similar threshold patterns for similar operations
# Check threshold configuration patterns across performance tests
rg -A 3 "getDefaultThresholds" "tests/k6/tests/"
# Verify the implementation of getDefaultThresholds
ast-grep --pattern 'export function getDefaultThresholds($_) {
$$$
}'
Length of output: 1217
Script:
#!/bin/bash
# Let's try a different approach to find the threshold configuration
# Search for the getDefaultThresholds implementation
rg -l "getDefaultThresholds" tests/k6/
# Look for the file content where getDefaultThresholds is defined
rg -A 10 "function getDefaultThresholds" tests/k6/
# Check other performance test configurations for comparison
rg "thresholds:" tests/k6/tests/ -A 2
Length of output: 3038
tests/k6/tests/graphql/performance/graphql-search.js (2)
1-4: LGTM! Well-structured imports with version pinning.
The imports are well-organized, with version-pinned CDN imports for third-party modules and clear relative paths for local modules.
4-4: Verify the GraphQL search implementation.
Let's ensure the imported graphqlSearch function is properly implemented.
✅ Verification successful
Let me verify the implementation of the helper functions used within graphqlSearch.
GraphQL search implementation is properly integrated and implemented
The verification shows a well-structured implementation:
- graphqlSearch properly handles authentication with a Bearer token
- postGQ correctly manages GraphQL requests with the proper content type
- getGraphqlParty contains a comprehensive GraphQL query for dialog search with proper fields
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the implementation of graphqlSearch function
# Expected: Find the implementation with proper error handling and GraphQL query execution
# Search for the graphqlSearch function implementation
ast-grep --pattern 'export function graphqlSearch($_) {
$$$
}'
# Backup: Search using ripgrep if the function is not exported
rg -A 10 'function graphqlSearch'
Length of output: 2302
Script:
#!/bin/bash
# Search for postGQ and getGraphqlParty functions
ast-grep --pattern 'function postGQ($_) {
$$$
}'
ast-grep --pattern 'function getGraphqlParty($_) {
$$$
}'
# Backup search with ripgrep
rg -A 5 'function (postGQ|getGraphqlParty)'
Length of output: 7048
tests/k6/tests/performancetest_common/readTestdata.js (2)
4-7: 🛠️ Refactor suggestion
Improve path handling and add error handling.
The current implementation has several potential issues:
- Relative path could break if tests are run from different directories
- No explicit error handling for missing files
- The .serviceowners-with-tokens.csv file appears to contain sensitive data
Consider these improvements:
-const filenameServiceowners = '../performancetest_data/.serviceowners-with-tokens.csv';
+const BASE_PATH = __ENV.TEST_DATA_PATH || '../performancetest_data';
+const filenameServiceowners = `${BASE_PATH}/.serviceowners-with-tokens.csv`;
export const serviceOwners = new SharedArray('serviceOwners', function () {
+ if (!open(filenameServiceowners)) {
+ throw new Error(`Service owners data file not found: ${filenameServiceowners}`);
+ }
return papaparse.parse(open(filenameServiceowners), { header: true, skipEmptyLines: true }).data;
});
Also, ensure that:
- The CSV file containing tokens is properly secured
- Tokens in the file are rotated regularly
- The file is excluded from version control
17-20: 🛠️ Refactor suggestion
Add data validation and security measures for token data.
The end users with tokens data should be validated and properly secured.
Add validation and use the refactored loadCsvData function:
-const filenameEndusersWithTokens = '../performancetest_data/.endusers-with-tokens.csv';
-export const endUsersWithTokens = new SharedArray('endUsersWithTokens', function () {
- return papaparse.parse(open(filenameEndusersWithTokens), { header: true, skipEmptyLines: true }).data;
-});
+const filenameEndusersWithTokens = `${BASE_PATH}/.endusers-with-tokens.csv`;
+export const endUsersWithTokens = loadCsvData(filenameEndusersWithTokens, 'endUsersWithTokens');
+
+// Validate that required fields are present
+endUsersWithTokens.forEach((user, index) => {
+ if (!user.token || !user.id) {
+ throw new Error(`Invalid user data at index ${index}: missing required fields`);
+ }
+});
Let's verify the security of token files:
tests/k6/tests/scenarios/performance/create-dialog-and-search.js (1)
26-37: Review and clean up threshold operations list.
The thresholds array includes operations like 'get labellog', 'get seenlogs', etc., which don't appear to be used in this test file. This could make maintenance more difficult and potentially mask performance issues.
tests/k6/tests/performancetest_common/createDialog.js (1)
5-5: Verify test data generation implementation.
Let's verify the implementation of dialogToInsert to ensure consistent test data generation.
✅ Verification successful
Let me try a different approach to locate and verify the implementation.
Let me check the test data generation source that's being imported.
Test data generation implementation is consistent and well-structured
The implementation follows a clean pattern where:
- Base test data is generated in serviceowner/testdata/01-create-dialog.js
- Performance test data in performancetest_data/01-create-dialog.js extends it with proper cleanup and validation
- The implementation ensures consistent test data generation with:
- Input validation for end user and resource
- Proper cleanup of sensitive data (e.g., actorId)
- Consistent structure for dialog payload
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for dialogToInsert implementation
echo "Searching for dialogToInsert implementation..."
rg -A 10 "dialogToInsert.*=.*function" tests/k6/tests/performancetest_data/
# Verify test data structure
echo "Verifying test data structure..."
ast-grep --pattern 'export default function dialogToInsert($_) { $$$ }'
Length of output: 329
Script:
#!/bin/bash
# Check the content of the data file
echo "Checking content of create-dialog.js..."
cat tests/k6/tests/performancetest_data/01-create-dialog.js
# Search for any usage patterns
echo -e "\nSearching for usage patterns..."
rg -l "dialogToInsert" tests/k6/tests/
Length of output: 2782
Script:
#!/bin/bash
# Check the source test data implementation
echo "Checking source test data implementation..."
cat tests/k6/tests/serviceowner/testdata/01-create-dialog.js
# Check one usage example
echo -e "\nChecking a usage example..."
cat tests/k6/tests/performancetest_common/createDialog.js
Length of output: 13852
.github/workflows/dispatch-k6-performance.yml (3)
55-55: LGTM! Improved run name format
The new run name format including tag, VUs, and duration improves test run identification and observability.
37-37: Verify the impact of reduced default VUs
Changing the default from 10 to 1 virtual user significantly affects the performance test characteristics. While this might be suitable for initial testing, it may not provide realistic performance metrics under normal load conditions.
#!/bin/bash
# Description: Check VU usage in existing test files
# Test: Look for VU-related configurations in k6 tests
echo "Checking VU configurations in k6 test files..."
rg -B2 -A2 "vus:|scenarios:|stages:" "tests/k6/tests/"
51-53: Verify new test file implementations
The addition of new test paths is aligned with the PR objectives. Let's verify that these files exist and are properly implemented.
#!/bin/bash
# Description: Verify existence and implementation of new test files
# Test 1: Check if files exist
echo "Checking existence of new test files..."
for file in "tests/k6/tests/serviceowner/performance/create-remove-dialog.js" \
"tests/k6/tests/enduser/performance/enduser-search.js" \
"tests/k6/tests/graphql/performance/graphql-search.js"; do
if [ -f "$file" ]; then
echo "Found: $file"
else
echo "Missing: $file"
fi
done
# Test 2: Check for k6 test structure in these files
echo "Checking k6 test structure..."
rg "export default function|export function setup" "tests/k6/tests/"
tests/k6/tests/performancetest_common/simpleSearch.js (3)
84-85: Verify that 'json' is an array before checking its length
In the getContentChain function, on lines 84-85, you retrieve the JSON response and check if it has a length:
let json = d.json();
if (json.length > 0) { ... }
Ensure that json is an array to prevent runtime errors if it's an object. Consider adding a check to confirm json is an array:
-let json = d.json();
+let json = d.json();
+if (Array.isArray(json) && json.length > 0) { ... }
38-38: Confirm validity of the end user's token in the Authorization header
On line 38, the Authorization header is set:
Authorization: "Bearer " + enduser.token,
Ensure that enduser.token contains a valid and non-expired token to authenticate requests properly.
Consider adding token validation before making the request.
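If such a check is wanted, a hedged sketch that inspects the JWT's exp claim with k6's encoding module; it assumes the token is a standard three-part JWT and does not verify the signature:

import encoding from 'k6/encoding';

// Returns true when the token parses as a JWT whose exp claim lies in the future.
function tokenLooksValid(token) {
    if (!token) return false;
    const parts = token.split('.');
    if (parts.length !== 3) return false;
    try {
        const payload = JSON.parse(encoding.b64decode(parts[1], 'rawurl', 's'));
        return typeof payload.exp === 'number' && payload.exp * 1000 > Date.now();
    } catch (e) {
        return false;
    }
}

A guard like if (!tokenLooksValid(enduser.token)) could then skip or fail the iteration before the request is sent.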
43-43: Handle PII securely when constructing 'defaultParty'
On line 43, you're constructing the defaultParty string using the end user's SSN:
let defaultParty = "urn:altinn:person:identifier-no:" + enduser.ssn;
Ensure that the SSN is handled securely and is not logged or exposed in any way to prevent PII leakage.
Please confirm that this complies with data privacy policies.
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (6)
tests/k6/tests/serviceowner/performance/create-dialog.js (2)
1-8: Enhance test documentation with more details.
While the basic run command is documented, it would be helpful to add:
- Description of different test scenarios (single vs multiple iterations)
- Available parameters and their impact
- Expected outcomes or performance targets
Add this documentation above the imports:
 /**
  * Performance test for creating a dialog
  * Run: k6 run tests/k6/tests/serviceowner/performance/create-dialog.js --vus 1 --iterations 1
+ *
+ * Parameters:
+ * - vus: Number of virtual users (default: 1)
+ * - iterations: Number of iterations per VU (default: 1)
+ *
+ * Scenarios:
+ * 1. Single iteration (vus=1, iterations=1):
+ *    Uses fixed serviceOwner[0] and endUser[0] for consistent baseline testing
+ * 2. Multiple iterations:
+ *    Randomly selects serviceOwner and endUser for load testing
+ *
+ * Performance targets:
+ * - See thresholds in getDefaultThresholds for specific metrics
  */
11-12: Improve clarity of threshold configuration parameters.
The parameters passed to getDefaultThresholds could be more descriptive to better convey their purpose. Consider this refactoring:
-    thresholds: getDefaultThresholds(['http_req_duration', 'http_reqs'],['create dialog'])
+    thresholds: getDefaultThresholds(
+        metricsToTrack = ['http_req_duration', 'http_reqs'],
+        operationNames = ['create dialog']
+    )

tests/k6/tests/serviceowner/performance/create-remove-dialog.js (2)
1-4: Enhance test documentation with more details.
Consider adding more information to the documentation:
- Purpose and scope of the test
- Description of test scenarios
- Available parameters and their impact
- Expected outcomes and thresholds
 /**
  * Performance test for creating and removing a dialog
+ *
+ * Purpose:
+ * Validates performance of dialog creation and removal operations
+ *
+ * Parameters:
+ * --vus: Number of virtual users (default: 1)
+ * --iterations: Number of test iterations (default: 1)
+ *
+ * Scenarios:
+ * - Single user: Uses first service owner and end user
+ * - Multiple users: Randomly selects service owners and end users
+ *
  * Run: k6 run tests/k6/tests/serviceowner/performance/create-remove-dialog.js --vus 1 --iterations 1
  */
18-25: Simplify conditions and add array access validation.
While the logic is correct, consider these improvements:
- Simplify the single user condition
- Add validation for array access
 export default function() {
+    if (!serviceOwners.length || !endUsers.length) {
+        throw new Error('No test data available');
+    }
+
-    if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
+    const isSingleUserMode = (options.vus || 1) === 1 && (options.iterations || 1) === 1;
+    if (isSingleUserMode) {
         createAndRemoveDialog(serviceOwners[0], endUsers[0]);
     } else {
         createAndRemoveDialog(randomItem(serviceOwners), randomItem(endUsers));
     }
 }

tests/k6/tests/performancetest_common/readTestdata.js (1)
tests/k6/tests/performancetest_common/readTestdata.js (1)

`9-10`: Consider local dependencies for performance testing reliability. Using a CDN-hosted dependency (PapaParse) in performance tests could introduce network-related variability. Additionally, the k6 import lacks version pinning.

Consider:

- Bundling PapaParse locally with your test code (see the sketch below)
- Pinning the k6 data module version: `"k6/data@1.x"`
tests/k6/tests/performancetest_common/simpleSearch.js (1)
`20-32`: Consider optimizing multiple sequential API calls. The function makes several sequential API calls, which could impact performance test results. Consider:
- Implementing parallel requests using Promise.all() where order doesn't matter
- Adding error handling for individual API calls
- Adding timing metrics for each type of request
Example optimization:
```js
async function retrieveDialogContent(response, paramsWithToken) {
    const items = response.json().items;
    if (!items?.length) return;

    const dialogId = items[0].id;
    if (!dialogId) return;

    const requests = [
        getContent(dialogId, paramsWithToken, 'get dialog'),
        getContentChain(dialogId, paramsWithToken, 'get dialog activities', 'get dialog activity', '/activities/'),
        getContentChain(dialogId, paramsWithToken, 'get seenlogs', 'get seenlog', '/seenlog/'),
        getContent(dialogId, paramsWithToken, 'get labellog', '/labellog'),
        getContentChain(dialogId, paramsWithToken, 'get transmissions', 'get transmission', '/transmissions/')
    ];

    try {
        await Promise.all(requests);
    } catch (error) {
        console.error(`Failed to retrieve dialog content: ${error.message}`);
    }
}
```
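One caveat on this optimization: plain k6 scripts aren't Promise-based, so wrapping the synchronous `k6/http` calls in `Promise.all` won't actually run them concurrently. The idiomatic k6 primitive for parallel requests is `http.batch`. A sketch, with the URL paths inferred from the tag names above (the exact routes are assumptions):

```js
import http from 'k6/http';

function retrieveDialogContentBatched(baseUrl, dialogId, paramsWithToken) {
    // http.batch fires the requests in parallel and returns responses in order.
    const responses = http.batch([
        ['GET', `${baseUrl}/dialogs/${dialogId}`, null, paramsWithToken],
        ['GET', `${baseUrl}/dialogs/${dialogId}/activities/`, null, paramsWithToken],
        ['GET', `${baseUrl}/dialogs/${dialogId}/seenlog/`, null, paramsWithToken],
        ['GET', `${baseUrl}/dialogs/${dialogId}/labellog`, null, paramsWithToken],
        ['GET', `${baseUrl}/dialogs/${dialogId}/transmissions/`, null, paramsWithToken],
    ]);
    for (const res of responses) {
        if (res.status !== 200) {
            console.error(`dialog content request failed: ${res.status} ${res.url}`);
        }
    }
    return responses;
}
```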
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (8)
- tests/k6/tests/graphql/performance/graphql-search.js (1 hunks)
- tests/k6/tests/performancetest_common/createDialog.js (1 hunks)
- tests/k6/tests/performancetest_common/getDefaultThresholds.js (1 hunks)
- tests/k6/tests/performancetest_common/readTestdata.js (1 hunks)
- tests/k6/tests/performancetest_common/simpleSearch.js (1 hunks)
- tests/k6/tests/scenarios/performance/create-dialog-and-search.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- tests/k6/tests/graphql/performance/graphql-search.js
- tests/k6/tests/performancetest_common/createDialog.js
- tests/k6/tests/performancetest_common/getDefaultThresholds.js
- tests/k6/tests/scenarios/performance/create-dialog-and-search.js
🧰 Additional context used
📓 Learnings (1)
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/serviceowner/performance/create-remove-dialog.js:26-50
Timestamp: 2024-11-04T13:04:34.896Z
Learning: In `tests/k6/tests/serviceowner/performance/create-remove-dialog.js`, the team has decided not to add a teardown function using k6's `teardown` functionality for cleanup in case of test failures.
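For context on what was rejected: k6's `teardown` runs once after the test and only receives what `setup` returned — per-VU state, such as IDs of dialogs created during iterations, never reaches it, which makes this kind of cleanup awkward. A minimal sketch of the lifecycle, with the cleanup call left hypothetical:

```js
// Minimal shape of k6's setup/teardown lifecycle hooks.
export function setup() {
    // Runs once; whatever is returned here is passed to default() and teardown().
    return { startedAt: new Date().toISOString() };
}

export default function (data) {
    // Test body: creates dialogs, but their IDs never reach teardown().
}

export function teardown(data) {
    console.log(`run started at ${data.startedAt}, cleaning up`);
    // purgeLeftoverDialogs(); // hypothetical cleanup helper
}
```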
🔇 Additional comments (5)
tests/k6/tests/serviceowner/performance/create-dialog.js (1)
Line range hint `14-21`: Add data validation and simplify conditions.
The test execution logic could benefit from some improvements:
- Validate that serviceOwners and endUsers arrays are not empty
- Simplify the condition check for single iteration mode
Consider this refactoring:
```diff
 export default function() {
+    if (!serviceOwners.length || !endUsers.length) {
+        throw new Error('Test data arrays cannot be empty');
+    }
-    if ((options.vus === undefined || options.vus === 1) && (options.iterations === undefined || options.iterations === 1)) {
+    const isSingleIteration = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1;
+    if (isSingleIteration) {
         createDialog(serviceOwners[0], endUsers[0]);
     }
     else {
         createDialog(randomItem(serviceOwners), randomItem(endUsers));
     }
 }
```
Let's verify the test data loading:
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)
`10-16`: Well-structured test configuration!
The configuration demonstrates good practices:
- Comprehensive summary statistics including critical percentiles
- Reusable threshold configuration through getDefaultThresholds
- Clear separation of concerns between stats and thresholds
tests/k6/tests/performancetest_common/readTestdata.js (1)
`1-52`: Verify data usage in performance tests.
Let's verify how this test data is being used in the performance test scripts.
✅ Verification successful
Shared test data arrays are properly integrated in performance tests
The verification shows that the shared test data arrays are correctly imported and utilized across multiple performance test scenarios:
- All test files properly import the required data arrays from `readTestdata.js`
- Each scenario appropriately uses the data for its specific test case:
  - Create/remove dialog tests use `serviceOwners` and `endUsers`
  - Search tests use `endUsersWithTokens`
- All tests handle both single-user and multi-user scenarios correctly
No issues found with the test data implementation or its usage patterns.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check usage of the exported data arrays in performance test files
echo "Checking usage of exported data arrays in performance tests..."
rg -l "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/

echo "\nChecking for any hardcoded test data that should use these shared arrays..."
rg -l "\.csv" tests/k6/tests/ | grep -v "readTestdata.js"
```
Length of output: 813
Script:
```shell
#!/bin/bash
# Let's check how these shared arrays are actually used in the test files
echo "Checking usage in create-dialog-and-search.js..."
rg -A 5 "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/scenarios/performance/create-dialog-and-search.js

echo "\nChecking usage in enduser-search.js..."
rg -A 5 "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/enduser/performance/enduser-search.js

echo "\nChecking usage in create-remove-dialog.js..."
rg -A 5 "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/serviceowner/performance/create-remove-dialog.js

echo "\nChecking usage in create-dialog.js..."
rg -A 5 "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/serviceowner/performance/create-dialog.js

echo "\nChecking usage in graphql-search.js..."
rg -A 5 "serviceOwners|endUsers|endUsersWithTokens" tests/k6/tests/graphql/performance/graphql-search.js
```
Length of output: 3429
tests/k6/tests/performancetest_common/simpleSearch.js (2)
`1-11`: LGTM! Imports are well-organized and provide necessary utilities.
The imports provide a good foundation for testing utilities, including test assertions, request helpers, and UUID generation.
`117-130`: 🛠️ Refactor suggestion
Add GraphQL-specific error handling and response validation.
The GraphQL search implementation should handle GraphQL-specific errors and validate the response structure.
```js
export function graphqlSearch(enduser) {
    if (!enduser?.token) {
        throw new Error('Invalid enduser object: missing token');
    }

    let paramsWithToken = {
        headers: {
            Authorization: "Bearer " + enduser.token,
            traceparent: uuidv4()
        },
        tags: { name: 'graphql search' }
    };

    describe('Perform graphql dialog list', () => {
        let r = postGQ(getGraphqlParty(enduser.ssn), paramsWithToken);
        expectStatusFor(r).to.equal(200);
        expect(r, 'response').to.have.validJsonBody();

        // Validate GraphQL response
        const body = r.json();
        expect(body, 'graphql response').to.not.have.property('errors');
        expect(body, 'graphql data').to.have.property('data');
    });
}
```
⛔ Skipped due to learnings
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/enduser/performance/graphql-search.js:37-40
Timestamp: 2024-11-04T13:50:00.958Z
Learning: In `tests/k6/tests/enduser/performance/graphql-search.js`, the team prefers to validate only the HTTP status and JSON format of GraphQL responses, without adding specific validations of response fields.
Quality Gate passed
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (1)
tests/k6/tests/performancetest_common/simpleSearch.js (1)
`1-129`: Consider implementing a metrics collection strategy. The file implements various search operations but lacks a consistent strategy for performance metrics collection. Consider:
- Creating a centralized metrics collection utility
- Defining standard performance metrics across all operations
- Implementing proper error tracking and reporting
Would you like assistance in designing a metrics collection strategy that aligns with k6's best practices?
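If this is pursued, k6's custom metrics from `k6/metrics` are the usual building block. A minimal sketch of a centralized helper — the module path and metric names here are illustrative, not existing code:

```js
import { Trend, Counter } from 'k6/metrics';

// Hypothetical shared module, e.g. performancetest_common/metrics.js
const searchDuration = new Trend('search_duration', true); // true = time-based metric
const searchErrors = new Counter('search_errors');

// Record timing and error counts for any search-style request, tagged by operation.
export function trackSearch(response, operation) {
    searchDuration.add(response.timings.duration, { name: operation });
    if (response.status !== 200) {
        searchErrors.add(1, { name: operation });
    }
}
```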
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- tests/k6/tests/performancetest_common/simpleSearch.js (1 hunks)
🔇 Additional comments (1)
tests/k6/tests/performancetest_common/simpleSearch.js (1)
`1-10`: LGTM! Well-structured imports and documentation.
The imports are well-organized and the k6-utils version is consistently set to 1.4.0 as previously suggested.
Description
Added more performance tests and made some improvements to the GitHub workflow for performance tests.
Related Issue(s)
Verification
Documentation

- Documentation is updated (either in the `docs`-directory, Altinnpedia or a separate linked PR in altinn-studio-docs, if applicable)

Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Documentation