test(load-test): Performance tests for create dialog and search #1331
Conversation
…er and enduser so far
📝 Walkthrough
This pull request introduces several new GitHub Actions workflows for K6 performance testing.
Changes
Possibly related issues
Possibly related PRs
Suggested reviewers
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (11)
.github/workflows/performance-workflows/workflow-run-k6-performance.yml (2)
5-23: LGTM: Well-defined inputs for K6 performance tests. The inputs cover essential parameters for configuring K6 performance tests, and marking them as required ensures all necessary information is provided.
Remove trailing spaces from lines 8 and 23 to improve code cleanliness:

```diff
-      type: string 
+      type: string
-      type: string 
+      type: string
```

🧰 Tools
🪛 yamllint
[error] 8-8: trailing spaces
(trailing-spaces)
[error] 23-23: trailing spaces
(trailing-spaces)
40-55: LGTM: Well-structured job steps for running K6 tests. The job steps are logically ordered and cover all necessary actions for running K6 tests. The use of latest action versions and proper environment variable setup is commendable.
Remove trailing spaces from lines 47 and 53 to improve code cleanliness:

```diff
-          ./tests/k6/tests/scripts/generate_tokens.sh ./tests/k6/tests/performancetest_data ${{ inputs.tokens }} 
+          ./tests/k6/tests/scripts/generate_tokens.sh ./tests/k6/tests/performancetest_data ${{ inputs.tokens }}
-          TOKEN_GENERATOR_PASSWORD: ${{ secrets.TOKEN_GENERATOR_PASSWORD }} 
+          TOKEN_GENERATOR_PASSWORD: ${{ secrets.TOKEN_GENERATOR_PASSWORD }}
```

🧰 Tools
🪛 yamllint
[error] 47-47: trailing spaces
(trailing-spaces)
[error] 53-53: trailing spaces
(trailing-spaces)
.github/workflows/dispatch-k6-performance.yml (1)
5-42: Input parameters are well-defined, but consider these minor improvements:
The description for the 'tokens' input could be clearer. Consider rewording it to something like: "Tokens to generate: for create dialog, search, both, or none".
- For the 'vus' (Virtual Users) input, consider capping the value to prevent accidental overload of the system. Note that `workflow_dispatch` inputs do not support a `maximum` key, so the cap has to be enforced in a separate validation step; the input definition itself would stay:

```yaml
vus:
  description: 'Number of Virtual Users (VUs)'
  required: true
  type: number
  default: 10
```
- For the 'duration' input, consider ensuring only valid time formats (e.g. `30s`, `1m`, `10m`, matching `^(\d+[smh])+$`) are accepted. Note that `workflow_dispatch` inputs do not support a `pattern` key, so this check also belongs in a validation step:

```yaml
duration:
  description: 'Duration of test (e.g., 30s, 1m, 10m)'
  required: true
  default: 1m
  type: string
```

These changes would enhance input validation and clarity.
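Since `workflow_dispatch` inputs cannot enforce ranges or regex patterns themselves, a separate step at the top of the job can fail the run early. A minimal sketch (the function name and the VU cap of 100 are illustrative, not part of this PR):

```shell
# Hypothetical validation step body for a GitHub Actions job.
# Rejects a duration that is not <number><s|m|h> groups, and caps VUs at 100.
validate_inputs() {
  local duration="$1" vus="$2"
  if ! [[ "$duration" =~ ^([0-9]+[smh])+$ ]]; then
    echo "Invalid duration: $duration" >&2
    return 1
  fi
  if (( vus < 1 || vus > 100 )); then
    echo "VUs out of range (1-100): $vus" >&2
    return 1
  fi
  echo "inputs ok"
}

validate_inputs "1m30s" 10
```

In the workflow this would run as the first step, with `${{ inputs.duration }}` and `${{ inputs.vus }}` passed as arguments.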
tests/k6/common/request.js (1)
36-43: Approved with suggestions for improvement. The changes to the `getEnduserRequestParams` function improve its robustness and flexibility. However, there are a couple of points to consider:

- The use of `params.Headers` (capital H) is inconsistent with the rest of the code, which uses `params.headers` (lowercase h). Consider changing this for consistency.
- It would be beneficial to add comments explaining the purpose of these changes, especially the logic for handling the overridden Authorization header.

Consider applying the following changes:

```diff
 function getEnduserRequestParams(params = null, tokenOptions = null) {
     params = params || {};
-    const headers = params.Headers || {};
+    const headers = params.headers || {}; // Changed to lowercase 'headers' for consistency
+    // Check if Authorization header is overridden in the params
     const hasOverridenAuthorizationHeader = headers.Authorization !== undefined;
     let defaultParams = {
         headers: {
             'Accept': 'application/json',
             'User-Agent': 'dialogporten-k6',
+            // Use overridden Authorization if provided, otherwise generate a new token
             'Authorization': hasOverridenAuthorizationHeader ? headers.Authorization : 'Bearer ' + getEnduserTokenFromGenerator(tokenOptions)
         }
     }
```

.gitignore (2)
382-382: Good security practice for test data. Ignoring `**/.endusers-with-tokens.csv` is a good practice to prevent committing sensitive test data. The `**` ensures this file is ignored in any directory. Consider adding a comment above this line to explain its purpose, e.g.:

```gitignore
# Ignore files containing test tokens
```

This will help other developers understand why these files are being ignored.
383-383: Good security practice for test data. Ignoring `**/.serviceowners-with-tokens.csv` is consistent with the previous entry and maintains good security practices for test data. Consider grouping this with the previous entry under a single comment:

```gitignore
# Ignore files containing test tokens
**/.endusers-with-tokens.csv
**/.serviceowners-with-tokens.csv
```
This improves readability and maintainability of the .gitignore file.
tests/k6/tests/serviceowner/testdata/01-create-dialog.js (1)
4-4: Approve the function signature change with a minor suggestion. The updated function signature enhances flexibility by allowing an optional `endUser` parameter. This change is well-implemented and maintains backward compatibility. Consider adding a brief comment explaining the purpose of the `endUser` parameter for improved code clarity. You could add a comment like this:

```javascript
/**
 * Generates test data for creating a dialog.
 * @param {string} [endUser] - The end user's SSN. If not provided, uses the default end user SSN.
 * @returns {Object} The dialog creation test data.
 */
export default function (endUser = getDefaultEnduserSsn()) {
    // ... rest of the function
}
```

Also applies to: 7-7
tests/k6/tests/enduser/performance/simple-search.js (1)
8-10: Add error handling for CSV parsing and file access. Currently, the script assumes that the CSV file exists and is correctly formatted. Consider adding error handling to manage cases where the file is missing or contains malformed data, to prevent runtime errors.
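As a sketch of that defensive parsing (plain JavaScript rather than k6-specific code; the `parseEndusers` helper and the `ssn,token` column layout are assumptions, not code from this PR):

```javascript
// Hypothetical helper: parse "ssn,token" CSV rows defensively,
// skipping malformed lines instead of throwing mid-iteration.
function parseEndusers(csvText) {
  if (typeof csvText !== 'string' || csvText.trim() === '') {
    throw new Error('End-user CSV is missing or empty');
  }
  const users = [];
  for (const line of csvText.split('\n')) {
    const trimmed = line.trim();
    if (trimmed === '') continue; // skip blank lines
    const [ssn, token] = trimmed.split(',');
    if (!ssn || !token) {
      console.warn(`Skipping malformed CSV line: ${trimmed}`);
      continue;
    }
    users.push({ ssn, token });
  }
  if (users.length === 0) {
    throw new Error('No valid end users found in CSV');
  }
  return users;
}

const users = parseEndusers('01010112345,tokenA\nbadline\n02020254321,tokenB');
console.log(users.length); // → 2
```

Failing fast in the init stage with a clear message is generally preferable to letting each VU crash on an undefined field at runtime.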
tests/k6/tests/serviceowner/performance/create-dialog.js (3)
37-37: Use `const` instead of `var` for variable declaration. It's a good practice to use `const` for variables that are not reassigned, enhancing code readability and preventing unintentional mutations. Apply this diff:

```diff
 export function createDialog(serviceOwner, endUser) {
-    var paramsWithToken = {
+    const paramsWithToken = {
```
45-45: Use `const` instead of `let` for unchanging variables. Since `r` is not reassigned after initialization, using `const` is preferable. Apply this diff:

```diff
 describe('create dialog', () => {
-    let r = postSO('dialogs', dialogToInsert(endUser.ssn), paramsWithToken);
+    const r = postSO('dialogs', dialogToInsert(endUser.ssn), paramsWithToken);
     expect(r.status, 'response status').to.equal(201);
 });
```
20-24: Review the thresholds configuration for meaningful assertions. The threshold `'max>=0'` for `'http_req_duration{scenario:default}'` will always pass, since response times cannot be negative, so it provides no insight into performance. Consider setting realistic thresholds to monitor performance effectively. For example:

```diff
 thresholds: {
-    'http_req_duration{scenario:default}': [`max>=0`],
+    'http_req_duration{scenario:default}': [`p(95)<500`],
```

This sets a threshold where 95% of requests should complete within 500ms.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (2)
- tests/k6/tests/performancetest_data/endusers-staging.csv is excluded by `!**/*.csv`
- tests/k6/tests/performancetest_data/serviceowners-staging.csv is excluded by `!**/*.csv`
📒 Files selected for processing (8)
- .github/workflows/dispatch-k6-performance.yml (1 hunks)
- .github/workflows/performance-workflows/workflow-run-k6-performance.yml (1 hunks)
- .gitignore (1 hunks)
- tests/k6/common/request.js (1 hunks)
- tests/k6/tests/enduser/performance/simple-search.js (1 hunks)
- tests/k6/tests/scripts/generate_tokens.sh (1 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
- tests/k6/tests/serviceowner/testdata/01-create-dialog.js (1 hunks)
🧰 Additional context used
🪛 yamllint
.github/workflows/performance-workflows/workflow-run-k6-performance.yml
[error] 8-8: trailing spaces
(trailing-spaces)
[error] 23-23: trailing spaces
(trailing-spaces)
[error] 47-47: trailing spaces
(trailing-spaces)
[error] 53-53: trailing spaces
(trailing-spaces)
🔇 Additional comments (13)
.github/workflows/performance-workflows/workflow-run-k6-performance.yml (3)
1-4: LGTM: Workflow name and trigger are well-defined. The workflow name clearly describes its purpose, and using `workflow_call` as the trigger allows for modular and reusable CI/CD setups.
24-32: LGTM: Appropriate use of secrets for sensitive information. The workflow correctly defines required secrets for authentication and K6 cloud integration, ensuring secure handling of sensitive information.
34-39: LGTM: Job definition and permissions are well-configured. The job is appropriately named and uses the latest Ubuntu runner. The permissions granted are necessary for updating PR status and comments.
.github/workflows/dispatch-k6-performance.yml (2)
1-4: LGTM: Workflow name and trigger are well-defined. The workflow name clearly describes its purpose, and the manual trigger (`workflow_dispatch`) is appropriate for performance tests, allowing controlled execution.
1-60: Overall, the workflow is well-structured and functional. This new workflow file effectively sets up a manually triggered K6 performance test with configurable parameters. The use of a reusable workflow for the actual test execution is a good practice for maintaining separation of concerns.
The suggestions provided in the review aim to:
- Improve input parameter descriptions and validation.
- Enhance the readability and maintainability of the job configuration.
These changes, while not critical, would further improve the robustness and user-friendliness of the workflow.
.gitignore (2)
379-379: Excellent addition for security! Adding `.secrets` to .gitignore is a crucial security measure. It prevents accidental commits of sensitive information, which is especially important for performance tests that may require authentication tokens.
379-383: Overall, excellent security-focused changes. These additions to .gitignore are crucial for the performance testing changes mentioned in the PR. They effectively prevent sensitive data (secrets and test tokens) from being accidentally committed. This aligns well with best practices for security and the PR objectives.
Consider adding comments as suggested in the previous review comments to improve documentation for future developers.
tests/k6/tests/serviceowner/testdata/01-create-dialog.js (2)
Line range hint 1-283: Overall assessment: Changes are well-implemented and improve flexibility. The modifications to this file are minimal but impactful. They enhance the flexibility of the `createDialog` function while maintaining backward compatibility. The changes align well with the performance testing context mentioned in the PR description. Key points:
- The function signature update is implemented correctly.
- Backward compatibility is maintained through the use of a default parameter.
- The change is localized and doesn't introduce breaking changes to the existing code structure.
These changes contribute positively to the overall goal of implementing performance tests for create dialog functionality.
4-7: Verify usage of this function in other parts of the codebase. The change to the function signature is well-implemented and localized. To ensure full utilization of this enhancement:

- Review any code that directly calls this function to see if it can benefit from passing a specific `endUser`.
- Update relevant documentation or comments in other files that describe the usage of this function.

This will help maintain consistency across the codebase and take full advantage of the added flexibility. To assist in this verification, you can run the following script:
✅ Verification successful
Usage of the `create-dialog` function is properly localized. The changes to the `create-dialog` function signature are confined to specific test files. No impacts on other parts of the codebase were found.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Find all occurrences of the create-dialog function usage

# Search for import statements of this file
echo "Searching for import statements:"
rg -n "import.*01-create-dialog"

# Search for function calls
echo "\nSearching for function calls:"
rg -n "createDialog\s*\("
```

Length of output: 2713
tests/k6/tests/enduser/performance/simple-search.js (3)
6-10: Ensure secure handling of tokens and sensitive data. Storing tokens in a CSV file may pose security risks if unauthorized access occurs. Consider using secure methods for storing and accessing tokens, such as environment variables or secret management services.
[security]
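One incremental step in that direction is letting an environment variable override the hard-coded file path, so the token file can live outside the repository. In a k6 script the lookup would go through `__ENV`; the plain-JavaScript sketch below takes an injected object instead, and both the variable name and the default path are illustrative:

```javascript
// Hypothetical: resolve the token CSV path from the environment, falling back
// to a checked-in default only when no override is provided.
function resolveTokenFile(env) {
  return env.ENDUSERS_TOKEN_FILE || '.endusers-with-tokens.csv';
}

console.log(resolveTokenFile({ ENDUSERS_TOKEN_FILE: '/run/secrets/endusers.csv' }));
// → /run/secrets/endusers.csv
```

This keeps local runs working unchanged while letting CI point the test at a secret-managed location.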
36-37: Sanitize user input when constructing query parameters. When building the `defaultFilter`, ensure that `enduser.ssn` is properly validated and sanitized, even in testing environments, to prevent potential injection issues.

[security]
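A minimal sketch of such validation in plain JavaScript (the helper names, the 11-digit format, and the `party` query parameter name are assumptions for illustration, not code from this PR):

```javascript
// Hypothetical guard: accept only an 11-digit SSN before interpolating it
// into a query string, and URL-encode it when building the filter.
function isValidSsn(ssn) {
  return typeof ssn === 'string' && /^\d{11}$/.test(ssn);
}

function buildSearchFilter(ssn) {
  if (!isValidSsn(ssn)) {
    throw new Error('Refusing to build filter from invalid SSN');
  }
  return '?party=' + encodeURIComponent(ssn);
}

console.log(buildSearchFilter('01010112345')); // → ?party=01010112345
```

Even in test code, throwing on malformed input surfaces bad CSV rows immediately instead of sending malformed requests to the API.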
6-6: Verify the file path and visibility of the end users CSV file. Using a filename starting with a dot (`.endusers-with-tokens.csv`) indicates a hidden file on Unix-like systems. Ensure that the file is accessible when the test runs and that the path is correct. Run the following script to verify the existence and accessibility of the CSV file:
tests/k6/tests/serviceowner/performance/create-dialog.js (1)
45-45: ⚠️ Potential issue: Ensure secure handling of sensitive personal information. The `ssn` (social security number) of the end user is being used. It's crucial to handle PII securely to comply with data protection regulations and prevent leakage. Confirm that:

- The `endUser.ssn` data is handled securely and only used where necessary.
- No PII is logged or exposed in test outputs or logs.
- Proper anonymization or obfuscation is applied if required.
Co-authored-by: Ole Jørgen Skogstad <skogstad@softis.net>
Actionable comments posted: 4
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- tests/k6/tests/scripts/generate_tokens.sh (1 hunks)
🧰 Additional context used
🪛 Shellcheck
tests/k6/tests/scripts/generate_tokens.sh
[warning] 88-88: This $? refers to echo/printf, not a previous command. Assign to variable to avoid it being overwritten.
(SC2320)
🔇 Additional comments (1)
tests/k6/tests/scripts/generate_tokens.sh (1)
9-45: LGTM! Well-structured argument validation. The implementation includes clear usage information, proper argument validation, and appropriate environment mapping.
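For readers who don't have the script open, the pattern being praised typically looks like the sketch below (the function names, environment labels, and the staging-to-tt02 mapping are illustrative; the actual generate_tokens.sh may differ):

```shell
# Hypothetical sketch of usage output plus environment mapping with
# fail-fast validation, in the style the review describes.
usage() {
  echo "Usage: $0 <testdata-dir> <tokens: both|enduser|serviceowner>" >&2
}

map_environment() {
  case "$1" in
    staging) echo "tt02" ;;   # illustrative: staging maps to the tt02 label
    prod)    echo "prod" ;;
    *)       echo "Unknown environment: $1" >&2; return 1 ;;
  esac
}

map_environment staging
```

Centralizing the mapping in one function keeps the rest of the script free of environment-specific branching.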
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (2)
tests/k6/tests/serviceowner/performance/createremove-no-delay.js (2)
Line range hint 13-28: Enhance error handling for performance metrics. The current implementation could benefit from additional error handling and metrics collection for performance analysis.
Consider adding metrics tracking:
```javascript
import { Counter } from 'k6/metrics';

const errors = new Counter('errors');

// In your test function
export default function (paramsWithToken) {
    let dialogId = null;

    describe('create dialog', () => {
        let r = postSO('dialogs', dialogToInsert(), paramsWithToken);
        if (r.status !== 201) {
            errors.add(1);
            console.error(`Failed to create dialog: ${r.status} ${r.body}`);
        }
        expect(r.status, 'response status').to.equal(201);
        dialogId = r.json();
    });

    describe('remove dialog', () => {
        if (dialogId) {
            let r = purgeSO('dialogs/' + dialogId, paramsWithToken);
            if (r.status !== 204) {
                errors.add(1);
                console.error(`Failed to remove dialog: ${r.status} ${r.body}`);
            }
            expect(r.status, 'response status').to.equal(204);
        }
    });
}
```
Additional load test scenarios needed for comprehensive performance testing
The current performance tests only implement basic scenarios with configurable VUs (virtual users) and iterations, but lack specific load patterns. The test files show:

- createremove-no-delay.js: basic create/remove operations
- create-dialog.js: single dialog creation with random users
- simple-search.js: basic search operation

The codebase needs the following load test scenarios:
- Stress test with ramping VUs using k6's stages configuration
- Spike test configuration for sudden load bursts
- Soak test for extended duration testing
- Scenarios combining multiple operations (create, search, remove) under various load patterns
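A ramping stress profile, for instance, is expressed through k6's `stages` option. The sketch below uses placeholder durations and VU targets (in an actual k6 script this object would be exported as `export const options`):

```javascript
// Illustrative k6 options for a ramping stress test: ramp up, hold, ramp down.
// Durations and targets are placeholders to be tuned per environment.
const options = {
  stages: [
    { duration: '1m', target: 10 },  // warm-up ramp to 10 VUs
    { duration: '3m', target: 50 },  // ramp to peak load
    { duration: '2m', target: 50 },  // hold at peak
    { duration: '1m', target: 0 },   // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<1000'], // fail the run if p95 exceeds 1s
  },
};

console.log(options.stages.length); // → 4
```

Spike and soak tests reuse the same mechanism with different shapes: a spike is a short stage with a high target, a soak is one long stage at moderate load.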
🔗 Analysis chain
Line range hint 1-28: Verify test coverage across different load scenarios. The current test focuses on basic create/remove operations. For comprehensive performance testing, consider different load patterns.
Let's check if there are other performance test variations:
Consider adding the following test scenarios if not already present:
- Stress test with gradually increasing load
- Spike test for sudden bursts of traffic
- Soak test for long-duration stability
Let me check the content of these performance test files to understand the existing load patterns.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for different performance test scenarios
# Expected: Find variations of load tests (e.g., spike tests, stress tests)

echo "Searching for performance test files..."
fd . tests/k6/tests/serviceowner/performance -e js

echo "Checking for load test configurations..."
rg -l "options.*vus|stages" tests/k6/tests/
```

Length of output: 509
Script:
```shell
#!/bin/bash
# Check the content of performance test files to understand load patterns

echo "Content of create-dialog.js:"
cat tests/k6/tests/serviceowner/performance/create-dialog.js

echo -e "\nContent of simple-search.js:"
cat tests/k6/tests/enduser/performance/simple-search.js
```

Length of output: 3723
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (4)
- tests/k6/common/request.js (2 hunks)
- tests/k6/tests/enduser/performance/simple-search.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
- tests/k6/tests/serviceowner/performance/createremove-no-delay.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/k6/common/request.js
- tests/k6/tests/serviceowner/performance/create-dialog.js
🧰 Additional context used
🔇 Additional comments (2)
tests/k6/tests/serviceowner/performance/createremove-no-delay.js (1)
7-9: LGTM! Property name follows JavaScript conventions. The change from 'Headers' to 'headers' aligns with JavaScript naming conventions and maintains consistency.
tests/k6/tests/enduser/performance/simple-search.js (1)
20-27: LGTM! Clean implementation of test scenarios. The function properly handles both single and multiple user scenarios, with appropriate user selection logic.
…d by coderabbitai
Actionable comments posted: 2
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- tests/k6/tests/scripts/generate_tokens.sh (1 hunks)
🧰 Additional context used
🪛 Shellcheck
tests/k6/tests/scripts/generate_tokens.sh
[warning] 67-67: This $? refers to echo/printf, not a previous command. Assign to variable to avoid it being overwritten.
(SC2320)
[warning] 89-89: This $? refers to echo/printf, not a previous command. Assign to variable to avoid it being overwritten.
(SC2320)
🔇 Additional comments (3)
tests/k6/tests/scripts/generate_tokens.sh (3)
9-45: LGTM! Well-structured argument validation and environment mapping. The implementation includes thorough argument validation, clear usage information, and proper environment mapping with appropriate error handling.
47-50: LGTM! File paths are correctly constructed.
1-7: 🛠️ Refactor suggestion: Add script safety measures and secure cleanup.
While the environment variable checks are good, consider adding these safety measures:

- Enable strict error handling
- Add secure cleanup of sensitive files

```diff
 #!/bin/bash
+set -euo pipefail
+
+# Cleanup function to securely remove sensitive files
+cleanup() {
+    rm -f "$serviceowner_tokenfile" "$enduser_tokenfile"
+}
+
+# Set up trap to ensure cleanup on exit
+trap cleanup EXIT INT TERM

 # Check if required environment variables are set
```

Likely invalid or redundant comment.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
tests/k6/tests/enduser/performance/simple-search.js (1)
1-6: Consider version pinning and sensitive data handling improvements.

- Pin the k6-utils version more specifically to ensure test stability:

```diff
-import { randomItem } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';
+import { randomItem } from 'https://jslib.k6.io/k6-utils/1.2.0/index.min.js';
```
- Consider moving sensitive data path to environment variables or configuration file for better security and flexibility.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- tests/k6/tests/enduser/performance/simple-search.js (1 hunks)
🧰 Additional context used
🔇 Additional comments (2)
tests/k6/tests/enduser/performance/simple-search.js (2)
33-40: LGTM! Well-structured test execution logic. The code properly handles both single and multiple virtual user scenarios, with good randomization for multiple VUs.
25-31: ⚠️ Potential issue: Define specific performance thresholds for meaningful test results. The threshold configuration is still using empty arrays. Define specific thresholds to ensure the performance tests have clear pass/fail criteria:

```diff
 export let options = {
     summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)', 'count'],
     thresholds: {
-        'http_req_duration{name:simple search}': [],
-        'http_reqs{name:simple search}': [],
+        'http_req_duration{name:simple search}': [
+            'p(95) < 2000', // 95% of requests should complete within 2s
+            'max < 5000'    // No request should take longer than 5s
+        ],
+        'http_reqs{name:simple search}': [
+            'count > 0',  // Ensure requests are being made
+            'rate > 0.95' // Success rate should be above 95%
+        ],
     },
 };
```

Likely invalid or redundant comment.
Quality Gate passed
Description
See commits for details
Related Issue(s)
Verification
Documentation
(docs-directory, Altinnpedia or a separate linked PR in altinn-studio-docs, if applicable)

Summary by CodeRabbit
New Features
Bug Fixes
Chores
- Updated `.gitignore` to exclude sensitive files and generated token files.