chore(perfomance): Fixing github action to run performance tests in k8s #1739
Conversation
📝 Walkthrough

This pull request introduces modifications to the K6 performance testing infrastructure, focusing on updating workflow configurations and refactoring performance test scripts. Key changes include altering the workflow reference for performance tests, adding new input parameters and secrets for Azure integration, and implementing a new workflow for executing K6 tests. Additionally, the test scripts have been streamlined to utilize a centralized data validation approach, enhancing the overall structure and logic of the performance testing process.

Changes
Possibly related PRs
Suggested reviewers
📜 Recent review details

Configuration used: .coderabbit.yaml

📒 Files selected for processing (1)

🚧 Files skipped from review as they are similar to previous changes (1)
Actionable comments posted: 4
🧹 Nitpick comments (13)
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)
Line 9: Remove the useless rename

Static analysis warns about a useless rename. You can export the symbol directly without renaming.

```diff
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+export { setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/graphql/performance/graphql-search.js (1)
Line 8: Remove useless rename

Consider simplifying the export to avoid unnecessary renaming:

```diff
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+export { setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 8-8: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/create-transmissions.js (1)
Line 9: Simplify export statement

Remove the unwanted rename:

```diff
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+export { setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/enduser/performance/enduser-search.js (1)
Line 4: Remove unnecessary rename in export statement.

The `as` keyword is redundant when exporting with the same name.

```diff
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+export { setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 4-4: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/create-dialog.js (1)
Lines 8-9: Consolidate imports from readTestdata.js.

The file imports both `serviceOwners` and `setup` separately from the same module.

```diff
-import { serviceOwners } from '../../performancetest_common/readTestdata.js';
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+import { serviceOwners, setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/serviceowner-search.js (1)
Lines 4-5: Consolidate imports and remove redundant rename.

Similar to other files, imports can be consolidated and the rename is unnecessary.

```diff
-import { serviceOwners } from '../../performancetest_common/readTestdata.js';
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+import { serviceOwners, setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 5-5: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/enduser/performance/enduserSearchWithThresholds.js (2)
Line 1: Remove unnecessary rename in export statement.

The `as` keyword is redundant when exporting with the same name.

```diff
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+export { setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 1-1: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
Line range hint 4-27: Consider extracting threshold configurations to a shared configuration file.

The threshold configurations are hardcoded and might need to be reused or modified across different test files.

Create a shared configuration file:

```js
// performancetest_common/thresholdConfigs.js
export const defaultHttpThresholds = {
  duration: "30s",
  vus: 1,
  thresholds: {
    "http_req_duration{name:enduser search}": ["p(95)<300", "p(99)<500"],
    // ... other thresholds
  }
};
```

Then import and use it:

```diff
+import { defaultHttpThresholds } from '../../performancetest_common/thresholdConfigs.js';
-export let options = {
-    summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)', 'count'],
-    vus: 1,
-    duration: "30s",
-    thresholds: {
-        // ... thresholds
-    }
-}
+export let options = {
+    ...defaultHttpThresholds,
+    summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)', 'count'],
+};
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 1-1: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/performancetest_common/readTestdata.js (1)
Lines 73-80: Consider adding input validation.

While the setup function works correctly, it might benefit from validation of the `totalVus` value.

```diff
 export function setup() {
   const totalVus = exec.test.options.scenarios.default.vus;
+  if (!totalVus || totalVus < 1) {
+    throw new Error('Invalid number of virtual users');
+  }
   let parts = [];
   for (let i = 1; i <= totalVus; i++) {
     parts.push(endUsersPart(totalVus, i));
   }
   return parts;
 }
```

.github/workflows/workflow-run-k6-ci-cd-yt01.yml (2)
Lines 38-43: Consider adding error handling for CSV output.

The K6 command outputs results to CSV, but there's no error handling if the file write fails.

```diff
 k6 run ${{ inputs.testSuitePath }} --quiet --log-output=stdout --include-system-env-vars \
   --vus=${{ inputs.vus }} --duration=${{ inputs.duration }} --out csv=./results.csv
+if [ ! -f "./results.csv" ]; then
+  echo "Error: Failed to generate results CSV file"
+  exit 1
+fi
 grep http_req_duration ./results.csv | sort --field-separator=',' --key=3 -nr | head -10
```
Line 48: Add newline at end of file.

Add a newline character at the end of the file to comply with POSIX standards.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 48-48: no new line character at the end of file
(new-line-at-end-of-file)
.github/workflows/dispatch-k6-performance.yml (1)
Lines 34-38: Fix trailing spaces in YAML.

Remove trailing spaces after `type: number` to maintain consistent formatting.

```diff
 parallelism:
   description: 'Number of parallel test runs'
   required: true
   default: 1
-  type: number 
+  type: number
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 38-38: trailing spaces
(trailing-spaces)
tests/k6/tests/scripts/run-test-in-k8s.sh (1)
Lines 44-48: LGTM! Good test status tracking.

The status check properly tracks pod failures. Consider enhancing the logging to show why the pod failed.

```diff
-  if [ "$status" != "Succeeded" ]; then
-    failed=1
-  fi
+  if [ "$status" != "Succeeded" ]; then
+    echo "Pod failed with status: $status"
+    failed=1
+  fi
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- .github/workflows/ci-cd-yt01.yml (1 hunks)
- .github/workflows/dispatch-k6-performance.yml (2 hunks)
- .github/workflows/workflow-run-k6-ci-cd-yt01.yml (1 hunks)
- .github/workflows/workflow-run-k6-performance.yml (1 hunks)
- tests/k6/tests/enduser/performance/enduser-search.js (2 hunks)
- tests/k6/tests/enduser/performance/enduserSearchWithThresholds.js (1 hunks)
- tests/k6/tests/graphql/performance/graphql-search.js (2 hunks)
- tests/k6/tests/performancetest_common/readTestdata.js (2 hunks)
- tests/k6/tests/scripts/run-test-in-k8s.sh (4 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-remove-dialog.js (2 hunks)
- tests/k6/tests/serviceowner/performance/create-transmissions.js (1 hunks)
- tests/k6/tests/serviceowner/performance/serviceowner-search.js (2 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
tests/k6/tests/enduser/performance/enduserSearchWithThresholds.js
[error] 1-1: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/graphql/performance/graphql-search.js
[error] 8-8: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/serviceowner-search.js
[error] 5-5: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/create-remove-dialog.js
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/enduser/performance/enduser-search.js
[error] 4-4: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/create-dialog.js
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/create-transmissions.js
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
🪛 YAMLlint (1.35.1)
.github/workflows/dispatch-k6-performance.yml
[error] 38-38: trailing spaces
(trailing-spaces)
.github/workflows/workflow-run-k6-ci-cd-yt01.yml
[error] 48-48: no new line character at the end of file
(new-line-at-end-of-file)
🪛 actionlint (1.7.4)
.github/workflows/workflow-run-k6-performance.yml
70-70: shellcheck reported issue in this script: SC2086:info:4:79: Double quote to prevent globbing and word splitting
(shellcheck)
70-70: shellcheck reported issue in this script: SC2086:info:4:96: Double quote to prevent globbing and word splitting
(shellcheck)
🔇 Additional comments (19)
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (3)
Line 5: Import statement looks good

This import is necessary for managing execution context within K6 tests.

Line 7: Service owners import is used appropriately

No issues here. The imported `serviceOwners` is used later in the script.

Lines 21-25: Streamlined end-user selection is clear

Using `exec.vu.idInTest` and `exec.vu.iterationInInstance` is a solid approach for distributing end users.

tests/k6/tests/graphql/performance/graphql-search.js (2)
Line 5: Import statement looks good

Importing from `'k6/execution'` helps handle VU-based indexing.

Lines 24-27: Simple and effective approach to end-user binding

This indexing strategy is straightforward. Ensure `data` is guaranteed to contain end-user arrays for each virtual user.

tests/k6/tests/serviceowner/performance/create-transmissions.js (3)
Line 5: Appropriate import

Bringing in `exec` for indexing logic is consistent with the other files.

Line 8: Imported service owners

Nothing problematic here. The array is used below for transmissions creation.

Lines 21-25: Transmission creation logic

This logic cleanly assigns one service owner and indexes into `myEndUsers`. Confirm that sufficient end users are allocated for your concurrency level.

tests/k6/tests/performancetest_common/readTestdata.js (1)
Lines 55-71: LGTM! Well-implemented user distribution logic.

The `endUsersPart` function effectively handles the distribution of end users across virtual users, including edge cases and remainder distribution. The implementation ensures fair distribution of users when the total count doesn't divide evenly.

.github/workflows/workflow-run-k6-ci-cd-yt01.yml (1)
Lines 30-33: LGTM! Well-defined permissions scope.

The permissions are appropriately scoped to only what's needed: checks and pull-requests write access.
.github/workflows/dispatch-k6-performance.yml (1)
Lines 60-61: Verify Azure credentials configuration.

The addition of Azure credentials and parallelism parameter looks correct, but ensure these credentials have the minimum required permissions for the K8s operations.
Also applies to: 68-68
✅ Verification successful
Azure credentials configuration is correctly implemented ✅
The credentials are properly used for OIDC-based authentication to access the k6 test cluster (k6tests-cluster) and the implementation follows security best practices including proper permission scoping.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if the Azure service principal has the necessary role assignments
az role assignment list --assignee ${{ secrets.AZURE_CLIENT_ID }} | jq '.[] | select(.roleDefinitionName | contains("Kubernetes"))'
```

Length of output: 185
Script:
```bash
#!/bin/bash
# Search for Azure/K8s configuration files and documentation
echo "=== Searching for Azure/K8s config files ==="
fd -e yaml -e yml -e md . | grep -i -E 'azure|kubernetes|k8s|readme'
echo -e "\n=== Checking workflow file content ==="
cat .github/workflows/dispatch-k6-performance.yml
echo -e "\n=== Searching for similar Azure auth patterns ==="
rg -l "AZURE_CLIENT_ID|AZURE_TENANT_ID" --type yaml
```

Length of output: 3156
Script:
```bash
#!/bin/bash
# Check the reusable workflow and k6 documentation
echo "=== Checking reusable workflow ==="
cat .github/workflows/workflow-run-k6-performance.yml
echo -e "\n=== Checking k6 test documentation ==="
cat tests/k6/README.md
```

Length of output: 8247
.github/workflows/workflow-run-k6-performance.yml (2)
Lines 43-48: LGTM! Proper Azure authentication setup.

The Azure login step is well-configured with the necessary parameters and allows for no subscriptions.

Lines 49-66: LGTM! Robust error handling in kubeconfig setup.

The kubeconfig setup includes proper error handling for all critical operations: CLI installation, credentials retrieval, and kubeconfig conversion.
tests/k6/tests/scripts/run-test-in-k8s.sh (5)
Lines 35-38: LGTM! Good error handling.

The added verification ensures early failure if cluster access is not available, preventing confusing errors later in the script.

Lines 126-128: LGTM! Good test identification.

The added testid tag and explicit namespace specification improve test tracking and resource management.

Lines 158-160: LGTM! Good cleanup handling.

The sleep before cleanup and proper exit code propagation ensure reliable test results and resource cleanup.

Lines 5-7: Verify namespace existence and permissions.

Setting the namespace context is good practice, but we should ensure the namespace exists and the service account has the necessary permissions.

Lines 143-145: Validate memory requirements for performance tests.

The memory request of 200Mi should be validated against actual test requirements. Consider:
- Analyzing memory usage during test runs
- Adding CPU requests for better resource management
✅ Verification successful
Memory request of 200Mi is appropriate for k6 test pods
The specified memory request of 200Mi is reasonable for k6 test pods considering:
- Various test scenarios (search, create, remove) with different load patterns
- Configurable virtual user counts and test durations
- Standard k6 memory footprint for performance testing
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check memory usage of previous test runs
# Test: Get memory usage metrics from completed pods

# Get memory usage from previous test runs
kubectl get pods -l k6-test -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].name}{"\t"}{.status.containerStatuses[0].lastState.terminated.finishedAt}{"\t"}{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}{end}' | sort -k3 | tail -n 5
```

Length of output: 382
Script:
```bash
#!/bin/bash
# Search for memory-related configurations and documentation
echo "=== Checking k6 test configurations ==="
rg -g "*.js" -g "*.json" "options" -A 5 -B 5 tests/k6/
echo -e "\n=== Checking documentation for resource requirements ==="
fd README.md --exec cat {} \;
echo -e "\n=== Checking other k8s resource configurations ==="
fd -e yaml -e yml | xargs grep -l "resources" || true
```

Length of output: 63460
.github/workflows/ci-cd-yt01.yml (1)
Line 133: Verify the referenced workflow file exists.

Ensure that `.github/workflows/workflow-run-k6-ci-cd-yt01.yml` exists and contains the necessary job definitions.

✅ Verification successful
Referenced workflow file is properly configured ✅
The workflow file exists and contains all necessary components including proper job definitions, required inputs, and secrets configuration for k6 performance testing.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify workflow file existence and content
# Test: Check if the workflow file exists and contains required inputs

# Check if file exists
test -f .github/workflows/workflow-run-k6-ci-cd-yt01.yml || echo "Workflow file not found"

# Check if file contains required inputs
grep -A 10 "inputs:" .github/workflows/workflow-run-k6-ci-cd-yt01.yml
```

Length of output: 341
Script:
```bash
#!/bin/bash
# Description: Check complete workflow file content
cat .github/workflows/workflow-run-k6-ci-cd-yt01.yml
```

Length of output: 1569
Actionable comments posted: 0
🧹 Nitpick comments (7)
tests/k6/tests/serviceowner/performance/createDialogWithThresholds.js (1)
Line 2: Simplify the setup export statement.

The current export statement uses an unnecessary rename since both the imported and exported names are 'setup'.

```diff
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+export { setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 2-2: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/create-dialog.js (1)
Lines 7-9: Consider consolidating imports from readTestdata.js.

Multiple imports from the same module can be combined for better readability.

```diff
-import { serviceOwners } from '../../performancetest_common/readTestdata.js';
-import { validateTestData } from '../../performancetest_common/readTestdata.js';
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+import { serviceOwners, validateTestData } from '../../performancetest_common/readTestdata.js';
+export { setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/performancetest_common/readTestdata.js (3)
Lines 55-71: Consider using more descriptive variable names.

While the logic is sound, the variable names could be more descriptive: `ixStart` → `startIndex`, `extras` → `remainingUsers`. Note that the condition `vuId <= extras` must be renamed as well, or the suggested change would reference an undefined variable.

```diff
 function endUsersPart(totalVus, vuId) {
   const endUsersLength = endUsers.length;
   if (totalVus == 1) {
     return endUsers.slice(0, endUsersLength);
   }
   let usersPerVU = Math.floor(endUsersLength / totalVus);
-  let extras = endUsersLength % totalVus;
-  let ixStart = (vuId-1) * usersPerVU;
+  let remainingUsers = endUsersLength % totalVus;
+  let startIndex = (vuId-1) * usersPerVU;
-  if (vuId <= extras) {
+  if (vuId <= remainingUsers) {
     usersPerVU++;
-    ixStart += vuId - 1;
+    startIndex += vuId - 1;
   } else {
-    ixStart += extras;
+    startIndex += remainingUsers;
   }
-  return endUsers.slice(ixStart, ixStart + usersPerVU);
+  return endUsers.slice(startIndex, startIndex + usersPerVU);
 }
```
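To make the remainder distribution above concrete, here is a hypothetical standalone sketch of the same partitioning scheme. Unlike the real helper, `endUsers` is passed in explicitly so the function can run outside k6:

```javascript
// Hypothetical standalone version of the partitioning logic discussed above:
// split `endUsers` into `totalVus` contiguous slices, giving one extra user
// to each of the first (length % totalVus) VUs so the split stays fair.
function endUsersPart(endUsers, totalVus, vuId) {
  if (totalVus === 1) {
    return endUsers.slice();
  }
  let usersPerVU = Math.floor(endUsers.length / totalVus);
  const extras = endUsers.length % totalVus;
  let ixStart = (vuId - 1) * usersPerVU;
  if (vuId <= extras) {
    // The first `extras` VUs each absorb one leftover user.
    usersPerVU++;
    ixStart += vuId - 1;
  } else {
    ixStart += extras;
  }
  return endUsers.slice(ixStart, ixStart + usersPerVU);
}

// 10 users over 3 VUs -> slices of size 4, 3, 3 covering every user exactly once.
const users = [...Array(10).keys()];
console.log(endUsersPart(users, 3, 1)); // [ 0, 1, 2, 3 ]
console.log(endUsersPart(users, 3, 2)); // [ 4, 5, 6 ]
console.log(endUsersPart(users, 3, 3)); // [ 7, 8, 9 ]
```

The slices are contiguous and non-overlapping, which is what lets each VU work on its own users without coordination.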
Lines 73-80: Add error handling for edge cases.

The setup function should handle cases where VU count is invalid or unavailable.

```diff
 export function setup() {
   const totalVus = exec.test.options.scenarios.default.vus;
+  if (!totalVus || totalVus <= 0) {
+    throw new Error('Invalid or missing VU count in test options');
+  }
   let parts = [];
   for (let i = 1; i <= totalVus; i++) {
     parts.push(endUsersPart(totalVus, i));
   }
   return parts;
 }
```
Lines 82-99: Add JSDoc documentation for better code clarity.

The function would benefit from proper parameter and return type documentation.

```diff
+/**
+ * Validates the test data structure and returns validated end users with their index.
+ * @param {Array} data - Array of end user arrays, one per VU
+ * @param {Array|null} serviceOwners - Optional array of service owners
+ * @returns {{endUsers: Array, index: number}} Object containing validated end users and current index
+ * @throws {Error} If data structure is invalid
+ */
 export function validateTestData(data, serviceOwners=null) {
```

tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)
Line 9: Simplify the export syntax.

The current export syntax is unnecessarily verbose.

```diff
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+export { setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/create-transmissions.js (1)
Lines 22-23: Consider documenting environment variables.

The function uses several environment variables (numberOfTransmissions, maxTransmissionsInThread, testid) that should be documented for better maintainability.

```diff
+/**
+ * @param {Array} data - Test data array from setup
+ * @env {string} numberOfTransmissions - Number of transmissions to create (default: 10)
+ * @env {string} maxTransmissionsInThread - Max transmissions per thread (default: 100)
+ * @env {string} testid - Test identifier (default: 'createTransmissions')
+ */
 export default function(data) {
   const { endUsers, index } = validateTestData(data, serviceOwners);
   createTransmissions(serviceOwners[0], endUsers[index], traceCalls, numberOfTransmissions, maxTransmissionsInThread, testid);
 }
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- tests/k6/common/token.js (1 hunks)
- tests/k6/tests/enduser/performance/enduser-search.js (2 hunks)
- tests/k6/tests/graphql/performance/graphql-search.js (2 hunks)
- tests/k6/tests/performancetest_common/readTestdata.js (2 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-remove-dialog.js (2 hunks)
- tests/k6/tests/serviceowner/performance/create-transmissions.js (1 hunks)
- tests/k6/tests/serviceowner/performance/createDialogWithThresholds.js (1 hunks)
- tests/k6/tests/serviceowner/performance/serviceOwnerSearchWithThresholds.js (1 hunks)
- tests/k6/tests/serviceowner/performance/serviceowner-search.js (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/k6/common/token.js
🧰 Additional context used
🪛 Biome (1.9.4)
tests/k6/tests/serviceowner/performance/serviceOwnerSearchWithThresholds.js
[error] 2-2: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/createDialogWithThresholds.js
[error] 2-2: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/graphql/performance/graphql-search.js
[error] 8-8: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/create-dialog.js
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/create-transmissions.js
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/serviceowner-search.js
[error] 5-5: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/enduser/performance/enduser-search.js
[error] 4-4: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/serviceowner/performance/create-remove-dialog.js
[error] 9-9: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
🔇 Additional comments (7)
tests/k6/tests/serviceowner/performance/serviceOwnerSearchWithThresholds.js (1)
Line 2: Simplify the setup export statement.

The current export statement uses an unnecessary rename since both the imported and exported names are 'setup'.

```diff
-export { setup as setup } from '../../performancetest_common/readTestdata.js';
+export { setup } from '../../performancetest_common/readTestdata.js';
```

🧰 Tools
-export { setup as setup } from '../../performancetest_common/readTestdata.js'; +export { setup } from '../../performancetest_common/readTestdata.js';🧰 Tools
🪛 Biome (1.9.4)
[error] 2-2: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
tests/k6/tests/enduser/performance/enduser-search.js (2)
Lines 3-4: LGTM! Good improvement in test data handling.

The introduction of validateTestData centralizes validation logic and improves code maintainability.
🧰 Tools
🪛 Biome (1.9.4)
[error] 4-4: Useless rename.
Safe fix: Remove the renaming.
(lint/complexity/noUselessRename)
Lines 22-24: Verify validateTestData implementation.

The function now relies on validateTestData for input validation. Please ensure that validateTestData includes comprehensive checks for:
- Array structure validation
- Non-empty array verification
- Index bounds checking
✅ Verification successful
validateTestData implementation is comprehensive and correct
The implementation in `tests/k6/tests/performancetest_common/readTestdata.js` includes all required validations:

- Validates array structure for both outer and inner arrays
- Ensures arrays are non-empty
- Performs index bounds checking using `exec.vu.idInTest`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check validateTestData implementation
ast-grep --pattern 'function validateTestData($_) {
  $$$
}'
```

Length of output: 62
Script:
```bash
#!/bin/bash
# Search for validateTestData definition
rg "validateTestData" -l
rg "validateTestData.*=.*" -A 10
```

Length of output: 1667
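Based on the checks the verification above describes, a validator of this shape could look roughly as follows. This is a hypothetical sketch, not the actual helper from readTestdata.js; in particular, `vuId` stands in for k6's `exec.vu.idInTest`, which is passed explicitly here so the function runs outside k6:

```javascript
// Hypothetical sketch of the validation steps: outer-array structure,
// non-empty checks, optional serviceOwners check, and index bounds checking.
function validateTestData(data, vuId, serviceOwners = null) {
  // Outer structure: a non-empty array holding one end-user array per VU.
  if (!Array.isArray(data) || data.length === 0) {
    throw new Error('Test data must be a non-empty array of end-user arrays');
  }
  if (serviceOwners !== null &&
      (!Array.isArray(serviceOwners) || serviceOwners.length === 0)) {
    throw new Error('serviceOwners must be a non-empty array when provided');
  }
  // Bounds check: VU ids are 1-based, so VU n reads slot n-1.
  const index = vuId - 1;
  if (index < 0 || index >= data.length) {
    throw new Error(`VU id ${vuId} is out of bounds for ${data.length} data parts`);
  }
  const endUsers = data[index];
  if (!Array.isArray(endUsers) || endUsers.length === 0) {
    throw new Error(`No end users allocated for VU ${vuId}`);
  }
  return { endUsers, index };
}

// A valid call returns the VU's own slice; invalid input throws early.
console.log(validateTestData([['alice'], ['bob']], 2)); // { endUsers: [ 'bob' ], index: 1 }
```

Failing fast on malformed data is what lets the individual test scripts drop their own ad-hoc checks.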
tests/k6/tests/serviceowner/performance/create-dialog.js (1)
Lines 18-20: Verify serviceOwners validation in validateTestData.

The function now uses validateTestData with serviceOwners parameter. Please ensure proper validation of serviceOwners array before accessing index 0.
✅ Verification successful
Proper serviceOwners validation confirmed in validateTestData
The validateTestData function in readTestdata.js includes robust validation that ensures serviceOwners is a non-empty array before it's used, making the access to serviceOwners[0] safe.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check validateTestData's serviceOwners validation
ast-grep --pattern 'function validateTestData($_, $serviceOwners) {
  $$$
}'
```

Length of output: 78
Script:
```bash
#!/bin/bash
# First, find where validateTestData is defined
rg "function validateTestData" -A 10
```

Length of output: 1238
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)
Lines 21-24: LGTM! Clean and efficient implementation.

The refactored code is more concise and leverages the new validation utility effectively.
tests/k6/tests/graphql/performance/graphql-search.js (1)
Line 25: Add serviceOwners parameter to validateTestData call.

The validateTestData function is called without the optional serviceOwners parameter, which is inconsistent with other test files.

```diff
-    const { endUsers, index } = validateTestData(data);
+    const { endUsers, index } = validateTestData(data, null);
```

Let's verify the usage pattern across files:
tests/k6/tests/serviceowner/performance/serviceowner-search.js (1)
Lines 21-24: LGTM! Clean implementation following the new pattern.

The changes align well with the established pattern across other test files.
…runner' into performance/github-runner
Actionable comments posted: 0
🧹 Nitpick comments (1)
.github/workflows/dispatch-k6-performance.yml (1)
Line 38: Remove trailing spaces.

There are trailing spaces at the end of line 38.

```diff
-      type: number 
+      type: number
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 38-38: trailing spaces
(trailing-spaces)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.github/workflows/dispatch-k6-performance.yml
(2 hunks).github/workflows/workflow-run-k6-performance.yml
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/workflow-run-k6-performance.yml
🧰 Additional context used
🪛 YAMLlint (1.35.1)
.github/workflows/dispatch-k6-performance.yml
[error] 38-38: trailing spaces
(trailing-spaces)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Run K6 performance test / k6-test
- GitHub Check: Dry run deploy infrastructure / Deploy to test
🔇 Additional comments (4)
.github/workflows/dispatch-k6-performance.yml (4)
Lines 34-38: LGTM! Well-structured parallelism parameter.

The new parameter is well-defined with appropriate type constraints and a sensible default value that maintains backward compatibility.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 38-38: trailing spaces
(trailing-spaces)
Line 52: LGTM! Clear run name format.

The run name clearly displays all key test parameters including the new parallelism value.

Line 68: LGTM! Consistent parameter handling.

The parallelism parameter is correctly passed using `fromJson` to ensure proper number type conversion, consistent with how the `vus` parameter is handled.
Lines 60-61: Verify Azure authentication configuration.

The Azure Client ID and Tenant ID are added, but please verify if additional Azure credentials (like Client Secret or Certificate) are required for the authentication to work properly in the workflow being called.
✅ Verification successful
Azure authentication configuration is properly set up ✅
The workflow is using OIDC-based authentication which only requires the Client ID and Tenant ID. No additional credentials are needed as the configuration is complete with the proper permissions set.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if the called workflow requires additional Azure credentials
rg -A 10 'AZURE_(CLIENT_ID|TENANT_ID)' .github/workflows/workflow-run-k6-performance.yml
```

Length of output: 733
Actionable comments posted: 2
🔭 Outside diff range comments (1)
tests/k6/tests/scripts/run-test-in-k8s.sh (1)
Line range hint 89-98: Add validation for input parameter values.

While the script checks for missing parameters, it doesn't validate their values. Consider adding validation for:
- Duration format and minimum value
- VUs being a positive integer
- Parallelism being within reasonable bounds
```diff
 if [ ${#missing_args[@]} -ne 0 ]; then
   echo "Error: Missing required arguments: ${missing_args[*]}"
   help
   exit 1
 fi
+
+# Validate parameter values
+if ! [[ "$vus" =~ ^[1-9][0-9]*$ ]]; then
+  echo "Error: vus must be a positive integer"
+  exit 1
+fi
+
+if ! [[ "$parallelism" =~ ^[1-9][0-9]*$ ]] || [ "$parallelism" -gt 10 ]; then
+  echo "Error: parallelism must be a positive integer not exceeding 10"
+  exit 1
+fi
+
+if ! [[ "$duration" =~ ^[1-9][0-9]*(s|m|h)$ ]]; then
+  echo "Error: duration must be in format: <number>(s|m|h)"
+  exit 1
+fi
```
🧹 Nitpick comments (3)
tests/k6/tests/scripts/run-test-in-k8s.sh (3)
Lines 5-7: Add error handling for kubectl context command.

While setting the namespace context is good practice, the command should include error handling to ensure the operation succeeded.

```diff
 failed=0
-kubectl config set-context --current --namespace=dialogporten
+if ! kubectl config set-context --current --namespace=dialogporten; then
+  echo "Error: Failed to set kubernetes context to dialogporten namespace"
+  exit 1
+fi
```
Lines 44-48: Add timeout handling for kubectl commands.

The kubectl commands for logs and status could hang indefinitely. Consider adding timeouts.

```diff
-  kubectl logs --tail=-1 $pod
-  status=`kubectl get $pod -o jsonpath='{.status.phase}'`
+  if ! timeout 30s kubectl logs --tail=-1 $pod; then
+    echo "Error: Timed out getting logs for $pod"
+    failed=1
+  fi
+  status=$(timeout 30s kubectl get $pod -o jsonpath='{.status.phase}' || echo "Failed")
+  if [ "$status" = "Failed" ]; then
+    echo "Error: Failed to get status for $pod"
+    failed=1
+  fi
```
Lines 158-160: Improve cleanup robustness and documentation.

The cleanup function includes a hardcoded sleep duration without explanation and lacks timeout handling for cleanup operations.
- Document why the sleep is necessary
- Make the sleep duration configurable
- Add timeout handling for cleanup operations
```diff
-  local exit_code=$failed
-  echo "Sleeping for 15s and then cleaning up resources..."
-  sleep 15
+  local exit_code=$failed
+  # Wait for Kubernetes to finish any pending operations before cleanup
+  local cleanup_wait="${CLEANUP_WAIT_DURATION:-15}"
+  echo "Sleeping for ${cleanup_wait}s and then cleaning up resources..."
+  sleep "$cleanup_wait"
+
+  echo "Starting cleanup..."
+  local cleanup_timeout=30
+  if ! timeout ${cleanup_timeout}s kubectl delete -f config.yml --ignore-not-found; then
+    echo "Warning: Cleanup of config.yml timed out after ${cleanup_timeout}s"
+  fi
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/k6/tests/scripts/run-test-in-k8s.sh
(4 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Dry run deploy infrastructure / Deploy to test
- GitHub Check: build / build-and-test
Description
Related Issue(s)
Verification
Documentation

- Documentation updated (in the `docs`-directory, Altinnpedia, or a separate linked PR in altinn-studio-docs, if applicable)