fix: avoid crash if testdata file is empty #1403
Conversation
📝 Walkthrough

This pull request introduces various enhancements to error handling across multiple performance testing scripts. Specifically, it adds checks to ensure that critical arrays, such as `endUsers`, `serviceOwners`, and `endUsersWithTokens`, are not empty before tests run, throwing descriptive errors when test data is missing.

Changes
Possibly related PRs
Suggested reviewers
📜 Recent review details

Configuration used: .coderabbit.yaml

📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (6)
tests/k6/tests/enduser/performance/enduser-search.js (1)
20-22: Enhance error message for better debugging.

The guard clause is a good addition that prevents test execution with invalid data. However, the error message could be more descriptive to help users diagnose and fix the issue.

Consider enhancing the error handling:

```diff
 if (!endUsersWithTokens || endUsersWithTokens.length === 0) {
-    throw new Error('No end users loaded for testing');
+    console.error('Failed to load end users from test data file. Please ensure the test data file exists and contains valid user data.');
+    throw new Error('No end users loaded for testing. Check the test data file and ensure it contains valid user data.');
 }
```

tests/k6/tests/serviceowner/performance/create-dialog.js (2)
16-18: Consider using optional chaining for null check.

The null check can be simplified using optional chaining.

```diff
-    if (!endUsers || endUsers.length === 0) {
+    if (!endUsers?.length) {
         throw new Error('No end users loaded for testing');
     }
```
19-21: Consider using optional chaining for null check.

Similar to the above, this null check can also be simplified.

```diff
-    if (!serviceOwners || serviceOwners.length === 0) {
+    if (!serviceOwners?.length) {
         throw new Error('No service owners loaded for testing');
     }
```
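For readers unfamiliar with the suggested idiom, here is a quick illustration of why `!arr?.length` covers all three failure modes (null, undefined, and empty array) in a single expression. This is plain JavaScript behavior, not project code:

```javascript
const a = null, b = undefined, c = [], d = [{ id: 1 }];

console.log(!a?.length); // true:  optional chaining short-circuits null to undefined
console.log(!b?.length); // true:  same for undefined
console.log(!c?.length); // true:  an empty array has length 0, which is falsy
console.log(!d?.length); // false: a non-empty array passes the guard
```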
tests/k6/tests/performancetest_common/readTestdata.js (2)

12-19: LGTM! Consider enhancing error logging and input validation.

The implementation successfully addresses the PR objective by gracefully handling file read errors and returning an empty array instead of crashing.

Consider these enhancements:

```diff
 function readCsv(filename) {
+    if (!filename || typeof filename !== 'string') {
+        console.log('Invalid filename provided');
+        return [];
+    }
     try {
         return papaparse.parse(open(filename), { header: true, skipEmptyLines: true }).data;
     } catch (error) {
-        console.log(`Error reading CSV file: ${error}`);
+        console.log(`Error reading CSV file ${filename}: ${error.message}`);
+        console.log(`Stack trace: ${error.stack}`);
         return [];
     }
 }
```
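For context, a minimal sketch of how a loader like this is typically consumed in a k6 script. The CSV path and export name below are assumptions for illustration, not taken from the PR diff. In k6, `open()` is only available in the init context and throws when the file is missing, which is exactly why the try/catch lets the script start with an empty array instead of aborting:

```javascript
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';
import { SharedArray } from 'k6/data';

function readCsv(filename) {
    try {
        // k6's open() is init-context only and throws if the file is missing
        return papaparse.parse(open(filename), { header: true, skipEmptyLines: true }).data;
    } catch (error) {
        console.log(`Error reading CSV file: ${error}`);
        return []; // degrade to an empty array instead of crashing at init
    }
}

// SharedArray loads the data once in the init context and shares it read-only across VUs
// (the path below is a hypothetical example)
export const endUsers = new SharedArray('endUsers', () => readCsv('../performancetest_data/endusers.csv'));
```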
Line range hint 1-60: Consider centralizing test data validation and configuration.

The current implementation handles test data well, but consider these architectural improvements:
- Move file paths to a separate configuration file
- Create a TestDataManager class to handle all data loading and validation
- Add data schema validation using a library like Joi or Zod
This would make the code more maintainable and easier to test. Would you like me to provide an example implementation?
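As a rough illustration of that suggestion, here is a hedged sketch of what such a manager could look like. The class name, method names, and file paths are hypothetical, not from the repository:

```javascript
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';
import { SharedArray } from 'k6/data';

// Hypothetical centralized configuration of test data locations
const TESTDATA_PATHS = {
    endUsers: '../performancetest_data/endusers.csv',
    serviceOwners: '../performancetest_data/serviceowners.csv',
};

class TestDataManager {
    constructor(paths) {
        this.paths = paths;
    }

    // Load a dataset, falling back to an empty array if the file is unreadable
    load(name) {
        return new SharedArray(name, () => {
            try {
                return papaparse.parse(open(this.paths[name]), { header: true, skipEmptyLines: true }).data;
            } catch (error) {
                console.log(`Error reading CSV file for ${name}: ${error}`);
                return [];
            }
        });
    }

    // Load a dataset and fail fast with a descriptive error if it is empty
    loadOrThrow(name) {
        const data = this.load(name);
        if (!data || data.length === 0) {
            throw new Error(`No ${name} loaded for testing`);
        }
        return data;
    }
}

export const testData = new TestDataManager(TESTDATA_PATHS);
```

A test script would then call `testData.loadOrThrow('endUsers')` once at the top level (init context) and fail with a clear message when the CSV is missing or empty.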
tests/k6/tests/scenarios/performance/create-dialog-and-search.js (1)
70-72: Consider extracting common validation logic.

The validation for `endUsersWithTokens` is duplicated between `createDialogs()` and `enduserSearches()`. Consider extracting this into a shared validation function.

```diff
+function validateEndUsers() {
+    if (!endUsersWithTokens || endUsersWithTokens.length === 0) {
+        throw new Error('No end users loaded for testing');
+    }
+}
 export function createDialogs() {
-    if (!endUsersWithTokens || endUsersWithTokens.length === 0) {
-        throw new Error('No end users loaded for testing');
-    }
+    validateEndUsers();
     if (!serviceOwners || serviceOwners.length === 0) {
         throw new Error('No service owners loaded for testing');
     }
     createDialog(randomItem(serviceOwners), randomItem(endUsersWithTokens));
 }

 export function enduserSearches() {
-    if (!endUsersWithTokens || endUsersWithTokens.length === 0) {
-        throw new Error('No end users loaded for testing');
-    }
+    validateEndUsers();
     enduserSearch(randomItem(endUsersWithTokens));
 }
```
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📒 Files selected for processing (5)

- tests/k6/tests/enduser/performance/enduser-search.js (1 hunks)
- tests/k6/tests/performancetest_common/readTestdata.js (4 hunks)
- tests/k6/tests/scenarios/performance/create-dialog-and-search.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
🧰 Additional context used
📓 Learnings (3)
tests/k6/tests/scenarios/performance/create-dialog-and-search.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1331
File: tests/k6/tests/serviceowner/performance/create-dialog.js:28-33
Timestamp: 2024-10-23T11:32:22.074Z
Learning: In k6 performance tests, when data arrays such as `serviceOwners` and `endUsers` might be empty, prefer to let the test crash rather than adding checks to log error messages continuously.
tests/k6/tests/serviceowner/performance/create-dialog.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1331
File: tests/k6/tests/serviceowner/performance/create-dialog.js:28-33
Timestamp: 2024-10-23T11:32:22.074Z
Learning: In k6 performance tests, when data arrays such as `serviceOwners` and `endUsers` might be empty, prefer to let the test crash rather than adding checks to log error messages continuously.
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1331
File: tests/k6/tests/serviceowner/performance/create-dialog.js:28-33
Timestamp: 2024-10-23T11:32:22.074Z
Learning: In k6 performance tests, when data arrays such as `serviceOwners` and `endUsers` might be empty, prefer to let the test crash rather than adding checks to log error messages continuously.
🔇 Additional comments (6)
tests/k6/tests/enduser/performance/enduser-search.js (1)
20-22: Verify consistent error handling across test files.

Let's ensure this error handling pattern is consistently implemented across related test files.
✅ Verification successful
Error handling is consistently implemented across test files.

The verification shows consistent error handling patterns:

- `enduser-search.js` and `create-dialog-and-search.js` both check for `!endUsersWithTokens || endUsersWithTokens.length === 0`
- `create-dialog.js` implements the same pattern for `endUsers`
- All files throw similar error messages when test data is missing
- The test data loading is centralized in `readTestdata.js` and properly imported by all test files
🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Description: Check for consistent error handling in k6 test files
# Expected: Similar guard clauses in other test files using endUsersWithTokens

# Test 1: Find all k6 test files using endUsersWithTokens
echo "=== K6 test files using endUsersWithTokens ==="
rg -l "endUsersWithTokens" --type js "tests/k6"

# Test 2: Check error handling patterns in these files
echo -e "\n=== Error handling patterns ==="
rg "(!|\s+undefined.*|\.length\s*===\s*0).*endUsersWithTokens" --type js "tests/k6"

# Test 3: Verify the test data loading implementation
echo -e "\n=== Test data loading implementation ==="
rg -A 10 "readTestdata" --type js "tests/k6"
```
Length of output: 7513
tests/k6/tests/serviceowner/performance/create-dialog.js (1)
16-21: Consider previous feedback about handling empty arrays in k6 tests.
Based on previous feedback in PR #1331, it was suggested to let k6 performance tests crash naturally rather than adding explicit checks for empty arrays. The current implementation contradicts this guidance.
However, I notice that these error messages are more descriptive than a natural crash would be, which could help with debugging. We have two options here:
- Remove these checks to align with previous guidance
- Keep them if the improved error messages provide significant debugging value
Let's check if similar checks exist in other k6 test files:
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (3)
19-21: LGTM: Clear error message for missing end users.

The error message clearly indicates when end users data is missing, which aligns with the PR's objective of better handling empty test data files.
22-24: LGTM: Clear error message for missing service owners.

The error message clearly indicates when service owners data is missing, which aligns with the PR's objective of better handling empty test data files.
19-24: Consider previous feedback about handling empty arrays in k6 tests.

Based on previous feedback in PR #1331, it was suggested to let k6 tests crash naturally rather than adding explicit checks for empty arrays. The current implementation contradicts this guidance.
However, if the explicit checks are intentional to provide more descriptive error messages, then the implementation is consistent with similar changes in other test files.
Let's verify the consistency of error handling across other k6 test files:
✅ Verification successful
Empty array checks are consistently implemented across k6 test files
The explicit checks for empty arrays and descriptive error messages are consistently implemented across multiple k6 test files:

- tests/k6/tests/scenarios/performance/create-dialog-and-search.js
- tests/k6/tests/enduser/performance/enduser-search.js
- tests/k6/tests/graphql/performance/graphql-search.js
- tests/k6/tests/serviceowner/performance/create-dialog.js
This appears to be an established pattern in the codebase, suggesting that the previous feedback about letting tests crash naturally has been superseded by a more explicit error handling approach that provides better debugging information.
🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Description: Check for similar error handling patterns in other k6 test files
# Expected: Find similar array emptiness checks in other test files

# Search for similar error handling patterns in k6 test files
rg -A 2 "(!.*Users.*\.length === 0|!.*Owners.*\.length === 0)" tests/k6/tests/
```
Length of output: 2822
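To make the established pattern above concrete, here is a minimal self-contained k6 sketch of the guard clause in context. The file path, endpoint URL, and token field are placeholders for illustration, not from the repository (the real tests import their loader from `readTestdata.js`):

```javascript
import http from 'k6/http';
import { SharedArray } from 'k6/data';
import { randomItem } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';

// Placeholder loader; a missing or empty testdata file yields [] instead of a crash
const endUsersWithTokens = new SharedArray('endUsersWithTokens', () => {
    try {
        return JSON.parse(open('../performancetest_data/endusers-with-tokens.json'));
    } catch (error) {
        return [];
    }
});

export default function () {
    // Fail fast with a descriptive error instead of an opaque crash on an empty array
    if (!endUsersWithTokens || endUsersWithTokens.length === 0) {
        throw new Error('No end users loaded for testing');
    }
    const user = randomItem(endUsersWithTokens);
    // Placeholder endpoint; the real tests call the dialogporten search API
    http.get('https://example.invalid/api/v1/enduser/dialogs', {
        headers: { Authorization: `Bearer ${user.token}` },
    });
}
```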
tests/k6/tests/scenarios/performance/create-dialog-and-search.js (1)
60-65: Clarification needed: Change contradicts previous approach.
This validation logic contradicts your previous feedback in PR #1331 where you mentioned preferring to let tests crash rather than adding empty checks. Could you clarify if this is an intentional change in approach?
If this is a new direction, consider adding a comment explaining why explicit validation is now preferred over letting the test fail naturally.
Quality Gate passed
🤖 I have created a release *beep* *boop*

## [1.28.3](v1.28.2...v1.28.3) (2024-11-06)

### Bug Fixes

* avoid crash if testdata file is empty ([#1403](#1403)) ([e0ea0af](e0ea0af))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).
Return an empty array if the test data file is not found, and check for empty test data in k6 tests.
Description
Related Issue(s)
Verification
Documentation
- Documentation is updated (either in the `docs`-directory, Altinnpedia or a separate linked PR in altinn-studio-docs, if applicable)

Summary by CodeRabbit
New Features
Bug Fixes