chore(performance): run tests in k8s #1660
Conversation
📝 Walkthrough

This pull request introduces significant modifications to the performance testing infrastructure, focusing on simplifying token management and workflow configurations. The changes span multiple files across GitHub workflows, test scripts, and utility modules. Key modifications include removing token-related parameters from workflows, updating token retrieval mechanisms, and introducing new functions for creating transmissions and managing performance tests. The overall structure of the workflows remains intact, with an emphasis on streamlining the performance testing process and standardizing token generation and usage.
Actionable comments posted: 12
🧹 Nitpick comments (7)
tests/k6/tests/performancetest_common/createDialog.js (1)
24-26: Consider caching the enterprise token to enhance performance

The function `getEnterpriseToken(serviceOwner)` is called each time a dialog is created, which may lead to multiple network requests and increased latency if the tokens are valid for multiple requests. Consider caching the token within the `serviceOwner` object or implementing a token caching mechanism to reuse the token across multiple requests, improving performance and reducing overhead.
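A cache along these lines could illustrate the idea. This is a sketch only, not code from the PR; the `orgno` field and the import path are assumptions, and in k6 a module-level map is cached per virtual user:

```javascript
// Hypothetical sketch of a per-VU token cache (field name `orgno` and import path are assumptions).
import { getEnterpriseToken } from './getTokens.js';

const enterpriseTokenCache = new Map();

export function getCachedEnterpriseToken(serviceOwner) {
    const key = serviceOwner.orgno;
    if (!enterpriseTokenCache.has(key)) {
        // Fetch once per service owner and reuse it for subsequent dialog creations in this VU.
        enterpriseTokenCache.set(key, getEnterpriseToken(serviceOwner));
    }
    return enterpriseTokenCache.get(key);
}
```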
tests/k6/tests/performancetest_common/readTestdata.js (1)

30-30: LGTM! Consider error handling enhancement.

The environment-specific path for service owners aligns with the configuration strategy. The removal of `endUsersWithTokens` supports the new token generation approach.

Consider adding a check for file existence before parsing:

  function readCsv(filename) {
    try {
+     if (!open(filename)) {
+       throw new Error(`File not found: ${filename}`);
+     }
      return papaparse.parse(open(filename), { header: true, skipEmptyLines: true }).data;
    } catch (error) {
      console.log(`Error reading CSV file: ${error}`);
      return [];
    }
  }

tests/k6/tests/serviceowner/performance/create-transmissions.js (2)
32-35: Consider load distribution strategy.

In multi-user mode, using the same service owner for all end users might not represent real-world usage patterns. Consider rotating service owners:

- let serviceOwner = randomItem(serviceOwners);
  for (let i = 0; i < endUsers.length; i++) {
+   let serviceOwner = serviceOwners[i % serviceOwners.length];
    createTransmissions(serviceOwner, endUsers[i], traceCalls, numberOfTransmissions, maxTransmissionsInThread);
  }
1-4: Add test coverage expectations.

The documentation should specify the expected performance metrics and test coverage goals. Add performance expectations to the documentation:

  /**
   * Performance test for creating dialogs with transmissions
   * Run: k6 run tests/k6/tests/serviceowner/performance/create-transmissions.js --vus 1 --iterations 1 -e numberOfTransmissions=100
+  *
+  * Expected metrics:
+  * - p95 response time < 1000ms
+  * - Error rate < 1%
+  * - Throughput >= 10 transmissions/second
   */

tests/k6/tests/performancetest_data/create-transmission.js (1)
61-63: Simplify conditional logic

The conditional assignment of `relatedTransmissionId` can be simplified:

- if (relatedTransmissionId != 0) {
-   transmission.relatedTransmissionId = relatedTransmissionId;
- }
+ relatedTransmissionId && (transmission.relatedTransmissionId = relatedTransmissionId);

tests/k6/tests/scripts/run-test-in-k8s.sh (2)
76-95: Make Prometheus configuration more flexible.

The Prometheus server URL is hardcoded and might need to be different across environments. Consider making it configurable:

+PROMETHEUS_URL="${PROMETHEUS_URL:-http://kube-prometheus-stack-prometheus.monitoring:9090/api/v1/write}"
+
 cat <<EOF > config.yml
 apiVersion: k6.io/v1alpha1
 kind: TestRun
 metadata:
   name: $name
 spec:
   arguments: --out experimental-prometheus-rw --vus=$vus --duration=$duration
   parallelism: $parallelism
   script:
     configMap:
       name: $configmapname
       file: archive.tar
   runner:
     env:
       - name: K6_PROMETHEUS_RW_SERVER_URL
-        value: "http://kube-prometheus-stack-prometheus.monitoring:9090/api/v1/write"
+        value: "$PROMETHEUS_URL"
     metadata:
       labels:
         k6-test: $name
 EOF
99-101: Improve wait timeout calculation.

The current timeout calculation (`${duration}100s`) is arbitrary and might not be suitable for all test durations. Consider a more flexible approach:

-wait_timeout="${duration}100s"
+# Add 5 minutes to test duration for setup/teardown
+wait_timeout="$((duration + 300))s"
 kubectl --context k6tests-cluster wait --for=jsonpath='{.status.stage}'=finished testrun/$name --timeout=$wait_timeout
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
.github/workflows/ci-cd-yt01.yml (0 hunks)
.github/workflows/dispatch-k6-performance.yml (1 hunks)
.github/workflows/workflow-run-k6-performance.yml (0 hunks)
tests/k6/common/token.js (1 hunks)
tests/k6/tests/enduser/performance/enduser-search.js (2 hunks)
tests/k6/tests/graphql/performance/graphql-search.js (1 hunks)
tests/k6/tests/performancetest_common/createDialog.js (3 hunks)
tests/k6/tests/performancetest_common/getTokens.js (1 hunks)
tests/k6/tests/performancetest_common/readTestdata.js (1 hunks)
tests/k6/tests/performancetest_common/simpleSearch.js (4 hunks)
tests/k6/tests/performancetest_data/01-create-dialog.js (1 hunks)
tests/k6/tests/performancetest_data/create-transmission.js (1 hunks)
tests/k6/tests/scripts/run-test-in-k8s.sh (1 hunks)
tests/k6/tests/serviceowner/performance/create-transmissions.js (1 hunks)
💤 Files with no reviewable changes (2)
- .github/workflows/ci-cd-yt01.yml
- .github/workflows/workflow-run-k6-performance.yml
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Dry run deploy infrastructure / Deploy to test
🔇 Additional comments (7)
tests/k6/tests/performancetest_common/createDialog.js (1)
112-114: 🛠️ Refactor suggestion

Verify the logic for resetting `relatedTransmissionId`

The condition `if (i % maxTransmissionsInThread === 0)` will evaluate to `true` when `i` is `0`, causing `relatedTransmissionId` to be reset to `0` before creating the first transmission. This may not be the intended behavior if you want to reset `relatedTransmissionId` after every `maxTransmissionsInThread` transmissions. Consider adjusting the condition to avoid resetting on the first iteration.

tests/k6/tests/enduser/performance/enduser-search.js (1)
3-3: Ensure `enduserSearch` handles token retrieval for `endUsers`

The import has been changed from `endUsersWithTokens` to `endUsers`, indicating that tokens are no longer pre-fetched. Verify that the `enduserSearch` function internally handles token retrieval for each end user to ensure authentication works correctly during performance testing.

tests/k6/tests/performancetest_data/01-create-dialog.js (1)
57-58: LGTM! Good data validation practice.

Adding the `cleanUp` function call ensures consistent payload structure and proper validation before transmission.

tests/k6/tests/performancetest_data/create-transmission.js (1)
54-54: Avoid hardcoding environment-specific URLs

The URL contains a hardcoded environment reference 'tt02' which could cause issues when testing in different environments. Consider making the URL configurable through environment variables:

- "url": "https://digdir.apps.tt02.altinn.no/some-other-url",
+ "url": `https://digdir.apps.${__ENV.ENVIRONMENT}.altinn.no/some-other-url`,

tests/k6/common/token.js (1)
34-34: LGTM! Good token caching implementation

The export of the `fetchToken` function is a good change, and the implementation includes proper caching, error handling, and is thread-safe for k6 virtual users.
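For readers unfamiliar with the pattern, a cached token fetcher of this kind could look roughly as follows. This is an illustrative sketch, not the actual contents of tests/k6/common/token.js; the function name, parameters, and basic-auth scheme are assumptions:

```javascript
// Minimal sketch of a cached token fetcher for k6; details are assumptions, not the real implementation.
import http from 'k6/http';
import encoding from 'k6/encoding';

const tokenCache = new Map();

export function fetchTokenSketch(url, tokenOptions, description) {
    if (!tokenCache.has(url)) {
        const credentials = `${tokenOptions.username}:${tokenOptions.password}`;
        const params = {
            headers: { Authorization: `Basic ${encoding.b64encode(credentials)}` },
        };
        const response = http.get(url, params);
        if (response.status !== 200) {
            // Fail fast with context so a bad credential or URL is visible in the test output.
            throw new Error(`Failed to fetch ${description}: ${response.status} ${response.body}`);
        }
        tokenCache.set(url, response.body);
    }
    return tokenCache.get(url);
}
```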
tests/k6/tests/performancetest_common/simpleSearch.js (1)

48-48: LGTM! Consistent token retrieval implementation

The changes consistently implement the new token retrieval mechanism across all search functions, improving token management.

Also applies to: 135-135, 163-163
.github/workflows/dispatch-k6-performance.yml (1)
45-45: LGTM! Good workflow simplification

The addition of the transmission test suite and removal of token configuration aligns well with the PR objectives of enhancing performance testing and simplifying token management.
…dialogporten into performance/create-transmissions
Actionable comments posted: 7
🧹 Nitpick comments (5)
tests/k6/tests/performancetest_data/create-transmission.js (3)
2-4: Add parameter validation and documentation.

The function would benefit from:

- Parameter type validation for `relatedTransmissionId`
- JSDoc documentation describing the function's purpose, parameters, and return value

  import { uuidv7 } from "../../common/uuid.js";

+ /**
+  * Creates a transmission object for performance testing
+  * @param {string|number} relatedTransmissionId - The ID of a related transmission (0 if none)
+  * @returns {Object} The constructed transmission object
+  */
  export default function (relatedTransmissionId) {
+   if (relatedTransmissionId !== 0 && typeof relatedTransmissionId !== 'string') {
+     throw new Error('relatedTransmissionId must be 0 or a valid string UUID');
+   }
14-39: Improve test data management and validation.

Consider the following improvements:

- Move the test data to a separate configuration file for better maintainability
- Add validation for required language codes

Example validation:

  "content": {
+ const REQUIRED_LANGUAGES = ['nb', 'en'];
+ const validateLanguages = (values) => {
+   const codes = values.map(v => v.languageCode);
+   return REQUIRED_LANGUAGES.every(lang => codes.includes(lang));
+ };
+ if (!validateLanguages(title.value) || !validateLanguages(summary.value)) {
+   throw new Error('Missing required language translations');
+ }
61-63: Improve related transmission condition.

The comparison with zero suggests numeric input, but transmission IDs are typically UUIDs (strings). Consider making the condition more explicit:

- if (relatedTransmissionId != 0) {
+ if (relatedTransmissionId && relatedTransmissionId !== '0') {
    transmission.relatedTransmissionId = relatedTransmissionId;
  }

tests/k6/tests/performancetest_common/getTokens.js (2)
3-4: Add documentation for environment variables.

Consider adding JSDoc comments to document the purpose and expected values of the `API_ENVIRONMENT` and `TTL` environment variables.

+ /**
+  * @constant {string} tokenGeneratorEnv - The API environment to use (e.g., 'yt01'). Defaults to 'yt01'.
+  */
  const tokenGeneratorEnv = __ENV.API_ENVIRONMENT || 'yt01';
+ /**
+  * @constant {number} tokenTtl - Token time-to-live in seconds. Defaults to 3600 (1 hour).
+  */
  const tokenTtl = __ENV.TTL || 3600;
12-12: Consider architectural improvements for token generation.

The current implementation has a few architectural considerations:
- Hardcoded Azure URLs make environment changes difficult.
- External dependency could impact test reliability.
Consider:
- Moving the base URL to an environment variable.
- Adding retry logic or circuit breakers for resilience.
- Implementing local token generation for offline testing.
Also applies to: 21-21
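On the retry point above, a bounded-retry wrapper could be layered on top of the existing fetch helper. The sketch below assumes a `fetchToken`-style function and illustrative parameters; it is not code from this PR:

```javascript
// Hypothetical retry wrapper around a token fetch; names and defaults are assumptions.
import { sleep } from 'k6';

export function fetchTokenWithRetry(fetchFn, url, options, description, maxAttempts = 3) {
    let lastError;
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return fetchFn(url, options, description);
        } catch (error) {
            lastError = error;
            // Simple linear backoff between attempts (k6's sleep takes seconds).
            sleep(attempt);
        }
    }
    throw new Error(`Token fetch failed after ${maxAttempts} attempts: ${lastError}`);
}
```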
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
tests/k6/tests/graphql/performance/graphql-search.js (2 hunks)
tests/k6/tests/performancetest_common/createDialog.js (3 hunks)
tests/k6/tests/performancetest_common/getTokens.js (1 hunks)
tests/k6/tests/performancetest_data/create-transmission.js (1 hunks)
tests/k6/tests/scripts/run-test-in-k8s.sh (1 hunks)
tests/k6/tests/serviceowner/performance/create-transmissions.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/k6/tests/serviceowner/performance/create-transmissions.js
- tests/k6/tests/graphql/performance/graphql-search.js
- tests/k6/tests/scripts/run-test-in-k8s.sh
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Dry run deploy infrastructure / Deploy to test
- GitHub Check: build / build-and-test
🔇 Additional comments (3)
tests/k6/tests/performancetest_data/create-transmission.js (1)
8-10: Verify placeholder values in transmission properties.

The values "element1" for `authorizationAttribute` and "string" for `extendedType` appear to be placeholders. Please verify if these should be replaced with actual production values or if they are intentionally set for testing purposes.

tests/k6/tests/performancetest_common/createDialog.js (2)
9-9: LGTM! Import statement properly added for transmission data.

The new import aligns well with the added transmission functionality.

24-24: LGTM! Token retrieval standardized using getEnterpriseToken.

The change improves token management by using a centralized token generation mechanism.
Also applies to: 51-51
…dialogporten into performance/create-transmissions
Actionable comments posted: 2
🧹 Nitpick comments (3)
tests/k6/tests/scripts/run-test-in-k8s.sh (3)
40-40: Add error handling for log retrieval.

The log retrieval command should handle potential failures. Apply this diff:

- kubectl --context "$K8S_CONTEXT" -n "$K8S_NAMESPACE" logs --tail=-1 $pod
+ if ! timeout "${LOG_TIMEOUT}s" kubectl --context "$K8S_CONTEXT" -n "$K8S_NAMESPACE" logs --tail=-1 $pod; then
+   echo "Warning: Failed to retrieve logs for $pod"
+ fi
97-97: Ensure test ID follows k8s naming conventions.

The current test ID format might not comply with k8s naming conventions. k8s names should consist of lowercase letters, numbers, and hyphens. Apply this diff:

-testid="${name}_$(date '+%Y%m%dT%H%M%S')"
+testid="${name}-$(date '+%Y%m%d-%H%M%S' | tr '[:upper:]' '[:lower:]')"
126-129: Make Prometheus configuration more flexible.

The Prometheus server URL and trend stats are hardcoded. Consider making these configurable through environment variables. Apply this diff:

  - name: K6_PROMETHEUS_RW_SERVER_URL
-   value: "http://kube-prometheus-stack-prometheus.monitoring:9090/api/v1/write"
+   value: "${PROMETHEUS_URL:-http://kube-prometheus-stack-prometheus.monitoring:9090/api/v1/write}"
  - name: K6_PROMETHEUS_RW_TREND_STATS
-   value: "avg,min,med,max,p(95),p(99),p(99.5),p(99.9),count"
+   value: "${TREND_STATS:-avg,min,med,max,p(95),p(99),p(99.5),p(99.9),count}"
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
tests/k6/tests/performancetest_common/createDialog.js (3 hunks)
tests/k6/tests/scripts/run-test-in-k8s.sh (1 hunks)
tests/k6/tests/serviceowner/performance/create-transmissions.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/k6/tests/serviceowner/performance/create-transmissions.js
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Dry run deploy infrastructure / Deploy to test
- GitHub Check: build / build-and-test
🔇 Additional comments (6)
tests/k6/tests/performancetest_common/createDialog.js (6)
9-10: LGTM! New imports align with added functionality.

The new imports for transmission-related functionality and token generation are appropriately added.
24-24: LGTM! Token generation standardized.

Token generation has been appropriately standardized using `getEnterpriseToken` across functions.

Also applies to: 51-51
112-114: Fix thread reset logic.

The thread reset condition is incorrect. It resets at the start instead of at the end of maxTransmissionsInThread.

- if (i%maxTransmissionsInThread === 0) {
+ // Reset for the next transmission if we've reached max thread size
+ if ((i + 1) % maxTransmissionsInThread === 0) {
    relatedTransmissionId = 0;
  }
85-98: Add parameter validation.

The function lacks validation for input parameters.

  export function createTransmissions(serviceOwner, endUser, traceCalls, numberOfTransmissions, maxTransmissionsInThread, testid) {
+   if (!numberOfTransmissions || numberOfTransmissions <= 0) {
+     throw new Error('numberOfTransmissions must be a positive number');
+   }
+   if (!maxTransmissionsInThread || maxTransmissionsInThread <= 0) {
+     throw new Error('maxTransmissionsInThread must be a positive number');
+   }
+   if (!serviceOwner || !endUser) {
+     throw new Error('serviceOwner and endUser are required');
+   }
    let traceparent = uuidv4();
101-105: Add error handling for dialog creation.

Dialog creation should include proper error handling.

  describe('create dialog', () => {
    let r = postSO('dialogs', dialogToInsert(endUser.ssn, endUser.resource), paramsWithToken);
-   dialogId = r.json();
-   expect(r.status, 'response status').to.equal(201);
+   try {
+     expect(r.status, 'response status').to.equal(201);
+     dialogId = r.json();
+   } catch (error) {
+     console.error(`Failed to create dialog: ${error.message}`);
+     throw error;
+   }
  });
131-135: Add error handling for failed requests.

The transmission creation lacks proper error handling.

  describe('create transmission', () => {
    let r = postSO('dialogs/' + dialogId + '/transmissions', transmissionToInsert(relatedTransmissionId), paramsWithToken);
-   expect(r.status, 'response status').to.equal(201);
-   newRelatedTransmissionId = r.json();
+   try {
+     expect(r.status, 'response status').to.equal(201);
+     newRelatedTransmissionId = r.json();
+   } catch (error) {
+     console.error(`Failed to create transmission: ${error.message}`);
+     throw error;
+   }
  });
Quality Gate passed.
- It looks like pre-generating tokens has been dropped; should generate_tokens.sh be removed? Or is it still relevant for running locally?
- Are ./tests/{serviceowner,enduser,graphql}/performance/README.md up to date?
- Would it make sense to consolidate what has been added for the performance tests with what already exists for the functional tests? Right now we have ./common and ./tests/performancetest_common, ./scripts and ./tests/scripts, ./tests/serviceowner/testdata and ./tests/performancetest_data.
- Something about performance testing should be added to ./README at the root, since that is where people will look first, e.g. references to the other READMEs further down in the hierarchy.
- run-test-in-k8s.sh needs more documentation/examples. As I read it, the script is not meant to be runnable locally (that would require `az login`, `az aks get-credentials ...` etc.)?
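For reference, a local run would presumably require something like the following. The flags are assumptions about the script's interface, and apart from the `k6tests-cluster` context name (which appears in the script) the resource names are placeholders:

```bash
# Hypothetical local setup for running the script against the k6 test cluster.
# Resource group and flag names are placeholders; check the workflow for the real invocation.
az login
az aks get-credentials --resource-group <resource-group> --name k6tests-cluster
./tests/k6/tests/scripts/run-test-in-k8s.sh \
  -f tests/k6/tests/serviceowner/performance/create-transmissions.js \
  -n create-transmissions -v 10 -d 5m -p 1
```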
Actionable comments posted: 1
🧹 Nitpick comments (7)
tests/k6/tests/enduser/performance/README.md (3)
31-37: Add documentation for token generator credentials.

While the command structure is clear, it would be helpful to add:

- A brief explanation of what the token generator credentials are used for
- How to obtain these credentials
- Any restrictions or requirements for the credentials
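As an illustration of what such documentation could cover, a local setup snippet might look like this; the environment variable names are assumptions and should be checked against getTokens.js:

```bash
# Hypothetical example of exporting token generator credentials before a local run.
# Variable names are assumptions; use the names the test code actually reads.
export TOKEN_GENERATOR_USERNAME="<username>"
export TOKEN_GENERATOR_PASSWORD="<passwd>"
k6 run tests/k6/tests/enduser/performance/enduser-search.js --vus 1 --iterations 1
```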
56-56: Fix heading level for consistency.

The heading "IMPORTANT:" should be at the same level as its parent section (h4 instead of h5).

-##### IMPORTANT: Ensure this file is added to .gitignore to prevent accidental commits of sensitive information.
+#### IMPORTANT: Ensure this file is added to .gitignore to prevent accidental commits of sensitive information

🧰 Tools
🪛 Markdownlint (0.37.0)
56-56: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time (MD001, heading-increment)
56-56: Punctuation: '.'
Trailing punctuation in heading (MD026, no-trailing-punctuation)

66-66: Improve section heading and content clarity.

To avoid repetition and improve readability:

-## Test Results
-Test results can be found in GitHub action run log, grafana and in App Insights.
+## Results
+Results are available in:
+- GitHub Actions run logs
+- Grafana dashboards
+- Application Insights

tests/k6/tests/graphql/performance/README.md (2)
41-42: Fix markdown formatting issues.

The heading hierarchy and code block formatting need to be corrected:

- Use #### instead of ##### for the heading to maintain proper hierarchy
- Use fenced code blocks (```) instead of indented code blocks

- Replace `<username>` and `<passwd>`, same as for generating tokens above.
-##### IMPORTANT: Ensure this file is added to .gitignore...
+Replace `<username>` and `<passwd>`, same as for generating tokens above.
+#### IMPORTANT: Ensure this file is added to .gitignore...

🧰 Tools
🪛 Markdownlint (0.37.0)
42-42: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time (MD001, heading-increment)
42-42: Punctuation: '.'
Trailing punctuation in heading (MD026, no-trailing-punctuation)
41-41: Expected: fenced; Actual: indented
Code block style (MD046, code-block-style)

51-52: Remove duplicate heading.

The "Test Results" heading appears twice in the document.

-## Test Results
-Test results can be found in GitHub action run log, grafana and in App Insights.
+Test results can be found in GitHub action run log, grafana and in App Insights.

🧰 Tools
🪛 LanguageTool
[grammar] ~51-~51: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...l/performance/graphql-search.js ``` ## Test Results Test results can be found in GitHub action run log, ...(PHRASE_REPETITION)
tests/k6/tests/serviceowner/performance/README.md (1)
49-50: Fix markdown formatting issues.

The heading hierarchy and code block formatting need to be corrected:

- Use #### instead of ##### for the heading to maintain proper hierarchy
- Use fenced code blocks (```) instead of indented code blocks

- Replace `<username>` and `<passwd>`, same as for generating tokens above.
-##### IMPORTANT: Ensure this file is added to .gitignore...
+Replace `<username>` and `<passwd>`, same as for generating tokens above.
+#### IMPORTANT: Ensure this file is added to .gitignore...

🧰 Tools
🪛 Markdownlint (0.37.0)
50-50: Punctuation: '.'
Trailing punctuation in heading (MD026, no-trailing-punctuation)
49-49: Expected: fenced; Actual: indented
Code block style (MD046, code-block-style)
tests/k6/README.md (1)
30-32: Maintain consistent list style.

Use asterisks for unordered lists to maintain consistency with the rest of the document.

-- [ServiceOwner](tests/serviceowner/performance/README.md)
-- [GraphQL](tests/graphql/performance/README.md)
-- [EndUser](tests/enduser/performance/README.md)
+* [ServiceOwner](tests/serviceowner/performance/README.md)
+* [GraphQL](tests/graphql/performance/README.md)
+* [EndUser](tests/enduser/performance/README.md)

🧰 Tools
🪛 Markdownlint (0.37.0)
30-30: Expected: asterisk; Actual: dash
Unordered list style (MD004, ul-style)
31-31: Expected: asterisk; Actual: dash
Unordered list style (MD004, ul-style)
32-32: Expected: asterisk; Actual: dash
Unordered list style (MD004, ul-style)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
README.md (1 hunks)
tests/k6/README.md (1 hunks)
tests/k6/tests/enduser/performance/README.md (3 hunks)
tests/k6/tests/graphql/performance/README.md (2 hunks)
tests/k6/tests/scripts/generate_tokens.sh (0 hunks)
tests/k6/tests/serviceowner/performance/README.md (4 hunks)
💤 Files with no reviewable changes (1)
- tests/k6/tests/scripts/generate_tokens.sh
✅ Files skipped from review due to trivial changes (1)
- README.md
🧰 Additional context used
🪛 Markdownlint (0.37.0)
tests/k6/README.md
30-30: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
31-31: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
32-32: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
tests/k6/tests/serviceowner/performance/README.md
50-50: Punctuation: '.'
Trailing punctuation in heading
(MD026, no-trailing-punctuation)
49-49: Expected: fenced; Actual: indented
Code block style
(MD046, code-block-style)
tests/k6/tests/enduser/performance/README.md
56-56: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time
(MD001, heading-increment)
56-56: Punctuation: '.'
Trailing punctuation in heading
(MD026, no-trailing-punctuation)
55-55: Expected: fenced; Actual: indented
Code block style
(MD046, code-block-style)
tests/k6/tests/graphql/performance/README.md
42-42: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time
(MD001, heading-increment)
42-42: Punctuation: '.'
Trailing punctuation in heading
(MD026, no-trailing-punctuation)
41-41: Expected: fenced; Actual: indented
Code block style
(MD046, code-block-style)
🪛 LanguageTool
tests/k6/tests/enduser/performance/README.md
[grammar] ~65-~65: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...r/performance/enduser-search.js ``` ## Test Results Test results can be found in GitHub action run log, ...
(PHRASE_REPETITION)
tests/k6/tests/graphql/performance/README.md
[grammar] ~51-~51: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...l/performance/graphql-search.js ``` ## Test Results Test results can be found in GitHub action run log, ...
(PHRASE_REPETITION)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Dry run deploy infrastructure / Deploy to test
- GitHub Check: build / build-and-test
🔇 Additional comments (3)
tests/k6/tests/enduser/performance/README.md (2)
13-13: LGTM! Clear and well-structured endpoint documentation.

The endpoint listing provides a comprehensive overview of the API paths being tested.

55-57: Great addition of security warning!

The explicit warning about not committing credentials is a valuable security reminder.
🧰 Tools
🪛 Markdownlint (0.37.0)
56-56: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time(MD001, heading-increment)
56-56: Punctuation: '.'
Trailing punctuation in heading(MD026, no-trailing-punctuation)
55-55: Expected: fenced; Actual: indented
Code block style(MD046, code-block-style)
tests/k6/tests/serviceowner/performance/README.md (1)
13-13: Document the new test file's purpose.

The newly added `create-transmissions.js` test file is listed but its purpose and usage are not explained in the documentation. Could you provide a brief description of what this test file does and when it should be used?
Actionable comments posted: 0
🔭 Outside diff range comments (1)
tests/k6/tests/graphql/performance/README.md (1)
Line range hint 1-4: Update introduction to reflect new token generation approach.

The introduction still mentions "pre-generated tokens" which contradicts the PR's shift to on-the-fly token generation.

-This directory holds a performance test with POST for `graphql`. The test file `graphql-search.js` is responsible for executing the performance test. It includes a list of end users (ssn) with pre-generated tokens and performs a POST request to the GraphQL endpoint with the payload `input: { party: ["urn:altinn:person:identifier-no:${identifier}"]}`. This test is designed to measure the performance of the GraphQL search functionality.
+This directory holds a performance test with POST for `graphql`. The test file `graphql-search.js` is responsible for executing the performance test. It includes a list of end users (ssn) and performs a POST request to the GraphQL endpoint with the payload `input: { party: ["urn:altinn:person:identifier-no:${identifier}"]}`, using dynamically generated tokens. This test is designed to measure the performance of the GraphQL search functionality.
🧹 Nitpick comments (6)
tests/k6/tests/graphql/performance/README.md (2)
41-42: Fix markdown formatting issues.

The heading level and indentation need to be adjusted for consistency.

- Replace `<username>` and `<passwd>`, same as for generating tokens above.
-##### IMPORTANT: Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.
+Replace `<username>` and `<passwd>`, same as for generating tokens above.
+
+#### IMPORTANT
+Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.

🧰 Tools
🪛 Markdownlint (0.37.0)
42-42: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time(MD001, heading-increment)
42-42: Punctuation: '.'
Trailing punctuation in heading(MD026, no-trailing-punctuation)
41-41: Expected: fenced; Actual: indented
Code block style(MD046, code-block-style)
51-52: Fix heading duplication in test results section.

The heading appears twice in the markdown.

-## Test Results
-Test results can be found in GitHub action run log, grafana and in App Insights.
+## Test Results
+Results can be found in GitHub action run log, grafana and in App Insights.

🧰 Tools
🪛 LanguageTool
[grammar] ~51-~51: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...l/performance/graphql-search.js ``` ## Test Results Test results can be found in GitHub action run log, ...(PHRASE_REPETITION)
tests/k6/tests/enduser/performance/README.md (2)
55-56: Fix markdown formatting issues.

The heading level and indentation need to be adjusted for consistency.

- Replace `<username>` and `<passwd>`, same as for generating tokens above.
-##### IMPORTANT: Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.
+Replace `<username>` and `<passwd>`, same as for generating tokens above.
+
+#### IMPORTANT
+Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.

🧰 Tools
🪛 Markdownlint (0.37.0)
56-56: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time(MD001, heading-increment)
56-56: Punctuation: '.'
Trailing punctuation in heading(MD026, no-trailing-punctuation)
55-55: Expected: fenced; Actual: indented
Code block style(MD046, code-block-style)
65-66: Fix heading duplication in test results section.

The heading appears twice in the markdown.

-## Test Results
-Test results can be found in GitHub action run log, grafana and in App Insights.
+## Test Results
+Results can be found in GitHub action run log, grafana and in App Insights.

🧰 Tools
🪛 LanguageTool
[grammar] ~65-~65: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...r/performance/enduser-search.js ``` ## Test Results Test results can be found in GitHub action run log, ...(PHRASE_REPETITION)
tests/k6/tests/serviceowner/performance/README.md (2)
49-50: Fix markdown formatting issues.

The heading level and indentation need to be adjusted for consistency.

- Replace `<username>` and `<passwd>`, same as for generating tokens above.
-##### IMPORTANT: Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.
+Replace `<username>` and `<passwd>`, same as for generating tokens above.
+
+#### IMPORTANT
+Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.

🧰 Tools
🪛 Markdownlint (0.37.0)
50-50: Punctuation: '.'
Trailing punctuation in heading(MD026, no-trailing-punctuation)
49-49: Expected: fenced; Actual: indented
Code block style(MD046, code-block-style)
Line range hint 88-89: Fix article usage in test results section.

Minor grammar improvement needed.

-The test results can be found in the GitHub Actions run log, grafana and in App Insights.
+Test results can be found in GitHub Actions run log, grafana and in App Insights.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
.github/workflows/ci-cd-yt01.yml (0 hunks)
README.md (1 hunks)
tests/k6/tests/enduser/performance/README.md (3 hunks)
tests/k6/tests/graphql/performance/README.md (2 hunks)
tests/k6/tests/serviceowner/performance/README.md (4 hunks)
💤 Files with no reviewable changes (1)
- .github/workflows/ci-cd-yt01.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- README.md
🧰 Additional context used
🪛 LanguageTool
tests/k6/tests/enduser/performance/README.md
[grammar] ~65-~65: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...r/performance/enduser-search.js ``` ## Test Results Test results can be found in GitHub action run log, ...
(PHRASE_REPETITION)
tests/k6/tests/graphql/performance/README.md
[grammar] ~51-~51: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...l/performance/graphql-search.js ``` ## Test Results Test results can be found in GitHub action run log, ...
(PHRASE_REPETITION)
🪛 Markdownlint (0.37.0)
tests/k6/tests/enduser/performance/README.md
56-56: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time
(MD001, heading-increment)
56-56: Punctuation: '.'
Trailing punctuation in heading
(MD026, no-trailing-punctuation)
55-55: Expected: fenced; Actual: indented
Code block style
(MD046, code-block-style)
tests/k6/tests/graphql/performance/README.md
42-42: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time
(MD001, heading-increment)
42-42: Punctuation: '.'
Trailing punctuation in heading
(MD026, no-trailing-punctuation)
41-41: Expected: fenced; Actual: indented
Code block style
(MD046, code-block-style)
tests/k6/tests/serviceowner/performance/README.md
50-50: Punctuation: '.'
Trailing punctuation in heading
(MD026, no-trailing-punctuation)
49-49: Expected: fenced; Actual: indented
Code block style
(MD046, code-block-style)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Dry run deploy infrastructure / Deploy to test
- GitHub Check: build / build-and-test
🔇 Additional comments (1)
tests/k6/tests/serviceowner/performance/README.md (1)
13-13: LGTM: New test file addition.

The addition of `create-transmissions.js` aligns with the PR objective of facilitating the addition of transmissions to dialogs.
Removed the script.
Documentation updated.
I have deliberately tried to avoid changes to existing code, but we should probably consolidate a bit now. Can we take that in a separate PR? I feel this one is a bit too big already.
The README at the root has a reference to k6/non-functional tests. Added references to the performance READMEs in the README at the k6 root.
Not sure how open this should be; it is meant to be run from a GitHub workflow. Can we 'hide' it by renaming it to '.run-test-in-k8s.sh'?
Actionable comments posted: 1
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/k6/README.md (1 hunks)
🧰 Additional context used
🪛 Markdownlint (0.37.0)
tests/k6/README.md
30-30: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
31-31: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
32-32: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Dry run deploy infrastructure / Deploy to test
- GitHub Check: build / build-and-test
🔇 Additional comments (2)
tests/k6/README.md (2)
26-26: LGTM!

The new section header follows the document's style and hierarchy.

30-32: Fix relative paths and maintain consistent list style.

- The relative paths should be updated to be relative to the current file's location:

-- [ServiceOwner](./tests/serviceowner/performance/README.md)
-- [GraphQL](./tests/graphql/performance/README.md)
-- [EndUser](./tests/enduser/performance/README.md)
+* [ServiceOwner](../serviceowner/performance/README.md)
+* [GraphQL](../graphql/performance/README.md)
+* [EndUser](../enduser/performance/README.md)

- Use asterisks (*) instead of dashes (-) to maintain consistency with other lists in the document.

🧰 Tools
🪛 Markdownlint (0.37.0)
30-30: Expected: asterisk; Actual: dash
Unordered list style(MD004, ul-style)
31-31: Expected: asterisk; Actual: dash
Unordered list style(MD004, ul-style)
32-32: Expected: asterisk; Actual: dash
Unordered list style(MD004, ul-style)
A separate PR sounds like a good idea.
Fine to just leave it runnable in a workflow. It should simply be made clear that this is the case, and that for local runs you run the tests locally instead.
Description
Start using a k6 operator k8s-cluster to run the performance/load tests.
Changed the logic around token generating from pregenerated to generating on the fly, taking advantage of the caching of tokens already implemented.
Created a test to add transmissions to dialogs
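For context, the manifest that run-test-in-k8s.sh generates for the k6 operator looks roughly like this (simplified from the script; names and values are placeholders):

```yaml
# Simplified from the config.yml generated by run-test-in-k8s.sh; names and values are placeholders.
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: create-transmissions
spec:
  arguments: --out experimental-prometheus-rw --vus=10 --duration=5m
  parallelism: 1
  script:
    configMap:
      name: create-transmissions-configmap
      file: archive.tar
  runner:
    env:
      - name: K6_PROMETHEUS_RW_SERVER_URL
        value: "http://kube-prometheus-stack-prometheus.monitoring:9090/api/v1/write"
    metadata:
      labels:
        k6-test: create-transmissions
```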
Related Issue(s)
Verification
Documentation
- Relevant documentation is updated (in the `docs`-directory, Altinnpedia or a separate linked PR in altinn-studio-docs, if applicable)