
chore(performance): run tests in k8s #1660

Merged (24 commits) on Jan 14, 2025

Conversation

dagfinno (Collaborator) commented Jan 8, 2025

Description

Start using a Kubernetes cluster with the k6 operator to run the performance/load tests.
Changed the token logic from pregenerated tokens to generating them on the fly, taking advantage of the token caching already implemented.
Created a test that adds transmissions to dialogs.

Related Issue(s)

Verification

  • Your code builds clean without any errors or warnings
  • Manual testing done (required)
  • Relevant automated test added (if you find this hard, leave it and we'll help out)

Documentation

  • Documentation is updated (either in the docs directory, Altinnpedia, or a separate linked PR in altinn-studio-docs, if applicable)

@dagfinno dagfinno requested review from a team as code owners January 8, 2025 05:29
coderabbitai bot (Contributor) commented Jan 8, 2025

📝 Walkthrough

Walkthrough

This pull request introduces significant modifications to the performance testing infrastructure, focusing on simplifying token management and workflow configurations. The changes span multiple files across GitHub workflows, test scripts, and utility modules. Key modifications include removing token-related parameters from workflows, updating token retrieval mechanisms, and introducing new functions for creating transmissions and managing performance tests. The overall structure of the workflows remains intact, with an emphasis on streamlining the performance testing process and standardizing token generation and usage.

Changes

File / change summary:

  • .github/workflows/ci-cd-yt01.yml: removed K6_CLOUD_TOKEN and K6_CLOUD_PROJECT_ID secrets and the tokens and numberOfTokens parameters from the run-performance-tests job
  • .github/workflows/dispatch-k6-performance.yml: removed the tokens input parameter, added a new test suite path option
  • .github/workflows/workflow-run-k6-performance.yml: removed the tokens, numberOfTokens, and ttl inputs and the K6_CLOUD_TOKEN and K6_CLOUD_PROJECT_ID secrets
  • tests/k6/common/token.js: made the fetchToken function exported
  • tests/k6/tests/enduser/performance/enduser-search.js: replaced endUsersWithTokens with endUsers
  • tests/k6/tests/graphql/performance/graphql-search.js: replaced endUsersWithTokens with endUsers
  • tests/k6/tests/performancetest_common/getTokens.js: added getEnterpriseToken and getPersonalToken functions for token retrieval
  • tests/k6/tests/performancetest_common/createDialog.js: updated authorization token handling and added createTransmissions and createTransmission functions
  • tests/k6/tests/performancetest_common/readTestdata.js: updated the service owners CSV path and removed the endUsersWithTokens functionality
  • tests/k6/tests/performancetest_data/01-create-dialog.js: modified to return a cleaned payload
  • tests/k6/tests/performancetest_data/create-transmission.js: added a new module for creating transmission objects
  • tests/k6/tests/scripts/run-test-in-k8s.sh: new script for running k6 tests in Kubernetes
  • tests/k6/tests/serviceowner/performance/create-transmissions.js: new performance test script for creating transmissions
  • README.md: updated link formatting in the Testing section
  • tests/k6/README.md: added a new "Performance tests" section with links to related README files
  • tests/k6/tests/enduser/performance/README.md: simplified the test description and updated run instructions
  • tests/k6/tests/graphql/performance/README.md: updated run instructions and setup process
  • tests/k6/tests/serviceowner/performance/README.md: added the new test file to the associated tests list and updated run instructions
  • tests/k6/tests/scripts/generate_tokens.sh: deleted the script for generating tokens

Possibly related PRs

  • feat(apps): export logs to open telemetry endpoint #1617: The changes in this PR involve modifications to logging and telemetry configurations, including the removal of certain logging methods and the introduction of OpenTelemetry, which aligns with the main PR's focus on modifying CI/CD workflows related to performance tests and token management.
  • chore: Add localdev to performance tests #1682: This PR adds support for a new API environment labeled "localdev" in performance tests, which may relate to the overall testing framework and environment configurations mentioned in the main PR, although it does not directly modify the same files or parameters.

coderabbitai bot (Contributor) left a comment
Actionable comments posted: 12

🧹 Nitpick comments (7)
tests/k6/tests/performancetest_common/createDialog.js (1)

24-26: Consider caching the enterprise token to enhance performance

The function getEnterpriseToken(serviceOwner) is called each time a dialog is created, which may lead to multiple network requests and increased latency if the tokens are valid for multiple requests. Consider caching the token within the serviceOwner object or implementing a token caching mechanism to reuse the token across multiple requests, improving performance and reducing overhead.
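The suggested caching could look roughly like the sketch below. It is illustrative only: the function name getCachedEnterpriseToken, the orgno cache key, and the 60-second refresh margin are assumptions, not part of the PR, and in k6 each virtual user holds its own copy of module-level state, so the cache is per-VU.

```javascript
// Hypothetical per-VU token cache: reuse a token for a service owner
// until it nears expiry, instead of fetching on every dialog creation.
const tokenCache = new Map();

function getCachedEnterpriseToken(serviceOwner, fetchFn, ttlSeconds = 3600, now = Date.now) {
    const key = serviceOwner.orgno || String(serviceOwner);
    const cached = tokenCache.get(key);
    if (cached && cached.expiresAt > now()) {
        return cached.token; // still valid, skip the network round trip
    }
    const token = fetchFn(serviceOwner);
    // Expire the cache entry 60s before the real TTL to avoid using a stale token.
    tokenCache.set(key, { token, expiresAt: now() + (ttlSeconds - 60) * 1000 });
    return token;
}
```

Here fetchFn stands in for whatever getEnterpriseToken does today; wiring it in that way keeps the cache testable without network access.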

tests/k6/tests/performancetest_common/readTestdata.js (1)

30-30: LGTM! Consider error handling enhancement.

The environment-specific path for service owners aligns with the configuration strategy. The removal of endUsersWithTokens supports the new token generation approach.

Consider adding a check for file existence before parsing:

 function readCsv(filename) {
   try {
+    if (!open(filename)) {
+      throw new Error(`File not found: ${filename}`);
+    }
     return papaparse.parse(open(filename), { header: true, skipEmptyLines: true }).data;
   } catch (error) {
     console.log(`Error reading CSV file: ${error}`);
     return [];
   } 
 }
tests/k6/tests/serviceowner/performance/create-transmissions.js (2)

32-35: Consider load distribution strategy.

In multi-user mode, using the same service owner for all end users might not represent real-world usage patterns.

Consider rotating service owners:

-    let serviceOwner = randomItem(serviceOwners);
     for (let i = 0; i < endUsers.length; i++) {
+        let serviceOwner = serviceOwners[i % serviceOwners.length];
         createTransmissions(serviceOwner, endUsers[i], traceCalls, numberOfTransmissions, maxTransmissionsInThread);
     }   

1-4: Add test coverage expectations.

The documentation should specify the expected performance metrics and test coverage goals.

Add performance expectations to the documentation:

 /**
  * Performance test for creating dialogs with transmissions
  * Run: k6 run tests/k6/tests/serviceowner/performance/create-transmissions.js --vus 1 --iterations 1 -e numberOfTransmissions=100
+ * 
+ * Expected metrics:
+ * - p95 response time < 1000ms
+ * - Error rate < 1%
+ * - Throughput >= 10 transmissions/second
  */
tests/k6/tests/performancetest_data/create-transmission.js (1)

61-63: Simplify conditional logic

The conditional assignment of relatedTransmissionId can be simplified.

-    if (relatedTransmissionId != 0) {
-        transmission.relatedTransmissionId = relatedTransmissionId;
-    }
+    relatedTransmissionId && (transmission.relatedTransmissionId = relatedTransmissionId);
tests/k6/tests/scripts/run-test-in-k8s.sh (2)

76-95: Make Prometheus configuration more flexible.

The Prometheus server URL is hardcoded and might need to be different across environments.

Consider making it configurable:

+PROMETHEUS_URL="${PROMETHEUS_URL:-http://kube-prometheus-stack-prometheus.monitoring:9090/api/v1/write}"
+
 cat <<EOF > config.yml
 apiVersion: k6.io/v1alpha1
 kind: TestRun
 metadata:
   name: $name
 spec:
   arguments: --out experimental-prometheus-rw --vus=$vus --duration=$duration
   parallelism: $parallelism
   script:
     configMap:
       name: $configmapname
       file: archive.tar
   runner:
     env:
       - name: K6_PROMETHEUS_RW_SERVER_URL
-        value: "http://kube-prometheus-stack-prometheus.monitoring:9090/api/v1/write"
+        value: "$PROMETHEUS_URL"
     metadata:
       labels:
         k6-test: $name
EOF

99-101: Improve wait timeout calculation.

The current timeout calculation (${duration}100s) is arbitrary and might not be suitable for all test durations.

Consider a more flexible approach:

-wait_timeout="${duration}100s"
+# Add 5 minutes to test duration for setup/teardown
+wait_timeout="$((duration + 300))s"
 kubectl --context k6tests-cluster wait --for=jsonpath='{.status.stage}'=finished testrun/$name --timeout=$wait_timeout
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6603948 and 96adc8d.

📒 Files selected for processing (14)
  • .github/workflows/ci-cd-yt01.yml (0 hunks)
  • .github/workflows/dispatch-k6-performance.yml (1 hunks)
  • .github/workflows/workflow-run-k6-performance.yml (0 hunks)
  • tests/k6/common/token.js (1 hunks)
  • tests/k6/tests/enduser/performance/enduser-search.js (2 hunks)
  • tests/k6/tests/graphql/performance/graphql-search.js (1 hunks)
  • tests/k6/tests/performancetest_common/createDialog.js (3 hunks)
  • tests/k6/tests/performancetest_common/getTokens.js (1 hunks)
  • tests/k6/tests/performancetest_common/readTestdata.js (1 hunks)
  • tests/k6/tests/performancetest_common/simpleSearch.js (4 hunks)
  • tests/k6/tests/performancetest_data/01-create-dialog.js (1 hunks)
  • tests/k6/tests/performancetest_data/create-transmission.js (1 hunks)
  • tests/k6/tests/scripts/run-test-in-k8s.sh (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-transmissions.js (1 hunks)
💤 Files with no reviewable changes (2)
  • .github/workflows/ci-cd-yt01.yml
  • .github/workflows/workflow-run-k6-performance.yml
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Dry run deploy infrastructure / Deploy to test
🔇 Additional comments (7)
tests/k6/tests/performancetest_common/createDialog.js (1)

112-114: 🛠️ Refactor suggestion

Verify the logic for resetting relatedTransmissionId

The condition if (i % maxTransmissionsInThread === 0) will evaluate to true when i is 0, causing relatedTransmissionId to be reset to 0 before creating the first transmission. This may not be the intended behavior if you want to reset relatedTransmissionId after every maxTransmissionsInThread transmissions. Consider adjusting the condition to avoid resetting on the first iteration.
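The off-by-one described here can be seen in a small standalone sketch (plain JavaScript, independent of the PR's code): with a thread size of 3 over 6 iterations, `i % max === 0` fires before iterations 0 and 3, while `(i + 1) % max === 0` fires after iterations 2 and 5.

```javascript
// Collect the iteration indices at which a reset condition fires.
function resetIndices(n, max, condition) {
    const resets = [];
    for (let i = 0; i < n; i++) {
        if (condition(i, max)) resets.push(i);
    }
    return resets;
}

// Current behavior: resets before the first transmission of each batch.
const resetsBefore = resetIndices(6, 3, (i, max) => i % max === 0);       // [0, 3]
// Suggested behavior: resets after the last transmission of each batch.
const resetsAfter = resetIndices(6, 3, (i, max) => (i + 1) % max === 0);  // [2, 5]
```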

tests/k6/tests/enduser/performance/enduser-search.js (1)

3-3: Ensure enduserSearch handles token retrieval for endUsers

The import has been changed from endUsersWithTokens to endUsers, indicating that tokens are no longer pre-fetched. Verify that the enduserSearch function internally handles token retrieval for each end user to ensure authentication works correctly during performance testing.

tests/k6/tests/performancetest_data/01-create-dialog.js (1)

57-58: LGTM! Good data validation practice.

Adding the cleanUp function call ensures consistent payload structure and proper validation before transmission.

tests/k6/tests/performancetest_data/create-transmission.js (1)

54-54: Avoid hardcoding environment-specific URLs

The URL contains a hardcoded environment reference 'tt02' which could cause issues when testing in different environments.

Consider making the URL configurable through environment variables:

-                        "url": "https://digdir.apps.tt02.altinn.no/some-other-url",
+                        "url": `https://digdir.apps.${__ENV.ENVIRONMENT}.altinn.no/some-other-url`,
tests/k6/common/token.js (1)

34-34: LGTM! Good token caching implementation

The export of the fetchToken function is a good change, and the implementation includes proper caching, error handling, and is thread-safe for k6 virtual users.

tests/k6/tests/performancetest_common/simpleSearch.js (1)

48-48: LGTM! Consistent token retrieval implementation

The changes consistently implement the new token retrieval mechanism across all search functions, improving token management.

Also applies to: 135-135, 163-163

.github/workflows/dispatch-k6-performance.yml (1)

45-45: LGTM! Good workflow simplification

The addition of the transmission test suite and removal of token configuration aligns well with the PR objectives of enhancing performance testing and simplifying token management.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 7

🧹 Nitpick comments (5)
tests/k6/tests/performancetest_data/create-transmission.js (3)

2-4: Add parameter validation and documentation.

The function would benefit from:

  1. Parameter type validation for relatedTransmissionId
  2. JSDoc documentation describing the function's purpose, parameters, and return value
 import { uuidv7 } from "../../common/uuid.js";
 
+/**
+ * Creates a transmission object for performance testing
+ * @param {string|number} relatedTransmissionId - The ID of a related transmission (0 if none)
+ * @returns {Object} The constructed transmission object
+ */
 export default function (relatedTransmissionId) {
+    if (relatedTransmissionId !== 0 && typeof relatedTransmissionId !== 'string') {
+        throw new Error('relatedTransmissionId must be 0 or a valid string UUID');
+    }

14-39: Improve test data management and validation.

Consider the following improvements:

  1. Move the test data to a separate configuration file for better maintainability
  2. Add validation for required language codes

Example validation:

 "content": {
+    const REQUIRED_LANGUAGES = ['nb', 'en'];
+    const validateLanguages = (values) => {
+        const codes = values.map(v => v.languageCode);
+        return REQUIRED_LANGUAGES.every(lang => codes.includes(lang));
+    };
+    if (!validateLanguages(title.value) || !validateLanguages(summary.value)) {
+        throw new Error('Missing required language translations');
+    }

61-63: Improve related transmission condition.

The comparison with zero suggests numeric input, but transmission IDs are typically UUIDs (strings). Consider making the condition more explicit:

-    if (relatedTransmissionId != 0) {
+    if (relatedTransmissionId && relatedTransmissionId !== '0') {
         transmission.relatedTransmissionId = relatedTransmissionId;
     }
tests/k6/tests/performancetest_common/getTokens.js (2)

3-4: Add documentation for environment variables.

Consider adding JSDoc comments to document the purpose and expected values of API_ENVIRONMENT and TTL environment variables.

+/**
+ * @constant {string} tokenGeneratorEnv - The API environment to use (e.g., 'yt01'). Defaults to 'yt01'.
+ */
 const tokenGeneratorEnv = __ENV.API_ENVIRONMENT || 'yt01';
+/**
+ * @constant {number} tokenTtl - Token time-to-live in seconds. Defaults to 3600 (1 hour).
+ */
 const tokenTtl = __ENV.TTL || 3600;

12-12: Consider architectural improvements for token generation.

The current implementation has a few architectural considerations:

  1. Hardcoded Azure URLs make environment changes difficult.
  2. External dependency could impact test reliability.

Consider:

  1. Moving the base URL to an environment variable.
  2. Adding retry logic or circuit breakers for resilience.
  3. Implementing local token generation for offline testing.

Also applies to: 21-21
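A retry wrapper along the lines suggested might look like this sketch. The name fetchWithRetry, the attempt count, and the backoff values are illustrative assumptions; in a k6 script the delay would typically use k6's sleep(), but it is injectable here so the sketch stays self-contained.

```javascript
// Hypothetical retry wrapper with exponential backoff for token fetches.
// fetchFn is any function that throws on transient failure.
function fetchWithRetry(fetchFn, maxAttempts = 3, baseDelayMs = 200, sleep = () => {}) {
    let lastError;
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return fetchFn();
        } catch (error) {
            lastError = error;
            // Backoff doubles each attempt: 200ms, 400ms, 800ms, ...
            sleep(baseDelayMs * 2 ** (attempt - 1));
        }
    }
    throw lastError; // surface the final failure to the caller
}
```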

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 96adc8d and 776b76b.

📒 Files selected for processing (6)
  • tests/k6/tests/graphql/performance/graphql-search.js (2 hunks)
  • tests/k6/tests/performancetest_common/createDialog.js (3 hunks)
  • tests/k6/tests/performancetest_common/getTokens.js (1 hunks)
  • tests/k6/tests/performancetest_data/create-transmission.js (1 hunks)
  • tests/k6/tests/scripts/run-test-in-k8s.sh (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-transmissions.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/k6/tests/serviceowner/performance/create-transmissions.js
  • tests/k6/tests/graphql/performance/graphql-search.js
  • tests/k6/tests/scripts/run-test-in-k8s.sh
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Dry run deploy infrastructure / Deploy to test
  • GitHub Check: build / build-and-test
🔇 Additional comments (3)
tests/k6/tests/performancetest_data/create-transmission.js (1)

8-10: Verify placeholder values in transmission properties.

The values "element1" for authorizationAttribute and "string" for extendedType appear to be placeholders. Please verify if these should be replaced with actual production values or if they are intentionally set for testing purposes.

tests/k6/tests/performancetest_common/createDialog.js (2)

9-9: LGTM! Import statement properly added for transmission data.

The new import aligns well with the added transmission functionality.


24-24: LGTM! Token retrieval standardized using getEnterpriseToken.

The change improves token management by using a centralized token generation mechanism.

Also applies to: 51-51

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (3)
tests/k6/tests/scripts/run-test-in-k8s.sh (3)

40-40: Add error handling for log retrieval.

The log retrieval command should handle potential failures.

Apply this diff:

-            kubectl --context "$K8S_CONTEXT" -n "$K8S_NAMESPACE" logs --tail=-1 $pod
+            if ! timeout "${LOG_TIMEOUT}s" kubectl --context "$K8S_CONTEXT" -n "$K8S_NAMESPACE" logs --tail=-1 $pod; then
+                echo "Warning: Failed to retrieve logs for $pod"
+            fi

97-97: Ensure test ID follows k8s naming conventions.

The current test ID format might not comply with k8s naming conventions. k8s names should consist of lowercase letters, numbers, and hyphens.

Apply this diff:

-testid="${name}_$(date '+%Y%m%dT%H%M%S')"
+testid="${name}-$(date '+%Y%m%d-%H%M%S' | tr '[:upper:]' '[:lower:]')"

126-129: Make Prometheus configuration more flexible.

The Prometheus server URL and trend stats are hardcoded. Consider making these configurable through environment variables.

Apply this diff:

       - name: K6_PROMETHEUS_RW_SERVER_URL
-        value: "http://kube-prometheus-stack-prometheus.monitoring:9090/api/v1/write"
+        value: "${PROMETHEUS_URL:-http://kube-prometheus-stack-prometheus.monitoring:9090/api/v1/write}"
       - name: K6_PROMETHEUS_RW_TREND_STATS
-        value: "avg,min,med,max,p(95),p(99),p(99.5),p(99.9),count"
+        value: "${TREND_STATS:-avg,min,med,max,p(95),p(99),p(99.5),p(99.9),count}"
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 776b76b and a485da2.

📒 Files selected for processing (3)
  • tests/k6/tests/performancetest_common/createDialog.js (3 hunks)
  • tests/k6/tests/scripts/run-test-in-k8s.sh (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-transmissions.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/k6/tests/serviceowner/performance/create-transmissions.js
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Dry run deploy infrastructure / Deploy to test
  • GitHub Check: build / build-and-test
🔇 Additional comments (6)
tests/k6/tests/performancetest_common/createDialog.js (6)

9-10: LGTM! New imports align with added functionality.

The new imports for transmission-related functionality and token generation are appropriately added.


24-24: LGTM! Token generation standardized.

Token generation has been appropriately standardized using getEnterpriseToken across functions.

Also applies to: 51-51


112-114: Fix thread reset logic.

The thread reset condition is incorrect. It resets at the start instead of at the end of maxTransmissionsInThread.

-        if (i%maxTransmissionsInThread === 0) {
+        // Reset for the next transmission if we've reached max thread size
+        if ((i + 1) % maxTransmissionsInThread === 0) {
             relatedTransmissionId = 0;
         }

85-98: Add parameter validation.

The function lacks validation for input parameters.

 export function createTransmissions(serviceOwner, endUser, traceCalls, numberOfTransmissions, maxTransmissionsInThread, testid) {
+    if (!numberOfTransmissions || numberOfTransmissions <= 0) {
+        throw new Error('numberOfTransmissions must be a positive number');
+    }
+    if (!maxTransmissionsInThread || maxTransmissionsInThread <= 0) {
+        throw new Error('maxTransmissionsInThread must be a positive number');
+    }
+    if (!serviceOwner || !endUser) {
+        throw new Error('serviceOwner and endUser are required');
+    }
     let traceparent = uuidv4();

101-105: Add error handling for dialog creation.

Dialog creation should include proper error handling.

     describe('create dialog', () => {
         let r = postSO('dialogs', dialogToInsert(endUser.ssn, endUser.resource), paramsWithToken);
-        dialogId = r.json();
-        expect(r.status, 'response status').to.equal(201);
+        try {
+            expect(r.status, 'response status').to.equal(201);
+            dialogId = r.json();
+        } catch (error) {
+            console.error(`Failed to create dialog: ${error.message}`);
+            throw error;
+        }
     });

131-135: Add error handling for failed requests.

The transmission creation lacks proper error handling.

     describe('create transmission', () => {
         let r = postSO('dialogs/' + dialogId + '/transmissions', transmissionToInsert(relatedTransmissionId), paramsWithToken);
-        expect(r.status, 'response status').to.equal(201);
-        newRelatedTransmissionId = r.json();
+        try {
+            expect(r.status, 'response status').to.equal(201);
+            newRelatedTransmissionId = r.json();
+        } catch (error) {
+            console.error(`Failed to create transmission: ${error.message}`);
+            throw error;
+        }
     });

tests/k6/tests/scripts/run-test-in-k8s.sh Show resolved Hide resolved
tests/k6/tests/scripts/run-test-in-k8s.sh Show resolved Hide resolved
sonarqubecloud bot commented Jan 9, 2025

elsand (Member) left a comment

  • It looks like pre-generating tokens has been abandoned; should generate_tokens.sh be removed? Or is it still relevant for running locally?
  • Are ./tests/{serviceowner,enduser,graphql}/performance/README.md updated?
  • Would it make sense to look into consolidating what has been added for the performance tests with what already exists for the functional tests? Right now we have
    ./common and ./tests/performancetest_common
    ./scripts and ./tests/scripts
    ./tests/serviceowner/testdata and ./tests/performancetest_data
  • Something about performance testing should be added to the ./README at the root, since that is where people will look first, e.g. references to the other READMEs further down the hierarchy.
  • run-test-in-k8s.sh needs more documentation/examples. As I read it, it is not intended to be run locally (that would require az login, az aks get-credentials ... etc.)?

@dagfinno dagfinno requested a review from a team as a code owner January 14, 2025 07:59
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (7)
tests/k6/tests/enduser/performance/README.md (3)

31-37: Add documentation for token generator credentials.

While the command structure is clear, it would be helpful to add:

  1. A brief explanation of what the token generator credentials are used for
  2. How to obtain these credentials
  3. Any restrictions or requirements for the credentials

56-56: Fix heading level for consistency.

The heading "IMPORTANT:" should be at the same level as its parent section (h4 instead of h5).

-##### IMPORTANT: Ensure this file is added to .gitignore to prevent accidental commits of sensitive information.
+#### IMPORTANT: Ensure this file is added to .gitignore to prevent accidental commits of sensitive information
🧰 Tools
🪛 Markdownlint (0.37.0)

56-56: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time

(MD001, heading-increment)


56-56: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)


66-66: Improve section heading and content clarity.

To avoid repetition and improve readability:

-## Test Results
-Test results can be found in GitHub action run log, grafana and in App Insights.
+## Results
+Results are available in:
+- GitHub Actions run logs
+- Grafana dashboards
+- Application Insights
tests/k6/tests/graphql/performance/README.md (2)

41-42: Fix markdown formatting issues.

The heading hierarchy and code block formatting need to be corrected:

  • Use #### instead of ##### for the heading to maintain proper hierarchy
  • Use fenced code blocks (```) instead of indented code blocks
-    Replace `<username>` and `<passwd>`, same as for generating tokens above. 
-##### IMPORTANT: Ensure this file is added to .gitignore...
+Replace `<username>` and `<passwd>`, same as for generating tokens above. 
+#### IMPORTANT: Ensure this file is added to .gitignore...
🧰 Tools
🪛 Markdownlint (0.37.0)

42-42: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time

(MD001, heading-increment)


42-42: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)


41-41: Expected: fenced; Actual: indented
Code block style

(MD046, code-block-style)


51-52: Remove duplicate heading.

The "Test Results" heading appears twice in the document.

-## Test Results
-Test results can be found in GitHub action run log, grafana and in App Insights.
+Test results can be found in GitHub action run log, grafana and in App Insights.
🧰 Tools
🪛 LanguageTool

[grammar] ~51-~51: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...l/performance/graphql-search.js ``` ## Test Results Test results can be found in GitHub action run log, ...

(PHRASE_REPETITION)

tests/k6/tests/serviceowner/performance/README.md (1)

49-50: Fix markdown formatting issues.

The heading hierarchy and code block formatting need to be corrected:

  • Use #### instead of ##### for the heading to maintain proper hierarchy
  • Use fenced code blocks (```) instead of indented code blocks
-    Replace `<username>` and `<passwd>`, same as for generating tokens above.
-##### IMPORTANT: Ensure this file is added to .gitignore...
+Replace `<username>` and `<passwd>`, same as for generating tokens above.
+#### IMPORTANT: Ensure this file is added to .gitignore...
🧰 Tools
🪛 Markdownlint (0.37.0)

50-50: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)


49-49: Expected: fenced; Actual: indented
Code block style

(MD046, code-block-style)

tests/k6/README.md (1)

30-32: Maintain consistent list style.

Use asterisks for unordered lists to maintain consistency with the rest of the document.

-- [ServiceOwner](tests/serviceowner/performance/README.md)
-- [GraphQL](tests/graphql/performance/README.md)
-- [EndUser](tests/enduser/performance/README.md)
+* [ServiceOwner](tests/serviceowner/performance/README.md)
+* [GraphQL](tests/graphql/performance/README.md)
+* [EndUser](tests/enduser/performance/README.md)
🧰 Tools
🪛 Markdownlint (0.37.0)

30-30: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


31-31: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


32-32: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a485da2 and 632ee73.

📒 Files selected for processing (6)
  • README.md (1 hunks)
  • tests/k6/README.md (1 hunks)
  • tests/k6/tests/enduser/performance/README.md (3 hunks)
  • tests/k6/tests/graphql/performance/README.md (2 hunks)
  • tests/k6/tests/scripts/generate_tokens.sh (0 hunks)
  • tests/k6/tests/serviceowner/performance/README.md (4 hunks)
💤 Files with no reviewable changes (1)
  • tests/k6/tests/scripts/generate_tokens.sh
✅ Files skipped from review due to trivial changes (1)
  • README.md
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Dry run deploy infrastructure / Deploy to test
  • GitHub Check: build / build-and-test
🔇 Additional comments (3)
tests/k6/tests/enduser/performance/README.md (2)

13-13: LGTM! Clear and well-structured endpoint documentation.

The endpoint listing provides a comprehensive overview of the API paths being tested.


55-57: Great addition of security warning!

The explicit warning about not committing credentials is a valuable security reminder.

🧰 Tools
🪛 Markdownlint (0.37.0)

56-56: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time

(MD001, heading-increment)


56-56: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)


55-55: Expected: fenced; Actual: indented
Code block style

(MD046, code-block-style)

tests/k6/tests/serviceowner/performance/README.md (1)

13-13: Document the new test file's purpose.

The newly added create-transmissions.js test file is listed but its purpose and usage are not explained in the documentation.

Could you provide a brief description of what this test file does and when it should be used?
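Several of the findings above (MD004 list style, MD026 trailing punctuation in headings, MD046 code-block style, MD001 heading increments) could be enforced repo-wide with a markdownlint configuration. A minimal sketch, assuming markdownlint-cli is (or would be) wired into CI; the rule keys and options follow markdownlint's documented schema:

```json
{
  "default": true,
  "MD001": true,
  "MD004": { "style": "asterisk" },
  "MD026": { "punctuation": ".,;:!?" },
  "MD046": { "style": "fenced" }
}
```

Saved as `.markdownlint.json` at the repo root, `npx markdownlint "**/*.md"` would then flag the same issues locally before review.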

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (1)
tests/k6/tests/graphql/performance/README.md (1)

Line range hint 1-4: Update introduction to reflect new token generation approach.

The introduction still mentions "pre-generated tokens" which contradicts the PR's shift to on-the-fly token generation.

-This directory holds a performance test with POST for `graphql`. The test file `graphql-search.js` is responsible for executing the performance test. It includes a list of end users (ssn) with pre-generated tokens and performs a POST request to the GraphQL endpoint with the payload `input: { party: ["urn:altinn:person:identifier-no:${identifier}"]}`. This test is designed to measure the performance of the GraphQL search functionality.
+This directory holds a performance test with POST for `graphql`. The test file `graphql-search.js` is responsible for executing the performance test. It includes a list of end users (ssn) and performs a POST request to the GraphQL endpoint with the payload `input: { party: ["urn:altinn:person:identifier-no:${identifier}"]}`, using dynamically generated tokens. This test is designed to measure the performance of the GraphQL search functionality.
🧹 Nitpick comments (6)
tests/k6/tests/graphql/performance/README.md (2)

41-42: Fix markdown formatting issues.

The heading level and indentation need to be adjusted for consistency.

-    Replace `<username>` and `<passwd>`, same as for generating tokens above. 
-##### IMPORTANT: Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.
+Replace `<username>` and `<passwd>`, same as for generating tokens above. 
+
+#### IMPORTANT
+Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.
🧰 Tools
🪛 Markdownlint (0.37.0)

42-42: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time

(MD001, heading-increment)


42-42: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)


41-41: Expected: fenced; Actual: indented
Code block style

(MD046, code-block-style)


51-52: Fix heading duplication in test results section.

The heading appears twice in the markdown.

-## Test Results
-Test results can be found in GitHub action run log, grafana and in App Insights.
+## Test Results
+Results can be found in GitHub action run log, grafana and in App Insights.
🧰 Tools
🪛 LanguageTool

[grammar] ~51-~51: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...l/performance/graphql-search.js ``` ## Test Results Test results can be found in GitHub action run log, ...

(PHRASE_REPETITION)

tests/k6/tests/enduser/performance/README.md (2)

55-56: Fix markdown formatting issues.

The heading level and indentation need to be adjusted for consistency.

-    Replace `<username>` and `<passwd>`, same as for generating tokens above. 
-##### IMPORTANT: Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.
+Replace `<username>` and `<passwd>`, same as for generating tokens above. 
+
+#### IMPORTANT
+Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.
🧰 Tools
🪛 Markdownlint (0.37.0)

56-56: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time

(MD001, heading-increment)


56-56: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)


55-55: Expected: fenced; Actual: indented
Code block style

(MD046, code-block-style)


65-66: Fix heading duplication in test results section.

The heading appears twice in the markdown.

-## Test Results
-Test results can be found in GitHub action run log, grafana and in App Insights. 
+## Test Results
+Results can be found in GitHub action run log, grafana and in App Insights. 
🧰 Tools
🪛 LanguageTool

[grammar] ~65-~65: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...r/performance/enduser-search.js ``` ## Test Results Test results can be found in GitHub action run log, ...

(PHRASE_REPETITION)

tests/k6/tests/serviceowner/performance/README.md (2)

49-50: Fix markdown formatting issues.

The heading level and indentation need to be adjusted for consistency.

-    Replace `<username>` and `<passwd>`, same as for generating tokens above.
-##### IMPORTANT: Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.
+Replace `<username>` and `<passwd>`, same as for generating tokens above.
+
+#### IMPORTANT
+Ensure this file is added to .gitignore to prevent accidental commits of sensitive information. Never commit actual credentials to version control.
🧰 Tools
🪛 Markdownlint (0.37.0)

50-50: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)


49-49: Expected: fenced; Actual: indented
Code block style

(MD046, code-block-style)


Line range hint 88-89: Fix article usage in test results section.

Minor grammar improvement needed.

-The test results can be found in the GitHub Actions run log, grafana and in App Insights.
+Test results can be found in GitHub Actions run log, grafana and in App Insights.
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 632ee73 and 69625ad.

📒 Files selected for processing (5)
  • .github/workflows/ci-cd-yt01.yml (0 hunks)
  • README.md (1 hunks)
  • tests/k6/tests/enduser/performance/README.md (3 hunks)
  • tests/k6/tests/graphql/performance/README.md (2 hunks)
  • tests/k6/tests/serviceowner/performance/README.md (4 hunks)
💤 Files with no reviewable changes (1)
  • .github/workflows/ci-cd-yt01.yml
🚧 Files skipped from review as they are similar to previous changes (1)
  • README.md
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Dry run deploy infrastructure / Deploy to test
  • GitHub Check: build / build-and-test
🔇 Additional comments (1)
tests/k6/tests/serviceowner/performance/README.md (1)

13-13: LGTM: New test file addition.

The addition of create-transmissions.js aligns with the PR objective of facilitating the addition of transmissions to dialogs.

@dagfinno
Collaborator Author

  • It looks like pre-generating tokens has been abandoned; should generate_tokens.sh be removed? Or is it still relevant for running locally?

The script has been removed.
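The shift from pre-generated tokens to on-the-fly generation with caching can be sketched roughly like this (plain JavaScript; `getToken` and the fetcher are hypothetical stand-ins, not the repo's actual helpers):

```javascript
// Token cache sketch: generate a token on first use, reuse it until expiry.
// getToken and fetchFn are illustrative, not the repo's actual helpers.
const tokenCache = new Map();

function getToken(user, fetchFn, now = Date.now()) {
  const cached = tokenCache.get(user);
  if (cached && cached.expiresAt > now) {
    return cached.token; // cache hit: no new token is generated
  }
  const fresh = fetchFn(user); // expected shape: { token, expiresAt }
  tokenCache.set(user, fresh);
  return fresh.token;
}

// Stub fetcher standing in for a real token-generator endpoint.
let calls = 0;
const stubFetch = (user) => ({
  token: `tok-${user}-${++calls}`,
  expiresAt: Date.now() + 60_000,
});

console.log(getToken("alice", stubFetch)); // tok-alice-1
console.log(getToken("alice", stubFetch)); // cached, still tok-alice-1
```

With this shape, each virtual user pays the token-generation cost at most once per expiry window instead of relying on a pre-generation step.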

  • Are ./tests/{serviceowner,enduser,graphql}/performance/README.md up to date?

Documentation has been updated.

  • Would it make sense to look at consolidating what has been added for perf with what already exists for the functional tests? We now have
    ./common and ./tests/performancetest_common
    ./scripts and ./tests/scripts
    ./tests/serviceowner/testdata and ./tests/performancetest_data

I have deliberately tried to avoid changes to existing code, but we should probably consolidate a bit now. Can we take that in a separate PR? I feel this one is a bit too large already.

  • Something about perf should be added to ./README at the root, since that is where people will start looking, e.g. references to the other READMEs further down in the hierarchy.

The root README already references the k6/non-functional tests. Added references to the performance READMEs in the README at the k6 root.

  • run-test-in-k8s.sh needs more documentation/examples. As I read it, it is not intended to be runnable locally (that would require az login, az aks get-credentials, etc.)?

Not sure how open this should be; it is meant to be run from the GitHub workflow. Could we 'hide' it by renaming it to '.run-test-in-k8s.sh'?

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 69625ad and d8d22f7.

📒 Files selected for processing (1)
  • tests/k6/README.md (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Dry run deploy infrastructure / Deploy to test
  • GitHub Check: build / build-and-test
🔇 Additional comments (2)
tests/k6/README.md (2)

26-26: LGTM!

The new section header follows the document's style and hierarchy.


30-32: Fix relative paths and maintain consistent list style.

  1. The relative paths should be updated to be relative to the current file's location:
-- [ServiceOwner](./tests/serviceowner/performance/README.md)
-- [GraphQL](./tests/graphql/performance/README.md)
-- [EndUser](./tests/enduser/performance/README.md)
+* [ServiceOwner](../serviceowner/performance/README.md)
+* [GraphQL](../graphql/performance/README.md)
+* [EndUser](../enduser/performance/README.md)
  1. Use asterisks (*) instead of dashes (-) to maintain consistency with other lists in the document.
🧰 Tools
🪛 Markdownlint (0.37.0)

30-30: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


31-31: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


32-32: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)

@elsand
Member

elsand commented Jan 14, 2025

  • Would it make sense to look at consolidating what has been added for perf with what already exists for the functional tests? We now have
    ./common and ./tests/performancetest_common
    ./scripts and ./tests/scripts
    ./tests/serviceowner/testdata and ./tests/performancetest_data

I have deliberately tried to avoid changes to existing code, but we should probably consolidate a bit now. Can we take that in a separate PR? I feel this one is a bit too large already.

A separate PR sounds like a good idea.

  • run-test-in-k8s.sh needs more documentation/examples. As I read it, it is not intended to be runnable locally (that would require az login, az aks get-credentials, etc.)?

Not sure how open this should be; it is meant to be run from the GitHub workflow. Could we 'hide' it by renaming it to '.run-test-in-k8s.sh'?

Fine to leave it runnable only in a workflow. It should just be made clear that this is the case, and that for local runs you simply run the test locally.
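One way to make the workflow-only nature explicit is to document the invocation in the workflow itself. A hedged sketch of such a job (job name, secret names, placeholders, and the script path are illustrative, not the repo's actual values):

```yaml
# Illustrative job; secrets, resource names and the script path are placeholders.
jobs:
  run-performance-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Azure login
        uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Fetch AKS credentials
        run: az aks get-credentials --resource-group <rg> --name <cluster>
      - name: Run k6 test in the cluster
        run: ./run-test-in-k8s.sh
```

Locally, reproducing this would require `az login` and `az aks get-credentials` by hand, which is why running the k6 script directly remains the simpler local path.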

@dagfinno dagfinno merged commit 8f5ce91 into main Jan 14, 2025
27 checks passed
@dagfinno dagfinno deleted the performance/create-transmissions branch January 14, 2025 13:12