
Development: Fix flaky server tests #8248

Closed
wants to merge 8 commits

Conversation

@julian-christl (Member) commented Mar 24, 2024

Motivation and Context

These two flaky tests are annoying as hell. One more than the other. Succeeds, fails, succeeds, fails, succeeds, fails. Enough motivation for me :)

Description

ExerciseLifecycleServiceTest:
The test was structured as "once the values are set correctly, check whether the futures are resolved." That ordering is odd and apparently unreliable. I inverted it to "once the expected future is resolved, check whether the values are set correctly," which works consistently.
I also removed an unnecessary helper method.
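
For illustration, here is a minimal, self-contained sketch of the inverted pattern. It uses a plain ScheduledExecutorService as a stand-in for the lifecycle service; the class and variable names are hypothetical, not the actual Artemis test code:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FutureFirstAssertionSketch {

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger value = new AtomicInteger();

        // Stand-in for the scheduled lifecycle task that eventually sets the values.
        Runnable task = value::incrementAndGet;
        ScheduledFuture<?> future = scheduler.schedule(task, 50, TimeUnit.MILLISECONDS);

        // Inverted ordering: block until the expected future resolves (bounded by a timeout) ...
        future.get(5, TimeUnit.SECONDS);

        // ... and only then check the values, which are guaranteed to be set by now.
        if (value.get() != 1) {
            throw new AssertionError("expected the scheduled task to have run exactly once");
        }

        scheduler.shutdown();
    }
}
```

Waiting on the future first removes the race: the assertion can no longer run before the task has finished, so the test no longer depends on timing.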

ParticipantScoreIntegrationTest:
Distinct combinations of exerciseId and participantId in the record can produce identical hash codes; because the map is keyed by that hash, colliding entries override each other and there is a chance that not all scores are generated. Switching to the hashCode implementation of the String class changes the collision behavior and improves the situation.
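
To make the collision concrete, here is a hypothetical sketch; the record and method names follow the change summary below, but the code is an illustrative reconstruction, not the actual Artemis implementation:

```java
public class HashCollisionSketch {

    // Shape of the record the PR removed (per the change summary); hypothetical reconstruction.
    record ParticipantScoreId(long exerciseId, long participantId) {}

    // Sketch of the workaround: build the key from a String of both ids, so that
    // String.hashCode() (a different implementation) determines collision behavior.
    static int taskKey(long exerciseId, long participantId) {
        return (exerciseId + "-" + participantId).hashCode();
    }

    public static void main(String[] args) {
        // OpenJDK's generated record hashCode folds component hashes with a
        // multiplier of 31, so distinct id pairs collide easily:
        // (1, 32) -> 1 * 31 + 32 = 63 and (2, 1) -> 2 * 31 + 1 = 63.
        System.out.println(new ParticipantScoreId(1, 32).hashCode()); // 63 on OpenJDK
        System.out.println(new ParticipantScoreId(2, 1).hashCode());  // 63 on OpenJDK

        // The string-derived keys differ for the same pairs (collisions are still
        // possible with String.hashCode, just rarer for realistic id ranges).
        System.out.println(taskKey(1, 32));
        System.out.println(taskKey(2, 1));
    }
}
```

Note that this reduces, rather than eliminates, the chance of collisions, which matches the later observation that the fix was not fully consistent.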

Testserver States

Note

These badges show the state of the test servers.
Green = currently available, Red = currently locked

[test server status badges]
Review Progress

Code Review

  • Code Review 1
  • Code Review 2

Summary by CodeRabbit

  • Refactor
    • Improved the readability and simplicity of assertions in exercise lifecycle tests by using lambda expressions.
    • Replaced ParticipantScoreId record with a private method for task identification during scheduling.

@julian-christl julian-christl self-assigned this Mar 24, 2024
@julian-christl julian-christl requested a review from a team as a code owner March 24, 2024 23:05

coderabbitai bot commented Mar 24, 2024

Walkthrough

The changes across the project involve refining assertion mechanisms in specific test methods to enhance the testing framework. By adopting lambda expressions for assertions, the modifications aim to improve test readability and efficiency, streamlining the testing process without altering core functionality.

Changes

  • .../ParticipantScoreScheduleService.java: Replaced ParticipantScoreId with a private hashCode method for task identification during scheduling.
  • .../ExerciseLifecycleServiceTest.java: Refactored assertions using lambda expressions for improved readability.

Related issues

  • Programming Exercise: Changing the short name while importing breaks tests #7188: The emphasis on refining tests and ensuring their reliability might indirectly support the objectives of ensuring functionality does not break upon modifications, such as changing the short name of an exercise. However, without direct context on handling test.json or exercise import mechanisms, the link is speculative.
  • Exam: Divide the Testing Process into Steps #6621: Although this PR focuses on improving test assertions and not on the exam programming exercise creation process, the general theme of enhancing test reliability and maintainability could indirectly support the objective of ensuring robust testing procedures during exams. However, the connection is indirect as this PR does not address the specific steps outlined for handling student and test code compilations.

The other issues listed do not directly relate to the changes made in this PR, as they focus on integration tests, exercise import issues, and missing build plans, which are not covered by the refactoring of assertions in the ExerciseLifecycleServiceTest.java file.


@github-actions github-actions bot added the tests label Mar 24, 2024
@github-actions github-actions bot added the server label Mar 25, 2024
@julian-christl julian-christl changed the title Development: Fix flaky test Development: Fix flaky server tests Mar 25, 2024
@julian-christl julian-christl marked this pull request as draft March 26, 2024 00:42
@julian-christl (Member, Author) commented:

Not a consistent fix. Need to investigate maybe in the future.

@krusche krusche deleted the tests/fix-flaky-lifecycle-test branch April 28, 2024 09:51
Labels: ready for review, server, tests