
Workflow changes #857

Merged
merged 32 commits into 0xERR0R:development from fb-unittest_fixes on Feb 7, 2023

Conversation

kwitsch
Collaborator

@kwitsch kwitsch commented Jan 31, 2023

The goal of this PR is to combat flakiness & streamline the workflows.

Content:

  • changed the Eventually timings to combat flakiness (see the sketch after this list)
  • merged the Build and e2e workflows (Makefile Workflow)
  • added Go caching in the workflow (Makefile Workflow)
  • skip Docker or Go checks via matrix fields (Makefile Workflow)
  • merged lint into the Makefile Workflow
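As a minimal sketch of the first item, assuming the e2e suite uses Ginkgo/Gomega: the Eventually timings can be tightened suite-wide via the Gomega defaults. The 2ms polling interval is the value discussed below; the 5s timeout is an assumption for illustration, not the PR's actual diff.

```go
// Sketch only: tightening Gomega's Eventually defaults for an e2e suite.
package e2e_test

import (
	"testing"
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestE2E(t *testing.T) {
	// Poll more often (the Gomega default is 10ms) while keeping a generous
	// overall timeout; the timeout value here is an assumption.
	SetDefaultEventuallyPollingInterval(2 * time.Millisecond)
	SetDefaultEventuallyTimeout(5 * time.Second)

	RegisterFailHandler(Fail)
	RunSpecs(t, "e2e suite")
}
```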

@codecov

codecov bot commented Jan 31, 2023

Codecov Report

Base: 93.02% // Head: 93.14% // Increases project coverage by +0.12% 🎉

Coverage data is based on head (e7eac3a) compared to base (6e69d46).
Patch has no changes to coverable lines.

Additional details and impacted files
@@               Coverage Diff               @@
##           development     #857      +/-   ##
===============================================
+ Coverage        93.02%   93.14%   +0.12%     
===============================================
  Files               42       42              
  Lines             4944     4944              
===============================================
+ Hits              4599     4605       +6     
+ Misses             273      268       -5     
+ Partials            72       71       -1     
| Impacted Files | Coverage Δ |
|---|---|
| resolver/resolver.go | 100.00% <ø> (ø) |
| redis/redis.go | 93.23% <0.00%> (+0.96%) ⬆️ |
| resolver/query_logging_resolver.go | 98.54% <0.00%> (+2.91%) ⬆️ |


☔ View full report at Codecov.

@kwitsch
Collaborator Author

kwitsch commented Feb 4, 2023

The flakiness of the e2e unit tests could be reduced by lowering the Eventually polling interval from the default 10ms to 2ms (a per-assertion override is sketched below).

There seems to be a problem with the startup of the test containers that occurred randomly(?).

During my tests I could reproduce the errors more often when the polling interval was higher (100-500ms).
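For reference, a hypothetical example (not taken from the PR) of overriding the timing on a single Eventually assertion instead of changing the suite-wide defaults; `containerStatus` is a made-up stand-in for whatever the real test observes.

```go
// Hypothetical example: per-assertion timing override with Gomega.
package e2e_test

import (
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// containerStatus is a hypothetical stand-in for the real observation
// (a DNS answer, an HTTP probe, a container status string, ...).
func containerStatus() string { return "running" }

var _ = Describe("container startup", func() {
	It("eventually reports a running state", func() {
		Eventually(func() string {
			return containerStatus()
		}).
			WithTimeout(10 * time.Second).
			WithPolling(2 * time.Millisecond). // 2ms instead of the 10ms default
			Should(Equal("running"))
	})
})
```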

Owner

@0xERR0R 0xERR0R left a comment


looks good 👍

@0xERR0R
Copy link
Owner

0xERR0R commented Feb 6, 2023

> The flakiness of the e2e unit tests could be reduced by lowering the Eventually polling interval from the default 10ms to 2ms.
>
> There seems to be a problem with the startup of the test containers that occurred randomly(?).
>
> During my tests I could reproduce the errors more often when the polling interval was higher (100-500ms).

Reducing the polling interval from 10ms to 2ms will increase the load and, to be honest, I don't understand why it reduces the test flakiness 🤷‍♂️. IMHO the polling interval should have no effect; only the overall duration is relevant.

@0xERR0R 0xERR0R added the 🧰 technical debts Technical debts, refactoring label Feb 6, 2023
@0xERR0R 0xERR0R added this to the 0.21 milestone Feb 6, 2023
@kwitsch
Collaborator Author

kwitsch commented Feb 6, 2023

> > The flakiness of the e2e unit tests could be reduced by lowering the Eventually polling interval from the default 10ms to 2ms.
> >
> > There seems to be a problem with the startup of the test containers that occurred randomly(?).
> >
> > During my tests I could reproduce the errors more often when the polling interval was higher (100-500ms).
>
> Reducing the polling interval from 10ms to 2ms will increase the load and, to be honest, I don't understand why it reduces the test flakiness 🤷‍♂️. IMHO the polling interval should have no effect; only the overall duration is relevant.

I know that's how it should behave in theory, but reality showed otherwise. 🫤

Could it be that the test containers keep restarting when they are expected to be stopped?
That would explain why faster polling is more likely to catch the expected state (see the sketch below).
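If that restart theory holds, one way to make it observable would be to assert on the container's reported state over a window of time rather than with a one-shot check. A sketch using Gomega's Consistently, assuming the suite holds a testcontainers-go Container handle; the helper name and timing values are assumptions.

```go
// Sketch only, not part of this PR: Consistently fails as soon as the
// condition is violated, so a silent restart would become visible.
package e2e_test

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	"github.com/testcontainers/testcontainers-go"
)

// assertStaysRunning (hypothetical helper) polls the container state for a
// while instead of checking it once.
func assertStaysRunning(ctx context.Context, c testcontainers.Container) {
	Consistently(func() (bool, error) {
		state, err := c.State(ctx)
		if err != nil {
			return false, err
		}
		// A restarting container briefly reports Running=false / Restarting=true.
		return state.Running && !state.Restarting, nil
	}).
		WithTimeout(5 * time.Second).
		WithPolling(100 * time.Millisecond).
		Should(BeTrue())
}
```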

@kwitsch kwitsch merged commit 101e2c5 into 0xERR0R:development Feb 7, 2023
@kwitsch kwitsch deleted the fb-unittest_fixes branch February 7, 2023 13:00