Rds enhanced monitoring bug fix #747
Conversation
recheck
recheck
recheck
recheck
Force-pushed from e056ea8 to b1e419d
recheck
recheck
recheck
recheck
@jillr @gundalow the recheck doesn't seem to be triggering Zuul for some reason now! I'm a little unsure what to try next; this was a clean branch with cherry-picked commits, as I messed up a rebase on the old branch. I also tried creating a brand new commit on a new branch and it doesn't seem to want to trigger either :(
@marknet15 The labels do not affect Zuul and I don't see anything that looks wrong here. Unfortunately all of our Zuul experts are in Canada and on a public holiday today; tomorrow we should be able to figure out more.
recheck
recheck
recheck
recheck
recheck
Force-pushed from e55d3a9 to 861463e
@alinabuzachis @jillr @tremble this is finally ready for a review :) The tests now run and complete within the 1 hr session.
Force-pushed from 861463e to 7e2cd42
test update
correct test var
Enable tests
Update tests/integration/targets/rds_instance/aliases (Co-authored-by: Mark Chappell <mchappel@redhat.com>)
try leaving out the iam deletion for enhanced monitoring
move IAM creation to earlier on
The tests are exceeding a 1hr session limit
- Condense tests where possible
- Remove irrelevant snapshot tests
- Up concurrency to 6
PR feedback
disable wait on final delete
further adjust
update changelog
speed up tests
speed up tests
test failure fixes
fix final broken test
split tagging out
test adjustments
final test failure
hitting max length
modify tests
adjust
version bump
fix
Force-pushed from 7e2cd42 to e839bc1
recheck
recheck
recheck
The bugfix itself looks sound. The test changes look ok; even if we've dropped some of the tests and I've missed it, this is better than running no tests.
LGTM!
Rds enhanced monitoring bug fix

SUMMARY
(a copy of ansible-collections#712 as I messed up my branch by accident)
This is a fix for an issue that occurs when an RDS instance already exists and you wish to enable enhanced monitoring; for the full details see the linked old reported issue:
ansible/ansible#51772
In summary, if you currently enable enhanced monitoring on an RDS instance that already exists and doesn't yet have it enabled, the following is returned:

    An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'MonitoringRoleArn'
    fatal: [localhost_eu-west-1-pdv-qa-1 -> 127.0.0.1]: FAILED! => changed=false
      module_stderr: |-
        Traceback (most recent call last):
          File "master:/opt/mitogen/mitogen-0.2.9/ansible_mitogen/runner.py", line 975, in _run
            self._run_code(code, mod)
          File "master:/opt/mitogen/mitogen-0.2.9/ansible_mitogen/runner.py", line 939, in _run_code
            exec(code, vars(mod))
          File "master:/tmp/build/4bef5c86/framework/library/cloud/aws/rds_instance.py", line 1245, in <module>
          File "master:/tmp/build/4bef5c86/framework/library/cloud/aws/rds_instance.py", line 1210, in main
          File "master:/tmp/build/4bef5c86/framework/library/cloud/aws/rds_instance.py", line 855, in get_parameters
          File "master:/tmp/build/4bef5c86/framework/library/cloud/aws/rds_instance.py", line 885, in get_options_with_changing_values
          File "master:/tmp/build/4bef5c86/framework/library/cloud/aws/rds_instance.py", line 983, in get_changing_options_with_consistent_keys
        KeyError: 'MonitoringRoleArn'
      module_stdout: ''
      msg: |-
        MODULE FAILURE
        See stdout/stderr for the exact error

Originally-Depends-On: mattclay/aws-terminator#164

Other changes
A load of issues have surfaced in the integration tests due to how slow RDS is to create / modify, etc. I've condensed down the tests where possible, reducing the number of inventory jobs to 6 and bumping serial to 6, so that hopefully all tests can run at once and finish within the 1 hr AWS session duration.

ISSUE TYPE
Bugfix Pull Request

COMPONENT NAME
rds_instance

Reviewed-by: Mark Chappell <None>
Reviewed-by: Mark Woolley <mw@marknet15.com>
Reviewed-by: None <None>

This commit was initially merged in https://github.com/ansible-collections/community.aws
See: ansible-collections@8fe00cb
SUMMARY
(a copy of #712 as I messed up my branch by accident)
This is a fix for an issue that occurs when an RDS instance already exists and you wish to enable enhanced monitoring; for the full details see the linked old reported issue:
ansible/ansible#51772
In summary, if you currently enable enhanced monitoring on an RDS instance that already exists and doesn't yet have it enabled, the following is returned:

    An exception occurred during task execution. To see the full traceback, use -vvv.
    The error was: KeyError: 'MonitoringRoleArn'
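For context, the task below is a hedged sketch of the sort of change that hits this failure: an instance that was created without enhanced monitoring is later updated to enable it. The instance identifier, role ARN, and region are placeholders rather than values from this PR, and the parameter names should be checked against the rds_instance documentation for the version in use.

```yaml
# Hypothetical second run against an instance that already exists without
# enhanced monitoring; identifiers and the role ARN are placeholders.
- name: Enable enhanced monitoring on an existing RDS instance
  community.aws.rds_instance:
    db_instance_identifier: example-db
    state: present
    monitoring_interval: 60      # collect Enhanced Monitoring metrics every 60 seconds
    monitoring_role_arn: arn:aws:iam::123456789012:role/example-rds-monitoring-role
    region: us-east-1
```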
Originally-Depends-On: mattclay/aws-terminator#164
Other changes
A load of issues have surfaced in the integration tests due to how slow RDS is to create / modify, etc. I've condensed down the tests where possible, reducing the number of inventory jobs to 6 and bumping serial to 6, so that hopefully all tests can run at once and finish within the 1 hr AWS session duration.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
rds_instance
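As a rough illustration of the test-runtime change described under "Other changes", an integration test entry play can use serial to cap how many inventory "hosts" (test scenarios) run in one batch. This is a hedged sketch under an assumed file layout and role name, not the exact structure of this PR's test target.

```yaml
# Sketch of a parallelised test entry play: each inventory host represents one
# test scenario, serial caps the batch size at 6, and strategy: free stops
# fast scenarios from waiting on slow RDS create/modify operations.
# The hosts pattern and role name are illustrative only.
- hosts: all
  gather_facts: false
  strategy: free
  serial: 6
  roles:
    - rds_instance
```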