
[x-pack/metricbeat] ec2_integration_test #20951

Closed
v1v opened this issue Sep 3, 2020 · 6 comments
Labels: flaky-test, Metricbeat, size/M, Team:Platforms

Comments

v1v (Member) commented Sep 3, 2020

Flaky Test

Stack Trace

    ec2_integration_test.go:28: 
        	Error Trace:	ec2_integration_test.go:28
        	Error:      	Should NOT be empty, but was []
        	Test:       	TestFetch

How to reproduce it

  • Go to build
  • Select awsCloudTests
  • Unselect windowsTest
  • Click on Build
v1v added the Metricbeat and flaky-test labels Sep 3, 2020
botelastic bot added the needs_team label Sep 3, 2020
v1v added a commit to v1v/beats that referenced this issue Sep 3, 2020
jsoriano added the Team:Platforms label Sep 3, 2020
elasticmachine (Collaborator) commented:

Pinging @elastic/integrations-platforms (Team:Platforms)

botelastic bot removed the needs_team label Sep 3, 2020
v1v added a commit to v1v/beats that referenced this issue Sep 3, 2020
kaiyan-sheng (Contributor) commented:

@jsoriano Could you point me to where the terraform code is? My suspicion is that we have a sleep/check that waits for the newly created EC2 instance to be running, but we don't check whether that EC2 instance is actually pushing metrics into CloudWatch before we start Metricbeat. Thank you!

jsoriano (Member) commented Sep 3, 2020

@kaiyan-sheng the scenario started in CI is the one in ./x-pack/metricbeat/module/aws/terraform.tf. I think the only instance it starts is the one for the database.

kaiyan-sheng (Contributor) commented:

@jsoriano Thank you! Do we have a sleep timer or a check somewhere else to make sure the instance is up and running and reporting metrics to CloudWatch?

jsoriano (Member) commented Sep 4, 2020

@kaiyan-sheng I think the terraform scenario waits until the instance is up and running (the db instance takes some time to complete), but I don't think it explicitly waits for metrics to be available in CloudWatch. If we need to wait for that, maybe we should do it in the test, with some timeout.

Another thing that could be happening is that the EC2 instances created for db instances don't report the same CloudWatch metrics (or no EC2 metrics at all), and the test was passing before because some other EC2 instance existed in this account. In that case we would need to add an EC2 instance to the scenario.

kaiyan-sheng (Contributor) commented:

@jsoriano Good point! I think adding a check for CloudWatch metrics will be needed. I'll assign this issue to myself and put it on my to-do list. Thank you!
