
Seeding-generated test doesn't execute properly (AttributeFilter_ESTest) #2

Open
vmassol opened this issue Oct 8, 2019 · 6 comments


vmassol commented Oct 8, 2019

Generated on xwiki-commons-xml using the setup from https://github.com/STAMP-project/botsing/issues/89

Results:

[Screenshot 2019-10-08 at 08:13:25]

pderakhshanfar (Collaborator) commented:

This is a flaky test generated by EvoSuite; it is not related to model seeding. Sometimes EvoSuite generates tests that do not pass 100% of the time when you execute them directly.

I should note that the EvoSuite model-seeding issues are not related to Botsing, so it is better to file these kinds of issues here.

@pderakhshanfar pderakhshanfar transferred this issue from STAMP-project/botsing Oct 8, 2019

vmassol commented Oct 8, 2019

Note that this is not just one "flaky" test: in practice we got 10 "flaky" tests out of 257 generated tests (about 4%). That is a big issue (a blocker) for being able to automatically commit generated tests.

@pderakhshanfar pderakhshanfar transferred this issue from STAMP-project/evosuite-model-seeding-usecases-output Oct 8, 2019
pderakhshanfar (Collaborator) commented:

Again, since EvoSuite executes everything in its own environment, it is possible to encounter these flaky tests when you run them in your environment. The EvoSuite developers have spent a lot of effort minimizing them, but flakiness is still possible. I personally run the generated tests 5 times first; if they pass 100% of the time, I count them as usable tests for line and mutation coverage.
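The "run 5 times, keep only if it always passes" filter described above could be sketched as follows. This is not part of EvoSuite or Botsing; the `TestRun` interface and `isStable` helper are hypothetical stand-ins for invoking the JUnit runner on a generated test class:

```java
public class FlakinessFilter {
    static final int RUNS = 5;

    // Hypothetical stand-in for "execute one generated test once";
    // in practice this would invoke the JUnit runner on the test class.
    interface TestRun {
        boolean run();
    }

    // A test counts as usable only if it passes on every one of the RUNS
    // executions; a single failure marks it as flaky (or broken).
    static boolean isStable(TestRun test) {
        for (int i = 0; i < RUNS; i++) {
            if (!test.run()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        TestRun stable = () -> true;                 // deterministic pass
        int[] calls = {0};
        TestRun flaky = () -> ++calls[0] != 3;       // fails on its 3rd run
        System.out.println(isStable(stable));        // prints: true
        System.out.println(isStable(flaky));         // prints: false
    }
}
```

Five runs is of course a heuristic: it bounds the cost of the check but cannot prove a test is deterministic, only raise confidence.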


vmassol commented Oct 8, 2019

> However, it is still possible. I personally run them 5 times first. If they pass 100% of the time, I will count them as a usable test for line and mutation coverage.

Couldn't you execute the generated tests inside the Botsing/EvoSuite process and remove the flaky ones? This would also ensure that they compile. Since the seeding part runs a dynamic analysis, which I assume executes tests, you already have a way to execute tests. That could be a post-processing step of the tool to remove false positives.

pderakhshanfar (Collaborator) commented:

The final test-generation process (compiling, minimizing, etc.) is in EvoSuite, and changing that part of the code may impact other parts of the pre-process. It is much easier to do it as a post-process of EvoSuite/Botsing.


vmassol commented Oct 8, 2019

> It is much easier to do it as a post-process of EvoSuite/Botsing.

That's exactly what I'm proposing! :)

Since you already have a pre-process step (the seeding part), adding a post-process step is not a fundamental change (it's even more symmetrical! ;)). In the future, you could wrap all of this in a single, simple-to-use process (Maven plugin, etc.).

Just to make it clear, this would allow you to "fix" several issues:

  • flakiness: detect and discard flaky tests
  • optionally capture stdout/stderr and discard tests that produce output when executed (a problem for XWiki)
  • verify that no files are created outside a given directory and discard tests that do create files
  • (add more here)
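As an illustration of one of the checks above, here is a minimal sketch of the stdout/stderr criterion: run the test body with the standard streams redirected to buffers, then discard the test if anything was printed. The class name and helper are hypothetical, not an existing EvoSuite or Botsing API:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class PostProcessChecks {
    // Hypothetical post-process check: execute the test body while capturing
    // stdout/stderr, and report whether it stayed silent. Tests that print
    // anything would be discarded by the post-process.
    static boolean producesNoOutput(Runnable testBody) {
        PrintStream origOut = System.out;
        PrintStream origErr = System.err;
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ByteArrayOutputStream err = new ByteArrayOutputStream();
        System.setOut(new PrintStream(out));
        System.setErr(new PrintStream(err));
        try {
            testBody.run();
        } finally {
            // Always restore the original streams, even if the test throws.
            System.setOut(origOut);
            System.setErr(origErr);
        }
        return out.size() == 0 && err.size() == 0;
    }

    public static void main(String[] args) {
        System.out.println(producesNoOutput(() -> {}));                     // prints: true
        System.out.println(producesNoOutput(() -> System.out.print("x"))); // prints: false
    }
}
```

The file-creation check could follow the same pattern, e.g. by snapshotting a watched directory tree before and after the test and rejecting tests that leave new files behind.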
