
Doctest regression suite for gdscript, loading scripts and corresponding par… #41616

Closed
wants to merge 1 commit

Conversation

@strank (Contributor) commented Aug 30, 2020

…se trees from a directory.

This relates to godotengine/godot-proposals#1319 but might be interesting in itself, for @vnen I assume.
Should be easy to add more test cases, or integrate the ones already in place somewhere outside the repo.

Use ./bin/godot --test -tc="*GDScript*" to run only this test. For some reason the long-form option --test-case=... does not work.

@Xrayez (Contributor) commented Aug 30, 2020

See #41074 and #41355. 😮

Basically, @vnen already created a GDScript test runner; what I've done is adapt his existing system to doctest.

I stopped working on #41074 while waiting for #40980 to be merged, but now it seems to be functional.

Also, @vnen doesn't seem to be enthusiastic about adding tests to the main repo, so I'm not sure. I personally think including tests in the main repo makes sense, especially since tests can be implemented per-module (#40720).

Perhaps it's worth creating a proposal at GIP? Created: godotengine/godot-proposals#1429, feel free to elaborate.

> For some reason the long-form option --test-case=... does not work.

That's strange, make sure to include double quotes.

@strank (Author) commented Aug 30, 2020

Cheers @Xrayez! I figured there must be something already - writing a full tokenizer/parser/compiler without a test suite would have been... impressive ;)
Thanks for creating that proposal, I'll comment there.

@strank strank force-pushed the not-in-conditional branch 4 times, most recently from d864647 to 0ad25f1 Compare September 3, 2020 08:19
@strank strank changed the title Start of doctests for gdscript, loading scripts and corresponding par… Doctest regression suite for gdscript, loading scripts and corresponding par… Sep 3, 2020
@akien-mga (Member) commented:

The tests should likely be moved to modules/gdscript/tests/ following #41355.

> Also, @vnen doesn't seem to be enthusiastic about adding tests to the main repo, so I'm not sure. I personally think including tests in the main repo makes sense, especially since tests can be implemented per-module (#40720).

I also wonder about this. If we want to have a full test suite for all GDScript features, each with a .gd file plus more files showing what the expected behavior/output should be, it can quickly become a lot of files. It might make sense to have those in a separate repo that could be added here as a Git submodule.

…enized files from a directory.

Automatically tests all .gd files; intended as the start of a regression suite.
Could be improved by:
* avoiding code duplication with the gdscript test command (see note in test_gdscript_doctest.h),
* showing a diff on failure, rather than the default doctest output of printing both sides of the == below each other. The easiest way would be to generate the diff somehow and then compare it against an empty output.
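The "generate the diff and compare against empty output" idea can be sketched as follows (in Python purely as a concept illustration; the actual suite is C++/doctest and would need an equivalent diff routine there):

```python
import difflib

def diff_report(expected: str, actual: str) -> str:
    """Return a unified diff of expected vs. actual output (empty string if equal)."""
    return "".join(difflib.unified_diff(
        expected.splitlines(keepends=True),
        actual.splitlines(keepends=True),
        fromfile="expected", tofile="actual",
    ))

# Asserting `diff_report(...) == ""` means a failing test prints only the
# diff itself, instead of dumping both complete outputs below each other.
print(diff_report("a\nb\nc\n", "a\nB\nc\n"), end="")
```

When both strings match, difflib.unified_diff yields nothing, so the report is the empty string and the comparison passes.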
@vnen (Member) commented Sep 3, 2020

The tokenizer/parser output isn't a good basis for regression testing. Fixing a bug might change the output, so there would be many false positives whenever we add a small fix.

The test suite I proposed, which @Xrayez adapted to doctest in #41074, uses more high-level information such as error messages and console output. This way we can test the behavior of the script without caring about how the compiler is implemented.

If we had had this before, my rewrite would have been much simpler to test, while using the parse tree output would have been useless, since the tree structure changed a lot.

The tree printer is only useful for debugging, so we can manually check what went wrong.

@strank (Author) commented Sep 3, 2020

I agree that during a complete rewrite, only the highest-level tests are useful. I still think that minimal test cases for the tokenizer and parser are useful in a normal regression suite - minimal in the sense that they are as small as possible, so unrelated small fixes do not cause too much work updating the tests.
It would be very helpful to have functionality like the Python doctest module, where you can ignore parts of the output with an ... ellipsis. But I don't think the C++ doctest has that.

I'll set this one back to draft (it was just an offshoot of trying for a not-in operator) and see if I can help with #41074.
And I'll comment on the submodule vs. keeping-code-and-tests-together question in proposal #1429.

@strank strank marked this pull request as draft September 3, 2020 18:38
@Xrayez (Contributor) commented Sep 19, 2020

> It would be very helpful to have functionality like the Python doctest module, where you can ignore parts of the output with an ... ellipsis. But I don't think the C++ doctest has that.

If I understand correctly, this could be implemented using doctest decorators for test cases: doctest::may_fail().

@strank (Author) commented Sep 21, 2020

Hmm. I'm not sure doctest::may_fail could help. The Python doctest allows you to check for a certain output but ignore parts of it. It is a bit like replacing an equality comparison in the C++ doctest with an SQL LIKE or a regexp match. The latter would probably be the easiest way to do it, actually.
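The regexp idea can be sketched as follows (in Python as a concept illustration only; the real implementation would live in C++ next to the doctest CHECK, and the node names below are hypothetical):

```python
import re

def output_matches(expected: str, actual: str) -> bool:
    """Compare actual output against expected, treating `...` as a wildcard."""
    # Escape everything literally, then turn each `...` placeholder into `.*`.
    parts = [re.escape(part) for part in expected.split("...")]
    # (?s) lets `.*` match across newlines, mimicking doctest's ELLIPSIS.
    return re.fullmatch("(?s)" + ".*".join(parts), actual) is not None

expected = "ClassNode\n  FunctionNode 'test'\n    ...\n  EndNode"
actual = "ClassNode\n  FunctionNode 'test'\n    ReturnNode (constant 42)\n  EndNode"
print(output_matches(expected, actual))  # True: the `...` absorbs the body
```

Escaping the literal parts first means only the `...` markers are treated as wildcards, so the rest of the expected output still has to match exactly.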

@fire (Member) commented Apr 30, 2021

@vnen Can you give a status update on this?

@vnen (Member) commented Apr 30, 2021

Reiterating my comment: the output of those tests is not standard (and also not fully validated). It is useful for debugging, but fixing actual bugs will likely change the output as well, which also makes it quite possible that the tests miss a regression.

Since #47701 is now merged, we should use that system to test the issues and catch the regressions, since it's much less dependent on implementation details.

@vnen vnen closed this Apr 30, 2021
@Xrayez (Contributor) commented Apr 30, 2021

> Since #47701 is now merged, we should use that system to test the issues and catch the regressions, since it's much less dependent on implementation details.

If so, could you please also review #47958 and #48029 (in order)?
