Added test, pytest configuration, removed unnecessary import #46
Conversation
Please let me know if you have an idea for more tests.
Round 1:
- Most important: move the tests into a separate folder named tests/
- For the rest, take a look at the comments I made.

If you have any questions, just ask. Ideally I want to have a test coverage of 100%.
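For the tests/ folder and the pytest configuration this PR adds, a minimal setup could look like the sketch below. This is an illustration, not the actual file from the PR; the coverage options assume the pytest-cov plugin, which this conversation does not confirm is used.

```ini
# pytest.ini — minimal sketch, assuming pytest-cov is installed
# (an assumption; the coverage target of 100% is mentioned above)
[pytest]
testpaths = tests
addopts = --cov --cov-report=term-missing
```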
Thanks for the review. Also why …
Round 2: I already told you that in the last round.
OK, here is something I thought about for a long time, and it is probably my fault: I should've specified that I wanted unit tests. This looks more like an integration test. (Fixed the description of the issue.)
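The distinction could be sketched like this. The names below (is_checkmate, the position strings) are hypothetical placeholders, not code from this repository; the point is only the shape of a unit test versus an integration test.

```python
# A unit test exercises one function in isolation, constructing the
# input state directly; an integration test would instead drive the
# whole CLI game loop end to end. is_checkmate is a trivial
# placeholder standing in for the real function.

def is_checkmate(board_state):
    # placeholder logic: treat one known position string as mate
    return board_state == "fools-mate"

def test_is_checkmate_unit():
    # no game loop, no I/O: just one function and a prepared position
    assert is_checkmate("fools-mate") is True
    assert is_checkmate("start") is False
```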
Exactly, but shouldn't this module be executable, since the project is a CLI chess game? Anyway, I will fix this tomorrow since it is getting late.
If you had taken a look into …
Can you tell me which functions I should unit test? Because in …
Here is a little definition:
I think there is a way to isolate the checkmate behavior much better from the rest of the program. And if that's not something you want to do right now, there are enough other parts of the code that also need to be tested.

What I'd like to say in conclusion is that it's fine if you want to verify that the program actually analyzes, moves, and recognizes everything (the whole process and how each small part is put together). But then you shouldn't do it just for checkmate. Just checking that the code validates one specific example is not the idea of tests. Tests should cover as many possibilities as possible, which includes edge cases, and currently I don't really see this code covering much (especially not edge cases).

So if you want to push tests that look like the one you pushed, keep going, but then keep in mind that you also have to test input, output, behaviour, etc. And don't forget edge cases (invalid input, etc.). Also, if you only test fixed input, then you never test that the AI actually works.
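Edge-case-oriented, isolated tests could be sketched as below. parse_move and its rules are hypothetical stand-ins, not taken from this repository; what matters is that the invalid-input cases get tested alongside the happy path.

```python
# A sketch of isolated tests that also cover edge cases
# (invalid input), as asked for above. The validator itself
# is a placeholder, not this project's real parsing code.

def parse_move(text):
    """Parse coordinate input like 'e2e4' into (src, dst) squares."""
    text = text.strip().lower()
    if len(text) != 4 or not text.isalnum():
        raise ValueError(f"invalid move: {text!r}")
    return text[:2], text[2:]

def test_parse_move_valid():
    # normal input plus edge cases: surrounding whitespace, mixed case
    assert parse_move("e2e4") == ("e2", "e4")
    assert parse_move("  E2E4 ") == ("e2", "e4")

def test_parse_move_invalid():
    # edge cases: empty, too short, too long, illegal characters
    for raw in ("", "e2", "e2e4e5", "e2-4!"):
        try:
            parse_move(raw)
        except ValueError:
            continue
        raise AssertionError(f"{raw!r} should have been rejected")
```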
Yeah, I added more tests, but it isn't easy writing 'isolated' tests while everything in the code is entangled.
That's a valid point. So you identified that the problem is in the code. A solution to this could also be to discuss how the code could be changed to be more isolated. But I don't wanna be so negative. Instead: the new tests look very good and seem to improve coverage quite a bit. 👍
You can merge it :D
Resolves #34