Add instructions on how to extract canonical data to the Contributing Guide #230
The TODO issues like https://github.com/exercism/todo/issues/100 point us to some examples as guidelines for what should go in a test inputs/outputs JSON file. It would probably be useful to assemble a guide somewhere about that, using what we know of our existing JSON files.

This is prompted by exercism/discussions#10 (comment), where I muse that it could have been good if some non-obvious corner cases had been commented with their purpose. Sure, in many situations the test name is enough, but when it isn't, I'm always for more documentation.

Comments
I had noted in this comment how (what appears to be) a comment made it into the test title. That should not happen. If we do need comments, they should be in the JSON file as "notes" or something similar, rather than in the title.
Agreed, it would be really great to have a comprehensive guide (that we could add to), probably somewhere in the contributing guide. We haven't quite worked out the format for comments; if you have suggestions for how to improve them, that would be great. Maybe an optional key "notes" for any test case? That way we can put notes as comments in the test suite if they exist.
Hey! "notes" is a good idea. 😆 I was also thinking that messages for the tests may be helpful. The generated tests simply fail at the moment with no clarification of what the problem may be, other than the test title. So we have two kinds of "comments" to consider. I am not familiar enough with other testing frameworks to know how error messaging is handled, but "test-message" as a key for test messages where deemed appropriate, and "notes" for commentary, such as an explanation of a magic number, may be nice.
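For concreteness, here is a minimal sketch of what those two keys might look like in a canonical-data.json file; the exercise, key names, and values are hypothetical rather than an agreed format:

```json
{
  "exercise": "leap",
  "cases": [
    {
      "description": "century year is not a leap year",
      "input": 1900,
      "expected": false,
      "test-message": "1900 is divisible by 100 but not by 400, so it is a common year",
      "notes": "catches solutions that only check divisibility by 4"
    }
  ]
}
```

Here "test-message" would be surfaced to the student on failure, while "notes" is commentary aimed at maintainers reading the data.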
We did have messages for some tests, but I found that it was incredibly redundant. I get the benefit of working with both Go and Ruby, which have very different approaches to testing. I think we can use the test data to generate messages where necessary (e.g. the leap year test suite could have a message that says "case: 1998" or something like that). I really like the idea of having notes that are meant for the end user, though, for places where it makes sense.
I agree, comments like the ones in food_chain (not generated) are meant to be informative to the learner, and I think they are in the right place. As for messages: since I have (finally) done a generator class, I think it isn't such a big deal to get values into the messages for the tests. If the tests are actually descriptive enough (perhaps with test titles that follow the pattern in #231, like "standard and odd year"), we can then use the description in the failure message of the test, perhaps with the value of the offending call and the inputs. (#231: "A little more descriptive Leap Year test titles.")
Agreed: if the description is good, then that combined with inputs/outputs should help dramatically.
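To sketch the idea (field names assumed, not prescribed), a single well-described case carries everything a generator needs:

```json
{
  "description": "standard and odd year",
  "input": 1959,
  "expected": false
}
```

From this, a generator could render a failure message like "standard and odd year: expected is_leap(1959) to be false" with no hand-written message in the data (is_leap being a hypothetical function name).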
Notes: in #289 there is a question about whether there should be a facility for marking tests as primarily testing performance, so that tracks that auto-generate tests can read that and react accordingly (if they want to treat performance tests differently). Another musing I had: should there be a distinction between comments that need only be visible to track maintainers (or anyone who wants to port the exercise to a new language) and comments that should be added directly to the auto-generated test files so that students can see them?
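One hypothetical shape for both of those musings, with every key here invented purely for illustration:

```json
{
  "description": "handles a list of one million elements",
  "type": "performance",
  "maintainer-notes": "for porters: stresses recursion depth; treat or skip as your track sees fit",
  "student-notes": "your solution should stay fast even on very large inputs",
  "input": 1000000,
  "expected": 1000000
}
```

A track generator could then skip or adapt "performance" cases, copy "student-notes" into the generated test file, and leave "maintainer-notes" out of it.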
Quick note: this might be helpful: http://x.exercism.io/v3/problems/data-todos |
My general strategy:
I gots lots of opinions on this, apparently. Take or leave as you see fit.
I'm not sure I have so much more to add, but I'd like to point out a small thing that is helpful to me (more helpful and up-to-date than the individual issues in the todo repository, for sure...).

List problems: `curl http://x.exercism.io/v3/problems/data-todos | jq '.[] | map(.["slug"])'`

Look at a given problem (replace `accumulate` with the slug you want): `curl http://x.exercism.io/v3/problems/data-todos | jq '.[] | map(select(.["slug"] == "accumulate"))'`

The latter lists the implementing tracks, and is aware of whether they moved their exercises under `exercises`.
We are fixing that in the Haskell track, @IanWhitney. 😄 exercism/haskell#211
We still have a few custom performance tests in xhaskell. Sorry! 😔
Makes perfect sense to me. 👍
As we have discussed at #375 (comment), just as important as what goes into the canonical test data is what does not go in. There may be cases (such as negative lengths for the triangle exercise) that people might reasonably think should be tested, but that we discuss and choose not to test (perhaps to keep the focus on the problem at hand and/or to avoid edge-itis). It is beneficial to record the decision of omission so that people don't have to trawl through issues and/or risk duplicating past work. The record may be made either in the JSON file or in the README; especially in the README if we wish students to be aware of it and/or to offer that they can add tests for such cases themselves, which seems like a fine thing to encourage.
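A sketch of how such a decision of omission might be recorded in the JSON file itself (the top-level "comments" key and its wording are hypothetical):

```json
{
  "exercise": "triangle",
  "comments": [
    "Negative side lengths are deliberately not tested.",
    "We chose to keep the focus on triangle classification; see the discussion in #375."
  ],
  "cases": []
}
```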
The contribution guide is updated. How about closing this issue? |
I have a bit of an identity crisis with this issue. Reading my opening statement, it appears I wished to discuss a standard for how to annotate an individual test case with the rationale for having it. At some point it turned into a duplicate of #139 instead. I'll gladly close #139; it's my fault for not keeping this one on my original track.

To make clear what I'm talking about, here is the question that I ask: when I look at a canonical-data.json, how can I be sure that every test case is necessary? What does each individual case test for, and what mistaken solutions might it help catch? If it is not already clear from the test description, I want some way to tell.

Now, we may decide that the way to tell is through the git commit message, and take no further action. It may be hard to look through many commit messages to find the case I'm looking for, but maybe we don't have that problem yet. This is why I ask whether it should be written directly in the JSON file. There are discussions of various things in this issue:
So I'll either rename this issue to reflect the first topic, since there are comments about it in here already (I'll do this first), or create a new issue and close this one if it's become too unwieldy.
It's too distracting to have to read through everything. I'll create a new issue and reference comments from this issue where appropriate. |