passio: Implement exercise #2195
Conversation
(force-pushed from 754113d to 26f3ddc)
I mean overall this is great ... I'd no doubt implement my example.py differently than yours, but the class overall makes sense for this (more advanced) exercise. I would say maybe we should clarify in the description.md that the "report" is just a tuple of those two counts, rather than forcing the student to figure that out from tests. Also, minor point, but I would try to avoid shadowing the …
Two questions for clarification. Do you think I should change anything in the example.py file about my implementation? It is literally the example for others to be inspired by; if you think it can be done better, I'm all ears. As to the stats properties, I'll look at adding a docstring to the properties in addition to putting it in the hints.md file so it is added to the README.

Are you referring to the function shadowing in the example.py or the mocks? I can fix it in either place. Come to think of it, I probably should in the example.py and stub file. I can do the same with the socket too.

I originally did the signatures differently, as I wrote them before the tests. Might simplify a few things on both ends. I went to some effort to get the exceptions and return values identical, so everything else should be too, in hindsight.

I was looking at it and somewhat remember what I did. The signatures in the docs for the io module don't always match what they are on the objects, and I put …

Sorry this is such a mess, but writing something without clear guidelines or expectations will do that. If you could answer my questions, share any more insights you have, and give me a bit more time, I'll get it fixed up.

(edit: added two paragraphs at the end.)
I don't have specific feedback on the implementation. This is a difficult exercise to implement because there's no obvious good way of doing it, except by subclassing socket.socket and the various io.*IO classes, and even that would be brittle. In the real world I'm pretty sure you'd end up writing custom versions of the underlying C implementations so you could ensure that any new socket or opened file descriptor would be logging usage. I think what you've got is certainly OK ... as a kind of rough sketch both the class skeletons are reasonable enough. The mocking (per usual) is pretty involved and opaque, but it would kind of have to be.
I was specifically referring to the example.py / stub file. I'm not really concerned with shadowing inside the tests, though remove it there too where it's easy. The main thing is to present "best practice" in the student-facing files.
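To make the discussion concrete, here is a rough sketch of the io-subclassing approach described above. The class name `MeteredFile`, the counter names, and the `stats` tuple are assumptions for illustration, not the exercise's actual API.

```python
import io


class MeteredFile(io.BufferedRandom):
    """Hypothetical sketch: subclass an io.*IO class and count traffic."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._read_bytes = 0
        self._write_bytes = 0

    def read(self, size=-1):
        # delegate to the real implementation, then record how much came back
        data = super().read(size)
        self._read_bytes += len(data)
        return data

    def write(self, b):
        # super().write returns the number of bytes actually written
        written = super().write(b)
        self._write_bytes += written
        return written

    @property
    def stats(self):
        # the "report": just a tuple of the two counts
        return (self._read_bytes, self._write_bytes)
```

Wrapping an `io.BytesIO` is enough to exercise the sketch, e.g. `MeteredFile(io.BytesIO(b"hello")).read()` bumps the read counter. As noted above, any subclassing approach like this stays brittle, since it can't see file descriptors opened outside the wrapper.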
I'm not married to the implementation. If you or @cmccandless or someone else thinks I should change something, I'm all ears; for example, splitting the combined stats into a separate property for each count. Changing it isn't difficult, since I have full branch coverage in the tests, and, as a bonus, I won't take it personally. Constructing the test suite alone taught me things I thought I already knew about these concepts.
If a fundamental change needs to be made, I'd rather do it now, before too much more work goes into this, or before it lands and the people doing the exercise get left in the lurch by a sudden redesign.
Agreed. Overall I'm not exactly happy with the implementation, but frankly I've passed on implementing this exercise before because it's just a wall of edge-case problems unless you control the interpreter itself. I think this is a solid pass at making a workable exercise out of a largely unworkable problem description, so I don't have any specific thoughts on improving it. But let's give @cmccandless (and others) a chance to weigh in.
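For the socket side of the exercise, a delegating wrapper is one plausible shape. Again a sketch only: `MeteredSocket`, its counters, and its `stats` tuple are assumptions for illustration, not the reviewed code.

```python
import socket


class MeteredSocket:
    """Hypothetical sketch: delegate to a real socket, metering recv/send."""

    def __init__(self, sock):
        self._sock = sock
        self._recv_bytes = 0
        self._send_bytes = 0

    def recv(self, bufsize, flags=0):
        # count every byte that actually arrived
        data = self._sock.recv(bufsize, flags)
        self._recv_bytes += len(data)
        return data

    def send(self, data, flags=0):
        # socket.send returns the number of bytes actually sent
        sent = self._sock.send(data, flags)
        self._send_bytes += sent
        return sent

    @property
    def stats(self):
        return (self._recv_bytes, self._send_bytes)
```

Delegation sidesteps some of the brittleness of subclassing socket.socket directly, though, as noted above, it still can't catch sockets opened elsewhere in the process.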
I think this is a good first revision for a workable exercise. It's likely that the tests could be refactored a bit to reduce the size of the file, but the code is easy to follow and seems to work as intended.
You will need to add an entry in config.json for this exercise before we can merge.
This issue has been automatically marked as …
(force-pushed from 26f3ddc to 01f9094)
I thought I pushed this a while ago. Regardless, I added sanity testing to the mocks, fixed some mutation failures, and got 100% branch coverage. It still needs the config.json set up, but I think I'm too deep in the weeds to assign a difficulty or the requisite skills. Any suggestions for those, or for things to change, would be appreciated.
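The transcript doesn't show how the mock sanity testing was actually done; one standard option is `unittest.mock`'s autospec, which keeps mocks from drifting away from the real io API:

```python
import io
from unittest import mock

# create_autospec mirrors the real class's attribute surface, so tests
# built on the mock fail fast if they touch something the real
# io.BufferedRandom doesn't actually have.
mock_file = mock.create_autospec(io.BufferedRandom, instance=True)
mock_file.read.return_value = b"data"

# real methods behave like normal mocks
assert mock_file.read(4) == b"data"

# attributes missing from the real class raise AttributeError immediately
try:
    mock_file.not_a_real_method
    raised = False
except AttributeError:
    raised = True
assert raised
```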
@AnAccountForReportingBugs My apologies! I misplaced the notification for your latest comment. As for config.json, I would suggest the following, inserted at the end of the …:

```json
{
  "slug": "paasio",
  "uuid": "ec36f9ed-f376-43d9-90ee-be0bea0cfc98",
  "core": false,
  "unlocked_by": "book-store",
  "difficulty": 5,
  "topics": [
    "classes",
    "interfaces",
    "networking"
  ]
}
```

You can use that snippet verbatim; the …
(force-pushed from 01f9094 to 1b9ae9d)
No worries. It happens. At least it didn't get completely lost in the "hacktoberfest" barrage. I've added the information, and the tests pass. The failure is because another exercise is out of date; once that is fixed in the master branch, a subsequent merge from master should give all greens here.
Pending merge of #2249, this is good to go! Thanks @AnAccountForReportingBugs for all your work on this!
@AnAccountForReportingBugs can you please update your branch with the latest from master?
The UI isn't letting me, so I'll just use a hammer. One of those commands worked. Feel free to squash, rebase, or rewrite to your heart's content.
I noticed the GitHub action didn't fail, but it does have an "annotation." I don't know what to make of it, though.
LGTM! Merging! Thanks @AnAccountForReportingBugs!
Closes #751
Not quite ready to go, but I thought some feedback might be appropriate at this point. The config.json needs some topics and other information before committing, and there might be some stylistic changes needed. That said, there is full branch coverage, and the exceptions/return values of the mocks should be identical to the real thing. It should also work on Python 3.4 and above on all platforms.