
paasio: Implement exercise #2195

Merged

Conversation

@AnAccountForReportingBugs (Contributor) commented Apr 27, 2020

Closes #751

Not quite ready to go, but I thought some feedback might be appropriate at this point. The config.json needs some topics and other information before committing, and there might be some stylistic changes needed. That said, there is full branch coverage, and the exceptions/return values of the mocks should be identical to the real thing. It should also work on Python 3.4 and above on all platforms.

@yawpitch (Contributor)

I mean, overall this is great ... I'd no doubt implement my example.py differently from yours, but the class design makes sense for this (more advanced) exercise. I would say we should clarify in the description.md that the "report" is just a tuple of those two counts, rather than forcing the student to figure that out from the tests.
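
(For concreteness, a hypothetical sketch of such a report; the names here are illustrative only, not the PR's actual code:)

    class MeteredSocket:
        """Illustrative delegation wrapper that counts recv traffic."""

        def __init__(self, socket):
            self._socket = socket
            self._recv_ops = 0
            self._recv_bytes = 0

        def recv(self, bufsize, flags=0):
            data = self._socket.recv(bufsize, flags)
            self._recv_ops += 1
            self._recv_bytes += len(data)
            return data

        @property
        def recv_stats(self):
            # the "report": just a tuple of the two counts
            return (self._recv_ops, self._recv_bytes)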

Also, minor point, but I would try to avoid shadowing the file builtin in signatures.
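
(A minimal sketch of that rename; the signature is an assumption for illustration, not the PR's actual code:)

    class MeteredFile:
        # A parameter named "file" would shadow a well-known name (a
        # builtin in Python 2, and a common identifier regardless), so
        # a distinct parameter name is clearer.
        def __init__(self, wrapped):  # rather than: def __init__(self, file)
            self._file = wrapped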

@AnAccountForReportingBugs (Contributor, Author) commented Apr 28, 2020

Two questions for clarification. First, do you think I should change anything about my implementation in the example.py file? It is literally the example for others to be inspired by, so if you think it can be done better, I'm all ears. As for the stats properties, I'll look at adding a docstring to the properties in addition to putting it in the hints.md file so it is added to the README.

Second, are you referring to the shadowing in example.py or in the mocks? I can fix it in either place. Come to think of it, I probably should fix it in both example.py and the stub file, and I can do the same with the socket. I originally wrote the signatures differently because I wrote them before the tests; aligning them might simplify a few things on both ends. I went to considerable effort to get the exceptions and return values identical, so in hindsight everything else should match too.

Looking at it again, I somewhat remember what I did. The signatures in the io module docs don't always match those on the actual objects, and I named the socket parameter data because I didn't want to shadow bytes in case I needed it later, e.g. for an isinstance check or a constructor call. But if you think students won't need that, I can align that signature more closely with the real one. I also wanted the SENTINEL in order to test for an argument being explicitly passed in, as opposed to assuming the kwargs of the called function. That the stub and example differ illustrates how this changed over time, so I'll fix that too.
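
(A minimal, self-contained sketch of that sentinel idea, on the assumption that the goal is distinguishing "argument omitted" from "argument explicitly passed", even when the explicit value equals the default; the names are illustrative:)

    SENTINEL = object()  # unique marker no caller could pass by accident

    def read(size=SENTINEL):
        if size is SENTINEL:
            return "size omitted"
        return "size passed: {!r}".format(size)

    assert read() == "size omitted"
    assert read(None) == "size passed: None"  # explicit None is distinguishable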

Sorry this is such a mess, but writing something without clear guidelines or expectations will do that. If you could answer my questions, share any other insights you have, and give me a bit more time, I'll get it fixed up.

Edit: added two paragraphs at the end.

@yawpitch (Contributor)

> Do you think I should change anything about my implementation in the example.py file? It is literally the example for others to be inspired by, so if you think it can be done better, I'm all ears. As for the stats properties, I'll look at adding a docstring to the properties in addition to putting it in the hints.md file so it is added to the README.

I don't have specific feedback on the implementation. This is a difficult exercise to implement because there's no obvious good way of doing it, except by subclassing socket.socket and the various io.*IO classes, and even that would be brittle. In the real world I'm pretty sure you'd end up writing custom versions of the underlying C implementations so you could ensure that any new socket or opened file descriptor would be logging usage.
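
(As a rough illustration of that subclassing route, a minimal sketch of the shape of the idea, not the PR's code:)

    import io

    class MeteredBuffer(io.BufferedRandom):
        """Count read traffic by overriding the inherited method."""

        def __init__(self, raw):
            super().__init__(raw)
            self.read_ops = 0
            self.read_bytes = 0

        def read(self, size=-1):
            data = super().read(size)
            self.read_ops += 1
            self.read_bytes += len(data)
            return data

    # usage against an in-memory stream
    buf = MeteredBuffer(io.BytesIO(b"hello"))
    assert buf.read() == b"hello"
    assert (buf.read_ops, buf.read_bytes) == (1, 5)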

I think what you've got is certainly OK ... as a kind of rough sketch, both class skeletons are reasonable enough. The mocking (per usual) is pretty involved and opaque, but it would kind of have to be.

> Are you referring to the shadowing in example.py or in the mocks? I can fix it in either place. Come to think of it, I probably should fix it in both example.py and the stub file, and I can do the same with the socket.

I was specifically referring to the example.py / stub file. I'm not really concerned with shadowing inside the tests, though remove it where you notice it and it's practical. The main thing is to present "best practice" in the student-facing files.

@AnAccountForReportingBugs (Contributor, Author)

I'm not married to the implementation. If you or @cmccandless or someone else thinks I should change something, I'm all ears; for example, splitting each stats property into two separate properties. Changing it isn't difficult, since I have full branch coverage in the tests, and, as a bonus, I won't take it personally. Constructing the test suite alone taught me things about these concepts I thought I already knew.

  • If you would like, I could change the functions to use functools.wraps and *args, **kwargs so they look more like a real-life implementation (a sketch of that style follows this list), but I thought that might stray too far from an informative lesson on the general concepts of files, sockets, common magic methods, and read-only properties.
  • Or, I could have them implement a file-like opener or a monkeypatched open call, and wrap the socket constructor in a socket.socket-wrapping class for _socket (or even deal directly with the low-level I/O), but that seems too complicated/involved, plus you suggested having the file and socket passed in.
  • I can also reimplement the skeletons in the stub/example, e.g. to split the properties or to add/remove functions.
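
(A minimal sketch of that functools.wraps style, assuming methods whose return value's length is the byte count; all names are illustrative, not the PR's code:)

    import functools

    def metered(method):
        """Wrap a data-returning method and accumulate per-call counts."""
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            data = method(self, *args, **kwargs)
            self._ops += 1
            self._bytes += len(data)
            return data
        return wrapper

    class MeteredReader:
        def __init__(self, wrapped):
            self._file = wrapped
            self._ops = 0
            self._bytes = 0

        @metered
        def read(self, size=-1):
            return self._file.read(size)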

If a fundamental change needs to be made, I'd rather do it now, before much more work goes into this, or before it lands and people working the exercise are left in the lurch by a sudden redesign.

@yawpitch (Contributor)

> If a fundamental change needs to be made, I'd rather do it now, before much more work goes into this, or before it lands and people working the exercise are left in the lurch by a sudden redesign.

Agreed. Overall I'm not exactly happy with the implementation, but frankly I've passed on implementing this exercise before because it's just a wall of edge-case problems unless you control the interpreter itself. I think this is a solid pass at making a workable exercise out of a largely unworkable problem description, so I don't have any specific thoughts on improving it.

But let's give @cmccandless (and others) a chance to weigh in.

@cmccandless (Contributor) left a comment

I think this is a good first revision for a workable exercise. It's likely that the tests could be refactored a bit to reduce the size of the file, but the code is easy to follow and seems to work as intended.

@cmccandless (Contributor)

You will need to add an entry in config.json for this exercise before we can merge.

@stale (bot) commented May 20, 2020

This issue has been automatically marked as abandoned because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@AnAccountForReportingBugs (Contributor, Author)

I thought I pushed this a while ago. Regardless, I added sanity testing to the mocks, fixed some mutation failures, and got 100% branch coverage. It still needs the config.json set up, but I'm too deep in the weeds to assign a difficulty or requisite skills. Any suggestions for those, or for other things to change, would be appreciated.

@cmccandless (Contributor)

@AnAccountForReportingBugs My apologies! I misplaced the notification for your latest comment.

As for config.json, I would suggest the following, inserted at the end of the exercises array:

    {
      "slug": "paasio",
      "uuid": "ec36f9ed-f376-43d9-90ee-be0bea0cfc98",
      "core": false,
      "unlocked_by": "book-store",
      "difficulty": 5,
      "topics": [
        "classes",
        "interfaces",
        "networking"
      ]
    }

You can use that snippet verbatim; the uuid is generated just for this.
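
(For reference, such a UUID can be generated with Python's standard uuid module:)

    import uuid

    print(uuid.uuid4())  # a fresh random UUID on each call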

@AnAccountForReportingBugs (Contributor, Author)

No worries. It happens. At least it didn’t get completely lost in the “hacktoberfest” barrage.

I’ve added the information and the tests pass. The failure is because another exercise is out of date; once that is fixed in the master branch, a subsequent merge from master should give all green here.

@AnAccountForReportingBugs marked this pull request as ready for review October 3, 2020 07:07
@cmccandless (Contributor)

Pending merge of #2249, this is good to go!

Thanks @AnAccountForReportingBugs for all your work on this!

@cmccandless (Contributor)

@AnAccountForReportingBugs can you please update your branch with the latest from master?

@AnAccountForReportingBugs (Contributor, Author) commented Oct 10, 2020

The UI isn’t letting me, so I’ll just use a hammer.

One of those commands worked. Feel free to squash, rebase, or rewrite to your heart’s content.

@AnAccountForReportingBugs (Contributor, Author)

I noticed the GitHub Action didn’t fail but does have an “annotation.” I don’t know what to make of it, though.

@cmccandless (Contributor)

LGTM! Merging! Thanks @AnAccountForReportingBugs !

@cmccandless merged commit f3e5532 into exercism:master Oct 12, 2020