Handle warnings in tests #751
Should `self._create_context` be unset here?
Fixed
If this is inside an `async def`, can this except branch ever occur? There should always be a running loop from inside a coroutine.

The previous strategy of setting `_ready` outside a coroutine might not have a running loop, but now that you've moved it into the coroutine (good, I think), there should always be a running loop.

Returning two types of Future depending on context would be tricky because they aren't interchangeable (a CFuture is not awaitable without calling `asyncio.wrap_future(cfuture)`).
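To make the point concrete, a toy sketch (not the PR's code; `Manager` and `_async_start` are made up): inside a coroutine, `asyncio.get_running_loop()` cannot raise, so the fallback branch is dead code.

```python
import asyncio

class Manager:
    async def _async_start(self) -> None:
        try:
            loop = asyncio.get_running_loop()
        except RuntimeError:
            # Unreachable from inside `async def`: a coroutine only
            # executes while its event loop is running.
            loop = asyncio.new_event_loop()
        self._ready = loop.create_future()

asyncio.run(Manager()._async_start())
```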
An important difference of this strategy, though: the attribute will not be set until an arbitrarily-later point in time due to asyncness. The previous strategy ensured `._ready` was defined immediately before any waiting, whereas putting it in the coroutine means the attribute will not be defined immediately. Two new things to consider (may well both be covered):

- anything touching `self._ready` that may now be called when `_ready` is not yet set
- can `_ready` be set and then re-set? If so, waits for `_ready` may get a 'stale' state before the new `_ready` future is attached. This can be avoided with a non-async `delattr` prior to the async bit (sketched below).
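A minimal sketch of that non-async `delattr` idea (class and method names are illustrative, not the PR's code):

```python
import asyncio

class Manager:
    def restart(self):
        # Non-async: runs to completion before any waiter can grab a
        # stale, already-resolved `_ready` from the previous start.
        if hasattr(self, "_ready"):
            del self._ready
        return self._async_restart()

    async def _async_restart(self) -> None:
        # The attribute only reappears once the new start is running.
        self._ready = asyncio.get_running_loop().create_future()
```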
Originally `_ready` was only defined in the constructor, which is why we had the fallback option. It seems like these futures are becoming problematic in general. Maybe what we really want is a state and a signal when the state changes, like we have on the JavaScript side.
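A rough sketch of what a state-plus-signal design could look like in Python (all names here are hypothetical, not existing jupyter_client API):

```python
import enum
from typing import Callable, List

class KernelStatus(enum.Enum):
    STARTING = "starting"
    READY = "ready"
    SHUTTING_DOWN = "shutting_down"
    DEAD = "dead"

class StatefulManager:
    def __init__(self) -> None:
        self._status = KernelStatus.STARTING
        self._callbacks: List[Callable[[KernelStatus], None]] = []

    @property
    def status(self) -> KernelStatus:
        return self._status

    def observe(self, callback: Callable[[KernelStatus], None]) -> None:
        # Register a listener that fires on every state transition.
        self._callbacks.append(callback)

    def _set_status(self, status: KernelStatus) -> None:
        if status is not self._status:
            self._status = status
            for callback in self._callbacks:
                callback(status)
```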
Alternatively, if we want to define `ready` and `shutdown_ready` as "the most recent" starting and shutting-down futures, we could also add an `is_ready` and `is_shutdown_ready` for when there are no pending of either. We would use the trick that tornado used to work around the new deprecation warnings. We could then make the `.ready` and `.shutdown_ready` futures only available when using the `AsyncKernelManager` (rough shape sketched below).
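Purely as an illustration of the "most recent future plus flag" shape (not tornado's actual workaround, and not real jupyter_client code):

```python
import asyncio
from typing import Optional

class AsyncKernelManager:
    def __init__(self) -> None:
        self.is_ready = False  # True only when no start is pending
        self._ready: Optional[asyncio.Future] = None

    @property
    def ready(self) -> asyncio.Future:
        # "The most recent" starting future, replaced on every start().
        if self._ready is None:
            raise RuntimeError("start() has not been called yet")
        return self._ready

    async def start(self) -> None:
        self.is_ready = False
        self._ready = asyncio.get_running_loop().create_future()
        # ... actually launch the kernel here ...
        self._ready.set_result(None)
        self.is_ready = True
```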
Haha oh my, nevermind, I'm digesting https://bugs.python.org/issue39529
I think part of the problem is that `jupyter_client` is a library, not an application, in the same way that `tornado` is. It looks like we also can't rely on `asyncio.run` in the future, since it relies on `get_event_loop` and `set_event_loop`. I think here's a sketch of what we need to do (see the code after this list):

- Have the sync wrappers `return asyncio.new_event_loop().run_until_complete(async_method)`.
- Drop the `.ready` future and add new methods called `async def wait_for_pending_start()` and `async def wait_for_pending_shutdown()` to handle pending kernels.

With this new structure, the synchronous managers could also support pending kernels, breaking it into two synchronous steps.
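A minimal sketch of the first step, assuming a small `run_sync` helper (the helper name and the wrapper shown are illustrative):

```python
import asyncio

def run_sync(coro):
    # A fresh loop per call: sidesteps the deprecated
    # get_event_loop()/set_event_loop() machinery entirely.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()

class KernelManager:
    async def _async_shutdown_kernel(self, now: bool = False) -> None:
        ...  # async shutdown logic lives here

    def shutdown_kernel(self, now: bool = False) -> None:
        # Sync wrapper around the coroutine.
        return run_sync(self._async_shutdown_kernel(now=now))
```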
We can handle the ready future in this PR and defer the other parts to a separate refactor PR.
btw, concurrent.futures are generally more compatible and flexible than asyncio futures - they are threadsafe, reusable across loops, etc. If you always use a cf.Future, you can await it in a coroutine with `asyncio.wrap_future(cf_future)`. That may be the simplest solution here. It's the inconsistency that I saw as a problem (`self._ready` may or may not be awaitable). Something like the sketch below ought to be more reliable.
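A minimal sketch of that always-a-`cf.Future` approach (the class is illustrative, not jupyter_client's actual code):

```python
import asyncio
from concurrent.futures import Future

class Manager:
    def __init__(self) -> None:
        # Always a concurrent.futures.Future: threadsafe and not bound
        # to any particular event loop.
        self._ready: Future = Future()

    def _mark_ready(self) -> None:
        self._ready.set_result(None)

    async def wait_for_ready(self) -> None:
        # A cf.Future is not directly awaitable; wrap it for this loop.
        await asyncio.wrap_future(self._ready)
```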
To always use new event loops (or `asyncio.run`) for each sync method call, one important consideration is that all your methods completely resolve by the end of the coroutine (i.e. not hold references to the current loop for later calls, which means requiring pyzmq 22.2, among other things).

A possibly simpler alternative is to maintain your own event loop reference. `asyncio.get_event_loop()` is deprecated because it did too many things, but that doesn't mean nobody should hold onto persistent loop references; it just means that they don't want to track a global not-running loop. If each instance had a `self._loop = asyncio.new_event_loop()` and always ran the sync wrappers in that loop (`self._loop.run_until_complete(coro())`), it should work fine. That avoids the multiple-loop problem for sync uses because it's still a single persistent loop. Each instance may happen to use a different loop, but that should not be a problem in the sync case.
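And a sketch of that per-instance persistent loop alternative (again illustrative, not the PR's code):

```python
import asyncio

class KernelManager:
    def __init__(self) -> None:
        # One persistent loop per instance; never installed as the
        # global loop, so get_event_loop()/set_event_loop() stay out of it.
        self._loop = asyncio.new_event_loop()

    async def _async_start_kernel(self, **kwargs) -> None:
        ...  # async start logic lives here

    def start_kernel(self, **kwargs) -> None:
        # Every sync wrapper runs on this same instance-local loop,
        # avoiding the multiple-loop problem for sync use.
        return self._loop.run_until_complete(self._async_start_kernel(**kwargs))
```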
Sweet, thanks for the insights, Min. That sounds like a good approach.
More thoughts on ready future handling (sketched after this list):

- Add `is_start_pending` and `is_shutdown_pending` flags.
- In `start()`, check for `_start_pending` and error if true. Set `_start_pending` and set a new `_start_ready` future. Clear `_start_pending` when ready.
- In `shutdown()`, check for `_shutdown_pending` and bail if true. Set `_shutdown_pending` and set a new `_shutdown_ready` future. Clear `_shutdown_pending` when ready.
- `wait_for_start_ready()` and `wait_for_shutdown_ready()` store current ready, and wait for it. If new current ready is different from the stored one, recurse.
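A sketch of the start-side half of that scheme (shutdown mirrors it; the code is illustrative):

```python
import asyncio
from typing import Optional

class Manager:
    def __init__(self) -> None:
        self._start_pending = False
        self._start_ready: Optional[asyncio.Future] = None

    async def start(self) -> None:
        if self._start_pending:
            raise RuntimeError("a kernel start is already pending")
        self._start_pending = True
        self._start_ready = asyncio.get_running_loop().create_future()
        try:
            ...  # actually start the kernel here
            self._start_ready.set_result(None)
        finally:
            self._start_pending = False

    async def wait_for_start_ready(self) -> None:
        ready = self._start_ready
        if ready is not None:
            await ready
        # A new start() may have swapped in a fresh future while we
        # waited; the stored one is then stale, so recurse onto the new.
        if self._start_ready is not ready:
            await self.wait_for_start_ready()
```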
Nice - make this call to `interrupt()` conditional on start status. Good catch.
👍🏼
I find this a little difficult to understand. I think the double calls to `getrlimit` are part of the confusion. Perhaps a comment as to the goal would be helpful. It appears to ensure a minimal soft limit of `DEFAULT_SOFT` if the current hard limit is at least that much.

It seems like the `hard < soft` (L30) condition will never be true, because `old_soft < soft` (L29) can only be true if `soft` was increased to `DEFAULT_SOFT`, which implies `hard >= soft`; this leads me to wonder whether some kind of update to `hard` is missing.
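For reference, the apparent intent could be written in a single pass (a sketch; the real `DEFAULT_SOFT` value lives in the test file):

```python
import resource

DEFAULT_SOFT = 4096  # illustrative; the actual constant is in the tests

def ensure_nofile_soft_limit() -> None:
    # Raise the soft RLIMIT_NOFILE limit to DEFAULT_SOFT when the hard
    # limit allows it; never touch or exceed the hard limit itself.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft < DEFAULT_SOFT <= hard:
        resource.setrlimit(resource.RLIMIT_NOFILE, (DEFAULT_SOFT, hard))
```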
Cleaned up