Add type annotations, refactor sync/async #623
Conversation
force-pushed from 48176d9 to b4d35ee
```diff
-    context = Instance(zmq.Context)
-    def _context_default(self):
+    context: Instance = Instance(zmq.Context)
+    def _context_default(self) -> zmq.Context:
```
I was surprised not to see the `@default('context')` decorator here. Am I missing something?
The `@default` decorator was added to traitlets more recently than this code was written. Traitlets used to exclusively use 'magic method names' instead of decorators. The decorator approach is not required for the default generator, but it is fine to add if you are making a pass over all these bits.
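A minimal sketch may help contrast the two traitlets styles discussed here: the legacy magic-method name versus the `@default` decorator. The `Context` stand-in class is hypothetical, used only so the sketch has no pyzmq dependency.

```python
from traitlets import HasTraits, Instance, default


class Context:
    """Hypothetical stand-in for zmq.Context (keeps the sketch self-contained)."""


class OldStyle(HasTraits):
    context = Instance(Context)

    # Legacy style: traitlets discovers this purely by its magic name,
    # _<trait-name>_default.
    def _context_default(self):
        return Context()


class NewStyle(HasTraits):
    context = Instance(Context)

    # Decorator style: the trait name is registered explicitly, so the
    # method name no longer matters.
    @default('context')
    def _make_context(self):
        return Context()
```

Both classes lazily create their `context` on first access; the decorator form is simply the more recent, explicit spelling of the same mechanism.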
```diff
-    def _client_factory_default(self):
+    client_class: DottedObjectName = DottedObjectName('jupyter_client.blocking.BlockingKernelClient')
+    client_factory: Type = Type(klass='jupyter_client.KernelClient')
+    def _client_factory_default(self) -> Type:
```
No `@default('client_factory')` here?
```diff
-    def _kernel_spec_manager_default(self):
+    def _kernel_spec_manager_default(self) -> kernelspec.KernelSpecManager:
```
No `@default('kernel_spec_manager')` here?
I don't know how to add type hints to traits; right now this doesn't have any effect (see ipython/traitlets#647).
The fact that we are trying to support sync/async in a single code base is actually not good for type hints. For instance, a
I posted a question on Stack Overflow about that; it seems that this is a legit error. I'm not sure what to do about it.
In #533 we suggested having a single, async-native code base for jupyter-client, as we did in nbclient. The non-async methods would be wrappers around the async methods, running them until complete. This would probably solve the issue I mentioned above, and it would also remove some duplicated code.
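The wrapper idea can be sketched roughly like this (all names are hypothetical, not the PR's actual code): the async class owns the single real implementation, and the sync class runs each coroutine to completion.

```python
import asyncio


class AsyncKernelManagerSketch:
    """Hypothetical async-native base: the single real implementation."""

    async def start_kernel(self, **kw):
        await asyncio.sleep(0)  # stand-in for the real async launch work
        return "started"


def run_sync(coro):
    """Drive a coroutine to completion on a private event loop."""
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()


class KernelManagerSketch(AsyncKernelManagerSketch):
    """Sync facade: each method just wraps its async counterpart."""

    def start_kernel(self, **kw):
        return run_sync(super().start_kernel(**kw))
```

A sync caller uses `KernelManagerSketch().start_kernel()` with no event loop of its own; the async implementation is the only code path that needs maintaining.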
I've thought the
Thanks a lot @dhirschfeld, I didn't know about this project.
The folks on the
Thanks, I was worried it would be only for
This last commit refactors
Isn't this going to break direct subclasses of

Since we want to eventually have one implementation (async), what is the downside of a split-and-deprecate approach, using inheritance purely as the means to support configuration in a backwards-compatible manner? Yes, it means some duplication of code, but it doesn't stifle innovation, is much simpler to support, and is temporary.
Yes, previous subclasses of
I'm not sure I follow what you mean; could you explain further?
OK. I guess we're talking about a major-release boundary here (e.g., 7.0) where any subclasses not updated or who haven't capped
By split-and-deprecate I mean we do not have

Because the implementations are split, and because we want to continue to innovate, new functionality (like Kernel Provisioning) can move forward on the async class while leaving the synchronous methods alone, without requiring proliferation of both behaviors, one of which we plan to abandon. Once we determine the appropriate deprecation cycle, we remove the

We also wouldn't be introducing a new paradigm that runs every method of

I think we could use a one-time (or semi-regular) maintainers' sync-up of sorts, open to all, where we discuss these issues and a roadmap in general. Would that be something worthwhile to others?
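As a sketch of the split-and-deprecate shape described above (class names and bodies are illustrative assumptions, not the real jupyter_client code): the two implementations live side by side, and the sync one warns on use until it is removed.

```python
import warnings


class AsyncManagerSketch:
    """Illustrative async class: new features land here."""

    async def start_kernel(self, **kw):
        return "async start"


class DeprecatedSyncManagerSketch:
    """Illustrative sync class: frozen, deprecated, eventually removed."""

    def start_kernel(self, **kw):
        warnings.warn(
            "The synchronous KernelManager is deprecated; "
            "use AsyncKernelManager instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return "sync start"
```

The duplication is the cost; the benefit is that neither class's behavior can silently affect the other's subclasses during the deprecation window.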
Regarding breaking the API by renaming the current
force-pushed from 86a6989 to f8396b6
I think the same issues will apply to KernelClient as well. Generally speaking, any method called from within another method will not have its subclass override invoked. The aliasing trick breaks subclasses.
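The breakage is easy to demonstrate with a toy example (names are hypothetical): once the parent's internals call the renamed method, a subclass that overrides the old name is silently bypassed.

```python
class Base:
    def start(self):
        # Refactored internals now call the *new* private name.
        return self._async_launch()

    def _async_launch(self):
        return "base launch"

    # Old name kept as an alias so external callers still work.
    _launch = _async_launch


class PreRefactorSubclass(Base):
    # A pre-refactor subclass overrides the *old* name...
    def _launch(self):
        return "custom launch"


# ...but start() resolves self._async_launch to Base's method,
# so the override never runs:
assert PreRefactorSubclass().start() == "base launch"
```

External calls to `_launch()` still hit the override, which makes the bug subtle: only the internal call sites misbehave.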
jupyter_client/manager.py (Outdated)

```diff
@@ -338,10 +380,15 @@ def start_kernel(self, **kw):
     # launch the kernel subprocess
     self.log.debug("Starting kernel: %s", kernel_cmd)
-    self.kernel = self._launch_kernel(kernel_cmd, **kw)
+    self.kernel = await self._async__launch_kernel(kernel_cmd, **kw)
```
This breaks subclasses that override `_launch_kernel()`, as they won't have their override called.
jupyter_client/manager.py (Outdated)

```diff
@@ -438,16 +485,16 @@ def shutdown_kernel(self, now=False, restart=False):
     # Stop monitoring for restarting while we shutdown.
     self.stop_restarter()

-    self.interrupt_kernel()
+    await self._async_interrupt_kernel()
```
This breaks subclasses that override `interrupt_kernel()`, as they won't have their override called.
jupyter_client/manager.py (Outdated)

```diff
     if now:
-        self._kill_kernel()
+        await self._async__kill_kernel()
```
This breaks subclasses that override `_kill_kernel()`, as they won't have their override called.
jupyter_client/manager.py (Outdated)

```diff
     else:
         self.request_shutdown(restart=restart)
         # Don't send any additional kernel kill messages immediately, to give
         # the kernel a chance to properly execute shutdown actions. Wait for at
         # most 1s, checking every 0.1s.
-        self.finish_shutdown()
+        await self._async_finish_shutdown()
```
This breaks subclasses that override `finish_shutdown()`, as they won't have their override called.
jupyter_client/manager.py (Outdated)

```diff
@@ -500,21 +554,23 @@ def restart_kernel(self, now=False, newports=False, **kw):
                 "No previous call to 'start_kernel'.")
         else:
             # Stop currently running kernel.
-            self.shutdown_kernel(now=now, restart=True)
+            await self._async_shutdown_kernel(now=now, restart=True)
```
This breaks subclasses that override `shutdown_kernel()`, as they won't have their override called during restarts. There should be a test for that.
I think we should be more explicit as to what is async and what is sync. If a subclass of
Deprecation cycles can take years (
I agree; in a world where we don't have to continue supporting clients, this is all true. The reality is we have a contract to uphold, and introducing this level of change breaks that contract; and, yes, it makes things extremely difficult to change. If we're saying that we're free to completely rework our public contract whenever we like (and subclass authors be damned), then that's a violation of trust.
Regarding the sync API deprecation, I meant that I'm not sure it should happen at all. Having an async-only API is a pretty big constraint for users, since once they use it, async/await propagates through their entire code base. I think a sync API is very useful and should stay forever.
Yes, I know.
Agreed.
I think it would be good to discuss this. I think we're going to find it hard to adequately support both, but that depends on the kinds of changes we'll be making. I guess with this approach, new functionality can be added within the dual-class structure, so long as it is async. Is that a correct understanding? If so, that is great.
Because we have documented, promoted, and allowed folks to subclass KernelManager and KernelClient, we are breaking a contract, but that's what major release boundaries are for. I had proposed in the Kernel Provisioning proposal that it be delivered at a major boundary due to a hypothetical contract breakage (which has been extremely difficult to avoid, because it's a contract). I'm just not sure the addition of type hints warrants a major release.
Actually, subclass overrides of private-callable methods are exactly how you allow subclasses to influence behavior on the class. It's important to distinguish between publicly-callable methods, which are the contract for client applications, and privately-callable methods, which amend the contract promised to clients extending your functionality. From what I've read about Python, a class that is intended to be subclassed would use double-underscore prefixes on method names to indicate those methods are truly private (which triggers name-mangling). Single-underscored methods are more like

We even document that subclasses override

I'm sorry for being so persistent about this, and I do appreciate the discussion (it's helping me see things). Thanks for spending time on this (and for your patience with me 😄). Would you mind adding to the PR's description some explanation of the approach used (i.e., all sync methods are essentially placed onto a separate thread/event loop), so others don't need to glean this from the changes? You should also mention the API breakage, so we can discuss what our release cycle would be if this were merged. If we're going to be introducing a major release, there are other things we'll want to do, like finally remove the 7-year-deprecated
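The name-mangling point can be illustrated concretely (toy names): a double-underscore method is compiled to a class-qualified name, so a subclass cannot override it even accidentally, whereas a single-underscore method remains overridable and is therefore part of the extension contract.

```python
class Parent:
    def run(self):
        # Compiled as self._Parent__step(): immune to subclass overrides.
        return self.__step()

    def __step(self):
        return "parent step"


class Child(Parent):
    def __step(self):  # becomes _Child__step, a *different* attribute
        return "child step"


assert Parent().run() == "parent step"
assert Child().run() == "parent step"  # mangling kept the parent's method
```

Renaming `__step` to `_step` would flip the second assertion, which is exactly the overridable-by-design behavior being discussed here.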
Thanks Kevin for the detailed explanation and relevant background (as usual, and this is very appreciated, as I lack that context).
force-pushed from 1a7238b to 2b2b69b
Thanks Kevin for taking the time to test and investigate. I could reproduce the bug in papermill's tests.

```python
info_msg = self.wait_for_reply(await self.kc.kernel_info())
```

This is not without consequences, since now the whole function call stack leading to this line has to be async. If this is unacceptable, then

```python
info_msg = self.wait_for_reply(run_sync(self.kc.kernel_info()))
```
I managed to fix the issue by having requests return synchronously when
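One plausible mechanism for that (an assumption on my part; the PR may do it differently, and all names below are hypothetical) is to check for a running event loop: execute the request to completion when there isn't one, and hand the awaitable back to async callers.

```python
import asyncio


async def _kernel_info_impl():
    """Hypothetical async implementation of a kernel_info request."""
    return {"msg_type": "kernel_info_reply"}


def kernel_info():
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # Sync caller (no loop running): block until the reply is ready.
        return asyncio.run(_kernel_info_impl())
    # Async caller: return the coroutine so it can be awaited.
    return _kernel_info_impl()
```

With this shape, legacy code calls `kernel_info()` and gets a plain result, while async code writes `await kernel_info()`; neither caller has to change its call stack.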
Thanks David. I don't see a relevant issue or pull request in Papermill for this; do you know if something is in the works?

I guess this raises some concerns regarding breaking existing clients. However, I would imagine the only clients it would break would be those using

I thought I tried the synchronous call approach yesterday (using

The alternative of forcing clients to plumb their call stack to be async may be too much to ask. (Not saying that Papermill wouldn't do that, just talking about clients in general.)

I see you've just posted an update while I was writing this response. I'll give this a shot. Thanks!
Works like a charm @davidbrochart - very nice! I also tried the more complicated scenario I referenced, in which AKM and AKC subclasses come into play; ditto, like a charm!
Awesome, thanks a lot for your help Kevin.
I am - although I would like to see at least one other maintainer review this, due to its significant changes. Thank you for all the work (and patience) on this; it is really outstanding (IMHO). You've essentially flipped the script by making async the basis on which we can move forward!
I will merge today if there are no objections.
Nice work!
🎉
🚀
Hey guys, this seems like a huge refactoring to be included in a patch release (6.1.13). Furthermore, it includes breaking changes (see jupyter/qtconsole#476 and spyder-ide/spyder#15161), so a heads-up to upstream projects would be nice next time.
Hi Carlos, although there have been big internal changes, I was not expecting any change for users. But it looks like I was wrong; sorry for the inconvenience.
```diff
-    def __init__(self, context=None, session=None, address=None, loop=None):
+    def __init__(
+        self,
+        context: zmq.asyncio.Context,
```
It looks like the removal of the `None` default is what is breaking qtconsole, although this does imply they must not be using the HB channel, since it unconditionally references `self.context` in its `_create_socket()` method.
> It looks like the removal of the None default is what is breaking qtconsole

Thanks for your help @kevin-bates. I found that too, but now we have another problem (see below).

> although this does imply they must not be using the HB channel since it unconditionally references through self.context in its _create_socket() method.

It has something to do with the way the channel inherits from HBChannel and QObject, and a re-implementation of `super` to make that work (I really don't understand that logic; @minrk implemented it).
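For context, the general pattern that such mixed inheritance relies on is cooperative `super()`: Python linearizes the bases into the MRO, and each `__init__` must forward along it for every base to run. A toy illustration (not qtconsole's actual code):

```python
class ChannelBase:
    def __init__(self, **kw):
        self.channel_ready = True
        super().__init__(**kw)  # forwards along the MRO, not just to object


class WidgetBase:
    def __init__(self, **kw):
        self.widget_ready = True
        super().__init__(**kw)


class QtChannel(ChannelBase, WidgetBase):
    # MRO: QtChannel -> ChannelBase -> WidgetBase -> object
    pass


obj = QtChannel()
assert obj.channel_ready and obj.widget_ready
```

If `ChannelBase.__init__` dropped its `super().__init__()` call, `WidgetBase.__init__` would never run; that fragility is why custom `super` re-implementations in such hierarchies are hard to reason about.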
This is the error I'm seeing now while starting a kernel:

```
ERROR:tornado.general:Uncaught exception in ZMQStream callback
Traceback (most recent call last):
  File "/home/carlos/.virtualenvs/test-jupyter-client/lib/python3.8/site-packages/zmq/eventloop/zmqstream.py", line 434, in _run_callback
    callback(*args, **kwargs)
  File "/home/carlos/.virtualenvs/test-jupyter-client/lib/python3.8/site-packages/jupyter_client/threaded.py", line 101, in _handle_recv
    ident, smsg = self.session.feed_identities(msg)
  File "/home/carlos/.virtualenvs/test-jupyter-client/lib/python3.8/site-packages/jupyter_client/session.py", line 954, in feed_identities
    idx = msg_list.index(DELIM)
AttributeError: '_asyncio.Future' object has no attribute 'index'
```

Any ideas?
I also saw this @ccordoba12, it has to do with the threaded channels. Don't worry, I will clear all that out before the next release.
Great! Please give me a ping in case you want me to test your changes.
Thanks for your help @davidbrochart!
6.1.13 is now yanked on PyPI as well.
Just as an FYI, this change also breaks the nbconvert 5.x branch, as it has no async wrapping around its

EDIT: Ah, I see this was addressed in #637.
Fixes #553
Fixes #621