stop hook for extensions #526
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #526      +/-   ##
==========================================
+ Coverage   76.32%   76.43%   +0.11%
==========================================
  Files         110      110
  Lines        9819     9879      +60
  Branches     1067     1079      +12
==========================================
+ Hits         7494     7551      +57
+ Misses       1944     1943       -1
- Partials      381      385       +4
==========================================
```

Continue to review full report at Codecov.
Yeah, I couldn't find an example of tests for those.
Hi @mwakaba2 - although I feel a bit hypocritical saying this given the "unit test" state of kernels and terminals, I think we should minimally include a test that calls `stop_extension()`.
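A minimal unit test along these lines might look like the following sketch. `DummyExtensionApp` and `stop_all_extensions` here are hypothetical stand-ins, not actual jupyter_server APIs - the point is just the shape of asserting that the hook is awaited during cleanup:

```python
import asyncio


class DummyExtensionApp:
    """Stand-in for an ExtensionApp subclass; only the pieces needed here."""

    def __init__(self):
        self.stopped = False

    async def stop_extension(self):
        # Simulate async teardown work (closing connections, flushing, etc.)
        await asyncio.sleep(0)
        self.stopped = True


async def stop_all_extensions(apps):
    # Mirror of the server-side loop: await each extension's stop hook.
    for app in apps:
        await app.stop_extension()


def test_stop_extension_called():
    apps = [DummyExtensionApp(), DummyExtensionApp()]
    asyncio.run(stop_all_extensions(apps))
    assert all(app.stopped for app in apps)


test_stop_extension_called()
```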
Hi @oliver-sanders - thank you for this pull request!

This topic has come up recently, but more in the context of an event mechanism (see #466). This PR appears to target subclasses of `ExtensionApp`, and I wonder if it could be generally extended to include server extensions. Perhaps it could model the load functionality to call a stop-related method for those cases. (Note: this was the first time I've poked around the extensions subsystem, so please take what I say with a grain of salt.)

Should the call to `stop_all_extensions()` reside within the IOLoop lifecycle, so the extensions can use the same message loop as when they were running, or continue to invoke methods in a similar manner? I realized this when I was about to suggest we make `stop_extension()` async so we could then gather up the futures across the extensions and await them in parallel - then realized there's no loop at that point. I think not having an IOLoop may be problematic - or at least something that might hinder what/how extension authors want to handle stop requests.

We might take a look at having `ShutdownHandler` call into serverapp and marry that call (could be `_cleanup_extensions()`) with `_signal_stop()` so that extensions could be stopped on the IOLoop prior to its shutdown (perhaps).
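The ordering being suggested above - run the stop hooks while the event loop is still alive so they can be gathered in parallel, and only then shut the loop down - could be sketched roughly like this. The class and function names below are illustrative stand-ins, not the actual serverapp internals:

```python
import asyncio


class FakeExtension:
    """Illustrative stand-in for an ExtensionApp with an async stop hook."""

    def __init__(self, name):
        self.name = name
        self.stopped = False

    async def stop_extension(self):
        # Async teardown runs on the still-live event loop.
        await asyncio.sleep(0.01)
        self.stopped = True


async def stop_then_shutdown(extensions):
    # Gather the stop hooks so they complete concurrently, *before*
    # the loop itself is torn down (the ordering described above).
    await asyncio.gather(*(ext.stop_extension() for ext in extensions))
    # ...only after this would the server proceed to stop the loop.


extensions = [FakeExtension("ext-a"), FakeExtension("ext-b")]
asyncio.run(stop_then_shutdown(extensions))
```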
Hi, thanks for looking in.
Ditto here.
I've had a look at doing this; I think it would require a different interface, something like …
Good idea.
Tested that.
(force-pushed from d49040f to 7637dac)
That seems reasonable.
Yeah, they are async functions that must be called from non-async functions in this case, thus the wrapper.

I spent some time with this today, and believe this might be difficult to accomplish. I think we'd need to call …

We could try playing with this more, as I think it would be good to allow the extensions to stop from within the same ioloop, but that might be pushing things when coupled with the sig handler dynamic. (Note the first handler creates a thread to allow the user to answer y/n, but I think the second SIGINT is still handled on the main thread.) We'll need to see if answering 'y' (vs. 2 ctrl-c) changes things as well.

I see the comment regarding moving the …
Sorry, got my commits in a twist - I've fiddled the way the … works. I've tested:
Test example:

```python
from jupyter_server.base.handlers import JupyterHandler
from jupyter_server.extension.application import ExtensionApp
from jupyter_server.extension.handler import ExtensionHandlerMixin

import tornado
import tornado.gen
import tornado.web


class MyExtensionHandler(ExtensionHandlerMixin, JupyterHandler):

    @tornado.web.authenticated
    def get(self):
        self.write('goodbye world')
        self.serverapp.stop()


class MyExtensionApp(ExtensionApp):

    # -------------- Required traits --------------
    name = "myextension"
    default_url = "/myextension"
    load_other_extensions = True
    file_url_prefix = "/render"

    def initialize_settings(self):
        ...

    def initialize_handlers(self):
        self.handlers.extend([
            ('/myextension', MyExtensionHandler)
        ])

    def initialize_templates(self):
        ...

    async def stop_extension(self):
        print('#1')
        await tornado.gen.sleep(1)
        print('#2')


def _jupyter_server_extension_points():
    return [
        {
            "module": "myext",
            "app": MyExtensionApp
        }
    ]
```
Hi Oliver. I took this for a spin just now and it looks like both the kernel and terminal shutdowns are acting differently with these changes. The symptoms are that those shutdown operations are not taking place. Here's the output using the released server with 1 kernel and 1 terminal active during the two SIGINTs:

[output elided]

while running the same scenario with this branch yields:

[output elided]

I'm not too worried about the LSP messages, but it would be good to ensure the appropriate kernel/terminal shutdown sequences are taking place. Could you please confirm you see similar behavior when there are active kernels at shutdown? Thanks.
* closes jupyter-server#241
* call a stop_extension method on server shutdown if present
Well spotted, thanks for reviewing. Yes, I've managed to reproduce this using JupyterLab. It would appear that the …

Since …, this feels nicer; however, I'm not sure whether there will be downstream consequences.
Thanks for the great work Oliver. This last set of changes appear to resolve the kernel/terminal shutdown issues - thank you!
Shoot - didn't catch the test failures. I see what you mean by the …
Dammit, should be fixed now. This came down to the disparity between …

I've added a …
Thank you for the great work here Oliver!
@oliver-sanders - We discussed the merge of this PR in the server meeting today and decided to go ahead and create a minor release to carry this change late next week. Since there could be folks that both derive from …

This will be a useful addition for extension developers - thanks again!
@oliver-sanders, this is great! Thank you for taking the time to work on this.

Overall, I think this approach looks good. I left some small comments around implementation details in line.

First, I think we should just add `stop_extension` as a method to `ExtensionApp` and have it pass (i.e. do nothing) by default.

Second, I don't think we need an attribute in the `ExtensionManager` to store `ExtensionApp`s. I think we can just loop through the `ExtensionPoint`s and ask "are you an `ExtensionApp`?" Otherwise, we open the possibility for that attribute to go stale over time.
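The second point could be sketched like this - simplified stand-ins for the real `ExtensionApp`/`ExtensionPoint` classes, just to show deriving the list on demand rather than caching it in an attribute:

```python
class ExtensionApp:
    """Simplified stand-in; the real class lives in jupyter_server.extension.application."""

    async def stop_extension(self):
        # Default hook does nothing; subclasses override to add teardown.
        pass


class ExtensionPoint:
    """Simplified stand-in: wraps whatever object an extension point exposes."""

    def __init__(self, app=None):
        self.app = app  # an ExtensionApp instance, or None for non-app extensions


def extension_apps(points):
    # Ask each point "are you an ExtensionApp?" at call time, so there is
    # no stored attribute that can drift out of sync with the points.
    return [p.app for p in points if isinstance(p.app, ExtensionApp)]


points = [ExtensionPoint(ExtensionApp()), ExtensionPoint(None)]
assert len(extension_apps(points)) == 1
```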
@Zsailer, sorted - I deviated a little to keep the `extension_app` property a mapping of extension names rather than extension point names, as extension names are more user-facing (so make more sense to the person reading the log files). In anticipation of downstream developers needing to debug shutdown logic, I added some debug logging so we can see which extensions have started/finished shutting down.
Thanks, @oliver-sanders. This looks good to me. Merging away!
closes #241

Call a `stop_extension` method for each extension app on server shutdown, if present.

Not sure how to go about testing this - any examples for the other `cleanup_*` methods?