Server lifecycle #1497
base: master
Conversation
```python
try:
    yield server
finally:
    await server.shutdown()
    cancel_handle.cancel()
```
I think it's worth pointing this bit out, as it suffers from all of the same problems that our test fixtures have suffered from:

- `asyncio.sleep(0.1)` instead of checking for startup properly.
- Two handles required to be able to shut down the service fully.
- `server.shutdown()` not actually killing the server main loop.

I think it's a good demonstration of my motivation for proposing these changes.
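For comparison, a minimal sketch (a toy class, not Uvicorn's actual API) of what signalling startup through an `asyncio.Event` could look like, so callers can wait for startup instead of polling with `asyncio.sleep(0.1)`:

```python
import asyncio


class SketchServer:
    """Toy stand-in showing startup signalling instead of sleep-polling."""

    def __init__(self) -> None:
        self._started = asyncio.Event()
        self._task = None

    async def serve(self) -> None:
        # ... bind sockets and start accepting connections ...
        self._started.set()           # announce that startup has finished
        await asyncio.Event().wait()  # stand-in for the real main loop

    async def start_serving(self) -> None:
        """Start serving and return only once startup has completed."""
        self._task = asyncio.create_task(self.serve())
        await self._started.wait()    # no `await asyncio.sleep(0.1)` loop
```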
Yep, pretty interesting. It's unfortunate I can't dedicate more time to this; the only real question I have is about the interaction it may have with `--reload` and `--workers`.
As far as I can tell, there shouldn't be any interaction. Both just call `server.run` in a subprocess and kill the process by sending the appropriate signal. There doesn't appear to be much difference from running without multiple workers or a watcher, which is as it should be.
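Roughly, the supervisor pattern being described looks like the sketch below (illustrative only, not the actual reload/multiprocess code):

```python
import multiprocessing
import os
import signal


def start_worker(server) -> multiprocessing.Process:
    # The supervisor never drives the Server object directly; the child
    # process runs the blocking server.run() entry point.
    process = multiprocessing.Process(target=server.run)
    process.start()
    return process


def stop_worker(process: multiprocessing.Process) -> None:
    # Ask the child to shut down with a signal, then wait for it to exit.
    os.kill(process.pid, signal.SIGINT)
    process.join()
```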
Going to play with this, but re-reading it, I like it.

@florimondmanca what are your thoughts on this? You played with this extensively a while ago, so your input should be very interesting.
As far as I can see, this PR introduces public API changes to `Server`, which would need to be documented.

Right now my main question to assess these changes would be: how would you use the introduced changes to solve 1) and 2) in your situation? What's your current programmatic usage of Uvicorn like, and what does it become with these changes?

There are several ways we could go about "Manage server lifecycle programmatically". For example, another way would be taking inspiration from structured concurrency and promoting an async context manager usage. It would do the equivalent of:

```python
async with Server(...):
    # Server has started (no need to poll)
    ...
# Server has shut down
```

We'd allow users to use this instead of having to write an equivalent themselves. Non-blocking usage would be possible with `AsyncExitStack`:

```python
from contextlib import AsyncExitStack

exit_stack = AsyncExitStack()
await exit_stack.enter_async_context(Server(...))  # Starts up
...
await exit_stack.aclose()  # Shuts down
```

Also, I'm not sure what the use case for a …
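For concreteness, a rough sketch (helper name made up) of how that context-manager behaviour could be layered over today's `serve()` / `started` / `should_exit` attributes; it still polls for startup, which is exactly what the rest of this thread is about removing:

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def running(server):
    """Run `server.serve()` in the background; yield once started, stop on exit."""
    task = asyncio.create_task(server.serve())
    try:
        while not server.started:   # polling startup, for now
            await asyncio.sleep(0.01)
        yield server
    finally:
        server.should_exit = True   # ask the main loop to wind down
        await task
```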
Hi @florimondmanca -

Regarding our situation: our test fixtures currently look a bit like this:

```python
@pytest.fixture
async def server() -> AsyncIterator[str]:
    host = "127.0.0.1"
    port = choose_port(host)
    app = create_app()

    # Uvicorn will always try to override the log config with something.
    # These settings should cause this to be a no-op.
    log_config = {"version": 1, "disable_existing_loggers": False, "incremental": True}
    uvicorn_config = uvicorn.config.Config(
        app, host=host, port=port, reload=False, workers=1, log_config=log_config
    )
    server = uvicorn.server.Server(uvicorn_config)

    # TODO uvicorn unconditionally overrides signal handlers if called from
    # the main thread. This completely breaks `oaknorth_processes`' soft
    # shutdown mechanism if we don't do something about it. A proper fix is
    # needed upstream.
    server.install_signal_handlers = lambda: None  # type: ignore

    task = asyncio.create_task(server.serve())
    while not server.started:
        await asyncio.sleep(0.01)

    yield f"http://{host}:{port}"

    server.should_exit = True
    try:
        await asyncio.wait_for(task, 1.0)
    except asyncio.exceptions.TimeoutError:
        logger.exception("Could not shutdown server cleanly")
```

It all works most of the time, but it's brittle (for example, if the server fails to start it will block forever), and the polling, both in the fixture and in Uvicorn itself, is not ideal.

Regarding the choice of interface: I picked this exact set of methods because it matches the interface of `asyncio.Server`.

Strongly agree on making the server a context manager, but I would prefer to do what the stdlib does and implement `__aenter__`/`__aexit__` as thin wrappers around the other public methods.

Going back to our situation: in the ideal case, I would like for our test fixtures to look something like this:

```python
@pytest.fixture
async def server() -> AsyncIterator[str]:
    host = "127.0.0.1"
    port = choose_port(host)
    app = create_app()
    uvicorn_config = uvicorn.config.Config(
        app,
        host=host,
        port=port,
        reload=False,
        workers=1,
        register_signal_handlers=False,
        log_config=None,
    )
    async with uvicorn.server.Server(uvicorn_config):
        yield f"http://{host}:{port}"
```

With only this PR merged, we could go to something like this:

```python
@pytest.fixture
async def server() -> AsyncIterator[str]:
    host = "127.0.0.1"
    port = choose_port(host)
    app = create_app()

    # Uvicorn will always try to override the log config with something.
    # These settings should cause this to be a no-op.
    log_config = {"version": 1, "disable_existing_loggers": False, "incremental": True}
    uvicorn_config = uvicorn.config.Config(
        app, host=host, port=port, reload=False, workers=1, log_config=log_config
    )
    server = uvicorn.server.Server(uvicorn_config)

    await server.start_serving()
    yield f"http://{host}:{port}"
    server.close()
    await server.wait_closed()
```

which would already be a substantial improvement.
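A sketch of the "thin wrappers" idea, assuming the `start_serving`/`close`/`wait_closed` methods from this PR exist: `__aexit__` mirrors `asyncio.Server` (close, then wait), while `__aenter__` also starts the server so the ideal fixture above would work (a small deviation from the stdlib, whose `__aenter__` just returns `self`):

```python
from uvicorn import Server


class ContextManagedServer(Server):  # hypothetical subclass, for illustration
    async def __aenter__(self) -> "ContextManagedServer":
        # Starting here lets `async with Server(...)` block until startup is done.
        await self.start_serving()
        return self

    async def __aexit__(self, *exc_info) -> None:
        # Same shape as asyncio.Server.__aexit__: request close, wait for it.
        self.close()
        await self.wait_closed()
```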
@bwhmather Thanks. I think I understand the idea of having `__aenter__`/`__aexit__` be straight-up calls to other public methods, basically making the async context manager usage a shorthand more than anything else.

My comment on documentation still holds, I believe. I think the idea of a publicly documented way to run a server programmatically has been around for a while. It'd be great if we could clear that out with a fully documented and refined solution. Please note the current documentation: https://www.uvicorn.org/#config-and-server-instances

For reference, we use a Uvicorn server in the HTTPX test suite, and what we do is run Uvicorn in a thread instead. Looks like this:

```python
class TestServer(Server):
    def install_signal_handlers(self) -> None:
        pass


def serve_in_thread(server: Server):
    thread = threading.Thread(target=server.run)
    thread.start()
    try:
        while not server.started:
            time.sleep(1e-3)
        yield server
    finally:
        server.should_exit = True
        thread.join()


@pytest.fixture(scope="session")
def server():
    config = Config(app=app, lifespan="off", loop="asyncio")
    server = TestServer(config=config)
    yield from serve_in_thread(server)
```

I believe it suffers from some of the same limitations, esp. regarding hanging on startup failures, as well as asyncio/threading interactions. I'm curious what we'd have to change in HTTPX to use the methods proposed here.
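Not a proposal for HTTPX itself, just a sketch of how the thread-based helper above could at least fail fast on startup problems by bounding the wait (the `startup_timeout` parameter is made up):

```python
import threading
import time


def serve_in_thread(server, startup_timeout: float = 5.0):
    thread = threading.Thread(target=server.run)
    thread.start()
    try:
        deadline = time.monotonic() + startup_timeout
        while not server.started:
            if not thread.is_alive():
                raise RuntimeError("server thread exited before startup completed")
            if time.monotonic() > deadline:
                raise TimeoutError("server did not start within the timeout")
            time.sleep(1e-3)
        yield server
    finally:
        server.should_exit = True
        thread.join()
```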
I tested this locally and I think overall this is looking great. Thanks a lot.
I did a thorough review, leaving some comments.
To finalize things, we might want to drop / extract anything that's not strictly related to this new `start_serving`/`close`/`wait_closed`/`shutdown` API.

E.g. I'm not sure I understood what needed fixing in #1301, or how this PR fixes it in particular. Is there a way to fix it without having this PR in? Is it possible to fix it after merging this PR?
Yup, that looks very familiar! We were doing the same, but then tried to apply the technique to a different project which expected to create the app with already-created resources that referenced the main thread's event loop.

Regarding docs: sorry, yes, of course.

Regarding #1301: you are right, this doesn't actually resolve it. I think I misread the …
One other thing that is concerning me on re-reading this PR is the question of what to do about exceptions. Should they be re-raised by …? `asyncio.Server` appears to opt for swallowing any errors.
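Purely as an illustration of the re-raising option (not something Uvicorn or `asyncio.Server` does today), a wrapper could keep hold of the background task and surface its exception when the caller waits for shutdown:

```python
import asyncio


class ServeHandle:
    """Tracks a background serve() coroutine so its exception is not lost."""

    def __init__(self, coro) -> None:
        self._task = asyncio.ensure_future(coro)

    def close(self) -> None:
        self._task.cancel()

    async def wait_closed(self) -> None:
        try:
            await self._task
        except asyncio.CancelledError:
            pass  # normal shutdown path after close()
        # Any other exception raised inside serve() propagates to the caller here.
```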
This seems like a great idea. I'm in favor of promoting and developing better APIs to run Uvicorn programmatically.
Adds `close`, `wait_closed` and `start_serving` methods that match the semantics of `asyncio.Server`.

Adds a `sockets` argument to the constructor in order to enable the new methods to be used with existing bindings.

This makes it easier to use uvicorn as part of a larger application, by making it possible to block until a service has started, and easier to shut down cleanly.

For background, we use `uvicorn` extensively and are very happy with its behaviour in production, but have had a lot of difficulty with server lifecycle when trying to use it as part of our test suite. In particular: … an `asyncio.Task` wrapping `serve`. It also took a fair bit of digging in the source code to figure out that calling `Server.shutdown` from outside does not do what one might expect.

I'd like to propose a few changes, the first of which I have attempted in this PR, which I think will resolve the issues we've encountered:

- `close`, `wait_closed` and `start_serving` methods that match the semantics of `asyncio.Server` (resolves issues 1 and 2).
- An `asyncio.Event` to make it so that `close` can pre-empt the polling loops (resolves issue 3).
- A `register_event_handlers` keyword argument to `serve` which can be used to disable registering of event handlers (resolves issue 4).

If you are happy with this approach then I would be glad to work on 2, 3 and 4 as follow-ups.

Resolves:
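As a closing illustration of the `asyncio.Event` follow-up (a toy loop, not the actual implementation): a close request can wake a sleep-based poll loop immediately instead of waiting out the sleep interval.

```python
import asyncio


class ToyMainLoop:
    """Periodic loop that exits as soon as close() is called."""

    def __init__(self) -> None:
        self._should_exit = asyncio.Event()

    def close(self) -> None:
        self._should_exit.set()  # pre-empts any wait_for() below immediately

    async def main_loop(self) -> None:
        while not self._should_exit.is_set():
            # ... periodic housekeeping would go here ...
            try:
                # Sleep for up to 0.1s, but return early if close() is called.
                await asyncio.wait_for(self._should_exit.wait(), timeout=0.1)
            except asyncio.TimeoutError:
                pass
```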