
support for django apps in gaiohttp #803

Closed · asvetlov opened this issue Jun 23, 2014 · 15 comments

Comments
@asvetlov (Collaborator)

The source of this issue is #788.
@benoitc @fafhrd91 @c-schmitt we can discuss it here.
I still haven't dug into the .read(n) problem deeply enough, sorry.

We could create an asyncio.Task for fetching data from the input and return the collected data as the .read(n) result, couldn't we?
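
A minimal illustrative sketch of that idea (the names and the `payload` interface here are assumptions for illustration, not gunicorn or aiohttp internals): a Task drains the input stream into an in-memory buffer, and the application's synchronous `.read(n)` is served from that buffer once the Task has finished.

```python
import asyncio
import io


class BufferedInput:
    """Synchronous wsgi.input-style facade served from data a Task collected."""

    def __init__(self):
        self._buffer = io.BytesIO()

    async def collect(self, payload):
        # `payload` is assumed to be an async stream whose read() coroutine
        # returns b"" at EOF, roughly like aiohttp's payload streams.
        while True:
            chunk = await payload.read()
            if not chunk:
                break
            self._buffer.write(chunk)
        self._buffer.seek(0)

    def read(self, n=-1):
        # Called synchronously by the WSGI application; only valid after
        # the collect() task has completed.
        return self._buffer.read(n)


async def handle(payload, run_wsgi_app):
    inp = BufferedInput()
    task = asyncio.ensure_future(inp.collect(payload))
    await task          # the body is fully buffered before the app runs
    run_wsgi_app(inp)   # hand the sync file-like object to the WSGI app
```

The catch, discussed below, is that a synchronous `.read(n)` running in the event loop thread cannot wait for the Task without blocking the loop, so the buffering has to finish before the application is called.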

@fafhrd91 (Collaborator)

@asvetlov how can you do this without "yield from" support from Django? Do you want to run the WSGI execution in a separate sync thread?

@benoitc (Owner) commented Jun 23, 2014

@asvetlov I guess it should be possible. I may have a look at that tomorrow. IMO we could wrap it in an io.BufferedReader or something like it; I do that, for example, in http-parser.
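
A rough sketch of such a wrapper, in the spirit of http-parser's reader module (illustrative assumptions only; the chunks here are presumed to have been received already by the event loop):

```python
import io


class ChunkReader(io.RawIOBase):
    """Raw stream over bytes chunks that were already received."""

    def __init__(self, chunks):
        self._chunks = iter(chunks)   # assumed: an iterable of bytes objects
        self._pending = b""

    def readable(self):
        return True

    def readinto(self, b):
        if not self._pending:
            self._pending = next(self._chunks, b"")
        if not self._pending:
            return 0                   # EOF
        n = min(len(b), len(self._pending))
        b[:n] = self._pending[:n]
        self._pending = self._pending[n:]
        return n


# Wrapping it in io.BufferedReader gives the WSGI app a standard file-like
# wsgi.input with read(n), readline(), etc. built on top of readinto().
body = io.BufferedReader(ChunkReader([b"a=1&", b"b=2"]))
assert body.read(4) == b"a=1&"
assert body.read() == b"b=2"
```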

@benoitc (Owner) commented Jun 23, 2014

@fafhrd91 well, you can yield from in a wrapper, can't you?

@asvetlov (Collaborator, Author)

@fafhrd91 My idea (not yet confirmed) is to create an asyncio.Task for reading from the input source.
When data becomes available, .read(n) can fetch it from the buffer.
Maybe I'm wrong, but I'd like to try that approach.

@fafhrd91 (Collaborator)

@benoitc you can, but then the thread will work in sync mode. There is no way to make a sync application asynchronous just by using aiohttp; a thread works either in sync or async mode.
It is possible to run async code in a separate thread, but is it worth the effort?

@fafhrd91 (Collaborator)

@asvetlov if you run blocking code, it will block the event loop. There is no magic.
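
For illustration, a minimal sketch of how blocking work can be kept off the loop (an assumption about the approach, not the gaiohttp worker's actual code): the blocking WSGI call is handed to `loop.run_in_executor()`, which runs it on a worker thread while the loop keeps serving other connections.

```python
import asyncio
import time


def blocking_wsgi_call():
    # Stands in for a synchronous Django/WSGI request handler.
    time.sleep(1)
    return b"response body"


async def handler():
    loop = asyncio.get_running_loop()
    # Runs in the default ThreadPoolExecutor; the event loop is free to
    # serve other connections while the sync code sleeps.
    return await loop.run_in_executor(None, blocking_wsgi_call)


print(asyncio.run(handler()))
```

The application itself is still synchronous; the executor only keeps it from stalling the loop thread.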

@asvetlov (Collaborator, Author)

BTW, how does the tornado worker solve this issue?

@benoitc (Owner) commented Jun 23, 2014

@fafhrd91 I thought that asyncio, like any event loop, spawned a thread for blocking operations... Anyway, yes, what we could do is spawn a future when the socket is selected for reading and wait for its result to fill the buffer. It would appear blocking to the user.
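
A tiny sketch of that mechanism under illustrative assumptions (not gunicorn code): asyncio's `loop.sock_recv()` is essentially "resolve a future when the socket is selected for reading, then recv", so filling a buffer as data arrives can look like this.

```python
import asyncio


async def fill_buffer(loop, sock, buffer, chunk_size=8192):
    sock.setblocking(False)
    while True:
        # The loop internally waits on a future that is resolved when the
        # socket becomes readable, then performs the recv().
        chunk = await loop.sock_recv(sock, chunk_size)
        if not chunk:
            break                      # peer closed the connection
        buffer.extend(chunk)
```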

@benoitc (Owner) commented Jun 23, 2014

@asvetlov apparently it reads the full body: https://github.com/tornadoweb/tornado/blob/master/tornado/wsgi.py#L209-L213

Not sure why, though.

@fafhrd91 (Collaborator)

@benoitc yes, you can run blocking code in a separate thread.

WSGIServerHttpProtocol also supports reading the full body; I can change it to readpayload=True.
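
For reference, a hedged sketch of what flipping that flag could look like, based on the old aiohttp.wsgi API this thread refers to (the module was later removed from aiohttp, so treat the exact import and signature as assumptions; `myproject.wsgi` is a hypothetical Django project):

```python
import asyncio

from aiohttp.wsgi import WSGIServerHttpProtocol   # old aiohttp API (assumed)

from myproject.wsgi import application            # hypothetical Django WSGI app


def protocol_factory():
    # readpayload=True makes the protocol read the whole request body before
    # calling the WSGI app, so a synchronous .read(n) never has to wait on
    # the event loop. Fine for form POSTs, unsuitable for huge uploads or
    # streaming request bodies.
    return WSGIServerHttpProtocol(application, readpayload=True)


loop = asyncio.get_event_loop()
loop.run_until_complete(loop.create_server(protocol_factory, "127.0.0.1", 8000))
loop.run_forever()
```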

@schmitch

Btw, just a side note: we ran gevent in production for a long time with no blocking issues; then we switched to Python 3 and the threaded worker and ran into blocking problems. This issue is huge, even bigger than you might think, since a lot of Python servers use WSGI. Even OpenStack uses WSGI, and Django has an open ticket about switching to gunicorn: https://code.djangoproject.com/ticket/21978

I didn't have the time to check Flask and other WSGI-compliant applications, but I will if needed.
And as a side note, GET works fine; only POST doesn't. Somebody mentioned that it doesn't work in Flask either.

The question from my side is: why can gevent be non-blocking while aiohttp is blocking, even though they are roughly the same kind of thing?

@benoitc (Owner) commented Jun 27, 2014

@c-schmitt can you open a ticket about the blocking issues with the threaded worker? I can have a look at it this weekend. If you have a way to reproduce them, that would also help a lot.

About the gaiohttp worker, I will try some approaches using io.BufferedReader or the like. In theory it should be possible to ask for more data when needed using it, something in the spirit of what I do in http-parser: https://github.com/benoitc/http-parser/blob/master/http_parser/reader.py#L12

If it's really not possible, then we should go for the "readpayload" option. I think the tornado worker is doing the same right now, but we should check. cc @fafhrd91 @asvetlov

@schmitch

@benoitc - the only way I could reproduce it was with a low worker count + generating a PDF with WeasyPrint. I'll make a ticket and open an example project.

@asvetlov (Collaborator, Author)

@benoitc @fafhrd91 I suggest switching on the readpayload option for now and opening a ticket for smarter behavior.

readpayload is not acceptable for uploading huge files or for streaming data -- that's true.
But it works for regular HTTP POST requests with data from user forms, etc.

@fafhrd91 (Collaborator)

sounds good
