
Improve async performance. #3215

Open
MarkusSintonen opened this issue Jun 4, 2024 · 36 comments
Labels
perf Issues relating to performance

Comments

@MarkusSintonen

MarkusSintonen commented Jun 4, 2024

There seem to be performance issues in httpx (0.27.0): it performs much worse than aiohttp (3.9.4) with concurrently running requests (on Python 3.12). The following benchmark shows that running 20 requests concurrently is over 10x slower with httpx than with aiohttp. The benchmark uses very basic httpx usage for issuing multiple GET requests with limited concurrency. The script outputs a figure showing that the duration of individual GET requests varies enormously with httpx.

(figure: per-request durations, httpx vs aiohttp)

# requirements.txt:
# httpx == 0.27.0
# aiohttp == 3.9.4
# matplotlib == 3.9.0
# 
# 1. start server: python bench.py server
# 2. run client test: python bench.py client

import asyncio
import sys
from typing import Any, Coroutine, Iterator
import aiohttp
import time
import httpx
from aiohttp import web
import matplotlib.pyplot as plt


PORT = 1234
URL = f"http://localhost:{PORT}/req"
RESP = "a" * 2000
REQUESTS = 100
CONCURRENCY = 20


def run_web_server():
    async def handle(_request):
        return web.Response(text=RESP)

    app = web.Application()
    app.add_routes([web.get('/req', handle)])
    web.run_app(app, host="localhost", port=PORT)


def duration(start: float) -> int:
    return int((time.monotonic() - start) * 1000)


async def run_requests(axis: plt.Axes):
    async def gather_limited_concurrency(coros: Iterator[Coroutine[Any, Any, Any]]):
        sem = asyncio.Semaphore(CONCURRENCY)
        async def coro_with_sem(coro):
            async with sem:
                return await coro
        return await asyncio.gather(*(coro_with_sem(c) for c in coros))

    async def httpx_get(session: httpx.AsyncClient, timings: list[int]):
        start = time.monotonic()
        res = await session.request("GET", URL)
        assert len(await res.aread()) == len(RESP)
        assert res.status_code == 200, f"status_code={res.status_code}"
        timings.append(duration(start))

    async def aiohttp_get(session: aiohttp.ClientSession, timings: list[int]):
        start = time.monotonic()
        async with session.request("GET", URL) as res:
            assert len(await res.read()) == len(RESP)
            assert res.status == 200, f"status={res.status}"
        timings.append(duration(start))

    async with httpx.AsyncClient() as session:
        # warmup
        await asyncio.gather(*(httpx_get(session, []) for _ in range(REQUESTS)))

        timings = []
        start = time.monotonic()
        await gather_limited_concurrency((httpx_get(session, timings) for _ in range(REQUESTS)))
        axis.plot([*range(REQUESTS)], timings, label=f"httpx (tot={duration(start)}ms)")

    async with aiohttp.ClientSession() as session:
        # warmup
        await asyncio.gather(*(aiohttp_get(session, []) for _ in range(REQUESTS)))

        timings = []
        start = time.monotonic()
        await gather_limited_concurrency((aiohttp_get(session, timings) for _ in range(REQUESTS)))
        axis.plot([*range(REQUESTS)], timings, label=f"aiohttp (tot={duration(start)}ms)")


def main(mode: str):
    assert mode in {"server", "client"}, f"invalid mode: {mode}"

    if mode == "server":
        run_web_server()
    else:
        fig, ax = plt.subplots()
        asyncio.run(run_requests(ax))
        plt.legend(loc="upper left")
        ax.set_xlabel("# request")
        ax.set_ylabel("[ms]")
        plt.show()

    print("DONE", flush=True)


if __name__ == "__main__":
    assert len(sys.argv) == 2, f"Usage: {sys.argv[0]} server|client"
    main(sys.argv[1])

I found the following issue, but it seems unrelated, as the workaround doesn't make a difference here: #838 (comment)

@MarkusSintonen
Author

Found some related discussions:

Opening a proper issue seems warranted to give this better visibility and make it easier for others to find. In its current state httpx is not a good option for highly concurrent applications. Hopefully the issue gets fixed, because otherwise the library is great, so thanks for it!

@tomchristie
Member

tomchristie commented Jun 6, 2024

Oh, interesting. There are some places I can think of that we might want to dig into here...

  • A comparison of threaded performance would also be worthwhile. requests compared against httpx, with multithreaded requests.
  • A comparison of performance against a remote server would be more representative than performance against localhost.

Possible points of interest here...

  • Do we have the same socket options as aiohttp? Are we sending simple GET requests across more than one TCP packet unnecessarily, either due to socket options, or due to our flow when writing the request to the stream, or both? E.g. see https://brooker.co.za/blog/2024/05/09/nagle.html
  • We're currently using h11 for our HTTP construction and parsing. This is the best Python option for careful spec correctness, though it has more CPU overhead than e.g. httptools.
  • We're currently using anyio for our async support. We did previously have a native asyncio backend; there might be some CPU overhead to be saved here, which in this localhost case might outweigh the network overheads.
  • Also worth noting that aiohttp currently supports DNS caching where httpx does not, although that's not relevant in this particular case.
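For the first bullet, the socket option in question can be verified directly; a minimal stdlib-only sketch of enabling TCP_NODELAY (which disables Nagle's algorithm, so small request writes aren't delayed):

```python
import socket

# Create a TCP socket and disable Nagle's algorithm, so small writes
# (a GET request line plus headers) aren't coalesced or delayed.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Non-zero means the option took effect.
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```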

Also, the tracing support in both aiohttp and httpx is likely to be extremely valuable to us here.

@MarkusSintonen
Author

MarkusSintonen commented Jun 6, 2024

Thank you for the good points!

A comparison of performance against a remote server would be more representative than performance against localhost.

My original benchmark hit AWS S3. There I got very similar results, where httpx had a huge variance in request timings under concurrent requests. This investigation was prompted by us observing strange request durations when servers were under heavy load in production. For now we have switched to aiohttp, and it seems to have fixed the issue.

@tomchristie
Member

My original benchmark hit AWS S3. There I got very similar results [...]

Okay, thanks. Was that also testing small GET requests / similar approach to above?

@MarkusSintonen
Author

Okay, thanks. Was that also testing small GET requests / similar approach to above?

Yes, pretty much: a GET of a file a couple of KB in size. In the real system the sizes of course vary a lot.

@MarkusSintonen
Author

MarkusSintonen commented Jun 7, 2024

We're currently using anyio for our async support. We did previously have a native asyncio backend, there might be some CPU overhead to be saved here, which in this localhost case might be outweighing network overheads.

@tomchristie you were right, this is the issue ^!

When I simply patch httpcore to replace anyio.Lock with asyncio.Lock, the performance improves greatly. Why does httpcore use AnyIO there instead of asyncio? AnyIO seems to have some overhead issues.

With asyncio:
(figure: per-request durations with asyncio primitives)

With anyio:
(figure: per-request durations with anyio primitives)
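The patch in question is essentially an asyncio-native lock with the same async-context-manager surface. A minimal, self-contained sketch (the class and method names here are illustrative, not httpcore's actual internals):

```python
import asyncio

class AsyncIOLock:
    """Illustrative asyncio-native lock exposing the same async
    context manager interface that anyio.Lock provides."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()

    async def __aenter__(self) -> "AsyncIOLock":
        await self._lock.acquire()
        return self

    async def __aexit__(self, exc_type, exc, tb) -> None:
        self._lock.release()

async def demo() -> list:
    lock = AsyncIOLock()
    order = []

    async def worker(i):
        async with lock:  # serializes access, just like anyio.Lock
            order.append(i)

    await asyncio.gather(*(worker(i) for i in range(5)))
    return order

order = asyncio.run(demo())
```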

@MarkusSintonen
Author

MarkusSintonen commented Jun 7, 2024

There is another hot spot in AsyncHTTP11Connection.has_expired, which is called heavily, e.g. from AsyncConnectionPool. It checks the connection status via the is_readable logic, which turns out to be a particularly heavy check.

The connection pool logic is quite heavy, as it rechecks all of the connections every time requests are assigned to connections. It might be possible to skip the is_readable checks on the pool side: just take a connection from the pool, and take another if the picked one turns out to be unhealthy, instead of checking them all every time. What do you think?

It would probably also be a good idea to add some performance tests to the httpx/httpcore CI.
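The pool change described above can be sketched with a toy model: poll only the connection you are about to hand out, dropping it if stale, rather than polling every pooled connection on each request (illustrative code, not httpcore's actual implementation):

```python
from collections import deque

class Conn:
    """Toy stand-in for a pooled HTTP connection."""
    def __init__(self, healthy=True):
        self.healthy = healthy

    def is_readable(self):
        # Stand-in for the expensive socket poll: an idle HTTP/1.1
        # connection that became readable has been closed by the peer.
        return not self.healthy

class LazyPool:
    def __init__(self):
        self._idle = deque()

    def release(self, conn):
        self._idle.append(conn)

    def acquire(self):
        # Only poll the candidate we are about to hand out,
        # not every connection in the pool.
        while self._idle:
            conn = self._idle.popleft()
            if not conn.is_readable():
                return conn  # healthy, reuse it
            # stale: discard it and try the next candidate
        return Conn()  # pool exhausted: open a fresh connection

pool = LazyPool()
good = Conn(healthy=True)
pool.release(Conn(healthy=False))  # a stale connection sits in front
pool.release(good)
picked = pool.acquire()  # skips the stale one, returns `good`
```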

@MarkusSintonen
Author

MarkusSintonen commented Jun 7, 2024

I can probably help with a PR if you give me some pointers on how to proceed :)

I could, for example, replace the synchronization primitives with the native asyncio ones.

@tomchristie
Member

Why does httpcore use AnyIO there instead of asyncio?

See encode/httpcore#344, #1511, and encode/httpcore#345 for where/why we switched over to anyio.

I can probably help with a PR if you give me pointers about how to proceed

A good first pass at this would be to add an asyncio.py backend, without switching the default over.

You might want to work from the last version that had a native asyncio backend, although I think the backend API has probably changed slightly.

Docs... https://www.encode.io/httpcore/network-backends/
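As a rough starting point, this is the general shape of the asyncio-native stream such a backend would wrap; the class name and method set here are illustrative, and the real httpcore backend interface has more methods plus timeout handling:

```python
import asyncio

class AsyncIOStream:
    """Illustrative wrapper pairing asyncio's StreamReader/StreamWriter,
    roughly the shape a network-backend stream needs."""

    def __init__(self, reader, writer):
        self._reader = reader
        self._writer = writer

    async def read(self, max_bytes):
        return await self._reader.read(max_bytes)

    async def write(self, data):
        self._writer.write(data)
        await self._writer.drain()

    async def aclose(self):
        self._writer.close()
        await self._writer.wait_closed()

async def demo():
    # Local echo server so the wrapper can be exercised end to end.
    async def echo(reader, writer):
        writer.write(await reader.read(100))
        await writer.drain()
        writer.close()

    server = await asyncio.start_server(echo, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    stream = AsyncIOStream(reader, writer)
    await stream.write(b"ping")
    data = await stream.read(100)
    await stream.aclose()
    server.close()
    await server.wait_closed()
    return data

data = asyncio.run(demo())
```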


Other context...

@MarkusSintonen
Author

MarkusSintonen commented Jun 8, 2024

Thanks @tomchristie

What about this case I pointed out:

When I just do a simple patch into httpcore to replace anyio.Lock with asyncio.Lock the performance improves greatly

Switching the network backend won't help there, as the lock is not defined by the network implementation; the lock implementation is global. Should we just change the synchronization to use asyncio?

@MarkusSintonen
Author

MarkusSintonen commented Jun 10, 2024

I'm able to push the performance of httpcore to be exactly on par with aiohttp:
(figure: patched httpcore, latencies on par with aiohttp)

Previously (on httpcore master) the performance is not great and the latency behaves very randomly:
(figure: httpcore master, highly variable latencies)

You can see the benchmark here.

Here are the changes. Three things are required to bring the performance up to par with aiohttp (in separate commits):

  1. Commit 1: change the synchronization primitives (in _synchronization.py) to use asyncio rather than anyio.
  2. Commit 2: bring back the asyncio-based backend that was removed in the past (AsyncIOStream).
  3. Commit 3: optimize AsyncConnectionPool to avoid calling the socket poll every time the pool is used, and fix idle-connection checking to have lower time complexity.

I'm happy to open a PR from these. What do you think @tomchristie?

@tomchristie
Member

@MarkusSintonen - Nice one. Let's work through those as individual PRs.

Is it worth submitting a PR where we add a scripts/benchmark?

@MarkusSintonen
Author

Is it worth submitting a PR where we add a scripts/benchmark?

I think it would be beneficial to have the benchmark run in CI so we would see the difference. I have previously contributed to Pydantic, and they use codspeed, which posts benchmark diffs to the PR when the benchmarked behaviour changes. It should be free for open-source projects.
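Whatever service ends up running it, the core of a CI benchmark is just a small timing harness; a minimal stdlib-only sketch (the workload and threshold are placeholders):

```python
import asyncio
import statistics
import time

async def time_async(fn, rounds=5):
    """Run an async callable several times, returning per-round durations."""
    durations = []
    for _ in range(rounds):
        start = time.perf_counter()
        await fn()
        durations.append(time.perf_counter() - start)
    return durations

async def workload():
    # Placeholder for the real workload, e.g. N concurrent GETs
    # through the client under test.
    await asyncio.sleep(0.01)

durations = asyncio.run(time_async(workload))
mean = statistics.mean(durations)
# A CI job would compare `mean` against a stored baseline and
# fail the build on a significant regression.
```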

@tomchristie
Member

tomchristie commented Jun 10, 2024

That's an interesting idea. I'd clearly be in agreement with adding a scripts/benchmark. I'm uncertain whether we'd want the extra CI runs every time or not. I suggest proceeding with the uncontroversial progression to start with, and then figuring out if/how to tie it into CI afterwards. (Reasonable?)

@MarkusSintonen
Author

MarkusSintonen commented Jun 10, 2024

@tomchristie I have now opened the 2 fix PRs:

Maybe I'll open the network-backend addition after these, as it's the most complex one.

@rafalkrupinski

Isn't the usage of http.CookieJar part of the problem?

self.jar = CookieJar()

https://github.com/python/cpython/blob/68e279b37aae3019979a05ca55f462b11aac14be/Lib/http/cookiejar.py#L1266

@MarkusSintonen
Author

Isn't usage of http.CookieJar a part of the problem?

@rafalkrupinski I haven't run benchmarks where requests/responses use cookies, but at least it doesn't cause performance issues in general. I ran similar benchmarks from the httpcore side with httpx. Performance is at a similar level to aiohttp and urllib3 when using the performance fixes from the PRs:

(Waiting for review from @tomchristie)

Async (httpx vs aiohttp):
(figure: async benchmark results)

Sync (httpx vs urllib3):
(figure: sync benchmark results)

@rafalkrupinski

TBH I'm surprised by httpx ditching anyio. Sure, anyio comes with a performance overhead, but this breaks compatibility with Trio.

@MarkusSintonen
Author

MarkusSintonen commented Jul 10, 2024

TBH I'm surprised by httpx ditching anyio. Sure anyio comes with performance overhead, but this is breaking compatibility with Trio.

I'm not aware of it ditching anyio completely. It will still be supported, it's just optional. Trio will also still be supported by httpcore.

@rafalkrupinski

@rafalkrupinski I haven't run benchmarks where requests/responses use cookies, but at least it doesn't cause performance issues in general

These are really cool speed-ups. Can't wait for httpx to overtake aiohttp ;)

@tirkarthi

Since the benchmark seems to use plain http, I think the issue below is also related: creating the SSL context in httpx had some overhead compared to aiohttp.

Ref : #838

@tyteen4a03

tyteen4a03 commented Sep 7, 2024

Hi, any movement on the PRs? We're having to use both aiohttp and httpx in our project for this reason, whereas we'd like to have only one set of APIs.

@HuiDBK

HuiDBK commented Sep 8, 2024

Hi, any movement on the PRs? We're having to use both aiohttp and httpx in our project for this reason, whereas we'd like to have only one set of APIs.

I use aiohttp to encapsulate a chained-call style client, which I personally feel is pretty good.

url = "https://juejin.cn/"
resp = await AsyncHttpClient().get(url).execute()
# json_data = await AsyncHttpClient().get(url).json()
text_data = await AsyncHttpClient(new_session=True).get(url).text()
byte_data = await AsyncHttpClient().get(url).bytes()

example: https://github.com/HuiDBK/py-tools/blob/master/demo/connections/http_client_demo.py

@tomchristie tomchristie changed the title httpx.AsyncClient has much worse performance than aiohttp.ClientSession with concurrent requests Improve async permforance. Sep 27, 2024
@tomchristie tomchristie added the perf Issues relating to performance label Sep 27, 2024
@tomchristie tomchristie changed the title Improve async permforance. Improve async performance. Sep 27, 2024
@pleomax0730

Is there any progress on this issue?

@hidaris

hidaris commented Nov 24, 2024

In what scenarios might httpx encounter performance bottlenecks? Is there a more general explanation?

@lizeyan

lizeyan commented Nov 25, 2024

Hello guys, I think I've encountered the same issue. However, our production code relies heavily on httpx, and our tests depend on respx, making it difficult to migrate to aiohttp. If anyone has faced similar challenges, there's a workaround: take advantage of httpx's custom-transport capability and use aiohttp for the actual requests:

import asyncio
import statistics
import time
import typing

import aiohttp
from aiohttp import ClientSession
import httpx
ADDRESS = "https://www.baidu.com"

async def request_with_aiohttp(session):
    async with session.get(ADDRESS) as rsp:
        return await rsp.text()

async def request_with_httpx(client):
    rsp = await client.get(ADDRESS)
    return rsp.text

# benchmark functions
async def benchmark_aiohttp(n):
    async with ClientSession() as session:
        # make sure code is right
        print(await request_with_aiohttp(session))
        start = time.time()
        tasks = []
        for i in range(n):
            tasks.append(request_with_aiohttp(session))
        await asyncio.gather(*tasks)
        return time.time() - start

async def benchmark_httpx(n):
    async with httpx.AsyncClient(
        timeout=httpx.Timeout(
            timeout=10,
        ),
    ) as client:
        # make sure code is right
        print(await request_with_httpx(client))

        start = time.time()
        tasks = []
        for i in range(n):
            tasks.append(request_with_httpx(client))
        await asyncio.gather(*tasks)
        return time.time() - start
    
class AiohttpTransport(httpx.AsyncBaseTransport):
    def __init__(self, session: typing.Optional[aiohttp.ClientSession] = None):
        self._session = session or aiohttp.ClientSession()
        self._closed = False

    async def handle_async_request(self, request: httpx.Request) -> httpx.Response:
        if self._closed:
            raise RuntimeError("Transport is closed")

        # convert headers
        headers = dict(request.headers)

        # prepare request parameters
        method = request.method
        url = str(request.url)
        content = request.content
        
        async with self._session.request(
            method=method,
            url=url,
            headers=headers,
            data=content,
            allow_redirects=False,
        ) as aiohttp_response:
            # read the response body
            content = await aiohttp_response.read()

            # convert headers
            headers = [(k.lower(), v) for k, v in aiohttp_response.headers.items()]

            # build the httpx.Response
            return httpx.Response(
                status_code=aiohttp_response.status,
                headers=headers,
                content=content,
                request=request
            )

    async def aclose(self):
        if not self._closed:
            self._closed = True
            await self._session.close()


async def benchmark_httpx_with_aiohttp_transport(n):
    async with httpx.AsyncClient(
        timeout=httpx.Timeout(
            timeout=10,
        ),
        transport=AiohttpTransport(),
    ) as client:
        start = time.time()
        tasks = []
        for i in range(n):
            tasks.append(request_with_httpx(client))
        await asyncio.gather(*tasks)
        return time.time() - start
    

async def run_benchmark(requests=1000, rounds=3):
    aiohttp_times = []
    httpx_times = []
    httpx_aio_times = []

    print(f"Starting benchmark with {requests} concurrent requests...")

    for i in range(rounds):
        print(f"\nRound {i+1}:")

        # aiohttp test
        aiohttp_time = await benchmark_aiohttp(requests)
        aiohttp_times.append(aiohttp_time)
        print(f"aiohttp took {aiohttp_time:.2f} s")

        # brief pause to let the system cool down
        await asyncio.sleep(1)

        # httpx test
        httpx_time = await benchmark_httpx(requests)
        httpx_times.append(httpx_time)
        print(f"httpx took {httpx_time:.2f} s")

        # brief pause to let the system cool down
        await asyncio.sleep(1)

        # httpx (aiohttp transport) test
        httpx_time = await benchmark_httpx_with_aiohttp_transport(requests)
        httpx_aio_times.append(httpx_time)
        print(f"httpx (aiohttp transport) took {httpx_time:.2f} s")

    print("\nSummary:")
    print(f"aiohttp mean: {statistics.mean(aiohttp_times):.2f} s")
    print(f"httpx mean: {statistics.mean(httpx_times):.2f} s")
    print(f"httpx (aiohttp transport) mean: {statistics.mean(httpx_aio_times):.2f} s")

if __name__ == '__main__':
    # run the benchmark
    asyncio.run(run_benchmark(512))

Output summary:

aiohttp mean: 0.49 s
httpx mean: 1.55 s
httpx (aiohttp transport) mean: 0.51 s

@pleomax0730

In what scenarios might httpx encounter performance bottlenecks? Is there a more general explanation?

We encountered an issue with httpx sending requests to our self-hosted embedding API, where it sometimes returned a status code 500 without any clear reason. We've switched to aiohttp to see if the issue persists.

@lizeyan

lizeyan commented Nov 25, 2024

In what scenarios might httpx encounter performance bottlenecks? Is there a more general explanation?

We encountered an issue with httpx sending requests to our self-hosted embedding API, where it sometimes returned a status code 500 without any clear reason. We've switched to aiohttp to see if the issue persists.

Although a 500 usually indicates a server-side error, it seems that httpx still has shortcomings in handling various corner cases compared to mature libraries like requests; we've also encountered similar strange issues. #3269

@pleomax0730

In what scenarios might httpx encounter performance bottlenecks? Is there a more general explanation?

We encountered an issue with httpx sending requests to our self-hosted embedding API, where it sometimes returned a status code 500 without any clear reason. We've switched to aiohttp to see if the issue persists.

Although 500 is usually related to server-side error, it seems that httpx still has shortcomings in handling various corner cases compared to mature libraries like requests, and we've also encountered similar strange issues. #3269

That's what we thought at first, but it's just a simple encode-and-return endpoint. Nothing there should go wrong.

@RyanMarten

RyanMarten commented Dec 3, 2024

@lizeyan

lizeyan commented Dec 6, 2024

Hello guys, I think I've encountered the same issue. However, our production code heavily relies on httpx, and our tests depend on respx, making it difficult to migrate to aiohttp. If anyone has faced similar challenges, I think there's a workaround: take advantage of httpx's custom transport capability to use aiohttp for the actual requests:

[quoted benchmark code and results from the comment above omitted]

Here is a more complete version of this workaround. I use it in my production code, and it works well.

# imports added for completeness
from contextvars import ContextVar

import aiohttp
import httpx
from httpx import AsyncBaseTransport, Request, Response


class AiohttpTransport(AsyncBaseTransport):
    def __init__(self, session: aiohttp.ClientSession | None = None):
        self._session = session or aiohttp.ClientSession()
        self._closed = False

    async def handle_async_request(self, request: httpx.Request) -> httpx.Response:
        if (
            _rsp := try_to_get_mocked_response(request)
        ) is not None:  # for compatibility with respx mocks
            return _rsp

        if self._closed:
            raise RuntimeError("Transport is closed")

        # carry over the request headers (including any auth)
        headers = dict(request.headers)

        # prepare request parameters
        method = request.method
        url = str(request.url)
        content = request.content

        async with self._session.request(
            method=method, url=url, headers=headers, data=content, allow_redirects=False
        ) as aiohttp_response:
            # read the response body
            content = await aiohttp_response.read()

            # convert headers
            headers = [
                (k.lower(), v)
                for k, v in aiohttp_response.headers.items()
                if k.lower() != "content-encoding"
            ]

            # build the httpx.Response
            return httpx.Response(
                status_code=aiohttp_response.status,
                headers=headers,
                content=content,
                request=request,
            )

    async def aclose(self):
        if not self._closed:
            self._closed = True
            await self._session.close()


mock_router = ContextVar("mock_router")


def try_to_get_mocked_response(request: Request) -> Response | None:
    try:
        _mock_handler = mock_router.get()
    except LookupError:
        return None
    return _mock_handler(request)


def create_aiohttp_backed_httpx_client(
    *,
    headers: dict[str, str] | None = None,
    total_timeout: float | None = None,
    base_url: str = "",
    proxy: str | None = None,
    keepalive_timeout: float = 15,
    max_connections: int = 100,
    max_connections_per_host: int = 0,
    verify_ssl: bool = False,
    login: str | None = None,
    password: str | None = None,
    encoding: str = "latin1",
) -> httpx.AsyncClient:
    timeout = aiohttp.ClientTimeout(total=total_timeout)
    connector = aiohttp.TCPConnector(
        keepalive_timeout=keepalive_timeout,
        limit=max_connections,
        limit_per_host=max_connections_per_host,
        verify_ssl=verify_ssl,
        enable_cleanup_closed=True,
    )
    if login and password:
        auth = aiohttp.BasicAuth(login=login, password=password, encoding=encoding)
    else:
        auth = None
    return httpx.AsyncClient(
        base_url=base_url,
        verify=False,
        transport=AiohttpTransport(
            session=aiohttp.ClientSession(
                proxy=proxy,
                auth=auth,
                timeout=timeout,
                connector=connector,
                headers=headers,
            )
        ),
    )

@hidaris

hidaris commented Dec 6, 2024

@lizeyan Using this method, are all the APIs consistent with httpx?

@lizeyan

lizeyan commented Dec 6, 2024

@lizeyan Using this method, are all the APIs consistent with httpx?

I'm not entirely certain, but in my experience the features exercised in the code (authentication, timeout, connection limits, and proxy) work effectively.

@hidaris

hidaris commented Dec 6, 2024

@lizeyan Using this method, are all the APIs consistent with httpx?

I'm not entirely certain, but in my experience, the features in the code—authentication, timeout, connection limit, and proxy—function effectively.

I'm curious about the response API: should I also use a context manager, as with aiohttp?

@tomchristie
Member

What is the current status of the PR's addressing the performance issues? Is there a plan for merging these in?

@RyanMarten - I have been doing some work on this in the background, which I'll share sometime soon. Our connection pooling is a little overcomplicated, and there's some serious refactoring we can dig into here.

(Also, having urllib3 and aiohttp transport classes available as alternates is a really nice property for us to have.)
