
compress/flate: how to optimize memory allocations #141

Closed
vtolstov opened this issue Jul 17, 2019 · 2 comments
vtolstov commented Jul 17, 2019

I'm trying to use compress/flate and compress/gzip in the http compression middleware
from gorilla/handlers:
https://github.com/gorilla/handlers/blob/master/compress.go

The payloads sent/received are relatively small, 300-500 bytes.
When I profile my code I see:

(pprof) top
Showing nodes accounting for 2920.49MB, 95.58% of 3055.68MB total
Dropped 223 nodes (cum <= 15.28MB)
Showing top 10 nodes out of 76
      flat  flat%   sum%        cum   cum%
 1298.35MB 42.49% 42.49%  2358.66MB 77.19%  compress/flate.NewWriter
  658.62MB 21.55% 64.04%  1060.31MB 34.70%  compress/flate.(*compressor).init
  451.80MB 14.79% 78.83%   451.80MB 14.79%  regexp.(*bitState).reset
  391.68MB 12.82% 91.65%   391.68MB 12.82%  compress/flate.newDeflateFast

Is it possible to minimize memory allocations/usage for this kind of payload?

klauspost (Owner) commented Jul 17, 2019

Yes, by reusing the writers and using Reset(io.Writer) to re-initialize them.

If you store your used *Writer, either in a channel or in a sync.Pool, you can re-use it.

Also, after #107 was merged the fastest modes take significantly less space.

Here is an example using a pool: https://github.com/xi2/httpgzip/blob/master/httpgzip.go#L106
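
For illustration, here is a minimal sketch of that pattern using the standard-library compress/flate API, separate from the linked httpgzip code. The handler, pool, and wrapper names are made up for this example; the point is only that the *flate.Writer is taken from a sync.Pool, pointed at the current response with Reset, and put back after use instead of calling NewWriter per request.

```go
package main

import (
	"compress/flate"
	"net/http"
	"sync"
)

// Pool of reusable flate writers; NewWriter is only called when the pool is empty.
var flateWriterPool = sync.Pool{
	New: func() interface{} {
		// The destination is nil here because the writer is Reset onto the
		// real ResponseWriter before any data is written. BestSpeed is an
		// illustrative choice of level.
		w, _ := flate.NewWriter(nil, flate.BestSpeed)
		return w
	},
}

// flateResponseWriter routes the response body through the pooled flate.Writer.
type flateResponseWriter struct {
	http.ResponseWriter
	fw *flate.Writer
}

func (w *flateResponseWriter) Write(p []byte) (int, error) {
	return w.fw.Write(p)
}

// compressHandler is a sketch of a middleware that compresses responses with
// a pooled writer instead of allocating a new one per request.
func compressHandler(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fw := flateWriterPool.Get().(*flate.Writer)
		fw.Reset(w) // re-initialize the pooled writer for this response
		defer func() {
			fw.Close() // flush any buffered data
			flateWriterPool.Put(fw)
		}()

		w.Header().Set("Content-Encoding", "deflate")
		next.ServeHTTP(&flateResponseWriter{ResponseWriter: w, fw: fw})
	})
}
```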

vtolstov (Author) commented

Thank you!
