I'm trying to use compress/flate and compress/gzip in HTTP middleware from gorilla/handlers: https://github.com/gorilla/handlers/blob/master/compress.go

The payloads sent/received are relatively small, 300-500 bytes. When I profile my code I see:

```
(pprof) top
Showing nodes accounting for 2920.49MB, 95.58% of 3055.68MB total
Dropped 223 nodes (cum <= 15.28MB)
Showing top 10 nodes out of 76
      flat  flat%   sum%        cum   cum%
 1298.35MB 42.49% 42.49%  2358.66MB 77.19%  compress/flate.NewWriter
  658.62MB 21.55% 64.04%  1060.31MB 34.70%  compress/flate.(*compressor).init
  451.80MB 14.79% 78.83%   451.80MB 14.79%  regexp.(*bitState).reset
  391.68MB 12.82% 91.65%   391.68MB 12.82%  compress/flate.newDeflateFast
```

Is it possible to minimize memory allocations/usage for this kind of payload?
Yes: reuse the writers and call `Reset(io.Writer)` to re-initialize them.

If you store your used `*Writer`, either in a channel or in a `sync.Pool`, you can re-use it.
Also, after #107 was merged the fastest modes take significantly less space.
Here is an example using a pool: https://github.com/xi2/httpgzip/blob/master/httpgzip.go#L106
thank you!