
RFC 7692 context-takeover implementation #342

Open
wants to merge 70 commits into base: main

Conversation

smith-30

@smith-30 smith-30 commented Feb 7, 2018

related #339

misu and others added 30 commits January 18, 2018 01:41
@smith-30
Author

I prepared a branch with the commits squashed and organized, so I can swap it in if necessary. The branch was created with:

$ git merge --squash feature/context-takeover

@JordanP
Contributor

JordanP commented May 22, 2018

Could you make a benchmark with and without CompressionContextTakeOver and compare the memory allocations/memory usage? I ran some tests internally and noticed that memory usage was multiplied by 10. So I am not sure; maybe there's something wrong with my bench, or a memory leak in the code...

doc.go Outdated
@@ -171,10 +171,17 @@
//
// conn.EnableWriteCompression(false)
//
-// Currently this package does not support compression with "context takeover".
+// Currently this package support compression with "context takeover".
Pluralize support.

Author


thanks!

@theckman

theckman commented Aug 5, 2018

@smith-30 have you had the opportunity to look at the comment from @JordanP about the order of magnitude of the memory increase?

@smith-30
Author

smith-30 commented Aug 6, 2018

@theckman I'm sorry for the late reply, I missed your message. Thank you for your comment.

First, @JordanP pointed out an increase in memory usage, but since it has not happened with my product, I am not sure yet how to write test code that reproduces it. For reference, the benchmark environment I tried had 3 API servers and 180 clients, with each client sending at a very high frequency.

I think it will take 1-2 weeks for me to respond to that.

After that, I will also fix the comments. Thank you for the review.

@JordanP
Contributor

JordanP commented Aug 6, 2018

On my side, my workload is different: tens of thousands of clients, each sending one WebSocket message every 5 seconds (i.e. not frequently).

@smith-30
Author

smith-30 commented Aug 6, 2018

In this implementation, each connection keeps a dictionary of up to 32 KB that is shared between client and server. Therefore, using context takeover will increase memory usage.
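Back-of-the-envelope arithmetic for that: with max_window_bits=15 (the RFC 7692 default and maximum), each direction retains a 2^15-byte LZ77 sliding window per connection, so a rough lower bound on retained state scales linearly with connection count. A minimal sketch, assuming one window each for inbound and outbound state:

```go
package main

import "fmt"

func main() {
	const maxWindowBits = 15     // RFC 7692 default/maximum
	window := 1 << maxWindowBits // 32 KiB sliding window
	perConn := 2 * window        // one window each for reading and writing (rough lower bound)

	for _, conns := range []int{1_000, 10_000, 100_000} {
		fmt.Printf("%7d connections: at least %4d MiB of retained window state\n",
			conns, conns*perConn>>20)
	}
}
```

Real flate implementations add hash tables and buffers on top of the window itself, so actual usage is higher; this only shows why memory grows with connection count rather than staying flat as it does with a shared pool.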

@hexdigest

Hi everyone!

Is there a chance to see this PR in master?
Benchmarks show no performance degradation:

Master:

goos: linux
goarch: amd64
BenchmarkWriteNoCompression-4     	 5000000	       331 ns/op	      48 B/op	       1 allocs/op
BenchmarkWriteWithCompression-4   	  300000	      4942 ns/op	     164 B/op	       3 allocs/op
BenchmarkBroadcast/NoCompression-4         	     200	   9503267 ns/op	       0 B/op	       0 allocs/op
BenchmarkBroadcast/WithCompression-4       	      50	  30607582 ns/op	 1602198 B/op	   30010 allocs/op
BenchmarkBroadcast/NoCompressionPrepared-4 	     200	  10139983 ns/op	     167 B/op	       1 allocs/op
BenchmarkBroadcast/WithCompressionPrepared-4         	     100	  10455902 ns/op	   12451 B/op	       3 allocs/op

PR:

BenchmarkWriteNoCompression-4                      	 5000000	       325 ns/op	      48 B/op	       1 allocs/op
BenchmarkWriteWithCompression-4                    	  300000	      4921 ns/op	     164 B/op	       3 allocs/op
BenchmarkWriteWithCompressionOfContextTakeover-4   	  300000	      4806 ns/op	      60 B/op	       2 allocs/op
BenchmarkBroadcast/NoCompression-4                 	     200	   9663945 ns/op	       0 B/op	       0 allocs/op
BenchmarkBroadcast/WithCompression-4               	      50	  29305592 ns/op	 1529686 B/op	   30006 allocs/op
BenchmarkBroadcast/NoCompressionPrepared-4         	     200	  10326534 ns/op	     166 B/op	       1 allocs/op
BenchmarkBroadcast/WithCompressionPrepared-4       	     100	  10340785 ns/op	   12396 B/op	       3 allocs/op

Since this feature is optional why not have it in master?

@JordanP
Contributor

JordanP commented Mar 25, 2019

@hexdigest great that you have interest in this too. Could you share some numbers about your workload? With compression enabled, how much more memory does your app need? What about the CPU cost: did you measure the impact with/without this?

@hexdigest

@JordanP I don't think these are legit questions. The two modes, with context takeover and with no context takeover, serve different purposes. In my case, there are a few thousand long-lived clients that send ~1 message per second, where every message is more or less the same small payload (around 200 bytes) with very few differences. For my case, having context takeover is crucial because we're paying for the traffic.

I understand that the current compression implementation in the master branch uses a pool to minimize the overhead of memory allocations. This approach is especially good for applications with hundreds of thousands of short-lived connections where each client rarely sends big chunks of random data. Unfortunately, this is not my or @smith-30's use case.

Per message CPU costs are shown here:

BenchmarkWriteWithCompression-4                    	  300000	      4921 ns/op	     164 B/op	       3 allocs/op
BenchmarkWriteWithCompressionOfContextTakeover-4   	  300000	      4806 ns/op	      60 B/op	       2 allocs/op

With the same compression level, context takeover has less CPU impact per message because there are fewer allocations.
Of course, there are hidden costs in allocating/freeing memory for the flate reader/writer for each connection, because the pool can't be used in context takeover mode. But we're ready to pay this price in exchange for 10x-20x better compression. Considering that this feature is optional and disabled by default, why not?

@verebes

verebes commented Dec 5, 2021

@smith-30 & @elithrar, what is the reason this PR has not yet been merged? How can I help to finish it?

@smith-30
Author

smith-30 commented Dec 6, 2021

@verebes I think it's because there is no maintainer to make the merge decision.
#370

Member

@jaitaiwan jaitaiwan left a comment


@AlexVulaj I think this is worthy of inclusion in the next feature release, barring any other PRs that tackle this better. I have one query about the PR, but otherwise LGTM.

req.Header["Sec-WebSocket-Extensions"] = []string{"permessage-deflate; server_no_context_takeover; client_no_context_takeover"}
switch {
case d.EnableCompression && d.AllowClientContextTakeover:
req.Header.Set("Sec-Websocket-Extensions", "permessage-deflate; server_max_window_bits=15; client_max_window_bits=15")
Member


What's the significance of the bit size that's been chosen here?
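For context, RFC 7692 allows server_max_window_bits and client_max_window_bits to range from 8 to 15; 15 means a 2^15 = 32,768-byte LZ77 window, the largest (and best-compressing) size deflate supports, which is why it appears as the offered value here. A minimal, hypothetical parser for such an offer (parseDeflateParams is not part of the PR, just an illustration of the header shape):

```go
package main

import (
	"fmt"
	"strings"
)

// parseDeflateParams extracts the permessage-deflate parameters from a
// Sec-WebSocket-Extensions value. Valueless parameters such as
// server_no_context_takeover map to an empty string; a non-deflate
// offer yields nil.
func parseDeflateParams(header string) map[string]string {
	parts := strings.Split(header, ";")
	if strings.TrimSpace(parts[0]) != "permessage-deflate" {
		return nil
	}
	params := map[string]string{}
	for _, p := range parts[1:] {
		kv := strings.SplitN(strings.TrimSpace(p), "=", 2)
		if len(kv) == 2 {
			params[kv[0]] = kv[1]
		} else {
			params[kv[0]] = ""
		}
	}
	return params
}

func main() {
	h := "permessage-deflate; server_max_window_bits=15; client_max_window_bits=15"
	fmt.Println(parseDeflateParams(h))
}
```

A smaller value (e.g. 10, a 1 KiB window) would trade compression ratio for lower per-connection memory, which is the knob the memory-usage discussion above is implicitly about.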

Projects
Status: 👀 In review