
User limit per channel - how to get the response in client side? #396

Closed
xoraingroup opened this issue Oct 6, 2020 · 28 comments

@xoraingroup

Using Centrifugo v2.5.1 (Go version: go1.14.3) on Linux CentOS 7.

Basically, we have restricted users to one connection to a specific channel using the "client_user_connection_limit": 1 parameter.

Everything works fine. We are trying to find a way to handle the connection limit error on the client side.

(screenshot: WebSocket responses in browser dev tools)

We have checked all events; all we get is a disconnect event in the above scenario.

Any help would be appreciated.

Steps to reproduce:

  1. Run the Centrifugo server with "client_user_connection_limit": 1.
  2. Establish a connection from one browser tab.
  3. Duplicate the tab and open another session to Centrifugo.
  4. Open dev tools and check the WS tab for the WebSocket responses.
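For step 1, a minimal config sketch (only the relevant option is shown; the rest of the configuration is omitted):

```json
{
  "client_user_connection_limit": 1
}
```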

How can we handle that error on the client side?

@FZambia
Member

FZambia commented Oct 6, 2020

Hello, maybe this should be a custom Disconnect object from the server instead of an error, so it will be possible to handle it the way you want. I suppose you don't need the client to reconnect after exceeding that limit – right?

One thing to mention is that the user limit works only per Centrifugo node and exists mostly to protect Centrifugo from many connections from a single user, not for business logic limitations – this means that if you scale out – say, run 10 Centrifugo nodes – then a user will be able to create 10 connections (one per node). Are you aware of this?

@FZambia
Member

FZambia commented Oct 6, 2020

Added more details to the doc in 791d41e

@xoraingroup
Author

Yes, that's correct. The client will not reconnect. So how do I handle this?

Hello, maybe this should be a custom Disconnect object from the server instead of an error, so it will be possible to handle it the way you want. I suppose you don't need the client to reconnect after exceeding that limit – right?

OK, this is something new for me, but since we are using one Centrifugo instance, that should be fine. Still, it should be possible to limit the user even across scaled nodes, shouldn't it? For example, by using Redis to check whether the user already has a connection open. Maybe you have more information on this.

One thing to mention is that the user limit works only per Centrifugo node and exists mostly to protect Centrifugo from many connections from a single user, not for business logic limitations – this means that if you scale out – say, run 10 Centrifugo nodes – then a user will be able to create 10 connections (one per node). Are you aware of this?

Thank you for the update; awaiting your reply.

@FZambia
Member

FZambia commented Oct 6, 2020

The client will not reconnect. So how do I handle this?

Since this was not supposed to be used for business scenarios, there is currently no way to handle it. I think it's reasonable to change the error to a Disconnect on the server side – it's not implemented at the moment, but I suppose I can do this for the next release, since a custom disconnect with no reconnect is the desired behavior for exceeding the connection limit in most cases.

It should be possible to limit the user even across scaled nodes, shouldn't it?

There is no such feature at the moment. One possible workaround is using a channel for each user with presence enabled. Always subscribe to it (BTW, it can be server-side). Then use the connect proxy and check whether the user is currently connected and subscribed to that personal channel (using the presence server API). After the check is done, it's possible to return a custom disconnect to the client. This way it will scale with nodes.
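As an illustration only (not Centrifugo code – the `get_presence` callable, the channel naming, and the exact response shape are all assumptions), the proxy-side check described above could look like this:

```python
# Hypothetical sketch of a connect-proxy handler enforcing one connection
# per user via presence. get_presence(channel) is assumed to return a dict
# of client_id -> client info for that channel, similar to the result of
# Centrifugo's presence server API.

def handle_connect(user_id, get_presence):
    """Decide whether to accept a new connection for user_id."""
    personal_channel = "user:" + user_id  # per-user channel with presence on
    existing = get_presence(personal_channel)
    if existing:
        # User already connected somewhere: return a custom disconnect
        # so the client can react to it and skip reconnecting.
        return {"disconnect": {"code": 4500, "reason": "connection limit", "reconnect": False}}
    # Accept, and ask the server to subscribe the user to the personal
    # channel server-side so presence keeps tracking this connection.
    return {"result": {"user": user_id, "channels": [personal_channel]}}
```

Because the decision is based on shared presence state rather than per-node counters, the same check works no matter which node the new connection lands on.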

@FZambia
Member

FZambia commented Oct 6, 2020

Actually, this presence check can live in Centrifugo itself – so you could continue using JWT authentication but still have this behavior, and avoid round trips from Centrifugo to the backend and back for the presence check. It can be part of the personal server-side channel sugar. What do you think?

@xoraingroup
Author

Do you think 100,000 channels / 100,000 users are OK, i.e. each channel with one user?

@xoraingroup
Author

Also, the server-side presence call may be expensive if I cannot filter for a specific user in the query. Think about thousands of users in a channel. What is your opinion?

@xoraingroup
Author

The client will not reconnect. So how do I handle this?

Since this was not supposed to be used for business scenarios, there is currently no way to handle it. I think it's reasonable to change the error to a Disconnect on the server side – it's not implemented at the moment, but I suppose I can do this for the next release, since a custom disconnect with no reconnect is the desired behavior for exceeding the connection limit in most cases.

It should be possible to limit the user even across scaled nodes, shouldn't it?

This one sounds reasonable. Still, it would have been great to get a simple answer from the Centrifugo node even in scaled mode. Also, what is your opinion on this issue?

There is no such feature at the moment. One possible workaround is using a channel for each user with presence enabled. Always subscribe to it (BTW, it can be server-side). Then use the connect proxy and check whether the user is currently connected and subscribed to that personal channel (using the presence server API). After the check is done, it's possible to return a custom disconnect to the client. This way it will scale with nodes.

@FZambia
Member

FZambia commented Oct 7, 2020

Do you think 100,000 channels / 100,000 users are OK, i.e. each channel with one user?

This scales horizontally – so yes, this is fine.

Also, the server-side presence call may be expensive if I cannot filter for a specific user in the query. Think about thousands of users in a channel. What is your opinion?

I suppose this is not relevant, since you will use personal channels with only one possible user in each.

@xoraingroup
Author

One last question: what is the memory footprint of one private channel/user? I just want to know how many channels a server with 128 GB RAM, a 16-core CPU, and a 10 Gbps uplink (for both private and public) can accommodate. Do you have some rough metrics or readings?

@FZambia
Member

FZambia commented Oct 7, 2020

One last question: what is the memory footprint of one private channel/user? I just want to know how many channels a server with 128 GB RAM, a 16-core CPU, and a 10 Gbps uplink

See the FAQ – this question and the one about memory usage. Also see the test stand with a million connections in Kubernetes.

In the case of your server configuration, the first thing you will be limited by is CPU, I suppose.

@xoraingroup
Author

OK, awesome. Thank you for all the information. This can be closed if needed! Your efforts are much appreciated.

@FZambia
Member

FZambia commented Oct 11, 2020

Let's keep it open to track the things discussed.

@FZambia
Member

FZambia commented Oct 22, 2020

Just did an MVP locally limiting the user connection count globally with a personal server-side channel and presence on – it seems to work fine. I am a bit busy at the moment, so this may take some time to finish and release. Hopefully in a couple of weeks.

@xoraingroup
Author

Looking forward to it. Thank you @FZambia

@FZambia
Member

FZambia commented Nov 3, 2020

@xoraingroup hello, had some time to think more about this. I have several concerns about implementing a global limit, so I need answers to the following questions:

  1. What is a use case where you need to hard limit concurrent connections from a single user to 1?
  2. What is a user supposed to do if they left one device (a desktop computer, for example) open at home with the application running, and then try to connect from another device (a mobile phone)? In this case, the user can't easily close the previous connection in order to use your application.
  3. If we add a global connection limit check over presence, then there could be rare cases when a user can still have more than the configured number of connections simultaneously (due to race conditions in presence checks). Is this acceptable?

@xoraingroup
Author

xoraingroup commented Nov 3, 2020

@xoraingroup hello, had some time to think more about this. I have several concerns about implementing a global limit, so I need answers to the following questions:

Thank you for getting back, sure...

  1. What is a use case where you need to hard limit concurrent connections from a single user to 1?

Well, in our case we wanted to restrict each user to a single socket connection to limit resource consumption. The user had a single private token that was used to create a channel (namespace:unique_token) for them, through which the WebSocket connection to Centrifugo was used. Even if the user came from a different IP address, the connection limit based on their channel ID (namespace:unique_token) would have kicked in to stop them from consuming resources with multiple connections.

  2. What is a user supposed to do if they left one device (a desktop computer, for example) open at home with the application running, and then try to connect from another device (a mobile phone)? In this case, the user can't easily close the previous connection in order to use your application.

I think the new connection should override the old one and kick out the old connection.

  3. If we add a global connection limit check over presence, then there could be rare cases when a user can still have more than the configured number of connections simultaneously (due to race conditions in presence checks). Is this acceptable?

A tolerance of 2–3+ connections should be OK, but if the user can spawn tens of connections even with the limit set, then I think it's not useful. Also, how hard is it to control this via Redis using atomic/lock operations?

@FZambia
Member

FZambia commented Nov 3, 2020

I think the new connection should override the old one and kick out the old connection.

Makes sense. In this case, my original idea does not fit well, since it won't help close old connections. Automatic disconnect of old connections is not possible at the moment without more internal work (it's possible to disconnect all connections of the user, but that has a chance of also disconnecting the current connection – so we need a way to exclude it).

Actually, it will also be a bit tricky to keep N connections alive for N > 1, since we additionally need a way to choose which connections to close. While this is still possible, it makes the implementation heavier, as we would have to call full presence on connect and inspect client IDs to choose which ones to disconnect (or which to exclude).

For N = 1 implementation can be lighter (see below).

A tolerance of 2–3+ connections should be OK

I mentioned a pretty rare case – so in most cases things should work just fine. I just wanted to make sure this is done to save resources and is not a business-critical application requirement. Though I am not sure at the moment whether this is still relevant given the new knowledge about the desired disconnect behavior.

Also how hard is it to control via Redis using atomic/lock operations?

Not sure that's possible, since presence and PUB/SUB are done separately at the moment and, moreover, it's not possible to atomically SUBSCRIBE and call presence in Redis, since SUBSCRIBE is a bit special in the Redis protocol.

I need more time to try implementing a disconnection MVP for the N=1 case. Maybe all this case requires is calling the disconnect command with the current client ID excluded.
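A sketch of that selection step for the general N > 1 case (illustrative only – it assumes presence entries carry a connection timestamp, which is an assumption here, not part of Centrifugo's presence payload):

```python
def connections_to_disconnect(presence, current_client_id, limit):
    """Given presence info for a user's personal channel, pick which
    client IDs to disconnect so that at most `limit` connections
    (always including the current one) stay alive.

    presence: dict of client_id -> {"connected_at": <unix timestamp>}
    """
    others = [cid for cid in presence if cid != current_client_id]
    # Close the oldest connections first, keep the newest ones.
    others.sort(key=lambda cid: presence[cid]["connected_at"])
    keep = limit - 1  # one slot is reserved for the current client
    return others[: max(0, len(others) - keep)]
```

For N = 1 the result is simply every client ID except the current one, which matches the lighter "disconnect with current client ID excluded" approach mentioned above.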

@xoraingroup
Author

I think the new connection should override the old one and kick out the old connection.

Makes sense. In this case, my original idea does not fit well, since it won't help close old connections. Automatic disconnect of old connections is not possible at the moment without more internal work (it's possible to disconnect all connections of the user, but that has a chance of also disconnecting the current connection – so we need a way to exclude it).

Actually, it will also be a bit tricky to keep N connections alive for N > 1, since we additionally need a way to choose which connections to close. While this is still possible, it makes the implementation heavier, as we would have to call full presence on connect and inspect client IDs to choose which ones to disconnect (or which to exclude).

For N = 1 implementation can be lighter (see below).

A tolerance of 2–3+ connections should be OK

I mentioned a pretty rare case – so in most cases things should work just fine. I just wanted to make sure this is done to save resources and is not a business-critical application requirement. Though I am not sure at the moment whether this is still relevant given the new knowledge about the desired disconnect behavior.

OK, sounds good.

Also, how hard is it to control this via Redis using atomic/lock operations?

Not sure that's possible, since presence and PUB/SUB are done separately at the moment and, moreover, it's not possible to atomically SUBSCRIBE and call presence in Redis, since SUBSCRIBE is a bit special in the Redis protocol.

I need more time to try implementing a disconnection MVP for the N=1 case. Maybe all this case requires is calling the disconnect command with the current client ID excluded.

OK, it's PUB/SUB based, so of course those command operations will not work. Please take your time, and let me know if I can provide more information.

@FZambia
Member

FZambia commented Nov 7, 2020

@xoraingroup hello, could you please take a look at #400 - will it suit your case?

@xoraingroup
Author

Thank you. If that works in cluster mode, I think this would be the best solution.

@xoraingroup
Author

Is it production-ready, or are you going to make a release from the master branch soon?

I am using the Centrifugo release version instead of the master branch in production and don't want any regression bugs from the master branch to land in our production environment.

@FZambia
Member

FZambia commented Nov 13, 2020

The feature will be part of the v2.8.0 release. Will try to release it in the next couple of weeks. I think the master branch should be stable enough, but there are lots of changes, so you'd better test it before moving to production anyway.

@xoraingroup
Author

OK, perfect. I will test, and if there are issues, I will report them and wait for the v2.8.0 release.

Thank you for the awesome work :)

@FZambia
Member

FZambia commented Nov 16, 2020

v2.8.0 released. @xoraingroup, I will appreciate your feedback as soon as you test this feature. You can still write here; feel free to reopen the issue if there are any problems/bugs.

@FZambia FZambia closed this as completed Nov 16, 2020
@xoraingroup
Author

I will test the feature and will get back to you. Thank you once again.

@livingrockrises

Hi, can anyone help us with the error below?

{"level":"info","client":"573a4996-3671-4c01-80c4-a08ba93cee56","limit":128,"user":"","time":"2022-04-28T16:42:15Z","message":"maximum limit of channels per client reached"}
{"level":"info","client":"573a4996-3671-4c01-80c4-a08ba93cee56","code":106,"command":"id:165  method:SUBSCRIBE  params:\"{\\\"channel\\\":\\\"transaction:0xff6350e51a80467d2b5246684c9544b25542d7006fd586742f072c3c3237a11b\\\"}\"","error":"limit exceeded","reply":"id:165  error:{code:106  message:\"limit exceeded\"}","user":"","time":"2022-04-28T16:42:15Z","message":"client command error"}

@FZambia
Member

FZambia commented Apr 30, 2022

@livingrockrises hello, see https://centrifugal.dev/docs/server/configuration#client_channel_limit

Reaching this limit usually means that you are using an unbounded number of channels per connection – which is not a good thing. See https://centrifugal.dev/docs/faq#what-about-best-practices-with-the-number-of-channels
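If a larger number of channels per connection is really intended, the limit from the log above can be raised in the server config; a sketch (the value is illustrative, the default is 128):

```json
{
  "client_channel_limit": 256
}
```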
