Redis 7.0 Streams - use Lag instead of pending #3127

Closed

Jouse82 opened this issue Jun 6, 2022 · 17 comments

@Jouse82

Jouse82 commented Jun 6, 2022

The existing Redis Streams scaler uses XPENDING, which can give misleading information: XPENDING only reports messages that have been read (delivered) but not yet acknowledged, so it may not reflect the real backlog of the stream.

Redis 7.0 introduces a new metric, "lag", which reports the number of entries a consumer group has not read yet. It is exposed per group via:

XINFO GROUPS <stream>

This should be made available to the Redis Streams scaler as an alternative to XPENDING.
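
Purely for illustration, a minimal sketch of the difference between the two numbers, assuming go-redis v9; the address and the mystream / mygroup names are made up, and this is not the scaler's actual code:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // hypothetical address

	// XPENDING only counts entries that were delivered to a consumer but not
	// yet acknowledged; entries nobody has read yet are invisible to it.
	pending, err := rdb.XPending(ctx, "mystream", "mygroup").Result()
	if err != nil {
		panic(err)
	}

	// On Redis >= 7.0, XINFO GROUPS also reports "lag": the number of entries
	// in the stream that the group has not read yet.
	groups, err := rdb.XInfoGroups(ctx, "mystream").Result()
	if err != nil {
		panic(err)
	}
	for _, g := range groups {
		if g.Name == "mygroup" {
			fmt.Printf("pending=%d lag=%d\n", pending.Count, g.Lag)
		}
	}
}
```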

@JorTurFer
Member

Hey @Jouse82
This sounds totally legit. My only concern is whether this change can be made while keeping backward compatibility somehow, or whether we will need to create (another) Redis scaler 🤔

BTW, are you willing to contribute this?

@tomkerkhove
Member

We should have the capability to configure which approach to use, given this is only available in Redis 7.

@KomTak001

backward compatibility

Prerequisites: the threshold for the stream and consumer group is obtained from the YAML.

If XINFO GROUPS [streamName] reports a lag:

threshold <= lag

Then scale

If there is no lag (for example on Redis < 7.0):
Get the group's last-delivered-id, fetch the entry list after it and count the entries:

XRANGE [streamName] [last-delivered-id of the specific group] + COUNT [threshold]

and, if

[number of entries returned] >= threshold

Then scale

I think this will maintain backward compatibility to some extent, how about this?
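
This is not the actual KEDA scaler code, just a rough Go sketch of the flow described above, assuming go-redis v9. The useLag flag stands in for the kind of configuration option @tomkerkhove mentions, and the "(" exclusive-start prefix (available since Redis 6.2) is my own addition so the last delivered entry is not counted twice:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

// streamMetric sketches the flow above: prefer the group's lag on Redis >= 7.0
// (useLag), otherwise count at most `threshold` entries sitting after the
// group's last-delivered-id.
func streamMetric(ctx context.Context, rdb *redis.Client, stream, group string, threshold int64, useLag bool) (int64, error) {
	groups, err := rdb.XInfoGroups(ctx, stream).Result()
	if err != nil {
		return 0, err
	}
	for _, g := range groups {
		if g.Name != group {
			continue
		}
		if useLag {
			// "lag" as reported by XINFO GROUPS on Redis >= 7.0.
			return g.Lag, nil
		}
		// Fallback: XRANGE from just after last-delivered-id to the end ("+"),
		// capped at `threshold` so we never fetch more entries than needed.
		msgs, err := rdb.XRangeN(ctx, stream, "("+g.LastDeliveredID, "+", threshold).Result()
		if err != nil {
			return 0, err
		}
		return int64(len(msgs)), nil
	}
	return 0, fmt.Errorf("consumer group %q not found on stream %q", group, stream)
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // hypothetical address
	value, err := streamMetric(context.Background(), rdb, "mystream", "mygroup", 5, true)
	fmt.Println(value, err) // scale out when value reaches the threshold
}
```

Either branch then feeds the same comparison against the threshold taken from the YAML.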

@JorTurFer
Member

I'm not an expert on Redis but it sounds good

@stale

stale bot commented Aug 31, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale All issues that are marked as stale due to inactivity label Aug 31, 2022
@stale

stale bot commented Sep 7, 2022

This issue has been automatically closed due to inactivity.

@stale stale bot closed this as completed Sep 7, 2022
@v-shenoy v-shenoy reopened this Sep 8, 2022
@stale stale bot removed the stale All issues that are marked as stale due to inactivity label Sep 8, 2022
@stale

stale bot commented Nov 7, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale All issues that are marked as stale due to inactivity label Nov 7, 2022
@sergio-ferreira-jscrambler

sergio-ferreira-jscrambler commented Nov 7, 2022

(quoting @KomTak001's backward-compatibility proposal above)

I'm worried about the performance hit this brings with the XRANGE command, which iterates over all entries from the last-delivered-id to the last entry (+).

My team is using Redis 6 in production, and since we don't get the lag value directly from Redis, we're using a script that does pretty much the same as your proposal to compute the lag: it iterates with XRANGE from the last-delivered-id to the last message. However, we noticed that under high load, and depending on the COUNT value, this iteration can block Redis for a few seconds; we've had test cases where it blocked Redis for up to 15 seconds.
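
For what it's worth, one way to keep each individual command cheap is to page through the range in small batches instead of issuing a single XRANGE with a large COUNT. This is only a sketch under the same go-redis v9 assumption as above; batchSize is a made-up tuning knob, and the total work is still proportional to the lag, it just avoids one long-running command:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

// countAfter counts entries after lastDeliveredID, up to `limit`, in small
// XRANGE batches so that no single command keeps Redis busy for long.
func countAfter(ctx context.Context, rdb *redis.Client, stream, lastDeliveredID string, limit, batchSize int64) (int64, error) {
	var total int64
	start := "(" + lastDeliveredID // exclusive start (Redis >= 6.2)
	for total < limit {
		msgs, err := rdb.XRangeN(ctx, stream, start, "+", batchSize).Result()
		if err != nil {
			return 0, err
		}
		total += int64(len(msgs))
		if int64(len(msgs)) < batchSize {
			break // reached the end of the stream
		}
		start = "(" + msgs[len(msgs)-1].ID // resume after the last entry seen
	}
	if total > limit {
		total = limit
	}
	return total, nil
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // hypothetical address
	n, err := countAfter(context.Background(), rdb, "mystream", "1-0", 1000, 100)
	fmt.Println(n, err)
}
```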

@stale stale bot removed the stale All issues that are marked as stale due to inactivity label Nov 7, 2022
@JorTurFer JorTurFer added the stale-bot-ignore All issues that should not be automatically closed by our stale bot label Dec 30, 2022
@JorTurFer JorTurFer removed the stale-bot-ignore All issues that should not be automatically closed by our stale bot label Jan 3, 2023
@stale

stale bot commented Mar 4, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale All issues that are marked as stale due to inactivity label Mar 4, 2023
@stale

stale bot commented Mar 11, 2023

This issue has been automatically closed due to inactivity.

@krishnadurai

krishnadurai commented Jun 1, 2023

@JorTurFer can you please check the PR by my colleague @mikelam-us that addresses this feature? We are both willing to maintain the Redis implementations:

#4592

@zroubalik zroubalik reopened this Jun 1, 2023
@stale stale bot removed the stale All issues that are marked as stale due to inactivity label Jun 1, 2023
@zroubalik
Member

Thanks @krishnadurai & @mikelam-us !

@zroubalik
Member

Implemented in 2.11

@elpablete

Is this available to use in "redis-streams" (docs 2.13)? I only see it in the docs for "redis-cluster-streams" (docs 2.13).

@JorTurFer
Member

Based on the e2e tests I'd say yes; the docs just haven't been properly updated.

@elpablete

Can I help update the docs?

@JorTurFer
Member

Thanks for the proposal! ❤️
I did it yesterday: kedacore/keda-docs#1304
