
implement distributed lock with redis cluster #284

Merged: 17 commits into mosn:main, Nov 11, 2021

Conversation

whalesongAndLittleFish
Contributor

What this PR does:

Adds a Redis cluster lock (a Redlock implementation).

Which issue(s) this PR fixes:

Fixes #249

@mosn-community-bot

Hi @whalesongAndLittleFish, welcome to the mosn community. Please sign the Contributor License Agreement!

After you sign the CLA, we will automatically sync the status of this pull request within 3 minutes.

@seeflood
Member

Thanks for your contribution!

@whalesongAndLittleFish
Contributor Author

@seeflood Hi, in my implementation the locking and unlocking across nodes run concurrently. Should I switch to acquiring the locks serially instead? If both approaches are acceptable, should the choice between serial and concurrent be made configurable?

@ZLBer
Member

ZLBer commented Oct 27, 2021

@whalesongAndLittleFish What advantage would serial locking have?

@whalesongAndLittleFish
Contributor Author

@whalesongAndLittleFish What advantage would serial locking have?

The official documentation describes acquiring the lock on each node one after another, but I haven't seen an explanation of why it must be sequential. When I implemented it, I felt the serial approach would hurt efficiency, so I did it concurrently. However, a colleague pointed out today that under heavy request load the concurrent approach may create too many goroutines in a short time, so I'd like to ask whether a parameter should decide which approach is used in different scenarios. (Not sure whether my reasoning is right; any guidance is appreciated (⊙﹏⊙))
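
To make the approach being discussed concrete, here is a minimal sketch of concurrent Redlock acquisition, not the PR's actual code (that lives in components/lock/redis/cluster_redis_lock.go); it assumes the go-redis v8 client, and tryRedLockConcurrently and its parameters are illustrative names:

```go
package lock

import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"
)

// tryRedLockConcurrently fires one goroutine per node (the unbounded variant
// debated above), counts successes, and applies the Redlock rule that a
// majority of nodes must be locked within the lock's validity window.
func tryRedLockConcurrently(clients []*redis.Client, resourceID, lockOwner string, expire time.Duration) bool {
	start := time.Now()
	results := make(chan bool, len(clients))
	for _, c := range clients {
		go func(c *redis.Client) {
			// SET resourceID lockOwner NX PX expire against a single node.
			ok, err := c.SetNX(context.Background(), resourceID, lockOwner, expire).Result()
			results <- err == nil && ok
		}(c)
	}
	succeeded := 0
	for range clients {
		if <-results {
			succeeded++
		}
	}
	// Majority quorum, and the whole round must finish before the lock expires.
	return succeeded >= len(clients)/2+1 && time.Since(start) < expire
}
```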

@seeflood
Member

@whalesongAndLittleFish If you're worried about creating a large number of goroutines in a short time, you can optimize with a goroutine pool; for example, pkg/filter/network/tcpcopy/persistence/work_pool.go does its processing through one, though that code is not a general-purpose pool.
If you change the algorithm, the key concern besides performance is correctness. I haven't looked at Redlock in a long time and have forgotten the details; please check correctness first, and I'll review it again later.

@whalesongAndLittleFish
Contributor Author

@whalesongAndLittleFish If you're worried about creating a large number of goroutines in a short time, you can optimize with a goroutine pool; for example, pkg/filter/network/tcpcopy/persistence/work_pool.go does its processing through one, though that code is not a general-purpose pool. If you change the algorithm, the key concern besides performance is correctness. I haven't looked at Redlock in a long time and have forgotten the details; please check correctness first, and I'll review it again later.

Okay, I'll verify it again~

@nejisama
Contributor

@whalesongAndLittleFish If you're worried about creating a large number of goroutines in a short time, you can optimize with a goroutine pool; for example, pkg/filter/network/tcpcopy/persistence/work_pool.go does its processing through one, though that code is not a general-purpose pool. If you change the algorithm, the key concern besides performance is correctness. I haven't looked at Redlock in a long time and have forgotten the details; please check correctness first, and I'll review it again later.

Don't use this tcpcopy code as a performance-optimization reference; its current performance isn't great. Look for other goroutine-pool implementations in Go instead, e.g. https://github.com/mosn/mosn/blob/master/pkg/sync/workerpool.go

@seeflood
Member

Right, I only used pkg/filter/network/tcpcopy/persistence/work_pool.go as an example; don't use it directly~
There are plenty of general-purpose goroutine-pool libraries, for instance the ones below (I haven't used them), or look at the one in mosn that @nejisama posted:

https://github.com/gammazero/workerpool
https://github.com/ivpusic/grpool
https://github.com/panjf2000/ants
https://github.com/Jeffail/tunny
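
For illustration only, a minimal usage sketch of one of the listed pools (panjf2000/ants), based on its documented Pool API; the task body is a stand-in, not anything from this PR:

```go
package main

import (
	"fmt"
	"sync"

	"github.com/panjf2000/ants/v2"
)

func main() {
	// Cap concurrency at 10 workers instead of one goroutine per request.
	pool, err := ants.NewPool(10)
	if err != nil {
		panic(err)
	}
	defer pool.Release()

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		i := i
		wg.Add(1)
		_ = pool.Submit(func() {
			defer wg.Done()
			fmt.Println("task", i) // stand-in for a per-node lock attempt
		})
	}
	wg.Wait()
}
```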

@whalesongAndLittleFish
Contributor Author

Right, I only used pkg/filter/network/tcpcopy/persistence/work_pool.go as an example; don't use it directly~ There are plenty of general-purpose goroutine-pool libraries, for instance the ones below (I haven't used them), or look at the one in mosn that @nejisama posted:

https://github.com/gammazero/workerpool https://github.com/ivpusic/grpool https://github.com/panjf2000/ants https://github.com/Jeffail/tunny

Okay~ I'll take a look

@whalesongAndLittleFish
Contributor Author

whalesongAndLittleFish commented Nov 1, 2021

Right, I only used pkg/filter/network/tcpcopy/persistence/work_pool.go as an example; don't use it directly~ There are plenty of general-purpose goroutine-pool libraries, for instance the ones below (I haven't used them), or look at the one in mosn that @nejisama posted:

https://github.com/gammazero/workerpool https://github.com/ivpusic/grpool https://github.com/panjf2000/ants https://github.com/Jeffail/tunny

Hi, in the new commit I tried using the goroutine pool from https://github.com/mosn/mosn/blob/master/pkg/sync/workerpool.go recommended by @nejisama to control concurrency. Comparing against the original algorithm, I haven't found any correctness issues so far. The original algorithm locks the nodes one by one and accumulates the elapsed time, whereas the concurrent version measures the total time for all acquisitions to complete; I don't believe that introduces any additional correctness problems. All other steps are identical; only the way tasks are executed and results are collected differs.
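
A generic sketch of what this comment describes, assuming a semaphore-bounded pool in place of mosn's workerpool (whose exact API is not shown here); tryLockFns stands in for the per-node lock attempts, and the validity check uses the total elapsed time for the whole round rather than per-node accumulated time:

```go
package lock

import "time"

// tryLockWithBoundedWorkers runs each per-node lock attempt through a bounded
// number of workers and measures the total elapsed time for the whole round:
// the same steps as the serial algorithm, only task execution and result
// collection differ.
func tryLockWithBoundedWorkers(tryLockFns []func() bool, maxWorkers int) (succeeded int, elapsed time.Duration) {
	start := time.Now()
	sem := make(chan struct{}, maxWorkers) // bounds concurrent goroutines
	results := make(chan bool, len(tryLockFns))
	for _, tryLock := range tryLockFns {
		tryLock := tryLock
		sem <- struct{}{} // blocks once maxWorkers tasks are in flight
		go func() {
			defer func() { <-sem }()
			results <- tryLock()
		}()
	}
	for range tryLockFns {
		if <-results {
			succeeded++
		}
	}
	return succeeded, time.Since(start)
}
```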

@codecov

codecov bot commented Nov 4, 2021

Codecov Report

Merging #284 (aaa7c65) into main (6abb5ec) will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##             main     #284   +/-   ##
=======================================
  Coverage   56.65%   56.65%           
=======================================
  Files          48       48           
  Lines        2058     2058           
=======================================
  Hits         1166     1166           
  Misses        769      769           
  Partials      123      123           

Last update 6abb5ec...aaa7c65.

@seeflood
Member

seeflood commented Nov 4, 2021

@ZLBer @stulzq Could you help review this PR when you have time?~

@ZLBer
Member

ZLBer commented Nov 5, 2021

@whalesongAndLittleFish I have a question: since both 4 nodes and 5 nodes require 3 successful node locks for the overall lock to succeed, why not maintain the smaller set of 4 nodes?

@whalesongAndLittleFish
Contributor Author

@whalesongAndLittleFish I have a question: since both 4 nodes and 5 nodes require 3 successful node locks for the overall lock to succeed, why not maintain the smaller set of 4 nodes?

With 5 nodes you can tolerate two nodes going down; with 4 you can only tolerate one.

@ZLBer
Member

ZLBer commented Nov 5, 2021

@whalesongAndLittleFish Does it have to be limited to 5 or more nodes? What about 3? Even 1 would cover the single-node case.

@whalesongAndLittleFish
Contributor Author

@whalesongAndLittleFish Does it have to be limited to 5 or more nodes? What about 3? Even 1 would cover the single-node case.

Three nodes would work too, but like the four-node case it can only tolerate one node failing, so five is still best. As for one node, I think you could just use the single-node lock that's already implemented rather than this cluster scheme.

@whalesongAndLittleFish
Contributor Author

@whalesongAndLittleFish Does it have to be limited to 5 or more nodes? What about 3? Even 1 would cover the single-node case.

My implementation doesn't hard-require five or more nodes; in principle the node count is unrestricted. The success rule is always a majority: locking succeeds once more than half of the nodes are locked. So it can indeed be used that way.
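
To make the arithmetic in this exchange concrete, a small illustrative sketch of the majority rule (hypothetical helpers, not the PR's code):

```go
package lock

// quorum returns how many nodes must lock successfully: more than half.
func quorum(n int) int { return n/2 + 1 }

// faultTolerance returns how many node failures the lock can survive.
func faultTolerance(n int) int { return (n - 1) / 2 }

// quorum(5) == 3, faultTolerance(5) == 2  // 5 nodes tolerate two failures
// quorum(4) == 3, faultTolerance(4) == 1  // same quorum as 5, less tolerance
// quorum(3) == 2, faultTolerance(3) == 1
// quorum(1) == 1, faultTolerance(1) == 0  // degenerates to the single-node lock
```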

Review threads on components/lock/redis/cluster_redis_lock.go (resolved)
@ZLBer
Member

ZLBer commented Nov 5, 2021

Three nodes would work too, but like the four-node case it can only tolerate one node failing, so five is still best. As for one node, I think you could just use the single-node lock that's already implemented rather than this cluster scheme.

So I'm wondering whether the two schemes could be unified? I do see a check for five nodes in the code, though.

@whalesongAndLittleFish
Contributor Author

Three nodes would work too, but like the four-node case it can only tolerate one node failing, so five is still best. As for one node, I think you could just use the single-node lock that's already implemented rather than this cluster scheme.

So I'm wondering whether the two schemes could be unified? I do see a check for five nodes in the code, though.

Ah, thanks for the reminder, I'll change that. Unifying the two schemes probably needs some discussion with the others; let me fix this issue first.

@whalesongAndLittleFish
Contributor Author

Three nodes would work too, but like the four-node case it can only tolerate one node failing, so five is still best. As for one node, I think you could just use the single-node lock that's already implemented rather than this cluster scheme.

So I'm wondering whether the two schemes could be unified? I do see a check for five nodes in the code, though.

Ah, thanks for the reminder, I'll change that. Unifying the two schemes probably needs some discussion with the others; let me fix this issue first.

Fixed.

ZLBer
ZLBer previously approved these changes Nov 5, 2021
Member

@ZLBer ZLBer left a comment

LGTM

@whalesongAndLittleFish
Contributor Author

LGTM

Thanks for the review~ Much appreciated (ง •̀o•́)ง

stulzq
stulzq previously approved these changes Nov 5, 2021
Member

@stulzq stulzq left a comment

LGTM

Review thread on components/lock/redis/cluster_redis_lock.go (resolved)
@seeflood seeflood dismissed stale reviews from stulzq and ZLBer via cfbc104 November 10, 2021 04:42
WhaleSong added 3 commits November 10, 2021 17:25
Member

@seeflood seeflood left a comment

LGTM

@seeflood seeflood merged commit 8c0d0f9 into mosn:main Nov 11, 2021