mcs: reorganize cluster start and stop process #7155
Changes from 2 commits
@@ -47,6 +47,7 @@ type Cluster struct {
     checkMembershipCh chan struct{}
     apiServerLeader   atomic.Value
     clusterID         uint64
+    running           atomic.Bool
 }

 const regionLabelGCInterval = time.Hour
@@ -215,6 +216,9 @@ func (c *Cluster) updateScheduler() {
     // Make sure the check will be triggered once later.
     notifier <- struct{}{}
     c.persistConfig.SetSchedulersUpdatingNotifier(notifier)
+    ticker := time.NewTicker(time.Second)
+    defer ticker.Stop()
+
     for {
         select {
         case <-c.ctx.Done():
@@ -224,6 +228,18 @@ func (c *Cluster) updateScheduler() {
             // This is triggered by the watcher when the schedulers are updated.
         }

+        if !c.running.Load() {
+            select {
+            case <-c.ctx.Done():
+                log.Info("cluster is closing, stop listening the schedulers updating notifier")
+                return
+            case <-ticker.C:
+                // retry
+                notifier <- struct{}{}
+                continue
+            }
+        }
+
         log.Info("schedulers updating notifier is triggered, try to update the scheduler")
         var (
             schedulersController = c.coordinator.GetSchedulersController()

Review thread on this hunk:
- If the server is stopped here, is there a data race?
- I think it is the same as the current PD.
- In other words, is it possible to hit a data race when adding a scheduler and the coordinator's wait happen at the same time?
- I think so, but the possibility is much smaller than before.
- Another way: we can check the cluster status before adding a scheduler every time.
- But there is still a gap between checking the status and adding the scheduler; if the server is stopped after the status check and before the scheduler is added, a data race is still possible.
- The problem is the way we use the wait group for the scheduler controller, not the wait group itself.
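To make the race the thread describes concrete, here is a minimal, self-contained sketch. All names (miniController, AddScheduler, Stop) are hypothetical and only mirror the shape of the discussion; this is not the PR's code. The point it illustrates: checking a status flag before every scheduler addition still leaves a gap between the check and the wait-group Add, unless both happen under the same lock that the stop path takes.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// miniController stands in for a scheduler controller: a status flag,
// a quit channel for running schedulers, and a WaitGroup tracking them.
type miniController struct {
	mu      sync.Mutex
	running bool
	quit    chan struct{}
	wg      sync.WaitGroup
}

func newMiniController() *miniController {
	return &miniController{running: true, quit: make(chan struct{})}
}

// AddScheduler does the status check and the wg.Add under the same lock
// that Stop takes, so "check, then add" cannot interleave with Stop.
func (c *miniController) AddScheduler(name string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if !c.running {
		return false
	}
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		<-c.quit // a real scheduler would do work until stopped
		fmt.Println(name, "stopped")
	}()
	return true
}

// Stop flips the flag under the lock, then waits. Because AddScheduler
// holds the same lock, no new wg.Add can race with this wg.Wait, which is
// the wait-group misuse the last comment in the thread points at.
func (c *miniController) Stop() {
	c.mu.Lock()
	c.running = false
	close(c.quit)
	c.mu.Unlock()
	c.wg.Wait()
}

func main() {
	c := newMiniController()
	fmt.Println("added:", c.AddScheduler("balance-leader-scheduler"))
	time.Sleep(10 * time.Millisecond)
	c.Stop()
	fmt.Println("added after stop:", c.AddScheduler("balance-region-scheduler"))
}
```

The reviewers' suggested "check the status every time" reduces the window but does not remove it on its own; coupling the check and the Add to the same lock (or an equivalent ordering guarantee) is what closes it.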
@@ -394,15 +410,29 @@ func (c *Cluster) runUpdateStoreStats() {
     }
 }

+// runCoordinator runs the main scheduling loop.
+func (c *Cluster) runCoordinator() {
+    defer logutil.LogPanic()
+    defer c.wg.Done()
+    c.coordinator.RunUntilStop()
+}
+
 // StartBackgroundJobs starts background jobs.
 func (c *Cluster) StartBackgroundJobs() {
-    c.wg.Add(2)
+    c.wg.Add(3)
     go c.updateScheduler()
     go c.runUpdateStoreStats()
+    go c.runCoordinator()
+    c.running.Store(true)
 }

 // StopBackgroundJobs stops background jobs.
 func (c *Cluster) StopBackgroundJobs() {
+    if !c.running.Load() {
+        return
+    }
+    c.running.Store(false)
+    c.coordinator.Stop()
     c.cancel()
     c.wg.Wait()
 }
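For readers less familiar with this start/stop pattern, the sketch below reduces the hunk to its lifecycle pieces: an atomic.Bool running flag, a cancelable context, and a WaitGroup whose count matches the number of spawned goroutines. It is a standalone toy under those assumptions, not the PR's Cluster type.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// toyCluster mirrors the lifecycle pieces touched by this hunk.
type toyCluster struct {
	ctx     context.Context
	cancel  context.CancelFunc
	wg      sync.WaitGroup
	running atomic.Bool
}

func newToyCluster() *toyCluster {
	ctx, cancel := context.WithCancel(context.Background())
	return &toyCluster{ctx: ctx, cancel: cancel}
}

func (c *toyCluster) loop(name string) {
	defer c.wg.Done()
	ticker := time.NewTicker(50 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-c.ctx.Done():
			fmt.Println(name, "exiting")
			return
		case <-ticker.C:
			// background work would happen here
		}
	}
}

// StartBackgroundJobs must Add exactly as many as it spawns, which is why
// the diff bumps wg.Add(2) to wg.Add(3) when runCoordinator is added.
func (c *toyCluster) StartBackgroundJobs() {
	c.wg.Add(2)
	go c.loop("updateScheduler")
	go c.loop("runUpdateStoreStats")
	c.running.Store(true)
}

// StopBackgroundJobs is a no-op unless the jobs are running; otherwise it
// flips the flag, cancels the context, and waits for every goroutine.
func (c *toyCluster) StopBackgroundJobs() {
	if !c.running.Load() {
		return
	}
	c.running.Store(false)
	c.cancel()
	c.wg.Wait()
}

func main() {
	c := newToyCluster()
	c.StartBackgroundJobs()
	time.Sleep(120 * time.Millisecond)
	c.StopBackgroundJobs() // returns only after both loops exit
	c.StopBackgroundJobs() // safe to call again: running is already false
	fmt.Println("all background jobs stopped")
}
```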
Review comment: Is it possible we have a deadlock here? Since the length of the channel is only 1, if the scheduler config watcher has just sent on it, the send here could block.
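For illustration only (this is not code from the PR): with a notifier channel of capacity 1, an unconditional send blocks when the watcher has already filled the slot, while a select with a default branch drops the redundant nudge instead of blocking. Whether dropping it is acceptable depends on how the notifier is consumed; for a level-triggered "please re-check" signal, a pending item already conveys the same information.

```go
package main

import "fmt"

func main() {
	notifier := make(chan struct{}, 1)

	// The watcher has just signalled, so the single buffer slot is full.
	notifier <- struct{}{}

	// A second unconditional send would block forever here:
	// notifier <- struct{}{}

	// Non-blocking variant: if a signal is already pending, skip the send.
	select {
	case notifier <- struct{}{}:
		fmt.Println("sent a new notification")
	default:
		fmt.Println("notification already pending; skipped the send")
	}
}
```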