feat(blooms): Add bloom planner and bloom builder to backend target #13997
Conversation
@@ -43,8 +43,8 @@ and querying the bloom filters that only pays off at large scale deployments.
{{< /admonition >}}

To start building and using blooms you need to:
- Deploy the [Bloom Planner and Builder](#bloom-planner-and-builder) components and enable the component in the [Bloom Build config][bloom-build-cfg].
- Deploy the [Bloom Gateway](#bloom-gateway) component (as a [microservice][microservices] or via the [SSD][ssd] Backend target) and enable the component in the [Bloom Gateway config][bloom-gateway-cfg].
- Deploy the [Bloom Planner and Builder](#bloom-planner-and-builder) components (as a [microservic][microservices] or via the [SSD][ssd] `backend` target) and enable the components in the [Bloom Build config][bloom-build-cfg].
Nit: microservice
pkg/bloombuild/planner/planner.go (Outdated)
if err := p.runOne(ctx); err != nil {
	level.Error(p.logger).Log("msg", "bloom build iteration failed for the first time", "err", err)
}
// run once at beginning, but deplay by 1m to allow ring consolidation when running in SSD mode
nit: deplay -> delay
pkg/bloombuild/planner/planner.go (Outdated)
@@ -901,6 +934,11 @@ func (p *Planner) BuilderLoop(builder protos.PlannerForBuilder_BuilderLoopServer

	builderID := resp.GetBuilderID()
	logger := log.With(p.logger, "builder", builderID)

	if !p.isLeader() {
		return fmt.Errorf("planner is not leader")
Should this be `errPlannerIsNotLeader`?
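For context, the suggestion is to return a predefined sentinel error instead of constructing a new one with `fmt.Errorf`, so callers can match on it. Below is a minimal sketch of that pattern; the names are assumed from the review comment and the diff above, not taken from the actual `pkg/bloombuild/planner` code.

```go
package planner

import "errors"

// errPlannerIsNotLeader signals that this planner replica does not currently
// own the leader key and therefore must not serve builder connections.
// (Assumed name; the real definition may differ.)
var errPlannerIsNotLeader = errors.New("planner is not leader")

// Planner is a stand-in for the real planner type; only the leadership flag
// matters for this sketch.
type Planner struct {
	leader bool
}

func (p *Planner) isLeader() bool { return p.leader }

// BuilderLoop sketches the guard clause from the diff above, returning the
// sentinel instead of a freshly formatted error.
func (p *Planner) BuilderLoop() error {
	if !p.isLeader() {
		return errPlannerIsNotLeader
	}
	// ... hand out tasks to the connected builder ...
	return nil
}
```

The benefit is that callers and tests can check the condition with `errors.Is(err, errPlannerIsNotLeader)` instead of comparing error strings.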
if err := p.runOne(ctx); err != nil {
	level.Error(p.logger).Log("msg", "bloom build iteration failed for the first time", "err", err)
}
// run once at beginning, but delay by 1m to allow ring consolidation when running in SSD mode
I think this ticker could be replaced with a simpler `time.Sleep` at the beginning of the function if the ringWatcher is not nil.
`time.Sleep`, `time.After`, and `time.NewTimer` do essentially the same, but since we already have a `select` loop, I think it's cleaner to avoid `time.Sleep`.
To me, a sleep is easier to understand as I don't need to think about which select will trigger earlier. But I don't have a strong opinion on this. Approved.
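To make the trade-off concrete, here is a minimal sketch of the select-based variant the thread settles on: the first iteration is delayed to give the ring time to consolidate in SSD mode, and later iterations run on the ticker. The durations, the plain `log` error handling, and the `runOne` signature are placeholders, not the planner's actual values.

```go
package planner

import (
	"context"
	"log"
	"time"
)

// runLoop delays the first build iteration (to allow ring consolidation when
// running in SSD mode) and then keeps running on a fixed interval, all inside
// a single select loop so no extra time.Sleep is needed.
func runLoop(ctx context.Context, runOne func(context.Context) error) {
	initial := time.After(time.Minute)          // placeholder for the 1m start-up delay
	ticker := time.NewTicker(15 * time.Minute)  // placeholder build interval
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-initial:
			initial = nil // a nil channel blocks forever, so this case fires only once
			if err := runOne(ctx); err != nil {
				log.Println("bloom build iteration failed for the first time:", err)
			}
		case <-ticker.C:
			if err := runOne(ctx); err != nil {
				log.Println("bloom build iteration failed:", err)
			}
		}
	}
}
```

A `time.Sleep` at the top of the function behaves the same here; the select form just keeps cancellation via `ctx.Done()` responsive during the initial delay.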
…#13997)

Previously, the bloom compactor component was part of the `backend` target in the Simple Scalable Deployment (SSD) mode. However, the bloom compactor was removed (#13969) in favour of planner and builder, and therefore also removed from the backend target. This PR adds the planner and builder components to the backend target so it can continue building blooms if enabled.

The planner needs to be run as singleton, therefore there must only be one instance that creates tasks for the builders, even if multiple replicas of the backend target are deployed. This is achieved by leader election through the already existing index gateway ring in the backend target. The planner leader is determined by the ownership of the leader key. Builders connect to the planner leader to pull tasks.

----

Signed-off-by: Christian Haudum <christian.haudum@gmail.com>
(cherry picked from commit bf60455)
What this PR does / why we need it:
Previously, the bloom compactor component was part of the `backend` target in the Simple Scalable Deployment (SSD) mode. However, the bloom compactor was removed (#13969) in favour of planner and builder, and therefore also removed from the backend target.

This PR adds the planner and builder components to the backend target so it can continue building blooms if enabled.
Special notes for your reviewer:
The planner needs to run as a singleton, so there must only be one instance that creates tasks for the builders, even if multiple replicas of the backend target are deployed.
This is achieved by leader election through the already existing index gateway ring in the backend target. The planner leader is determined by the ownership of the leader key. Builders connect to the planner leader to pull tasks.
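As a rough illustration of that leader election, the sketch below hashes a fixed leader key onto the ring and treats the replica that owns it as the planner leader. The `ring` interface, the key value, and the method names are simplified stand-ins for illustration only, not the dskit ring API or the actual Loki implementation.

```go
package planner

// ring is a simplified stand-in for the index gateway ring: it can tell us
// which instance owns a given key (token).
type ring interface {
	OwnerOf(key uint32) (addr string, err error)
}

// leaderKey is a hypothetical fixed key; whichever instance owns its token on
// the ring is the planner leader.
const leaderKey uint32 = 0xbeef

type planner struct {
	ring     ring
	selfAddr string // this replica's own address as registered in the ring
}

// isLeader reports whether this backend replica owns the leader key and may
// therefore create tasks and accept builder connections. All other replicas
// reject builders, which retry until they reach the leader.
func (p *planner) isLeader() (bool, error) {
	owner, err := p.ring.OwnerOf(leaderKey)
	if err != nil {
		return false, err
	}
	return owner == p.selfAddr, nil
}
```

Because ring membership changes as backend replicas come and go, leadership can move between replicas; the 1m start-up delay discussed earlier gives the ring time to settle before the first build iteration.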
Checklist
- Reviewed the `CONTRIBUTING.md` guide (required)
- `feat` PRs are unlikely to be accepted unless a case can be made for the feature actually being a bug fix to existing behavior.
- Changes that require user attention or interaction to upgrade are documented in `docs/sources/setup/upgrade/_index.md`
- For Helm chart changes, bump the chart version in `production/helm/loki/Chart.yaml` and update `production/helm/loki/CHANGELOG.md` and `production/helm/loki/README.md`. Example PR
- If the change is deprecating or removing a configuration option, update the `deprecated-config.yaml` and `deleted-config.yaml` files respectively in the `tools/deprecated-config-checker` directory. Example PR