
release-23.1: changefeedccl: webhook sink rewrite #100639

Merged 2 commits into release-23.1 from blathers/backport-release-23.1-99086 on Apr 5, 2023

Conversation

blathers-crl bot commented Apr 4, 2023

Backport 2/2 commits from #99086 on behalf of @samiskin.

/cc @cockroachdb/release


Resolves #84676
Epic: https://cockroachlabs.atlassian.net/browse/CRDB-11356

This PR implements the Webhook sink on top of a more general `batchingSink` framework that makes adding new sinks an easier process, and makes the sink far more performant than it was previously.

A followup PR will be made to use the `batchingSink` for the pubsub client, which also suffers from performance issues.


Sink-specific code is encapsulated in a `SinkClient` interface:

```go
type SinkClient interface {
        MakeResolvedPayload(body []byte, topic string) (SinkPayload, error)
        MakeBatchWriter() BatchWriter
        Flush(context.Context, SinkPayload) error
        Close() error
}

type BatchWriter interface {
        AppendKV(key []byte, value []byte, topic string)
        ShouldFlush() bool
        Close() (SinkPayload, error)
}

type SinkPayload interface{}
```

Once the batch is ready to be flushed, the writer can be `Close()`'d to do any final formatting of the buffered data (e.g. wrapping it in a JSON object with extra metadata) and obtain a final `SinkPayload` that is ready to be passed to `SinkClient.Flush`.

The `SinkClient` has a separate `MakeResolvedPayload` since the sink may require resolved events to be formatted differently from a batch of KVs.

`Flush(ctx, payload)` encapsulates sending a blocking IO request to the sink endpoint, and may be called multiple times with the same payload due to retries. Any kind of formatting work should therefore be done in the writer's `Close` and stored in the `SinkPayload`, so that retried `Flush` calls do not repeat it.
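As a concrete illustration, here is a minimal sketch of a `BatchWriter`-style buffer. Everything beyond the interface shape above is hypothetical (the row-only buffering, the plain JSON-array payload): the point is that final formatting happens exactly once in `Close`, so a retried `Flush` can resend the same finished payload.

```go
package main

import "fmt"

// SinkPayload is an opaque, fully-formatted payload ready to be sent.
type SinkPayload interface{}

// jsonBatchWriter buffers rows whose values are already-encoded JSON.
// (Hypothetical example type; key and topic are ignored for brevity.)
type jsonBatchWriter struct {
	rows    [][]byte
	maxRows int
}

func (w *jsonBatchWriter) AppendKV(key, value []byte, topic string) {
	w.rows = append(w.rows, value)
}

func (w *jsonBatchWriter) ShouldFlush() bool { return len(w.rows) >= w.maxRows }

// Close performs the final formatting exactly once and returns the
// finished payload; retried Flush calls reuse it without re-formatting.
func (w *jsonBatchWriter) Close() (SinkPayload, error) {
	out := []byte("[")
	for i, r := range w.rows {
		if i > 0 {
			out = append(out, ',')
		}
		out = append(out, r...)
	}
	return SinkPayload(append(out, ']')), nil
}

func main() {
	w := &jsonBatchWriter{maxRows: 2}
	w.AppendKV([]byte("k1"), []byte(`{"a":1}`), "t")
	w.AppendKV([]byte("k2"), []byte(`{"b":2}`), "t")
	fmt.Println(w.ShouldFlush()) // true

	payload, _ := w.Close()
	// A retrying Flush would resend these same payload bytes unchanged.
	fmt.Println(string(payload.([]byte))) // [{"a":1},{"b":2}]
}
```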


The `batchingSink` handles all the logic to take a `SinkClient` and form a full `Sink` implementation.

```go
type batchingSink struct {
        client             SinkClient
        ioWorkers          int
        minFlushFrequency  time.Duration
        retryOpts          retry.Options
        eventPool          sync.Pool
        batchPool          sync.Pool
        eventCh            chan interface{}
        pacer              *admission.Pacer
        ...
}

var _ Sink = (*batchingSink)(nil)
```

It involves a single goroutine which handles:

  • Creating, building up, and finalizing BatchWriters to eventually form a SinkPayload to emit
  • Flushing batches when they have persisted longer than a configured minFlushFrequency
  • Flushing deliberately and being able to block until the Flush has completed
  • Logging all the various sink metrics

`EmitRow` calls are thread-safe, so the `safeSink` wrapper is not required for users of this sink.

Events sent through the goroutines would normally need to live on the heap, but to avoid excessive garbage collection of hundreds of thousands of tiny structs, both the `kvEvents{<data from EmitRow>}` events (sent from the `EmitRow` caller to the batching worker) and the `sinkBatchBuffer{<data about the batch>}` events (sent from the batching worker to the IO routine in the next section) are allocated from object pools.
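A minimal sketch of that pooling pattern with `sync.Pool` (the `kvEvent` shape and helper names here are hypothetical, not the sink's actual fields):

```go
package main

import (
	"fmt"
	"sync"
)

// kvEvent stands in for the tiny per-row struct sent to the batching
// worker; pooling it avoids a fresh heap allocation (and the resulting
// GC pressure) for every EmitRow call.
type kvEvent struct {
	key, val []byte
	topic    string
}

var eventPool = sync.Pool{
	New: func() interface{} { return new(kvEvent) },
}

func newKVEvent(key, val []byte, topic string) *kvEvent {
	e := eventPool.Get().(*kvEvent)
	e.key, e.val, e.topic = key, val, topic
	return e
}

// freeKVEvent zeroes the fields (so the pool doesn't pin the byte
// slices) and returns the struct to the pool for reuse.
func freeKVEvent(e *kvEvent) {
	*e = kvEvent{}
	eventPool.Put(e)
}

func main() {
	e := newKVEvent([]byte("k"), []byte("v"), "topic")
	fmt.Println(string(e.key), string(e.val), e.topic) // k v topic
	freeKVEvent(e)
}
```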


For a sink like Cloudstorage, where batches are large, doing the above and flushing the batch payloads one by one on a separate routine is plenty good enough. Unfortunately the Webhook sink can be used with no batching at all, by users wanting the lowest latency while still having good throughput, which means we need to be able to have multiple requests in flight. The difficulty is that if a batch with keys [a1,b1] is in flight, a batch with keys [b2,c1] needs to block until [a1,b1] completes: b2 cannot be sent and risk arriving at the destination prior to b1.

Flushing out payloads in a way that maintains key-ordering guarantees while still running in parallel is done by a separate `parallelIO` struct.

```go
type parallelIO struct {
	retryOpts retry.Options
	ioHandler IOHandler
	requestCh chan IORequest
	resultCh  chan IORequest
	...
}

type IOHandler func(context.Context, IORequest) error

type IORequest interface {
	Keys() intsets.Fast
	SetError(error)
}
```

It involves one goroutine to manage the key ordering guarantees and a configurable number of IO Worker goroutines that simply call ioHandler on an IORequest.

IORequests represent the keys they shouldn't conflict on by providing an `intsets.Fast` struct, which allows for the efficient Union/Intersects/Difference operations that `parallelIO` needs to maintain ordering guarantees.

Requests are received as IORequests and the response is also returned as an IORequest. This way the `parallelIO` struct does not have to do any heap allocations to communicate; its user can manage creating and freeing these objects in pools. The only heap allocations that occur are part of the intset operations, since `intsets.Fast` uses a linked list internally.
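`intsets.Fast` is a CockroachDB-internal package, so the following sketch substitutes a toy 64-bit key set to show the kind of conflict check `parallelIO` performs before dispatching a request; the types and function names are invented for the example.

```go
package main

import "fmt"

// keySet is a toy stand-in for intsets.Fast: a bitmap of (hashed) keys.
type keySet uint64

func (s keySet) Intersects(o keySet) bool { return s&o != 0 }
func (s keySet) Union(o keySet) keySet    { return s | o }

// canSend reports whether a request may be dispatched immediately: it
// must not share any key with the union of in-flight request keys.
func canSend(inFlight, req keySet) bool { return !inFlight.Intersects(req) }

func main() {
	const (
		a = keySet(1 << 0)
		b = keySet(1 << 1)
		c = keySet(1 << 2)
	)
	inFlight := a.Union(b) // batch [a1,b1] is in flight

	fmt.Println(canSend(inFlight, b.Union(c))) // false: [b2,c1] conflicts on b
	fmt.Println(canSend(inFlight, c))          // true: c alone does not conflict
}
```

A request that fails this check is held back until the conflicting in-flight batch's result comes back, preserving per-key ordering while everything non-conflicting proceeds in parallel.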


The webhook sink is therefore formed by:

  1. EmitRow is called, creating kvEvents that are sent to a Batching worker
  2. The batching worker takes events and appends them to a batch
  3. Once the batch is full, it's encoded into an HTTP request
  4. The request object is then sharded across a set of IO workers to be fully sent out in parallel with other non-key-conflicting requests.

With this setup, at high throughputs most of the `batchingSink`/`parallelIO` work barely showed up in the CPU flamegraph; the work was largely in step 3, where taking a list of messages and calling `json.Marshal` on it took almost 10% of the time, specifically a call to `json.Compress`.

Since this isn't needed (we're simply putting a list of already-formatted JSON messages into a surrounding JSON array and a small object), I also swapped `json.Marshal` for manually stitching the bytes together into a buffer.


In the following flamegraph of a node at around 35% CPU usage, only 5.56% of the total CPU time in the graph (three small chunks between the parallelEventConsumer chunk and the kvevent chunk) is taken up by the `parallelIO` and `batchingSink` workers. This is with a batch size of 100.

[Screenshot: CPU flamegraph, 2023-03-20]

The max CPU usage here was around 37%, with a max throughput of 135k for a single node (the other nodes had run out of data at this point). Since the majority of the flamegraph shows time spent in the event processing code, I'm going to assume this will be handled by the pacer and won't be much of an issue.

In the above flamegraph, runtime.gcDrain does show up using 10.55% CPU, but the cloudstorage sink showed around the same value, so I'm guessing there isn't an extra GC thrashing issue. I believe the only non-pool allocations that occur are the intsets.


Since Matt's talked about a new significance being placed on feature flagging new work to avoid the need for technical advisories, I placed this new implementation under the `changefeed.new_webhook_sink_enabled` setting and defaulted it to be disabled.

Right now it's `sink_webhook_v2`, just to keep `sink_webhook.go` unchanged so that this review is easier to do. I may move `sink_webhook` to `deprecated_sink_webhook` and `sink_webhook_v2` to just be `sink_webhook` prior to merging.


Release note (performance improvement): the webhook sink is now able to handle a drastically higher maximum throughput by enabling the "changefeed.new_webhook_sink_enabled" cluster setting.


Release justification: feature flagged and default off, while being a substantial improvement

The existing webhook sink roachtest code was set up to only function
locally.  This change mirrors the cdc-webhook-sink-test-server code to
set up a server that other roachtest nodes can connect to.

Release note: None
Resolves #84676
Epic: https://cockroachlabs.atlassian.net/browse/CRDB-11356

This PR implements the Webhook sink on top of a more general
`batchingSink` framework that makes adding new sinks an easier process,
and makes the sink far more performant than it was previously.

A followup PR will be made to use the `batchingSink` for the pubsub
client, which also suffers from performance issues.

---

Sink-specific code is encapsulated in a `SinkClient` interface:

```go
type SinkClient interface {
        MakeResolvedPayload(body []byte, topic string) (SinkPayload, error)
        MakeBatchBuffer() BatchBuffer
        Flush(context.Context, SinkPayload) error
        Close() error
}

type BatchBuffer interface {
        Append(key []byte, value []byte, topic string)
        ShouldFlush() bool
        Close() (SinkPayload, error)
}

type SinkPayload interface{}
```

Once the batch is ready to be flushed, the buffer can be `Close()`'d to
do any final formatting of the buffered data (e.g. wrapping it in a JSON
object with extra metadata) and obtain a final `SinkPayload` that is
ready to be passed to `SinkClient.Flush`.

The `SinkClient` has a separate `MakeResolvedPayload` since the sink may
require resolved events to be formatted differently from a batch of KVs.

`Flush(ctx, payload)` encapsulates sending a blocking IO request to the
sink endpoint, and may be called multiple times with the same payload
due to retries. Any kind of formatting work should therefore be done in
the buffer's `Close` and stored in the `SinkPayload`, so that retried
`Flush` calls do not repeat it.

---

The `batchingSink` handles all the logic to take a `SinkClient` and form
a full `Sink` implementation.

```go
type batchingSink struct {
        client             SinkClient
        ioWorkers          int
        minFlushFrequency  time.Duration
        retryOpts          retry.Options
        eventCh            chan interface{}
        pacer              *admission.Pacer
        ...
}

var _ Sink = (*batchingSink)(nil)
```

It involves a single goroutine which handles:
- Creating, building up, and finalizing `BatchBuffer`s to eventually
form a `SinkPayload` to emit
- Flushing batches when they have persisted longer than a configured
`minFlushFrequency`
- Flushing deliberately and being able to block until the Flush has completed
- Logging all the various sink metrics

`EmitRow` calls are thread-safe, so the `safeSink` wrapper is not
required for users of this sink.

Events sent through the goroutines would normally need to live on the
heap, but to avoid excessive garbage collection of hundreds of thousands
of tiny structs, both the `kvEvents{<data from EmitRow>}` events (sent from the
EmitRow caller to the batching worker) and the `sinkBatchBuffer{<data
about the batch>}` events (sent from the batching worker to the IO
routine in the next section) are allocated from object pools.

---

For a sink like Cloudstorage, where batches are large, doing the above
and flushing the batch payloads one by one on a separate routine is
plenty good enough. Unfortunately the Webhook sink can be used with no
batching at all, by users wanting the lowest latency while still having
good throughput, which means we need to be able to have multiple
requests in flight. The difficulty is that if a batch with keys [a1,b1]
is in flight, a batch with keys [b2,c1] needs to block until [a1,b1]
completes: b2 cannot be sent and risk arriving at the destination prior
to b1.

Flushing out payloads in a way that maintains key-ordering guarantees
while still running in parallel is done by a separate `parallelIO`
struct.

```go
type parallelIO struct {
	retryOpts retry.Options
	ioHandler IOHandler
	requestCh chan IORequest
	resultCh  chan *ioResult
  ...
}

type IOHandler func(context.Context, IORequest) error

type IORequest interface {
	Keys() intsets.Fast
}

type ioResult struct {
	request IORequest
	err     error
}
```

It involves one goroutine to manage the key ordering guarantees and a
configurable number of IO Worker goroutines that simply call `ioHandler`
on an `IORequest`.

IORequests represent the keys they shouldn't conflict on by providing an
`intsets.Fast` struct, which allows for the efficient
Union/Intersects/Difference operations that `parallelIO` needs to
maintain ordering guarantees.

The request and its error (if one occurred despite the retries) are
returned on resultCh.

---

The webhook sink is therefore formed by:
1. EmitRow is called, creating kvEvents that are sent to a Batching worker
2. The batching worker takes events and appends them to a batch
3. Once the batch is full, it's encoded into an HTTP request
4. The request object is then sharded across a set of IO workers to be
   fully sent out in parallel with other non-key-conflicting requests.

With this setup, at high throughputs most of the
`batchingSink`/`parallelIO` work barely showed up in the CPU flamegraph;
the work was largely in step 3, where taking a list of messages and
calling `json.Marshal` on it took almost 10% of the time, specifically a
call to `json.Compress`.

Since this isn't needed (we're simply putting a list of
already-formatted JSON messages into a surrounding JSON array and a
small object), I also swapped `json.Marshal` for manually stitching the
bytes together into a buffer.

---

Since Matt's talked about a new significance being placed on feature
flagging new work to avoid the need for technical advisories, I placed
this new implementation under the changefeed.new_webhook_sink_enabled
setting and defaulted it to be disabled.

---

Release note (performance improvement): the webhook sink is now able to
handle a drastically higher maximum throughput by enabling the
"changefeed.new_webhook_sink_enabled" cluster setting.
@blathers-crl blathers-crl bot requested a review from a team April 4, 2023 20:27
@blathers-crl blathers-crl bot force-pushed the blathers/backport-release-23.1-99086 branch from 415c8fb to 0a08140 April 4, 2023 20:27
@blathers-crl blathers-crl bot requested review from a team as code owners April 4, 2023 20:27
@blathers-crl blathers-crl bot requested review from smg260 and renatolabs and removed request for a team April 4, 2023 20:27
@blathers-crl blathers-crl bot force-pushed the blathers/backport-release-23.1-99086 branch 2 times, most recently from 0942512 to f09cb0e April 4, 2023 20:27
@blathers-crl (Author)

blathers-crl bot commented Apr 4, 2023

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.
If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria are satisfied within.
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters.
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user that doesn’t know & care about this backport, has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?

@blathers-crl blathers-crl bot requested review from shermanCRL and removed request for a team April 4, 2023 20:27
@blathers-crl blathers-crl bot added blathers-backport This is a backport that Blathers created automatically. O-robot Originated from a bot. labels Apr 4, 2023
@cockroach-teamcity (Member) commented:

This change is Reviewable

@samiskin samiskin merged commit e847e43 into release-23.1 Apr 5, 2023
@samiskin samiskin deleted the blathers/backport-release-23.1-99086 branch April 5, 2023 01:41
@Hades32

Hades32 commented Aug 4, 2023

this setting seems to be missing docs at https://www.cockroachlabs.com/docs/v23.1/cluster-settings

Labels
blathers-backport This is a backport that Blathers created automatically. O-robot Originated from a bot.