[GOBBLIN-1934] Monitor High Level Consumer queue size #3805
Conversation
I agree w/ the intent behind this metric(!), but do wish to consider the mechanics of how we collect values for it
gobblin-runtime/src/main/java/org/apache/gobblin/runtime/kafka/HighLevelConsumer.java (resolved)
// Increment queue size metric
updateQueueSizes(idx);
this is a pretty fundamental change to put into the fast path! we gotta be certain that no critical use is adversely affected by perf regression.
I see the ConcurrentLinkedQueue::size javadoc saying O(n), but I can't find a statement on LinkedBlockingQueue::size... (it's possible it IS O(1), but an approximation... or it may be exact, but at the expense of coordination overhead.)
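One way to sidestep the `size()` cost question entirely (a sketch for illustration, not what this PR does) is to maintain an explicit depth counter updated at both enqueue and dequeue, e.g. with a `LongAdder`, so the metric never walks the queue. All class and method names below are hypothetical.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.LongAdder;

/**
 * Illustrative sketch: track queue depth with a LongAdder so reading the
 * metric never traverses the queue. Names here are hypothetical.
 */
public class CountedQueue<T> {
    private final LinkedBlockingQueue<T> queue = new LinkedBlockingQueue<>();
    private final LongAdder depth = new LongAdder(); // cheap under contention

    public void enqueue(T item) {
        queue.add(item);     // non-blocking insert (unbounded queue)
        depth.increment();   // O(1); no queue traversal
    }

    public T dequeue() {
        T item = queue.poll();
        if (item != null) {
            depth.decrement(); // keep the counter in sync on the way out too
        }
        return item;
    }

    /** Depth for a metrics gauge; may briefly lag under concurrent updates. */
    public long depth() {
        return depth.sum();
    }
}
```

The tradeoff is exactly the one raised below: the counter must be touched on every enqueue and dequeue, whereas `LinkedBlockingQueue` already maintains an internal count.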
Alternatively, I was thinking we could have a background thread update the size every minute (or some other interval), which would move this out of the fast path; or we could call this method from an async thread.
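The background-sampling idea above could look roughly like the sketch below; the class name, the `sampleOnce` hook, and the interval are assumptions for illustration, not code from this PR.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLongArray;

/**
 * Illustrative sketch: sample each queue's size on a background thread
 * instead of in the consume fast path. Names are hypothetical.
 */
public class QueueSizeSampler {
    private final BlockingQueue<?>[] queues;
    private final AtomicLongArray sampledSizes; // read by the metric gauges
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    public QueueSizeSampler(BlockingQueue<?>[] queues) {
        this.queues = queues;
        this.sampledSizes = new AtomicLongArray(queues.length);
    }

    /** One sampling pass; runs off the consume hot path. */
    public void sampleOnce() {
        for (int i = 0; i < queues.length; i++) {
            sampledSizes.set(i, queues[i].size());
        }
    }

    public void start(long period, TimeUnit unit) {
        scheduler.scheduleAtFixedRate(this::sampleOnce, 0, period, unit);
    }

    public long sampledSize(int idx) {
        return sampledSizes.get(idx);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

This trades metric freshness for keeping any `size()` cost entirely off the enqueue/dequeue path.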
int finalI = i;
this.queueSizeGauges[i] = this.metricContext.newContextAwareGauge(
    RuntimeMetrics.GOBBLIN_KAFKA_HIGH_LEVEL_CONSUMER_QUEUE_SIZE + "-" + i,
    () -> queueSizeGaugeValues.get(finalI));
w/o a comment to explain the rationale, I'm left to wonder: why couldn't this be written `() -> queues[finalI].size()`? that would dispense w/ the need for queueSizeGaugeValues, and with it the potential perf hit of updating that array upon every enqueuing. and on that topic... why is updateQueueSizes only recalculated at enqueue time, rather than at dequeue time as well?
updating to use `queues[finalI].size()` to avoid having to call a function to explicitly update this
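A minimal sketch of the approach the thread settles on: each gauge closes over its own queue and reads `size()` lazily, so no side array needs updating on the hot path. `java.util.function.Supplier` stands in here for Gobblin's `ContextAwareGauge`, and the helper names are hypothetical.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Supplier;

/**
 * Sketch: one lazy gauge per queue, each closing over its queue and
 * reporting size() on demand. Supplier stands in for ContextAwareGauge.
 */
public class QueueGauges {
    public static Supplier<Integer>[] createGauges(BlockingQueue<?>[] queues) {
        @SuppressWarnings("unchecked")
        Supplier<Integer>[] gauges = (Supplier<Integer>[]) new Supplier[queues.length];
        for (int i = 0; i < queues.length; i++) {
            int finalI = i; // the lambda requires an effectively final index
            gauges[i] = () -> queues[finalI].size(); // fresh value on every read
        }
        return gauges;
    }
}
```

Because the gauge is pull-based, the cost of `size()` is only paid when the metric reporter actually reads it, not on every enqueue or dequeue.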
gobblin-runtime/src/main/java/org/apache/gobblin/runtime/kafka/HighLevelConsumer.java (outdated)
gobblin-runtime/src/main/java/org/apache/gobblin/runtime/metrics/RuntimeMetrics.java (outdated)
gobblin-runtime/src/main/java/org/apache/gobblin/runtime/kafka/HighLevelConsumer.java (outdated)
Force-pushed from dd206e2 to 1187b4a
gobblin-runtime/src/main/java/org/apache/gobblin/runtime/kafka/HighLevelConsumer.java (outdated)
gobblin-runtime/src/main/java/org/apache/gobblin/runtime/kafka/HighLevelConsumer.java (outdated)
super.messagesRead = this.getMetricContext().counter(RuntimeMetrics.DAG_ACTION_STORE_MONITOR_PREFIX + "." + RuntimeMetrics.GOBBLIN_KAFKA_HIGH_LEVEL_CONSUMER_MESSAGES_READ);
super.queueSizeGauges = new ContextAwareGauge[super.numThreads];
for (int i = 0; i < numThreads; i++) {
  // An 'effectively' final variable is needed inside the lambda expression below
  int finalI = i;
  this.queueSizeGauges[i] = this.getMetricContext().newContextAwareGauge(
      RuntimeMetrics.GOBBLIN_KAFKA_HIGH_LEVEL_CONSUMER_QUEUE_SIZE_PREFIX + "-" + i,
      () -> super.queues[finalI].size());
}
this may not be clicking for me... but why can't this all be replaced by `super.createMetrics()`?
Oh oops, I overwrote the important piece of the code — I need to add the gobblin.service and dagActionMonitor prefixes to actually report this metric and be able to distinguish it from any other classes that use HighLevelConsumer.
...service/src/main/java/org/apache/gobblin/service/monitoring/DagActionStoreChangeMonitor.java (outdated)
@@ -130,7 +129,7 @@ public HighLevelConsumer(String topic, Config config, int numThreads) {
     this.consumerExecutor = Executors.newSingleThreadScheduledExecutor(ExecutorsUtils.newThreadFactory(Optional.of(log), Optional.of("HighLevelConsumerThread")));
     this.queueExecutor = Executors.newFixedThreadPool(this.numThreads, ExecutorsUtils.newThreadFactory(Optional.of(log), Optional.of("QueueProcessor-%d")));
     this.queues = new LinkedBlockingQueue[numThreads];
-    for(int i=0; i<queues.length; i++) {
+    for(int i = 0; i<queues.length; i++) {
now we have a spacing issue w/ `i < queues.length`
good callout — this was actually the original code, but I'm adding the spacing.
// An 'effectively' final variable is needed inside the lambda expression below
int finalI = i;
this.queueSizeGauges[i] = this.getMetricContext().newContextAwareGauge(
    RuntimeMetrics.DAG_ACTION_STORE_MONITOR_PREFIX + "." +
couldn't we just define a method `protected String getMetricsPrefix()` and have `super.createMetrics()` call that to make this minor adjustment, rather than basically copying and pasting the def w/ only a slight change?
Made the changes you described above
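The template-method refactor suggested above might look like the sketch below: the base class builds metric names through an overridable prefix hook, so a subclass only overrides the hook instead of duplicating `createMetrics()`. The class names and prefix string here are hypothetical, not Gobblin's actual code.

```java
/**
 * Sketch of the template-method refactor: the base class constructs metric
 * names via an overridable prefix hook. Names are hypothetical.
 */
class BaseConsumer {
    protected String messagesReadMetricName;

    /** Subclasses override this to qualify their metrics. */
    protected String getMetricsPrefix() {
        return ""; // base class: no prefix
    }

    /** Single definition of metric creation, shared by all subclasses. */
    protected void createMetrics() {
        messagesReadMetricName = getMetricsPrefix() + "messagesRead";
    }
}

class DagActionMonitor extends BaseConsumer {
    @Override
    protected String getMetricsPrefix() {
        return "gobblin.service.dagActionStoreMonitor.";
    }
}
```

The subclass gets distinctly named metrics while the creation logic lives in exactly one place.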
Codecov Report
@@ Coverage Diff @@
## master #3805 +/- ##
============================================
- Coverage 47.36% 47.34% -0.02%
- Complexity 10989 10990 +1
============================================
Files 2155 2155
Lines 85228 85236 +8
Branches 9478 9479 +1
============================================
- Hits 40372 40359 -13
- Misses 41195 41215 +20
- Partials 3661 3662 +1
... and 5 files with indirect coverage changes
looks great! (one small nit)
this.messagesRead = this.metricContext.counter(prefix +
    RuntimeMetrics.GOBBLIN_KAFKA_HIGH_LEVEL_CONSUMER_MESSAGES_READ);
this.queueSizeGauges = new ContextAwareGauge[numThreads];
for (int i=0; i < numThreads; i++) {
oops, one more `i=0` (spaces)
perfect!
* Emit metrics to monitor high level consumer queue size
* Empty commit to trigger tests
* Use BlockingQueue.size() func instead of atomic integer array
* Remove unused import & add DagActionChangeMonitor prefix to metric
* Refactor to avoid repeating code
* Make protected variables private where possible
* Fix white space

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>
Dear Gobblin maintainers,
Please accept this PR. I understand that it will not be reviewed until I have checked off all the steps below!
JIRA
Description
We suspect that the HighLevelConsumer class may be dropping Kafka messages when the thread for a particular consumption queue is interrupted or stalled. Before changing how these queues are handled, we want to verify that blocked or dropped queues are the cause of the missed messages, by emitting metrics about each queue's size.
Tests
Metrics added to be verified by monitoring
Commits