
Add framework and unit tests for DagActionStoreChangeMonitor #3817

Merged: 8 commits merged into apache:master on Nov 8, 2023

Conversation

@umustafi (Contributor) commented Nov 1, 2023

Dear Gobblin maintainers,

Please accept this PR. I understand that it will not be reviewed until I have checked off all the steps below!

JIRA

Description

  • Here are some details about my PR, including screenshots (if applicable):
    Adds a basic unit testing framework for the DagActionStoreChangeMonitor.processMessage() function, which processes launch, kill, and resume flow action events. This is a crucial piece of the flow launch path, so it needs to be resilient to malformed message types and handle them without interrupting the thread processing the queue.

Tests

  • My PR adds the following unit tests OR does not need testing for this extremely good reason:
    Tests call processMessage on the HEARTBEAT, INSERT, UPDATE, and DELETE operation types. The main functionality we seek to test is that heartbeat-type messages, and messages with null dag action types, are handled without throwing a NullPointerException (a rough sketch of this case follows below). Beyond that, the unit tests are rudimentary: all related classes are mocked, so their states and the actual result of processing the message are not validated at the moment.
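    As a concrete illustration of the null-handling case described above, here is a minimal sketch of one such test. It assumes the MockDagActionStoreChangeMonitor and the wrapDagActionStoreChangeEvent helper discussed later in this PR; the identifiers are illustrative, not necessarily the exact code being merged.

    @Test
    public void testProcessMessageWithHeartbeat() {
      // HEARTBEAT records carry no dag action, so this exercises the null dag-action path;
      // the test passes as long as processMessage returns without a NullPointerException.
      Kafka09ConsumerClient.Kafka09ConsumerRecord consumerRecord =
          wrapDagActionStoreChangeEvent(OperationType.HEARTBEAT, FLOW_GROUP, FLOW_NAME, FLOW_EXECUTION_ID, null);
      mockDagActionStoreChangeMonitor.processMessage(consumerRecord);
    }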

Commits

  • My commits all reference JIRA issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
    1. Subject is separated from body by a blank line
    2. Subject is limited to 50 characters
    3. Subject does not end with a period
    4. Subject uses the imperative mood ("add", not "adding")
    5. Body wraps at 72 characters
    6. Body explains "what" and "why", not "how"

@umustafi (Contributor, Author) commented Nov 1, 2023

Checks running on my fork: https://github.com/umustafi/gobblin/pull/8/checks


@Override
protected void processMessage(DecodeableKafkaRecord record) {
super.processMessage(record);
Contributor:
Why override when all the method does is call super's method? Same for startUp.

@umustafi (Contributor, Author) commented Nov 2, 2023:

The methods have protected access but are in a different module, so we override them only to be able to access them. These classes are in this module so they can use the Kafka 0.9 version.

Contributor:
Looks better now. Maybe we do not need the super call there.

public void testProcessMessageWithInsert() {
Kafka09ConsumerClient.Kafka09ConsumerRecord consumerRecord =
wrapDagActionStoreChangeEvent(OperationType.INSERT, FLOW_GROUP, FLOW_NAME, FLOW_EXECUTION_ID, DagActionValue.LAUNCH);
mockDagActionStoreChangeMonitor.processMessage(consumerRecord);
Contributor:
Do we just need to test that processMessage returns without any errors, or can we verify anything else too? I haven't looked at everything processMessage does.

Contributor (Author):

I want to verify more state (e.g., check meter counts), but for simplicity I mocked the metric context, DagManager, etc. Processing mostly extracts properties and hands them over to the DagManager. I can take a look to see if we can verify more functionality, but I capped it given the return on the level of effort of building more test infrastructure.

Contributor (Author):

Added a few more verifications that check which mocked classes are called by processMessage. The method mostly parses the information and passes it on to the DagManager for kill, resume, or launch, so I verified which methods are called.

* types, even with undesired formats, without throwing exceptions.
*/
@Slf4j
public class DagActionStoreChangeMonitorTest {
Contributor:
Should we test a negative scenario where the Kafka message is too big? I have seen that error often. We need to decide what to do with those messages and handle them gracefully.

Contributor (Author):

These messages all contain only a few fields, so I don't expect this case, but I can test it if I find a way to create one. What error would a too-large message cause, and how could we test it?

Contributor:
I think this error would occur on the producer side, not on the consumer side.

* Tests the main functionality of {@link DagActionStoreChangeMonitor} to process {@link DagActionStoreChangeEvent} type
* events stored in a {@link org.apache.gobblin.kafka.client.KafkaConsumerRecord}. The
* processMessage(DecodeableKafkaRecord message) function should be able to gracefully process a variety of message
* types, even with undesired formats, without throwing exceptions.
Contributor:
Can we have a test that creates records with an undesired format or some other kind of bad data?

Contributor:
+1

Contributor:
+1

public MockDagActionStoreChangeMonitor(String topic, Config config, int numThreads,
boolean isMultiActiveSchedulerEnabled) {
super(topic, config, mock(DagActionStore.class), mock(DagManager.class), numThreads, mock(FlowCatalog.class),
mock(Orchestrator.class), isMultiActiveSchedulerEnabled);
Contributor:
Since we are mocking the DagManager and Orchestrator, it is possible for us to actually test side effects after each processMessage call by checking which function is called and asserting on it. See https://stackoverflow.com/questions/9841623/mockito-how-to-verify-method-was-called-on-an-object-created-within-a-method

Contributor (Author):

Added verification calls to each test case to ensure the right method is called depending on the input (a rough sketch of this style of verification follows below).
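A minimal sketch of that kind of verification, assuming Mockito mocks for the DagManager are reachable from the test; the mock field name and the handleKillFlowRequest method verified here are assumptions for illustration, not necessarily the exact identifiers in this PR.

    @Test
    public void testProcessMessageWithKillVerifiesDagManager() {
      Kafka09ConsumerClient.Kafka09ConsumerRecord consumerRecord =
          wrapDagActionStoreChangeEvent(OperationType.INSERT, FLOW_GROUP, FLOW_NAME, FLOW_EXECUTION_ID, DagActionValue.KILL);
      mockDagActionStoreChangeMonitor.processMessage(consumerRecord);
      // Assert the side effect on the mocked collaborator rather than on internal state:
      // a KILL dag action should be handed over to the DagManager exactly once.
      Mockito.verify(mockDagManager, Mockito.times(1))
          .handleKillFlowRequest(FLOW_GROUP, FLOW_NAME, Long.parseLong(FLOW_EXECUTION_ID));
    }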

@codecov-commenter commented Nov 2, 2023

Codecov Report

Merging #3817 (e6b87d0) into master (6b6b70c) will decrease coverage by 0.02%.
Report is 2 commits behind head on master.
The diff coverage is 40.00%.

@@             Coverage Diff              @@
##             master    #3817      +/-   ##
============================================
- Coverage     47.56%   47.55%   -0.02%     
- Complexity    11025    11028       +3     
============================================
  Files          2156     2156              
  Lines         85339    85341       +2     
  Branches       9487     9487              
============================================
- Hits          40593    40583      -10     
- Misses        41045    41064      +19     
+ Partials       3701     3694       -7     
Files Coverage Δ
.../java/org/apache/gobblin/runtime/api/FlowSpec.java 43.47% <100.00%> (ø)
...ervice/monitoring/DagActionStoreChangeMonitor.java 0.00% <0.00%> (ø)

... and 12 files with indirect coverage changes


@umustafi (Contributor, Author) commented Nov 3, 2023

ElasticsearchTransportClientWriterTest appears flaky; it passes locally.

@umustafi (Contributor, Author) commented Nov 7, 2023

Tests passing on my own fork: https://github.com/umustafi/gobblin/pull/10/checks

@Will-Lo merged commit 5339332 into apache:master on Nov 8, 2023. 6 checks passed.
@Will-Lo added a commit to Will-Lo/incubator-gobblin that referenced this pull request on Dec 20, 2023:
* [GOBBLIN-1921] Properly handle reminder events (apache#3790)

* Add millisecond level precision to timestamp cols & proper timezone conversion

	- existing tests pass with minor modifications

* Handle reminder events properly

* Fix compilation errors & add isReminder flag

* Add unit tests

* Address review comments

* Add newline to address comment

* Include reminder/original tag in logging

* Clarify timezone issues in comment

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1924] Reminder event flag true (apache#3795)

* Set reminder event flag to true for reminders

* Update unit tests

* remove unused variable

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1923] Add retention for lease arbiter table (apache#3792)

* Add retention for lease arbiter table

* Replace blocking thread with scheduled thread pool executor

* Make Calendar instance thread-safe

* Rename variables, make values more clear

* Update timestamp related cols

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* Change debug statements to info temporarily to debug (apache#3796)

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1926] Fix Reminder Event Epsilon Comparison (apache#3797)

* Fix Reminder Event Epsilon Comparison

* Add TODO comment

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1930] Improve Multi-active related logs and metrics (apache#3800)

* Improve Multi-active related logs and metrics

* Add more metrics and logs around forwarding dag action to DagManager

* Improve logs in response to review comments

* Replace flow execution id with trigger timestamp from multi-active

* Update flow action execution id within lease arbiter

* Fix test & make Lease Statuses more lean

* Update javadoc

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* add dataset root in some common type of datasets

* [GOBBLIN-1927] Add topic validation support in KafkaSource, and add TopicNameValidator (apache#3793)

* * Add generic topic validation support
* Add the first validator TopicNameValidator into the validator chain, as a refactor of existing codes

* Refine to address comments

* Refine

---------

Co-authored-by: Tao Qin <tqin@linkedin.com>

* [GOBBLIN-1931] Refactor dag action updating method & add clarifying comment (apache#3801)

* Refactor dag action updating method & add clarifying comment

* Log filtered out duplicate messages

* logs and metrics for missing messages from change monitor

* Only add gobblin.service prefix for dagActionStoreChangeMonitor

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1934] Monitor High Level Consumer queue size (apache#3805)

* Emit metrics to monitor high level consumer queue size

* Empty commit to trigger tests

* Use BlockingQueue.size() func instead of atomic integer array

* Remove unused import & add DagActionChangeMonitor prefix to metric

* Refactor to avoid repeating code

* Make protected variables private where possible

* Fix white space

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1935] Skip null dag action types unable to be processed (apache#3807)

* Skip over null dag actions from malformed messages

* Add new metric for skipped messages

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1922]Add function in Kafka Source to recompute workUnits for filtered partitions (apache#3798)

* add function in Kafka Source to recompute workUnits for filtered partitions

* address comments

* set default min container value to 1

* add condition when create empty wu

* update the condition

* Expose functions to fetch record partitionColumn value (apache#3810)

* [GOBBLIN-1938] preserve x bit in manifest file based copy (apache#3804)

* preserve x bit in manifest file based copy
* fix project structure preventing running unit tests from intellij
* fix unit test

* [GOBBLIN-1919] Simplify a few elements of MR-related job exec before reusing code in Temporal-based execution (apache#3784)

* Simplify a few elements of MR-related job exec before reusing code in Temporal-based execution

* Add JSON-ification to several foundational config-state representations, plus encapsulated convience method `JobState.getJobIdFromProps`

* Update javadoc comments

* Encapsulate check for whether a path has the extension of a multi-work-unit

* [GOBBLIN-1939] Bump AWS version to use a compatible version of Jackson with Gobblin (apache#3809)

* Bump AWS version to use a compatible version of jackson with Gobblin

* use shared aws version

* [GOBBLIN-1937] Quantify Missed Work Completed by Reminders (apache#3808)

* Quantify Missed Work Completed by Reminders
   Also fix bug to filter out heartbeat events before extracting field

* Refactor changeMonitorUtils & add delimiter to metrics prefix

* Re-order params to group similar ones

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* GOBBLIN-1933]Change the logic in completeness verifier to support multi reference tier (apache#3806)

* address comments

* use connectionmanager when httpclient is not cloesable

* [GOBBLIN-1933] Change the logic in completeness verifier to support multi reference tier

* add uite test

* fix typo

* change the javadoc

* change the javadoc

---------

Co-authored-by: Zihan Li <zihli@zihli-mn2.linkedin.biz>

* [GOBBLIN-1943] Use AWS version 1.12.261 to fix a security vulnerability in the previous version (apache#3813)

* [GOBBLIN-1941] Develop Temporal abstractions, including `Workload` for workflows of unbounded size through sub-workflow nesting (apache#3811)

* Define `Workload` abstraction for Temporal workflows of unbounded size through sub-workflow nesting

* Adjust Gobblin-Temporal configurability for consistency and abstraction

* Define `WorkerConfig`, to pass the `TemporalWorker`'s configuration to the workflows and activities it hosts

* Improve javadoc

* Javadoc fixup

* Minor changes

* Update per review suggestions

* Insert pause, to spread the load on the temporal server, before launch of each child workflow that may have direct leaves of its own

* Appease findbugs by having `SeqSliceBackedWorkSpan::next` throw `NoSuchElementException`

* Add comment

* [GOBBLIN-1944] Add gobblin-temporal load generator for a single subsuming super-workflow with a configurable number of activities nested beneath (apache#3815)

* Add gobblin-temporal load generator for a single subsuming super-workflow with a configurable number of activities nested beneath

* Update per findbugs advice

* Improve processing of int props

* [GOBBLIN-1945] Implement Distributed Data Movement (DDM) Gobblin-on-Temporal `WorkUnit` evaluation (apache#3816)

* Implement Distributed Data Movement (DDM) Gobblin-on-Temporal `WorkUnit` evaluation

* Adjust work unit processing tuning for start-to-close timeout and nested execution branching

* Rework `ProcessWorkUnitImpl` and fix `FileSystem` misuse; plus convenience abstractions to load `FileSystem`, `JobState`, and `StateStore<TaskState>`

* Fix `FileSystem` resource lifecycle, uniquely name each workflow, and drastically reduce worker concurrent task execution

* Heed findbugs advice

* prep before commit

* Improve processing of required props

* Update comment in response to PR feedback

* [GOBBLIN-1942] Create MySQL util class for re-usable methods and setup MysqlDagActio… (apache#3812)

* Create MySQL util class for re-usable methods and setup MysqlDagActionStore retention

* Add a java doc

* Address review comments

* Close scheduled executors on shutdown & clarify naming and comments

* Remove extra period making config key invalid

* implement Closeable

* Use try with resources

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [Hotfix][GOBBLIN-1949] add option to detect malformed orc during commit (apache#3818)

* add option to detect malformed ORC during commit phase

* better logging

* address comment

* catch more generic exception

* validate ORC file after close

* move validate in between close and commit

* syntax

* whitespace

* update log

* [GOBBLIN-1948] Use same flowExecutionId across participants (apache#3819)

* Use same flowExecutionId across participants
* Set config field as well in new FlowSpec
* Use gobblin util to create config
* Rename function and move to util
---------
Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* Allow extension of functions in GobblinMCEPublisher and customization of fileList file metrics are calculated for (apache#3820)

* [GOBBLIN-1951] Emit GTE when deleting corrupted ORC files (apache#3821)

* [GOBBLIN-1951] Emit GTE when deleting corrupted ORC files

This commit adds ORC file validation during the commit phase and deletes
corrupted files. It also includes a test for ORC file validation.

* Linter fixes

* Add framework and unit tests for DagActionStoreChangeMonitor (apache#3817)

* Add framework and unit tests for DagActionStoreChangeMonitor

* Add more test cases and validation

* Add header for new file

* Move FlowSpec static function to Utils class

* Remove unused import

* Fix compile error

* Fix unit tests

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1952] Make jobname shortening in GaaS more aggressive (apache#3822)

* Make jobname shortening in GaaS more aggressive

* Change long name prefix to flowgroup

* Make KafkaTopicGroupingWorkUnitPacker pack with desired num of container (apache#3814)

* Make KafkaTopicGroupingWorkUnitPacker pack with desired num of container

* update comment

* [GOBBLIN-1953] Add an exception message to orc writer validation GTE (apache#3826)

* Fix FlowSpec Updating Function (apache#3823)

* Fix FlowSpec Updating Function
   * makes Config object with FlowSpec mutable
   * adds unit test to ensure flow compiles after updating FlowSpec
   * ensure DagManager resilient to exceptions on leadership change

* Only update Properties obj not Config to avoid GC overhead

* Address findbugs error

* Avoid updating or creating new FlowSpec objects by passing flowExecutionId directly to metadata

* Remove changes that are not needed anymore

* Add TODO to handle failed DagManager leadership change

* Overload function and add more documentation

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* Emit metric to tune LeaseArbiter Linger metric  (apache#3824)

* Monitor number of failed persisting leases to tune linger

* Increase default linger and epsilon values

* Add metric for lease persisting success

* Rename metrics

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1956]Make Kafka streaming pipeline be able to config the max poll records during runtime (apache#3827)

* address comments

* use connectionmanager when httpclient is not cloesable

* add uite test

* fix typo

* [GOBBLIN-1956] Make Kafka streaming pipeline be able to config the max poll records during runtime

* small refractor

---------

Co-authored-by: Zihan Li <zihli@zihli-mn2.linkedin.biz>

* Add semantics for failure on partial success (apache#3831)

* Consistly handle Rest.li /flowexecutions KILL and RESUME actions (apache#3830)

* [GOBBLIN-1957] GobblinOrcwriter improvements for large records (apache#3828)

* WIP

* Optimization to limit batchsize based on large record sizes

* Address review

* Use DB-qualified table ID as `IcebergTable` dataset descriptor (apache#3834)

* [GOBBLIN-1961] Allow `IcebergDatasetFinder` to use separate names for source vs. destination-side DB and table (apache#3835)

* Allow `IcebergDatasetFinder` to use separate names for source vs. destination-side DB and table

* Adjust Mockito.verify to pass test

* Prevent NPE in `FlowCompilationValidationHelper.validateAndHandleConcurrentExecution` (apache#3836)

* Prevent NPE in `FlowCompilationValidationHelper.validateAndHandleConcurrentExecution`

* improved `MultiHopFlowCompiler` javadoc

* Delete Launch Action Events After Processing (apache#3837)

* Delete launch action event after persisting

* Fix default value for flowExecutionId retrieval from metadata map

* Address review comments and add unit test

* Code clean up

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1960] Emit audit count after commit in IcebergMetadataWriter (apache#3833)

* Emit audit count after commit in IcebergMetadataWriter

* Unit tests by extracting to a post commit

* Emit audit count first

* find bugs complaint

* [GOBBLIN-1967] Add external data node for generic ingress/egress on GaaS (apache#3838)

* Add external data node for generic ingress/egress on GaaS

* Address reviews and cleanup

* Use URI representation for external dataset descriptor node

* Fix error message in containing check

* Address review

* [GOBBLIN-1971] Allow `IcebergCatalog` to specify the `DatasetDescriptor` name for the `IcebergTable`s it creates (apache#3842)

* Allow `IcebergCatalog` to specify the `DatasetDescriptor` name for the `IcebergTable`s it creates

* small method javadoc

* [GOBBLIN-1970] Consolidate processing dag actions to one code path (apache#3841)

* Consolidate processing dag actions to one code path

* Delete dag action in failure cases too

* Distinguish metrics for startup

* Refactor to avoid duplicated code and create static metrics proxy class

* Remove DagManager checks that don't apply on startup

* Add test to check kill/resume dag action removal after processing

* Remove unused import

* Initialize metrics proxy with Null Pattern

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1972] Fix `CopyDataPublisher` to avoid committing post-publish WUs before they've actually run (apache#3844)

* Fix `CopyDataPublisher` to avoid committing post-publish WUs before they've actually run

* fixup findbugsMain

* [GOBBLIN-1975] Keep job.name configuration immutable if specified on GaaS (apache#3847)

* Revert "[GOBBLIN-1952] Make jobname shortening in GaaS more aggressive (apache#3822)"

This reverts commit 5619a0a.

* use configuration to keep specified jobname if enabled

* Cleanup

* [GOBBLIN-1974] Ensure Adhoc Flows can be Executed in Multi-active Scheduler state (apache#3846)

* Ensure Adhoc Flows can be Executed in Multi-active Scheduler state

* Only delete spec for adhoc flows & always after orchestration

* Delete adhoc flows when dagManager is not present as well

* Fix flaky test for scheduler

* Add clarifying comment about failure recovery

* Re-ordered private method

* Move private methods again

* Enforce sequential ordering of unit tests to make more reliable

---------

Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>

* [GOBBLIN-1973] Change Manifest distcp logic to compare permissions of source and dest files even when source is older (apache#3845)

* change should copy logic

* Add tests, address review

* Fix checkstyle

* Remove unused imports

* [GOBBLIN-1976] Allow an `IcebergCatalog` to override the `DatasetDescriptor` platform name for the `IcebergTable`s it creates (apache#3848)

* Allow an `IcebergCatalog` to override the `DatasetDescriptor` platform for the `IcebergTable`s it creates

* fixup javadoc

* Log when `PasswordManager` fails to load any master password (apache#3849)

* [GOBBLIN-1968] Temporal commit step integration (apache#3829)

Add commit step to Gobblin temporal workflow for job publish

* Add codeql analysis

* Make gradle specific

* Add codeql as part of build script

* Initialize codeql

* Use separate workflow for codeql instead with custom build function as autobuild seems to not work

* Add jdk jar for global dependencies script

---------

Co-authored-by: umustafi <umust77@gmail.com>
Co-authored-by: Urmi Mustafi <umustafi@linkedin.com>
Co-authored-by: Arjun <abora@linkedin.com>
Co-authored-by: Tao Qin <35046097+wsarecv@users.noreply.github.com>
Co-authored-by: Tao Qin <tqin@linkedin.com>
Co-authored-by: Hanghang Nate Liu <nate.hanghang.liu@gmail.com>
Co-authored-by: Andy Jiang <andy.jiang99@outlook.com>
Co-authored-by: Kip Kohn <ckohn@linkedin.com>
Co-authored-by: Zihan Li <zihli@linkedin.com>
Co-authored-by: Zihan Li <zihli@zihli-mn2.linkedin.biz>
Co-authored-by: Matthew Ho <mho@linkedin.com>