Filter red blocks #4393
base: dag-master
Conversation
Walkthrough
This pull request introduces enhancements across multiple modules in the blockchain project. The changes add extra logic for ghost data verification with bounded merge depth in the chain module, extend the miner configuration with new DAG parameters, and implement a comprehensive block depth management system, including new storage, calculation, and pruning mechanisms. Additionally, new events and handlers are introduced to propagate and manage DAG block notifications across the miner, node, and sync services. Testing functionality has been expanded while the public API surface remains unchanged.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Peer
    participant BlockConnectorService
    participant SyncService
    participant GenerateBlockEventPacemaker
    Peer->>BlockConnectorService: New Block Received
    BlockConnectorService-->>SyncService: Broadcast NewDagBlockFromPeer (if synced)
    SyncService-->>GenerateBlockEventPacemaker: Forward NewDagBlockFromPeer event
    GenerateBlockEventPacemaker->>GenerateBlockEventPacemaker: Check last_time_received & time threshold
    GenerateBlockEventPacemaker-->>Peer: Trigger block event if criteria met
```
Actionable comments posted: 0
🧹 Nitpick comments (12)
flexidag/src/consensusdb/consensus_block_depth.rs (4)
31-44: Consider error context for key decoding.
When decoding a Hash from bytes, the custom error message is concise. If further debugging is needed, consider enhancing the message or logging the erroneous data’s hex.
59-62: Trait usage note.
The “append only” comment suggests no deletions. In the future, consider specifying a method for updates if partial block depth info might change.
80-90: Cautious logging for absent info.
Logging only a warning for missing data is fine if the store truly allows absent keys. Confirm that ignoring failures to retrieve data isn’t hiding critical issues.
93-98: Batch support for insertions.
The insert method writes directly. If the system requires multiple updates in an atomic step, consider a batch approach or a combined writer to reduce overhead and ensure consistency.
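A minimal sketch of the batch idea, assuming a rocksdb-backed store; the `WriteBatch` API shown is from the rust-rocksdb crate, while the project's `CachedDbAccess` wrapper and column families are omitted for brevity:

```rust
use rocksdb::{WriteBatch, DB};

/// Hypothetical batched insert: stage several (key, value) pairs and
/// commit them in a single atomic write instead of one write per key.
fn insert_batch(db: &DB, entries: &[(Vec<u8>, Vec<u8>)]) -> Result<(), rocksdb::Error> {
    let mut batch = WriteBatch::default();
    for (key, value) in entries {
        // All puts are staged in memory; nothing hits disk yet.
        batch.put(key, value);
    }
    // A single write: either every entry lands or none do.
    db.write(batch)
}
```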
flexidag/src/block_depth/block_depth_info.rs (3)
14-24: Generic constraints remain consistent with usage.
BlockDepthManagerT is parameterized by three reader traits. Confirm that implementing each separately is the desired approach for your indexing and read performance strategy.
60-111: Potential performance bottleneck in forward iteration.
“calculate_block_at_depth” iterates the forward chain from a prior known block until required_blue_score is reached. For very large chain segments, consider optimizing (e.g., skip-list, storing partial sums) if performance becomes a concern.
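One possible shape for the "stored partial sums" idea: a sparse checkpoint index over blue scores, so the walk starts near the target instead of at the last known block. All names here are illustrative, not part of the PR:

```rust
use std::collections::BTreeMap;

/// Hypothetical sparse index: every `interval` blue-score units, remember the
/// chain block reached at that score so later lookups can skip most of the walk.
struct BlueScoreCheckpoints {
    interval: u64, // must be non-zero
    // blue_score checkpoint -> hash of the chain block at that score
    checkpoints: BTreeMap<u64, [u8; 32]>,
}

impl BlueScoreCheckpoints {
    /// Record a checkpoint while blocks are committed.
    fn record(&mut self, blue_score: u64, hash: [u8; 32]) {
        if blue_score % self.interval == 0 {
            self.checkpoints.insert(blue_score, hash);
        }
    }

    /// Find the best starting point at or below `required_blue_score`,
    /// so the forward iteration only covers the final stretch of blocks.
    fn start_for(&self, required_blue_score: u64) -> Option<&[u8; 32]> {
        self.checkpoints
            .range(..=required_blue_score)
            .next_back()
            .map(|(_, h)| h)
    }
}
```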
113-129: Partial iteration for kosherizing blocks.
Filtering mergeset_blues for kosherizing candidates might be expensive if mergeset_blues is large. If needed, consider short-circuiting or caching repeated queries in larger DAGs.
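A small illustration of the short-circuiting suggestion; `is_kosherizing` stands in for whatever per-block predicate the real code applies:

```rust
/// Hypothetical short-circuit: instead of collecting every kosherizing
/// candidate, stop at the first blue that satisfies the predicate.
fn first_kosherizing(
    mergeset_blues: &[[u8; 32]],
    is_kosherizing: impl Fn(&[u8; 32]) -> bool,
) -> Option<&[u8; 32]> {
    // `find` stops scanning as soon as a match is seen, unlike filtering
    // the whole slice and collecting the results.
    mergeset_blues.iter().find(|b| is_kosherizing(b))
}
```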
flexidag/src/blockdag.rs (3)
730-732: Commented-out return blocks.
The lines appear to demonstrate prior logic for returning MineNewDagBlockInfo directly. If not used, consider removing or clarifying these comments to avoid confusion.
Also applies to: 737-742, 750-750, 755-760
800-850: Bounded merge depth check.
The logic validates that no “bad red” block is introduced into the chain. The approach is correct but might become more expensive as DAG complexity grows. Evaluate caching or index structures if performance degrades.
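One shape the caching idea could take, assuming (as a simplification) that the merge-depth root is determined by the pruning point and selected parent; if it depends on more inputs, the cache key would need to widen. Names are illustrative:

```rust
use std::collections::HashMap;

/// Hypothetical memoization of merge-depth roots: repeated checks against
/// the same (pruning point, selected parent) pair reuse the previous answer.
struct MergeDepthRootCache {
    cache: HashMap<([u8; 32], [u8; 32]), [u8; 32]>,
}

impl MergeDepthRootCache {
    fn get_or_compute(
        &mut self,
        pruning_point: [u8; 32],
        selected_parent: [u8; 32],
        compute: impl FnOnce() -> [u8; 32],
    ) -> [u8; 32] {
        // `compute` runs only on a cache miss.
        *self
            .cache
            .entry((pruning_point, selected_parent))
            .or_insert_with(compute)
    }
}
```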
852-905: Parent-pruning logic for invalid merges.
“remove_bounded_merge_breaking_parents” recalculates ghostdag data after removing violating parents. This approach can be correct yet computationally expensive. Consider a more incremental approach if performance or intermediate states become an issue.
flexidag/src/lib.rs (1)
11-17: Consider documenting the error handling behavior.
The function provides a useful error-handling pattern for idempotent operations, but its purpose and usage context should be documented. Add documentation to clarify the function's purpose:

```diff
+/// Converts KeyAlreadyExists errors to Ok(()), preserving other errors.
+/// This is useful for idempotent operations where duplicate keys are expected.
 pub fn process_key_already_error(result: Result<(), StoreError>) -> Result<(), StoreError> {
```
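To make the suggested doc concrete, here is a self-contained sketch of the pattern in use; `StoreError` is reduced to two illustrative variants rather than the crate's full enum:

```rust
#[derive(Debug)]
enum StoreError {
    KeyAlreadyExists(String),
    Io(String),
}

fn process_key_already_error(result: Result<(), StoreError>) -> Result<(), StoreError> {
    match result {
        Err(StoreError::KeyAlreadyExists(_)) => Ok(()), // duplicate: idempotent success
        other => other,                                 // everything else propagates
    }
}

fn main() {
    // A duplicate insert "fails" with KeyAlreadyExists; the wrapper absorbs it.
    let dup: Result<(), StoreError> = Err(StoreError::KeyAlreadyExists("hash".into()));
    assert!(process_key_already_error(dup).is_ok());

    // A genuine I/O failure is still reported.
    let io: Result<(), StoreError> = Err(StoreError::Io("disk".into()));
    assert!(process_key_already_error(io).is_err());
}
```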
types/src/system_events.rs (1)
39-41: Consider adding documentation for the event struct.
While the struct follows the pattern of other event types, its purpose and usage context should be documented. Add documentation to clarify the event's purpose:

```diff
+/// Event emitted when a new DAG block is received from a peer.
+/// This event is used to coordinate block synchronization across the network.
 #[derive(Clone, Debug)]
 pub struct NewDagBlockFromPeer;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- chain/src/chain.rs (1 hunks)
- config/src/miner_config.rs (4 hunks)
- flexidag/src/block_depth/block_depth_info.rs (1 hunks)
- flexidag/src/block_depth/mod.rs (1 hunks)
- flexidag/src/blockdag.rs (8 hunks)
- flexidag/src/consensusdb/consensus_block_depth.rs (1 hunks)
- flexidag/src/consensusdb/db.rs (4 hunks)
- flexidag/src/consensusdb/mod.rs (1 hunks)
- flexidag/src/lib.rs (1 hunks)
- flexidag/src/prune/pruning_point_manager.rs (1 hunks)
- flexidag/tests/tests.rs (2 hunks)
- miner/src/generate_block_event_pacemaker.rs (4 hunks)
- node/src/node.rs (1 hunks)
- sync/src/block_connector/block_connector_service.rs (3 hunks)
- sync/src/sync.rs (4 hunks)
- types/src/system_events.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- flexidag/src/block_depth/mod.rs
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Generate code coverage report
🔇 Additional comments (30)
flexidag/src/consensusdb/consensus_block_depth.rs (4)
17-21: Ensure default values meet expected behavior.
The struct derives Default, but both fields are cryptographic hashes. Confirm that using HashValue::zero() for defaults in production is correct, or consider an explicit default.
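If an explicit default is preferred, it could look like the following; the field names are assumptions based on this review, and HashValue is starcoin_crypto's hash type:

```rust
use starcoin_crypto::HashValue;

#[derive(Clone, Debug)]
pub struct BlockDepthInfo {
    merge_depth_root: HashValue,
    finality_point: HashValue,
}

// Spelling the default out makes the zero-hash choice visible at the type,
// rather than hidden behind #[derive(Default)].
impl Default for BlockDepthInfo {
    fn default() -> Self {
        Self {
            merge_depth_root: HashValue::zero(),
            finality_point: HashValue::zero(),
        }
    }
}
```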
45-53: Confirm BCS usage consistency.
BCS is used for BlockDepthInfo serialization/deserialization. Ensure it aligns with the rest of the codebase for data consistency and potential forward/backward compatibility.
55-57: Trait naming clarifies read-only semantics.
The BlockDepthInfoReader trait name nicely communicates its read-only design. No changes required here.
64-69: DbBlockDepthInfoStore concurrency approach.
This store holds an Arc and a CachedDbAccess. If multiple threads write concurrently, ensure external synchronization is enforced or each write is thread-safe.
flexidag/src/blockdag.rs (5)
3-9: Enhanced imports for block depth.
The newly added imports for BlockDepthInfo and the G_MERGE_DEPTH constant are consistent with deeper DAG-based logic. No issues here.
Also applies to: 24-24, 30-30
53-54: Type alias for convenience.
Using BlockDepthManager as a specific instantiation of BlockDepthManagerT clarifies usage and reduces verbosity. This is a clean addition.
68-68: New field introduction.
“block_depth_manager” in BlockDAG centralizes depth logic. Confirm no concurrency or locking changes are required beyond what commit_lock provides.
73-73: Constructor and store initialization.
The DAG now accepts a merge_depth parameter and instantiates BlockDepthManager. Ensure callers supply consistent merge_depth, especially in production vs. test environments.
Also applies to: 76-76, 89-90, 92-97, 103-103
117-120: Testing methods for merge depth.
Both create_for_testing_with_parameters and create_for_testing_with_k_and_merge_depth provide flexible test instantiation. This is helpful for coverage. No issues noted.
Also applies to: 122-129
flexidag/src/lib.rs (1)
3-3: LGTM! New module declaration for block depth management.
The addition of the block_depth module aligns with the broader changes for managing block depth in the DAG.

flexidag/src/consensusdb/mod.rs (1)
4-4: LGTM! New consensus block depth module.
The addition of consensus_block_depth as a public module is well placed among other consensus-related modules.

flexidag/src/consensusdb/db.rs (4)
3-3: LGTM! Import for block depth types.
The import is properly organized with other consensus-related imports.
25-25: LGTM! New block depth info store field.
The field is properly wrapped in Arc for thread-safe access and follows the pattern of other store fields.
105-108: LGTM! Block depth store initialization.
The initialization follows the established pattern of other stores, using the same configuration parameters.
79-79: LGTM! New column family for block depth info.
The column family is properly added to the database initialization list.
flexidag/src/prune/pruning_point_manager.rs (1)
110-110: LGTM! Code simplification looks good.
The change simplifies the return logic while maintaining the same behavior.
miner/src/generate_block_event_pacemaker.rs (2)
49-49: LGTM! Event subscription management is properly handled.
The NewDagBlockFromPeer event is correctly subscribed to and unsubscribed from in the service lifecycle.
Also applies to: 60-60
98-120: Consider adding documentation for the time-based block generation logic.
The implementation looks correct but would benefit from documentation explaining:
- The purpose of the time window check
- Why block generation is triggered after the time window
- The rationale behind resetting the timer on errors
Additionally, verify that the time window configuration is appropriate for your network's block time target.
✅ Verification successful
🌐 Web query:
What is a typical block time target for blockchain networks?
💡 Result:
Block time refers to the average duration required for a blockchain network to generate a new block containing validated transactions. This metric varies significantly across blockchain protocols due to differing design goals and consensus mechanisms[1][3][7].

Typical block time targets:

| Blockchain | Expected Block Time | Key Factors Influencing Time |
|---|---|---|
| Bitcoin | 10 minutes | Proof-of-Work difficulty adjustments every 2016 blocks[6][10] |
| Ethereum | 12–14 seconds | Transition from Proof-of-Work to Proof-of-Stake consensus[2][7] |
| Litecoin | 2.5 minutes | Scrypt-based mining algorithm[3] |
| Solana | 400–800 milliseconds | Optimized parallel processing architecture[7] |
| Cardano | 20 seconds | Ouroboros Proof-of-Stake protocol[7] |

Key observations:
1. Speed vs. security trade-off: shorter block times (e.g., Ethereum's ~13 seconds) enable faster transactions but increase orphaned-block risk[4][8], while longer intervals like Bitcoin's 10 minutes prioritize network stability and security[10].
2. Difficulty adjustment: Proof-of-Work chains like Bitcoin automatically recalibrate mining difficulty every 2016 blocks (~2 weeks) to maintain target block times despite hash-rate fluctuations[6][10].
3. Protocol-specific targets: Bitcoin Cash mirrors Bitcoin's 10-minute target[3]; Binance Smart Chain averages ~3 seconds[7]; Polkadot ranges between 6–12 seconds depending on network configuration[7].

For real-time tracking, platforms like Chainspect provide live metrics for various chains[1].

Citations:
1: https://chainspect.app/blog/block-time
2: https://www.nadcab.com/blog/block-interval
3: https://www.bitstamp.net/en-gb/learn/crypto-101/what-is-block-time/
4: https://www.uniblock.dev/glossary/block-time-web3-explained-56994
5: https://medium.facilelogin.com/the-mystery-behind-block-time-63351e35603a?gi=5eda1cba8846
6: https://learnmeabitcoin.com/technical/mining/target/
7: https://www.nervos.org/knowledge-base/block_time_in_blockchain_(explainCKBot)
8: https://woolypooly.com/en/blog/block-time
9: https://www.investopedia.com/terms/b/block-time-cryptocurrency.asp
10: https://newhedge.io/bitcoin/block-speed
Documentation for Time-Based Block Generation Logic
The implementation is correct, but adding documentation would improve clarity. Consider including details on the following:
- The time window check compares the elapsed time since the last received DAG block against the configured threshold (via self.config.miner.dag_block_receive_time_window()), ensuring that block generation is triggered only after a set period without receiving new blocks.
- A new block event is triggered if the elapsed time exceeds the configured time window, which should be chosen based on your network's block time target. (For example, blockchain networks can have block times ranging from around 10 minutes for Bitcoin to a few seconds for Ethereum or newer protocols.)
- On encountering an error in calculating the duration, the timer is reset (and an error logged) to prevent stale state issues and ensure consistent behavior in subsequent events.
The web query confirms that acceptable block time targets vary by network, so double-check that your configuration aligns with your specific network requirements.
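A minimal sketch of the described check, with illustrative names; the real service reads the window from self.config.miner.dag_block_receive_time_window():

```rust
use std::time::{Duration, Instant};

/// Illustrative pacemaker state: fire a block-generation event only when no
/// DAG block has arrived from a peer within the configured window.
struct Pacemaker {
    last_time_received: Instant,
    receive_time_window: Duration, // e.g. config.miner.dag_block_receive_time_window()
}

impl Pacemaker {
    /// Called on each NewDagBlockFromPeer event; returns whether to generate.
    fn on_new_dag_block(&mut self) -> bool {
        let now = Instant::now();
        let should_generate =
            now.duration_since(self.last_time_received) >= self.receive_time_window;
        // Reset the timer whether or not we fired, so the next decision
        // measures quiet time from this event onward.
        self.last_time_received = now;
        should_generate
    }
}
```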
config/src/miner_config.rs (3)
46-52: LGTM! Configuration structure is well-defined.
The new configuration options are:
- Properly documented with CLI flags
- Correctly marked as optional
- Include appropriate serialization control
84-91: LGTM! Getter methods provide good defaults.
The implementation correctly falls back to static constants when values are not provided.
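The fallback pattern being praised, reconstructed as a sketch; the constant values match those cited later in this review, while the struct layout is an assumption:

```rust
// Illustrative constants and getter shape; the actual names live in
// config/src/miner_config.rs.
const G_DAG_BLOCK_RECEIVE_TIME_WINDOW: u64 = 2; // seconds
const G_MERGE_DEPTH: u64 = 3600;

struct MinerConfig {
    dag_block_receive_time_window: Option<u64>,
    merge_depth: Option<u64>,
}

impl MinerConfig {
    fn dag_block_receive_time_window(&self) -> u64 {
        // An explicitly configured value wins; otherwise the static default applies.
        self.dag_block_receive_time_window
            .unwrap_or(G_DAG_BLOCK_RECEIVE_TIME_WINDOW)
    }

    fn merge_depth(&self) -> u64 {
        self.merge_depth.unwrap_or(G_MERGE_DEPTH)
    }
}
```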
11-12: Verify the default values for DAG parameters.
The default values need verification:
- G_DAG_BLOCK_RECEIVE_TIME_WINDOW = 2s: is this sufficient for network propagation?
- G_MERGE_DEPTH = 3600: the comment indicates this should be smaller than pruning finality.

✅ Verification successful
🌐 Web query:
What is a safe block propagation time in blockchain networks?
💡 Result:
Block propagation time, the duration for a newly created block to reach all network nodes, varies significantly across blockchain designs but is critical for maintaining network security and efficiency. A "safe" propagation time minimizes risks like chain splits (forks) and selfish mining while aligning with the blockchain's consensus rules and performance requirements.

Key factors influencing safe propagation time:
1. Network size and topology: larger networks (100+ nodes) inherently take longer due to more hops[1][3]; decentralized structures using protocols like Gossipsub achieve sub-200ms propagation in small-to-medium networks (10–100 nodes)[1].
2. Block size and frequency: Bitcoin's 10-minute block interval tolerates median propagation times of 6.5–12.6 seconds[3][7], while high-frequency chains (e.g., Dymension's 0.2-second blocks) require sub-200ms times[1]. Larger blocks (>100KB) increase congestion risk, so Dymension caps blocks at 100KB[1].
3. Security trade-offs: slower propagation increases vulnerability to 51% attacks and stale blocks[3][5]; Bitcoin's design prioritizes security over speed, accepting longer confirmation times[2][7].

Benchmark examples:

| Blockchain | Block Time | Safe Propagation Time | Key Mechanism |
|---|---|---|---|
| Bitcoin | 10 minutes | ≤40 seconds | Gossip protocol[3][7] |
| Dymension Rollup | 0.2 seconds | ≤200ms | Optimized Gossipsub[1] |

Challenges and solutions:
- Latency/bandwidth: high-latency links delay propagation; solutions like compact blocks reduce data transfer[5].
- Sybil/spam attacks: permissionless networks use signature checks (e.g., sequencer validation) to filter spam[1].
- Node churn: Gossipsub recovers within ~1 second when nodes join or leave[1].

Conclusion: a safe propagation time is context-dependent. Bitcoin-like chains tolerate ~10–40 seconds due to infrequent blocks and robust consensus[2][3]; high-speed networks (e.g., rollups) require sub-second times (<200ms) to prevent forks during rapid block production[1]. Balancing speed with security remains central; adjusting parameters like block size, node degree (e.g., Gossipsub's D=8), and validation checks ensures reliability[1][5].

Citations:
1: https://forum.dymension.xyz/t/rapid-block-propagation-mechanism/45
2: https://www.nadcab.com/blog/block-time-in-bitcoin
3: https://thebitcoinmanual.com/articles/block-propagation/
4: https://www.uniblock.dev/glossary/block-propagation-web3-explained-a9531
5: https://www.nadcab.com/blog/block-propagation-in-blockchain
6: https://cryptosafetyfirst.com/transaction-propagation/
7: https://medium.facilelogin.com/the-mystery-behind-block-time-63351e35603a?gi=5eda1cba8846
8: https://www.topcoder.com/thrive/articles/block-propagation-in-blockchain
Default DAG parameter values look acceptable.
- G_DAG_BLOCK_RECEIVE_TIME_WINDOW is set to 2 seconds, which provides a reasonable propagation window for networks with moderate block intervals. (In many blockchain systems, even with Bitcoin tolerating propagation times up to ~40 seconds, a 2‑second window is a practical default when blocks are expected at faster cadences.)
- G_MERGE_DEPTH is defined as 3600, ensuring it remains lower than the anticipated pruning finality.
These defaults appear consistent with the intended design goals, though final tuning might depend on your network’s specific performance and consensus requirements.
node/src/node.rs (1)
292-292: Verify DAG initialization with the new merge depth parameter.
The change correctly integrates the merge depth configuration, but since this affects DAG initialization, please ensure:
- The merge depth is properly propagated to all DAG operations
- The value doesn't conflict with other DAG parameters
- Existing DAG data remains compatible
Consider adding validation to ensure the merge depth is always smaller than the pruning finality to prevent potential issues; a minimal sketch follows.
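A minimal sketch of such a validation, using anyhow's ensure!; the function name and where it would be called are assumptions:

```rust
use anyhow::{ensure, Result};

/// Illustrative startup validation: reject configurations where the merge
/// depth is not strictly below the pruning finality.
fn validate_dag_params(merge_depth: u64, pruning_finality: u64) -> Result<()> {
    ensure!(
        merge_depth < pruning_finality,
        "merge_depth ({}) must be smaller than pruning finality ({})",
        merge_depth,
        pruning_finality
    );
    Ok(())
}
```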
sync/src/block_connector/block_connector_service.rs (2)
41-41: LGTM!
The import of NewDagBlockFromPeer is correctly added to support the new event-broadcasting functionality.
384-386: LGTM!
The broadcast of NewDagBlockFromPeer is correctly added after a successful block connection, and it is properly guarded by the sync status check.

sync/src/sync.rs (2)
38-40: LGTM!
The imports are correctly organized to include NewDagBlockFromPeer along with other system events.
698-700: LGTM!
The SpecificSyncDone event struct and its handler are well implemented, providing a clean way to notify when a specific sync is complete.
Also applies to: 701-705
flexidag/tests/tests.rs (3)
23-23: LGTM!
The U256 type is correctly imported to handle block difficulty values.
1256-1288: LGTM!
The add_and_print_with_pruning_point_and_difficulty helper function is well implemented, providing a clean way to create test blocks with specific difficulty and pruning points.
1291-1496: LGTM!
The test_merge_bounded test is comprehensive and well structured:
- Tests initialization with specific parameters (k=3, pruning_depth=4, pruning_finality=3, merge_depth=3)
- Verifies pruning point calculations and merge depth checks
- Includes assertions to validate the behavior
chain/src/chain.rs (1)
1396-1407: LGTM!
The verify_and_ghostdata method is enhanced with proper bounded merge depth verification; it:
- Gets the DAG instance
- Verifies ghost data
- Checks bounded merge depth with the correct pruning configuration (see the sketch below)
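A schematic of that three-step flow; the trait and stub types below are stand-ins so the sketch compiles on its own, not the PR's actual API:

```rust
use anyhow::Result;

// Stub types so the sketch is self-contained; the real ones live in the
// chain and flexidag crates.
struct BlockHeader;
struct GhostdagData;
struct PruningConfig {
    pruning_depth: u64,
    pruning_finality: u64,
}

trait Dag {
    fn verify_and_ghostdata(
        &self,
        uncles: &[BlockHeader],
        header: &BlockHeader,
    ) -> Result<GhostdagData>;
    fn check_bounded_merge_depth(
        &self,
        ghostdata: &GhostdagData,
        cfg: &PruningConfig,
    ) -> Result<()>;
}

// The order of operations the review describes: verify ghost data first,
// then enforce the bounded merge depth under the pruning configuration.
fn verify_block(
    dag: &impl Dag,
    uncles: &[BlockHeader],
    header: &BlockHeader,
    cfg: &PruningConfig,
) -> Result<GhostdagData> {
    let ghostdata = dag.verify_and_ghostdata(uncles, header)?;
    dag.check_bounded_merge_depth(&ghostdata, cfg)?;
    Ok(ghostdata)
}
```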
Pull request type
Please check the type of change your PR introduces:
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Other information
Summary by CodeRabbit
New Features
- Introduced new miner configuration options for adjusting DAG block reception timing and merge depth thresholds, enhancing block processing consistency.
- Added event notifications that alert peers when new DAG blocks arrive and sync tasks complete, improving network responsiveness.
Refactor
- Revamped block verification and block depth calculations to improve consensus reliability.
- Streamlined pruning logic for more predictable behavior and enhanced overall system stability.