
Filter red blocks #4393

Draft · wants to merge 22 commits into base: dag-master

Conversation

@jackzhhuang (Collaborator) commented Feb 10, 2025

Pull request type

Please check the type of change your PR introduces:

  • Bugfix
  • Feature
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • Documentation content changes
  • Other (please describe):

What is the current behavior?

Issue Number: N/A

What is the new behavior?

Other information

Summary by CodeRabbit

  • New Features
     - Introduced new miner configuration options for adjusting DAG block reception timing and merge depth thresholds, enhancing block processing consistency.
     - Added event notifications that alert peers when new DAG blocks arrive and sync tasks complete, improving network responsiveness.

  • Refactor
     - Revamped block verification and block depth calculations to improve consensus reliability.
     - Streamlined pruning logic for more predictable behavior and enhanced overall system stability.


coderabbitai bot commented Feb 10, 2025

Important

Review skipped

Draft detected.


Walkthrough

This pull request introduces enhancements across multiple modules in the blockchain project. The changes add extra logic for ghost data verification with bounded merge depth in the chain module, extend the miner configuration with new DAG parameters, and implement a comprehensive block depth management system including new storage, calculation, and pruning mechanisms. Additionally, new events and handlers are introduced to propagate and manage DAG block notifications across miner, node, and sync services. Testing functionalities have been expanded while the public API surface remains unchanged.
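The bounded merge depth rule described above can be illustrated with a minimal sketch. Note that the types and function names here are hypothetical simplifications for illustration, not the PR's actual API: the idea is that a block merging red blocks must not reach deeper than `merge_depth` below the current tip's blue score.

```rust
// Illustrative sketch of a bounded merge depth check. `blue_score`
// approximates how deep a block sits in the DAG; a red block is only
// mergeable if it is not more than `merge_depth` below the tip.
// All names here are stand-ins, not the PR's real types.

#[derive(Clone, Copy)]
struct BlockInfo {
    blue_score: u64,
}

/// Returns true if merging `red` into a chain whose tip has
/// `tip_blue_score` would violate the bounded merge depth.
fn violates_merge_depth(red: BlockInfo, tip_blue_score: u64, merge_depth: u64) -> bool {
    tip_blue_score.saturating_sub(red.blue_score) > merge_depth
}

fn main() {
    let shallow = BlockInfo { blue_score: 95 };
    let deep = BlockInfo { blue_score: 10 };
    assert!(!violates_merge_depth(shallow, 100, 10));
    assert!(violates_merge_depth(deep, 100, 10));
    println!("merge depth checks passed");
}
```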

Changes

  • chain/src/chain.rs: Modified verify_and_ghostdata: replaced the single-line return with multi-step ghost data verification and a bounded merge depth check.
  • config/src/miner_config.rs: Added static constants (G_DAG_BLOCK_RECEIVE_TIME_WINDOW, G_MERGE_DEPTH), new optional fields, getter methods, and merge_with_opt integration.
  • flexidag/src/block_depth/block_depth_info.rs, flexidag/src/block_depth/mod.rs: Introduced a new BlockDepthManagerT struct with methods for merge depth calculation, finality point computation, and kosherizing blues.
  • flexidag/src/blockdag.rs: Updated BlockDAG: the constructor now accepts merge_depth, integrated BlockDepthManager, and added a test-specific creation method and new merge depth check methods.
  • flexidag/src/consensusdb/consensus_block_depth.rs, flexidag/src/consensusdb/db.rs, flexidag/src/consensusdb/mod.rs, flexidag/src/lib.rs: Added block depth info support: a new BlockDepthInfo struct, traits and a store implementation, a DB field, constants, and a process_key_already_error function.
  • flexidag/src/prune/pruning_point_manager.rs: Simplified the next_pruning_point method by consolidating conditionals and removing obsolete code.
  • flexidag/tests/tests.rs: Added an add_and_print_with_pruning_point_and_difficulty function and a new test_merge_bounded test to validate block addition with extra parameters.
  • miner/src/generate_block_event_pacemaker.rs: Added a last_time_received field and an event handler for NewDagBlockFromPeer to manage DAG block reception timing.
  • node/src/node.rs: Updated DAG initialization to include dag_merge_depth from the miner configuration.
  • sync/src/block_connector/block_connector_service.rs, sync/src/sync.rs, types/src/system_events.rs: Introduced the NewDagBlockFromPeer event with related imports and handlers to propagate new DAG block notifications through sync services.

Sequence Diagram(s)

sequenceDiagram
    participant Peer
    participant BlockConnectorService
    participant SyncService
    participant GenerateBlockEventPacemaker

    Peer->>BlockConnectorService: New Block Received
    BlockConnectorService-->>SyncService: Broadcast NewDagBlockFromPeer (if synced)
    SyncService-->>GenerateBlockEventPacemaker: Forward NewDagBlockFromPeer event
    GenerateBlockEventPacemaker->>GenerateBlockEventPacemaker: Check last_time_received & time threshold
    GenerateBlockEventPacemaker-->>Peer: Trigger block event if criteria met

Suggested reviewers

  • welbon
  • nkysg
  • simonjiao
  • sanlee42

Poem

I'm a bunny coding under moonlit skies,
Hopping through modules with curious eyes.
Ghost data and merge depths now clearly in sight,
With new tests and events making everything right.
I twirl my whiskers in delight—
A joyful hop in our code’s magic light! 🥕🐇


@jackzhhuang jackzhhuang marked this pull request as draft February 10, 2025 02:10
@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (12)
flexidag/src/consensusdb/consensus_block_depth.rs (4)

31-44: Consider error context for key decoding.
When decoding a Hash from bytes, the custom error message is concise. If further debugging is needed, consider enhancing the message or logging the erroneous data’s hex.


59-62: Trait usage note.
The “append only” comment suggests no deletions. In the future, consider specifying a method for updates if partial block depth info might change.


80-90: Cautious logging for absent info.
Logging only a warning for missing data is fine if the store truly allows absent keys. Confirm that ignoring failures to retrieve data isn’t hiding critical issues.


93-98: Batch support for insertions.
The insert method writes directly. If the system requires multiple updates in an atomic step, consider a batch approach or a combined writer to reduce overhead and ensure consistency.

flexidag/src/block_depth/block_depth_info.rs (3)

14-24: Generic constraints remain consistent with usage.
BlockDepthManagerT is parameterized by three reader traits. Confirm that implementing each separately is the desired approach for your indexing and read performance strategy.


60-111: Potential performance bottleneck in forward iteration.
“calculate_block_at_depth” iterates the forward chain from a prior known block until required_blue_score is reached. For very large chain segments, consider optimizing (e.g., skip-list, storing partial sums) if performance becomes a concern.
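A linear forward walk of the kind this comment describes can be sketched roughly as follows. The store types and lookups here are hypothetical stand-ins for the real reachability/ghostdag stores; the point is that finding the first chain block at a required blue score is O(n) in chain length, which is why a skip-list or partial-sum index could help.

```rust
use std::collections::HashMap;

// Sketch of walking a selected-parent chain forward until a required
// blue score is reached. The HashMaps stand in for the real stores;
// names are illustrative, not the PR's API.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Id(u64);

struct Chain {
    selected_child: HashMap<Id, Id>, // ancestor -> next block on the chain
    blue_score: HashMap<Id, u64>,
}

impl Chain {
    /// First chain block whose blue score reaches `required_blue_score`,
    /// starting from `from`. Returns None if the chain ends first.
    fn block_at_depth(&self, mut from: Id, required_blue_score: u64) -> Option<Id> {
        loop {
            if *self.blue_score.get(&from)? >= required_blue_score {
                return Some(from);
            }
            from = *self.selected_child.get(&from)?;
        }
    }
}

fn main() {
    let mut chain = Chain { selected_child: HashMap::new(), blue_score: HashMap::new() };
    for i in 0..5u64 {
        chain.blue_score.insert(Id(i), i * 10);
        chain.selected_child.insert(Id(i), Id(i + 1));
    }
    chain.blue_score.insert(Id(5), 50);
    assert_eq!(chain.block_at_depth(Id(0), 25), Some(Id(3)));
    println!("found block at required depth");
}
```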


113-129: Partial iteration for kosherizing blocks.
Filtering mergeset_blues for kosherizing candidates might be expensive if mergeset_blues is large. If needed, consider short-circuiting or caching repeated queries in larger DAGs.

flexidag/src/blockdag.rs (3)

730-732: Commented-out return blocks.
The lines appear to demonstrate prior logic for returning MineNewDagBlockInfo directly. If not used, consider removing or clarifying these comments to avoid confusion.

Also applies to: 737-742, 750-750, 755-760


800-850: Bounded merge depth check.
The logic validates that no “bad red” block is introduced into the chain. The approach is correct but might become more expensive as DAG complexity grows. Evaluate caching or index structures if performance degrades.


852-905: Parent-pruning logic for invalid merges.
“remove_bounded_merge_breaking_parents” recalculates ghostdag data after removing violating parents. This approach can be correct yet computationally expensive. Consider a more incremental approach if performance or intermediate states become an issue.

flexidag/src/lib.rs (1)

11-17: Consider documenting the error handling behavior.

The function provides a useful error handling pattern for idempotent operations, but its purpose and usage context should be documented.

Add documentation to clarify the function's purpose:

+/// Converts KeyAlreadyExists errors to Ok(()), preserving other errors.
+/// This is useful for idempotent operations where duplicate keys are expected.
 pub fn process_key_already_error(result: Result<(), StoreError>) -> Result<(), StoreError> {
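For illustration, the pattern this nitpick documents might look like the following standalone sketch, with a simplified `StoreError` standing in for the crate's actual error type:

```rust
// Minimal sketch of the idempotent-write error pattern discussed above.
// `StoreError` here is a simplified stand-in for the real consensusdb
// error type.
#[derive(Debug, PartialEq)]
enum StoreError {
    KeyAlreadyExists(String),
    DbError(String),
}

/// Converts KeyAlreadyExists errors to Ok(()), preserving other errors.
/// Useful for idempotent inserts where duplicate keys are expected.
fn process_key_already_error(result: Result<(), StoreError>) -> Result<(), StoreError> {
    match result {
        Err(StoreError::KeyAlreadyExists(_)) => Ok(()),
        other => other,
    }
}

fn main() {
    assert_eq!(
        process_key_already_error(Err(StoreError::KeyAlreadyExists("k".into()))),
        Ok(())
    );
    assert!(process_key_already_error(Err(StoreError::DbError("io".into()))).is_err());
    assert_eq!(process_key_already_error(Ok(())), Ok(()));
    println!("process_key_already_error behaves as expected");
}
```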
types/src/system_events.rs (1)

39-41: Consider adding documentation for the event struct.

While the struct follows the pattern of other event types, its purpose and usage context should be documented.

Add documentation to clarify the event's purpose:

+/// Event emitted when a new DAG block is received from a peer.
+/// This event is used to coordinate block synchronization across the network.
 #[derive(Clone, Debug)]
 pub struct NewDagBlockFromPeer;
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5f22314 and 9395417.

📒 Files selected for processing (16)
  • chain/src/chain.rs (1 hunks)
  • config/src/miner_config.rs (4 hunks)
  • flexidag/src/block_depth/block_depth_info.rs (1 hunks)
  • flexidag/src/block_depth/mod.rs (1 hunks)
  • flexidag/src/blockdag.rs (8 hunks)
  • flexidag/src/consensusdb/consensus_block_depth.rs (1 hunks)
  • flexidag/src/consensusdb/db.rs (4 hunks)
  • flexidag/src/consensusdb/mod.rs (1 hunks)
  • flexidag/src/lib.rs (1 hunks)
  • flexidag/src/prune/pruning_point_manager.rs (1 hunks)
  • flexidag/tests/tests.rs (2 hunks)
  • miner/src/generate_block_event_pacemaker.rs (4 hunks)
  • node/src/node.rs (1 hunks)
  • sync/src/block_connector/block_connector_service.rs (3 hunks)
  • sync/src/sync.rs (4 hunks)
  • types/src/system_events.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • flexidag/src/block_depth/mod.rs
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Generate code coverage report
🔇 Additional comments (30)
flexidag/src/consensusdb/consensus_block_depth.rs (4)

17-21: Ensure default values meet expected behavior.
The struct derives Default, but both fields are cryptographic hashes. Confirm that using HashValue::zero() for defaults in production is correct, or consider an explicit default.


45-53: Confirm BCS usage consistency.
BCS is used for BlockDepthInfo serialization/deserialization. Ensure it aligns with the rest of the codebase for data consistency and potential forward/backward compatibility.


55-57: Trait naming clarifies read-only semantics.
The BlockDepthInfoReader trait name nicely communicates its read-only design. No changes required here.


64-69: DbBlockDepthInfoStore concurrency approach.
This store holds an Arc and a CachedDbAccess. If multiple threads write concurrently, ensure external synchronization is enforced or each write is thread-safe.

flexidag/src/blockdag.rs (5)

3-9: Enhanced imports for block depth.
The newly added imports for BlockDepthInfo and the G_MERGE_DEPTH constant are consistent with deeper DAG-based logic. No issues here.

Also applies to: 24-24, 30-30


53-54: Type alias for convenience.
Using BlockDepthManager as a specific instantiation of BlockDepthManagerT clarifies usage and reduces verbosity. This is a clean addition.


68-68: New field introduction.
“block_depth_manager” in BlockDAG centralizes depth logic. Confirm no concurrency or locking changes are required beyond what commit_lock provides.


73-73: Constructor and store initialization.
The DAG now accepts a merge_depth parameter and instantiates BlockDepthManager. Ensure callers supply consistent merge_depth, especially in production vs. test environments.

Also applies to: 76-76, 89-90, 92-97, 103-103


117-120: Testing methods for merge depth.
Both create_for_testing_with_parameters and create_for_testing_with_k_and_merge_depth provide flexible test instantiation. This is helpful for coverage. No issues noted.

Also applies to: 122-129

flexidag/src/lib.rs (1)

3-3: LGTM! New module declaration for block depth management.

The addition of the block_depth module aligns with the broader changes for managing block depth in the DAG.

flexidag/src/consensusdb/mod.rs (1)

4-4: LGTM! New consensus block depth module.

The addition of consensus_block_depth as a public module is well-placed among other consensus-related modules.

flexidag/src/consensusdb/db.rs (4)

3-3: LGTM! Import for block depth types.

The import is properly organized with other consensus-related imports.


25-25: LGTM! New block depth info store field.

The field is properly wrapped in Arc for thread-safe access and follows the pattern of other store fields.


105-108: LGTM! Block depth store initialization.

The initialization follows the established pattern of other stores, using the same configuration parameters.


79-79: LGTM! New column family for block depth info.

The column family is properly added to the database initialization list.

flexidag/src/prune/pruning_point_manager.rs (1)

110-110: LGTM! Code simplification looks good.

The change simplifies the return logic while maintaining the same behavior.

miner/src/generate_block_event_pacemaker.rs (2)

49-49: LGTM! Event subscription management is properly handled.

The NewDagBlockFromPeer event is correctly subscribed to and unsubscribed from in the service lifecycle.

Also applies to: 60-60


98-120: Consider adding documentation for the time-based block generation logic.

The implementation looks correct but would benefit from documentation explaining:

  • The purpose of the time window check
  • Why block generation is triggered after the time window
  • The rationale behind resetting the timer on errors

Additionally, verify that the time window configuration is appropriate for your network's block time target.

✅ Verification successful

🌐 Web query:

What is a typical block time target for blockchain networks?

💡 Result:

Block time refers to the average duration required for a blockchain network to generate a new block containing validated transactions. This metric varies significantly across blockchain protocols due to differing design goals and consensus mechanisms[1][3][7].

### Typical Block Time Targets
Here are common examples of blockchain networks and their expected block times:

| Blockchain      | Expected Block Time | Key Factors Influencing Time |
|-----------------|---------------------|------------------------------|
| **Bitcoin**     | 10 minutes          | Proof-of-Work difficulty adjustments every 2016 blocks[6][10] |
| **Ethereum**    | 12–14 seconds       | Transitioning from Proof-of-Work to Proof-of-Stake consensus[2][7] |
| **Litecoin**    | 2.5 minutes         | Scrypt-based mining algorithm[3] |
| **Solana**      | 400–800 milliseconds| Optimized parallel processing architecture[7] |
| **Cardano**     | 20 seconds           | Ouroboros Proof-of-Stake protocol[7] |

### Key Observations
1. **Speed vs Security Trade-Off**:  
   Shorter block times (e.g., Ethereum’s ~13 seconds) enable faster transactions but increase orphaned block risks[4][8]. Longer intervals like Bitcoin’s 10 minutes prioritize network stability and security[10].

2. **Difficulty Adjustment**:  
   Proof-of-Work chains like Bitcoin automatically recalibrate mining difficulty every 2016 blocks (~2 weeks) to maintain target block times despite hash rate fluctuations[6][10].

3. **Protocol-Specific Targets**:  
   - Bitcoin Cash mirrors Bitcoin’s 10-minute target[3]  
   - Binance Smart Chain averages ~3 seconds[7]  
   - Polkadot ranges between 6–12 seconds depending on network configuration[7]

For real-time tracking, platforms like Chainspect provide live metrics for various chains[1].

Citations:

- 1: https://chainspect.app/blog/block-time
- 2: https://www.nadcab.com/blog/block-interval
- 3: https://www.bitstamp.net/en-gb/learn/crypto-101/what-is-block-time/
- 4: https://www.uniblock.dev/glossary/block-time-web3-explained-56994
- 5: https://medium.facilelogin.com/the-mystery-behind-block-time-63351e35603a?gi=5eda1cba8846
- 6: https://learnmeabitcoin.com/technical/mining/target/
- 7: https://www.nervos.org/knowledge-base/block_time_in_blockchain_(explainCKBot)
- 8: https://woolypooly.com/en/blog/block-time
- 9: https://www.investopedia.com/terms/b/block-time-cryptocurrency.asp
- 10: https://newhedge.io/bitcoin/block-speed

Documentation for Time-Based Block Generation Logic

The implementation is correct, but adding documentation would improve clarity. Consider including details on the following:

  • The time window check compares the elapsed time since the last received DAG block against the configured threshold (via self.config.miner.dag_block_receive_time_window()), ensuring that block generation is triggered only after a set period without receiving new blocks.
  • A new block event is triggered if the elapsed time exceeds the configured time window, which should be chosen based on your network's block time target. (For example, blockchain networks can have block times ranging from around 10 minutes for Bitcoin to a few seconds for Ethereum or newer protocols.)
  • On encountering an error in calculating the duration, the timer is reset (and an error logged) to prevent stale state issues and ensure consistent behavior in subsequent events.

The web query confirms that acceptable block time targets vary by network, so double-check that your configuration aligns with your specific network requirements.
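The time-window gating described here could be sketched as follows. This is a simplified model using std::time, with hypothetical struct and method names; the real service runs inside the actor framework and reads the window from the miner configuration.

```rust
use std::time::{Duration, Instant};

// Sketch of the elapsed-time gate described above: a block-generation
// event fires only if no DAG block has arrived from a peer within the
// configured window. Names are illustrative, not the PR's API.
struct Pacemaker {
    last_time_received: Instant,
    receive_time_window: Duration,
}

impl Pacemaker {
    /// Called on NewDagBlockFromPeer. Returns true if a generate-block
    /// event should fire because the quiet period has elapsed; resets
    /// the timer either way so the window restarts on every block.
    fn on_new_dag_block_from_peer(&mut self, now: Instant) -> bool {
        let should_fire =
            now.duration_since(self.last_time_received) >= self.receive_time_window;
        self.last_time_received = now;
        should_fire
    }
}

fn main() {
    let start = Instant::now();
    let mut p = Pacemaker {
        last_time_received: start,
        receive_time_window: Duration::from_secs(2),
    };
    // A block arriving inside the window does not trigger generation.
    assert!(!p.on_new_dag_block_from_peer(start + Duration::from_secs(1)));
    // After 3 quiet seconds the window has elapsed, so generation fires.
    assert!(p.on_new_dag_block_from_peer(start + Duration::from_secs(4)));
    println!("pacemaker window logic ok");
}
```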

config/src/miner_config.rs (3)

46-52: LGTM! Configuration structure is well-defined.

The new configuration options are:

  • Properly documented with CLI flags
  • Correctly marked as optional
  • Include appropriate serialization control

84-91: LGTM! Getter methods provide good defaults.

The implementation correctly falls back to static constants when values are not provided.
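The fallback pattern can be illustrated with a small sketch. The constant values (2 and 3600) mirror the defaults discussed in this review; the struct shape and field types are simplified stand-ins for the actual config:

```rust
// Sketch of optional config fields falling back to static defaults,
// mirroring the getter pattern the review approves of. The struct is a
// simplified stand-in; only the fallback behavior is the point.
const G_DAG_BLOCK_RECEIVE_TIME_WINDOW: u64 = 2;
const G_MERGE_DEPTH: u64 = 3600;

#[derive(Default)]
struct MinerConfig {
    dag_block_receive_time_window: Option<u64>,
    dag_merge_depth: Option<u64>,
}

impl MinerConfig {
    fn dag_block_receive_time_window(&self) -> u64 {
        self.dag_block_receive_time_window
            .unwrap_or(G_DAG_BLOCK_RECEIVE_TIME_WINDOW)
    }

    fn dag_merge_depth(&self) -> u64 {
        self.dag_merge_depth.unwrap_or(G_MERGE_DEPTH)
    }
}

fn main() {
    // Unset fields fall back to the static constants.
    let default_cfg = MinerConfig::default();
    assert_eq!(default_cfg.dag_block_receive_time_window(), 2);
    assert_eq!(default_cfg.dag_merge_depth(), 3600);

    // Explicit values override the defaults.
    let custom = MinerConfig { dag_merge_depth: Some(100), ..Default::default() };
    assert_eq!(custom.dag_merge_depth(), 100);
    println!("config fallbacks ok");
}
```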


11-12: Verify the default values for DAG parameters.

The default values need verification:

  • G_DAG_BLOCK_RECEIVE_TIME_WINDOW = 2s: Is this sufficient for network propagation?
  • G_MERGE_DEPTH = 3600: The comment indicates this should be smaller than pruning finality.
✅ Verification successful

🌐 Web query:

What is a safe block propagation time in blockchain networks?

💡 Result:

Block propagation time—the duration for a newly created block to reach all network nodes—varies significantly across blockchain designs but is critical for maintaining network security and efficiency. A "safe" propagation time minimizes risks like chain splits (forks) and selfish mining while aligning with the blockchain's consensus rules and performance requirements.

### Key Factors Influencing Safe Propagation Time
1. **Network Size & Topology**:  
   - Larger networks (100+ nodes) inherently take longer due to more hops[1][3].  
   - Decentralized structures using protocols like **Gossipsub** achieve sub-200ms propagation in small-to-medium networks (10–100 nodes)[1].

2. **Block Size & Frequency**:  
   - Bitcoin’s **10-minute block interval** tolerates median propagation times of **6.5–12.6 seconds**[3][7], while high-frequency chains (e.g., Dymension’s **0.2-second blocks**) require sub-200ms times[1].  
   - Larger blocks (>100KB) increase congestion risks; Dymension caps blocks at 100KB to avoid degradation[1].

3. **Security Trade-offs**:  
   - Slower propagation increases vulnerability to **51% attacks** or stale blocks[3][5].  
   - Bitcoin’s design prioritizes security over speed, accepting longer confirmation times[2][7].

### Benchmark Examples
| Blockchain       | Block Time | Safe Propagation Time | Key Mechanism          |  
|------------------|------------|-----------------------|------------------------|  
| Bitcoin          | 10 minutes | ≤40 seconds           | Gossip protocol[3][7] |  
| Dymension Rollup | 0.2 seconds| ≤200ms                | Optimized Gossipsub[1]|  

### Challenges & Solutions
- **Latency/Bandwidth**: High-latency links delay propagation; solutions like compact blocks reduce data transfer[5].  
- **Sybil/Spam Attacks**: Permissionless networks use signature checks (e.g., sequencer validation) to filter spam[1].  
- **Node Churn**: Gossipsub recovers within ~1 second when nodes join/leave[1].

### Conclusion
A safe propagation time is context-dependent:  
- **Bitcoin-like chains** tolerate ~10–40 seconds due to infrequent blocks and robust consensus[2][3].  
- **High-speed networks** (e.g., rollups) require sub-second times (<200ms) to prevent forks during rapid block production[1].  
Balancing speed with security remains central—adjusting parameters like block size, node degree (e.g., Gossipsub’s D=8), and validation checks ensures reliability[1][5].

Citations:

- 1: https://forum.dymension.xyz/t/rapid-block-propagation-mechanism/45
- 2: https://www.nadcab.com/blog/block-time-in-bitcoin
- 3: https://thebitcoinmanual.com/articles/block-propagation/
- 4: https://www.uniblock.dev/glossary/block-propagation-web3-explained-a9531
- 5: https://www.nadcab.com/blog/block-propagation-in-blockchain
- 6: https://cryptosafetyfirst.com/transaction-propagation/
- 7: https://medium.facilelogin.com/the-mystery-behind-block-time-63351e35603a?gi=5eda1cba8846
- 8: https://www.topcoder.com/thrive/articles/block-propagation-in-blockchain

Default DAG parameter values look acceptable.

  • G_DAG_BLOCK_RECEIVE_TIME_WINDOW is set to 2 seconds, which provides a reasonable propagation window for networks with moderate block intervals. (In many blockchain systems, even with Bitcoin tolerating propagation times up to ~40 seconds, a 2‑second window is a practical default when blocks are expected at faster cadences.)
  • G_MERGE_DEPTH is defined as 3600, ensuring it remains lower than the anticipated pruning finality.

These defaults appear consistent with the intended design goals, though final tuning might depend on your network’s specific performance and consensus requirements.

node/src/node.rs (1)

292-292: Verify DAG initialization with the new merge depth parameter.

The change correctly integrates the merge depth configuration, but since this affects DAG initialization, please ensure:

  1. The merge depth is properly propagated to all DAG operations
  2. The value doesn't conflict with other DAG parameters
  3. Existing DAG data remains compatible

Consider adding validation to ensure the merge depth is always smaller than the pruning finality to prevent potential issues.
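The suggested validation could be as simple as the following sketch (a hypothetical helper, not present in the PR; a plain Result<(), String> replaces the project's error type to keep it self-contained):

```rust
// Sketch of the suggested startup validation: the merge depth must stay
// strictly below pruning finality. Names are illustrative.
fn validate_dag_params(merge_depth: u64, pruning_finality: u64) -> Result<(), String> {
    if merge_depth >= pruning_finality {
        return Err(format!(
            "dag_merge_depth ({merge_depth}) must be smaller than pruning finality ({pruning_finality})"
        ));
    }
    Ok(())
}

fn main() {
    assert!(validate_dag_params(3600, 10000).is_ok());
    assert!(validate_dag_params(10000, 3600).is_err());
    println!("dag param validation ok");
}
```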

sync/src/block_connector/block_connector_service.rs (2)

41-41: LGTM!

The import of NewDagBlockFromPeer is correctly added to support the new event broadcasting functionality.


384-386: LGTM!

The broadcast of NewDagBlockFromPeer event is correctly added after successful block connection, and it's properly guarded by the sync status check.

sync/src/sync.rs (2)

38-40: LGTM!

The imports are correctly organized to include NewDagBlockFromPeer along with other system events.


698-700: LGTM!

The SpecificSyncDone event struct and its handler are well-implemented, providing a clean way to notify when specific sync is complete.

Also applies to: 701-705

flexidag/tests/tests.rs (3)

23-23: LGTM!

The U256 type is correctly imported to handle block difficulty values.


1256-1288: LGTM!

The add_and_print_with_pruning_point_and_difficulty helper function is well-implemented, providing a clean way to create test blocks with specific difficulty and pruning points.


1291-1496: LGTM!

The test_merge_bounded test is comprehensive and well-structured:

  • Tests initialization with specific parameters (k=3, pruning_depth=4, pruning_finality=3, merge_depth=3)
  • Verifies pruning point calculations and merge depth checks
  • Includes assertions to validate the behavior
chain/src/chain.rs (1)

1396-1407: LGTM!

The verify_and_ghostdata method is enhanced with proper bounded merge depth verification:

  1. Gets the DAG instance
  2. Verifies ghost data
  3. Checks bounded merge depth with the correct pruning configuration


Benchmark for 1147e2d

Test Base PR %
accumulator_append 789.3±109.95µs 780.1±117.11µs -1.17%
block_apply/block_apply_10 374.0±20.88ms 367.7±5.48ms -1.68%
block_apply/block_apply_1000 40.2±0.68s 40.9±2.04s +1.74%
get_with_proof/db_store 47.3±3.89µs 44.5±5.27µs -5.92%
get_with_proof/mem_store 36.2±3.15µs 35.5±1.54µs -1.93%
put_and_commit/db_store/1 121.9±14.23µs 122.7±12.75µs +0.66%
put_and_commit/db_store/10 1099.0±83.41µs 1031.2±59.52µs -6.17%
put_and_commit/db_store/100 9.4±0.48ms 9.8±1.03ms +4.26%
put_and_commit/db_store/5 561.8±87.40µs 513.7±25.98µs -8.56%
put_and_commit/db_store/50 5.0±0.28ms 4.8±0.39ms -4.00%
put_and_commit/mem_store/1 71.9±7.92µs 79.0±17.06µs +9.87%
put_and_commit/mem_store/10 648.4±52.67µs 654.9±51.77µs +1.00%
put_and_commit/mem_store/100 6.4±0.33ms 6.5±0.55ms +1.56%
put_and_commit/mem_store/5 337.2±40.02µs 339.6±38.12µs +0.71%
put_and_commit/mem_store/50 3.2±0.26ms 3.2±0.26ms 0.00%
query_block/query_block_in(10)_times(100) 8.4±0.43ms 8.3±0.39ms -1.19%
query_block/query_block_in(10)_times(1000) 83.1±2.57ms 81.6±2.26ms -1.81%
query_block/query_block_in(10)_times(10000) 842.0±40.01ms 826.6±31.10ms -1.83%
query_block/query_block_in(1000)_times(100) 1295.4±135.86µs 1167.8±31.73µs -9.85%
query_block/query_block_in(1000)_times(1000) 12.2±0.28ms 12.5±0.94ms +2.46%
query_block/query_block_in(1000)_times(10000) 128.6±6.85ms 120.7±3.15ms -6.14%
storage_transaction 1180.7±414.18µs 1107.7±401.36µs -6.18%
vm/transaction_execution/1 406.8±13.89ms 396.0±10.14ms -2.65%
vm/transaction_execution/10 128.6±7.81ms 127.4±7.04ms -0.93%
vm/transaction_execution/20 116.7±3.98ms 122.8±5.49ms +5.23%
vm/transaction_execution/5 158.5±5.85ms 157.7±7.08ms -0.50%
vm/transaction_execution/50 138.3±7.88ms 139.0±9.27ms +0.51%


Benchmark for 2151222

Test Base PR %
accumulator_append 756.4±61.27µs 765.7±87.29µs +1.23%
block_apply/block_apply_10 365.7±7.48ms 361.6±3.60ms -1.12%
block_apply/block_apply_1000 39.8±0.72s 40.8±1.44s +2.51%
get_with_proof/db_store 43.6±3.59µs 43.9±2.05µs +0.69%
get_with_proof/mem_store 35.9±2.55µs 36.6±3.08µs +1.95%
put_and_commit/db_store/1 164.9±41.87µs 116.9±7.37µs -29.11%
put_and_commit/db_store/10 1039.2±86.01µs 1045.9±61.55µs +0.64%
put_and_commit/db_store/100 9.8±0.68ms 9.7±0.75ms -1.02%
put_and_commit/db_store/5 527.0±20.90µs 586.0±54.97µs +11.20%
put_and_commit/db_store/50 5.1±0.42ms 4.9±0.30ms -3.92%
put_and_commit/mem_store/1 71.1±8.59µs 69.8±6.60µs -1.83%
put_and_commit/mem_store/10 693.2±66.91µs 663.9±80.67µs -4.23%
put_and_commit/mem_store/100 6.4±0.63ms 8.4±1.10ms +31.25%
put_and_commit/mem_store/5 330.5±37.26µs 337.2±32.59µs +2.03%
put_and_commit/mem_store/50 3.2±0.19ms 3.3±0.23ms +3.13%
query_block/query_block_in(10)_times(100) 8.3±0.45ms 8.5±0.49ms +2.41%
query_block/query_block_in(10)_times(1000) 83.7±1.73ms 83.5±3.31ms -0.24%
query_block/query_block_in(10)_times(10000) 863.9±54.92ms 834.5±30.91ms -3.40%
query_block/query_block_in(1000)_times(100) 1188.1±26.93µs 1207.8±65.06µs +1.66%
query_block/query_block_in(1000)_times(1000) 12.7±0.97ms 12.0±0.27ms -5.51%
query_block/query_block_in(1000)_times(10000) 119.1±4.64ms 122.1±5.18ms +2.52%
storage_transaction 1085.5±379.84µs 1154.3±492.48µs +6.34%
vm/transaction_execution/1 402.3±10.42ms 404.6±16.13ms +0.57%
vm/transaction_execution/10 133.6±16.35ms 128.9±4.78ms -3.52%
vm/transaction_execution/20 121.6±7.21ms 118.0±4.93ms -2.96%
vm/transaction_execution/5 156.8±5.68ms 154.2±5.95ms -1.66%
vm/transaction_execution/50 138.5±7.42ms 139.3±10.81ms +0.58%


Benchmark for 9c056b8

Test Base PR %
accumulator_append 839.0±161.26µs 860.2±165.76µs +2.53%
block_apply/block_apply_10 384.9±20.91ms 371.3±10.40ms -3.53%
block_apply/block_apply_1000 42.0±1.57s 41.9±2.02s -0.24%
get_with_proof/db_store 44.2±1.84µs 47.6±5.26µs +7.69%
get_with_proof/mem_store 38.0±3.93µs 35.8±2.80µs -5.79%
put_and_commit/db_store/1 122.1±11.51µs 133.7±20.09µs +9.50%
put_and_commit/db_store/10 1133.1±104.84µs 1145.2±151.31µs +1.07%
put_and_commit/db_store/100 10.0±0.92ms 10.5±1.55ms +5.00%
put_and_commit/db_store/5 542.1±41.95µs 545.5±44.83µs +0.63%
put_and_commit/db_store/50 5.1±0.21ms 5.0±0.33ms -1.96%
put_and_commit/mem_store/1 70.8±7.68µs 74.4±10.82µs +5.08%
put_and_commit/mem_store/10 663.2±66.41µs 677.9±59.75µs +2.22%
put_and_commit/mem_store/100 6.6±0.45ms 6.6±0.68ms 0.00%
put_and_commit/mem_store/5 337.9±31.35µs 345.3±55.21µs +2.19%
put_and_commit/mem_store/50 3.2±0.14ms 3.3±0.41ms +3.13%
query_block/query_block_in(10)_times(100) 8.6±0.40ms 8.2±0.33ms -4.65%
query_block/query_block_in(10)_times(1000) 87.4±6.49ms 87.1±7.47ms -0.34%
query_block/query_block_in(10)_times(10000) 858.2±40.94ms 832.3±27.87ms -3.02%
query_block/query_block_in(1000)_times(100) 1890.3±480.82µs 1447.6±304.82µs -23.42%
query_block/query_block_in(1000)_times(1000) 12.2±0.31ms 12.4±0.69ms +1.64%
query_block/query_block_in(1000)_times(10000) 128.3±9.02ms 136.3±20.50ms +6.24%
storage_transaction 1113.6±449.61µs 1124.2±432.17µs +0.95%
vm/transaction_execution/1 445.3±30.64ms 422.2±17.30ms -5.19%
vm/transaction_execution/10 141.7±11.04ms 135.3±13.20ms -4.52%
vm/transaction_execution/20 174.5±24.19ms 125.8±9.49ms -27.91%
vm/transaction_execution/5 174.0±15.22ms 163.3±11.14ms -6.15%
vm/transaction_execution/50 152.5±13.51ms 141.3±6.63ms -7.34%
