add block traces to framework & add TracesEvent
gibz104 committed Nov 3, 2024
1 parent 6091fa4 commit 480892e
Showing 9 changed files with 436 additions and 65 deletions.
5 changes: 5 additions & 0 deletions Cargo.toml
@@ -11,11 +11,16 @@
reth-node-ethereum = { git = "https://github.com/paradigmxyz/reth", tag = "v1.1.0" }
reth-primitives = { git = "https://github.com/paradigmxyz/reth", tag = "v1.1.0" }
reth-execution-types = { git = "https://github.com/paradigmxyz/reth", tag = "v1.1.0" }
reth-tracing = { git = "https://github.com/paradigmxyz/reth", tag = "v1.1.0" }
reth-rpc = { git = "https://github.com/paradigmxyz/reth", tag = "v1.1.0" }
reth-rpc-eth-types = { git = "https://github.com/paradigmxyz/reth", tag = "v1.1.0" }
reth-rpc-server-types = { git = "https://github.com/paradigmxyz/reth", tag = "v1.1.0" }
reth-tasks = { git = "https://github.com/paradigmxyz/reth", tag = "v1.1.0" }

alloy-rlp = { version = "0.3.8", features = ["derive"] }
alloy-primitives = { version = "0.8.8", features = ["serde"] }
alloy-rpc-types = "0.5.4"
alloy-rpc-types-beacon = { version = "0.5.4", features = ["ssz"] }
alloy-rpc-types-trace = { version = "0.4.2", default-features = false }

chrono = "0.4.38"
eyre = "0.6.12"
69 changes: 35 additions & 34 deletions README.md
@@ -1,31 +1,32 @@
# <h1 align="center">exex-indexer</h1>

**A real-time Ethereum indexer that writes on-chain data and off-chain mev-boost relay data to a Postgres database.
Reth's Execution Extensions (ExEx) framework is used for efficient real-time block notifications and processing.**

[![Build](https://github.com/gibz104/exex-indexer/actions/workflows/build.yml/badge.svg)](https://github.com/gibz104/exex-indexer/actions/workflows/build.yml)

# Datasets
| Dataset           | Source               | Rust Struct           | Postgres Table    |
|-------------------|----------------------|-----------------------|-------------------|
| Headers           | On-chain             | HeadersEvent          | headers           |
| Transactions      | On-chain             | TransactionsEvent     | transactions      |
| Logs              | On-chain             | LogsEvent             | logs              |
| Traces            | On-chain             | TracesEvent           | traces            |
| Uncles            | On-chain             | OmmersEvent           | ommers            |
| Withdrawals       | On-chain             | WithdrawalsEvent      | withdrawals       |
| Builder Bids      | Off-chain (mevboost) | BuilderBidsEvent      | builder_bids      |
| Proposer Payloads | Off-chain (mevboost) | ProposerPayloadsEvent | proposer_payloads |
| ERC-20 Transfers  | Derived (logs)       | Erc20TransfersEvent   | erc20_transfers   |
| Native Transfers  | Derived (traces)     |                       |                   |
| Contracts         | Derived (traces)     |                       |                   |


# Setup

1. Ensure you have Rust and Cargo installed
2. Clone this repository
3. Set the `DATABASE_URL` environment variable to your Postgres connection string
   * example: `postgresql://username:password@host:port/database`
4. Build and run the project with `cargo run`

# Key components
@@ -40,21 +41,21 @@

# How it works
The trait `ProcessingEvent` is set up to run each time a new block is committed. `ProcessingEvent.process()` receives,
or retrieves, data and writes it to Postgres each time a new block is committed, and `ProcessingEvent.revert()` deletes
that data from Postgres each time a block is reverted. Multiple processing events are already set up (`HeadersEvent`,
`BuilderBidsEvent`, etc.) that correspond one-to-one to Postgres tables. However, processing events can also be set up
so that one event writes to multiple Postgres tables, or multiple events write to the same table. Ultimately, each
`ProcessingEvent` implementation defines the compute that runs when a block is either committed or reverted on the
chain. In a way, this is just an orchestration system where the trigger is a new Ethereum block (every 12 seconds) and
the tasks/jobs being triggered are the compute defined in each `ProcessingEvent` implementation.
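The contract described above can be sketched as a plain Rust trait. This is illustrative only: the names `process`/`revert` mirror the README, but the repo's actual trait is async and takes reth chain state plus a database client, so the exact signatures will differ.

```rust
// Minimal sketch of the ProcessingEvent contract described above.
// The real trait in this repo is async and receives chain state and a
// Postgres client; these simplified signatures are for illustration.
trait ProcessingEvent {
    fn name(&self) -> &'static str;
    // Runs when a block is committed: extract data and write it out.
    fn process(&self, block_number: u64) -> Result<(), String>;
    // Runs when a block is reverted: delete what `process` wrote.
    fn revert(&self, block_number: u64) -> Result<(), String>;
}

struct HeadersEvent;

impl ProcessingEvent for HeadersEvent {
    fn name(&self) -> &'static str {
        "HeadersEvent"
    }
    fn process(&self, block_number: u64) -> Result<(), String> {
        // Real implementation: INSERT INTO headers ... for this block.
        println!("indexing headers for block {block_number}");
        Ok(())
    }
    fn revert(&self, block_number: u64) -> Result<(), String> {
        // Real implementation: DELETE FROM headers WHERE block_number = ...
        println!("reverting headers for block {block_number}");
        Ok(())
    }
}

fn main() {
    // Events are trait objects, so one loop can drive them all per block.
    let events: Vec<Box<dyn ProcessingEvent>> = vec![Box::new(HeadersEvent)];
    for event in &events {
        event.process(21_000_000).expect("process failed");
    }
}
```

Because every event satisfies the same trait, enabling or disabling a dataset (as `config.yaml` does) is just a matter of which trait objects get added to the list.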

When a new ExEx commit notification is received, its chain state is added to a tokio mpsc queue. A tokio task reads
from this queue and concurrently runs each `ProcessingEvent` with the provided chain state. The next block's
processing will not begin until all of the current block's processing events have completed, so blocks can build up
in the queue when a block takes longer than 12 seconds to finish its processing events. Most processing events read,
retrieve, and write their data to Postgres quickly; however, getting data from mev-boost relays requires making
external HTTP calls to each running relay, and those calls can take longer. The mpsc queue, and the build-up of
pending blocks it allows, gives this indexer flexibility in the types of processing it can support.
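The queueing behavior above can be sketched with a channel and a single consumer. The project itself uses a tokio mpsc channel and async tasks; this std-only sketch shows the same shape: notifications accumulate in the channel while the consumer finishes the current block, and blocks are always drained in order.

```rust
use std::sync::mpsc;
use std::thread;

// Std-only sketch of the commit-notification pipeline described above.
// The indexer uses tokio's mpsc and async tasks; the pattern is the same.
fn main() {
    let (tx, rx) = mpsc::channel::<u64>();

    // Consumer: drains block numbers strictly in order. A slow block
    // simply lets later notifications build up in the channel.
    let consumer = thread::spawn(move || {
        let mut processed = Vec::new();
        for block_number in rx {
            // run_all_processing_events(block_number) would go here.
            processed.push(block_number);
        }
        processed
    });

    // Producer: one send per commit notification.
    for block_number in 100..105 {
        tx.send(block_number).unwrap();
    }
    drop(tx); // close the channel so the consumer loop ends

    let processed = consumer.join().unwrap();
    assert_eq!(processed, vec![100, 101, 102, 103, 104]);
    println!("processed {} blocks in order", processed.len());
}
```

An unbounded channel matches the README's "build-up is allowed" design; a bounded channel would instead apply backpressure to the notification source.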

# How to add a new `ProcessingEvent`
@@ -65,13 +66,13 @@
* This newly defined processing event/struct will now run each block

# Database schema
- `Headers` - Canonical block headers
- `Transactions` - Canonical block transactions
- `Builder Bids` - Block builder header submissions to mev-boost relays; contains block bid values and the timestamps (in ms) at which the relay received each bid
- `Proposer Payloads` - Block headers mev-boost relays sent to the block proposer
- `Logs` - Smart contract log events emitted from canonical blocks
- `Traces` - Transaction execution traces from canonical blocks
- `Uncles` - Uncle block headers
- `Withdrawals` - Validator withdrawals from the beacon chain that were processed in canonical blocks
- `ERC-20 Transfers` - ERC-20 transfers between addresses, based on emitted contract log events

![img.png](assets/img.png)
Binary file modified assets/img.png
3 changes: 2 additions & 1 deletion config.yaml
@@ -6,4 +6,5 @@
enabled_events:
ProposerPayloadsEvent: true
OmmersEvent: true
WithdrawalsEvent: true
  Erc20TransfersEvent: true
TracesEvent: true
26 changes: 26 additions & 0 deletions src/db.rs
@@ -174,6 +174,32 @@
pub async fn create_tables(client: &Client) -> eyre::Result<()> {
PRIMARY KEY (block_number, transaction_index, log_index)
);
"#,
r#"
CREATE TABLE IF NOT EXISTS traces (
block_number BIGINT NOT NULL,
block_hash VARCHAR NOT NULL,
transaction_hash VARCHAR,
transaction_index INTEGER NOT NULL,
trace_address TEXT NOT NULL,
subtraces INTEGER NOT NULL,
action_type VARCHAR NOT NULL,
from_address VARCHAR,
to_address VARCHAR,
value VARCHAR,
gas BIGINT,
gas_used BIGINT,
input TEXT,
output VARCHAR,
success BOOLEAN,
tx_success BOOLEAN,
error VARCHAR,
deployed_contract_address VARCHAR,
deployed_contract_code VARCHAR,
call_type VARCHAR,
reward_type VARCHAR,
updated_at TIMESTAMP WITH TIME ZONE NOT NULL
);
"#,
];

for query in queries {
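Once the `traces` table exists, per-block trace data can be queried directly. A sketch, using only column names from the DDL above; note the assumption that a transaction's root trace is stored with an empty `trace_address` — the actual encoding of `trace_address` (empty string, `[]`, or a delimited path) depends on how `TracesEvent` serializes it.

```sql
-- Top-level call traces for one block (block number is illustrative).
-- Assumes the root trace of each transaction has an empty trace_address.
SELECT transaction_hash, action_type, from_address, to_address, value
FROM traces
WHERE block_number = 21000000
  AND trace_address = ''
ORDER BY transaction_index;
```

Since `block_number` appears in every lookup and in `revert` deletes, an index on `(block_number)` would likely be worth adding alongside the table.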