blockprod: Ability to specify custom transactions #1270

Merged (2 commits) on Oct 16, 2023
100 changes: 56 additions & 44 deletions blockprod/src/detail/mod.rs
@@ -30,9 +30,9 @@ use common::{
block_body::BlockBody, signed_block_header::SignedBlockHeader,
timestamp::BlockTimestamp, BlockCreationError, BlockHeader, BlockReward, ConsensusData,
},
Block, ChainConfig, GenBlock, SignedTransaction,
Block, ChainConfig, GenBlock, SignedTransaction, Transaction,
},
primitives::{BlockHeight, Id},
primitives::{Amount, BlockHeight, Id, Idable},
time_getter::TimeGetter,
};
use consensus::{
@@ -42,7 +42,7 @@ use consensus::{
use crypto::random::{make_true_rng, Rng};
use logging::log;
use mempool::{
tx_accumulator::{DefaultTxAccumulator, TransactionAccumulator},
tx_accumulator::{DefaultTxAccumulator, PackingStrategy, TransactionAccumulator},
MempoolHandle,
};
use p2p::P2pHandle;
@@ -180,19 +180,31 @@ impl BlockProduction {
&self,
current_tip: Id<GenBlock>,
min_block_timestamp: BlockTimestamp,
) -> Result<Option<Box<dyn TransactionAccumulator>>, BlockProductionError> {
let max_block_size = self.chain_config.max_block_size_from_std_scripts();
transactions: Vec<SignedTransaction>,
transaction_ids: Vec<Id<Transaction>>,
packing_strategy: PackingStrategy,
) -> Result<Box<dyn TransactionAccumulator>, BlockProductionError> {
let mut accumulator = Box::new(DefaultTxAccumulator::new(
self.chain_config.max_block_size_from_std_scripts(),
current_tip,
min_block_timestamp,
));

for transaction in transactions.into_iter() {
let transaction_id = transaction.transaction().get_id();

accumulator
.add_tx(transaction, Amount::ZERO.into())
.map_err(|err| BlockProductionError::FailedToAddTransaction(transaction_id, err))?
Comment on lines +196 to +198

Collaborator: How are we making sure that these raw transactions are valid against what's coming from the mempool? I do hope we're not depending on the temporary solution we deployed that we wanted to solve later, where we include a transaction verifier inside the accumulator.

Contributor (author): There is no guarantee currently. Even a transaction verifier is not sufficient, because mempool deps are used to figure out the parent/child relationships, so it only works properly for transactions that are "independent" of transactions in the mempool. It could be removed, since the guarantees are much weaker than for transactions specified by IDs.

Collaborator: I thought we agreed to just remove it...

Contributor (author): Ok, will do.

Contributor (author): Hmm, there are many existing functional tests that rely on this functionality, and some are not easy to adapt so the transactions can be pushed through the mempool.

Collaborator: Let's just keep it, and write in the documentation that adding transactions there relies on the user ensuring the validity, and that it is done only in testing.
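A minimal self-contained sketch (with hypothetical simplified types, not the project's real API) of what this thread is pointing at: transactions supplied directly are pushed into the accumulator with a zero fee and without mempool validation, so their validity is entirely the caller's responsibility.

```rust
// Hypothetical, simplified stand-in for DefaultTxAccumulator; the real type
// lives in mempool::tx_accumulator and has a different signature.
struct Accumulator {
    max_size: usize,
    used: usize,
    txs: Vec<(String, u64)>, // (tx id, fee)
}

impl Accumulator {
    fn new(max_size: usize) -> Self {
        Self { max_size, used: 0, txs: Vec::new() }
    }

    // Mirrors the `add_tx(transaction, Amount::ZERO.into())` call in the diff:
    // the transaction is accepted as-is, with a zero fee and no validation.
    fn add_tx(&mut self, id: &str, size: usize, fee: u64) -> Result<(), String> {
        if self.used + size > self.max_size {
            return Err(format!("block full, cannot add {id}"));
        }
        self.used += size;
        self.txs.push((id.to_string(), fee));
        Ok(())
    }
}

fn main() {
    let mut acc = Accumulator::new(100);
    // Caller-supplied transactions go in first, with a zero fee attached:
    acc.add_tx("custom-1", 40, 0).unwrap();
    // The mempool can then top up the remaining space with fee-ordered,
    // validated transactions (the packing-strategy side of this PR).
    acc.add_tx("mempool-1", 50, 7).unwrap();
    assert_eq!(acc.txs.len(), 2);
    assert!(acc.add_tx("too-big", 20, 1).is_err());
}
```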

}

let returned_accumulator = self
.mempool_handle
.call(move |mempool| {
mempool.collect_txs(Box::new(DefaultTxAccumulator::new(
max_block_size,
current_tip,
min_block_timestamp,
)))
mempool.collect_txs(accumulator, transaction_ids, packing_strategy)
})
.await?
.map_err(|_| BlockProductionError::MempoolChannelClosed)?;
.await??;

Ok(returned_accumulator)
}

@@ -310,15 +322,26 @@ impl BlockProduction {
pub async fn produce_block(
&self,
input_data: GenerateBlockInputData,
transactions_source: TransactionsSource,
transactions: Vec<SignedTransaction>,
transaction_ids: Vec<Id<Transaction>>,
packing_strategy: PackingStrategy,
) -> Result<(Block, oneshot::Receiver<usize>), BlockProductionError> {
self.produce_block_with_custom_id(input_data, transactions_source, None).await
self.produce_block_with_custom_id(
input_data,
transactions,
transaction_ids,
packing_strategy,
None,
)
.await
}
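The new `produce_block` signature splits caller-supplied transactions into two arguments: whole `SignedTransaction`s taken verbatim, and `Id<Transaction>`s that the mempool resolves from its own (already validated) contents. A rough sketch of that split, using hypothetical simplified types rather than the project's real ones:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Tx(String);

struct Mempool {
    by_id: HashMap<String, Tx>,
}

impl Mempool {
    // Direct transactions are taken on trust (the caller ensures validity);
    // ids must resolve to entries the mempool has already validated.
    fn collect(&self, direct: Vec<Tx>, ids: Vec<String>) -> Result<Vec<Tx>, String> {
        let mut out = direct;
        for id in ids {
            let tx = self.by_id.get(&id).ok_or(format!("unknown tx id {id}"))?;
            out.push(tx.clone());
        }
        Ok(out)
    }
}

fn main() {
    let mempool = Mempool {
        by_id: HashMap::from([("m1".to_string(), Tx("m1".to_string()))]),
    };
    let txs = mempool
        .collect(vec![Tx("c1".to_string())], vec!["m1".to_string()])
        .unwrap();
    assert_eq!(txs.len(), 2);
    // Unknown ids fail hard, unlike direct transactions:
    assert!(mempool.collect(vec![], vec!["nope".to_string()]).is_err());
}
```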

async fn produce_block_with_custom_id(
&self,
input_data: GenerateBlockInputData,
transactions_source: TransactionsSource,
transactions: Vec<SignedTransaction>,
transaction_ids: Vec<Id<Transaction>>,
packing_strategy: PackingStrategy,
custom_id_maybe: Option<Vec<u8>>,
) -> Result<(Block, oneshot::Receiver<usize>), BlockProductionError> {
if !self.blockprod_config.skip_ibd_check {
@@ -435,36 +458,25 @@ impl BlockProduction {
));
}

// TODO: see if we can simplify this
let transactions = match transactions_source.clone() {
TransactionsSource::Mempool => {
// We conservatively use the minimum timestamp here in order to figure out
// which transactions are valid for the block.
// TODO: Alternatively, we can construct the transaction sequence from the
// scratch every time a different timestamp is attempted. That is more costly
// in terms of computational resources but will allow the node to include more
// transactions since the passing time may release some time locks.
let accumulator = self
.collect_transactions(
current_tip_index.block_id(),
min_constructed_block_timestamp,
)
.await?;
match accumulator {
Some(acc) => acc.transactions().clone(),
None => {
// If the mempool rejects the accumulator (due to tip mismatch, or otherwise), we should start over and try again
log::info!(
"Mempool rejected the transaction accumulator. Restarting the block production attempt."
);
continue;
}
}
}
TransactionsSource::Provided(txs) => txs,
};
// We conservatively use the minimum timestamp here in order to figure out
// which transactions are valid for the block.
// TODO: Alternatively, we can construct the transaction sequence from
// scratch every time a different timestamp is attempted. That is more costly
// in terms of computational resources but will allow the node to include more
// transactions since the passing time may release some time locks.
Contributor: Going through the above code and then this change, I'm thinking that the minimum here should be MIN(last_timestamp_seconds_used, min_constructed_block_timestamp), given that transactions may be earlier than min_constructed_block_timestamp.

min_constructed_block_timestamp should probably be locally scoped to the construction of max_constructed_block_timestamp, as it's only used to calculate that value.
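The reviewer's suggestion amounts to taking the earlier of the two timestamps when collecting transactions, so that transactions whose time locks released before min_constructed_block_timestamp are not excluded. A tiny sketch (hypothetical function and parameter names, timestamps as plain seconds):

```rust
// Hypothetical helper illustrating MIN(last_timestamp_seconds_used,
// min_constructed_block_timestamp) from the review comment above.
fn collection_timestamp(last_timestamp_seconds_used: u64, min_constructed: u64) -> u64 {
    std::cmp::min(last_timestamp_seconds_used, min_constructed)
}

fn main() {
    // Earlier last-used timestamp wins:
    assert_eq!(collection_timestamp(100, 120), 100);
    // Otherwise the minimum constructed block timestamp is used:
    assert_eq!(collection_timestamp(150, 120), 120);
}
```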

let collected_transactions = self
.collect_transactions(
current_tip_index.block_id(),
min_constructed_block_timestamp,
transactions.clone(),
transaction_ids.clone(),
packing_strategy,
)
.await?
.transactions()
.clone();

let block_body = BlockBody::new(block_reward, transactions);
let block_body = BlockBody::new(block_reward, collected_transactions);

// A synchronous channel that sends only when the mining/staking is done
let (ended_sender, ended_receiver) = mpsc::channel::<()>();