
Use sync calls to add txs to pool in tests #2277

Merged: 1 commit into master, Mar 4, 2024

Conversation

piersy (Contributor) commented on Feb 29, 2024

Without the use of sync calls there is no guarantee that the txpool reorg will run between adding txs and checking some condition, which causes tests to fail randomly.

This PR ensures the use of sync calls.
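
For illustration, a minimal sketch of the pattern the tests move to, assuming the sync variant carries the upstream go-ethereum name (AddRemotesSync) in this fork; the addTxsSynced helper and the import path are hypothetical, not code from this PR. The sync call only returns once the pool's reorg has run, so the Stats() / validateTxPoolInternals() checks that follow it observe the promoted transactions instead of racing the reorg loop:

```go
package core

import (
	"testing"

	"github.com/celo-org/celo-blockchain/core/types" // assumed fork module path
)

// addTxsSynced is a hypothetical test helper: AddRemotesSync blocks until the
// txpool reorg triggered by the add has completed, whereas AddRemotes returns
// immediately and leaves promotion to run asynchronously, letting a subsequent
// Stats() check race against it.
func addTxsSynced(t *testing.T, pool *TxPool, txs []*types.Transaction) {
	t.Helper()
	for _, err := range pool.AddRemotesSync(txs) {
		if err != nil {
			t.Fatalf("failed to add transaction: %v", err)
		}
	}
}
```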

Tested

To verify my suspicion that the reorg not running was the cause of the error I was seeing in TestTransactionPoolUnderpricing, I used the following patch (which skips every reorg after the first two) to reliably recreate the error.

diff --git a/core/tx_pool.go b/core/tx_pool.go
index 9a3003d6f..0a35c8a76 100644
--- a/core/tx_pool.go
+++ b/core/tx_pool.go
@@ -263,6 +263,8 @@ type txPoolContext struct {
 // current state) and future transactions. Transactions move between those
 // two states over time as they are received and processed.
 type TxPool struct {
+	reorgSkip   int32
+	skipLock    sync.Mutex
 	config      TxPoolConfig
 	chainconfig *params.ChainConfig
 	chain       blockChain
@@ -1248,6 +1250,17 @@ func (pool *TxPool) scheduleReorgLoop() {

 // runReorg runs reset and promoteExecutables on behalf of scheduleReorgLoop.
 func (pool *TxPool) runReorg(done chan struct{}, reset *txpoolResetRequest, dirtyAccounts *accountSet, events map[common.Address]*txSortedMap) {
+	pool.skipLock.Lock()
+	pool.reorgSkip++
+	if pool.reorgSkip >= 3 {
+		pool.skipLock.Unlock()
+		close(done)
+		return
+	}
+	pool.skipLock.Unlock()
+
+	println("reorging")
+
 	defer func(t0 time.Time) {
 		reorgDurationTimer.Update(time.Since(t0))
 	}(time.Now())
diff --git a/core/tx_pool_test.go b/core/tx_pool_test.go
index 2a2ae348f..e48733f05 100644
--- a/core/tx_pool_test.go
+++ b/core/tx_pool_test.go
@@ -1899,7 +1899,9 @@ func TestTransactionPoolUnderpricing(t *testing.T) {
 	ltx := pricedTransaction(0, 100000, big.NewInt(1), keys[2])

 	// Import the batch and that both pending and queued transactions match up
+	println("adding remote")
 	pool.AddRemotes(txs)
+	println("adding local")
 	pool.AddLocal(ltx)

 	pending, queued := pool.Stats()
@@ -1915,17 +1917,21 @@ func TestTransactionPoolUnderpricing(t *testing.T) {
 	if err := validateTxPoolInternals(pool); err != nil {
 		t.Fatalf("pool internal state corrupted: %v", err)
 	}
+	println("adding remote")
 	// Ensure that adding an underpriced transaction on block limit fails
 	if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(1), keys[1])); err != ErrUnderpriced {
 		t.Fatalf("adding underpriced pending transaction error mismatch: have %v, want %v", err, ErrUnderpriced)
 	}
+	println("adding remote")
 	// Ensure that adding high priced transactions drops cheap ones, but not own
 	if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(3), keys[1])); err != nil { // +K1:0 => -K1:1 => Pend K0:0, K0:1, K1:0, K2:0; Que -
 		t.Fatalf("failed to add well priced transaction: %v", err)
 	}
+	println("adding remote")
 	if err := pool.AddRemote(pricedTransaction(2, 100000, big.NewInt(4), keys[1])); err != nil { // +K1:2 => -K0:0 => Pend K1:0, K2:0; Que K0:1 K1:2
 		t.Fatalf("failed to add well priced transaction: %v", err)
 	}
+	println("adding remote")
 	if err := pool.AddRemote(pricedTransaction(3, 100000, big.NewInt(5), keys[1])); err != nil { // +K1:3 => -K0:1 => Pend K1:0, K2:0; Que K1:2 K1:3
 		t.Fatalf("failed to add well priced transaction: %v", err)
 	}


github-actions bot commented Feb 29, 2024

Coverage from tests in ./e2e_test/... for ./consensus/istanbul/... at commit 2aa6d46

coverage: 46.9% of statements across all listed packages
coverage:  57.2% of statements in consensus/istanbul
coverage:  23.7% of statements in consensus/istanbul/announce
coverage:  54.4% of statements in consensus/istanbul/backend
coverage:   0.0% of statements in consensus/istanbul/backend/backendtest
coverage:  24.3% of statements in consensus/istanbul/backend/internal/replica
coverage:  64.9% of statements in consensus/istanbul/core
coverage:  45.0% of statements in consensus/istanbul/db
coverage:   0.0% of statements in consensus/istanbul/proxy
coverage:  64.2% of statements in consensus/istanbul/uptime
coverage:  51.8% of statements in consensus/istanbul/validator
coverage:  79.2% of statements in consensus/istanbul/validator/random


5891 passed, 45 skipped

karlb (Contributor) left a comment


This sounds reasonable for at least some of the cases. I'm not sure we need the sync everywhere but I also don't see any harm in it (it can only slow things down, but not to a degree that matters, AFAICS).

As so often, upstream already solved the issue in at least some cases, see ethereum/go-ethereum@c866dfd#diff-6083272455b210c86aaa70e7a348d038d2867870cedb4d402b4909abef8fa23b .

piersy (Contributor, Author) commented on Mar 4, 2024

> This sounds reasonable for at least some of the cases. I'm not sure we need the sync everywhere but I also don't see any harm in it (it can only slow things down, but not to a degree that matters, AFAICS).
>
> As so often, upstream already solved the issue in at least some cases, see ethereum/go-ethereum@c866dfd#diff-6083272455b210c86aaa70e7a348d038d2867870cedb4d402b4909abef8fa23b .

Yep, I did check upstream, but it seems unlikely we will do any more upstream merges, so this shouldn't cause us any trouble.

piersy merged commit 2aa6d46 into master on Mar 4, 2024
26 checks passed
piersy deleted the piersy/fix_txpool_flaky_tests branch on Mar 4, 2024 at 11:09