Merge zkevm-2.60 to zkevm #1293

Merged
merged 3,473 commits into zkevm from zkevm-2.60
Oct 9, 2024

Conversation


@cffls cffls commented Oct 7, 2024

No description provided.

AskAlexSharov and others added 30 commits April 20, 2024 09:53
This task is mostly implemented to be used in
`erigon/erigon-lib/downloader/mdbx_piece_completion.go` and maybe in
`nodesDB` (where we need many parallel RwTx).

I was against adding this "trick"/"api" in past years, because I thought
we could make our app more 1-big-rwtx-friendly. And we did it
in Erigon - StagedSync. TxPool did too, but with a bit less happy face
- via a "map+mutex with periodic flush to db". But `anacrolix/torrent` is an
external library and is unlikely to survive such a big mental-model change.
Maybe it's time to add `db.Batch()`.

#### Batch Rw transactions

Each `DB.Update()` waits for disk to commit the writes. This overhead
can be minimized by combining multiple updates with the `DB.Batch()`
function:

```go
err := db.Batch(func(tx *bolt.Tx) error {
	...
	return nil
})
```

Concurrent Batch calls are opportunistically combined into larger
transactions. Batch is only useful when there are multiple goroutines
calling it.

The trade-off is that `Batch` can call the given
function multiple times, if parts of the transaction fail. The
function must be idempotent and side effects must take effect only
after a successful return from `DB.Batch()`.

For example: don't display messages from inside the function, instead
set variables in the enclosing scope:

```go
var id uint64
err := db.Batch(func(tx *bolt.Tx) error {
	// Find last key in bucket, decode as bigendian uint64, increment
	// by one, encode back to []byte, and add new key.
	...
	id = newValue
	return nil
})
if err != nil {
	return ...
}
fmt.Println("Allocated ID %d", id)
```
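
As an extra illustration (my own sketch, not from the bbolt README or this PR), concurrent `Batch` calls from several goroutines might look like this; the calls are opportunistically coalesced into fewer commits:

```go
package main

import (
	"encoding/binary"
	"log"
	"sync"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("batch.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Many goroutines calling Batch concurrently are
			// opportunistically combined into one write transaction / fsync.
			err := db.Batch(func(tx *bolt.Tx) error {
				b, err := tx.CreateBucketIfNotExists([]byte("completion"))
				if err != nil {
					return err
				}
				key := make([]byte, 8)
				binary.BigEndian.PutUint64(key, uint64(i))
				return b.Put(key, []byte{1})
			})
			if err != nil {
				log.Println(err)
			}
		}(i)
	}
	wg.Wait()
}
```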


---- 

Implementation mostly taken from
https://github.com/etcd-io/bbolt/?tab=readme-ov-file#batch-read-write-transactions

Maybe in the future it can be pushed down to
https://github.com/erigontech/mdbx-go
- `PutUvarint` can produce up to 10 bytes
- re-using the buffer is faster and causes less gc (see the sketch below)
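
A minimal sketch (my own illustration, not code from this PR) of re-using a single buffer for `binary.PutUvarint`, which writes at most `binary.MaxVarintLen64` (10) bytes:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// One buffer, re-used across encodings: avoids a fresh allocation
	// (and the GC work that goes with it) per value.
	buf := make([]byte, binary.MaxVarintLen64) // at most 10 bytes per uint64
	for _, v := range []uint64{1, 300, 1 << 62} {
		n := binary.PutUvarint(buf, v)
		fmt.Printf("%d -> %d bytes: %x\n", v, n, buf[:n])
	}
}
```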
…or: See comment (erigontech#9972)

* Fixed and simplified the unaggregated-bits check: there are 2 bits on, one
for the attester and one for the end-of-bitlist marker, which has to be
accounted for (see the sketch after this list).
* Fixed wrong publishing topic for sync_committee_ messages.
* Added more Ignore results on receiving specific errors, to avoid forwarding
useless data.
* Replaced `validateAttestation` with full message processing.
* Fixed forwarding of sync committee aggregates.
* Fixed subnet announcements.
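
A hypothetical Go sketch (names are mine, not Caplin's actual code) of the "exactly one attester bit plus the end-of-bitlist bit" check described in the first bullet:

```go
package main

import (
	"fmt"
	"math/bits"
)

// isUnaggregated reports whether an SSZ aggregation bitlist has exactly
// two bits set: one attester bit plus the trailing end-of-bitlist bit.
func isUnaggregated(aggregationBits []byte) bool {
	set := 0
	for _, b := range aggregationBits {
		set += bits.OnesCount8(b)
	}
	return set == 2
}

func main() {
	fmt.Println(isUnaggregated([]byte{0b0000_0101})) // true: one attester bit + length bit
	fmt.Println(isUnaggregated([]byte{0b0000_0111})) // false: two attester bits + length bit
}
```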

---------

Co-authored-by: kewei <kewei.train@gmail.com>
…n't check .lock. Otherwise can't create .torrent for existing .seg files. (erigontech#10004)
…ch#10014)

This is for E2.

It implements the backward compatible output field for traces on
ots_traceTransaction: otterscan/execution-apis#1

It'll be consumed by Otterscan in an upcoming release of this feature:
otterscan/otterscan#1530
There was an error:
```
prog.go:12:2: missing go.sum entry for module providing package github.com/golang/mock/mockgen/model; to add:
	go mod download github.com/golang/mock
```
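
One common way to keep such a generate-time dependency recorded in go.mod/go.sum (a general pattern, not necessarily what this commit did) is a build-tagged tools file that imports the package mentioned in the error:

```go
//go:build tools
// +build tools

// Package tools exists only so that `go mod tidy` / `go mod download`
// keep the mockgen module's entries in go.mod and go.sum.
package tools

import (
	_ "github.com/golang/mock/mockgen/model"
)
```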
To follow suit with the rest of the naming:
- voluntary_exit
- bls_to_execution_change
- proposer_slashing
- expirable lru

---------

Co-authored-by: Giulio <giulio.rebuffo@gmail.com>
Added a command which prints diagnostics data to the console. In this initial
version it is possible to print the stages list and snapshot download
progress. Erigon should be running with the --metrics flag.
There are two available commands:
- "downloader"
- "stages" "current"
There are two possible output formats: text and json.
Run the command as: ./build/bin/diag [command] [text | json]

---------

Co-authored-by: Mark Holt <mark@distributed.vision>
Co-authored-by: awskii <artem.tsskiy@gmail.com>
Add missing response body close when the webseed roundtrip is retried
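
A hypothetical sketch (not the actual downloader code) of the underlying pattern: the previous response body must be closed before the request is retried, otherwise the connection leaks:

```go
package main

import (
	"fmt"
	"net/http"
)

// getWithRetry retries a GET a few times, making sure each failed
// attempt's response body is closed so the connection is not leaked.
func getWithRetry(client *http.Client, url string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err
			continue
		}
		if resp.StatusCode == http.StatusOK {
			return resp, nil // caller closes the body
		}
		// Close the body before retrying.
		resp.Body.Close()
		lastErr = fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil, lastErr
}

func main() {
	resp, err := getWithRetry(http.DefaultClient, "https://example.com", 3)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```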
Set baseFeePerGas value in graphql resolver for block
The `StateTransition` property `gas` actually tracks the remaining gas
in the current context. This PR is to improve code readability.
Geth also uses similar naming.
Signed-off-by: fuyangpengqi <995764973@qq.com>
- attempt to integrate sonar with test coverage by following:
  - https://sonarcloud.io/project/configuration/GitHubActions?id=ledgerwatch_erigon
  - https://docs.sonarsource.com/sonarcloud/advanced-setup/ci-based-analysis/github-actions-for-sonarcloud/
- adds a sonar properties file to specify code coverage output
- the properties file can also be used to filter out generated code from
the sonar scan:
    - protobuf
    - graphql
    - ignore pedersen hash bindings code
- ... there will be more ignores coming in later PRs (e.g. some c/c++
code we don't need to scan, some js code, some contract gen code, etc.)
Fixes error in Sonar GitHub action:
<img width="1645" alt="Screenshot 2024-04-23 at 17 46 01"
src="https://github.com/ledgerwatch/erigon/assets/94537774/3833db1c-6a8a-4db2-8bb7-5de58b57e638">
…tech#10009)

This PR enables the computation of the
`SyncAggregate` field in block production:
https://sepolia.beaconcha.in/slot/4832922 is proof of the code working -
now Caplin validators can include sync aggregates in their blocks.

Things modified:
* We do not aggregate pre-aggregated `SyncContributionAndProof`s;
instead we just listen to the network and pick the most profitable ones
for each sub sync committee (4 sync subcommittees on mainnet) -
profitability == most bits set in the `AggregationBits` field (see the
sketch after this list).
* Separate the aggregates set for contributions to be included in a block
from the ones constructed from `SyncCommitteeMessage`s; combining the
two causes some contributions to be marked as invalid and not
aggregable.
* Remove SyncContributionMock in favor of gomock.
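
A hypothetical sketch (my own names, not Caplin's) of the "most profitable contribution per subcommittee" selection described above, where profitability is the number of bits set in `AggregationBits`:

```go
package main

import (
	"fmt"
	"math/bits"
)

type contribution struct {
	SubcommitteeIndex int
	AggregationBits   []byte
}

func popcount(b []byte) int {
	n := 0
	for _, x := range b {
		n += bits.OnesCount8(x)
	}
	return n
}

// bestPerSubcommittee keeps, for each subcommittee, the contribution
// with the most aggregation bits set.
func bestPerSubcommittee(contribs []contribution) map[int]contribution {
	best := make(map[int]contribution)
	for _, c := range contribs {
		cur, ok := best[c.SubcommitteeIndex]
		if !ok || popcount(c.AggregationBits) > popcount(cur.AggregationBits) {
			best[c.SubcommitteeIndex] = c
		}
	}
	return best
}

func main() {
	contribs := []contribution{
		{0, []byte{0b0000_0011}},
		{0, []byte{0b0000_1111}},
		{1, []byte{0b0000_0001}},
	}
	for idx, c := range bestPerSubcommittee(contribs) {
		fmt.Printf("subcommittee %d: %d bits set\n", idx, popcount(c.AggregationBits))
	}
}
```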
…ontech#10032)

Observed the following issue in a long running Astrid sync on
bor-mainnet:
```
[DBUG] [04-17|14:25:43.504] [p2p.peerEventObserver] received new peer event id=Disconnect peerId=51935aa1eeabdb73b70d36c7d5953a3bfdf5c84e88241c44a7d16d508b281d397bdd8504c934bfb45af146b86eb5899ccea85e590774f9823d056a424080b763
[DBUG] [04-17|14:25:43.504] [p2p.peerEventObserver] received new peer event id=Connect peerId=51935aa1eeabdb73b70d36c7d5953a3bfdf5c84e88241c44a7d16d508b281d397bdd8504c934bfb45af146b86eb5899ccea85e590774f9823d056a424080b763
```

Note the timestamps are the same on the millisecond level, however the
disconnect was processed before the connect which is wrong (connect
should always be first).

This then put the `PeerTracker` in a bad state - it kept on returning
peer
`51935aa1eeabdb73b70d36c7d5953a3bfdf5c84e88241c44a7d16d508b281d397bdd8504c934bfb45af146b86eb5899ccea85e590774f9823d056a424080b763`
as a valid peer to download from, which caused repeated `peer not found`
errors when sending messages to it.

The fix is to have the message listener wait for all observers to finish
processing peer event 1 before proceeding to notify them about peer
event 2 (see the sketch below).
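
A hypothetical sketch (not the Astrid code itself) of the ordering guarantee: notify all observers of event N and wait for them before delivering event N+1:

```go
package main

import (
	"fmt"
	"sync"
)

type peerEvent struct {
	ID     string // e.g. "Connect" or "Disconnect"
	PeerID string
}

type observer func(peerEvent)

// notifyInOrder delivers events to every observer and waits for all of
// them to finish event N before it starts delivering event N+1, so a
// Disconnect can never be processed before the matching Connect.
func notifyInOrder(events []peerEvent, observers []observer) {
	for _, ev := range events {
		var wg sync.WaitGroup
		for _, obs := range observers {
			wg.Add(1)
			go func(obs observer, ev peerEvent) {
				defer wg.Done()
				obs(ev)
			}(obs, ev)
		}
		wg.Wait() // barrier between consecutive events
	}
}

func main() {
	events := []peerEvent{{"Connect", "peer1"}, {"Disconnect", "peer1"}}
	logObs := func(ev peerEvent) { fmt.Println(ev.ID, ev.PeerID) }
	notifyInOrder(events, []observer{logObs, logObs})
}
```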
- Excludes go mock generated files from analysis
- Excludes broken js files (valid as they are used for tracers and test
data) to fix the warnings below
<img width="1658" alt="Screenshot 2024-04-24 at 11 12 04"
src="https://github.com/ledgerwatch/erigon/assets/94537774/7925d07f-37f3-43c9-b34a-9a5361e48a8a">
Accept a slice of block numbers that represents the final block number
that will be available to the client of the simulator. Any data after the
iteration stage end is not accessible to the client.

The iteration moves to the next stage under certain conditions:
- requesting the latest span via `FetchSpan`
- requesting state sync events beyond the current last iteration block's
timestamp
This fixes this issue:

erigontech#9499

which is caused by restarting erigon during the bor-heimdall stage.

Previously, after the initial call to bor-heimdall (header 0), forward
downloading was disabled, but backward
downloading recursively collects headers, holding the results in memory
until it can roll them forward. This should
only be called for a limited number of headers, otherwise it leads to a
large amount of memory (>45GB for bor
mainnet) if the process is stopped at block 1.
The downloader is not complete until all of its requested files have
been downloaded.

This change adds a request count to the downloader stats to be checked
for completeness; otherwise the downloader
may appear complete before all required torrents have been added (see the
sketch below).
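
A hypothetical sketch (field names are my own, not the downloader's actual stats) of the completeness check described above: progress only counts as complete once every requested torrent has been added and downloaded:

```go
package main

import "fmt"

// Stats is a hypothetical reduction of the downloader stats.
type Stats struct {
	FilesRequested int
	FilesAdded     int
	FilesCompleted int
}

// Complete requires the requested count to match, not just Added == Completed.
func (s Stats) Complete() bool {
	return s.FilesRequested > 0 &&
		s.FilesAdded == s.FilesRequested &&
		s.FilesCompleted == s.FilesRequested
}

func main() {
	// Without the requested count, Added == Completed would look "done"
	// even though more torrents are still to be requested.
	fmt.Println(Stats{FilesRequested: 10, FilesAdded: 7, FilesCompleted: 7}.Complete())    // false
	fmt.Println(Stats{FilesRequested: 10, FilesAdded: 10, FilesCompleted: 10}.Complete()) // true
}
```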
We reverted support for this flag in `updateForkChoice` because the
implementation was too complex and fragile:
erigontech#9900

But it's good enough if StageSenders preserves this flag - then the next
stages (exec) will also follow (because they look at the previous stage's
progress).

It's good enough because users just want to save some partial progress
after restoring a node from backup (long downtime), and to enforce the "all
stages progress together" invariant.

cla-bot bot commented Oct 7, 2024

We require contributors/corporates @racytech, @wmitsuda, @dvovk, @scorring, @fuyangpengqi, @domiwei to read our Contributor License Agreement, please check the Individual CLA document/Corporate CLA document



@github-advanced-security github-advanced-security bot left a comment

CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.

@hexoscott (Collaborator) commented

There is no clean way to review this, so merge and fix forwards.

hexoscott previously approved these changes Oct 8, 2024

@hexoscott (Collaborator) left a comment

Approving on the basis that it is nearly impossible to review this many changes in a single pass. We will regression test and fix forwards.


cla-bot bot commented Oct 8, 2024

We require contributors/corporates @racytech, @wmitsuda, @dvovk, @scorring, @fuyangpengqi, @domiwei to read our Contributor License Agreement, please check the Individual CLA document/Corporate CLA document

@cffls (Author) commented Oct 8, 2024

I was able to make the unwind test build and run. Differences were found in Sequence.txt and BlockBody.txt between phase2-dump1 and phase2-dump2.


# Conflicts:
#	zk/stages/stage_sequence_execute_blocks.go

cla-bot bot commented Oct 9, 2024

We require contributors/corporates @goofylfg to read our Contributor License Agreement, please check the Individual CLA document/Corporate CLA document

@hexoscott hexoscott merged commit fe753f2 into zkevm Oct 9, 2024
15 of 17 checks passed
@hexoscott hexoscott deleted the zkevm-2.60 branch October 9, 2024 12:47
cffls added a commit that referenced this pull request Oct 10, 2024