chore: fix typos #1149

Merged · 4 commits · Dec 13, 2023
Changes from all commits
4 changes: 2 additions & 2 deletions Makefile
@@ -47,7 +47,7 @@ endif
 # allow users to pass additional flags via the conventional LDFLAGS variable
 LD_FLAGS += $(LDFLAGS)

-# Process Docker environment varible TARGETPLATFORM
+# Process Docker environment variable TARGETPLATFORM
 # in order to build binary with correspondent ARCH
 # by default will always build for linux/amd64
 TARGETPLATFORM ?=
@@ -322,7 +322,7 @@ endif

 # Run a nodejs tool to test endpoints against a localnet
 # The command takes care of starting and stopping the network
-# prerequisits: build-contract-tests-hooks build-linux
+# prerequisites: build-contract-tests-hooks build-linux
 # the two build commands were not added to let this command run from generic containers or machines.
 # The binaries should be built beforehand
 contract-tests:
2 changes: 1 addition & 1 deletion SECURITY.md
@@ -147,7 +147,7 @@ also be interested in!

 * Conceptual flaws
 * Ambiguities, inconsistencies, or incorrect statements
-* Mis-match between specification and implementation of any component
+* Mismatch between specification and implementation of any component

 ### Consensus
2 changes: 1 addition & 1 deletion abci/types/application.go
@@ -31,7 +31,7 @@ type Application interface {
     ListSnapshots(RequestListSnapshots) ResponseListSnapshots // List available snapshots
     OfferSnapshot(RequestOfferSnapshot) ResponseOfferSnapshot // Offer a snapshot to the application
     LoadSnapshotChunk(RequestLoadSnapshotChunk) ResponseLoadSnapshotChunk // Load a snapshot chunk
-    ApplySnapshotChunk(RequestApplySnapshotChunk) ResponseApplySnapshotChunk // Apply a shapshot chunk
+    ApplySnapshotChunk(RequestApplySnapshotChunk) ResponseApplySnapshotChunk // Apply a snapshot chunk
 }

 //-------------------------------------------------------
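For context on the interface this hunk touches, here is a minimal sketch of an application that opts out of state sync by overriding two of these snapshot methods. The import path and the `BaseApplication` embedding follow the upstream Tendermint layout and are assumptions here, not part of this PR.

```go
package statesync

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

// NoSnapshotApp opts out of state sync: it advertises no snapshots and
// rejects any snapshot offered by peers, so joining nodes fall back to
// replaying blocks instead of restoring snapshot chunks.
type NoSnapshotApp struct {
	abci.BaseApplication // no-op defaults for the rest of the interface
}

func (NoSnapshotApp) ListSnapshots(abci.RequestListSnapshots) abci.ResponseListSnapshots {
	return abci.ResponseListSnapshots{} // nothing to offer
}

func (NoSnapshotApp) OfferSnapshot(abci.RequestOfferSnapshot) abci.ResponseOfferSnapshot {
	return abci.ResponseOfferSnapshot{Result: abci.ResponseOfferSnapshot_REJECT}
}
```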
2 changes: 1 addition & 1 deletion abci/types/types.pb.go

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion behaviour/doc.go
@@ -2,7 +2,7 @@
 Package Behaviour provides a mechanism for reactors to report behaviour of peers.

 Instead of a reactor calling the switch directly it will call the behaviour module which will
-handle the stoping and marking peer as good on behalf of the reactor.
+handle the stopping and marking peer as good on behalf of the reactor.

 There are four different behaviours a reactor can report.
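A sketch of the reporting pattern this doc comment describes, assuming the `Reporter` interface and the `BadMessage` constructor exported by the upstream behaviour package (names may differ slightly in this fork):

```go
package myreactor

import (
	"github.com/tendermint/tendermint/behaviour"
	"github.com/tendermint/tendermint/p2p"
)

// reportBadMessage hands misbehaviour to the behaviour module instead of
// stopping the peer on the switch directly; the module decides whether to
// stop the peer or mark it as good.
func reportBadMessage(reporter behaviour.Reporter, peerID p2p.ID) error {
	return reporter.Report(behaviour.BadMessage(peerID, "unparseable message"))
}
```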
4 changes: 2 additions & 2 deletions consensus/common_test.go
@@ -112,7 +112,7 @@ func (vs *validatorStub) signVote(
         return nil, fmt.Errorf("sign vote failed: %w", err)
     }

-    // ref: signVote in FilePV, the vote should use the privious vote info when the sign data is the same.
+    // ref: signVote in FilePV, the vote should use the previous vote info when the sign data is the same.
     if signDataIsEqual(vs.lastVote, v) {
         v.Signature = vs.lastVote.Signature
         v.Timestamp = vs.lastVote.Timestamp
@@ -829,7 +829,7 @@ func getSwitchIndex(switches []*p2p.Switch, peer p2p.Peer) int {
             return i
         }
     }
-    panic("didnt find peer in switches")
+    panic("didn't find peer in switches")
 }

 //-------------------------------------------------------------------------------
2 changes: 1 addition & 1 deletion consensus/reactor.go
@@ -607,7 +607,7 @@ OUTER_LOOP:
         if blockStoreBase > 0 && 0 < prs.Height && prs.Height < rs.Height && prs.Height >= blockStoreBase {
             heightLogger := logger.With("height", prs.Height)

-            // if we never received the commit message from the peer, the block parts wont be initialized
+            // if we never received the commit message from the peer, the block parts won't be initialized
             if prs.ProposalBlockParts == nil {
                 blockMeta := conR.conS.blockStore.LoadBlockMeta(prs.Height)
                 if blockMeta == nil {
2 changes: 1 addition & 1 deletion consensus/replay_file.go
@@ -144,7 +144,7 @@ func (pb *playback) replayReset(count int, newStepSub types.Subscription) error
     pb.fp = fp
     pb.dec = NewWALDecoder(fp)
     count = pb.count - count
-    fmt.Printf("Reseting from %d to %d\n", pb.count, count)
+    fmt.Printf("Resetting from %d to %d\n", pb.count, count)
     pb.count = 0
     pb.cs = newCS
     var msg *TimedWALMessage
6 changes: 3 additions & 3 deletions consensus/replay_test.go
@@ -52,7 +52,7 @@ func TestMain(m *testing.M) {
 // These tests ensure we can always recover from failure at any part of the consensus process.
 // There are two general failure scenarios: failure during consensus, and failure while applying the block.
 // Only the latter interacts with the app and store,
-// but the former has to deal with restrictions on re-use of priv_validator keys.
+// but the former has to deal with restrictions on reuse of priv_validator keys.
 // The `WAL Tests` are for failures during the consensus;
 // the `Handshake Tests` are for failures in applying the block.
 // With the help of the WAL, we can recover from it all!
@@ -1102,7 +1102,7 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
             }
             commitHeight := thisBlockCommit.Height
             if commitHeight != height+1 {
-                panic(fmt.Sprintf("commit doesnt match. got height %d, expected %d", commitHeight, height+1))
+                panic(fmt.Sprintf("commit doesn't match. got height %d, expected %d", commitHeight, height+1))
             }
             blocks = append(blocks, block)
             commits = append(commits, thisBlockCommit)
@@ -1141,7 +1141,7 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
     }
     commitHeight := thisBlockCommit.Height
     if commitHeight != height+1 {
-        panic(fmt.Sprintf("commit doesnt match. got height %d, expected %d", commitHeight, height+1))
+        panic(fmt.Sprintf("commit doesn't match. got height %d, expected %d", commitHeight, height+1))
     }
     blocks = append(blocks, block)
     commits = append(commits, thisBlockCommit)
4 changes: 2 additions & 2 deletions consensus/state.go
@@ -1678,7 +1678,7 @@ func (cs *State) finalizeCommit(height int64) {
     stateCopy := cs.state.Copy()

     // Execute and commit the block, update and save the state, and update the mempool.
-    // NOTE The block.AppHash wont reflect these txs until the next block.
+    // NOTE The block.AppHash won't reflect these txs until the next block.
     var (
         err          error
         retainHeight int64
@@ -2305,7 +2305,7 @@ func (cs *State) signAddVote(msgType cmtproto.SignedMsgType, hash []byte, header
     return nil
 }

-// updatePrivValidatorPubKey get's the private validator public key and
+// updatePrivValidatorPubKey gets the private validator public key and
 // memoizes it. This func returns an error if the private validator is not
 // responding or responds with an error.
 func (cs *State) updatePrivValidatorPubKey() error {
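The memoization the corrected comment describes amounts to something like the following self-contained sketch; everything beyond the names visible in the hunk is an assumption, not the actual method body.

```go
package consensuspv

import "github.com/tendermint/tendermint/crypto"

// privValidator is the minimal surface this sketch needs.
type privValidator interface {
	GetPubKey() (crypto.PubKey, error)
}

// memoizePubKey fetches the public key once and caches it, so later vote
// signing does not have to re-query a possibly slow or remote validator.
func memoizePubKey(pv privValidator, cache *crypto.PubKey) error {
	pubKey, err := pv.GetPubKey()
	if err != nil {
		return err // validator not responding, or responded with an error
	}
	*cache = pubKey // memoized for subsequent use
	return nil
}
```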
10 changes: 5 additions & 5 deletions consensus/state_test.go
@@ -645,7 +645,7 @@ func TestStateLockPOLRelock(t *testing.T) {
     ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())

     round++ // moving to the next round
-    //XXX: this isnt guaranteed to get there before the timeoutPropose ...
+    //XXX: this isn't guaranteed to get there before the timeoutPropose ...
     if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
         t.Fatal(err)
     }
@@ -745,7 +745,7 @@ func TestStateLockPOLUnlock(t *testing.T) {
         Round2 (vs2, C) // B nil nil nil // nil nil nil _
         cs1 unlocks!
     */
-    //XXX: this isnt guaranteed to get there before the timeoutPropose ...
+    //XXX: this isn't guaranteed to get there before the timeoutPropose ...
     if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
         t.Fatal(err)
     }
@@ -950,7 +950,7 @@ func TestStateLockPOLSafety1(t *testing.T) {
     round++ // moving to the next round
     ensureNewRound(newRoundCh, height, round)

-    //XXX: this isnt guaranteed to get there before the timeoutPropose ...
+    //XXX: this isn't guaranteed to get there before the timeoutPropose ...
     if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
         t.Fatal(err)
     }
@@ -1101,7 +1101,7 @@ func TestStateLockPOLSafety2(t *testing.T) {
     ensureNewRound(newRoundCh, height, round)
     t.Log("### ONTO Round 2")
     /*Round2
-    // now we see the polka from round 1, but we shouldnt unlock
+    // now we see the polka from round 1, but we shouldn't unlock
     */
     ensureNewProposal(proposalCh, height, round)

@@ -1767,7 +1767,7 @@ func TestStateHalt1(t *testing.T) {
     validatePrecommit(t, cs1, round, round, vss[0], propBlock.Hash(), propBlock.Hash())

     // add precommits from the rest
-    signAddVotes(cs1, cmtproto.PrecommitType, nil, types.PartSetHeader{}, vs2) // didnt receive proposal
+    signAddVotes(cs1, cmtproto.PrecommitType, nil, types.PartSetHeader{}, vs2) // didn't receive proposal
     signAddVotes(cs1, cmtproto.PrecommitType, propBlock.Hash(), propBlockParts.Header(), vs3)
     // we receive this later, but vs3 might receive it earlier and with ours will go to commit!
     precommit4 := signVote(vs4, cmtproto.PrecommitType, propBlock.Hash(), propBlockParts.Header())
2 changes: 1 addition & 1 deletion consensus/ticker.go
@@ -89,7 +89,7 @@ func (t *timeoutTicker) stopTimer() {
 }

 // send on tickChan to start a new timer.
-// timers are interupted and replaced by new ticks from later steps
+// timers are interrupted and replaced by new ticks from later steps
 // timeouts of 0 on the tickChan will be immediately relayed to the tockChan
 func (t *timeoutTicker) timeoutRoutine() {
     t.Logger.Debug("Starting timeout routine")
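The semantics in the corrected comment, interrupt-and-replace plus immediate relay of zero timeouts, can be sketched with a plain `time.Timer` loop. This is illustrative and self-contained rather than the actual `timeoutTicker` internals:

```go
package ticker

import "time"

type timeoutInfo struct {
	Duration time.Duration // 0 means "fire immediately"
}

// timeoutRoutine: each incoming tick interrupts and replaces the running
// timer; a zero duration makes the timer fire at once, so the tick is
// relayed straight to tockChan.
func timeoutRoutine(tickChan <-chan timeoutInfo, tockChan chan<- timeoutInfo) {
	timer := time.NewTimer(time.Hour)
	if !timer.Stop() {
		<-timer.C // drain in the unlikely case it already fired
	}
	var current timeoutInfo
	for {
		select {
		case ti := <-tickChan:
			if !timer.Stop() { // interrupt the old timer
				select {
				case <-timer.C: // drain if it fired before Stop
				default:
				}
			}
			current = ti
			timer.Reset(ti.Duration)
		case <-timer.C:
			tockChan <- current
		}
	}
}
```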
2 changes: 1 addition & 1 deletion consensus/wal.go
@@ -334,7 +334,7 @@ func IsDataCorruptionError(err error) bool {
     return ok
 }

-// DataCorruptionError is an error that occures if data on disk was corrupted.
+// DataCorruptionError is an error that occurs if data on disk was corrupted.
 type DataCorruptionError struct {
     cause error
 }
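For completeness, a sketch of the methods that usually accompany such a wrapper type, since the hunk shows only the declaration; the exact message format in `consensus/wal.go` may differ:

```go
package walerr

import "fmt"

// DataCorruptionError is repeated here so the sketch is self-contained.
type DataCorruptionError struct {
	cause error
}

func (e DataCorruptionError) Error() string {
	return fmt.Sprintf("DataCorruptionError[%v]", e.cause)
}

// Cause exposes the underlying error, which is what callers of
// IsDataCorruptionError typically inspect to see what went wrong on disk.
func (e DataCorruptionError) Cause() error {
	return e.cause
}
```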
2 changes: 1 addition & 1 deletion crypto/merkle/proof.go
@@ -12,7 +12,7 @@ import (
 const (
     // MaxAunts is the maximum number of aunts that can be included in a Proof.
     // This corresponds to a tree of size 2^100, which should be sufficient for all conceivable purposes.
-    // This maximum helps prevent Denial-of-Service attacks by limitting the size of the proofs.
+    // This maximum helps prevent Denial-of-Service attacks by limiting the size of the proofs.
     MaxAunts = 100
 )

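A sketch of the kind of bound check this constant enables, rejecting oversized proofs before any hashing work is done; the helper name is hypothetical:

```go
package merkle

import "errors"

const MaxAunts = 100 // repeated here so the sketch is self-contained

// validateAuntCount rejects proofs whose aunt path exceeds the bound,
// cutting off the Denial-of-Service vector the comment mentions.
func validateAuntCount(aunts [][]byte) error {
	if len(aunts) > MaxAunts {
		return errors.New("merkle proof has too many aunts")
	}
	return nil
}
```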
2 changes: 1 addition & 1 deletion crypto/merkle/proof_op.go
@@ -149,7 +149,7 @@ func (prt *ProofRuntime) VerifyValueFromKeys(proof *cmtcrypto.ProofOps, root []b
     return prt.VerifyFromKeys(proof, root, keys, [][]byte{value})
 }

-// TODO In the long run we'll need a method of classifcation of ops,
+// TODO In the long run we'll need a method of classification of ops,
 // whether existence or absence or perhaps a third?
 func (prt *ProofRuntime) VerifyAbsence(proof *cmtcrypto.ProofOps, root []byte, keypath string) (err error) {
     return prt.Verify(proof, root, keypath, nil)
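A usage sketch for the function in this hunk: a nil error means the proof demonstrates the key's absence under the given root. The import paths and the keypath are assumptions for illustration only.

```go
package absence

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/merkle"
	cmtcrypto "github.com/tendermint/tendermint/proto/tendermint/crypto"
)

// requireAbsent wraps VerifyAbsence: nil means "proven absent"; any error
// means the key may exist or the proof is invalid.
func requireAbsent(prt *merkle.ProofRuntime, proof *cmtcrypto.ProofOps, root []byte) error {
	if err := prt.VerifyAbsence(proof, root, "/mystore/mykey"); err != nil {
		return fmt.Errorf("absence not proven: %w", err)
	}
	return nil
}
```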
2 changes: 1 addition & 1 deletion crypto/merkle/tree_test.go
@@ -68,7 +68,7 @@ func TestProof(t *testing.T) {
         proof := proofs[i]

         // Check total/index
-        require.EqualValues(t, proof.Index, i, "Unmatched indicies: %d vs %d", proof.Index, i)
+        require.EqualValues(t, proof.Index, i, "Unmatched indices: %d vs %d", proof.Index, i)

         require.EqualValues(t, proof.Total, total, "Unmatched totals: %d vs %d", proof.Total, total)

2 changes: 1 addition & 1 deletion crypto/secp256k1/secp256k1_test.go
@@ -86,7 +86,7 @@ func TestSecp256k1LoadPrivkeyAndSerializeIsIdentity(t *testing.T) {
 }

 func TestGenPrivKeySecp256k1(t *testing.T) {
-    // curve oder N
+    // curve order N
     N := underlyingSecp256k1.S256().N
     tests := []struct {
         name string
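The invariant behind the corrected comment, sketched: a secp256k1 private key is a scalar in [1, N-1], where N is the curve order the test reads from the underlying library (btcec is assumed here, matching the `underlyingSecp256k1` alias above):

```go
package keys

import (
	"math/big"

	secp256k1 "github.com/btcsuite/btcd/btcec"
)

// validScalar reports whether b encodes a usable secp256k1 private key:
// a scalar strictly between 0 and the curve order N.
func validScalar(b []byte) bool {
	k := new(big.Int).SetBytes(b)
	N := secp256k1.S256().N
	return k.Sign() > 0 && k.Cmp(N) < 0
}
```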
2 changes: 1 addition & 1 deletion docs/celestia-architecture/adr-002-ipld-da-sampling.md
@@ -231,7 +231,7 @@ Proposed
 ### Positive

 - simplicity & ease of implementation
-- can re-use an existing networking and p2p stack (go-ipfs)
+- can reuse an existing networking and p2p stack (go-ipfs)
 - potential support of large, cool, and helpful community
 - high-level API definitions independent of the used stack

2 changes: 1 addition & 1 deletion docs/celestia-architecture/adr-004-mvp-light-client.md
@@ -97,7 +97,7 @@ The light client stores data in its own [badgerdb instance](https://github.com/c

 ```go
 db, err := badgerdb.NewDB("light-client-db", dir)
 ```

-While it is not critical for this feature, we should at least try to re-use that same DB instance for the local ipld store.
+While it is not critical for this feature, we should at least try to reuse that same DB instance for the local ipld store.
 Otherwise, we introduce yet another DB instance; something we want to avoid, especially on the long run (see [#283](https://github.com/celestiaorg/celestia-core/issues/283)).
 For the first implementation, it might still be simpler to create a separate DB instance and tackle cleaning this up in a separate pull request, e.g. together with other [instances]([#283](https://github.com/celestiaorg/celestia-core/issues/283)).
@@ -140,7 +140,7 @@ describe how to hash the block data here:

 #### Optionally remove some unused code

-- Removing misc unsued code (https://github.com/celestiaorg/celestia-core/pull/208)
+- Removing misc unused code (https://github.com/celestiaorg/celestia-core/pull/208)
 - Remove docs deployment (https://github.com/celestiaorg/celestia-core/pull/134)
 - Start deleting docs (https://github.com/celestiaorg/celestia-core/pull/209)
 - Remove tendermint-db in favor of badgerdb (https://github.com/celestiaorg/celestia-core/pull/241)
@@ -174,7 +174,7 @@ minimum desired changes specified above.
 - Introduction (https://github.com/celestiaorg/celestia-core/pull/144)
 - Initial integration (https://github.com/celestiaorg/celestia-core/pull/152)
 - Custom Multihash (https://github.com/celestiaorg/celestia-core/pull/155)
-- Puting data during proposal (https://github.com/celestiaorg/celestia-core/pull/178)
+- Putting data during proposal (https://github.com/celestiaorg/celestia-core/pull/178)
 - Module name (https://github.com/celestiaorg/celestia-core/pull/151)
 - Update rsmt2d (https://github.com/celestiaorg/celestia-core/pull/290)
 - Make plugin a package (https://github.com/celestiaorg/celestia-core/pull/294)
4 changes: 2 additions & 2 deletions docs/celestia-architecture/adr-009-cat-pool.md
@@ -6,7 +6,7 @@

 ## Context

-One of the criterias of success for Celestia as a reliable data availability layer is the ability to handle large transactional throughput. A component that plays a significant role in this is the mempool. It's purpose is to receive transactions from clients and broadcast them to all other nodes, eventually reaching the next block proposer who includes it in their block. Given Celestia's aggregator-like role whereby larger transactions, i.e. blobs, are expected to dominate network traffic, a content-addressable algorithm, common in many other [peer-to-peer file sharing protocols](https://en.wikipedia.org/wiki/InterPlanetary_File_System), could be far more beneficial than the current transaction-flooding protocol that Tendermint currently uses.
+One of the criteria of success for Celestia as a reliable data availability layer is the ability to handle large transactional throughput. A component that plays a significant role in this is the mempool. Its purpose is to receive transactions from clients and broadcast them to all other nodes, eventually reaching the next block proposer who includes them in their block. Given Celestia's aggregator-like role whereby larger transactions, i.e. blobs, are expected to dominate network traffic, a content-addressable algorithm, common in many other [peer-to-peer file sharing protocols](https://en.wikipedia.org/wiki/InterPlanetary_File_System), could be far more beneficial than the transaction-flooding protocol that Tendermint currently uses.

 This ADR describes the content addressable transaction protocol and, through a comparative analysis with the existing gossip protocol, presents the case for its adoption in Celestia.

@@ -32,7 +32,7 @@ A series of new metrics have been added to monitor effectiveness:
 - SuccessfulTxs: number of transactions committed in a block (to be used as a baseline)
 - AlreadySeenTxs: transactions that are received more than once
 - RequestedTxs: the number of initial requests for a transaction
-- RerequestedTxs: the numer of follow up requests for a transaction. If this is high, it may indicate that the request timeout is too short.
+- RerequestedTxs: the number of follow-up requests for a transaction. If this is high, it may indicate that the request timeout is too short.

 The CAT pool has had numerous unit tests added. It has been tested in the local e2e networks and put under strain in large, geographically dispersed 100-node networks.
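To make the request/re-request metrics concrete, an illustrative sketch of the content-addressable flow they instrument: a peer announces a hash, the node requests the body once, and re-requests elsewhere on timeout. The SeenTx/WantTx message names follow the ADR; the code itself is hypothetical.

```go
package cat

import (
	"sync"
	"time"
)

// txRequester sketches the SeenTx/WantTx flow: request each announced
// transaction once (RequestedTxs), and re-request (RerequestedTxs) if no
// reply arrives before the timeout.
type txRequester struct {
	mu        sync.Mutex
	requested map[string]*time.Timer // tx hash -> re-request timer
}

func newTxRequester() *txRequester {
	return &txRequester{requested: make(map[string]*time.Timer)}
}

func (r *txRequester) onSeenTx(hash string, want func(), timeout time.Duration) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, ok := r.requested[hash]; ok {
		return // already requested from some peer; don't ask twice
	}
	want() // first request for the tx body
	r.requested[hash] = time.AfterFunc(timeout, want) // re-request on timeout
}
```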
2 changes: 1 addition & 1 deletion docs/core/running-in-production.md
@@ -358,7 +358,7 @@ applications, setting it to true is not a problem.
 - `consensus.peer_gossip_sleep_duration`

   You can try to reduce the time your node sleeps before checking if
-  theres something to send its peers.
+  there's something to send its peers.

 - `consensus.timeout_commit`

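For reference, the two options discussed in this hunk set programmatically; a sketch assuming the upstream `config` package, with illustrative values rather than recommendations (they are normally set under `[consensus]` in `config.toml`):

```go
package tuning

import (
	"time"

	"github.com/tendermint/tendermint/config"
)

// tuneConsensus shows where the two options live in code.
func tuneConsensus() *config.Config {
	cfg := config.DefaultConfig()
	cfg.Consensus.PeerGossipSleepDuration = 50 * time.Millisecond // check for gossip work more often
	cfg.Consensus.TimeoutCommit = 1 * time.Second                 // wait less before starting the next height
	return cfg
}
```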
10 changes: 5 additions & 5 deletions docs/qa/CometBFT-QA-34.md
@@ -114,7 +114,7 @@ This section reports on the key Prometheus metrics extracted from the following
 * Mixed network, 1/3 Tendermint Core `v0.34.26` and 2/3 running CometBFT: experiment with UUID starting with `fc5e`.
 * Mixed network, 2/3 Tendermint Core `v0.34.26` and 1/3 running CometBFT: experiment with UUID starting with `4759`.

-We make explicit comparisons between the baseline and the homogenous setups, but refrain from
+We make explicit comparisons between the baseline and the homogeneous setups, but refrain from
 commenting on the mixed network experiments unless they show some exceptional results.

 ### Mempool Size
@@ -191,7 +191,7 @@ on the corresponding plot, shown above.

 ### Peers

-The following plots show how many peers a node had throughtout the experiment.
+The following plots show how many peers a node had throughout the experiment.

 The thick red dashed line represents the moving average over a sliding window of 20 seconds.

@@ -236,7 +236,7 @@ The thick red dashed line show the rates' moving averages.

 #### Baseline

-The average number of blocks/minute oscilate between 10 and 40.
+The average number of blocks/minute oscillates between 10 and 40.

 ![heights](img34/baseline/block_rate_regular.png)

@@ -327,7 +327,7 @@ command, and their average value.

 #### CometBFT Homogeneous network

-The load in the homogenous network is, similarly to the baseline case, below 5 and, therefore, normal.
+The load in the homogeneous network is, similarly to the baseline case, below 5 and, therefore, normal.

 ![load1-homogeneous](img34/homogeneous/cpu.png)

@@ -358,7 +358,7 @@ As expected, the average plot also looks similar.
 The comparison of the baseline results and the homogeneous case shows that both scenarios had similar numbers and are therefore equivalent.

 The mixed-node cases show that networks operate normally with a mix of compatible Tendermint Core and CometBFT versions.
-Although not the main goal, a comparison of metric numbers with the homogenous case and the baseline scenarios show similar results and therefore we can conclude that mixing compatible Tendermint Core and CometBFT introduces not performance degradation.
+Although not the main goal, a comparison of metric numbers with the homogeneous case and the baseline scenarios shows similar results, and we can therefore conclude that mixing compatible Tendermint Core and CometBFT versions introduces no performance degradation.

 A conclusion of these tests is shown in the following table, along with the commit versions used in the experiments.