
Make sure lists are unique in zebra-network Requests and Responses #2244

Closed
19 tasks
teor2345 opened this issue Jun 3, 2021 · 2 comments
Labels
A-network (Area: Network protocol updates or fixes), A-rust (Area: Updates to Rust code), C-security (Category: Security issues), I-invalid-data (Zebra relies on invalid or untrusted data, or sends invalid data)


teor2345 commented Jun 3, 2021

Motivation

Most of zebra-network's internal Requests use HashSets to deduplicate outbound Zebra requests and inbound peer requests.

But we don't do the same for Zebra's internal Responses (outbound Zebra responses to peers, and inbound peer responses to Zebra).

We also don't ensure uniqueness in ordered lists, like the known_blocks lists in Requests.

Scheduling

This risk is acceptable for the stable release, but we need to fix it before we support lightwalletd.

Specifications

This change impacts Bitcoin-style data and control messages:
https://developer.bitcoin.org/reference/p2p_networking.html#data-messages
https://developer.bitcoin.org/reference/p2p_networking.html#control-messages

The specs for each message type are linked below.

Solution

Parsing

Duplicates in inbound messages should be:

  • errors during conversion to Requests
  • errors during conversion to Responses

Duplicates in outbound messages should be made structurally impossible: the code that generates these lists should be modified to use the same indexed types.
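
A minimal sketch of the inbound side, assuming a hypothetical unique_set helper and DuplicateItemError type (neither exists in Zebra today): conversion from a wire-format Vec into a set-backed Request or Response fails instead of silently dropping duplicates.

```rust
use std::collections::HashSet;
use std::hash::Hash;

/// Hypothetical error returned when an inbound message contains duplicates.
#[derive(Debug)]
struct DuplicateItemError;

/// Convert a wire-format list into a set, rejecting duplicates instead of
/// silently dropping them, so inbound duplicates become protocol errors.
fn unique_set<T: Eq + Hash>(items: Vec<T>) -> Result<HashSet<T>, DuplicateItemError> {
    let expected_len = items.len();
    let set: HashSet<T> = items.into_iter().collect();
    if set.len() == expected_len {
        Ok(set)
    } else {
        Err(DuplicateItemError)
    }
}
```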

Request Types

Update the request types to ensure:

FindBlocks

known_blocks contains unique BlockHashes in chain order:

  • Use IndexMap
  • Reject vectors that contain duplicate hashes (see the sketch after this list)
    • in requests as well?
    • inbound duplicates are an error, the remote peer is not tracking the chain correctly
    • outbound duplicates are a panic, Zebra is not verifying the chain correctly
  • Add a note that order is important

https://developer.bitcoin.org/reference/p2p_networking.html#getblocks
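
A sketch of the inbound getblocks handling, assuming the indexmap crate and a placeholder BlockHash type (Zebra's real type is block::Hash): an IndexSet keeps the peer's chain order while making duplicates detectable. The same approach would apply to the BlockHashes response below.

```rust
use indexmap::IndexSet;

/// Placeholder for zebra-chain's block::Hash type.
type BlockHash = [u8; 32];

/// Hypothetical parse error for a malformed getblocks message.
#[derive(Debug)]
struct MalformedGetBlocks;

/// Build the known_blocks index for a FindBlocks request.
/// Order is significant: hashes must stay in the peer's chain order,
/// so an insertion-ordered set is used instead of a plain HashSet.
fn known_blocks(hashes: Vec<BlockHash>) -> Result<IndexSet<BlockHash>, MalformedGetBlocks> {
    let mut known = IndexSet::with_capacity(hashes.len());
    for hash in hashes {
        // insert returns false if the hash was already present:
        // an inbound duplicate means the peer is not tracking the chain correctly.
        if !known.insert(hash) {
            return Err(MalformedGetBlocks);
        }
    }
    Ok(known)
}
```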

FindHeaders

known_blocks contains unique BlockHashes in chain order:

  • Use IndexMap
  • Reject vectors that contain duplicate hashes
    • in requests as well?
    • inbound duplicates are an error, the remote peer is not tracking the chain correctly
    • outbound duplicates are a panic, Zebra is not verifying the chain correctly (sketched below)
  • Add a note that order is important

https://developer.bitcoin.org/reference/p2p_networking.html#getheaders
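
The outbound direction is the same for FindBlocks and FindHeaders: a duplicate means Zebra's own chain tracking is broken, so it should panic rather than return an error. A sketch under the same assumptions (indexmap crate, placeholder BlockHash type):

```rust
use indexmap::IndexSet;

/// Placeholder for zebra-chain's block::Hash type.
type BlockHash = [u8; 32];

/// Assemble the outbound known_blocks list for FindBlocks or FindHeaders.
/// Outbound duplicates mean Zebra is not tracking its own chain correctly,
/// so they are treated as a programming error, not a recoverable one.
fn outbound_known_blocks(recent_tips: Vec<BlockHash>) -> IndexSet<BlockHash> {
    let expected_len = recent_tips.len();
    let known: IndexSet<BlockHash> = recent_tips.into_iter().collect();
    assert_eq!(
        known.len(),
        expected_len,
        "outbound known_blocks contained duplicate hashes: chain tracking bug"
    );
    known
}
```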

Response Types

Update the response types to ensure:

Peers

Unique peer addresses

Having different times or services for the same address is also an error.

  • Change Peers(Vec<MetaAddr>) to Peers(HashMap<SocketAddr, MetaAddr>) - order is not important
  • If there are duplicates, deduplicate and warn (see the sketch below)
  • Add a note explaining that our address equality checks could differ from other nodes' (this will become particularly important if we support arbitrary-length non-IP addresses as part of future network protocol changes)

https://developer.bitcoin.org/reference/p2p_networking.html#addr
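
A sketch of the dedupe-and-warn behaviour, assuming the tracing crate for the warning and a simplified stand-in for MetaAddr (the real type carries more fields): order does not matter here, so a plain HashMap keyed by SocketAddr is enough.

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::net::SocketAddr;

/// Simplified stand-in for zebra-network's MetaAddr.
#[derive(Debug, PartialEq)]
struct MetaAddr {
    addr: SocketAddr,
    services: u64,
    last_seen_secs: u32,
}

/// Deduplicate an inbound addr list by socket address, keeping the first
/// entry for each address and warning when later entries conflict with it.
fn dedup_peers(addrs: Vec<MetaAddr>) -> HashMap<SocketAddr, MetaAddr> {
    let mut peers: HashMap<SocketAddr, MetaAddr> = HashMap::with_capacity(addrs.len());
    for meta in addrs {
        match peers.entry(meta.addr) {
            Entry::Vacant(slot) => {
                slot.insert(meta);
            }
            Entry::Occupied(existing) => {
                if *existing.get() != meta {
                    // Different times or services for the same address is also suspicious.
                    tracing::warn!(addr = ?meta.addr, "duplicate peer address with conflicting metadata");
                }
            }
        }
    }
    peers
}
```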

Blocks

Unique blocks

  • Change Blocks(Vec<Arc<Block>>) to Blocks(IndexMap<block::Hash, Arc<Block>>) - download order could be important for performance (see the sketch below)
    • in requests as well?
    • inbound duplicates are an error, the remote peer is not validating blocks correctly
    • outbound duplicates are a panic, Zebra's indexing or hashing is broken

https://developer.bitcoin.org/reference/p2p_networking.html#block
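
A sketch of the inbound conversion, assuming the indexmap crate and placeholder Block and BlockHash types with a hypothetical hash() method (Zebra's real types live in zebra-chain): the IndexMap preserves download order while making duplicates an error. The same keyed-by-hash pattern would apply to the BlockHeaders and Transactions responses below.

```rust
use std::sync::Arc;

use indexmap::IndexMap;

/// Placeholder for zebra-chain's block::Hash type.
type BlockHash = [u8; 32];

/// Simplified stand-in for zebra-chain's Block.
struct Block {
    header_hash: BlockHash,
    // ... header and transaction fields elided
}

impl Block {
    /// Hypothetical stand-in for computing the block's hash.
    fn hash(&self) -> BlockHash {
        self.header_hash
    }
}

/// Hypothetical error for an inbound block batch that repeats a block.
#[derive(Debug)]
struct DuplicateBlockError(BlockHash);

/// Index an inbound batch of blocks by hash, preserving download order.
/// An inbound duplicate means the remote peer is not validating blocks correctly.
fn index_blocks(
    blocks: Vec<Arc<Block>>,
) -> Result<IndexMap<BlockHash, Arc<Block>>, DuplicateBlockError> {
    let mut indexed = IndexMap::with_capacity(blocks.len());
    for block in blocks {
        let hash = block.hash();
        // insert returns the previous value if the key was already present
        if indexed.insert(hash, block).is_some() {
            return Err(DuplicateBlockError(hash));
        }
    }
    Ok(indexed)
}
```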

BlockHashes

Unique BlockHashes in chain order:

  • Use IndexMap
  • Reject vectors that contain duplicate hashes
    • inbound duplicates are an error, the remote peer is not tracking the chain correctly
    • outbound duplicates are a panic, Zebra is not verifying the chain correctly
  • Add a note that order is important

https://developer.bitcoin.org/reference/p2p_networking.html#inv

BlockHeaders

Unique block headers

  • Change BlockHeaders(Vec<block::CountedHeader>) to BlockHeaders(HashMap<block::Hash, block::CountedHeader>) - download order could be important for performance
    • in requests as well?
    • inbound duplicates are an error, the remote peer is not validating blocks correctly
    • outbound duplicates are a panic, Zebra's indexing or hashing is broken

https://developer.bitcoin.org/reference/p2p_networking.html#headers

Transactions

Unique transactions

  • Change Transactions(Vec<Arc<Transaction>>) to Transactions(HashMap<transaction::Hash, Arc<Transaction>>)
    • in requests as well?
    • inbound duplicates are an error, the remote peer is not validating transactions correctly
    • outbound duplicates are a panic, Zebra's indexing or hashing is broken

https://developer.bitcoin.org/reference/p2p_networking.html#tx

TransactionHashes

Unique transaction hashes

  • Change TransactionHashes(Vec<_>) to TransactionHashes(HashSet<_>) (see the sketch below)
    • change requests as well
    • inbound duplicates are an error, the remote peer is not validating transactions correctly
    • outbound duplicates are a panic, Zebra's indexing or hashing is broken

https://developer.bitcoin.org/reference/p2p_networking.html#inv

Unlike BlockHashes, loose transactions don't have any chain or block ordering.
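
Since there is no ordering to preserve, a plain HashSet is enough here. A sketch, assuming a placeholder TransactionHash type and a hypothetical error type:

```rust
use std::collections::HashSet;

/// Placeholder for zebra-chain's transaction::Hash type.
type TransactionHash = [u8; 32];

/// Hypothetical error for an inbound inv that repeats a transaction hash.
#[derive(Debug)]
struct DuplicateTransactionHash;

/// Collect advertised transaction hashes into an unordered set.
/// There is no chain or block order to preserve, and a duplicate is still an error.
fn transaction_hashes(
    hashes: Vec<TransactionHash>,
) -> Result<HashSet<TransactionHash>, DuplicateTransactionHash> {
    let mut set = HashSet::with_capacity(hashes.len());
    for hash in hashes {
        if !set.insert(hash) {
            return Err(DuplicateTransactionHash);
        }
    }
    Ok(set)
}
```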

Request HashMaps

  • Check for request HashSets that could become HashMaps, because part of the data structure is unique, not the whole thing

Alternatives

We could drop duplicates instead, which reduces the impact of attacks where malicious peers trick honest peers into sending duplicates. But it also allows malicious peers to perform more attacks directly.

We could de-duplicate when processing messages.

This change is not strictly required, but it does improve security in some edge cases.

teor2345 added the A-rust, C-cleanup, S-needs-triage, P-Medium, C-security, I-invalid-data, and A-network labels on Jun 3, 2021
teor2345 changed the title from "Use HashSets for zebra-network responses" to "Use HashMaps for zebra-network responses" on Jun 14, 2021
mpguerra removed the S-needs-triage label on Jun 14, 2021
teor2345 changed the title from "Use HashMaps for zebra-network responses" to "Make sure lists are unique in zebra-network Requests and Responses" on Dec 1, 2021
teor2345 commented

Blocked by #2214, because they both change the same code.


teor2345 commented Mar 1, 2022

This is probably something we should do eventually, but it's not urgent.

teor2345 closed this as completed on Mar 1, 2022
mpguerra closed this as not planned on May 18, 2023