From 1e5e8720d55d6ca1c48fa0ce1c6607edf3ddadb4 Mon Sep 17 00:00:00 2001 From: Yiannis Psaras <52073247+yiannisbot@users.noreply.github.com> Date: Fri, 4 Sep 2020 12:45:51 +0300 Subject: [PATCH] fix(content): libraries section update (#1115) This PR is adding content and pointers to some of the libraries listed in this section, in particular for Multiformats, IPLD and libp2p. Given that all of these are external to Filecoin, the content added is only a basic description and a pointer to the corresponding spec. - IPFS and Bitswap refs - libraries hierarchy fix and file cleanup Co-authored-by: MollyM Co-authored-by: Hugo Dias --- content/glossary/_index.md | 2 +- content/libraries/fcs/_index.md | 10 - content/libraries/filcrypto/_index.md | 12 - .../libraries/filcrypto/filproofs/_index.md | 18 - .../filcrypto/filproofs/algorithms.go | 835 ------------------ .../filcrypto/filproofs/algorithms.id | 299 ------- .../libraries/filcrypto/filproofs/feistel.go | 92 -- .../filproofs/filecoin_proofs_subsystem.go | 9 - .../filproofs/filecoin_proofs_subsystem.id | 10 - .../libraries/filcrypto/filproofs/hashing.go | 75 -- content/libraries/filcrypto/filproofs/tree.go | 56 -- content/libraries/filcrypto/filproofs/util.go | 154 ---- .../filcrypto/filproofs/win_stacked_sdr.go | 524 ----------- content/libraries/ipfs/_index.md | 25 +- content/libraries/ipfs/bitswap.md | 9 - content/libraries/ipfs/graphsync.md | 9 - content/libraries/ipfs/unixfs.md | 9 - content/libraries/ipld/_index.md | 37 +- content/libraries/ipld/cid.md | 16 - content/libraries/ipld/datamodel.md | 9 - content/libraries/ipld/ipld.id | 17 - content/libraries/ipld/selectors.id | 153 ---- content/libraries/ipld/selectors.md | 12 - content/libraries/libp2p/_index.md | 25 +- content/libraries/libp2p/fil_libp2p_nodes.md | 10 - content/libraries/libp2p/gossipsub.md | 12 - content/libraries/libp2p/kad_dht.md | 10 - content/libraries/libp2p/libp2p.id | 94 -- content/libraries/multiformats/_index.md | 33 +- content/libraries/multiformats/multiaddr.id | 1 - content/libraries/multiformats/multihash.id | 1 - .../filecoin_markets/storage_market/_index.md | 2 +- .../repository/ipldstore/_index.md | 1 - 33 files changed, 99 insertions(+), 2482 deletions(-) delete mode 100644 content/libraries/fcs/_index.md delete mode 100644 content/libraries/filcrypto/_index.md delete mode 100644 content/libraries/filcrypto/filproofs/_index.md delete mode 100644 content/libraries/filcrypto/filproofs/algorithms.go delete mode 100644 content/libraries/filcrypto/filproofs/algorithms.id delete mode 100644 content/libraries/filcrypto/filproofs/feistel.go delete mode 100644 content/libraries/filcrypto/filproofs/filecoin_proofs_subsystem.go delete mode 100644 content/libraries/filcrypto/filproofs/filecoin_proofs_subsystem.id delete mode 100644 content/libraries/filcrypto/filproofs/hashing.go delete mode 100644 content/libraries/filcrypto/filproofs/tree.go delete mode 100644 content/libraries/filcrypto/filproofs/util.go delete mode 100644 content/libraries/filcrypto/filproofs/win_stacked_sdr.go delete mode 100644 content/libraries/ipfs/bitswap.md delete mode 100644 content/libraries/ipfs/graphsync.md delete mode 100644 content/libraries/ipfs/unixfs.md delete mode 100644 content/libraries/ipld/cid.md delete mode 100644 content/libraries/ipld/datamodel.md delete mode 100644 content/libraries/ipld/ipld.id delete mode 100644 content/libraries/ipld/selectors.id delete mode 100644 content/libraries/ipld/selectors.md delete mode 100644 content/libraries/libp2p/fil_libp2p_nodes.md delete 
mode 100644 content/libraries/libp2p/gossipsub.md delete mode 100644 content/libraries/libp2p/kad_dht.md delete mode 100644 content/libraries/libp2p/libp2p.id delete mode 100644 content/libraries/multiformats/multiaddr.id delete mode 100644 content/libraries/multiformats/multihash.id diff --git a/content/glossary/_index.md b/content/glossary/_index.md index 29a41ffda..0d2135cd3 100644 --- a/content/glossary/_index.md +++ b/content/glossary/_index.md @@ -220,7 +220,7 @@ See `Height` for definition. They are synonymous. ### SEAL/UNSEAL -See [Filecoin Proofs](filproofs) +TODO ### Sector diff --git a/content/libraries/fcs/_index.md b/content/libraries/fcs/_index.md deleted file mode 100644 index d0baff1c6..000000000 --- a/content/libraries/fcs/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: FCS -weight: 2 -dashboardWeight: 1 -dashboardState: missing -dashboardAudit: missing -dashboardTests: 0 ---- - -# FCS \ No newline at end of file diff --git a/content/libraries/filcrypto/_index.md b/content/libraries/filcrypto/_index.md deleted file mode 100644 index 9ece98c55..000000000 --- a/content/libraries/filcrypto/_index.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: FIL Crypto -description: Cryptographic libraries used in Filecoin -weight: 1 -bookCollapseSection: true -dashboardWeight: 1.5 -dashboardState: wip -dashboardAudit: n/a -dashboardTests: 0 ---- - -# FIL Crypto \ No newline at end of file diff --git a/content/libraries/filcrypto/filproofs/_index.md b/content/libraries/filcrypto/filproofs/_index.md deleted file mode 100644 index 1985c4eba..000000000 --- a/content/libraries/filcrypto/filproofs/_index.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: FIL Proofs -description: Filecoin Storage Proofs -dashboardWeight: 1.5 -dashboardState: incorrect -dashboardTests: 0 -dashboardAudit: done -dashboardAuditDate: '2020-07-28' -dashboardAuditURL: https://github.com/filecoin-project/rust-fil-proofs/blob/master/audits/Sigma-Prime-Protocol-Labs-Filecoin-Proofs-Security-Review-v2.1.pdf - ---- -# Filecoin Storage Proofs - -{{< embed src="filecoin_proofs_subsystem.id" lang="go" >}} - -{{< embed src="algorithms.id" lang="go" >}} - -{{< embed src="algorithms.go" lang="go" >}} diff --git a/content/libraries/filcrypto/filproofs/algorithms.go b/content/libraries/filcrypto/filproofs/algorithms.go deleted file mode 100644 index ea717dc98..000000000 --- a/content/libraries/filcrypto/filproofs/algorithms.go +++ /dev/null @@ -1,835 +0,0 @@ -package filproofs - -import ( - "bytes" - "errors" - "fmt" - "math" - "math/rand" - - "encoding/binary" - big "math/big" - - abi "github.com/filecoin-project/specs-actors/actors/abi" - file "github.com/filecoin-project/specs/systems/filecoin_files/file" - sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector" - sector_index "github.com/filecoin-project/specs/systems/filecoin_mining/sector_index" - util "github.com/filecoin-project/specs/util" - cid "github.com/ipfs/go-cid" -) - -type Bytes32 []byte -type UInt = util.UInt -type PieceInfo = abi.PieceInfo -type Label Bytes32 -type Commitment = sector.Commitment -type PrivatePostCandidateProof = abi.PrivatePoStCandidateProof -type SectorSize = abi.SectorSize -type RegisteredProof = abi.RegisteredProof - -const WRAPPER_LAYER_WINDOW_INDEX = -1 - -const NODE_SIZE = 32 -const ELECTION_POST_PARTITIONS = 1 -const SURPRISE_POST_PARTITIONS = 1 -const POST_LEAF_CHALLENGE_COUNT = 66 -const POST_CHALLENGE_RANGE_SIZE = 1 - -const GIB_32 = 32 * 1024 * 1024 * 1024 - -var PROOFS ProofRegistry = 
ProofRegistry(map[util.UInt]ProofInstance{util.UInt(abi.RegisteredProof_WinStackedDRG32GiBSeal): &ProofInstance_I{ - ID_: abi.RegisteredProof_WinStackedDRG32GiBSeal, - Type_: ProofType_SealProof, - CircuitType_: &ConcreteCircuit_I{ - Name_: "HASHOFCIRCUITPARAMETERS1", - }, -}, - util.UInt(abi.RegisteredProof_WinStackedDRG32GiBPoSt): &ProofInstance_I{ - ID_: abi.RegisteredProof_WinStackedDRG32GiBPoSt, - Type_: ProofType_PoStProof, - CircuitType_: &ConcreteCircuit_I{ - Name_: "HASHOFCIRCUITPARAMETERS2", - }, - Cfg_: ProofCfg_Make_PoStCfg(&PoStInstanceCfg_I{}), - // FIXME: integrate - // return sector.PoStInstanceCfg_Make_PoStCfgV1(&sector.PoStCfgV1_I{ - // Type_: pType, - // Nodes_: nodes, - // Partitions_: partitions, - // LeafChallengeCount_: POST_LEAF_CHALLENGE_COUNT, - // ChallengeRangeSize_: POST_CHALLENGE_RANGE_SIZE, - // }).Impl() - - }, - util.UInt(abi.RegisteredProof_StackedDRG32GiBSeal): &ProofInstance_I{ - ID_: abi.RegisteredProof_StackedDRG32GiBSeal, - Type_: ProofType_SealProof, - CircuitType_: &ConcreteCircuit_I{ - Name_: "HASHOFCIRCUITPARAMETERS3", - }, - }, - util.UInt(abi.RegisteredProof_StackedDRG32GiBPoSt): &ProofInstance_I{ - ID_: abi.RegisteredProof_StackedDRG32GiBPoSt, - Type_: ProofType_PoStProof, - CircuitType_: &ConcreteCircuit_I{ - Name_: "HASHOFCIRCUITPARAMETERS4", - }, - }, -}) - -func RegisteredProofInstance(r RegisteredProof) ProofInstance { - return PROOFS[util.UInt(r)] -} - -func (c *ConcreteCircuit_I) GrothParameterFileName() string { - return c.Name() + ".params" -} - -func (c *ConcreteCircuit_I) VerifyingKeyFileName() string { - return c.Name() + ".vk" -} - -func (cfg *SealInstanceCfg_I) SectorSize() SectorSize { - switch cfg.Which() { - case SealInstanceCfg_Case_WinStackedDRGCfgV1: - { - return cfg.As_WinStackedDRGCfgV1().SectorSize() - } - } - panic("TODO") -} - -func PoStCfg(pType PoStType, sectorSize SectorSize, partitions UInt) RegisteredProof { - return abi.RegisteredProof_WinStackedDRG32GiBPoSt -} - -func MakeSealVerifier(registeredProof abi.RegisteredProof) *SealVerifier_I { - return &SealVerifier_I{ - SealCfg_: RegisteredProofInstance(registeredProof).Cfg().As_SealCfg(), - } -} - -func SurprisePoStCfg(sectorSize SectorSize) RegisteredProof { - return PoStCfg(PoStType_SurprisePoSt, sectorSize, SURPRISE_POST_PARTITIONS) -} - -func ElectionPoStCfg(sectorSize SectorSize) RegisteredProof { - return PoStCfg(PoStType_ElectionPoSt, sectorSize, ELECTION_POST_PARTITIONS) -} - -func MakeElectionPoStVerifier(registeredProof RegisteredProof) *PoStVerifier_I { - return &PoStVerifier_I{ - PoStCfg_: RegisteredProofInstance(registeredProof).Cfg().As_PoStCfg(), - } -} - -func MakeSurprisePoStVerifier(registeredProof RegisteredProof) *PoStVerifier_I { - return &PoStVerifier_I{ - PoStCfg_: RegisteredProofInstance(registeredProof).Cfg().As_PoStCfg(), - } -} - -func (drg *DRG_I) Parents(node UInt) []UInt { - config := drg.Config() - degree := UInt(config.Degree()) - return DRGAlgorithmComputeParents(config.Algorithm().ParentsAlgorithm(), degree, node) -} - -// TODO: Verify this. Both the port from impl and the algorithm. -func DRGAlgorithmComputeParents(alg DRGCfg_Algorithm_ParentsAlgorithm, degree UInt, node UInt) (parents []UInt) { - switch alg { - case DRGCfg_Algorithm_ParentsAlgorithm_DRSample: - util.Assert(node > 0) - parents = append(parents, node-1) - - m := degree - 1 - - var k UInt - for k = 0; k < m; k++ { - logi := int(math.Floor(math.Log2(float64(node * m)))) - // FIXME: Make RNG parameterizable and specify it.
- j := rand.Intn(logi) - jj := math.Min(float64(node*m+k), float64(UInt(1)<<uint(j+1))) - backDist := randInRange(int(math.Max(float64(UInt(jj)>>1), 2)), int(jj+1)) - out := (node*m + k - backDist) / m - - parents = append(parents, out) - } - - return parents - - default: - panic(fmt.Sprintf("DRG algorithm not supported: %v", alg)) - } -} - -func randInRange(lowInclusive int, highExclusive int) UInt { - // NOTE: current implementation uses a more sophisticated method for repeated sampling within a range. - // We need to converge on and fully specify the actual method, since this must be deterministic. - return UInt(rand.Intn(highExclusive-lowInclusive) + lowInclusive) -} - -func (exp *ExpanderGraph_I) Parents(node UInt) []UInt { - d := exp.Config().Degree() - - // TODO: How do we handle choice of algorithm generically? - return exp.Config().Algorithm().As_ChungExpanderAlgorithm().Parents(node, d, exp.Config().Nodes()) -} - -func (chung *ChungExpanderAlgorithm_I) Parents(node UInt, d ExpanderGraphDegree, nodes ExpanderGraphNodeCount) []UInt { - var parents []UInt - var i UInt - for i = 0; i < UInt(d); i++ { - parent := chung._ithParent(node, i, d, nodes) - parents = append(parents, parent) - } - return parents -} - -func (chung *ChungExpanderAlgorithm_I) _ithParent(node UInt, i UInt, degree ExpanderGraphDegree, nodes ExpanderGraphNodeCount) UInt { - // ithParent generates one of d parents of node. - d := UInt(degree) - - // This is done by operating on permutations of a set with d elements per node. - setSize := UInt(nodes) * d - - // There are d ways of mapping each node into the set, and we choose the ith. - // Note that we can project the element back to the original node: element / d == node. - element := node*d + i - - // Permutations of the d elements corresponding to each node yield d new elements, - permuted := chung.PermutationAlgorithm().As_Feistel().Permute(setSize, element) - - // each of which can be projected back to a node. - projected := permuted / d - - // We have selected the ith such parent of node. - return projected -} - -func (f *Feistel_I) Permute(size UInt, i UInt) UInt { - // Call into feistel.go. - panic("TODO") -} - -func getProverID(minerID abi.ActorID) []byte { - // return leb128(minerID) - panic("TODO") -} - -func computeSealSeed(sid abi.SectorID, randomness abi.SealRandomness, commD abi.UnsealedSectorCID) sector.SealSeed { - proverId := getProverID(sid.Miner) - sectorNumber := sid.Number - - var preimage []byte - preimage = append(preimage, proverId...) - preimage = append(preimage, bigEndianBytesFromUInt(UInt(sectorNumber), 8)...) - preimage = append(preimage, randomness...) - preimage = append(preimage, Commitment_UnsealedSectorCID(commD)...) - - sealSeed := HashBytes_SHA256Hash(preimage) - return sector.SealSeed(sealSeed) -} - -func generateSDRKeyLayers(drg *DRG_I, expander *ExpanderGraph_I, sealSeed sector.SealSeed, window int, nodes int, layers int, nodeSize int, modulus big.Int) [][]byte { - var keyLayers [][]byte - var prevLayer []byte - - for i := 0; i < layers; i++ { - currentLayer := labelLayer(drg, expander, sealSeed, window, nodeSize, nodes, prevLayer) - keyLayers = append(keyLayers, currentLayer) - prevLayer = currentLayer - } - - return keyLayers -} - -func labelLayer(drg *DRG_I, expander *ExpanderGraph_I, sealSeed sector.SealSeed, window int, nodeSize int, nodes int, prevLayer []byte) []byte { - size := nodes * nodeSize - labels := make([]byte, size) - - for i := 0; i < nodes; i++ { - var parents []Label - - // The first node of every layer has no DRG Parents.
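[Editor's note] For intuition, the parent derivation that `_ithParent` describes can be exercised in isolation. The sketch below is a hand-written illustration, not repo code: `identityPermute` is an assumed stand-in for the Feistel permutation (which `Permute` above leaves as a TODO), so with it every element projects back to its own node, exactly the `element / d == node` property noted in the comments.

```go
package main

import "fmt"

// identityPermute is a placeholder assumption; the spec uses a Feistel
// permutation here. Any bijection on [0, size) fits this slot.
func identityPermute(size, element uint64) uint64 { return element }

// ithParent mirrors _ithParent's arithmetic: map the node into a set of
// nodes*d elements, permute the set, and project back to a node.
func ithParent(node, i, d, nodes uint64) uint64 {
	setSize := nodes * d
	element := node*d + i // the ith of d mappings of node into the set
	permuted := identityPermute(setSize, element)
	return permuted / d // project the permuted element back to a node
}

func main() {
	for i := uint64(0); i < 8; i++ {
		// With the identity permutation every parent projects back to 5,
		// demonstrating element/d == node; a real permutation scatters these.
		fmt.Println(ithParent(5, i, 8, 1024))
	}
}
```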
- if i > 0 { - for parent := range drg.Parents(UInt(i)) { - start := parent * nodeSize - parents = append(parents, labels[start:start+nodeSize]) - } - } - - // The first layer has no expander parents. - if prevLayer != nil { - for parent := range expander.Parents(UInt(i)) { - start := parent * nodeSize - parents = append(parents, prevLayer[start:start+nodeSize]) - } - } - - label := generateLabel(sealSeed, i, window, parents) - labels = append(labels, label...) - } - - return labels -} - -// Encodes data in-place, mutating it. -func encodeDataInPlace(data []byte, key []byte, nodeSize int, modulus *big.Int) []byte { - if len(data) != len(key) { - panic("Key and data must be same length.") - } - - for i := 0; i < len(data); i += nodeSize { - copy(data[i:i+nodeSize], encodeNode(data[i:i+nodeSize], key[i:i+nodeSize], modulus, nodeSize)) - } - - return data -} - -func generateLabel(sealSeed sector.SealSeed, node int, window int, dependencies []Label) []byte { - preimage := sealSeed - - if window != WRAPPER_LAYER_WINDOW_INDEX { - windowBytes := make([]byte, 8) - binary.LittleEndian.PutUint64(windowBytes, uint64(window)) - - preimage = append(preimage, windowBytes...) - } - - nodeBytes := make([]byte, 8) - binary.LittleEndian.PutUint64(nodeBytes, uint64(node)) - - preimage = append(preimage, nodeBytes...) - for _, dependency := range dependencies { - preimage = append(preimage, dependency...) - } - - return deriveLabel(preimage) -} - -func deriveLabel(elements []byte) []byte { - return trimToFr32(HashBytes_SHA256Hash(elements)) -} - -func computeCommC(keyLayers [][]byte, nodeSize int) (PedersenHash, file.Path) { - leaves := make([]byte, len(keyLayers[0])) - - // For each node in the graph, - for start := 0; start < len(leaves); start += nodeSize { - end := start + nodeSize - - var column []Label - // Concatenate that node's label at each layer, in order, into a column. - for i := 0; i < len(keyLayers); i++ { - label := keyLayers[i][start:end] - column = append(column, label) - } - - // And hash that column to create the leaf of a new tree. - hashed := hashColumn(column) - copy(leaves[start:end], hashed[:]) - } - - // Return the root of and path to the column tree. - return BuildTree_PedersenHash(leaves) -} - -func computeCommQ(layerBytes []byte, nodeSize int) (PedersenHash, file.Path) { - leaves := make([]byte, len(layerBytes)/nodeSize) - for i := 0; i < len(leaves); i++ { - leaves = append(leaves, layerBytes[i*nodeSize:(i+1)*nodeSize]...) - } - - return BuildTree_PedersenHash(leaves) -} - -func hashColumn(column []Label) PedersenHash { - var preimage []byte - for _, label := range column { - preimage = append(preimage, label...) 
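[Editor's note] As a concrete companion to `generateLabel`/`deriveLabel` above, this hand-written sketch derives a single label with `crypto/sha256`; the final mask is the `trimToFr32` step from `util.go` below, which clears the two most significant bits of the last byte so the little-endian value fits the BLS12-381 scalar field.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// deriveLabelSketch hashes (sealSeed || window || node || parent labels),
// then trims the digest to a field element, mirroring generateLabel and
// deriveLabel.
func deriveLabelSketch(sealSeed []byte, window, node uint64, parents [][]byte) []byte {
	preimage := append([]byte{}, sealSeed...)
	buf := make([]byte, 8)
	binary.LittleEndian.PutUint64(buf, window)
	preimage = append(preimage, buf...)
	binary.LittleEndian.PutUint64(buf, node)
	preimage = append(preimage, buf...)
	for _, p := range parents {
		preimage = append(preimage, p...)
	}
	sum := sha256.Sum256(preimage)
	sum[31] &= 0x3f // trimToFr32: clear the top two bits of the last byte
	return sum[:]
}

func main() {
	fmt.Printf("%x\n", deriveLabelSketch([]byte("seal-seed"), 0, 1, nil))
}
```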
- } - return HashBytes_PedersenHash(preimage) -} - -func createColumnProofs(drg *DRG_I, expander *ExpanderGraph_I, challenge UInt, nodeSize UInt, columnTree MerkleTree, aux sector.ProofAuxTmp, windows int, windowSize int) []SDRColumnProof { - columnElements := getColumnElements(drg, expander, challenge) - - var columnProofs []SDRColumnProof - for c := range columnElements { - chall := UInt(c) - - columnProof := createColumnProof(chall, nodeSize, windows, windowSize, columnTree, aux) - columnProofs = append(columnProofs, columnProof) - } - - return columnProofs -} - -func createWindowProof(drg *DRG_I, expander *ExpanderGraph_I, challenge UInt, nodeSize UInt, dataTree MerkleTree, columnTree MerkleTree, qLayerTree MerkleTree, aux sector.ProofAuxTmp, windows int, windowSize int) (proof OfflineWindowProof) { - columnElements := getColumnElements(drg, expander, challenge) - - var columnProofs []SDRColumnProof - for c := range columnElements { - chall := UInt(c) - - columnProof := createColumnProof(chall, nodeSize, windows, windowSize, columnTree, aux) - columnProofs = append(columnProofs, columnProof) - } - - dataProof := dataTree.ProveInclusion(challenge) - qLayerProof := qLayerTree.ProveInclusion(challenge) - - proof = OfflineWindowProof{ - DataProof: dataProof, - QLayerProof: qLayerProof, - } - - return proof -} - -func createWrapperProof(drg *DRG_I, expander *ExpanderGraph_I, sealSeed sector.SealSeed, challenge UInt, nodeSize UInt, qTree MerkleTree, replicaTree MerkleTree, aux sector.ProofAuxTmp, windows int, windowSize int) (proof OfflineWrapperProof) { - proof.ReplicaProof = replicaTree.ProveInclusion(challenge) - - parents := expander.Parents(challenge) - - for _, parent := range parents { - proof.QLayerProofs = append(proof.QLayerProofs, qTree.ProveInclusion(parent)) - } - return proof -} - -func getColumnElements(drg *DRG_I, expander *ExpanderGraph_I, challenge UInt) (columnElements []UInt) { - columnElements = append(columnElements, challenge) - columnElements = append(columnElements, drg.Parents(challenge)...) - columnElements = append(columnElements, expander.Parents(challenge)...) - - return columnElements -} - -func createColumnProof(c UInt, nodeSize UInt, windowSize int, windows int, columnTree MerkleTree, aux sector.ProofAuxTmp) (columnProof SDRColumnProof) { - layers := aux.KeyLayers() - var column []Label - - for w := 0; w < windows; w++ { - for i := 0; i < len(layers); i++ { - start := (w * windowSize) + int(c) - end := start + int(nodeSize) - column = append(column, layers[i][start:end]) - } - } - columnProof = SDRColumnProof{ - Column: column, - InclusionProof: columnTree.ProveInclusion(c), - } - - return columnProof -} - -type PrivateOfflineProof struct { - ColumnProofs []SDRColumnProof - WindowProofs []OfflineWindowProof - WrapperProofs []OfflineWrapperProof -} - -type OfflineWindowProof struct { - // TODO: these proofs need to depend on hash function. 
- DataProof InclusionProof // SHA256 - QLayerProof InclusionProof -} - -type OfflineWrapperProof struct { - ReplicaProof InclusionProof // Pedersen - QLayerProofs []InclusionProof -} - -func (ip *InclusionProof_I) Leaf() []byte { - panic("TODO") -} - -func (ip *InclusionProof_I) LeafIndex() UInt { - panic("TODO") -} - -func (ip *InclusionProof_I) Root() Commitment { - panic("TODO") -} - -func (mt *MerkleTree_I) ProveInclusion(challenge UInt) InclusionProof { - panic("TODO") -} - -func (mt *MerkleTree_I) Leaf(index UInt) []byte { - panic("TODO") -} - -func LoadMerkleTree(path file.Path) MerkleTree { - panic("TODO") -} - -func (ip *InclusionProof_I) Verify(root []byte, challenge UInt) bool { - // FIXME: need to verify proof length of private inclusion proofs. - panic("TODO") -} - -type SDRColumnProof struct { - Column []Label - InclusionProof InclusionProof -} - -func (proof *SDRColumnProof) Verify(root []byte, challenge UInt) bool { - if !bytes.Equal(hashColumn(proof.Column), proof.InclusionProof.Leaf()) { - return false - } - - if proof.InclusionProof.LeafIndex() != challenge { - return false - } - - return proof.InclusionProof.Verify(root, challenge) -} - -func generateOfflineChallenges(challengeRange int, sealSeed sector.SealSeed, randomness abi.InteractiveSealRandomness, challengeCount int) []UInt { - var challenges []UInt - challengeRangeSize := challengeRange - 1 // Never challenge the first node. - challengeModulus := new(big.Int) - challengeModulus.SetUint64(uint64(challengeRangeSize)) - - // Maybe factor this into a separate function, since the logic is the same... - - for i := 0; i < challengeCount; i++ { - var preimage []byte - preimage = append(preimage, sealSeed...) - preimage = append(preimage, randomness...) - preimage = append(preimage, littleEndianBytesFromInt(i, 4)...) - - hash := HashBytes_SHA256Hash(preimage) - bigChallenge := bigIntFromLittleEndianBytes(hash) - bigChallenge = bigChallenge.Mod(bigChallenge, challengeModulus) - - // Sectors nodes must be 64-bit addressable, always a safe assumption. - challenge := bigChallenge.Uint64() - challenge += 1 // Never challenge the first node. - challenges = append(challenges, challenge) - } - return challenges -} - -func encodeNode(data []byte, key []byte, modulus *big.Int, nodeSize int) []byte { - // TODO: Make this a method of WinStackedDRG. 
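[Editor's note] `encodeNode` here reduces to `addEncode`: the replica node is the data node plus the key node in the scalar field, which is why unsealing is simply field subtraction. A minimal sketch of the pair, using the BLS12-381 modulus that `WinSDRParams` below hard-codes (`decodeNode` is my naming for the inverse; the spec file does not define it):

```go
package main

import (
	"fmt"
	"math/big"
)

// BLS12-381 scalar field modulus, as set in WinSDRParams.
var modulus, _ = new(big.Int).SetString(
	"52435875175126190479447740508185965837690552500527637822603658699938581184513", 10)

// encodeNode: replica = (data + key) mod p, as in addEncode.
func encodeNode(data, key *big.Int) *big.Int {
	return new(big.Int).Mod(new(big.Int).Add(data, key), modulus)
}

// decodeNode inverts it: data = (replica - key) mod p. big.Int.Mod is
// Euclidean, so the result stays non-negative even when the sum underflows.
func decodeNode(replica, key *big.Int) *big.Int {
	return new(big.Int).Mod(new(big.Int).Sub(replica, key), modulus)
}

func main() {
	data, key := big.NewInt(42), big.NewInt(7)
	replica := encodeNode(data, key)
	fmt.Println(decodeNode(replica, key)) // 42
}
```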
- return addEncode(data, key, modulus, nodeSize) -} - -func addEncode(data []byte, key []byte, modulus *big.Int, nodeSize int) []byte { - - d := bigIntFromLittleEndianBytes(data) - k := bigIntFromLittleEndianBytes(key) - - sum := new(big.Int).Add(d, k) - result := new(big.Int).Mod(sum, modulus) - - return littleEndianBytesFromBigInt(result, nodeSize) -} - -//////////////////////////////////////////////////////////////////////////////// -// Seal Verification - -func (sv *SealVerifier_I) VerifySeal(svi abi.SealVerifyInfo) bool { - switch svi.OnChain.RegisteredProof { - case abi.RegisteredProof_WinStackedDRG32GiBSeal: - { - sdr := WinSDRParams(svi.OnChain.RegisteredProof) - - return sdr.VerifySeal(svi) - } - case abi.RegisteredProof_StackedDRG32GiBSeal: - { - panic("TODO") - } - } - - return false -} - -func ComputeUnsealedSectorCIDFromPieceInfos(sectorSize abi.SectorSize, pieceInfos []PieceInfo) (unsealedCID abi.UnsealedSectorCID, err error) { - rootPieceInfo := computeRootPieceInfo(pieceInfos) - rootSize := uint64(rootPieceInfo.Size) - - if rootSize != uint64(sectorSize) { - return unsealedCID, errors.New("Wrong sector size.") - } - - return UnsealedSectorCID(rootPieceInfo.PieceCID.Bytes()), nil -} - -func computeRootPieceInfo(pieceInfos []PieceInfo) PieceInfo { - // Construct root PieceInfo by (shift-reduce) parsing the constituent PieceInfo array. - // Later pieces must always be joined with equal-sized predecessors to create a new root twice their size. - // So if a piece is larger than the current root (top of stack), add padding until it is not. - // If a piece is smaller than the root, let it be the new root (top of stack) until reduced to a replacement that can be joined - // with the previous. - var stack []PieceInfo - - shift := func(p PieceInfo) { - stack = append(stack, p) - } - peek := func() PieceInfo { - return stack[len(stack)-1] - } - peek2 := func() PieceInfo { - return stack[len(stack)-2] - } - pop := func() PieceInfo { - stack = stack[:len(stack)-1] - return stack[len(stack)-1] - } - reduce1 := func() bool { - if len(stack) > 1 && peek().Size == peek2().Size { - right := pop() - left := pop() - joined := joinPieceInfos(left, right) - shift(joined) - return true - } - return false - } - reduce := func() { - for reduce1() { - } - } - shiftReduce := func(p PieceInfo) { - shift(p) - reduce() - } - - // Prime the pump with first pieceInfo. - shift(pieceInfos[0]) - - // Consume the remainder. - for _, pieceInfo := range pieceInfos[1:] { - // TODO: Assert that pieceInfo.Size is a power of 2. - - // Add padding until top of stack is large enough to receive current pieceInfo. - for peek().Size < pieceInfo.Size { - shiftReduce(zeroPadding(peek().Size)) - } - - // Add the current piece. - shiftReduce(pieceInfo) - } - - // Add any necessary final padding. - for len(stack) > 1 { - shiftReduce(zeroPadding(peek().Size)) - } - util.Assert(len(stack) == 1) - - return pop() -} - -func zeroPadding(size int64) PieceInfo { - return abi.PieceInfo{ - Size: size, - // CommP_: FIXME: Implement. - } -} - -func joinPieceInfos(left PieceInfo, right PieceInfo) PieceInfo { - util.Assert(left.Size == right.Size) - - // FIXME: make this whole function generic? 
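[Editor's note] The shift-reduce construction in `computeRootPieceInfo` above is easiest to follow on sizes alone. This self-contained trace (hashing elided, sizes assumed to be powers of two) shows the stack discipline: equal-sized neighbours join into a parent of twice the size, and zero padding is pushed until an incoming piece fits.

```go
package main

import "fmt"

// rootSize replays computeRootPieceInfo's stack discipline on sizes only.
func rootSize(sizes []int64) int64 {
	var stack []int64
	shift := func(s int64) { stack = append(stack, s) }
	reduce := func() {
		// Join equal-sized neighbours into a parent of twice the size.
		for len(stack) > 1 && stack[len(stack)-1] == stack[len(stack)-2] {
			stack = append(stack[:len(stack)-2], stack[len(stack)-1]*2)
		}
	}
	shift(sizes[0])
	for _, s := range sizes[1:] {
		for stack[len(stack)-1] < s { // pad until the top can receive s
			shift(stack[len(stack)-1])
			reduce()
		}
		shift(s)
		reduce()
	}
	for len(stack) > 1 { // final padding up to a single root
		shift(stack[len(stack)-1])
		reduce()
	}
	return stack[0]
}

func main() {
	// [4 2 2 8]: 2+2 join to 4, 4+4 join to 8, 8+8 join to the root.
	fmt.Println(rootSize([]int64{4, 2, 2, 8})) // 16
}
```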
- // Note: cid.Bytes() isn't actually the payload data that we want input to the binary hash function, for more - // information see discussion: https://filecoinproject.slack.com/archives/CHMNDCK9P/p1578629688082700 - sectorPieceCID, err := cid.Cast(BinaryHash_SHA256Hash(cid.Cid(left.PieceCID).Bytes(), cid.Cid(right.PieceCID).Bytes())) - util.Assert(err == nil) - - return abi.PieceInfo{ - Size: left.Size + right.Size, - PieceCID: abi.PieceCID(sectorPieceCID), - } -} - -//////////////////////////////////////////////////////////////////////////////// -// PoSt - -func getChallengedSectors(sectorIDs []abi.SectorID, randomness abi.PoStRandomness, eligibleSectors []abi.SectorID, candidateCount int) (sectors []abi.SectorID) { - for i := 0; i < candidateCount; i++ { - sector := generateSectorChallenge(randomness, i, sectorIDs) - sectors = append(sectors, sector) - } - - return sectors -} - -func generateSectorChallenge(randomness abi.PoStRandomness, n int, sectorIDs []abi.SectorID) (sector abi.SectorID) { - preimage := append(randomness, littleEndianBytesFromInt(n, 8)...) - hash := SHA256Hash(preimage) - sectorChallenge := bigIntFromLittleEndianBytes(hash) - - challengeModulus := new(big.Int) - challengeModulus.SetUint64(uint64(len(sectorIDs))) - - sectorIndex := sectorChallenge.Mod(sectorChallenge, challengeModulus) - return sectorIDs[int(sectorIndex.Uint64())] -} - -func generateLeafChallenge(randomness abi.PoStRandomness, sectorChallengeIndex UInt, leafChallengeIndex int, nodes int, challengeRangeSize int) UInt { - preimage := append(randomness, littleEndianBytesFromUInt(sectorChallengeIndex, 8)...) - preimage = append(preimage, littleEndianBytesFromInt(leafChallengeIndex, 8)...) - hash := SHA256Hash(preimage) - bigHash := bigIntFromLittleEndianBytes(hash) - - challengeSpaceSize := nodes / challengeRangeSize - challengeModulus := new(big.Int) - challengeModulus.SetUint64(UInt(challengeSpaceSize)) - - leafChallenge := bigHash.Mod(bigHash, challengeModulus) - - return leafChallenge.Uint64() -} - -func generateCandidate(randomness abi.PoStRandomness, aux sector.PersistentProofAux, sectorID abi.SectorID, sectorChallengeIndex UInt) abi.PoStCandidate { - var candidate abi.PoStCandidate - - // switch algorithm { - // case ProofAlgorithm_StackedDRGSeal: - // panic("TODO") - // case ProofAlgorithm_WinStackedDRGSeal: - // sdr := WinStackedDRG_I{} - // candidate = sdr._generateCandidate(cfg, randomness, aux, sectorID, sectorChallengeIndex) - // } - return candidate -} - -func computePartialTicket(randomness abi.PoStRandomness, sectorID abi.SectorID, data []byte) abi.PartialTicket { - preimage := randomness - preimage = append(preimage, getProverID(sectorID.Miner)...) - preimage = append(preimage, littleEndianBytesFromUInt(UInt(sectorID.Number), 8)...) - preimage = append(preimage, data...) 
- partialTicket := abi.PartialTicket(HashBytes_PedersenHash(preimage)) - - return partialTicket -} - -type PoStCandidatesMap map[ProofAlgorithm][]abi.PoStCandidate - -func CreatePoStProof(privateCandidateProofs []PrivatePostCandidateProof, challengeSeed abi.PoStRandomness) []abi.PoStProof { - var proofsMap map[RegisteredProof][]PrivatePostCandidateProof - - for _, proof := range privateCandidateProofs { - registeredProof := proof.RegisteredProof - proofsMap[registeredProof] = append(proofsMap[registeredProof], proof) - } - - var circuitProofs []abi.PoStProof - for registeredProof, proofs := range proofsMap { - privateProof := createPrivatePoStProof(registeredProof, proofs, challengeSeed) - circuitProof := createPoStCircuitProof(registeredProof, privateProof) - circuitProofs = append(circuitProofs, circuitProof) - } - - return circuitProofs -} - -type PrivatePoStProof struct { - RegisteredProof RegisteredProof - ChallengeSeed abi.PoStRandomness - CandidateProofs []PrivatePostCandidateProof -} - -func createPrivatePoStProof(registeredProof abi.RegisteredProof, candidateProofs []PrivatePostCandidateProof, challengeSeed abi.PoStRandomness) PrivatePoStProof { - // TODO: Verify that all candidateProofs share algorithm. - return PrivatePoStProof{ - RegisteredProof: registeredProof, - ChallengeSeed: challengeSeed, - CandidateProofs: candidateProofs, - } -} - -type InternalPrivateCandidateProof struct { - InclusionProofs []InclusionProof -} - -// This exists because we need to pass private proofs out of filproofs for winner selection. -// Actually implementing it would (will?) be tedious, since it means doing the same for InclusionProofs. - -func (p *InternalPrivateCandidateProof) externalize(registeredProof RegisteredProof) abi.PrivatePoStCandidateProof { - return abi.PrivatePoStCandidateProof{ - RegisteredProof: registeredProof, - Externalized: []byte{}, // Unimplemented. - } -} - -// This is the inverse of InternalPrivateCandidateProof.externalize and equally tedious. -func newInternalPrivateProof(externalPrivateProof abi.PrivatePoStCandidateProof) InternalPrivateCandidateProof { - return InternalPrivateCandidateProof{} -} - -func createPoStCircuitProof(registeredProof abi.RegisteredProof, privateProof PrivatePoStProof) (proof abi.PoStProof) { - switch registeredProof { - case abi.RegisteredProof_WinStackedDRG32GiBPoSt: - sdr := WinStackedDRG_I{} - proof = sdr._createPoStCircuitProof(privateProof) - case abi.RegisteredProof_StackedDRG32GiBPoSt: - panic("TODO") - } - - return proof -} - -func (pv *PoStVerifier_I) _verifyPoStProof(sv abi.PoStVerifyInfo) bool { - // commT := sv.CommT() - // candidates := sv.Candidates() - // randomness := sv.Randomness() - // postProofs := sv.OnChain.Proofs() - - // Verify circuit proof. 
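[Editor's note] One practical note on the grouping step in `CreatePoStProof` above: a Go map declared with `var` is nil, and assigning into a nil map panics at run time, so an implementation would initialize it with `make` first. A minimal sketch with a stand-in proof type:

```go
package main

import "fmt"

type candidateProof struct {
	registeredProof string // stand-in for abi.RegisteredProof
}

// groupByProofType mirrors CreatePoStProof's grouping of candidate
// proofs by their registered proof type.
func groupByProofType(proofs []candidateProof) map[string][]candidateProof {
	grouped := make(map[string][]candidateProof) // make, not var: a nil map cannot be assigned into
	for _, p := range proofs {
		grouped[p.registeredProof] = append(grouped[p.registeredProof], p)
	}
	return grouped
}

func main() {
	ps := []candidateProof{
		{"WinStackedDRG32GiBPoSt"},
		{"StackedDRG32GiBPoSt"},
		{"WinStackedDRG32GiBPoSt"},
	}
	fmt.Println(len(groupByProofType(ps)["WinStackedDRG32GiBPoSt"])) // 2
}
```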
- panic("TODO") -} - -//////////////////////////////////////////////////////////////////////////////// -// General PoSt - -func generatePoStCandidates(challengeSeed abi.PoStRandomness, eligibleSectors []abi.SectorID, candidateCount int, sectorStore sector_index.SectorStore) (candidates []abi.PoStCandidate) { - challengedSectors := getChallengedSectors(eligibleSectors, challengeSeed, eligibleSectors, candidateCount) - - for i, sectorID := range challengedSectors { - proofAux := sectorStore.GetSectorPersistentProofAux(sectorID) - - candidate := generateCandidate(challengeSeed, proofAux, sectorID, UInt(i)) - - candidates = append(candidates, candidate) - } - - return candidates -} - -//////////////////////////////////////////////////////////////////////////////// -// Election PoSt - -func GenerateElectionPoStCandidates(challengeSeed abi.PoStRandomness, eligibleSectors []abi.SectorID, candidateCount int, sectorStore sector_index.SectorStore) (candidates []abi.PoStCandidate) { - return generatePoStCandidates(challengeSeed, eligibleSectors, candidateCount, sectorStore) -} - -func CreateElectionPoStProof(privateCandidateProofs []PrivatePostCandidateProof, challengeSeed abi.PoStRandomness) []abi.PoStProof { - return CreatePoStProof(privateCandidateProofs, challengeSeed) -} - -func (pv *PoStVerifier_I) VerifyElectionPoSt(sv abi.PoStVerifyInfo) bool { - return pv._verifyPoStProof(sv) -} - -//////////////////////////////////////////////////////////////////////////////// -// Surprise PoSt - -func GenerateSurprisePoStCandidates(challengeSeed abi.PoStRandomness, eligibleSectors []abi.SectorID, candidateCount int, sectorStore sector_index.SectorStore) []abi.PoStCandidate { - panic("TODO") -} - -func CreateSurprisePoStProof(privateCandidateProofs []PrivatePostCandidateProof, challengeSeed abi.PoStRandomness) []abi.PoStProof { - return CreatePoStProof(privateCandidateProofs, challengeSeed) -} - -func (pv *PoStVerifier_I) VerifySurprisePoSt(sv abi.PoStVerifyInfo) bool { - return pv._verifyPoStProof(sv) -} diff --git a/content/libraries/filcrypto/filproofs/algorithms.id b/content/libraries/filcrypto/filproofs/algorithms.id deleted file mode 100644 index 1dd872e71..000000000 --- a/content/libraries/filcrypto/filproofs/algorithms.id +++ /dev/null @@ -1,299 +0,0 @@ -import abi "github.com/filecoin-project/specs-actors/actors/abi" -import file "github.com/filecoin-project/specs/systems/filecoin_files/file" -import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector" -import sectorIndex "github.com/filecoin-project/specs/systems/filecoin_mining/sector_index" - -type WinStackedDRGLayers UInt -type WinStackedDRGNodeSize UInt -type WinStackedDRGNodes UInt -type WinStackedDRGWindowCount UInt -type WinStackedDRGPartitions UInt -type WinStackedDRGChallenges UInt -type WinStackedDRGWindowChallenges UInt - -type PoStLeafChallengeCount UInt -type PoStChallengeRangeSize UInt - -type DRGDepth struct {} -type DRGFraction struct {} -type DRGDegree UInt -type DRGSeed struct {} -type DRGNodeCount UInt -type ExpanderGraphNodeCount UInt -type ChungExpanderPermutationFeistelKeys [UInt] -type ChungExpanderPermutationFeistelRounds UInt -type ChungExpanderPermutationFeistelHashFunction enum { - Blake2S - SHA256 -} -type ChungExpanderAlpha struct {} -type ChungExpanderBeta struct {} -type ExpanderGraphDegree UInt -type ExpanderGraphSeed struct {} -type DRGNodeSize UInt - -type SealAlgorithmArtifacts struct { - AlgorithmWideSetupArtifacts struct { - // trusted setup output parameters go here - // updates to public 
parameters go here - } - - SealSetupArtifacts - - // ProveArtifacts or - ChallengeArtifacts struct { - // inputs into prove() go here - } - - VerifyArtifacts struct { - // inputs into verify() go here - } -} - -// per-sector setup artifacts go here -type SealSetupArtifacts struct { - CommD sector.Commitment - CommR abi.SealedSectorCID - CommC sector.Commitment - CommQ sector.Commitment - CommRLast sector.Commitment - CommDTreePath file.Path - CommCTreePath file.Path - CommQTreePath file.Path - CommRLastTreePath file.Path - Seed sector.SealSeed - KeyLayers [Bytes] - Replica Bytes // This is what we challenge in PoSt. It will be regenerated just in time. Should probably be removed from here. -} - -type EllipticCurve struct { - FieldModulus &util.BigInt -} - -type WinStackedDRG struct { - Layers WinStackedDRGLayers - NodeSize WinStackedDRGNodeSize - Nodes WinStackedDRGNodes - WindowCount WinStackedDRGWindowCount - Partitions WinStackedDRGPartitions - Challenges WinStackedDRGChallenges - WindowChallenges WinStackedDRGWindowChallenges - Algorithm struct {} - DRGCfg - ExpanderGraphCfg - WindowDRGCfg DRGCfg - WindowExpanderGraphCfg ExpanderGraphCfg - // invariant: DRGCfg.Nodes == ExpanderGraphCfg.Nodes - Curve EllipticCurve - RegisteredProof abi.RegisteredProof - Cfg SealInstanceCfg - - Drg() DRG - Expander() ExpanderGraph - - WindowDrg() DRG - WindowExpander() ExpanderGraph - - Seal( - registeredProof abi.RegisteredProof - sid abi.SectorID - data Bytes - randomness abi.SealRandomness - ) SealSetupArtifacts - CreateSealProof( - challengeSeed abi.InteractiveSealRandomness - aux sector.ProofAuxTmp - ) abi.SealProof - CreatePrivateSealProof( - randomness abi.InteractiveSealRandomness - aux sector.ProofAuxTmp - ) PrivateOfflineProof - - CreateOfflineCircuitProof( - challengeProofs [OfflineWindowProof] - aux sector.ProofAuxTmp - ) abi.SealProof - VerifyPrivateSealProof( - privateProof [OfflineWindowProof] - sealSeeds [sector.SealSeed] - randomness abi.InteractiveSealRandomness - commD Commitment - commR abi.SealedSectorCID - ) bool - VerifySeal(sv abi.SealVerifyInfo) bool - - GenerateElectionPoStCandidates( - challengeSeed abi.PoStRandomness - eligibleSectors [abi.SectorNumber] - candidateCount int - sectorStore sectorIndex.SectorStore - ) [abi.PoStCandidate] - CreateElectionPoStProof(privateProofs [PrivatePoStProof]) abi.PoStProof - VerifyElectionPoSt(sv abi.PoStVerifyInfo) bool - - GenerateSurprisePoStCandidates( - challengeSeed abi.PoStRandomness - eligibleSectors [abi.SectorNumber] - candidateCount int - sectorStore sectorIndex.SectorStore - ) [abi.PoStCandidate] - CreateSurprisePoStProof(privateProofs [PrivatePoStProof]) abi.PoStProof - VerifySurprisePoSt(sv abi.PoStVerifyInfo) bool - CreatePrivatePoStProof( - candidateProofs [abi.PrivatePoStCandidateProof] - challengeSeed abi.PoStRandomness - aux sector.PersistentProofAux - ) PrivatePoStProof - VerifyPrivatePoStProof( - privateProof PrivatePoStProof - candidates [abi.PoStCandidate] - commRLast sector.Commitment - ) bool -} - -type SealVerifier struct { - SealCfg SealInstanceCfg -} - -type PoStVerifier struct { - PoStCfg PoStInstanceCfg -} - -type DRGCfg struct { - Algorithm struct { - Depth DRGDepth // D - Fraction DRGFraction // E - - ParentsAlgorithm enum { - DRSample - } - - RNGAlgorithm enum { - ChaCha20 - } - } - Degree DRGDegree - Seed DRGSeed - Nodes DRGNodeCount -} - -type DRG struct { - Config DRGCfg - Parents(UInt) [UInt] -} - -type ExpanderGraphCfg struct { - Algorithm union { - ChungExpanderAlgorithm - } - - Degree 
ExpanderGraphDegree - Seed ExpanderGraphSeed - Nodes ExpanderGraphNodeCount -} - -type ExpanderGraph struct { - Config ExpanderGraphCfg - Parents(UInt) [UInt] -} - -type ChungExpanderAlgorithm struct { - Alpha ChungExpanderAlpha - Beta ChungExpanderBeta - PermutationAlgorithm union { - Feistel - } - Parents(node UInt, ExpanderGraphDegree, nodes ExpanderGraphNodeCount) [UInt] -} - -type Feistel struct { - Keys ChungExpanderPermutationFeistelKeys - Rounds ChungExpanderPermutationFeistelRounds - HashFunction ChungExpanderPermutationFeistelHashFunction - Permute(size UInt, n UInt) UInt -} - -type MerkleTree struct { - ProveInclusion(challenge UInt) InclusionProof - Leaf(index UInt) Bytes -} - -// TODO: Needs to be generic over hash. -type InclusionProof struct { - Leaf() Bytes - LeafIndex() UInt - Root() Commitment - Verify(root Bytes, challenge UInt) bool -} - -type ProofRegistry {UInt: ProofInstance} - -type ProofInstance struct { - // FIXME: move some or all of these types into filproofs. - ID abi.RegisteredProof - Type ProofType - Algorithm ProofAlgorithm - CircuitType ConcreteCircuit - Partitions UInt - Cfg ProofCfg -} - -type ConcreteCircuit struct { - // Name must be globally unique. It will be a hash derived from semantic content of circuit. - Name string - GrothParameterFileName() string - VerifyingKeyFileName() string -} - -type ProofCfg union { - SealCfg SealInstanceCfg - PoStCfg PoStInstanceCfg -} - -type ProofType enum { - SealProof - PoStProof -} - -type ProofAlgorithm enum { - StackedDRGSeal - WinStackedDRGSeal - StackedDRGPoSt - WinStackedDRGPoSt -} - -// New proof ProofInstances can add new cfg types if needed. -type SealInstanceCfg union { - WinStackedDRGCfgV1 -} - -type WinStackedDRGCfgV1 struct { - SectorSize abi.SectorSize - WindowCount UInt -} - -// New proof ProofInstances can add new cfg types if needed. -type PoStInstanceCfg union { - PoStCfgV1 - PoStCfgVBogus -} - -type PoStCfgV1 struct { - Type PoStType - Nodes UInt - Partitions UInt - LeafChallengeCount UInt - ChallengeRangeSize UInt -} - -type PoStCfgVBogus struct { - Type PoStType - Nodes UInt - Partitions UInt - QuantumMonkeyBrains UInt -} - -type PoStType enum { - ElectionPoSt - SurprisePoSt -} diff --git a/content/libraries/filcrypto/filproofs/feistel.go b/content/libraries/filcrypto/filproofs/feistel.go deleted file mode 100644 index 40f584e6e..000000000 --- a/content/libraries/filcrypto/filproofs/feistel.go +++ /dev/null @@ -1,92 +0,0 @@ -package filproofs - -import ( - "golang.org/x/crypto/blake2b" -) - -// TODO/FIXME: Update to use uint64, not uint32. 
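[Editor's note] The `Permute`/`InvertPermute` pair below works by cycle-walking: `encode` is a bijection on a power-of-four domain, so outputs that fall outside `[0, numElements)` are simply re-encoded until they land inside, and inversion walks the same cycle backwards. A hypothetical round-trip check (not part of the original file) makes the contract explicit:

```go
package filproofs

import "testing"

// TestFeistelRoundTrip checks that Permute is a bijection on
// [0, numElements) and that InvertPermute is its inverse.
func TestFeistelRoundTrip(t *testing.T) {
	keys := []uint32{1, 2, 3, 4} // encode/decode consume four round keys
	const n = uint32(20)         // not a power of four, so cycle-walking occurs
	seen := make(map[uint32]bool)
	for i := uint32(0); i < n; i++ {
		p := Permute(n, i, keys)
		if p >= n || seen[p] {
			t.Fatalf("not a permutation: %d -> %d", i, p)
		}
		seen[p] = true
		if InvertPermute(n, p, keys) != i {
			t.Fatalf("inverse failed at %d", i)
		}
	}
}
```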
- -func Permute(numElements uint32, index uint32, keys []uint32) uint32 { - u := encode(numElements, index, keys) - for u >= numElements { - u = encode(numElements, u, keys) - } - - return u -} - -func InvertPermute(numElements uint32, index uint32, keys []uint32) uint32 { - u := decode(numElements, index, keys) - for u >= numElements { - u = decode(numElements, u, keys) - } - - return u -} - -func encode(numElements uint32, index uint32, keys []uint32) uint32 { - // find nextPow4 - nextPow4 := uint32(4) - log4 := uint32(1) - for nextPow4 < numElements { - nextPow4 *= 4 - log4++ - } - - // left and right masks - leftMask := ((uint32(1) << log4) - 1) << log4 - rightMask := (uint32(1) << log4) - 1 - halfBits := log4 - - left := ((index & leftMask) >> halfBits) - right := (index & rightMask) - - for i := 0; i < 4; i++ { - left, right = right, left^feistel(right, keys[i], rightMask) - } - - return (left << halfBits) | right -} - -func decode(numElements uint32, index uint32, keys []uint32) uint32 { - - // find nextPow4 - nextPow4 := uint32(4) - log4 := uint32(1) - for nextPow4 < numElements { - nextPow4 *= 4 - log4++ - } - - // left and right masks - leftMask := ((uint32(1) << log4) - 1) << log4 - rightMask := (uint32(1) << log4) - 1 - halfBits := log4 - - left := ((index & leftMask) >> halfBits) - right := (index & rightMask) - - for i := 3; i > -1; i-- { - left, right = right^feistel(left, keys[i], rightMask), left - } - - return (left << halfBits) | right -} - -func feistel(right uint32, key uint32, rightMask uint32) uint32 { - var data [8]byte - data[0] = byte(right >> 24) - data[1] = byte(right >> 16) - data[2] = byte(right >> 8) - data[3] = byte(right) - - data[4] = byte(key >> 24) - data[5] = byte(key >> 16) - data[6] = byte(key >> 8) - data[7] = byte(key) - - hash := blake2b.Sum256(data[:]) - - r := uint32(hash[0])<<24 | uint32(hash[1])<<16 | uint32(hash[2])<<8 | uint32(hash[3]) - return r & rightMask -} diff --git a/content/libraries/filcrypto/filproofs/filecoin_proofs_subsystem.go b/content/libraries/filcrypto/filproofs/filecoin_proofs_subsystem.go deleted file mode 100644 index c48c735dc..000000000 --- a/content/libraries/filcrypto/filproofs/filecoin_proofs_subsystem.go +++ /dev/null @@ -1,9 +0,0 @@ -package filproofs - -import abi "github.com/filecoin-project/specs-actors/actors/abi" - -func (fps *FilecoinProofsSubsystem_I) VerifySeal(sealVerifyInfo abi.SealVerifyInfo) bool { - registeredProof := sealVerifyInfo.OnChain.RegisteredProof - sdr := WinSDRParams(registeredProof) - return sdr.VerifySeal(sealVerifyInfo) -} diff --git a/content/libraries/filcrypto/filproofs/filecoin_proofs_subsystem.id b/content/libraries/filcrypto/filproofs/filecoin_proofs_subsystem.id deleted file mode 100644 index be96bb8ed..000000000 --- a/content/libraries/filcrypto/filproofs/filecoin_proofs_subsystem.id +++ /dev/null @@ -1,10 +0,0 @@ -import abi "github.com/filecoin-project/specs-actors/actors/abi" - -type SectorInfo struct {} -type Block struct {} -type SectorID struct {} - -type FilecoinProofsSubsystem struct { - VerifySeal(sealVerifyInfo abi.SealVerifyInfo) bool - ValidateBlock(block Block) bool -} diff --git a/content/libraries/filcrypto/filproofs/hashing.go b/content/libraries/filcrypto/filproofs/hashing.go deleted file mode 100644 index 5b6b407d5..000000000 --- a/content/libraries/filcrypto/filproofs/hashing.go +++ /dev/null @@ -1,75 +0,0 @@ -package filproofs - -import util "github.com/filecoin-project/specs/util" - -type SHA256Hash Bytes32 -type PedersenHash Bytes32 - 
-//////////////////////////////////////////////////////////////////////////////// -/// Generic Hashing - -/// Binary hash compression. -// BinaryHash -func BinaryHash_T(left []byte, right []byte) util.T { - var preimage = append(left, right...) - return HashBytes_T(preimage) -} - -func TernaryHash_T(a []byte, b []byte, c []byte) util.T { - var preimage = append(a, append(b, c...)...) - return HashBytes_T(preimage) -} - -// BinaryHash -func BinaryHash_PedersenHash(left []byte, right []byte) PedersenHash { - return PedersenHash{} -} - -func TernaryHash_PedersenHash(a []byte, b []byte, c []byte) PedersenHash { - return PedersenHash{} -} - -// BinaryHash -func BinaryHash_SHA256Hash(left []byte, right []byte) SHA256Hash { - result := SHA256Hash{} - return trimToFr32(result) -} - -func TernaryHash_SHA256Hash(a []byte, b []byte, c []byte) SHA256Hash { - return SHA256Hash{} -} - -//////////////////////////////////////////////////////////////////////////////// - -/// Digest -// HashBytes -func HashBytes_T(data []byte) util.T { - return util.T{} -} - -// HashBytes -func HashBytes_PedersenHash(data []byte) PedersenHash { - return PedersenHash{} -} - -// HashBytes -func BuildTree_T(data []byte) (util.T, file.Path) { - // Plan: define this in terms of BinaryHash_T, then copy-paste changes into T-specific specializations, for now. - - // Nodes are always the digest size so data cannot be compressed to digest for storage. - nodeSize := DigestSize_T() - - // TODO: Fail if len(dat) is not a power of 2 and a multiple of the node size. - - rows := [][]byte{data} - - for row := []byte{}; len(row) > nodeSize; { - for i := 0; i < len(data); i += 2 * nodeSize { - left := data[i : i+nodeSize] - right := data[i+nodeSize : i+2*nodeSize] - - hashed := BinaryHash_T(left, right) - - row = append(row, AsBytes_T(hashed)...) - } - rows = append(rows, row) - } - - // Last row is the root - root := rows[len(rows)-1] - - if len(root) != nodeSize { - panic("math failed us") - } - - var filePath file.Path // TODO: dump tree to file. - // NOTE: merkle tree file layout is illustrative, not prescriptive. - - // TODO: Check above more carefully. It's just an untested sketch for the moment. 
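[Editor's note] The tree-building loop above is flagged as an untested sketch; for a concrete picture of the row-by-row reduction it is aiming at, here is a minimal binary Merkle root with SHA-256 standing in for the abstract digest (input length assumed to be a power-of-two multiple of the node size):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const nodeSize = sha256.Size // 32-byte nodes, matching the digest size

// merkleRoot repeatedly hashes adjacent node pairs until one root remains.
func merkleRoot(data []byte) []byte {
	row := data
	for len(row) > nodeSize {
		var next []byte
		for i := 0; i < len(row); i += 2 * nodeSize {
			h := sha256.Sum256(row[i : i+2*nodeSize]) // hash(left || right)
			next = append(next, h[:]...)
		}
		row = next
	}
	return row
}

func main() {
	leaves := make([]byte, 4*nodeSize) // four zero leaves -> two nodes -> one root
	fmt.Printf("%x\n", merkleRoot(leaves))
}
```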
- return fromBytes_T(root), filePath -} - -// BuildTree -func BuildTree_PedersenHash(data []byte) (PedersenHash, file.Path) { - return PedersenHash{}, file.Path("") // FIXME -} - -// BuildTree -func BuildTree_SHA256Hash(data []byte) (SHA256Hash, file.Path) { - return []byte{}, file.Path("") // FIXME -} diff --git a/content/libraries/filcrypto/filproofs/util.go b/content/libraries/filcrypto/filproofs/util.go deleted file mode 100644 index 7f563dcf5..000000000 --- a/content/libraries/filcrypto/filproofs/util.go +++ /dev/null @@ -1,154 +0,0 @@ -package filproofs - -import ( - "encoding/binary" - big "math/big" - - abi "github.com/filecoin-project/specs-actors/actors/abi" - file "github.com/filecoin-project/specs/systems/filecoin_files/file" - util "github.com/filecoin-project/specs/util" -) - -// Utilities - -func reverse(bytes []byte) { - for i, j := 0, len(bytes)-1; i < j; i, j = i+1, j-1 { - bytes[i], bytes[j] = bytes[j], bytes[i] - } -} - -func bigIntFromLittleEndianBytes(bytes []byte) *big.Int { - reverse(bytes) - return new(big.Int).SetBytes(bytes) -} - -func bigIntFromBigEndianBytes(bytes []byte) *big.Int { - return new(big.Int).SetBytes(bytes) -} - -// size is number of bytes to return -func littleEndianBytesFromBigInt(z *big.Int, size int) []byte { - bytes := z.Bytes()[0:size] - reverse(bytes) - - return bytes -} - -// size is number of bytes to return -func bigEndianBytesFromBigInt(z *big.Int, size int) []byte { - return z.Bytes()[0:size] -} - -func littleEndianBytesFromInt(n int, size int) []byte { - z := new(big.Int) - z.SetInt64(int64(n)) - return littleEndianBytesFromBigInt(z, size) -} - -func bigEndianBytesFromInt(n int, size int) []byte { - z := new(big.Int) - z.SetInt64(int64(n)) - return bigEndianBytesFromBigInt(z, size) -} - -func littleEndianBytesFromUInt(n UInt, size int) []byte { - z := new(big.Int) - z.SetUint64(uint64(n)) - return littleEndianBytesFromBigInt(z, size) -} - -func bigEndianBytesFromUInt(n UInt, size int) []byte { - z := new(big.Int) - z.SetUint64(uint64(n)) - return bigEndianBytesFromBigInt(z, size) -} - -func AsBytes_T(t util.T) []byte { - panic("Unimplemented for T") - - return []byte{} -} - -func AsBytes_UnsealedSectorCID(cid abi.UnsealedSectorCID) []byte { - panic("Unimplemented for UnsealedSectorCID") - - return []byte{} -} - -func AsBytes_SealedSectorCID(CID abi.SealedSectorCID) []byte { - panic("Unimplemented for SealedSectorCID") - - return []byte{} -} - -func AsBytes_PieceCID(CID abi.PieceCID) []byte { - panic("Unimplemented for PieceCID") - - return []byte{} -} - -func fromBytes_T(_ interface{}) util.T { - panic("Unimplemented for T") - return util.T{} -} - -func fromBytes_PieceCID(_ interface{}) abi.PieceCID { - panic("Unimplemented for PieceCID") -} - -func isPow2(n int) bool { - return n != 0 && n&(n-1) == 0 -} - -// FIXME: This does not belong in filproofs, and no effort is being made to ensure it has any particular properties. -func RandomInt(randomness util.Randomness, nonce int, limit *big.Int) *big.Int { - nonceBytes := make([]byte, 8) - binary.LittleEndian.PutUint64(nonceBytes, uint64(nonce)) - input := randomness - input = append(input, nonceBytes...) - ranHash := HashBytes_SHA256Hash(input[:]) - hashInt := bigIntFromLittleEndianBytes(ranHash) - num := hashInt.Mod(hashInt, limit) - return num -} - -//////////////////////////////////////////////////////////////////////////////// - -// Destructively trim data so most significant two bits of last byte are 0. 
-// This ensure data interpreted as little-endian will not exceed a field with 254-bit capacity. -// NOTE: 254 bits is the capacity of BLS12-381, but other curves with ~32-byte field elements -// may have a different capacity. (Example: BLS12-377 has a capacity of 252 bits.) -func trimToFr32(data []byte) []byte { - util.Assert(len(data) == 32) - data[31] &= 0x3f // 0x3f = 0b0011_1111 - return data -} - -func UnsealedSectorCID(h SHA256Hash) abi.UnsealedSectorCID { - panic("not implemented -- re-arrange bits") -} - -func SealedSectorCID(h PedersenHash) abi.SealedSectorCID { - panic("not implemented -- re-arrange bits") -} - -func Commitment_UnsealedSectorCID(cid abi.UnsealedSectorCID) Commitment { - panic("not implemented -- re-arrange bits") -} - -func Commitment_SealedSectorCID(cid abi.SealedSectorCID) Commitment { - panic("not implemented -- re-arrange bits") -} - -func ComputeDataCommitment(data []byte) (abi.UnsealedSectorCID, file.Path) { - // TODO: make hash parameterizable - hash, path := BuildTree_SHA256Hash(data) - return UnsealedSectorCID(hash), path -} - -// Compute CommP or CommD. -func ComputeUnsealedSectorCID(data []byte) (abi.UnsealedSectorCID, file.Path) { - // TODO: check that len(data) > minimum piece size and is a power of 2. - hash, treePath := BuildTree_SHA256Hash(data) - return UnsealedSectorCID(hash), treePath -} diff --git a/content/libraries/filcrypto/filproofs/win_stacked_sdr.go b/content/libraries/filcrypto/filproofs/win_stacked_sdr.go deleted file mode 100644 index eff7c4adf..000000000 --- a/content/libraries/filcrypto/filproofs/win_stacked_sdr.go +++ /dev/null @@ -1,524 +0,0 @@ -package filproofs - -import ( - "bytes" - big "math/big" - - abi "github.com/filecoin-project/specs-actors/actors/abi" - file "github.com/filecoin-project/specs/systems/filecoin_files/file" - sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector" - util "github.com/filecoin-project/specs/util" - "github.com/ipfs/go-cid" -) - -func WinSDRParams(registeredProof abi.RegisteredProof) *WinStackedDRG_I { - c := RegisteredProofInstance(registeredProof).Cfg().As_SealCfg() - cfg := c.As_WinStackedDRGCfgV1() - // TODO: Bridge constants with orient model. 
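[Editor's note] An aside on the little-endian helpers in `util.go` above: `z.Bytes()` returns a big-endian encoding with leading zeros stripped, so slicing `z.Bytes()[0:size]` assumes the value already occupies `size` bytes. A total conversion pads first; a sketch:

```go
package main

import (
	"fmt"
	"math/big"
)

// littleEndianBytes converts z to exactly size little-endian bytes,
// zero-padding small values. It assumes z fits in size bytes.
func littleEndianBytes(z *big.Int, size int) []byte {
	be := z.Bytes() // big-endian, leading zeros stripped
	out := make([]byte, size)
	copy(out[size-len(be):], be) // right-align into a zero-padded buffer
	for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
		out[i], out[j] = out[j], out[i] // reverse in place to little-endian
	}
	return out
}

func main() {
	fmt.Printf("% x\n", littleEndianBytes(big.NewInt(0x0102), 8)) // 02 01 00 00 00 00 00 00
}
```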
- const LAYERS = 10 - const OFFLINE_CHALLENGES = 6666 - const OFFLINE_WINDOW_CHALLENGES = 1111 - const FEISTEL_ROUNDS = 3 - var FEISTEL_KEYS = [FEISTEL_ROUNDS]UInt{1, 2, 3} - var FIELD_MODULUS = new(big.Int) - // https://github.com/zkcrypto/pairing/blob/master/src/bls12_381/fr.rs#L4 - FIELD_MODULUS.SetString("52435875175126190479447740508185965837690552500527637822603658699938581184513", 10) - - nodes := UInt(cfg.SectorSize() / NODE_SIZE) - - return &WinStackedDRG_I{ - Layers_: WinStackedDRGLayers(LAYERS), - Challenges_: WinStackedDRGChallenges(OFFLINE_CHALLENGES), - WindowChallenges_: WinStackedDRGWindowChallenges(OFFLINE_WINDOW_CHALLENGES), - NodeSize_: WinStackedDRGNodeSize(NODE_SIZE), - Nodes_: WinStackedDRGNodes(nodes), - Algorithm_: &WinStackedDRG_Algorithm_I{}, - DRGCfg_: &DRGCfg_I{ - Algorithm_: &DRGCfg_Algorithm_I{ - ParentsAlgorithm_: DRGCfg_Algorithm_ParentsAlgorithm_DRSample, - RNGAlgorithm_: DRGCfg_Algorithm_RNGAlgorithm_ChaCha20, - }, - Degree_: 6, - Nodes_: DRGNodeCount(nodes), - }, - ExpanderGraphCfg_: &ExpanderGraphCfg_I{ - Algorithm_: ExpanderGraphCfg_Algorithm_Make_ChungExpanderAlgorithm( - &ChungExpanderAlgorithm_I{ - PermutationAlgorithm_: ChungExpanderAlgorithm_PermutationAlgorithm_Make_Feistel(&Feistel_I{ - Keys_: FEISTEL_KEYS[:], - Rounds_: FEISTEL_ROUNDS, - HashFunction_: ChungExpanderPermutationFeistelHashFunction_SHA256, - }), - }), - Degree_: 8, - Nodes_: ExpanderGraphNodeCount(nodes), - }, - WindowDRGCfg_: &DRGCfg_I{ - Algorithm_: &DRGCfg_Algorithm_I{ - ParentsAlgorithm_: DRGCfg_Algorithm_ParentsAlgorithm_DRSample, - RNGAlgorithm_: DRGCfg_Algorithm_RNGAlgorithm_ChaCha20, - }, - Degree_: 0, - Nodes_: DRGNodeCount(nodes), - }, - WindowExpanderGraphCfg_: &ExpanderGraphCfg_I{ - Algorithm_: ExpanderGraphCfg_Algorithm_Make_ChungExpanderAlgorithm( - &ChungExpanderAlgorithm_I{ - PermutationAlgorithm_: ChungExpanderAlgorithm_PermutationAlgorithm_Make_Feistel(&Feistel_I{ - Keys_: FEISTEL_KEYS[:], - Rounds_: FEISTEL_ROUNDS, - HashFunction_: ChungExpanderPermutationFeistelHashFunction_SHA256, - }), - }), - Degree_: 8, - Nodes_: ExpanderGraphNodeCount(nodes), - }, - - Curve_: &EllipticCurve_I{ - FieldModulus_: *FIELD_MODULUS, - }, - Cfg_: c, - } -} - -func (sdr *WinStackedDRG_I) Drg() *DRG_I { - return &DRG_I{ - Config_: sdr.DRGCfg(), - } -} - -func (sdr *WinStackedDRG_I) Expander() *ExpanderGraph_I { - return &ExpanderGraph_I{ - Config_: sdr.ExpanderGraphCfg(), - } -} - -func (sdr *WinStackedDRG_I) WindowDrg() *DRG_I { - return &DRG_I{ - Config_: sdr.WindowDRGCfg(), - } -} - -func (sdr *WinStackedDRG_I) WindowExpander() *ExpanderGraph_I { - return &ExpanderGraph_I{ - Config_: sdr.WindowExpanderGraphCfg(), - } -} - -func (sdr *WinStackedDRG_I) Seal(registeredProof abi.RegisteredProof, sid abi.SectorID, data []byte, randomness abi.SealRandomness) SealSetupArtifacts { - - windowCount := int(sdr.WindowCount()) - nodeSize := int(sdr.NodeSize()) - nodes := int(sdr.Nodes()) - curveModulus := sdr.Curve().FieldModulus() - - var windowData [][]byte - - for i := 0; i < len(data); i += nodeSize { - windowData = append(windowData, data[i*nodeSize:(i+1)*nodeSize]) - } - - util.Assert(len(windowData) == windowCount) - - var windowKeyLayers [][]byte - var finalWindowKeyLayer []byte - - commD, commDTreePath := ComputeDataCommitment(data) - sealSeed := computeSealSeed(sid, randomness, commD) - - for i := 0; i < windowCount; i++ { - keyLayers := sdr._generateWindowKey(sealSeed, i, sid, commD, nodes, randomness) - - lastIndex := len(keyLayers) - 1 - windowKeyLayers = append(windowKeyLayers, 
keyLayers[:lastIndex]...) - finalWindowKeyLayer = append(finalWindowKeyLayer, keyLayers[lastIndex]...) - } - - qLayer := encodeDataInPlace(data, finalWindowKeyLayer, nodeSize, &curveModulus) - // NOTE: qLayer and data are now the same, and qLayer is introduced here for descriptive clarity only. - - replica := labelLayer(sdr.Drg(), sdr.Expander(), sealSeed, WRAPPER_LAYER_WINDOW_INDEX, nodes, nodeSize, qLayer) - - commC, commQ, commRLast, commR, commCTreePath, commQTreePath, commRLastTreePath := sdr.GenerateCommitments(replica, windowKeyLayers, qLayer) - - result := SealSetupArtifacts_I{ - CommD_: cid.Cid(commD).Bytes(), - CommR_: SealedSectorCID(commR), - CommC_: Commitment(commC), - CommQ_: Commitment(commQ), - CommRLast_: Commitment(commRLast), - CommDTreePath_: commDTreePath, - CommCTreePath_: commCTreePath, - CommQTreePath_: commQTreePath, - CommRLastTreePath_: commRLastTreePath, - Seed_: sealSeed, - KeyLayers_: windowKeyLayers, - Replica_: replica, - } - return &result -} - -func (sdr *WinStackedDRG_I) _generateWindowKey(sealSeed sector.SealSeed, windowIndex int, sid abi.SectorID, commD abi.UnsealedSectorCID, nodes int, randomness abi.SealRandomness) [][]byte { - nodeSize := int(sdr.NodeSize()) - curveModulus := sdr.Curve().FieldModulus() - layers := int(sdr.Layers()) - - keyLayers := generateSDRKeyLayers(sdr.WindowDrg(), sdr.WindowExpander(), sealSeed, windowIndex, nodes, layers, nodeSize, curveModulus) - - return keyLayers -} - -func (sdr *WinStackedDRG_I) GenerateCommitments(replica []byte, windowKeyLayers [][]byte, qLayer []byte) (commC PedersenHash, commQ PedersenHash, commRLast PedersenHash, commR PedersenHash, commCTreePath file.Path, commQTreePath file.Path, commRLastTreePath file.Path) { - commC, commCTreePath = computeCommC(windowKeyLayers, int(sdr.NodeSize())) - commQ, commQTreePath = computeCommQ(qLayer, int(sdr.NodeSize())) - commRLast, commRLastTreePath = BuildTree_PedersenHash(replica) - commR = TernaryHash_PedersenHash(commC, commQ, commRLast) - - return commC, commQ, commRLast, commR, commCTreePath, commQTreePath, commRLastTreePath -} - -func (sdr *WinStackedDRG_I) CreateSealProof(challengeSeed abi.InteractiveSealRandomness, aux sector.ProofAuxTmp) abi.SealProof { - privateProof := sdr.CreatePrivateSealProof(challengeSeed, aux) - - // Sanity check: newly-created proofs must pass verification. - util.Assert(sdr.VerifyPrivateSealProof(privateProof, aux.Seed(), challengeSeed, aux.CommD(), aux.CommR())) - - return sdr.CreateOfflineCircuitProof(privateProof, aux) -} - -func (sdr *WinStackedDRG_I) CreatePrivateSealProof(randomness abi.InteractiveSealRandomness, aux sector.ProofAuxTmp) (privateProof PrivateOfflineProof) { - sealSeed := aux.Seed() - nodeSize := UInt(sdr.NodeSize()) - wrapperChallenges, windowChallenges := sdr._generateOfflineChallenges(sealSeed, randomness, sdr.Challenges(), sdr.WindowChallenges()) - - dataTree := LoadMerkleTree(aux.CommDTreePath()) - columnTree := LoadMerkleTree(aux.CommCTreePath()) - replicaTree := LoadMerkleTree(aux.PersistentAux().CommRLastTreePath()) - qTree := LoadMerkleTree(aux.CommQTreePath()) - - windows := int(sdr.WindowCount()) - windowSize := int(uint64(sdr.Cfg().As_WinStackedDRGCfgV1().SectorSize()) / UInt(sdr.WindowCount())) - - for c := range windowChallenges { - columnProofs := createColumnProofs(sdr.WindowDrg(), sdr.WindowExpander(), UInt(c), nodeSize, columnTree, aux, windows, windowSize) - privateProof.ColumnProofs = append(privateProof.ColumnProofs, columnProofs...) 
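[Editor's note] For reference, the commitment structure produced by `GenerateCommitments` above is a single ternary hash binding the three roots: commR = H(commC || commQ || commRLast). A sketch with SHA-256 standing in for the Pedersen hash the spec uses:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// ternaryHash concatenates three 32-byte roots and hashes them once,
// mirroring TernaryHash_PedersenHash's role in forming commR.
func ternaryHash(a, b, c []byte) []byte {
	preimage := append(append(append([]byte{}, a...), b...), c...)
	h := sha256.Sum256(preimage)
	return h[:]
}

func main() {
	commC := make([]byte, 32)
	commQ := make([]byte, 32)
	commRLast := make([]byte, 32)
	fmt.Printf("commR = %x\n", ternaryHash(commC, commQ, commRLast))
}
```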
- - windowProof := createWindowProof(sdr.WindowDrg(), sdr.WindowExpander(), UInt(c), nodeSize, dataTree, columnTree, qTree, aux, windows, windowSize) - privateProof.WindowProofs = append(privateProof.WindowProofs, windowProof) - } - - for c := range wrapperChallenges { - wrapperProof := createWrapperProof(sdr.Drg(), sdr.Expander(), sealSeed, UInt(c), nodeSize, qTree, replicaTree, aux, windows, windowSize) - privateProof.WrapperProofs = append(privateProof.WrapperProofs, wrapperProof) - } - - return privateProof -} - -// Verify a private proof. -// NOTE: Verification of a private proof is exactly the computation we will prove we have performed in a zk-SNARK. -// If we can verifiably prove that we have performed the verification of a private proof, then we need not reveal the proof itself. -// Since the zk-SNARK circuit proof is much smaller than the private proof, this allows us to save space on the chain (at the cost of increased computation to generate the zk-SNARK proof). -func (sdr *WinStackedDRG_I) VerifyPrivateSealProof(privateProof PrivateOfflineProof, sealSeed sector.SealSeed, randomness abi.InteractiveSealRandomness, commD Commitment, commR abi.SealedSectorCID) bool { - nodeSize := int(sdr.NodeSize()) - windowCount := int(sdr.WindowCount()) - windowSize := int(UInt(sdr.Cfg().As_WinStackedDRGCfgV1().SectorSize()) / UInt(sdr.WindowCount())) // TOOD: Make this a function. - layers := int(sdr.Layers()) - curveModulus := sdr.Curve().FieldModulus() - windowChallenges, wrapperChallenges := sdr._generateOfflineChallenges(sealSeed, randomness, sdr.Challenges(), sdr.WindowChallenges()) - - windowProofs := privateProof.WindowProofs - columnProofs := privateProof.ColumnProofs - wrapperProofs := privateProof.WrapperProofs - - // commC, commQ, and commRLast must be the same for all challenge proofs, so we can arbitrarily verify against the first. - firstColumnProof := columnProofs[0] - firstWrapperProof := wrapperProofs[0] - commC := firstColumnProof.InclusionProof.Root() - commQ := firstWrapperProof.QLayerProofs[0].Root() - commRLast := firstWrapperProof.ReplicaProof.Root() - - windowDrgParentCount := int(sdr.WindowDRGCfg().Degree()) - windowExpanderParentCount := int(sdr.WindowDRGCfg().Degree()) - wrapperExpanderParentCount := int(sdr.ExpanderGraphCfg().Degree()) - - for i, challenge := range windowChallenges { - // Verify one OfflineSDRChallengeProof. - windowProof := windowProofs[i] - dataProof := windowProof.DataProof - qLayerProof := windowProof.QLayerProof - - // Verify column proofs and that they represent the right columns. - columnElements := getColumnElements(sdr.Drg(), sdr.Expander(), challenge) - - // Check column openings. - for i, columnElement := range columnElements { - columnProof := columnProofs[i] - - // The provided column proofs must correspond to the expected columns. - if !columnProof.Verify(commC, UInt(columnElement)) { - return false - } - } - - // Check labeling. - for w := 0; w < windowCount; w++ { - for layer := 0; layer < layers; layer++ { - var parents []Label - - // First column proof is the challenge. - // Then the DRG parents. - for _, drgParentProof := range columnProofs[1 : 1+windowDrgParentCount] { - parent := drgParentProof.Column[layer] - parents = append(parents, parent) - } - // And the expander parents, if not the first layer. 
- if layer > 0 { - for _, expanderParentProof := range columnProofs[1+windowDrgParentCount : 1+windowExpanderParentCount] { - parent := expanderParentProof.Column[layer-1] - parents = append(parents, parent) - } - } - - calculatedLabel := generateLabel(sealSeed, i, w, parents) - - if layer == layers-1 { - // Last layer includes encoding. - dataNode := dataProof.Leaf() - qLayerNode := qLayerProof.Leaf() - - if !dataProof.Verify(commD, UInt(windowSize*w)+challenge) { - return false - } - - encodedNode := encodeNode(dataNode, calculatedLabel, &curveModulus, nodeSize) - - if !bytes.Equal(encodedNode, qLayerNode) { - return false - } - - } else { - providedLabel := columnProofs[columnElements[0]].Column[layer] - - if !bytes.Equal(calculatedLabel, providedLabel) { - return false - } - } - } - } - } - - for i, challenge := range wrapperChallenges { - wrapperProof := wrapperProofs[i] - replicaProof := wrapperProof.ReplicaProof - qLayerProofs := wrapperProof.QLayerProofs - - if !replicaProof.Verify(commRLast, challenge) { - return false - } - - var parents []Label - for i := 0; i < wrapperExpanderParentCount; i++ { - parent := qLayerProofs[i].Leaf() - parents = append(parents, parent) - } - - label := generateLabel(sealSeed, i, windowCount+1, parents) - replicaNode := replicaProof.Leaf() - - if !bytes.Equal(label, replicaNode) { - return false - } - } - - commRCalculated := TernaryHash_PedersenHash(commC, commQ, commRLast) - - if !bytes.Equal(commRCalculated, AsBytes_SealedSectorCID(commR)) { - return false - } - - return true -} - -func (sdr *WinStackedDRG_I) CreateOfflineCircuitProof(proof PrivateOfflineProof, aux sector.ProofAuxTmp) abi.SealProof { - // partitions := sdr.Partitions() - // publicInputs := GeneratePublicInputs() - - panic("TODO") - var proofBytes []byte - panic("TODO") - - sealProof := abi.SealProof{ - ProofBytes: proofBytes, - } - - return sealProof -} - -func (sdr *WinStackedDRG_I) _generateOfflineChallenges(sealSeed sector.SealSeed, randomness abi.InteractiveSealRandomness, wrapperChallengeCount WinStackedDRGChallenges, windowChallengeCount WinStackedDRGWindowChallenges) (windowChallenges []UInt, wrapperChallenges []UInt) { - wrapperChallenges = generateOfflineChallenges(int(sdr.Nodes()), sealSeed, randomness, int(wrapperChallengeCount)) - windowChallenges = generateOfflineChallenges(int(sdr.WindowDRGCfg().Nodes()), sealSeed, randomness, int(windowChallengeCount)) - - return windowChallenges, wrapperChallenges -} - -//////////////////////////////////////////////////////////////////////////////// -// Seal Verification - -func (sdr *WinStackedDRG_I) VerifySeal(sv abi.SealVerifyInfo) bool { - onChain := sv.OnChain - sealProof := onChain.Proof - commR := abi.SealedSectorCID(onChain.SealedCID) - commD := abi.UnsealedSectorCID(sv.UnsealedCID) - sealSeed := computeSealSeed(sv.SectorID, sv.Randomness, commD) - - wrapperChallenges, windowChallenges := sdr._generateOfflineChallenges(sealSeed, sv.InteractiveRandomness, sdr.Challenges(), sdr.WindowChallenges()) - return sdr._verifyOfflineCircuitProof(commD, commR, sealSeed, windowChallenges, wrapperChallenges, sealProof) -} - -func (sdr *WinStackedDRG_I) _verifyOfflineCircuitProof(commD abi.UnsealedSectorCID, commR abi.SealedSectorCID, sealSeed sector.SealSeed, windowChallenges []UInt, wrapperChallenges []UInt, sv abi.SealProof) bool { - //publicInputs := GeneratePublicInputs() - panic("TODO") -} - -//////////////////////////////////////////////////////////////////////////////// -// PoSt - -func (sdr *WinStackedDRG_I) 
_generateCandidate(postCfg PoStInstanceCfg, randomness abi.PoStRandomness, aux sector.PersistentProofAux, sectorID abi.SectorID, sectorChallengeIndex uint64) abi.PoStCandidate { - cfg := postCfg.As_PoStCfgV1() - - nodes := int(cfg.Nodes()) - leafChallengeCount := int(cfg.LeafChallengeCount()) - challengeRangeSize := int(cfg.ChallengeRangeSize()) - treePath := aux.CommRLastTreePath() - tree := LoadMerkleTree(treePath) - - var data []byte - var inclusionProofs []InclusionProof - for i := 0; i < leafChallengeCount; i++ { - leafChallenge := generateLeafChallenge(randomness, sectorChallengeIndex, i, nodes, challengeRangeSize) - - for j := 0; j < challengeRangeSize; j++ { - leafIndex := leafChallenge + UInt(j) - data = append(data, tree.Leaf(leafIndex)...) - inclusionProof := tree.ProveInclusion(leafIndex) - inclusionProofs = append(inclusionProofs, inclusionProof) - } - } - - partialTicket := computePartialTicket(randomness, sectorID, data) - - privateProof := InternalPrivateCandidateProof{ - InclusionProofs: inclusionProofs, - } - - var registeredProof abi.RegisteredProof - - // FIXME: Need to get registeredProof! - - candidate := abi.PoStCandidate{ - PartialTicket: partialTicket, - PrivateProof: privateProof.externalize(registeredProof), - SectorID: sectorID, - ChallengeIndex: int64(sectorChallengeIndex), - } - return candidate -} - -func (sdr *WinStackedDRG_I) VerifyInternalPrivateCandidateProof(postCfg PoStInstanceCfg, p *InternalPrivateCandidateProof, challengeSeed abi.PoStRandomness, candidate abi.PoStCandidate, commRLast Commitment) bool { - cfg := postCfg.As_PoStCfgV1() - //util.Assert(candidate.PrivateProof == nil) - nodes := int(cfg.Nodes()) - challengeRangeSize := int(cfg.ChallengeRangeSize()) - - sectorID := candidate.SectorID - claimedPartialTicket := candidate.PartialTicket - - allInclusionProofs := p.InclusionProofs - - var ticketData []byte - - for _, p := range allInclusionProofs { - ticketData = append(ticketData, p.Leaf()...) - } - - // Check partial ticket - calculatedTicket := computePartialTicket(challengeSeed, sectorID, ticketData) - - if len(calculatedTicket) != len(claimedPartialTicket) { - return false - } - for i, byte := range claimedPartialTicket { - if byte != calculatedTicket[i] { - return false - } - } - - // Helper to get InclusionProofs sequentially. - next := func() InclusionProof { - if len(allInclusionProofs) < 1 { - return nil - } - - proof := allInclusionProofs[0] - allInclusionProofs = allInclusionProofs[1:] - return proof - } - - // Check all inclusion proofs. - for i := 0; i < int(cfg.LeafChallengeCount()); i++ { - leafChallenge := generateLeafChallenge(challengeSeed, UInt(candidate.ChallengeIndex), i, nodes, challengeRangeSize) - for j := 0; j < challengeRangeSize; j++ { - leafIndex := leafChallenge + UInt(j) - proof := next() - if proof == nil { - // All required inclusion proofs must be provided. - return false - } - if !proof.Verify(commRLast, leafIndex) { - return false - } - } - } - - return true -} - -func (sdr *WinStackedDRG_I) VerifyPrivatePoStProof(cfg PoStInstanceCfg, privateProof PrivatePoStProof, candidates []abi.PoStCandidate, sectorIDs []abi.SectorID, sectorCommitments sector.SectorCommitments) bool { - // This is safe by construction. 
-    challengeSeed := privateProof.ChallengeSeed
-
-    for i, p := range privateProof.CandidateProofs {
-        proof := newInternalPrivateProof(p)
-
-        candidate := candidates[i]
-        ci := candidate.ChallengeIndex
-        expectedSectorID := sectorIDs[ci]
-
-        challengedSectorID := generateSectorChallenge(challengeSeed, i, sectorIDs)
-
-        if expectedSectorID != challengedSectorID {
-            return false
-        }
-
-        commRLast := sectorCommitments[expectedSectorID]
-
-        if !sdr.VerifyInternalPrivateCandidateProof(cfg, &proof, challengeSeed, candidate, commRLast) {
-            return false
-        }
-    }
-    return true
-}
-
-func (sdr *WinStackedDRG_I) _createPoStCircuitProof(privateProof PrivatePoStProof) abi.PoStProof {
-    panic("TODO")
-
-    var proofBytes []byte
-    panic("TODO")
-
-    postProof := abi.PoStProof{
-        ProofBytes: proofBytes,
-    }
-
-    return postProof
-}
diff --git a/content/libraries/ipfs/_index.md b/content/libraries/ipfs/_index.md
index 553edec18..e4f47c1ec 100644
--- a/content/libraries/ipfs/_index.md
+++ b/content/libraries/ipfs/_index.md
@@ -1,9 +1,26 @@
 ---
 title: IPFS
-description: IPFS - InterPlanetary File System
-weight: 5
+bookCollapseSection: true
+weight: 2
 dashboardWeight: 1
-dashboardState: wip
-dashboardAudit: done
+dashboardState: stable
+dashboardAudit: wip
 dashboardTests: 0
 ---
+
+# IPFS
+
+Filecoin is built on the same underlying stack as IPFS, including connecting nodes peer-to-peer via [libp2p](https://libp2p.io) and addressing data using [IPLD](https://ipld.io/). Therefore, it borrows many concepts from the InterPlanetary File System (IPFS), such as content addressing, the CID (which, strictly speaking, is part of the Multiformats specification), and Merkle-DAGs (which are part of IPLD). It also makes direct use of `Bitswap` (the data transfer protocol of IPFS) and `UnixFS` (the file format built on top of IPLD Merkle-DAGs).
+
+## Bitswap
+
+[Bitswap](https://github.com/ipfs/go-bitswap) is a simple peer-to-peer data exchange protocol, used primarily in IPFS, which can also be used independently of the rest of the pieces that make up IPFS. In Filecoin, `Bitswap` is used to request and receive blocks when a node is synchronized ("caught up") but `GossipSub` has failed to deliver some blocks to it.
+
+Please refer to the [Bitswap specification](https://github.com/ipfs/specs/blob/master/BITSWAP.md) for more information.
+
+
+## UnixFS
+
+[UnixFS](https://github.com/ipfs/go-unixfs) is a protocol buffers-based format for describing files, directories, and symlinks in IPFS. `UnixFS` is used in Filecoin as a file formatting guideline for files submitted to the Filecoin network.
+
+Please refer to the [UnixFS specification](https://github.com/ipfs/specs/blob/master/UNIXFS.md) for more information.
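+
+To make the exchange concrete, below is a minimal, non-normative sketch of fetching a single block by CID with the `go-bitswap` library. This is an editor's illustration, not part of the spec; the host, content routing, and blockstore wiring are assumed to be provided by the caller:
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	bitswap "github.com/ipfs/go-bitswap"
+	bsnet "github.com/ipfs/go-bitswap/network"
+	"github.com/ipfs/go-cid"
+	blockstore "github.com/ipfs/go-ipfs-blockstore"
+	"github.com/libp2p/go-libp2p-core/host"
+	"github.com/libp2p/go-libp2p-core/routing"
+)
+
+// fetchBlock requests a single block by CID from connected peers over Bitswap.
+func fetchBlock(ctx context.Context, h host.Host, r routing.ContentRouting, bs blockstore.Blockstore, c cid.Cid) ([]byte, error) {
+	// Wire Bitswap to the libp2p host and the local blockstore.
+	network := bsnet.NewFromIpfsHost(h, r)
+	exchange := bitswap.New(ctx, network, bs)
+
+	// GetBlock blocks until the block arrives or the context is cancelled.
+	blk, err := exchange.GetBlock(ctx, c)
+	if err != nil {
+		return nil, err
+	}
+	fmt.Println("received block", blk.Cid())
+	return blk.RawData(), nil
+}
+```
+
+In a Filecoin node this path is only a fallback: a synchronized node first expects blocks via `GossipSub` and turns to `Bitswap` for any blocks that were not delivered.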
diff --git a/content/libraries/ipfs/bitswap.md b/content/libraries/ipfs/bitswap.md
deleted file mode 100644
index 75525ba8f..000000000
--- a/content/libraries/ipfs/bitswap.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: BitSwap
-dashboardWeight: 1
-dashboardState: missing
-dashboardAudit: missing
-dashboardTests: 0
----
-
-# BitSwap
diff --git a/content/libraries/ipfs/graphsync.md b/content/libraries/ipfs/graphsync.md
deleted file mode 100644
index 8acd7b695..000000000
--- a/content/libraries/ipfs/graphsync.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: GraphSync
-dashboardWeight: 1
-dashboardState: missing
-dashboardAudit: missing
-dashboardTests: 0
----
-
-# GraphSync
\ No newline at end of file
diff --git a/content/libraries/ipfs/unixfs.md b/content/libraries/ipfs/unixfs.md
deleted file mode 100644
index c88a1341c..000000000
--- a/content/libraries/ipfs/unixfs.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: UnixFS
-dashboardWeight: 1
-dashboardState: missing
-dashboardAudit: missing
-dashboardTests: 0
----
-
-# UnixFS
diff --git a/content/libraries/ipld/_index.md b/content/libraries/ipld/_index.md
index 896d95e16..abbaf452b 100644
--- a/content/libraries/ipld/_index.md
+++ b/content/libraries/ipld/_index.md
@@ -1,14 +1,41 @@
 ---
 title: IPLD
-description: IPLD - InterPlanetary Linked Data
 bookCollapseSection: true
-weight: 3
+weight: 4
 dashboardWeight: 1
-dashboardState: missing
+dashboardState: stable
 dashboardAudit: missing
 dashboardTests: 0
 ---
 
-# IPLD - InterPlanetary Linked Data
+# IPLD
 
-{{< embed src="ipld.id" lang="go" >}}
+InterPlanetary Linked Data ([IPLD](https://ipld.io/)) is the data model of the content-addressable web. It provides standards and formats to build Merkle-DAG data structures, like those that represent a filesystem. IPLD treats all hash-linked data structures as subsets of a unified information space: any data model that links data via hashes is an instance of IPLD. This means that data can be linked to and referenced from totally different data structures in a global namespace, a feature that Filecoin uses extensively.
+
+IPLD introduces several concepts and protocols, such as the concept of content addressing itself, codecs such as DAG-CBOR, file formats such as Content Addressable aRchives (CARs), and protocols such as GraphSync.
+
+Please refer to the [IPLD specifications repository](https://github.com/ipld/specs) for more information.
+
+## DAG-CBOR encoding
+
+All Filecoin system data structures are stored using DAG-CBOR (an IPLD codec). DAG-CBOR is a stricter subset of CBOR with a predefined tagging scheme, designed for storage, retrieval, and traversal of hash-linked data DAGs.
+
+Files and data stored on the Filecoin network are also stored using various IPLD codecs (not necessarily DAG-CBOR). IPLD provides a consistent and coherent abstraction above data that allows Filecoin to build and interact with complex, multi-block data structures, such as HAMT and AMT. Filecoin uses the DAG-CBOR codec for the serialization and deserialization of its data structures and interacts with that data using the IPLD Data Model, upon which various tools are built. IPLD Selectors are also used to address specific nodes within a linked data structure (see GraphSync below).
+
+Please refer to the [DAG-CBOR specification](https://github.com/ipld/specs/blob/master/block-layer/codecs/dag-cbor.md) for more information.
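+
+As a rough, non-normative illustration of DAG-CBOR in Go, the sketch below uses the `go-ipld-cbor` and `go-multihash` libraries to serialize a struct and derive its CID; the `Deal` struct is hypothetical and merely stands in for a hash-linked data structure:
+
+```go
+package main
+
+import (
+	"fmt"
+
+	cbornode "github.com/ipfs/go-ipld-cbor"
+	mh "github.com/multiformats/go-multihash"
+)
+
+// Deal is a hypothetical two-field structure used only for this example.
+type Deal struct {
+	Client   string
+	Provider string
+}
+
+func main() {
+	// Register the type so go-ipld-cbor knows how to (de)serialize it.
+	cbornode.RegisterCborType(Deal{})
+
+	deal := Deal{Client: "f01234", Provider: "f05678"}
+
+	// WrapObject serializes the struct as DAG-CBOR and hashes the bytes,
+	// yielding an IPLD node whose CID commits to the encoding.
+	node, err := cbornode.WrapObject(deal, mh.SHA2_256, -1)
+	if err != nil {
+		panic(err)
+	}
+
+	fmt.Println("CID:", node.Cid())
+	fmt.Println("encoded length:", len(node.RawData()))
+}
+```
+
+Because the CID is derived from the encoded bytes, any party holding the CID can fetch the block and verify it by re-hashing, which is what makes hash-linked structures self-verifying.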
+
+## Content Addressable aRchives (CARs)
+
+The Content Addressable aRchives (CAR) format is used to store content addressable objects in the form of IPLD block data as a sequence of bytes, typically in a file with a `.car` filename extension.
+
+The CAR format is used to produce a _Filecoin Piece_ (the main representation of files in Filecoin) by serializing its IPLD DAG. The `.car` file then goes through further transformations to produce the _Piece CID_.
+
+Please refer to the [CAR specification](https://github.com/ipld/specs/blob/master/block-layer/content-addressable-archives.md) for further information.
+
+## GraphSync
+
+GraphSync is a request-response protocol that synchronizes _parts_ of a graph (an authenticated Directed Acyclic Graph, or DAG) between different peers. It uses _selectors_ to identify the specific subset of the graph to be synchronized.
+
+GraphSync is used by Filecoin to synchronize parts of the blockchain.
+
+Please refer to the [GraphSync specification](https://github.com/ipld/specs/blob/master/block-layer/graphsync/graphsync.md) for more information.
diff --git a/content/libraries/ipld/cid.md b/content/libraries/ipld/cid.md
deleted file mode 100644
index 642a56b19..000000000
--- a/content/libraries/ipld/cid.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: CID
-description: CIDs - Content IDentifiers
-dashboardWeight: 1
-dashboardState: wip
-dashboardAudit: n/a
-dashboardTests: 0
----
-
-# CIDs - Content IDentifiers
-
-For most objects referenced by Filecoin, a Content Identifier (CID for short) is used. Any pointer inclusions in the Filecoin spec `id` files (e.g. `&Object`) denotes the CID of said object. Some objects explicitly name a CID field. The spec treats these notations interchangeably.
-This is effectively a hash value, prefixed with its hash function (multihash) as well as extra labels to inform applications about how to deserialize the given data.
-
-For a more detailed specification, we refer the reader to the
-[IPLD repository](https://github.com/ipld/cid).
diff --git a/content/libraries/ipld/datamodel.md b/content/libraries/ipld/datamodel.md
deleted file mode 100644
index 0c06e21a7..000000000
--- a/content/libraries/ipld/datamodel.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: Data Model
-dashboardWeight: 1
-dashboardState: missing
-dashboardAudit: n/a
-dashboardTests: 0
----
-
-# Data Model
\ No newline at end of file
diff --git a/content/libraries/ipld/ipld.id b/content/libraries/ipld/ipld.id
deleted file mode 100644
index 5cbffcae1..000000000
--- a/content/libraries/ipld/ipld.id
+++ /dev/null
@@ -1,17 +0,0 @@
-// imported as ipld.Object
-
-import cid "github.com/ipfs/go-cid"
-
-type Object interface {
-    CID() cid.Cid
-
-    // Populate(v interface{}) error
-}
-
-type GraphStore struct {
-    // Retrieves a serialized value from the store by CID. Returns the value and whether it was found.
-    Get(c cid.Cid) (util.Bytes, bool)
-
-    // Puts a serialized value in the store, returning the CID.
-    Put(value util.Bytes) (c cid.Cid)
-}
diff --git a/content/libraries/ipld/selectors.id b/content/libraries/ipld/selectors.id
deleted file mode 100644
index 34beda5a9..000000000
--- a/content/libraries/ipld/selectors.id
+++ /dev/null
@@ -1,153 +0,0 @@
-// This is a compression of the IPLD Selector Spec
-// Full spec: https://github.com/ipld/specs/blob/master/selectors/selectors.md
-
-type Selector union {
-    Matcher
-    ExploreAll
-    ExploreFields
-    ExploreIndex
-    ExploreRange
-    ExploreRecursive
-    ExploreUnion
-    ExploreConditional
-    ExploreRecursiveEdge
-}
-
-// ExploreAll is similar to a `*` -- it traverses all elements of an array,
-// or all entries in a map, and applies a next selector to the reached nodes.
-type ExploreAll struct {
-    next Selector
-}
-
-// ExploreFields traverses named fields in a map (or equivalently, struct, if
-// traversing on typed/schema nodes) and applies a next selector to the
-// reached nodes.
-//
-// Note that a concept of exploring a whole path (e.g. "foo/bar/baz") can be
-// represented as a set of three nexted ExploreFields selectors, each
-// specifying one field.
-type ExploreFields struct {
-    fields {string: Selector}
-}
-
-// ExploreIndex traverses a specific index in a list, and applies a next
-// selector to the reached node.
-type ExploreIndex struct {
-    index UInt
-    next Selector
-}
-
-// ExploreIndex traverses a list, and for each element in the range specified,
-// will apply a next selector to those reached nodes.
-type ExploreRange struct {
-    start UInt
-    end UInt
-    next Selector
-}
-
-// ExploreRecursive traverses some structure recursively.
-// To guide this exploration, it uses a "sequence", which is another Selector
-// tree; some leaf node in this sequence should contain an ExploreRecursiveEdge
-// selector, which denotes the place recursion should occur.
-//
-// In implementation, whenever evaluation reaches an ExploreRecursiveEdge marker
-// in the recursion sequence's Selector tree, the implementation logically
-// produces another new Selector which is a copy of the original
-// ExploreRecursive selector, but with a decremented maxDepth parameter, and
-// continues evaluation thusly.
-//
-// It is not valid for an ExploreRecursive selector's sequence to contain
-// no instances of ExploreRecursiveEdge; it *is* valid for it to contain
-// more than one ExploreRecursiveEdge.
-//
-// ExploreRecursive can contain a nested ExploreRecursive!
-// This is comparable to a nested for-loop.
-// In these cases, any ExploreRecursiveEdge instance always refers to the
-// nearest parent ExploreRecursive (in other words, ExploreRecursiveEdge can
-// be thought of like the 'continue' statement, or end of a for-loop body;
-// it is *not* a 'goto' statement).
-//
-// Be careful when using ExploreRecursive with a large maxDepth parameter;
-// it can easily cause very large traversals (especially if used in combination
-// with selectors like ExploreAll inside the sequence).
-type ExploreRecursive struct {
-    sequence Selector
-    maxDepth UInt
-    stopAt Condition
-}
-
-// ExploreRecursiveEdge is a special sentinel value which is used to mark
-// the end of a sequence started by an ExploreRecursive selector: the recursion
-// goes back to the initial state of the earlier ExploreRecursive selector,
-// and proceeds again (with a decremented maxDepth value).
-//
-// An ExploreRecursive selector that doesn't contain an ExploreRecursiveEdge
-// is nonsensical. Containing more than one ExploreRecursiveEdge is valid.
-// An ExploreRecursiveEdge without an enclosing ExploreRecursive is an error.
-type ExploreRecursiveEdge struct {}
-
-// ExploreUnion allows selection to continue with two or more distinct selectors
-// while exploring the same tree of data.
-//
-// ExploreUnion can be used to apply a Matcher on one node (causing it to
-// be considered part of a (possibly labelled) result set), while simultaneously
-// continuing to explore deeper parts of the tree with another selector,
-// for example.
-type ExploreUnion [Selector]
-
-// Note that ExploreConditional versus a Matcher with a Condition are distinct:
-// ExploreConditional progresses deeper into a tree;
-// whereas a Matcher with a Condition may look deeper to make its decision,
-// but returns a match for the node it's on rather any of the deeper values.
-type ExploreConditional struct {
-    condition Condition
-    next Selector
-}
-
-// Matcher marks a node to be included in the "result" set.
-// (All nodes traversed by a selector are in the "covered" set (which is a.k.a.
-// "the merkle proof"); the "result" set is a subset of the "covered" set.)
-//
-// In libraries using selectors, the "result" set is typically provided to
-// some user-specified callback.
-//
-// A selector tree with only "explore*"-type selectors and no Matcher selectors
-// is valid; it will just generate a "covered" set of nodes and no "result" set.
-type Matcher struct {
-    onlyIf Condition? // match is true based on position alone if this is not set.
-    label string? // labels can be used to match multiple different structures in one selection.
-}
-
-// Condition is expresses a predicate with a boolean result.
-//
-// Condition clauses are used several places:
-//  - in Matcher, to determine if a node is selected.
-//  - in ExploreRecursive, to halt exploration.
-//  - in ExploreConditional,
-//
-//
-// TODO -- Condition is very skeletal and incomplete.
-// The place where Condition appears in other structs is correct;
-// the rest of the details inside it are not final nor even completely drafted.
-type Condition union {
-    // We can come back to this and expand it later...
-    // TODO: figure out how to make this recurse correctly, so I can say "hasField{hasField{or{hasValue{1}, hasValue{2}}}}".
-    Condition_HasField
-    Condition_HasValue
-    Condition_HasKind
-    Condition_IsLink
-    Condition_GreaterThan
-    Condition_LessThan
-    Condition_And
-    Condition_Or
-    // REVIEW: since we introduced "and" and "or" here, we're getting into dangertown again. we'll need a "max conditionals limit" (a la 'gas' of some kind) near here.
-}
-
-type Condition_HasField struct {}
-type Condition_HasKind struct {}
-type Condition_HasValue struct {}
-type Condition_And struct {}
-type Condition_GreaterThan struct {}
-type Condition_IsLink struct {}
-type Condition_LessThan struct {}
-type Condition_Or struct {}
diff --git a/content/libraries/ipld/selectors.md b/content/libraries/ipld/selectors.md
deleted file mode 100644
index 623d55d72..000000000
--- a/content/libraries/ipld/selectors.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Selectors
-description: Selectors - IPLD Query Language
-dashboardWeight: 1
-dashboardState: missing
-dashboardAudit: n/a
-dashboardTests: 0
----
-
-# Selectors - IPLD Query Language
-
-{{< embed src="selectors.id" lang="go" >}}
diff --git a/content/libraries/libp2p/_index.md b/content/libraries/libp2p/_index.md
index a3b25748b..8e79bc533 100644
--- a/content/libraries/libp2p/_index.md
+++ b/content/libraries/libp2p/_index.md
@@ -1,16 +1,31 @@
 ---
 title: libp2p
-description: A modular network stack. Run your network applications free from runtime and address services, independently of their location.
-weight: 4
+weight: 5
 bookCollapseSection: true
 dashboardWeight: 1
-dashboardState: missing
+dashboardState: stable
 dashboardTests: 0
 dashboardAudit: done
 dashboardAuditDate: '2019-10-10'
 dashboardAuditURL: https://github.com/protocol/libp2p-vulnerabilities/blob/master/DRAFT_NCC_Group_ProtocolLabs_1903ProtocolLabsLibp2p_Report_2019-10-10_v1.1.pdf
 ---
 
-# Libp2p - A modular network stack
+# Libp2p
 
-{{< embed src="libp2p.id" lang="go" >}}
+[Libp2p](https://libp2p.io) is a modular network protocol stack for peer-to-peer networks. It consists of a catalogue of modules from which p2p network developers can select and reuse just the protocols they need, while making it easy to upgrade and interoperate between applications. This includes several protocols and algorithms that enable efficient peer-to-peer communication, such as peer discovery, peer routing, and NAT traversal. While libp2p is used by both IPFS and Filecoin, it is a standalone stack that can be used independently of these systems as well.
+
+There are several implementations of libp2p, which can be found at the [libp2p GitHub repository](https://github.com/libp2p). The specification of libp2p can be found in its [specs repo](https://github.com/libp2p/specs) and its documentation at [https://docs.libp2p.io](https://docs.libp2p.io).
+
+Below we discuss how some of libp2p's components are used in Filecoin.
+
+## DHT
+
+The Kademlia DHT implementation of libp2p is used by Filecoin for peer discovery and peer exchange. Libp2p's [PeerID](https://github.com/libp2p/specs/blob/master/peer-ids/peer-ids.md) is used as the ID scheme for Filecoin storage miners and, more generally, Filecoin nodes. One way that clients find miner information, such as a miner's address, is by using the DHT to resolve the associated PeerID to the miner's _Multiaddress_.
+
+The Kademlia DHT implementation of libp2p in Go can be found in its [GitHub repository](https://github.com/libp2p/go-libp2p-kad-dht).
+
+## GossipSub
+
+GossipSub is libp2p's pubsub protocol. Filecoin uses GossipSub for message and block propagation among Filecoin nodes. The recent hardening extensions of GossipSub include a number of techniques to make it robust against a variety of attacks.
+
+Please refer to [GossipSub's Spec section](gossip_sub) or the protocol’s more complete [specification](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md) for details on its design, implementation, and parameter settings. A [technical report](https://arxiv.org/abs/2007.02754) is also available, which discusses the design rationale of the protocol.
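+
+As a non-normative illustration, the sketch below uses the `go-libp2p-pubsub` library to attach a GossipSub router to a libp2p host, join a topic, and publish a message. The topic name is illustrative only, not an actual Filecoin topic:
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	libp2p "github.com/libp2p/go-libp2p"
+	pubsub "github.com/libp2p/go-libp2p-pubsub"
+)
+
+func main() {
+	ctx := context.Background()
+
+	// Start a libp2p host with default options.
+	h, err := libp2p.New(ctx)
+	if err != nil {
+		panic(err)
+	}
+
+	// Attach a GossipSub router to the host.
+	ps, err := pubsub.NewGossipSub(ctx, h)
+	if err != nil {
+		panic(err)
+	}
+
+	// Join a topic and subscribe to it.
+	topic, err := ps.Join("/fil/example/blocks")
+	if err != nil {
+		panic(err)
+	}
+	sub, err := topic.Subscribe()
+	if err != nil {
+		panic(err)
+	}
+
+	// Publish a message; with default settings the local subscription
+	// also receives messages published by this host.
+	if err := topic.Publish(ctx, []byte("hello gossip")); err != nil {
+		panic(err)
+	}
+	msg, err := sub.Next(ctx)
+	if err != nil {
+		panic(err)
+	}
+	fmt.Printf("got message from %s: %s\n", msg.ReceivedFrom, msg.Data)
+}
+```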
diff --git a/content/libraries/libp2p/fil_libp2p_nodes.md b/content/libraries/libp2p/fil_libp2p_nodes.md
deleted file mode 100644
index b704fb5b4..000000000
--- a/content/libraries/libp2p/fil_libp2p_nodes.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: FIL Libp2p Nodes
-description: Filecoin libp2p Nodes
-dashboardWeight: 1
-dashboardState: missing
-dashboardAudit: n/a
-dashboardTests: 0
----
-
-# FIL libp2p nodes
diff --git a/content/libraries/libp2p/gossipsub.md b/content/libraries/libp2p/gossipsub.md
deleted file mode 100644
index cf6670ee6..000000000
--- a/content/libraries/libp2p/gossipsub.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Gossipsub
-description: Gossipsub for broadcasts
-dashboardWeight: 1
-dashboardState: missing
-dashboardTests: 0
-dashboardAudit: done
-dashboardAuditDate: '2020-06-03'
-dashboardAuditURL: https://gateway.ipfs.io/ipfs/QmWR376YyuyLewZDzaTHXGZr7quL5LB13HRFnNdSJ3CyXu/Least%20Authority%20-%20Gossipsub%20v1.1%20Final%20Audit%20Report%20%28v2%29.pdf
----
-
-# Gossipsub
diff --git a/content/libraries/libp2p/kad_dht.md b/content/libraries/libp2p/kad_dht.md
deleted file mode 100644
index 34aa5d1c0..000000000
--- a/content/libraries/libp2p/kad_dht.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: DHT
-description: Kademlia DHT for Peer Routing
-dashboardWeight: 1
-dashboardState: missing
-dashboardAudit: n/a
-dashboardTests: 0
----
-
-# DHT
\ No newline at end of file
diff --git a/content/libraries/libp2p/libp2p.id b/content/libraries/libp2p/libp2p.id
deleted file mode 100644
index 40fd0055e..000000000
--- a/content/libraries/libp2p/libp2p.id
+++ /dev/null
@@ -1,94 +0,0 @@
-import peer "github.com/libp2p/go-libp2p-core/peer"
-
-type Node struct {
-    // PeerID returns the PeerID associated with this libp2p Node
-    PeerID() peer.ID
-
-    // MountProtocol adds given Protocol under specified protocol id.
-    MountProtocol(path ProtocolPath, protocol Protocol)
-
-    // ConnectPeerID establishes a connection to peer matching given PeerInfo.
-    //
-    // peer.AddrInfo may be empty. If so:
-    // - Libp2pNode will try to use any Multiaddrs it knows (internal PeerStore)
-    // - Libp2pNode may use any `PeerRouting` protocol mounted onto the libp2p node.
-    //   TODO: how to define this.
-    //   NOTE: probably implies using kad-dht or gossipsub for this.
-    //
-    // Idempotent. If a connection already exists, this method returns silently.
-    Connect(peerInfo peer.AddrInfo)
-}
-
-type ProtocolPath string
-
-type Protocol union {
-    StreamProtocol
-    DatagramProtocol
-}
-
-// Stream is an interface to deal with networked processes, which communicate
-// via streams of bytes.
-//
-// See golang.org/pkg/io -- as this is modelled after io.Reader and io.Writer
-type Stream struct {
-    // Read reads bytes from the underlying stream and copies them to buf.
-    // Read returns the number of bytes read (n), and potentially an error
-    // encountered while reading. Read reads at most len(buf) byte.
-    // Read may read 0 bytes.
-    Read(buf Bytes) union {n int, err error}
-
-    // Write writes bytes to the underlying stream, copying them from buf.
-    // Write returns the number of bytes written (n), and potentially an error
-    // encountered while writing. Write writes at most len(buf) byte.
-    // Write may read 0 bytes.
-    Write(buf Bytes) union {n int, err error}
-
-    // Close terminates client's use of the stream.
-    // Calling Read or Write after Close is an error.
-    Close() error
-}
-
-type StreamProtocol struct {
-    // AcceptStream accepts an incoming stream connection.
-    AcceptStream() struct {
-        stream Stream
-        peerInfo peer.AddrInfo
-        err error
-    }
-
-    // OpenStream opens a stream to a particular PeerID.
-    OpenStream(peerInfo peer.AddrInfo) struct {
-        stream Stream
-        err error
-    }
-}
-
-// Datagram
-type Datagram Bytes
-
-// Datagrams are "messages" in the network packet sense of the word.
-//
-// "message-oriented network protocols" should use this interface,
-// not the StreamProtocol interface.
-//
-// We call it "Datagram" here because unfortunately the word "Message"
-// is very overloaded in Filecoin.
-// Suggestion for libp2p: use datagram too.
-type DatagramProtocol struct {
-    // AcceptDatagram accepts an incoming message.
-    AcceptDatagram() struct {
-        datagram Datagram
-        peerInfo peer.AddrInfo
-        err error
-    }
-
-    // OpenStream opens a stream to a particular PeerID
-    SendDatagram(datagram Datagram, peerInfo peer.AddrInfo) struct {err error}
-}
-
-// type StorageDealLibp2pProtocol struct {
-//   StreamProtocol StreamProtocol
-//   // ---
-//   AcceptStream() struct {}
-//   OpenStream() struct {}
-// }
diff --git a/content/libraries/multiformats/_index.md b/content/libraries/multiformats/_index.md
index 32c55a9cd..b829f0c26 100644
--- a/content/libraries/multiformats/_index.md
+++ b/content/libraries/multiformats/_index.md
@@ -1,22 +1,37 @@
 ---
 title: Multiformats
-description: Multiformats - self describing protocol values
-weight: 6
+weight: 3
 dashboardWeight: 1
-dashboardState: missing
-dashboardAudit: n/a
+dashboardState: stable
+dashboardAudit: missing
 dashboardTests: 0
 ---
 
 # Multiformats
 
-Self-describing protocol values
+[Multiformats](https://multiformats.io/) is a set of self-describing protocol values. These values are useful both to the data layer (IPLD) and to the network layer (libp2p). Multiformats includes specifications for the Content Identifier (CID) used by IPLD and IPFS, for the multicodec and multibase formats, and for the multiaddr format used by libp2p.
 
-## Multihash - self describing hash values
+Please refer to the [Multiformats repository](https://github.com/multiformats) for more information.
 
-{{< embed src="multihash.id" lang="go" >}}
+## CIDs
 
-## Multiaddr - self describing network addresses
+Filecoin references data using IPLD's Content Identifier (CID).
 
-{{< embed src="multiaddr.id" lang="go" >}}
+A CID is a hash digest prefixed with identifiers for its hash function and codec. This means you can validate and decode data with only this identifier.
+When CIDs are printed as strings they also use multibase to identify the base encoding being used.
+
+For a more detailed specification, please see the
+[CID specification](https://github.com/multiformats/cid).
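+
+As a brief, non-normative illustration using the `go-cid` and `go-multihash` libraries, the sketch below hashes a payload into a multihash and combines it with a codec to form a CIDv1:
+
+```go
+package main
+
+import (
+	"fmt"
+
+	cid "github.com/ipfs/go-cid"
+	mh "github.com/multiformats/go-multihash"
+)
+
+func main() {
+	data := []byte("hello filecoin")
+
+	// Hash the payload with SHA2-256. The result is a self-describing
+	// multihash: hash function code + digest length + digest.
+	h, err := mh.Sum(data, mh.SHA2_256, -1)
+	if err != nil {
+		panic(err)
+	}
+
+	// Pair the multihash with a codec (here DAG-CBOR) to form a CIDv1.
+	c := cid.NewCidV1(cid.DagCBOR, h)
+
+	// String() renders the CID with a multibase prefix (base32 by default for v1).
+	fmt.Println(c)
+}
+```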
+
+## Multihash
+
+A Multihash is a self-describing hash value. Multihash is used for differentiating outputs from various well-established cryptographic hash functions, while addressing size and encoding considerations.
+
+Please refer to the [Multihash specification](https://github.com/multiformats/multihash) for more information.
+
+## Multiaddr
+
+A Multiaddress is a self-describing network address. Multiaddresses are composable and future-proof network addresses used by libp2p.
+
+Please refer to the [Multiaddr specification](https://github.com/multiformats/multiaddr) for more information.
diff --git a/content/libraries/multiformats/multiaddr.id b/content/libraries/multiformats/multiaddr.id
deleted file mode 100644
index 5ca10d4d1..000000000
--- a/content/libraries/multiformats/multiaddr.id
+++ /dev/null
@@ -1 +0,0 @@
-type Multiaddr Bytes
diff --git a/content/libraries/multiformats/multihash.id b/content/libraries/multiformats/multihash.id
deleted file mode 100644
index 1c199309c..000000000
--- a/content/libraries/multiformats/multihash.id
+++ /dev/null
@@ -1 +0,0 @@
-type Multihash Bytes
diff --git a/content/systems/filecoin_markets/storage_market/_index.md b/content/systems/filecoin_markets/storage_market/_index.md
index 0540452ba..c7878430a 100644
--- a/content/systems/filecoin_markets/storage_market/_index.md
+++ b/content/systems/filecoin_markets/storage_market/_index.md
@@ -56,7 +56,7 @@ Negotiation is the out of band process where a storage client and a storage prov
 Negotiation begins once a client has discovered a miner whose `StorageAsk` meets their desired criteria.The *recommended* order of operations for negotiating and publishing a deal is as follows:
 
 1. Before sending a proposal to the provider, the `StorageClient` adds funds for a deal, as necessary, to the `StorageMarketActor` (by calling `AddBalance`)
-2. In order to propose a storage deal, the `StorageClient` then calculates the piece commitment (`CommP`) for the data it intends to store ahead of time. This is neccesary so that the `StorageProvider` can verify the data the `StorageClient` sends to be stored matches the `CommP` in the `StorageDealProposal`. For more detail about the relationship between payloads, pieces, and `CommP` see [Piece](piece) and [Filproofs](filproofs).
+2. In order to propose a storage deal, the `StorageClient` then calculates the piece commitment (`CommP`) for the data it intends to store ahead of time. This is necessary so that the `StorageProvider` can verify that the data the `StorageClient` sends to be stored matches the `CommP` in the `StorageDealProposal`.
 3. The `StorageClient` now creates a `StorageDealProposal` and sends the proposal and the CID for the root of the data payload to be stored to the `StorageProvider` using the `Storage Deal Protocol`
 
 Execution now moves to the `StorageProvider`
diff --git a/content/systems/filecoin_nodes/repository/ipldstore/_index.md b/content/systems/filecoin_nodes/repository/ipldstore/_index.md
index 1926a7d2b..9e906fac2 100644
--- a/content/systems/filecoin_nodes/repository/ipldstore/_index.md
+++ b/content/systems/filecoin_nodes/repository/ipldstore/_index.md
@@ -9,7 +9,6 @@ dashboardTests: 0
 ---
 
 # IPLD Store - Local Storage for hash-linked data
 
-{{}}
 IPLD is a set of libraries which allow for the interoperability of content-addressed data structures across different distributed systems. It provides a fundamental 'common language' to primitive cryptographic hashing, enabling data to be verifiably referenced and retrieved between two independent protocols. For example, a user can reference a git commit in a blockchain transaction to create an immutable copy and timestamp, or a data from a DHT can be referenced and linked to in a smart contract.