Once the test network peers and orderers have been started, and the network identities have been registered and enrolled with the ECert CA, we can construct a channel linking the participants of the test network blockchain.
```
$ export TEST_NETWORK_CHANNEL_NAME="mychannel"
$ ./network channel create

Creating channel "mychannel":
✅ - Creating channel MSP ...
✅ - Aggregating channel MSP ...
✅ - Launching admin CLIs ...
✅ - Creating channel "mychannel" ...
✅ - Joining org1 peers to channel "mychannel" ...
✅ - Joining org2 peers to channel "mychannel" ...
🏁 - Channel is ready.
```
In order to construct a communication channel, the following steps must be performed:

- TLS and MSP public certificates must be aggregated and distributed to all participants in the network.
- Each organization launches a command-line pod with an MSP environment such that all Fabric binaries are executed as the Admin user.
- The channel genesis block is constructed from `configtx.yaml`, and `osnadmin` is used to distribute the new channel configuration block to all orderers in the network.
- The network peers fetch the genesis block from the orderers and use the configuration to join the channel.
✅ - Creating channel MSP ...
✅ - Aggregating channel MSP ...
One of the responsibilities of a Hyperledger Fabric Consortium Organizer is to distribute the public MSP and TLS certificates to the organizations participating in a blockchain. In the Docker Compose based test network, or in systems bootstrapped with the `cryptogen` command, all of the public certificates are available on a common file system or volume share. In our Kubernetes test network, each organization maintains its cryptographic assets on a distinct persistent volume, invisible to the other participants in the network.
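As a quick illustration of this separation, each organization's volume can be inspected through its own ECert CA pod. The deployment names and mount path below match the `kubectl` commands used later in this guide; the listing itself is just a sanity check and not part of the channel creation flow:

```
# Each command only sees the crypto material stored on that org's own volume.
kubectl -n $NS exec deploy/org0-ecert-ca -- ls /var/hyperledger/fabric/organizations
kubectl -n $NS exec deploy/org1-ecert-ca -- ls /var/hyperledger/fabric/organizations
kubectl -n $NS exec deploy/org2-ecert-ca -- ls /var/hyperledger/fabric/organizations
```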
To distribute the TLS and MSP public certificates, the test network emulates the responsibilities of the consortium organizer by constructing a channel MSP structure, extracting the relevant certificate files into a single `msp-{org}.tgz` archive. This MSP archive is then relayed to network participants, where it can be extracted and used to set the MSP context in which the peer executes administrative commands.
The kube-specific techniques employed in MSP construction and distribution are:

- Channel MSP is generated by piping shell commands into each org's ECert CA pod.
- Channel MSP is extracted to the local system by piping the output of `tar` through `kubectl` (equivalent to the `kubectl cp` command). MSP archives are saved locally as `build/msp/msp-{org}.tgz`.
- Channel MSP is distributed across network participants by transferring the MSP archive files from `build/msp/msp-{org}.tgz` into the cluster as a config map.
- An `initContainer` is launched in each organization's Admin CLI pod, unfurling the MSP context from the `msp-config` config map into the local volume.
Despite this additional complexity, this technique allows us to carefully target the MSP context in which remote peer commands execute. The construct of an MSP Archive may be extended to other circumstances in which a consortium organizer transfers the public certificates in an out-of-band fashion to participants of a blockchain network.
Aggregating the certificates as a local MSP archive is accomplished by piping a `tar` archive from the output of a remote `kubectl` into local archive files. These files are then mounted into the Kube namespace by constructing the `msp-config` config map:
```
kubectl -n $NS exec deploy/org0-ecert-ca -- tar zcvf - -C /var/hyperledger/fabric organizations/ordererOrganizations/org0.example.com/msp > msp/msp-org0.example.com.tgz
kubectl -n $NS exec deploy/org1-ecert-ca -- tar zcvf - -C /var/hyperledger/fabric organizations/peerOrganizations/org1.example.com/msp > msp/msp-org1.example.com.tgz
kubectl -n $NS exec deploy/org2-ecert-ca -- tar zcvf - -C /var/hyperledger/fabric organizations/peerOrganizations/org2.example.com/msp > msp/msp-org2.example.com.tgz

kubectl -n $NS delete configmap msp-config || true
kubectl -n $NS create configmap msp-config --from-file=msp/
```
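To confirm the aggregation worked, the local archives and the resulting config map can be inspected. This is a simple sanity check; the archive name below is one of the files produced by the commands above:

```
# List the certificate material captured in one of the local MSP archives.
tar -tzf msp/msp-org1.example.com.tgz | head

# Confirm that each org's archive was loaded as a key of the msp-config config map.
kubectl -n $NS describe configmap msp-config
```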
✅ - Launching admin CLIs ...
After the channel MSP archives have been constructed and loaded into the `msp-config` ConfigMap, a series of Kubernetes pods is launched in the namespace with an environment suitable for running the Fabric `peer` commands as the organization Administrator. Before starting the CLI admin container, an `initContainer` reads the MSP archives and unfurls them into a location on the org's persistent volume:
```
# This init container will unfurl all of the MSP archives listed in the msp-config config map.
initContainers:
  - name: msp-unfurl
    image: busybox
    command:
      - sh
      - -c
      - "for msp in $(ls /msp/msp-*.tgz); do echo $msp && tar zxvf $msp -C /var/hyperledger/fabric; done"
    volumeMounts:
      - name: msp-config
        mountPath: /msp
      - name: fabric-volume
        mountPath: /var/hyperledger
```
Once the MSP archives are extracted, the CLI is launched and the environment is set such that `peer` commands will be executed with the organization's Administrative role.
```
cat kube/org0/org0-admin-cli.yaml | sed 's,{{FABRIC_VERSION}},'${FABRIC_VERSION}',g' | kubectl -n $NS apply -f -
cat kube/org1/org1-admin-cli.yaml | sed 's,{{FABRIC_VERSION}},'${FABRIC_VERSION}',g' | kubectl -n $NS apply -f -
cat kube/org2/org2-admin-cli.yaml | sed 's,{{FABRIC_VERSION}},'${FABRIC_VERSION}',g' | kubectl -n $NS apply -f -
```
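Once the manifests are applied, it is worth waiting for the admin pods to come up and spot-checking that the init container unfurled the MSP onto the org volume. The deployment names below are inferred from the manifest file names above and may differ; treat this as an illustrative check rather than part of the script:

```
# Wait for an admin CLI deployment to become available (name assumed from the manifest file name).
kubectl -n $NS rollout status deploy/org1-admin-cli

# Spot-check that the init container unfurled the channel MSP onto the org's volume.
kubectl -n $NS exec deploy/org1-admin-cli -- ls /var/hyperledger/fabric/organizations
```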
✅ - Creating channel "mychannel" ...
As the consortium leader org0, we create the channel genesis block and use the orderer admin REST service to register the new channel configuration block with each of the ordering nodes:
```
configtxgen -profile TwoOrgsApplicationGenesis -channelID '${CHANNEL_NAME}' -outputBlock genesis_block.pb

osnadmin channel join --orderer-address org0-orderer1:9443 --channelID '${CHANNEL_NAME}' --config-block genesis_block.pb
osnadmin channel join --orderer-address org0-orderer2:9443 --channelID '${CHANNEL_NAME}' --config-block genesis_block.pb
osnadmin channel join --orderer-address org0-orderer3:9443 --channelID '${CHANNEL_NAME}' --config-block genesis_block.pb
```
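Each orderer can then be asked which channels it is participating in, which is a convenient way to confirm the joins succeeded. The command below follows the same style (and the same admin endpoint) as the join commands above:

```
# List the channels registered on orderer1's admin endpoint.
osnadmin channel list --orderer-address org0-orderer1:9443
```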
✅ - Joining org1 peers to channel "mychannel" ...
✅ - Joining org2 peers to channel "mychannel" ...
After the channel configuration has been registered with the network orderers, each organization joins its peers to the channel by fetching the genesis block from an orderer and then submitting a channel join:
```
# Fetch the genesis block from an orderer
peer channel \
  fetch oldest \
  genesis_block.pb \
  -c '${CHANNEL_NAME}' \
  -o org0-orderer1:6050 \
  --tls --cafile /var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/tlscacerts/org0-tls-ca.pem

# Join peer1 to the channel.
CORE_PEER_ADDRESS='${org}'-peer1:7051 \
peer channel \
  join \
  -b genesis_block.pb \
  -o org0-orderer1:6050 \
  --tls --cafile /var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/tlscacerts/org0-tls-ca.pem

# Join peer2 to the channel.
CORE_PEER_ADDRESS='${org}'-peer2:7051 \
peer channel \
  join \
  -b genesis_block.pb \
  -o org0-orderer1:6050 \
  --tls --cafile /var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/tlscacerts/org0-tls-ca.pem
```
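After the joins complete, each peer's channel membership and current block height can be checked from the same admin CLI environment. This is a sanity check only, using the standard `peer channel list` and `peer channel getinfo` subcommands and the same placeholder style as the snippets above:

```
# List the channels this peer has joined.
CORE_PEER_ADDRESS='${org}'-peer1:7051 peer channel list

# Report the channel's current block height on this peer.
CORE_PEER_ADDRESS='${org}'-peer1:7051 peer channel getinfo -c '${CHANNEL_NAME}'
```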
```
$ ./network anchor peer2

✅ - Updating anchor peers to "peer2" ...
```
In the test network, `configtx.yaml` sets each organization's anchor peer to "peer1" in the genesis block. As such, no additional configuration is necessary for neighboring organizations to discover additional peers in the network.
However, setting anchor peers on a channel after it has been created requires a more involved scripting process, so the test network includes a mechanism to illustrate how anchor peers may be updated once a channel has been constructed.
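The test network drives this with its own scripts, but the underlying channel-configuration update follows the standard Fabric pattern of fetching, editing, and resubmitting the channel config. The sketch below outlines that pattern for an Org1 anchor peer; the `Org1MSP` MSP ID, the intermediate file names, and the exact jq edit are illustrative assumptions rather than the test network script's actual values:

```
# Sketch of a standard anchor-peer config update (illustrative values, not the test network's script).
export CHANNEL_NAME=mychannel
ORDERER=org0-orderer1:6050
CAFILE=/var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/tlscacerts/org0-tls-ca.pem

# Fetch and decode the current channel configuration.
peer channel fetch config config_block.pb -c $CHANNEL_NAME -o $ORDERER --tls --cafile $CAFILE
configtxlator proto_decode --input config_block.pb --type common.Block --output config_block.json
jq '.data.data[0].payload.data.config' config_block.json > config.json

# Add (or replace) the AnchorPeers value for Org1MSP.
jq '.channel_group.groups.Application.groups.Org1MSP.values.AnchorPeers =
      {"mod_policy":"Admins","value":{"anchor_peers":[{"host":"org1-peer2","port":7051}]},"version":"0"}' \
   config.json > modified_config.json

# Compute the delta between the two configurations and wrap it in an update envelope.
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id $CHANNEL_NAME --original config.pb --updated modified_config.pb --output update.pb
configtxlator proto_decode --input update.pb --type common.ConfigUpdate --output update.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"'$CHANNEL_NAME'","type":2}},"data":{"config_update":'$(cat update.json)'}}}' \
  | jq . > update_in_envelope.json
configtxlator proto_encode --input update_in_envelope.json --type common.Envelope --output update_in_envelope.pb

# Submit the update as the Org1 admin.
peer channel update -f update_in_envelope.pb -c $CHANNEL_NAME -o $ORDERER --tls --cafile $CAFILE
```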
Up to this point in the network configuration, the shell scripts orchestrating the remote volumes, peers, and admin commands have all been executed by piping a sequence of commands directly into the input of a `kubectl exec` against an existing pod. For small command sets this is adequate, but for the more complicated process of registering a channel anchor peer, we have elected to use a different approach to launch the peer update scripts on the Kubernetes cluster.
When updating anchor peers, the `./network` script will:

- Transfer the shell scripts from `/scripts/*.sh` into the remote organization's persistent volume.
- Issue a `kubectl exec` on the org's admin CLI pod to run `script-name.sh {args}` from that volume (see the sketch after this list).
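A minimal sketch of this two-step pattern, assuming an `org1-admin-cli` deployment and a hypothetical `update-anchor-peer.sh` script name (the real script and argument names live in the repository's scripts directory):

```
# Step 1: stream the local scripts directory into the org's persistent volume.
tar cf - scripts | kubectl -n $NS exec -i deploy/org1-admin-cli -- tar xf - -C /var/hyperledger/fabric

# Step 2: execute the uploaded script inside the admin CLI pod.
kubectl -n $NS exec deploy/org1-admin-cli -- /var/hyperledger/fabric/scripts/update-anchor-peer.sh peer2
```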
For non-trivial Fabric administrative tasks, this approach of uploading a script into the cluster and then executing it in an admin pod works well.