
Kubernetes

To get started with the Kube test network, you will need access to a Kubernetes cluster.

TL;DR:

$ ./network kind 
Initializing KIND cluster "kind":
✅ - Pulling docker images for Fabric 2.3.2 ...
✅ - Creating cluster "kind" ...
✅ - Launching Nginx ingress controller ...
✅ - Launching container registry "kind-registry" at localhost:5000 ...
🏁 - Cluster is ready.

and:

$ ./network unkind 
Deleting cluster "kind":
☠️  - Deleting KIND cluster kind ...
🏁 - Cluster is gone.

Kube Context

For illustration purposes, this project keeps things as simple as possible. By default, we rely on kind (Kubernetes IN Docker) to quickly spin up ephemeral, short-lived clusters for development and illustration.

To maximize portability across revisions, vendor distributions, hardware profiles, and network topologies, this project relies exclusively on scripted interaction with the Kube API controller to reflect updates in a remote cluster. While this may not be the ideal technique for managing production workloads, the objective of this guide is to provide clarity on the nuances of Fabric / Kubernetes deployments rather than an opinionated perspective on state-of-the-art techniques for cloud Dev/Ops. Targeting the core Kube APIs means there is a good chance the system will work "as-is" simply by setting the kubectl context to reference a cloud-native cluster (e.g. OCP, IKS, AWS, or Azure).
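
For example, pointing the test network at a remote cluster is typically just a matter of selecting the corresponding kubectl context (the context name below is hypothetical and depends on your provider):

$ kubectl config get-contexts
$ kubectl config use-context my-cloud-cluster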

If you don't have access to an existing cluster, or want to set up a short-lived cluster for development, testing, or CI, you can create a new cluster with:

$ ./network kind

or:

$ kind create cluster 

By default, kind will set the current Kube context to reference the new cluster. Any interaction with kubectl (or kube-context aware SDKs) will inherit the current context.

$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:55346
CoreDNS is running at https://127.0.0.1:55346/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
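
kind registers each cluster under a kubectl context named kind-<cluster name>, so with the defaults the active context can be confirmed with:

$ kubectl config current-context
kind-kind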

When you are done with the cluster, tear it down with:

$ ./network unkind 

or:

$ kind delete cluster 
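
If the cluster was created under a non-default name (e.g. via TEST_NETWORK_CLUSTER_NAME), pass the name explicitly when deleting it:

$ kind delete cluster --name ${TEST_NETWORK_CLUSTER_NAME:-kind}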

Test Network Structure

To emulate a more realistic example of multi-party collaboration, the test network forms a blockchain consensus group spanning three virtual organizations. Access to the blockchain is entirely constrained to Kubernetes private networks, and consuming applications make use of a Kube ingress controller for external visibility.

In k8s terms:

  • The blockchain is contained within a single Kubernetes Cluster.
  • Blockchain services (peers, orderers, CAs, chaincode, etc.) reside within a single Namespace.
  • Each organization maintains a distinct, independent PersistentVolumeClaim for TLS certificates, local MSP, private data, and transaction ledgers.
  • Smart Contracts rely exclusively on the Chaincode-as-a-Service and External Builder patterns, running in the cluster as Kube Deployments with companion Services.
  • An HTTP(s) Ingress and companion gateway application is required for external access to the blockchain.

When running the test network locally, the ./network kind bootstrap will configure the system with an Nginx ingress controller, a private Container Registry, and persistent volumes / claims for host-local organization storage.

Behind the scenes, ./network kind is running:

# Create the KIND cluster and nginx ingress controller bound to :80 and :443 
kind create cluster --name ${TEST_NETWORK_CLUSTER_NAME:-kind} --config scripts/kind-config.yaml

# Create the Kube namespace
NS=${TEST_NETWORK_NAMESPACE:-test-network}
kubectl create namespace $NS

# Create host persistent volumes (tied to the kind-control-plane docker container lifetime)
kubectl create -f kube/pv-fabric-org0.yaml
kubectl create -f kube/pv-fabric-org1.yaml
kubectl create -f kube/pv-fabric-org2.yaml

# Create persistent volume claims binding to the host (docker) volumes 
kubectl -n $NS create -f kube/pvc-fabric-org0.yaml 
kubectl -n $NS create -f kube/pvc-fabric-org1.yaml 
kubectl -n $NS create -f kube/pvc-fabric-org2.yaml 
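
After the bootstrap completes, the results can be sanity-checked with standard kubectl queries, assuming the defaults above and the usual ingress-nginx namespace for the ingress controller:

# Ingress controller pods
kubectl -n ingress-nginx get pods

# Host persistent volumes and the per-organization claims
kubectl get pv
kubectl -n ${TEST_NETWORK_NAMESPACE:-test-network} get pvc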

Container Registry

The kube yaml descriptors generally rely on the official Fabric images maintained at the public Docker Hub and GitHub container registries. For casual usage, the test network will bootstrap and launch CAs, peers, orderers, chaincode, and sample applications without any additional configuration.

While public images are available for the pre-canned samples, there will undoubtedly be cases where you would like to build custom chaincode, gateway client applications, or custom builds of the core Fabric binaries. For this purpose, the Kube test network includes a local registry you can use to deploy custom images directly into the cluster without uploading your code to the Internet.

By default, the kind.sh bootstrap will configure and link up a local container registry running at localhost:5000/. Images pushed to this registry will be immediately available to Pods deployed to the local cluster.
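
For example, a custom chaincode image could be built and pushed to the local registry along these lines (the image name and build context here are hypothetical):

# Build a custom image and push it to the in-cluster registry
docker build -t localhost:5000/my-custom-chaincode:dev .
docker push localhost:5000/my-custom-chaincode:dev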

For dev/test/CI flows using an external registry, the traditional Kubernetes practice of adding ImagePullSecrets to a service account still applies.
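
As a minimal sketch, assuming a private registry at registry.example.com and the default service account in the test-network namespace:

# Store the registry credentials in a docker-registry secret
kubectl -n test-network create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>

# Reference the secret from the service account used by the test network pods
kubectl -n test-network patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'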

Cloud Vendors

While the test network primarily targets KIND clusters, the singular reliance on the Kube API plane means that it should also work without modification on any modern cloud-based or bare metal Kubernetes distribution. While supporting the entire ecosystem of cloud vendors is not in scope for this sample project, we'd love to hear feedback, success stories, or bugs related to applying the test network to additional platforms.

At a high level, the steps required to port the test network to any Kube vendor are (see the sketch after this list):

  • Configure an HTTP Ingress for access to any gateway, REST, or companion blockchain applications.
  • Register PersistentVolumeClaims for each of the organizations in the test network.
  • Create a Namespace for each instance of the test network.
  • Upload your chaincode, gateway clients, and application logic to an external Container Registry.
  • Run with a ServiceAccount and role bindings suitable for creating Pods, Deployments, and Services.
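
As a rough sketch of the namespace and storage steps, reusing the descriptors from the kube/ directory (storage classes, ingress classes, and registry details will vary by vendor):

# Create a namespace for this instance of the test network
kubectl create namespace test-network

# Register a persistent volume claim for each organization
# (edit the descriptors to reference a storage class supported by your vendor)
kubectl -n test-network create -f kube/pvc-fabric-org0.yaml
kubectl -n test-network create -f kube/pvc-fabric-org1.yaml
kubectl -n test-network create -f kube/pvc-fabric-org2.yaml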

Example configurations for common cloud vendors:

  • IKS
  • OCP
  • AWS
  • Azure