Bundle a default k8s Chaincode Builder into the peer #3405
Can you elaborate? Why is this needed?
Why? The peer only cares about where the chaincode is running. Just deploy the chaincode, make sure the packets find their way to it, and that the peer can authenticate the chaincode's TLS certificate. What's the difficulty?
Why would the Fabric peer need to know details about where or how the chaincode is deployed?
@jkneubuh @jt-nti, some thoughts that spring to mind...
@yacovm just to your point of 'why and what's the difficulty': you are correct, in theory the peer really only needs to know the address where it can talk to the chaincode. How packets get from A <-> B is really not an issue and has been solved. In practice, though, the peer takes an active interest in the details of standing up chaincodes. The as-a-server approach helps to break this, but not entirely. From talking to @jt-nti there are updates to the builder lifecycle that would help, and would remove from the peer the knowledge of the chaincode deployment. The question, though, is: what elements of a chaincode define its identity? To one organization's peers, how is 'equality' defined? And between organizations, how is a chaincode defined? PackageID is only meaningful within one organization... so on a channel I think the identification consists of a string name, a string version, and a monotonically increasing sequence.
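To make that identity split concrete, here is a hedged sketch of the approval step (channel, chaincode name, and package ID below are placeholders): the channel-level definition is keyed by name, version, and sequence, while the package ID only binds that definition to a package known to the approving organization's own peers.

```bash
# Sketch only; channel name, chaincode name, and package ID are placeholders.
# The (name, version, sequence) triple identifies the chaincode on the channel;
# --package-id is an org-local identifier that other orgs never need to match.
peer lifecycle chaincode approveformyorg \
  --channelID mychannel \
  --name asset-contract \
  --version 1.0 \
  --sequence 1 \
  --package-id asset-contract:<org-local-package-hash>
```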
On a pragmatic point @jkneubuh, when adding the makefiles: the first iteration was very wrong; the second is still in PR, but hopefully it is the 'correct way' to fit in with the overall makefile. The pipeline for releasing Fabric binaries is a little 'odd' in that, on the release build, what is built and tested from this repo is not what is published. Fabric-test rebuilds this repo and publishes it directly. Not ideal. I do wonder if it's worth reviewing the pipelines; many projects produce flavours of the docker images.
There are some issues on that repo for hatch battening, and I'll be adding more, if anyone is interested in helping with that effort.
I've created an issue to keep track of this one hyperledger-labs/fabric-builder-k8s#19
This is handled with the
But why would the peer take part in the chaincode deployment in the chaincode-as-a-service model? The chaincode needs to be deployed and managed externally by the organization.
Across organizations, all we need is a way to mark which namespaces the transaction refers to. That's it.
Hi guys, from my point of view:
Questions
Comments
Thanks and regards
Hi @SamYuan1990
I would really like to get away from the peer being responsible for building chaincode. It's actually already possible (and recommended) to provide a prebuilt jar in a Java chaincode package. The prototype k8s builder runs a pre-built/pre-published Docker image, so that these steps can be handled in a traditional CI/CD pipeline where they should be. There's an example of that here -> https://github.com/hyperledgendary/conga-nft-contract
The difference is that the CCaaS builder just tells the peer where there is some chaincode running so that it can connect to it, and you are left to handle starting the chaincode process. (This is the reverse of the traditional approach, where the chaincode connects to the peer.) On the other hand, the new k8s builder allows the peer to manage the lifecycle of the chaincode seamlessly using Kubernetes, similar to what happens when using the existing Go, Java and Node.js chaincode packages (although those don't use the builder framework).

The advantages of the CCaaS builder are that it does not specify how you run the chaincode: you can use Kubernetes, but you don't have to. It's also really nice when developing chaincode, because you can start it, update it, debug it, etc. all without having to go round the chaincode lifecycle loop again. The disadvantage is that you have to manage running the chaincode process yourself, which is more complicated than it really needs to be; plus the chaincode package says nothing about the actual chaincode implementation, only where it's running.

The potential advantage of a new k8s builder is that deploying chaincode in a Kubernetes environment becomes as straightforward as deploying traditional Go, Java, and Node.js packages, with the added advantage that the deploy is more reliable, since the build step has been done once up front.
Neither the CCaaS builder nor the k8s builder requires DIND.
There is already a builder to deploy chaincode as a service, which can run on k8s but does not have to.
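For reference, the prototype k8s builder describes the chaincode to run with a small JSON file inside the chaincode package, rather than source code. A minimal sketch, where the digest is a placeholder and the exact field names follow the prototype and may change:

```json
{
  "name": "ghcr.io/hyperledgendary/conga-nft-contract",
  "digest": "sha256:<image-digest>"
}
```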
Hi @yacovm I understand the reluctance to pollute the Fabric core with routines tied to any particular runtime or container orchestration engine (e.g. K8s, Docker Compose, Swarm, Mesos, VMware, or whatever the current tech trend of the day happens to be...). In theory, the external builders provide a solid mechanism for Fabric administrators to configure a network with binaries specifically tailored to the target environment. In practice we have observed:
The simple task of copying the k8s external builder into the scope and file system of container-based peers is ... deceptively complicated. Other than pre-bundling a default external builder into the peer image at Docker build time, is there another technique that would provide a more modular but convenient approach? The goal here is to make it "just work" for the 99% use case, and to provide escape paths for sophisticated deployments requiring advanced configuration. I.e., a compromise.
Hi @SamYuan1990 - builder comparison "at a glance":

| | CCaaS builder | k8s builder |
| --- | --- | --- |
| Chaincode build | external (prebuilt, e.g. in CI/CD) | external (prebuilt container image) |
| Chaincode launch | managed by the organization, anywhere the peer can reach | managed by the builder as Kubernetes pods |
| Package contents | service address only | container image reference |
| DIND required | no | no |
Why does the chaincode shim need to know its package ID?
So you want to bundle, inside the peer's file system, something that can interact with the peer and ask it about lifecycle? Is that what you need the mutual TLS for? To connect to the peer and retrieve the information about lifecycle, or something like that?
May I know where the image will be stored?
I like this
According to https://github.com/hyperledgendary/fabric-builder-k8s/blob/main/docs/TEST_NETWORK_K8S.md#kubernetes-permissions
The example is using the GitHub container registry, but the images could be published anywhere the peer can connect to, or even loaded into a local registry.
The builder won't need all those permissions - probably just permission to create a pod and a secret, but I'm not sure yet. The intention will be to require the minimum permissions necessary. The namespace is configurable as well, which will hopefully help anyone who still has concerns.
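As a rough illustration of the kind of minimal, namespaced permissions being discussed, a Kubernetes Role might look something like this; the namespace, resources, and verbs are assumptions, not the builder's settled requirements:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: fabric-builder-k8s
  namespace: fabric-chaincode   # hypothetical, configurable namespace
rules:
  - apiGroups: [""]             # core API group: pods and secrets only
    resources: ["pods", "secrets"]
    verbs: ["get", "list", "watch", "create", "delete"]
```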
Wait...
Does it make sense for an operator to hold the additional k8s permissions, so that users can choose whether or not to use it and so avoid permission issues? And could we make the "ccaas" builder integrate with the operator, and make the new builder the default one?
Which means we extend the Fabric peer's permissions with the "k8s" builder, and have to make it able to access the local registry with secrets, right?
Hi @SamYuan1990:
Let me summarize. What if, at the peer side:
To be user friendly, a k8s operator:
Hi all - I would like to encourage the continued discussion on this topic. In this light, GitHub is not serving us well as a means to organize our efforts... May I suggest that we transition the discussion over to Discord #fabric-kubernetes for interactive threads, and retain this ticket to summarize the action items necessary for inclusion / distribution of the k8s-builder with Fabric?
It's hard for me to access Discord from Beijing... just let me know the key points on GitHub, and I will respond.
I was inquiring about whether this approach makes sense, and you're saying that we should dedicate this discussion to action items, under the assumption that what is described here has already been decided to be applied to the Fabric core.
@yacovm I am open to altering the design and/or mechanics. I am suggesting: a) We migrate the interactive discussion towards a different context (e.g. GitHub Discussions, Discord, etc.) to iterate and converge on an approach for:
b) Retain this ticket to track work items related to #2 above. @SamYuan1990, is a GitHub discussion more suitable for your participation in the conversation thread?
@jkneubuh, yes, a GitHub discussion is more suitable for me... as I am located in China and the GFW is blocking my access to Discord.
All: thanks for the lively debate / discussion. Can we shift over to #3407 to focus on the general discussion of running external builders for k8s?
Per #3407 - summary: closing this ticket, as there is no agreement to bundle a k8s builder into the peer container.
As of Fabric > 2.4.1, the peer image contains the default ccaas external chaincode builder to simplify networks relying on the Chaincode as a Service pattern. This builder allows the user to bypass the normal chaincode build and launch steps: the user starts an external chaincode process at a known service URL, and the peer delivers chaincode invocations to it.
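For context, a CCaaS chaincode package carries no code at all, only the connection details for the externally running service. A minimal connection.json looks roughly like this; the address is a placeholder, and the client TLS fields are omitted for brevity:

```json
{
  "address": "asset-contract.example.com:9999",
  "dial_timeout": "10s",
  "tls_required": false
}
```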
The "as a service" deployment provides full flexibility to the administrators on how, where, and when the chaincode systems will be launched. In interactive development, such as in a local debugging context, the flexibility is invaluable. But in post-development workflows, however, the added flexibility becomes a real challenge for Fabric administration, as the service lifecycle is now intertwined with the (already) complicated chaincode lifecycle managed by the peer, channel, and consortium.
On the 5/11 Fabric Community Contributor call, @jt-nti presented a new technique for managing chaincode deployments to greatly simplify the overall process of managing chaincode in cloud native environments.
A New Course:
Chaincode compilation is performed outside of Fabric. (e.g. local builds, CI pipelines, public repos, etc.)
An external fabric-builder-k8s is responsible for receiving and responding to lifecycle events from the peer.
fabric-builder-k8s is responsible for managing the lifecycle of chaincode pods running in Kubernetes.
Using this hybrid approach, chaincode developers can build / test / edit routines locally, publish to a container registry, and rely on the natural chaincode lifecycle to install smart contracts on a channel. In tight build/edit/test iterations, development can occur in a debugger using CCaaS bound to a port on the host system.
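Sketched end to end, and assuming the package layout used by the fabric-builder-k8s prototype (a metadata.json plus a code.tar.gz holding the image reference; the image name, label, and type string below are illustrative), the workflow might look like:

```bash
# Build and publish the chaincode image in CI, outside of Fabric.
docker build -t ghcr.io/example/asset-contract:1.0 .
docker push ghcr.io/example/asset-contract:1.0

# Package an image reference instead of source code.
cat > image.json <<'EOF'
{"name": "ghcr.io/example/asset-contract", "digest": "sha256:<image-digest>"}
EOF
tar -czf code.tar.gz image.json

cat > metadata.json <<'EOF'
{"type": "k8s", "label": "asset-contract"}
EOF
tar -czf asset-contract.tgz metadata.json code.tar.gz

# From here, the normal chaincode lifecycle applies.
peer lifecycle chaincode install asset-contract.tgz
```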
On the Horizon
Compile-time feedback: Trap CC build/compilation errors at build time, not at run/deployment time.
No requirement for DIND, docker, or chaincode builds in the network: You build it - Fabric will run it.
Instant chaincode: Chaincode launch times measured in seconds, not minutes.
Goodbye, Docker! No more DIND, root privilege escalation, mobyd, etc.
Dude, I just want to write some chaincode...
Compass Bearing
While working with external builders is possible in Fabric, it's still a tremendous challenge to actually install external builders in cloud-native environments.
Address this by:
Shore up / battle harden / batten down the hatches / etc. ... the compass bearing set by fabric-builder-k8s
Add support for `imagePullSecret` and `imagePullPolicy` attributes in the cc package json / metadata.
Include the `image:label` style syntax to reference containers in the cc package json / metadata.
Identify a technique to extend mTLS by default (or as a possibility) in the cc package json / metadata.
Build and distribute a golang-based default fabric-builder-k8s, adjacent to the ccaas_builder on the peer Docker image and in core.yaml (see the core.yaml sketch after this list).
Document the overall approach, including a section or guide on the public docs site.
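For the bundling item above: the peer image already declares the ccaas builder in its sample core.yaml, and a bundled k8s builder could sit alongside it in the same externalBuilders list. A sketch; the k8s_builder entry and its path are hypothetical:

```yaml
chaincode:
  externalBuilders:
    - name: ccaas_builder
      path: /opt/hyperledger/ccaas_builder
      propagateEnvironment:
        - CHAINCODE_AS_A_SERVICE_BUILDER_CONFIG
    - name: k8s_builder                 # hypothetical bundled default builder
      path: /opt/hyperledger/k8s_builder
```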
Charts and Maps
fabric-builder-k8s: functional prototype - works with the kube test network and nano test network
Kube test network chaincode.sh: "Externally launching k8s resources"
Fabric Community Contributor Meeting - 5/11: (Chaincode / k8s discussions start ~ 00:11:00)
Debugging Smart Contracts with Hyperledger Fabric on Kubernetes
Ahoy!