impl for couchdb support in test-network-k8s as hardcode #651
Conversation
Signed-off-by: Sam Yuan <yy19902439@126.com>
@SamYuan1990 thanks for adding the new state db. This is great!
Can you please re-arrange the yaml to launch couchdb as a sidecar container within the peer Deployment/Pod? This will keep all of the couch<->peer traffic local within the pod. If there is a need to access a couchdb from another pod in the cluster, it can be added as an additional port to the existing Peer Service resources.
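A minimal sketch of what such an extra port on the existing peer Service could look like, assuming a placeholder Service name and selector (org1-peer1 and the port names here are illustrative, not taken from this PR):

```yaml
# Hypothetical excerpt: expose couchdb on the existing peer Service,
# instead of creating a separate orgXpeerYcouchdb Service.
apiVersion: v1
kind: Service
metadata:
  name: org1-peer1              # placeholder name, not from this PR
spec:
  selector:
    app: org1-peer1             # placeholder selector
  ports:
    - name: peer                # existing peer port(s) stay as-is
      port: 7051
      protocol: TCP
    - name: couchdb             # additional port, only if other pods need couchdb
      port: 5984
      protocol: TCP
```

The sidecar itself stays reachable on localhost from the peer; the extra Service port only matters if another pod in the cluster needs to reach couchdb.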
Suggested changes:
- Remove the orgXpeerYcouchdb Services and new yaml files.
- Add the couchdb containers into the peer Deployment spec as sidecars (a sketch of this shape follows below).
- Change the peer's couchdb connection URL to: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS: localhost:5984
- Include major.minor.patch in the couch label, e.g. couchdb:3.2.1
- Remove the create_couchdb function (the containers will be started by the peer Deployments).
- Increase the connection retry at startup in the peer env: CORE_LEDGER_STATE_COUCHDBCONFIG_MAXRETRIESONSTARTUP: 20
This is a great feature. Thanks Sam!
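A minimal sketch of the suggested Deployment shape. The names, image tags, and credentials below are placeholders, not taken from this PR; only the two CORE_LEDGER_STATE_COUCHDBCONFIG_* settings and the couchdb:3.2.1 label come from the suggestions above:

```yaml
# Hypothetical excerpt: couchdb runs as a sidecar container in the peer Pod,
# so peer<->couch traffic stays on localhost.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: org1-peer1                           # placeholder name, not from this PR
spec:
  replicas: 1
  selector:
    matchLabels:
      app: org1-peer1
  template:
    metadata:
      labels:
        app: org1-peer1
    spec:
      containers:
        # main peer container
        - name: main
          image: hyperledger/fabric-peer:2.4     # placeholder tag
          env:
            - name: CORE_LEDGER_STATE_STATEDATABASE
              value: CouchDB
            - name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
              value: localhost:5984
            - name: CORE_LEDGER_STATE_COUCHDBCONFIG_MAXRETRIESONSTARTUP
              value: "20"
            - name: CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME
              value: admin                       # placeholder credentials
            - name: CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
              value: adminpw
        # couchdb sidecar in the same Pod
        - name: couchdb
          image: couchdb:3.2.1                   # pinned major.minor.patch label
          env:
            - name: COUCHDB_USER                 # placeholder credentials
              value: admin
            - name: COUCHDB_PASSWORD
              value: adminpw
          ports:
            - containerPort: 5984
```

With couchdb on localhost:5984 inside the same Pod, no separate orgXpeerYcouchdb Service is required.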
@jkneubuh, the reason that stopped me from making couchdb a sidecar for
@jkneubuh, I don't know how to ensure k8s starts
@jkneubuh, do you mean container lifecycle as below? I will give it a try and keep you updated.
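Presumably this refers to Kubernetes container lifecycle hooks; a minimal illustrative sketch only (the Pod name and hook commands are placeholders, and this snippet is not from the PR):

```yaml
# Hypothetical illustration of container lifecycle hooks
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo                     # placeholder name, not from this PR
spec:
  containers:
    - name: couchdb
      image: couchdb:3.2.1
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo couchdb started"]
        preStop:
          exec:
            command: ["/bin/sh", "-c", "echo couchdb stopping"]
```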
@jkneubuh, I tried https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/753-sidecar-containers#implementation-history and referred to https://github.com/kubernetes/kubernetes/blob/aca84a38ccae51de5222ca99d8d7977cfc98b71f/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/testdata/swagger.json as a summary,
Signed-off-by: Sam Yuan <yy19902439@126.com>
Thanks @SamYuan1990!
Unfortunately there is no good way in k8s to control the sequence of containers coming up within a single Pod. The solution in this case is to try the "eventually it will come up" approach, retrying the connection within the peer.
I ran a test on this branch and it looks OK. The first time I ran ./network up, it took a little bit of time for the couchdb image to be pulled from Docker Hub. In this case, the peer container started about 15 seconds before the couchdb container was ready.
When the peer launches, it will repeatedly test for couchdb in an exponential backoff loop based on the CORE_LEDGER_STATE_COUCHDBCONFIG_MAXRETRIESONSTARTUP parameter. At startup time, I observed the peer retrying a few times while the image was pulled into KIND from Docker Hub, after which both containers started successfully. In additional tests of the network (down/up, down/up, ...), couchdb comes up right away and there is no delay in the peer at launch time.
Looks good - thanks for making the updates.
An implementation for #633
Signed-off-by: Sam Yuan yy19902439@126.com