
impl for couchdb support in test-network-k8s as hardcode #651

Merged

2 commits merged on Feb 21, 2022

Conversation

SamYuan1990
Contributor

An impl for #633.

Signed-off-by: Sam Yuan <yy19902439@126.com>
@SamYuan1990 requested a review from a team as a code owner · February 19, 2022 12:10
Contributor

@jkneubuh left a comment


@SamYuan1990 thanks for adding the new state db. This is great!

Can you please re-arrange the yaml to launch the couchdb as a sidecar container within the peer Deployment/Pod? This will keep all of the couch<->peer traffic local within the pod. If there is a need to access a couchdb from another pod in the cluster, it can be added as an additional port to the existing Peer Service resources.

  • Remove the orgXpeerYcouchdb Services and the new yaml files
  • Add the couchdb containers into the peer Deployment spec as sidecars
  • Change the peer's couchdb connection URL to:
      CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS:       localhost:5984
  • Include major.minor.patch in the couchdb image tag, e.g. couchdb:3.2.1
  • Remove the create_couchdb function (the containers will be started by the peer Deployments)
  • Increase the connection retries at startup in the peer env:
      CORE_LEDGER_STATE_COUCHDBCONFIG_MAXRETRIESONSTARTUP:  20
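
A minimal sketch of the peer Deployment these suggestions add up to (resource names, labels, and image tags such as org1-peer1 and hyperledger/fabric-peer:2.4 are illustrative assumptions, not the PR's actual manifests):

    # Hypothetical sketch only: resource names, labels, and image tags are
    # illustrative assumptions, not the PR's actual manifests.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: org1-peer1
    spec:
      selector:
        matchLabels:
          app: org1-peer1
      template:
        metadata:
          labels:
            app: org1-peer1
        spec:
          containers:
            - name: main
              image: hyperledger/fabric-peer:2.4
              env:
                - name: CORE_LEDGER_STATE_STATEDATABASE
                  value: CouchDB
                # couchdb runs as a sidecar in the same pod, so the peer
                # reaches it over the pod's loopback interface
                - name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
                  value: localhost:5984
                # keep retrying the connection while the sidecar image is
                # still being pulled and started
                - name: CORE_LEDGER_STATE_COUCHDBCONFIG_MAXRETRIESONSTARTUP
                  value: "20"
            - name: couchdb
              image: couchdb:3.2.1   # pinned to major.minor.patch as suggested
              ports:
                - containerPort: 5984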

This is a great feature. Thanks Sam!

@SamYuan1990
Contributor Author


@jkneubuh, the reason that stopped me from making couchdb a sidecar on localhost:5984 in the current PR is this: when the peer and couchdb are in the same Deployment,

  • will k8s start the peer container first, or couchdb first?

  • if the peer starts before couchdb, will the peer container fail?

@SamYuan1990
Copy link
Contributor Author

SamYuan1990 commented Feb 19, 2022

@jkneubuh, I don't know how to ensure k8s starts couchdb before the peer container the way we declare the dependency in Compose, on this line:
https://github.com/hyperledger/fabric-samples/blob/main/test-network/compose/compose-couch.yaml#L40
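
For reference, the Compose dependency being pointed at looks roughly like this (service names here are illustrative):

    # Sketch of the Compose-style dependency referenced above; service names
    # are illustrative. Kubernetes has no equivalent of depends_on for
    # containers inside one pod.
    services:
      couchdb0:
        image: couchdb:3.2.1
      peer0.org1.example.com:
        image: hyperledger/fabric-peer:2.4
        depends_on:
          - couchdb0   # Compose starts couchdb0 before the peer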

@SamYuan1990
Copy link
Contributor Author

SamYuan1990 commented Feb 19, 2022

@jkneubuh, do you mean the container lifecycle field below? I will give it a try and keep you updated.

lifecycle:
  type: Sidecar
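
For context, this lifecycle.type: Sidecar field comes from the KEP-753 sidecar-containers proposal and was never merged into the Kubernetes API; in the proposal it would have sat on the container spec roughly like this (a sketch of the proposed design, not a working configuration):

    # The lifecycle.type field was proposed in KEP-753 but never shipped in
    # any Kubernetes release, so a real API server will reject this spec.
    containers:
      - name: couchdb
        image: couchdb:3.2.1
        lifecycle:
          type: Sidecar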

@SamYuan1990
Copy link
Contributor Author

SamYuan1990 commented Feb 20, 2022

@jkneubuh, I tried https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/753-sidecar-containers#implementation-history and also referred to https://github.com/kubernetes/kubernetes/blob/aca84a38ccae51de5222ca99d8d7977cfc98b71f/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/testdata/swagger.json
It seems we currently cannot ensure couchdb starts before the peer with the k8s API. Please advise.

As a summary:
I agree with localhost:5984 as a sidecar, to keep the traffic within the pod-level network,
but since there is no k8s-level sidecar support or container startup-dependency support, I am worried about what happens if couchdb starts after the peer.

Signed-off-by: Sam Yuan <yy19902439@126.com>
@SamYuan1990 requested a review from jkneubuh · February 20, 2022 05:34
Contributor

@jkneubuh left a comment


Thanks @SamYuan1990 !

Unfortunately there is no good way in k8s to control the sequence of containers coming up within a single Pod. The solution in this case is to try the "eventually it will come up" approach, retrying the connection within the peer.

I ran a test on this branch and it looks OK. The first time I ran ./network up, it took a little bit of time for the couchdb image to be pulled from docker hub. In this case, the peer container started about 15 seconds before the couchdb container was ready.

When the peer launches it will repeatedly test for the couchdb connection in an exponential backoff loop, bounded by the CORE_LEDGER_STATE_COUCHDBCONFIG_MAXRETRIESONSTARTUP parameter. At startup time, I observed the peer retrying in a loop a few times while the image was pulled into KIND from Docker Hub, after which both containers started successfully. In additional tests of the network (down/up, down/up, ...), couchdb comes up right away and there is no delay in the peer at launch time.
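
For reference, the two peer settings driving this behavior, as they might appear in the peer container's env (a sketch mirroring the values suggested earlier in the thread):

    env:
      # the couchdb sidecar is reachable on the pod's loopback interface
      - name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
        value: localhost:5984
      # the peer retries the couchdb connection with exponential backoff up
      # to this many times at startup before giving up
      - name: CORE_LEDGER_STATE_COUCHDBCONFIG_MAXRETRIESONSTARTUP
        value: "20"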

Looks good - thanks for making the updates.
