
Expose 6653 port on each node through NodePort to allow OF Master/Standby connection #201

Merged

Conversation

Contributor

@Andrea-Campanella Andrea-Campanella commented Sep 3, 2020

This patch exposes a NodePort for each replica of ONOS (each deployed in a separate pod) when exposeNodePorts is set to true in values.yaml.
For example, in this case 3 replicas were deployed on 3 nodes, and the resulting NodePort services are:

onos-of-0                  NodePort    10.43.192.158   <none>        6653:31653/TCP                                 19m
onos-of-1                  NodePort    10.43.123.123   <none>        6653:31654/TCP                                 19m
onos-of-2                  NodePort    10.43.132.63    <none>        6653:31655/TCP                                 19m
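Assuming the chart is installed with Helm, the flag can be set at install time and the per-replica services checked afterwards. A sketch, with a placeholder release name and chart path:

```shell
# Placeholder release name and chart path -- adjust to your deployment
helm install onos ./onos --set exposeNodePorts=true

# Each replica should get its own NodePort service, as in the listing above
kubectl get svc | grep onos-of
```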

This allows an aggregation switch to connect to all the ONOS instances at the same time, achieving the MASTER/STANDBY active connections that OpenFlow requires.
The switch needs the IP of one of the nodes and all of the ports configured, e.g.:

OPT_ARGS="-d 2 -c 2 -c 4 -t 10.128.100.70:31653 -t 10.128.100.70:31654 -t 10.128.100.70:31655  -i $DPID"
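For readers testing against a software switch instead, the same multi-controller setup can be expressed with Open vSwitch. A sketch, assuming a bridge named br0 (hypothetical):

```shell
# Point one OVS bridge at all three ONOS NodePorts at once;
# the controllers then negotiate MASTER/STANDBY roles over these connections
ovs-vsctl set-controller br0 \
  tcp:10.128.100.70:31653 \
  tcp:10.128.100.70:31654 \
  tcp:10.128.100.70:31655
```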

A small caveat is that with this patch the number of ONOS instances can't be scaled up automatically.

If your cluster is deployed with kind, a further step is needed to expose those node ports:
a custom config has to be passed when creating the kind cluster.

kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
  - role: control-plane
    extraPortMappings:
    - containerPort: 31653
      hostPort: 31653
    - containerPort: 31654
      hostPort: 31654
    - containerPort: 31655
      hostPort: 31655
  - role: worker
  - role: worker
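With the config above saved to a file, the cluster is created by passing it to kind (the file name is arbitrary):

```shell
kind create cluster --config kind-cluster.yaml
```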

@Andrea-Campanella Andrea-Campanella added the enhancement New feature or request label Sep 3, 2020
@Andrea-Campanella Andrea-Campanella self-assigned this Sep 3, 2020
@Andrea-Campanella
Contributor Author

Thanks @kuujo @charlesmcchan. I'll coordinate with the SEBA/VOLTHA team so as not to break tests, and then merge this, most likely Monday.

{{- end}}

# workaround for . not working, see
# https://github.com/helm/helm/issues/1311
Contributor


It's not really even that . doesn't work. It actually does work as intended. This is just a limitation of variable scopes in Go templates. It does make for some pretty messy workarounds. We probably won't get anything out of tracking the issue...
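For readers who hit the same issue: inside a range block, Go templates rebind . to the current loop element, so values from the root context have to be reached through $. A minimal illustrative sketch (the value names are made up, not this chart's actual keys):

```yaml
{{- range $i := until (int $.Values.replicas) }}
# Here "." is the loop element, so ".Values" would fail;
# "$" still refers to the root context passed to the template.
name: {{ $.Values.name }}-of-{{ $i }}
{{- end }}
```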

Contributor Author


Agreed, but since it took me a little while to understand what was going on, I still think it's worth tracking, both to show people why I did it this way and to help if something similar comes up in the future.

Contributor


Yep, it took me a while to understand how it worked too. It can be pretty frustrating.

@kuujo
Contributor

kuujo commented Sep 4, 2020

Yes, let's be careful. I added the Do Not Merge tag to make sure nobody sees an approved PR and merges it. You can remove it when you're ready.

@Andrea-Campanella
Contributor Author

Thanks @kuujo for the Do Not Merge tag. I'll take it out when we can merge, based on the upgrade of the SEBA/VOLTHA pods.

@Andrea-Campanella Andrea-Campanella merged commit fbaef1e into onosproject:master Sep 7, 2020