
handle static port policy #3375

Merged
merged 13 commits into googleforgames:main on Nov 15, 2023

Conversation

ashutosji
Contributor

What type of PR is this?

Uncomment only one /kind <> line, press enter to put that in a new line, and remove leading whitespace from that line:

/kind breaking
/kind bug
/kind cleanup
/kind documentation
/kind feature
/kind hotfix
/kind release

What this PR does / Why we need it: This PR fixes handling of the Static portPolicy.

Which issue(s) this PR fixes: #2314

Closes #2314
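For context, a rough sketch (based on issue #2314, not taken from this PR) of the kind of port configuration the fix targets: a fixed Static host port that should be reachable over both TCP and UDP.

// Hypothetical example of the affected configuration: a Static port policy
// combined with the TCPUDP protocol. The name and port numbers are illustrative.
ports := []agonesv1.GameServerPort{{
	Name:          "gameport",
	PortPolicy:    agonesv1.Static,
	Protocol:      agonesv1.ProtocolTCPUDP,
	ContainerPort: 7654,
	HostPort:      7654,
}}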

Special notes for your reviewer:

@agones-bot
Collaborator

Build Failed 😱

Build Id: 210398ff-74c9-4c46-bd26-14cb76628cb3

To get permission to view the Cloud Build view, join the agones-discuss Google Group.

@agones-bot
Collaborator

Build Failed 😱

Build Id: 51c1b3db-d307-43c1-b22f-0b7883815d62

To get permission to view the Cloud Build view, join the agones-discuss Google Group.

@agones-bot
Collaborator

Build Succeeded 👏

Build Id: 2467e168-0b47-4a41-bd48-428c9819bcee

The following development artifacts have been built, and will exist for the next 30 days:

A preview of the website (the last 30 builds are retained):

To install this version:

  • git fetch https://github.com/googleforgames/agones.git pull/3375/head:pr_3375 && git checkout pr_3375
  • helm install agones ./install/helm/agones --namespace agones-system --set agones.image.registry=us-docker.pkg.dev/agones-images/ci --set agones.image.tag=1.35.0-dev-2a401f2-amd64

@ashutosji ashutosji marked this pull request as ready for review September 19, 2023 14:19
Member

@markmandel markmandel left a comment


This looks like a good start!

Next step, let's create an e2e test for it! You can likely copy some existing examples for a lot of it - let me know if you would like me to point to some.

@ashutosji
Contributor Author

This looks like a good start!

Next step, let's create an e2e test for it! You can likely copy some existing examples for a lot of it - let me know if you would like me to point to some.

Sure! I will create an e2e test for it. Correct me if I am wrong, but I think you are talking about these existing examples: https://github.com/googleforgames/agones/blob/main/test/e2e/framework/framework.go#L523. Am I on the right track?

@markmandel
Member

Yes, but I'm talking about using something like:

func TestGameServerTcpUdpProtocol(t *testing.T) {

And doing a version with a Static port.

One thing to note - you will probably have to edit your firewall configuration to allow TCP connections in order for the test to pass for you locally (by default it is off).
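For illustration, here is a minimal sketch of what such a Static-port variant could look like, assuming the existing e2e helpers (CreateGameServerAndWaitUntilReady, framework.Namespace, framework.GameServerImage) and the imports already used by the e2e test package; it is a sketch, not the test as finally added in this PR:

func TestGameServerStaticTcpUdpProtocol(t *testing.T) {
	t.Parallel()

	// A GameServer with a single Static TCPUDP port; the expectation is that it
	// surfaces as two status ports (TCP and UDP) on the same fixed host port.
	gs := &agonesv1.GameServer{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "static-tcpudp-", Namespace: framework.Namespace},
		Spec: agonesv1.GameServerSpec{
			Ports: []agonesv1.GameServerPort{{
				Name:          "gameport",
				PortPolicy:    agonesv1.Static,
				Protocol:      agonesv1.ProtocolTCPUDP,
				ContainerPort: 7654,
				HostPort:      7654,
			}},
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "simple-game-server",
						Image: framework.GameServerImage,
					}},
				},
			},
		},
	}

	readyGs, err := framework.CreateGameServerAndWaitUntilReady(t, framework.Namespace, gs)
	require.NoError(t, err)

	// Both expanded ports should keep the statically assigned host port.
	require.Len(t, readyGs.Status.Ports, 2)
	assert.Equal(t, readyGs.Status.Ports[0].Port, readyGs.Status.Ports[1].Port)
}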

@agones-bot
Collaborator

Build Failed 😱

Build Id: 3ce028cc-782b-44bb-a384-5a1b14786ab8

To get permission to view the Cloud Build view, join the agones-discuss Google Group.

@agones-bot
Collaborator

Build Failed 😱

Build Id: 37144c2b-5d74-4d97-a6f1-67ad36fc6323

To get permission to view the Cloud Build view, join the agones-discuss Google Group.

@agones-bot
Collaborator

Build Failed 😱

Build Id: ad612f98-c0c1-4082-bff3-000f5cba3f11

To get permission to view the Cloud Build view, join the agones-discuss Google Group.

Comment on lines 925 to 927
if err != nil {
	assert.FailNow(t, "Could not get a GameServer ready", err.Error())
}
Member


Nit:

Suggested change
if err != nil {
	assert.FailNow(t, "Could not get a GameServer ready", err.Error())
}
require.NoError(t, err)

Same for below.

Member


Bumping this final nit (and below), then this is good to merge.

Member


I think I accidentally resolved this yesterday when I didn't mean to.

If we can convert all the

	if err != nil {
		assert.FailNow(t, ...)
	}

statements into require.NoError(t, err), that would be the last thing needed before this is good to merge.

@markmandel
Member

Bringing offline conversation back to PR.

I see why you are getting the error message "ready GameServer instance has 1 port(s), want 2" on this check here:

expectedPortCount := len(gs.Spec.Ports)
if expectedPortCount > 0 {
	for _, port := range gs.Spec.Ports {
		if port.Protocol == agonesv1.ProtocolTCPUDP {
			expectedPortCount++
		}
	}
}
if len(readyGs.Status.Ports) != expectedPortCount {
	return nil, fmt.Errorf("ready GameServer instance has %d port(s), want %d", len(readyGs.Status.Ports), expectedPortCount)
}

For the Dynamic Port Allocation strategy, it actually edits the backing GameServer's ports, which I had forgotten it actually did!

if p.Protocol == agonesv1.ProtocolTCPUDP {
	var duplicate = p
	duplicate.HostPort = a.port
	if duplicate.PortPolicy == agonesv1.Passthrough {
		duplicate.ContainerPort = a.port
	}
	extraPorts = append(extraPorts, duplicate)
	gs.Spec.Ports[i].Name = p.Name + "-tcp"
	gs.Spec.Ports[i].Protocol = corev1.ProtocolTCP
}

Whereas what we are doing with this implementation is returning the appropriate Pod declaration.

So we have two options:

  1. Rather than editing GameServer.Pod(), we instead update the GameServer record itself in the gameserver/controller, to match the type of change we make in the PortAllocator for Dynamic and Passthrough port configurations.

The appropriate place does seem to be here:

func (c *Controller) syncGameServerCreatingState(ctx context.Context, gs *agonesv1.GameServer) (*agonesv1.GameServer, error) {
	if !(gs.Status.State == agonesv1.GameServerStateCreating && gs.ObjectMeta.DeletionTimestamp.IsZero()) {
		return gs, nil
	}
	if _, isDev := gs.GetDevAddress(); isDev {
		return gs, nil
	}
	loggerForGameServer(gs, c.baseLogger).Debug("Syncing Create State")
	// Maybe something went wrong, and the pod was created, but the state was never moved to Starting, so let's check
	_, err := c.gameServerPod(gs)

That is, before creating the Pod; then, when the controller updates the GameServer, those new port details will be persisted along with it.

  2. Edit CreateGameServerAndWaitUntilReady so that if the port has a Static policy, we don't increment expectedPortCount, effectively ignoring the check for this particular use case.

I'm leaning towards No. 1, since that brings this implementation in line with what the PortAllocator does, and lowers the number of testing paths we need to manage (and special cases in the tests) - but what are your thoughts?
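For illustration, a minimal sketch of what option 1 could look like, assuming the split happens in the controller just before the Pod is created; the names and details here are assumptions rather than the final code of this PR:

// Sketch only: mirror the PortAllocator's TCPUDP handling for Static ports, so the
// single TCPUDP entry on the GameServer spec becomes separate TCP and UDP ports.
for i, p := range gs.Spec.Ports {
	if p.PortPolicy == agonesv1.Static && p.Protocol == agonesv1.ProtocolTCPUDP {
		duplicate := p
		duplicate.Name = p.Name + "-udp"
		duplicate.Protocol = corev1.ProtocolUDP
		gs.Spec.Ports[i].Name = p.Name + "-tcp"
		gs.Spec.Ports[i].Protocol = corev1.ProtocolTCP
		// Appending while ranging is safe: the range iterates the original slice header.
		gs.Spec.Ports = append(gs.Spec.Ports, duplicate)
	}
}
// ... then continue into createGameServerPod, so the updated ports flow into the Pod
// and are persisted when the GameServer is updated.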

@ashutosji
Contributor Author

ashutosji commented Oct 3, 2023

I'm leaning towards No. 1, since that brings this implementation in line with what the PortAllocator does, and lowers the number of testing paths we need to manage (and special cases in the tests) - but what are your thoughts?

Yes, you are right! Rather than writing and handling extra test cases, we should proceed with option 1.

if gs.Spec.Ports[0].PortPolicy == agonesv1.Static && gs.Spec.Ports[0].Protocol == agonesv1.ProtocolTCPUDP {
	gs.Spec.Ports = []agonesv1.GameServerPort{
		{
			Name:          "tcp-port",
			ContainerPort: gs.Spec.Ports[0].ContainerPort,
			HostPort:      gs.Spec.Ports[0].HostPort,
			Protocol:      corev1.ProtocolTCP,
		},
		{
			Name:          "udp-port",
			ContainerPort: gs.Spec.Ports[0].ContainerPort,
			HostPort:      gs.Spec.Ports[0].HostPort,
			Protocol:      corev1.ProtocolUDP,
		},
	}
}
logrus.WithField("After", gs).Debug("Made Static PortPolicy changes")
gs, err = c.createGameServerPod(ctx, gs)

However, when I tried adding the above piece of code just before creating the pod with gs, err = c.createGameServerPod(ctx, gs), it somehow throws the error Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference). The error appears after the logrus.WithField("After", gs).Debug("Made Static PortPolicy changes") log line, inside c.createGameServerPod(ctx, gs). I have not pushed the code because I am still working on it.

@markmandel
Member

but somehow it's throwing error Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

Hard to debug without seeing the code 😃 so if you get really stuck, please upload the full log and the code.

@markmandel
Member

Bumping this comment: https://github.com/googleforgames/agones/pull/3375/files#r1379210777 as the only thing still required to get this PR completed.

@agones-bot
Collaborator

Build Succeeded 👏

Build Id: 5c43af7d-29db-46a8-b771-197bef652695

The following development artifacts have been built, and will exist for the next 30 days:

A preview of the website (the last 30 builds are retained):

To install this version:

  • git fetch https://github.com/googleforgames/agones.git pull/3375/head:pr_3375 && git checkout pr_3375
  • helm install agones ./install/helm/agones --namespace agones-system --set agones.image.registry=us-docker.pkg.dev/agones-images/ci --set agones.image.tag=1.37.0-dev-7370aeb-amd64

@agones-bot
Collaborator

Build Failed 😱

Build Id: c78e862a-38e2-4f3b-8fb2-e4ebc1d13548

To get permission to view the Cloud Build view, join the agones-discuss Google Group.

@ashutosji
Contributor Author

Bumping this comment: https://github.com/googleforgames/agones/pull/3375/files#r1379210777 as the only thing still required to get this PR completed.

I made the changes, but somehow the build is failing at step-20. I am not sure why it's failing, because it does not seem related to my code changes. Could you please look into this?

Member

@markmandel markmandel left a comment


Looks good. #3505 should fix the flake I believe, so once that merges, this will go in next.


[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ashutosji, markmandel

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@markmandel markmandel enabled auto-merge (squash) November 14, 2023 20:46
@google-oss-prow google-oss-prow bot removed the lgtm label Nov 14, 2023

New changes are detected. LGTM label has been removed.

@agones-bot
Collaborator

Build Succeeded 👏

Build Id: fdf35c8f-58e3-4939-be0b-7c767fd2c322

The following development artifacts have been built, and will exist for the next 30 days:

A preview of the website (the last 30 builds are retained):

To install this version:

  • git fetch https://github.com/googleforgames/agones.git pull/3375/head:pr_3375 && git checkout pr_3375
  • helm install agones ./install/helm/agones --namespace agones-system --set agones.image.registry=us-docker.pkg.dev/agones-images/ci --set agones.image.tag=1.37.0-dev-eab3df1-amd64

@markmandel markmandel merged commit 9597660 into googleforgames:main Nov 15, 2023
3 checks passed
@Kalaiselvi84 Kalaiselvi84 added kind/bug and removed kind/other labels Dec 12, 2023

Successfully merging this pull request may close these issues.

TCPUDP not supported
4 participants