
Update maximum number of buckets in OVS group add/insert_bucket message #5942

Merged — 1 commit, Feb 21, 2024

Conversation

@hongliangl (Contributor) commented Jan 30, 2024

The current implementation limits the maximum number of buckets in an OVS group
add/insert_bucket message to 800. This constraint is based on the fact that each
bucket has 3 actions, such as set_field:0xa0a0007->reg0,
set_field:0x50/0xffff->reg4, and resubmit(,EndpointDNAT). However, an update
in #5205 introduced a new action, set_field:0x4000000/0x4000000->reg4, for
remote Endpoints, making it impossible to accommodate 800 buckets with 4 actions
in an OVS group add/insert_bucket message.

To overcome this limitation, we set the maximum number of buckets to 700, based on
the worst-case scenario where every bucket carries all available actions. For example,
a bucket for a remote non-hostNetwork IPv6 Service Endpoint includes all of them:
set_field:0xa0a0007->xxreg0, set_field:0x50/0xffff->reg4,
set_field:0x100000/0x100000->reg4, and resubmit(,EndpointDNAT). Such a bucket is
88 bytes, and the header of an OVS group message is 24 bytes.
According to https://opennetworking.org/wp-content/uploads/2014/10/openflow-switch-v1.5.1.pdf,
the maximum size of an OpenFlow 1.5 message is 65535 bytes; as a result, a message can
hold at most 744 buckets of the largest size, calculated as (65535-24)/88.

Signed-off-by: Hongliang Liu <lhongliang@vmware.com>
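The 700-bucket limit can be sanity-checked with a quick calculation using the sizes quoted above (a sketch only; the constant names are illustrative, not actual Antrea identifiers):

```go
package main

import "fmt"

func main() {
	// Sizes quoted in the description above; names are illustrative.
	const (
		maxMessageSize  = 65535 // max OpenFlow 1.5 message size (uint16 length field)
		groupHeaderSize = 24    // header of an OVS group mod message
		maxBucketSize   = 88    // largest bucket: 4 actions, remote IPv6 Service Endpoint
	)
	maxBuckets := (maxMessageSize - groupHeaderSize) / maxBucketSize
	fmt.Println(maxBuckets) // 744, so a limit of 700 leaves comfortable headroom
}
```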

@antoninbas (Contributor) left a comment:

are we still planning to switch to individual bucket management - see #2092 (comment) ?

pkg/agent/openflow/client_test.go (review thread, resolved)
pkg/agent/openflow/client_test.go (review thread, resolved)
@hongliangl hongliangl force-pushed the 20240130-refine-ovs-bucket branch 2 times, most recently from f88b924 to 55acd24 Compare February 2, 2024 05:05
@hongliangl (Contributor, Author):

are we still planning to switch to individual bucket management - see #2092 (comment) ?

I have read through the discussion at #2092 (comment), but I'm not entirely sure I understand. Do you mean that we allocate bucket IDs for every bucket in a group for individual bucket management, and then we could delete buckets by their bucket IDs? @antoninbas

@antoninbas (Contributor):

are we still planning to switch to individual bucket management - see #2092 (comment) ?

I have read through the discussion at #2092 (comment), but I'm not entirely sure I understand. Do you mean that we allocate bucket IDs for every bucket in a group for individual bucket management, and then we could delete buckets by their bucket IDs? @antoninbas

In OpenFlow 1.5, it is possible to insert or remove specific group buckets without impacting the other group buckets, and we don't have to provide the full list of buckets when mutating the group buckets:

In previous versions of the specification, group operations can only change the entire set of group buckets
in a group. Two new group commands enable the insertion or removal of specific group buckets without
impacting the other group buckets in the group (EXT-350).

  • Group command OFPGC_INSERT_BUCKET to insert individual group buckets into a group.
  • Group command OFPGC_REMOVE_BUCKET to remove specific group buckets from a group.
  • Add bucket_id field to buckets to identify buckets.
  • Add group property to extend groups.
  • Add group bucket properties to extend group buckets.
  • Redefine bucket fields weight, watch_port and watch_group as group bucket properties

This change, combined with the fact that bundle operations can span any number of messages, means (AFAIK) that there is no need for an "arbitrary" limit on the number of buckets that we support.

Do you mean that we allocate bucket IDs for every bucket in a group for individual bucket management ?

Yes

pkg/agent/openflow/pipeline_test.go (review thread, resolved)
pkg/agent/openflow/pipeline_test.go (review thread, resolved)
pkg/agent/openflow/multicast_test.go (review thread, resolved)
pkg/agent/openflow/pipeline_test.go (review thread, resolved)
@hongliangl (Contributor, Author):

are we still planning to switch to individual bucket management - see #2092 (comment) ?

I have read through the discussion at #2092 (comment), but I'm not entirely sure I understand. Do you mean that we allocate bucket IDs for every bucket in a group for individual bucket management, and then we could delete buckets by their bucket IDs? @antoninbas

In OpenFlow 1.5, it is possible to insert or remove specific group buckets without impacting the other group buckets, and we don't have to provide the full list of buckets when mutating the group buckets:

In the current code, we only use OFPGC_INSERT_BUCKET to install an oversized group: we divide the group into multiple messages, one with OFPGC_ADD and the others with OFPGC_INSERT_BUCKET. See

for start := 0; start <= len(g.ofctrl.Buckets); start += MaxBucketsPerMessage {

for more details.
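The splitting described above can be sketched as follows (hypothetical types and names — `Bucket`, `GroupMod`, `splitGroup` stand in for the actual libOpenflow/Antrea identifiers):

```go
package main

import "fmt"

// Bucket and GroupMod are simplified stand-ins, not the real libOpenflow types.
type Bucket struct{ ID uint32 }

type GroupMod struct {
	Command string   // "OFPGC_ADD" or "OFPGC_INSERT_BUCKET"
	Buckets []Bucket // at most maxBucketsPerMessage buckets per message
}

const maxBucketsPerMessage = 700

// splitGroup divides the buckets of an oversized group across multiple
// GroupMod messages: the first creates the group, the rest insert the
// remaining buckets (assumes at least one bucket).
func splitGroup(buckets []Bucket) []GroupMod {
	var msgs []GroupMod
	for start := 0; start < len(buckets); start += maxBucketsPerMessage {
		end := start + maxBucketsPerMessage
		if end > len(buckets) {
			end = len(buckets)
		}
		cmd := "OFPGC_INSERT_BUCKET"
		if start == 0 {
			cmd = "OFPGC_ADD" // the first message creates the group
		}
		msgs = append(msgs, GroupMod{Command: cmd, Buckets: buckets[start:end]})
	}
	return msgs
}

func main() {
	// 1500 buckets need three messages: 700 + 700 + 100.
	for _, m := range splitGroup(make([]Bucket, 1500)) {
		fmt.Println(m.Command, len(m.Buckets))
	}
}
```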

In previous versions of the specification, group operations can only change the entire set of group buckets
in a group. Two new group commands enable the insertion or removal of specific group buckets without
impacting the other group buckets in the group (EXT-350).

  • Group command OFPGC_INSERT_BUCKET to insert individual group buckets into a group.
  • Group command OFPGC_REMOVE_BUCKET to remove specific group buckets from a group.
  • Add bucket_id field to buckets to identify buckets.
  • Add group property to extend groups.
  • Add group bucket properties to extend group buckets.
  • Redefine bucket fields weight, watch_port and watch_group as group bucket properties

This change, combined with the fact that bundle operations can span any number of messages, means (AFAIK) that there is no need for an "arbitrary" limit on the number of buckets that we support.

We addressed the limitation in #4167 by dividing the buckets of an oversized group across multiple group messages. However, it is crucial to cap the number of buckets per group message so that the message size never exceeds the OpenFlow maximum of 65535 bytes.

As for updating an existing group, we don't use OFPGC_INSERT_BUCKET and OFPGC_REMOVE_BUCKET yet. I have made a PR #4564 for updating an existing OVS group. I could reopen it.

@antoninbas (Contributor):

@hongliangl I see, I was actually unaware of that change. Do you know why we went that route (the "hybrid" approach) instead of always managing buckets individually? For example, if all we need to do is remove an Endpoint, it seems that it would make more sense to simply remove the corresponding bucket instead of sending all buckets again.

Comment on lines 92 to 100
ctrl := gomock.NewController(t)
fs := &featureMulticast{
	bridge: binding.NewOFBridge(bridgeName, ""),
}
fakeOfTable := ovsoftest.NewMockTable(ctrl)
MulticastOutputTable.ofTable = fakeOfTable
defer func() {
	MulticastOutputTable.ofTable = nil
}()
Contributor:
you shouldn't be sharing mocks across sub-tests

Contributor Author:

Is the reason that each sub-test runs concurrently and they share the same mock object, and there is a risk of data races and unexpected behavior?

Contributor:

I forgot to reply, sorry.
No, the risk is not so much about concurrency issues, because by default sub-tests are not run in parallel.
The risk is that expectations set in one sub-test leak into the next sub-test(s), because the mock object is shared.
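The leak described above can be illustrated with a tiny hand-written fake that accumulates expectations the way a gomock mock does (all names here are hypothetical, not the actual ovsoftest mocks):

```go
package main

import "fmt"

// expectingFake mimics how a gomock mock carries registered expectations.
type expectingFake struct{ expected []string }

func (f *expectingFake) Expect(call string) { f.expected = append(f.expected, call) }
func (f *expectingFake) Pending() int       { return len(f.expected) }

func main() {
	// Shared fake: an expectation set in "sub-test 1" is still pending
	// when "sub-test 2" starts, so it leaks into that sub-test.
	shared := &expectingFake{}
	shared.Expect("Table.BuildFlow") // set by sub-test 1, never consumed
	fmt.Println(shared.Pending())    // 1: leaked into the next sub-test

	// Fresh fake per sub-test: each case starts with a clean slate.
	fresh := &expectingFake{}
	fmt.Println(fresh.Pending()) // 0
}
```

This is why the mock controller and mocks are usually created inside each t.Run closure rather than once per top-level test.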

pkg/agent/openflow/pipeline_test.go (review thread, resolved)
require.NoError(t, err)
require.Equal(t, 1, len(messages))
groupMod := messages[0].GetMessage().(*openflow15.GroupMod)
require.LessOrEqual(t, getGroupModLen(groupMod), openflowMessageMaxSize, "The size of the GroupMod message exceeds the maximum allowed size")
Contributor:

I think this should be Less, not LessOrEqual.

The OpenFlow header includes the uint16_t length field, which is the total length of the message (including the header). So that means the maximum possible length is 65,535 (64*1024 - 1) and not 65,536.

Contributor Author:

I thought that this should be the final size of the message, since I found the following:

This is the GroupMod struct we use in libOpenflow

type GroupMod struct {
	common.Header
	Command         uint16 /* One of OFPGC_*. */
	Type            uint8  /* One of OFPGT_*. */
	pad             uint8  /* Pad to 64 bits. */
	GroupId         uint32 /* Group identifier. */
	BucketArrayLen  uint16 /* Length of action buckets data. */
	Pad             uint16
	CommandBucketId uint32 /* Bucket Id used as part of
	OFPGC_INSERT_BUCKET and OFPGC_REMOVE_BUCKET commands execution.*/
	Buckets    []Bucket /* List of buckets */
	Properties []util.Message
}

This is the struct of common.Header used by GroupMod above:

// The version specifies the OpenFlow protocol version being
// used. During the current draft phase of the OpenFlow
// Protocol, the most significant bit will be set to indicate an
// experimental version and the lower bits will indicate a
// revision number. The current version is 0x01. The final
// version for a Type 0 switch will be 0x00. The length field
// indicates the total length of the message, so no additional
// framing is used to distinguish one frame from the next.
type Header struct {
	Version uint8
	Type    uint8
	Length  uint16
	Xid     uint32
}

func (h *Header) Len() (n uint16) {
	return 8
}

I used the following code to get the length of GroupMod

func getGroupModLen(g *openflow15.GroupMod) uint32 {
	n := uint32(0)

	n = uint32(g.Header.Len())
	n += 16

	for _, b := range g.Buckets {
		n += uint32(b.Len())
	}

	for _, p := range g.Properties {
		n += uint32(p.Len())
	}
	return n
}

As a result, I thought the return value of the function should be the size of the GroupMod message. Am I missing something?

Contributor:

Everything you are writing seems correct to me, but I am not sure I am seeing your point.

The result of the GroupMod.Len() method will be stored in the Length field of the header. See https://github.com/antrea-io/libOpenflow/blob/f8e71bfb83160a95d2ec6ca53bf55e6f821593e9/openflow15/group.go#L98-L99

func (g *GroupMod) Len() (n uint16) {
	n = g.Header.Len()
	n += 16

	for _, b := range g.Buckets {
		n += b.Len()
	}

	for _, p := range g.Properties {
		n += p.Len()
	}
	return
}

func (g *GroupMod) MarshalBinary() (data []byte, err error) {
	g.Header.Length = g.Len()
        // ...

g.Header.Length is a uint16 field, so its maximum value is 65,535, not 65,536. Originally you had set the constant to 64*1024, which is 65,536, so the test needed to be < and not <=.

Contributor Author:

I appreciate your clarification. I think I got your point. I mistakenly confused the range (0-65535) with the maximum value (65535) for a uint16. The maximum length should be 65535. Thank you for your patience and detailed explanation!

pkg/agent/openflow/pipeline_test.go (review thread, resolved)
pkg/agent/openflow/pipeline_test.go (review thread, resolved)
errorMsg := fmt.Sprintf(`The GroupMod size with %d buckets exceeds the OpenFlow message's maximum
allowable value, please consider setting binding.MaxBucketsPerMessage to a smaller value, like %d, to make the test pass.`,
binding.MaxBucketsPerMessage, binding.MaxBucketsPerMessage-100)
require.Less(t, getGroupModLen(groupMod), openflow15.MSG_MAX_LEN, errorMsg)
Contributor:

you need to switch it back to LessOrEqual now...
Originally you had the constant defined as 64*1024, which is 65,536. You were using LessOrEqual, which was incorrect.
Now you have MSG_MAX_LEN as 0xffff in libOpenflow, and that is 65,535. But you switched from LessOrEqual to Less at the same time, so the check is still incorrect (off by 1, but in the other direction).

Essentially the check needs to be length <= 65,535 or length < 65,536

Hopefully that makes sense

@hongliangl hongliangl force-pushed the 20240130-refine-ovs-bucket branch 4 times, most recently from 5fa90e0 to 9fc4f30 Compare February 7, 2024 02:36
@hongliangl hongliangl changed the title Change maximum number of buckets in OVS group add/insert_bucket message Update maximum number of buckets in OVS group add/insert_bucket message Feb 7, 2024
@antoninbas (Contributor) left a comment:

a couple nits, otherwise LGTM


"antrea.io/antrea/pkg/agent/config"
binding "antrea.io/antrea/pkg/ovs/openflow"
ovsoftest "antrea.io/antrea/pkg/ovs/openflow/testing"
Contributor:

we use openflowtest in most places, and that makes more sense to me

Contributor Author:

Done

require.Equal(t, 1, len(messages))
groupMod := messages[0].GetMessage().(*openflow15.GroupMod)
errorMsg := fmt.Sprintf(`The GroupMod size with %d buckets exceeds the OpenFlow message's maximum
allowable size, please consider setting binding.MaxBucketsPerMessage to a smaller value, like %d, to make the test pass.`,
Contributor:

I don't think the size suggestion is needed in the message, I would remove , like %d, to make the test pass.

Contributor Author:

Done

require.NoError(t, err)
require.Equal(t, 1, len(messages))
groupMod := messages[0].GetMessage().(*openflow15.GroupMod)
errorMsg := fmt.Sprintf(`The GroupMod size with %d buckets exceeds the OpenFlow message's maximum
Contributor:

I don't think we want to add a newline in the message

Contributor Author:

Fixed

@antoninbas (Contributor):

/test-all

@antoninbas antoninbas merged commit 6bef10c into antrea-io:main Feb 21, 2024
49 of 53 checks passed
@antoninbas (Contributor):

@hongliangl does that need to be backported?

@hongliangl hongliangl deleted the 20240130-refine-ovs-bucket branch February 22, 2024 01:31
hongliangl added a commit to hongliangl/antrea that referenced this pull request Feb 22, 2024
hongliangl added a commit to hongliangl/antrea that referenced this pull request Feb 22, 2024
hongliangl added a commit to hongliangl/antrea that referenced this pull request Feb 22, 2024
antoninbas pushed a commit that referenced this pull request Feb 23, 2024
antoninbas pushed a commit that referenced this pull request Feb 24, 2024
luolanzone pushed a commit to luolanzone/antrea that referenced this pull request Mar 25, 2024