orchagent/portsorch: Keeping sched_group_ids in port #2520

Open · wants to merge 12 commits into master
Conversation

@arvbb (Contributor) commented Nov 14, 2022

What I did
Based on comments from PR #2174, keep the scheduler group IDs in the Port object and reuse them in qosorch.

Why I did it
Avoid querying SAI for the scheduler groups twice.
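
A minimal sketch of the idea, assuming port.m_scheduler_group_ids is a std::vector<sai_object_id_t> member added by this PR; the SAI attributes and get_port_attribute call are standard, but the surrounding code is illustrative, not the PR's exact change:

```cpp
// Sketch only: query the scheduler groups once (e.g. in PortsOrch during port
// init) and cache them on the Port object, so QosOrch can reuse
// port.m_scheduler_group_ids instead of querying SAI a second time.
sai_attribute_t attr;
attr.id = SAI_PORT_ATTR_QOS_NUMBER_OF_SCHEDULER_GROUPS;
sai_status_t status = sai_port_api->get_port_attribute(port.m_port_id, 1, &attr);
if (status == SAI_STATUS_SUCCESS)
{
    port.m_scheduler_group_ids.resize(attr.value.u32);

    attr.id = SAI_PORT_ATTR_QOS_SCHEDULER_GROUP_LIST;
    attr.value.objlist.count = (uint32_t)port.m_scheduler_group_ids.size();
    attr.value.objlist.list = port.m_scheduler_group_ids.data();
    status = sai_port_api->get_port_attribute(port.m_port_id, 1, &attr);
}
```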

How I verified it
sudo config warm_restart enable swss
sudo service swss restart
Also verified with warm-boot.

@arvbb arvbb marked this pull request as ready for review November 16, 2022 00:38
@neethajohn neethajohn requested a review from stephenxs November 30, 2022 18:17
neethajohn previously approved these changes Dec 1, 2022

@stephenxs (Collaborator) left a comment


See the inline comments.
I also suggest adding a mock test that simulates port removal and verifies that no errors occur.

orchagent/qosorch.cpp (Outdated)
  m_scheduler_group_port_info[port.m_port_id] = {
-     .groups = std::move(groups),
+     .groups = port.m_scheduler_group_ids,
@stephenxs (Collaborator) commented Dec 6, 2022


This avoids the memory issue, but there is still a memory leak with DPB (Dynamic Port Breakout): e.g. if an old port is removed and a set of new ports is created, the SAI OIDs of the ports change, but m_scheduler_group_port_info[<SAI OID of old port>] is never removed. Eventually, m_scheduler_group_port_info will accumulate entries for ports that have already been removed.
I would suggest either:

  1. moving the whole m_scheduler_group_port_info into the Port struct and initializing it during port initialization or in this function, or
  2. keeping m_scheduler_group_port_info and removing items from it on port removal.

@neethajohn, what do you think?
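
A minimal sketch of option 2, using stand-in types for illustration; the map layout and member names follow the snippet above, while the erase helper and type definitions are hypothetical:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Stand-ins for the real SAI/orchagent types (illustration only).
using sai_object_id_t = uint64_t;
struct SchedulerGroupPortInfo { std::vector<sai_object_id_t> groups; };

std::unordered_map<sai_object_id_t, SchedulerGroupPortInfo> m_scheduler_group_port_info;

// Option 2: when a port is removed, erase the cached entry keyed by its OID so
// the map does not accumulate entries for ports that no longer exist after DPB
// re-creates them with new OIDs.
void eraseSchedulerGroupPortInfo(sai_object_id_t old_port_oid)
{
    m_scheduler_group_port_info.erase(old_port_oid);
}
```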

Contributor commented:

I think the second solution will be a lot cleaner

@arvbb (Contributor, Author) commented:

Hi Stephen/Neetha, I am not familiar with DPB. How is the port removal scenario currently handled for scheduler groups? Is it already covered by the existing code before this PR, or is it new functionality that needs to be added? Please clarify, thanks.

Contributor commented:

@stephenxs, can you comment on the DPB scenario?

Collaborator commented:

When it comes to DPB, I think what we care about is the port removal flow.
In short, the QoS orchagent should listen to the config_db.PORT table and handle the port removal event.
On receiving a port removal event, it should map the port name to its OID and then erase the item indexed by that OID from m_scheduler_group_port_info.
I think we can use bool PortsOrch::getPort(string alias, Port &p) to map the port name to the OID.
What I'm not sure about is whether that mapping is still available in ports orch in the port removal scenario, because there is no guarantee about the order in which the orchagents (ports orch, QoS orch) handle the port removal notification.
This is why I proposed option 1, but if we want to go with option 2 there may be a way as well. I will check with the DPB expert on my team (if there is one) and get back to you.
Thanks.
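
A rough sketch of that removal flow, assuming QosOrch is subscribed to the PORT table; the handler name is illustrative, and as noted above gPortsOrch->getPort() may already have dropped the alias depending on processing order:

```cpp
// Illustrative only: handle a DEL operation for a port from the PORT table.
void QosOrch::handlePortRemoval(const std::string &alias)
{
    Port port;
    // Ordering between ports orch and QoS orch is not guaranteed, so the
    // alias-to-OID mapping may already be gone by the time we get here.
    if (!gPortsOrch->getPort(alias, port))
    {
        SWSS_LOG_NOTICE("Port %s not found in ports orch, skipping scheduler group cleanup",
                        alias.c_str());
        return;
    }
    // Drop the cached scheduler group info keyed by the removed port's OID.
    m_scheduler_group_port_info.erase(port.m_port_id);
}
```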
