[Buffer Manager][201911] Reclaim unused buffer for admin-down ports #1837
Merged
Conversation
neethajohn reviewed Jul 26, 2021
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
neethajohn suggested changes Sep 21, 2021
neethajohn approved these changes Sep 24, 2021
Depends on #1787
/azpw run
/AzurePipelines run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
- Don't deploy/remove lossless buffer PG if the port is shut down
- Re-add buffer PG if the port is started up
Signed-off-by: Stephen Sun <stephens@nvidia.com>
…PG is executed on Mellanox platform only. The vendor information is passed via a docker-level environment variable when the docker container is created.
Signed-off-by: Stephen Sun <stephens@nvidia.com>
- Remove redundant return value check in buffer manager
- Make sure the port is admin down before the vs test
- Move the code that recovers the port's admin status into the finally block, so it is guaranteed to run in any case
Signed-off-by: Stephen Sun <stephens@nvidia.com>
…FFER_QUEUE table
Originally, there was one fvVector in doSpeedUpdateTask shared by both tables. That was fine because the lifespans of its data for the two tables did not overlap. However, once the buffer-reclaiming feature was introduced, the lifespans interleaved, so the vector had to be cleared each time it switched from one table to the other. This made the code difficult to understand and prevented the data fetched from the redis db from being reused. To make the code clearer and more efficient, a dedicated fvVector is introduced for each table.
Signed-off-by: Stephen Sun <stephens@nvidia.com>
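A minimal sketch of the refactor this commit describes, assuming hypothetical names (doSpeedUpdateTaskSketch, fvVectorPg, fvVectorQueue) rather than the actual buffermgr members; FieldValueTuple is modeled as a string pair, as in swss-common:

```cpp
// Sketch only: keep a dedicated field-value vector per table instead of one
// shared vector whose lifespans interleave. Names are illustrative, not the
// actual buffermgr code.
#include <string>
#include <utility>
#include <vector>

using FieldValueTuple = std::pair<std::string, std::string>;

void doSpeedUpdateTaskSketch()
{
    // Before: a single shared vector had to be cleared between BUFFER_PG and
    // BUFFER_QUEUE writes, discarding data fetched from the redis db.
    // After: each table owns its vector, so fetched entries can be reused.
    std::vector<FieldValueTuple> fvVectorPg;     // for the BUFFER_PG table
    std::vector<FieldValueTuple> fvVectorQueue;  // for the BUFFER_QUEUE table

    fvVectorPg.emplace_back("profile", "pg_lossless_100000_5m_profile");
    fvVectorQueue.emplace_back("profile", "egress_lossless_profile");

    // ... write fvVectorPg to BUFFER_PG and fvVectorQueue to BUFFER_QUEUE;
    // neither vector needs to be cleared for the other table's sake.
}

int main()
{
    doSpeedUpdateTaskSketch();
    return 0;
}
```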
ba791a7 to 4c0b2df
This was referenced Nov 19, 2021
liat-grozovik pushed a commit that referenced this pull request Nov 22, 2021
…2011)
What I did
Port #1837 to master to reclaim reserved buffer. As the way to do it differs among vendors, buffermgrd will:
1. Handle port admin down on Mellanox platform:
   - Not apply lossless buffer PG to an admin-down port
   - Remove lossless buffer PG (3-4) from a port when it is shut down
2. Re-add lossless buffer PG (3-4) to a port when it is started up
Why I did it
To support reclaiming reserved buffer when a port is shut down on Mellanox platform in the traditional buffer model.
How I verified it
sonic-mgmt test and vs test.
Signed-off-by: Stephen Sun <stephens@nvidia.com>
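A hedged sketch of the reclaiming flow the commit message above describes; the class, method names, and table-write placeholders are illustrative assumptions, not the actual buffermgrd implementation:

```cpp
// Sketch only: reclaim reserved buffer for admin-down ports. The real logic
// lives in buffermgrd (cfgmgr/buffermgr.cpp) and is more involved.
#include <iostream>
#include <string>

class BufferMgrSketch
{
public:
    void onPortAdminStatusChange(const std::string &port, bool adminUp)
    {
        if (!adminUp)
        {
            // Port shut down: remove the lossless buffer PG (3-4) so its
            // reserved buffer is reclaimed; skip applying new lossless PGs.
            removeLosslessPg(port, "3-4");
        }
        else
        {
            // Port started up again: re-add the lossless buffer PG (3-4).
            addLosslessPg(port, "3-4");
        }
    }

private:
    // Placeholders standing in for the real table writes.
    void removeLosslessPg(const std::string &port, const std::string &pgs)
    {
        std::cout << "DEL BUFFER_PG|" << port << "|" << pgs << "\n";
    }
    void addLosslessPg(const std::string &port, const std::string &pgs)
    {
        std::cout << "SET BUFFER_PG|" << port << "|" << pgs << "\n";
    }
};

int main()
{
    BufferMgrSketch mgr;
    mgr.onPortAdminStatusChange("Ethernet0", false); // shutdown: reclaim
    mgr.onPortAdminStatusChange("Ethernet0", true);  // startup: re-add
    return 0;
}
```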
EdenGri pushed a commit to EdenGri/sonic-swss that referenced this pull request Feb 28, 2022
sonic-net#1837)
What I did
This PR adds support for an option to configure muxcable mode to a standby mode, in addition to the existing auto/active/manual modes. The new output would look like this when a standby arg is passed on the command line:
admin@sonic:~$ sudo config muxcable mode standby Ethernet0
admin@sonic:~$ sudo config muxcable mode standby all
How I did it
Added the changes in config/muxcable.py and added testcases.
How to verify it
Ran the unit tests.
Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>
Depends on #1787
What I did
To reclaim reserved buffer.
As the way to do it differs among vendors, the environment variable ASIC_VENDOR is passed to the swss docker and is loaded when buffermgrd starts. After that, buffermgrd will:
- Not apply lossless buffer PG to an admin-down port, and remove lossless buffer PG (3-4) from a port when it is shut down
- Re-add lossless buffer PG (3-4) to a port when it is started up
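For illustration, a minimal sketch of how a daemon can pick up such a container-level environment variable at startup; the value string "mellanox" and the surrounding names are assumptions for this sketch, not taken verbatim from buffermgrd:

```cpp
// Sketch only: read the ASIC_VENDOR environment variable at daemon start and
// enable the vendor-specific reclaiming behavior accordingly.
#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    // ASIC_VENDOR is set at docker-container creation, so the daemon can
    // simply read it from its environment at startup.
    const char *vendor = std::getenv("ASIC_VENDOR");

    // Assumed value for the Mellanox platform; the real check may differ.
    bool reclaimEnabled = (vendor != nullptr && std::string(vendor) == "mellanox");

    std::cout << "buffer reclaiming "
              << (reclaimEnabled ? "enabled" : "disabled") << std::endl;
    // ... continue with the rest of daemon initialization ...
    return 0;
}
```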
Why I did it
To support reclaiming reserved buffer when a port is shut down on Mellanox platform.
How I verified it
Regression test and vs test.
Details if related