diff --git a/TestPlans/BST/Threshold_Enhancements.md b/TestPlans/BST/Threshold_Enhancements.md
new file mode 100644
index 000000000000..e262976da1dd
--- /dev/null
+++ b/TestPlans/BST/Threshold_Enhancements.md
@@ -0,0 +1,111 @@
+# SQA Test Plan
+# SONiC - Threshold Feature Enhancements
+[TOC]
+# Test Plan Revision History
+| Rev | Date | Author | Change Description |
+|:---:|:-----------:|:------------------:|-----------------------------|
+| 0.2 | 08/17/2021 | Prudviraj Kristipati | Enhanced version |
+
+# List of Reviewers
+| Function | Name |
+|:---:|:-----------:|
+| Dev | Sharad Agarwal |
+| Dev | Bandaru Viswanath |
+| Dev | Yelamandeswara Rao |
+| QA | Kalyan Vadlamani |
+
+# List of Approvers
+| Function | Name | Date Approved|
+|:---:|:-----------:|:------------------:|
+| Dev | Sharad Agarwal |
+| Dev | Bandaru Viswanath |
+| Dev | Yelamandeswara Rao |
+| QA | Kalyan Vadlamani |
+
+# Definition/Abbreviation
+| **Term** | **Meaning** |
+| -------- | ------------------------------- |
+| BDBG | Broadcom SONIC Debug App|
+| Data Source | A specific source for the BDBG to collect the data from. It could be one of the Database Tables, Syslog, Kernel (fs) etc |
+| Observation Point | Locations on the switch where metrics are being observed e.g. on-chip Buffers at Ports, Queues etc |
+
+# Feature Overview
+The threshold feature allows configuring a threshold on supported ingress and egress buffers. A threshold breach notification (an entry update in COUNTERS_DB) is generated when the configured buffer threshold is breached in the ASIC.
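+
+A hedged sketch of how a breach entry might be inspected directly in COUNTERS_DB, assuming the breach-report table layout from the referenced Threshold feature spec (the table key and field names are illustrative and may differ by release):
+
+```shell
+# COUNTERS_DB is redis DB index 2 on typical builds
+admin@sonic:~$ redis-cli -n 2 KEYS 'THRESHOLD_BREACH*'
+admin@sonic:~$ redis-cli -n 2 HGETALL 'THRESHOLD_BREACH_TABLE:breach-report:1'
+```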
+
+# 1 Test Focus Areas
+## 1.1 CLI Testing
+ - All CLI config and show commands related to threshold configuration and breach reporting
+## 1.2 Functional Testing
+Verifying the below buffer counters against the configured thresholds:
+ - Ingress port buffer pool
+ - Egress port buffer pool
+ - Egress service pool
+ - Device buffer pool
+
+# 2 Topologies
+## 2.1 Topology 1
+SONiC switch with 4 TG ports
+### 2.2 Switch configuration
+- create one vlan, add vlan members, and keep sending 3:1 congestion traffic (see the sketch below).
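+
+A minimal configuration sketch for the step above, assuming the standard SONiC `config vlan` CLI and four TG-facing ports named Ethernet0-Ethernet3 (port names are illustrative):
+
+```shell
+admin@sonic:~$ sudo config vlan add 100
+# Add the four TG-facing ports as untagged members
+admin@sonic:~$ for port in Ethernet0 Ethernet1 Ethernet2 Ethernet3; do
+>     sudo config vlan member add -u 100 $port
+> done
+# 3:1 congestion: the TG sends line-rate traffic on three ports, all destined to the fourth
+```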
+
+# 3 Test Case and Objectives
+## 3.1 Functional
+### 3.1.1 Verify that configuring an invalid threshold device value is rejected with a valid warning message on the console.
+
+| **Test ID** | **ft_device_negative** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that configuring an invalid threshold device value is rejected with a valid warning message on the console.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) Attempt to configure an invalid threshold device value.
3) Verify that the SONiC switch is UP with default configuration.
4) Verify that a valid warning message is seen on the console.** |
+
+### 3.1.2 Verify device buffer counters after configuring a threshold limit, and verify that peak traffic exceeds the configured threshold
+
+| **Test ID** | **ft_tf_device_buffer_pool** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify device buffer counters after configuring a threshold limit, and verify that peak traffic exceeds the configured threshold.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) Configure the device buffer threshold with a valid value in the range 1-100.
3) Issue the show command 'show device threshold' to check the configured threshold.
4) Start traffic.
5) Check the threshold breach event table using the command 'show threshold breaches' (see the sketch below).
6) Configure the device threshold and check the threshold breach event table.
7) Verify that the device buffer threshold configuration is successful.
8) Verify that the command output shows the configured values.
9) Verify that traffic gets forwarded and the command output shows the device buffer values.
10) Verify that the threshold is breached and verify the device buffer globally.
11) Stop the traffic, wait for twice the configured interval, and verify that the respective counter value is 0.** |
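+
+A hedged CLI sketch for steps 2-5, using the show commands quoted above; the configuration syntax is an assumption based on the referenced Threshold feature spec and may differ on the target build:
+
+```shell
+admin@sonic:~$ sudo config threshold device 90    # assumed syntax: device buffer threshold of 90%
+admin@sonic:~$ show device threshold              # step 3: confirm the configured threshold
+admin@sonic:~$ show threshold breaches            # step 5: breach event table after traffic starts
+```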
+
+### 3.1.3 Verify global egress service pool multicast buffer counters after configuring a threshold limit, and verify that peak traffic exceeds the configured threshold
+
+| **Test ID** | **ft_tf_global_egress_buffer_pool_multicast** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify global egress service pool multicast buffer counters after configuring a threshold limit, and verify that peak traffic exceeds the configured threshold.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) Configure global Egress service pool Multicast buffer threshold with a valid range 1-100.
3) Issue the show command 'show threshold egress multicast' to check the configured threshold.
4) Start traffic.
5) Check the threshold breach event table using command 'show threshold breaches'.
6) Verify that SONiC switch is UP with default configuration.
7) Verify that global Egress Multicast threshold configuration is successful.
8) Verify that command output shows configured values.
9) Verify that traffic gets forwarded and command output shows the egress-multicast peak values.
10) Verify that a threshold is breached.
11) Stop the traffic, wait for twice the configured interval, and verify that the respective counter value is 0.** |
+
+### 3.1.4 Verify egress port buffer pool unicast buffer counters after configuring a threshold limit, and verify that peak traffic exceeds the configured threshold
+
+| **Test ID** | **ft_tf_egress_port_pool_unicast** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify egress port buffer pool unicast buffer counters after configuring a threshold limit, and verify that peak traffic exceeds the configured threshold.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) Configure Egress port pool buffer threshold with a valid range 1-100.
3) Issue the show command 'show threshold' to check the configured threshold.
4) Start traffic.
5) Check the threshold breach event table using command 'show threshold breaches'.
6) Configure multiple thresholds on multiple ports and check the threshold breach event table.
7) Verify that SONiC switch is UP with default configuration.
8) Verify that egress port buffer pool threshold configuration is successful.
9) Verify that command output shows configured values.
10) Verify that traffic gets forwarded and command output shows the egress-unicast peak values.
11) Verify that a threshold is breached.
12) Stop the traffic, wait for twice the configured interval, and verify that the respective counter value is 0.** |
+
+
+### 3.1.5 Verify egress port buffer pool shared (uc+mc) buffer counters after configuring a threshold limit, and verify that peak traffic exceeds the configured threshold
+
+| **Test ID** | **ft_tf_egress_port_pool_shared** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify egress port buffer pool shared (uc+mc) buffer counters after configuring a threshold limit, and verify that peak traffic exceeds the configured threshold.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) Configure Egress port buffer pool threshold with a valid range 1-100.
3) Issue the show command 'show threshold' to check the configured threshold.
4) Start traffic.
5) Check the threshold breach event table using command 'show threshold breaches'.
6) Configure multiple thresholds on multiple ports and check the threshold breach event table.
7) Verify that SONiC switch is UP with default configuration.
8) Verify that egress port buffer pool threshold configuration is successful.
9) Verify that command output shows configured values.
10) Verify that traffic gets forwarded and command output shows the egress port shared peak values.
11) Verify that the threshold is breached.
12) Stop the traffic, wait for twice the configured interval, and verify that the respective counter value is 0.** |
+
+### 3.1.6 Verify ingress port buffer pool shared (uc+mc) buffer counters after configuring a threshold limit, and verify that peak traffic exceeds the configured threshold
+
+| **Test ID** | **ft_tf_ingress_port_pool_shared** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify ingress port buffer pool shared (uc+mc) buffer counters after configuring a threshold limit, and verify that peak traffic exceeds the configured threshold.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) Configure Ingress buffer pool shared threshold with a valid range 1-100.
3) Issue the show command 'show threshold' to check the configured threshold.
4) Start traffic.
5) Check the threshold breach event table using command 'show threshold breaches'.
6) Configure multiple thresholds on multiple ports and check the threshold breach event table.
7) Verify that SONiC switch is UP with default configuration.
8) Verify that ingress port buffer pool threshold configuration is successful.
9) Verify that command output shows configured values.
10) Verify that traffic gets forwarded and command output shows the ingress port shared peak values.
11) Verify that a threshold is breached.
12) Stop the traffic, wait for twice the configured interval, and verify that the respective counter value is 0.** |
+
+# 4 Reference Links
+
+https://github.com/BRCM-SONIC/sonic_doc_private/blob/master/devops/telemetry/SONiC%20Threshold%20feature%20spec.md
+
diff --git a/TestPlans/BST/watermark_using_snapshot/watermark_using_snapshot_testplan.md b/TestPlans/BST/watermark_using_snapshot/watermark_using_snapshot_testplan.md
index 64d8d259b2f0..96afe769fcab 100644
--- a/TestPlans/BST/watermark_using_snapshot/watermark_using_snapshot_testplan.md
+++ b/TestPlans/BST/watermark_using_snapshot/watermark_using_snapshot_testplan.md
@@ -1,17 +1,19 @@
# SQA Test Plan
# watermark using snapshot
-# SONiC 3.0 Project and Buzznik Release
+# SONiC 3.0-4.0.0 Projects and Buzznik/Cyrus Releases
[TOC]
# Test Plan Revision History
| Rev | Date | Author | Change Description |
|:---:|:-----------:|:------------------:|-----------------------------|
-| 0.1 | 10/30/2019 | phani kumar ravula | initial version |
+| 0.2 | 08/16/2021 | Prudviraj Kristipati | Enhanced version |
+| 0.1 | 10/30/2019 | phani kumar ravula | initial version |
# List of Reviewers
| Function | Name |
|:---:|:-----------:|
| Dev | shirisha dasari |
| Dev | sachin suman |
+| Dev | Sharad Agarwal |
| QA | kalyan vadlamani |
| QA | giri babu sajja |
@@ -21,6 +23,7 @@
|:---:|:-----------:|:------------------:|
| Dev | shirisha dasari | |
| Dev | sachin suman | |
+| Dev | Sharad Agarwal |
| QA | kalyan vadlamani | |
| QA | giri babu sajja | |
@@ -36,6 +39,9 @@ The snapshot feature supported by certain hardware provides a bird’s eye view
The watermark feature currently supported on SONiC iterates over all the supported counters one-by-one. While this essentially allows all the counter data to be collated into the DB, since the stats are collected sequentially, it does not allow the user to accurately co-relate all the buffer usage statistics at a particular instant.
+In the 4.0.0 Cyrus release, BST is enhanced to report device buffer statistics, and ingress and egress service pool buffer usage, both globally and per port.
+
+
# 1 Test Focus Areas
## 1.1 CLI Testing
@@ -85,6 +91,16 @@ verifying below counter values for user watermark and persistent watermark
| **Type** | **CLI** |
| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) clear the snapshot interval using CLI "sonic-clear watermark interval".
3) verify the snapshot interval value is set to the default value using the "show watermark interval" CLI command.** |
+### 3.1.4 Verify the CLI command "show device watermark" and the default contents.
+
+| **Test ID** | **ft_sf_verify_show_device** |
+|--------|:----------------|
+| **Test Name** | **Verify the CLI command "show device watermark" and the default contents.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **CLI** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) clear the device watermark using the "clear device watermark" CLI command.
3) verify the CLI command "show device watermark" and the default contents (see the sketch below).** |
+
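+A short usage sketch for the two commands exercised in test 3.1.4 (KLISH-style prompt is illustrative; the command names are quoted from the steps above):
+
+```shell
+sonic# clear device watermark
+sonic# show device watermark
+```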
+
## 3.2 Functional
### 3.2.1 Verify that shared PG counters get updated in Counter DB for configured user watermark interval.
@@ -294,8 +310,58 @@ verifying below counter values for user watermark and persistent watermark
| **Type** | **Functional** |
| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) convert the buffers.json.j2 file to buffers.json and load into the DUT.
3) perform save and reboot.
4) Send the unicast traffic from 3 TG ports to the 4th TG port (i.e. send 3:1 congested trafic continuously.
5) verify the buffer pool counter value gets incremented properly using persistent watermark.
6) stop the traffic
7) clear the buffer pool counters using the cli command "sonic-clear buffer-pool persistent-watermark"
8)verify that buffer_pool counters gets cleared successfully** |
+### 3.2.24 Verify that the egress multicast buffer usage on the global buffer pool is incremented when multicast traffic is sent.
+
+| **Test ID** | **ft_sf_mc_share-buffer-count** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that the egress multicast buffer usage on the global buffer pool is incremented when multicast traffic is sent.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the non default watermark interval.
3) convert the buffers.json.j2 file to buffers.json and load into the DUT (see the sketch below).
4) perform save and reboot.
5) Send the Multicast traffic from 3 or 4 ports continuously.
6) Verify buffer pool counters get updated in Counter DB.
7) stop the traffic.** |
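+
+Step 3 above is typically done with `sonic-cfggen`; a minimal sketch, assuming the platform's buffer template path (the path is illustrative):
+
+```shell
+admin@sonic:~$ sonic-cfggen -d -t /usr/share/sonic/device/<platform>/<hwsku>/buffers.json.j2 > /tmp/buffers.json
+admin@sonic:~$ sudo config load -y /tmp/buffers.json
+admin@sonic:~$ sudo config save -y && sudo reboot    # step 4: save and reboot
+```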
+
+### 3.2.25 Verify that the device counters are updated in the Counter DB when congested traffic is sent.
+| **Test ID** | **ft_sf_device_counters** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that the device counters are updated in the Counter DB when congested traffic is sent.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the non default watermark interval.
3) convert the buffers.json.j2 file to buffers.json and load into the DUT.
4) perform save and reboot.
5) Send the unicast traffic from port-1 and port-2 destined to port-3 continuously.
6) Verify device buffer pool counters get updated in Counter DB.
7) Send the Multicast or unknown traffic from port-3 and port-4 that floods to remaining 3 ports continuously.
8) Verify device buffer pool counters get updated in Counter DB.
9) stop the traffic.** |
+
+### 3.2.26 Verify that the egress unicast buffer usage on an egress port is incremented.
+
+| **Test ID** | **ft_sf_uc_share-buffer-count_per_port** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that the egress unicast buffer usage on an egress port is incremented.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the non default watermark interval on interface4.
3) convert the buffers.json.j2 file to buffers.json and load into the DUT.
4) perform save and reboot.
5) Send the unicast traffic from 3 or 4 ports continuously.
6) Verify device buffer pool counters get updated in Counter DB.
7) stop the traffic.
8) Send the unicast traffic from 3 or 4 ports continuously.
9) Verify that the device buffer pool counters in Counter DB show 0.** |
+
+### 3.2.27 Verify that the egress shared buffer usage on an egress port is incremented.
+
+| **Test ID** | **ft_sf_ucmc_share-buffer-count_per_port** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that the egress shared buffer usage on an egress port is incremented.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the non default watermark interval on interface4.
3) convert the buffers.json.j2 file to buffers.json and load into the DUT.
4) perform save and reboot.
5) Send the unicast traffic from port-1 and port-2 destined to port-3 continuously.
6) Verify device buffer pool counters get updated in Counter DB.
7) Send the Multicast or unknown traffic from port-3 and port-4 that floods to remaining 3 ports continuously.
8) Verify device buffer pool counters get updated in Counter DB.
9) stop the traffic.
10) Repeat step 5 and step 7.
11) Verify that the device buffer pool counters in Counter DB show 0.** |
+
+### 3.2.28 Verify that the shared buffer usage on an ingress port is incremented.
+
+| **Test ID** | **ft_sf_um_share-buffer-count_per_port** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that the shared buffer usage on an ingress port is incremented.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the non default watermark interval on interface3.
3) convert the buffers.json.j2 file to buffers.json and load into the DUT.
4) perform save and reboot.
5) Send the multicast traffic from 3 to 4 ports continuously.
6) Verify device buffer pool counters get updated in Counter DB.
7) stop the traffic.** |
+
+
# 4 Reference Links
http://gerrit-lvn-07.lvn.broadcom.net:8083/plugins/gitiles/sonic/documents/+/refs/changes/15/12815/9/devops/telemetry/watermarks_HLD-snapshot.md
+https://github.com/BRCM-SONIC/sonic_doc_private/blob/master/devops/telemetry/watermarks_HLD-snapshot.md
\ No newline at end of file
diff --git a/TestPlans/Pre-Buzznik/SONiC_2_0_Functional_Test_Cases_List_Threshold_Feature.xlsx b/TestPlans/Pre-Buzznik/SONiC_2_0_Functional_Test_Cases_List_Threshold_Feature.xlsx
index a72ac6bcdc96..eb4c31c4fcff 100644
Binary files a/TestPlans/Pre-Buzznik/SONiC_2_0_Functional_Test_Cases_List_Threshold_Feature.xlsx and b/TestPlans/Pre-Buzznik/SONiC_2_0_Functional_Test_Cases_List_Threshold_Feature.xlsx differ
diff --git a/TestPlans/system/Broadcom_Debug_App/Broadcom_Debug_Application_testplan.md b/TestPlans/system/Broadcom_Debug_App/Broadcom_Debug_Application_testplan.md
new file mode 100644
index 000000000000..3824447d7b8e
--- /dev/null
+++ b/TestPlans/system/Broadcom_Debug_App/Broadcom_Debug_Application_testplan.md
@@ -0,0 +1,249 @@
+# SQA Test Plan
+# Broadcom Debug Application
+# SONiC 4.0.0 Project and Cyrus Release
+[TOC]
+# Test Plan Revision History
+| Rev | Date | Author | Change Description |
+|:---:|:-----------:|:------------------:|-----------------------------|
+| 0.1 | 08/17/2021 | Prudviraj Kristipati | initial version |
+
+# List of Reviewers
+| Function | Name |
+|:---:|:-----------:|
+| Dev | Sharad Agarwal |
+| Dev | Bandaru Viswanath |
+| QA | Kalyan Vadlamani |
+
+# List of Approvers
+| Function | Name | Date Approved|
+|:---:|:-----------:|:------------------:|
+| Dev | Sharad Agarwal |
+| Dev | Bandaru Viswanath |
+| QA | Kalyan Vadlamani |
+
+# Definition/Abbreviation
+| **Term** | **Meaning** |
+| -------- | ------------------------------- |
+| BDBG | Broadcom SONIC Debug App|
+| Data Source | A specific source for the BDBG to collect the data from. It could be one of the Database Tables, Syslog, Kernel (fs) etc |
+| Observation Point | Locations on the switch where metrics are being observed e.g. on-chip Buffers at Ports, Queues etc |
+
+# Feature Overview
+BDBG takes a use-case-oriented approach to debugging issues on a Broadcom SONIC switch:
+ - It aims to solve select (specific use case) problems
+ - Presents a unified, normalized and focused interface for the data to the user
+ - Correlates data from different sources on the Switch – software and hardware
+ - Presents a historical perspective along with current view
+ - Allows exporting of aggregated data to ease burden on external collectors
+
+There are existing individual features in Broadcom SONIC that aim to solve specific problems. BDBG does not compete with or replace these features; rather, it uses the data they gather.
+
+The BDBG infrastructure implements common aspects so that individual debug tools can share the infrastructure without having to re-implement from scratch. More details on provisioning are available in subsequent sections.
+
+Along with the basic infrastructure, two specific tools for monitoring congestions and drops are built. The tools are named `congestion` and `drops` respectively.
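+
+A hedged usage sketch of the two tools, using only command names that appear in the test cases below (argument values and prompt are illustrative):
+
+```shell
+# Provisioning: collection/retention intervals and the congestion threshold
+sonic# bdbg config collection-interval 10
+sonic# bdbg config max-retention-interval 60
+sonic# bdbg config congestion congestion-threshold 70
+# Current and historical views
+sonic# bdbg show congestion history
+sonic# bdbg show drops history
+```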
+
+# 1 Test Focus Areas
+## 1.1 CLI Testing
+ - All CLI config and show commands related to the `congestion` and `drops` tools
+## 1.2 Functional Testing
+Verifying the below counters for congestion and drop monitoring:
+ - shared PG counters
+ - queue unicast counters
+ - queue multicast counters
+ - buffer pool counters
+ - drop counters
+
+# 2 Topologies
+## 2.1 Topology 1
+![watermarks_using_snapshot](watermark_using_snapshot.png)
+### 2.2 Switch configuration
+- create one vlan and add vlan members and keep sending 3:1 congestion traffic.
+
+# 3 Test Case and Objectives
+## 3.1 CLI
+### 3.1.1 Verify the collection and max-retention interval configurations and default values.
+
+| **Test ID** | **ft_collection_max-retention** |
+|--------|:----------------|
+| **Test Name** | **Verify the collection and max-retention interval configurations and default values.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **CLI** |
+| **Steps** | **1) configure the bdbg config collection-interval and max-retention interval CLI command.
2) The command should not throw any error.
3) verify the default interval values are reflected correctly.** |
+
+### 3.1.2 Verify the congestion-threshold configuration and default values.
+
+| **Test ID** | **ft_congestion-threshold** |
+|--------|:----------------|
+| **Test Name** | **Verify the congestion-threshold configuration and default values.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **CLI** |
+| **Steps** | **1) configure the bdbg config congestion congestion-threshold CLI command.
2) The command should not throw any error.
3) verify the default interval values are reflected correctly.** |
+
+### 3.1.3 Verify the show commands "bdbg show" and "bdbg show drops" and validate the contents.
+
+| **Test ID** | **ft_show** |
+|--------|:----------------|
+| **Test Name** | **Verify the show commands "bdbg show" and "bdbg show drops" and validate the contents.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **CLI** |
+| **Steps** | **1) configure the bdbg config congestion congestion-threshold CLI command.
2) The command should not throw any error.
3) verify that the default values are observed.** |
+
+### 3.1.4 Verify the congestion history show commands "bdbg show congestion history", "bdbg show congestion history interface <>" and "bdbg show congestion history buffer <>" and validate the contents.
+
+| **Test ID** | **ft_show_congestion** |
+|--------|:----------------|
+| **Test Name** | **Verify the congestion history show commands "bdbg show congestion history", "bdbg show congestion history interface <>" and "bdbg show congestion history buffer <>" and validate the contents.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **CLI** |
+| **Steps** | **1) configure the bdbg config congestion congestion-threshold CLI command.
2) The command should not throw any error.
3) verify that the default values are observed.** |
+
+### 3.1.5 Verify different clear commands.
+
+| **Test ID** | **ft_clear** |
+|--------|:----------------|
+| **Test Name** | **Verify different clear commands** |
+| **Test Setup** | **Topology1** |
+| **Type** | **CLI** |
+| **Steps** | **1) verify command bdbg clear collection-interval.
2) verify command bdbg clear max-retention-interval.
3) verify command bdbg clear history.
4) verify command bdbg clear congestion history.
5) The command should not throw any error.
6) verify that the default values are observed.** |
+
+### 3.1.6 Verify the congestion history show commands "bdbg show congestion history", "bdbg show congestion history interface <>" and "bdbg show congestion history buffer <>" and validate the contents.
+
+| **Test ID** | **ft_show_congestion_history** |
+|--------|:----------------|
+| **Test Name** | **Verify the congestion history show commands "bdbg show congestion history", "bdbg show congestion history interface <>" and "bdbg show congestion history buffer <>" and validate the contents.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **CLI** |
+| **Steps** | **1) configure the bdbg config congestion congestion-threshold CLI command.
2) The command should not throw any error.
3) verify that the default values are observed.** |
+
+### 3.1.7 Verify the show commands "bdbg show drops history", "bdbg show drops history reasons", "bdbg show drops history flows" and "bdbg show drops history locations" and validate the contents.
+
+| **Test ID** | **ft_show_drops_history** |
+|--------|:----------------|
+| **Test Name** | **Verify the show commands "bdbg show drops history", "bdbg show drops history reasons", "bdbg show drops history flows" and "bdbg show drops history locations" and validate the contents.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **CLI** |
+| **Steps** | **1) configure the bdbg config congestion congestion-threshold CLI command.
2) The command should not throw any error.
3) verify that the default values are observed.** |
+
+### 3.1.8 Verify different clear history commands.
+
+| **Test ID** | **ft_clear_history** |
+|--------|:----------------|
+| **Test Name** | **Verify different clear history commands** |
+| **Test Setup** | **Topology1** |
+| **Type** | **CLI** |
+| **Steps** | **1) verify command bdbg clear collection-interval.
2) verify command bdbg clear max-retention-interval.
3) verify command bdbg clear history.
4) verify command bdbg clear congestion history.
5) verify command bdbg clear drop history.
6) The command should not throw any error.
7) verify that the default values are observed.** |
+
+## 3.2 Functional
+### 3.2.1 Verify that congestion for shared PG can be seen in show congestion output.
+
+| **Test ID** | **ft_shared_PG** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that congestion for shared PG can be seen in show congestion output.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the congestion threshold.
3) Send the traffic in from 3 TG ports to the 4th TG port (i.e. send 3:1 congested traffic) continuously.
4) Verify the 'show congestion' output.
5) stop the traffic.** |
+
+### 3.2.2 Verify that congestion for port queue can be seen in show congestion output.
+
+| **Test ID** | **ft_port_queue** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that congestion for port queue can be seen in show congestion output.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the congestion threshold.
3) Send the traffic in from 3 TG ports to the 4th TG port (i.e. send 3:1 congested traffic) continuously.
4) Verify the 'show congestion' output.
5) stop the traffic.** |
+
+### 3.2.3 Verify that top congestion sources recorded behavior and the historical data behavior when max-retention-interval is exceeded.
+
+| **Test ID** | **ft_tp_congestion** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that top congestion sources recorded behavior and the historical data behavior when max-retention-interval is exceeded.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the collector and max-retention-interval.
3) Send the traffic in from 3 TG ports to the 4th TG port (i.e. send 3:1 congested traffic) continuously.
4) Record the congestion traffic.
5) clear the congestion history.
6) Check whether the historical records exceed the configured retention interval.
7) Verify that after the retention interval the records are purged and are no longer counted as historical data.
8) stop the traffic.** |
+
+### 3.2.4 Verify congestion history for a particular source.
+
+| **Test ID** | **ft_congestion_source** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify congestion history for a particular source.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the collector and max-retention-interval.
3) Send the multicast traffic in from 3 TG ports to the 4th TG port (i.e. send 3:1 congested traffic) continuously.
4) Verify congestion history by "bdbg show congestion history" command.
5) stop the traffic.** |
+
+
+### 3.2.5 Verify that dropped 'TTL_ERR' packets can be seen in the 'show drops' output.
+
+| **Test ID** | **ft_pkt_drop** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that 'TTL_ERR' packets are dropped and dropped events are sent to the configured collector.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Functional** |
+| **Steps** | **Setup:
i. DUT with 4 TG ports and reachable through management interface.
ii. L3 - IXIA stream having 'TTL_ERR' error. Set IPv4 L3 packet with TTL = '0' to introduce 'TTL_ERR' in the packet.
iii. Place the ACL rule and ACL table in the 'config_db.json' file such that configured ACL should have rule i.e. MONITOR_DROP.
iv. Routing configured and ARP resolved for TG ports.
Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Send 'TTL_ERR' packets and check the behavior on the collector after capturing the packets.
5) Verify that 'TTL_ERR' packets are dropped and can be verified from 'show drops active flows' (see the ACL sketch below).** |
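+
+The MONITOR_DROP ACL provisioning referenced in the setup can be sketched as a config_db.json fragment; the table type, rule fields and port list below are illustrative of the pattern, not an exact schema:
+
+```shell
+admin@sonic:~$ cat <<'EOF' > /tmp/acl_snippet.json
+{
+    "ACL_TABLE": {
+        "DROP_MONITOR_ACL": { "type": "MONITOR_DROP", "ports": ["Ethernet0"] }
+    },
+    "ACL_RULE": {
+        "DROP_MONITOR_ACL|RULE_1": { "PRIORITY": "100", "PACKET_ACTION": "MONITOR_DROP" }
+    }
+}
+EOF
+```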
+
+### 3.2.6 Verify the packet drop counts for an interface, which can be observed with 'show drops active interface Ethernet0'
+
+| **Test ID** | **ft_pkt_interface** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify the packet drop counts for an interface, which can be observed with 'show drops active interface Ethernet0'.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Functional** |
+| **Steps** | **Setup:
i. DUT with 4 TG ports and reachable through management interface
ii. L3 IXIA stream having 'L3_DEST_MISS'. Do not install L3 route/L3 host entry.
iii. Place the ACL rule and ACL table in the 'config_db.json' file such that configured ACL should have rule i.e. MONITOR_DROP. Routing configured and ARP resolved for TG ports.
Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Send 'L3_DEST_MISS' packets and check the behavior on collector after capturing the packets.
5) Verify that 'L3_DEST_MISS' packets are dropped and check the drop event with the help of the 'show drops active interface Ethernet0' command.** |
+
+### 3.2.7 Verify that congestion for service pool can be seen in show congestion output.
+
+| **Test ID** | **ft_port_service** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that congestion for service pool can be seen in show congestion output.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the congestion threshold.
3) Send the traffic in from 3 TG ports to the 4th TG port (i.e. send 3:1 congested traffic) continuously.
4) Verify the 'show congestion' output.
5) stop the traffic.** |
+
+### 3.2.8 Verify that congestion for global pool can be seen in show congestion output.
+
+| **Test ID** | **ft_global_pool** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that congestion for global pool can be seen in show congestion output.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the congestion threshold.
3) Send the traffic in from 3 TG ports to the 4th TG port (i.e. send 3:1 congested traffic) continuously.
4) Verify the 'show congestion' output.
5) stop the traffic.** |
+
+### 3.2.9 Verify that congestion for device can be seen in show congestion output.
+
+| **Test ID** | **ft_device_pool** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that congestion for device can be seen in show congestion output.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the congestion threshold.
3) Send the traffic in from 3 TG ports to the 4th TG port (i.e. send 3:1 congested traffic) continuously.
4) Verify the 'show congestion' output.
5) stop the traffic.** |
+
+
+### 3.2.10 Verify the packet drop counts for flows, which can be observed with 'show drops active flows'
+
+| **Test ID** | **ft_pkt_flows** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify the packet drop counts for flows, which can be observed with 'show drops active flows'.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Functional** |
+| **Steps** | **Setup:
i. DUT with 4 TG ports and reachable through management interface
ii. L3 IXIA stream having 'L3_DEST_MISS'. Do not install L3 route/L3 host entry.
iii. Place the ACL rule and ACL table in the 'config_db.json' file such that configured ACL should have rule i.e. MONITOR_DROP. Routing configured and ARP resolved for TG ports.
Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Send 'L3_DEST_MISS' packets and check the behavior on collector after capturing the packets.
5) Verify that 'L3_DEST_MISS' packets are dropped and the drop event with the help of show drops active flows.
6) Verify the drops in the 'show drops active flows locations' output as well.** |
+
+### 3.2.11 Verify the drop event recording behavior and the historical data behavior when max-retention-interval is exceeded.
+
+| **Test ID** | **ft_drop_recorded** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify the drop event recording behavior and the historical data behavior when max-retention-interval is exceeded.** |
+| **Test Setup** | **Topology1** |
+| **Type** | **Functional** |
+| **Steps** | **1) Bring up the SONiC switch with default configuration.
2) configure the collector and max-retention-interval.
3) Send the traffic in from 3 TG ports to the 4th TG port (i.e. send 3:1 congested traffic) continuously.
4) Record the traffic.
5) clear the drops history.
6) Check whether the historical records exceed the configured retention interval.
7) Verify that after the retention interval the records are purged and are no longer counted as historical data.
8) stop the traffic.** |
+
+# 4 Reference Links
+
+https://github.com/BRCM-SONIC/sonic_doc_private/blob/master/devops/tam/bdbg-hld.md
+
diff --git a/TestPlans/system/Drop_Monitor/SONIC_3_0_Mirror_On_Drop_Test_Plan.md b/TestPlans/system/Drop_Monitor/SONIC_3_0_Mirror_On_Drop_Test_Plan.md
index acc86b9a5ef8..460823a8c461 100644
--- a/TestPlans/system/Drop_Monitor/SONIC_3_0_Mirror_On_Drop_Test_Plan.md
+++ b/TestPlans/system/Drop_Monitor/SONIC_3_0_Mirror_On_Drop_Test_Plan.md
@@ -1,25 +1,30 @@
# SQA Test Plan
# Drop Monitor
-# SONiC 3.0 - Buzznik Release
+# SONiC 3.0 - 4.0.0 Buzznik/Cyrus Releases
[TOC]
# Test Plan Revision History
| Rev | Date | Author | Change Description |
|:---:|:-----------:|:------------------:|-----------------------------|
+| 0.2 | 08/18/2021 | Prudviraj Kristipati | Enhanced version |
| 0.1 | 11/05/2019 | Rizwan Siddiqui | Initial version |
# List of Reviewers
| Function | Name |
|:-------------------:|:--------------------------------:|
-| Management | Giri Babu Sajja |
-| Management | Sachin Suman |
+| QA | Kalyan Vadlamani |
+| QA | Giri Babu Sajja |
+| Dev | Sachin Suman |
+| Dev | Bandaru Viswanath |
| QA | Anil Kumar Kolkaleti |
| Dev | Shirisha Dasari |
# List of Approvers
| Function | Name | Date Approved |
|:------------:|:---------------:|:------------------:|
-| Management | Giri Babu Sajja | |
-| Management | Sachin Suman | |
+| QA | Giri Babu Sajja | |
+| Dev | Sachin Suman | |
+| QA | Kalyan Vadlamani ||
+| Dev | Bandaru Viswanath ||
# Definition/Abbreviation
| **Term** | **Meaning** |
@@ -32,12 +37,19 @@ Drop Start:- Sent when drops are observed on the flow for the first time. The re
Sampling rate: One out of configured number of dropped packets of flow are sampled for processing.
The report contains event, flow keys and the last observed drop reasons.
Once a drop-stop event is notified to the collector, if a flow drops again, a "drop-start" event will be sent indicating drops on flow.
+Additionally, to enable quick and targeted packet-drop debugging, the Drop Monitor feature supports reporting information locally about dropped flows without requiring an external Collector. This mode is termed `local` mode.
+
+The two modes - *local* and *external* - are mutually exclusive. That is, when an external collector is configured, information on dropped flows is unavailable locally on the switch. Likewise, when used in *local* mode, drop reports are not sent to any external collector. The `external` mode is the default mode.
+
+The `local` mode is meant for debugging purposes only and is limited in terms of scale (the number of flows that can be monitored). It is not intended as a replacement for true drop monitoring with an external collector.
+
# Test Approach
### What will be part of module config?
Module Config will have below items covered.
1. Ensure Min Topology, checking build contains advance package, feature supported platform and feature availability check, initializing TG port handlers and creating TG streams,
creating routing interfaces, creating a random VLAN to be used for L2 drop reason, needed show commands to check/debug the configuration.
2. Drop Monitor feature configuration such as entering into TAM mode, flow configuration, collector creation, sample configuration, aging-interval configuration.
3. All drop reasons related configuration i.e. ACL rules & ACL tables, 'Drop Monitor' flows and their corresponding TG streams configuration.
+4. Drop Monitor local mode configuration is also part of module config (see the sketch below).
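+
+A hedged provisioning sketch for items 2 and 4, in KLISH style; the command tokens are assumptions based on the TAM drop-monitor HLD and may differ by build:
+
+```shell
+sonic(config)# tam
+sonic(config-tam)# drop-monitor
+sonic(config-tam-drop-monitor)# aging-interval 30          # assumed token
+sonic(config-tam-drop-monitor)# sample samp1 rate 1000     # assumed token
+sonic(config-tam-drop-monitor)# session sess1 flowgroup fg1 sample samp1 collector col1   # assumed token
+```
+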
### What Utility will be used?
SpyTest framework TG packet capture utility will be used for analyzing the drop event packets.
@@ -85,7 +97,7 @@ Estimated run time would be 5-7 minutes (excluding other reboot tests).
- Warm reboot
- Fast reboot
- Config save and reload
-
+
## 1.3 Scalability Testing
- Scaling with max supported flows.
@@ -215,8 +227,44 @@ Note : VLAN tagged packet for the VLAN which is not configured / exists on the s
| **Type** | **Functional** |
| **Steps** | **Setup:
i. DUT with 4 TG ports and reachable through management interface.
ii. Send 'Unknown VLAN' & 'TTL_ERR'packets and check the behavior on collector after capturing the packets.
iii. Place the ACL rule and ACL table in the 'config_db.json' file such that configured ACL should have rule i.e. MONITOR_DROP.
iv. Routing configured and ARP resolved for TG ports.
Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Send 'TTL_ERR' & 'Unknown VLAN' packets and check the behavior on collector after capturing the packets.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and collector configuration is successful.
3) Verify that 'Drop Monitor' multiple drop events (start/active/stop) for 'Unknown VLAN' & 'TTL_ERR' are exported in a single packet with both events set in it. Also multiple flows reporting same event at same time are reported in a single event and sent to the collector in protobuf format having part of packet, flow details, drop reason opcode.** |
+### 3.1.15 Verify the local mode configuration in different scenarios.
+
+| **Test ID** | **ft_mode_local_config** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify the local mode configuration in different scenarios.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Functional** |
+| **Steps** | **1) configure the mode as local.
2) verify that the mode is reflected correctly.
3) verify that the built-in collector is named 'local' when the configured mode is local.
4) verify that when Drop Monitor is set up in local mode, the collector parameter is optional and is ignored (see the sketch below).** |
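+
+A short verification sketch for this test; 'show tam drop-monitor flows' is quoted in test 3.2.4 below, while the mode-setting command syntax is an assumption:
+
+```shell
+sonic(config-tam-drop-monitor)# mode local      # assumed syntax for selecting local mode
+sonic# show tam drop-monitor flows              # local-mode flow report
+```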
+
+### 3.1.16 Verify the clear dropped-flows functionality after 'Unknown VLAN' packets are dropped and drop events are reported in Drop Monitor local mode.
+
+| **Test ID** | **ft_mod_unknown_vlan_pkt_drop_local** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify the clear dropped-flows functionality after 'Unknown VLAN' packets are dropped and drop events are reported in Drop Monitor local mode.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Functional** |
+| **Steps** | **Setup:
i. DUT with 4 TG ports and reachable through management interface.
ii. L2 IXIA stream having VLAN tag with the VLAN which does not exist on the DUT.
iii. Place the ACL rule and ACL table in the 'config_db.json' file such that configured ACL should have rule i.e. MONITOR_DROP.
Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and mode local.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Send 'Unknown VLAN' packets and check the behavior from respective show command.
5) Send 'Unknown VLAN' packets after the aging interval.
6) Issue clear tam drop-monitor flows command.
7) Send 'Unknown VLAN' packets and check the behavior from respective show command.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and local configuration is successful.
3) Verify that a flow is configured using 5-tuple.
4) Verify that 'Unknown VLAN' packets are dropped and the drop event is sent to the local collector having part of packet, flow details, drop reason opcode.
5) Verify that drop reasons are not sent to the local collector.
6) Observe that the flows are cleared and verify the same in the respective show command.
7) Verify that 'Unknown VLAN' packets are dropped and the drop event is sent to the local collector having part of packet, flow details, drop reason opcode.** |
+Note : A VLAN-tagged packet for a VLAN which is not configured / does not exist on the switch is an Unknown VLAN packet.
+
+### 3.1.17 Verify that 'L3HEADER Error' packets are dropped and drop events are reported in Drop Monitor local mode when continuous traffic is sent.
+| **Test ID** | **ft_mod_l3header_error_pkt_drop_local** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that 'L3HEADER Error' packets are dropped and drop events are reported in Drop Monitor local mode when continuous traffic is sent.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Functional** |
+| **Steps** | **Setup:
i. DUT with 4 TG ports and reachable through management interface.
ii. L3 IXIA stream having 'L3 Source Bind Fail'. Send continuous packet with unicast Destination MAC Address and Multicast Destination IP address.
iii. Place the ACL rule and ACL table in the 'config_db.json' file such that configured ACL should have rule i.e. MONITOR_DROP.
iv. Routing configured and ARP resolved for TG ports.
Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and mode local.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Send packet with a different MAC address than the MAC address used in source binding and check the show command for reasons.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and local configuration is successful.
3) Verify that a flow is configured using 5-tuple.
4) Verify that 'L3 Source Bind Fail' packets are dropped and the drop events are seen in the show command.** |
+
+### 3.1.18 Verify that drop monitor events are not sent to the external collector, even if a collector is configured, when the mode is set to local and the collector is specified as local in the session configuration.
+
+| **Test ID** | **ft_mod_l3_source_bind_fail_pkt_local_drop** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that drop monitor events are not sent to the external collector, even if a collector is configured, when the mode is set to local and the collector is specified as local in the session configuration.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Functional** |
+| **Steps** | **Setup:
i. DUT with 4 TG ports and reachable through management interface.
ii. L3 - IXIA stream having 'L3 Source Bind Fail'. Send packet with a different MAC than the MAC address used in SRC binding.
iii. Place the ACL rule and ACL table in the 'config_db.json' file such that configured ACL should have rule i.e. MONITOR_DROP.
iv. Routing configured and ARP resolved for TG ports.
Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and mode local.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Send packets with a different MAC address than the MAC address used in source binding and check the drops seen in the show command.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and collector configuration is successful.
3) Verify that a flow is configured using 5-tuple.
4) Verify that 'L3 Source Bind Fail' packets are dropped and the drop events are not sent to the external collector.** |
## 3.2 Negative
+
### 3.2.1 Verify that 'Known/Configured VLAN' packets are not dropped and no dropped events are sent to the configured collector.
| **Test ID** | **ft_neg_mod_known_vlan_pkt_not_dropped** |
@@ -235,7 +283,28 @@ Note : VLAN tagged packet for the VLAN which is not configured / exists on the s
| **Type** | **Negative** |
| **Steps** | **Setup:
i. L2 IXIA stream having VLAN tag with the VLAN which is not configured on the DUT.
Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Send packets to 'Unknown VLAN' and check the behavior on collector after capturing the packets.
5) Do a shut/no-shut on traffic sending links and check feature behavior.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and collector configuration is successful.
3) Verify that a flow is configured using 5-tuple.
4) Verify that packets are dropped and events are sent to the collector.
5) Verify that once the link come back UP, feature functionality resumes, dropped packets events are sent to the collector in protobuf format having part of packet, flow details, drop reason opcode.** |
+### 3.2.3 Verify that mode change is not allowed when active sessions are present on the switch.
+
+| **Test ID** | **ft_neg_external_local** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that mode change is not allowed when active sessions are present on the switch.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Negative** |
+| **Steps** | **1) verify the default mode.
2) check for any active sessions.
3) configure the mode as local.
4) verify that the mode change is not allowed because active sessions are present on the switch.** |
+
+### 3.2.4 Verify that the command 'show tam drop-monitor flows' throws an appropriate error when the collector is configured as external.
+
+| **Test ID** | **ft_neg_error_drop** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that the command 'show tam drop-monitor flows' throws an appropriate error when the collector is configured as external.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Negative** |
+| **Steps** | **1) Make sure no dropped events are present on the device.
2) verify which mode is configured on the device.
3) Try to issue the command 'show tam drop-monitor flows' on the device.
4) verify that an appropriate error is seen on the console.** |
+
+
## 3.3 Reboot/Reload Test Cases
+
### 3.3.1 Verify that 'Drop Monitor' warm-reboot functionality works fine.
| **Test ID** | **ft_mod_warm_reboot** |
@@ -263,23 +332,23 @@ Note : VLAN tagged packet for the VLAN which is not configured / exists on the s
| **Type** | **Config Reload** |
| **Steps** | **Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Save the config and perform 'reload config' and check the configuration.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and collector configuration is successful.
3) Verify that a flow is configured using 5-tuple.
4) Verify that reload config functionality works fine, configuration is applied back to the switch.** |
-### 3.3.4 Verify 'Drop Monitor' configuration gets removed when TAM dockder is restarted.
+### 3.3.4 Verify 'Drop Monitor' configuration gets removed when TAM docker is restarted.
-| **Test ID** | **ft_mod_docker_restart** |
-| -------------- | :------------------------------------------------------------------------ |
-| **Test Name** | **Verify 'Drop Monitor' configuration gets removed when TAM dockder is restarted.** |
-| **Test Setup** | **Topology** |
-| **Type** | **Docker Restart** |
+| **Test ID** | **ft_mod_docker_restart** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify 'Drop Monitor' configuration gets removed when TAM docker is restarted.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Docker Restart** |
| **Steps** | **Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Restart TAM docker and check the configuration.
5) Repeat above steps #2 & 3.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and collector configuration is successful.
3) Verify that a flow is configured using 5-tuple.
4) Verify that restarting the docker will only restart the dropMonitorMgr.
5) Verify that there should not be an impact on the config in CONFIG_DB.
** |
### 3.3.6 Verify that 'Drop Monitor' config save reload works fine.
| **Test ID** | **ft_mod_save_reload** |
| -------------- | :----------------------------------------------------------- |
-| **Test Name** | **Verify that 'Drop Monitor' config save reload works fine.**|
+| **Test Name** | **Verify that 'Drop Monitor' config save reload works fine.** |
| **Test Setup** | **Topology** |
| **Type** | **Config Save Reload** |
-| **Steps** | **Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Save the configuration and reboot the switch.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and collector configuration is successful.
3) Verify that a flow is configured using 5-tuple.
4) Verify that 'Drop Monitor' configuration sustains a reboot and all configurataion is retained.
** |
+| **Steps** | **Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Configure a flow using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Save the configuration and reboot the switch.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and collector configuration is successful.
3) Verify that a flow is configured using 5-tuple.
4) Verify that 'Drop Monitor' configuration sustains a reboot and all configuration is retained.
** |
### 3.3.7 Verify that there is no memory leak for 'Drop Monitor' feature.
@@ -302,16 +371,32 @@ Note : VLAN tagged packet for the VLAN which is not configured / exists on the s
### 4 Scalability
### 4.1 Verify that 'Drop Monitor' max flows can be created.
-| **Test ID** | **ft_mod_max_flows** |
-| -------------- | :------------------------------------------------------------------- |
-| **Test Name** | **Verify that 'Drop Monitor' max supproted flows can be created.** |
-| **Test Setup** | **Topology** |
-| **Type** | **Scalability** |
-| **Steps** | **Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Configure max flows using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Pick any 2 of the configured flows and test feature functionality.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and collector configuration is successful.
3) Verify that max flows are configured and are shown correctly.
4) Verfiy that drop events are generated for the selected 2 flows and sent to the collector.
** | Note : 'Drop Monitor' max flows depend on max ACL rules supported per platform.
+| **Test ID** | **ft_mod_max_flows** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify that 'Drop Monitor' max supported flows can be created.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Scalability** |
+| **Steps** | **Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and a collector.
3) Configure max flows using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Pick any 2 of the configured flows and test feature functionality.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and collector configuration is successful.
3) Verify that max flows are configured and are shown correctly.
4) Verify that drop events are generated for the selected 2 flows and sent to the collector.
** | Note : 'Drop Monitor' max flows depend on max ACL rules supported per platform.
+
+
+
+### 4.2 Verify max available flows when mode is configured as local.
+
+| **Test ID** | **ft_mod_max_flows_local** |
+| -------------- | :----------------------------------------------------------- |
+| **Test Name** | **Verify max available flows when mode is configured as local.** |
+| **Test Setup** | **Topology** |
+| **Type** | **Scalability** |
+| **Steps** | **Procedure:
1) Bring up the DUT with default configuration.
2) Enable 'Drop Monitor' feature, configure sampling rate, aging interval and mode as local.
3) Configure max flows using 5-Tuple info to look for a specific pattern in the incoming flow of traffic.
4) Pick any 2 of the configured flows and test feature functionality.
Expected Behavior:
1) Verify that DUT is UP with default configuration.
2) Verify that configuration is successful and 'Drop Monitor' feature is enabled, sampling config is successful, aging interval and local configuration is successful.
3) Verify that max flows are configured and are shown correctly.
4) Verify that drop events are generated for the selected 2 flows and reported locally.
** | Note : 'Drop Monitor' max flows depend on max ACL rules supported per platform.
## 5 Reference Links
SONIC 3.0 'Drop Monitor' feature HLD @
http://gerrit-lvn-07.lvn.broadcom.net:8083/c/sonic/documents/+/12993
+SONIC 4.0.0 'Drop Monitor' feature HLD @
+
+https://github.com/BRCM-SONIC/sonic_doc_private/blob/master/devops/tam/tam-drop-monitor-hld.md
+