Shutting down the portchannel does not bring the portchannel member's oper status down #5319

Open
tim-rj opened this issue Sep 4, 2020 · 5 comments · May be fixed by sonic-net/sonic-utilities#1234

@tim-rj (Contributor) commented Sep 4, 2020

Description

Shutting down the portchannel does not bring the portchannel member's oper status down.

Steps to reproduce the issue (example commands below):
1. Configure a new portchannel.
2. Add a portchannel member.
3. Shut down the portchannel.
4. View the port status.
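
A minimal reproduction on the SONiC CLI, assuming the same names as in the session below (PortChannel0001 with member Ethernet51):

```
config portchannel add PortChannel0001
config portchannel member add PortChannel0001 Ethernet51
config interface shutdown PortChannel0001
show interfaces status
```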

Describe the results you received:

The portchannel member's (Ethernet51) oper status is not down:

```
root@sonic:/home/admin# show interfaces portchannel
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
  No.  Team Dev         Protocol     Ports
-----  ---------------  -----------  -------------
 0001  PortChannel0001  LACP(A)(Up)  Ethernet51(S)
 0002  PortChannel0002  LACP(A)(Dw)  N/A
root@sonic:/home/admin# config interface shutdown PortChannel0001 
root@sonic:/home/admin# show interfaces portchannel               
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
  No.  Team Dev         Protocol     Ports
-----  ---------------  -----------  -------------
 0001  PortChannel0001  LACP(A)(Dw)  Ethernet51(D)
 0002  PortChannel0002  LACP(A)(Dw)  N/A
root@sonic:/home/admin# show int st
      Interface            Lanes    Speed    MTU    FEC               Alias             Vlan    Oper    Admin             Type    Asym PFC
---------------  ---------------  -------  -----  -----  ------------------  ---------------  ------  -------  ---------------  ----------
      Ethernet1                1      25G   9100    N/A   twentyfiveGigE0/1            trunk    down       up              N/A         N/A
      Ethernet2                2      25G   9100    N/A   twentyfiveGigE0/2           routed    down       up              N/A         N/A
      Ethernet3                3      25G   9100    N/A   twentyfiveGigE0/3           routed    down       up              N/A         N/A
      Ethernet4                4      25G   9100    N/A   twentyfiveGigE0/4           routed    down       up              N/A         N/A
      Ethernet5                5      25G   9100    N/A   twentyfiveGigE0/5           routed    down       up              N/A         N/A
      Ethernet6                6      25G   9100    N/A   twentyfiveGigE0/6           routed    down       up              N/A         N/A
      Ethernet7                7      25G   9100    N/A   twentyfiveGigE0/7           routed    down       up              N/A         N/A
      Ethernet8                8      25G   9100    N/A   twentyfiveGigE0/8           routed    down       up              N/A         N/A
      Ethernet9               13      25G   9100    N/A   twentyfiveGigE0/9           routed    down       up              N/A         N/A
     Ethernet10               14      25G   9100    N/A  twentyfiveGigE0/10           routed    down       up              N/A         N/A
     Ethernet11               15      25G   9100    N/A  twentyfiveGigE0/11           routed    down       up              N/A         N/A
     Ethernet12               16      25G   9100    N/A  twentyfiveGigE0/12           routed    down       up              N/A         N/A
     Ethernet13               21      25G   9100    N/A  twentyfiveGigE0/13           routed    down       up              N/A         N/A
     Ethernet14               22      25G   9100    N/A  twentyfiveGigE0/14           routed    down       up              N/A         N/A
     Ethernet15               23      25G   9100    N/A  twentyfiveGigE0/15           routed    down       up              N/A         N/A
     Ethernet16               24      25G   9100    N/A  twentyfiveGigE0/16           routed    down       up              N/A         N/A
     Ethernet17               29      25G   9100    N/A  twentyfiveGigE0/17           routed    down       up              N/A         N/A
     Ethernet18               30      25G   9100    N/A  twentyfiveGigE0/18           routed    down       up              N/A         N/A
     Ethernet19               31      25G   9100    N/A  twentyfiveGigE0/19           routed    down       up              N/A         N/A
     Ethernet20               32      25G   9100    N/A  twentyfiveGigE0/20           routed    down       up              N/A         N/A
     Ethernet21               33      25G   9100    N/A  twentyfiveGigE0/21           routed    down       up              N/A         N/A
     Ethernet22               34      25G   9100    N/A  twentyfiveGigE0/22           routed    down       up              N/A         N/A
     Ethernet23               35      25G   9100    N/A  twentyfiveGigE0/23           routed    down       up              N/A         N/A
     Ethernet24               36      25G   9100    N/A  twentyfiveGigE0/24           routed    down       up              N/A         N/A
     Ethernet25               41      25G   9100    N/A  twentyfiveGigE0/25           routed    down       up              N/A         N/A
     Ethernet26               42      25G   9100    N/A  twentyfiveGigE0/26           routed    down       up              N/A         N/A
     Ethernet27               43      25G   9100    N/A  twentyfiveGigE0/27           routed    down       up              N/A         N/A
     Ethernet28               44      25G   9100    N/A  twentyfiveGigE0/28           routed    down       up              N/A         N/A
     Ethernet29               49      25G   9100    N/A  twentyfiveGigE0/29           routed    down       up              N/A         N/A
     Ethernet30               50      25G   9100    N/A  twentyfiveGigE0/30           routed    down       up              N/A         N/A
     Ethernet31               51      25G   9100    N/A  twentyfiveGigE0/31           routed    down       up              N/A         N/A
     Ethernet32               52      25G   9100    N/A  twentyfiveGigE0/32           routed    down       up              N/A         N/A
     Ethernet33               57      25G   9100    N/A  twentyfiveGigE0/33           routed    down       up              N/A         N/A
     Ethernet34               58      25G   9100    N/A  twentyfiveGigE0/34           routed    down       up              N/A         N/A
     Ethernet35               59      25G   9100    N/A  twentyfiveGigE0/35           routed    down       up              N/A         N/A
     Ethernet36               60      25G   9100    N/A  twentyfiveGigE0/36           routed    down       up              N/A         N/A
     Ethernet37               61      25G   9100    N/A  twentyfiveGigE0/37           routed    down       up              N/A         N/A
     Ethernet38               62      25G   9100    N/A  twentyfiveGigE0/38           routed    down       up              N/A         N/A
     Ethernet39               63      25G   9100    N/A  twentyfiveGigE0/39           routed    down       up              N/A         N/A
     Ethernet40               64      25G   9100    N/A  twentyfiveGigE0/40           routed    down       up              N/A         N/A
     Ethernet41               65      25G   9100    N/A  twentyfiveGigE0/41           routed    down       up              N/A         N/A
     Ethernet42               66      25G   9100    N/A  twentyfiveGigE0/42           routed    down       up              N/A         N/A
     Ethernet43               67      25G   9100    N/A  twentyfiveGigE0/43           routed    down       up              N/A         N/A
     Ethernet44               68      25G   9100    N/A  twentyfiveGigE0/44           routed    down       up              N/A         N/A
     Ethernet45               69      25G   9100    N/A  twentyfiveGigE0/45           routed    down       up              N/A         N/A
     Ethernet46               70      25G   9100    N/A  twentyfiveGigE0/46           routed    down       up              N/A         N/A
     Ethernet47               71      25G   9100    N/A  twentyfiveGigE0/47           routed    down       up              N/A         N/A
     Ethernet48               72      25G   9100    N/A  twentyfiveGigE0/48           routed    down       up              N/A         N/A
     Ethernet49      85,86,87,88     100G   9100    N/A      hundredGigE0/1           routed      up       up  QSFP28 or later         N/A
     Ethernet50      77,78,79,80     100G   9100    N/A      hundredGigE0/2           routed    down       up              N/A         N/A
     Ethernet51     97,98,99,100     100G   9100    N/A      hundredGigE0/3  PortChannel0001      up       up  QSFP28 or later         N/A
     Ethernet52      93,94,95,96     100G   9100    N/A      hundredGigE0/4           routed    down       up              N/A         N/A
     Ethernet53  113,114,115,116     100G   9100    N/A      hundredGigE0/5           routed    down       up              N/A         N/A
     Ethernet54  105,106,107,108     100G   9100    N/A      hundredGigE0/6           routed    down       up              N/A         N/A
     Ethernet55  121,122,123,124     100G   9100    N/A      hundredGigE0/7           routed    down       up              N/A         N/A
     Ethernet56  125,126,127,128     100G   9100    N/A      hundredGigE0/8           routed    down       up              N/A         N/A
PortChannel0001              N/A     100G   9100    N/A                 N/A            trunk    down     down              N/A         N/A
root@sonic:/home/admin# sudo tail -n 20 /var/log/syslog
Sep  4 15:36:50.348016 sonic INFO dhclient[1874]: XMT: Solicit on eth0, interval 109060ms.
Sep  4 15:37:31.214639 sonic NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:PortChannel0001 admin:0 oper:0 addr:58:69:6c:fb:21:22 ifindex:8 master:66 type:team
Sep  4 15:37:31.214672 sonic NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:PortChannel0001 admin:0 oper:0 addr:58:69:6c:fb:21:22 ifindex:8 master:66
Sep  4 15:37:31.215893 sonic NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet51 admin:1 oper:1 addr:58:69:6c:fb:21:22 ifindex:61 master:8
Sep  4 15:37:31.215893 sonic NOTICE teamd#teammgrd: :- setLagAdminStatus: Set port channel PortChannel0001 admin status to down
Sep  4 15:37:31.216079 sonic NOTICE swss#orchagent: message repeated 49598 times: [ :- set: setting attribute 0x10000004 status: SAI_STATUS_SUCCESS]
Sep  4 15:37:31.216079 sonic NOTICE swss#orchagent: :- flushFdbEntries: flush key: SAI_OBJECT_TYPE_FDB_FLUSH:oid:0x21000000000000, fields: 1
Sep  4 15:37:31.216126 sonic NOTICE swss#orchagent: :- recordFlushFdbEntries: flush key: SAI_OBJECT_TYPE_FDB_FLUSH:oid:0x21000000000000, fields: 1
Sep  4 15:37:31.216901 sonic INFO kernel: [23580.849965] Bridge: port 3(PortChannel0001) entered disabled state
Sep  4 15:37:31.219106 sonic NOTICE teamd#teammgrd: :- setLagMtu: Set port channel PortChannel0001 MTU to 9100
Sep  4 15:37:31.221545 sonic NOTICE swss#orchagent: :- meta_sai_on_fdb_flush_event_consolidated: processing consolidated fdb flush event of type: SAI_FDB_ENTRY_TYPE_DYNAMIC
Sep  4 15:37:31.223437 sonic NOTICE swss#orchagent: :- meta_sai_on_fdb_flush_event_consolidated: fdb flush took 0.001918 sec
Sep  4 15:37:31.223437 sonic NOTICE swss#orchagent: :- meta_sai_on_fdb_flush_event_consolidated: processing consolidated fdb flush event of type: SAI_FDB_ENTRY_TYPE_STATIC
Sep  4 15:37:31.225281 sonic NOTICE swss#orchagent: :- meta_sai_on_fdb_flush_event_consolidated: fdb flush took 0.001875 sec
Sep  4 15:37:31.225281 sonic NOTICE swss#orchagent: :- set: setting attribute 0x10000004 status: SAI_STATUS_SUCCESS
Sep  4 15:37:31.226730 sonic INFO lldp#lldpmgrd: Unable to retrieve description for port 'Ethernet51'. Not adding port description
Sep  4 15:37:31.226880 sonic DEBUG lldp#lldpmgrd: Running command: 'lldpcli configure ports Ethernet51 lldp portidsubtype local hundredGigE0/3'
Sep  4 15:37:49.352929 sonic ERR monit[541]: 'telemetry' process is not running
Sep  4 15:37:49.358523 sonic ERR monit[541]: 'dialout_client' process is not running
Sep  4 15:37:49.365042 sonic ERR monit[541]: 'sflowmgrd' process is not running
```

Describe the results you expected:
The portchannel member's (Ethernet51) oper status should be down.

Additional information you deem important (e.g. issue happens only occasionally):

**Output of `show version`:**

```

root@sonic:/home/admin# show version

SONiC Software Version: SONiC.master.399-07b9d7f4
Distribution: Debian 10.5
Kernel: 4.19.0-9-2-amd64
Build commit: 07b9d7f
Build date: Sun Aug 30 07:41:30 UTC 2020
Built by: johnar@jenkins-worker-8

Platform: x86_64-ruijie_b6510-48vs8cq-r0
HwSKU: B6510-48VS8CQ
ASIC: broadcom
Serial Number: G1W10072
Uptime: 17:26:38 up 2:51, 1 user, load average: 0.46, 0.37, 0.29

Docker images:
REPOSITORY                    TAG                  IMAGE ID      SIZE
docker-teamd                  latest               0da13af040db  386MB
docker-teamd                  master.399-07b9d7f4  0da13af040db  386MB
docker-sonic-mgmt-framework   latest               b3309c50c848  481MB
docker-sonic-mgmt-framework   master.399-07b9d7f4  b3309c50c848  481MB
docker-router-advertiser      latest               81563660196e  355MB
docker-router-advertiser      master.399-07b9d7f4  81563660196e  355MB
docker-platform-monitor       latest               bfc946290aa4  429MB
docker-platform-monitor       master.399-07b9d7f4  bfc946290aa4  429MB
docker-lldp                   latest               17607fd34cc8  383MB
docker-lldp                   master.399-07b9d7f4  17607fd34cc8  383MB
docker-dhcp-relay             latest               57d5a7e2d58d  362MB
docker-dhcp-relay             master.399-07b9d7f4  57d5a7e2d58d  362MB
docker-database               latest               ddb145fe6d62  355MB
docker-database               master.399-07b9d7f4  ddb145fe6d62  355MB
docker-orchagent              latest               2b0a4c92794f  400MB
docker-orchagent              master.399-07b9d7f4  2b0a4c92794f  400MB
docker-nat                    latest               eb5c6b6736c2  389MB
docker-nat                    master.399-07b9d7f4  eb5c6b6736c2  389MB
docker-sonic-telemetry        latest               6c4292e948fd  425MB
docker-sonic-telemetry        master.399-07b9d7f4  6c4292e948fd  425MB
docker-fpm-frr                latest               076744e3f438  402MB
docker-fpm-frr                master.399-07b9d7f4  076744e3f438  402MB
docker-sflow                  latest               1046ede2b365  390MB
docker-sflow                  master.399-07b9d7f4  1046ede2b365  390MB
docker-snmp                   latest               ffbf20d1540a  395MB
docker-snmp                   master.399-07b9d7f4  ffbf20d1540a  395MB
docker-syncd-brcm             latest               d62f6c218ce2  447MB
docker-syncd-brcm             master.399-07b9d7f4  d62f6c218ce2  447MB

root@sonic:/home/admin#
```

@prsunny (Contributor) commented Sep 8, 2020

This is expected behavior. For Ethernet (front panel) interfaces, `show int status` reports the physical link oper status, not the logical oper status. As you can see, the portchannel member state can be observed via `show int portchannel`.
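
For completeness, the member's LACP/selection state can also be inspected from teamd itself; a hedged example, assuming SONiC's default teamd container name:

```
docker exec teamd teamdctl PortChannel0001 state
```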

@tim-rj (Contributor, Author) commented Sep 10, 2020

When a physical port is added to a portchannel as a member port, shutting down the portchannel should also shut the physical port down so that it cannot communicate; that is what one expects of a portchannel member port. In fact, however, after the portchannel is shut down, the physical port can still communicate normally.
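
The portsyncd log above agrees with this: after the shutdown, PortChannel0001 is reported with admin:0 oper:0 while member Ethernet51 still shows admin:1 oper:1. The same can be confirmed from the kernel view (illustrative commands, using the names from this report):

```
ip -d link show PortChannel0001   # team device: admin down
ip -d link show Ethernet51        # member netdev: still UP
```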

@prsunny (Contributor) commented Sep 10, 2020

So are you saying the portchannel member must be "admin" down when the portchannel itself is shut down? The description says you are expecting "oper" down. Can you please clarify?

@tim-rj (Contributor, Author) commented Sep 11, 2020

What I expect is that when the portchannel is shut down, the member ports that belong to it are also shut down, and a member port cannot communicate normally until the portchannel comes back up or the port is removed from the portchannel.
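
Until such propagation is implemented, a manual workaround with standard sonic-utilities commands would be to shut the member down explicitly together with the portchannel:

```
config interface shutdown PortChannel0001
config interface shutdown Ethernet51
```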

@prsunny (Contributor) commented Sep 11, 2020

Got it, so that's a feature request that needs to be planned.
