Update lua script to convert poll_interval to MS #3

Status: Closed · wants to merge 1 commit

Conversation

@noaOrMlnx noaOrMlnx commented Aug 4, 2021

What I did
Update the rif_rates.lua script to multiply the poll interval by 1000 (converting seconds to milliseconds) in the script itself, instead of in the FlexCounter class.
Related also to sonic-sairedis/pull/878.
Why I did it
Time intervals were not calculated properly, so the reported rates were wrong.
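The rate math behind the fix can be sketched as follows. This is a hedged illustration, not the actual rif_rates.lua code: the helper names (`to_millis`, `bytes_per_second`) are made up, but they show why the poll interval must be converted to milliseconds exactly once. Doing the x1000 conversion in both the FlexCounter class and the script (or in neither) skews every reported rate by a factor of 1000.

```cpp
#include <cassert>

// Hypothetical helpers illustrating the unit fix; not taken from the PR.
// The poll interval is configured in seconds, so the rate script converts
// it to milliseconds before use.
static double to_millis(double poll_interval_seconds) {
    return poll_interval_seconds * 1000.0;
}

// Per-second byte rate from a byte delta observed over one poll interval,
// where the interval is expressed in milliseconds.
static double bytes_per_second(double delta_bytes, double poll_interval_ms) {
    return delta_bytes * 1000.0 / poll_interval_ms;
}
```

With a 5-second poll interval, 500 bytes received in one interval yields 100 B/s; if the interval were converted to milliseconds twice, the same delta would report 0.1 B/s.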
How I verified it
Checked that the output of the CLI and of Redis are close enough:

admin@sonic:~$ show interfaces counters rif -p 5
The rates are calculated within 5 seconds period
    IFACE    RX_OK     RX_BPS    RX_PPS    RX_ERR    TX_OK    TX_BPS    TX_PPS    TX_ERR
---------  -------  ---------  --------  --------  -------  --------  --------  --------
Ethernet0        5  83.90 B/s    1.00/s         1        0  0.00 B/s    0.00/s         0
127.0.0.1:6379[2]> hgetall RATES:oid:0x6000000000d9d
 1) "SAI_ROUTER_INTERFACE_STAT_IN_OCTETS_last"
 2) "3948"
 3) "SAI_ROUTER_INTERFACE_STAT_IN_PACKETS_last"
 4) "47"
 5) "SAI_ROUTER_INTERFACE_STAT_OUT_OCTETS_last"
 6) "0"
 7) "SAI_ROUTER_INTERFACE_STAT_OUT_PACKETS_last"
 8) "0"
 9) "RX_BPS"
10) "77.634107320096575"
11) "RX_PPS"
12) "0.92421556333448329"
13) "TX_BPS"
14) "0"
15) "TX_PPS"
16) "0"

Before the change (note that the Redis rates below do not match the CLI output):

admin@sonic:~$ show interfaces  counters rif -p 5
The rates are calculated within 5 seconds period
     IFACE    RX_OK       RX_BPS     RX_PPS    RX_ERR    TX_OK    TX_BPS    TX_PPS    TX_ERR
----------  -------  -----------  ---------  --------  -------  --------  --------  --------
Ethernet0        0     0.00 B/s     0.00/s         0        0  0.00 B/s    0.00/s         0
Ethernet28   29,894  501.05 KB/s  5964.85/s         1        0  0.00 B/s    0.00/s         0
127.0.0.1:6379[2]> hgetall "RATES:oid:0x6000000000517"
1) "SAI_ROUTER_INTERFACE_STAT_IN_OCTETS_last"
2) "17856552"
3) "SAI_ROUTER_INTERFACE_STAT_IN_PACKETS_last"
4) "212578"
5) "SAI_ROUTER_INTERFACE_STAT_OUT_OCTETS_last"
6) "0"
7) "SAI_ROUTER_INTERFACE_STAT_OUT_PACKETS_last"
8) "0"
9) "RX_BPS"
10) "67.774654124880783"
11) "RX_PPS"
12) "0.80684145386762807"
13) "TX_BPS"
14) "0"
15) "TX_PPS"
16) "0"

@noaOrMlnx noaOrMlnx closed this Aug 5, 2021
noaOrMlnx pushed a commit that referenced this pull request Apr 2, 2023
Currently, ASAN sometimes reports BufferOrch::m_buffer_type_maps and QosOrch::m_qos_maps as leaked. However, their lifetime is the lifetime of the process, so they are not really 'leaked'.
This also adds a simple way to add more suppressions later if required.

Example of ASAN report:

Direct leak of 48 byte(s) in 1 object(s) allocated from:
    #0 0x7f96aa952d30 in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xead30)
    #1 0x55ca1da9f789 in __static_initialization_and_destruction_0 /__w/2/s/orchagent/bufferorch.cpp:39
    #2 0x55ca1daa02af in _GLOBAL__sub_I_bufferorch.cpp /__w/2/s/orchagent/bufferorch.cpp:1321
    #3 0x55ca1e2a9cd4  (/usr/bin/orchagent+0xe89cd4)

Direct leak of 48 byte(s) in 1 object(s) allocated from:
    #0 0x7f96aa952d30 in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xead30)
    #1 0x55ca1da6d2da in __static_initialization_and_destruction_0 /__w/2/s/orchagent/qosorch.cpp:80
    #2 0x55ca1da6ecf2 in _GLOBAL__sub_I_qosorch.cpp /__w/2/s/orchagent/qosorch.cpp:2000
    #3 0x55ca1e2a9cd4  (/usr/bin/orchagent+0xe89cd4)

- What I did
Added an LSAN suppression config with a static-variable leak suppression.

- Why I did it
To suppress ASAN false positives

- How I verified it
Ran a test that produces the static-variable leak report and checked that the report has these leaks suppressed.
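The suppression file format LSAN accepts is one `leak:<pattern>` rule per line, where the pattern is matched against frames in the leak's stack trace. A minimal sketch, assuming the suppressions live in a file such as `lsan.supp` (the actual file name and path used by the PR are not shown here):

```
# Suppress "leaks" of objects allocated in static initializers
# (e.g. BufferOrch::m_buffer_type_maps, QosOrch::m_qos_maps),
# which live for the whole process lifetime.
leak:__static_initialization_and_destruction_0
```

The file is then picked up at runtime via `LSAN_OPTIONS=suppressions=lsan.supp`, which also makes it easy to append further rules later if more false positives show up.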

Signed-off-by: Yakiv Huryk <yhuryk@nvidia.com>
noaOrMlnx pushed a commit that referenced this pull request Sep 26, 2023
**What I did**

Fix the memory leak by converting the raw pointers in type_maps to smart pointers
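The general shape of such a fix can be sketched as follows. This is an illustrative, self-contained example; the type and function names are simplified stand-ins, not the actual orchagent declarations. Ownership of each per-type object table moves into a `std::unique_ptr`, so the tables are destroyed together with the map instead of leaking.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Simplified stand-ins for the orchagent types; not the real declarations.
struct referenced_object { int ref_count = 0; };
using object_map = std::map<std::string, referenced_object>;

// Before the fix: a std::map<std::string, object_map*> populated with
// `new object_map()` entries that were never deleted. After: the map
// owns its values, so nothing is reported as leaked.
static std::map<std::string, std::unique_ptr<object_map>> type_maps;

// Returns the table for a given type name, creating it on first use.
static object_map* get_type_map(const std::string& type_name) {
    auto& slot = type_maps[type_name];
    if (!slot) {
        slot = std::make_unique<object_map>();
    }
    return slot.get();
}
```

Callers that previously held raw `object_map*` pointers keep working unchanged, since `get_type_map` still hands out a non-owning raw pointer; only the ownership moved.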

**Why I did it**

```
Indirect leak of 83776 byte(s) in 476 object(s) allocated from:
    #0 0x7f0a2a414647 in operator new(unsigned long) ../../../../src/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x5555590cc923 in __gnu_cxx::new_allocator<...>::allocate(unsigned long, void const*) /usr/include/c++/10/ext/new_allocator.h:115
    #2 0x5555590cc923 in std::allocator_traits<...>::allocate(...) /usr/include/c++/10/bits/alloc_traits.h:460
    #3 0x5555590cc923 in std::_Rb_tree<...>::_M_get_node() /usr/include/c++/10/bits/stl_tree.h:584
    #4 0x5555590cc923 in std::_Rb_tree<...>::_M_create_node(...) /usr/include/c++/10/bits/stl_tree.h:634
    #5 0x5555590cc923 in std::_Rb_tree<...>::_M_emplace_hint_unique(...) /usr/include/c++/10/bits/stl_tree.h:2461
    #6 0x5555590e8757 in std::map<std::string, referenced_object>::operator[](std::string const&) /usr/include/c++/10/bits/stl_map.h:501
    #7 0x5555590d48b0 in Orch::setObjectReference(...) orchagent/orch.cpp:450
    #8 0x5555594ff66b in QosOrch::handleQueueTable(Consumer&, ...) orchagent/qosorch.cpp:1763
    #9 0x5555594edbd6 in QosOrch::doTask(Consumer&) orchagent/qosorch.cpp:2179
    #10 0x5555590c8743 in Consumer::drain() orchagent/orch.cpp:241
    #11 0x5555590c8743 in Consumer::drain() orchagent/orch.cpp:238
    #12 0x5555590c8743 in Consumer::execute() orchagent/orch.cpp:235
    #13 0x555559090dad in OrchDaemon::start() orchagent/orchdaemon.cpp:755
    #14 0x555558e9be25 in main orchagent/main.cpp:766
    #15 0x7f0a299b6d09 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x23d09)
```