UDP loss data not calculated correctly? #1534
@malanmoon, in principle, what you see is possible when UDP packets are received out of order. E.g., if in interval x the last two packets received are 90 and 100, the report for this interval will show that 9 packets (91-99) are lost. However, if these packets are received in interval x+1, then in total no packet is lost. These late packets are counted in the "received out-of-order" figure in the server output.

However, since the test includes more than 2^31 packets, the packet counters overflowed (note that the reported total of 2147483647 received packets is exactly the maximum value of a signed 32-bit integer: 2^31 - 1). To prevent the packet counter overflow you should use the

It would help if you could try the following two tests and share the summary lines reported by the server:
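To illustrate the point about out-of-order packets, here is a minimal sketch (not iperf3's actual implementation; function and variable names are hypothetical) of gap-based per-interval loss counting, where a sequence-number jump is provisionally counted as loss and a late arrival in a following interval cancels it:

```python
# Hypothetical sketch of gap-based UDP loss counting, as described above.
# Not iperf3's real code; names and structure are illustrative only.

INT32_MAX = 2**31 - 1  # 2147483647, the saturated total seen in the report

def count_interval(packets, expected_next, lost, out_of_order):
    """Process one interval's sequence numbers.

    A jump past the expected sequence number counts the gap as lost;
    a packet older than expected is a late (out-of-order) arrival that
    cancels one previously counted loss.
    """
    for seq in packets:
        if seq == expected_next:
            expected_next += 1
        elif seq > expected_next:
            lost += seq - expected_next  # gap: provisionally counted as lost
            expected_next = seq + 1
        else:
            out_of_order += 1            # late arrival
            lost -= 1                    # it was not actually lost
    return expected_next, lost, out_of_order

# Interval x: packets 90 then 100 arrive -> 9 packets (91-99) look lost.
nxt, lost, ooo = count_interval([90, 100], 90, 0, 0)
# Interval x+1: the "missing" 91-99 arrive late -> net loss drops back to 0,
# and they show up as 9 out-of-order packets instead.
nxt, lost, ooo = count_interval(list(range(91, 100)), nxt, lost, ooo)
```

This matches the behavior described above: the interval-x report shows 9 lost packets, but the end-of-test totals show 0 lost and 9 received out-of-order.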
I second the suggestion for
Per tests I ran, using

@malanmoon, I believe that it would still be useful to run the tests as suggested above (building and using the iperf3 from the PR would be better).
iperf 3.12 (cJSON 1.7.15)

I created a UDP session for 24h. The server output's "Lost/Total Datagrams" column showed 0/2147483647 (0%), while there is packet loss, albeit minimal.