
Mark WebTransport streams keep-alive #1389

Open · wants to merge 3 commits into main

Conversation

@MayyaSunil (Collaborator)

This PR is intended to resolve #1369

@martinthomson (Member) left a comment

See my comments on the issue.

@martinthomson (Member) left a comment

I don't think you have the right places for these calls.

You need one for the server and one for the client.

I think that the client one goes in webtransport_create_session and the server one goes in webtransport_accept_session.

The points you have identified are about streams created within the session, not the session stream.

Finally, I would like to see test cases for this, showing that keep-alives (PING) are sent if the connection is left idle for too long with only a WebTransport session active. There are existing keep-alive test cases that you can base these on.
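
To make the suggestion above concrete, here is a minimal, self-contained sketch of the idea. It is not the actual neqo API: every type and method name below (including `stream_keep_alive`) is an illustrative stand-in. The point is that the stream carrying the WebTransport session is marked once on the client when the session is created and once on the server when it is accepted, so the connection keeps itself alive while the session is open.

```rust
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct StreamId(u64);

// Stand-in for a QUIC connection that tracks which streams want keep-alive.
#[derive(Default)]
struct Connection {
    keep_alive_streams: HashSet<StreamId>,
}

impl Connection {
    /// Mark or unmark a stream as keeping the connection alive.
    fn stream_keep_alive(&mut self, id: StreamId, keep: bool) {
        if keep {
            self.keep_alive_streams.insert(id);
        } else {
            self.keep_alive_streams.remove(&id);
        }
    }

    /// Keep-alive PINGs are needed while any marked stream is open.
    fn needs_keep_alive(&self) -> bool {
        !self.keep_alive_streams.is_empty()
    }
}

// Client side: mark the session stream when the session is created.
fn webtransport_create_session(conn: &mut Connection, session_stream: StreamId) {
    conn.stream_keep_alive(session_stream, true);
}

// Server side: mark the session stream when the session is accepted.
fn webtransport_accept_session(conn: &mut Connection, session_stream: StreamId) {
    conn.stream_keep_alive(session_stream, true);
}

fn main() {
    let mut client = Connection::default();
    webtransport_create_session(&mut client, StreamId(0));
    let mut server = Connection::default();
    webtransport_accept_session(&mut server, StreamId(0));
    assert!(client.needs_keep_alive() && server.needs_keep_alive());
}
```

With only the session stream marked, the connection still reports that it needs keep-alives, which is the behavior the requested tests would exercise.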

@larseggert (Collaborator)

@MayyaSunil are you still intending to land this change?

@MayyaSunil (Collaborator, Author)

@larseggert, thanks for checking.
I would like to land this.
If you or anyone else is planning to submit a patch for the bug, please feel free to do so; I won't be able to work on this until my current assignments are completed (by the end of March).

@larseggert (Collaborator)

@MayyaSunil any update on when you might get to this?

@MayyaSunil (Collaborator, Author)

> @MayyaSunil any update on when you might get to this?

I will resume this early next week and hope to complete it by the end of that week.

@larseggert (Collaborator)

OK. Making this a draft PR until it's ready for review; please change it back then.

larseggert marked this pull request as draft on April 15, 2024 at 05:48

Benchmark results

Performance differences relative to 2463618.

  • drain a timer quickly time: [311.82 ns 319.15 ns 325.94 ns]
    change: [-29.772% -28.465% -27.032%] (p = 0.00 < 0.05)
    💚 Performance has improved.

  • coalesce_acked_from_zero 1+1 entries
    time: [202.76 ns 203.28 ns 203.81 ns]
    change: [+2.4616% +2.8914% +3.3435%] (p = 0.00 < 0.05)
    💔 Performance has regressed.

  • coalesce_acked_from_zero 3+1 entries
    time: [248.40 ns 248.99 ns 249.61 ns]
    change: [+3.3741% +3.7299% +4.0540%] (p = 0.00 < 0.05)
    💔 Performance has regressed.

  • coalesce_acked_from_zero 10+1 entries
    time: [247.08 ns 247.76 ns 248.59 ns]
    change: [+3.0582% +3.6074% +4.1948%] (p = 0.00 < 0.05)
    💔 Performance has regressed.

  • coalesce_acked_from_zero 1000+1 entries
    time: [226.01 ns 226.33 ns 226.76 ns]
    change: [-20.439% -6.8145% +3.0254%] (p = 0.56 > 0.05)
    No change in performance detected.

  • RxStreamOrderer::inbound_frame()
    time: [119.26 ms 119.33 ms 119.40 ms]
    change: [+0.0014% +0.2059% +0.3459%] (p = 0.01 < 0.05)
    Change within noise threshold.

  • transfer/Run multiple transfers with varying seeds
    time: [120.41 ms 120.73 ms 121.04 ms]
    thrpt: [33.048 MiB/s 33.132 MiB/s 33.219 MiB/s]
    change:
    time: [+0.6807% +1.0546% +1.4080%] (p = 0.00 < 0.05)
    thrpt: [-1.3885% -1.0435% -0.6761%]
    Change within noise threshold.

  • transfer/Run multiple transfers with the same seed
    time: [120.79 ms 120.99 ms 121.21 ms]
    thrpt: [33.000 MiB/s 33.061 MiB/s 33.117 MiB/s]
    change:
    time: [-0.0877% +0.1479% +0.4062%] (p = 0.25 > 0.05)
    thrpt: [-0.4045% -0.1477% +0.0877%]
    No change in performance detected.

  • 1-conn/1-100mb-resp (aka. Download)/client
    time: [1.1127 s 1.1202 s 1.1281 s]
    thrpt: [88.646 MiB/s 89.272 MiB/s 89.875 MiB/s]
    change:
    time: [-3.4496% -2.1281% -0.9430%] (p = 0.00 < 0.05)
    thrpt: [+0.9520% +2.1743% +3.5729%]
    Change within noise threshold.

  • 1-conn/10_000-parallel-1b-resp (aka. RPS)/client
    time: [429.92 ms 432.15 ms 434.43 ms]
    thrpt: [23.019 Kelem/s 23.140 Kelem/s 23.260 Kelem/s]
    change:
    time: [-0.3675% +0.3151% +1.0375%] (p = 0.39 > 0.05)
    thrpt: [-1.0269% -0.3141% +0.3689%]
    No change in performance detected.

  • 1-conn/1-1b-resp (aka. HPS)/client
    time: [50.554 ms 50.876 ms 51.220 ms]
    thrpt: [19.524 elem/s 19.656 elem/s 19.781 elem/s]
    change:
    time: [-0.7285% +0.2310% +1.2101%] (p = 0.64 > 0.05)
    thrpt: [-1.1956% -0.2305% +0.7339%]
    No change in performance detected.

Client/server transfer results

Transfer of 134217728 bytes over loopback.

| Client | Server | CC    | Pacing | Mean [ms]      | Min [ms] | Max [ms] | Relative |
|--------|--------|-------|--------|----------------|----------|----------|----------|
| msquic | msquic |       |        | 644.6 ± 254.5  | 398.7    | 1254.7   | 1.00     |
| neqo   | msquic | reno  | on     | 1065.4 ± 240.3 | 762.4    | 1414.0   | 1.00     |
| neqo   | msquic | reno  |        | 994.7 ± 274.8  | 763.2    | 1539.2   | 1.00     |
| neqo   | msquic | cubic | on     | 954.2 ± 234.3  | 784.7    | 1390.4   | 1.00     |
| neqo   | msquic | cubic |        | 923.7 ± 252.5  | 754.8    | 1435.4   | 1.00     |
| msquic | neqo   | reno  | on     | 4495.0 ± 233.5 | 4243.6   | 4915.4   | 1.00     |
| msquic | neqo   | reno  |        | 4454.3 ± 226.4 | 4104.9   | 4931.4   | 1.00     |
| msquic | neqo   | cubic | on     | 4481.1 ± 182.1 | 4220.8   | 4763.1   | 1.00     |
| msquic | neqo   | cubic |        | 4401.3 ± 194.7 | 4131.6   | 4626.2   | 1.00     |
| neqo   | neqo   | reno  | on     | 4185.6 ± 320.6 | 3906.2   | 4849.4   | 1.00     |
| neqo   | neqo   | reno  |        | 3737.2 ± 343.4 | 3379.3   | 4470.1   | 1.00     |
| neqo   | neqo   | cubic | on     | 4426.0 ± 215.8 | 4085.8   | 4849.5   | 1.00     |
| neqo   | neqo   | cubic |        | 4381.3 ± 183.8 | 4108.5   | 4703.5   | 1.00     |


@larseggert (Collaborator)

@MayyaSunil is this good to go functionality-wise? Does it still need a test?

@MayyaSunil (Collaborator, Author)

@larseggert It's not okay functionality-wise and still needs a test.

@larseggert (Collaborator)

CC @jesup


github-actions bot commented Dec 3, 2024

Failed Interop Tests

QUIC Interop Runner, client vs. server, differences relative to 9143d73.

neqo-latest as client

neqo-latest as server

All results

Succeeded Interop Tests

QUIC Interop Runner, client vs. server

neqo-latest as client

neqo-latest as server

Unsupported Interop Tests

QUIC Interop Runner, client vs. server

neqo-latest as client

neqo-latest as server

@larseggert (Collaborator)

WebTransport newbie here. Trying to help wrap this up.

> I think that the client one goes in webtransport_create_session

The call to create_bidi_transport_stream in that function already enables keep-alive. Hence, are we done for the client?

> and the server one goes in webtransport_accept_session.

I've made that addition and added a test. WDYT?
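
For reference, here is a rough, self-contained sketch of what such a test asserts. The idle-timer model below is a stand-in (the real test would build on neqo's existing keep-alive tests), and the "PING at half the idle timeout" rule is an assumption of the sketch, not a claim about neqo's exact timing.

```rust
use std::time::Duration;

/// What the connection would do after a stretch of silence.
#[derive(Debug, PartialEq)]
enum IdleAction {
    Wait,
    SendPing,
    Close,
}

// Stand-in for the connection's idle-timeout state.
struct IdleTimer {
    idle_timeout: Duration,
    keep_alive: bool,
}

impl IdleTimer {
    fn on_idle(&self, elapsed: Duration) -> IdleAction {
        if elapsed >= self.idle_timeout {
            IdleAction::Close
        } else if self.keep_alive && elapsed >= self.idle_timeout / 2 {
            // Stand-in rule: send a keep-alive PING halfway to the idle
            // deadline whenever keep-alive is wanted.
            IdleAction::SendPing
        } else {
            IdleAction::Wait
        }
    }
}

#[test]
fn webtransport_session_keeps_connection_alive() {
    // Session stream marked keep-alive: a PING is scheduled before the
    // idle deadline, so the connection stays up.
    let with_session = IdleTimer { idle_timeout: Duration::from_secs(30), keep_alive: true };
    assert_eq!(with_session.on_idle(Duration::from_secs(20)), IdleAction::SendPing);

    // Nothing marked keep-alive: the connection just drifts toward close.
    let without = IdleTimer { idle_timeout: Duration::from_secs(30), keep_alive: false };
    assert_eq!(without.on_idle(Duration::from_secs(20)), IdleAction::Wait);
    assert_eq!(without.on_idle(Duration::from_secs(30)), IdleAction::Close);
}
```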

larseggert marked this pull request as ready for review on December 3, 2024 at 16:22

codecov bot commented Dec 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.39%. Comparing base (9143d73) to head (183237d).
Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1389   +/-   ##
=======================================
  Coverage   95.39%   95.39%           
=======================================
  Files         113      113           
  Lines       36683    36682    -1     
=======================================
  Hits        34993    34993           
+ Misses       1690     1689    -1     


Labels: None yet
Projects: None yet
Development: Successfully merging this pull request may close these issues: Mark WebTransport streams keep-alive
3 participants