assertion failed: stream.buffered_send_data >= len #256

Closed
goffrie opened this issue Apr 12, 2018 · 1 comment · Fixed by #258

@goffrie
Contributor

goffrie commented Apr 12, 2018

I was trying to make a test case and I ran into this panic. Seems like we clear_queue when receiving an RST_STREAM, which resets buffered_send_data, but the next time in pop_frame we don't see is_peer_reset() because the stream state is already Closed(Scheduled(CANCEL)).

Here's the test case that triggers the panic. It's a bit of a mess and it probably shouldn't pass as-is, but it shouldn't panic where it does.

#[test]
fn rst_while_sending() {
    let _ = ::env_logger::try_init();

    let mock = mock_io::Builder::new()
        .write(b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n")
        .write(frames::SETTINGS)
        .read(frames::SETTINGS)
        .wait(Duration::from_millis(10))
        .write(&[
            // POST /
            0, 0, 16, 1, 4, 0, 0, 0, 1, 131, 135, 65, 139, 157, 41,
            172, 75, 143, 168, 233, 25, 151, 33, 233, 132,
        ])
        .write(&[
            // DATA
            0, 64, 0, 0, 0, 0, 0, 0, 1
        ])
        .write(&[
            // DATA
            0; 16384
        ])
        .write(frames::SETTINGS_ACK)
        .read(frames::SETTINGS_ACK)
        .read(&[
            // RST_STREAM
            0, 0, 4, 3, 0, 0, 0, 0, 1,
            0, 0, 0, 2,
        ])
        .wait(Duration::from_millis(1000))
        .write(&[
            // DATA
            0, 64, 0, 0, 0, 0, 0, 0, 1
        ])
        .write(&[
            // DATA
            0; 16384
        ])
        .write(&[
            // DATA
            0, 64, 0, 0, 1, 0, 0, 0, 1
        ])
        .write(&[
            // DATA
            0; 16384
        ])
        .write(&[0, 0, 8, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1])
        .build();

    use futures::Async;
    let client = client::handshake(mock)
        .expect("handshake")
        .and_then(|(mut client, mut conn)| {
            let request = Request::builder()
                .method(Method::POST)
                .uri("https://http2.akamai.com/")
                .body(())
                .unwrap();

            let (_resp, mut stream) = client.send_request(request, false).expect("send_request");
            stream.send_data(vec![0u8; 16384*3].into(), true).unwrap();
            drop(stream);
            drop(_resp);
            futures::future::poll_fn(move || {
                trace!("POLLING CONN");
                match conn.poll() {
                    Ok(Async::NotReady) => trace!("CONN: NOT READY"),
                    r => {
                        trace!("CONN: READY {:?}", r);
                    }
                }
                Ok(Async::NotReady)
            }).map(|()| {
                trace!("DONE");
            })
        });

    client.wait().expect("wait");
}
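The accounting bug described above can be boiled down to a few lines. The following is a simplified, self-contained sketch (hypothetical names and types, not h2's actual internals): clearing the queue zeroes `buffered_send_data`, but because the stream is already `Closed(Scheduled(CANCEL))`, `is_peer_reset()` never fires and the queued frame is not dropped, so the next `pop_frame` subtracts a frame length from zero:

```rust
// Hypothetical, simplified model of the send-buffer accounting from the
// panic; names mirror the issue report, not h2's real types.
#[derive(Debug)]
struct Stream {
    buffered_send_data: usize,
}

// Simplified pop_frame accounting: returns the new buffered size, or None
// on the underflow that the real code catches with
// `assert!(stream.buffered_send_data >= len)`.
fn pop_frame(stream: &Stream, len: usize) -> Option<usize> {
    stream.buffered_send_data.checked_sub(len)
}

fn main() {
    let mut stream = Stream { buffered_send_data: 16384 };
    // A 16384-byte DATA frame is still queued for this stream.
    let queued_len = 16384;

    // RST_STREAM arrives: clear_queue resets the counter, but the stream
    // is already Closed(Scheduled(CANCEL)), so is_peer_reset() is false
    // and the queued frame is never removed from the send queue.
    stream.buffered_send_data = 0;

    // The next pop_frame subtracts 16384 from 0: assertion failure in
    // debug builds, wraparound in release builds.
    assert_eq!(pop_frame(&stream, queued_len), None);
    println!("bug reproduced: pop_frame would underflow");
}
```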
@hawkw
Collaborator

hawkw commented Apr 13, 2018

Thanks for reporting the issue, @goffrie! I've managed to reproduce it using your test, and I think I may have a fix ready pretty soon.


As a side note, as far as I can tell, the test you've posted would never terminate (even if it weren't hitting the panic), since the poll_fn always returns Async::NotReady:

futures::future::poll_fn(move || {
    trace!("POLLING CONN");
    match conn.poll() {
        Ok(Async::NotReady) => trace!("CONN: NOT READY"),
        r => {
            trace!("CONN: READY {:?}", r);
        }
    }
    Ok(Async::NotReady)
})

Since this has no case where it returns Async::Ready, it will be polled again and again forever.
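A terminating version would propagate the inner readiness instead of discarding it. Here is a self-contained sketch of that control flow, using a stand-in `Async` enum and a dummy `Conn` rather than the real futures 0.1 and h2 types:

```rust
// Stand-in for futures 0.1's Async, so this sketch compiles on its own.
#[derive(Debug, PartialEq)]
enum Async<T> {
    Ready(T),
    NotReady,
}

// Dummy connection that becomes ready after a fixed number of polls.
struct Conn {
    polls_left: u32,
}

impl Conn {
    fn poll(&mut self) -> Result<Async<()>, ()> {
        if self.polls_left == 0 {
            Ok(Async::Ready(()))
        } else {
            self.polls_left -= 1;
            Ok(Async::NotReady)
        }
    }
}

fn main() {
    let mut conn = Conn { polls_left: 3 };

    // Unlike the closure in the test, this propagates Ready outward,
    // so the enclosing future can complete.
    let mut poll_once = || -> Result<Async<()>, ()> {
        match conn.poll()? {
            Async::Ready(()) => Ok(Async::Ready(())),
            Async::NotReady => Ok(Async::NotReady),
        }
    };

    let mut polls = 0;
    loop {
        polls += 1;
        if let Ok(Async::Ready(())) = poll_once() {
            break;
        }
    }
    assert_eq!(polls, 4); // terminates instead of spinning forever
    println!("future completed after {} polls", polls);
}
```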

@hawkw hawkw self-assigned this Apr 13, 2018
@hawkw hawkw added the bug label Apr 13, 2018
hawkw added a commit that referenced this issue Apr 16, 2018
Fixes #256.

This PR changes `state::recv_reset` so that any closed stream with queued send data is immediately reset (and thus has its queue cleared) on receipt of a `RST_STREAM` frame from the remote.

This fixes the panic encountered by the test @goffrie posted in #256, where the stream is scheduled to close, receives a `RST_STREAM` frame, and sets the buffered capacity to 0, but isn't removed from the send queue, so we hit an assertion (or wrap, if debug assertions are disabled) when subtracting a sent frame's size from the buffered size.
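The shape of that fix can be sketched as follows (hypothetical, heavily simplified types; the real change lives in h2's `state::recv_reset` and its stream store): when a `RST_STREAM` arrives for a stream that is already closed but still has buffered send data, force it into the peer-reset state so the queued frames are dropped together with the counter:

```rust
// Simplified stand-ins for the stream states involved; in the real crate
// "Closed" here corresponds to Closed(Scheduled(CANCEL)).
#[derive(Debug, PartialEq, Clone, Copy)]
enum State {
    Open,
    Closed,
    PeerReset,
}

struct Stream {
    state: State,
    buffered_send_data: usize,
    queued_frames: Vec<usize>, // lengths of frames waiting to be sent
}

// Sketch of the fixed recv_reset: a stream that still has buffered send
// data is reset immediately, clearing the queue and the counter together
// so they can never disagree in pop_frame.
fn recv_reset(stream: &mut Stream) {
    if stream.state == State::Open || stream.buffered_send_data > 0 {
        stream.state = State::PeerReset;
        stream.queued_frames.clear();
        stream.buffered_send_data = 0;
    }
}

fn main() {
    let mut stream = Stream {
        state: State::Closed, // already scheduled to close with CANCEL
        buffered_send_data: 16384,
        queued_frames: vec![16384],
    };
    recv_reset(&mut stream);
    // Queue and counter now agree, so pop_frame cannot underflow.
    assert!(stream.queued_frames.is_empty());
    assert_eq!(stream.buffered_send_data, 0);
    assert_eq!(stream.state, State::PeerReset);
    println!("queue cleared on RST_STREAM");
}
```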

Signed-off-by: Eliza Weisman <eliza@buoyant.io>