Add --trace option to print full trace of HTTP requests and responses for failed test cases (#761)
```diff
@@ -283,6 +288,7 @@ func newH1Server(handler http.Handler, listenAddr string, tlsConf *tls.Config) (
 		Handler:           handler,
 		TLSConfig:         tlsConf,
 		ReadHeaderTimeout: 5 * time.Second,
+		ErrorLog:          nopLogger(),
```
This is an unrelated improvement. I would occasionally see log lines from the reference server if something failed during the TLS handshake (some cases in the connect-kotlin conformance tests were hitting timeouts during the handshake, so the server got an unexpected EOF and logged it to stderr).

So this just silences that log output. If we re-enable it in the future, it should use the `errWriter` passed in (instead of the default logger, which writes to `os.Stderr`).
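For context, a no-op logger like that is usually just a `*log.Logger` backed by `io.Discard`. The sketch below shows that common pattern; the reference server's actual `nopLogger` helper and server constructor may be written differently:

```go
// Sketch only: a *log.Logger that discards everything, assigned to
// http.Server.ErrorLog so TLS-handshake errors are not written to stderr.
package example

import (
	"io"
	"log"
	"net/http"
	"time"
)

func nopLogger() *log.Logger {
	return log.New(io.Discard, "", 0) // no output, no prefix, no flags
}

func newH1ServerSketch(handler http.Handler) *http.Server {
	return &http.Server{
		Handler:           handler,
		ReadHeaderTimeout: 5 * time.Second,
		ErrorLog:          nopLogger(), // silences "http: TLS handshake error ..." lines
	}
}
```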
```yaml
requestMessages:
  - "@type": type.googleapis.com/connectrpc.conformance.v1.ClientStreamRequest
responseDefinition:
  responseDelayMs: 100
```
This seemed a bit tight -- it gives a margin for error of only 50 milliseconds, which could occasionally be exceeded on slow and/or highly loaded CI worker machines. So I expanded it to 195 milliseconds.
```diff
 requestMessages:
   - "@type": type.googleapis.com/connectrpc.conformance.v1.BidiStreamRequest
 responseDefinition:
-  responseDelayMs: 250
+  responseDelayMs: 200
```
I could have left this at 250, since the margin here was fine. But I changed it to 200 just for consistency with the other cases.
This is a bit of a bear -- most of the complexity is in the minimal parsing that the middleware must do on the wire format, both to report relevant information about it in the trace and to capture the end-of-stream message in the Connect streaming and gRPC-Web protocols.
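For readers unfamiliar with the enveloping, here is a rough sketch of the kind of parsing involved. This is not the actual `tracer` code and all names are made up for illustration; the layout (a 1-byte flags field plus a 4-byte big-endian length in front of each message) and the flag bits come from the Connect streaming and gRPC-Web protocol descriptions:

```go
package trace // hypothetical package name, not the real "tracer" package

import (
	"encoding/binary"
	"io"
)

const (
	connectFlagEndStream = 0x02 // Connect streaming: end-of-stream message
	grpcWebFlagTrailers  = 0x80 // gRPC-Web: trailers frame
)

// readEnvelope reads one enveloped frame: a 1-byte flags field followed by
// a 4-byte big-endian length, and then the payload itself.
func readEnvelope(r io.Reader) (flags byte, payload []byte, err error) {
	var prefix [5]byte
	if _, err := io.ReadFull(r, prefix[:]); err != nil {
		return 0, nil, err
	}
	payload = make([]byte, binary.BigEndian.Uint32(prefix[1:]))
	if _, err := io.ReadFull(r, payload); err != nil {
		return 0, nil, err
	}
	return prefix[0], payload, nil
}

// isEndOfStream reports whether the flags mark the frame as the protocol's
// end-of-stream (Connect) or trailers (gRPC-Web) message.
func isEndOfStream(flags byte) bool {
	return flags&connectFlagEndStream != 0 || flags&grpcWebFlagTrailers != 0
}
```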
So most of the code is in the new `tracer` internal package, and that's where all of the gnarly bits are, too. The integrations into the command line and into the reference client and server are pretty straightforward.

This does not instrument the grpc-go reference client and server implementations. We could add that later. It's relatively straightforward for the server, since it can be configured to use `net/http` (though it is not currently configured this way). But the grpc-go client cannot be configured that way. So to collect traces, we would have to point the grpc-go client at a local (in-process) HTTP proxy that records the traces and then forwards the requests/responses to the real backend. This seems like enough of a headache that we can punt on it and maybe revisit it if people really need traces against the grpc-go implementations.