build       compile packages and dependencies
+clean       remove object files
+doc         show documentation for package or symbol
+env         print Go environment information
+bug         start a bug report
+fix         run go tool fix on packages
+fmt         run gofmt on package sources
+generate    generate Go files by processing source
+get         download and install packages and dependencies
+install     compile and install packages and dependencies
+list        list packages
+run         compile and run Go program
+test        test packages
+tool        run specified go tool
+version     print Go version
+vet         run go tool vet on packages
+
+
+Use "go help [command]" for more information about a command.
+
+
+Additional help topics:
+
+
c           calling between Go and C
+buildmode   description of build modes
+filetype    file types
+gopath      GOPATH environment variable
+environment environment variables
+importpath  import path syntax
+packages    description of package lists
+testflag    description of testing flags
+testfunc    description of testing functions
+
+
+Use "go help [topic]" for more information about that topic.
+
+
Compile packages and dependencies
+
+Usage:
+
+
go build [-o output] [-i] [build flags] [packages]
+
+
+Build compiles the packages named by the import paths,
+along with their dependencies, but it does not install the results.
+
+
+If the arguments to build are a list of .go files, build treats
+them as a list of source files specifying a single package.
+
+
+When compiling a single main package, build writes
+the resulting executable to an output file named after
+the first source file ('go build ed.go rx.go' writes 'ed' or 'ed.exe')
+or the source code directory ('go build unix/sam' writes 'sam' or 'sam.exe').
+The '.exe' suffix is added when writing a Windows executable.
+
+
+When compiling multiple packages or a single non-main package,
+build compiles the packages but discards the resulting object,
+serving only as a check that the packages can be built.
+
+
+When compiling packages, build ignores files that end in '_test.go'.
+
+
+The -o flag, only allowed when compiling a single package,
+forces build to write the resulting executable or object
+to the named output file, instead of the default behavior described
+in the last two paragraphs.
+
+
+The -i flag installs the packages that are dependencies of the target.
+
+
+The build flags are shared by the build, clean, get, install, list, run,
+and test commands:
+
+
-a
+ force rebuilding of packages that are already up-to-date.
+-n
+ print the commands but do not run them.
+-p n
+ the number of programs, such as build commands or
+ test binaries, that can be run in parallel.
+ The default is the number of CPUs available.
+-race
+ enable data race detection.
+ Supported only on linux/amd64, freebsd/amd64, darwin/amd64 and windows/amd64.
+-msan
+ enable interoperation with memory sanitizer.
+ Supported only on linux/amd64,
+ and only with Clang/LLVM as the host C compiler.
+-v
+ print the names of packages as they are compiled.
+-work
+ print the name of the temporary work directory and
+ do not delete it when exiting.
+-x
+ print the commands.
+
+-asmflags 'flag list'
+ arguments to pass on each go tool asm invocation.
+-buildmode mode
+ build mode to use. See 'go help buildmode' for more.
+-compiler name
+ name of compiler to use, as in runtime.Compiler (gccgo or gc).
+-gccgoflags 'arg list'
+ arguments to pass on each gccgo compiler/linker invocation.
+-gcflags 'arg list'
+ arguments to pass on each go tool compile invocation.
+-installsuffix suffix
+ a suffix to use in the name of the package installation directory,
+ in order to keep output separate from default builds.
+ If using the -race flag, the install suffix is automatically set to race
+ or, if set explicitly, has _race appended to it. Likewise for the -msan
+ flag. Using a -buildmode option that requires non-default compile flags
+ has a similar effect.
+-ldflags 'flag list'
+ arguments to pass on each go tool link invocation.
+-linkshared
+ link against shared libraries previously created with
+ -buildmode=shared.
+-pkgdir dir
+ install and load all packages from dir instead of the usual locations.
+ For example, when building with a non-standard configuration,
+ use -pkgdir to keep generated packages in a separate location.
+-tags 'tag list'
+ a list of build tags to consider satisfied during the build.
+ For more information about build tags, see the description of
+ build constraints in the documentation for the go/build package.
+-toolexec 'cmd args'
+ a program to use to invoke toolchain programs like vet and asm.
+ For example, instead of running asm, the go command will run
+ 'cmd args /path/to/asm <arguments for asm>'.
+
+
+The list flags accept a space-separated list of strings. To embed spaces
+in an element in the list, surround it with either single or double quotes.
+
+
+For more about specifying packages, see 'go help packages'.
+For more about where packages and binaries are installed,
+run 'go help gopath'.
+For more about calling between Go and C/C++, run 'go help c'.
+
+
+Note: Build adheres to certain conventions such as those described
+by 'go help gopath'. Not all projects can follow these conventions,
+however. Installations that have their own conventions or that use
+a separate software build system may choose to use lower-level
+invocations such as 'go tool compile' and 'go tool link' to avoid
+some of the overheads and design decisions of the build tool.
+
+
+See also: go install, go get, go clean.
+
+
Remove object files
+
+Usage:
+
+
go clean [-i] [-r] [-n] [-x] [build flags] [packages]
+
+
+Clean removes object files from package source directories.
+The go command builds most objects in a temporary directory,
+so go clean is mainly concerned with object files left by other
+tools or by manual invocations of go build.
+
+
+Specifically, clean removes the following files from each of the
+source directories corresponding to the import paths:
+
+
_obj/            old object directory, left from Makefiles
+_test/           old test directory, left from Makefiles
+_testmain.go     old gotest file, left from Makefiles
+test.out         old test log, left from Makefiles
+build.out        old test log, left from Makefiles
+*.[568ao]        object files, left from Makefiles
+
+DIR(.exe)        from go build
+DIR.test(.exe)   from go test -c
+MAINFILE(.exe)   from go build MAINFILE.go
+*.so             from SWIG
+
+
+In the list, DIR represents the final path element of the
+directory, and MAINFILE is the base name of any Go source
+file in the directory that is not included when building
+the package.
+
+
+The -i flag causes clean to remove the corresponding installed
+archive or binary (what 'go install' would create).
+
+
+The -n flag causes clean to print the remove commands it would execute,
+but not run them.
+
+
+The -r flag causes clean to be applied recursively to all the
+dependencies of the packages named by the import paths.
+
+
+The -x flag causes clean to print remove commands as it executes them.
+
+
+For more about build flags, see 'go help build'.
+
+
+For more about specifying packages, see 'go help packages'.
+
+
Show documentation for package or symbol
+
+Usage:
+
+
go doc [-u] [-c] [package|[package.]symbol[.method]]
+
+
+Doc prints the documentation comments associated with the item identified by its
+arguments (a package, const, func, type, var, or method) followed by a one-line
+summary of each of the first-level items "under" that item (package-level
+declarations for a package, methods for a type, etc.).
+
+
+Doc accepts zero, one, or two arguments.
+
+
+Given no arguments, that is, when run as
+
+
go doc
+
+
+it prints the package documentation for the package in the current directory.
+If the package is a command (package main), the exported symbols of the package
+are elided from the presentation unless the -cmd flag is provided.
+
+
+When run with one argument, the argument is treated as a Go-syntax-like
+representation of the item to be documented. What the argument selects depends
+on what is installed in GOROOT and GOPATH, as well as the form of the argument,
+which is schematically one of these:
+
+	go doc <pkg>
+	go doc <sym>[.<method>]
+	go doc [<pkg>.]<sym>[.<method>]
+	go doc [<pkg>.][<sym>.]<method>
+
+The first item in this list matched by the argument is the one whose documentation
+is printed. (See the examples below.) However, if the argument starts with a capital
+letter it is assumed to identify a symbol or method in the current directory.
+
+
+For packages, the order of scanning is determined lexically in breadth-first order.
+That is, the package presented is the one that matches the search and is nearest
+the root and lexically first at its level of the hierarchy. The GOROOT tree is
+always scanned in its entirety before GOPATH.
+
+
+If there is no package specified or matched, the package in the current
+directory is selected, so "go doc Foo" shows the documentation for symbol Foo in
+the current package.
+
+
+The package path must be either a qualified path or a proper suffix of a
+path. The go tool's usual package mechanism does not apply: package path
+elements like . and ... are not implemented by go doc.
+
+
+When run with two arguments, the first must be a full package path (not just a
+suffix), and the second is a symbol or symbol and method; this is similar to the
+syntax accepted by godoc:
+
+
go doc <pkg> <sym>[.<method>]
+
+
+In all forms, when matching symbols, lower-case letters in the argument match
+either case but upper-case letters match exactly. This means that there may be
+multiple matches of a lower-case argument in a package if different symbols have
+different cases. If this occurs, documentation for all matches is printed.
+
+
+Examples:
+
+
go doc
+ Show documentation for current package.
+go doc Foo
+ Show documentation for Foo in the current package.
+ (Foo starts with a capital letter so it cannot match
+ a package path.)
+go doc encoding/json
+ Show documentation for the encoding/json package.
+go doc json
+ Shorthand for encoding/json.
+go doc json.Number (or go doc json.number)
+ Show documentation and method summary for json.Number.
+go doc json.Number.Int64 (or go doc json.number.int64)
+ Show documentation for json.Number's Int64 method.
+go doc cmd/doc
+ Show package docs for the doc command.
+go doc -cmd cmd/doc
+ Show package docs and exported symbols within the doc command.
+go doc template.new
+ Show documentation for html/template's New function.
+ (html/template is lexically before text/template)
+go doc text/template.new # One argument
+ Show documentation for text/template's New function.
+go doc text/template new # Two arguments
+ Show documentation for text/template's New function.
+
+At least in the current tree, these invocations all print the
+documentation for json.Decoder's Decode method:
+
+go doc json.Decoder.Decode
+go doc json.decoder.decode
+go doc json.decode
+cd go/src/encoding/json; go doc decode
+
+
+Flags:
+
+
-c
+ Respect case when matching symbols.
+-cmd
+ Treat a command (package main) like a regular package.
+ Otherwise package main's exported symbols are hidden
+ when showing the package's top-level documentation.
+-u
+ Show documentation for unexported as well as exported
+ symbols and methods.
+
+
Print Go environment information
+
+Usage:
+
+
go env [var ...]
+
+
+Env prints Go environment information.
+
+
+By default env prints information as a shell script
+(on Windows, a batch file). If one or more variable
+names are given as arguments, env prints the value of
+each named variable on its own line.
+
+
Start a bug report
+
+Usage:
+
+
go bug
+
+
+Bug opens the default browser and starts a new bug report.
+The report includes useful system information.
+
+
Run go tool fix on packages
+
+Usage:
+
+
go fix [packages]
+
+
+Fix runs the Go fix command on the packages named by the import paths.
+
+
+For more about fix, see 'go doc cmd/fix'.
+For more about specifying packages, see 'go help packages'.
+
+
+To run fix with specific options, run 'go tool fix'.
+
+
+See also: go fmt, go vet.
+
+
Run gofmt on package sources
+
+Usage:
+
+
go fmt [-n] [-x] [packages]
+
+
+Fmt runs the command 'gofmt -l -w' on the packages named
+by the import paths. It prints the names of the files that are modified.
+
+
+For more about gofmt, see 'go doc cmd/gofmt'.
+For more about specifying packages, see 'go help packages'.
+
+
+The -n flag prints commands that would be executed.
+The -x flag prints commands as they are executed.
+
+
+To run gofmt with specific options, run gofmt itself.
+
+
Generate Go files by processing source
+
+Usage:
+
+
go generate [-run regexp] [-n] [-v] [-x] [build flags] [file.go... | packages]
+
+
+Generate runs commands described by directives within existing
+files. Those commands can run any process but the intent is to
+create or update Go source files.
+
+
+Go generate is never run automatically by go build, go get, go test,
+and so on. It must be run explicitly.
+
+
+Go generate scans the file for directives, which are lines of
+the form,
+
+
//go:generate command argument...
+
+
+(note: no leading spaces and no space in "//go") where command
+is the generator to be run, corresponding to an executable file
+that can be run locally. It must either be in the shell path
+(gofmt), a fully qualified path (/usr/you/bin/mytool), or a
+command alias, described below.
+
+
+Note that go generate does not parse the file, so lines that look
+like directives in comments or multiline strings will be treated
+as directives.
+
+
+The arguments to the directive are space-separated tokens or
+double-quoted strings passed to the generator as individual
+arguments when it is run.
+
+
+Quoted strings use Go syntax and are evaluated before execution; a
+quoted string appears as a single argument to the generator.
+
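+For instance, a directive usually sits in the file next to the
+declarations it regenerates. The sketch below is illustrative only: the
+package, type, and constant names are hypothetical, and it assumes the
+stringer generator (golang.org/x/tools/cmd/stringer) is installed in
+the shell path.
+
+	package painkiller
+
+	//go:generate stringer -type=Pill
+
+	type Pill int
+
+	const (
+		Placebo Pill = iota
+		Aspirin
+		Ibuprofen
+	)
+
+Running 'go generate' in that package invokes 'stringer -type=Pill',
+which writes a String method for Pill to a new source file.
+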
+
+Go generate sets several variables when it runs the generator:
+
+
$GOARCH
+ The execution architecture (arm, amd64, etc.)
+$GOOS
+ The execution operating system (linux, windows, etc.)
+$GOFILE
+ The base name of the file.
+$GOLINE
+ The line number of the directive in the source file.
+$GOPACKAGE
+ The name of the package of the file containing the directive.
+$DOLLAR
+ A dollar sign.
+
+
+Other than variable substitution and quoted-string evaluation, no
+special processing such as "globbing" is performed on the command
+line.
+
+
+As a last step before running the command, any invocations of any
+environment variables with alphanumeric names, such as $GOFILE or
+$HOME, are expanded throughout the command line. The syntax for
+variable expansion is $NAME on all operating systems. Due to the
+order of evaluation, variables are expanded even inside quoted
+strings. If the variable NAME is not set, $NAME expands to the
+empty string.
+
+
+A directive of the form,
+
+
//go:generate -command xxx args...
+
+
+specifies, for the remainder of this source file only, that the
+string xxx represents the command identified by the arguments. This
+can be used to create aliases or to handle multiword generators.
+For example,
+
+
//go:generate -command foo go tool foo
+
+
+specifies that the command "foo" represents the generator
+"go tool foo".
+
+
+Generate processes packages in the order given on the command line,
+one at a time. If the command line lists .go files, they are treated
+as a single package. Within a package, generate processes the
+source files in a package in file name order, one at a time. Within
+a source file, generate runs generators in the order they appear
+in the file, one at a time.
+
+
+If any generator returns an error exit status, "go generate" skips
+all further processing for that package.
+
+
+The generator is run in the package's source directory.
+
+
+Go generate accepts one specific flag:
+
+
-run=""
+ if non-empty, specifies a regular expression to select
+ directives whose full original source text (excluding
+ any trailing spaces and final newline) matches the
+ expression.
+
+
+It also accepts the standard build flags including -v, -n, and -x.
+The -v flag prints the names of packages and files as they are
+processed.
+The -n flag prints commands that would be executed.
+The -x flag prints commands as they are executed.
+
+
+For more about build flags, see 'go help build'.
+
+
+For more about specifying packages, see 'go help packages'.
+
+
Download and install packages and dependencies
+
+Usage:
+
+
go get [-d] [-f] [-fix] [-insecure] [-t] [-u] [build flags] [packages]
+
+
+Get downloads the packages named by the import paths, along with their
+dependencies. It then installs the named packages, like 'go install'.
+
+
+The -d flag instructs get to stop after downloading the packages; that is,
+it instructs get not to install the packages.
+
+
+The -f flag, valid only when -u is set, forces get -u not to verify that
+each package has been checked out from the source control repository
+implied by its import path. This can be useful if the source is a local fork
+of the original.
+
+
+The -fix flag instructs get to run the fix tool on the downloaded packages
+before resolving dependencies or building the code.
+
+
+The -insecure flag permits fetching from repositories and resolving
+custom domains using insecure schemes such as HTTP. Use with caution.
+
+
+The -t flag instructs get to also download the packages required to build
+the tests for the specified packages.
+
+
+The -u flag instructs get to use the network to update the named packages
+and their dependencies. By default, get uses the network to check out
+missing packages but does not use it to look for updates to existing packages.
+
+
+The -v flag enables verbose progress and debug output.
+
+
+Get also accepts build flags to control the installation. See 'go help build'.
+
+
+When checking out a new package, get creates the target directory
+GOPATH/src/<import-path>. If the GOPATH contains multiple entries,
+get uses the first one. For more details see: 'go help gopath'.
+
+
+When checking out or updating a package, get looks for a branch or tag
+that matches the locally installed version of Go. The most important
+rule is that if the local installation is running version "go1", get
+searches for a branch or tag named "go1". If no such version exists it
+retrieves the most recent version of the package.
+
+
+When go get checks out or updates a Git repository,
+it also updates any git submodules referenced by the repository.
+
+
+Get never checks out or updates code stored in vendor directories.
+
+
+For more about specifying packages, see 'go help packages'.
+
+
+For more about how 'go get' finds source code to
+download, see 'go help importpath'.
+
+
+See also: go build, go install, go clean.
+
+
Compile and install packages and dependencies
+
+Usage:
+
+
go install [build flags] [packages]
+
+
+Install compiles and installs the packages named by the import paths,
+along with their dependencies.
+
+
+For more about the build flags, see 'go help build'.
+For more about specifying packages, see 'go help packages'.
+
+
+See also: go build, go get, go clean.
+
+
List packages
+
+Usage:
+
+
go list [-e] [-f format] [-json] [build flags] [packages]
+
+
+List lists the packages named by the import paths, one per line.
+
+
+The default output shows the package import path:
+
+	bytes
+	encoding/json
+	github.com/gorilla/mux
+	golang.org/x/net/html
+
+The -f flag specifies an alternate format for the list, using the
+syntax of package template. The default output is equivalent to -f
+'{{.ImportPath}}'. The struct being passed to the template is:
+
+
type Package struct {
+ Dir string // directory containing package sources
+ ImportPath string // import path of package in dir
+ ImportComment string // path in import comment on package statement
+ Name string // package name
+ Doc string // package documentation string
+ Target string // install path
+ Shlib string // the shared library that contains this package (only set when -linkshared)
+ Goroot bool // is this package in the Go root?
+ Standard bool // is this package part of the standard Go library?
+ Stale bool // would 'go install' do anything for this package?
+ StaleReason string // explanation for Stale==true
+ Root string // Go root or Go path dir containing this package
+ ConflictDir string // this directory shadows Dir in $GOPATH
+ BinaryOnly bool // binary-only package: cannot be recompiled from sources
+
+ // Source files
+ GoFiles []string // .go source files (excluding CgoFiles, TestGoFiles, XTestGoFiles)
+ CgoFiles []string // .go source files that import "C"
+ IgnoredGoFiles []string // .go sources ignored due to build constraints
+ CFiles []string // .c source files
+ CXXFiles []string // .cc, .cxx and .cpp source files
+ MFiles []string // .m source files
+ HFiles []string // .h, .hh, .hpp and .hxx source files
+ FFiles []string // .f, .F, .for and .f90 Fortran source files
+ SFiles []string // .s source files
+ SwigFiles []string // .swig files
+ SwigCXXFiles []string // .swigcxx files
+ SysoFiles []string // .syso object files to add to archive
+ TestGoFiles []string // _test.go files in package
+ XTestGoFiles []string // _test.go files outside package
+
+ // Cgo directives
+ CgoCFLAGS []string // cgo: flags for C compiler
+ CgoCPPFLAGS []string // cgo: flags for C preprocessor
+ CgoCXXFLAGS []string // cgo: flags for C++ compiler
+ CgoFFLAGS []string // cgo: flags for Fortran compiler
+ CgoLDFLAGS []string // cgo: flags for linker
+ CgoPkgConfig []string // cgo: pkg-config names
+
+ // Dependency information
+ Imports []string // import paths used by this package
+ Deps []string // all (recursively) imported dependencies
+ TestImports []string // imports from TestGoFiles
+ XTestImports []string // imports from XTestGoFiles
+
+ // Error information
+ Incomplete bool // this package or a dependency has an error
+ Error *PackageError // error loading package
+ DepsErrors []*PackageError // errors loading dependencies
+}
+
+
+Packages stored in vendor directories report an ImportPath that includes the
+path to the vendor directory (for example, "d/vendor/p" instead of "p"),
+so that the ImportPath uniquely identifies a given copy of a package.
+The Imports, Deps, TestImports, and XTestImports lists also contain these
+expanded imports paths. See golang.org/s/go15vendor for more about vendoring.
+
+
+The error information, if any, is
+
+
type PackageError struct {
+ ImportStack []string // shortest path from package named on command line to this one
+ Pos string // position of error (if present, file:line:col)
+ Err string // the error itself
+}
+
+
+The template function "join" calls strings.Join.
+
+
+The template function "context" returns the build context, defined as:
+
+
type Context struct {
+ GOARCH string // target architecture
+ GOOS string // target operating system
+ GOROOT string // Go root
+ GOPATH string // Go path
+ CgoEnabled bool // whether cgo can be used
+ UseAllFiles bool // use files regardless of +build lines, file names
+ Compiler string // compiler to assume when computing target paths
+ BuildTags []string // build constraints to match in +build lines
+ ReleaseTags []string // releases the current release is compatible with
+ InstallSuffix string // suffix to use in the name of the install dir
+}
+
+
+For more information about the meaning of these fields see the documentation
+for the go/build package's Context type.
+
+
+The -json flag causes the package data to be printed in JSON format
+instead of using the template format.
+
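+As a hedged sketch of consuming that JSON from a separate program (this
+helper is not part of the go tool; its struct mirrors only a few of the
+fields listed above):
+
+	package main
+
+	import (
+		"encoding/json"
+		"fmt"
+		"log"
+		"os/exec"
+	)
+
+	// pkgInfo mirrors a subset of the Package struct printed by 'go list -json'.
+	type pkgInfo struct {
+		ImportPath string
+		Dir        string
+		Deps       []string
+	}
+
+	func main() {
+		out, err := exec.Command("go", "list", "-json", "encoding/json").Output()
+		if err != nil {
+			log.Fatal(err)
+		}
+		var p pkgInfo
+		if err := json.Unmarshal(out, &p); err != nil {
+			log.Fatal(err)
+		}
+		fmt.Printf("%s (in %s) has %d dependencies\n", p.ImportPath, p.Dir, len(p.Deps))
+	}
+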
+
+The -e flag changes the handling of erroneous packages, those that
+cannot be found or are malformed. By default, the list command
+prints an error to standard error for each erroneous package and
+omits the packages from consideration during the usual printing.
+With the -e flag, the list command never prints errors to standard
+error and instead processes the erroneous packages with the usual
+printing. Erroneous packages will have a non-empty ImportPath and
+a non-nil Error field; other information may or may not be missing
+(zeroed).
+
+
+For more about build flags, see 'go help build'.
+
+
+For more about specifying packages, see 'go help packages'.
+
+
Compile and run Go program
+
+Usage:
+
+
go run [build flags] [-exec xprog] gofiles... [arguments...]
+
+
+Run compiles and runs the main package comprising the named Go source files.
+A Go source file is defined to be a file ending in a literal ".go" suffix.
+
+
+By default, 'go run' runs the compiled binary directly: 'a.out arguments...'.
+If the -exec flag is given, 'go run' invokes the binary using xprog:
+
+
'xprog a.out arguments...'.
+
+
+If the -exec flag is not given, GOOS or GOARCH is different from the system
+default, and a program named go_$GOOS_$GOARCH_exec can be found
+on the current search path, 'go run' invokes the binary using that program,
+for example 'go_nacl_386_exec a.out arguments...'. This allows execution of
+cross-compiled programs when a simulator or other execution method is
+available.
+
+
+For more about build flags, see 'go help build'.
+
+
+See also: go build.
+
+
Test packages
+
+Usage:
+
+
go test [build/test flags] [packages] [build/test flags & test binary flags]
+
+
+'Go test' automates testing the packages named by the import paths.
+It prints a summary of the test results in the format:
+
+	ok   archive/tar   0.011s
+	FAIL archive/zip   0.022s
+	ok   compress/gzip 0.033s
+	...
+
+followed by detailed output for each failed package.
+
+
+'Go test' recompiles each package along with any files with names matching
+the file pattern "*_test.go".
+Files whose names begin with "_" (including "_test.go") or "." are ignored.
+These additional files can contain test functions, benchmark functions, and
+example functions. See 'go help testfunc' for more.
+Each listed package causes the execution of a separate test binary.
+
+
+Test files that declare a package with the suffix "_test" will be compiled as a
+separate package, and then linked and run with the main test binary.
+
+
+The go tool will ignore a directory named "testdata", making it available
+to hold ancillary data needed by the tests.
+
+
+By default, go test needs no arguments. It compiles and tests the package
+with source in the current directory, including tests, and runs the tests.
+
+
+The package is built in a temporary directory so it does not interfere with the
+non-test installation.
+
+
+In addition to the build flags, the flags handled by 'go test' itself are:
+
+
-args
+ Pass the remainder of the command line (everything after -args)
+ to the test binary, uninterpreted and unchanged.
+ Because this flag consumes the remainder of the command line,
+ the package list (if present) must appear before this flag.
+
+-c
+ Compile the test binary to pkg.test but do not run it
+ (where pkg is the last element of the package's import path).
+ The file name can be changed with the -o flag.
+
+-exec xprog
+ Run the test binary using xprog. The behavior is the same as
+ in 'go run'. See 'go help run' for details.
+
+-i
+ Install packages that are dependencies of the test.
+ Do not run the test.
+
+-o file
+ Compile the test binary to the named file.
+ The test still runs (unless -c or -i is specified).
+
+
+The test binary also accepts flags that control execution of the test; these
+flags are also accessible by 'go test'. See 'go help testflag' for details.
+
+
+For more about build flags, see 'go help build'.
+For more about specifying packages, see 'go help packages'.
+
+
+See also: go build, go vet.
+
+
Run specified go tool
+
+Usage:
+
+
go tool [-n] command [args...]
+
+
+Tool runs the go tool command identified by the arguments.
+With no arguments it prints the list of known tools.
+
+
+The -n flag causes tool to print the command that would be
+executed but not execute it.
+
+
+For more about each tool command, see 'go tool command -h'.
+
+
Print Go version
+
+Usage:
+
+
go version
+
+
+Version prints the Go version, as reported by runtime.Version.
+
+
Run go tool vet on packages
+
+Usage:
+
+
go vet [-n] [-x] [build flags] [packages]
+
+
+Vet runs the Go vet command on the packages named by the import paths.
+
+
+For more about vet, see 'go doc cmd/vet'.
+For more about specifying packages, see 'go help packages'.
+
+
+To run the vet tool with specific options, run 'go tool vet'.
+
+
+The -n flag prints commands that would be executed.
+The -x flag prints commands as they are executed.
+
+
+For more about build flags, see 'go help build'.
+
+
+See also: go fmt, go fix.
+
+
Calling between Go and C
+
+There are two different ways to call between Go and C/C++ code.
+
+
+The first is the cgo tool, which is part of the Go distribution. For
+information on how to use it see the cgo documentation (go doc cmd/cgo).
+
+
+The second is the SWIG program, which is a general tool for
+interfacing between languages. For information on SWIG see
+http://swig.org/. When running go build, any file with a .swig
+extension will be passed to SWIG. Any file with a .swigcxx extension
+will be passed to SWIG with the -c++ option.
+
+
+When either cgo or SWIG is used, go build will pass any .c, .m, .s,
+or .S files to the C compiler, and any .cc, .cpp, .cxx files to the C++
+compiler. The CC or CXX environment variables may be set to determine
+the C or C++ compiler, respectively, to use.
+
+
Description of build modes
+
+The 'go build' and 'go install' commands take a -buildmode argument which
+indicates which kind of object file is to be built. Currently supported values
+are:
+
+
-buildmode=archive
+ Build the listed non-main packages into .a files. Packages named
+ main are ignored.
+
+-buildmode=c-archive
+ Build the listed main package, plus all packages it imports,
+ into a C archive file. The only callable symbols will be those
+ functions exported using a cgo //export comment. Requires
+ exactly one main package to be listed. (A sketch of such a
+ package appears after this list.)
+
+-buildmode=c-shared
+ Build the listed main packages, plus all packages that they
+ import, into C shared libraries. The only callable symbols will
+ be those functions exported using a cgo //export comment.
+ Non-main packages are ignored.
+
+-buildmode=default
+ Listed main packages are built into executables and listed
+ non-main packages are built into .a files (the default
+ behavior).
+
+-buildmode=shared
+ Combine all the listed non-main packages into a single shared
+ library that will be used when building with the -linkshared
+ option. Packages named main are ignored.
+
+-buildmode=exe
+ Build the listed main packages and everything they import into
+ executables. Packages not named main are ignored.
+
+-buildmode=pie
+ Build the listed main packages and everything they import into
+ position independent executables (PIE). Packages not named
+ main are ignored.
+
+-buildmode=plugin
+ Build the listed main packages, plus all packages that they
+ import, into a Go plugin. Packages not named main are ignored.
+
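+As mentioned for c-archive above, the sketch below (hypothetical
+package and function names) shows a main package whose one exported
+function would be callable from C when built with -buildmode=c-archive
+or -buildmode=c-shared:
+
+	package main
+
+	import "C"
+
+	//export Add
+	func Add(a, b C.int) C.int {
+		return a + b
+	}
+
+	// main is required because these build modes need a main package,
+	// but it is not invoked by C callers.
+	func main() {}
+
+Building such a package with 'go build -buildmode=c-archive' also
+emits a C header file declaring the exported functions.
+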
+
File types
+
+The go command examines the contents of a restricted set of files
+in each directory. It identifies which files to examine based on
+the extension of the file name. These extensions are:
+
+
.go
+ Go source files.
+.c, .h
+ C source files.
+ If the package uses cgo or SWIG, these will be compiled with the
+ OS-native compiler (typically gcc); otherwise they will
+ trigger an error.
+.cc, .cpp, .cxx, .hh, .hpp, .hxx
+ C++ source files. Only useful with cgo or SWIG, and always
+ compiled with the OS-native compiler.
+.m
+ Objective-C source files. Only useful with cgo, and always
+ compiled with the OS-native compiler.
+.s, .S
+ Assembler source files.
+ If the package uses cgo or SWIG, these will be assembled with the
+ OS-native assembler (typically gcc (sic)); otherwise they
+ will be assembled with the Go assembler.
+.swig, .swigcxx
+ SWIG definition files.
+.syso
+ System object files.
+
+
+Files of each of these types except .syso may contain build
+constraints, but the go command stops scanning for build constraints
+at the first item in the file that is not a blank line or //-style
+line comment. See the go/build package documentation for
+more details.
+
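+For example, a hedged sketch of a constrained file (the package name is
+hypothetical; the '// +build' line form is the one documented for the
+go/build package in this release):
+
+	// +build linux darwin
+
+	package fastpath
+
+The constraint must appear before the package clause and be followed by
+a blank line so that it is not mistaken for the package's documentation
+comment; this file would then be built only when GOOS is linux or darwin.
+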
+
+Non-test Go source files can also include a //go:binary-only-package
+comment, indicating that the package sources are included
+for documentation only and must not be used to build the
+package binary. This enables distribution of Go packages in
+their compiled form alone. See the go/build package documentation
+for more details.
+
+
GOPATH environment variable
+
+The Go path is used to resolve import statements.
+It is implemented by and documented in the go/build package.
+
+
+The GOPATH environment variable lists places to look for Go code.
+On Unix, the value is a colon-separated string.
+On Windows, the value is a semicolon-separated string.
+On Plan 9, the value is a list.
+
+
+If the environment variable is unset, GOPATH defaults
+to a subdirectory named "go" in the user's home directory
+($HOME/go on Unix, %USERPROFILE%\go on Windows),
+unless that directory holds a Go distribution.
+Run "go env GOPATH" to see the current GOPATH.
+
+Each directory listed in GOPATH must have a prescribed structure:
+
+
+The src directory holds source code. The path below src
+determines the import path or executable name.
+
+
+The pkg directory holds installed package objects.
+As in the Go tree, each target operating system and
+architecture pair has its own subdirectory of pkg
+(pkg/GOOS_GOARCH).
+
+
+If DIR is a directory listed in the GOPATH, a package with
+source in DIR/src/foo/bar can be imported as "foo/bar" and
+has its compiled form installed to "DIR/pkg/GOOS_GOARCH/foo/bar.a".
+
+
+The bin directory holds compiled commands.
+Each command is named for its source directory, but only
+the final element, not the entire path. That is, the
+command with source in DIR/src/foo/quux is installed into
+DIR/bin/quux, not DIR/bin/foo/quux. The "foo/" prefix is stripped
+so that you can add DIR/bin to your PATH to get at the
+installed commands. If the GOBIN environment variable is
+set, commands are installed to the directory it names instead
+of DIR/bin. GOBIN must be an absolute path.
+
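+Here's an example directory layout:
+
+	GOPATH=/home/user/go
+
+	/home/user/go/
+	    src/
+	        foo/
+	            bar/               (go code in package bar)
+	                x.go
+	            quux/              (go code in package main)
+	                y.go
+	    bin/
+	        quux                   (installed command)
+	    pkg/
+	        linux_amd64/
+	            foo/
+	                bar.a          (installed package object)
+
+
Internal Directories
+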
+Code in or below a directory named "internal" is importable only
+by code in the directory tree rooted at the parent of "internal".
+Here's an extended version of the directory layout above:
+
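+	/home/user/go/
+	    src/
+	        crash/
+	            bang/              (go code in package bang)
+	                b.go
+	        foo/                   (go code in package foo)
+	            f.go
+	            bar/               (go code in package bar)
+	                x.go
+	            internal/
+	                baz/           (go code in package baz)
+	                    z.go
+	            quux/              (go code in package main)
+	                y.go
+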
+The code in z.go is imported as "foo/internal/baz", but that
+import statement can only appear in source files in the subtree
+rooted at foo. The source files foo/f.go, foo/bar/x.go, and
+foo/quux/y.go can all import "foo/internal/baz", but the source file
+crash/bang/b.go cannot.
+
Vendor Directories
+
+Go 1.6 includes support for using local copies of external dependencies
+to satisfy imports of those dependencies, often referred to as vendoring.
+
+
+Code below a directory named "vendor" is importable only
+by code in the directory tree rooted at the parent of "vendor",
+and only using an import path that omits the prefix up to and
+including the vendor element.
+
+
+Here's the example from the previous section,
+but with the "internal" directory renamed to "vendor"
+and a new foo/vendor/crash/bang directory added:
+
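+	/home/user/go/
+	    src/
+	        crash/
+	            bang/              (go code in package bang)
+	                b.go
+	        foo/                   (go code in package foo)
+	            f.go
+	            bar/               (go code in package bar)
+	                x.go
+	            vendor/
+	                crash/
+	                    bang/      (go code in package bang)
+	                        b.go
+	                baz/           (go code in package baz)
+	                    z.go
+	            quux/              (go code in package main)
+	                y.go
+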
+The same visibility rules apply as for internal, but the code
+in z.go is imported as "baz", not as "foo/vendor/baz".
+
+
+Code in vendor directories deeper in the source tree shadows
+code in higher directories. Within the subtree rooted at foo, an import
+of "crash/bang" resolves to "foo/vendor/crash/bang", not the
+top-level "crash/bang".
+
+
+Code in vendor directories is not subject to import path
+checking (see 'go help importpath').
+
+
+When 'go get' checks out or updates a git repository, it now also
+updates submodules.
+
+
+Vendor directories do not affect the placement of new repositories
+being checked out for the first time by 'go get': those are always
+placed in the main GOPATH, never in a vendor subtree.
+
Environment variables
+
+The go command, and the tools it invokes, examine a few different
+environment variables. For many of these, you can see the default
+value on your system by running 'go env NAME', where NAME is the
+name of the variable.
+
+
+General-purpose environment variables:
+
+
GCCGO
+ The gccgo command to run for 'go build -compiler=gccgo'.
+GOARCH
+ The architecture, or processor, for which to compile code.
+ Examples are amd64, 386, arm, ppc64.
+GOBIN
+ The directory where 'go install' will install a command.
+GOOS
+ The operating system for which to compile code.
+ Examples are linux, darwin, windows, netbsd.
+GOPATH
+ For more details see: 'go help gopath'.
+GORACE
+ Options for the race detector.
+ See https://golang.org/doc/articles/race_detector.html.
+GOROOT
+ The root of the go tree.
+
+
+Environment variables for use with cgo:
+
+
CC
+ The command to use to compile C code.
+CGO_ENABLED
+ Whether the cgo command is supported. Either 0 or 1.
+CGO_CFLAGS
+ Flags that cgo will pass to the compiler when compiling
+ C code.
+CGO_CPPFLAGS
+ Flags that cgo will pass to the compiler when compiling
+ C or C++ code.
+CGO_CXXFLAGS
+ Flags that cgo will pass to the compiler when compiling
+ C++ code.
+CGO_FFLAGS
+ Flags that cgo will pass to the compiler when compiling
+ Fortran code.
+CGO_LDFLAGS
+ Flags that cgo will pass to the compiler when linking.
+CXX
+ The command to use to compile C++ code.
+PKG_CONFIG
+ Path to pkg-config tool.
+
+
+Architecture-specific environment variables:
+
+
GOARM
+ For GOARCH=arm, the ARM architecture for which to compile.
+ Valid values are 5, 6, 7.
+GO386
+ For GOARCH=386, the floating point instruction set.
+ Valid values are 387, sse2.
+
+
+Special-purpose environment variables:
+
+
GOROOT_FINAL
+ The root of the installed Go tree, when it is
+ installed in a location other than where it is built.
+ File names in stack traces are rewritten from GOROOT to
+ GOROOT_FINAL.
+GO_EXTLINK_ENABLED
+ Whether the linker should use external linking mode
+ when using -linkmode=auto with code that uses cgo.
+ Set to 0 to disable external linking mode, 1 to enable it.
+GIT_ALLOW_PROTOCOL
+ Defined by Git. A colon-separated list of schemes that are allowed to be used
+ with git fetch/clone. If set, any scheme not explicitly mentioned will be
+ considered insecure by 'go get'.
+
+
Import path syntax
+
+An import path (see 'go help packages') denotes a package stored in the local
+file system. In general, an import path denotes either a standard package (such
+as "unicode/utf8") or a package found in one of the work spaces (For more
+details see: 'go help gopath').
+
+
Relative import paths
+
+An import path beginning with ./ or ../ is called a relative path.
+The toolchain supports relative import paths as a shortcut in two ways.
+
+
+First, a relative path can be used as a shorthand on the command line.
+If you are working in the directory containing the code imported as
+"unicode" and want to run the tests for "unicode/utf8", you can type
+"go test ./utf8" instead of needing to specify the full path.
+Similarly, in the reverse situation, "go test .." will test "unicode" from
+the "unicode/utf8" directory. Relative patterns are also allowed, like
+"go test ./..." to test all subdirectories. See 'go help packages' for details
+on the pattern syntax.
+
+
+Second, if you are compiling a Go program not in a work space,
+you can use a relative path in an import statement in that program
+to refer to nearby code also not in a work space.
+This makes it easy to experiment with small multipackage programs
+outside of the usual work spaces, but such programs cannot be
+installed with "go install" (there is no work space in which to install them),
+so they are rebuilt from scratch each time they are built.
+To avoid ambiguity, Go programs cannot use relative import paths
+within a work space.
+
+
Remote import paths
+
+Certain import paths also
+describe how to obtain the source code for the package using
+a revision control system.
+
+
+A few common code hosting sites have special syntax:
+
+For code hosted on other servers, import paths may either be qualified
+with the version control type, or the go tool can dynamically fetch
+the import path over https/http and discover where the code resides
+from a <meta> tag in the HTML.
+
+
+To declare the code location, an import path of the form
+
+
repository.vcs/path
+
+
+specifies the given repository, with or without the .vcs suffix,
+using the named version control system, and then the path inside
+that repository. The supported version control systems are:
+
+	Bazaar      .bzr
+	Git         .git
+	Mercurial   .hg
+	Subversion  .svn
+
+For example,
+
+	import "example.org/user/foo.hg"
+
+denotes the root directory of the Mercurial repository at
+example.org/user/foo or foo.hg, and
+
+
import "example.org/repo.git/foo/bar"
+
+
+denotes the foo/bar directory of the Git repository at
+example.org/repo or repo.git.
+
+
+When a version control system supports multiple protocols,
+each is tried in turn when downloading. For example, a Git
+download tries https://, then git+ssh://.
+
+
+By default, downloads are restricted to known secure protocols
+(e.g. https, ssh). To override this setting for Git downloads, the
+GIT_ALLOW_PROTOCOL environment variable can be set (For more details see:
+'go help environment').
+
+
+If the import path is not a known code hosting site and also lacks a
+version control qualifier, the go tool attempts to fetch the import
+over https/http and looks for a <meta> tag in the document's HTML
+<head>.
+
+The meta tag has the form:
+
+	<meta name="go-import" content="import-prefix vcs repo-root">
+
+The import-prefix is the import path corresponding to the repository
+root. It must be a prefix or an exact match of the package being
+fetched with "go get". If it's not an exact match, another http
+request is made at the prefix to verify the <meta> tags match.
+
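+As a hedged illustration of this discovery step (the domain, repository
+URL, and port below are hypothetical, and a real server must be reachable
+over https), a minimal vanity-import handler could answer the probe
+like this:
+
+	package main
+
+	import (
+		"fmt"
+		"log"
+		"net/http"
+	)
+
+	func main() {
+		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
+			// Serve the go-import meta tag described above for
+			// every page on the site, including the ?go-get=1 probe.
+			fmt.Fprint(w, `<meta name="go-import" content="example.org/myrepo git https://example.com/git/myrepo">`)
+		})
+		log.Fatal(http.ListenAndServe(":8080", nil)) // TLS termination is assumed to happen elsewhere
+	}
+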
+
+The meta tag should appear as early in the file as possible.
+In particular, it should appear before any raw JavaScript or CSS,
+to avoid confusing the go command's restricted parser.
+
+
+The vcs is one of "git", "hg", "svn", etc.
+
+
+The repo-root is the root of the version control system
+containing a scheme and not containing a .vcs qualifier.
+
+New downloaded packages are written to the first directory listed in the GOPATH
+environment variable (For more details see: 'go help gopath').
+
+
+The go command attempts to download the version of the
+package appropriate for the Go release being used.
+Run 'go help get' for more.
+
+
Import path checking
+
+When the custom import path feature described above redirects to a
+known code hosting site, each of the resulting packages has two possible
+import paths, using the custom domain or the known hosting site.
+
+
+A package statement is said to have an "import comment" if it is immediately
+followed (before the next newline) by a comment of one of these two forms:
+
+
package math // import "path"
+package math /* import "path" */
+
+
+The go command will refuse to install a package with an import comment
+unless it is being referred to by that import path. In this way, import comments
+let package authors make sure the custom import path is used and not a
+direct path to the underlying code hosting site.
+
+
+Import path checking is disabled for code found within vendor trees.
+This makes it possible to copy code into alternate locations in vendor trees
+without needing to update import comments.
+
Description of package lists
+
+Many commands apply to a set of packages:
+
+	go action [packages]
+
+Usually, [packages] is a list of import paths.
+
+An import path that is a rooted path or that begins with
+a . or .. element is interpreted as a file system path and
+denotes the package in that directory.
+
+
+Otherwise, the import path P denotes the package found in
+the directory DIR/src/P for some DIR listed in the GOPATH
+environment variable (For more details see: 'go help gopath').
+
+
+If no import paths are given, the action applies to the
+package in the current directory.
+
+
+There are four reserved names for paths that should not be used
+for packages to be built with the go tool:
+
+
+- "main" denotes the top-level package in a stand-alone executable.
+
+
+- "all" expands to all package directories found in all the GOPATH
+trees. For example, 'go list all' lists all the packages on the local
+system.
+
+
+- "std" is like all but expands to just the packages in the standard
+Go library.
+
+
+- "cmd" expands to the Go repository's commands and their
+internal libraries.
+
+
+Import paths beginning with "cmd/" only match source code in
+the Go repository.
+
+
+An import path is a pattern if it includes one or more "..." wildcards,
+each of which can match any string, including the empty string and
+strings containing slashes. Such a pattern expands to all package
+directories found in the GOPATH trees with names matching the
+patterns. As a special case, x/... matches x as well as x's subdirectories.
+For example, net/... expands to net and packages in its subdirectories.
+
+
+An import path can also name a package to be downloaded from
+a remote repository. Run 'go help importpath' for details.
+
+
+Every package in a program must have a unique import path.
+By convention, this is arranged by starting each path with a
+unique prefix that belongs to you. For example, paths used
+internally at Google all begin with 'google', and paths
+denoting remote repositories begin with the path to the code,
+such as 'github.com/user/repo'.
+
+
+Packages in a program need not have unique package names,
+but there are two reserved package names with special meaning.
+The name main indicates a command, not a library.
+Commands are built into binaries and cannot be imported.
+The name documentation indicates documentation for
+a non-Go program in the directory. Files in package documentation
+are ignored by the go command.
+
+
+As a special case, if the package list is a list of .go files from a
+single directory, the command is applied to a single synthesized
+package made up of exactly those files, ignoring any build constraints
+in those files and ignoring any other files in the directory.
+
+
+Directory and file names that begin with "." or "_" are ignored
+by the go tool, as are directories named "testdata".
+
+
Description of testing flags
+
+The 'go test' command takes both flags that apply to 'go test' itself
+and flags that apply to the resulting test binary.
+
+
+Several of the flags control profiling and write an execution profile
+suitable for "go tool pprof"; run "go tool pprof -h" for more
+information. The --alloc_space, --alloc_objects, and --show_bytes
+options of pprof control how the information is presented.
+
+
+The following flags are recognized by the 'go test' command and
+control the execution of any test:
+
+
-bench regexp
+ Run (sub)benchmarks matching a regular expression.
+ The given regular expression is split into smaller ones by
+ top-level '/', where each must match the corresponding part of a
+ benchmark's identifier.
+ By default, no benchmarks run. To run all benchmarks,
+ use '-bench .' or '-bench=.'.
+
+-benchtime t
+ Run enough iterations of each benchmark to take t, specified
+ as a time.Duration (for example, -benchtime 1h30s).
+ The default is 1 second (1s).
+
+-count n
+ Run each test and benchmark n times (default 1).
+ If -cpu is set, run n times for each GOMAXPROCS value.
+ Examples are always run once.
+
+-cover
+ Enable coverage analysis.
+
+-covermode set,count,atomic
+ Set the mode for coverage analysis for the package[s]
+ being tested. The default is "set" unless -race is enabled,
+ in which case it is "atomic".
+ The values:
+ set: bool: does this statement run?
+ count: int: how many times does this statement run?
+ atomic: int: count, but correct in multithreaded tests;
+ significantly more expensive.
+ Sets -cover.
+
+-coverpkg pkg1,pkg2,pkg3
+ Apply coverage analysis in each test to the given list of packages.
+ The default is for each test to analyze only the package being tested.
+ Packages are specified as import paths.
+ Sets -cover.
+
+-cpu 1,2,4
+ Specify a list of GOMAXPROCS values for which the tests or
+ benchmarks should be executed. The default is the current value
+ of GOMAXPROCS.
+
+-parallel n
+ Allow parallel execution of test functions that call t.Parallel.
+ The value of this flag is the maximum number of tests to run
+ simultaneously; by default, it is set to the value of GOMAXPROCS.
+ Note that -parallel only applies within a single test binary.
+ The 'go test' command may run tests for different packages
+ in parallel as well, according to the setting of the -p flag
+ (see 'go help build').
+
+-run regexp
+ Run only those tests and examples matching the regular expression.
+ For tests the regular expression is split into smaller ones by
+ top-level '/', where each must match the corresponding part of a
+ test's identifier. (A sketch of subtests matched this way
+ appears after this list.)
+
+-short
+ Tell long-running tests to shorten their run time.
+ It is off by default but set during all.bash so that installing
+ the Go tree can run a sanity check but not spend time running
+ exhaustive tests.
+
+-timeout t
+ If a test runs longer than t, panic.
+ The default is 10 minutes (10m).
+
+-v
+ Verbose output: log all tests as they are run. Also print all
+ text from Log and Logf calls even if the test succeeds.
+
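+As referenced above, here is a sketch of subtests that the '/'-separated
+-run argument can select (the package, helper, and test names are
+hypothetical); with this layout, 'go test -run Lookup/a' runs only the
+subtest named "a":
+
+	package dict
+
+	import "testing"
+
+	// lookup is a stand-in helper used only for this illustration.
+	func lookup(key string) string {
+		return map[string]string{"a": "alpha", "b": "bravo"}[key]
+	}
+
+	func TestLookup(t *testing.T) {
+		cases := []struct{ key, want string }{
+			{"a", "alpha"},
+			{"b", "bravo"},
+		}
+		for _, c := range cases {
+			t.Run(c.key, func(t *testing.T) {
+				if got := lookup(c.key); got != c.want {
+					t.Errorf("lookup(%q) = %q, want %q", c.key, got, c.want)
+				}
+			})
+		}
+	}
+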
+
+The following flags are also recognized by 'go test' and can be used to
+profile the tests during execution:
+
+
-benchmem
+ Print memory allocation statistics for benchmarks.
+
+-blockprofile block.out
+ Write a goroutine blocking profile to the specified file
+ when all tests are complete.
+ Writes test binary as -c would.
+
+-blockprofilerate n
+ Control the detail provided in goroutine blocking profiles by
+ calling runtime.SetBlockProfileRate with n.
+ See 'go doc runtime.SetBlockProfileRate'.
+ The profiler aims to sample, on average, one blocking event every
+ n nanoseconds the program spends blocked. By default,
+ if -test.blockprofile is set without this flag, all blocking events
+ are recorded, equivalent to -test.blockprofilerate=1.
+
+-coverprofile cover.out
+ Write a coverage profile to the file after all tests have passed.
+ Sets -cover.
+
+-cpuprofile cpu.out
+ Write a CPU profile to the specified file before exiting.
+ Writes test binary as -c would.
+
+-memprofile mem.out
+ Write a memory profile to the file after all tests have passed.
+ Writes test binary as -c would.
+
+-memprofilerate n
+ Enable more precise (and expensive) memory profiles by setting
+ runtime.MemProfileRate. See 'go doc runtime.MemProfileRate'.
+ To profile all memory allocations, use -test.memprofilerate=1
+ and pass --alloc_space flag to the pprof tool.
+
+-mutexprofile mutex.out
+ Write a mutex contention profile to the specified file
+ when all tests are complete.
+ Writes test binary as -c would.
+
+-mutexprofilefraction n
+ Sample 1 in n stack traces of goroutines holding a
+ contended mutex.
+
+-outputdir directory
+ Place output files from profiling in the specified directory,
+ by default the directory in which "go test" is running.
+
+-trace trace.out
+ Write an execution trace to the specified file before exiting.
+
+
+Each of these flags is also recognized with an optional 'test.' prefix,
+as in -test.v. When invoking the generated test binary (the result of
+'go test -c') directly, however, the prefix is mandatory.
+
+
+The 'go test' command rewrites or removes recognized flags,
+as appropriate, both before and after the optional package list,
+before invoking the test binary.
+
+
+For instance, the command
+
+
go test -v -myflag testdata -cpuprofile=prof.out -x
+
+
+will compile the test binary and then run it as
+
+	pkg.test -test.v -myflag testdata -test.cpuprofile=prof.out
+
+(The -x flag is removed because it applies only to the go command's
+execution, not to the test itself.)
+
+
+The test flags that generate profiles (other than for coverage) also
+leave the test binary in pkg.test for use when analyzing the profiles.
+
+
+When 'go test' runs a test binary, it does so from within the
+corresponding package's source code directory. Depending on the test,
+it may be necessary to do the same when invoking a generated test
+binary directly.
+
+
+The command-line package list, if present, must appear before any
+flag not known to the go test command. Continuing the example above,
+the package list would have to appear before -myflag, but could appear
+on either side of -v.
+
+
+To keep an argument for a test binary from being interpreted as a
+known flag or a package name, use -args (see 'go help test') which
+passes the remainder of the command line through to the test binary
+uninterpreted and unaltered.
+
+
+For instance, the command
+
+
go test -v -args -x -v
+
+
+will compile the test binary and then run it as
+
+
pkg.test -test.v -x -v
+
+
+Similarly,
+
+
go test -args math
+
+
+will compile the test binary and then run it as
+
+
pkg.test math
+
+
+In the first example, the -x and the second -v are passed through to the
+test binary unchanged and with no effect on the go command itself.
+In the second example, the argument math is passed through to the test
+binary, instead of being interpreted as the package list.
+
+
Description of testing functions
+
+The 'go test' command expects to find test, benchmark, and example functions
+in the "*_test.go" files corresponding to the package under test.
+
+
+A test function is one named TestXXX (where XXX is any alphanumeric string
+not starting with a lower case letter) and should have the signature,
+
+
func TestXXX(t *testing.T) { ... }
+
+
+A benchmark function is one named BenchmarkXXX and should have the signature,
+
+
func BenchmarkXXX(b *testing.B) { ... }
+
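+For instance, a typical benchmark body runs the code under test b.N
+times; this sketch follows the pattern shown in the testing package
+documentation (the string formatted here is arbitrary):
+
+	func BenchmarkHello(b *testing.B) {
+		for i := 0; i < b.N; i++ {
+			fmt.Sprintf("hello")
+		}
+	}
+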
+
+An example function is similar to a test function but, instead of using
+*testing.T to report success or failure, prints output to os.Stdout.
+If the last comment in the function starts with "Output:" then the output
+is compared exactly against the comment (see examples below). If the last
+comment begins with "Unordered output:" then the output is compared to the
+comment, however the order of the lines is ignored. An example with no such
+comment is compiled but not executed. An example with no text after
+"Output:" is compiled, executed, and expected to produce no output.
+
+
+Godoc displays the body of ExampleXXX to demonstrate the use
+of the function, constant, or variable XXX. An example of a method M with
+receiver type T or *T is named ExampleT_M. There may be multiple examples
+for a given function, constant, or variable, distinguished by a trailing _xxx,
+where xxx is a suffix not beginning with an upper case letter.
+
+
+Here is an example of an example:
+
+
func ExamplePrintln() {
+ Println("The output of\nthis example.")
+ // Output: The output of
+ // this example.
+}
+
+
+Here is another example where the ordering of the output is ignored:
+
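+	func ExamplePerm() {
+		for _, value := range Perm(5) {
+			fmt.Println(value)
+		}
+		// Unordered output: 4
+		// 2
+		// 1
+		// 3
+		// 0
+	}
+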
+The entire test file is presented as the example when it contains a single
+example function, at least one other function, type, variable, or constant
+declaration, and no test or benchmark functions.
+
+
+See the documentation of the testing package for more information.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`))
diff --git a/vendor/golang.org/x/net/http2/hpack/encode.go b/vendor/golang.org/x/net/http2/hpack/encode.go
index 395e1c2f04..54726c2a3c 100644
--- a/vendor/golang.org/x/net/http2/hpack/encode.go
+++ b/vendor/golang.org/x/net/http2/hpack/encode.go
@@ -39,6 +39,7 @@ func NewEncoder(w io.Writer) *Encoder {
tableSizeUpdate: false,
w: w,
}
+ e.dynTab.table.init()
e.dynTab.setMaxSize(initialHeaderTableSize)
return e
}
@@ -88,24 +89,17 @@ func (e *Encoder) WriteField(f HeaderField) error {
// only name matches, i points to that index and nameValueMatch
// becomes false.
func (e *Encoder) searchTable(f HeaderField) (i uint64, nameValueMatch bool) {
- for idx, hf := range staticTable {
- if hf.Name != f.Name {
- continue
- }
- if i == 0 {
- i = uint64(idx + 1)
- }
- if f.Sensitive || hf.Value != f.Value {
- continue
- }
- return uint64(idx + 1), true
+ i, nameValueMatch = staticTable.search(f)
+ if nameValueMatch {
+ return i, true
}
- j, nameValueMatch := e.dynTab.search(f)
+ j, nameValueMatch := e.dynTab.table.search(f)
if nameValueMatch || (i == 0 && j != 0) {
- i = j + uint64(len(staticTable))
+ return j + uint64(staticTable.len()), nameValueMatch
}
- return
+
+ return i, false
}
// SetMaxDynamicTableSize changes the dynamic header table size to v.
diff --git a/vendor/golang.org/x/net/http2/hpack/hpack.go b/vendor/golang.org/x/net/http2/hpack/hpack.go
index d18be440c5..176644acda 100644
--- a/vendor/golang.org/x/net/http2/hpack/hpack.go
+++ b/vendor/golang.org/x/net/http2/hpack/hpack.go
@@ -102,6 +102,7 @@ func NewDecoder(maxDynamicTableSize uint32, emitFunc func(f HeaderField)) *Decod
emit: emitFunc,
emitEnabled: true,
}
+ d.dynTab.table.init()
d.dynTab.allowedMaxSize = maxDynamicTableSize
d.dynTab.setMaxSize(maxDynamicTableSize)
return d
@@ -154,12 +155,9 @@ func (d *Decoder) SetAllowedMaxDynamicTableSize(v uint32) {
}
type dynamicTable struct {
- // ents is the FIFO described at
// http://http2.github.io/http2-spec/compression.html#rfc.section.2.3.2
- // The newest (low index) is append at the end, and items are
- // evicted from the front.
- ents []HeaderField
- size uint32
+ table headerFieldTable
+ size uint32 // in bytes
maxSize uint32 // current maxSize
allowedMaxSize uint32 // maxSize may go up to this, inclusive
}
@@ -169,77 +167,45 @@ func (dt *dynamicTable) setMaxSize(v uint32) {
dt.evict()
}
-// TODO: change dynamicTable to be a struct with a slice and a size int field,
-// per http://http2.github.io/http2-spec/compression.html#rfc.section.4.1:
-//
-//
-// Then make add increment the size. maybe the max size should move from Decoder to
-// dynamicTable and add should return an ok bool if there was enough space.
-//
-// Later we'll need a remove operation on dynamicTable.
-
func (dt *dynamicTable) add(f HeaderField) {
- dt.ents = append(dt.ents, f)
+ dt.table.addEntry(f)
dt.size += f.Size()
dt.evict()
}
-// If we're too big, evict old stuff (front of the slice)
+// If we're too big, evict old stuff.
func (dt *dynamicTable) evict() {
- base := dt.ents // keep base pointer of slice
- for dt.size > dt.maxSize {
- dt.size -= dt.ents[0].Size()
- dt.ents = dt.ents[1:]
- }
-
- // Shift slice contents down if we evicted things.
- if len(dt.ents) != len(base) {
- copy(base, dt.ents)
- dt.ents = base[:len(dt.ents)]
+ var n int
+ for dt.size > dt.maxSize && n < dt.table.len() {
+ dt.size -= dt.table.ents[n].Size()
+ n++
}
-}
-
-// Search searches f in the table. The return value i is 0 if there is
-// no name match. If there is name match or name/value match, i is the
-// index of that entry (1-based). If both name and value match,
-// nameValueMatch becomes true.
-func (dt *dynamicTable) search(f HeaderField) (i uint64, nameValueMatch bool) {
- l := len(dt.ents)
- for j := l - 1; j >= 0; j-- {
- ent := dt.ents[j]
- if ent.Name != f.Name {
- continue
- }
- if i == 0 {
- i = uint64(l - j)
- }
- if f.Sensitive {
- continue
- }
- if ent.Value != f.Value {
- continue
- }
- return uint64(l - j), true
- }
- return i, false
+ dt.table.evictOldest(n)
}
func (d *Decoder) maxTableIndex() int {
- return len(d.dynTab.ents) + len(staticTable)
+ // This should never overflow. RFC 7540 Section 6.5.2 limits the size of
+ // the dynamic table to 2^32 bytes, where each entry will occupy more than
+ // one byte. Further, the staticTable has a fixed, small length.
+ return d.dynTab.table.len() + staticTable.len()
}
func (d *Decoder) at(i uint64) (hf HeaderField, ok bool) {
- if i < 1 {
+ // See Section 2.3.3.
+ if i == 0 {
return
}
+ if i <= uint64(staticTable.len()) {
+ return staticTable.ents[i-1], true
+ }
if i > uint64(d.maxTableIndex()) {
return
}
- if i <= uint64(len(staticTable)) {
- return staticTable[i-1], true
- }
- dents := d.dynTab.ents
- return dents[len(dents)-(int(i)-len(staticTable))], true
+ // In the dynamic table, newer entries have lower indices.
+ // However, dt.ents[0] is the oldest entry. Hence, dt.ents is
+ // the reversed dynamic table.
+ dt := d.dynTab.table
+ return dt.ents[dt.len()-(int(i)-staticTable.len())], true
}
// Decode decodes an entire block.
diff --git a/vendor/golang.org/x/net/http2/hpack/tables.go b/vendor/golang.org/x/net/http2/hpack/tables.go
index b9283a0233..a66cfbea69 100644
--- a/vendor/golang.org/x/net/http2/hpack/tables.go
+++ b/vendor/golang.org/x/net/http2/hpack/tables.go
@@ -4,73 +4,200 @@
package hpack
-func pair(name, value string) HeaderField {
- return HeaderField{Name: name, Value: value}
+import (
+ "fmt"
+)
+
+// headerFieldTable implements a list of HeaderFields.
+// This is used to implement the static and dynamic tables.
+type headerFieldTable struct {
+ // For static tables, entries are never evicted.
+ //
+ // For dynamic tables, entries are evicted from ents[0] and added to the end.
+ // Each entry has a unique id that starts at one and increments for each
+ // entry that is added. This unique id is stable across evictions, meaning
+ // it can be used as a pointer to a specific entry. As in hpack, unique ids
+ // are 1-based. The unique id for ents[k] is k + evictCount + 1.
+ //
+ // Zero is not a valid unique id.
+ //
+ // evictCount should not overflow in any remotely practical situation. In
+ // practice, we will have one dynamic table per HTTP/2 connection. If we
+ // assume a very powerful server that handles 1M QPS per connection and each
+ // request adds (then evicts) 100 entries from the table, it would still take
+ // 2M years for evictCount to overflow.
+ ents []HeaderField
+ evictCount uint64
+
+ // byName maps a HeaderField name to the unique id of the newest entry with
+ // the same name. See above for a definition of "unique id".
+ byName map[string]uint64
+
+ // byNameValue maps a HeaderField name/value pair to the unique id of the newest
+ // entry with the same name and value. See above for a definition of "unique id".
+ byNameValue map[pairNameValue]uint64
+}
+
+type pairNameValue struct {
+ name, value string
+}
+
+func (t *headerFieldTable) init() {
+ t.byName = make(map[string]uint64)
+ t.byNameValue = make(map[pairNameValue]uint64)
+}
+
+// len reports the number of entries in the table.
+func (t *headerFieldTable) len() int {
+ return len(t.ents)
+}
+
+// addEntry adds a new entry.
+func (t *headerFieldTable) addEntry(f HeaderField) {
+ id := uint64(t.len()) + t.evictCount + 1
+ t.byName[f.Name] = id
+ t.byNameValue[pairNameValue{f.Name, f.Value}] = id
+ t.ents = append(t.ents, f)
+}
+
+// evictOldest evicts the n oldest entries in the table.
+func (t *headerFieldTable) evictOldest(n int) {
+ if n > t.len() {
+ panic(fmt.Sprintf("evictOldest(%v) on table with %v entries", n, t.len()))
+ }
+ for k := 0; k < n; k++ {
+ f := t.ents[k]
+ id := t.evictCount + uint64(k) + 1
+ if t.byName[f.Name] == id {
+ delete(t.byName, f.Name)
+ }
+ if p := (pairNameValue{f.Name, f.Value}); t.byNameValue[p] == id {
+ delete(t.byNameValue, p)
+ }
+ }
+ copy(t.ents, t.ents[n:])
+ for k := t.len() - n; k < t.len(); k++ {
+ t.ents[k] = HeaderField{} // so strings can be garbage collected
+ }
+ t.ents = t.ents[:t.len()-n]
+ if t.evictCount+uint64(n) < t.evictCount {
+ panic("evictCount overflow")
+ }
+ t.evictCount += uint64(n)
+}
+
+// search finds f in the table. If there is no match, i is 0.
+// If both name and value match, i is the matched index and nameValueMatch
+// becomes true. If only name matches, i points to that index and
+// nameValueMatch becomes false.
+//
+// The returned index is a 1-based HPACK index. For dynamic tables, HPACK says
+// that index 1 should be the newest entry, but t.ents[0] is the oldest entry,
+// meaning t.ents is reversed for dynamic tables. Hence, when t is a dynamic
+// table, the return value i actually refers to the entry t.ents[t.len()-i].
+//
+// All tables are assumed to be dynamic tables except for the global
+// staticTable pointer.
+//
+// See Section 2.3.3.
+func (t *headerFieldTable) search(f HeaderField) (i uint64, nameValueMatch bool) {
+ if !f.Sensitive {
+ if id := t.byNameValue[pairNameValue{f.Name, f.Value}]; id != 0 {
+ return t.idToIndex(id), true
+ }
+ }
+ if id := t.byName[f.Name]; id != 0 {
+ return t.idToIndex(id), false
+ }
+ return 0, false
+}
+
+// idToIndex converts a unique id to an HPACK index.
+// See Section 2.3.3.
+func (t *headerFieldTable) idToIndex(id uint64) uint64 {
+ if id <= t.evictCount {
+ panic(fmt.Sprintf("id (%v) <= evictCount (%v)", id, t.evictCount))
+ }
+ k := id - t.evictCount - 1 // convert id to an index t.ents[k]
+ if t != staticTable {
+ return uint64(t.len()) - k // dynamic table
+ }
+ return k + 1
}
// http://tools.ietf.org/html/draft-ietf-httpbis-header-compression-07#appendix-B
-var staticTable = [...]HeaderField{
- pair(":authority", ""), // index 1 (1-based)
- pair(":method", "GET"),
- pair(":method", "POST"),
- pair(":path", "/"),
- pair(":path", "/index.html"),
- pair(":scheme", "http"),
- pair(":scheme", "https"),
- pair(":status", "200"),
- pair(":status", "204"),
- pair(":status", "206"),
- pair(":status", "304"),
- pair(":status", "400"),
- pair(":status", "404"),
- pair(":status", "500"),
- pair("accept-charset", ""),
- pair("accept-encoding", "gzip, deflate"),
- pair("accept-language", ""),
- pair("accept-ranges", ""),
- pair("accept", ""),
- pair("access-control-allow-origin", ""),
- pair("age", ""),
- pair("allow", ""),
- pair("authorization", ""),
- pair("cache-control", ""),
- pair("content-disposition", ""),
- pair("content-encoding", ""),
- pair("content-language", ""),
- pair("content-length", ""),
- pair("content-location", ""),
- pair("content-range", ""),
- pair("content-type", ""),
- pair("cookie", ""),
- pair("date", ""),
- pair("etag", ""),
- pair("expect", ""),
- pair("expires", ""),
- pair("from", ""),
- pair("host", ""),
- pair("if-match", ""),
- pair("if-modified-since", ""),
- pair("if-none-match", ""),
- pair("if-range", ""),
- pair("if-unmodified-since", ""),
- pair("last-modified", ""),
- pair("link", ""),
- pair("location", ""),
- pair("max-forwards", ""),
- pair("proxy-authenticate", ""),
- pair("proxy-authorization", ""),
- pair("range", ""),
- pair("referer", ""),
- pair("refresh", ""),
- pair("retry-after", ""),
- pair("server", ""),
- pair("set-cookie", ""),
- pair("strict-transport-security", ""),
- pair("transfer-encoding", ""),
- pair("user-agent", ""),
- pair("vary", ""),
- pair("via", ""),
- pair("www-authenticate", ""),
+var staticTable = newStaticTable()
+var staticTableEntries = [...]HeaderField{
+ {Name: ":authority"},
+ {Name: ":method", Value: "GET"},
+ {Name: ":method", Value: "POST"},
+ {Name: ":path", Value: "/"},
+ {Name: ":path", Value: "/index.html"},
+ {Name: ":scheme", Value: "http"},
+ {Name: ":scheme", Value: "https"},
+ {Name: ":status", Value: "200"},
+ {Name: ":status", Value: "204"},
+ {Name: ":status", Value: "206"},
+ {Name: ":status", Value: "304"},
+ {Name: ":status", Value: "400"},
+ {Name: ":status", Value: "404"},
+ {Name: ":status", Value: "500"},
+ {Name: "accept-charset"},
+ {Name: "accept-encoding", Value: "gzip, deflate"},
+ {Name: "accept-language"},
+ {Name: "accept-ranges"},
+ {Name: "accept"},
+ {Name: "access-control-allow-origin"},
+ {Name: "age"},
+ {Name: "allow"},
+ {Name: "authorization"},
+ {Name: "cache-control"},
+ {Name: "content-disposition"},
+ {Name: "content-encoding"},
+ {Name: "content-language"},
+ {Name: "content-length"},
+ {Name: "content-location"},
+ {Name: "content-range"},
+ {Name: "content-type"},
+ {Name: "cookie"},
+ {Name: "date"},
+ {Name: "etag"},
+ {Name: "expect"},
+ {Name: "expires"},
+ {Name: "from"},
+ {Name: "host"},
+ {Name: "if-match"},
+ {Name: "if-modified-since"},
+ {Name: "if-none-match"},
+ {Name: "if-range"},
+ {Name: "if-unmodified-since"},
+ {Name: "last-modified"},
+ {Name: "link"},
+ {Name: "location"},
+ {Name: "max-forwards"},
+ {Name: "proxy-authenticate"},
+ {Name: "proxy-authorization"},
+ {Name: "range"},
+ {Name: "referer"},
+ {Name: "refresh"},
+ {Name: "retry-after"},
+ {Name: "server"},
+ {Name: "set-cookie"},
+ {Name: "strict-transport-security"},
+ {Name: "transfer-encoding"},
+ {Name: "user-agent"},
+ {Name: "vary"},
+ {Name: "via"},
+ {Name: "www-authenticate"},
+}
+
+func newStaticTable() *headerFieldTable {
+ t := &headerFieldTable{}
+ t.init()
+ for _, e := range staticTableEntries[:] {
+ t.addEntry(e)
+ }
+ return t
}
var huffmanCodes = [256]uint32{
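The unique-id bookkeeping above is the subtle part of the new table: ents[k] carries unique id k + evictCount + 1, and for dynamic tables idToIndex must reverse that so the newest entry becomes HPACK index 1. A standalone sketch of the arithmetic, assuming no evictions and three entries (illustrative only, not part of the patch):

package main

import "fmt"

// dynamicIDToIndex restates the dynamic-table branch of idToIndex:
// ents[k] has unique id k+evictCount+1, and the newest entry (largest k)
// must map to HPACK index 1.
func dynamicIDToIndex(id, evictCount, numEnts uint64) uint64 {
	k := id - evictCount - 1 // position within ents
	return numEnts - k       // reversed: the newest entry gets index 1
}

func main() {
	// Three entries added, none evicted: unique ids 1, 2, 3 sit at ents[0..2].
	for id := uint64(1); id <= 3; id++ {
		fmt.Printf("unique id %d -> HPACK index %d\n", id, dynamicIDToIndex(id, 0, 3))
	}
	// Prints indices 3, 2, 1: the oldest entry carries the largest index.
}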
diff --git a/vendor/golang.org/x/net/http2/not_go16.go b/vendor/golang.org/x/net/http2/not_go16.go
index efd2e1282c..508cebcc4d 100644
--- a/vendor/golang.org/x/net/http2/not_go16.go
+++ b/vendor/golang.org/x/net/http2/not_go16.go
@@ -7,7 +7,6 @@
package http2
import (
- "crypto/tls"
"net/http"
"time"
)
@@ -20,27 +19,3 @@ func transportExpectContinueTimeout(t1 *http.Transport) time.Duration {
return 0
}
-
-// isBadCipher reports whether the cipher is blacklisted by the HTTP/2 spec.
-func isBadCipher(cipher uint16) bool {
- switch cipher {
- case tls.TLS_RSA_WITH_RC4_128_SHA,
- tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
- tls.TLS_RSA_WITH_AES_128_CBC_SHA,
- tls.TLS_RSA_WITH_AES_256_CBC_SHA,
- tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,
- tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
- tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
- tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA,
- tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
- tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
- tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:
- // Reject cipher suites from Appendix A.
- // "This list includes those cipher suites that do not
- // offer an ephemeral key exchange and those that are
- // based on the TLS null, stream or block cipher type"
- return true
- default:
- return false
- }
-}
diff --git a/vendor/golang.org/x/net/http2/not_go19.go b/vendor/golang.org/x/net/http2/not_go19.go
new file mode 100644
index 0000000000..5ae07726b7
--- /dev/null
+++ b/vendor/golang.org/x/net/http2/not_go19.go
@@ -0,0 +1,16 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build !go1.9
+
+package http2
+
+import (
+ "net/http"
+)
+
+func configureServer19(s *http.Server, conf *Server) error {
+ // not supported prior to go1.9
+ return nil
+}
diff --git a/vendor/golang.org/x/net/http2/pipe.go b/vendor/golang.org/x/net/http2/pipe.go
index 914aaf8a7f..a6140099cb 100644
--- a/vendor/golang.org/x/net/http2/pipe.go
+++ b/vendor/golang.org/x/net/http2/pipe.go
@@ -15,8 +15,8 @@ import (
// underlying buffer is an interface. (io.Pipe is always unbuffered)
type pipe struct {
mu sync.Mutex
- c sync.Cond // c.L lazily initialized to &p.mu
- b pipeBuffer
+ c sync.Cond // c.L lazily initialized to &p.mu
+ b pipeBuffer // nil when done reading
err error // read error once empty. non-nil means closed.
breakErr error // immediate read error (caller doesn't see rest of b)
donec chan struct{} // closed on error
@@ -32,6 +32,9 @@ type pipeBuffer interface {
func (p *pipe) Len() int {
p.mu.Lock()
defer p.mu.Unlock()
+ if p.b == nil {
+ return 0
+ }
return p.b.Len()
}
@@ -47,7 +50,7 @@ func (p *pipe) Read(d []byte) (n int, err error) {
if p.breakErr != nil {
return 0, p.breakErr
}
- if p.b.Len() > 0 {
+ if p.b != nil && p.b.Len() > 0 {
return p.b.Read(d)
}
if p.err != nil {
@@ -55,6 +58,7 @@ func (p *pipe) Read(d []byte) (n int, err error) {
p.readFn() // e.g. copy trailers
p.readFn = nil // not sticky like p.err
}
+ p.b = nil
return 0, p.err
}
p.c.Wait()
@@ -75,6 +79,9 @@ func (p *pipe) Write(d []byte) (n int, err error) {
if p.err != nil {
return 0, errClosedPipeWrite
}
+ if p.breakErr != nil {
+ return len(d), nil // discard when there is no reader
+ }
return p.b.Write(d)
}
@@ -109,6 +116,9 @@ func (p *pipe) closeWithError(dst *error, err error, fn func()) {
return
}
p.readFn = fn
+ if dst == &p.breakErr {
+ p.b = nil
+ }
*dst = err
p.closeDoneLocked()
}
diff --git a/vendor/golang.org/x/net/http2/server.go b/vendor/golang.org/x/net/http2/server.go
index 3c641a8c24..7367b31c57 100644
--- a/vendor/golang.org/x/net/http2/server.go
+++ b/vendor/golang.org/x/net/http2/server.go
@@ -110,9 +110,41 @@ type Server struct {
// activity for the purposes of IdleTimeout.
IdleTimeout time.Duration
+ // MaxUploadBufferPerConnection is the size of the initial flow
+ // control window for each connections. The HTTP/2 spec does not
+ // control window for each connection. The HTTP/2 spec does not
+ // If the value is outside this range, a default value will be
+ // used instead.
+ MaxUploadBufferPerConnection int32
+
+ // MaxUploadBufferPerStream is the size of the initial flow control
+ // window for each stream. The HTTP/2 spec does not allow this to
+ // be larger than 2^32-1. If the value is zero or larger than the
+ // maximum, a default value will be used instead.
+ MaxUploadBufferPerStream int32
+
// NewWriteScheduler constructs a write scheduler for a connection.
// If nil, a default scheduler is chosen.
NewWriteScheduler func() WriteScheduler
+
+ // Internal state. This is a pointer (rather than embedded directly)
+ // so that we don't embed a Mutex in this struct, which will make the
+ // struct non-copyable, which might break some callers.
+ state *serverInternalState
+}
+
+func (s *Server) initialConnRecvWindowSize() int32 {
+ if s.MaxUploadBufferPerConnection > initialWindowSize {
+ return s.MaxUploadBufferPerConnection
+ }
+ return 1 << 20
+}
+
+func (s *Server) initialStreamRecvWindowSize() int32 {
+ if s.MaxUploadBufferPerStream > 0 {
+ return s.MaxUploadBufferPerStream
+ }
+ return 1 << 20
}
func (s *Server) maxReadFrameSize() uint32 {
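The two new MaxUploadBuffer fields feed the helpers just above. A hedged configuration sketch showing how a caller might raise both receive windows; it is not part of the patch, and the address and certificate paths are placeholders:

package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{Addr: ":8443", Handler: http.DefaultServeMux}
	h2 := &http2.Server{
		// Per-connection receive window; values of 65535 or less fall back
		// to the 1 MB default picked by initialConnRecvWindowSize.
		MaxUploadBufferPerConnection: 2 << 20,
		// Per-stream receive window; zero falls back to the 1 MB default.
		MaxUploadBufferPerStream: 1 << 20,
	}
	if err := http2.ConfigureServer(srv, h2); err != nil {
		log.Fatal(err)
	}
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}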
@@ -129,6 +161,40 @@ func (s *Server) maxConcurrentStreams() uint32 {
return defaultMaxStreams
}
+type serverInternalState struct {
+ mu sync.Mutex
+ activeConns map[*serverConn]struct{}
+}
+
+func (s *serverInternalState) registerConn(sc *serverConn) {
+ if s == nil {
+ return // if the Server was used without calling ConfigureServer
+ }
+ s.mu.Lock()
+ s.activeConns[sc] = struct{}{}
+ s.mu.Unlock()
+}
+
+func (s *serverInternalState) unregisterConn(sc *serverConn) {
+ if s == nil {
+ return // if the Server was used without calling ConfigureServer
+ }
+ s.mu.Lock()
+ delete(s.activeConns, sc)
+ s.mu.Unlock()
+}
+
+func (s *serverInternalState) startGracefulShutdown() {
+ if s == nil {
+ return // if the Server was used without calling ConfigureServer
+ }
+ s.mu.Lock()
+ for sc := range s.activeConns {
+ sc.startGracefulShutdown()
+ }
+ s.mu.Unlock()
+}
+
// ConfigureServer adds HTTP/2 support to a net/http Server.
//
// The configuration conf may be nil.
@@ -141,9 +207,13 @@ func ConfigureServer(s *http.Server, conf *Server) error {
if conf == nil {
conf = new(Server)
}
+ conf.state = &serverInternalState{activeConns: make(map[*serverConn]struct{})}
if err := configureServer18(s, conf); err != nil {
return err
}
+ if err := configureServer19(s, conf); err != nil {
+ return err
+ }
if s.TLSConfig == nil {
s.TLSConfig = new(tls.Config)
@@ -255,35 +325,37 @@ func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) {
defer cancel()
sc := &serverConn{
- srv: s,
- hs: opts.baseConfig(),
- conn: c,
- baseCtx: baseCtx,
- remoteAddrStr: c.RemoteAddr().String(),
- bw: newBufferedWriter(c),
- handler: opts.handler(),
- streams: make(map[uint32]*stream),
- readFrameCh: make(chan readFrameResult),
- wantWriteFrameCh: make(chan FrameWriteRequest, 8),
- wantStartPushCh: make(chan startPushRequest, 8),
- wroteFrameCh: make(chan frameWriteResult, 1), // buffered; one send in writeFrameAsync
- bodyReadCh: make(chan bodyReadMsg), // buffering doesn't matter either way
- doneServing: make(chan struct{}),
- clientMaxStreams: math.MaxUint32, // Section 6.5.2: "Initially, there is no limit to this value"
- advMaxStreams: s.maxConcurrentStreams(),
- initialWindowSize: initialWindowSize,
- maxFrameSize: initialMaxFrameSize,
- headerTableSize: initialHeaderTableSize,
- serveG: newGoroutineLock(),
- pushEnabled: true,
- }
+ srv: s,
+ hs: opts.baseConfig(),
+ conn: c,
+ baseCtx: baseCtx,
+ remoteAddrStr: c.RemoteAddr().String(),
+ bw: newBufferedWriter(c),
+ handler: opts.handler(),
+ streams: make(map[uint32]*stream),
+ readFrameCh: make(chan readFrameResult),
+ wantWriteFrameCh: make(chan FrameWriteRequest, 8),
+ serveMsgCh: make(chan interface{}, 8),
+ wroteFrameCh: make(chan frameWriteResult, 1), // buffered; one send in writeFrameAsync
+ bodyReadCh: make(chan bodyReadMsg), // buffering doesn't matter either way
+ doneServing: make(chan struct{}),
+ clientMaxStreams: math.MaxUint32, // Section 6.5.2: "Initially, there is no limit to this value"
+ advMaxStreams: s.maxConcurrentStreams(),
+ initialStreamSendWindowSize: initialWindowSize,
+ maxFrameSize: initialMaxFrameSize,
+ headerTableSize: initialHeaderTableSize,
+ serveG: newGoroutineLock(),
+ pushEnabled: true,
+ }
+
+ s.state.registerConn(sc)
+ defer s.state.unregisterConn(sc)
// The net/http package sets the write deadline from the
// http.Server.WriteTimeout during the TLS handshake, but then
- // passes the connection off to us with the deadline already
- // set. Disarm it here so that it is not applied to additional
- // streams opened on this connection.
- // TODO: implement WriteTimeout fully. See Issue 18437.
+ // passes the connection off to us with the deadline already set.
+ // Write deadlines are set per stream in serverConn.newStream.
+ // Disarm the net.Conn write deadline here.
if sc.hs.WriteTimeout != 0 {
sc.conn.SetWriteDeadline(time.Time{})
}
@@ -294,6 +366,9 @@ func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) {
sc.writeSched = NewRandomWriteScheduler()
}
+ // These start at the RFC-specified defaults. If there is a higher
+ // configured value for inflow, that will be updated when we send a
+ // WINDOW_UPDATE shortly after sending SETTINGS.
sc.flow.add(initialWindowSize)
sc.inflow.add(initialWindowSize)
sc.hpackEncoder = hpack.NewEncoder(&sc.headerWriteBuf)
@@ -376,10 +451,9 @@ type serverConn struct {
doneServing chan struct{} // closed when serverConn.serve ends
readFrameCh chan readFrameResult // written by serverConn.readFrames
wantWriteFrameCh chan FrameWriteRequest // from handlers -> serve
- wantStartPushCh chan startPushRequest // from handlers -> serve
wroteFrameCh chan frameWriteResult // from writeFrameAsync -> serve, tickles more frame writes
bodyReadCh chan bodyReadMsg // from handlers -> serve
- testHookCh chan func(int) // code to run on the serve loop
+ serveMsgCh chan interface{} // misc messages & code to send to / run on the serve loop
flow flow // conn-wide (not stream-specific) outbound flow control
inflow flow // conn-wide inbound flow control
tlsState *tls.ConnectionState // shared by all handlers, like net/http
@@ -387,38 +461,39 @@ type serverConn struct {
writeSched WriteScheduler
// Everything following is owned by the serve loop; use serveG.check():
- serveG goroutineLock // used to verify funcs are on serve()
- pushEnabled bool
- sawFirstSettings bool // got the initial SETTINGS frame after the preface
- needToSendSettingsAck bool
- unackedSettings int // how many SETTINGS have we sent without ACKs?
- clientMaxStreams uint32 // SETTINGS_MAX_CONCURRENT_STREAMS from client (our PUSH_PROMISE limit)
- advMaxStreams uint32 // our SETTINGS_MAX_CONCURRENT_STREAMS advertised the client
- curClientStreams uint32 // number of open streams initiated by the client
- curPushedStreams uint32 // number of open streams initiated by server push
- maxClientStreamID uint32 // max ever seen from client (odd), or 0 if there have been no client requests
- maxPushPromiseID uint32 // ID of the last push promise (even), or 0 if there have been no pushes
- streams map[uint32]*stream
- initialWindowSize int32
- maxFrameSize int32
- headerTableSize uint32
- peerMaxHeaderListSize uint32 // zero means unknown (default)
- canonHeader map[string]string // http2-lower-case -> Go-Canonical-Case
- writingFrame bool // started writing a frame (on serve goroutine or separate)
- writingFrameAsync bool // started a frame on its own goroutine but haven't heard back on wroteFrameCh
- needsFrameFlush bool // last frame write wasn't a flush
- inGoAway bool // we've started to or sent GOAWAY
- inFrameScheduleLoop bool // whether we're in the scheduleFrameWrite loop
- needToSendGoAway bool // we need to schedule a GOAWAY frame write
- goAwayCode ErrCode
- shutdownTimerCh <-chan time.Time // nil until used
- shutdownTimer *time.Timer // nil until used
- idleTimer *time.Timer // nil if unused
- idleTimerCh <-chan time.Time // nil if unused
+ serveG goroutineLock // used to verify funcs are on serve()
+ pushEnabled bool
+ sawFirstSettings bool // got the initial SETTINGS frame after the preface
+ needToSendSettingsAck bool
+ unackedSettings int // how many SETTINGS have we sent without ACKs?
+ clientMaxStreams uint32 // SETTINGS_MAX_CONCURRENT_STREAMS from client (our PUSH_PROMISE limit)
+ advMaxStreams                       uint32 // our SETTINGS_MAX_CONCURRENT_STREAMS advertised to the client
+ curClientStreams uint32 // number of open streams initiated by the client
+ curPushedStreams uint32 // number of open streams initiated by server push
+ maxClientStreamID uint32 // max ever seen from client (odd), or 0 if there have been no client requests
+ maxPushPromiseID uint32 // ID of the last push promise (even), or 0 if there have been no pushes
+ streams map[uint32]*stream
+ initialStreamSendWindowSize int32
+ maxFrameSize int32
+ headerTableSize uint32
+ peerMaxHeaderListSize uint32 // zero means unknown (default)
+ canonHeader map[string]string // http2-lower-case -> Go-Canonical-Case
+ writingFrame bool // started writing a frame (on serve goroutine or separate)
+ writingFrameAsync bool // started a frame on its own goroutine but haven't heard back on wroteFrameCh
+ needsFrameFlush bool // last frame write wasn't a flush
+ inGoAway bool // we've started to or sent GOAWAY
+ inFrameScheduleLoop bool // whether we're in the scheduleFrameWrite loop
+ needToSendGoAway bool // we need to schedule a GOAWAY frame write
+ goAwayCode ErrCode
+ shutdownTimer *time.Timer // nil until used
+ idleTimer *time.Timer // nil if unused
// Owned by the writeFrameAsync goroutine:
headerWriteBuf bytes.Buffer
hpackEncoder *hpack.Encoder
+
+ // Used by startGracefulShutdown.
+ shutdownOnce sync.Once
}
func (sc *serverConn) maxHeaderListSize() uint32 {
@@ -463,10 +538,10 @@ type stream struct {
numTrailerValues int64
weight uint8
state streamState
- resetQueued bool // RST_STREAM queued for write; set by sc.resetStream
- gotTrailerHeader bool // HEADER frame for trailers was seen
- wroteHeaders bool // whether we wrote headers (not status 100)
- reqBuf []byte // if non-nil, body pipe buffer to return later at EOF
+ resetQueued bool // RST_STREAM queued for write; set by sc.resetStream
+ gotTrailerHeader bool // HEADER frame for trailers was seen
+ wroteHeaders bool // whether we wrote headers (not status 100)
+ writeDeadline *time.Timer // nil if unused
trailer http.Header // accumulated trailers
reqTrailer http.Header // handler's Request.Trailer
@@ -696,15 +771,17 @@ func (sc *serverConn) serve() {
{SettingMaxFrameSize, sc.srv.maxReadFrameSize()},
{SettingMaxConcurrentStreams, sc.advMaxStreams},
{SettingMaxHeaderListSize, sc.maxHeaderListSize()},
-
- // TODO: more actual settings, notably
- // SettingInitialWindowSize, but then we also
- // want to bump up the conn window size the
- // same amount here right after the settings
+ {SettingInitialWindowSize, uint32(sc.srv.initialStreamRecvWindowSize())},
},
})
sc.unackedSettings++
+ // Each connection starts with initialWindowSize inflow tokens.
+ // If a higher value is configured, we add more tokens.
+ if diff := sc.srv.initialConnRecvWindowSize() - initialWindowSize; diff > 0 {
+ sc.sendWindowUpdate(nil, int(diff))
+ }
+
if err := sc.readPreface(); err != nil {
sc.condlogf(err, "http2: server: error reading preface from client %v: %v", sc.conn.RemoteAddr(), err)
return
@@ -717,27 +794,25 @@ func (sc *serverConn) serve() {
sc.setConnState(http.StateIdle)
if sc.srv.IdleTimeout != 0 {
- sc.idleTimer = time.NewTimer(sc.srv.IdleTimeout)
+ sc.idleTimer = time.AfterFunc(sc.srv.IdleTimeout, sc.onIdleTimer)
defer sc.idleTimer.Stop()
- sc.idleTimerCh = sc.idleTimer.C
- }
-
- var gracefulShutdownCh <-chan struct{}
- if sc.hs != nil {
- gracefulShutdownCh = h1ServerShutdownChan(sc.hs)
}
go sc.readFrames() // closed by defer sc.conn.Close above
- settingsTimer := time.NewTimer(firstSettingsTimeout)
+ settingsTimer := time.AfterFunc(firstSettingsTimeout, sc.onSettingsTimer)
+ defer settingsTimer.Stop()
+
loopNum := 0
for {
loopNum++
select {
case wr := <-sc.wantWriteFrameCh:
+ if se, ok := wr.write.(StreamError); ok {
+ sc.resetStream(se)
+ break
+ }
sc.writeFrame(wr)
- case spr := <-sc.wantStartPushCh:
- sc.startPush(spr)
case res := <-sc.wroteFrameCh:
sc.wroteFrame(res)
case res := <-sc.readFrameCh:
@@ -745,26 +820,37 @@ func (sc *serverConn) serve() {
return
}
res.readMore()
- if settingsTimer.C != nil {
+ if settingsTimer != nil {
settingsTimer.Stop()
- settingsTimer.C = nil
+ settingsTimer = nil
}
case m := <-sc.bodyReadCh:
sc.noteBodyRead(m.st, m.n)
- case <-settingsTimer.C:
- sc.logf("timeout waiting for SETTINGS frames from %v", sc.conn.RemoteAddr())
- return
- case <-gracefulShutdownCh:
- gracefulShutdownCh = nil
- sc.startGracefulShutdown()
- case <-sc.shutdownTimerCh:
- sc.vlogf("GOAWAY close timer fired; closing conn from %v", sc.conn.RemoteAddr())
- return
- case <-sc.idleTimerCh:
- sc.vlogf("connection is idle")
- sc.goAway(ErrCodeNo)
- case fn := <-sc.testHookCh:
- fn(loopNum)
+ case msg := <-sc.serveMsgCh:
+ switch v := msg.(type) {
+ case func(int):
+ v(loopNum) // for testing
+ case *serverMessage:
+ switch v {
+ case settingsTimerMsg:
+ sc.logf("timeout waiting for SETTINGS frames from %v", sc.conn.RemoteAddr())
+ return
+ case idleTimerMsg:
+ sc.vlogf("connection is idle")
+ sc.goAway(ErrCodeNo)
+ case shutdownTimerMsg:
+ sc.vlogf("GOAWAY close timer fired; closing conn from %v", sc.conn.RemoteAddr())
+ return
+ case gracefulShutdownMsg:
+ sc.startGracefulShutdownInternal()
+ default:
+ panic("unknown timer")
+ }
+ case *startPushRequest:
+ sc.startPush(v)
+ default:
+ panic(fmt.Sprintf("unexpected type %T", v))
+ }
}
if sc.inGoAway && sc.curOpenStreams() == 0 && !sc.needToSendGoAway && !sc.writingFrame {
@@ -773,6 +859,36 @@ func (sc *serverConn) serve() {
}
}
+func (sc *serverConn) awaitGracefulShutdown(sharedCh <-chan struct{}, privateCh chan struct{}) {
+ select {
+ case <-sc.doneServing:
+ case <-sharedCh:
+ close(privateCh)
+ }
+}
+
+type serverMessage int
+
+// Message values sent to serveMsgCh.
+var (
+ settingsTimerMsg = new(serverMessage)
+ idleTimerMsg = new(serverMessage)
+ shutdownTimerMsg = new(serverMessage)
+ gracefulShutdownMsg = new(serverMessage)
+)
+
+func (sc *serverConn) onSettingsTimer() { sc.sendServeMsg(settingsTimerMsg) }
+func (sc *serverConn) onIdleTimer() { sc.sendServeMsg(idleTimerMsg) }
+func (sc *serverConn) onShutdownTimer() { sc.sendServeMsg(shutdownTimerMsg) }
+
+func (sc *serverConn) sendServeMsg(msg interface{}) {
+ sc.serveG.checkNotOn() // NOT
+ select {
+ case sc.serveMsgCh <- msg:
+ case <-sc.doneServing:
+ }
+}
+
// readPreface reads the ClientPreface greeting from the peer
// or returns an error on timeout or an invalid greeting.
func (sc *serverConn) readPreface() error {
@@ -1014,7 +1130,11 @@ func (sc *serverConn) wroteFrame(res frameWriteResult) {
// stateClosed after the RST_STREAM frame is
// written.
st.state = stateHalfClosedLocal
- sc.resetStream(streamError(st.id, ErrCodeCancel))
+ // Section 8.1: a server MAY request that the client abort
+ // transmission of a request without error by sending a
+ // RST_STREAM with an error code of NO_ERROR after sending
+ // a complete response.
+ sc.resetStream(streamError(st.id, ErrCodeNo))
case stateHalfClosedRemote:
sc.closeStream(st, errHandlerComplete)
}
@@ -1086,10 +1206,19 @@ func (sc *serverConn) scheduleFrameWrite() {
sc.inFrameScheduleLoop = false
}
-// startGracefulShutdown sends a GOAWAY with ErrCodeNo to tell the
-// client we're gracefully shutting down. The connection isn't closed
-// until all current streams are done.
+// startGracefulShutdown gracefully shuts down a connection. This
+// sends GOAWAY with ErrCodeNo to tell the client we're gracefully
+// shutting down. The connection isn't closed until all current
+// streams are done.
+//
+// startGracefulShutdown returns immediately; it does not wait until
+// the connection has shut down.
func (sc *serverConn) startGracefulShutdown() {
+ sc.serveG.checkNotOn() // NOT
+ sc.shutdownOnce.Do(func() { sc.sendServeMsg(gracefulShutdownMsg) })
+}
+
+func (sc *serverConn) startGracefulShutdownInternal() {
sc.goAwayIn(ErrCodeNo, 0)
}
@@ -1121,8 +1250,7 @@ func (sc *serverConn) goAwayIn(code ErrCode, forceCloseIn time.Duration) {
func (sc *serverConn) shutDownIn(d time.Duration) {
sc.serveG.check()
- sc.shutdownTimer = time.NewTimer(d)
- sc.shutdownTimerCh = sc.shutdownTimer.C
+ sc.shutdownTimer = time.AfterFunc(d, sc.onShutdownTimer)
}
func (sc *serverConn) resetStream(se StreamError) {
@@ -1305,6 +1433,9 @@ func (sc *serverConn) closeStream(st *stream, err error) {
panic(fmt.Sprintf("invariant; can't close stream in state %v", st.state))
}
st.state = stateClosed
+ if st.writeDeadline != nil {
+ st.writeDeadline.Stop()
+ }
if st.isPushed() {
sc.curPushedStreams--
} else {
@@ -1317,7 +1448,7 @@ func (sc *serverConn) closeStream(st *stream, err error) {
sc.idleTimer.Reset(sc.srv.IdleTimeout)
}
if h1ServerKeepAlivesDisabled(sc.hs) {
- sc.startGracefulShutdown()
+ sc.startGracefulShutdownInternal()
}
}
if p := st.body; p != nil {
@@ -1395,9 +1526,9 @@ func (sc *serverConn) processSettingInitialWindowSize(val uint32) error {
// adjust the size of all stream flow control windows that it
// maintains by the difference between the new value and the
// old value."
- old := sc.initialWindowSize
- sc.initialWindowSize = int32(val)
- growth := sc.initialWindowSize - old // may be negative
+ old := sc.initialStreamSendWindowSize
+ sc.initialStreamSendWindowSize = int32(val)
+ growth := int32(val) - old // may be negative
for _, st := range sc.streams {
if !st.flow.add(growth) {
// 6.9.2 Initial Flow Control Window Size
@@ -1504,7 +1635,7 @@ func (sc *serverConn) processGoAway(f *GoAwayFrame) error {
} else {
sc.vlogf("http2: received GOAWAY %+v, starting graceful shutdown", f)
}
- sc.startGracefulShutdown()
+ sc.startGracefulShutdownInternal()
// http://tools.ietf.org/html/rfc7540#section-6.8
// We should not create any new streams, which means we should disable push.
sc.pushEnabled = false
@@ -1543,6 +1674,12 @@ func (st *stream) copyTrailersToHandlerRequest() {
}
}
+// onWriteTimeout is run on its own goroutine (from time.AfterFunc)
+// when the stream's WriteTimeout has fired.
+func (st *stream) onWriteTimeout() {
+ st.sc.writeFrameFromHandler(FrameWriteRequest{write: streamError(st.id, ErrCodeInternal)})
+}
+
func (sc *serverConn) processHeaders(f *MetaHeadersFrame) error {
sc.serveG.check()
id := f.StreamID
@@ -1719,9 +1856,12 @@ func (sc *serverConn) newStream(id, pusherID uint32, state streamState) *stream
}
st.cw.Init()
st.flow.conn = &sc.flow // link to conn-level counter
- st.flow.add(sc.initialWindowSize)
- st.inflow.conn = &sc.inflow // link to conn-level counter
- st.inflow.add(initialWindowSize) // TODO: update this when we send a higher initial window size in the initial settings
+ st.flow.add(sc.initialStreamSendWindowSize)
+ st.inflow.conn = &sc.inflow // link to conn-level counter
+ st.inflow.add(sc.srv.initialStreamRecvWindowSize())
+ if sc.hs.WriteTimeout != 0 {
+ st.writeDeadline = time.AfterFunc(sc.hs.WriteTimeout, st.onWriteTimeout)
+ }
sc.streams[id] = st
sc.writeSched.OpenStream(st.id, OpenStreamOptions{PusherID: pusherID})
@@ -1785,16 +1925,14 @@ func (sc *serverConn) newWriterAndRequest(st *stream, f *MetaHeadersFrame) (*res
return nil, nil, err
}
if bodyOpen {
- st.reqBuf = getRequestBodyBuf()
- req.Body.(*requestBody).pipe = &pipe{
- b: &fixedBuffer{buf: st.reqBuf},
- }
-
if vv, ok := rp.header["Content-Length"]; ok {
req.ContentLength, _ = strconv.ParseInt(vv[0], 10, 64)
} else {
req.ContentLength = -1
}
+ req.Body.(*requestBody).pipe = &pipe{
+ b: &dataBuffer{expected: req.ContentLength},
+ }
}
return rw, req, nil
}
@@ -1890,24 +2028,6 @@ func (sc *serverConn) newWriterAndRequestNoBody(st *stream, rp requestParam) (*r
return rw, req, nil
}
-var reqBodyCache = make(chan []byte, 8)
-
-func getRequestBodyBuf() []byte {
- select {
- case b := <-reqBodyCache:
- return b
- default:
- return make([]byte, initialWindowSize)
- }
-}
-
-func putRequestBodyBuf(b []byte) {
- select {
- case reqBodyCache <- b:
- default:
- }
-}
-
// Run on its own goroutine.
func (sc *serverConn) runHandler(rw *responseWriter, req *http.Request, handler func(http.ResponseWriter, *http.Request)) {
didPanic := true
@@ -2003,12 +2123,6 @@ func (sc *serverConn) noteBodyReadFromHandler(st *stream, n int, err error) {
case <-sc.doneServing:
}
}
- if err == io.EOF {
- if buf := st.reqBuf; buf != nil {
- st.reqBuf = nil // shouldn't matter; field unused by other
- putRequestBodyBuf(buf)
- }
- }
}
func (sc *serverConn) noteBodyRead(st *stream, n int) {
@@ -2514,7 +2628,7 @@ func (w *responseWriter) push(target string, opts pushOptions) error {
return fmt.Errorf("method %q must be GET or HEAD", opts.Method)
}
- msg := startPushRequest{
+ msg := &startPushRequest{
parent: st,
method: opts.Method,
url: u,
@@ -2527,7 +2641,7 @@ func (w *responseWriter) push(target string, opts pushOptions) error {
return errClientDisconnected
case <-st.cw:
return errStreamClosed
- case sc.wantStartPushCh <- msg:
+ case sc.serveMsgCh <- msg:
}
select {
@@ -2549,7 +2663,7 @@ type startPushRequest struct {
done chan error
}
-func (sc *serverConn) startPush(msg startPushRequest) {
+func (sc *serverConn) startPush(msg *startPushRequest) {
sc.serveG.check()
// http://tools.ietf.org/html/rfc7540#section-6.6.
@@ -2588,7 +2702,7 @@ func (sc *serverConn) startPush(msg startPushRequest) {
// A server that is unable to establish a new stream identifier can send a GOAWAY
// frame so that the client is forced to open a new connection for new streams.
if sc.maxPushPromiseID+2 >= 1<<31 {
- sc.startGracefulShutdown()
+ sc.startGracefulShutdownInternal()
return 0, ErrPushLimitReached
}
sc.maxPushPromiseID += 2
@@ -2713,31 +2827,6 @@ var badTrailer = map[string]bool{
"Www-Authenticate": true,
}
-// h1ServerShutdownChan returns a channel that will be closed when the
-// provided *http.Server wants to shut down.
-//
-// This is a somewhat hacky way to get at http1 innards. It works
-// when the http2 code is bundled into the net/http package in the
-// standard library. The alternatives ended up making the cmd/go tool
-// depend on http Servers. This is the lightest option for now.
-// This is tested via the TestServeShutdown* tests in net/http.
-func h1ServerShutdownChan(hs *http.Server) <-chan struct{} {
- if fn := testh1ServerShutdownChan; fn != nil {
- return fn(hs)
- }
- var x interface{} = hs
- type I interface {
- getDoneChan() <-chan struct{}
- }
- if hs, ok := x.(I); ok {
- return hs.getDoneChan()
- }
- return nil
-}
-
-// optional test hook for h1ServerShutdownChan.
-var testh1ServerShutdownChan func(hs *http.Server) <-chan struct{}
-
// h1ServerKeepAlivesDisabled reports whether hs has its keep-alives
// disabled. See comments on h1ServerShutdownChan above for why
// the code is written this way.
diff --git a/vendor/golang.org/x/net/http2/transport.go b/vendor/golang.org/x/net/http2/transport.go
index fef8396867..3a85f25a20 100644
--- a/vendor/golang.org/x/net/http2/transport.go
+++ b/vendor/golang.org/x/net/http2/transport.go
@@ -1528,8 +1528,7 @@ func (rl *clientConnReadLoop) handleResponse(cs *clientStream, f *MetaHeadersFra
return res, nil
}
- buf := new(bytes.Buffer) // TODO(bradfitz): recycle this garbage
- cs.bufPipe = pipe{b: buf}
+ cs.bufPipe = pipe{b: &dataBuffer{expected: res.ContentLength}}
cs.bytesRemain = res.ContentLength
res.Body = transportResponseBody{cs}
go cs.awaitRequestCancel(cs.req)
@@ -1656,6 +1655,7 @@ func (b transportResponseBody) Close() error {
cc.wmu.Lock()
if !serverSentStreamEnd {
cc.fr.WriteRSTStream(cs.ID, ErrCodeCancel)
+ cs.didReset = true
}
// Return connection-level flow control.
if unread > 0 {
@@ -1703,12 +1703,6 @@ func (rl *clientConnReadLoop) processData(f *DataFrame) error {
return nil
}
if f.Length > 0 {
- if len(data) > 0 && cs.bufPipe.b == nil {
- // Data frame after it's already closed?
- cc.logf("http2: Transport received DATA frame for closed stream; closing connection")
- return ConnectionError(ErrCodeProtocol)
- }
-
// Check connection-level flow control.
cc.mu.Lock()
if cs.inflow.available() >= int32(f.Length) {
diff --git a/vendor/golang.org/x/net/http2/writesched_priority.go b/vendor/golang.org/x/net/http2/writesched_priority.go
index 01132721bd..848fed6ec7 100644
--- a/vendor/golang.org/x/net/http2/writesched_priority.go
+++ b/vendor/golang.org/x/net/http2/writesched_priority.go
@@ -53,7 +53,7 @@ type PriorityWriteSchedulerConfig struct {
}
// NewPriorityWriteScheduler constructs a WriteScheduler that schedules
-// frames by following HTTP/2 priorities as described in RFC 7340 Section 5.3.
+// frames by following HTTP/2 priorities as described in RFC 7540 Section 5.3.
// If cfg is nil, default options are used.
func NewPriorityWriteScheduler(cfg *PriorityWriteSchedulerConfig) WriteScheduler {
if cfg == nil {
diff --git a/vendor/google.golang.org/genproto/googleapis/rpc/status/LICENSE b/vendor/google.golang.org/genproto/googleapis/rpc/status/LICENSE
new file mode 100644
index 0000000000..d645695673
--- /dev/null
+++ b/vendor/google.golang.org/genproto/googleapis/rpc/status/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go b/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go
new file mode 100644
index 0000000000..40e79375bf
--- /dev/null
+++ b/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go
@@ -0,0 +1,143 @@
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// source: google/rpc/status.proto
+
+/*
+Package status is a generated protocol buffer package.
+
+It is generated from these files:
+ google/rpc/status.proto
+
+It has these top-level messages:
+ Status
+*/
+package status
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+import google_protobuf "github.com/golang/protobuf/ptypes/any"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
+
+// The `Status` type defines a logical error model that is suitable for different
+// programming environments, including REST APIs and RPC APIs. It is used by
+// [gRPC](https://github.com/grpc). The error model is designed to be:
+//
+// - Simple to use and understand for most users
+// - Flexible enough to meet unexpected needs
+//
+// # Overview
+//
+// The `Status` message contains three pieces of data: error code, error message,
+// and error details. The error code should be an enum value of
+// [google.rpc.Code][google.rpc.Code], but it may accept additional error codes if needed. The
+// error message should be a developer-facing English message that helps
+// developers *understand* and *resolve* the error. If a localized user-facing
+// error message is needed, put the localized message in the error details or
+// localize it in the client. The optional error details may contain arbitrary
+// information about the error. There is a predefined set of error detail types
+// in the package `google.rpc` which can be used for common error conditions.
+//
+// # Language mapping
+//
+// The `Status` message is the logical representation of the error model, but it
+// is not necessarily the actual wire format. When the `Status` message is
+// exposed in different client libraries and different wire protocols, it can be
+// mapped differently. For example, it will likely be mapped to some exceptions
+// in Java, but more likely mapped to some error codes in C.
+//
+// # Other uses
+//
+// The error model and the `Status` message can be used in a variety of
+// environments, either with or without APIs, to provide a
+// consistent developer experience across different environments.
+//
+// Example uses of this error model include:
+//
+// - Partial errors. If a service needs to return partial errors to the client,
+// it may embed the `Status` in the normal response to indicate the partial
+// errors.
+//
+// - Workflow errors. A typical workflow has multiple steps. Each step may
+// have a `Status` message for error reporting purpose.
+//
+// - Batch operations. If a client uses batch request and batch response, the
+// `Status` message should be used directly inside batch response, one for
+// each error sub-response.
+//
+// - Asynchronous operations. If an API call embeds asynchronous operation
+// results in its response, the status of those operations should be
+// represented directly using the `Status` message.
+//
+// - Logging. If some API errors are stored in logs, the message `Status` could
+// be used directly after any stripping needed for security/privacy reasons.
+type Status struct {
+ // The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code].
+ Code int32 `protobuf:"varint,1,opt,name=code" json:"code,omitempty"`
+ // A developer-facing error message, which should be in English. Any
+ // user-facing error message should be localized and sent in the
+ // [google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client.
+ Message string `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"`
+ // A list of messages that carry the error details. There will be a
+ // common set of message types for APIs to use.
+ Details []*google_protobuf.Any `protobuf:"bytes,3,rep,name=details" json:"details,omitempty"`
+}
+
+func (m *Status) Reset() { *m = Status{} }
+func (m *Status) String() string { return proto.CompactTextString(m) }
+func (*Status) ProtoMessage() {}
+func (*Status) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+
+func (m *Status) GetCode() int32 {
+ if m != nil {
+ return m.Code
+ }
+ return 0
+}
+
+func (m *Status) GetMessage() string {
+ if m != nil {
+ return m.Message
+ }
+ return ""
+}
+
+func (m *Status) GetDetails() []*google_protobuf.Any {
+ if m != nil {
+ return m.Details
+ }
+ return nil
+}
+
+func init() {
+ proto.RegisterType((*Status)(nil), "google.rpc.Status")
+}
+
+func init() { proto.RegisterFile("google/rpc/status.proto", fileDescriptor0) }
+
+var fileDescriptor0 = []byte{
+ // 209 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4f, 0xcf, 0xcf, 0x4f,
+ 0xcf, 0x49, 0xd5, 0x2f, 0x2a, 0x48, 0xd6, 0x2f, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xd6, 0x2b, 0x28,
+ 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x82, 0x48, 0xe8, 0x15, 0x15, 0x24, 0x4b, 0x49, 0x42, 0x15, 0x81,
+ 0x65, 0x92, 0x4a, 0xd3, 0xf4, 0x13, 0xf3, 0x2a, 0x21, 0xca, 0x94, 0xd2, 0xb8, 0xd8, 0x82, 0xc1,
+ 0xda, 0x84, 0x84, 0xb8, 0x58, 0x92, 0xf3, 0x53, 0x52, 0x25, 0x18, 0x15, 0x18, 0x35, 0x58, 0x83,
+ 0xc0, 0x6c, 0x21, 0x09, 0x2e, 0xf6, 0xdc, 0xd4, 0xe2, 0xe2, 0xc4, 0xf4, 0x54, 0x09, 0x26, 0x05,
+ 0x46, 0x0d, 0xce, 0x20, 0x18, 0x57, 0x48, 0x8f, 0x8b, 0x3d, 0x25, 0xb5, 0x24, 0x31, 0x33, 0xa7,
+ 0x58, 0x82, 0x59, 0x81, 0x59, 0x83, 0xdb, 0x48, 0x44, 0x0f, 0x6a, 0x21, 0xcc, 0x12, 0x3d, 0xc7,
+ 0xbc, 0xca, 0x20, 0x98, 0x22, 0xa7, 0x38, 0x2e, 0xbe, 0xe4, 0xfc, 0x5c, 0x3d, 0x84, 0xa3, 0x9c,
+ 0xb8, 0x21, 0xf6, 0x06, 0x80, 0x94, 0x07, 0x30, 0x46, 0x99, 0x43, 0xa5, 0xd2, 0xf3, 0x73, 0x12,
+ 0xf3, 0xd2, 0xf5, 0xf2, 0x8b, 0xd2, 0xf5, 0xd3, 0x53, 0xf3, 0xc0, 0x86, 0xe9, 0x43, 0xa4, 0x12,
+ 0x0b, 0x32, 0x8b, 0x91, 0xfc, 0x69, 0x0d, 0xa1, 0x16, 0x31, 0x31, 0x07, 0x05, 0x38, 0x27, 0xb1,
+ 0x81, 0x55, 0x1a, 0x03, 0x02, 0x00, 0x00, 0xff, 0xff, 0xa4, 0x53, 0xf0, 0x7c, 0x10, 0x01, 0x00,
+ 0x00,
+}
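For illustration only, a minimal sketch of how a caller consumes this error model through the google.golang.org/grpc/status package used elsewhere in this patch; the code and message values are placeholders.

package example

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// describeError turns an RPC error back into its status code and message,
// mirroring the Code and Message fields of the Status type above.
func describeError(err error) string {
	if st, ok := status.FromError(err); ok {
		return fmt.Sprintf("code=%v message=%q", st.Code(), st.Message())
	}
	return err.Error()
}

func example() {
	err := status.Error(codes.NotFound, "no such feature")
	fmt.Println(describeError(err))
}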
diff --git a/vendor/google.golang.org/grpc/balancer.go b/vendor/google.golang.org/grpc/balancer.go
index 9d943fbada..9af4ee70ac 100644
--- a/vendor/google.golang.org/grpc/balancer.go
+++ b/vendor/google.golang.org/grpc/balancer.go
@@ -35,6 +35,7 @@ package grpc
import (
"fmt"
+ "net"
"sync"
"golang.org/x/net/context"
@@ -60,6 +61,10 @@ type BalancerConfig struct {
// use to dial to a remote load balancer server. The Balancer implementations
// can ignore this if it does not need to talk to another party securely.
DialCreds credentials.TransportCredentials
+ // Dialer is the custom dialer the Balancer implementation can use to dial
+// to a remote load balancer server. The Balancer implementation
+// can ignore this if it does not need to talk to a remote balancer.
+ Dialer func(context.Context, string) (net.Conn, error)
}
// BalancerGetOptions configures a Get call.
@@ -385,6 +390,9 @@ func (rr *roundRobin) Notify() <-chan []Address {
func (rr *roundRobin) Close() error {
rr.mu.Lock()
defer rr.mu.Unlock()
+ if rr.done {
+ return errBalancerClosed
+ }
rr.done = true
if rr.w != nil {
rr.w.Close()
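As a sketch of how a custom Balancer might hold on to the new Dialer field; the remoteBalancer type and its fallback behavior are hypothetical and only illustrate the config added above.

package example

import (
	"net"
	"sync"

	"golang.org/x/net/context"
	"google.golang.org/grpc"
)

// remoteBalancer is a hypothetical Balancer that talks to an external
// load-balancer service and therefore needs the config's Dialer.
type remoteBalancer struct {
	mu     sync.Mutex
	dialer func(context.Context, string) (net.Conn, error)
}

func (b *remoteBalancer) Start(target string, config grpc.BalancerConfig) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.dialer = config.Dialer
	if b.dialer == nil {
		// Fall back to plain TCP when no custom dialer was configured.
		b.dialer = func(ctx context.Context, addr string) (net.Conn, error) {
			return (&net.Dialer{Cancel: ctx.Done()}).Dial("tcp", addr)
		}
	}
	return nil
}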
diff --git a/vendor/google.golang.org/grpc/benchmark/worker/benchmark_client.go b/vendor/google.golang.org/grpc/benchmark/worker/benchmark_client.go
index 199bbe1fa3..dfe8c8f0fe 100644
--- a/vendor/google.golang.org/grpc/benchmark/worker/benchmark_client.go
+++ b/vendor/google.golang.org/grpc/benchmark/worker/benchmark_client.go
@@ -37,6 +37,7 @@ import (
"math"
"runtime"
"sync"
+ "syscall"
"time"
"golang.org/x/net/context"
@@ -85,6 +86,7 @@ type benchmarkClient struct {
lastResetTime time.Time
histogramOptions stats.HistogramOptions
lockingHistograms []lockingHistogram
+ rusageLastReset *syscall.Rusage
}
func printClientConfig(config *testpb.ClientConfig) {
@@ -226,6 +228,9 @@ func startBenchmarkClient(config *testpb.ClientConfig) (*benchmarkClient, error)
return nil, err
}
+ rusage := new(syscall.Rusage)
+ syscall.Getrusage(syscall.RUSAGE_SELF, rusage)
+
rpcCountPerConn := int(config.OutstandingRpcsPerChannel)
bc := &benchmarkClient{
histogramOptions: stats.HistogramOptions{
@@ -236,9 +241,10 @@ func startBenchmarkClient(config *testpb.ClientConfig) (*benchmarkClient, error)
},
lockingHistograms: make([]lockingHistogram, rpcCountPerConn*len(conns), rpcCountPerConn*len(conns)),
- stop: make(chan bool),
- lastResetTime: time.Now(),
- closeConns: closeConns,
+ stop: make(chan bool),
+ lastResetTime: time.Now(),
+ closeConns: closeConns,
+ rusageLastReset: rusage,
}
if err = performRPCs(config, conns, bc); err != nil {
@@ -338,8 +344,9 @@ func (bc *benchmarkClient) doCloseLoopStreaming(conns []*grpc.ClientConn, rpcCou
// getStats returns the stats for benchmark client.
// It resets lastResetTime and all histograms if argument reset is true.
func (bc *benchmarkClient) getStats(reset bool) *testpb.ClientStats {
- var timeElapsed float64
+ var wallTimeElapsed, uTimeElapsed, sTimeElapsed float64
mergedHistogram := stats.NewHistogram(bc.histogramOptions)
+ latestRusage := new(syscall.Rusage)
if reset {
// Merging histogram may take some time.
@@ -353,14 +360,21 @@ func (bc *benchmarkClient) getStats(reset bool) *testpb.ClientStats {
mergedHistogram.Merge(toMerge[i])
}
- timeElapsed = time.Since(bc.lastResetTime).Seconds()
+ wallTimeElapsed = time.Since(bc.lastResetTime).Seconds()
+ syscall.Getrusage(syscall.RUSAGE_SELF, latestRusage)
+ uTimeElapsed, sTimeElapsed = cpuTimeDiff(bc.rusageLastReset, latestRusage)
+
+ bc.rusageLastReset = latestRusage
bc.lastResetTime = time.Now()
} else {
// Merge only, not reset.
for i := range bc.lockingHistograms {
bc.lockingHistograms[i].mergeInto(mergedHistogram)
}
- timeElapsed = time.Since(bc.lastResetTime).Seconds()
+
+ wallTimeElapsed = time.Since(bc.lastResetTime).Seconds()
+ syscall.Getrusage(syscall.RUSAGE_SELF, latestRusage)
+ uTimeElapsed, sTimeElapsed = cpuTimeDiff(bc.rusageLastReset, latestRusage)
}
b := make([]uint32, len(mergedHistogram.Buckets), len(mergedHistogram.Buckets))
@@ -376,9 +390,9 @@ func (bc *benchmarkClient) getStats(reset bool) *testpb.ClientStats {
SumOfSquares: float64(mergedHistogram.SumOfSquares),
Count: float64(mergedHistogram.Count),
},
- TimeElapsed: timeElapsed,
- TimeUser: 0,
- TimeSystem: 0,
+ TimeElapsed: wallTimeElapsed,
+ TimeUser: uTimeElapsed,
+ TimeSystem: sTimeElapsed,
}
}
diff --git a/vendor/google.golang.org/grpc/benchmark/worker/benchmark_server.go b/vendor/google.golang.org/grpc/benchmark/worker/benchmark_server.go
index 667ef2c17d..0d20581df4 100644
--- a/vendor/google.golang.org/grpc/benchmark/worker/benchmark_server.go
+++ b/vendor/google.golang.org/grpc/benchmark/worker/benchmark_server.go
@@ -38,6 +38,7 @@ import (
"strconv"
"strings"
"sync"
+ "syscall"
"time"
"google.golang.org/grpc"
@@ -55,11 +56,12 @@ var (
)
type benchmarkServer struct {
- port int
- cores int
- closeFunc func()
- mu sync.RWMutex
- lastResetTime time.Time
+ port int
+ cores int
+ closeFunc func()
+ mu sync.RWMutex
+ lastResetTime time.Time
+ rusageLastReset *syscall.Rusage
}
func printServerConfig(config *testpb.ServerConfig) {
@@ -156,18 +158,35 @@ func startBenchmarkServer(config *testpb.ServerConfig, serverPort int) (*benchma
grpclog.Fatalf("failed to get port number from server address: %v", err)
}
- return &benchmarkServer{port: p, cores: numOfCores, closeFunc: closeFunc, lastResetTime: time.Now()}, nil
+ rusage := new(syscall.Rusage)
+ syscall.Getrusage(syscall.RUSAGE_SELF, rusage)
+
+ return &benchmarkServer{
+ port: p,
+ cores: numOfCores,
+ closeFunc: closeFunc,
+ lastResetTime: time.Now(),
+ rusageLastReset: rusage,
+ }, nil
}
// getStats returns the stats for benchmark server.
// It resets lastResetTime if argument reset is true.
func (bs *benchmarkServer) getStats(reset bool) *testpb.ServerStats {
- // TODO wall time, sys time, user time.
bs.mu.RLock()
defer bs.mu.RUnlock()
- timeElapsed := time.Since(bs.lastResetTime).Seconds()
+ wallTimeElapsed := time.Since(bs.lastResetTime).Seconds()
+ rusageLatest := new(syscall.Rusage)
+ syscall.Getrusage(syscall.RUSAGE_SELF, rusageLatest)
+ uTimeElapsed, sTimeElapsed := cpuTimeDiff(bs.rusageLastReset, rusageLatest)
+
if reset {
bs.lastResetTime = time.Now()
+ bs.rusageLastReset = rusageLatest
+ }
+ return &testpb.ServerStats{
+ TimeElapsed: wallTimeElapsed,
+ TimeUser: uTimeElapsed,
+ TimeSystem: sTimeElapsed,
}
- return &testpb.ServerStats{TimeElapsed: timeElapsed, TimeUser: 0, TimeSystem: 0}
}
diff --git a/vendor/google.golang.org/grpc/benchmark/worker/main.go b/vendor/google.golang.org/grpc/benchmark/worker/main.go
index 17c52519bb..8a80406202 100644
--- a/vendor/google.golang.org/grpc/benchmark/worker/main.go
+++ b/vendor/google.golang.org/grpc/benchmark/worker/main.go
@@ -38,6 +38,8 @@ import (
"fmt"
"io"
"net"
+ "net/http"
+ _ "net/http/pprof"
"runtime"
"strconv"
"time"
@@ -50,8 +52,10 @@ import (
)
var (
- driverPort = flag.Int("driver_port", 10000, "port for communication with driver")
- serverPort = flag.Int("server_port", 0, "port for benchmark server if not specified by server config message")
+ driverPort = flag.Int("driver_port", 10000, "port for communication with driver")
+ serverPort = flag.Int("server_port", 0, "port for benchmark server if not specified by server config message")
+ pprofPort = flag.Int("pprof_port", -1, "Port for pprof debug server to listen on. Pprof server doesn't start if unset")
+ blockProfRate = flag.Int("block_prof_rate", 0, "fraction of goroutine blocking events to report in blocking profile")
)
type byteBufCodec struct {
@@ -227,5 +231,14 @@ func main() {
s.Stop()
}()
+ runtime.SetBlockProfileRate(*blockProfRate)
+
+ if *pprofPort >= 0 {
+ go func() {
+ grpclog.Println("Starting pprof server on port " + strconv.Itoa(*pprofPort))
+ grpclog.Println(http.ListenAndServe("localhost:"+strconv.Itoa(*pprofPort), nil))
+ }()
+ }
+
s.Serve(lis)
}
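The new worker flags might be exercised roughly as follows; the package path and port numbers are illustrative only.

go run ./benchmark/worker --driver_port=10000 --pprof_port=6060 --block_prof_rate=1
go tool pprof http://localhost:6060/debug/pprof/block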
diff --git a/vendor/google.golang.org/grpc/benchmark/worker/util.go b/vendor/google.golang.org/grpc/benchmark/worker/util.go
index f0016ce49f..6f9b2b03fd 100644
--- a/vendor/google.golang.org/grpc/benchmark/worker/util.go
+++ b/vendor/google.golang.org/grpc/benchmark/worker/util.go
@@ -36,6 +36,7 @@ import (
"log"
"os"
"path/filepath"
+ "syscall"
)
// abs returns the absolute path the given relative file or directory path,
@@ -52,6 +53,20 @@ func abs(rel string) string {
return filepath.Join(v, rel)
}
+func cpuTimeDiff(first *syscall.Rusage, latest *syscall.Rusage) (float64, float64) {
+ var (
+ utimeDiffs = latest.Utime.Sec - first.Utime.Sec
+ utimeDiffus = latest.Utime.Usec - first.Utime.Usec
+ stimeDiffs = latest.Stime.Sec - first.Stime.Sec
+ stimeDiffus = latest.Stime.Usec - first.Stime.Usec
+ )
+
+ uTimeElapsed := float64(utimeDiffs) + float64(utimeDiffus)*1.0e-6
+ sTimeElapsed := float64(stimeDiffs) + float64(stimeDiffus)*1.0e-6
+
+ return uTimeElapsed, sTimeElapsed
+}
+
func goPackagePath(pkg string) (path string, err error) {
gp := os.Getenv("GOPATH")
if gp == "" {
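A short sketch of how cpuTimeDiff is intended to be paired with two Getrusage snapshots, assuming it sits in the same worker package as the helper above (the syscall import is already present there).

// measureCPU reports user and system CPU seconds consumed while fn runs,
// using the cpuTimeDiff helper defined above.
func measureCPU(fn func()) (user, system float64) {
	var before, after syscall.Rusage
	syscall.Getrusage(syscall.RUSAGE_SELF, &before)
	fn()
	syscall.Getrusage(syscall.RUSAGE_SELF, &after)
	return cpuTimeDiff(&before, &after)
}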
diff --git a/vendor/google.golang.org/grpc/call.go b/vendor/google.golang.org/grpc/call.go
index 81b52be294..e73812fd18 100644
--- a/vendor/google.golang.org/grpc/call.go
+++ b/vendor/google.golang.org/grpc/call.go
@@ -36,7 +36,6 @@ package grpc
import (
"bytes"
"io"
- "math"
"time"
"golang.org/x/net/context"
@@ -44,6 +43,7 @@ import (
"google.golang.org/grpc/codes"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/transport"
)
@@ -73,14 +73,17 @@ func recvResponse(ctx context.Context, dopts dialOptions, t transport.ClientTran
}
}
for {
- if err = recv(p, dopts.codec, stream, dopts.dc, reply, math.MaxInt32, inPayload); err != nil {
+ if c.maxReceiveMessageSize == nil {
+ return Errorf(codes.Internal, "callInfo maxReceiveMessageSize field uninitialized(nil)")
+ }
+ if err = recv(p, dopts.codec, stream, dopts.dc, reply, *c.maxReceiveMessageSize, inPayload); err != nil {
if err == io.EOF {
break
}
return
}
}
- if inPayload != nil && err == io.EOF && stream.StatusCode() == codes.OK {
+ if inPayload != nil && err == io.EOF && stream.Status().Code() == codes.OK {
// TODO in the current implementation, inTrailer may be handled before inPayload in some cases.
// Fix the order if necessary.
dopts.copts.StatsHandler.HandleRPC(ctx, inPayload)
@@ -93,11 +96,7 @@ func recvResponse(ctx context.Context, dopts dialOptions, t transport.ClientTran
}
// sendRequest writes out various information of an RPC such as Context and Message.
-func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor, callHdr *transport.CallHdr, t transport.ClientTransport, args interface{}, opts *transport.Options) (_ *transport.Stream, err error) {
- stream, err := t.NewStream(ctx, callHdr)
- if err != nil {
- return nil, err
- }
+func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor, c *callInfo, callHdr *transport.CallHdr, stream *transport.Stream, t transport.ClientTransport, args interface{}, opts *transport.Options) (err error) {
defer func() {
if err != nil {
// If err is connection error, t will be closed, no need to close stream here.
@@ -120,7 +119,13 @@ func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor,
}
outBuf, err := encode(dopts.codec, args, compressor, cbuf, outPayload)
if err != nil {
- return nil, Errorf(codes.Internal, "grpc: %v", err)
+ return err
+ }
+ if c.maxSendMessageSize == nil {
+ return Errorf(codes.Internal, "callInfo maxSendMessageSize field uninitialized(nil)")
+ }
+ if len(outBuf) > *c.maxSendMessageSize {
+ return Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max (%d vs. %d)", len(outBuf), *c.maxSendMessageSize)
}
err = t.Write(stream, outBuf, opts)
if err == nil && outPayload != nil {
@@ -131,10 +136,10 @@ func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor,
// does not exist.) so that t.Write could get io.EOF from wait(...). Leave the following
// recvResponse to get the final status.
if err != nil && err != io.EOF {
- return nil, err
+ return err
}
// Sent successfully.
- return stream, nil
+ return nil
}
// Invoke sends the RPC request on the wire and returns after response is received.
@@ -149,14 +154,18 @@ func Invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
func invoke(ctx context.Context, method string, args, reply interface{}, cc *ClientConn, opts ...CallOption) (e error) {
c := defaultCallInfo
- if mc, ok := cc.getMethodConfig(method); ok {
- c.failFast = !mc.WaitForReady
- if mc.Timeout > 0 {
- var cancel context.CancelFunc
- ctx, cancel = context.WithTimeout(ctx, mc.Timeout)
- defer cancel()
- }
+ mc := cc.GetMethodConfig(method)
+ if mc.WaitForReady != nil {
+ c.failFast = !*mc.WaitForReady
+ }
+
+ if mc.Timeout != nil && *mc.Timeout >= 0 {
+ var cancel context.CancelFunc
+ ctx, cancel = context.WithTimeout(ctx, *mc.Timeout)
+ defer cancel()
}
+
+ opts = append(cc.dopts.callOptions, opts...)
for _, o := range opts {
if err := o.before(&c); err != nil {
return toRPCErr(err)
@@ -167,6 +176,10 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
o.after(&c)
}
}()
+
+ c.maxSendMessageSize = getMaxSize(mc.MaxReqSize, c.maxSendMessageSize, defaultClientMaxSendMessageSize)
+ c.maxReceiveMessageSize = getMaxSize(mc.MaxRespSize, c.maxReceiveMessageSize, defaultClientMaxReceiveMessageSize)
+
if EnableTracing {
c.traceInfo.tr = trace.New("grpc.Sent."+methodFamily(method), method)
defer c.traceInfo.tr.Finish()
@@ -183,26 +196,25 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
}
}()
}
+ ctx = newContextWithRPCInfo(ctx)
sh := cc.dopts.copts.StatsHandler
if sh != nil {
- ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method})
+ ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method, FailFast: c.failFast})
begin := &stats.Begin{
Client: true,
BeginTime: time.Now(),
FailFast: c.failFast,
}
sh.HandleRPC(ctx, begin)
- }
- defer func() {
- if sh != nil {
+ defer func() {
end := &stats.End{
Client: true,
EndTime: time.Now(),
Error: e,
}
sh.HandleRPC(ctx, end)
- }
- }()
+ }()
+ }
topts := &transport.Options{
Last: true,
Delay: false,
@@ -224,6 +236,9 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
if cc.dopts.cp != nil {
callHdr.SendCompress = cc.dopts.cp.Type()
}
+ if c.creds != nil {
+ callHdr.Creds = c.creds
+ }
gopts := BalancerGetOptions{
BlockingWait: !c.failFast,
@@ -231,7 +246,7 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
t, put, err = cc.getTransport(ctx, gopts)
if err != nil {
// TODO(zhaoq): Probably revisit the error handling.
- if _, ok := err.(*rpcError); ok {
+ if _, ok := status.FromError(err); ok {
return err
}
if err == errConnClosing || err == errConnUnavailable {
@@ -246,19 +261,35 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
if c.traceInfo.tr != nil {
c.traceInfo.tr.LazyLog(&payload{sent: true, msg: args}, true)
}
- stream, err = sendRequest(ctx, cc.dopts, cc.dopts.cp, callHdr, t, args, topts)
+ stream, err = t.NewStream(ctx, callHdr)
+ if err != nil {
+ if put != nil {
+ if _, ok := err.(transport.ConnectionError); ok {
+ // If error is connection error, transport was sending data on wire,
+ // and we are not sure if anything has been sent on wire.
+ // If error is not connection error, we are sure nothing has been sent.
+ updateRPCInfoInContext(ctx, rpcInfo{bytesSent: true, bytesReceived: false})
+ }
+ put()
+ }
+ if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
+ continue
+ }
+ return toRPCErr(err)
+ }
+ err = sendRequest(ctx, cc.dopts, cc.dopts.cp, &c, callHdr, stream, t, args, topts)
if err != nil {
if put != nil {
+ updateRPCInfoInContext(ctx, rpcInfo{
+ bytesSent: stream.BytesSent(),
+ bytesReceived: stream.BytesReceived(),
+ })
put()
- put = nil
}
// Retry a non-failfast RPC when
// i) there is a connection error; or
// ii) the server started to drain before this RPC was initiated.
- if _, ok := err.(transport.ConnectionError); ok || err == transport.ErrStreamDrain {
- if c.failFast {
- return toRPCErr(err)
- }
+ if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
continue
}
return toRPCErr(err)
@@ -266,13 +297,13 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
err = recvResponse(ctx, cc.dopts, t, &c, stream, reply)
if err != nil {
if put != nil {
+ updateRPCInfoInContext(ctx, rpcInfo{
+ bytesSent: stream.BytesSent(),
+ bytesReceived: stream.BytesReceived(),
+ })
put()
- put = nil
}
- if _, ok := err.(transport.ConnectionError); ok || err == transport.ErrStreamDrain {
- if c.failFast {
- return toRPCErr(err)
- }
+ if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
continue
}
return toRPCErr(err)
@@ -282,9 +313,12 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
}
t.CloseStream(stream, nil)
if put != nil {
+ updateRPCInfoInContext(ctx, rpcInfo{
+ bytesSent: stream.BytesSent(),
+ bytesReceived: stream.BytesReceived(),
+ })
put()
- put = nil
}
- return Errorf(stream.StatusCode(), "%s", stream.StatusDesc())
+ return stream.Status().Err()
}
}
diff --git a/vendor/google.golang.org/grpc/clientconn.go b/vendor/google.golang.org/grpc/clientconn.go
index 459ce0b641..04c16526ee 100644
--- a/vendor/google.golang.org/grpc/clientconn.go
+++ b/vendor/google.golang.org/grpc/clientconn.go
@@ -45,6 +45,7 @@ import (
"golang.org/x/net/trace"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/grpclog"
+ "google.golang.org/grpc/keepalive"
"google.golang.org/grpc/stats"
"google.golang.org/grpc/transport"
)
@@ -55,8 +56,7 @@ var (
ErrClientConnClosing = errors.New("grpc: the client connection is closing")
// ErrClientConnTimeout indicates that the ClientConn cannot establish the
// underlying connections within the specified timeout.
- // DEPRECATED: Please use context.DeadlineExceeded instead. This error will be
- // removed in Q1 2017.
+ // DEPRECATED: Please use context.DeadlineExceeded instead.
ErrClientConnTimeout = errors.New("grpc: timed out when dialing")
// errNoTransportSecurity indicates that there is no transport security
@@ -78,7 +78,8 @@ var (
errConnClosing = errors.New("grpc: the connection is closing")
// errConnUnavailable indicates that the connection is unavailable.
errConnUnavailable = errors.New("grpc: the connection is unavailable")
- errNoAddr = errors.New("grpc: there is no address available to dial")
+ // errBalancerClosed indicates that the balancer is closed.
+ errBalancerClosed = errors.New("grpc: balancer is closed")
// minimum time to give a connection to complete
minConnectTimeout = 20 * time.Second
)
@@ -86,23 +87,57 @@ var (
// dialOptions configure a Dial call. dialOptions are set by the DialOption
// values passed to Dial.
type dialOptions struct {
- unaryInt UnaryClientInterceptor
- streamInt StreamClientInterceptor
- codec Codec
- cp Compressor
- dc Decompressor
- bs backoffStrategy
- balancer Balancer
- block bool
- insecure bool
- timeout time.Duration
- scChan <-chan ServiceConfig
- copts transport.ConnectOptions
+ unaryInt UnaryClientInterceptor
+ streamInt StreamClientInterceptor
+ codec Codec
+ cp Compressor
+ dc Decompressor
+ bs backoffStrategy
+ balancer Balancer
+ block bool
+ insecure bool
+ timeout time.Duration
+ scChan <-chan ServiceConfig
+ copts transport.ConnectOptions
+ callOptions []CallOption
}
+const (
+ defaultClientMaxReceiveMessageSize = 1024 * 1024 * 4
+ defaultClientMaxSendMessageSize = 1024 * 1024 * 4
+)
+
// DialOption configures how we set up the connection.
type DialOption func(*dialOptions)
+// WithInitialWindowSize returns a DialOption which sets the value for initial window size on a stream.
+// The lower bound for window size is 64K and any value smaller than that will be ignored.
+func WithInitialWindowSize(s int32) DialOption {
+ return func(o *dialOptions) {
+ o.copts.InitialWindowSize = s
+ }
+}
+
+// WithInitialConnWindowSize returns a DialOption which sets the value for initial window size on a connection.
+// The lower bound for window size is 64K and any value smaller than that will be ignored.
+func WithInitialConnWindowSize(s int32) DialOption {
+ return func(o *dialOptions) {
+ o.copts.InitialConnWindowSize = s
+ }
+}
+
+// WithMaxMsgSize returns a DialOption which sets the maximum message size the client can receive. Deprecated: use WithDefaultCallOptions(MaxCallRecvMsgSize(s)) instead.
+func WithMaxMsgSize(s int) DialOption {
+ return WithDefaultCallOptions(MaxCallRecvMsgSize(s))
+}
+
+// WithDefaultCallOptions returns a DialOption which sets the default CallOptions for calls over the connection.
+func WithDefaultCallOptions(cos ...CallOption) DialOption {
+ return func(o *dialOptions) {
+ o.callOptions = append(o.callOptions, cos...)
+ }
+}
+
// WithCodec returns a DialOption which sets a codec for message marshaling and unmarshaling.
func WithCodec(c Codec) DialOption {
return func(o *dialOptions) {
@@ -194,7 +229,7 @@ func WithTransportCredentials(creds credentials.TransportCredentials) DialOption
}
// WithPerRPCCredentials returns a DialOption which sets
-// credentials which will place auth state on each outbound RPC.
+// credentials and places auth state on each outbound RPC.
func WithPerRPCCredentials(creds credentials.PerRPCCredentials) DialOption {
return func(o *dialOptions) {
o.copts.PerRPCCredentials = append(o.copts.PerRPCCredentials, creds)
@@ -231,7 +266,7 @@ func WithStatsHandler(h stats.Handler) DialOption {
}
}
-// FailOnNonTempDialError returns a DialOption that specified if gRPC fails on non-temporary dial errors.
+// FailOnNonTempDialError returns a DialOption that specifies if gRPC fails on non-temporary dial errors.
// If f is true, and dialer returns a non-temporary error, gRPC will fail the connection to the network
// address and won't try to reconnect.
// The default value of FailOnNonTempDialError is false.
@@ -249,6 +284,13 @@ func WithUserAgent(s string) DialOption {
}
}
+// WithKeepaliveParams returns a DialOption that specifies keepalive parameters for the client transport.
+func WithKeepaliveParams(kp keepalive.ClientParameters) DialOption {
+ return func(o *dialOptions) {
+ o.copts.KeepaliveParams = kp
+ }
+}
+
// WithUnaryInterceptor returns a DialOption that specifies the interceptor for unary RPCs.
func WithUnaryInterceptor(f UnaryClientInterceptor) DialOption {
return func(o *dialOptions) {
@@ -278,19 +320,35 @@ func Dial(target string, opts ...DialOption) (*ClientConn, error) {
}
// DialContext creates a client connection to the given target. ctx can be used to
-// cancel or expire the pending connecting. Once this function returns, the
+// cancel or expire the pending connection. Once this function returns, the
// cancellation and expiration of ctx will be noop. Users should call ClientConn.Close
// to terminate all the pending operations after this function returns.
-// This is the EXPERIMENTAL API.
func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error) {
cc := &ClientConn{
target: target,
conns: make(map[Address]*addrConn),
}
cc.ctx, cc.cancel = context.WithCancel(context.Background())
+
for _, opt := range opts {
opt(&cc.dopts)
}
+ cc.mkp = cc.dopts.copts.KeepaliveParams
+
+ if cc.dopts.copts.Dialer == nil {
+ cc.dopts.copts.Dialer = newProxyDialer(
+ func(ctx context.Context, addr string) (net.Conn, error) {
+ return dialContext(ctx, "tcp", addr)
+ },
+ )
+ }
+
+ if cc.dopts.copts.UserAgent != "" {
+ cc.dopts.copts.UserAgent += " " + grpcUA
+ } else {
+ cc.dopts.copts.UserAgent = grpcUA
+ }
+
if cc.dopts.timeout > 0 {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, cc.dopts.timeout)
@@ -309,15 +367,16 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
}
}()
+ scSet := false
if cc.dopts.scChan != nil {
- // Wait for the initial service config.
+ // Try to get an initial service config.
select {
case sc, ok := <-cc.dopts.scChan:
if ok {
cc.sc = sc
+ scSet = true
}
- case <-ctx.Done():
- return nil, ctx.Err()
+ default:
}
}
// Set defaults.
@@ -333,53 +392,44 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
} else if cc.dopts.insecure && cc.dopts.copts.Authority != "" {
cc.authority = cc.dopts.copts.Authority
} else {
- colonPos := strings.LastIndex(target, ":")
- if colonPos == -1 {
- colonPos = len(target)
- }
- cc.authority = target[:colonPos]
+ cc.authority = target
}
- var ok bool
waitC := make(chan error, 1)
go func() {
- var addrs []Address
+ defer close(waitC)
if cc.dopts.balancer == nil && cc.sc.LB != nil {
cc.dopts.balancer = cc.sc.LB
}
- if cc.dopts.balancer == nil {
- // Connect to target directly if balancer is nil.
- addrs = append(addrs, Address{Addr: target})
- } else {
+ if cc.dopts.balancer != nil {
var credsClone credentials.TransportCredentials
if creds != nil {
credsClone = creds.Clone()
}
config := BalancerConfig{
DialCreds: credsClone,
+ Dialer: cc.dopts.copts.Dialer,
}
if err := cc.dopts.balancer.Start(target, config); err != nil {
waitC <- err
return
}
ch := cc.dopts.balancer.Notify()
- if ch == nil {
- // There is no name resolver installed.
- addrs = append(addrs, Address{Addr: target})
- } else {
- addrs, ok = <-ch
- if !ok || len(addrs) == 0 {
- waitC <- errNoAddr
- return
+ if ch != nil {
+ if cc.dopts.block {
+ doneChan := make(chan struct{})
+ go cc.lbWatcher(doneChan)
+ <-doneChan
+ } else {
+ go cc.lbWatcher(nil)
}
- }
- }
- for _, a := range addrs {
- if err := cc.resetAddrConn(a, false, nil); err != nil {
- waitC <- err
return
}
}
- close(waitC)
+ // No balancer, or no resolver within the balancer. Connect directly.
+ if err := cc.resetAddrConn(Address{Addr: target}, cc.dopts.block, nil); err != nil {
+ waitC <- err
+ return
+ }
}()
select {
case <-ctx.Done():
@@ -389,16 +439,21 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
return nil, err
}
}
-
- // If balancer is nil or balancer.Notify() is nil, ok will be false here.
- // The lbWatcher goroutine will not be created.
- if ok {
- go cc.lbWatcher()
+ if cc.dopts.scChan != nil && !scSet {
+ // Blocking wait for the initial service config.
+ select {
+ case sc, ok := <-cc.dopts.scChan:
+ if ok {
+ cc.sc = sc
+ }
+ case <-ctx.Done():
+ return nil, ctx.Err()
+ }
}
-
if cc.dopts.scChan != nil {
go cc.scWatcher()
}
+
return cc, nil
}
@@ -447,9 +502,14 @@ type ClientConn struct {
mu sync.RWMutex
sc ServiceConfig
conns map[Address]*addrConn
+ // Keepalive parameters can be updated if a GoAway is received.
+ mkp keepalive.ClientParameters
}
-func (cc *ClientConn) lbWatcher() {
+// lbWatcher watches the Notify channel of the balancer in cc and manages
+// connections accordingly. If doneChan is not nil, it is closed after the
+// first successful connection is made.
+func (cc *ClientConn) lbWatcher(doneChan chan struct{}) {
for addrs := range cc.dopts.balancer.Notify() {
var (
add []Address // Addresses need to setup connections.
@@ -476,7 +536,15 @@ func (cc *ClientConn) lbWatcher() {
}
cc.mu.Unlock()
for _, a := range add {
- cc.resetAddrConn(a, true, nil)
+ if doneChan != nil {
+ err := cc.resetAddrConn(a, true, nil)
+ if err == nil {
+ close(doneChan)
+ doneChan = nil
+ }
+ } else {
+ cc.resetAddrConn(a, false, nil)
+ }
}
for _, c := range del {
c.tearDown(errConnDrain)
@@ -505,12 +573,15 @@ func (cc *ClientConn) scWatcher() {
// resetAddrConn creates an addrConn for addr and adds it to cc.conns.
// If there is an old addrConn for addr, it will be torn down, using tearDownErr as the reason.
// If tearDownErr is nil, errConnDrain will be used instead.
-func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr error) error {
+func (cc *ClientConn) resetAddrConn(addr Address, block bool, tearDownErr error) error {
ac := &addrConn{
cc: cc,
addr: addr,
dopts: cc.dopts,
}
+ cc.mu.RLock()
+ ac.dopts.copts.KeepaliveParams = cc.mkp
+ cc.mu.RUnlock()
ac.ctx, ac.cancel = context.WithCancel(cc.ctx)
ac.stateCV = sync.NewCond(&ac.mu)
if EnableTracing {
@@ -555,8 +626,7 @@ func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr err
stale.tearDown(tearDownErr)
}
}
- // skipWait may overwrite the decision in ac.dopts.block.
- if ac.dopts.block && !skipWait {
+ if block {
if err := ac.resetTransport(false); err != nil {
if err != errConnClosing {
// Tear down ac and delete it from cc.conns.
@@ -589,12 +659,23 @@ func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr err
return nil
}
-// TODO: Avoid the locking here.
-func (cc *ClientConn) getMethodConfig(method string) (m MethodConfig, ok bool) {
+// GetMethodConfig gets the method config of the input method.
+// If there's an exact match for input method (i.e. /service/method), we return
+// the corresponding MethodConfig.
+// If there isn't an exact match for the input method, we look for the default config
+// under the service (i.e. /service/). If there is a default MethodConfig for
+// the service, we return it.
+// Otherwise, we return an empty MethodConfig.
+func (cc *ClientConn) GetMethodConfig(method string) MethodConfig {
+ // TODO: Avoid the locking here.
cc.mu.RLock()
defer cc.mu.RUnlock()
- m, ok = cc.sc.Methods[method]
- return
+ m, ok := cc.sc.Methods[method]
+ if !ok {
+ i := strings.LastIndex(method, "/")
+ m, _ = cc.sc.Methods[method[:i+1]]
+ }
+ return m
}
func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions) (transport.ClientTransport, func(), error) {
@@ -635,6 +716,7 @@ func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions)
}
if !ok {
if put != nil {
+ updateRPCInfoInContext(ctx, rpcInfo{bytesSent: false, bytesReceived: false})
put()
}
return nil, nil, errConnClosing
@@ -642,6 +724,7 @@ func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions)
t, err := ac.wait(ctx, cc.dopts.balancer != nil, !opts.BlockingWait)
if err != nil {
if put != nil {
+ updateRPCInfoInContext(ctx, rpcInfo{bytesSent: false, bytesReceived: false})
put()
}
return nil, nil, err
@@ -693,6 +776,20 @@ type addrConn struct {
tearDownErr error
}
+// adjustParams updates parameters used to create transports upon
+// receiving a GoAway.
+func (ac *addrConn) adjustParams(r transport.GoAwayReason) {
+ switch r {
+ case transport.TooManyPings:
+ v := 2 * ac.dopts.copts.KeepaliveParams.Time
+ ac.cc.mu.Lock()
+ if v > ac.cc.mkp.Time {
+ ac.cc.mkp.Time = v
+ }
+ ac.cc.mu.Unlock()
+ }
+}
+
// printf records an event in ac's event log, unless ac has been closed.
// REQUIRES ac.mu is held.
func (ac *addrConn) printf(format string, a ...interface{}) {
@@ -777,6 +874,8 @@ func (ac *addrConn) resetTransport(closeTransport bool) error {
Metadata: ac.addr.Metadata,
}
newTransport, err := transport.NewClientTransport(ctx, sinfo, ac.dopts.copts)
+ // Don't call cancel in success path due to a race in Go 1.6:
+ // https://github.com/golang/go/issues/15078.
if err != nil {
cancel()
@@ -799,11 +898,14 @@ func (ac *addrConn) resetTransport(closeTransport bool) error {
}
ac.mu.Unlock()
closeTransport = false
+ timer := time.NewTimer(sleepTime - time.Since(connectTime))
select {
- case <-time.After(sleepTime - time.Since(connectTime)):
+ case <-timer.C:
case <-ac.ctx.Done():
+ timer.Stop()
return ac.ctx.Err()
}
+ timer.Stop()
continue
}
ac.mu.Lock()
@@ -847,6 +949,7 @@ func (ac *addrConn) transportMonitor() {
}
return
case <-t.GoAway():
+ ac.adjustParams(t.GetGoAwayReason())
// If GoAway happens without any network I/O error, ac is closed without shutting down the
// underlying transport (the transport will be closed when all the pending RPCs finished or
// failed.).
@@ -855,9 +958,9 @@ func (ac *addrConn) transportMonitor() {
// In both cases, a new ac is created.
select {
case <-t.Error():
- ac.cc.resetAddrConn(ac.addr, true, errNetworkIO)
+ ac.cc.resetAddrConn(ac.addr, false, errNetworkIO)
default:
- ac.cc.resetAddrConn(ac.addr, true, errConnDrain)
+ ac.cc.resetAddrConn(ac.addr, false, errConnDrain)
}
return
case <-t.Error():
@@ -866,7 +969,8 @@ func (ac *addrConn) transportMonitor() {
t.Close()
return
case <-t.GoAway():
- ac.cc.resetAddrConn(ac.addr, true, errNetworkIO)
+ ac.adjustParams(t.GetGoAwayReason())
+ ac.cc.resetAddrConn(ac.addr, false, errNetworkIO)
return
default:
}
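For illustration, the dial options introduced in this file can be combined on a single Dial call; the target address, message size, and keepalive values below are placeholders.

package example

import (
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func dial() *grpc.ClientConn {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithInsecure(),
		// Default per-call limits, replacing the deprecated WithMaxMsgSize.
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(8*1024*1024)),
		// Client-side keepalive, enabled by the new WithKeepaliveParams option.
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:    30 * time.Second,
			Timeout: 10 * time.Second,
		}),
	)
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	return conn
}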
diff --git a/vendor/google.golang.org/grpc/codec.go b/vendor/google.golang.org/grpc/codec.go
new file mode 100644
index 0000000000..001804d37d
--- /dev/null
+++ b/vendor/google.golang.org/grpc/codec.go
@@ -0,0 +1,119 @@
+/*
+ *
+ * Copyright 2014, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package grpc
+
+import (
+ "math"
+ "sync"
+
+ "github.com/golang/protobuf/proto"
+)
+
+// Codec defines the interface gRPC uses to encode and decode messages.
+// Note that implementations of this interface must be thread safe;
+// a Codec's methods can be called from concurrent goroutines.
+type Codec interface {
+ // Marshal returns the wire format of v.
+ Marshal(v interface{}) ([]byte, error)
+ // Unmarshal parses the wire format into v.
+ Unmarshal(data []byte, v interface{}) error
+ // String returns the name of the Codec implementation. The returned
+ // string will be used as part of content type in transmission.
+ String() string
+}
+
+// protoCodec is a Codec implementation with protobuf. It is the default codec for gRPC.
+type protoCodec struct {
+}
+
+type cachedProtoBuffer struct {
+ lastMarshaledSize uint32
+ proto.Buffer
+}
+
+func capToMaxInt32(val int) uint32 {
+ if val > math.MaxInt32 {
+ return uint32(math.MaxInt32)
+ }
+ return uint32(val)
+}
+
+func (p protoCodec) marshal(v interface{}, cb *cachedProtoBuffer) ([]byte, error) {
+ protoMsg := v.(proto.Message)
+ newSlice := make([]byte, 0, cb.lastMarshaledSize)
+
+ cb.SetBuf(newSlice)
+ cb.Reset()
+ if err := cb.Marshal(protoMsg); err != nil {
+ return nil, err
+ }
+ out := cb.Bytes()
+ cb.lastMarshaledSize = capToMaxInt32(len(out))
+ return out, nil
+}
+
+func (p protoCodec) Marshal(v interface{}) ([]byte, error) {
+ cb := protoBufferPool.Get().(*cachedProtoBuffer)
+ out, err := p.marshal(v, cb)
+
+ // put back buffer and lose the ref to the slice
+ cb.SetBuf(nil)
+ protoBufferPool.Put(cb)
+ return out, err
+}
+
+func (p protoCodec) Unmarshal(data []byte, v interface{}) error {
+ cb := protoBufferPool.Get().(*cachedProtoBuffer)
+ cb.SetBuf(data)
+ v.(proto.Message).Reset()
+ err := cb.Unmarshal(v.(proto.Message))
+ cb.SetBuf(nil)
+ protoBufferPool.Put(cb)
+ return err
+}
+
+func (protoCodec) String() string {
+ return "proto"
+}
+
+var (
+ protoBufferPool = &sync.Pool{
+ New: func() interface{} {
+ return &cachedProtoBuffer{
+ Buffer: proto.Buffer{},
+ lastMarshaledSize: 16,
+ }
+ },
+ }
+)
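As a sketch, a custom Codec satisfying the interface above could wrap encoding/json and be installed with WithCodec; the jsonCodec type is hypothetical and exists only to illustrate the interface.

package example

import (
	"encoding/json"

	"google.golang.org/grpc"
)

// jsonCodec is a hypothetical Codec that marshals messages as JSON instead
// of protobuf.
type jsonCodec struct{}

func (jsonCodec) Marshal(v interface{}) ([]byte, error)      { return json.Marshal(v) }
func (jsonCodec) Unmarshal(data []byte, v interface{}) error { return json.Unmarshal(data, v) }
func (jsonCodec) String() string                             { return "json" }

// dialWithJSON shows the codec being installed on a connection.
func dialWithJSON(target string) (*grpc.ClientConn, error) {
	return grpc.Dial(target, grpc.WithInsecure(), grpc.WithCodec(jsonCodec{}))
}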
diff --git a/vendor/google.golang.org/grpc/codes/codes.go b/vendor/google.golang.org/grpc/codes/codes.go
index e14b464acf..dcae5cc260 100644
--- a/vendor/google.golang.org/grpc/codes/codes.go
+++ b/vendor/google.golang.org/grpc/codes/codes.go
@@ -44,7 +44,7 @@ const (
// OK is returned on success.
OK Code = 0
- // Canceled indicates the operation was cancelled (typically by the caller).
+ // Canceled indicates the operation was canceled (typically by the caller).
Canceled Code = 1
// Unknown error. An example of where this error may be returned is
diff --git a/vendor/google.golang.org/grpc/credentials/credentials.go b/vendor/google.golang.org/grpc/credentials/credentials.go
index 4d45c3e3c7..b16dbd6f45 100644
--- a/vendor/google.golang.org/grpc/credentials/credentials.go
+++ b/vendor/google.golang.org/grpc/credentials/credentials.go
@@ -102,6 +102,10 @@ type TransportCredentials interface {
// authentication protocol on rawConn for clients. It returns the authenticated
// connection and the corresponding auth information about the connection.
// Implementations must use the provided context to implement timely cancellation.
+ // gRPC will try to reconnect if the error returned is a temporary error
+ // (io.EOF, context.DeadlineExceeded or err.Temporary() == true).
+ // If the returned error is a wrapper error, implementations should make sure that
+ // the error implements Temporary() to have the correct retry behaviors.
ClientHandshake(context.Context, string, net.Conn) (net.Conn, AuthInfo, error)
// ServerHandshake does the authentication handshake for servers. It returns
// the authenticated connection and the corresponding auth information about
@@ -192,14 +196,14 @@ func NewTLS(c *tls.Config) TransportCredentials {
return tc
}
-// NewClientTLSFromCert constructs a TLS from the input certificate for client.
+// NewClientTLSFromCert constructs TLS credentials from the input certificate for client.
// serverNameOverride is for testing only. If set to a non empty string,
// it will override the virtual host name of authority (e.g. :authority header field) in requests.
func NewClientTLSFromCert(cp *x509.CertPool, serverNameOverride string) TransportCredentials {
return NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp})
}
-// NewClientTLSFromFile constructs a TLS from the input certificate file for client.
+// NewClientTLSFromFile constructs TLS credentials from the input certificate file for client.
// serverNameOverride is for testing only. If set to a non empty string,
// it will override the virtual host name of authority (e.g. :authority header field) in requests.
func NewClientTLSFromFile(certFile, serverNameOverride string) (TransportCredentials, error) {
@@ -214,12 +218,12 @@ func NewClientTLSFromFile(certFile, serverNameOverride string) (TransportCredent
return NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp}), nil
}
-// NewServerTLSFromCert constructs a TLS from the input certificate for server.
+// NewServerTLSFromCert constructs TLS credentials from the input certificate for server.
func NewServerTLSFromCert(cert *tls.Certificate) TransportCredentials {
return NewTLS(&tls.Config{Certificates: []tls.Certificate{*cert}})
}
-// NewServerTLSFromFile constructs a TLS from the input certificate file and key
+// NewServerTLSFromFile constructs TLS credentials from the input certificate file and key
// file for server.
func NewServerTLSFromFile(certFile, keyFile string) (TransportCredentials, error) {
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
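A sketch of the client-side helper documented above; the ca.pem path and the target address are placeholders.

package example

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// dialTLS connects using credentials built from a CA bundle on disk.
// The empty serverNameOverride keeps normal hostname verification.
func dialTLS() (*grpc.ClientConn, error) {
	creds, err := credentials.NewClientTLSFromFile("ca.pem", "")
	if err != nil {
		return nil, err
	}
	return grpc.Dial("example.com:443", grpc.WithTransportCredentials(creds))
}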
diff --git a/vendor/google.golang.org/grpc/credentials/oauth/oauth.go b/vendor/google.golang.org/grpc/credentials/oauth/oauth.go
index 25393cc641..126bc7847a 100644
--- a/vendor/google.golang.org/grpc/credentials/oauth/oauth.go
+++ b/vendor/google.golang.org/grpc/credentials/oauth/oauth.go
@@ -37,6 +37,7 @@ package oauth
import (
"fmt"
"io/ioutil"
+ "sync"
"golang.org/x/net/context"
"golang.org/x/oauth2"
@@ -132,20 +133,27 @@ func NewComputeEngine() credentials.PerRPCCredentials {
// serviceAccount represents PerRPCCredentials via JWT signing key.
type serviceAccount struct {
+ mu sync.Mutex
config *jwt.Config
-}
-
-func (s serviceAccount) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
- token, err := s.config.TokenSource(ctx).Token()
- if err != nil {
- return nil, err
+ t *oauth2.Token
+}
+
+func (s *serviceAccount) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ if !s.t.Valid() {
+ var err error
+ s.t, err = s.config.TokenSource(ctx).Token()
+ if err != nil {
+ return nil, err
+ }
}
return map[string]string{
- "authorization": token.TokenType + " " + token.AccessToken,
+ "authorization": s.t.TokenType + " " + s.t.AccessToken,
}, nil
}
-func (s serviceAccount) RequireTransportSecurity() bool {
+func (s *serviceAccount) RequireTransportSecurity() bool {
return true
}
@@ -156,7 +164,7 @@ func NewServiceAccountFromKey(jsonKey []byte, scope ...string) (credentials.PerR
if err != nil {
return nil, err
}
- return serviceAccount{config: config}, nil
+ return &serviceAccount{config: config}, nil
}
// NewServiceAccountFromFile constructs the PerRPCCredentials using the JSON key file
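A sketch of wiring the service-account credentials above into a connection; the key file, scope, and target are placeholders.

package example

import (
	"io/ioutil"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/credentials/oauth"
)

// dialWithJWT attaches per-RPC JWT credentials; the token is now cached
// and refreshed under the mutex added above, so one value can be shared
// across concurrent RPCs.
func dialWithJWT() (*grpc.ClientConn, error) {
	jsonKey, err := ioutil.ReadFile("service-account.json")
	if err != nil {
		return nil, err
	}
	creds, err := oauth.NewServiceAccountFromKey(jsonKey, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		return nil, err
	}
	return grpc.Dial("example.googleapis.com:443",
		grpc.WithPerRPCCredentials(creds),
		grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")),
	)
}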
diff --git a/vendor/google.golang.org/grpc/examples/helloworld/mock/mock_helloworld/hw_mock.go b/vendor/google.golang.org/grpc/examples/helloworld/mock_helloworld/hw_mock.go
similarity index 100%
rename from vendor/google.golang.org/grpc/examples/helloworld/mock/mock_helloworld/hw_mock.go
rename to vendor/google.golang.org/grpc/examples/helloworld/mock_helloworld/hw_mock.go
diff --git a/vendor/google.golang.org/grpc/examples/route_guide/client/client.go b/vendor/google.golang.org/grpc/examples/route_guide/client/client.go
index fff6398d4d..b43420d81e 100644
--- a/vendor/google.golang.org/grpc/examples/route_guide/client/client.go
+++ b/vendor/google.golang.org/grpc/examples/route_guide/client/client.go
@@ -34,7 +34,7 @@
// Package main implements a simple gRPC client that demonstrates how to use gRPC-Go libraries
// to perform unary, client streaming, server streaming and full duplex RPCs.
//
-// It interacts with the route guide service whose definition can be found in proto/route_guide.proto.
+// It interacts with the route guide service whose definition can be found in routeguide/route_guide.proto.
package main
import (
diff --git a/vendor/google.golang.org/grpc/examples/route_guide/mock_routeguide/rg_mock.go b/vendor/google.golang.org/grpc/examples/route_guide/mock_routeguide/rg_mock.go
new file mode 100644
index 0000000000..328c929fa8
--- /dev/null
+++ b/vendor/google.golang.org/grpc/examples/route_guide/mock_routeguide/rg_mock.go
@@ -0,0 +1,200 @@
+// Automatically generated by MockGen. DO NOT EDIT!
+// Source: google.golang.org/grpc/examples/route_guide/routeguide (interfaces: RouteGuideClient,RouteGuide_RouteChatClient)
+
+package mock_routeguide
+
+import (
+ gomock "github.com/golang/mock/gomock"
+ context "golang.org/x/net/context"
+ grpc "google.golang.org/grpc"
+ routeguide "google.golang.org/grpc/examples/route_guide/routeguide"
+ metadata "google.golang.org/grpc/metadata"
+)
+
+// Mock of RouteGuideClient interface
+type MockRouteGuideClient struct {
+ ctrl *gomock.Controller
+ recorder *_MockRouteGuideClientRecorder
+}
+
+// Recorder for MockRouteGuideClient (not exported)
+type _MockRouteGuideClientRecorder struct {
+ mock *MockRouteGuideClient
+}
+
+func NewMockRouteGuideClient(ctrl *gomock.Controller) *MockRouteGuideClient {
+ mock := &MockRouteGuideClient{ctrl: ctrl}
+ mock.recorder = &_MockRouteGuideClientRecorder{mock}
+ return mock
+}
+
+func (_m *MockRouteGuideClient) EXPECT() *_MockRouteGuideClientRecorder {
+ return _m.recorder
+}
+
+func (_m *MockRouteGuideClient) GetFeature(_param0 context.Context, _param1 *routeguide.Point, _param2 ...grpc.CallOption) (*routeguide.Feature, error) {
+ _s := []interface{}{_param0, _param1}
+ for _, _x := range _param2 {
+ _s = append(_s, _x)
+ }
+ ret := _m.ctrl.Call(_m, "GetFeature", _s...)
+ ret0, _ := ret[0].(*routeguide.Feature)
+ ret1, _ := ret[1].(error)
+ return ret0, ret1
+}
+
+func (_mr *_MockRouteGuideClientRecorder) GetFeature(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {
+ _s := append([]interface{}{arg0, arg1}, arg2...)
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "GetFeature", _s...)
+}
+
+func (_m *MockRouteGuideClient) ListFeatures(_param0 context.Context, _param1 *routeguide.Rectangle, _param2 ...grpc.CallOption) (routeguide.RouteGuide_ListFeaturesClient, error) {
+ _s := []interface{}{_param0, _param1}
+ for _, _x := range _param2 {
+ _s = append(_s, _x)
+ }
+ ret := _m.ctrl.Call(_m, "ListFeatures", _s...)
+ ret0, _ := ret[0].(routeguide.RouteGuide_ListFeaturesClient)
+ ret1, _ := ret[1].(error)
+ return ret0, ret1
+}
+
+func (_mr *_MockRouteGuideClientRecorder) ListFeatures(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {
+ _s := append([]interface{}{arg0, arg1}, arg2...)
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "ListFeatures", _s...)
+}
+
+func (_m *MockRouteGuideClient) RecordRoute(_param0 context.Context, _param1 ...grpc.CallOption) (routeguide.RouteGuide_RecordRouteClient, error) {
+ _s := []interface{}{_param0}
+ for _, _x := range _param1 {
+ _s = append(_s, _x)
+ }
+ ret := _m.ctrl.Call(_m, "RecordRoute", _s...)
+ ret0, _ := ret[0].(routeguide.RouteGuide_RecordRouteClient)
+ ret1, _ := ret[1].(error)
+ return ret0, ret1
+}
+
+func (_mr *_MockRouteGuideClientRecorder) RecordRoute(arg0 interface{}, arg1 ...interface{}) *gomock.Call {
+ _s := append([]interface{}{arg0}, arg1...)
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "RecordRoute", _s...)
+}
+
+func (_m *MockRouteGuideClient) RouteChat(_param0 context.Context, _param1 ...grpc.CallOption) (routeguide.RouteGuide_RouteChatClient, error) {
+ _s := []interface{}{_param0}
+ for _, _x := range _param1 {
+ _s = append(_s, _x)
+ }
+ ret := _m.ctrl.Call(_m, "RouteChat", _s...)
+ ret0, _ := ret[0].(routeguide.RouteGuide_RouteChatClient)
+ ret1, _ := ret[1].(error)
+ return ret0, ret1
+}
+
+func (_mr *_MockRouteGuideClientRecorder) RouteChat(arg0 interface{}, arg1 ...interface{}) *gomock.Call {
+ _s := append([]interface{}{arg0}, arg1...)
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "RouteChat", _s...)
+}
+
+// Mock of RouteGuide_RouteChatClient interface
+type MockRouteGuide_RouteChatClient struct {
+ ctrl *gomock.Controller
+ recorder *_MockRouteGuide_RouteChatClientRecorder
+}
+
+// Recorder for MockRouteGuide_RouteChatClient (not exported)
+type _MockRouteGuide_RouteChatClientRecorder struct {
+ mock *MockRouteGuide_RouteChatClient
+}
+
+func NewMockRouteGuide_RouteChatClient(ctrl *gomock.Controller) *MockRouteGuide_RouteChatClient {
+ mock := &MockRouteGuide_RouteChatClient{ctrl: ctrl}
+ mock.recorder = &_MockRouteGuide_RouteChatClientRecorder{mock}
+ return mock
+}
+
+func (_m *MockRouteGuide_RouteChatClient) EXPECT() *_MockRouteGuide_RouteChatClientRecorder {
+ return _m.recorder
+}
+
+func (_m *MockRouteGuide_RouteChatClient) CloseSend() error {
+ ret := _m.ctrl.Call(_m, "CloseSend")
+ ret0, _ := ret[0].(error)
+ return ret0
+}
+
+func (_mr *_MockRouteGuide_RouteChatClientRecorder) CloseSend() *gomock.Call {
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "CloseSend")
+}
+
+func (_m *MockRouteGuide_RouteChatClient) Context() context.Context {
+ ret := _m.ctrl.Call(_m, "Context")
+ ret0, _ := ret[0].(context.Context)
+ return ret0
+}
+
+func (_mr *_MockRouteGuide_RouteChatClientRecorder) Context() *gomock.Call {
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "Context")
+}
+
+func (_m *MockRouteGuide_RouteChatClient) Header() (metadata.MD, error) {
+ ret := _m.ctrl.Call(_m, "Header")
+ ret0, _ := ret[0].(metadata.MD)
+ ret1, _ := ret[1].(error)
+ return ret0, ret1
+}
+
+func (_mr *_MockRouteGuide_RouteChatClientRecorder) Header() *gomock.Call {
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "Header")
+}
+
+func (_m *MockRouteGuide_RouteChatClient) Recv() (*routeguide.RouteNote, error) {
+ ret := _m.ctrl.Call(_m, "Recv")
+ ret0, _ := ret[0].(*routeguide.RouteNote)
+ ret1, _ := ret[1].(error)
+ return ret0, ret1
+}
+
+func (_mr *_MockRouteGuide_RouteChatClientRecorder) Recv() *gomock.Call {
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "Recv")
+}
+
+func (_m *MockRouteGuide_RouteChatClient) RecvMsg(_param0 interface{}) error {
+ ret := _m.ctrl.Call(_m, "RecvMsg", _param0)
+ ret0, _ := ret[0].(error)
+ return ret0
+}
+
+func (_mr *_MockRouteGuide_RouteChatClientRecorder) RecvMsg(arg0 interface{}) *gomock.Call {
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "RecvMsg", arg0)
+}
+
+func (_m *MockRouteGuide_RouteChatClient) Send(_param0 *routeguide.RouteNote) error {
+ ret := _m.ctrl.Call(_m, "Send", _param0)
+ ret0, _ := ret[0].(error)
+ return ret0
+}
+
+func (_mr *_MockRouteGuide_RouteChatClientRecorder) Send(arg0 interface{}) *gomock.Call {
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "Send", arg0)
+}
+
+func (_m *MockRouteGuide_RouteChatClient) SendMsg(_param0 interface{}) error {
+ ret := _m.ctrl.Call(_m, "SendMsg", _param0)
+ ret0, _ := ret[0].(error)
+ return ret0
+}
+
+func (_mr *_MockRouteGuide_RouteChatClientRecorder) SendMsg(arg0 interface{}) *gomock.Call {
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "SendMsg", arg0)
+}
+
+func (_m *MockRouteGuide_RouteChatClient) Trailer() metadata.MD {
+ ret := _m.ctrl.Call(_m, "Trailer")
+ ret0, _ := ret[0].(metadata.MD)
+ return ret0
+}
+
+func (_mr *_MockRouteGuide_RouteChatClientRecorder) Trailer() *gomock.Call {
+ return _mr.mock.ctrl.RecordCall(_mr.mock, "Trailer")
+}
diff --git a/vendor/google.golang.org/grpc/examples/route_guide/server/server.go b/vendor/google.golang.org/grpc/examples/route_guide/server/server.go
index 5932722bfd..38073cb5fe 100644
--- a/vendor/google.golang.org/grpc/examples/route_guide/server/server.go
+++ b/vendor/google.golang.org/grpc/examples/route_guide/server/server.go
@@ -34,7 +34,7 @@
// Package main implements a simple gRPC server that demonstrates how to use gRPC-Go libraries
// to perform unary, client streaming, server streaming and full duplex RPCs.
//
-// It implements the route guide service whose definition can be found in proto/route_guide.proto.
+// It implements the route guide service whose definition can be found in routeguide/route_guide.proto.
package main
import (
diff --git a/vendor/google.golang.org/grpc/go16.go b/vendor/google.golang.org/grpc/go16.go
new file mode 100644
index 0000000000..eb4020334b
--- /dev/null
+++ b/vendor/google.golang.org/grpc/go16.go
@@ -0,0 +1,113 @@
+// +build go1.6,!go1.7
+
+/*
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package grpc
+
+import (
+ "fmt"
+ "io"
+ "net"
+ "net/http"
+ "os"
+
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
+ "google.golang.org/grpc/transport"
+
+ "golang.org/x/net/context"
+)
+
+// dialContext connects to the address on the named network.
+func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
+ return (&net.Dialer{Cancel: ctx.Done()}).Dial(network, address)
+}
+
+func sendHTTPRequest(ctx context.Context, req *http.Request, conn net.Conn) error {
+ req.Cancel = ctx.Done()
+ if err := req.Write(conn); err != nil {
+ return fmt.Errorf("failed to write the HTTP request: %v", err)
+ }
+ return nil
+}
+
+// toRPCErr converts an error into an error from the status package.
+func toRPCErr(err error) error {
+ if _, ok := status.FromError(err); ok {
+ return err
+ }
+ switch e := err.(type) {
+ case transport.StreamError:
+ return status.Error(e.Code, e.Desc)
+ case transport.ConnectionError:
+ return status.Error(codes.Internal, e.Desc)
+ default:
+ switch err {
+ case context.DeadlineExceeded:
+ return status.Error(codes.DeadlineExceeded, err.Error())
+ case context.Canceled:
+ return status.Error(codes.Canceled, err.Error())
+ case ErrClientConnClosing:
+ return status.Error(codes.FailedPrecondition, err.Error())
+ }
+ }
+ return status.Error(codes.Unknown, err.Error())
+}
+
+// convertCode converts a standard Go error into its canonical code. Note that
+// this is only used to translate the error returned by the server applications.
+func convertCode(err error) codes.Code {
+ switch err {
+ case nil:
+ return codes.OK
+ case io.EOF:
+ return codes.OutOfRange
+ case io.ErrClosedPipe, io.ErrNoProgress, io.ErrShortBuffer, io.ErrShortWrite, io.ErrUnexpectedEOF:
+ return codes.FailedPrecondition
+ case os.ErrInvalid:
+ return codes.InvalidArgument
+ case context.Canceled:
+ return codes.Canceled
+ case context.DeadlineExceeded:
+ return codes.DeadlineExceeded
+ }
+ switch {
+ case os.IsExist(err):
+ return codes.AlreadyExists
+ case os.IsNotExist(err):
+ return codes.NotFound
+ case os.IsPermission(err):
+ return codes.PermissionDenied
+ }
+ return codes.Unknown
+}
diff --git a/vendor/google.golang.org/grpc/go17.go b/vendor/google.golang.org/grpc/go17.go
new file mode 100644
index 0000000000..b9eb781d13
--- /dev/null
+++ b/vendor/google.golang.org/grpc/go17.go
@@ -0,0 +1,113 @@
+// +build go1.7
+
+/*
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package grpc
+
+import (
+ "context"
+ "io"
+ "net"
+ "net/http"
+ "os"
+
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
+ "google.golang.org/grpc/transport"
+
+ netctx "golang.org/x/net/context"
+)
+
+// dialContext connects to the address on the named network.
+func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
+ return (&net.Dialer{}).DialContext(ctx, network, address)
+}
+
+func sendHTTPRequest(ctx context.Context, req *http.Request, conn net.Conn) error {
+ req = req.WithContext(ctx)
+ if err := req.Write(conn); err != nil {
+ return err
+ }
+ return nil
+}
+
+// toRPCErr converts an error into an error from the status package.
+func toRPCErr(err error) error {
+ if _, ok := status.FromError(err); ok {
+ return err
+ }
+ switch e := err.(type) {
+ case transport.StreamError:
+ return status.Error(e.Code, e.Desc)
+ case transport.ConnectionError:
+ return status.Error(codes.Internal, e.Desc)
+ default:
+ switch err {
+ case context.DeadlineExceeded, netctx.DeadlineExceeded:
+ return status.Error(codes.DeadlineExceeded, err.Error())
+ case context.Canceled, netctx.Canceled:
+ return status.Error(codes.Canceled, err.Error())
+ case ErrClientConnClosing:
+ return status.Error(codes.FailedPrecondition, err.Error())
+ }
+ }
+ return status.Error(codes.Unknown, err.Error())
+}
+
+// convertCode converts a standard Go error into its canonical code. Note that
+// this is only used to translate errors returned by server applications.
+func convertCode(err error) codes.Code {
+ switch err {
+ case nil:
+ return codes.OK
+ case io.EOF:
+ return codes.OutOfRange
+ case io.ErrClosedPipe, io.ErrNoProgress, io.ErrShortBuffer, io.ErrShortWrite, io.ErrUnexpectedEOF:
+ return codes.FailedPrecondition
+ case os.ErrInvalid:
+ return codes.InvalidArgument
+ case context.Canceled, netctx.Canceled:
+ return codes.Canceled
+ case context.DeadlineExceeded, netctx.DeadlineExceeded:
+ return codes.DeadlineExceeded
+ }
+ switch {
+ case os.IsExist(err):
+ return codes.AlreadyExists
+ case os.IsNotExist(err):
+ return codes.NotFound
+ case os.IsPermission(err):
+ return codes.PermissionDenied
+ }
+ return codes.Unknown
+}
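For reference, the go1.7 variant above delegates dialing to net.Dialer.DialContext, which aborts the dial when the context is cancelled or its deadline expires. A minimal stand-alone sketch of that behaviour (not part of the patch; the address and timeout are placeholders):

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    // dialContext mirrors the go1.7 helper above: net.Dialer.DialContext stops
    // dialing as soon as the context is cancelled or times out.
    func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
    	return (&net.Dialer{}).DialContext(ctx, network, address)
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    	defer cancel()
    	conn, err := dialContext(ctx, "tcp", "example.com:80")
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("connected to", conn.RemoteAddr())
    }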
diff --git a/vendor/google.golang.org/grpc/grpclb/grpclb.go b/vendor/google.golang.org/grpc/grpclb.go
similarity index 53%
rename from vendor/google.golang.org/grpc/grpclb/grpclb.go
rename to vendor/google.golang.org/grpc/grpclb.go
index d9a1a8b6fb..8638fc92a1 100644
--- a/vendor/google.golang.org/grpc/grpclb/grpclb.go
+++ b/vendor/google.golang.org/grpc/grpclb.go
@@ -31,19 +31,17 @@
*
*/
-// Package grpclb implements the load balancing protocol defined at
-// https://github.com/grpc/grpc/blob/master/doc/load-balancing.md.
-// The implementation is currently EXPERIMENTAL.
-package grpclb
+package grpc
import (
"errors"
"fmt"
+ "math/rand"
+ "net"
"sync"
"time"
"golang.org/x/net/context"
- "google.golang.org/grpc"
"google.golang.org/grpc/codes"
lbpb "google.golang.org/grpc/grpclb/grpc_lb_v1"
"google.golang.org/grpc/grpclog"
@@ -51,6 +49,43 @@ import (
"google.golang.org/grpc/naming"
)
+// Client API for LoadBalancer service.
+// Mostly copied from generated pb.go file.
+// To avoid circular dependency.
+type loadBalancerClient struct {
+ cc *ClientConn
+}
+
+func (c *loadBalancerClient) BalanceLoad(ctx context.Context, opts ...CallOption) (*balanceLoadClientStream, error) {
+ desc := &StreamDesc{
+ StreamName: "BalanceLoad",
+ ServerStreams: true,
+ ClientStreams: true,
+ }
+ stream, err := NewClientStream(ctx, desc, c.cc, "/grpc.lb.v1.LoadBalancer/BalanceLoad", opts...)
+ if err != nil {
+ return nil, err
+ }
+ x := &balanceLoadClientStream{stream}
+ return x, nil
+}
+
+type balanceLoadClientStream struct {
+ ClientStream
+}
+
+func (x *balanceLoadClientStream) Send(m *lbpb.LoadBalanceRequest) error {
+ return x.ClientStream.SendMsg(m)
+}
+
+func (x *balanceLoadClientStream) Recv() (*lbpb.LoadBalanceResponse, error) {
+ m := new(lbpb.LoadBalanceResponse)
+ if err := x.ClientStream.RecvMsg(m); err != nil {
+ return nil, err
+ }
+ return m, nil
+}
+
// AddressType indicates the address type returned by name resolution.
type AddressType uint8
@@ -61,18 +96,18 @@ const (
GRPCLB
)
-// Metadata contains the information the name resolution for grpclb should provide. The
-// name resolver used by grpclb balancer is required to provide this type of metadata in
+// AddrMetadataGRPCLB contains the information the name resolver for grpclb should provide. The
+// name resolver used by the grpclb balancer is required to provide this type of metadata in
// its address updates.
-type Metadata struct {
+type AddrMetadataGRPCLB struct {
// AddrType is the type of server (grpc load balancer or backend).
AddrType AddressType
// ServerName is the name of the grpc load balancer. Used for authentication.
ServerName string
}
-// Balancer creates a grpclb load balancer.
-func Balancer(r naming.Resolver) grpc.Balancer {
+// NewGRPCLBBalancer creates a grpclb load balancer.
+func NewGRPCLBBalancer(r naming.Resolver) Balancer {
return &balancer{
r: r,
}
@@ -84,42 +119,46 @@ type remoteBalancerInfo struct {
name string
}
-// addrInfo consists of the information of a backend server.
-type addrInfo struct {
- addr grpc.Address
+// grpclbAddrInfo consists of the information of a backend server.
+type grpclbAddrInfo struct {
+ addr Address
connected bool
- // dropRequest indicates whether a particular RPC which chooses this address
- // should be dropped.
- dropRequest bool
+ // dropForRateLimiting indicates whether this particular request should be
+ // dropped by the client for rate limiting.
+ dropForRateLimiting bool
+ // dropForLoadBalancing indicates whether this particular request should be
+ // dropped by the client for load balancing.
+ dropForLoadBalancing bool
}
type balancer struct {
r naming.Resolver
+ target string
mu sync.Mutex
seq int // a sequence number to make sure addrCh does not get stale addresses.
w naming.Watcher
- addrCh chan []grpc.Address
+ addrCh chan []Address
rbs []remoteBalancerInfo
- addrs []*addrInfo
+ addrs []*grpclbAddrInfo
next int
waitCh chan struct{}
done bool
expTimer *time.Timer
+ rand *rand.Rand
+
+ clientStats lbpb.ClientStats
}
-func (b *balancer) watchAddrUpdates(w naming.Watcher, ch chan remoteBalancerInfo) error {
+func (b *balancer) watchAddrUpdates(w naming.Watcher, ch chan []remoteBalancerInfo) error {
updates, err := w.Next()
if err != nil {
+ grpclog.Printf("grpclb: failed to get next addr update from watcher: %v", err)
return err
}
b.mu.Lock()
defer b.mu.Unlock()
if b.done {
- return grpc.ErrClientConnClosing
- }
- var bAddr remoteBalancerInfo
- if len(b.rbs) > 0 {
- bAddr = b.rbs[0]
+ return ErrClientConnClosing
}
for _, update := range updates {
switch update.Op {
@@ -135,7 +174,7 @@ func (b *balancer) watchAddrUpdates(w naming.Watcher, ch chan remoteBalancerInfo
if exist {
continue
}
- md, ok := update.Metadata.(*Metadata)
+ md, ok := update.Metadata.(*AddrMetadataGRPCLB)
if !ok {
// TODO: Revisit the handling here and may introduce some fallback mechanism.
grpclog.Printf("The name resolution contains unexpected metadata %v", update.Metadata)
@@ -169,16 +208,11 @@ func (b *balancer) watchAddrUpdates(w naming.Watcher, ch chan remoteBalancerInfo
}
// TODO: Fall back to the basic round-robin load balancing if the resulting address is
// not a load balancer.
- if len(b.rbs) > 0 {
- // For simplicity, always use the first one now. May revisit this decision later.
- if b.rbs[0] != bAddr {
- select {
- case <-ch:
- default:
- }
- ch <- b.rbs[0]
- }
+ select {
+ case <-ch:
+ default:
}
+ ch <- b.rbs
return nil
}
@@ -211,18 +245,19 @@ func (b *balancer) processServerList(l *lbpb.ServerList, seq int) {
servers := l.GetServers()
expiration := convertDuration(l.GetExpirationInterval())
var (
- sl []*addrInfo
- addrs []grpc.Address
+ sl []*grpclbAddrInfo
+ addrs []Address
)
for _, s := range servers {
md := metadata.Pairs("lb-token", s.LoadBalanceToken)
- addr := grpc.Address{
- Addr: fmt.Sprintf("%s:%d", s.IpAddress, s.Port),
+ addr := Address{
+ Addr: fmt.Sprintf("%s:%d", net.IP(s.IpAddress), s.Port),
Metadata: &md,
}
- sl = append(sl, &addrInfo{
- addr: addr,
- dropRequest: s.DropRequest,
+ sl = append(sl, &grpclbAddrInfo{
+ addr: addr,
+ dropForRateLimiting: s.DropForRateLimiting,
+ dropForLoadBalancing: s.DropForLoadBalancing,
})
addrs = append(addrs, addr)
}
@@ -249,12 +284,41 @@ func (b *balancer) processServerList(l *lbpb.ServerList, seq int) {
return
}
-func (b *balancer) callRemoteBalancer(lbc lbpb.LoadBalancerClient, seq int) (retry bool) {
+func (b *balancer) sendLoadReport(s *balanceLoadClientStream, interval time.Duration, done <-chan struct{}) {
+ ticker := time.NewTicker(interval)
+ defer ticker.Stop()
+ for {
+ select {
+ case <-ticker.C:
+ case <-done:
+ return
+ }
+ b.mu.Lock()
+ stats := b.clientStats
+ b.clientStats = lbpb.ClientStats{} // Clear the stats.
+ b.mu.Unlock()
+ t := time.Now()
+ stats.Timestamp = &lbpb.Timestamp{
+ Seconds: t.Unix(),
+ Nanos: int32(t.Nanosecond()),
+ }
+ if err := s.Send(&lbpb.LoadBalanceRequest{
+ LoadBalanceRequestType: &lbpb.LoadBalanceRequest_ClientStats{
+ ClientStats: &stats,
+ },
+ }); err != nil {
+ grpclog.Printf("grpclb: failed to send load report: %v", err)
+ return
+ }
+ }
+}
+
+func (b *balancer) callRemoteBalancer(lbc *loadBalancerClient, seq int) (retry bool) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
- stream, err := lbc.BalanceLoad(ctx, grpc.FailFast(false))
+ stream, err := lbc.BalanceLoad(ctx)
if err != nil {
- grpclog.Printf("Failed to perform RPC to the remote balancer %v", err)
+ grpclog.Printf("grpclb: failed to perform RPC to the remote balancer %v", err)
return
}
b.mu.Lock()
@@ -265,21 +329,25 @@ func (b *balancer) callRemoteBalancer(lbc lbpb.LoadBalancerClient, seq int) (ret
b.mu.Unlock()
initReq := &lbpb.LoadBalanceRequest{
LoadBalanceRequestType: &lbpb.LoadBalanceRequest_InitialRequest{
- InitialRequest: new(lbpb.InitialLoadBalanceRequest),
+ InitialRequest: &lbpb.InitialLoadBalanceRequest{
+ Name: b.target,
+ },
},
}
if err := stream.Send(initReq); err != nil {
+ grpclog.Printf("grpclb: failed to send init request: %v", err)
// TODO: backoff on retry?
return true
}
reply, err := stream.Recv()
if err != nil {
+ grpclog.Printf("grpclb: failed to recv init response: %v", err)
// TODO: backoff on retry?
return true
}
initResp := reply.GetInitialResponse()
if initResp == nil {
- grpclog.Println("Failed to receive the initial response from the remote balancer.")
+ grpclog.Println("grpclb: reply from remote balancer did not include initial response.")
return
}
// TODO: Support delegation.
@@ -288,10 +356,19 @@ func (b *balancer) callRemoteBalancer(lbc lbpb.LoadBalancerClient, seq int) (ret
grpclog.Println("TODO: Delegation is not supported yet.")
return
}
+ streamDone := make(chan struct{})
+ defer close(streamDone)
+ b.mu.Lock()
+ b.clientStats = lbpb.ClientStats{} // Clear client stats.
+ b.mu.Unlock()
+ if d := convertDuration(initResp.ClientStatsReportInterval); d > 0 {
+ go b.sendLoadReport(stream, d, streamDone)
+ }
// Retrieve the server list.
for {
reply, err := stream.Recv()
if err != nil {
+ grpclog.Printf("grpclb: failed to recv server list: %v", err)
break
}
b.mu.Lock()
@@ -309,85 +386,163 @@ func (b *balancer) callRemoteBalancer(lbc lbpb.LoadBalancerClient, seq int) (ret
return true
}
-func (b *balancer) Start(target string, config grpc.BalancerConfig) error {
+func (b *balancer) Start(target string, config BalancerConfig) error {
+ b.rand = rand.New(rand.NewSource(time.Now().Unix()))
// TODO: Fall back to the basic direct connection if there is no name resolver.
if b.r == nil {
return errors.New("there is no name resolver installed")
}
+ b.target = target
b.mu.Lock()
if b.done {
b.mu.Unlock()
- return grpc.ErrClientConnClosing
+ return ErrClientConnClosing
}
- b.addrCh = make(chan []grpc.Address)
+ b.addrCh = make(chan []Address)
w, err := b.r.Resolve(target)
if err != nil {
b.mu.Unlock()
+ grpclog.Printf("grpclb: failed to resolve address: %v, err: %v", target, err)
return err
}
b.w = w
b.mu.Unlock()
- balancerAddrCh := make(chan remoteBalancerInfo, 1)
+ balancerAddrsCh := make(chan []remoteBalancerInfo, 1)
// Spawn a goroutine to monitor the name resolution of remote load balancer.
go func() {
for {
- if err := b.watchAddrUpdates(w, balancerAddrCh); err != nil {
- grpclog.Printf("grpc: the naming watcher stops working due to %v.\n", err)
- close(balancerAddrCh)
+ if err := b.watchAddrUpdates(w, balancerAddrsCh); err != nil {
+ grpclog.Printf("grpclb: the naming watcher stops working due to %v.\n", err)
+ close(balancerAddrsCh)
return
}
}
}()
// Spawn a goroutine to talk to the remote load balancer.
go func() {
- var cc *grpc.ClientConn
- for {
- rb, ok := <-balancerAddrCh
+ var (
+ cc *ClientConn
+ // ccError is closed when there is an error in the current cc.
+ // A new rb should be picked from rbs and connected.
+ ccError chan struct{}
+ rb *remoteBalancerInfo
+ rbs []remoteBalancerInfo
+ rbIdx int
+ )
+
+ defer func() {
+ if ccError != nil {
+ select {
+ case <-ccError:
+ default:
+ close(ccError)
+ }
+ }
if cc != nil {
cc.Close()
}
- if !ok {
- // b is closing.
- return
+ }()
+
+ for {
+ var ok bool
+ select {
+ case rbs, ok = <-balancerAddrsCh:
+ if !ok {
+ return
+ }
+ foundIdx := -1
+ if rb != nil {
+ for i, trb := range rbs {
+ if trb == *rb {
+ foundIdx = i
+ break
+ }
+ }
+ }
+ if foundIdx >= 0 {
+ if foundIdx >= 1 {
+ // Move the address in use to the beginning of the list.
+ b.rbs[0], b.rbs[foundIdx] = b.rbs[foundIdx], b.rbs[0]
+ rbIdx = 0
+ }
+ continue // If found, don't dial new cc.
+ } else if len(rbs) > 0 {
+ // Pick a random one from the list, instead of always using the first one.
+ if l := len(rbs); l > 1 && rb != nil {
+ tmpIdx := b.rand.Intn(l - 1)
+ b.rbs[0], b.rbs[tmpIdx] = b.rbs[tmpIdx], b.rbs[0]
+ }
+ rbIdx = 0
+ rb = &rbs[0]
+ } else {
+ // foundIdx < 0 && len(rbs) <= 0.
+ rb = nil
+ }
+ case <-ccError:
+ ccError = nil
+ if rbIdx < len(rbs)-1 {
+ rbIdx++
+ rb = &rbs[rbIdx]
+ } else {
+ rb = nil
+ }
+ }
+
+ if rb == nil {
+ continue
+ }
+
+ if cc != nil {
+ cc.Close()
}
// Talk to the remote load balancer to get the server list.
- var err error
- creds := config.DialCreds
- if creds == nil {
- cc, err = grpc.Dial(rb.addr, grpc.WithInsecure())
- } else {
+ var (
+ err error
+ dopts []DialOption
+ )
+ if creds := config.DialCreds; creds != nil {
if rb.name != "" {
if err := creds.OverrideServerName(rb.name); err != nil {
- grpclog.Printf("Failed to override the server name in the credentials: %v", err)
+ grpclog.Printf("grpclb: failed to override the server name in the credentials: %v", err)
continue
}
}
- cc, err = grpc.Dial(rb.addr, grpc.WithTransportCredentials(creds))
+ dopts = append(dopts, WithTransportCredentials(creds))
+ } else {
+ dopts = append(dopts, WithInsecure())
+ }
+ if dialer := config.Dialer; dialer != nil {
+ // WithDialer takes a different type of function, so we instead use a special DialOption here.
+ dopts = append(dopts, func(o *dialOptions) { o.copts.Dialer = dialer })
}
+ ccError = make(chan struct{})
+ cc, err = Dial(rb.addr, dopts...)
if err != nil {
- grpclog.Printf("Failed to setup a connection to the remote balancer %v: %v", rb.addr, err)
- return
+ grpclog.Printf("grpclb: failed to setup a connection to the remote balancer %v: %v", rb.addr, err)
+ close(ccError)
+ continue
}
b.mu.Lock()
b.seq++ // tick when getting a new balancer address
seq := b.seq
b.next = 0
b.mu.Unlock()
- go func(cc *grpc.ClientConn) {
- lbc := lbpb.NewLoadBalancerClient(cc)
- for {
- if retry := b.callRemoteBalancer(lbc, seq); !retry {
- cc.Close()
- return
- }
+ go func(cc *ClientConn, ccError chan struct{}) {
+ lbc := &loadBalancerClient{cc}
+ b.callRemoteBalancer(lbc, seq)
+ cc.Close()
+ select {
+ case <-ccError:
+ default:
+ close(ccError)
}
- }(cc)
+ }(cc, ccError)
}
}()
return nil
}
-func (b *balancer) down(addr grpc.Address, err error) {
+func (b *balancer) down(addr Address, err error) {
b.mu.Lock()
defer b.mu.Unlock()
for _, a := range b.addrs {
@@ -398,7 +553,7 @@ func (b *balancer) down(addr grpc.Address, err error) {
}
}
-func (b *balancer) Up(addr grpc.Address) func(error) {
+func (b *balancer) Up(addr Address) func(error) {
b.mu.Lock()
defer b.mu.Unlock()
if b.done {
@@ -412,7 +567,7 @@ func (b *balancer) Up(addr grpc.Address) func(error) {
}
a.connected = true
}
- if a.connected && !a.dropRequest {
+ if a.connected && !a.dropForRateLimiting && !a.dropForLoadBalancing {
cnt++
}
}
@@ -426,15 +581,40 @@ func (b *balancer) Up(addr grpc.Address) func(error) {
}
}
-func (b *balancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (addr grpc.Address, put func(), err error) {
+func (b *balancer) Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) {
var ch chan struct{}
b.mu.Lock()
if b.done {
b.mu.Unlock()
- err = grpc.ErrClientConnClosing
+ err = ErrClientConnClosing
return
}
+ seq := b.seq
+ defer func() {
+ if err != nil {
+ return
+ }
+ put = func() {
+ s, ok := rpcInfoFromContext(ctx)
+ if !ok {
+ return
+ }
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ if b.done || seq < b.seq {
+ return
+ }
+ b.clientStats.NumCallsFinished++
+ if !s.bytesSent {
+ b.clientStats.NumCallsFinishedWithClientFailedToSend++
+ } else if s.bytesReceived {
+ b.clientStats.NumCallsFinishedKnownReceived++
+ }
+ }
+ }()
+
+ b.clientStats.NumCallsStarted++
if len(b.addrs) > 0 {
if b.next >= len(b.addrs) {
b.next = 0
@@ -444,7 +624,7 @@ func (b *balancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (addr
a := b.addrs[next]
next = (next + 1) % len(b.addrs)
if a.connected {
- if !a.dropRequest {
+ if !a.dropForRateLimiting && !a.dropForLoadBalancing {
addr = a.addr
b.next = next
b.mu.Unlock()
@@ -452,8 +632,15 @@ func (b *balancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (addr
}
if !opts.BlockingWait {
b.next = next
+ if a.dropForLoadBalancing {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithDropForLoadBalancing++
+ } else if a.dropForRateLimiting {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithDropForRateLimiting++
+ }
b.mu.Unlock()
- err = grpc.Errorf(codes.Unavailable, "%s drops requests", a.addr.Addr)
+ err = Errorf(codes.Unavailable, "%s drops requests", a.addr.Addr)
return
}
}
@@ -465,8 +652,10 @@ func (b *balancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (addr
}
if !opts.BlockingWait {
if len(b.addrs) == 0 {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithClientFailedToSend++
b.mu.Unlock()
- err = grpc.Errorf(codes.Unavailable, "there is no address available")
+ err = Errorf(codes.Unavailable, "there is no address available")
return
}
// Returns the next addr on b.addrs for a failfast RPC.
@@ -486,13 +675,19 @@ func (b *balancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (addr
for {
select {
case <-ctx.Done():
+ b.mu.Lock()
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithClientFailedToSend++
+ b.mu.Unlock()
err = ctx.Err()
return
case <-ch:
b.mu.Lock()
if b.done {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithClientFailedToSend++
b.mu.Unlock()
- err = grpc.ErrClientConnClosing
+ err = ErrClientConnClosing
return
}
@@ -505,7 +700,7 @@ func (b *balancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (addr
a := b.addrs[next]
next = (next + 1) % len(b.addrs)
if a.connected {
- if !a.dropRequest {
+ if !a.dropForRateLimiting && !a.dropForLoadBalancing {
addr = a.addr
b.next = next
b.mu.Unlock()
@@ -513,8 +708,15 @@ func (b *balancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (addr
}
if !opts.BlockingWait {
b.next = next
+ if a.dropForLoadBalancing {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithDropForLoadBalancing++
+ } else if a.dropForRateLimiting {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithDropForRateLimiting++
+ }
b.mu.Unlock()
- err = grpc.Errorf(codes.Unavailable, "drop requests for the addreess %s", a.addr.Addr)
+ err = Errorf(codes.Unavailable, "drop requests for the address %s", a.addr.Addr)
return
}
}
@@ -536,13 +738,16 @@ func (b *balancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (addr
}
}
-func (b *balancer) Notify() <-chan []grpc.Address {
+func (b *balancer) Notify() <-chan []Address {
return b.addrCh
}
func (b *balancer) Close() error {
b.mu.Lock()
defer b.mu.Unlock()
+ if b.done {
+ return errBalancerClosed
+ }
b.done = true
if b.expTimer != nil {
b.expTimer.Stop()
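The sendLoadReport goroutine added above follows a ticker-plus-done-channel pattern: wake on every tick, snapshot and clear the client stats under the mutex, and stop when the stream is torn down or a send fails. A generic sketch of that pattern (names and the send callback are illustrative, not from the patch):

    package main

    import (
    	"fmt"
    	"time"
    )

    // reportLoop mirrors the structure of sendLoadReport: report on each tick,
    // stop when done is closed, and stop early if sending fails.
    func reportLoop(interval time.Duration, done <-chan struct{}, send func() error) {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		select {
    		case <-ticker.C:
    		case <-done:
    			return
    		}
    		if err := send(); err != nil {
    			return // the real balancer logs the failure before returning
    		}
    	}
    }

    func main() {
    	done := make(chan struct{})
    	go reportLoop(100*time.Millisecond, done, func() error {
    		fmt.Println("report sent")
    		return nil
    	})
    	time.Sleep(350 * time.Millisecond)
    	close(done)
    }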
diff --git a/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.pb.go b/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.pb.go
index 7be89477db..f63941bd80 100644
--- a/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.pb.go
+++ b/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.pb.go
@@ -10,6 +10,7 @@ It is generated from these files:
It has these top-level messages:
Duration
+ Timestamp
LoadBalanceRequest
InitialLoadBalanceRequest
ClientStats
@@ -24,11 +25,6 @@ import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
-import (
- context "golang.org/x/net/context"
- grpc "google.golang.org/grpc"
-)
-
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
@@ -58,6 +54,51 @@ func (m *Duration) String() string { return proto.CompactTextString(m
func (*Duration) ProtoMessage() {}
func (*Duration) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+func (m *Duration) GetSeconds() int64 {
+ if m != nil {
+ return m.Seconds
+ }
+ return 0
+}
+
+func (m *Duration) GetNanos() int32 {
+ if m != nil {
+ return m.Nanos
+ }
+ return 0
+}
+
+type Timestamp struct {
+ // Represents seconds of UTC time since Unix epoch
+ // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
+ // 9999-12-31T23:59:59Z inclusive.
+ Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"`
+ // Non-negative fractions of a second at nanosecond resolution. Negative
+ // second values with fractions must still have non-negative nanos values
+ // that count forward in time. Must be from 0 to 999,999,999
+ // inclusive.
+ Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
+}
+
+func (m *Timestamp) Reset() { *m = Timestamp{} }
+func (m *Timestamp) String() string { return proto.CompactTextString(m) }
+func (*Timestamp) ProtoMessage() {}
+func (*Timestamp) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
+
+func (m *Timestamp) GetSeconds() int64 {
+ if m != nil {
+ return m.Seconds
+ }
+ return 0
+}
+
+func (m *Timestamp) GetNanos() int32 {
+ if m != nil {
+ return m.Nanos
+ }
+ return 0
+}
+
type LoadBalanceRequest struct {
// Types that are valid to be assigned to LoadBalanceRequestType:
// *LoadBalanceRequest_InitialRequest
@@ -68,17 +109,17 @@ type LoadBalanceRequest struct {
func (m *LoadBalanceRequest) Reset() { *m = LoadBalanceRequest{} }
func (m *LoadBalanceRequest) String() string { return proto.CompactTextString(m) }
func (*LoadBalanceRequest) ProtoMessage() {}
-func (*LoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
+func (*LoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
type isLoadBalanceRequest_LoadBalanceRequestType interface {
isLoadBalanceRequest_LoadBalanceRequestType()
}
type LoadBalanceRequest_InitialRequest struct {
- InitialRequest *InitialLoadBalanceRequest `protobuf:"bytes,1,opt,name=initial_request,oneof"`
+ InitialRequest *InitialLoadBalanceRequest `protobuf:"bytes,1,opt,name=initial_request,json=initialRequest,oneof"`
}
type LoadBalanceRequest_ClientStats struct {
- ClientStats *ClientStats `protobuf:"bytes,2,opt,name=client_stats,oneof"`
+ ClientStats *ClientStats `protobuf:"bytes,2,opt,name=client_stats,json=clientStats,oneof"`
}
func (*LoadBalanceRequest_InitialRequest) isLoadBalanceRequest_LoadBalanceRequestType() {}
@@ -180,30 +221,98 @@ func _LoadBalanceRequest_OneofSizer(msg proto.Message) (n int) {
}
type InitialLoadBalanceRequest struct {
- // Name of load balanced service (IE, service.grpc.gslb.google.com)
+ // Name of load balanced service (IE, balancer.service.com)
+ // length should be less than 256 bytes.
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
}
func (m *InitialLoadBalanceRequest) Reset() { *m = InitialLoadBalanceRequest{} }
func (m *InitialLoadBalanceRequest) String() string { return proto.CompactTextString(m) }
func (*InitialLoadBalanceRequest) ProtoMessage() {}
-func (*InitialLoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
+func (*InitialLoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
+
+func (m *InitialLoadBalanceRequest) GetName() string {
+ if m != nil {
+ return m.Name
+ }
+ return ""
+}
// Contains client level statistics that are useful to load balancing. Each
-// count should be reset to zero after reporting the stats.
+// count except the timestamp should be reset to zero after reporting the stats.
type ClientStats struct {
- // The total number of requests sent by the client since the last report.
- TotalRequests int64 `protobuf:"varint,1,opt,name=total_requests" json:"total_requests,omitempty"`
- // The number of client rpc errors since the last report.
- ClientRpcErrors int64 `protobuf:"varint,2,opt,name=client_rpc_errors" json:"client_rpc_errors,omitempty"`
- // The number of dropped requests since the last report.
- DroppedRequests int64 `protobuf:"varint,3,opt,name=dropped_requests" json:"dropped_requests,omitempty"`
+ // The timestamp of generating the report.
+ Timestamp *Timestamp `protobuf:"bytes,1,opt,name=timestamp" json:"timestamp,omitempty"`
+ // The total number of RPCs that started.
+ NumCallsStarted int64 `protobuf:"varint,2,opt,name=num_calls_started,json=numCallsStarted" json:"num_calls_started,omitempty"`
+ // The total number of RPCs that finished.
+ NumCallsFinished int64 `protobuf:"varint,3,opt,name=num_calls_finished,json=numCallsFinished" json:"num_calls_finished,omitempty"`
+ // The total number of RPCs that were dropped by the client because of rate
+ // limiting.
+ NumCallsFinishedWithDropForRateLimiting int64 `protobuf:"varint,4,opt,name=num_calls_finished_with_drop_for_rate_limiting,json=numCallsFinishedWithDropForRateLimiting" json:"num_calls_finished_with_drop_for_rate_limiting,omitempty"`
+ // The total number of RPCs that were dropped by the client because of load
+ // balancing.
+ NumCallsFinishedWithDropForLoadBalancing int64 `protobuf:"varint,5,opt,name=num_calls_finished_with_drop_for_load_balancing,json=numCallsFinishedWithDropForLoadBalancing" json:"num_calls_finished_with_drop_for_load_balancing,omitempty"`
+ // The total number of RPCs that failed to reach a server except dropped RPCs.
+ NumCallsFinishedWithClientFailedToSend int64 `protobuf:"varint,6,opt,name=num_calls_finished_with_client_failed_to_send,json=numCallsFinishedWithClientFailedToSend" json:"num_calls_finished_with_client_failed_to_send,omitempty"`
+ // The total number of RPCs that finished and are known to have been received
+ // by a server.
+ NumCallsFinishedKnownReceived int64 `protobuf:"varint,7,opt,name=num_calls_finished_known_received,json=numCallsFinishedKnownReceived" json:"num_calls_finished_known_received,omitempty"`
}
func (m *ClientStats) Reset() { *m = ClientStats{} }
func (m *ClientStats) String() string { return proto.CompactTextString(m) }
func (*ClientStats) ProtoMessage() {}
-func (*ClientStats) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
+func (*ClientStats) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
+
+func (m *ClientStats) GetTimestamp() *Timestamp {
+ if m != nil {
+ return m.Timestamp
+ }
+ return nil
+}
+
+func (m *ClientStats) GetNumCallsStarted() int64 {
+ if m != nil {
+ return m.NumCallsStarted
+ }
+ return 0
+}
+
+func (m *ClientStats) GetNumCallsFinished() int64 {
+ if m != nil {
+ return m.NumCallsFinished
+ }
+ return 0
+}
+
+func (m *ClientStats) GetNumCallsFinishedWithDropForRateLimiting() int64 {
+ if m != nil {
+ return m.NumCallsFinishedWithDropForRateLimiting
+ }
+ return 0
+}
+
+func (m *ClientStats) GetNumCallsFinishedWithDropForLoadBalancing() int64 {
+ if m != nil {
+ return m.NumCallsFinishedWithDropForLoadBalancing
+ }
+ return 0
+}
+
+func (m *ClientStats) GetNumCallsFinishedWithClientFailedToSend() int64 {
+ if m != nil {
+ return m.NumCallsFinishedWithClientFailedToSend
+ }
+ return 0
+}
+
+func (m *ClientStats) GetNumCallsFinishedKnownReceived() int64 {
+ if m != nil {
+ return m.NumCallsFinishedKnownReceived
+ }
+ return 0
+}
type LoadBalanceResponse struct {
// Types that are valid to be assigned to LoadBalanceResponseType:
@@ -215,17 +324,17 @@ type LoadBalanceResponse struct {
func (m *LoadBalanceResponse) Reset() { *m = LoadBalanceResponse{} }
func (m *LoadBalanceResponse) String() string { return proto.CompactTextString(m) }
func (*LoadBalanceResponse) ProtoMessage() {}
-func (*LoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
+func (*LoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
type isLoadBalanceResponse_LoadBalanceResponseType interface {
isLoadBalanceResponse_LoadBalanceResponseType()
}
type LoadBalanceResponse_InitialResponse struct {
- InitialResponse *InitialLoadBalanceResponse `protobuf:"bytes,1,opt,name=initial_response,oneof"`
+ InitialResponse *InitialLoadBalanceResponse `protobuf:"bytes,1,opt,name=initial_response,json=initialResponse,oneof"`
}
type LoadBalanceResponse_ServerList struct {
- ServerList *ServerList `protobuf:"bytes,2,opt,name=server_list,oneof"`
+ ServerList *ServerList `protobuf:"bytes,2,opt,name=server_list,json=serverList,oneof"`
}
func (*LoadBalanceResponse_InitialResponse) isLoadBalanceResponse_LoadBalanceResponseType() {}
@@ -330,18 +439,26 @@ type InitialLoadBalanceResponse struct {
// This is an application layer redirect that indicates the client should use
// the specified server for load balancing. When this field is non-empty in
// the response, the client should open a separate connection to the
- // load_balancer_delegate and call the BalanceLoad method.
- LoadBalancerDelegate string `protobuf:"bytes,1,opt,name=load_balancer_delegate" json:"load_balancer_delegate,omitempty"`
+ // load_balancer_delegate and call the BalanceLoad method. Its length should
+ // be less than 64 bytes.
+ LoadBalancerDelegate string `protobuf:"bytes,1,opt,name=load_balancer_delegate,json=loadBalancerDelegate" json:"load_balancer_delegate,omitempty"`
// This interval defines how often the client should send the client stats
// to the load balancer. Stats should only be reported when the duration is
// positive.
- ClientStatsReportInterval *Duration `protobuf:"bytes,3,opt,name=client_stats_report_interval" json:"client_stats_report_interval,omitempty"`
+ ClientStatsReportInterval *Duration `protobuf:"bytes,2,opt,name=client_stats_report_interval,json=clientStatsReportInterval" json:"client_stats_report_interval,omitempty"`
}
func (m *InitialLoadBalanceResponse) Reset() { *m = InitialLoadBalanceResponse{} }
func (m *InitialLoadBalanceResponse) String() string { return proto.CompactTextString(m) }
func (*InitialLoadBalanceResponse) ProtoMessage() {}
-func (*InitialLoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
+func (*InitialLoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
+
+func (m *InitialLoadBalanceResponse) GetLoadBalancerDelegate() string {
+ if m != nil {
+ return m.LoadBalancerDelegate
+ }
+ return ""
+}
func (m *InitialLoadBalanceResponse) GetClientStatsReportInterval() *Duration {
if m != nil {
@@ -360,13 +477,13 @@ type ServerList struct {
// list as valid. It may be considered stale after waiting this interval of
// time after receiving the list. If the interval is not positive, the
// client can assume the list is valid until the next list is received.
- ExpirationInterval *Duration `protobuf:"bytes,3,opt,name=expiration_interval" json:"expiration_interval,omitempty"`
+ ExpirationInterval *Duration `protobuf:"bytes,3,opt,name=expiration_interval,json=expirationInterval" json:"expiration_interval,omitempty"`
}
func (m *ServerList) Reset() { *m = ServerList{} }
func (m *ServerList) String() string { return proto.CompactTextString(m) }
func (*ServerList) ProtoMessage() {}
-func (*ServerList) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
+func (*ServerList) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
func (m *ServerList) GetServers() []*Server {
if m != nil {
@@ -382,176 +499,131 @@ func (m *ServerList) GetExpirationInterval() *Duration {
return nil
}
+// Contains server information. When none of the [drop_for_*] fields are true,
+// use the other fields. When drop_for_rate_limiting is true, ignore all other
+// fields. Use drop_for_load_balancing only when it is true and
+// drop_for_rate_limiting is false.
type Server struct {
// A resolved address for the server, serialized in network-byte-order. It may
// either be an IPv4 or IPv6 address.
- IpAddress []byte `protobuf:"bytes,1,opt,name=ip_address,proto3" json:"ip_address,omitempty"`
+ IpAddress []byte `protobuf:"bytes,1,opt,name=ip_address,json=ipAddress,proto3" json:"ip_address,omitempty"`
// A resolved port number for the server.
Port int32 `protobuf:"varint,2,opt,name=port" json:"port,omitempty"`
// An opaque but printable token given to the frontend for each pick. All
// frontend requests for that pick must include the token in its initial
// metadata. The token is used by the backend to verify the request and to
// allow the backend to report load to the gRPC LB system.
- LoadBalanceToken string `protobuf:"bytes,3,opt,name=load_balance_token" json:"load_balance_token,omitempty"`
+ //
+ // Its length is variable but less than 50 bytes.
+ LoadBalanceToken string `protobuf:"bytes,3,opt,name=load_balance_token,json=loadBalanceToken" json:"load_balance_token,omitempty"`
// Indicates whether this particular request should be dropped by the client
- // when this server is chosen from the list.
- DropRequest bool `protobuf:"varint,4,opt,name=drop_request" json:"drop_request,omitempty"`
+ // for rate limiting.
+ DropForRateLimiting bool `protobuf:"varint,4,opt,name=drop_for_rate_limiting,json=dropForRateLimiting" json:"drop_for_rate_limiting,omitempty"`
+ // Indicates whether this particular request should be dropped by the client
+ // for load balancing.
+ DropForLoadBalancing bool `protobuf:"varint,5,opt,name=drop_for_load_balancing,json=dropForLoadBalancing" json:"drop_for_load_balancing,omitempty"`
}
func (m *Server) Reset() { *m = Server{} }
func (m *Server) String() string { return proto.CompactTextString(m) }
func (*Server) ProtoMessage() {}
-func (*Server) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
+func (*Server) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
-func init() {
- proto.RegisterType((*Duration)(nil), "grpc.lb.v1.Duration")
- proto.RegisterType((*LoadBalanceRequest)(nil), "grpc.lb.v1.LoadBalanceRequest")
- proto.RegisterType((*InitialLoadBalanceRequest)(nil), "grpc.lb.v1.InitialLoadBalanceRequest")
- proto.RegisterType((*ClientStats)(nil), "grpc.lb.v1.ClientStats")
- proto.RegisterType((*LoadBalanceResponse)(nil), "grpc.lb.v1.LoadBalanceResponse")
- proto.RegisterType((*InitialLoadBalanceResponse)(nil), "grpc.lb.v1.InitialLoadBalanceResponse")
- proto.RegisterType((*ServerList)(nil), "grpc.lb.v1.ServerList")
- proto.RegisterType((*Server)(nil), "grpc.lb.v1.Server")
-}
-
-// Reference imports to suppress errors if they are not otherwise used.
-var _ context.Context
-var _ grpc.ClientConn
-
-// This is a compile-time assertion to ensure that this generated file
-// is compatible with the grpc package it is being compiled against.
-const _ = grpc.SupportPackageIsVersion4
-
-// Client API for LoadBalancer service
-
-type LoadBalancerClient interface {
- // Bidirectional rpc to get a list of servers.
- BalanceLoad(ctx context.Context, opts ...grpc.CallOption) (LoadBalancer_BalanceLoadClient, error)
-}
-
-type loadBalancerClient struct {
- cc *grpc.ClientConn
-}
-
-func NewLoadBalancerClient(cc *grpc.ClientConn) LoadBalancerClient {
- return &loadBalancerClient{cc}
-}
-
-func (c *loadBalancerClient) BalanceLoad(ctx context.Context, opts ...grpc.CallOption) (LoadBalancer_BalanceLoadClient, error) {
- stream, err := grpc.NewClientStream(ctx, &_LoadBalancer_serviceDesc.Streams[0], c.cc, "/grpc.lb.v1.LoadBalancer/BalanceLoad", opts...)
- if err != nil {
- return nil, err
+func (m *Server) GetIpAddress() []byte {
+ if m != nil {
+ return m.IpAddress
}
- x := &loadBalancerBalanceLoadClient{stream}
- return x, nil
-}
-
-type LoadBalancer_BalanceLoadClient interface {
- Send(*LoadBalanceRequest) error
- Recv() (*LoadBalanceResponse, error)
- grpc.ClientStream
-}
-
-type loadBalancerBalanceLoadClient struct {
- grpc.ClientStream
-}
-
-func (x *loadBalancerBalanceLoadClient) Send(m *LoadBalanceRequest) error {
- return x.ClientStream.SendMsg(m)
+ return nil
}
-func (x *loadBalancerBalanceLoadClient) Recv() (*LoadBalanceResponse, error) {
- m := new(LoadBalanceResponse)
- if err := x.ClientStream.RecvMsg(m); err != nil {
- return nil, err
+func (m *Server) GetPort() int32 {
+ if m != nil {
+ return m.Port
}
- return m, nil
-}
-
-// Server API for LoadBalancer service
-
-type LoadBalancerServer interface {
- // Bidirectional rpc to get a list of servers.
- BalanceLoad(LoadBalancer_BalanceLoadServer) error
-}
-
-func RegisterLoadBalancerServer(s *grpc.Server, srv LoadBalancerServer) {
- s.RegisterService(&_LoadBalancer_serviceDesc, srv)
-}
-
-func _LoadBalancer_BalanceLoad_Handler(srv interface{}, stream grpc.ServerStream) error {
- return srv.(LoadBalancerServer).BalanceLoad(&loadBalancerBalanceLoadServer{stream})
-}
-
-type LoadBalancer_BalanceLoadServer interface {
- Send(*LoadBalanceResponse) error
- Recv() (*LoadBalanceRequest, error)
- grpc.ServerStream
+ return 0
}
-type loadBalancerBalanceLoadServer struct {
- grpc.ServerStream
+func (m *Server) GetLoadBalanceToken() string {
+ if m != nil {
+ return m.LoadBalanceToken
+ }
+ return ""
}
-func (x *loadBalancerBalanceLoadServer) Send(m *LoadBalanceResponse) error {
- return x.ServerStream.SendMsg(m)
+func (m *Server) GetDropForRateLimiting() bool {
+ if m != nil {
+ return m.DropForRateLimiting
+ }
+ return false
}
-func (x *loadBalancerBalanceLoadServer) Recv() (*LoadBalanceRequest, error) {
- m := new(LoadBalanceRequest)
- if err := x.ServerStream.RecvMsg(m); err != nil {
- return nil, err
+func (m *Server) GetDropForLoadBalancing() bool {
+ if m != nil {
+ return m.DropForLoadBalancing
}
- return m, nil
+ return false
}
-var _LoadBalancer_serviceDesc = grpc.ServiceDesc{
- ServiceName: "grpc.lb.v1.LoadBalancer",
- HandlerType: (*LoadBalancerServer)(nil),
- Methods: []grpc.MethodDesc{},
- Streams: []grpc.StreamDesc{
- {
- StreamName: "BalanceLoad",
- Handler: _LoadBalancer_BalanceLoad_Handler,
- ServerStreams: true,
- ClientStreams: true,
- },
- },
- Metadata: "grpclb.proto",
+func init() {
+ proto.RegisterType((*Duration)(nil), "grpc.lb.v1.Duration")
+ proto.RegisterType((*Timestamp)(nil), "grpc.lb.v1.Timestamp")
+ proto.RegisterType((*LoadBalanceRequest)(nil), "grpc.lb.v1.LoadBalanceRequest")
+ proto.RegisterType((*InitialLoadBalanceRequest)(nil), "grpc.lb.v1.InitialLoadBalanceRequest")
+ proto.RegisterType((*ClientStats)(nil), "grpc.lb.v1.ClientStats")
+ proto.RegisterType((*LoadBalanceResponse)(nil), "grpc.lb.v1.LoadBalanceResponse")
+ proto.RegisterType((*InitialLoadBalanceResponse)(nil), "grpc.lb.v1.InitialLoadBalanceResponse")
+ proto.RegisterType((*ServerList)(nil), "grpc.lb.v1.ServerList")
+ proto.RegisterType((*Server)(nil), "grpc.lb.v1.Server")
}
func init() { proto.RegisterFile("grpclb.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
- // 471 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x8c, 0x93, 0x51, 0x6f, 0xd3, 0x3e,
- 0x14, 0xc5, 0x9b, 0x7f, 0xb7, 0xfd, 0xb7, 0x9b, 0xc0, 0xc6, 0xdd, 0x54, 0xda, 0x32, 0x8d, 0x2a,
- 0x08, 0x54, 0x90, 0x08, 0x2c, 0xbc, 0xf1, 0x84, 0x0a, 0x0f, 0x45, 0xda, 0xd3, 0xf6, 0x86, 0x90,
- 0x2c, 0x27, 0xb9, 0x9a, 0x2c, 0x82, 0x6d, 0x6c, 0xaf, 0x1a, 0xdf, 0x07, 0xf1, 0x39, 0x91, 0xe3,
- 0x94, 0x64, 0x54, 0x15, 0xbc, 0xc5, 0xbe, 0x3e, 0xf7, 0x1e, 0xff, 0x7c, 0x02, 0xc9, 0xb5, 0xd1,
- 0x65, 0x5d, 0x64, 0xda, 0x28, 0xa7, 0x10, 0xfc, 0x2a, 0xab, 0x8b, 0x6c, 0x75, 0x9e, 0xbe, 0x80,
- 0xfd, 0x0f, 0x37, 0x86, 0x3b, 0xa1, 0x24, 0x1e, 0xc2, 0xff, 0x96, 0x4a, 0x25, 0x2b, 0x3b, 0x8e,
- 0x66, 0xd1, 0x7c, 0x88, 0xf7, 0x60, 0x57, 0x72, 0xa9, 0xec, 0xf8, 0xbf, 0x59, 0x34, 0xdf, 0x4d,
- 0x7f, 0x44, 0x80, 0x17, 0x8a, 0x57, 0x0b, 0x5e, 0x73, 0x59, 0xd2, 0x25, 0x7d, 0xbb, 0x21, 0xeb,
- 0xf0, 0x1d, 0x1c, 0x0a, 0x29, 0x9c, 0xe0, 0x35, 0x33, 0x61, 0xab, 0x91, 0xc7, 0xf9, 0xd3, 0xac,
- 0x1b, 0x94, 0x7d, 0x0c, 0x47, 0x36, 0xf5, 0xcb, 0x01, 0xbe, 0x82, 0xa4, 0xac, 0x05, 0x49, 0xc7,
- 0xac, 0xe3, 0x2e, 0x8c, 0x8b, 0xf3, 0x87, 0x7d, 0xf9, 0xfb, 0xa6, 0x7e, 0xe5, 0xcb, 0xcb, 0xc1,
- 0xe2, 0x11, 0x4c, 0x6a, 0xc5, 0x2b, 0x56, 0x84, 0x4e, 0xeb, 0xb9, 0xcc, 0x7d, 0xd7, 0x94, 0x3e,
- 0x87, 0xc9, 0xd6, 0x61, 0x98, 0xc0, 0x8e, 0xe4, 0x5f, 0xa9, 0x71, 0x78, 0x90, 0x7e, 0x82, 0xb8,
- 0xd7, 0x18, 0x47, 0x70, 0xdf, 0x29, 0xd7, 0xdd, 0x63, 0xcd, 0x61, 0x02, 0x0f, 0x5a, 0x7f, 0x46,
- 0x97, 0x8c, 0x8c, 0x51, 0x26, 0x98, 0x1c, 0xe2, 0x18, 0x8e, 0x2a, 0xa3, 0xb4, 0xa6, 0xaa, 0x13,
- 0x0d, 0x7d, 0x25, 0xfd, 0x19, 0xc1, 0xf1, 0x1d, 0x03, 0x56, 0x2b, 0x69, 0x09, 0x17, 0x70, 0xd4,
- 0xe1, 0x0a, 0x7b, 0x2d, 0xaf, 0x67, 0x7f, 0xe3, 0x15, 0x4e, 0x2f, 0x07, 0xf8, 0x12, 0x62, 0x4b,
- 0x66, 0x45, 0x86, 0xd5, 0xc2, 0xba, 0x96, 0xd7, 0xa8, 0x2f, 0xbf, 0x6a, 0xca, 0x17, 0xc2, 0xf3,
- 0x5d, 0x9c, 0xc2, 0xf4, 0x0f, 0x5c, 0xa1, 0x53, 0xe0, 0x75, 0x0b, 0xd3, 0xed, 0xc3, 0xf0, 0x0c,
- 0x46, 0x7d, 0xad, 0x61, 0x15, 0xd5, 0x74, 0xcd, 0x5d, 0x8b, 0x10, 0xdf, 0xc2, 0x69, 0xff, 0xed,
- 0x98, 0x21, 0xad, 0x8c, 0x63, 0x42, 0x3a, 0x32, 0x2b, 0x5e, 0x37, 0x30, 0xe2, 0xfc, 0xa4, 0xef,
- 0x6d, 0x1d, 0xb8, 0xb4, 0x02, 0xe8, 0x7c, 0xe2, 0x13, 0x1f, 0x3f, 0xbf, 0xf2, 0xd8, 0x87, 0xf3,
- 0x38, 0xc7, 0xcd, 0x0b, 0xe1, 0x39, 0x1c, 0xd3, 0xad, 0x16, 0xa1, 0xc1, 0xbf, 0x4d, 0xf9, 0x0c,
- 0x7b, 0xad, 0x18, 0x01, 0x84, 0x66, 0xbc, 0xaa, 0x0c, 0xd9, 0xf0, 0xb6, 0x89, 0x0f, 0x84, 0x37,
- 0x1c, 0x22, 0x8e, 0x53, 0xc0, 0x3b, 0xa4, 0x9c, 0xfa, 0x42, 0xb2, 0xe9, 0x7e, 0x80, 0x27, 0x90,
- 0xf8, 0xa7, 0xfe, 0x1d, 0xf2, 0x9d, 0x59, 0x34, 0xdf, 0xcf, 0x0b, 0x48, 0x7a, 0xd8, 0x0c, 0x5e,
- 0x42, 0xdc, 0x7e, 0xfb, 0x6d, 0x3c, 0xeb, 0x5b, 0xda, 0xcc, 0xe3, 0xf4, 0xf1, 0xd6, 0x7a, 0xe0,
- 0x3f, 0x8f, 0x5e, 0x47, 0xc5, 0x5e, 0xf3, 0xdf, 0xbe, 0xf9, 0x15, 0x00, 0x00, 0xff, 0xff, 0x01,
- 0x8b, 0xc9, 0x26, 0xc7, 0x03, 0x00, 0x00,
+ // 733 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x55, 0xdd, 0x4e, 0x1b, 0x39,
+ 0x14, 0x66, 0x36, 0xfc, 0xe5, 0x24, 0x5a, 0x58, 0x93, 0x85, 0xc0, 0xc2, 0x2e, 0x1b, 0xa9, 0x34,
+ 0xaa, 0x68, 0x68, 0x43, 0x7b, 0xd1, 0x9f, 0x9b, 0x02, 0x45, 0x41, 0xe5, 0xa2, 0x72, 0xa8, 0x7a,
+ 0x55, 0x59, 0x4e, 0xc6, 0x80, 0xc5, 0xc4, 0x9e, 0xda, 0x4e, 0x68, 0x2f, 0x7b, 0xd9, 0x47, 0xe9,
+ 0x63, 0x54, 0x7d, 0x86, 0xbe, 0x4f, 0x65, 0x7b, 0x26, 0x33, 0x90, 0x1f, 0xd4, 0xbb, 0xf1, 0xf1,
+ 0x77, 0xbe, 0xf3, 0xf9, 0xd8, 0xdf, 0x19, 0x28, 0x5f, 0xa8, 0xb8, 0x1b, 0x75, 0x1a, 0xb1, 0x92,
+ 0x46, 0x22, 0xb0, 0xab, 0x46, 0xd4, 0x69, 0x0c, 0x1e, 0xd7, 0x9e, 0xc3, 0xe2, 0x51, 0x5f, 0x51,
+ 0xc3, 0xa5, 0x40, 0x55, 0x58, 0xd0, 0xac, 0x2b, 0x45, 0xa8, 0xab, 0xc1, 0x76, 0x50, 0x2f, 0xe0,
+ 0x74, 0x89, 0x2a, 0x30, 0x27, 0xa8, 0x90, 0xba, 0xfa, 0xc7, 0x76, 0x50, 0x9f, 0xc3, 0x7e, 0x51,
+ 0x7b, 0x01, 0xc5, 0x33, 0xde, 0x63, 0xda, 0xd0, 0x5e, 0xfc, 0xdb, 0xc9, 0xdf, 0x03, 0x40, 0xa7,
+ 0x92, 0x86, 0x07, 0x34, 0xa2, 0xa2, 0xcb, 0x30, 0xfb, 0xd8, 0x67, 0xda, 0xa0, 0xb7, 0xb0, 0xc4,
+ 0x05, 0x37, 0x9c, 0x46, 0x44, 0xf9, 0x90, 0xa3, 0x2b, 0x35, 0xef, 0x35, 0x32, 0xd5, 0x8d, 0x13,
+ 0x0f, 0x19, 0xcd, 0x6f, 0xcd, 0xe0, 0x3f, 0x93, 0xfc, 0x94, 0xf1, 0x25, 0x94, 0xbb, 0x11, 0x67,
+ 0xc2, 0x10, 0x6d, 0xa8, 0xf1, 0x2a, 0x4a, 0xcd, 0xb5, 0x3c, 0xdd, 0xa1, 0xdb, 0x6f, 0xdb, 0xed,
+ 0xd6, 0x0c, 0x2e, 0x75, 0xb3, 0xe5, 0xc1, 0x3f, 0xb0, 0x1e, 0x49, 0x1a, 0x92, 0x8e, 0x2f, 0x93,
+ 0x8a, 0x22, 0xe6, 0x73, 0xcc, 0x6a, 0x7b, 0xb0, 0x3e, 0x51, 0x09, 0x42, 0x30, 0x2b, 0x68, 0x8f,
+ 0x39, 0xf9, 0x45, 0xec, 0xbe, 0x6b, 0x5f, 0x67, 0xa1, 0x94, 0x2b, 0x86, 0xf6, 0xa1, 0x68, 0xd2,
+ 0x0e, 0x26, 0xe7, 0xfc, 0x3b, 0x2f, 0x6c, 0xd8, 0x5e, 0x9c, 0xe1, 0xd0, 0x03, 0xf8, 0x4b, 0xf4,
+ 0x7b, 0xa4, 0x4b, 0xa3, 0x48, 0xdb, 0x33, 0x29, 0xc3, 0x42, 0x77, 0xaa, 0x02, 0x5e, 0x12, 0xfd,
+ 0xde, 0xa1, 0x8d, 0xb7, 0x7d, 0x18, 0xed, 0x02, 0xca, 0xb0, 0xe7, 0x5c, 0x70, 0x7d, 0xc9, 0xc2,
+ 0x6a, 0xc1, 0x81, 0x97, 0x53, 0xf0, 0x71, 0x12, 0x47, 0x04, 0x1a, 0xa3, 0x68, 0x72, 0xcd, 0xcd,
+ 0x25, 0x09, 0x95, 0x8c, 0xc9, 0xb9, 0x54, 0x44, 0x51, 0xc3, 0x48, 0xc4, 0x7b, 0xdc, 0x70, 0x71,
+ 0x51, 0x9d, 0x75, 0x4c, 0xf7, 0x6f, 0x33, 0xbd, 0xe7, 0xe6, 0xf2, 0x48, 0xc9, 0xf8, 0x58, 0x2a,
+ 0x4c, 0x0d, 0x3b, 0x4d, 0xe0, 0x88, 0xc2, 0xde, 0x9d, 0x05, 0x72, 0xed, 0xb6, 0x15, 0xe6, 0x5c,
+ 0x85, 0xfa, 0x94, 0x0a, 0x59, 0xef, 0x6d, 0x89, 0x0f, 0xf0, 0x70, 0x52, 0x89, 0xe4, 0x19, 0x9c,
+ 0x53, 0x1e, 0xb1, 0x90, 0x18, 0x49, 0x34, 0x13, 0x61, 0x75, 0xde, 0x15, 0xd8, 0x19, 0x57, 0xc0,
+ 0x5f, 0xd5, 0xb1, 0xc3, 0x9f, 0xc9, 0x36, 0x13, 0x21, 0x6a, 0xc1, 0xff, 0x63, 0xe8, 0xaf, 0x84,
+ 0xbc, 0x16, 0x44, 0xb1, 0x2e, 0xe3, 0x03, 0x16, 0x56, 0x17, 0x1c, 0xe5, 0xd6, 0x6d, 0xca, 0x37,
+ 0x16, 0x85, 0x13, 0x50, 0xed, 0x47, 0x00, 0x2b, 0x37, 0x9e, 0x8d, 0x8e, 0xa5, 0xd0, 0x0c, 0xb5,
+ 0x61, 0x39, 0x73, 0x80, 0x8f, 0x25, 0x4f, 0x63, 0xe7, 0x2e, 0x0b, 0x78, 0x74, 0x6b, 0x06, 0x2f,
+ 0x0d, 0x3d, 0x90, 0x90, 0x3e, 0x83, 0x92, 0x66, 0x6a, 0xc0, 0x14, 0x89, 0xb8, 0x36, 0x89, 0x07,
+ 0x56, 0xf3, 0x7c, 0x6d, 0xb7, 0x7d, 0xca, 0x9d, 0x87, 0x40, 0x0f, 0x57, 0x07, 0x9b, 0xb0, 0x71,
+ 0xcb, 0x01, 0x9e, 0xd3, 0x5b, 0xe0, 0x5b, 0x00, 0x1b, 0x93, 0xa5, 0xa0, 0x27, 0xb0, 0x9a, 0x4f,
+ 0x56, 0x24, 0x64, 0x11, 0xbb, 0xa0, 0x26, 0xb5, 0x45, 0x25, 0xca, 0x92, 0xd4, 0x51, 0xb2, 0x87,
+ 0xde, 0xc1, 0x66, 0xde, 0xb2, 0x44, 0xb1, 0x58, 0x2a, 0x43, 0xb8, 0x30, 0x4c, 0x0d, 0x68, 0x94,
+ 0xc8, 0xaf, 0xe4, 0xe5, 0xa7, 0x43, 0x0c, 0xaf, 0xe7, 0xdc, 0x8b, 0x5d, 0xde, 0x49, 0x92, 0x56,
+ 0xfb, 0x12, 0x00, 0x64, 0xc7, 0x44, 0xbb, 0x76, 0x62, 0xd9, 0x95, 0x9d, 0x58, 0x85, 0x7a, 0xa9,
+ 0x89, 0x46, 0xfb, 0x81, 0x53, 0x08, 0x7a, 0x0d, 0x2b, 0xec, 0x53, 0xcc, 0x7d, 0x95, 0x4c, 0x4a,
+ 0x61, 0x8a, 0x14, 0x94, 0x25, 0x0c, 0x35, 0xfc, 0x0c, 0x60, 0xde, 0x53, 0xa3, 0x2d, 0x00, 0x1e,
+ 0x13, 0x1a, 0x86, 0x8a, 0x69, 0x3f, 0x34, 0xcb, 0xb8, 0xc8, 0xe3, 0x57, 0x3e, 0x60, 0xe7, 0x87,
+ 0x55, 0x9f, 0x4c, 0x4d, 0xf7, 0x6d, 0xed, 0x7c, 0xe3, 0x2e, 0x8c, 0xbc, 0x62, 0xc2, 0x69, 0x28,
+ 0xe2, 0xe5, 0x5c, 0x2b, 0xcf, 0x6c, 0x1c, 0xed, 0xc3, 0xea, 0x14, 0xdb, 0x2e, 0xe2, 0x95, 0x70,
+ 0x8c, 0x45, 0x9f, 0xc2, 0xda, 0x34, 0x2b, 0x2e, 0xe2, 0x4a, 0x38, 0xc6, 0x76, 0xcd, 0x0e, 0x94,
+ 0x73, 0xf7, 0xaf, 0x10, 0x86, 0x52, 0xf2, 0x6d, 0xc3, 0xe8, 0xdf, 0x7c, 0x83, 0x46, 0x87, 0xe5,
+ 0xc6, 0x7f, 0x13, 0xf7, 0xfd, 0x43, 0xaa, 0x07, 0x8f, 0x82, 0xce, 0xbc, 0xfb, 0x7d, 0xed, 0xff,
+ 0x0a, 0x00, 0x00, 0xff, 0xff, 0x64, 0xbf, 0xda, 0x5e, 0xce, 0x06, 0x00, 0x00,
}
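The regenerated file above also adds explicit getters (GetSeconds, GetName, GetIpAddress, ...). Generated getters are nil-safe, which is why call sites such as l.GetServers() in grpclb.go need no nil check. A small stand-alone illustration of the pattern (the type mirrors the generated Timestamp; it is not the generated code itself):

    package main

    import "fmt"

    // Timestamp mirrors the shape of the generated message above.
    type Timestamp struct {
    	Seconds int64
    	Nanos   int32
    }

    // GetSeconds follows the generated nil-safe getter pattern: calling it on a
    // nil *Timestamp returns the zero value instead of panicking.
    func (m *Timestamp) GetSeconds() int64 {
    	if m != nil {
    		return m.Seconds
    	}
    	return 0
    }

    func main() {
    	var ts *Timestamp            // nil message
    	fmt.Println(ts.GetSeconds()) // prints 0
    }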
diff --git a/vendor/google.golang.org/grpc/grpclb/grpclb_server_generated.go b/vendor/google.golang.org/grpc/grpclb/grpclb_server_generated.go
new file mode 100644
index 0000000000..65ce6360c9
--- /dev/null
+++ b/vendor/google.golang.org/grpc/grpclb/grpclb_server_generated.go
@@ -0,0 +1,54 @@
+// This file contains the generated server side code.
+// It's only used for grpclb testing.
+
+package grpclb
+
+import (
+ "google.golang.org/grpc"
+ lbpb "google.golang.org/grpc/grpclb/grpc_lb_v1"
+)
+
+// Server API for LoadBalancer service
+
+type loadBalancerServer interface {
+ // Bidirectional rpc to get a list of servers.
+ BalanceLoad(*loadBalancerBalanceLoadServer) error
+}
+
+func registerLoadBalancerServer(s *grpc.Server, srv loadBalancerServer) {
+ s.RegisterService(
+ &grpc.ServiceDesc{
+ ServiceName: "grpc.lb.v1.LoadBalancer",
+ HandlerType: (*loadBalancerServer)(nil),
+ Methods: []grpc.MethodDesc{},
+ Streams: []grpc.StreamDesc{
+ {
+ StreamName: "BalanceLoad",
+ Handler: balanceLoadHandler,
+ ServerStreams: true,
+ ClientStreams: true,
+ },
+ },
+ Metadata: "grpclb.proto",
+ }, srv)
+}
+
+func balanceLoadHandler(srv interface{}, stream grpc.ServerStream) error {
+ return srv.(loadBalancerServer).BalanceLoad(&loadBalancerBalanceLoadServer{stream})
+}
+
+type loadBalancerBalanceLoadServer struct {
+ grpc.ServerStream
+}
+
+func (x *loadBalancerBalanceLoadServer) Send(m *lbpb.LoadBalanceResponse) error {
+ return x.ServerStream.SendMsg(m)
+}
+
+func (x *loadBalancerBalanceLoadServer) Recv() (*lbpb.LoadBalanceRequest, error) {
+ m := new(lbpb.LoadBalanceRequest)
+ if err := x.ServerStream.RecvMsg(m); err != nil {
+ return nil, err
+ }
+ return m, nil
+}
diff --git a/vendor/google.golang.org/grpc/interceptor.go b/vendor/google.golang.org/grpc/interceptor.go
index 8d932efed7..19893be9d1 100644
--- a/vendor/google.golang.org/grpc/interceptor.go
+++ b/vendor/google.golang.org/grpc/interceptor.go
@@ -40,17 +40,17 @@ import (
// UnaryInvoker is called by UnaryClientInterceptor to complete RPCs.
type UnaryInvoker func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error
-// UnaryClientInterceptor intercepts the execution of a unary RPC on the client. inovker is the handler to complete the RPC
+// UnaryClientInterceptor intercepts the execution of a unary RPC on the client. invoker is the handler to complete the RPC
// and it is the responsibility of the interceptor to call it.
-// This is the EXPERIMENTAL API.
+// This is an EXPERIMENTAL API.
type UnaryClientInterceptor func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error
// Streamer is called by StreamClientInterceptor to create a ClientStream.
type Streamer func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (ClientStream, error)
// StreamClientInterceptor intercepts the creation of ClientStream. It may return a custom ClientStream to intercept all I/O
-// operations. streamer is the handlder to create a ClientStream and it is the responsibility of the interceptor to call it.
-// This is the EXPERIMENTAL API.
+// operations. streamer is the handler to create a ClientStream and it is the responsibility of the interceptor to call it.
+// This is an EXPERIMENTAL API.
type StreamClientInterceptor func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, streamer Streamer, opts ...CallOption) (ClientStream, error)
// UnaryServerInfo consists of various information about a unary RPC on
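The comment fixes above describe the client-side interceptor hooks. For orientation, a sketch of a unary client interceptor matching that signature, assuming grpc.WithUnaryInterceptor is used to install it (the target address and log format are placeholders):

    package main

    import (
    	"log"
    	"time"

    	"golang.org/x/net/context"

    	"google.golang.org/grpc"
    )

    // timingInterceptor matches grpc.UnaryClientInterceptor; it must call the
    // supplied invoker to actually perform the RPC.
    func timingInterceptor(ctx context.Context, method string, req, reply interface{},
    	cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
    	start := time.Now()
    	err := invoker(ctx, method, req, reply, cc, opts...)
    	log.Printf("%s took %v (err=%v)", method, time.Since(start), err)
    	return err
    }

    func main() {
    	conn, err := grpc.Dial("localhost:50051",
    		grpc.WithInsecure(),
    		grpc.WithUnaryInterceptor(timingInterceptor))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    }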
diff --git a/vendor/google.golang.org/grpc/interop/test_utils.go b/vendor/google.golang.org/grpc/interop/test_utils.go
index e4e427c75e..15ec008395 100644
--- a/vendor/google.golang.org/grpc/interop/test_utils.go
+++ b/vendor/google.golang.org/grpc/interop/test_utils.go
@@ -392,7 +392,7 @@ func DoPerRPCCreds(tc testpb.TestServiceClient, serviceAccountKeyFile, oauthScop
}
token := GetToken(serviceAccountKeyFile, oauthScope)
kv := map[string]string{"authorization": token.TokenType + " " + token.AccessToken}
- ctx := metadata.NewContext(context.Background(), metadata.MD{"authorization": []string{kv["authorization"]}})
+ ctx := metadata.NewOutgoingContext(context.Background(), metadata.MD{"authorization": []string{kv["authorization"]}})
reply, err := tc.UnaryCall(ctx, req)
if err != nil {
grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err)
@@ -416,7 +416,7 @@ var (
// DoCancelAfterBegin cancels the RPC after metadata has been sent but before payloads are sent.
func DoCancelAfterBegin(tc testpb.TestServiceClient, args ...grpc.CallOption) {
- ctx, cancel := context.WithCancel(metadata.NewContext(context.Background(), testMetadata))
+ ctx, cancel := context.WithCancel(metadata.NewOutgoingContext(context.Background(), testMetadata))
stream, err := tc.StreamingInputCall(ctx, args...)
if err != nil {
grpclog.Fatalf("%v.StreamingInputCall(_) = _, %v", tc, err)
@@ -491,7 +491,7 @@ func DoCustomMetadata(tc testpb.TestServiceClient, args ...grpc.CallOption) {
ResponseSize: proto.Int32(int32(1)),
Payload: pl,
}
- ctx := metadata.NewContext(context.Background(), customMetadata)
+ ctx := metadata.NewOutgoingContext(context.Background(), customMetadata)
var header, trailer metadata.MD
args = append(args, grpc.Header(&header), grpc.Trailer(&trailer))
reply, err := tc.UnaryCall(
@@ -627,7 +627,7 @@ func serverNewPayload(t testpb.PayloadType, size int32) (*testpb.Payload, error)
func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) {
status := in.GetResponseStatus()
- if md, ok := metadata.FromContext(ctx); ok {
+ if md, ok := metadata.FromIncomingContext(ctx); ok {
if initialMetadata, ok := md[initialMetadataKey]; ok {
header := metadata.Pairs(initialMetadataKey, initialMetadata[0])
grpc.SendHeader(ctx, header)
@@ -686,7 +686,7 @@ func (s *testServer) StreamingInputCall(stream testpb.TestService_StreamingInput
}
func (s *testServer) FullDuplexCall(stream testpb.TestService_FullDuplexCallServer) error {
- if md, ok := metadata.FromContext(stream.Context()); ok {
+ if md, ok := metadata.FromIncomingContext(stream.Context()); ok {
if initialMetadata, ok := md[initialMetadataKey]; ok {
header := metadata.Pairs(initialMetadataKey, initialMetadata[0])
stream.SendHeader(header)
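The interop changes above track the metadata API split: NewOutgoingContext attaches metadata to an outgoing client call, while FromIncomingContext reads what the server received. A minimal sketch of the two call shapes (the header key and token are placeholders):

    package main

    import (
    	"fmt"

    	"golang.org/x/net/context"

    	"google.golang.org/grpc/metadata"
    )

    // clientContext attaches metadata that will travel with outgoing RPCs.
    func clientContext() context.Context {
    	md := metadata.Pairs("authorization", "Bearer placeholder-token")
    	return metadata.NewOutgoingContext(context.Background(), md)
    }

    // serverRead extracts metadata the way a handler reads its RPC context.
    func serverRead(ctx context.Context) []string {
    	if md, ok := metadata.FromIncomingContext(ctx); ok {
    		return md["authorization"]
    	}
    	return nil
    }

    func main() {
    	// Outside a real RPC the outgoing metadata never becomes incoming
    	// metadata, so this prints []; it only demonstrates the two calls.
    	fmt.Println(serverRead(clientContext()))
    }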
diff --git a/vendor/google.golang.org/grpc/keepalive/keepalive.go b/vendor/google.golang.org/grpc/keepalive/keepalive.go
new file mode 100644
index 0000000000..0f855ff46e
--- /dev/null
+++ b/vendor/google.golang.org/grpc/keepalive/keepalive.go
@@ -0,0 +1,80 @@
+/*
+ *
+ * Copyright 2017, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+// Package keepalive defines configurable parameters for point-to-point healthcheck.
+package keepalive
+
+import (
+ "time"
+)
+
+// ClientParameters is used to set keepalive parameters on the client-side.
+// These configure how the client will actively probe to notice when a connection is broken
+// and send pings so intermediaries will be aware of the liveness of the connection.
+// Make sure these parameters are set in coordination with the keepalive policy on the server,
+// as incompatible settings can result in the connection being closed.
+type ClientParameters struct {
+ // After a duration of this time if the client doesn't see any activity it pings the server to see if the transport is still alive.
+ Time time.Duration // The current default value is infinity.
+ // After having pinged for keepalive check, the client waits for a duration of Timeout and if no activity is seen even after that
+ // the connection is closed.
+ Timeout time.Duration // The current default value is 20 seconds.
+ // If true, client runs keepalive checks even with no active RPCs.
+ PermitWithoutStream bool // false by default.
+}
+
+// ServerParameters is used to set keepalive and max-age parameters on the server-side.
+type ServerParameters struct {
+ // MaxConnectionIdle is a duration for the amount of time after which an idle connection would be closed by sending a GoAway.
+ // Idleness duration is defined since the most recent time the number of outstanding RPCs became zero or the connection establishment.
+ MaxConnectionIdle time.Duration // The current default value is infinity.
+ // MaxConnectionAge is a duration for the maximum amount of time a connection may exist before it will be closed by sending a GoAway.
+ // A random jitter of +/-10% will be added to MaxConnectionAge to spread out connection storms.
+ MaxConnectionAge time.Duration // The current default value is infinity.
+ // MaxConnectionAgeGrace is an additive period after MaxConnectionAge after which the connection will be forcibly closed.
+ MaxConnectionAgeGrace time.Duration // The current default value is infinity.
+ // After a duration of this time if the server doesn't see any activity it pings the client to see if the transport is still alive.
+ Time time.Duration // The current default value is 2 hours.
+ // After having pinged for keepalive check, the server waits for a duration of Timeout and if no activity is seen even after that
+ // the connection is closed.
+ Timeout time.Duration // The current default value is 20 seconds.
+}
+
+// EnforcementPolicy is used to set keepalive enforcement policy on the server-side.
+// Server will close connection with a client that violates this policy.
+type EnforcementPolicy struct {
+ // MinTime is the minimum amount of time a client should wait before sending a keepalive ping.
+ MinTime time.Duration // The current default value is 5 minutes.
+ // If true, server expects keepalive pings even when there are no active streams(RPCs).
+ PermitWithoutStream bool // false by default.
+}
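
The structs above are plain configuration; they take effect through server options and dial options. Below is a minimal sketch of wiring them up, assuming the grpc.KeepaliveParams and grpc.KeepaliveEnforcementPolicy server options added in server.go later in this change, and assuming grpc.WithKeepaliveParams as the matching client-side dial option. The values are illustrative and chosen so the client ping interval stays within the server's enforcement policy.

package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	// Server side: drop idle connections after 15 minutes and allow clients
	// to ping at most once a minute, even without active RPCs.
	srv := grpc.NewServer(
		grpc.KeepaliveParams(keepalive.ServerParameters{
			MaxConnectionIdle: 15 * time.Minute,
			Time:              2 * time.Hour,
			Timeout:           20 * time.Second,
		}),
		grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
			MinTime:             time.Minute,
			PermitWithoutStream: true,
		}),
	)
	go srv.Serve(lis)

	// Client side: ping every 2 minutes, which stays above the server's
	// MinTime so the connection is not closed for a policy violation.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithInsecure(),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                2 * time.Minute,
			Timeout:             20 * time.Second,
			PermitWithoutStream: true,
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	conn.Close()
	srv.Stop()
}
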
diff --git a/vendor/google.golang.org/grpc/metadata/metadata.go b/vendor/google.golang.org/grpc/metadata/metadata.go
index 7332395028..06688134d5 100644
--- a/vendor/google.golang.org/grpc/metadata/metadata.go
+++ b/vendor/google.golang.org/grpc/metadata/metadata.go
@@ -36,58 +36,27 @@
package metadata // import "google.golang.org/grpc/metadata"
import (
- "encoding/base64"
"fmt"
"strings"
"golang.org/x/net/context"
)
-const (
- binHdrSuffix = "-bin"
-)
-
-// encodeKeyValue encodes key and value qualified for transmission via gRPC.
-// Transmitting binary headers violates HTTP/2 spec.
-// TODO(zhaoq): Maybe check if k is ASCII also.
-func encodeKeyValue(k, v string) (string, string) {
- k = strings.ToLower(k)
- if strings.HasSuffix(k, binHdrSuffix) {
- val := base64.StdEncoding.EncodeToString([]byte(v))
- v = string(val)
- }
- return k, v
-}
-
-// DecodeKeyValue returns the original key and value corresponding to the
-// encoded data in k, v.
-// If k is a binary header and v contains comma, v is split on comma before decoded,
-// and the decoded v will be joined with comma before returned.
+// DecodeKeyValue returns k, v, nil. It is deprecated and should not be used.
func DecodeKeyValue(k, v string) (string, string, error) {
- if !strings.HasSuffix(k, binHdrSuffix) {
- return k, v, nil
- }
- vvs := strings.Split(v, ",")
- for i, vv := range vvs {
- val, err := base64.StdEncoding.DecodeString(vv)
- if err != nil {
- return "", "", err
- }
- vvs[i] = string(val)
- }
- return k, strings.Join(vvs, ","), nil
+ return k, v, nil
}
// MD is a mapping from metadata keys to values. Users should use the following
// two convenience functions New and Pairs to generate MD.
type MD map[string][]string
-// New creates a MD from given key-value map.
-// Keys are automatically converted to lowercase. And for keys having "-bin" as suffix, their values will be applied Base64 encoding.
+// New creates an MD from a given key-value map.
+// Keys are automatically converted to lowercase.
func New(m map[string]string) MD {
md := MD{}
- for k, v := range m {
- key, val := encodeKeyValue(k, v)
+ for k, val := range m {
+ key := strings.ToLower(k)
md[key] = append(md[key], val)
}
return md
@@ -95,20 +64,19 @@ func New(m map[string]string) MD {
// Pairs returns an MD formed by the mapping of key, value ...
// Pairs panics if len(kv) is odd.
-// Keys are automatically converted to lowercase. And for keys having "-bin" as suffix, their values will be appplied Base64 encoding.
+// Keys are automatically converted to lowercase.
func Pairs(kv ...string) MD {
if len(kv)%2 == 1 {
panic(fmt.Sprintf("metadata: Pairs got the odd number of input pairs for metadata: %d", len(kv)))
}
md := MD{}
- var k string
+ var key string
for i, s := range kv {
if i%2 == 0 {
- k = s
+ key = strings.ToLower(s)
continue
}
- key, val := encodeKeyValue(k, s)
- md[key] = append(md[key], val)
+ md[key] = append(md[key], s)
}
return md
}
@@ -123,9 +91,9 @@ func (md MD) Copy() MD {
return Join(md)
}
-// Join joins any number of MDs into a single MD.
+// Join joins any number of mds into a single MD.
// The order of values for each key is determined by the order in which
-// the MDs containing those values are presented to Join.
+// the mds containing those values are presented to Join.
func Join(mds ...MD) MD {
out := MD{}
for _, md := range mds {
@@ -136,17 +104,41 @@ func Join(mds ...MD) MD {
return out
}
-type mdKey struct{}
+type mdIncomingKey struct{}
+type mdOutgoingKey struct{}
-// NewContext creates a new context with md attached.
+// NewContext is a wrapper for NewOutgoingContext(ctx, md). Deprecated.
func NewContext(ctx context.Context, md MD) context.Context {
- return context.WithValue(ctx, mdKey{}, md)
+ return NewOutgoingContext(ctx, md)
}
-// FromContext returns the MD in ctx if it exists.
-// The returned md should be immutable, writing to it may cause races.
-// Modification should be made to the copies of the returned md.
+// NewIncomingContext creates a new context with incoming md attached.
+func NewIncomingContext(ctx context.Context, md MD) context.Context {
+ return context.WithValue(ctx, mdIncomingKey{}, md)
+}
+
+// NewOutgoingContext creates a new context with outgoing md attached.
+func NewOutgoingContext(ctx context.Context, md MD) context.Context {
+ return context.WithValue(ctx, mdOutgoingKey{}, md)
+}
+
+// FromContext is a wrapper for FromIncomingContext(ctx). Deprecated.
func FromContext(ctx context.Context) (md MD, ok bool) {
- md, ok = ctx.Value(mdKey{}).(MD)
+ return FromIncomingContext(ctx)
+}
+
+// FromIncomingContext returns the incoming metadata in ctx if it exists. The
+// returned MD should not be modified. Writing to it may cause races.
+// Modification should be made to copies of the returned MD.
+func FromIncomingContext(ctx context.Context) (md MD, ok bool) {
+ md, ok = ctx.Value(mdIncomingKey{}).(MD)
+ return
+}
+
+// FromOutgoingContext returns the outgoing metadata in ctx if it exists. The
+// returned MD should not be modified. Writing to it may cause races.
+// Modification should be made to the copies of the returned MD.
+func FromOutgoingContext(ctx context.Context) (md MD, ok bool) {
+ md, ok = ctx.Value(mdOutgoingKey{}).(MD)
return
}
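
A minimal sketch of the new split API: outgoing metadata is attached on the client with NewOutgoingContext and read on the server with FromIncomingContext; the two no longer share a context key. The helper names here are illustrative.

package main

import (
	"fmt"

	"golang.org/x/net/context"
	"google.golang.org/grpc/metadata"
)

// attachRequestID puts metadata on the context used for an outgoing RPC.
func attachRequestID(ctx context.Context) context.Context {
	md := metadata.Pairs("x-request-id", "42", "authorization", "Bearer token")
	return metadata.NewOutgoingContext(ctx, md)
}

// handleIncoming is what a server-side handler does with received metadata.
func handleIncoming(ctx context.Context) {
	if md, ok := metadata.FromIncomingContext(ctx); ok {
		// The returned MD must be treated as read-only; copy before modifying.
		fmt.Println("request id:", md["x-request-id"])
	}
}

func main() {
	ctx := attachRequestID(context.Background())
	// Outgoing metadata is invisible to the incoming accessor...
	if _, ok := metadata.FromIncomingContext(ctx); !ok {
		fmt.Println("no incoming metadata on the client context, as expected")
	}
	// ...and vice versa; the transport carries it across for real RPCs.
	handleIncoming(metadata.NewIncomingContext(context.Background(), metadata.Pairs("x-request-id", "42")))
}
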
diff --git a/vendor/google.golang.org/grpc/peer/peer.go b/vendor/google.golang.org/grpc/peer/peer.go
index bfa6205ba9..fa7de9aff3 100644
--- a/vendor/google.golang.org/grpc/peer/peer.go
+++ b/vendor/google.golang.org/grpc/peer/peer.go
@@ -42,7 +42,8 @@ import (
"google.golang.org/grpc/credentials"
)
-// Peer contains the information of the peer for an RPC.
+// Peer contains the information of the peer for an RPC, such as the address
+// and authentication information.
type Peer struct {
// Addr is the peer address.
Addr net.Addr
diff --git a/vendor/google.golang.org/grpc/proxy.go b/vendor/google.golang.org/grpc/proxy.go
new file mode 100644
index 0000000000..10188dc343
--- /dev/null
+++ b/vendor/google.golang.org/grpc/proxy.go
@@ -0,0 +1,145 @@
+/*
+ *
+ * Copyright 2017, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package grpc
+
+import (
+ "bufio"
+ "errors"
+ "fmt"
+ "io"
+ "net"
+ "net/http"
+ "net/http/httputil"
+ "net/url"
+
+ "golang.org/x/net/context"
+)
+
+var (
+ // errDisabled indicates that proxy is disabled for the address.
+ errDisabled = errors.New("proxy is disabled for the address")
+ // The following variable will be overwritten in the tests.
+ httpProxyFromEnvironment = http.ProxyFromEnvironment
+)
+
+func mapAddress(ctx context.Context, address string) (string, error) {
+ req := &http.Request{
+ URL: &url.URL{
+ Scheme: "https",
+ Host: address,
+ },
+ }
+ url, err := httpProxyFromEnvironment(req)
+ if err != nil {
+ return "", err
+ }
+ if url == nil {
+ return "", errDisabled
+ }
+ return url.Host, nil
+}
+
+// To read a response from a net.Conn, http.ReadResponse() takes a bufio.Reader.
+// It's possible that this reader reads more than what's needed for the response and stores
+// those bytes in the buffer.
+// bufConn wraps the original net.Conn and the bufio.Reader to make sure we don't lose the
+// bytes in the buffer.
+type bufConn struct {
+ net.Conn
+ r io.Reader
+}
+
+func (c *bufConn) Read(b []byte) (int, error) {
+ return c.r.Read(b)
+}
+
+func doHTTPConnectHandshake(ctx context.Context, conn net.Conn, addr string) (_ net.Conn, err error) {
+ defer func() {
+ if err != nil {
+ conn.Close()
+ }
+ }()
+
+ req := (&http.Request{
+ Method: http.MethodConnect,
+ URL: &url.URL{Host: addr},
+ Header: map[string][]string{"User-Agent": {grpcUA}},
+ })
+
+ if err := sendHTTPRequest(ctx, req, conn); err != nil {
+ return nil, fmt.Errorf("failed to write the HTTP request: %v", err)
+ }
+
+ r := bufio.NewReader(conn)
+ resp, err := http.ReadResponse(r, req)
+ if err != nil {
+ return nil, fmt.Errorf("reading server HTTP response: %v", err)
+ }
+ defer resp.Body.Close()
+ if resp.StatusCode != http.StatusOK {
+ dump, err := httputil.DumpResponse(resp, true)
+ if err != nil {
+ return nil, fmt.Errorf("failed to do connect handshake, status code: %s", resp.Status)
+ }
+ return nil, fmt.Errorf("failed to do connect handshake, response: %q", dump)
+ }
+
+ return &bufConn{Conn: conn, r: r}, nil
+}
+
+// newProxyDialer returns a dialer that connects to proxy first if necessary.
+// The returned dialer checks if a proxy is necessary, dials the proxy with the
+// provided dialer, does the HTTP CONNECT handshake, and returns the connection.
+func newProxyDialer(dialer func(context.Context, string) (net.Conn, error)) func(context.Context, string) (net.Conn, error) {
+ return func(ctx context.Context, addr string) (conn net.Conn, err error) {
+ var skipHandshake bool
+ newAddr, err := mapAddress(ctx, addr)
+ if err != nil {
+ if err != errDisabled {
+ return nil, err
+ }
+ skipHandshake = true
+ newAddr = addr
+ }
+
+ conn, err = dialer(ctx, newAddr)
+ if err != nil {
+ return
+ }
+ if !skipHandshake {
+ conn, err = doHTTPConnectHandshake(ctx, conn, addr)
+ }
+ return
+ }
+}
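
newProxyDialer and its helpers are unexported, but the flow they implement can be sketched with the standard library alone: ask http.ProxyFromEnvironment whether the target should go through a proxy, dial the proxy if so, and issue an HTTP CONNECT before handing the connection over. This is an illustrative sketch of the same pattern, not the code gRPC itself runs.

package main

import (
	"bufio"
	"fmt"
	"net"
	"net/http"
	"net/url"
)

// proxiedConn mirrors bufConn above: reads go through the bufio.Reader so
// bytes buffered while parsing the CONNECT response are not lost.
type proxiedConn struct {
	net.Conn
	r *bufio.Reader
}

func (c *proxiedConn) Read(b []byte) (int, error) { return c.r.Read(b) }

// connectViaProxy dials addr, going through the environment-configured
// proxy (if any) with an HTTP CONNECT handshake first.
func connectViaProxy(addr string) (net.Conn, error) {
	probe := &http.Request{URL: &url.URL{Scheme: "https", Host: addr}}
	proxyURL, err := http.ProxyFromEnvironment(probe)
	if err != nil {
		return nil, err
	}
	if proxyURL == nil {
		// No proxy applies to this address: dial directly.
		return net.Dial("tcp", addr)
	}
	conn, err := net.Dial("tcp", proxyURL.Host)
	if err != nil {
		return nil, err
	}
	connect := &http.Request{Method: http.MethodConnect, URL: &url.URL{Host: addr}}
	if err := connect.Write(conn); err != nil {
		conn.Close()
		return nil, err
	}
	br := bufio.NewReader(conn)
	resp, err := http.ReadResponse(br, connect)
	if err != nil {
		conn.Close()
		return nil, err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		conn.Close()
		return nil, fmt.Errorf("proxy CONNECT failed: %s", resp.Status)
	}
	return &proxiedConn{Conn: conn, r: br}, nil
}

func main() {
	conn, err := connectViaProxy("example.com:443")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
}
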
diff --git a/vendor/google.golang.org/grpc/rpc_util.go b/vendor/google.golang.org/grpc/rpc_util.go
index d98a88376b..96c4be6d6b 100644
--- a/vendor/google.golang.org/grpc/rpc_util.go
+++ b/vendor/google.golang.org/grpc/rpc_util.go
@@ -37,48 +37,22 @@ import (
"bytes"
"compress/gzip"
"encoding/binary"
- "fmt"
"io"
"io/ioutil"
"math"
- "os"
+ "sync"
"time"
- "github.com/golang/protobuf/proto"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
+ "google.golang.org/grpc/credentials"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/transport"
)
-// Codec defines the interface gRPC uses to encode and decode messages.
-type Codec interface {
- // Marshal returns the wire format of v.
- Marshal(v interface{}) ([]byte, error)
- // Unmarshal parses the wire format into v.
- Unmarshal(data []byte, v interface{}) error
- // String returns the name of the Codec implementation. The returned
- // string will be used as part of content type in transmission.
- String() string
-}
-
-// protoCodec is a Codec implementation with protobuf. It is the default codec for gRPC.
-type protoCodec struct{}
-
-func (protoCodec) Marshal(v interface{}) ([]byte, error) {
- return proto.Marshal(v.(proto.Message))
-}
-
-func (protoCodec) Unmarshal(data []byte, v interface{}) error {
- return proto.Unmarshal(data, v.(proto.Message))
-}
-
-func (protoCodec) String() string {
- return "proto"
-}
-
// Compressor defines the interface gRPC uses to compress a message.
type Compressor interface {
// Do compresses p into w.
@@ -87,16 +61,24 @@ type Compressor interface {
Type() string
}
-// NewGZIPCompressor creates a Compressor based on GZIP.
-func NewGZIPCompressor() Compressor {
- return &gzipCompressor{}
+type gzipCompressor struct {
+ pool sync.Pool
}
-type gzipCompressor struct {
+// NewGZIPCompressor creates a Compressor based on GZIP.
+func NewGZIPCompressor() Compressor {
+ return &gzipCompressor{
+ pool: sync.Pool{
+ New: func() interface{} {
+ return gzip.NewWriter(ioutil.Discard)
+ },
+ },
+ }
}
func (c *gzipCompressor) Do(w io.Writer, p []byte) error {
- z := gzip.NewWriter(w)
+ z := c.pool.Get().(*gzip.Writer)
+ z.Reset(w)
if _, err := z.Write(p); err != nil {
return err
}
@@ -116,6 +98,7 @@ type Decompressor interface {
}
type gzipDecompressor struct {
+ pool sync.Pool
}
// NewGZIPDecompressor creates a Decompressor based on GZIP.
@@ -124,11 +107,26 @@ func NewGZIPDecompressor() Decompressor {
}
func (d *gzipDecompressor) Do(r io.Reader) ([]byte, error) {
- z, err := gzip.NewReader(r)
- if err != nil {
- return nil, err
+ var z *gzip.Reader
+ switch maybeZ := d.pool.Get().(type) {
+ case nil:
+ newZ, err := gzip.NewReader(r)
+ if err != nil {
+ return nil, err
+ }
+ z = newZ
+ case *gzip.Reader:
+ z = maybeZ
+ if err := z.Reset(r); err != nil {
+ d.pool.Put(z)
+ return nil, err
+ }
}
- defer z.Close()
+
+ defer func() {
+ z.Close()
+ d.pool.Put(z)
+ }()
return ioutil.ReadAll(z)
}
@@ -138,11 +136,14 @@ func (d *gzipDecompressor) Type() string {
// callInfo contains all related configuration and information about an RPC.
type callInfo struct {
- failFast bool
- headerMD metadata.MD
- trailerMD metadata.MD
- peer *peer.Peer
- traceInfo traceInfo // in trace.go
+ failFast bool
+ headerMD metadata.MD
+ trailerMD metadata.MD
+ peer *peer.Peer
+ traceInfo traceInfo // in trace.go
+ maxReceiveMessageSize *int
+ maxSendMessageSize *int
+ creds credentials.PerRPCCredentials
}
var defaultCallInfo = callInfo{failFast: true}
@@ -159,6 +160,14 @@ type CallOption interface {
after(*callInfo)
}
+// EmptyCallOption does not alter the Call configuration.
+// It can be embedded in another structure to carry satellite data for use
+// by interceptors.
+type EmptyCallOption struct{}
+
+func (EmptyCallOption) before(*callInfo) error { return nil }
+func (EmptyCallOption) after(*callInfo) {}
+
type beforeCall func(c *callInfo) error
func (o beforeCall) before(c *callInfo) error { return o(c) }
@@ -189,7 +198,9 @@ func Trailer(md *metadata.MD) CallOption {
// unary RPC.
func Peer(peer *peer.Peer) CallOption {
return afterCall(func(c *callInfo) {
- *peer = *c.peer
+ if c.peer != nil {
+ *peer = *c.peer
+ }
})
}
@@ -198,7 +209,8 @@ func Peer(peer *peer.Peer) CallOption {
// immediately. Otherwise, the RPC client will block the call until a
// connection is available (or the call is canceled or times out) and will retry
// the call if it fails due to a transient error. Please refer to
-// https://github.com/grpc/grpc/blob/master/doc/fail_fast.md. Note: failFast is default to true.
+// https://github.com/grpc/grpc/blob/master/doc/wait-for-ready.md.
+// Note: failFast defaults to true.
func FailFast(failFast bool) CallOption {
return beforeCall(func(c *callInfo) error {
c.failFast = failFast
@@ -206,6 +218,31 @@ func FailFast(failFast bool) CallOption {
})
}
+// MaxCallRecvMsgSize returns a CallOption which sets the maximum message size the client can receive.
+func MaxCallRecvMsgSize(s int) CallOption {
+ return beforeCall(func(o *callInfo) error {
+ o.maxReceiveMessageSize = &s
+ return nil
+ })
+}
+
+// MaxCallSendMsgSize returns a CallOption which sets the maximum message size the client can send.
+func MaxCallSendMsgSize(s int) CallOption {
+ return beforeCall(func(o *callInfo) error {
+ o.maxSendMessageSize = &s
+ return nil
+ })
+}
+
+// PerRPCCredentials returns a CallOption that sets credentials.PerRPCCredentials
+// for a call.
+func PerRPCCredentials(creds credentials.PerRPCCredentials) CallOption {
+ return beforeCall(func(c *callInfo) error {
+ c.creds = creds
+ return nil
+ })
+}
+
// The format of the payload: compressed or not?
type payloadFormat uint8
@@ -239,8 +276,8 @@ type parser struct {
// No other error values or types must be returned, which also means
// that the underlying io.Reader must not return an incompatible
// error.
-func (p *parser) recvMsg(maxMsgSize int) (pf payloadFormat, msg []byte, err error) {
- if _, err := io.ReadFull(p.r, p.header[:]); err != nil {
+func (p *parser) recvMsg(maxReceiveMessageSize int) (pf payloadFormat, msg []byte, err error) {
+ if _, err := p.r.Read(p.header[:]); err != nil {
return 0, nil, err
}
@@ -250,13 +287,13 @@ func (p *parser) recvMsg(maxMsgSize int) (pf payloadFormat, msg []byte, err erro
if length == 0 {
return pf, nil, nil
}
- if length > uint32(maxMsgSize) {
- return 0, nil, Errorf(codes.Internal, "grpc: received message length %d exceeding the max size %d", length, maxMsgSize)
+ if length > uint32(maxReceiveMessageSize) {
+ return 0, nil, Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", length, maxReceiveMessageSize)
}
// TODO(bradfitz,zhaoq): garbage. reuse buffer after proto decoding instead
// of making it for each message:
msg = make([]byte, int(length))
- if _, err := io.ReadFull(p.r, msg); err != nil {
+ if _, err := p.r.Read(msg); err != nil {
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
@@ -277,7 +314,7 @@ func encode(c Codec, msg interface{}, cp Compressor, cbuf *bytes.Buffer, outPayl
// TODO(zhaoq): optimize to reduce memory alloc and copying.
b, err = c.Marshal(msg)
if err != nil {
- return nil, err
+ return nil, Errorf(codes.Internal, "grpc: error while marshaling: %v", err.Error())
}
if outPayload != nil {
outPayload.Payload = msg
@@ -287,14 +324,14 @@ func encode(c Codec, msg interface{}, cp Compressor, cbuf *bytes.Buffer, outPayl
}
if cp != nil {
if err := cp.Do(cbuf, b); err != nil {
- return nil, err
+ return nil, Errorf(codes.Internal, "grpc: error while compressing: %v", err.Error())
}
b = cbuf.Bytes()
}
length = uint(len(b))
}
if length > math.MaxUint32 {
- return nil, Errorf(codes.InvalidArgument, "grpc: message too large (%d bytes)", length)
+ return nil, Errorf(codes.ResourceExhausted, "grpc: message too large (%d bytes)", length)
}
const (
@@ -335,8 +372,8 @@ func checkRecvPayload(pf payloadFormat, recvCompress string, dc Decompressor) er
return nil
}
-func recv(p *parser, c Codec, s *transport.Stream, dc Decompressor, m interface{}, maxMsgSize int, inPayload *stats.InPayload) error {
- pf, d, err := p.recvMsg(maxMsgSize)
+func recv(p *parser, c Codec, s *transport.Stream, dc Decompressor, m interface{}, maxReceiveMessageSize int, inPayload *stats.InPayload) error {
+ pf, d, err := p.recvMsg(maxReceiveMessageSize)
if err != nil {
return err
}
@@ -352,10 +389,10 @@ func recv(p *parser, c Codec, s *transport.Stream, dc Decompressor, m interface{
return Errorf(codes.Internal, "grpc: failed to decompress the received message %v", err)
}
}
- if len(d) > maxMsgSize {
+ if len(d) > maxReceiveMessageSize {
// TODO: Revisit the error code. Currently keep it consistent with java
// implementation.
- return Errorf(codes.Internal, "grpc: received a message of %d bytes exceeding %d limit", len(d), maxMsgSize)
+ return Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", len(d), maxReceiveMessageSize)
}
if err := c.Unmarshal(d, m); err != nil {
return Errorf(codes.Internal, "grpc: failed to unmarshal the received message %v", err)
@@ -370,116 +407,57 @@ func recv(p *parser, c Codec, s *transport.Stream, dc Decompressor, m interface{
return nil
}
-// rpcError defines the status from an RPC.
-type rpcError struct {
- code codes.Code
- desc string
+type rpcInfo struct {
+ bytesSent bool
+ bytesReceived bool
}
-func (e *rpcError) Error() string {
- return fmt.Sprintf("rpc error: code = %d desc = %s", e.code, e.desc)
+type rpcInfoContextKey struct{}
+
+func newContextWithRPCInfo(ctx context.Context) context.Context {
+ return context.WithValue(ctx, rpcInfoContextKey{}, &rpcInfo{})
+}
+
+func rpcInfoFromContext(ctx context.Context) (s *rpcInfo, ok bool) {
+ s, ok = ctx.Value(rpcInfoContextKey{}).(*rpcInfo)
+ return
+}
+
+func updateRPCInfoInContext(ctx context.Context, s rpcInfo) {
+ if ss, ok := rpcInfoFromContext(ctx); ok {
+ *ss = s
+ }
+ return
}
// Code returns the error code for err if it was produced by the rpc system.
// Otherwise, it returns codes.Unknown.
+//
+// Deprecated; use status.FromError and Code method instead.
func Code(err error) codes.Code {
- if err == nil {
- return codes.OK
- }
- if e, ok := err.(*rpcError); ok {
- return e.code
+ if s, ok := status.FromError(err); ok {
+ return s.Code()
}
return codes.Unknown
}
// ErrorDesc returns the error description of err if it was produced by the rpc system.
// Otherwise, it returns err.Error() or empty string when err is nil.
+//
+// Deprecated; use status.FromError and Message method instead.
func ErrorDesc(err error) string {
- if err == nil {
- return ""
- }
- if e, ok := err.(*rpcError); ok {
- return e.desc
+ if s, ok := status.FromError(err); ok {
+ return s.Message()
}
return err.Error()
}
// Errorf returns an error containing an error code and a description;
// Errorf returns nil if c is OK.
+//
+// Deprecated; use status.Errorf instead.
func Errorf(c codes.Code, format string, a ...interface{}) error {
- if c == codes.OK {
- return nil
- }
- return &rpcError{
- code: c,
- desc: fmt.Sprintf(format, a...),
- }
-}
-
-// toRPCErr converts an error into a rpcError.
-func toRPCErr(err error) error {
- switch e := err.(type) {
- case *rpcError:
- return err
- case transport.StreamError:
- return &rpcError{
- code: e.Code,
- desc: e.Desc,
- }
- case transport.ConnectionError:
- return &rpcError{
- code: codes.Internal,
- desc: e.Desc,
- }
- default:
- switch err {
- case context.DeadlineExceeded:
- return &rpcError{
- code: codes.DeadlineExceeded,
- desc: err.Error(),
- }
- case context.Canceled:
- return &rpcError{
- code: codes.Canceled,
- desc: err.Error(),
- }
- case ErrClientConnClosing:
- return &rpcError{
- code: codes.FailedPrecondition,
- desc: err.Error(),
- }
- }
-
- }
- return Errorf(codes.Unknown, "%v", err)
-}
-
-// convertCode converts a standard Go error into its canonical code. Note that
-// this is only used to translate the error returned by the server applications.
-func convertCode(err error) codes.Code {
- switch err {
- case nil:
- return codes.OK
- case io.EOF:
- return codes.OutOfRange
- case io.ErrClosedPipe, io.ErrNoProgress, io.ErrShortBuffer, io.ErrShortWrite, io.ErrUnexpectedEOF:
- return codes.FailedPrecondition
- case os.ErrInvalid:
- return codes.InvalidArgument
- case context.Canceled:
- return codes.Canceled
- case context.DeadlineExceeded:
- return codes.DeadlineExceeded
- }
- switch {
- case os.IsExist(err):
- return codes.AlreadyExists
- case os.IsNotExist(err):
- return codes.NotFound
- case os.IsPermission(err):
- return codes.PermissionDenied
- }
- return codes.Unknown
+ return status.Errorf(c, format, a...)
}
// MethodConfig defines the configuration recommended by the service providers for a
@@ -489,24 +467,22 @@ type MethodConfig struct {
// WaitForReady indicates whether RPCs sent to this method should wait until
// the connection is ready by default (!failfast). The value specified via the
// gRPC client API will override the value set here.
- WaitForReady bool
+ WaitForReady *bool
// Timeout is the default timeout for RPCs sent to this method. The actual
// deadline used will be the minimum of the value specified here and the value
// set by the application via the gRPC client API. If either one is not set,
// then the other will be used. If neither is set, then the RPC has no deadline.
- Timeout time.Duration
+ Timeout *time.Duration
// MaxReqSize is the maximum allowed payload size for an individual request in a
// stream (client->server) in bytes. The size which is measured is the serialized
// payload after per-message compression (but before stream compression) in bytes.
// The actual value used is the minimum of the value specified here and the value set
// by the application via the gRPC client API. If either one is not set, then the other
// will be used. If neither is set, then the built-in default is used.
- // TODO: support this.
- MaxReqSize uint32
+ MaxReqSize *int
// MaxRespSize is the maximum allowed payload size for an individual response in a
// stream (server->client) in bytes.
- // TODO: support this.
- MaxRespSize uint32
+ MaxRespSize *int
}
// ServiceConfig is provided by the service provider and contains parameters for how
@@ -517,9 +493,32 @@ type ServiceConfig struct {
// via grpc.WithBalancer will override this.
LB Balancer
// Methods contains a map for the methods in this service.
+ // If there is an exact match for a method (i.e. /service/method) in the map, use the corresponding MethodConfig.
+ // If there's no exact match, look for the default config for the service (/service/) and use the corresponding MethodConfig if it exists.
+ // Otherwise, the method has no MethodConfig to use.
Methods map[string]MethodConfig
}
+func min(a, b *int) *int {
+ if *a < *b {
+ return a
+ }
+ return b
+}
+
+func getMaxSize(mcMax, doptMax *int, defaultVal int) *int {
+ if mcMax == nil && doptMax == nil {
+ return &defaultVal
+ }
+ if mcMax != nil && doptMax != nil {
+ return min(mcMax, doptMax)
+ }
+ if mcMax != nil {
+ return mcMax
+ }
+ return doptMax
+}
+
// SupportPackageIsVersion4 is referenced from generated protocol buffer files
// to assert that that code is compatible with this version of the grpc package.
//
@@ -527,3 +526,8 @@ type ServiceConfig struct {
// requires a synchronised update of grpc-go and protoc-gen-go. This constant
// should not be referenced from any other code.
const SupportPackageIsVersion4 = true
+
+// Version is the current grpc version.
+const Version = "1.5.0-dev"
+
+const grpcUA = "grpc-go/" + Version
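
A sketch of how a caller uses the per-call options added above together with the status package that now backs Code, ErrorDesc and Errorf. The method path is a placeholder for a generated stub's full method name, and the request/reply are left as interface{} for brevity; in real use they are the generated proto messages.

package main

import (
	"fmt"
	"time"

	"golang.org/x/net/context"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// callWithLimits invokes a unary method with explicit size limits and
// classifies the error with the status package instead of grpc.Code.
func callWithLimits(cc *grpc.ClientConn, req, reply interface{}) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	err := grpc.Invoke(ctx, "/grpc.testing.TestService/UnaryCall", req, reply, cc,
		grpc.MaxCallRecvMsgSize(1<<20), // refuse replies larger than 1 MiB
		grpc.MaxCallSendMsgSize(1<<20), // refuse to send requests larger than 1 MiB
		grpc.FailFast(false),           // wait for a ready connection instead of failing fast
	)
	if err == nil {
		return nil
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.ResourceExhausted {
		return fmt.Errorf("message exceeded a configured size limit: %s", s.Message())
	}
	return err
}

func main() {
	fmt.Println("callWithLimits is invoked with a live *grpc.ClientConn and proto messages")
}
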
diff --git a/vendor/google.golang.org/grpc/server.go b/vendor/google.golang.org/grpc/server.go
index 985226d609..9e9a3442ef 100644
--- a/vendor/google.golang.org/grpc/server.go
+++ b/vendor/google.golang.org/grpc/server.go
@@ -53,12 +53,19 @@ import (
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/internal"
+ "google.golang.org/grpc/keepalive"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/tap"
"google.golang.org/grpc/transport"
)
+const (
+ defaultServerMaxReceiveMessageSize = 1024 * 1024 * 4
+ defaultServerMaxSendMessageSize = 1024 * 1024 * 4
+)
+
type methodHandler func(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor UnaryServerInterceptor) (interface{}, error)
// MethodDesc represents an RPC service's method specification.
@@ -105,24 +112,63 @@ type Server struct {
}
type options struct {
- creds credentials.TransportCredentials
- codec Codec
- cp Compressor
- dc Decompressor
- maxMsgSize int
- unaryInt UnaryServerInterceptor
- streamInt StreamServerInterceptor
- inTapHandle tap.ServerInHandle
- statsHandler stats.Handler
- maxConcurrentStreams uint32
- useHandlerImpl bool // use http.Handler-based server
+ creds credentials.TransportCredentials
+ codec Codec
+ cp Compressor
+ dc Decompressor
+ unaryInt UnaryServerInterceptor
+ streamInt StreamServerInterceptor
+ inTapHandle tap.ServerInHandle
+ statsHandler stats.Handler
+ maxConcurrentStreams uint32
+ maxReceiveMessageSize int
+ maxSendMessageSize int
+ useHandlerImpl bool // use http.Handler-based server
+ unknownStreamDesc *StreamDesc
+ keepaliveParams keepalive.ServerParameters
+ keepalivePolicy keepalive.EnforcementPolicy
+ initialWindowSize int32
+ initialConnWindowSize int32
}
-var defaultMaxMsgSize = 1024 * 1024 * 4 // use 4MB as the default message size limit
+var defaultServerOptions = options{
+ maxReceiveMessageSize: defaultServerMaxReceiveMessageSize,
+ maxSendMessageSize: defaultServerMaxSendMessageSize,
+}
-// A ServerOption sets options.
+// A ServerOption sets options such as credentials, codec and keepalive parameters, etc.
type ServerOption func(*options)
+// InitialWindowSize returns a ServerOption that sets window size for stream.
+// The lower bound for window size is 64K and any value smaller than that will be ignored.
+func InitialWindowSize(s int32) ServerOption {
+ return func(o *options) {
+ o.initialWindowSize = s
+ }
+}
+
+// InitialConnWindowSize returns a ServerOption that sets window size for a connection.
+// The lower bound for window size is 64K and any value smaller than that will be ignored.
+func InitialConnWindowSize(s int32) ServerOption {
+ return func(o *options) {
+ o.initialConnWindowSize = s
+ }
+}
+
+// KeepaliveParams returns a ServerOption that sets keepalive and max-age parameters for the server.
+func KeepaliveParams(kp keepalive.ServerParameters) ServerOption {
+ return func(o *options) {
+ o.keepaliveParams = kp
+ }
+}
+
+// KeepaliveEnforcementPolicy returns a ServerOption that sets keepalive enforcement policy for the server.
+func KeepaliveEnforcementPolicy(kep keepalive.EnforcementPolicy) ServerOption {
+ return func(o *options) {
+ o.keepalivePolicy = kep
+ }
+}
+
// CustomCodec returns a ServerOption that sets a codec for message marshaling and unmarshaling.
func CustomCodec(codec Codec) ServerOption {
return func(o *options) {
@@ -144,11 +190,25 @@ func RPCDecompressor(dc Decompressor) ServerOption {
}
}
-// MaxMsgSize returns a ServerOption to set the max message size in bytes for inbound mesages.
-// If this is not set, gRPC uses the default 4MB.
+// MaxMsgSize returns a ServerOption to set the max message size in bytes the server can receive.
+// If this is not set, gRPC uses the default limit. Deprecated: use MaxRecvMsgSize instead.
func MaxMsgSize(m int) ServerOption {
+ return MaxRecvMsgSize(m)
+}
+
+// MaxRecvMsgSize returns a ServerOption to set the max message size in bytes the server can receive.
+// If this is not set, gRPC uses the default 4MB.
+func MaxRecvMsgSize(m int) ServerOption {
+ return func(o *options) {
+ o.maxReceiveMessageSize = m
+ }
+}
+
+// MaxSendMsgSize returns a ServerOption to set the max message size in bytes the server can send.
+// If this is not set, gRPC uses the default 4MB.
+func MaxSendMsgSize(m int) ServerOption {
return func(o *options) {
- o.maxMsgSize = m
+ o.maxSendMessageSize = m
}
}
@@ -173,7 +233,7 @@ func Creds(c credentials.TransportCredentials) ServerOption {
func UnaryInterceptor(i UnaryServerInterceptor) ServerOption {
return func(o *options) {
if o.unaryInt != nil {
- panic("The unary server interceptor has been set.")
+ panic("The unary server interceptor was already set and may not be reset.")
}
o.unaryInt = i
}
@@ -184,7 +244,7 @@ func UnaryInterceptor(i UnaryServerInterceptor) ServerOption {
func StreamInterceptor(i StreamServerInterceptor) ServerOption {
return func(o *options) {
if o.streamInt != nil {
- panic("The stream server interceptor has been set.")
+ panic("The stream server interceptor was already set and may not be reset.")
}
o.streamInt = i
}
@@ -195,7 +255,7 @@ func StreamInterceptor(i StreamServerInterceptor) ServerOption {
func InTapHandle(h tap.ServerInHandle) ServerOption {
return func(o *options) {
if o.inTapHandle != nil {
- panic("The tap handle has been set.")
+ panic("The tap handle was already set and may not be reset.")
}
o.inTapHandle = h
}
@@ -208,11 +268,28 @@ func StatsHandler(h stats.Handler) ServerOption {
}
}
+// UnknownServiceHandler returns a ServerOption that allows for adding a custom
+// unknown service handler. The provided method is a bidi-streaming RPC service
+// handler that will be invoked instead of returning the "unimplemented" gRPC
+// error whenever a request is received for an unregistered service or method.
+// The handling function has full access to the Context of the request and the
+// stream, and the invocation passes through interceptors.
+func UnknownServiceHandler(streamHandler StreamHandler) ServerOption {
+ return func(o *options) {
+ o.unknownStreamDesc = &StreamDesc{
+ StreamName: "unknown_service_handler",
+ Handler: streamHandler,
+ // We need to assume that the users of the streamHandler will want to use both.
+ ClientStreams: true,
+ ServerStreams: true,
+ }
+ }
+}
+
// NewServer creates a gRPC server which has no service registered and has not
// started to accept requests yet.
func NewServer(opt ...ServerOption) *Server {
- var opts options
- opts.maxMsgSize = defaultMaxMsgSize
+ opts := defaultServerOptions
for _, o := range opt {
o(&opts)
}
@@ -251,8 +328,8 @@ func (s *Server) errorf(format string, a ...interface{}) {
}
}
-// RegisterService register a service and its implementation to the gRPC
-// server. Called from the IDL generated code. This must be called before
+// RegisterService registers a service and its implementation to the gRPC
+// server. It is called from the IDL generated code. This must be called before
// invoking Serve.
func (s *Server) RegisterService(sd *ServiceDesc, ss interface{}) {
ht := reflect.TypeOf(sd.HandlerType).Elem()
@@ -297,7 +374,7 @@ type MethodInfo struct {
IsServerStream bool
}
-// ServiceInfo contains unary RPC method info, streaming RPC methid info and metadata for a service.
+// ServiceInfo contains unary RPC method info, streaming RPC method info and metadata for a service.
type ServiceInfo struct {
Methods []MethodInfo
// Metadata is the metadata specified in ServiceDesc when registering service.
@@ -390,10 +467,12 @@ func (s *Server) Serve(lis net.Listener) error {
s.mu.Lock()
s.printf("Accept error: %v; retrying in %v", err, tempDelay)
s.mu.Unlock()
+ timer := time.NewTimer(tempDelay)
select {
- case <-time.After(tempDelay):
+ case <-timer.C:
case <-s.ctx.Done():
}
+ timer.Stop()
continue
}
s.mu.Lock()
@@ -446,10 +525,14 @@ func (s *Server) handleRawConn(rawConn net.Conn) {
// transport.NewServerTransport).
func (s *Server) serveHTTP2Transport(c net.Conn, authInfo credentials.AuthInfo) {
config := &transport.ServerConfig{
- MaxStreams: s.opts.maxConcurrentStreams,
- AuthInfo: authInfo,
- InTapHandle: s.opts.inTapHandle,
- StatsHandler: s.opts.statsHandler,
+ MaxStreams: s.opts.maxConcurrentStreams,
+ AuthInfo: authInfo,
+ InTapHandle: s.opts.inTapHandle,
+ StatsHandler: s.opts.statsHandler,
+ KeepaliveParams: s.opts.keepaliveParams,
+ KeepalivePolicy: s.opts.keepalivePolicy,
+ InitialWindowSize: s.opts.initialWindowSize,
+ InitialConnWindowSize: s.opts.initialConnWindowSize,
}
st, err := transport.NewServerTransport("http2", c, config)
if err != nil {
@@ -581,14 +664,11 @@ func (s *Server) sendResponse(t transport.ServerTransport, stream *transport.Str
}
p, err := encode(s.opts.codec, msg, cp, cbuf, outPayload)
if err != nil {
- // This typically indicates a fatal issue (e.g., memory
- // corruption or hardware faults) the application program
- // cannot handle.
- //
- // TODO(zhaoq): There exist other options also such as only closing the
- // faulty stream locally and remotely (Other streams can keep going). Find
- // the optimal option.
- grpclog.Fatalf("grpc: Server failed to encode response %v", err)
+ grpclog.Println("grpc: server failed to encode response: ", err)
+ return err
+ }
+ if len(p) > s.opts.maxSendMessageSize {
+ return status.Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max (%d vs. %d)", len(p), s.opts.maxSendMessageSize)
}
err = t.Write(stream, p, opts)
if err == nil && outPayload != nil {
@@ -605,9 +685,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
BeginTime: time.Now(),
}
sh.HandleRPC(stream.Context(), begin)
- }
- defer func() {
- if sh != nil {
+ defer func() {
end := &stats.End{
EndTime: time.Now(),
}
@@ -615,8 +693,8 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
end.Error = toRPCErr(err)
}
sh.HandleRPC(stream.Context(), end)
- }
- }()
+ }()
+ }
if trInfo != nil {
defer trInfo.tr.Finish()
trInfo.firstLine.client = false
@@ -633,136 +711,137 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
stream.SetSendCompress(s.opts.cp.Type())
}
p := &parser{r: stream}
- for {
- pf, req, err := p.recvMsg(s.opts.maxMsgSize)
- if err == io.EOF {
- // The entire stream is done (for unary RPC only).
- return err
- }
- if err == io.ErrUnexpectedEOF {
- err = Errorf(codes.Internal, io.ErrUnexpectedEOF.Error())
- }
- if err != nil {
- switch err := err.(type) {
- case *rpcError:
- if e := t.WriteStatus(stream, err.code, err.desc); e != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
- }
+ pf, req, err := p.recvMsg(s.opts.maxReceiveMessageSize)
+ if err == io.EOF {
+ // The entire stream is done (for unary RPC only).
+ return err
+ }
+ if err == io.ErrUnexpectedEOF {
+ err = Errorf(codes.Internal, io.ErrUnexpectedEOF.Error())
+ }
+ if err != nil {
+ if st, ok := status.FromError(err); ok {
+ if e := t.WriteStatus(stream, st); e != nil {
+ grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
+ }
+ } else {
+ switch st := err.(type) {
case transport.ConnectionError:
// Nothing to do here.
case transport.StreamError:
- if e := t.WriteStatus(stream, err.Code, err.Desc); e != nil {
+ if e := t.WriteStatus(stream, status.New(st.Code, st.Desc)); e != nil {
grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
}
default:
- panic(fmt.Sprintf("grpc: Unexpected error (%T) from recvMsg: %v", err, err))
+ panic(fmt.Sprintf("grpc: Unexpected error (%T) from recvMsg: %v", st, st))
}
- return err
}
+ return err
+ }
- if err := checkRecvPayload(pf, stream.RecvCompress(), s.opts.dc); err != nil {
- switch err := err.(type) {
- case *rpcError:
- if e := t.WriteStatus(stream, err.code, err.desc); e != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
- }
- return err
- default:
- if e := t.WriteStatus(stream, codes.Internal, err.Error()); e != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
- }
- // TODO checkRecvPayload always return RPC error. Add a return here if necessary.
+ if err := checkRecvPayload(pf, stream.RecvCompress(), s.opts.dc); err != nil {
+ if st, ok := status.FromError(err); ok {
+ if e := t.WriteStatus(stream, st); e != nil {
+ grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
}
+ return err
}
- var inPayload *stats.InPayload
- if sh != nil {
- inPayload = &stats.InPayload{
- RecvTime: time.Now(),
- }
+ if e := t.WriteStatus(stream, status.New(codes.Internal, err.Error())); e != nil {
+ grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
}
- statusCode := codes.OK
- statusDesc := ""
- df := func(v interface{}) error {
- if inPayload != nil {
- inPayload.WireLength = len(req)
- }
- if pf == compressionMade {
- var err error
- req, err = s.opts.dc.Do(bytes.NewReader(req))
- if err != nil {
- if err := t.WriteStatus(stream, codes.Internal, err.Error()); err != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", err)
- }
- return Errorf(codes.Internal, err.Error())
- }
- }
- if len(req) > s.opts.maxMsgSize {
- // TODO: Revisit the error code. Currently keep it consistent with
- // java implementation.
- statusCode = codes.Internal
- statusDesc = fmt.Sprintf("grpc: server received a message of %d bytes exceeding %d limit", len(req), s.opts.maxMsgSize)
- }
- if err := s.opts.codec.Unmarshal(req, v); err != nil {
- return err
- }
- if inPayload != nil {
- inPayload.Payload = v
- inPayload.Data = req
- inPayload.Length = len(req)
- sh.HandleRPC(stream.Context(), inPayload)
- }
- if trInfo != nil {
- trInfo.tr.LazyLog(&payload{sent: false, msg: v}, true)
- }
- return nil
+
+ // TODO checkRecvPayload always return RPC error. Add a return here if necessary.
+ }
+ var inPayload *stats.InPayload
+ if sh != nil {
+ inPayload = &stats.InPayload{
+ RecvTime: time.Now(),
}
- reply, appErr := md.Handler(srv.server, stream.Context(), df, s.opts.unaryInt)
- if appErr != nil {
- if err, ok := appErr.(*rpcError); ok {
- statusCode = err.code
- statusDesc = err.desc
- } else {
- statusCode = convertCode(appErr)
- statusDesc = appErr.Error()
- }
- if trInfo != nil && statusCode != codes.OK {
- trInfo.tr.LazyLog(stringer(statusDesc), true)
- trInfo.tr.SetError()
- }
- if err := t.WriteStatus(stream, statusCode, statusDesc); err != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status: %v", err)
+ }
+ df := func(v interface{}) error {
+ if inPayload != nil {
+ inPayload.WireLength = len(req)
+ }
+ if pf == compressionMade {
+ var err error
+ req, err = s.opts.dc.Do(bytes.NewReader(req))
+ if err != nil {
+ return Errorf(codes.Internal, err.Error())
}
- return Errorf(statusCode, statusDesc)
+ }
+ if len(req) > s.opts.maxReceiveMessageSize {
+ // TODO: Revisit the error code. Currently keep it consistent with
+ // java implementation.
+ return status.Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", len(req), s.opts.maxReceiveMessageSize)
+ }
+ if err := s.opts.codec.Unmarshal(req, v); err != nil {
+ return status.Errorf(codes.Internal, "grpc: error unmarshalling request: %v", err)
+ }
+ if inPayload != nil {
+ inPayload.Payload = v
+ inPayload.Data = req
+ inPayload.Length = len(req)
+ sh.HandleRPC(stream.Context(), inPayload)
}
if trInfo != nil {
- trInfo.tr.LazyLog(stringer("OK"), false)
+ trInfo.tr.LazyLog(&payload{sent: false, msg: v}, true)
}
- opts := &transport.Options{
- Last: true,
- Delay: false,
+ return nil
+ }
+ reply, appErr := md.Handler(srv.server, stream.Context(), df, s.opts.unaryInt)
+ if appErr != nil {
+ appStatus, ok := status.FromError(appErr)
+ if !ok {
+ // Convert appErr if it is not a grpc status error.
+ appErr = status.Error(convertCode(appErr), appErr.Error())
+ appStatus, _ = status.FromError(appErr)
+ }
+ if trInfo != nil {
+ trInfo.tr.LazyLog(stringer(appStatus.Message()), true)
+ trInfo.tr.SetError()
+ }
+ if e := t.WriteStatus(stream, appStatus); e != nil {
+ grpclog.Printf("grpc: Server.processUnaryRPC failed to write status: %v", e)
+ }
+ return appErr
+ }
+ if trInfo != nil {
+ trInfo.tr.LazyLog(stringer("OK"), false)
+ }
+ opts := &transport.Options{
+ Last: true,
+ Delay: false,
+ }
+ if err := s.sendResponse(t, stream, reply, s.opts.cp, opts); err != nil {
+ if err == io.EOF {
+ // The entire stream is done (for unary RPC only).
+ return err
}
- if err := s.sendResponse(t, stream, reply, s.opts.cp, opts); err != nil {
- switch err := err.(type) {
+ if s, ok := status.FromError(err); ok {
+ if e := t.WriteStatus(stream, s); e != nil {
+ grpclog.Printf("grpc: Server.processUnaryRPC failed to write status: %v", e)
+ }
+ } else {
+ switch st := err.(type) {
case transport.ConnectionError:
// Nothing to do here.
case transport.StreamError:
- statusCode = err.Code
- statusDesc = err.Desc
+ if e := t.WriteStatus(stream, status.New(st.Code, st.Desc)); e != nil {
+ grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
+ }
default:
- statusCode = codes.Unknown
- statusDesc = err.Error()
+ panic(fmt.Sprintf("grpc: Unexpected error (%T) from sendResponse: %v", st, st))
}
- return err
}
- if trInfo != nil {
- trInfo.tr.LazyLog(&payload{sent: true, msg: reply}, true)
- }
- errWrite := t.WriteStatus(stream, statusCode, statusDesc)
- if statusCode != codes.OK {
- return Errorf(statusCode, statusDesc)
- }
- return errWrite
+ return err
+ }
+ if trInfo != nil {
+ trInfo.tr.LazyLog(&payload{sent: true, msg: reply}, true)
}
+ // TODO: Should we be logging if writing status failed here, like above?
+ // Should the logging be in WriteStatus? Should we ignore the WriteStatus
+ // error or allow the stats handler to see it?
+ return t.WriteStatus(stream, status.New(codes.OK, ""))
}
func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transport.Stream, srv *service, sd *StreamDesc, trInfo *traceInfo) (err error) {
@@ -772,9 +851,7 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
BeginTime: time.Now(),
}
sh.HandleRPC(stream.Context(), begin)
- }
- defer func() {
- if sh != nil {
+ defer func() {
end := &stats.End{
EndTime: time.Now(),
}
@@ -782,21 +859,22 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
end.Error = toRPCErr(err)
}
sh.HandleRPC(stream.Context(), end)
- }
- }()
+ }()
+ }
if s.opts.cp != nil {
stream.SetSendCompress(s.opts.cp.Type())
}
ss := &serverStream{
- t: t,
- s: stream,
- p: &parser{r: stream},
- codec: s.opts.codec,
- cp: s.opts.cp,
- dc: s.opts.dc,
- maxMsgSize: s.opts.maxMsgSize,
- trInfo: trInfo,
- statsHandler: sh,
+ t: t,
+ s: stream,
+ p: &parser{r: stream},
+ codec: s.opts.codec,
+ cp: s.opts.cp,
+ dc: s.opts.dc,
+ maxReceiveMessageSize: s.opts.maxReceiveMessageSize,
+ maxSendMessageSize: s.opts.maxSendMessageSize,
+ trInfo: trInfo,
+ statsHandler: sh,
}
if ss.cp != nil {
ss.cbuf = new(bytes.Buffer)
@@ -815,43 +893,47 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
}()
}
var appErr error
+ var server interface{}
+ if srv != nil {
+ server = srv.server
+ }
if s.opts.streamInt == nil {
- appErr = sd.Handler(srv.server, ss)
+ appErr = sd.Handler(server, ss)
} else {
info := &StreamServerInfo{
FullMethod: stream.Method(),
IsClientStream: sd.ClientStreams,
IsServerStream: sd.ServerStreams,
}
- appErr = s.opts.streamInt(srv.server, ss, info, sd.Handler)
+ appErr = s.opts.streamInt(server, ss, info, sd.Handler)
}
if appErr != nil {
- if err, ok := appErr.(*rpcError); ok {
- ss.statusCode = err.code
- ss.statusDesc = err.desc
- } else if err, ok := appErr.(transport.StreamError); ok {
- ss.statusCode = err.Code
- ss.statusDesc = err.Desc
- } else {
- ss.statusCode = convertCode(appErr)
- ss.statusDesc = appErr.Error()
+ appStatus, ok := status.FromError(appErr)
+ if !ok {
+ switch err := appErr.(type) {
+ case transport.StreamError:
+ appStatus = status.New(err.Code, err.Desc)
+ default:
+ appStatus = status.New(convertCode(appErr), appErr.Error())
+ }
+ appErr = appStatus.Err()
+ }
+ if trInfo != nil {
+ ss.mu.Lock()
+ ss.trInfo.tr.LazyLog(stringer(appStatus.Message()), true)
+ ss.trInfo.tr.SetError()
+ ss.mu.Unlock()
}
+ t.WriteStatus(ss.s, appStatus)
+ // TODO: Should we log an error from WriteStatus here and below?
+ return appErr
}
if trInfo != nil {
ss.mu.Lock()
- if ss.statusCode != codes.OK {
- ss.trInfo.tr.LazyLog(stringer(ss.statusDesc), true)
- ss.trInfo.tr.SetError()
- } else {
- ss.trInfo.tr.LazyLog(stringer("OK"), false)
- }
+ ss.trInfo.tr.LazyLog(stringer("OK"), false)
ss.mu.Unlock()
}
- errWrite := t.WriteStatus(ss.s, ss.statusCode, ss.statusDesc)
- if ss.statusCode != codes.OK {
- return Errorf(ss.statusCode, ss.statusDesc)
- }
- return errWrite
+ return t.WriteStatus(ss.s, status.New(codes.OK, ""))
}
@@ -867,7 +949,7 @@ func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Str
trInfo.tr.SetError()
}
errDesc := fmt.Sprintf("malformed method name: %q", stream.Method())
- if err := t.WriteStatus(stream, codes.InvalidArgument, errDesc); err != nil {
+ if err := t.WriteStatus(stream, status.New(codes.ResourceExhausted, errDesc)); err != nil {
if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true)
trInfo.tr.SetError()
@@ -883,12 +965,16 @@ func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Str
method := sm[pos+1:]
srv, ok := s.m[service]
if !ok {
+ if unknownDesc := s.opts.unknownStreamDesc; unknownDesc != nil {
+ s.processStreamingRPC(t, stream, nil, unknownDesc, trInfo)
+ return
+ }
if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"Unknown service %v", []interface{}{service}}, true)
trInfo.tr.SetError()
}
errDesc := fmt.Sprintf("unknown service %v", service)
- if err := t.WriteStatus(stream, codes.Unimplemented, errDesc); err != nil {
+ if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil {
if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true)
trInfo.tr.SetError()
@@ -913,8 +999,12 @@ func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Str
trInfo.tr.LazyLog(&fmtStringer{"Unknown method %v", []interface{}{method}}, true)
trInfo.tr.SetError()
}
+ if unknownDesc := s.opts.unknownStreamDesc; unknownDesc != nil {
+ s.processStreamingRPC(t, stream, nil, unknownDesc, trInfo)
+ return
+ }
errDesc := fmt.Sprintf("unknown method %v", method)
- if err := t.WriteStatus(stream, codes.Unimplemented, errDesc); err != nil {
+ if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil {
if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true)
trInfo.tr.SetError()
@@ -957,8 +1047,9 @@ func (s *Server) Stop() {
s.mu.Unlock()
}
-// GracefulStop stops the gRPC server gracefully. It stops the server to accept new
-// connections and RPCs and blocks until all the pending RPCs are finished.
+// GracefulStop stops the gRPC server gracefully. It stops the server from
+// accepting new connections and RPCs and blocks until all the pending RPCs are
+// finished.
func (s *Server) GracefulStop() {
s.mu.Lock()
defer s.mu.Unlock()
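
Putting the new server options together: a sketch of a server that raises the 4MB message limits, sets the per-stream flow-control window, and installs an unknown-service handler. The handler body is a placeholder that simply reports Unimplemented; a real use, such as a generic proxy, would inspect and forward the stream instead.

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer(
		grpc.MaxRecvMsgSize(8*1024*1024), // raise the 4MB receive default
		grpc.MaxSendMsgSize(8*1024*1024), // raise the 4MB send default
		grpc.InitialWindowSize(1<<20),    // per-stream window; values below 64K are ignored
		grpc.UnknownServiceHandler(func(_ interface{}, stream grpc.ServerStream) error {
			// Called for any service or method that was never registered.
			return status.Error(codes.Unimplemented, "service or method not registered on this server")
		}),
	)
	if err := srv.Serve(lis); err != nil {
		log.Fatal(err)
	}
}
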
diff --git a/vendor/google.golang.org/grpc/stats/grpc_testing/test.pb.go b/vendor/google.golang.org/grpc/stats/grpc_testing/test.pb.go
index b24dcd8d34..5730004ab0 100644
--- a/vendor/google.golang.org/grpc/stats/grpc_testing/test.pb.go
+++ b/vendor/google.golang.org/grpc/stats/grpc_testing/test.pb.go
@@ -34,7 +34,6 @@ var _ = math.Inf
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
-// Unary request.
type SimpleRequest struct {
Id int32 `protobuf:"varint,2,opt,name=id" json:"id,omitempty"`
}
@@ -44,7 +43,13 @@ func (m *SimpleRequest) String() string { return proto.CompactTextStr
func (*SimpleRequest) ProtoMessage() {}
func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
-// Unary response, as configured by the request.
+func (m *SimpleRequest) GetId() int32 {
+ if m != nil {
+ return m.Id
+ }
+ return 0
+}
+
type SimpleResponse struct {
Id int32 `protobuf:"varint,3,opt,name=id" json:"id,omitempty"`
}
@@ -54,6 +59,13 @@ func (m *SimpleResponse) String() string { return proto.CompactTextSt
func (*SimpleResponse) ProtoMessage() {}
func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
+func (m *SimpleResponse) GetId() int32 {
+ if m != nil {
+ return m.Id
+ }
+ return 0
+}
+
func init() {
proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest")
proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse")
@@ -77,6 +89,10 @@ type TestServiceClient interface {
// As one request could lead to multiple responses, this interface
// demonstrates the idea of full duplexing.
FullDuplexCall(ctx context.Context, opts ...grpc.CallOption) (TestService_FullDuplexCallClient, error)
+ // Client stream
+ ClientStreamCall(ctx context.Context, opts ...grpc.CallOption) (TestService_ClientStreamCallClient, error)
+ // Server stream
+ ServerStreamCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (TestService_ServerStreamCallClient, error)
}
type testServiceClient struct {
@@ -127,6 +143,72 @@ func (x *testServiceFullDuplexCallClient) Recv() (*SimpleResponse, error) {
return m, nil
}
+func (c *testServiceClient) ClientStreamCall(ctx context.Context, opts ...grpc.CallOption) (TestService_ClientStreamCallClient, error) {
+ stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[1], c.cc, "/grpc.testing.TestService/ClientStreamCall", opts...)
+ if err != nil {
+ return nil, err
+ }
+ x := &testServiceClientStreamCallClient{stream}
+ return x, nil
+}
+
+type TestService_ClientStreamCallClient interface {
+ Send(*SimpleRequest) error
+ CloseAndRecv() (*SimpleResponse, error)
+ grpc.ClientStream
+}
+
+type testServiceClientStreamCallClient struct {
+ grpc.ClientStream
+}
+
+func (x *testServiceClientStreamCallClient) Send(m *SimpleRequest) error {
+ return x.ClientStream.SendMsg(m)
+}
+
+func (x *testServiceClientStreamCallClient) CloseAndRecv() (*SimpleResponse, error) {
+ if err := x.ClientStream.CloseSend(); err != nil {
+ return nil, err
+ }
+ m := new(SimpleResponse)
+ if err := x.ClientStream.RecvMsg(m); err != nil {
+ return nil, err
+ }
+ return m, nil
+}
+
+func (c *testServiceClient) ServerStreamCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (TestService_ServerStreamCallClient, error) {
+ stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[2], c.cc, "/grpc.testing.TestService/ServerStreamCall", opts...)
+ if err != nil {
+ return nil, err
+ }
+ x := &testServiceServerStreamCallClient{stream}
+ if err := x.ClientStream.SendMsg(in); err != nil {
+ return nil, err
+ }
+ if err := x.ClientStream.CloseSend(); err != nil {
+ return nil, err
+ }
+ return x, nil
+}
+
+type TestService_ServerStreamCallClient interface {
+ Recv() (*SimpleResponse, error)
+ grpc.ClientStream
+}
+
+type testServiceServerStreamCallClient struct {
+ grpc.ClientStream
+}
+
+func (x *testServiceServerStreamCallClient) Recv() (*SimpleResponse, error) {
+ m := new(SimpleResponse)
+ if err := x.ClientStream.RecvMsg(m); err != nil {
+ return nil, err
+ }
+ return m, nil
+}
+
// Server API for TestService service
type TestServiceServer interface {
@@ -137,6 +219,10 @@ type TestServiceServer interface {
// As one request could lead to multiple responses, this interface
// demonstrates the idea of full duplexing.
FullDuplexCall(TestService_FullDuplexCallServer) error
+ // Client stream
+ ClientStreamCall(TestService_ClientStreamCallServer) error
+ // Server stream
+ ServerStreamCall(*SimpleRequest, TestService_ServerStreamCallServer) error
}
func RegisterTestServiceServer(s *grpc.Server, srv TestServiceServer) {
@@ -187,6 +273,53 @@ func (x *testServiceFullDuplexCallServer) Recv() (*SimpleRequest, error) {
return m, nil
}
+func _TestService_ClientStreamCall_Handler(srv interface{}, stream grpc.ServerStream) error {
+ return srv.(TestServiceServer).ClientStreamCall(&testServiceClientStreamCallServer{stream})
+}
+
+type TestService_ClientStreamCallServer interface {
+ SendAndClose(*SimpleResponse) error
+ Recv() (*SimpleRequest, error)
+ grpc.ServerStream
+}
+
+type testServiceClientStreamCallServer struct {
+ grpc.ServerStream
+}
+
+func (x *testServiceClientStreamCallServer) SendAndClose(m *SimpleResponse) error {
+ return x.ServerStream.SendMsg(m)
+}
+
+func (x *testServiceClientStreamCallServer) Recv() (*SimpleRequest, error) {
+ m := new(SimpleRequest)
+ if err := x.ServerStream.RecvMsg(m); err != nil {
+ return nil, err
+ }
+ return m, nil
+}
+
+func _TestService_ServerStreamCall_Handler(srv interface{}, stream grpc.ServerStream) error {
+ m := new(SimpleRequest)
+ if err := stream.RecvMsg(m); err != nil {
+ return err
+ }
+ return srv.(TestServiceServer).ServerStreamCall(m, &testServiceServerStreamCallServer{stream})
+}
+
+type TestService_ServerStreamCallServer interface {
+ Send(*SimpleResponse) error
+ grpc.ServerStream
+}
+
+type testServiceServerStreamCallServer struct {
+ grpc.ServerStream
+}
+
+func (x *testServiceServerStreamCallServer) Send(m *SimpleResponse) error {
+ return x.ServerStream.SendMsg(m)
+}
+
var _TestService_serviceDesc = grpc.ServiceDesc{
ServiceName: "grpc.testing.TestService",
HandlerType: (*TestServiceServer)(nil),
@@ -203,6 +336,16 @@ var _TestService_serviceDesc = grpc.ServiceDesc{
ServerStreams: true,
ClientStreams: true,
},
+ {
+ StreamName: "ClientStreamCall",
+ Handler: _TestService_ClientStreamCall_Handler,
+ ClientStreams: true,
+ },
+ {
+ StreamName: "ServerStreamCall",
+ Handler: _TestService_ServerStreamCall_Handler,
+ ServerStreams: true,
+ },
},
Metadata: "test.proto",
}
@@ -210,16 +353,18 @@ var _TestService_serviceDesc = grpc.ServiceDesc{
func init() { proto.RegisterFile("test.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
- // 167 bytes of a gzipped FileDescriptorProto
+ // 196 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0x2a, 0x49, 0x2d, 0x2e,
0xd1, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x49, 0x2f, 0x2a, 0x48, 0xd6, 0x03, 0x09, 0x64,
0xe6, 0xa5, 0x2b, 0xc9, 0x73, 0xf1, 0x06, 0x67, 0xe6, 0x16, 0xe4, 0xa4, 0x06, 0xa5, 0x16, 0x96,
0xa6, 0x16, 0x97, 0x08, 0xf1, 0x71, 0x31, 0x65, 0xa6, 0x48, 0x30, 0x29, 0x30, 0x6a, 0xb0, 0x06,
0x31, 0x65, 0xa6, 0x28, 0x29, 0x70, 0xf1, 0xc1, 0x14, 0x14, 0x17, 0xe4, 0xe7, 0x15, 0xa7, 0x42,
- 0x55, 0x30, 0xc3, 0x54, 0x18, 0x2d, 0x63, 0xe4, 0xe2, 0x0e, 0x49, 0x2d, 0x2e, 0x09, 0x4e, 0x2d,
+ 0x55, 0x30, 0xc3, 0x54, 0x18, 0x9d, 0x60, 0xe2, 0xe2, 0x0e, 0x49, 0x2d, 0x2e, 0x09, 0x4e, 0x2d,
0x2a, 0xcb, 0x4c, 0x4e, 0x15, 0x72, 0xe3, 0xe2, 0x0c, 0xcd, 0x4b, 0x2c, 0xaa, 0x74, 0x4e, 0xcc,
0xc9, 0x11, 0x92, 0xd6, 0x43, 0xb6, 0x4e, 0x0f, 0xc5, 0x2e, 0x29, 0x19, 0xec, 0x92, 0x50, 0x7b,
0xfc, 0xb9, 0xf8, 0xdc, 0x4a, 0x73, 0x72, 0x5c, 0x4a, 0x0b, 0x72, 0x52, 0x2b, 0x28, 0x34, 0x4c,
- 0x83, 0xd1, 0x80, 0x31, 0x89, 0x0d, 0x1c, 0x00, 0xc6, 0x80, 0x00, 0x00, 0x00, 0xff, 0xff, 0x8d,
- 0x82, 0x5b, 0xdd, 0x0e, 0x01, 0x00, 0x00,
+ 0x83, 0xd1, 0x80, 0x51, 0xc8, 0x9f, 0x4b, 0xc0, 0x39, 0x27, 0x33, 0x35, 0xaf, 0x24, 0xb8, 0xa4,
+ 0x28, 0x35, 0x31, 0x97, 0x62, 0x23, 0x41, 0x06, 0x82, 0x3c, 0x9d, 0x5a, 0x44, 0x15, 0x03, 0x0d,
+ 0x18, 0x93, 0xd8, 0xc0, 0x51, 0x64, 0x0c, 0x08, 0x00, 0x00, 0xff, 0xff, 0x61, 0x49, 0x59, 0xe6,
+ 0xb0, 0x01, 0x00, 0x00,
}
diff --git a/vendor/google.golang.org/grpc/stats/handlers.go b/vendor/google.golang.org/grpc/stats/handlers.go
index 26e1a8e2f0..5fdce2f5bb 100644
--- a/vendor/google.golang.org/grpc/stats/handlers.go
+++ b/vendor/google.golang.org/grpc/stats/handlers.go
@@ -45,19 +45,22 @@ type ConnTagInfo struct {
RemoteAddr net.Addr
// LocalAddr is the local address of the corresponding connection.
LocalAddr net.Addr
- // TODO add QOS related fields.
}
// RPCTagInfo defines the relevant information needed by RPC context tagger.
type RPCTagInfo struct {
// FullMethodName is the RPC method in the format of /package.service/method.
FullMethodName string
+ // FailFast indicates if this RPC is failfast.
+ // This field is only valid on the client side; it's always false on the server side.
+ FailFast bool
}
// Handler defines the interface for the related stats handling (e.g., RPCs, connections).
type Handler interface {
// TagRPC can attach some information to the given context.
- // The returned context is used in the rest lifetime of the RPC.
+ // The context used for the rest of the lifetime of the RPC will be derived
+ // from the returned context.
TagRPC(context.Context, *RPCTagInfo) context.Context
// HandleRPC processes the RPC stats.
HandleRPC(context.Context, RPCStats)
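To show how the new FailFast tag surfaces to a handler, here is a hedged sketch of a no-op client-side stats.Handler. The connection-level TagConn/HandleConn methods and grpc.WithStatsHandler are assumed to exist outside this hunk; only RPCTagInfo, ConnTagInfo, Begin, and IsClient are taken from the diff.

package statsdemo

import (
	"log"

	"golang.org/x/net/context"

	"google.golang.org/grpc/stats"
)

// failFastLogger is a hypothetical stats.Handler that records the new
// FailFast tag on client RPCs.
type failFastLogger struct{}

func (failFastLogger) TagRPC(ctx context.Context, info *stats.RPCTagInfo) context.Context {
	log.Printf("tagging %s (failfast=%v)", info.FullMethodName, info.FailFast)
	return ctx
}

func (failFastLogger) HandleRPC(ctx context.Context, rs stats.RPCStats) {
	if b, ok := rs.(*stats.Begin); ok && b.IsClient() {
		log.Printf("RPC began at %v, failfast=%v", b.BeginTime, b.FailFast)
	}
}

// TagConn and HandleConn complete the interface; they are assumed to exist
// with these signatures and are no-ops here.
func (failFastLogger) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context {
	return ctx
}

func (failFastLogger) HandleConn(context.Context, stats.ConnStats) {}

Such a handler would typically be installed on the client with grpc.WithStatsHandler (assumed), after which TagRPC sees FailFast for every outgoing RPC.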
diff --git a/vendor/google.golang.org/grpc/stats/stats.go b/vendor/google.golang.org/grpc/stats/stats.go
index a82448a68b..6c406c7bad 100644
--- a/vendor/google.golang.org/grpc/stats/stats.go
+++ b/vendor/google.golang.org/grpc/stats/stats.go
@@ -49,7 +49,7 @@ type RPCStats interface {
}
// Begin contains stats when an RPC begins.
-// FailFast are only valid if Client is true.
+// FailFast is only valid if this Begin is from client side.
type Begin struct {
// Client is true if this Begin is from client side.
Client bool
@@ -59,7 +59,7 @@ type Begin struct {
FailFast bool
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if the stats information is from client side.
func (s *Begin) IsClient() bool { return s.Client }
func (s *Begin) isRPCStats() {}
@@ -80,19 +80,19 @@ type InPayload struct {
RecvTime time.Time
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if the stats information is from client side.
func (s *InPayload) IsClient() bool { return s.Client }
func (s *InPayload) isRPCStats() {}
// InHeader contains stats when a header is received.
-// FullMethod, addresses and Compression are only valid if Client is false.
type InHeader struct {
// Client is true if this InHeader is from client side.
Client bool
// WireLength is the wire length of header.
WireLength int
+ // The following fields are valid only if Client is false.
// FullMethod is the full RPC method string, i.e., /package.service/method.
FullMethod string
// RemoteAddr is the remote address of the corresponding connection.
@@ -103,7 +103,7 @@ type InHeader struct {
Compression string
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if the stats information is from client side.
func (s *InHeader) IsClient() bool { return s.Client }
func (s *InHeader) isRPCStats() {}
@@ -116,7 +116,7 @@ type InTrailer struct {
WireLength int
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if the stats information is from client side.
func (s *InTrailer) IsClient() bool { return s.Client }
func (s *InTrailer) isRPCStats() {}
@@ -137,19 +137,19 @@ type OutPayload struct {
SentTime time.Time
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if this stats information is from client side.
func (s *OutPayload) IsClient() bool { return s.Client }
func (s *OutPayload) isRPCStats() {}
// OutHeader contains stats when a header is sent.
-// FullMethod, addresses and Compression are only valid if Client is true.
type OutHeader struct {
// Client is true if this OutHeader is from client side.
Client bool
// WireLength is the wire length of header.
WireLength int
+ // The following fields are valid only if Client is true.
// FullMethod is the full RPC method string, i.e., /package.service/method.
FullMethod string
// RemoteAddr is the remote address of the corresponding connection.
@@ -160,7 +160,7 @@ type OutHeader struct {
Compression string
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if this stats information is from client side.
func (s *OutHeader) IsClient() bool { return s.Client }
func (s *OutHeader) isRPCStats() {}
@@ -173,7 +173,7 @@ type OutTrailer struct {
WireLength int
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if this stats information is from client side.
func (s *OutTrailer) IsClient() bool { return s.Client }
func (s *OutTrailer) isRPCStats() {}
@@ -184,7 +184,7 @@ type End struct {
Client bool
// EndTime is the time when the RPC ends.
EndTime time.Time
- // Error is the error just happened. Its type is gRPC error.
+ // Error is the error that just happened. It implements status.Status if non-nil.
Error error
}
diff --git a/vendor/google.golang.org/grpc/status/status.go b/vendor/google.golang.org/grpc/status/status.go
new file mode 100644
index 0000000000..99a4cbe511
--- /dev/null
+++ b/vendor/google.golang.org/grpc/status/status.go
@@ -0,0 +1,145 @@
+/*
+ *
+ * Copyright 2017, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+// Package status implements errors returned by gRPC. These errors are
+// serialized and transmitted on the wire between server and client, and allow
+// for additional data to be transmitted via the Details field in the status
+// proto. gRPC service handlers should return an error created by this
+// package, and gRPC clients should expect a corresponding error to be
+// returned from the RPC call.
+//
+// This package upholds the invariants that a non-nil error may not
+// contain an OK code, and an OK code must result in a nil error.
+package status
+
+import (
+ "fmt"
+
+ "github.com/golang/protobuf/proto"
+ spb "google.golang.org/genproto/googleapis/rpc/status"
+ "google.golang.org/grpc/codes"
+)
+
+// statusError is an alias of a status proto. It implements error and Status,
+// and a nil statusError should never be returned by this package.
+type statusError spb.Status
+
+func (se *statusError) Error() string {
+ p := (*spb.Status)(se)
+ return fmt.Sprintf("rpc error: code = %s desc = %s", codes.Code(p.GetCode()), p.GetMessage())
+}
+
+func (se *statusError) status() *Status {
+ return &Status{s: (*spb.Status)(se)}
+}
+
+// Status represents an RPC status code, message, and details. It is immutable
+// and should be created with New, Newf, or FromProto.
+type Status struct {
+ s *spb.Status
+}
+
+// Code returns the status code contained in s.
+func (s *Status) Code() codes.Code {
+ if s == nil || s.s == nil {
+ return codes.OK
+ }
+ return codes.Code(s.s.Code)
+}
+
+// Message returns the message contained in s.
+func (s *Status) Message() string {
+ if s == nil || s.s == nil {
+ return ""
+ }
+ return s.s.Message
+}
+
+// Proto returns s's status as an spb.Status proto message.
+func (s *Status) Proto() *spb.Status {
+ if s == nil {
+ return nil
+ }
+ return proto.Clone(s.s).(*spb.Status)
+}
+
+// Err returns an immutable error representing s; returns nil if s.Code() is
+// OK.
+func (s *Status) Err() error {
+ if s.Code() == codes.OK {
+ return nil
+ }
+ return (*statusError)(s.s)
+}
+
+// New returns a Status representing c and msg.
+func New(c codes.Code, msg string) *Status {
+ return &Status{s: &spb.Status{Code: int32(c), Message: msg}}
+}
+
+// Newf returns New(c, fmt.Sprintf(format, a...)).
+func Newf(c codes.Code, format string, a ...interface{}) *Status {
+ return New(c, fmt.Sprintf(format, a...))
+}
+
+// Error returns an error representing c and msg. If c is OK, returns nil.
+func Error(c codes.Code, msg string) error {
+ return New(c, msg).Err()
+}
+
+// Errorf returns Error(c, fmt.Sprintf(format, a...)).
+func Errorf(c codes.Code, format string, a ...interface{}) error {
+ return Error(c, fmt.Sprintf(format, a...))
+}
+
+// ErrorProto returns an error representing s. If s.Code is OK, returns nil.
+func ErrorProto(s *spb.Status) error {
+ return FromProto(s).Err()
+}
+
+// FromProto returns a Status representing s.
+func FromProto(s *spb.Status) *Status {
+ return &Status{s: proto.Clone(s).(*spb.Status)}
+}
+
+// FromError returns a Status representing err if it was produced from this
+// package, otherwise it returns nil, false.
+func FromError(err error) (s *Status, ok bool) {
+ if err == nil {
+ return &Status{s: &spb.Status{Code: int32(codes.OK)}}, true
+ }
+ if s, ok := err.(*statusError); ok {
+ return s.status(), true
+ }
+ return nil, false
+}
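Since the status package is new in this change, a brief usage sketch may help. The lookup and describe helpers are hypothetical; only Errorf, FromError, Code, and Message come from the file above.

package statusdemo

import (
	"fmt"

	"golang.org/x/net/context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// lookup is a hypothetical handler showing how errors from the status
// package are created.
func lookup(ctx context.Context, key string) (string, error) {
	if key == "" {
		// A non-nil error carrying codes.InvalidArgument.
		return "", status.Errorf(codes.InvalidArgument, "empty key %q", key)
	}
	return "value", nil
}

// describe shows how a caller recovers the *Status from an error.
func describe(err error) string {
	if s, ok := status.FromError(err); ok {
		return fmt.Sprintf("%s: %s", s.Code(), s.Message())
	}
	return "not a status error"
}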
diff --git a/vendor/google.golang.org/grpc/stream.go b/vendor/google.golang.org/grpc/stream.go
index bb468dc37e..2106d3ecb2 100644
--- a/vendor/google.golang.org/grpc/stream.go
+++ b/vendor/google.golang.org/grpc/stream.go
@@ -37,7 +37,6 @@ import (
"bytes"
"errors"
"io"
- "math"
"sync"
"time"
@@ -46,6 +45,7 @@ import (
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/transport"
)
@@ -113,17 +113,24 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
cancel context.CancelFunc
)
c := defaultCallInfo
- if mc, ok := cc.getMethodConfig(method); ok {
- c.failFast = !mc.WaitForReady
- if mc.Timeout > 0 {
- ctx, cancel = context.WithTimeout(ctx, mc.Timeout)
- }
+ mc := cc.GetMethodConfig(method)
+ if mc.WaitForReady != nil {
+ c.failFast = !*mc.WaitForReady
+ }
+
+ if mc.Timeout != nil {
+ ctx, cancel = context.WithTimeout(ctx, *mc.Timeout)
}
+
+ opts = append(cc.dopts.callOptions, opts...)
for _, o := range opts {
if err := o.before(&c); err != nil {
return nil, toRPCErr(err)
}
}
+ c.maxSendMessageSize = getMaxSize(mc.MaxReqSize, c.maxSendMessageSize, defaultClientMaxSendMessageSize)
+ c.maxReceiveMessageSize = getMaxSize(mc.MaxRespSize, c.maxReceiveMessageSize, defaultClientMaxReceiveMessageSize)
+
callHdr := &transport.CallHdr{
Host: cc.authority,
Method: method,
@@ -132,6 +139,9 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
if cc.dopts.cp != nil {
callHdr.SendCompress = cc.dopts.cp.Type()
}
+ if c.creds != nil {
+ callHdr.Creds = c.creds
+ }
var trInfo traceInfo
if EnableTracing {
trInfo.tr = trace.New("grpc.Sent."+methodFamily(method), method)
@@ -151,26 +161,27 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
}
}()
}
+ ctx = newContextWithRPCInfo(ctx)
sh := cc.dopts.copts.StatsHandler
if sh != nil {
- ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method})
+ ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method, FailFast: c.failFast})
begin := &stats.Begin{
Client: true,
BeginTime: time.Now(),
FailFast: c.failFast,
}
sh.HandleRPC(ctx, begin)
- }
- defer func() {
- if err != nil && sh != nil {
- // Only handle end stats if err != nil.
- end := &stats.End{
- Client: true,
- Error: err,
+ defer func() {
+ if err != nil {
+ // Only handle end stats if err != nil.
+ end := &stats.End{
+ Client: true,
+ Error: err,
+ }
+ sh.HandleRPC(ctx, end)
}
- sh.HandleRPC(ctx, end)
- }
- }()
+ }()
+ }
gopts := BalancerGetOptions{
BlockingWait: !c.failFast,
}
@@ -178,7 +189,7 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
t, put, err = cc.getTransport(ctx, gopts)
if err != nil {
// TODO(zhaoq): Probably revisit the error handling.
- if _, ok := err.(*rpcError); ok {
+ if _, ok := status.FromError(err); ok {
return nil, err
}
if err == errConnClosing || err == errConnUnavailable {
@@ -193,14 +204,17 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
s, err = t.NewStream(ctx, callHdr)
if err != nil {
+ if _, ok := err.(transport.ConnectionError); ok && put != nil {
+ // If the error is a connection error, the transport was sending data on
+ // the wire and we are not sure whether anything has been sent.
+ // If it is not a connection error, we are sure nothing has been sent.
+ updateRPCInfoInContext(ctx, rpcInfo{bytesSent: true, bytesReceived: false})
+ }
if put != nil {
put()
put = nil
}
- if _, ok := err.(transport.ConnectionError); ok || err == transport.ErrStreamDrain {
- if c.failFast {
- return nil, toRPCErr(err)
- }
+ if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
continue
}
return nil, toRPCErr(err)
@@ -236,14 +250,13 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
select {
case <-t.Error():
// Incur transport error, simply exit.
+ case <-cc.ctx.Done():
+ cs.finish(ErrClientConnClosing)
+ cs.closeTransportStream(ErrClientConnClosing)
case <-s.Done():
// TODO: The trace of the RPC is terminated here when there is no pending
// I/O, which is probably not the optimal solution.
- if s.StatusCode() == codes.OK {
- cs.finish(nil)
- } else {
- cs.finish(Errorf(s.StatusCode(), "%s", s.StatusDesc()))
- }
+ cs.finish(s.Status().Err())
cs.closeTransportStream(nil)
case <-s.GoAway():
cs.finish(errConnDrain)
@@ -273,9 +286,10 @@ type clientStream struct {
tracing bool // set to EnableTracing when the clientStream is created.
- mu sync.Mutex
- put func()
- closed bool
+ mu sync.Mutex
+ put func()
+ closed bool
+ finished bool
// trInfo.tr is set when the clientStream is created (if EnableTracing is true),
// and is set to nil when the clientStream's finish method is called.
trInfo traceInfo
@@ -350,7 +364,13 @@ func (cs *clientStream) SendMsg(m interface{}) (err error) {
}
}()
if err != nil {
- return Errorf(codes.Internal, "grpc: %v", err)
+ return err
+ }
+ if cs.c.maxSendMessageSize == nil {
+ return Errorf(codes.Internal, "callInfo maxSendMessageSize field uninitialized(nil)")
+ }
+ if len(out) > *cs.c.maxSendMessageSize {
+ return Errorf(codes.ResourceExhausted, "trying to send message larger than max (%d vs. %d)", len(out), *cs.c.maxSendMessageSize)
}
err = cs.t.Write(cs.s, out, &transport.Options{Last: false})
if err == nil && outPayload != nil {
@@ -361,28 +381,16 @@ func (cs *clientStream) SendMsg(m interface{}) (err error) {
}
func (cs *clientStream) RecvMsg(m interface{}) (err error) {
- defer func() {
- if err != nil && cs.statsHandler != nil {
- // Only generate End if err != nil.
- // If err == nil, it's not the last RecvMsg.
- // The last RecvMsg gets either an RPC error or io.EOF.
- end := &stats.End{
- Client: true,
- EndTime: time.Now(),
- }
- if err != io.EOF {
- end.Error = toRPCErr(err)
- }
- cs.statsHandler.HandleRPC(cs.statsCtx, end)
- }
- }()
var inPayload *stats.InPayload
if cs.statsHandler != nil {
inPayload = &stats.InPayload{
Client: true,
}
}
- err = recv(cs.p, cs.codec, cs.s, cs.dc, m, math.MaxInt32, inPayload)
+ if cs.c.maxReceiveMessageSize == nil {
+ return Errorf(codes.Internal, "callInfo maxReceiveMessageSize field uninitialized(nil)")
+ }
+ err = recv(cs.p, cs.codec, cs.s, cs.dc, m, *cs.c.maxReceiveMessageSize, inPayload)
defer func() {
// err != nil indicates the termination of the stream.
if err != nil {
@@ -405,17 +413,20 @@ func (cs *clientStream) RecvMsg(m interface{}) (err error) {
}
// Special handling for client streaming rpc.
// This recv expects EOF or errors, so we don't collect inPayload.
- err = recv(cs.p, cs.codec, cs.s, cs.dc, m, math.MaxInt32, nil)
+ if cs.c.maxReceiveMessageSize == nil {
+ return Errorf(codes.Internal, "callInfo maxReceiveMessageSize field uninitialized(nil)")
+ }
+ err = recv(cs.p, cs.codec, cs.s, cs.dc, m, *cs.c.maxReceiveMessageSize, nil)
cs.closeTransportStream(err)
if err == nil {
return toRPCErr(errors.New("grpc: client streaming protocol violation: get <nil>, want <EOF>"))
}
if err == io.EOF {
- if cs.s.StatusCode() == codes.OK {
- cs.finish(err)
- return nil
+ if se := cs.s.Status().Err(); se != nil {
+ return se
}
- return Errorf(cs.s.StatusCode(), "%s", cs.s.StatusDesc())
+ cs.finish(err)
+ return nil
}
return toRPCErr(err)
}
@@ -423,11 +434,11 @@ func (cs *clientStream) RecvMsg(m interface{}) (err error) {
cs.closeTransportStream(err)
}
if err == io.EOF {
- if cs.s.StatusCode() == codes.OK {
- // Returns io.EOF to indicate the end of the stream.
- return
+ if statusErr := cs.s.Status().Err(); statusErr != nil {
+ return statusErr
}
- return Errorf(cs.s.StatusCode(), "%s", cs.s.StatusDesc())
+ // Returns io.EOF to indicate the end of the stream.
+ return
}
return toRPCErr(err)
}
@@ -461,20 +472,39 @@ func (cs *clientStream) closeTransportStream(err error) {
}
func (cs *clientStream) finish(err error) {
+ cs.mu.Lock()
+ defer cs.mu.Unlock()
+ if cs.finished {
+ return
+ }
+ cs.finished = true
defer func() {
if cs.cancel != nil {
cs.cancel()
}
}()
- cs.mu.Lock()
- defer cs.mu.Unlock()
for _, o := range cs.opts {
o.after(&cs.c)
}
if cs.put != nil {
+ updateRPCInfoInContext(cs.s.Context(), rpcInfo{
+ bytesSent: cs.s.BytesSent(),
+ bytesReceived: cs.s.BytesReceived(),
+ })
cs.put()
cs.put = nil
}
+ if cs.statsHandler != nil {
+ end := &stats.End{
+ Client: true,
+ EndTime: time.Now(),
+ }
+ if err != io.EOF {
+ // end.Error is nil if the RPC finished successfully.
+ end.Error = toRPCErr(err)
+ }
+ cs.statsHandler.HandleRPC(cs.statsCtx, end)
+ }
if !cs.tracing {
return
}
@@ -511,17 +541,16 @@ type ServerStream interface {
// serverStream implements a server side Stream.
type serverStream struct {
- t transport.ServerTransport
- s *transport.Stream
- p *parser
- codec Codec
- cp Compressor
- dc Decompressor
- cbuf *bytes.Buffer
- maxMsgSize int
- statusCode codes.Code
- statusDesc string
- trInfo *traceInfo
+ t transport.ServerTransport
+ s *transport.Stream
+ p *parser
+ codec Codec
+ cp Compressor
+ dc Decompressor
+ cbuf *bytes.Buffer
+ maxReceiveMessageSize int
+ maxSendMessageSize int
+ trInfo *traceInfo
statsHandler stats.Handler
@@ -577,9 +606,11 @@ func (ss *serverStream) SendMsg(m interface{}) (err error) {
}
}()
if err != nil {
- err = Errorf(codes.Internal, "grpc: %v", err)
return err
}
+ if len(out) > ss.maxSendMessageSize {
+ return Errorf(codes.ResourceExhausted, "trying to send message larger than max (%d vs. %d)", len(out), ss.maxSendMessageSize)
+ }
if err := ss.t.Write(ss.s, out, &transport.Options{Last: false}); err != nil {
return toRPCErr(err)
}
@@ -609,7 +640,7 @@ func (ss *serverStream) RecvMsg(m interface{}) (err error) {
if ss.statsHandler != nil {
inPayload = &stats.InPayload{}
}
- if err := recv(ss.p, ss.codec, ss.s, ss.dc, m, ss.maxMsgSize, inPayload); err != nil {
+ if err := recv(ss.p, ss.codec, ss.s, ss.dc, m, ss.maxReceiveMessageSize, inPayload); err != nil {
if err == io.EOF {
return err
}
diff --git a/vendor/google.golang.org/grpc/transport/pre_go16.go b/vendor/google.golang.org/grpc/test/race.go
similarity index 80%
rename from vendor/google.golang.org/grpc/transport/pre_go16.go
rename to vendor/google.golang.org/grpc/test/race.go
index 33d91c17c4..2dc7cf89a6 100644
--- a/vendor/google.golang.org/grpc/transport/pre_go16.go
+++ b/vendor/google.golang.org/grpc/test/race.go
@@ -1,4 +1,4 @@
-// +build !go1.6
+// +build race
/*
* Copyright 2016, Google Inc.
@@ -32,20 +32,8 @@
*
*/
-package transport
+package test
-import (
- "net"
- "time"
-
- "golang.org/x/net/context"
-)
-
-// dialContext connects to the address on the named network.
-func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
- var dialer net.Dialer
- if deadline, ok := ctx.Deadline(); ok {
- dialer.Timeout = deadline.Sub(time.Now())
- }
- return dialer.Dial(network, address)
+func init() {
+ raceMode = true
}
diff --git a/vendor/google.golang.org/grpc/test/servertester.go b/vendor/google.golang.org/grpc/test/servertester.go
new file mode 100644
index 0000000000..ed3724ddae
--- /dev/null
+++ b/vendor/google.golang.org/grpc/test/servertester.go
@@ -0,0 +1,295 @@
+/*
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+package test
+
+import (
+ "bytes"
+ "errors"
+ "io"
+ "strings"
+ "testing"
+ "time"
+
+ "golang.org/x/net/http2"
+ "golang.org/x/net/http2/hpack"
+)
+
+// This is a subset of http2's serverTester type.
+//
+// serverTester wraps an io.ReadWriter (acting like the underlying
+// network connection) and provides utility methods to read and write
+// http2 frames.
+//
+// NOTE(bradfitz): this could eventually be exported somewhere. Others
+// have asked for it too. For now I'm still experimenting with the
+// API and don't feel like maintaining a stable testing API.
+
+type serverTester struct {
+ cc io.ReadWriteCloser // client conn
+ t testing.TB
+ fr *http2.Framer
+
+ // writing headers:
+ headerBuf bytes.Buffer
+ hpackEnc *hpack.Encoder
+
+ // reading frames:
+ frc chan http2.Frame
+ frErrc chan error
+ readTimer *time.Timer
+}
+
+func newServerTesterFromConn(t testing.TB, cc io.ReadWriteCloser) *serverTester {
+ st := &serverTester{
+ t: t,
+ cc: cc,
+ frc: make(chan http2.Frame, 1),
+ frErrc: make(chan error, 1),
+ }
+ st.hpackEnc = hpack.NewEncoder(&st.headerBuf)
+ st.fr = http2.NewFramer(cc, cc)
+ st.fr.ReadMetaHeaders = hpack.NewDecoder(4096 /*initialHeaderTableSize*/, nil)
+
+ return st
+}
+
+func (st *serverTester) readFrame() (http2.Frame, error) {
+ go func() {
+ fr, err := st.fr.ReadFrame()
+ if err != nil {
+ st.frErrc <- err
+ } else {
+ st.frc <- fr
+ }
+ }()
+ t := time.NewTimer(2 * time.Second)
+ defer t.Stop()
+ select {
+ case f := <-st.frc:
+ return f, nil
+ case err := <-st.frErrc:
+ return nil, err
+ case <-t.C:
+ return nil, errors.New("timeout waiting for frame")
+ }
+}
+
+// greet initiates the client's HTTP/2 connection into a state where
+// frames may be sent.
+func (st *serverTester) greet() {
+ st.writePreface()
+ st.writeInitialSettings()
+ st.wantSettings()
+ st.writeSettingsAck()
+ for {
+ f, err := st.readFrame()
+ if err != nil {
+ st.t.Fatal(err)
+ }
+ switch f := f.(type) {
+ case *http2.WindowUpdateFrame:
+ // grpc's transport/http2_server sends this
+ // before the settings ack. The Go http2
+ // server uses a setting instead.
+ case *http2.SettingsFrame:
+ if f.IsAck() {
+ return
+ }
+ st.t.Fatalf("during greet, got non-ACK settings frame")
+ default:
+ st.t.Fatalf("during greet, unexpected frame type %T", f)
+ }
+ }
+}
+
+func (st *serverTester) writePreface() {
+ n, err := st.cc.Write([]byte(http2.ClientPreface))
+ if err != nil {
+ st.t.Fatalf("Error writing client preface: %v", err)
+ }
+ if n != len(http2.ClientPreface) {
+ st.t.Fatalf("Writing client preface, wrote %d bytes; want %d", n, len(http2.ClientPreface))
+ }
+}
+
+func (st *serverTester) writeInitialSettings() {
+ if err := st.fr.WriteSettings(); err != nil {
+ st.t.Fatalf("Error writing initial SETTINGS frame from client to server: %v", err)
+ }
+}
+
+func (st *serverTester) writeSettingsAck() {
+ if err := st.fr.WriteSettingsAck(); err != nil {
+ st.t.Fatalf("Error writing ACK of server's SETTINGS: %v", err)
+ }
+}
+
+func (st *serverTester) wantSettings() *http2.SettingsFrame {
+ f, err := st.readFrame()
+ if err != nil {
+ st.t.Fatalf("Error while expecting a SETTINGS frame: %v", err)
+ }
+ sf, ok := f.(*http2.SettingsFrame)
+ if !ok {
+ st.t.Fatalf("got a %T; want *SettingsFrame", f)
+ }
+ return sf
+}
+
+func (st *serverTester) wantSettingsAck() {
+ f, err := st.readFrame()
+ if err != nil {
+ st.t.Fatal(err)
+ }
+ sf, ok := f.(*http2.SettingsFrame)
+ if !ok {
+ st.t.Fatalf("Wanting a settings ACK, received a %T", f)
+ }
+ if !sf.IsAck() {
+ st.t.Fatal("Settings Frame didn't have ACK set")
+ }
+}
+
+// wait for any activity from the server
+func (st *serverTester) wantAnyFrame() http2.Frame {
+ f, err := st.fr.ReadFrame()
+ if err != nil {
+ st.t.Fatal(err)
+ }
+ return f
+}
+
+func (st *serverTester) encodeHeaderField(k, v string) {
+ err := st.hpackEnc.WriteField(hpack.HeaderField{Name: k, Value: v})
+ if err != nil {
+ st.t.Fatalf("HPACK encoding error for %q/%q: %v", k, v, err)
+ }
+}
+
+// encodeHeader encodes headers and returns their HPACK bytes. headers
+// must contain an even number of key/value pairs. There may be
+// multiple pairs for keys (e.g. "cookie"). The :method, :path, and
+// :scheme headers default to GET, / and https.
+func (st *serverTester) encodeHeader(headers ...string) []byte {
+ if len(headers)%2 == 1 {
+ panic("odd number of kv args")
+ }
+
+ st.headerBuf.Reset()
+
+ if len(headers) == 0 {
+ // Fast path, mostly for benchmarks, so test code doesn't pollute
+ // profiles when we're looking to improve server allocations.
+ st.encodeHeaderField(":method", "GET")
+ st.encodeHeaderField(":path", "/")
+ st.encodeHeaderField(":scheme", "https")
+ return st.headerBuf.Bytes()
+ }
+
+ if len(headers) == 2 && headers[0] == ":method" {
+ // Another fast path for benchmarks.
+ st.encodeHeaderField(":method", headers[1])
+ st.encodeHeaderField(":path", "/")
+ st.encodeHeaderField(":scheme", "https")
+ return st.headerBuf.Bytes()
+ }
+
+ pseudoCount := map[string]int{}
+ keys := []string{":method", ":path", ":scheme"}
+ vals := map[string][]string{
+ ":method": {"GET"},
+ ":path": {"/"},
+ ":scheme": {"https"},
+ }
+ for len(headers) > 0 {
+ k, v := headers[0], headers[1]
+ headers = headers[2:]
+ if _, ok := vals[k]; !ok {
+ keys = append(keys, k)
+ }
+ if strings.HasPrefix(k, ":") {
+ pseudoCount[k]++
+ if pseudoCount[k] == 1 {
+ vals[k] = []string{v}
+ } else {
+ // Allows testing of invalid headers w/ dup pseudo fields.
+ vals[k] = append(vals[k], v)
+ }
+ } else {
+ vals[k] = append(vals[k], v)
+ }
+ }
+ for _, k := range keys {
+ for _, v := range vals[k] {
+ st.encodeHeaderField(k, v)
+ }
+ }
+ return st.headerBuf.Bytes()
+}
+
+func (st *serverTester) writeHeadersGRPC(streamID uint32, path string) {
+ st.writeHeaders(http2.HeadersFrameParam{
+ StreamID: streamID,
+ BlockFragment: st.encodeHeader(
+ ":method", "POST",
+ ":path", path,
+ "content-type", "application/grpc",
+ "te", "trailers",
+ ),
+ EndStream: false,
+ EndHeaders: true,
+ })
+}
+
+func (st *serverTester) writeHeaders(p http2.HeadersFrameParam) {
+ if err := st.fr.WriteHeaders(p); err != nil {
+ st.t.Fatalf("Error writing HEADERS: %v", err)
+ }
+}
+
+func (st *serverTester) writeData(streamID uint32, endStream bool, data []byte) {
+ if err := st.fr.WriteData(streamID, endStream, data); err != nil {
+ st.t.Fatalf("Error writing DATA: %v", err)
+ }
+}
+
+func (st *serverTester) writeRSTStream(streamID uint32, code http2.ErrCode) {
+ if err := st.fr.WriteRSTStream(streamID, code); err != nil {
+ st.t.Fatalf("Error writing RST_STREAM: %v", err)
+ }
+}
+
+func (st *serverTester) writeDataPadded(streamID uint32, endStream bool, data, padding []byte) {
+ if err := st.fr.WriteDataPadded(streamID, endStream, data, padding); err != nil {
+ st.t.Fatalf("Error writing DATA with padding: %v", err)
+ }
+}
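A hedged sketch of how these helpers fit together in a test. Serving the other end of the pipe with a gRPC server is elided and assumed; only the serverTester methods from the file above are used.

package test

import (
	"net"
	"testing"
)

// TestRawFrames is a hypothetical test skeleton; handing the server side of
// the pipe to a gRPC server is elided here.
func TestRawFrames(t *testing.T) {
	clientConn, serverConn := net.Pipe()
	_ = serverConn // the server side would be served by a gRPC server
	defer clientConn.Close()

	st := newServerTesterFromConn(t, clientConn)

	// Perform the HTTP/2 handshake, then open stream 1 for a call.
	st.greet()
	st.writeHeadersGRPC(1, "/grpc.testing.TestService/ClientStreamCall")
	st.writeData(1, true, []byte{0, 0, 0, 0, 0}) // empty, uncompressed gRPC message

	// Read whatever the server sends back (headers, data, or RST_STREAM).
	f := st.wantAnyFrame()
	t.Logf("got frame %T", f)
}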
diff --git a/vendor/google.golang.org/grpc/transport/control.go b/vendor/google.golang.org/grpc/transport/control.go
index 2586cba469..beff034c83 100644
--- a/vendor/google.golang.org/grpc/transport/control.go
+++ b/vendor/google.golang.org/grpc/transport/control.go
@@ -35,7 +35,9 @@ package transport
import (
"fmt"
+ "math"
"sync"
+ "time"
"golang.org/x/net/http2"
)
@@ -44,8 +46,20 @@ const (
// The default value of flow control window size in HTTP2 spec.
defaultWindowSize = 65535
// The initial window size for flow control.
- initialWindowSize = defaultWindowSize // for an RPC
- initialConnWindowSize = defaultWindowSize * 16 // for a connection
+ initialWindowSize = defaultWindowSize // for an RPC
+ initialConnWindowSize = defaultWindowSize * 16 // for a connection
+ infinity = time.Duration(math.MaxInt64)
+ defaultClientKeepaliveTime = infinity
+ defaultClientKeepaliveTimeout = time.Duration(20 * time.Second)
+ defaultMaxStreamsClient = 100
+ defaultMaxConnectionIdle = infinity
+ defaultMaxConnectionAge = infinity
+ defaultMaxConnectionAgeGrace = infinity
+ defaultServerKeepaliveTime = time.Duration(2 * time.Hour)
+ defaultServerKeepaliveTimeout = time.Duration(20 * time.Second)
+ defaultKeepalivePolicyMinTime = time.Duration(5 * time.Minute)
+ // max window limit set by HTTP2 Specs.
+ maxWindowSize = math.MaxInt32
)
// The following defines various control items which could flow through
@@ -54,6 +68,7 @@ const (
type windowUpdate struct {
streamID uint32
increment uint32
+ flush bool
}
func (*windowUpdate) item() {}
@@ -73,6 +88,8 @@ type resetStream struct {
func (*resetStream) item() {}
type goAway struct {
+ code http2.ErrCode
+ debugData []byte
}
func (*goAway) item() {}
@@ -153,6 +170,40 @@ type inFlow struct {
// The amount of data the application has consumed but grpc has not sent
// window update for them. Used to reduce window update frequency.
pendingUpdate uint32
+ // delta is the extra window update given by the receiver when an application
+ // is reading data larger than the inFlow limit.
+ delta uint32
+}
+
+func (f *inFlow) maybeAdjust(n uint32) uint32 {
+ if n > uint32(math.MaxInt32) {
+ n = uint32(math.MaxInt32)
+ }
+ f.mu.Lock()
+ defer f.mu.Unlock()
+ // estSenderQuota is the receiver's view of the maximum number of bytes the sender
+ // can send without a window update.
+ estSenderQuota := int32(f.limit - (f.pendingData + f.pendingUpdate))
+ // estUntransmittedData is the maximum number of bytes the sender might not have put
+ // on the wire yet. A value of 0 or less means that we have already received all or
+ // more bytes than the application is requesting to read.
+ estUntransmittedData := int32(n - f.pendingData) // Casting into int32 since it could be negative.
+ // This implies that unless we send a window update, the sender won't be able to send all the bytes
+ // for this message. Therefore we must send an update over the limit since there's an active read
+ // request from the application.
+ if estUntransmittedData > estSenderQuota {
+ // Sender's window shouldn't go beyond 2^31 - 1, as specified in the HTTP spec.
+ if f.limit+n > maxWindowSize {
+ f.delta = maxWindowSize - f.limit
+ } else {
+ // Send a window update for the whole message and not just the difference between
+ // estUntransmittedData and estSenderQuota. This will be helpful in case the message
+ // is padded; we will fall back on the currently available window (at least a 1/4th of the limit).
+ f.delta = n
+ }
+ return f.delta
+ }
+ return 0
}
// onData is invoked when some data frame is received. It updates pendingData.
@@ -160,7 +211,7 @@ func (f *inFlow) onData(n uint32) error {
f.mu.Lock()
defer f.mu.Unlock()
f.pendingData += n
- if f.pendingData+f.pendingUpdate > f.limit {
+ if f.pendingData+f.pendingUpdate > f.limit+f.delta {
return fmt.Errorf("received %d-bytes data exceeding the limit %d bytes", f.pendingData+f.pendingUpdate, f.limit)
}
return nil
@@ -175,6 +226,13 @@ func (f *inFlow) onRead(n uint32) uint32 {
return 0
}
f.pendingData -= n
+ if n > f.delta {
+ n -= f.delta
+ f.delta = 0
+ } else {
+ f.delta -= n
+ n = 0
+ }
f.pendingUpdate += n
if f.pendingUpdate >= f.limit/4 {
wu := f.pendingUpdate
@@ -184,10 +242,10 @@ func (f *inFlow) onRead(n uint32) uint32 {
return 0
}
-func (f *inFlow) resetPendingData() uint32 {
+func (f *inFlow) resetPendingUpdate() uint32 {
f.mu.Lock()
defer f.mu.Unlock()
- n := f.pendingData
- f.pendingData = 0
+ n := f.pendingUpdate
+ f.pendingUpdate = 0
return n
}
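To make the window arithmetic in maybeAdjust concrete, here is a small standalone sketch of the same computation (the package and function names are illustrative only).

package flowdemo

import "math"

// maybeAdjustDelta mirrors the arithmetic of inFlow.maybeAdjust above for a
// single call, without locking or stored state, to make the window math concrete.
func maybeAdjustDelta(limit, pendingData, pendingUpdate, n uint32) uint32 {
	estSenderQuota := int32(limit - (pendingData + pendingUpdate))
	estUntransmittedData := int32(n - pendingData)
	if estUntransmittedData > estSenderQuota {
		if limit+n > math.MaxInt32 {
			return math.MaxInt32 - limit
		}
		return n
	}
	return 0
}

With the default 64KB stream window (limit=65535), no pending data, and an application read of n=1000000, estSenderQuota is 65535 and estUntransmittedData is 1000000, so a delta of 1000000 is granted; the sender can push the whole message without waiting for ordinary window updates, and onRead later drains delta before counting reads toward pendingUpdate.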
diff --git a/vendor/google.golang.org/grpc/transport/go16.go b/vendor/google.golang.org/grpc/transport/go16.go
index ee1c46bad5..252509e15c 100644
--- a/vendor/google.golang.org/grpc/transport/go16.go
+++ b/vendor/google.golang.org/grpc/transport/go16.go
@@ -37,6 +37,8 @@ package transport
import (
"net"
+ "google.golang.org/grpc/codes"
+
"golang.org/x/net/context"
)
@@ -44,3 +46,14 @@ import (
func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
return (&net.Dialer{Cancel: ctx.Done()}).Dial(network, address)
}
+
+// ContextErr converts the error from context package into a StreamError.
+func ContextErr(err error) StreamError {
+ switch err {
+ case context.DeadlineExceeded:
+ return streamErrorf(codes.DeadlineExceeded, "%v", err)
+ case context.Canceled:
+ return streamErrorf(codes.Canceled, "%v", err)
+ }
+ return streamErrorf(codes.Internal, "Unexpected error from context packet: %v", err)
+}
diff --git a/vendor/google.golang.org/grpc/transport/go17.go b/vendor/google.golang.org/grpc/transport/go17.go
index 356f13ff19..29f8b14d95 100644
--- a/vendor/google.golang.org/grpc/transport/go17.go
+++ b/vendor/google.golang.org/grpc/transport/go17.go
@@ -35,12 +35,26 @@
package transport
import (
+ "context"
"net"
- "golang.org/x/net/context"
+ "google.golang.org/grpc/codes"
+
+ netctx "golang.org/x/net/context"
)
// dialContext connects to the address on the named network.
func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
return (&net.Dialer{}).DialContext(ctx, network, address)
}
+
+// ContextErr converts the error from context package into a StreamError.
+func ContextErr(err error) StreamError {
+ switch err {
+ case context.DeadlineExceeded, netctx.DeadlineExceeded:
+ return streamErrorf(codes.DeadlineExceeded, "%v", err)
+ case context.Canceled, netctx.Canceled:
+ return streamErrorf(codes.Canceled, "%v", err)
+ }
+ return streamErrorf(codes.Internal, "Unexpected error from context packet: %v", err)
+}
diff --git a/vendor/google.golang.org/grpc/transport/handler_server.go b/vendor/google.golang.org/grpc/transport/handler_server.go
index 10b6dc0b19..f0bae9474c 100644
--- a/vendor/google.golang.org/grpc/transport/handler_server.go
+++ b/vendor/google.golang.org/grpc/transport/handler_server.go
@@ -53,6 +53,7 @@ import (
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
+ "google.golang.org/grpc/status"
)
// NewServerHandlerTransport returns a ServerTransport handling gRPC
@@ -101,14 +102,9 @@ func NewServerHandlerTransport(w http.ResponseWriter, r *http.Request) (ServerTr
continue
}
for _, v := range vv {
- if k == "user-agent" {
- // user-agent is special. Copying logic of http_util.go.
- if i := strings.LastIndex(v, " "); i == -1 {
- // There is no application user agent string being set
- continue
- } else {
- v = v[:i]
- }
+ v, err := decodeMetadataHeader(k, v)
+ if err != nil {
+ return nil, streamErrorf(codes.InvalidArgument, "malformed binary metadata: %v", err)
}
metakv = append(metakv, k, v)
}
@@ -174,15 +170,22 @@ func (a strAddr) String() string { return string(a) }
// do runs fn in the ServeHTTP goroutine.
func (ht *serverHandlerTransport) do(fn func()) error {
+ // Avoid a panic writing to closed channel. Imperfect but maybe good enough.
select {
- case ht.writes <- fn:
- return nil
case <-ht.closedCh:
return ErrConnClosing
+ default:
+ select {
+ case ht.writes <- fn:
+ return nil
+ case <-ht.closedCh:
+ return ErrConnClosing
+ }
+
}
}
-func (ht *serverHandlerTransport) WriteStatus(s *Stream, statusCode codes.Code, statusDesc string) error {
+func (ht *serverHandlerTransport) WriteStatus(s *Stream, st *status.Status) error {
err := ht.do(func() {
ht.writeCommonHeaders(s)
@@ -192,10 +195,13 @@ func (ht *serverHandlerTransport) WriteStatus(s *Stream, statusCode codes.Code,
ht.rw.(http.Flusher).Flush()
h := ht.rw.Header()
- h.Set("Grpc-Status", fmt.Sprintf("%d", statusCode))
- if statusDesc != "" {
- h.Set("Grpc-Message", encodeGrpcMessage(statusDesc))
+ h.Set("Grpc-Status", fmt.Sprintf("%d", st.Code()))
+ if m := st.Message(); m != "" {
+ h.Set("Grpc-Message", encodeGrpcMessage(m))
}
+
+ // TODO: Support Grpc-Status-Details-Bin
+
if md := s.Trailer(); len(md) > 0 {
for k, vv := range md {
// Clients don't tolerate reading restricted headers after some non restricted ones were sent.
@@ -203,10 +209,9 @@ func (ht *serverHandlerTransport) WriteStatus(s *Stream, statusCode codes.Code,
continue
}
for _, v := range vv {
- // http2 ResponseWriter mechanism to
- // send undeclared Trailers after the
- // headers have possibly been written.
- h.Add(http2.TrailerPrefix+k, v)
+ // http2 ResponseWriter mechanism to send undeclared Trailers after
+ // the headers have possibly been written.
+ h.Add(http2.TrailerPrefix+k, encodeMetadataHeader(k, v))
}
}
}
@@ -234,6 +239,7 @@ func (ht *serverHandlerTransport) writeCommonHeaders(s *Stream) {
// and https://golang.org/pkg/net/http/#example_ResponseWriter_trailers
h.Add("Trailer", "Grpc-Status")
h.Add("Trailer", "Grpc-Message")
+ // TODO: Support Grpc-Status-Details-Bin
if s.sendCompress != "" {
h.Set("Grpc-Encoding", s.sendCompress)
@@ -260,6 +266,7 @@ func (ht *serverHandlerTransport) WriteHeader(s *Stream, md metadata.MD) error {
continue
}
for _, v := range vv {
+ v = encodeMetadataHeader(k, v)
h.Add(k, v)
}
}
@@ -300,13 +307,13 @@ func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream), trace
req := ht.req
s := &Stream{
- id: 0, // irrelevant
- windowHandler: func(int) {}, // nothing
- cancel: cancel,
- buf: newRecvBuffer(),
- st: ht,
- method: req.URL.Path,
- recvCompress: req.Header.Get("grpc-encoding"),
+ id: 0, // irrelevant
+ requestRead: func(int) {},
+ cancel: cancel,
+ buf: newRecvBuffer(),
+ st: ht,
+ method: req.URL.Path,
+ recvCompress: req.Header.Get("grpc-encoding"),
}
pr := &peer.Peer{
Addr: ht.RemoteAddr(),
@@ -314,10 +321,13 @@ func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream), trace
if req.TLS != nil {
pr.AuthInfo = credentials.TLSInfo{State: *req.TLS}
}
- ctx = metadata.NewContext(ctx, ht.headerMD)
+ ctx = metadata.NewIncomingContext(ctx, ht.headerMD)
ctx = peer.NewContext(ctx, pr)
s.ctx = newContextWithStream(ctx, s)
- s.dec = &recvBufferReader{ctx: s.ctx, recv: s.buf}
+ s.trReader = &transportReader{
+ reader: &recvBufferReader{ctx: s.ctx, recv: s.buf},
+ windowHandler: func(int) {},
+ }
// readerDone is closed when the Body.Read-ing goroutine exits.
readerDone := make(chan struct{})
diff --git a/vendor/google.golang.org/grpc/transport/http2_client.go b/vendor/google.golang.org/grpc/transport/http2_client.go
index 892f8ba675..92b5180e7f 100644
--- a/vendor/google.golang.org/grpc/transport/http2_client.go
+++ b/vendor/google.golang.org/grpc/transport/http2_client.go
@@ -35,12 +35,12 @@ package transport
import (
"bytes"
- "fmt"
"io"
"math"
"net"
"strings"
"sync"
+ "sync/atomic"
"time"
"golang.org/x/net/context"
@@ -49,9 +49,11 @@ import (
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/grpclog"
+ "google.golang.org/grpc/keepalive"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
)
// http2Client implements the ClientTransport interface with HTTP2.
@@ -80,6 +82,8 @@ type http2Client struct {
// goAway is closed to notify the upper layer (i.e., addrConn.transportMonitor)
// that the server sent GoAway on this transport.
goAway chan struct{}
+ // awakenKeepalive is used to wake up keepalive after it has gone dormant.
+ awakenKeepalive chan struct{}
framer *framer
hBuf *bytes.Buffer // the buffer for HPACK encoding
@@ -97,10 +101,19 @@ type http2Client struct {
// The scheme used: https if TLS is on, http otherwise.
scheme string
+ isSecure bool
+
creds []credentials.PerRPCCredentials
+ // Boolean to keep track of reading activity on transport.
+ // 1 is true and 0 is false.
+ activity uint32 // Accessed atomically.
+ kp keepalive.ClientParameters
+
statsHandler stats.Handler
+ initialWindowSize int32
+
mu sync.Mutex // guard the following variables
state transportState // the state of underlying connection
activeStreams map[uint32]*Stream
@@ -112,6 +125,9 @@ type http2Client struct {
goAwayID uint32
// prevGoAway ID records the Last-Stream-ID in the previous GOAway frame.
prevGoAwayID uint32
+ // goAwayReason records the http2.ErrCode and debug data received with the
+ // GoAway frame.
+ goAwayReason GoAwayReason
}
func dial(ctx context.Context, fn func(context.Context, string) (net.Conn, error), addr string) (net.Conn, error) {
@@ -157,9 +173,9 @@ func newHTTP2Client(ctx context.Context, addr TargetInfo, opts ConnectOptions) (
conn, err := dial(ctx, opts.Dialer, addr.Addr)
if err != nil {
if opts.FailOnNonTempDialError {
- return nil, connectionErrorf(isTemporary(err), err, "transport: %v", err)
+ return nil, connectionErrorf(isTemporary(err), err, "transport: error while dialing: %v", err)
}
- return nil, connectionErrorf(true, err, "transport: %v", err)
+ return nil, connectionErrorf(true, err, "transport: Error while dialing %v", err)
}
// Any further errors will close the underlying connection
defer func(conn net.Conn) {
@@ -167,7 +183,10 @@ func newHTTP2Client(ctx context.Context, addr TargetInfo, opts ConnectOptions) (
conn.Close()
}
}(conn)
- var authInfo credentials.AuthInfo
+ var (
+ isSecure bool
+ authInfo credentials.AuthInfo
+ )
if creds := opts.TransportCredentials; creds != nil {
scheme = "https"
conn, authInfo, err = creds.ClientHandshake(ctx, addr.Addr, conn)
@@ -175,43 +194,63 @@ func newHTTP2Client(ctx context.Context, addr TargetInfo, opts ConnectOptions) (
// Credentials handshake errors are typically considered permanent
// to avoid retrying on e.g. bad certificates.
temp := isTemporary(err)
- return nil, connectionErrorf(temp, err, "transport: %v", err)
+ return nil, connectionErrorf(temp, err, "transport: authentication handshake failed: %v", err)
}
+ isSecure = true
+ }
+ kp := opts.KeepaliveParams
+ // Validate keepalive parameters.
+ if kp.Time == 0 {
+ kp.Time = defaultClientKeepaliveTime
+ }
+ if kp.Timeout == 0 {
+ kp.Timeout = defaultClientKeepaliveTimeout
}
- ua := primaryUA
- if opts.UserAgent != "" {
- ua = opts.UserAgent + " " + ua
+ icwz := int32(initialConnWindowSize)
+ if opts.InitialConnWindowSize >= defaultWindowSize {
+ icwz = opts.InitialConnWindowSize
}
var buf bytes.Buffer
t := &http2Client{
ctx: ctx,
target: addr.Addr,
- userAgent: ua,
+ userAgent: opts.UserAgent,
md: addr.Metadata,
conn: conn,
remoteAddr: conn.RemoteAddr(),
localAddr: conn.LocalAddr(),
authInfo: authInfo,
// The client initiated stream id is odd starting from 1.
- nextID: 1,
- writableChan: make(chan int, 1),
- shutdownChan: make(chan struct{}),
- errorChan: make(chan struct{}),
- goAway: make(chan struct{}),
- framer: newFramer(conn),
- hBuf: &buf,
- hEnc: hpack.NewEncoder(&buf),
- controlBuf: newRecvBuffer(),
- fc: &inFlow{limit: initialConnWindowSize},
- sendQuotaPool: newQuotaPool(defaultWindowSize),
- scheme: scheme,
- state: reachable,
- activeStreams: make(map[uint32]*Stream),
- creds: opts.PerRPCCredentials,
- maxStreams: math.MaxInt32,
- streamSendQuota: defaultWindowSize,
- statsHandler: opts.StatsHandler,
- }
+ nextID: 1,
+ writableChan: make(chan int, 1),
+ shutdownChan: make(chan struct{}),
+ errorChan: make(chan struct{}),
+ goAway: make(chan struct{}),
+ awakenKeepalive: make(chan struct{}, 1),
+ framer: newFramer(conn),
+ hBuf: &buf,
+ hEnc: hpack.NewEncoder(&buf),
+ controlBuf: newRecvBuffer(),
+ fc: &inFlow{limit: uint32(icwz)},
+ sendQuotaPool: newQuotaPool(defaultWindowSize),
+ scheme: scheme,
+ state: reachable,
+ activeStreams: make(map[uint32]*Stream),
+ isSecure: isSecure,
+ creds: opts.PerRPCCredentials,
+ maxStreams: defaultMaxStreamsClient,
+ streamsQuota: newQuotaPool(defaultMaxStreamsClient),
+ streamSendQuota: defaultWindowSize,
+ kp: kp,
+ statsHandler: opts.StatsHandler,
+ initialWindowSize: initialWindowSize,
+ }
+ if opts.InitialWindowSize >= defaultWindowSize {
+ t.initialWindowSize = opts.InitialWindowSize
+ }
+ // Make sure awakenKeepalive can't be written to.
+ // The keepalive routine will make it writable, if need be.
+ t.awakenKeepalive <- struct{}{}
if t.statsHandler != nil {
t.ctx = t.statsHandler.TagConn(t.ctx, &stats.ConnTagInfo{
RemoteAddr: t.remoteAddr,
@@ -230,32 +269,35 @@ func newHTTP2Client(ctx context.Context, addr TargetInfo, opts ConnectOptions) (
n, err := t.conn.Write(clientPreface)
if err != nil {
t.Close()
- return nil, connectionErrorf(true, err, "transport: %v", err)
+ return nil, connectionErrorf(true, err, "transport: failed to write client preface: %v", err)
}
if n != len(clientPreface) {
t.Close()
return nil, connectionErrorf(true, err, "transport: preface mismatch, wrote %d bytes; want %d", n, len(clientPreface))
}
- if initialWindowSize != defaultWindowSize {
+ if t.initialWindowSize != defaultWindowSize {
err = t.framer.writeSettings(true, http2.Setting{
ID: http2.SettingInitialWindowSize,
- Val: uint32(initialWindowSize),
+ Val: uint32(t.initialWindowSize),
})
} else {
err = t.framer.writeSettings(true)
}
if err != nil {
t.Close()
- return nil, connectionErrorf(true, err, "transport: %v", err)
+ return nil, connectionErrorf(true, err, "transport: failed to write initial settings frame: %v", err)
}
// Adjust the connection flow control window if needed.
- if delta := uint32(initialConnWindowSize - defaultWindowSize); delta > 0 {
+ if delta := uint32(icwz - defaultWindowSize); delta > 0 {
if err := t.framer.writeWindowUpdate(true, 0, delta); err != nil {
t.Close()
- return nil, connectionErrorf(true, err, "transport: %v", err)
+ return nil, connectionErrorf(true, err, "transport: failed to write window update: %v", err)
}
}
go t.controller()
+ if t.kp.Time != infinity {
+ go t.keepalive()
+ }
t.writableChan <- 0
return t, nil
}
@@ -269,27 +311,33 @@ func (t *http2Client) newStream(ctx context.Context, callHdr *CallHdr) *Stream {
method: callHdr.Method,
sendCompress: callHdr.SendCompress,
buf: newRecvBuffer(),
- fc: &inFlow{limit: initialWindowSize},
+ fc: &inFlow{limit: uint32(t.initialWindowSize)},
sendQuotaPool: newQuotaPool(int(t.streamSendQuota)),
headerChan: make(chan struct{}),
}
t.nextID += 2
- s.windowHandler = func(n int) {
- t.updateWindow(s, uint32(n))
+ s.requestRead = func(n int) {
+ t.adjustWindow(s, uint32(n))
}
// The client side stream context should have exactly the same life cycle with the user provided context.
// That means, s.ctx should be read-only. And s.ctx is done iff ctx is done.
// So we use the original context here instead of creating a copy.
s.ctx = ctx
- s.dec = &recvBufferReader{
- ctx: s.ctx,
- goAway: s.goAway,
- recv: s.buf,
+ s.trReader = &transportReader{
+ reader: &recvBufferReader{
+ ctx: s.ctx,
+ goAway: s.goAway,
+ recv: s.buf,
+ },
+ windowHandler: func(n int) {
+ t.updateWindow(s, uint32(n))
+ },
}
+
return s
}
-// NewStream creates a stream and register it into the transport as "active"
+// NewStream creates a stream and registers it into the transport as "active"
// streams.
func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Stream, err error) {
pr := &peer.Peer{
@@ -299,10 +347,13 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
if t.authInfo != nil {
pr.AuthInfo = t.authInfo
}
- userCtx := ctx
ctx = peer.NewContext(ctx, pr)
- authData := make(map[string]string)
- for _, c := range t.creds {
+ var (
+ authData = make(map[string]string)
+ audience string
+ )
+ // Create an audience string only if needed.
+ if len(t.creds) > 0 || callHdr.Creds != nil {
// Construct URI required to get auth request metadata.
var port string
if pos := strings.LastIndex(t.target, ":"); pos != -1 {
@@ -313,17 +364,39 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
}
pos := strings.LastIndex(callHdr.Method, "/")
if pos == -1 {
- return nil, streamErrorf(codes.InvalidArgument, "transport: malformed method name: %q", callHdr.Method)
+ pos = len(callHdr.Method)
}
- audience := "https://" + callHdr.Host + port + callHdr.Method[:pos]
+ audience = "https://" + callHdr.Host + port + callHdr.Method[:pos]
+ }
+ for _, c := range t.creds {
data, err := c.GetRequestMetadata(ctx, audience)
if err != nil {
- return nil, streamErrorf(codes.InvalidArgument, "transport: %v", err)
+ return nil, streamErrorf(codes.Internal, "transport: %v", err)
}
for k, v := range data {
+ // Capital header names are illegal in HTTP/2.
+ k = strings.ToLower(k)
authData[k] = v
}
}
+ callAuthData := make(map[string]string)
+ // Check if credentials.PerRPCCredentials were provided via call options.
+ // Note: if these credentials are provided both via dial options and call
+ // options, then both sets of credentials will be applied.
+ if callCreds := callHdr.Creds; callCreds != nil {
+ if !t.isSecure && callCreds.RequireTransportSecurity() {
+ return nil, streamErrorf(codes.Unauthenticated, "transport: cannot send secure credentials on an insecure connection")
+ }
+ data, err := callCreds.GetRequestMetadata(ctx, audience)
+ if err != nil {
+ return nil, streamErrorf(codes.Internal, "transport: %v", err)
+ }
+ for k, v := range data {
+ // Capital header names are illegal in HTTP/2
+ k = strings.ToLower(k)
+ callAuthData[k] = v
+ }
+ }
t.mu.Lock()
if t.activeStreams == nil {
t.mu.Unlock()
@@ -337,21 +410,18 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
t.mu.Unlock()
return nil, ErrConnClosing
}
- checkStreamsQuota := t.streamsQuota != nil
t.mu.Unlock()
- if checkStreamsQuota {
- sq, err := wait(ctx, nil, nil, t.shutdownChan, t.streamsQuota.acquire())
- if err != nil {
- return nil, err
- }
- // Returns the quota balance back.
- if sq > 1 {
- t.streamsQuota.add(sq - 1)
- }
+ sq, err := wait(ctx, nil, nil, t.shutdownChan, t.streamsQuota.acquire())
+ if err != nil {
+ return nil, err
+ }
+ // Returns the quota balance back.
+ if sq > 1 {
+ t.streamsQuota.add(sq - 1)
}
if _, err := wait(ctx, nil, nil, t.shutdownChan, t.writableChan); err != nil {
// Return the quota back now because there is no stream returned to the caller.
- if _, ok := err.(StreamError); ok && checkStreamsQuota {
+ if _, ok := err.(StreamError); ok {
t.streamsQuota.add(1)
}
return nil, err
@@ -359,9 +429,7 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
t.mu.Lock()
if t.state == draining {
t.mu.Unlock()
- if checkStreamsQuota {
- t.streamsQuota.add(1)
- }
+ t.streamsQuota.add(1)
// Need to make t writable again so that the rpc in flight can still proceed.
t.writableChan <- 0
return nil, ErrStreamDrain
@@ -371,19 +439,18 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
return nil, ErrConnClosing
}
s := t.newStream(ctx, callHdr)
- s.clientStatsCtx = userCtx
t.activeStreams[s.id] = s
-
- // This stream is not counted when applySetings(...) initialize t.streamsQuota.
- // Reset t.streamsQuota to the right value.
- var reset bool
- if !checkStreamsQuota && t.streamsQuota != nil {
- reset = true
+ // If the number of active streams changes from 0 to 1, then check if keepalive
+ // has gone dormant. If so, wake it up.
+ if len(t.activeStreams) == 1 {
+ select {
+ case t.awakenKeepalive <- struct{}{}:
+ t.framer.writePing(false, false, [8]byte{})
+ default:
+ }
}
+
t.mu.Unlock()
- if reset {
- t.streamsQuota.add(-1)
- }
// HPACK encodes various headers. Note that once WriteField(...) is
// called, the corresponding headers/continuation frame has to be sent
@@ -407,33 +474,34 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
}
for k, v := range authData {
- // Capital header names are illegal in HTTP/2.
- k = strings.ToLower(k)
- t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: v})
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
+ }
+ for k, v := range callAuthData {
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
}
var (
hasMD bool
endHeaders bool
)
- if md, ok := metadata.FromContext(ctx); ok {
+ if md, ok := metadata.FromOutgoingContext(ctx); ok {
hasMD = true
- for k, v := range md {
+ for k, vv := range md {
// HTTP doesn't allow you to set pseudoheaders after non pseudoheaders were set.
if isReservedHeader(k) {
continue
}
- for _, entry := range v {
- t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
+ for _, v := range vv {
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
}
}
}
if md, ok := t.md.(*metadata.MD); ok {
- for k, v := range *md {
+ for k, vv := range *md {
if isReservedHeader(k) {
continue
}
- for _, entry := range v {
- t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
+ for _, v := range vv {
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
}
}
}
@@ -473,6 +541,8 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
return nil, connectionErrorf(true, err, "transport: %v", err)
}
}
+ s.bytesSent = true
+
if t.statsHandler != nil {
outHeader := &stats.OutHeader{
Client: true,
@@ -482,7 +552,7 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
LocalAddr: t.localAddr,
Compression: callHdr.SendCompress,
}
- t.statsHandler.HandleRPC(s.clientStatsCtx, outHeader)
+ t.statsHandler.HandleRPC(s.ctx, outHeader)
}
t.writableChan <- 0
return s, nil
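The per-call credentials consumed at the top of NewStream (callHdr.Creds) are supplied by the application as a call option. A minimal sketch of a PerRPCCredentials implementation, assuming the grpc.PerRPCCredentials call option exposed by the grpc package at this revision and the golang.org/x/net/context import; tokenCreds and the token value are purely illustrative:

    // tokenCreds is a hypothetical credentials.PerRPCCredentials implementation.
    type tokenCreds struct{ token string }

    func (c tokenCreds) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
        // The transport lower-cases these keys before writing them as HTTP/2 headers.
        return map[string]string{"authorization": "Bearer " + c.token}, nil
    }

    func (c tokenCreds) RequireTransportSecurity() bool { return true }

    // Illustrative usage on a generated client:
    //   resp, err := client.SomeMethod(ctx, req, grpc.PerRPCCredentials(tokenCreds{token: "..."}))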
@@ -491,15 +561,11 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
// CloseStream clears the footprint of a stream when the stream is not needed any more.
// This must not be executed in reader's goroutine.
func (t *http2Client) CloseStream(s *Stream, err error) {
- var updateStreams bool
t.mu.Lock()
if t.activeStreams == nil {
t.mu.Unlock()
return
}
- if t.streamsQuota != nil {
- updateStreams = true
- }
delete(t.activeStreams, s.id)
if t.state == draining && len(t.activeStreams) == 0 {
// The transport is draining and s is the last live stream on t.
@@ -508,15 +574,27 @@ func (t *http2Client) CloseStream(s *Stream, err error) {
return
}
t.mu.Unlock()
- if updateStreams {
- t.streamsQuota.add(1)
- }
- s.mu.Lock()
- if q := s.fc.resetPendingData(); q > 0 {
- if n := t.fc.onRead(q); n > 0 {
- t.controlBuf.put(&windowUpdate{0, n})
+ // rstStream is true when the stream is being closed at the client side
+ // and the server needs to be informed about it by sending a RST_STREAM
+ // frame.
+ // To make sure this frame is written to the wire before the headers of the
+ // next stream waiting for streamsQuota, we add to the streamsQuota pool only
+ // after having acquired the writableChan to send RST_STREAM out (look at
+ // the controller() routine).
+ var rstStream bool
+ var rstError http2.ErrCode
+ defer func() {
+ // If the client doesn't have to send RST_STREAM to the server,
+ // we can safely add back to the streamsQuota pool now.
+ if !rstStream {
+ t.streamsQuota.add(1)
+ return
}
- }
+ t.controlBuf.put(&resetStream{s.id, rstError})
+ }()
+ s.mu.Lock()
+ rstStream = s.rstStream
+ rstError = s.rstError
if s.state == streamDone {
s.mu.Unlock()
return
@@ -527,8 +605,9 @@ func (t *http2Client) CloseStream(s *Stream, err error) {
}
s.state = streamDone
s.mu.Unlock()
- if se, ok := err.(StreamError); ok && se.Code != codes.DeadlineExceeded {
- t.controlBuf.put(&resetStream{s.id, http2.ErrCodeCancel})
+ if _, ok := err.(StreamError); ok {
+ rstStream = true
+ rstError = http2.ErrCodeCancel
}
}
@@ -724,6 +803,24 @@ func (t *http2Client) getStream(f http2.Frame) (*Stream, bool) {
return s, ok
}
+// adjustWindow sends out an extra window update over the initial window size
+// of the stream if the application is requesting data larger in size than
+// the window.
+func (t *http2Client) adjustWindow(s *Stream, n uint32) {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ if s.state == streamDone {
+ return
+ }
+ if w := s.fc.maybeAdjust(n); w > 0 {
+ // Piggyback the connection's window update along.
+ if cw := t.fc.resetPendingUpdate(); cw > 0 {
+ t.controlBuf.put(&windowUpdate{0, cw, false})
+ }
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
+ }
+}
+
// updateWindow adjusts the inbound quota for the stream and the transport.
// Window updates will deliver to the controller for sending when
// the cumulative quota exceeds the corresponding threshold.
@@ -733,55 +830,64 @@ func (t *http2Client) updateWindow(s *Stream, n uint32) {
if s.state == streamDone {
return
}
- if w := t.fc.onRead(n); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
if w := s.fc.onRead(n); w > 0 {
- t.controlBuf.put(&windowUpdate{s.id, w})
+ if cw := t.fc.resetPendingUpdate(); cw > 0 {
+ t.controlBuf.put(&windowUpdate{0, cw, false})
+ }
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
}
}
func (t *http2Client) handleData(f *http2.DataFrame) {
- size := len(f.Data())
+ size := f.Header().Length
if err := t.fc.onData(uint32(size)); err != nil {
t.notifyError(connectionErrorf(true, err, "%v", err))
return
}
+ // Decouple the connection's flow control from the application's read.
+ // An update on the connection's flow control should not depend on
+ // whether the user application has read the data or not. Such a
+ // restriction is already imposed on the stream's flow control,
+ // and therefore the sender will be blocked anyway.
+ // Decoupling the connection flow control will prevent other
+ // active (fast) streams from starving in the presence of slow or
+ // inactive streams.
+ if w := t.fc.onRead(uint32(size)); w > 0 {
+ t.controlBuf.put(&windowUpdate{0, w, true})
+ }
// Select the right stream to dispatch.
s, ok := t.getStream(f)
if !ok {
- if w := t.fc.onRead(uint32(size)); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
return
}
if size > 0 {
s.mu.Lock()
if s.state == streamDone {
s.mu.Unlock()
- // The stream has been closed. Release the corresponding quota.
- if w := t.fc.onRead(uint32(size)); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
return
}
if err := s.fc.onData(uint32(size)); err != nil {
- s.state = streamDone
- s.statusCode = codes.Internal
- s.statusDesc = err.Error()
- close(s.done)
+ s.rstStream = true
+ s.rstError = http2.ErrCodeFlowControl
+ s.finish(status.New(codes.Internal, err.Error()))
s.mu.Unlock()
s.write(recvMsg{err: io.EOF})
- t.controlBuf.put(&resetStream{s.id, http2.ErrCodeFlowControl})
return
}
+ if f.Header().Flags.Has(http2.FlagDataPadded) {
+ if w := s.fc.onRead(uint32(size) - uint32(len(f.Data()))); w > 0 {
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
+ }
+ }
s.mu.Unlock()
// TODO(bradfitz, zhaoq): A copy is required here because there is no
// guarantee f.Data() is consumed before the arrival of next frame.
// Can this copy be eliminated?
- data := make([]byte, size)
- copy(data, f.Data())
- s.write(recvMsg{data: data})
+ if len(f.Data()) > 0 {
+ data := make([]byte, len(f.Data()))
+ copy(data, f.Data())
+ s.write(recvMsg{data: data})
+ }
}
// The server has closed the stream without sending trailers. Record that
// the read direction is closed, and set the status appropriately.
@@ -791,10 +897,7 @@ func (t *http2Client) handleData(f *http2.DataFrame) {
s.mu.Unlock()
return
}
- s.state = streamDone
- s.statusCode = codes.Internal
- s.statusDesc = "server closed the stream without sending trailers"
- close(s.done)
+ s.finish(status.New(codes.Internal, "server closed the stream without sending trailers"))
s.mu.Unlock()
s.write(recvMsg{err: io.EOF})
}
@@ -810,18 +913,16 @@ func (t *http2Client) handleRSTStream(f *http2.RSTStreamFrame) {
s.mu.Unlock()
return
}
- s.state = streamDone
if !s.headerDone {
close(s.headerChan)
s.headerDone = true
}
- s.statusCode, ok = http2ErrConvTab[http2.ErrCode(f.ErrCode)]
+ statusCode, ok := http2ErrConvTab[http2.ErrCode(f.ErrCode)]
if !ok {
grpclog.Println("transport: http2Client.handleRSTStream found no mapped gRPC status for the received http2 error ", f.ErrCode)
- s.statusCode = codes.Unknown
+ statusCode = codes.Unknown
}
- s.statusDesc = fmt.Sprintf("stream terminated by RST_STREAM with error code: %d", f.ErrCode)
- close(s.done)
+ s.finish(status.Newf(statusCode, "stream terminated by RST_STREAM with error code: %d", f.ErrCode))
s.mu.Unlock()
s.write(recvMsg{err: io.EOF})
}
@@ -849,6 +950,9 @@ func (t *http2Client) handlePing(f *http2.PingFrame) {
}
func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) {
+ if f.ErrCode == http2.ErrCodeEnhanceYourCalm {
+ grpclog.Printf("Client received GoAway with http2.ErrCodeEnhanceYourCalm.")
+ }
t.mu.Lock()
if t.state == reachable || t.state == draining {
if f.LastStreamID > 0 && f.LastStreamID%2 != 1 {
@@ -870,6 +974,7 @@ func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) {
t.mu.Unlock()
return
default:
+ t.setGoAwayReason(f)
}
t.goAwayID = f.LastStreamID
close(t.goAway)
@@ -877,6 +982,26 @@ func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) {
t.mu.Unlock()
}
+// setGoAwayReason sets the value of t.goAwayReason based
+// on the GoAway frame received.
+// It expects a lock on the transport's mutex to be held by
+// the caller.
+func (t *http2Client) setGoAwayReason(f *http2.GoAwayFrame) {
+ t.goAwayReason = NoReason
+ switch f.ErrCode {
+ case http2.ErrCodeEnhanceYourCalm:
+ if string(f.DebugData()) == "too_many_pings" {
+ t.goAwayReason = TooManyPings
+ }
+ }
+}
+
+func (t *http2Client) GetGoAwayReason() GoAwayReason {
+ t.mu.Lock()
+ defer t.mu.Unlock()
+ return t.goAwayReason
+}
+
func (t *http2Client) handleWindowUpdate(f *http2.WindowUpdateFrame) {
id := f.Header().StreamID
incr := f.Increment
@@ -895,18 +1020,16 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) {
if !ok {
return
}
+ s.bytesReceived = true
var state decodeState
- for _, hf := range frame.Fields {
- state.processHeaderField(hf)
- }
- if state.err != nil {
+ if err := state.decodeResponseHeader(frame); err != nil {
s.mu.Lock()
if !s.headerDone {
close(s.headerChan)
s.headerDone = true
}
s.mu.Unlock()
- s.write(recvMsg{err: state.err})
+ s.write(recvMsg{err: err})
// Something wrong. Stops reading even when there is remaining.
return
}
@@ -920,13 +1043,13 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) {
Client: true,
WireLength: int(frame.Header().Length),
}
- t.statsHandler.HandleRPC(s.clientStatsCtx, inHeader)
+ t.statsHandler.HandleRPC(s.ctx, inHeader)
} else {
inTrailer := &stats.InTrailer{
Client: true,
WireLength: int(frame.Header().Length),
}
- t.statsHandler.HandleRPC(s.clientStatsCtx, inTrailer)
+ t.statsHandler.HandleRPC(s.ctx, inTrailer)
}
}
}()
@@ -951,10 +1074,7 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) {
if len(state.mdata) > 0 {
s.trailer = state.mdata
}
- s.statusCode = state.statusCode
- s.statusDesc = state.statusDesc
- close(s.done)
- s.state = streamDone
+ s.finish(state.status())
s.mu.Unlock()
s.write(recvMsg{err: io.EOF})
}
@@ -982,6 +1102,7 @@ func (t *http2Client) reader() {
t.notifyError(err)
return
}
+ atomic.CompareAndSwapUint32(&t.activity, 0, 1)
sf, ok := frame.(*http2.SettingsFrame)
if !ok {
t.notifyError(err)
@@ -992,6 +1113,7 @@ func (t *http2Client) reader() {
// loop to keep reading incoming messages on this transport.
for {
frame, err := t.framer.readFrame()
+ atomic.CompareAndSwapUint32(&t.activity, 0, 1)
if err != nil {
// Abort an active stream if the http2.Framer returns a
// http2.StreamError. This can happen only if the server's response
@@ -1043,21 +1165,15 @@ func (t *http2Client) applySettings(ss []http2.Setting) {
s.Val = math.MaxInt32
}
t.mu.Lock()
- reset := t.streamsQuota != nil
- if !reset {
- t.streamsQuota = newQuotaPool(int(s.Val) - len(t.activeStreams))
- }
ms := t.maxStreams
t.maxStreams = int(s.Val)
t.mu.Unlock()
- if reset {
- t.streamsQuota.add(int(s.Val) - ms)
- }
+ t.streamsQuota.add(int(s.Val) - ms)
case http2.SettingInitialWindowSize:
t.mu.Lock()
for _, stream := range t.activeStreams {
// Adjust the sending quota for each stream.
- stream.sendQuotaPool.add(int(s.Val - t.streamSendQuota))
+ stream.sendQuotaPool.add(int(s.Val) - int(t.streamSendQuota))
}
t.streamSendQuota = s.Val
t.mu.Unlock()
@@ -1076,7 +1192,7 @@ func (t *http2Client) controller() {
case <-t.writableChan:
switch i := i.(type) {
case *windowUpdate:
- t.framer.writeWindowUpdate(true, i.streamID, i.increment)
+ t.framer.writeWindowUpdate(i.flush, i.streamID, i.increment)
case *settings:
if i.ack {
t.framer.writeSettingsAck(true)
@@ -1085,6 +1201,12 @@ func (t *http2Client) controller() {
t.framer.writeSettings(true, i.ss...)
}
case *resetStream:
+ // If the server needs to be informed about stream closing,
+ // then we need to make sure the RST_STREAM frame is written to
+ // the wire before the headers of the next stream waiting on
+ // streamsQuota. We ensure this by adding to the streamsQuota pool
+ // only after having acquired the writableChan to send RST_STREAM.
+ t.streamsQuota.add(1)
t.framer.writeRSTStream(true, i.streamID, i.code)
case *flushIO:
t.framer.flushWrite()
@@ -1104,6 +1226,61 @@ func (t *http2Client) controller() {
}
}
+// keepalive running in a separate goroutine makes sure the connection is alive by sending pings.
+func (t *http2Client) keepalive() {
+ p := &ping{data: [8]byte{}}
+ timer := time.NewTimer(t.kp.Time)
+ for {
+ select {
+ case <-timer.C:
+ if atomic.CompareAndSwapUint32(&t.activity, 1, 0) {
+ timer.Reset(t.kp.Time)
+ continue
+ }
+ // Check if keepalive should go dormant.
+ t.mu.Lock()
+ if len(t.activeStreams) < 1 && !t.kp.PermitWithoutStream {
+ // Make awakenKeepalive writable.
+ <-t.awakenKeepalive
+ t.mu.Unlock()
+ select {
+ case <-t.awakenKeepalive:
+ // If control gets here, a ping has been sent, so we
+ // need to reset the timer with keepalive.Timeout.
+ case <-t.shutdownChan:
+ return
+ }
+ } else {
+ t.mu.Unlock()
+ // Send ping.
+ t.controlBuf.put(p)
+ }
+
+ // By the time control gets here a ping has been sent one way or the other.
+ timer.Reset(t.kp.Timeout)
+ select {
+ case <-timer.C:
+ if atomic.CompareAndSwapUint32(&t.activity, 1, 0) {
+ timer.Reset(t.kp.Time)
+ continue
+ }
+ t.Close()
+ return
+ case <-t.shutdownChan:
+ if !timer.Stop() {
+ <-timer.C
+ }
+ return
+ }
+ case <-t.shutdownChan:
+ if !timer.Stop() {
+ <-timer.C
+ }
+ return
+ }
+ }
+}
+
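For context, the client keepalive loop above is configured through keepalive.ClientParameters passed at dial time. A minimal sketch, assuming a plaintext server on localhost:50051 and the grpc.WithKeepaliveParams dial option; the durations are illustrative:

    package main

    import (
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/keepalive"
    )

    func main() {
        kp := keepalive.ClientParameters{
            Time:                2 * time.Minute,  // ping after 2 minutes without activity
            Timeout:             20 * time.Second, // close the connection if the ping is not acknowledged
            PermitWithoutStream: false,            // with no active streams, keepalive goes dormant
        }
        conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure(), grpc.WithKeepaliveParams(kp))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
    }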
func (t *http2Client) Error() <-chan struct{} {
return t.errorChan
}
diff --git a/vendor/google.golang.org/grpc/transport/http2_server.go b/vendor/google.golang.org/grpc/transport/http2_server.go
index a095dd0e07..3a18f30798 100644
--- a/vendor/google.golang.org/grpc/transport/http2_server.go
+++ b/vendor/google.golang.org/grpc/transport/http2_server.go
@@ -38,19 +38,25 @@ import (
"errors"
"io"
"math"
+ "math/rand"
"net"
"strconv"
"sync"
+ "sync/atomic"
+ "time"
+ "github.com/golang/protobuf/proto"
"golang.org/x/net/context"
"golang.org/x/net/http2"
"golang.org/x/net/http2/hpack"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/grpclog"
+ "google.golang.org/grpc/keepalive"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/tap"
)
@@ -90,11 +96,35 @@ type http2Server struct {
stats stats.Handler
+ // Flag to keep track of reading activity on transport.
+ // 1 is true and 0 is false.
+ activity uint32 // Accessed atomically.
+ // Keepalive and max-age parameters for the server.
+ kp keepalive.ServerParameters
+
+ // Keepalive enforcement policy.
+ kep keepalive.EnforcementPolicy
+ // The time instance when the last ping was received.
+ lastPingAt time.Time
+ // Number of times the client has violated keepalive ping policy so far.
+ pingStrikes uint8
+ // Flag to signify that number of ping strikes should be reset to 0.
+ // This is set whenever data or header frames are sent.
+ // 1 means yes.
+ resetPingStrikes uint32 // Accessed atomically.
+
+ initialWindowSize int32
+
mu sync.Mutex // guard the following
state transportState
activeStreams map[uint32]*Stream
// the per-stream outbound flow control window size set by the peer.
streamSendQuota uint32
+ // idle is the time instant when the connection went idle.
+ // This is either the beginning of the connection or when the number of
+ // RPCs goes down to 0.
+ // When the connection is busy, this value is set to 0.
+ idle time.Time
}
// newHTTP2Server constructs a ServerTransport based on HTTP2. ConnectionError is
@@ -114,41 +144,75 @@ func newHTTP2Server(conn net.Conn, config *ServerConfig) (_ ServerTransport, err
Val: maxStreams,
})
}
- if initialWindowSize != defaultWindowSize {
+ iwz := int32(initialWindowSize)
+ if config.InitialWindowSize >= defaultWindowSize {
+ iwz = config.InitialWindowSize
+ }
+ icwz := int32(initialConnWindowSize)
+ if config.InitialConnWindowSize >= defaultWindowSize {
+ icwz = config.InitialConnWindowSize
+ }
+ if iwz != defaultWindowSize {
settings = append(settings, http2.Setting{
ID: http2.SettingInitialWindowSize,
- Val: uint32(initialWindowSize)})
+ Val: uint32(iwz)})
}
if err := framer.writeSettings(true, settings...); err != nil {
return nil, connectionErrorf(true, err, "transport: %v", err)
}
// Adjust the connection flow control window if needed.
- if delta := uint32(initialConnWindowSize - defaultWindowSize); delta > 0 {
+ if delta := uint32(icwz - defaultWindowSize); delta > 0 {
if err := framer.writeWindowUpdate(true, 0, delta); err != nil {
return nil, connectionErrorf(true, err, "transport: %v", err)
}
}
+ kp := config.KeepaliveParams
+ if kp.MaxConnectionIdle == 0 {
+ kp.MaxConnectionIdle = defaultMaxConnectionIdle
+ }
+ if kp.MaxConnectionAge == 0 {
+ kp.MaxConnectionAge = defaultMaxConnectionAge
+ }
+ // Add a jitter to MaxConnectionAge.
+ kp.MaxConnectionAge += getJitter(kp.MaxConnectionAge)
+ if kp.MaxConnectionAgeGrace == 0 {
+ kp.MaxConnectionAgeGrace = defaultMaxConnectionAgeGrace
+ }
+ if kp.Time == 0 {
+ kp.Time = defaultServerKeepaliveTime
+ }
+ if kp.Timeout == 0 {
+ kp.Timeout = defaultServerKeepaliveTimeout
+ }
+ kep := config.KeepalivePolicy
+ if kep.MinTime == 0 {
+ kep.MinTime = defaultKeepalivePolicyMinTime
+ }
var buf bytes.Buffer
t := &http2Server{
- ctx: context.Background(),
- conn: conn,
- remoteAddr: conn.RemoteAddr(),
- localAddr: conn.LocalAddr(),
- authInfo: config.AuthInfo,
- framer: framer,
- hBuf: &buf,
- hEnc: hpack.NewEncoder(&buf),
- maxStreams: maxStreams,
- inTapHandle: config.InTapHandle,
- controlBuf: newRecvBuffer(),
- fc: &inFlow{limit: initialConnWindowSize},
- sendQuotaPool: newQuotaPool(defaultWindowSize),
- state: reachable,
- writableChan: make(chan int, 1),
- shutdownChan: make(chan struct{}),
- activeStreams: make(map[uint32]*Stream),
- streamSendQuota: defaultWindowSize,
- stats: config.StatsHandler,
+ ctx: context.Background(),
+ conn: conn,
+ remoteAddr: conn.RemoteAddr(),
+ localAddr: conn.LocalAddr(),
+ authInfo: config.AuthInfo,
+ framer: framer,
+ hBuf: &buf,
+ hEnc: hpack.NewEncoder(&buf),
+ maxStreams: maxStreams,
+ inTapHandle: config.InTapHandle,
+ controlBuf: newRecvBuffer(),
+ fc: &inFlow{limit: uint32(icwz)},
+ sendQuotaPool: newQuotaPool(defaultWindowSize),
+ state: reachable,
+ writableChan: make(chan int, 1),
+ shutdownChan: make(chan struct{}),
+ activeStreams: make(map[uint32]*Stream),
+ streamSendQuota: defaultWindowSize,
+ stats: config.StatsHandler,
+ kp: kp,
+ idle: time.Now(),
+ kep: kep,
+ initialWindowSize: iwz,
}
if t.stats != nil {
t.ctx = t.stats.TagConn(t.ctx, &stats.ConnTagInfo{
@@ -159,6 +223,7 @@ func newHTTP2Server(conn net.Conn, config *ServerConfig) (_ ServerTransport, err
t.stats.HandleConn(t.ctx, connBegin)
}
go t.controller()
+ go t.keepalive()
t.writableChan <- 0
return t, nil
}
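The defaults filled in above (MaxConnectionIdle, MaxConnectionAge with jitter, keepalive Time/Timeout, and the enforcement MinTime) come from keepalive.ServerParameters and keepalive.EnforcementPolicy. A sketch of how a server would override them, assuming the grpc.KeepaliveParams and grpc.KeepaliveEnforcementPolicy server options in the vendored grpc package; the durations are illustrative:

    package main

    import (
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/keepalive"
    )

    func newServer() *grpc.Server {
        return grpc.NewServer(
            grpc.KeepaliveParams(keepalive.ServerParameters{
                MaxConnectionIdle:     15 * time.Minute, // drain connections idle this long (jitter is added by the transport)
                MaxConnectionAge:      30 * time.Minute,
                MaxConnectionAgeGrace: 5 * time.Minute,
                Time:                  2 * time.Hour,
                Timeout:               20 * time.Second,
            }),
            grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
                MinTime:             5 * time.Minute, // clients pinging more often accumulate ping strikes
                PermitWithoutStream: false,
            }),
        )
    }

    func main() { _ = newServer() }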
@@ -170,18 +235,17 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
id: frame.Header().StreamID,
st: t,
buf: buf,
- fc: &inFlow{limit: initialWindowSize},
+ fc: &inFlow{limit: uint32(t.initialWindowSize)},
}
var state decodeState
for _, hf := range frame.Fields {
- state.processHeaderField(hf)
- }
- if err := state.err; err != nil {
- if se, ok := err.(StreamError); ok {
- t.controlBuf.put(&resetStream{s.id, statusCodeConvTab[se.Code]})
+ if err := state.processHeaderField(hf); err != nil {
+ if se, ok := err.(StreamError); ok {
+ t.controlBuf.put(&resetStream{s.id, statusCodeConvTab[se.Code]})
+ }
+ return
}
- return
}
if frame.StreamEnded() {
@@ -208,12 +272,16 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
s.ctx = newContextWithStream(s.ctx, s)
// Attach the received metadata to the context.
if len(state.mdata) > 0 {
- s.ctx = metadata.NewContext(s.ctx, state.mdata)
+ s.ctx = metadata.NewIncomingContext(s.ctx, state.mdata)
}
-
- s.dec = &recvBufferReader{
- ctx: s.ctx,
- recv: s.buf,
+ s.trReader = &transportReader{
+ reader: &recvBufferReader{
+ ctx: s.ctx,
+ recv: s.buf,
+ },
+ windowHandler: func(n int) {
+ t.updateWindow(s, uint32(n))
+ },
}
s.recvCompress = state.encoding
s.method = state.method
@@ -224,7 +292,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
}
s.ctx, err = t.inTapHandle(s.ctx, info)
if err != nil {
- // TODO: Log the real error.
+ grpclog.Printf("transport: http2Server.operateHeaders got an error from InTapHandle: %v", err)
t.controlBuf.put(&resetStream{s.id, http2.ErrCodeRefusedStream})
return
}
@@ -248,9 +316,12 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
t.maxStreamID = s.id
s.sendQuotaPool = newQuotaPool(int(t.streamSendQuota))
t.activeStreams[s.id] = s
+ if len(t.activeStreams) == 1 {
+ t.idle = time.Time{}
+ }
t.mu.Unlock()
- s.windowHandler = func(n int) {
- t.updateWindow(s, uint32(n))
+ s.requestRead = func(n int) {
+ t.adjustWindow(s, uint32(n))
}
s.ctx = traceCtx(s.ctx, s.method)
if t.stats != nil {
@@ -275,7 +346,10 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.
// Check the validity of client preface.
preface := make([]byte, len(clientPreface))
if _, err := io.ReadFull(t.conn, preface); err != nil {
- grpclog.Printf("transport: http2Server.HandleStreams failed to receive the preface from client: %v", err)
+ // Only log if it isn't a simple tcp accept check (i.e., a tcp balancer doing open/close socket checks)
+ if err != io.EOF {
+ grpclog.Printf("transport: http2Server.HandleStreams failed to receive the preface from client: %v", err)
+ }
t.Close()
return
}
@@ -291,10 +365,11 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.
return
}
if err != nil {
- grpclog.Printf("transport: http2Server.HandleStreams failed to read frame: %v", err)
+ grpclog.Printf("transport: http2Server.HandleStreams failed to read initial settings frame: %v", err)
t.Close()
return
}
+ atomic.StoreUint32(&t.activity, 1)
sf, ok := frame.(*http2.SettingsFrame)
if !ok {
grpclog.Printf("transport: http2Server.HandleStreams saw invalid preface type %T from client", frame)
@@ -305,6 +380,7 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.
for {
frame, err := t.framer.readFrame()
+ atomic.StoreUint32(&t.activity, 1)
if err != nil {
if se, ok := err.(http2.StreamError); ok {
t.mu.Lock()
@@ -363,6 +439,23 @@ func (t *http2Server) getStream(f http2.Frame) (*Stream, bool) {
return s, true
}
+// adjustWindow sends out an extra window update over the initial window size
+// of the stream if the application is requesting data larger in size than
+// the window.
+func (t *http2Server) adjustWindow(s *Stream, n uint32) {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ if s.state == streamDone {
+ return
+ }
+ if w := s.fc.maybeAdjust(n); w > 0 {
+ if cw := t.fc.resetPendingUpdate(); cw > 0 {
+ t.controlBuf.put(&windowUpdate{0, cw, false})
+ }
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
+ }
+}
+
// updateWindow adjusts the inbound quota for the stream and the transport.
// Window updates will deliver to the controller for sending when
// the cumulative quota exceeds the corresponding threshold.
@@ -372,37 +465,41 @@ func (t *http2Server) updateWindow(s *Stream, n uint32) {
if s.state == streamDone {
return
}
- if w := t.fc.onRead(n); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
if w := s.fc.onRead(n); w > 0 {
- t.controlBuf.put(&windowUpdate{s.id, w})
+ if cw := t.fc.resetPendingUpdate(); cw > 0 {
+ t.controlBuf.put(&windowUpdate{0, cw, false})
+ }
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
}
}
func (t *http2Server) handleData(f *http2.DataFrame) {
- size := len(f.Data())
+ size := f.Header().Length
if err := t.fc.onData(uint32(size)); err != nil {
grpclog.Printf("transport: http2Server %v", err)
t.Close()
return
}
+ // Decouple the connection's flow control from the application's read.
+ // An update on the connection's flow control should not depend on
+ // whether the user application has read the data or not. Such a
+ // restriction is already imposed on the stream's flow control,
+ // and therefore the sender will be blocked anyway.
+ // Decoupling the connection flow control will prevent other
+ // active (fast) streams from starving in the presence of slow or
+ // inactive streams.
+ if w := t.fc.onRead(uint32(size)); w > 0 {
+ t.controlBuf.put(&windowUpdate{0, w, true})
+ }
// Select the right stream to dispatch.
s, ok := t.getStream(f)
if !ok {
- if w := t.fc.onRead(uint32(size)); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
return
}
if size > 0 {
s.mu.Lock()
if s.state == streamDone {
s.mu.Unlock()
- // The stream has been closed. Release the corresponding quota.
- if w := t.fc.onRead(uint32(size)); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
return
}
if err := s.fc.onData(uint32(size)); err != nil {
@@ -411,13 +508,20 @@ func (t *http2Server) handleData(f *http2.DataFrame) {
t.controlBuf.put(&resetStream{s.id, http2.ErrCodeFlowControl})
return
}
+ if f.Header().Flags.Has(http2.FlagDataPadded) {
+ if w := s.fc.onRead(uint32(size) - uint32(len(f.Data()))); w > 0 {
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
+ }
+ }
s.mu.Unlock()
// TODO(bradfitz, zhaoq): A copy is required here because there is no
// guarantee f.Data() is consumed before the arrival of next frame.
// Can this copy be eliminated?
- data := make([]byte, size)
- copy(data, f.Data())
- s.write(recvMsg{data: data})
+ if len(f.Data()) > 0 {
+ data := make([]byte, len(f.Data()))
+ copy(data, f.Data())
+ s.write(recvMsg{data: data})
+ }
}
if f.Header().Flags.Has(http2.FlagDataEndStream) {
// Received the end of stream from the client.
@@ -451,6 +555,11 @@ func (t *http2Server) handleSettings(f *http2.SettingsFrame) {
t.controlBuf.put(&settings{ack: true, ss: ss})
}
+const (
+ maxPingStrikes = 2
+ defaultPingTimeout = 2 * time.Hour
+)
+
func (t *http2Server) handlePing(f *http2.PingFrame) {
if f.IsAck() { // Do nothing.
return
@@ -458,6 +567,38 @@ func (t *http2Server) handlePing(f *http2.PingFrame) {
pingAck := &ping{ack: true}
copy(pingAck.data[:], f.Data[:])
t.controlBuf.put(pingAck)
+
+ now := time.Now()
+ defer func() {
+ t.lastPingAt = now
+ }()
+ // A reset of ping strikes means that we don't need to check for a policy
+ // violation for this ping and that the pingStrikes counter should be set
+ // to 0.
+ if atomic.CompareAndSwapUint32(&t.resetPingStrikes, 1, 0) {
+ t.pingStrikes = 0
+ return
+ }
+ t.mu.Lock()
+ ns := len(t.activeStreams)
+ t.mu.Unlock()
+ if ns < 1 && !t.kep.PermitWithoutStream {
+ // Keepalive shouldn't be active; thus, this new ping should
+ // have come after at least defaultPingTimeout.
+ if t.lastPingAt.Add(defaultPingTimeout).After(now) {
+ t.pingStrikes++
+ }
+ } else {
+ // Check if keepalive policy is respected.
+ if t.lastPingAt.Add(t.kep.MinTime).After(now) {
+ t.pingStrikes++
+ }
+ }
+
+ if t.pingStrikes > maxPingStrikes {
+ // Send goaway and close the connection.
+ t.controlBuf.put(&goAway{code: http2.ErrCodeEnhanceYourCalm, debugData: []byte("too_many_pings")})
+ }
}
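To make the policing above easier to follow, here is a simplified, single-goroutine sketch of the strike decision (the resetPingStrikes fast path is omitted); it reuses maxPingStrikes and defaultPingTimeout from this file, but the type and field names are illustrative, not the transport's own:

    type pingPolicy struct {
        lastPingAt          time.Time
        pingStrikes         uint8
        minTime             time.Duration // keepalive.EnforcementPolicy.MinTime
        activeStreams       int
        permitWithoutStream bool
    }

    // onPing reports whether the peer has earned enough strikes to be sent a
    // GOAWAY carrying http2.ErrCodeEnhanceYourCalm and "too_many_pings".
    func (p *pingPolicy) onPing(now time.Time) bool {
        defer func() { p.lastPingAt = now }()
        if p.activeStreams < 1 && !p.permitWithoutStream {
            // No active streams and pings without streams are not permitted:
            // expect at most one ping per defaultPingTimeout (2 hours).
            if p.lastPingAt.Add(defaultPingTimeout).After(now) {
                p.pingStrikes++
            }
        } else if p.lastPingAt.Add(p.minTime).After(now) {
            p.pingStrikes++
        }
        return p.pingStrikes > maxPingStrikes
    }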
func (t *http2Server) handleWindowUpdate(f *http2.WindowUpdateFrame) {
@@ -476,6 +617,13 @@ func (t *http2Server) writeHeaders(s *Stream, b *bytes.Buffer, endStream bool) e
first := true
endHeaders := false
var err error
+ defer func() {
+ if err == nil {
+ // Reset ping strikes when sending headers since that might cause the
+ // peer to send a ping.
+ atomic.StoreUint32(&t.resetPingStrikes, 1)
+ }
+ }()
// Sends the headers in a single batch.
for !endHeaders {
size := t.hBuf.Len()
@@ -530,13 +678,13 @@ func (t *http2Server) WriteHeader(s *Stream, md metadata.MD) error {
if s.sendCompress != "" {
t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-encoding", Value: s.sendCompress})
}
- for k, v := range md {
+ for k, vv := range md {
if isReservedHeader(k) {
// Clients don't tolerate reading restricted headers after some non restricted ones were sent.
continue
}
- for _, entry := range v {
- t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
+ for _, v := range vv {
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
}
}
bufLen := t.hBuf.Len()
@@ -557,7 +705,7 @@ func (t *http2Server) WriteHeader(s *Stream, md metadata.MD) error {
// There is no further I/O operations being able to perform on this stream.
// TODO(zhaoq): Now it indicates the end of entire stream. Revisit if early
// OK is adopted.
-func (t *http2Server) WriteStatus(s *Stream, statusCode codes.Code, statusDesc string) error {
+func (t *http2Server) WriteStatus(s *Stream, st *status.Status) error {
var headersSent, hasHeader bool
s.mu.Lock()
if s.state == streamDone {
@@ -588,17 +736,28 @@ func (t *http2Server) WriteStatus(s *Stream, statusCode codes.Code, statusDesc s
t.hEnc.WriteField(
hpack.HeaderField{
Name: "grpc-status",
- Value: strconv.Itoa(int(statusCode)),
+ Value: strconv.Itoa(int(st.Code())),
})
- t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-message", Value: encodeGrpcMessage(statusDesc)})
+ t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-message", Value: encodeGrpcMessage(st.Message())})
+
+ if p := st.Proto(); p != nil && len(p.Details) > 0 {
+ stBytes, err := proto.Marshal(p)
+ if err != nil {
+ // TODO: return error instead, when callers are able to handle it.
+ panic(err)
+ }
+
+ t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-status-details-bin", Value: encodeBinHeader(stBytes)})
+ }
+
// Attach the trailer metadata.
- for k, v := range s.trailer {
+ for k, vv := range s.trailer {
// Clients don't tolerate reading restricted headers after some non restricted ones were sent.
if isReservedHeader(k) {
continue
}
- for _, entry := range v {
- t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
+ for _, v := range vv {
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
}
}
bufLen := t.hBuf.Len()
@@ -619,7 +778,7 @@ func (t *http2Server) WriteStatus(s *Stream, statusCode codes.Code, statusDesc s
// Write converts the data into HTTP2 data frame and sends it out. Non-nil error
// is returns if it fails (e.g., framing error, transport error).
-func (t *http2Server) Write(s *Stream, data []byte, opts *Options) error {
+func (t *http2Server) Write(s *Stream, data []byte, opts *Options) (err error) {
// TODO(zhaoq): Support multi-writers for a single stream.
var writeHeaderFrame bool
s.mu.Lock()
@@ -634,6 +793,13 @@ func (t *http2Server) Write(s *Stream, data []byte, opts *Options) error {
if writeHeaderFrame {
t.WriteHeader(s, nil)
}
+ defer func() {
+ if err == nil {
+ // the peer to send a ping.
+ // the peer to send ping.
+ atomic.StoreUint32(&t.resetPingStrikes, 1)
+ }
+ }()
r := bytes.NewBuffer(data)
for {
if r.Len() == 0 {
@@ -715,7 +881,7 @@ func (t *http2Server) applySettings(ss []http2.Setting) {
t.mu.Lock()
defer t.mu.Unlock()
for _, stream := range t.activeStreams {
- stream.sendQuotaPool.add(int(s.Val - t.streamSendQuota))
+ stream.sendQuotaPool.add(int(s.Val) - int(t.streamSendQuota))
}
t.streamSendQuota = s.Val
}
@@ -723,6 +889,91 @@ func (t *http2Server) applySettings(ss []http2.Setting) {
}
}
+// keepalive running in a separate goroutine does the following:
+// 1. Gracefully closes an idle connection after a duration of keepalive.MaxConnectionIdle.
+// 2. Gracefully closes any connection after a duration of keepalive.MaxConnectionAge.
+// 3. Forcibly closes a connection after an additive period of keepalive.MaxConnectionAgeGrace over keepalive.MaxConnectionAge.
+// 4. Makes sure a connection is alive by sending pings with a frequency of keepalive.Time and closes a non-responsive connection
+// after an additional duration of keepalive.Timeout.
+func (t *http2Server) keepalive() {
+ p := &ping{}
+ var pingSent bool
+ maxIdle := time.NewTimer(t.kp.MaxConnectionIdle)
+ maxAge := time.NewTimer(t.kp.MaxConnectionAge)
+ keepalive := time.NewTimer(t.kp.Time)
+ // NOTE: All exit paths of this function should reset their
+ // respecitve timers. A failure to do so will cause the
+ // respective timers. A failure to do so will cause the
+ defer func() {
+ if !maxIdle.Stop() {
+ <-maxIdle.C
+ }
+ if !maxAge.Stop() {
+ <-maxAge.C
+ }
+ if !keepalive.Stop() {
+ <-keepalive.C
+ }
+ }()
+ for {
+ select {
+ case <-maxIdle.C:
+ t.mu.Lock()
+ idle := t.idle
+ if idle.IsZero() { // The connection is non-idle.
+ t.mu.Unlock()
+ maxIdle.Reset(t.kp.MaxConnectionIdle)
+ continue
+ }
+ val := t.kp.MaxConnectionIdle - time.Since(idle)
+ if val <= 0 {
+ // The connection has been idle for a duration of keepalive.MaxConnectionIdle or more.
+ // Gracefully close the connection.
+ t.state = draining
+ t.mu.Unlock()
+ t.Drain()
+ // Resetting the timer so that the clean-up doesn't deadlock.
+ maxIdle.Reset(infinity)
+ return
+ }
+ t.mu.Unlock()
+ maxIdle.Reset(val)
+ case <-maxAge.C:
+ t.mu.Lock()
+ t.state = draining
+ t.mu.Unlock()
+ t.Drain()
+ maxAge.Reset(t.kp.MaxConnectionAgeGrace)
+ select {
+ case <-maxAge.C:
+ // Close the connection after grace period.
+ t.Close()
+ // Resetting the timer so that the clean-up doesn't deadlock.
+ maxAge.Reset(infinity)
+ case <-t.shutdownChan:
+ }
+ return
+ case <-keepalive.C:
+ if atomic.CompareAndSwapUint32(&t.activity, 1, 0) {
+ pingSent = false
+ keepalive.Reset(t.kp.Time)
+ continue
+ }
+ if pingSent {
+ t.Close()
+ // Resetting the timer so that the clean-up doesn't deadlock.
+ keepalive.Reset(infinity)
+ return
+ }
+ pingSent = true
+ t.controlBuf.put(p)
+ keepalive.Reset(t.kp.Timeout)
+ case <-t.shutdownChan:
+ return
+ }
+ }
+}
+
// controller running in a separate goroutine takes charge of sending control
// frames (e.g., window update, reset stream, setting, etc.) to the server.
func (t *http2Server) controller() {
@@ -734,7 +985,7 @@ func (t *http2Server) controller() {
case <-t.writableChan:
switch i := i.(type) {
case *windowUpdate:
- t.framer.writeWindowUpdate(true, i.streamID, i.increment)
+ t.framer.writeWindowUpdate(i.flush, i.streamID, i.increment)
case *settings:
if i.ack {
t.framer.writeSettingsAck(true)
@@ -754,7 +1005,10 @@ func (t *http2Server) controller() {
sid := t.maxStreamID
t.state = draining
t.mu.Unlock()
- t.framer.writeGoAway(true, sid, http2.ErrCodeNo, nil)
+ t.framer.writeGoAway(true, sid, i.code, i.debugData)
+ if i.code == http2.ErrCodeEnhanceYourCalm {
+ t.Close()
+ }
case *flushIO:
t.framer.flushWrite()
case *ping:
@@ -804,6 +1058,9 @@ func (t *http2Server) Close() (err error) {
func (t *http2Server) closeStream(s *Stream) {
t.mu.Lock()
delete(t.activeStreams, s.id)
+ if len(t.activeStreams) == 0 {
+ t.idle = time.Now()
+ }
if t.state == draining && len(t.activeStreams) == 0 {
defer t.Close()
}
@@ -813,11 +1070,6 @@ func (t *http2Server) closeStream(s *Stream) {
// called to interrupt the potential blocking on other goroutines.
s.cancel()
s.mu.Lock()
- if q := s.fc.resetPendingData(); q > 0 {
- if w := t.fc.onRead(q); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
- }
if s.state == streamDone {
s.mu.Unlock()
return
@@ -831,5 +1083,17 @@ func (t *http2Server) RemoteAddr() net.Addr {
}
func (t *http2Server) Drain() {
- t.controlBuf.put(&goAway{})
+ t.controlBuf.put(&goAway{code: http2.ErrCodeNo})
+}
+
+var rgen = rand.New(rand.NewSource(time.Now().UnixNano()))
+
+func getJitter(v time.Duration) time.Duration {
+ if v == infinity {
+ return 0
+ }
+ // Generate a jitter between +/- 10% of the value.
+ r := int64(v / 10)
+ j := rgen.Int63n(2*r) - r
+ return time.Duration(j)
}
diff --git a/vendor/google.golang.org/grpc/transport/http_util.go b/vendor/google.golang.org/grpc/transport/http_util.go
index a3c68d4cac..9b31717c41 100644
--- a/vendor/google.golang.org/grpc/transport/http_util.go
+++ b/vendor/google.golang.org/grpc/transport/http_util.go
@@ -36,24 +36,26 @@ package transport
import (
"bufio"
"bytes"
+ "encoding/base64"
"fmt"
"io"
"net"
+ "net/http"
"strconv"
"strings"
"sync/atomic"
"time"
+ "github.com/golang/protobuf/proto"
"golang.org/x/net/http2"
"golang.org/x/net/http2/hpack"
+ spb "google.golang.org/genproto/googleapis/rpc/status"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/grpclog"
- "google.golang.org/grpc/metadata"
+ "google.golang.org/grpc/status"
)
const (
- // The primary user agent
- primaryUA = "grpc-go/1.0"
// http2MaxFrameLen specifies the max length of a HTTP2 frame.
http2MaxFrameLen = 16384 // 16KB frame
// http://http2.github.io/http2-spec/#SettingValues
@@ -87,18 +89,39 @@ var (
codes.ResourceExhausted: http2.ErrCodeEnhanceYourCalm,
codes.PermissionDenied: http2.ErrCodeInadequateSecurity,
}
+ httpStatusConvTab = map[int]codes.Code{
+ // 400 Bad Request - INTERNAL.
+ http.StatusBadRequest: codes.Internal,
+ // 401 Unauthorized - UNAUTHENTICATED.
+ http.StatusUnauthorized: codes.Unauthenticated,
+ // 403 Forbidden - PERMISSION_DENIED.
+ http.StatusForbidden: codes.PermissionDenied,
+ // 404 Not Found - UNIMPLEMENTED.
+ http.StatusNotFound: codes.Unimplemented,
+ // 429 Too Many Requests - UNAVAILABLE.
+ http.StatusTooManyRequests: codes.Unavailable,
+ // 502 Bad Gateway - UNAVAILABLE.
+ http.StatusBadGateway: codes.Unavailable,
+ // 503 Service Unavailable - UNAVAILABLE.
+ http.StatusServiceUnavailable: codes.Unavailable,
+ // 504 Gateway timeout - UNAVAILABLE.
+ http.StatusGatewayTimeout: codes.Unavailable,
+ }
)
// Records the states during HPACK decoding. Must be reset once the
// decoding of the entire headers are finished.
type decodeState struct {
- err error // first error encountered decoding
-
encoding string
- // statusCode caches the stream status received from the trailer
- // the server sent. Client side only.
- statusCode codes.Code
- statusDesc string
+ // statusGen caches the stream status received from the trailer the server
+ // sent. Client side only. Do not access directly. After all trailers are
+ // parsed, use the status method to retrieve the status.
+ statusGen *status.Status
+ // rawStatusCode and rawStatusMsg are set from the raw trailer fields and are not
+ // intended for direct access outside of parsing.
+ rawStatusCode *int
+ rawStatusMsg string
+ httpStatus *int
// Server side only fields.
timeoutSet bool
timeout time.Duration
@@ -121,6 +144,7 @@ func isReservedHeader(hdr string) bool {
"grpc-message",
"grpc-status",
"grpc-timeout",
+ "grpc-status-details-bin",
"te":
return true
default:
@@ -139,12 +163,6 @@ func isWhitelistedPseudoHeader(hdr string) bool {
}
}
-func (d *decodeState) setErr(err error) {
- if d.err == nil {
- d.err = err
- }
-}
-
func validContentType(t string) bool {
e := "application/grpc"
if !strings.HasPrefix(t, e) {
@@ -158,56 +176,135 @@ func validContentType(t string) bool {
return true
}
-func (d *decodeState) processHeaderField(f hpack.HeaderField) {
+func (d *decodeState) status() *status.Status {
+ if d.statusGen == nil {
+ // No status-details were provided; generate status using code/msg.
+ d.statusGen = status.New(codes.Code(int32(*(d.rawStatusCode))), d.rawStatusMsg)
+ }
+ return d.statusGen
+}
+
+const binHdrSuffix = "-bin"
+
+func encodeBinHeader(v []byte) string {
+ return base64.RawStdEncoding.EncodeToString(v)
+}
+
+func decodeBinHeader(v string) ([]byte, error) {
+ if len(v)%4 == 0 {
+ // Input was padded, or padding was not necessary.
+ return base64.StdEncoding.DecodeString(v)
+ }
+ return base64.RawStdEncoding.DecodeString(v)
+}
+
+func encodeMetadataHeader(k, v string) string {
+ if strings.HasSuffix(k, binHdrSuffix) {
+ return encodeBinHeader(([]byte)(v))
+ }
+ return v
+}
+
+func decodeMetadataHeader(k, v string) (string, error) {
+ if strings.HasSuffix(k, binHdrSuffix) {
+ b, err := decodeBinHeader(v)
+ return string(b), err
+ }
+ return v, nil
+}
+
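Keys ending in "-bin" carry arbitrary bytes, so their values are base64-encoded on the wire without padding (encodeBinHeader), while decodeBinHeader tolerates padded input as well. A small standalone illustration with made-up bytes:

    package main

    import (
        "encoding/base64"
        "fmt"
    )

    func main() {
        raw := []byte{0x00, 0x01, 0xfe, 0xff}

        // What encodeBinHeader produces for a "-bin" key: unpadded standard base64.
        wire := base64.RawStdEncoding.EncodeToString(raw)
        fmt.Println(wire) // AAH+/w

        // decodeBinHeader accepts both padded and unpadded input; len%4 == 0
        // means the value is either padded or needs no padding at all.
        back, err := base64.RawStdEncoding.DecodeString(wire)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%x\n", back) // 0001feff
    }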
+func (d *decodeState) decodeResponseHeader(frame *http2.MetaHeadersFrame) error {
+ for _, hf := range frame.Fields {
+ if err := d.processHeaderField(hf); err != nil {
+ return err
+ }
+ }
+
+ // If grpc status exists, no need to check further.
+ if d.rawStatusCode != nil || d.statusGen != nil {
+ return nil
+ }
+
+ // If grpc status doesn't exist and http status doesn't exist,
+ // then it's a malformed header.
+ if d.httpStatus == nil {
+ return streamErrorf(codes.Internal, "malformed header: doesn't contain status(gRPC or HTTP)")
+ }
+
+ if *(d.httpStatus) != http.StatusOK {
+ code, ok := httpStatusConvTab[*(d.httpStatus)]
+ if !ok {
+ code = codes.Unknown
+ }
+ return streamErrorf(code, http.StatusText(*(d.httpStatus)))
+ }
+
+ // gRPC status doesn't exist and http status is OK.
+ // Set rawStatusCode to be unknown and return a nil error.
+ // That way, if the stream has ended, this Unknown status
+ // will be propagated to the user.
+ // Otherwise, it will be ignored, in which case the status from
+ // a later trailer, which has the StreamEnded flag set, is propagated.
+ code := int(codes.Unknown)
+ d.rawStatusCode = &code
+ return nil
+
+}
+
+func (d *decodeState) processHeaderField(f hpack.HeaderField) error {
switch f.Name {
case "content-type":
if !validContentType(f.Value) {
- d.setErr(streamErrorf(codes.FailedPrecondition, "transport: received the unexpected content-type %q", f.Value))
- return
+ return streamErrorf(codes.FailedPrecondition, "transport: received the unexpected content-type %q", f.Value)
}
case "grpc-encoding":
d.encoding = f.Value
case "grpc-status":
code, err := strconv.Atoi(f.Value)
if err != nil {
- d.setErr(streamErrorf(codes.Internal, "transport: malformed grpc-status: %v", err))
- return
+ return streamErrorf(codes.Internal, "transport: malformed grpc-status: %v", err)
}
- d.statusCode = codes.Code(code)
+ d.rawStatusCode = &code
case "grpc-message":
- d.statusDesc = decodeGrpcMessage(f.Value)
+ d.rawStatusMsg = decodeGrpcMessage(f.Value)
+ case "grpc-status-details-bin":
+ v, err := decodeBinHeader(f.Value)
+ if err != nil {
+ return streamErrorf(codes.Internal, "transport: malformed grpc-status-details-bin: %v", err)
+ }
+ s := &spb.Status{}
+ if err := proto.Unmarshal(v, s); err != nil {
+ return streamErrorf(codes.Internal, "transport: malformed grpc-status-details-bin: %v", err)
+ }
+ d.statusGen = status.FromProto(s)
case "grpc-timeout":
d.timeoutSet = true
var err error
- d.timeout, err = decodeTimeout(f.Value)
- if err != nil {
- d.setErr(streamErrorf(codes.Internal, "transport: malformed time-out: %v", err))
- return
+ if d.timeout, err = decodeTimeout(f.Value); err != nil {
+ return streamErrorf(codes.Internal, "transport: malformed time-out: %v", err)
}
case ":path":
d.method = f.Value
+ case ":status":
+ code, err := strconv.Atoi(f.Value)
+ if err != nil {
+ return streamErrorf(codes.Internal, "transport: malformed http-status: %v", err)
+ }
+ d.httpStatus = &code
default:
if !isReservedHeader(f.Name) || isWhitelistedPseudoHeader(f.Name) {
- if f.Name == "user-agent" {
- i := strings.LastIndex(f.Value, " ")
- if i == -1 {
- // There is no application user agent string being set.
- return
- }
- // Extract the application user agent string.
- f.Value = f.Value[:i]
- }
if d.mdata == nil {
d.mdata = make(map[string][]string)
}
- k, v, err := metadata.DecodeKeyValue(f.Name, f.Value)
+ v, err := decodeMetadataHeader(f.Name, f.Value)
if err != nil {
grpclog.Printf("Failed to decode (%q, %q): %v", f.Name, f.Value, err)
- return
+ return nil
}
- d.mdata[k] = append(d.mdata[k], v)
+ d.mdata[f.Name] = append(d.mdata[f.Name], v)
}
}
+ return nil
}
type timeoutUnit uint8
@@ -379,6 +476,9 @@ func newFramer(conn net.Conn) *framer {
writer: bufio.NewWriterSize(conn, http2IOBufSize),
}
f.fr = http2.NewFramer(f.writer, f.reader)
+ // Opt-in to Frame reuse API on framer to reduce garbage.
+ // Frames aren't safe to read from after a subsequent call to ReadFrame.
+ f.fr.SetReuseFrames()
f.fr.ReadMetaHeaders = hpack.NewDecoder(http2InitHeaderTableSize, nil)
return f
}
diff --git a/vendor/google.golang.org/grpc/transport/transport.go b/vendor/google.golang.org/grpc/transport/transport.go
index caee54a801..063e28f0f8 100644
--- a/vendor/google.golang.org/grpc/transport/transport.go
+++ b/vendor/google.golang.org/grpc/transport/transport.go
@@ -45,10 +45,13 @@ import (
"sync"
"golang.org/x/net/context"
+ "golang.org/x/net/http2"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
+ "google.golang.org/grpc/keepalive"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/tap"
)
@@ -102,6 +105,7 @@ func (b *recvBuffer) load() {
if len(b.backlog) > 0 {
select {
case b.c <- b.backlog[0]:
+ b.backlog[0] = nil
b.backlog = b.backlog[1:]
default:
}
@@ -168,11 +172,6 @@ type Stream struct {
id uint32
// nil for client side Stream.
st ServerTransport
- // clientStatsCtx keeps the user context for stats handling.
- // It's only valid on client side. Server side stats context is same as s.ctx.
- // All client side stats collection should use the clientStatsCtx (instead of the stream context)
- // so that all the generated stats for a particular RPC can be associated in the processing phase.
- clientStatsCtx context.Context
// ctx is the associated context of the stream.
ctx context.Context
// cancel is always nil for client side Stream.
@@ -186,14 +185,17 @@ type Stream struct {
recvCompress string
sendCompress string
buf *recvBuffer
- dec io.Reader
+ trReader io.Reader
fc *inFlow
recvQuota uint32
+
+ // TODO: Remove this unused variable.
// The accumulated inbound quota pending for window update.
updateQuota uint32
- // The handler to control the window update procedure for both this
- // particular stream and the associated transport.
- windowHandler func(int)
+
+ // Callback to state the application's intention to read data. This
+ // is used to adjust flow control, if need be.
+ requestRead func(int)
sendQuotaPool *quotaPool
// Close headerChan to indicate the end of reception of header metadata.
@@ -210,9 +212,17 @@ type Stream struct {
// true iff headerChan is closed. Used to avoid closing headerChan
// multiple times.
headerDone bool
- // the status received from the server.
- statusCode codes.Code
- statusDesc string
+ // the status error received from the server.
+ status *status.Status
+ // rstStream indicates whether a RST_STREAM frame needs to be sent
+ // to the server to signify that this stream is closing.
+ rstStream bool
+ // rstError is the error that needs to be sent along with the RST_STREAM frame.
+ rstError http2.ErrCode
+ // bytesSent and bytesReceived indicates whether any bytes have been sent or
+ // received on this stream.
+ bytesSent bool
+ bytesReceived bool
}
// RecvCompress returns the compression algorithm applied to the inbound
@@ -240,7 +250,7 @@ func (s *Stream) GoAway() <-chan struct{} {
// Header acquires the key-value pairs of header metadata once it
// is available. It blocks until i) the metadata is ready or ii) there is no
-// header metadata or iii) the stream is cancelled/expired.
+// header metadata or iii) the stream is canceled/expired.
func (s *Stream) Header() (metadata.MD, error) {
select {
case <-s.ctx.Done():
@@ -277,14 +287,9 @@ func (s *Stream) Method() string {
return s.method
}
-// StatusCode returns statusCode received from the server.
-func (s *Stream) StatusCode() codes.Code {
- return s.statusCode
-}
-
-// StatusDesc returns statusDesc received from the server.
-func (s *Stream) StatusDesc() string {
- return s.statusDesc
+// Status returns the status received from the server.
+func (s *Stream) Status() *status.Status {
+ return s.status
}
// SetHeader sets the header metadata. This can be called multiple times.
@@ -318,19 +323,66 @@ func (s *Stream) write(m recvMsg) {
s.buf.put(&m)
}
-// Read reads all the data available for this Stream from the transport and
+// Read reads all p bytes from the wire for this stream.
+func (s *Stream) Read(p []byte) (n int, err error) {
+ // Don't request a read if there was an error earlier
+ if er := s.trReader.(*transportReader).er; er != nil {
+ return 0, er
+ }
+ s.requestRead(len(p))
+ return io.ReadFull(s.trReader, p)
+}
+
+// transportReader reads all the data available for this Stream from the transport and
// passes them into the decoder, which converts them into a gRPC message stream.
// The error is io.EOF when the stream is done or another non-nil error if
// the stream broke.
-func (s *Stream) Read(p []byte) (n int, err error) {
- n, err = s.dec.Read(p)
+type transportReader struct {
+ reader io.Reader
+ // The handler to control the window update procedure for both this
+ // particular stream and the associated transport.
+ windowHandler func(int)
+ er error
+}
+
+func (t *transportReader) Read(p []byte) (n int, err error) {
+ n, err = t.reader.Read(p)
if err != nil {
+ t.er = err
return
}
- s.windowHandler(n)
+ t.windowHandler(n)
return
}
+// finish sets the stream's state and status, and closes the done channel.
+// s.mu must be held by the caller. st must always be non-nil.
+func (s *Stream) finish(st *status.Status) {
+ s.status = st
+ s.state = streamDone
+ close(s.done)
+}
+
+// BytesSent indicates whether any bytes have been sent on this stream.
+func (s *Stream) BytesSent() bool {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ return s.bytesSent
+}
+
+// BytesReceived indicates whether any bytes have been received on this stream.
+func (s *Stream) BytesReceived() bool {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ return s.bytesReceived
+}
+
+// GoString is implemented by Stream so context.String() won't
+// race when printing %#v.
+func (s *Stream) GoString() string {
+ return fmt.Sprintf("<stream: %p, %v>", s, s.method)
+}
+
// The key to save transport.Stream in the context.
type streamKey struct{}
@@ -358,10 +410,14 @@ const (
// ServerConfig consists of all the configurations to establish a server transport.
type ServerConfig struct {
- MaxStreams uint32
- AuthInfo credentials.AuthInfo
- InTapHandle tap.ServerInHandle
- StatsHandler stats.Handler
+ MaxStreams uint32
+ AuthInfo credentials.AuthInfo
+ InTapHandle tap.ServerInHandle
+ StatsHandler stats.Handler
+ KeepaliveParams keepalive.ServerParameters
+ KeepalivePolicy keepalive.EnforcementPolicy
+ InitialWindowSize int32
+ InitialConnWindowSize int32
}
// NewServerTransport creates a ServerTransport with conn or non-nil error
@@ -385,8 +441,14 @@ type ConnectOptions struct {
PerRPCCredentials []credentials.PerRPCCredentials
// TransportCredentials stores the Authenticator required to setup a client connection.
TransportCredentials credentials.TransportCredentials
+ // KeepaliveParams stores the keepalive parameters.
+ KeepaliveParams keepalive.ClientParameters
// StatsHandler stores the handler for stats.
StatsHandler stats.Handler
+ // InitialWindowSize sets the initial window size for a stream.
+ InitialWindowSize int32
+ // InitialConnWindowSize sets the initial window size for a connection.
+ InitialConnWindowSize int32
}
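The new window-size knobs are surfaced to applications as options; on the client they are believed to correspond to the grpc.WithInitialWindowSize and grpc.WithInitialConnWindowSize dial options at this revision (treat the option names as an assumption). Values below defaultWindowSize are ignored, as newHTTP2Server above shows. A sketch, assuming the google.golang.org/grpc import:

    func dialWithLargeWindows(target string) (*grpc.ClientConn, error) {
        return grpc.Dial(target,
            grpc.WithInsecure(),
            grpc.WithInitialWindowSize(1<<20),     // 1 MB per-stream receive window (illustrative)
            grpc.WithInitialConnWindowSize(1<<20), // 1 MB per-connection receive window (illustrative)
        )
    }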
// TargetInfo contains the information of the target such as network address and metadata.
@@ -430,6 +492,9 @@ type CallHdr struct {
// outbound message.
SendCompress string
+ // Creds specifies credentials.PerRPCCredentials for a call.
+ Creds credentials.PerRPCCredentials
+
// Flush indicates whether a new stream command should be sent
// to the peer without waiting for the first data. This is
// only a hint. The transport may modify the flush decision
@@ -469,10 +534,13 @@ type ClientTransport interface {
// once the transport is initiated.
Error() <-chan struct{}
- // GoAway returns a channel that is closed when ClientTranspor
+ // GoAway returns a channel that is closed when ClientTransport
// receives the draining signal from the server (e.g., GOAWAY frame in
// HTTP/2).
GoAway() <-chan struct{}
+
+ // GetGoAwayReason returns the reason why GoAway frame was received.
+ GetGoAwayReason() GoAwayReason
}
// ServerTransport is the common interface for all gRPC server-side transport
@@ -492,10 +560,9 @@ type ServerTransport interface {
// Write may not be called on all streams.
Write(s *Stream, data []byte, opts *Options) error
- // WriteStatus sends the status of a stream to the client.
- // WriteStatus is the final call made on a stream and always
- // occurs.
- WriteStatus(s *Stream, statusCode codes.Code, statusDesc string) error
+ // WriteStatus sends the status of a stream to the client. WriteStatus is
+ // the final call made on a stream and always occurs.
+ WriteStatus(s *Stream, st *status.Status) error
// Close tears down the transport. Once it is called, the transport
// should not be accessed any more. All the pending streams and their
@@ -561,6 +628,8 @@ var (
ErrStreamDrain = streamErrorf(codes.Unavailable, "the server stops accepting new RPCs")
)
+// TODO: See if we can replace StreamError with status package errors.
+
// StreamError is an error that only affects one stream within a connection.
type StreamError struct {
Code codes.Code
@@ -568,18 +637,7 @@ type StreamError struct {
}
func (e StreamError) Error() string {
- return fmt.Sprintf("stream error: code = %d desc = %q", e.Code, e.Desc)
-}
-
-// ContextErr converts the error from context package into a StreamError.
-func ContextErr(err error) StreamError {
- switch err {
- case context.DeadlineExceeded:
- return streamErrorf(codes.DeadlineExceeded, "%v", err)
- case context.Canceled:
- return streamErrorf(codes.Canceled, "%v", err)
- }
- panic(fmt.Sprintf("Unexpected error from context packet: %v", err))
+ return fmt.Sprintf("stream error: code = %s desc = %q", e.Code, e.Desc)
}
// wait blocks until it can receive from ctx.Done, closing, or proceed.
@@ -609,3 +667,16 @@ func wait(ctx context.Context, done, goAway, closing <-chan struct{}, proceed <-
return i, nil
}
}
+
+// GoAwayReason contains the reason for the GoAway frame received.
+type GoAwayReason uint8
+
+const (
+ // Invalid indicates that no GoAway frame is received.
+ Invalid GoAwayReason = 0
+ // NoReason is the default value when GoAway frame is received.
+ NoReason GoAwayReason = 1
+ // TooManyPings indicates that a GoAway frame with ErrCodeEnhanceYourCalm
+ // was received and that the debug data said "too_many_pings".
+ TooManyPings GoAwayReason = 2
+)
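GetGoAwayReason lets the connection layer distinguish an ordinary drain from a keepalive-policy violation. A hypothetical consumer (not the library's actual logic) might back off its keepalive interval when the server answered with "too_many_pings"; the helper below assumes access to this package's types and the vendored keepalive package:

    // adjustKeepalive is a hypothetical helper, not part of the transport API.
    func adjustKeepalive(t ClientTransport, kp *keepalive.ClientParameters) {
        select {
        case <-t.GoAway():
            if t.GetGoAwayReason() == TooManyPings {
                // Illustrative back-off; any policy that respects the server's
                // enforcement MinTime would do.
                kp.Time *= 2
            }
        default:
        }
    }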
diff --git a/vendor/manifest b/vendor/manifest
index d486c8cffb..4bbf5e9f22 100644
--- a/vendor/manifest
+++ b/vendor/manifest
@@ -892,8 +892,8 @@
{
"importpath": "github.com/golang/protobuf/proto",
"repository": "https://github.com/golang/protobuf",
- "vcs": "",
- "revision": "5baca1b63153b1a82014546382edbdd302b138b6",
+ "vcs": "git",
+ "revision": "5a0f697c9ed9d68fef0116532c6e05cfeae00e55",
"branch": "master",
"path": "/proto"
},
@@ -906,6 +906,15 @@
"path": "/protoc-gen-go/descriptor",
"notests": true
},
+ {
+ "importpath": "github.com/golang/protobuf/ptypes",
+ "repository": "https://github.com/golang/protobuf",
+ "vcs": "git",
+ "revision": "5a0f697c9ed9d68fef0116532c6e05cfeae00e55",
+ "branch": "master",
+ "path": "/ptypes",
+ "notests": true
+ },
{
"importpath": "github.com/google/cadvisor/info/v1",
"repository": "https://github.com/google/cadvisor",
@@ -1431,7 +1440,7 @@
"importpath": "github.com/weaveworks/common",
"repository": "https://github.com/weaveworks/common",
"vcs": "git",
- "revision": "2faced4ddea5ec3b1ff8e88ba552714e3088cf41",
+ "revision": "493a1f760f47ed3b50afd5baabb36589d96017b8",
"branch": "master",
"notests": true
},
@@ -1552,7 +1561,7 @@
"importpath": "golang.org/x/net/http2",
"repository": "https://go.googlesource.com/net",
"vcs": "git",
- "revision": "dd2d9a67c97da0afa00d5726e28086007a0acce5",
+ "revision": "59a0b19b5533c7977ddeb86b017bf507ed407b12",
"branch": "master",
"path": "http2",
"notests": true
@@ -1694,11 +1703,20 @@
"branch": "master",
"notests": true
},
+ {
+ "importpath": "google.golang.org/genproto/googleapis/rpc/status",
+ "repository": "https://github.com/google/go-genproto",
+ "vcs": "git",
+ "revision": "aa2eb687b4d3e17154372564ad8d6bf11c3cf21f",
+ "branch": "master",
+ "path": "/googleapis/rpc/status",
+ "notests": true
+ },
{
"importpath": "google.golang.org/grpc",
"repository": "https://github.com/grpc/grpc-go",
"vcs": "git",
- "revision": "34384f34de585705f1a6783a158d2ec8af29f618",
+ "revision": "06c984861f3044e61f84671f9fe660da0fdf4997",
"branch": "master",
"notests": true
},