From 4c746a0890ff05d276da3965ff6c59ee6d22e943 Mon Sep 17 00:00:00 2001 From: Injun Song Date: Thu, 16 Sep 2021 09:03:23 +0900 Subject: [PATCH] *: add topic (#480) * *: add topic Added topic to Varlog. * proto: add Topic into proto (#479) * add topic * add topic into proto * wip * fix CommitContextOf * add highWatermark into report * Update internal/metadata_repository/raft_metadata_repository.go Co-authored-by: Injun Song * Update internal/metadata_repository/raft_metadata_repository.go Co-authored-by: Injun Song * Update internal/metadata_repository/raft_metadata_repository.go Co-authored-by: Injun Song * Update internal/metadata_repository/raft_metadata_repository.go Co-authored-by: Injun Song * Update internal/metadata_repository/raft_metadata_repository.go * Update internal/metadata_repository/storage.go Co-authored-by: Injun Song * fix code * fix code Co-authored-by: Hyungkeun Jang Co-authored-by: Injun Song * *: use int32 for storage node id and log stream id (#481) Changed type of `types.StorageNodeID` and `types.LogStreamID` from uint32 to int32. Resolves [#VARLOG-548](VARLOG-548). * topic: managemant topic (#485) * add topic * add topic into proto * wip * fix CommitContextOf * add highWatermark into report * wip * management topic * add test for register topic * add test for unregister topic * fmt * fix code * fix test * fix code * fix code Co-authored-by: Hyungkeun Jang * sn: remove redundant types for replica (#483) There were redundant types to represent replica: - `pkg/logc/StorageNode` - `proto/snpb/Replica` - `proto/snpb/AppendRequest_BackupNode` This patch removes those and uses `proto/varlogpb/StorageNode` and types that wrap it. Resolves [#VARLOG-546](VARLOG-546). * sn: add topic id to log i/o (#486) This patch adds `TopicID` to the methods of Log I/O interface. It doesn't contain any meaningful implementations about `TopicID`. Types that now have `TopicID` are follows: - `internal/storagenode/logio.ReadWriter` - `internal/storagenode/reportcommitter/reportcommitter.Getter` - `proto/snpb.AppendRequest`, `proto/snpb.AppendResponse`, `proto/snpb.ReadRequest`, `proto/snpb.SubscribeRequest`, `proto/snpb.TrimRequest` - `proto/varlogpb.Replica` Resolves [#VARLOG-542](VARLOG-542). * it: fix flaky test - TestVarlogSubscribeWithAddLS (#487) The `TestVarlogSubscribeWithAddLS` created a goroutine adding new LogStream while appending log entries. It however did not manage the life cycle of the goroutine resulting in several issues: - use closed connection to append logs - wait for commit messages from MR indefinitely since the MR is already closed This patch simply adds `sync.WaitGroup` to the test to avoid the above issues. Resolves [#VARLOG-569](VARLOG-569). * proto: removing unnecessary fields from messages (#488) This patch removes unnecessary fields generated automatically from proto messages. These lines are like follows: ``` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` ``` They are something related to either optimizations or compatibility that are generated by the gogoproto. However, they come to small overhead in many cases. (See https://github.com/cockroachdb/cockroach/pull/38404 and https://github.com/etcd-io/etcd/commit/e51c697ec6e8f44b5a0a455c8fada484db4633af#diff-76a35072df72591a656e69cab6f6fa99aa386fd5ace35c9042851eb324ec16b5). This change adds the following options to every proto definition file: ``` option (gogoproto.goproto_unkeyed_all) = false; option (gogoproto.goproto_unrecognized_all) = false; option (gogoproto.goproto_sizecache_all) = false; ``` Resolves [#VARLOG-557](VARLOG-557). * *: dedup LogEntry types (#489) - Remove `pkg/types.LogEntry`, then use `proto/varlogpb.LogEntry`. Resolves [#VARLOG-558](VARLOG-558). * vendor: bump Pebble (#490) Resolves #VARLOG-556 * sn: rename AddLogStream RPC (#491) In this patch, the RPC `AddLogStream` renames to `AddLogStreamReplica` to clarify its behavior. The RPC `AddLogStreamReplica` adds a new replica of the given LogStream to the StorageNode. Resolves [#VARLOG-568](VARLOG-568). * topic: apply Topic into client (#493) * logio: apply topic * add topic test * fix TestMRTopicLastHighWatermark * fix managing log stream status in vms Resolves #VARLOG-559 * all: update golang 1.17.0 (#492) Resolves [#VARLOG-555](VARLOG-555). * all: fix code style (#494) This patch just fixes code styles. Resolves [#VARLOG-572](VARLOG-572). * build: use predefined protoc (#496) Resolves [#VARLOG-563](VARLOG-563). * sn,topic: checking topic id while handling RPCs (#495) This patch adds a feature that the StorageNode checks if the topic id is valid or not. To support this functionality it adds a topicID to a parameter of many functions. The executor does not care about the topicID, rather it will be considered by StorageNode. To do that, StorageNode maintains the executors by using the executorsmap keyed by logStreamTopicID. The logStreamTopicID is a packed type of the LogStreamID and TopicID. Resolves [#VARLOG-542](VARLOG-542). * lint: fix code style (#497) This is a follow-up PR for VARLOG-572. Resolves [#VARLOG-572](VARLOG-572). Co-authored-by: Hyungkeun Jang Co-authored-by: Hyungkeun Jang --- .gitignore | 3 + .jenkins/Jenkinsfile | 22 +- Makefile | 354 +- build/reports/.keep => TOPIC | 0 .../app/metadata_repository.go | 6 +- cmd/rpcbench/server/main.go | 6 +- cmd/storagenode/app/flags.go | 2 +- cmd/storagenode/app/storagenode.go | 6 +- cmd/vmc/app/add.go | 13 +- cmd/vmc/app/cli.go | 93 +- cmd/vmc/app/ls_recovery.go | 12 +- cmd/vmc/app/remove.go | 4 +- cmd/vms/app/vms.go | 6 +- go.mod | 69 +- go.sum | 4 +- .../dummy_storagenode_client_factory_impl.go | 128 +- .../in_memory_metadata_repository.go | 122 - .../metadata_repository.go | 2 + .../metadata_repository_service.go | 10 + internal/metadata_repository/options.go | 36 +- internal/metadata_repository/raft.go | 25 +- .../raft_metadata_repository.go | 327 +- .../raft_metadata_repository_test.go | 776 +- internal/metadata_repository/raft_test.go | 7 +- .../metadata_repository/report_collector.go | 145 +- .../report_collector_test.go | 213 +- .../state_machine_log_test.go | 24 +- .../state_machine_syncer.go | 451 +- internal/metadata_repository/storage.go | 313 +- internal/metadata_repository/storage_test.go | 529 +- internal/storagenode/config.go | 4 +- internal/storagenode/config_test.go | 1 - internal/storagenode/executor/commit_task.go | 9 +- .../storagenode/executor/commit_task_test.go | 10 +- .../storagenode/executor/commit_wait_queue.go | 5 +- .../executor/commit_wait_queue_test.go | 3 +- internal/storagenode/executor/committer.go | 25 +- .../storagenode/executor/committer_test.go | 36 +- internal/storagenode/executor/config.go | 11 + internal/storagenode/executor/executor.go | 30 +- .../storagenode/executor/executor_mock.go | 12 +- .../storagenode/executor/executor_test.go | 524 +- internal/storagenode/executor/log_io.go | 20 +- .../executor/log_stream_context.go | 19 +- .../executor/log_stream_context_test.go | 5 +- internal/storagenode/executor/metadata.go | 12 +- internal/storagenode/executor/replicate.go | 10 +- .../storagenode/executor/replicate_task.go | 18 +- internal/storagenode/executor/replicator.go | 17 +- .../storagenode/executor/replicator_mock.go | 4 +- .../storagenode/executor/replicator_test.go | 48 +- internal/storagenode/executor/reportcommit.go | 11 +- .../storagenode/executor/reportcommit_test.go | 48 +- internal/storagenode/executor/seal.go | 10 +- internal/storagenode/executor/sync.go | 14 +- internal/storagenode/executor/testing_test.go | 7 + internal/storagenode/executor/write_task.go | 8 +- internal/storagenode/executor/writer.go | 18 +- internal/storagenode/executor/writer_test.go | 1 - .../storagenode/executorsmap/executors_map.go | 45 +- .../executorsmap/executors_map_test.go | 71 +- internal/storagenode/executorsmap/id.go | 19 + internal/storagenode/executorsmap/id_test.go | 24 + internal/storagenode/logio/readwriter.go | 17 +- internal/storagenode/logio/server.go | 27 +- internal/storagenode/replication/client.go | 8 +- .../storagenode/replication/client_mock.go | 3 +- internal/storagenode/replication/config.go | 10 +- internal/storagenode/replication/connector.go | 19 +- .../storagenode/replication/connector_mock.go | 4 +- .../storagenode/replication/replication.go | 7 +- .../replication/replication_mock.go | 11 +- .../replication/replication_test.go | 57 +- internal/storagenode/replication/server.go | 11 +- .../storagenode/replication/testing_test.go | 14 + .../reportcommitter/reportcommitter.go | 2 +- .../reportcommitter/reportcommitter_mock.go | 8 +- .../storagenode/reportcommitter/reporter.go | 2 +- .../reportcommitter/reporter_test.go | 2 +- .../storagenode/reportcommitter/server.go | 29 - internal/storagenode/server.go | 48 +- internal/storagenode/server_test.go | 259 - .../stopchannel/stop_channel_test.go | 9 +- internal/storagenode/storage/encode.go | 16 +- .../storage/pebble_commit_batch.go | 20 +- .../storagenode/storage/pebble_scanner.go | 4 +- .../storagenode/storage/pebble_storage.go | 54 +- .../storagenode/storage/pebble_write_batch.go | 10 +- internal/storagenode/storage/storage.go | 28 +- internal/storagenode/storage/storage_mock.go | 11 +- internal/storagenode/storage/storage_test.go | 75 +- internal/storagenode/storage_node.go | 144 +- internal/storagenode/storage_node_test.go | 273 +- internal/storagenode/storagenodetest_test.go | 57 + .../telemetry/storage_node_metrics.go | 8 +- internal/storagenode/telemetry/testing.go | 10 - internal/storagenode/volume.go | 162 - internal/storagenode/volume/volume.go | 168 + .../storagenode/{ => volume}/volume_test.go | 61 +- internal/vms/cluster_manager.go | 131 +- internal/vms/cluster_manager_service.go | 32 +- internal/vms/id_generator.go | 66 + internal/vms/mr_manager.go | 48 +- internal/vms/replica_selector_test.go | 23 +- internal/vms/sn_manager.go | 56 +- internal/vms/sn_manager_test.go | 50 +- internal/vms/vms_mock.go | 40 +- pkg/benchmark/benchmark.go | 2 +- pkg/logc/log_io_client.go | 63 +- pkg/logc/log_io_client_mock.go | 37 +- pkg/logc/log_io_client_test.go | 18 +- pkg/logc/log_io_proxy.go | 17 +- pkg/mrc/metadata_repository_client.go | 28 +- pkg/mrc/metadata_repository_client_mock.go | 28 + pkg/mrc/metadata_repository_client_test.go | 34 +- .../metadata_repository_management_client.go | 4 +- pkg/mrc/mrconnector/mr_connector.go | 12 +- pkg/mrc/mrconnector/mrc_proxy.go | 24 + pkg/rpc/rpc_conn.go | 3 +- pkg/snc/snc_mock.go | 46 +- pkg/snc/storage_node_management_client.go | 33 +- .../storage_node_management_client_test.go | 35 +- pkg/types/log_entry.go | 17 - pkg/types/types.go | 76 +- pkg/types/types_test.go | 15 +- pkg/util/netutil/netutil.go | 1 - pkg/util/telemetry/telemetry.go | 2 +- pkg/varlog/allowlist.go | 149 +- pkg/varlog/allowlist_test.go | 84 +- pkg/varlog/cluster_manager_client.go | 67 +- pkg/varlog/log_stream_selector.go | 6 +- pkg/varlog/metadata_refresher.go | 1 - pkg/varlog/operations.go | 30 +- pkg/varlog/replicas_retriever.go | 78 +- pkg/varlog/replicas_retriever_mock.go | 32 +- pkg/varlog/subscribe.go | 19 +- pkg/varlog/subscribe_test.go | 34 +- pkg/varlog/trim.go | 10 +- pkg/varlog/trim_test.go | 13 +- pkg/varlog/varlog.go | 33 +- proto/errpb/errors.pb.go | 24 +- proto/errpb/errors.proto | 6 + proto/mrpb/management.pb.go | 169 +- proto/mrpb/management.proto | 3 + proto/mrpb/metadata_repository.pb.go | 421 +- proto/mrpb/metadata_repository.proto | 17 +- proto/mrpb/mock/mrpb_mock.go | 70 + proto/mrpb/raft_entry.pb.go | 816 +- proto/mrpb/raft_entry.proto | 31 +- proto/mrpb/raft_metadata_repository.go | 57 +- proto/mrpb/raft_metadata_repository.pb.go | 314 +- proto/mrpb/raft_metadata_repository.proto | 24 +- proto/mrpb/state_machine_log.go | 5 +- proto/mrpb/state_machine_log.pb.go | 153 +- proto/mrpb/state_machine_log.proto | 7 +- proto/rpcbenchpb/rpcbench.pb.go | 37 +- proto/rpcbenchpb/rpcbench.proto | 3 + proto/snpb/log_io.pb.go | 574 +- proto/snpb/log_io.proto | 50 +- proto/snpb/log_stream_reporter.pb.go | 310 +- proto/snpb/log_stream_reporter.proto | 39 +- proto/snpb/management.pb.go | 783 +- proto/snpb/management.proto | 81 +- proto/snpb/mock/snpb_mock.go | 28 +- proto/snpb/replica.pb.go | 9 +- proto/snpb/replica.proto | 26 - proto/snpb/replicator.pb.go | 364 +- proto/snpb/replicator.proto | 29 +- proto/varlogpb/log_entry.go | 17 + proto/varlogpb/metadata.go | 171 +- proto/varlogpb/metadata.pb.go | 1668 ++- proto/varlogpb/metadata.proto | 133 +- proto/{snpb => varlogpb}/replica.go | 6 +- proto/vmspb/vms.pb.go | 1671 ++- proto/vmspb/vms.proto | 93 +- reports/.gitignore | 7 + test/e2e/action.go | 9 +- test/e2e/action_helper.go | 13 +- test/e2e/e2e_long_test.go | 1 + test/e2e/e2e_simple_test.go | 1 + test/e2e/k8s_util.go | 10 +- test/e2e/k8s_util_test.go | 1 + test/e2e/options.go | 43 +- test/e2e/vault_util_test.go | 11 +- test/it/cluster/client_test.go | 92 +- test/it/cluster/cluster_test.go | 70 +- test/it/config.go | 17 +- test/it/failover/failover_test.go | 222 +- test/it/management/management_test.go | 268 +- test/it/management/vms_test.go | 20 +- test/it/mrconnector/mr_connector_test.go | 10 +- test/it/testenv.go | 190 +- test/it/testenv_test.go | 1 + test/marshal_test.go | 14 +- test/rpc_e2e/rpc_test.go | 1 + tools/tools.go | 11 + vendor/github.com/cenkalti/backoff/v4/go.mod | 3 - vendor/github.com/cespare/xxhash/v2/go.mod | 3 - vendor/github.com/cespare/xxhash/v2/go.sum | 0 vendor/github.com/cockroachdb/errors/go.mod | 17 - vendor/github.com/cockroachdb/errors/go.sum | 272 - .../github.com/cockroachdb/pebble/.travis.yml | 28 +- vendor/github.com/cockroachdb/pebble/Makefile | 2 +- .../github.com/cockroachdb/pebble/commit.go | 2 +- .../cockroachdb/pebble/compaction.go | 330 +- vendor/github.com/cockroachdb/pebble/db.go | 2 +- vendor/github.com/cockroachdb/pebble/event.go | 8 + vendor/github.com/cockroachdb/pebble/go.mod | 21 - vendor/github.com/cockroachdb/pebble/go.sum | 295 - .../pebble/internal/manifest/l0_sublevels.go | 53 +- .../pebble/internal/manifest/version.go | 29 +- vendor/github.com/cockroachdb/pebble/open.go | 30 +- .../github.com/cockroachdb/pebble/options.go | 39 + .../{internal => }/record/log_writer.go | 0 .../pebble/{internal => }/record/record.go | 2 +- .../cockroachdb/pebble/table_cache.go | 3 +- .../cockroachdb/pebble/version_set.go | 2 +- .../cockroachdb/pebble/vfs/mem_fs.go | 2 +- .../github.com/cockroachdb/pebble/vfs/vfs.go | 26 +- vendor/github.com/cockroachdb/redact/go.mod | 3 - .../github.com/cockroachdb/sentry-go/go.mod | 33 - .../github.com/cockroachdb/sentry-go/go.sum | 285 - vendor/github.com/go-logr/logr/go.mod | 3 - vendor/github.com/go-ole/go-ole/go.mod | 3 - vendor/github.com/gogo/status/go.mod | 12 - vendor/github.com/gogo/status/go.sum | 12 - vendor/github.com/golang/snappy/go.mod | 1 - vendor/github.com/google/gofuzz/go.mod | 3 - vendor/github.com/json-iterator/go/go.mod | 11 - vendor/github.com/json-iterator/go/go.sum | 14 - vendor/github.com/kr/pretty/go.mod | 5 - vendor/github.com/kr/pretty/go.sum | 3 - vendor/github.com/kr/text/go.mod | 3 - vendor/github.com/prometheus/procfs/go.mod | 3 - vendor/github.com/prometheus/procfs/go.sum | 2 - .../github.com/russross/blackfriday/v2/go.mod | 1 - .../shurcooL/sanitized_anchor_name/go.mod | 1 - .../smartystreets/assertions/go.mod | 3 - vendor/github.com/spf13/pflag/go.mod | 3 - vendor/github.com/spf13/pflag/go.sum | 0 vendor/github.com/urfave/cli/v2/go.mod | 9 - vendor/github.com/urfave/cli/v2/go.sum | 14 - vendor/go.opentelemetry.io/contrib/go.mod | 3 - vendor/go.opentelemetry.io/contrib/go.sum | 0 .../otel/exporters/otlp/otlpmetric/go.mod | 83 - .../otel/exporters/otlp/otlpmetric/go.sum | 125 - .../otlp/otlpmetric/otlpmetricgrpc/go.mod | 81 - .../otlp/otlpmetric/otlpmetricgrpc/go.sum | 125 - .../otel/exporters/otlp/otlptrace/go.mod | 81 - .../otel/exporters/otlp/otlptrace/go.sum | 123 - .../otlp/otlptrace/otlptracegrpc/go.mod | 78 - .../otlp/otlptrace/otlptracegrpc/go.sum | 123 - .../otel/exporters/stdout/stdoutmetric/go.mod | 77 - .../otel/exporters/stdout/stdoutmetric/go.sum | 17 - .../otel/exporters/stdout/stdouttrace/go.mod | 76 - .../otel/exporters/stdout/stdouttrace/go.sum | 15 - vendor/go.opentelemetry.io/otel/go.mod | 74 - vendor/go.opentelemetry.io/otel/go.sum | 15 - .../otel/internal/metric/go.mod | 73 - .../otel/internal/metric/go.sum | 15 - vendor/go.opentelemetry.io/otel/metric/go.mod | 74 - vendor/go.opentelemetry.io/otel/metric/go.sum | 15 - .../otel/sdk/export/metric/go.mod | 74 - .../otel/sdk/export/metric/go.sum | 15 - .../otel/sdk/metric/go.mod | 77 - .../otel/sdk/metric/go.sum | 17 - vendor/go.opentelemetry.io/otel/trace/go.mod | 73 - vendor/go.opentelemetry.io/otel/trace/go.sum | 15 - vendor/go.uber.org/atomic/go.mod | 8 - vendor/go.uber.org/atomic/go.sum | 9 - vendor/go.uber.org/automaxprocs/go.mod | 9 - vendor/go.uber.org/automaxprocs/go.sum | 18 - vendor/go.uber.org/goleak/go.mod | 11 - vendor/go.uber.org/goleak/go.sum | 30 - vendor/go.uber.org/multierr/go.mod | 9 - vendor/go.uber.org/multierr/go.sum | 16 - vendor/go.uber.org/zap/go.mod | 13 - vendor/go.uber.org/zap/go.sum | 56 - vendor/golang.org/x/lint/go.mod | 5 - vendor/golang.org/x/lint/go.sum | 8 - vendor/golang.org/x/mod/LICENSE | 27 + vendor/golang.org/x/mod/PATENTS | 22 + vendor/golang.org/x/mod/module/module.go | 718 ++ vendor/golang.org/x/mod/semver/semver.go | 388 + vendor/golang.org/x/oauth2/go.mod | 10 - vendor/golang.org/x/oauth2/go.sum | 12 - .../golang.org/x/tools/cmd/goimports/doc.go | 47 + .../x/tools/cmd/goimports/goimports.go | 380 + .../x/tools/cmd/goimports/goimports_gc.go | 26 + .../x/tools/cmd/goimports/goimports_not_gc.go | 11 + .../x/tools/cmd/stringer/stringer.go | 655 + .../tools/go/internal/packagesdriver/sizes.go | 49 + vendor/golang.org/x/tools/go/packages/doc.go | 221 + .../x/tools/go/packages/external.go | 101 + .../golang.org/x/tools/go/packages/golist.go | 1096 ++ .../x/tools/go/packages/golist_overlay.go | 572 + .../x/tools/go/packages/loadmode_string.go | 57 + .../x/tools/go/packages/packages.go | 1233 ++ .../golang.org/x/tools/go/packages/visit.go | 59 + .../x/tools/internal/event/core/event.go | 85 + .../x/tools/internal/event/core/export.go | 70 + .../x/tools/internal/event/core/fast.go | 77 + .../golang.org/x/tools/internal/event/doc.go | 7 + .../x/tools/internal/event/event.go | 127 + .../x/tools/internal/event/keys/keys.go | 564 + .../x/tools/internal/event/keys/standard.go | 22 + .../x/tools/internal/event/label/label.go | 213 + .../x/tools/internal/fastwalk/fastwalk.go | 196 + .../fastwalk/fastwalk_dirent_fileno.go | 13 + .../internal/fastwalk/fastwalk_dirent_ino.go | 14 + .../fastwalk/fastwalk_dirent_namlen_bsd.go | 13 + .../fastwalk/fastwalk_dirent_namlen_linux.go | 29 + .../internal/fastwalk/fastwalk_portable.go | 37 + .../tools/internal/fastwalk/fastwalk_unix.go | 128 + .../x/tools/internal/gocommand/invoke.go | 273 + .../x/tools/internal/gocommand/vendor.go | 102 + .../x/tools/internal/gocommand/version.go | 51 + .../x/tools/internal/gopathwalk/walk.go | 264 + .../x/tools/internal/imports/fix.go | 1730 +++ .../x/tools/internal/imports/imports.go | 346 + .../x/tools/internal/imports/mod.go | 688 + .../x/tools/internal/imports/mod_cache.go | 236 + .../x/tools/internal/imports/sortimports.go | 280 + .../x/tools/internal/imports/zstdlib.go | 10516 ++++++++++++++++ .../internal/packagesinternal/packages.go | 21 + .../tools/internal/typesinternal/errorcode.go | 1358 ++ .../typesinternal/errorcode_string.go | 152 + .../x/tools/internal/typesinternal/types.go | 45 + vendor/golang.org/x/xerrors/LICENSE | 27 + vendor/golang.org/x/xerrors/PATENTS | 22 + vendor/golang.org/x/xerrors/README | 2 + vendor/golang.org/x/xerrors/adaptor.go | 193 + vendor/golang.org/x/xerrors/codereview.cfg | 1 + vendor/golang.org/x/xerrors/doc.go | 22 + vendor/golang.org/x/xerrors/errors.go | 33 + vendor/golang.org/x/xerrors/fmt.go | 187 + vendor/golang.org/x/xerrors/format.go | 34 + vendor/golang.org/x/xerrors/frame.go | 56 + .../golang.org/x/xerrors/internal/internal.go | 8 + vendor/golang.org/x/xerrors/wrap.go | 106 + vendor/google.golang.org/grpc/go.mod | 17 - vendor/google.golang.org/grpc/go.sum | 96 - vendor/gopkg.in/yaml.v2/go.mod | 5 - vendor/gopkg.in/yaml.v3/go.mod | 5 - vendor/k8s.io/klog/v2/go.mod | 5 - vendor/k8s.io/klog/v2/go.sum | 2 - vendor/modules.txt | 154 +- vendor/sigs.k8s.io/yaml/go.mod | 8 - vendor/sigs.k8s.io/yaml/go.sum | 9 - 349 files changed, 34916 insertions(+), 9922 deletions(-) rename build/reports/.keep => TOPIC (100%) delete mode 100644 internal/metadata_repository/in_memory_metadata_repository.go create mode 100644 internal/storagenode/executorsmap/id.go create mode 100644 internal/storagenode/executorsmap/id_test.go create mode 100644 internal/storagenode/replication/testing_test.go delete mode 100644 internal/storagenode/server_test.go create mode 100644 internal/storagenode/storagenodetest_test.go delete mode 100644 internal/storagenode/telemetry/testing.go delete mode 100644 internal/storagenode/volume.go create mode 100644 internal/storagenode/volume/volume.go rename internal/storagenode/{ => volume}/volume_test.go (77%) delete mode 100644 pkg/types/log_entry.go delete mode 100644 proto/snpb/replica.proto create mode 100644 proto/varlogpb/log_entry.go rename proto/{snpb => varlogpb}/replica.go (87%) create mode 100644 reports/.gitignore create mode 100644 tools/tools.go delete mode 100644 vendor/github.com/cenkalti/backoff/v4/go.mod delete mode 100644 vendor/github.com/cespare/xxhash/v2/go.mod delete mode 100644 vendor/github.com/cespare/xxhash/v2/go.sum delete mode 100644 vendor/github.com/cockroachdb/errors/go.mod delete mode 100644 vendor/github.com/cockroachdb/errors/go.sum delete mode 100644 vendor/github.com/cockroachdb/pebble/go.mod delete mode 100644 vendor/github.com/cockroachdb/pebble/go.sum rename vendor/github.com/cockroachdb/pebble/{internal => }/record/log_writer.go (100%) rename vendor/github.com/cockroachdb/pebble/{internal => }/record/record.go (99%) delete mode 100644 vendor/github.com/cockroachdb/redact/go.mod delete mode 100644 vendor/github.com/cockroachdb/sentry-go/go.mod delete mode 100644 vendor/github.com/cockroachdb/sentry-go/go.sum delete mode 100644 vendor/github.com/go-logr/logr/go.mod delete mode 100644 vendor/github.com/go-ole/go-ole/go.mod delete mode 100644 vendor/github.com/gogo/status/go.mod delete mode 100644 vendor/github.com/gogo/status/go.sum delete mode 100644 vendor/github.com/golang/snappy/go.mod delete mode 100644 vendor/github.com/google/gofuzz/go.mod delete mode 100644 vendor/github.com/json-iterator/go/go.mod delete mode 100644 vendor/github.com/json-iterator/go/go.sum delete mode 100644 vendor/github.com/kr/pretty/go.mod delete mode 100644 vendor/github.com/kr/pretty/go.sum delete mode 100644 vendor/github.com/kr/text/go.mod delete mode 100644 vendor/github.com/prometheus/procfs/go.mod delete mode 100644 vendor/github.com/prometheus/procfs/go.sum delete mode 100644 vendor/github.com/russross/blackfriday/v2/go.mod delete mode 100644 vendor/github.com/shurcooL/sanitized_anchor_name/go.mod delete mode 100644 vendor/github.com/smartystreets/assertions/go.mod delete mode 100644 vendor/github.com/spf13/pflag/go.mod delete mode 100644 vendor/github.com/spf13/pflag/go.sum delete mode 100644 vendor/github.com/urfave/cli/v2/go.mod delete mode 100644 vendor/github.com/urfave/cli/v2/go.sum delete mode 100644 vendor/go.opentelemetry.io/contrib/go.mod delete mode 100644 vendor/go.opentelemetry.io/contrib/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/stdout/stdoutmetric/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/stdout/stdoutmetric/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/stdout/stdouttrace/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/exporters/stdout/stdouttrace/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/internal/metric/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/internal/metric/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/metric/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/metric/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/sdk/export/metric/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/sdk/export/metric/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/sdk/metric/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/sdk/metric/go.sum delete mode 100644 vendor/go.opentelemetry.io/otel/trace/go.mod delete mode 100644 vendor/go.opentelemetry.io/otel/trace/go.sum delete mode 100644 vendor/go.uber.org/atomic/go.mod delete mode 100644 vendor/go.uber.org/atomic/go.sum delete mode 100644 vendor/go.uber.org/automaxprocs/go.mod delete mode 100644 vendor/go.uber.org/automaxprocs/go.sum delete mode 100644 vendor/go.uber.org/goleak/go.mod delete mode 100644 vendor/go.uber.org/goleak/go.sum delete mode 100644 vendor/go.uber.org/multierr/go.mod delete mode 100644 vendor/go.uber.org/multierr/go.sum delete mode 100644 vendor/go.uber.org/zap/go.mod delete mode 100644 vendor/go.uber.org/zap/go.sum delete mode 100644 vendor/golang.org/x/lint/go.mod delete mode 100644 vendor/golang.org/x/lint/go.sum create mode 100644 vendor/golang.org/x/mod/LICENSE create mode 100644 vendor/golang.org/x/mod/PATENTS create mode 100644 vendor/golang.org/x/mod/module/module.go create mode 100644 vendor/golang.org/x/mod/semver/semver.go delete mode 100644 vendor/golang.org/x/oauth2/go.mod delete mode 100644 vendor/golang.org/x/oauth2/go.sum create mode 100644 vendor/golang.org/x/tools/cmd/goimports/doc.go create mode 100644 vendor/golang.org/x/tools/cmd/goimports/goimports.go create mode 100644 vendor/golang.org/x/tools/cmd/goimports/goimports_gc.go create mode 100644 vendor/golang.org/x/tools/cmd/goimports/goimports_not_gc.go create mode 100644 vendor/golang.org/x/tools/cmd/stringer/stringer.go create mode 100644 vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go create mode 100644 vendor/golang.org/x/tools/go/packages/doc.go create mode 100644 vendor/golang.org/x/tools/go/packages/external.go create mode 100644 vendor/golang.org/x/tools/go/packages/golist.go create mode 100644 vendor/golang.org/x/tools/go/packages/golist_overlay.go create mode 100644 vendor/golang.org/x/tools/go/packages/loadmode_string.go create mode 100644 vendor/golang.org/x/tools/go/packages/packages.go create mode 100644 vendor/golang.org/x/tools/go/packages/visit.go create mode 100644 vendor/golang.org/x/tools/internal/event/core/event.go create mode 100644 vendor/golang.org/x/tools/internal/event/core/export.go create mode 100644 vendor/golang.org/x/tools/internal/event/core/fast.go create mode 100644 vendor/golang.org/x/tools/internal/event/doc.go create mode 100644 vendor/golang.org/x/tools/internal/event/event.go create mode 100644 vendor/golang.org/x/tools/internal/event/keys/keys.go create mode 100644 vendor/golang.org/x/tools/internal/event/keys/standard.go create mode 100644 vendor/golang.org/x/tools/internal/event/label/label.go create mode 100644 vendor/golang.org/x/tools/internal/fastwalk/fastwalk.go create mode 100644 vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_fileno.go create mode 100644 vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_ino.go create mode 100644 vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_bsd.go create mode 100644 vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_linux.go create mode 100644 vendor/golang.org/x/tools/internal/fastwalk/fastwalk_portable.go create mode 100644 vendor/golang.org/x/tools/internal/fastwalk/fastwalk_unix.go create mode 100644 vendor/golang.org/x/tools/internal/gocommand/invoke.go create mode 100644 vendor/golang.org/x/tools/internal/gocommand/vendor.go create mode 100644 vendor/golang.org/x/tools/internal/gocommand/version.go create mode 100644 vendor/golang.org/x/tools/internal/gopathwalk/walk.go create mode 100644 vendor/golang.org/x/tools/internal/imports/fix.go create mode 100644 vendor/golang.org/x/tools/internal/imports/imports.go create mode 100644 vendor/golang.org/x/tools/internal/imports/mod.go create mode 100644 vendor/golang.org/x/tools/internal/imports/mod_cache.go create mode 100644 vendor/golang.org/x/tools/internal/imports/sortimports.go create mode 100644 vendor/golang.org/x/tools/internal/imports/zstdlib.go create mode 100644 vendor/golang.org/x/tools/internal/packagesinternal/packages.go create mode 100644 vendor/golang.org/x/tools/internal/typesinternal/errorcode.go create mode 100644 vendor/golang.org/x/tools/internal/typesinternal/errorcode_string.go create mode 100644 vendor/golang.org/x/tools/internal/typesinternal/types.go create mode 100644 vendor/golang.org/x/xerrors/LICENSE create mode 100644 vendor/golang.org/x/xerrors/PATENTS create mode 100644 vendor/golang.org/x/xerrors/README create mode 100644 vendor/golang.org/x/xerrors/adaptor.go create mode 100644 vendor/golang.org/x/xerrors/codereview.cfg create mode 100644 vendor/golang.org/x/xerrors/doc.go create mode 100644 vendor/golang.org/x/xerrors/errors.go create mode 100644 vendor/golang.org/x/xerrors/fmt.go create mode 100644 vendor/golang.org/x/xerrors/format.go create mode 100644 vendor/golang.org/x/xerrors/frame.go create mode 100644 vendor/golang.org/x/xerrors/internal/internal.go create mode 100644 vendor/golang.org/x/xerrors/wrap.go delete mode 100644 vendor/google.golang.org/grpc/go.mod delete mode 100644 vendor/google.golang.org/grpc/go.sum delete mode 100644 vendor/gopkg.in/yaml.v2/go.mod delete mode 100644 vendor/gopkg.in/yaml.v3/go.mod delete mode 100644 vendor/k8s.io/klog/v2/go.mod delete mode 100644 vendor/k8s.io/klog/v2/go.sum delete mode 100644 vendor/sigs.k8s.io/yaml/go.mod delete mode 100644 vendor/sigs.k8s.io/yaml/go.sum diff --git a/.gitignore b/.gitignore index 836d70c98..56d72a2b2 100644 --- a/.gitignore +++ b/.gitignore @@ -33,6 +33,9 @@ bin/vmr bin/vms bin/vsn bin/rpc_test_server +bin/rpcbench_client +bin/rpcbench_server +bin/benchmark # Python *.pyc diff --git a/.jenkins/Jenkinsfile b/.jenkins/Jenkinsfile index 687128d8b..db34ab784 100644 --- a/.jenkins/Jenkinsfile +++ b/.jenkins/Jenkinsfile @@ -6,49 +6,41 @@ pipeline { stage("build") { steps { - sh "make all" + sh "make build" } } stage("test") { - environment { - TEST_USE_LOGGER = "0" - } - steps { - sh "make test TEST_FAILFAST=1 TEST_TIMEOUT=30m TEST_COUNT=2 TEST_COVERAGE=1" + sh "make test_ci TEST_FLAGS='-v -race -failfast -count=2 -timeout=30m'" } post { always { sh "make test_report" - junit "build/reports/*.xml" + junit "reports/*.xml" sh "make coverage_report" - cobertura coberturaReportFile: "build/reports/coverage.xml" + cobertura coberturaReportFile: "reports/coverage.xml" } } } /* stage("benchmark") { - environment { - TEST_USE_LOGGER = "0" - } - steps { sh "make bench" lock("varlog-sandbox-01") { - sh "./scripts/benchmark.sh > build/reports/load.xml" + sh "./scripts/benchmark.sh > reports/load.xml" } } post { always { sh "make bench_report" - perfReport "build/reports/bench.xml" - perfReport "build/reports/load.xml" + perfReport "reports/bench.xml" + perfReport "reports/load.xml" } } } diff --git a/Makefile b/Makefile index b8d6ce5dc..f9e231c02 100644 --- a/Makefile +++ b/Makefile @@ -1,32 +1,28 @@ MAKEFLAGS += --warn-undefined-variables SHELL := /bin/bash -MAKEFILE_PATH := $(abspath $(lastword $(MAKEFILE_LIST))) -MAKEFILE_DIR := $(dir $(MAKEFILE_PATH)) -BUILD_DIR := $(MAKEFILE_DIR)/build -BIN_DIR := $(MAKEFILE_DIR)/bin - GO := go -GOPATH := $(shell $(GO) env GOPATH) -LDFLAGS := -GOFLAGS := -race GCFLAGS := -gcflags=all='-N -l' +GOPATH := $(shell $(GO) env GOPATH) +PKGS := $(shell $(GO) list ./... | \ + egrep -v "github.com/kakao/varlog/vendor" | \ + egrep -v "github.com/kakao/varlog/tools" | \ + sed -e "s;github.com/kakao/varlog/;;") -PROTOC := protoc -GRPC_GO_PLUGIN := protoc-gen-gogo -PROTO_INCS := -I ${GOPATH}/src -I ${MAKEFILE_DIR}/proto -I ${MAKEFILE_DIR}/vendor -I . -PROTO_SRCS := $(shell find . -name "*.proto" -not -path "./vendor/*") -PROTO_PBS := $(PROTO_SRCS:.proto=.pb.go) -HAS_PROTOC := $(shell which $(PROTOC) > /dev/null && echo true || echo false) -HAS_VALID_PROTOC := false -ifeq ($(HAS_PROTOC),true) -HAS_VALID_PROTOC := $(shell $(PROTOC) --version | grep -q "libprotoc 3" > /dev/null && echo true || echo false) -endif -HAS_GRPC_PLUGIN := $(shell which $(GRPC_GO_PLUGIN) > /dev/null && echo true || echo false) +.DEFAULT_GOAL := all .PHONY: all -all: generate fmt build +all: generate precommit build + +# precommit +.PHONY: precommit precommit_lint +precommit: fmt tidy vet test +precommit_lint: fmt tidy vet lint test + + +# build +BIN_DIR := $(CURDIR)/bin VMS := $(BIN_DIR)/vms VMC := $(BIN_DIR)/vmc VSN := $(BIN_DIR)/vsn @@ -37,119 +33,90 @@ BENCHMARK := $(BIN_DIR)/benchmark RPCBENCH_SERVER := $(BIN_DIR)/rpcbench_server RPCBENCH_CLIENT := $(BIN_DIR)/rpcbench_client -BUILD_OUTPUT := $(VMS) $(VMC) $(VSN) $(VMR) $(SNTOOL) $(RPC_TEST_SERVER) $(BENCHMARK) $(RPCBENCH) - -.PHONY: build vms vmc vsn vmr sntool rpc_test_server benchmark +.PHONY: build vms vmc vsn vmr sntool rpc_test_server benchmark rpcbench build: vms vmc vsn vmr sntool rpc_test_server benchmark rpcbench +vms: + $(GO) build $(GCFLAGS) -o $(VMS) cmd/vms/main.go +vmc: + $(GO) build $(GCFLAGS) -o $(VMC) cmd/vmc/main.go +vsn: + $(GO) build $(GCFLAGS) -o $(VSN) cmd/storagenode/main.go +vmr: + $(GO) build $(GCFLAGS) -o $(VMR) cmd/metadata_repository/main.go +sntool: + $(GO) build $(GCFLAGS) -o $(SNTOOL) cmd/sntool/sntool.go +rpc_test_server: + $(GO) build -tags rpc_e2e $(GCFLAGS) -o $(RPC_TEST_SERVER) cmd/rpc_test_server/main.go +benchmark: + $(GO) build $(GCFLAGS) -o $(BENCHMARK) cmd/benchmark/main.go +rpcbench: + $(GO) build $(GCFLAGS) -o $(RPCBENCH_SERVER) cmd/rpcbench/server/main.go + $(GO) build $(GCFLAGS) -o $(RPCBENCH_CLIENT) cmd/rpcbench/client/main.go + + +# testing +REPORTS_DIR := $(CURDIR)/reports +TEST_OUTPUT := $(REPORTS_DIR)/test.out +TEST_REPORT := $(REPORTS_DIR)/test.xml +COVERAGE_OUTPUT_TMP := $(REPORTS_DIR)/coverage.out.tmp +COVERAGE_OUTPUT := $(REPORTS_DIR)/coverage.out +COVERAGE_REPORT := $(REPORTS_DIR)/coverage.xml +BENCH_OUTPUT := $(REPORTS_DIR)/bench.out +BENCH_REPORT := $(REPORTS_DIR)/bench.xml + +TEST_FLAGS := -v -race -failfast -count=1 + +.PHONY: test test_ci test_report coverage_report +test: + tmpfile=$$(mktemp); \ + (TERM=xterm $(GO) test $(TEST_FLAGS) ./... 2>&1; echo $$? > $$tmpfile) | \ + tee $(TEST_OUTPUT); \ + ret=$$(cat $$tmpfile); \ + rm -f $$tmpfile; \ + exit $$ret -vms: proto - $(GO) build $(GOFLAGS) $(GCFLAGS) -o $(VMS) cmd/vms/main.go - -vmc: proto - $(GO) build $(GOFLAGS) $(GCFLAGS) -o $(VMC) cmd/vmc/main.go - -vsn: proto - $(GO) build $(GOFLAGS) $(GCFLAGS) -o $(VSN) cmd/storagenode/main.go - -vmr: proto - $(GO) build $(GOFLAGS) $(GCFLAGS) -o $(VMR) cmd/metadata_repository/main.go - -sntool: proto - $(GO) build $(GOFLAGS) $(GCFLAGS) -o $(SNTOOL) cmd/sntool/sntool.go - -rpc_test_server: proto - $(GO) build -tags rpc_e2e $(GOFLAGS) $(GCFLAGS) -o $(RPC_TEST_SERVER) cmd/rpc_test_server/main.go - -benchmark: proto - $(GO) build $(GOFLAGS) $(GCFLAGS) -o $(BENCHMARK) cmd/benchmark/main.go - -rpcbench: proto - $(GO) build $(GOFLAGS) $(GCFLAGS) -o $(RPCBENCH_SERVER) cmd/rpcbench/server/main.go - $(GO) build $(GOFLAGS) $(GCFLAGS) -o $(RPCBENCH_CLIENT) cmd/rpcbench/client/main.go - - -.PHONY: proto -proto: $(PROTO_PBS) -$(PROTO_PBS): $(PROTO_SRCS) - for src in $^ ; do \ - $(PROTOC) $(PROTO_INCS) \ - --gogo_out=plugins=grpc,Mgoogle/protobuf/empty.proto=github.com/gogo/protobuf/types,Mgoogle/protobuf/any.proto=github.com/gogo/protobuf/types,Mgoogle/protobuf/duration.proto=github.com/gogo/protobuf/types,paths=source_relative:. $$src ; \ - done - -TEST_COUNT := 1 -TEST_FLAGS := -count $(TEST_COUNT) - -ifneq ($(TEST_CPU),) - TEST_FLAGS := $(TEST_FLAGS) -cpu $(TEST_CPU) -endif - -ifneq ($(TEST_TIMEOUT),) - TEST_FLAGS := $(TEST_FLAGS) -timeout $(TEST_TIMEOUT) -endif - -ifneq ($(TEST_PARALLEL),) - TEST_FLAGS := $(TEST_FLAGS) -parallel $(TEST_PARALLEL) -endif - -TEST_COVERAGE := 0 -ifeq ($(TEST_COVERAGE),1) - TEST_FLAGS := $(TEST_FLAGS) -coverprofile=$(BUILD_DIR)/reports/coverage.out -endif - -TEST_FAILFAST := 1 -ifeq ($(TEST_FAILFAST),1) - TEST_FLAGS := $(TEST_FLAGS) -failfast -endif - -TEST_VERBOSE := 1 -ifeq ($(TEST_VERBOSE),1) - TEST_FLAGS := $(TEST_FLAGS) -v -endif - -TEST_E2E := 0 -ifeq ($(TEST_E2E),1) - TEST_FLAGS := $(TEST_FLAGS) -tags=e2e -endif - -.PHONY: test test_report coverage_report -test: build +test_ci: tmpfile=$$(mktemp); \ - (TERM=sh $(GO) test $(GOFLAGS) $(GCFLAGS) $(TEST_FLAGS) ./... 2>&1; echo $$? > $$tmpfile) | \ - tee $(BUILD_DIR)/reports/test_output.txt; \ + (TERM=xterm $(GO) test $(TEST_FLAGS) -coverprofile=$(COVERAGE_OUTPUT_TMP) ./... 2>&1; echo $$? > $$tmpfile) | \ + tee $(TEST_OUTPUT); \ ret=$$(cat $$tmpfile); \ rm -f $$tmpfile; \ exit $$ret test_report: - cat $(BUILD_DIR)/reports/test_output.txt | \ - go-junit-report > $(BUILD_DIR)/reports/report.xml - rm $(BUILD_DIR)/reports/test_output.txt + cat $(TEST_OUTPUT) | go-junit-report > $(TEST_REPORT) coverage_report: - gocov convert $(BUILD_DIR)/reports/coverage.out | gocov-xml > $(BUILD_DIR)/reports/coverage.xml + cat $(COVERAGE_OUTPUT_TMP) | grep -v ".pb.go" | grep -v "_mock.go" > $(COVERAGE_OUTPUT) + gocov convert $(COVERAGE_OUTPUT) | gocov-xml > $(COVERAGE_REPORT) bench: build tmpfile=$$(mktemp); \ - (TERM=sh $(GO) test -v -run=^$$ -count 1 -bench=. -benchmem ./... 2>&1; echo $$? > $$tmpfile) | \ - tee $(BUILD_DIR)/reports/bench_output.txt; \ + (TERM=xterm $(GO) test -v -run=^$$ -count 1 -bench=. -benchmem ./... 2>&1; echo $$? > $$tmpfile) | \ + tee $(BENCH_OUTPUT); \ ret=$$(cat $$tmpfile); \ rm -f $$tmpfile; \ exit $$ret bench_report: - cat $(BUILD_DIR)/reports/bench_output.txt | \ - go-junit-report > $(BUILD_DIR)/reports/bench.xml - rm $(BUILD_DIR)/reports/bench_output.txt + cat $(BENCH_OUTPUT) | go-junit-report > $(BENCH_REPORT) + +# testing on k8s TEST_DOCKER_CPUS := 8 TEST_DOCKER_MEMORY := 4GB - +TEST_POD_NAME := test-e2e .PHONY: test_docker test_e2e_docker test_e2e_docker_long + test_docker: image_builder_dev - docker run --rm -it --cpus $(TEST_DOCKER_CPUS) --memory $(TEST_DOCKER_MEMORY) ***REMOVED***/varlog/builder-dev:$(DOCKER_TAG) make test + docker run --rm -it \ + --cpus $(TEST_DOCKER_CPUS) \ + --memory $(TEST_DOCKER_MEMORY) \ + ***REMOVED***/varlog/builder-dev:$(DOCKER_TAG) \ + make test test_e2e_docker: image_builder_dev push_builder_dev - kubectl run --rm -it test-e2e \ + kubectl run --rm -it $(TEST_POD_NAME) \ --image=***REMOVED***/varlog/builder-dev:$(DOCKER_TAG) \ --image-pull-policy=Always \ --restart=Never \ @@ -159,125 +126,56 @@ test_e2e_docker: image_builder_dev push_builder_dev --command -- $(GO) test ./test/e2e -tags=e2e -v -timeout 30m -failfast -count 1 -race -p 1 test_e2e_docker_long: image_builder_dev push_builder_dev - kubectl run --rm -it test-e2e \ + kubectl run --rm -it $(TEST_POD_NAME) \ --image=***REMOVED***/varlog/builder-dev:$(DOCKER_TAG) \ --image-pull-policy=Always \ --restart=Never \ --env="VAULT_ADDR=$(VAULT_ADDR)" \ --env="VAULT_TOKEN=$(VAULT_TOKEN)" \ --env="VAULT_SECRET_PATH=$(VAULT_SECRET_PATH)" \ - --command -- $(GO) test ./test/e2e -tags=long_e2e -v -timeout 12h -failfast -count 1 -race -p 1 - -.PHONY: generate -generate: - $(GO) generate ./... - -.PHONY: fmt -fmt: - scripts/fmt.sh - -.PHONY: lint -lint: - @$(foreach path,$(shell $(GO) list ./... | grep -v vendor | sed -e s#github.com/kakao/varlog/##),golint $(path);) -.PHONY: vet -vet: - @$(GO) vet ./... - -.PHONY: clean -clean: - $(GO) clean - $(RM) $(BUILD_OUTPUT) - -.PHONY: clean_mock -clean_mock: - @$(foreach path,$(shell $(GO) list ./... | grep -v vendor | sed -e s#github.com/kakao/varlog/##),$(RM) -f $(path)/*_mock.go;) -.PHONY: deps -deps: - GO111MODULE=off $(GO) get golang.org/x/tools/cmd/goimports - GO111MODULE=off $(GO) get golang.org/x/lint/golint - GO111MODULE=off $(GO) get golang.org/x/tools/cmd/stringer - GO111MODULE=off $(GO) get github.com/gogo/protobuf/protoc-gen-gogo - GO111MODULE=off $(GO) get github.com/golang/mock/mockgen - -.PHONY: check -check: check_proto - -.PHONY: check_proto -check_proto: -ifneq ($(HAS_PROTOC),true) - @echo "error: $(PROTOC) not installed" - @false -endif - @echo "ok: $(PROTOC)" -ifneq ($(HAS_VALID_PROTOC),true) - @echo "error: $(shell $(PROTOC) --version) invalid version" - @false -endif - @echo "ok: $(shell $(PROTOC) --version)" -ifneq ($(HAS_GRPC_PLUGIN),true) - @echo "error: $(GRPC_GO_PLUGIN) not installed" - @false -endif - @echo "ok: $(GRPC_GO_PLUGIN)" - -.PHONY: docker image push \ - image_vms image_mr image_sn \ - push_vms push_mr push_sn - -VERSION := $(shell cat $(MAKEFILE_DIR)/VERSION) +# docker +DOCKERFILE := $(CURDIR)/docker/alpine/Dockerfile +DOCKER_REPOS := ***REMOVED*** +VERSION := $(shell cat $(CURDIR)/VERSION) GIT_HASH := $(shell git describe --always --broken) BUILD_DATE := $(shell date -u '+%FT%T%z') DOCKER_TAG := v$(VERSION)-$(GIT_HASH) -# IMAGE_BUILD_DATE := $(shell date -u '+%Y%m%d%H%M') -# DOCKER_TAG := v$(VERSION)-$(GIT_HASH)-$(IMAGE_BUILD_DATE) +.PHONY: docker image push image_vms image_mr image_sn push_vms push_mr push_sn docker: image push image: image_vms image_mr image_sn - image_vms: - docker build --target varlog-vms -f $(MAKEFILE_DIR)/docker/alpine/Dockerfile -t ***REMOVED***/varlog/varlog-vms:$(DOCKER_TAG) . - + docker build --target varlog-vms -f $(DOCKERFILE) -t $(DOCKER_REPOS)/varlog/varlog-vms:$(DOCKER_TAG) . image_mr: - docker build --target varlog-mr -f $(MAKEFILE_DIR)/docker/alpine/Dockerfile -t ***REMOVED***/varlog/varlog-mr:$(DOCKER_TAG) . - + docker build --target varlog-mr -f $(DOCKERFILE) -t $(DOCKER_REPOS)/varlog/varlog-mr:$(DOCKER_TAG) . image_sn: - docker build --target varlog-sn -f $(MAKEFILE_DIR)/docker/alpine/Dockerfile -t ***REMOVED***/varlog/varlog-sn:$(DOCKER_TAG) . + docker build --target varlog-sn -f $(DOCKERFILE) -t $(DOCKER_REPOS)/varlog/varlog-sn:$(DOCKER_TAG) . push: push_vms push_mr push_sn - push_vms: - docker push ***REMOVED***/varlog/varlog-vms:$(DOCKER_TAG) - + docker push $(DOCKER_REPOS)/varlog/varlog-vms:$(DOCKER_TAG) push_mr: - docker push ***REMOVED***/varlog/varlog-mr:$(DOCKER_TAG) - + docker push $(DOCKER_REPOS)/varlog/varlog-mr:$(DOCKER_TAG) push_sn: - docker push ***REMOVED***/varlog/varlog-sn:$(DOCKER_TAG) - -.PHONY: docker_dev image_dev push_dev \ - image_builder_dev image_rpc_test_server \ - push_builder_dev push_rpc_test_server + docker push $(DOCKER_REPOS)/varlog/varlog-sn:$(DOCKER_TAG) +.PHONY: docker_dev image_dev push_dev image_builder_dev image_rpc_test_server push_builder_dev push_rpc_test_server docker_dev: image_dev push_dev image_dev: image_builder_dev image_rpc_test_server - image_builder_dev: - docker build --target builder-dev -f $(MAKEFILE_DIR)/docker/alpine/Dockerfile -t ***REMOVED***/varlog/builder-dev:$(DOCKER_TAG) . - + docker build --target builder-dev -f $(DOCKERFILE) -t $(DOCKER_REPOS)/varlog/builder-dev:$(DOCKER_TAG) . image_rpc_test_server: - docker build --target rpc-test-server -f $(MAKEFILE_DIR)/docker/alpine/Dockerfile -t ***REMOVED***/varlog/rpc-test-server:$(DOCKER_TAG) . + docker build --target rpc-test-server -f $(DOCKERFILE) -t $(DOCKER_REPOS)/varlog/rpc-test-server:$(DOCKER_TAG) . push_dev: push_builder_dev push_rpc_test_server - push_builder_dev: - docker push ***REMOVED***/varlog/builder-dev:$(DOCKER_TAG) - + docker push $(DOCKER_REPOS)/varlog/builder-dev:$(DOCKER_TAG) push_rpc_test_server: - docker push ***REMOVED***/varlog/rpc-test-server:$(DOCKER_TAG) + docker push $(DOCKER_REPOS)/varlog/rpc-test-server:$(DOCKER_TAG) .PHONY: kustomize sandbox KUSTOMIZE_ENV := dev @@ -292,6 +190,68 @@ ifeq ($(BUILD_ENV),pm) KUSTOMIZE_ENV := pm endif kustomize: - @sed "s/IMAGE_TAG/$(DOCKER_TAG)/" $(MAKEFILE_DIR)/deploy/k8s/$(KUSTOMIZE_ENV)/kustomization.template.yaml > \ - $(MAKEFILE_DIR)/deploy/k8s/$(KUSTOMIZE_ENV)/kustomization.yaml - @echo "Run this command to apply: kubectl apply -k $(MAKEFILE_DIR)/deploy/k8s/$(KUSTOMIZE_ENV)/" + @sed "s/IMAGE_TAG/$(DOCKER_TAG)/" $(CURDIR)/deploy/k8s/$(KUSTOMIZE_ENV)/kustomization.template.yaml > \ + $(CURDIR)/deploy/k8s/$(KUSTOMIZE_ENV)/kustomization.yaml + @echo "Run this command to apply: kubectl apply -k $(CURDIR)/deploy/k8s/$(KUSTOMIZE_ENV)/" + + +# proto +DOCKER_PROTOBUF = $(DOCKER_REPOS)/varlog/protobuf:0.0.3 +PROTOC := docker run --rm -u $(shell id -u) -v$(PWD):$(PWD) -w$(PWD) $(DOCKER_PROTOBUF) --proto_path=$(PWD) +PROTO_SRCS := $(shell find . -name "*.proto" -not -path "./vendor/*") +PROTO_PBS := $(PROTO_SRCS:.proto=.pb.go) +PROTO_INCS := -I$(GOPATH)/src -I$(CURDIR)/proto -I$(CURDIR)/vendor + +.PHONY: proto +proto: $(PROTO_PBS) +$(PROTO_PBS): $(PROTO_SRCS) + @echo $(PROTOC) + for src in $^ ; do \ + $(PROTOC) $(PROTO_INCS) \ + --gogo_out=plugins=grpc,Mgoogle/protobuf/empty.proto=github.com/gogo/protobuf/types,Mgoogle/protobuf/any.proto=github.com/gogo/protobuf/types,Mgoogle/protobuf/duration.proto=github.com/gogo/protobuf/types,paths=source_relative:. $$src ; \ + done + + +# go:generate +.PHONY: generate +generate: + $(GO) generate ./... + + +# tools: lint, fmt, vet +.PHONY: tools fmt lint vet +tools: + $(GO) install golang.org/x/tools/cmd/goimports + $(GO) install golang.org/x/lint/golint + $(GO) install github.com/golang/mock/mockgen + $(GO) get golang.org/x/tools/cmd/stringer + +fmt: + @echo goimports + @$(foreach path,$(PKGS),goimports -w -local $(shell $(GO) list -m) ./$(path);) + @echo gofmt + @$(foreach path,$(PKGS),gofmt -w -s ./$(path);) + +lint: + @echo golint + @$(foreach path,$(PKGS),golint -set_exit_status ./$(path);) + +vet: + @echo govet + @$(foreach path,$(PKGS),$(GO) vet ./$(path);) + +tidy: + $(GO) mod tidy + + +# cleanup +.PHONY: clean clean_mock +clean: + $(GO) clean + $(RM) $(TEST_OUTPUT) $(TEST_REPORT) + $(RM) $(COVERAGE_OUTPUT_TMP) $(COVERAGE_OUTPUT) $(COVERAGE_REPORT) + $(RM) $(BENCH_OUTPUT) $(BENCH_REPORT) + $(RM) $(VMS) $(VMC) $(VSN) $(VMR) $(SNTOOL) $(RPC_TEST_SERVER) $(BENCHMARK) $(RPCBENCH_SERVER) $(RPCBENCH_CLIENT) + +clean_mock: + @$(foreach path,$(shell $(GO) list ./... | grep -v vendor | sed -e s#github.com/kakao/varlog/##),$(RM) -f $(path)/*_mock.go;) diff --git a/build/reports/.keep b/TOPIC similarity index 100% rename from build/reports/.keep rename to TOPIC diff --git a/cmd/metadata_repository/app/metadata_repository.go b/cmd/metadata_repository/app/metadata_repository.go index a5836c33d..555e071ea 100644 --- a/cmd/metadata_repository/app/metadata_repository.go +++ b/cmd/metadata_repository/app/metadata_repository.go @@ -39,10 +39,8 @@ func Main(opts *metadata_repository.MetadataRepositoryOptions) error { sigC := make(chan os.Signal, 1) signal.Notify(sigC, os.Interrupt, syscall.SIGTERM) go func() { - select { - case <-sigC: - mr.Close() - } + <-sigC + mr.Close() }() mr.Wait() diff --git a/cmd/rpcbench/server/main.go b/cmd/rpcbench/server/main.go index 5d5a1160a..57637f80a 100644 --- a/cmd/rpcbench/server/main.go +++ b/cmd/rpcbench/server/main.go @@ -109,10 +109,8 @@ func main() { sigC := make(chan os.Signal, 1) signal.Notify(sigC, os.Interrupt, syscall.SIGTERM) go func() { - select { - case <-sigC: - svr.Stop() - } + <-sigC + svr.Stop() }() var grp errgroup.Group diff --git a/cmd/storagenode/app/flags.go b/cmd/storagenode/app/flags.go index 3308395f2..27d0dab67 100644 --- a/cmd/storagenode/app/flags.go +++ b/cmd/storagenode/app/flags.go @@ -89,7 +89,7 @@ var ( } flagDisableDeleteUncommittedSync = vflag.FlagDescriptor{ Name: "disable-delete-uncommitted-sync", - Aliases: []string{"without-delete-uncommited-sync", "no-delete-uncommitted-sync"}, + Aliases: []string{"without-delete-uncommitted-sync", "no-delete-uncommitted-sync"}, EnvVars: []string{"DISABLE_DELETE_UNCOMMITTED_SYNC"}, } flagMemTableSizeBytes = vflag.FlagDescriptor{ diff --git a/cmd/storagenode/app/storagenode.go b/cmd/storagenode/app/storagenode.go index f2565d40c..e7e3d92a6 100644 --- a/cmd/storagenode/app/storagenode.go +++ b/cmd/storagenode/app/storagenode.go @@ -95,10 +95,8 @@ func Main(c *cli.Context) error { sigC := make(chan os.Signal, 1) signal.Notify(sigC, os.Interrupt, syscall.SIGTERM) go func() { - select { - case <-sigC: - sn.Close() - } + <-sigC + sn.Close() }() return sn.Run() diff --git a/cmd/vmc/app/add.go b/cmd/vmc/app/add.go index 2550984dd..f777f6767 100644 --- a/cmd/vmc/app/add.go +++ b/cmd/vmc/app/add.go @@ -6,6 +6,7 @@ import ( "github.com/gogo/protobuf/proto" "go.uber.org/zap" + "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/varlog" ) @@ -18,10 +19,18 @@ func (app *VMCApp) addStorageNode(snAddr string) { ) } -func (app *VMCApp) addLogStream() { +func (app *VMCApp) addTopic() { app.withExecutionContext( func(ctx context.Context, cli varlog.ClusterManagerClient) (proto.Message, error) { - return cli.AddLogStream(ctx, nil) + return cli.AddTopic(ctx) + }, + ) +} + +func (app *VMCApp) addLogStream(topicID types.TopicID) { + app.withExecutionContext( + func(ctx context.Context, cli varlog.ClusterManagerClient) (proto.Message, error) { + return cli.AddLogStream(ctx, topicID, nil) }, ) } diff --git a/cmd/vmc/app/cli.go b/cmd/vmc/app/cli.go index 47cce231c..c159a65d7 100644 --- a/cmd/vmc/app/cli.go +++ b/cmd/vmc/app/cli.go @@ -72,15 +72,32 @@ func (app *VMCApp) initAddCmd() *cli.Command { return nil } + // vmc add topic + tpCmd := newTopicCmd() + tpCmd.Flags = append(tpCmd.Flags, &cli.StringFlag{}) + tpCmd.Action = func(c *cli.Context) error { + app.addTopic() + return nil + } + // vmc add logstream lsCmd := newLSCmd() - lsCmd.Flags = append(lsCmd.Flags, &cli.StringFlag{}) + lsCmd.Flags = append(lsCmd.Flags, &cli.StringFlag{ + Name: "topic-id", + Usage: "topic identifier", + EnvVars: []string{"TOPIC_ID"}, + Required: true, + }) lsCmd.Action = func(c *cli.Context) error { - app.addLogStream() + topicID, err := types.ParseTopicID(c.String("topic-id")) + if err != nil { + return err + } + app.addLogStream(topicID) return nil } - cmd.Subcommands = append(cmd.Subcommands, snCmd, lsCmd) + cmd.Subcommands = append(cmd.Subcommands, snCmd, tpCmd, lsCmd) return cmd } @@ -110,18 +127,30 @@ func (app *VMCApp) initRmCmd() *cli.Command { // vmc remove logstream lsCmd := newLSCmd() - lsCmd.Flags = append(lsCmd.Flags, &cli.StringFlag{ - Name: "log-stream-id", - Usage: "log stream identifier", - EnvVars: []string{"LOG_STREAM_ID"}, - Required: true, - }) + lsCmd.Flags = append(lsCmd.Flags, + &cli.StringFlag{ + Name: "topic-id", + Usage: "topic identifier", + EnvVars: []string{"TOPIC_ID"}, + Required: true, + }, + &cli.StringFlag{ + Name: "log-stream-id", + Usage: "log stream identifier", + EnvVars: []string{"LOG_STREAM_ID"}, + Required: true, + }, + ) lsCmd.Action = func(c *cli.Context) error { + topicID, err := types.ParseTopicID(c.String("topic-id")) + if err != nil { + return err + } lsID, err := types.ParseLogStreamID(c.String("log-stream-id")) if err != nil { return err } - app.removeLogStream(lsID) + app.removeLogStream(topicID, lsID) return nil } @@ -191,7 +220,6 @@ func (app *VMCApp) initUpdateCmd() *cli.Command { StorageNodeID: pushSNID, Path: pushPath, } - } app.updateLogStream(lsID, popReplica, pushReplica) return nil @@ -208,6 +236,12 @@ func (app *VMCApp) initSealCmd() *cli.Command { } lsCmd := newLSCmd() + lsCmd.Flags = append(lsCmd.Flags, &cli.StringFlag{ + Name: "topic-id", + Usage: "topic identifier", + EnvVars: []string{"TOPIC_ID"}, + Required: true, + }) lsCmd.Flags = append(lsCmd.Flags, &cli.StringFlag{ Name: "log-stream-id", Usage: "log stream identifier", @@ -215,11 +249,15 @@ func (app *VMCApp) initSealCmd() *cli.Command { Required: true, }) lsCmd.Action = func(c *cli.Context) error { + tpID, err := types.ParseTopicID(c.String("topic-id")) + if err != nil { + return err + } lsID, err := types.ParseLogStreamID(c.String("log-stream-id")) if err != nil { return err } - app.sealLogStream(lsID) + app.sealLogStream(tpID, lsID) return nil } @@ -234,6 +272,12 @@ func (app *VMCApp) initUnsealCmd() *cli.Command { } lsCmd := newLSCmd() + lsCmd.Flags = append(lsCmd.Flags, &cli.StringFlag{ + Name: "topic-id", + Usage: "topic identifier", + EnvVars: []string{"TOPIC_ID"}, + Required: true, + }) lsCmd.Flags = append(lsCmd.Flags, &cli.StringFlag{ Name: "log-stream-id", Usage: "log stream identifier", @@ -241,11 +285,15 @@ func (app *VMCApp) initUnsealCmd() *cli.Command { Required: true, }) lsCmd.Action = func(c *cli.Context) error { + tpID, err := types.ParseTopicID(c.String("topic-id")) + if err != nil { + return err + } lsID, err := types.ParseLogStreamID(c.String("log-stream-id")) if err != nil { return err } - app.unsealLogStream(lsID) + app.unsealLogStream(tpID, lsID) return nil } @@ -262,6 +310,12 @@ func (app *VMCApp) initSyncCmd() *cli.Command { lsCmd := newLSCmd() lsCmd.Flags = append(lsCmd.Flags, + &cli.StringFlag{ + Name: "topic-id", + Usage: "topic identifier", + EnvVars: []string{"TOPIC_ID"}, + Required: true, + }, &cli.StringFlag{ Name: "log-stream-id", Usage: "log stream identifier", @@ -282,6 +336,10 @@ func (app *VMCApp) initSyncCmd() *cli.Command { }, ) lsCmd.Action = func(c *cli.Context) error { + tpID, err := types.ParseTopicID(c.String("topic-id")) + if err != nil { + return err + } lsID, err := types.ParseLogStreamID(c.String("log-stream-id")) if err != nil { return err @@ -294,7 +352,7 @@ func (app *VMCApp) initSyncCmd() *cli.Command { if err != nil { return err } - app.syncLogStream(lsID, srcSNID, dstSNID) + app.syncLogStream(tpID, lsID, srcSNID, dstSNID) return nil } @@ -420,6 +478,13 @@ func newSNCmd() *cli.Command { } } +func newTopicCmd() *cli.Command { + return &cli.Command{ + Name: "topic", + Aliases: []string{"t"}, + } +} + func newLSCmd() *cli.Command { return &cli.Command{ Name: "logstream", diff --git a/cmd/vmc/app/ls_recovery.go b/cmd/vmc/app/ls_recovery.go index 9b58d0e41..695d4565c 100644 --- a/cmd/vmc/app/ls_recovery.go +++ b/cmd/vmc/app/ls_recovery.go @@ -19,26 +19,26 @@ func (app *VMCApp) updateLogStream(logStreamID types.LogStreamID, popReplica, pu ) } -func (app *VMCApp) sealLogStream(logStreamID types.LogStreamID) { +func (app *VMCApp) sealLogStream(topicID types.TopicID, logStreamID types.LogStreamID) { app.withExecutionContext( func(ctx context.Context, cli varlog.ClusterManagerClient) (proto.Message, error) { - return cli.Seal(ctx, logStreamID) + return cli.Seal(ctx, topicID, logStreamID) }, ) } -func (app *VMCApp) unsealLogStream(logStreamID types.LogStreamID) { +func (app *VMCApp) unsealLogStream(topicID types.TopicID, logStreamID types.LogStreamID) { app.withExecutionContext( func(ctx context.Context, cli varlog.ClusterManagerClient) (proto.Message, error) { - return cli.Unseal(ctx, logStreamID) + return cli.Unseal(ctx, topicID, logStreamID) }, ) } -func (app *VMCApp) syncLogStream(logStreamID types.LogStreamID, srcStorageNodeID, dstStorageNodeID types.StorageNodeID) { +func (app *VMCApp) syncLogStream(topicID types.TopicID, logStreamID types.LogStreamID, srcStorageNodeID, dstStorageNodeID types.StorageNodeID) { app.withExecutionContext( func(ctx context.Context, cli varlog.ClusterManagerClient) (proto.Message, error) { - return cli.Sync(ctx, logStreamID, srcStorageNodeID, dstStorageNodeID) + return cli.Sync(ctx, topicID, logStreamID, srcStorageNodeID, dstStorageNodeID) }, ) } diff --git a/cmd/vmc/app/remove.go b/cmd/vmc/app/remove.go index 7d56f515f..99118daa1 100644 --- a/cmd/vmc/app/remove.go +++ b/cmd/vmc/app/remove.go @@ -18,10 +18,10 @@ func (app *VMCApp) removeStorageNode(storageNodeID types.StorageNodeID) { ) } -func (app *VMCApp) removeLogStream(logStreamID types.LogStreamID) { +func (app *VMCApp) removeLogStream(topicID types.TopicID, logStreamID types.LogStreamID) { app.withExecutionContext( func(ctx context.Context, cli varlog.ClusterManagerClient) (proto.Message, error) { - return cli.UnregisterLogStream(ctx, logStreamID) + return cli.UnregisterLogStream(ctx, topicID, logStreamID) // TODO (jun): according to options, it can remove log stream replicas of // the log stream. }, diff --git a/cmd/vms/app/vms.go b/cmd/vms/app/vms.go index c68a58984..9d01d6564 100644 --- a/cmd/vms/app/vms.go +++ b/cmd/vms/app/vms.go @@ -46,10 +46,8 @@ func Main(opts *vms.Options) error { sigC := make(chan os.Signal, 1) signal.Notify(sigC, os.Interrupt, syscall.SIGTERM) go func() { - select { - case <-sigC: - cm.Close() - } + <-sigC + cm.Close() }() cm.Wait() diff --git a/go.mod b/go.mod index 27bdf87eb..0adb88602 100644 --- a/go.mod +++ b/go.mod @@ -1,10 +1,10 @@ module github.com/kakao/varlog -go 1.16 +go 1.17 require ( github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d // indirect - github.com/cockroachdb/pebble v0.0.0-20210622171231-4fcf40933159 + github.com/cockroachdb/pebble v0.0.0-20210817201821-5e4468e97817 github.com/docker/go-units v0.4.0 github.com/go-ole/go-ole v1.2.4 // indirect github.com/gogo/protobuf v1.3.2 @@ -35,9 +35,11 @@ require ( go.uber.org/goleak v1.1.10 go.uber.org/multierr v1.7.0 go.uber.org/zap v1.16.0 + golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 golang.org/x/sys v0.0.0-20210304124612-50617c2ba197 golang.org/x/text v0.3.5 // indirect + golang.org/x/tools v0.0.0-20210106214847-113979e3529a google.golang.org/grpc v1.38.0 google.golang.org/grpc/examples v0.0.0-20210521225445-359fdbb7b310 // indirect google.golang.org/protobuf v1.26.0 @@ -46,3 +48,66 @@ require ( k8s.io/apimachinery v0.19.0 k8s.io/client-go v0.19.0 ) + +require ( + github.com/DataDog/zstd v1.4.5 // indirect + github.com/beorn7/perks v1.0.0 // indirect + github.com/cenkalti/backoff/v4 v4.1.1 // indirect + github.com/cespare/xxhash/v2 v2.1.1 // indirect + github.com/cockroachdb/errors v1.8.1 // indirect + github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f // indirect + github.com/cockroachdb/redact v1.0.8 // indirect + github.com/cockroachdb/sentry-go v0.6.1-cockroachdb.2 // indirect + github.com/coreos/go-semver v0.2.0 // indirect + github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7 // indirect + github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf // indirect + github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d // indirect + github.com/davecgh/go-spew v1.1.1 // indirect + github.com/dustin/go-humanize v1.0.0 // indirect + github.com/go-logr/logr v0.2.0 // indirect + github.com/gogo/googleapis v0.0.0-20180223154316-0cd9801be74a // indirect + github.com/golang/protobuf v1.5.2 // indirect + github.com/golang/snappy v0.0.3 // indirect + github.com/googleapis/gnostic v0.4.1 // indirect + github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 // indirect + github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect + github.com/imdario/mergo v0.3.5 // indirect + github.com/json-iterator/go v1.1.10 // indirect + github.com/jtolds/gls v4.20.0+incompatible // indirect + github.com/klauspost/compress v1.11.7 // indirect + github.com/kr/pretty v0.2.0 // indirect + github.com/kr/text v0.1.0 // indirect + github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.1 // indirect + github.com/pmezard/go-difflib v1.0.0 // indirect + github.com/prometheus/client_golang v1.0.0 // indirect + github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 // indirect + github.com/prometheus/common v0.4.1 // indirect + github.com/prometheus/procfs v0.0.2 // indirect + github.com/russross/blackfriday/v2 v2.0.1 // indirect + github.com/shurcooL/sanitized_anchor_name v1.0.0 // indirect + github.com/spf13/pflag v1.0.5 // indirect + github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.21.0 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.0.0-RC1 // indirect + go.opentelemetry.io/otel/internal/metric v0.21.0 // indirect + go.opentelemetry.io/proto/otlp v0.9.0 // indirect + go.uber.org/atomic v1.7.0 // indirect + golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 // indirect + golang.org/x/exp v0.0.0-20200513190911-00229845015e // indirect + golang.org/x/mod v0.3.0 // indirect + golang.org/x/net v0.0.0-20201021035429-f5854403a974 // indirect + golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d // indirect + golang.org/x/time v0.0.0-20191024005414-555d28b269f0 // indirect + golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect + google.golang.org/appengine v1.6.5 // indirect + google.golang.org/genproto v0.0.0-20200806141610-86f49bd18e98 // indirect + gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v2 v2.3.0 // indirect + gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect + k8s.io/klog/v2 v2.2.0 // indirect + k8s.io/utils v0.0.0-20200729134348-d5654de09c73 // indirect + sigs.k8s.io/structured-merge-diff/v4 v4.0.1 // indirect + sigs.k8s.io/yaml v1.2.0 // indirect +) diff --git a/go.sum b/go.sum index 8504a724f..bf04d802c 100644 --- a/go.sum +++ b/go.sum @@ -69,8 +69,8 @@ github.com/cockroachdb/errors v1.8.1 h1:A5+txlVZfOqFBDa4mGz2bUWSp0aHElvHX2bKkdbQ github.com/cockroachdb/errors v1.8.1/go.mod h1:qGwQn6JmZ+oMjuLwjWzUNqblqk0xl4CVV3SQbGwK7Ac= github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f h1:o/kfcElHqOiXqcou5a3rIlMc7oJbMQkeLk0VQJ7zgqY= github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI= -github.com/cockroachdb/pebble v0.0.0-20210622171231-4fcf40933159 h1:W1e2O/5A1fh9Cr4DunvrP0zoN6kKaXaGATc4WCWhs7M= -github.com/cockroachdb/pebble v0.0.0-20210622171231-4fcf40933159/go.mod h1:JXfQr3d+XO4bL1pxGwKKo09xylQSdZ/mpZ9b2wfVcPs= +github.com/cockroachdb/pebble v0.0.0-20210817201821-5e4468e97817 h1:icLlV0p22w7vepuNCF4h8Qvo5hcpoi0ORSIfCqaTYPc= +github.com/cockroachdb/pebble v0.0.0-20210817201821-5e4468e97817/go.mod h1:JXfQr3d+XO4bL1pxGwKKo09xylQSdZ/mpZ9b2wfVcPs= github.com/cockroachdb/redact v1.0.8 h1:8QG/764wK+vmEYoOlfobpe12EQcS81ukx/a4hdVMxNw= github.com/cockroachdb/redact v1.0.8/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg= github.com/cockroachdb/sentry-go v0.6.1-cockroachdb.2 h1:IKgmqgMQlVJIZj19CdocBeSfSaiCbEBZGKODaixqtHM= diff --git a/internal/metadata_repository/dummy_storagenode_client_factory_impl.go b/internal/metadata_repository/dummy_storagenode_client_factory_impl.go index 7d7a71086..ec6ae55fa 100644 --- a/internal/metadata_repository/dummy_storagenode_client_factory_impl.go +++ b/internal/metadata_repository/dummy_storagenode_client_factory_impl.go @@ -26,7 +26,7 @@ func (rc *EmptyStorageNodeClient) GetReport() (*snpb.GetReportResponse, error) { return &snpb.GetReportResponse{}, nil } -func (rc *EmptyStorageNodeClient) Commit(gls snpb.CommitRequest) error { +func (rc *EmptyStorageNodeClient) Commit(snpb.CommitRequest) error { return nil } @@ -34,38 +34,38 @@ func (rc *EmptyStorageNodeClient) Close() error { return nil } -func (r *EmptyStorageNodeClient) PeerAddress() string { +func (rc *EmptyStorageNodeClient) PeerAddress() string { panic("not implemented") } -func (r *EmptyStorageNodeClient) PeerStorageNodeID() types.StorageNodeID { +func (rc *EmptyStorageNodeClient) PeerStorageNodeID() types.StorageNodeID { panic("not implemented") } -func (r *EmptyStorageNodeClient) GetMetadata(ctx context.Context) (*varlogpb.StorageNodeMetadataDescriptor, error) { +func (rc *EmptyStorageNodeClient) GetMetadata(context.Context) (*varlogpb.StorageNodeMetadataDescriptor, error) { panic("not implemented") } -func (r *EmptyStorageNodeClient) AddLogStream(ctx context.Context, logStreamID types.LogStreamID, path string) error { +func (rc *EmptyStorageNodeClient) AddLogStreamReplica(context.Context, types.TopicID, types.LogStreamID, string) error { panic("not implemented") } -func (r *EmptyStorageNodeClient) RemoveLogStream(ctx context.Context, logStreamID types.LogStreamID) error { +func (rc *EmptyStorageNodeClient) RemoveLogStream(context.Context, types.TopicID, types.LogStreamID) error { panic("not implemented") } -func (r *EmptyStorageNodeClient) Seal(ctx context.Context, logStreamID types.LogStreamID, lastCommittedGLSN types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { +func (rc *EmptyStorageNodeClient) Seal(context.Context, types.TopicID, types.LogStreamID, types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { panic("not implemented") } -func (r *EmptyStorageNodeClient) Unseal(ctx context.Context, logStreamID types.LogStreamID, replicas []snpb.Replica) error { +func (rc *EmptyStorageNodeClient) Unseal(context.Context, types.TopicID, types.LogStreamID, []varlogpb.Replica) error { panic("not implemented") } -func (r *EmptyStorageNodeClient) Sync(ctx context.Context, logStreamID types.LogStreamID, backupStorageNodeID types.StorageNodeID, backupAddress string, lastGLSN types.GLSN) (*snpb.SyncStatus, error) { +func (rc *EmptyStorageNodeClient) Sync(context.Context, types.TopicID, types.LogStreamID, types.StorageNodeID, string, types.GLSN) (*snpb.SyncStatus, error) { panic("not implemented") } -func (r *EmptyStorageNodeClient) GetPrevCommitInfo(ctx context.Context, hwm types.GLSN) (*snpb.GetPrevCommitInfoResponse, error) { +func (rc *EmptyStorageNodeClient) GetPrevCommitInfo(context.Context, types.Version) (*snpb.GetPrevCommitInfoResponse, error) { panic("not implemented") } @@ -86,19 +86,19 @@ func (rcf *EmptyStorageNodeClientFactory) GetManagementClient(context.Context, t type DummyStorageNodeClientStatus int32 -const DefaultDelay time.Duration = 500 * time.Microsecond +const DefaultDelay = 500 * time.Microsecond const ( - DUMMY_STORAGENODE_CLIENT_STATUS_RUNNING DummyStorageNodeClientStatus = iota - DUMMY_STORAGENODE_CLIENT_STATUS_CLOSED - DUMMY_STORAGENODE_CLIENT_STATUS_CRASH + DummyStorageNodeClientStatusRunning DummyStorageNodeClientStatus = iota + DummyStorageNodeClientStatusClosed + DummyStorageNodeClientStatusCrash ) type DummyStorageNodeClient struct { storageNodeID types.StorageNodeID logStreamIDs []types.LogStreamID - knownHighWatermark []types.GLSN + knownVersion []types.Version uncommittedLLSNOffset []types.LLSN uncommittedLLSNLength []uint64 commitResultHistory [][]snpb.LogStreamCommitInfo @@ -147,15 +147,15 @@ func NewDummyStorageNodeClientFactory(nrLogStreams int, manual bool) *DummyStora return fac } -func (fac *DummyStorageNodeClientFactory) getStorageNodeClient(ctx context.Context, snID types.StorageNodeID) (*DummyStorageNodeClient, error) { - status := DUMMY_STORAGENODE_CLIENT_STATUS_RUNNING +func (fac *DummyStorageNodeClientFactory) getStorageNodeClient(_ context.Context, snID types.StorageNodeID) (*DummyStorageNodeClient, error) { + status := DummyStorageNodeClientStatusRunning LSIDs := make([]types.LogStreamID, fac.nrLogStreams) for i := 0; i < fac.nrLogStreams; i++ { LSIDs[i] = types.LogStreamID(snID) + types.LogStreamID(i) } - knownHighWatermark := make([]types.GLSN, fac.nrLogStreams) + knownVersion := make([]types.Version, fac.nrLogStreams) uncommittedLLSNOffset := make([]types.LLSN, fac.nrLogStreams) for i := 0; i < fac.nrLogStreams; i++ { @@ -169,7 +169,7 @@ func (fac *DummyStorageNodeClientFactory) getStorageNodeClient(ctx context.Conte manual: fac.manual, storageNodeID: snID, logStreamIDs: LSIDs, - knownHighWatermark: knownHighWatermark, + knownVersion: knownVersion, uncommittedLLSNOffset: uncommittedLLSNOffset, uncommittedLLSNLength: uncommittedLLSNLength, commitResultHistory: commitResultHistory, @@ -191,7 +191,7 @@ func (fac *DummyStorageNodeClientFactory) GetReporterClient(ctx context.Context, return fac.getStorageNodeClient(ctx, sn.StorageNodeID) } -func (fac *DummyStorageNodeClientFactory) GetManagementClient(ctx context.Context, clusterID types.ClusterID, address string, logger *zap.Logger) (snc.StorageNodeManagementClient, error) { +func (fac *DummyStorageNodeClientFactory) GetManagementClient(ctx context.Context, _ types.ClusterID, address string, _ *zap.Logger) (snc.StorageNodeManagementClient, error) { // cheating for test snID, err := strconv.Atoi(address) if err != nil { @@ -229,9 +229,9 @@ func (r *DummyStorageNodeClient) GetReport() (*snpb.GetReportResponse, error) { r.mu.Lock() defer r.mu.Unlock() - if r.status == DUMMY_STORAGENODE_CLIENT_STATUS_CRASH { + if r.status == DummyStorageNodeClientStatusCrash { return nil, errors.New("crash") - } else if r.status == DUMMY_STORAGENODE_CLIENT_STATUS_CLOSED { + } else if r.status == DummyStorageNodeClientStatusClosed { return nil, errors.New("closed") } @@ -248,7 +248,7 @@ func (r *DummyStorageNodeClient) GetReport() (*snpb.GetReportResponse, error) { for i, lsID := range r.logStreamIDs { u := snpb.LogStreamUncommitReport{ LogStreamID: lsID, - HighWatermark: r.knownHighWatermark[i], + Version: r.knownVersion[i], UncommittedLLSNOffset: r.uncommittedLLSNOffset[i], UncommittedLLSNLength: r.uncommittedLLSNLength[i], } @@ -264,9 +264,9 @@ func (r *DummyStorageNodeClient) Commit(cr snpb.CommitRequest) error { r.mu.Lock() defer r.mu.Unlock() - if r.status == DUMMY_STORAGENODE_CLIENT_STATUS_CRASH { + if r.status == DummyStorageNodeClientStatusCrash { return errors.New("crash") - } else if r.status == DUMMY_STORAGENODE_CLIENT_STATUS_CLOSED { + } else if r.status == DummyStorageNodeClientStatusClosed { return errors.New("closed") } @@ -280,19 +280,18 @@ func (r *DummyStorageNodeClient) Commit(cr snpb.CommitRequest) error { return nil } - if r.knownHighWatermark[idx] >= cr.CommitResult.HighWatermark { + if r.knownVersion[idx] >= cr.CommitResult.Version { //continue return nil } - r.knownHighWatermark[idx] = cr.CommitResult.HighWatermark + r.knownVersion[idx] = cr.CommitResult.Version r.commitResultHistory[idx] = append(r.commitResultHistory[idx], snpb.LogStreamCommitInfo{ LogStreamID: cr.CommitResult.LogStreamID, CommittedLLSNOffset: r.uncommittedLLSNOffset[idx], CommittedGLSNOffset: cr.CommitResult.CommittedGLSNOffset, CommittedGLSNLength: cr.CommitResult.CommittedGLSNLength, - HighWatermark: cr.CommitResult.HighWatermark, - PrevHighWatermark: cr.CommitResult.PrevHighWatermark, + Version: cr.CommitResult.Version, }) r.uncommittedLLSNOffset[idx] += types.LLSN(cr.CommitResult.CommittedGLSNLength) @@ -307,17 +306,17 @@ func (r *DummyStorageNodeClient) Close() error { r.mu.Lock() defer r.mu.Unlock() - if r.status != DUMMY_STORAGENODE_CLIENT_STATUS_CRASH && + if r.status != DummyStorageNodeClientStatusCrash && r.ref == 0 { r.factory.m.Delete(r.storageNodeID) - r.status = DUMMY_STORAGENODE_CLIENT_STATUS_CLOSED + r.status = DummyStorageNodeClientStatusClosed } return nil } -func (a *DummyStorageNodeClientFactory) lookupClient(snID types.StorageNodeID) *DummyStorageNodeClient { - f, ok := a.m.Load(snID) +func (fac *DummyStorageNodeClientFactory) lookupClient(snID types.StorageNodeID) *DummyStorageNodeClient { + f, ok := fac.m.Load(snID) if !ok { return nil } @@ -325,9 +324,9 @@ func (a *DummyStorageNodeClientFactory) lookupClient(snID types.StorageNodeID) * return f.(*DummyStorageNodeClient) } -func (a *DummyStorageNodeClientFactory) getClientIDs() []types.StorageNodeID { +func (fac *DummyStorageNodeClientFactory) getClientIDs() []types.StorageNodeID { var ids []types.StorageNodeID - a.m.Range(func(key, _ interface{}) bool { + fac.m.Range(func(key, _ interface{}) bool { ids = append(ids, key.(types.StorageNodeID)) return true }) @@ -359,15 +358,15 @@ func (r *DummyStorageNodeClient) numUncommitted(idx int) uint64 { return r.uncommittedLLSNLength[idx] } -func (r *DummyStorageNodeClient) getKnownHighWatermark(idx int) types.GLSN { +func (r *DummyStorageNodeClient) getKnownVersion(idx int) types.Version { r.mu.Lock() defer r.mu.Unlock() - return r.knownHighWatermark[idx] + return r.knownVersion[idx] } -func (a *DummyStorageNodeClientFactory) crashRPC(snID types.StorageNodeID) { - f, ok := a.m.Load(snID) +func (fac *DummyStorageNodeClientFactory) crashRPC(snID types.StorageNodeID) { + f, ok := fac.m.Load(snID) if !ok { fmt.Printf("notfound\n") return @@ -378,7 +377,7 @@ func (a *DummyStorageNodeClientFactory) crashRPC(snID types.StorageNodeID) { cli.mu.Lock() defer cli.mu.Unlock() - cli.status = DUMMY_STORAGENODE_CLIENT_STATUS_CRASH + cli.status = DummyStorageNodeClientStatusCrash } func (r *DummyStorageNodeClient) numLogStreams() int { @@ -395,8 +394,8 @@ func (r *DummyStorageNodeClient) logStreamID(idx int) types.LogStreamID { return r.logStreamIDs[idx] } -func (a *DummyStorageNodeClientFactory) recoverRPC(snID types.StorageNodeID) { - f, ok := a.m.Load(snID) +func (fac *DummyStorageNodeClientFactory) recoverRPC(snID types.StorageNodeID) { + f, ok := fac.m.Load(snID) if !ok { return } @@ -410,14 +409,14 @@ func (a *DummyStorageNodeClientFactory) recoverRPC(snID types.StorageNodeID) { manual: old.manual, storageNodeID: old.storageNodeID, logStreamIDs: old.logStreamIDs, - knownHighWatermark: old.knownHighWatermark, + knownVersion: old.knownVersion, uncommittedLLSNOffset: old.uncommittedLLSNOffset, uncommittedLLSNLength: old.uncommittedLLSNLength, - status: DUMMY_STORAGENODE_CLIENT_STATUS_RUNNING, + status: DummyStorageNodeClientStatusRunning, factory: old.factory, } - a.m.Store(snID, cli) + fac.m.Store(snID, cli) } func (r *DummyStorageNodeClient) PeerAddress() string { @@ -427,12 +426,12 @@ func (r *DummyStorageNodeClient) PeerStorageNodeID() types.StorageNodeID { return r.storageNodeID } -func (r *DummyStorageNodeClient) GetMetadata(ctx context.Context) (*varlogpb.StorageNodeMetadataDescriptor, error) { +func (r *DummyStorageNodeClient) GetMetadata(context.Context) (*varlogpb.StorageNodeMetadataDescriptor, error) { r.mu.Lock() defer r.mu.Unlock() status := varlogpb.StorageNodeStatusRunning - if r.status != DUMMY_STORAGENODE_CLIENT_STATUS_RUNNING { + if r.status != DummyStorageNodeClientStatusRunning { status = varlogpb.StorageNodeStatusDeleted } @@ -441,15 +440,17 @@ func (r *DummyStorageNodeClient) GetMetadata(ctx context.Context) (*varlogpb.Sto logStreams = append(logStreams, varlogpb.LogStreamMetadataDescriptor{ StorageNodeID: r.storageNodeID, LogStreamID: lsID, - HighWatermark: r.knownHighWatermark[i], + Version: r.knownVersion[i], }) } meta := &varlogpb.StorageNodeMetadataDescriptor{ StorageNode: &varlogpb.StorageNodeDescriptor{ - StorageNodeID: r.storageNodeID, - Address: r.PeerAddress(), - Status: status, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: r.storageNodeID, + Address: r.PeerAddress(), + }, + Status: status, }, LogStreams: logStreams, } @@ -457,33 +458,33 @@ func (r *DummyStorageNodeClient) GetMetadata(ctx context.Context) (*varlogpb.Sto return meta, nil } -func (r *DummyStorageNodeClient) AddLogStream(ctx context.Context, logStreamID types.LogStreamID, path string) error { +func (r *DummyStorageNodeClient) AddLogStreamReplica(context.Context, types.TopicID, types.LogStreamID, string) error { panic("not implemented") } -func (r *DummyStorageNodeClient) RemoveLogStream(ctx context.Context, logStreamID types.LogStreamID) error { +func (r *DummyStorageNodeClient) RemoveLogStream(context.Context, types.TopicID, types.LogStreamID) error { panic("not implemented") } -func (r *DummyStorageNodeClient) Seal(ctx context.Context, logStreamID types.LogStreamID, lastCommittedGLSN types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { +func (r *DummyStorageNodeClient) Seal(context.Context, types.TopicID, types.LogStreamID, types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { panic("not implemented") } -func (r *DummyStorageNodeClient) Unseal(ctx context.Context, logStreamID types.LogStreamID, replicas []snpb.Replica) error { +func (r *DummyStorageNodeClient) Unseal(context.Context, types.TopicID, types.LogStreamID, []varlogpb.Replica) error { panic("not implemented") } -func (r *DummyStorageNodeClient) Sync(ctx context.Context, logStreamID types.LogStreamID, backupStorageNodeID types.StorageNodeID, backupAddress string, lastGLSN types.GLSN) (*snpb.SyncStatus, error) { +func (r *DummyStorageNodeClient) Sync(context.Context, types.TopicID, types.LogStreamID, types.StorageNodeID, string, types.GLSN) (*snpb.SyncStatus, error) { panic("not implemented") } -func (r *DummyStorageNodeClient) lookupPrevCommitInfo(idx int, hwm types.GLSN) (snpb.LogStreamCommitInfo, bool) { +func (r *DummyStorageNodeClient) lookupCommitInfo(idx int, ver types.Version) (snpb.LogStreamCommitInfo, bool) { i := sort.Search(len(r.commitResultHistory[idx]), func(i int) bool { - return r.commitResultHistory[idx][i].PrevHighWatermark >= hwm + return r.commitResultHistory[idx][i].Version >= ver }) if i < len(r.commitResultHistory[idx]) && - r.commitResultHistory[idx][i].PrevHighWatermark == hwm { + r.commitResultHistory[idx][i].Version == ver { return r.commitResultHistory[idx][i], true } @@ -494,7 +495,7 @@ func (r *DummyStorageNodeClient) lookupPrevCommitInfo(idx int, hwm types.GLSN) ( return snpb.LogStreamCommitInfo{}, false } -func (r *DummyStorageNodeClient) GetPrevCommitInfo(ctx context.Context, hwm types.GLSN) (*snpb.GetPrevCommitInfoResponse, error) { +func (r *DummyStorageNodeClient) GetPrevCommitInfo(_ context.Context, ver types.Version) (*snpb.GetPrevCommitInfoResponse, error) { ci := &snpb.GetPrevCommitInfoResponse{ StorageNodeID: r.storageNodeID, } @@ -510,15 +511,14 @@ func (r *DummyStorageNodeClient) GetPrevCommitInfo(ctx context.Context, hwm type HighestWrittenLLSN: r.uncommittedLLSNOffset[i] + types.LLSN(r.uncommittedLLSNLength[i]) - types.MinLLSN, } - if r.knownHighWatermark[i] <= hwm { + if r.knownVersion[i] <= ver { lsci.Status = snpb.GetPrevCommitStatusNotFound - } else if cr, ok := r.lookupPrevCommitInfo(i, hwm); ok { + } else if cr, ok := r.lookupCommitInfo(i, ver+1); ok { lsci.Status = snpb.GetPrevCommitStatusOK lsci.CommittedLLSNOffset = cr.CommittedLLSNOffset lsci.CommittedGLSNOffset = cr.CommittedGLSNOffset lsci.CommittedGLSNLength = cr.CommittedGLSNLength - lsci.HighWatermark = cr.HighWatermark - lsci.PrevHighWatermark = cr.PrevHighWatermark + lsci.Version = cr.Version } else { lsci.Status = snpb.GetPrevCommitStatusNotFound } diff --git a/internal/metadata_repository/in_memory_metadata_repository.go b/internal/metadata_repository/in_memory_metadata_repository.go deleted file mode 100644 index b66740751..000000000 --- a/internal/metadata_repository/in_memory_metadata_repository.go +++ /dev/null @@ -1,122 +0,0 @@ -package metadata_repository - -import ( - "context" - "errors" - "sync" - - "github.com/kakao/varlog/pkg/logc" - "github.com/kakao/varlog/pkg/types" - "github.com/kakao/varlog/pkg/verrors" - "github.com/kakao/varlog/proto/mrpb" - "github.com/kakao/varlog/proto/snpb" - "github.com/kakao/varlog/proto/varlogpb" -) - -type InMemoryMetadataRepository struct { - metadata varlogpb.MetadataDescriptor - commitHistory []*mrpb.LogStreamCommitResults - penddingC chan *snpb.LogStreamUncommitReport - commitC chan *mrpb.LogStreamCommitResults - storageMap map[types.StorageNodeID]logc.LogIOClient - mu sync.RWMutex -} - -func NewInMemoryMetadataRepository() *InMemoryMetadataRepository { - r := &InMemoryMetadataRepository{} - return r -} - -func (r *InMemoryMetadataRepository) Close() error { - return nil -} - -func (r *InMemoryMetadataRepository) RegisterStorageNode(ctx context.Context, sn *varlogpb.StorageNodeDescriptor) error { - r.mu.Lock() - defer r.mu.Unlock() - - if err := r.metadata.InsertStorageNode(sn); err != nil { - return verrors.ErrAlreadyExists - } - - return nil -} - -func (r *InMemoryMetadataRepository) UnregisterStorageNode(ctx context.Context, snID types.StorageNodeID) error { - r.mu.Lock() - defer r.mu.Unlock() - - r.metadata.DeleteStorageNode(snID) - - return nil -} - -func (r *InMemoryMetadataRepository) RegisterLogStream(ctx context.Context, ls *varlogpb.LogStreamDescriptor) error { - r.mu.Lock() - defer r.mu.Unlock() - - if err := r.metadata.InsertLogStream(ls); err != nil { - return verrors.ErrAlreadyExists - } - - return nil -} - -func (r *InMemoryMetadataRepository) UnregisterLogStream(ctx context.Context, lsID types.LogStreamID) error { - r.mu.Lock() - defer r.mu.Unlock() - - r.metadata.DeleteLogStream(lsID) - - return nil -} - -func (r *InMemoryMetadataRepository) UpdateLogStream(ctx context.Context, ls *varlogpb.LogStreamDescriptor) error { - r.mu.Lock() - defer r.mu.Unlock() - - if err := r.metadata.UpdateLogStream(ls); err != nil { - return verrors.ErrNotExist - } - - return nil -} - -func (r *InMemoryMetadataRepository) GetMetadata(ctx context.Context) (*varlogpb.MetadataDescriptor, error) { - r.mu.RLock() - defer r.mu.RUnlock() - - return &r.metadata, nil -} - -func (r *InMemoryMetadataRepository) Seal(ctx context.Context, lsID types.LogStreamID) (types.GLSN, error) { - return types.GLSN(0), errors.New("not yet implemented") -} - -func (r *InMemoryMetadataRepository) Unseal(ctx context.Context, lsID types.LogStreamID) error { - return errors.New("not yet implemented") -} - -func (r *InMemoryMetadataRepository) aggregator() { - // not yet impliemented - // call GetReport() to all storage node -} - -func (r *InMemoryMetadataRepository) committer() { - // not yet impliemented - // calcurate glsn -} - -func (r *InMemoryMetadataRepository) deliverer() { - // not yet impliemented - // call Commit() to storage node -} - -func (r *InMemoryMetadataRepository) penddingReport(report *snpb.LogStreamUncommitReport) error { - r.penddingC <- report - return nil -} - -func (r *InMemoryMetadataRepository) deliveryResult(snId types.StorageNodeID, results []*snpb.LogStreamCommitResult) error { - return errors.New("not yet implemented") -} diff --git a/internal/metadata_repository/metadata_repository.go b/internal/metadata_repository/metadata_repository.go index 041ff1382..a84c4164e 100644 --- a/internal/metadata_repository/metadata_repository.go +++ b/internal/metadata_repository/metadata_repository.go @@ -10,6 +10,8 @@ import ( type MetadataRepository interface { RegisterStorageNode(context.Context, *varlogpb.StorageNodeDescriptor) error UnregisterStorageNode(context.Context, types.StorageNodeID) error + RegisterTopic(context.Context, types.TopicID) error + UnregisterTopic(context.Context, types.TopicID) error RegisterLogStream(context.Context, *varlogpb.LogStreamDescriptor) error UnregisterLogStream(context.Context, types.LogStreamID) error UpdateLogStream(context.Context, *varlogpb.LogStreamDescriptor) error diff --git a/internal/metadata_repository/metadata_repository_service.go b/internal/metadata_repository/metadata_repository_service.go index b102712dd..e8d3f0a6a 100644 --- a/internal/metadata_repository/metadata_repository_service.go +++ b/internal/metadata_repository/metadata_repository_service.go @@ -35,6 +35,16 @@ func (s *MetadataRepositoryService) UnregisterStorageNode(ctx context.Context, r return &types.Empty{}, err } +func (s *MetadataRepositoryService) RegisterTopic(ctx context.Context, req *mrpb.TopicRequest) (*types.Empty, error) { + err := s.metaRepos.RegisterTopic(ctx, req.TopicID) + return &types.Empty{}, err +} + +func (s *MetadataRepositoryService) UnregisterTopic(ctx context.Context, req *mrpb.TopicRequest) (*types.Empty, error) { + err := s.metaRepos.UnregisterTopic(ctx, req.TopicID) + return &types.Empty{}, err +} + func (s *MetadataRepositoryService) RegisterLogStream(ctx context.Context, req *mrpb.LogStreamRequest) (*types.Empty, error) { err := s.metaRepos.RegisterLogStream(ctx, req.LogStream) return &types.Empty{}, err diff --git a/internal/metadata_repository/options.go b/internal/metadata_repository/options.go index 79e2b413c..a99db0d83 100644 --- a/internal/metadata_repository/options.go +++ b/internal/metadata_repository/options.go @@ -14,23 +14,23 @@ import ( ) const ( - DefaultRPCBindAddress = "0.0.0.0:9092" - DefaultDebugAddress = "0.0.0.0:9099" - DefaultRaftPort = 10000 - DefaultSnapshotCount uint64 = 10000 - DefaultSnapshotCatchUpCount uint64 = 10000 - DefaultSnapshotPurgeCount uint = 10 - DefaultWalPurgeCount uint = 10 - DefaultLogReplicationFactor int = 1 - DefaultProposeTimeout time.Duration = 100 * time.Millisecond - DefaultRaftTick time.Duration = 100 * time.Millisecond - DefaultRPCTimeout time.Duration = 100 * time.Millisecond - DefaultCommitTick time.Duration = 1 * time.Millisecond - DefaultPromoteTick time.Duration = 100 * time.Millisecond - DefaultRaftDir string = "raftdata" - DefaultLogDir string = "log" - DefaultTelemetryCollectorName string = "nop" - DefaultTelmetryCollectorEndpoint string = "localhost:55680" + DefaultRPCBindAddress = "0.0.0.0:9092" + DefaultDebugAddress = "0.0.0.0:9099" + DefaultRaftPort = 10000 + DefaultSnapshotCount uint64 = 10000 + DefaultSnapshotCatchUpCount uint64 = 10000 + DefaultSnapshotPurgeCount uint = 10 + DefaultWalPurgeCount uint = 10 + DefaultLogReplicationFactor int = 1 + DefaultProposeTimeout = 100 * time.Millisecond + DefaultRaftTick = 100 * time.Millisecond + DefaultRPCTimeout = 100 * time.Millisecond + DefaultCommitTick = 1 * time.Millisecond + DefaultPromoteTick = 100 * time.Millisecond + DefaultRaftDir string = "raftdata" + DefaultLogDir string = "log" + DefaultTelemetryCollectorName string = "nop" + DefaultTelmetryCollectorEndpoint string = "localhost:55680" UnusedRequestIndex uint64 = 0 ) @@ -169,7 +169,7 @@ func (options *MetadataRepositoryOptions) validate() error { if (options.UnsafeNoWal && !options.EnableSML) || (!options.UnsafeNoWal && options.EnableSML) { - return errors.New("only one of wal and sml must be enabled.") + return errors.New("only one of wal and sml must be enabled") } return nil diff --git a/internal/metadata_repository/raft.go b/internal/metadata_repository/raft.go index ba84ba542..e169ceedd 100644 --- a/internal/metadata_repository/raft.go +++ b/internal/metadata_repository/raft.go @@ -48,8 +48,6 @@ type raftNode struct { snapdir string // path to snapshot directory lastIndex uint64 // index of log at start - raftState raft.StateType - snapshotIndex uint64 appliedIndex uint64 @@ -109,7 +107,6 @@ func newRaftNode(options RaftOptions, confChangeC chan raftpb.ConfChange, tmStub *telemetryStub, logger *zap.Logger) *raftNode { - commitC := make(chan *raftCommittedEntry) snapshotC := make(chan struct{}) @@ -274,7 +271,7 @@ func (rc *raftNode) publishEntries(ctx context.Context, ents []raftpb.Entry) boo // after commit, update appliedIndex rc.appliedIndex = ents[i].Index - //TODO:: check neccessary whether send signal replay WAL complete + //TODO:: check necessary whether send signal replay WAL complete /* // special nil commit to signal replay has finished if ents[i].Index == rc.lastIndex { @@ -391,7 +388,7 @@ func (rc *raftNode) replayWAL(snapshot *raftpb.Snapshot) *wal.WAL { zap.Uint64("lastIndex", rc.lastIndex), ) - //TODO:: check neccessary whether send signal replay WAL complete + //TODO:: check necessary whether send signal replay WAL complete return w } @@ -545,12 +542,12 @@ func (rc *raftNode) transferLeadership(wait bool) error { ctx, cancel := context.WithTimeout(context.Background(), 50*rc.raftTick) defer cancel() - rc.node.TransferLeadership(ctx, uint64(rc.membership.getLeader()), uint64(transferee)) + rc.node.TransferLeadership(ctx, rc.membership.getLeader(), uint64(transferee)) timer := time.NewTimer(rc.raftTick) defer timer.Stop() - for wait && uint64(rc.membership.getLeader()) != uint64(transferee) { + for wait && rc.membership.getLeader() != uint64(transferee) { select { case <-ctx.Done(): return ctx.Err() @@ -813,7 +810,7 @@ Loop: for { select { case <-ticker.C: - rc.promoteMember(ctx) + rc.promoteMember() case <-ctx.Done(): break Loop } @@ -822,7 +819,7 @@ Loop: ticker.Stop() } -func (rc *raftNode) promoteMember(ctx context.Context) { +func (rc *raftNode) promoteMember() { if !rc.membership.isLeader() { return } @@ -831,7 +828,7 @@ func (rc *raftNode) promoteMember(ctx context.Context) { leaderMatch := status.Progress[uint64(rc.id)].Match for nodeID, pr := range status.Progress { - if pr.IsLearner && float64(pr.Match) > float64(leaderMatch)*PROMOTE_RATE { + if pr.IsLearner && float64(pr.Match) > float64(leaderMatch)*PromoteRate { r := raftpb.ConfChange{ Type: raftpb.ConfChangeAddNode, NodeID: nodeID, @@ -932,8 +929,6 @@ func (rc *raftNode) recoverMembership(snapshot raftpb.Snapshot) { rc.membership.addMember(nodeID, peer.URL) } } - - return } func (rm *raftMembership) addMember(nodeID vtypes.NodeID, url string) { @@ -944,9 +939,7 @@ func (rm *raftMembership) addMember(nodeID vtypes.NodeID, url string) { return } - if _, ok := rm.learners[nodeID]; ok { - delete(rm.learners, nodeID) - } + delete(rm.learners, nodeID) rm.peers[nodeID] = url rm.members[nodeID] = url @@ -1035,7 +1028,7 @@ func (rm *raftMembership) updateState(state *raft.SoftState) { } if state.Lead != raft.None { - atomic.StoreUint64(&rm.leader, uint64(state.Lead)) + atomic.StoreUint64(&rm.leader, state.Lead) } atomic.StoreUint64((*uint64)(&rm.state), uint64(state.RaftState)) diff --git a/internal/metadata_repository/raft_metadata_repository.go b/internal/metadata_repository/raft_metadata_repository.go index 9643744f4..5627eed28 100644 --- a/internal/metadata_repository/raft_metadata_repository.go +++ b/internal/metadata_repository/raft_metadata_repository.go @@ -3,6 +3,7 @@ package metadata_repository import ( "context" "fmt" + "math" "net/http" "net/http/pprof" "os" @@ -34,7 +35,7 @@ import ( ) const ( - PROMOTE_RATE = 0.9 + PromoteRate = 0.9 ) type ReportCollectorHelper interface { @@ -44,7 +45,7 @@ type ReportCollectorHelper interface { GetLastCommitResults() *mrpb.LogStreamCommitResults - LookupNextCommitResults(types.GLSN) (*mrpb.LogStreamCommitResults, error) + LookupNextCommitResults(types.Version) (*mrpb.LogStreamCommitResults, error) } type RaftMetadataRepository struct { @@ -94,6 +95,9 @@ type RaftMetadataRepository struct { nrReport uint64 nrReportSinceCommit uint64 + // commit helper + topicEndPos map[types.TopicID]int + tmStub *telemetryStub } @@ -137,6 +141,7 @@ func NewRaftMetadataRepository(options *MetadataRepositoryOptions) *RaftMetadata runner: runner.New("mr", options.Logger), sw: stopwaiter.New(), tmStub: tmStub, + topicEndPos: make(map[types.TopicID]int), } mr.storage = NewMetadataStorage(mr.sendAck, options.SnapCount, mr.logger.Named("storage")) @@ -404,7 +409,7 @@ func (mr *RaftMetadataRepository) processReport(ctx context.Context) { } } -func (mr *RaftMetadataRepository) processCommit(ctx context.Context) { +func (mr *RaftMetadataRepository) processCommit(context.Context) { listenNoti := false for c := range mr.commitC { @@ -421,7 +426,7 @@ func (mr *RaftMetadataRepository) processCommit(ctx context.Context) { err = mr.reportCollector.Recover( mr.storage.GetStorageNodes(), mr.storage.GetLogStreams(), - mr.storage.GetFirstCommitResults().GetHighWatermark(), + mr.storage.GetFirstCommitResults().GetVersion(), ) if err != nil && err != verrors.ErrStopped { @@ -441,7 +446,7 @@ func (mr *RaftMetadataRepository) processCommit(ctx context.Context) { } } -func (mr *RaftMetadataRepository) processRNCommit(ctx context.Context) { +func (mr *RaftMetadataRepository) processRNCommit(context.Context) { for d := range mr.rnCommitC { var c *committedEntry var e *mrpb.RaftEntry @@ -620,6 +625,10 @@ func (mr *RaftMetadataRepository) apply(c *committedEntry) { mr.applyRegisterStorageNode(r, e.NodeIndex, e.RequestIndex) case *mrpb.UnregisterStorageNode: mr.applyUnregisterStorageNode(r, e.NodeIndex, e.RequestIndex) + case *mrpb.RegisterTopic: + mr.applyRegisterTopic(r, e.NodeIndex, e.RequestIndex) + case *mrpb.UnregisterTopic: + mr.applyUnregisterTopic(r, e.NodeIndex, e.RequestIndex) case *mrpb.RegisterLogStream: mr.applyRegisterLogStream(r, e.NodeIndex, e.RequestIndex) case *mrpb.UnregisterLogStream: @@ -679,6 +688,56 @@ func (mr *RaftMetadataRepository) applyUnregisterStorageNode(r *mrpb.UnregisterS return nil } +func (mr *RaftMetadataRepository) applyRegisterTopic(r *mrpb.RegisterTopic, nodeIndex, requestIndex uint64) error { + topicDesc := &varlogpb.TopicDescriptor{ + TopicID: r.TopicID, + } + err := mr.storage.RegisterTopic(topicDesc, nodeIndex, requestIndex) + if err != nil { + return err + } + + return nil +} + +func (mr *RaftMetadataRepository) applyUnregisterTopic(r *mrpb.UnregisterTopic, nodeIndex, requestIndex uint64) error { + topic := mr.storage.lookupTopic(r.TopicID) + if topic == nil { + return verrors.ErrNotExist + } + +UnregisterLS: + for _, lsID := range topic.LogStreams { + ls := mr.storage.lookupLogStream(lsID) + if ls == nil { + continue UnregisterLS + } + + err := mr.storage.unregisterLogStream(lsID) + if err != nil { + continue UnregisterLS + } + + for _, replica := range ls.Replicas { + err := mr.reportCollector.UnregisterLogStream(replica.StorageNodeID, lsID) + if err != nil && + err != verrors.ErrNotExist && + err != verrors.ErrStopped { + mr.logger.Panic("could not unregister reporter", zap.String("err", err.Error())) + } + } + + return nil + } + + err := mr.storage.UnregisterTopic(r.TopicID, nodeIndex, requestIndex) + if err != nil { + return err + } + + return nil +} + func (mr *RaftMetadataRepository) applyRegisterLogStream(r *mrpb.RegisterLogStream, nodeIndex, requestIndex uint64) error { err := mr.storage.RegisterLogStream(r.LogStream, nodeIndex, requestIndex) if err != nil { @@ -686,7 +745,7 @@ func (mr *RaftMetadataRepository) applyRegisterLogStream(r *mrpb.RegisterLogStre } for _, replica := range r.LogStream.Replicas { - err := mr.reportCollector.RegisterLogStream(replica.StorageNodeID, r.LogStream.LogStreamID, mr.GetHighWatermark(), varlogpb.LogStreamStatusRunning) + err := mr.reportCollector.RegisterLogStream(r.GetLogStream().GetTopicID(), replica.StorageNodeID, r.LogStream.LogStreamID, mr.GetLastCommitVersion(), varlogpb.LogStreamStatusRunning) if err != nil && err != verrors.ErrExist && err != verrors.ErrStopped { @@ -698,7 +757,7 @@ func (mr *RaftMetadataRepository) applyRegisterLogStream(r *mrpb.RegisterLogStre } func (mr *RaftMetadataRepository) applyUnregisterLogStream(r *mrpb.UnregisterLogStream, nodeIndex, requestIndex uint64) error { - ls := mr.storage.LookupLogStream(r.LogStreamID) + ls := mr.storage.lookupLogStream(r.LogStreamID) if ls == nil { return verrors.ErrNotExist } @@ -721,7 +780,7 @@ func (mr *RaftMetadataRepository) applyUnregisterLogStream(r *mrpb.UnregisterLog } func (mr *RaftMetadataRepository) applyUpdateLogStream(r *mrpb.UpdateLogStream, nodeIndex, requestIndex uint64) error { - ls := mr.storage.LookupLogStream(r.LogStream.LogStreamID) + ls := mr.storage.lookupLogStream(r.LogStream.LogStreamID) if ls == nil { return verrors.ErrNotExist } @@ -757,7 +816,7 @@ func (mr *RaftMetadataRepository) applyUpdateLogStream(r *mrpb.UpdateLogStream, } for _, replica := range r.LogStream.Replicas { - err := mr.reportCollector.RegisterLogStream(replica.StorageNodeID, r.LogStream.LogStreamID, mr.GetHighWatermark(), rcstatus) + err := mr.reportCollector.RegisterLogStream(ls.GetTopicID(), replica.StorageNodeID, r.LogStream.LogStreamID, mr.GetLastCommitVersion(), rcstatus) if err != nil && err != verrors.ErrExist && err != verrors.ErrStopped { @@ -781,9 +840,9 @@ func (mr *RaftMetadataRepository) applyReport(reports *mrpb.Reports) error { continue LS } - if (s.HighWatermark == u.HighWatermark && + if (s.Version == u.Version && s.UncommittedLLSNEnd() < u.UncommittedLLSNEnd()) || - s.HighWatermark < u.HighWatermark { + s.Version < u.Version { mr.storage.UpdateUncommitReport(u.LogStreamID, snID, u) } } @@ -792,6 +851,22 @@ func (mr *RaftMetadataRepository) applyReport(reports *mrpb.Reports) error { return nil } +func topicBoundary(topicLSIDs []TopicLSID, idx int) (begin bool, end bool) { + if idx == 0 { + begin = true + } else { + begin = topicLSIDs[idx].TopicID != topicLSIDs[idx-1].TopicID + } + + if idx == len(topicLSIDs)-1 { + end = true + } else { + end = topicLSIDs[idx].TopicID != topicLSIDs[idx+1].TopicID + } + + return +} + func (mr *RaftMetadataRepository) applyCommit(r *mrpb.Commit, appliedIndex uint64) error { if r.GetNodeID() == mr.nodeID { mr.tmStub.mb.Records("raft_delay").Record(context.TODO(), @@ -807,14 +882,12 @@ func (mr *RaftMetadataRepository) applyCommit(r *mrpb.Commit, appliedIndex uint6 defer mr.storage.ResetUpdateSinceCommit() prevCommitResults := mr.storage.getLastCommitResultsNoLock() - curHWM := prevCommitResults.GetHighWatermark() - trimHWM := types.MaxGLSN - committedOffset := curHWM + types.GLSN(1) + curVer := prevCommitResults.GetVersion() + trimVer := types.MaxVersion + totalCommitted := uint64(0) - crs := &mrpb.LogStreamCommitResults{ - PrevHighWatermark: curHWM, - } + crs := &mrpb.LogStreamCommitResults{} mr.tmStub.mb.Records("mr.reports_log.count").Record(context.Background(), float64(mr.nrReportSinceCommit), @@ -835,54 +908,69 @@ func (mr *RaftMetadataRepository) applyCommit(r *mrpb.Commit, appliedIndex uint6 if mr.storage.NumUpdateSinceCommit() > 0 { st := time.Now() - lsIDs := mr.storage.GetSortedLogStreamIDs() - commitResultsMap := make(map[types.GLSN]*mrpb.LogStreamCommitResults) - crs.CommitResults = make([]snpb.LogStreamCommitResult, 0, len(lsIDs)) + topicLSIDs := mr.storage.GetSortedTopicLogStreamIDs() + crs.CommitResults = make([]snpb.LogStreamCommitResult, 0, len(topicLSIDs)) + + commitResultsMap := make(map[types.Version]*mrpb.LogStreamCommitResults) - for idx, lsID := range lsIDs { - reports := mr.storage.LookupUncommitReports(lsID) - knownHWM, minHWM, nrUncommit := mr.calculateCommit(reports) + committedOffset := types.InvalidGLSN + + //TODO:: apply topic + for idx, topicLSID := range topicLSIDs { + beginTopic, endTopic := topicBoundary(topicLSIDs, idx) + + if beginTopic { + hpos := mr.topicEndPos[topicLSID.TopicID] + + committedOffset, hpos = prevCommitResults.LastHighWatermark(topicLSID.TopicID, hpos) + committedOffset += types.GLSN(1) + + mr.topicEndPos[topicLSID.TopicID] = hpos + } + + reports := mr.storage.LookupUncommitReports(topicLSID.LogStreamID) + knownVer, minVer, knownHWM, nrUncommit := mr.calculateCommit(reports) if reports.Status.Sealed() { nrUncommit = 0 } if reports.Status == varlogpb.LogStreamStatusSealed { - minHWM = curHWM + minVer = curVer } if reports.Status == varlogpb.LogStreamStatusSealing && - mr.getLastCommitted(lsID) <= knownHWM { - if err := mr.storage.SealLogStream(lsID, 0, 0); err == nil { - mr.reportCollector.Seal(lsID) + mr.getLastCommitted(topicLSID.TopicID, topicLSID.LogStreamID, idx) <= knownHWM { + if err := mr.storage.SealLogStream(topicLSID.LogStreamID, 0, 0); err == nil { + mr.reportCollector.Seal(topicLSID.LogStreamID) } } - if minHWM < trimHWM { - trimHWM = minHWM + if minVer < trimVer { + trimVer = minVer } if nrUncommit > 0 { - if knownHWM != curHWM { - baseCommitResults, ok := commitResultsMap[knownHWM] + if knownVer != curVer { + baseCommitResults, ok := commitResultsMap[knownVer] if !ok { - baseCommitResults = mr.storage.lookupNextCommitResultsNoLock(knownHWM) + baseCommitResults = mr.storage.lookupNextCommitResultsNoLock(knownVer) if baseCommitResults == nil { mr.logger.Panic("commit history should be exist", - zap.Uint64("hwm", uint64(knownHWM)), - zap.Uint64("first", uint64(mr.storage.getFirstCommitResultsNoLock().GetHighWatermark())), - zap.Uint64("last", uint64(mr.storage.getLastCommitResultsNoLock().GetHighWatermark())), + zap.Any("ver", knownVer), + zap.Any("first", mr.storage.getFirstCommitResultsNoLock().GetVersion()), + zap.Any("last", mr.storage.getLastCommitResultsNoLock().GetVersion()), ) } - commitResultsMap[knownHWM] = baseCommitResults + commitResultsMap[knownVer] = baseCommitResults } - nrCommitted := mr.numCommitSince(lsID, baseCommitResults, prevCommitResults, idx) + nrCommitted := mr.numCommitSince(topicLSID.TopicID, topicLSID.LogStreamID, baseCommitResults, prevCommitResults, idx) if nrCommitted > nrUncommit { msg := fmt.Sprintf("# of uncommit should be bigger than # of commit:: lsID[%v] cur[%v] first[%v] last[%v] reports[%+v] nrCommitted[%v] nrUncommit[%v]", - lsID, curHWM, - mr.storage.getFirstCommitResultsNoLock().GetHighWatermark(), - mr.storage.getLastCommitResultsNoLock().GetHighWatermark(), + topicLSID.LogStreamID, curVer, + mr.storage.getFirstCommitResultsNoLock().GetVersion(), + mr.storage.getLastCommitResultsNoLock().GetVersion(), reports, nrCommitted, nrUncommit, ) @@ -894,25 +982,31 @@ func (mr *RaftMetadataRepository) applyCommit(r *mrpb.Commit, appliedIndex uint6 } committedLLSNOffset := types.MinLLSN - prevCommitResult, _, ok := prevCommitResults.LookupCommitResult(lsID, idx) + prevCommitResult, _, ok := prevCommitResults.LookupCommitResult(topicLSID.TopicID, topicLSID.LogStreamID, idx) if ok { committedLLSNOffset = prevCommitResult.CommittedLLSNOffset + types.LLSN(prevCommitResult.CommittedGLSNLength) } commit := snpb.LogStreamCommitResult{ - LogStreamID: lsID, + TopicID: topicLSID.TopicID, + LogStreamID: topicLSID.LogStreamID, CommittedLLSNOffset: committedLLSNOffset, CommittedGLSNOffset: committedOffset, CommittedGLSNLength: nrUncommit, } if nrUncommit > 0 { - committedOffset = commit.CommittedGLSNOffset + types.GLSN(commit.CommittedGLSNLength) + committedOffset += types.GLSN(commit.CommittedGLSNLength) } else { - commit.CommittedGLSNOffset = mr.getLastCommitted(lsID) + types.GLSN(1) + commit.CommittedGLSNOffset = mr.getLastCommitted(topicLSID.TopicID, topicLSID.LogStreamID, idx) + types.GLSN(1) commit.CommittedGLSNLength = 0 } + // set highWatermark of topic + if endTopic { + commit.HighWatermark = committedOffset - types.MinGLSN + } + crs.CommitResults = append(crs.CommitResults, commit) totalCommitted += nrUncommit } @@ -924,7 +1018,7 @@ func (mr *RaftMetadataRepository) applyCommit(r *mrpb.Commit, appliedIndex uint6 Value: attribute.StringValue(mr.nodeID.String()), }) } - crs.HighWatermark = curHWM + types.GLSN(totalCommitted) + crs.Version = curVer + 1 if totalCommitted > 0 { if mr.options.EnableSML { @@ -933,7 +1027,7 @@ func (mr *RaftMetadataRepository) applyCommit(r *mrpb.Commit, appliedIndex uint6 } lentry.Payload.SetValue(&mrpb.StateMachineLogCommitResult{ - TrimGlsn: trimHWM, + TrimVersion: trimVer, CommitResult: crs, }) mr.saveSML(lentry) @@ -941,8 +1035,8 @@ func (mr *RaftMetadataRepository) applyCommit(r *mrpb.Commit, appliedIndex uint6 mr.storage.AppendLogStreamCommitHistory(crs) } - if !trimHWM.Invalid() && trimHWM != types.MaxGLSN { - mr.storage.TrimLogStreamCommitHistory(trimHWM) + if trimVer != 0 && trimVer != math.MaxUint64 { + mr.storage.TrimLogStreamCommitHistory(trimVer) } mr.reportCollector.Commit() @@ -979,7 +1073,7 @@ func (mr *RaftMetadataRepository) applyUnseal(r *mrpb.Unseal, nodeIndex, request return err } - mr.reportCollector.Unseal(r.LogStreamID, mr.GetHighWatermark()) + mr.reportCollector.Unseal(r.LogStreamID, mr.GetLastCommitVersion()) return nil } @@ -1022,44 +1116,45 @@ func (mr *RaftMetadataRepository) applyRecoverStateMachine(r *mrpb.RecoverStateM return mr.reportCollector.Recover( mr.storage.GetStorageNodes(), mr.storage.GetLogStreams(), - mr.storage.GetFirstCommitResults().GetHighWatermark(), + mr.storage.GetFirstCommitResults().GetVersion(), ) } -func (mr *RaftMetadataRepository) numCommitSince(lsID types.LogStreamID, base, latest *mrpb.LogStreamCommitResults, hintPos int) uint64 { +func (mr *RaftMetadataRepository) numCommitSince(topicID types.TopicID, lsID types.LogStreamID, base, latest *mrpb.LogStreamCommitResults, hintPos int) uint64 { if latest == nil { return 0 } - start, _, ok := base.LookupCommitResult(lsID, hintPos) + start, _, ok := base.LookupCommitResult(topicID, lsID, hintPos) if !ok { mr.logger.Panic("ls should be exist", zap.Uint64("lsID", uint64(lsID)), ) } - end, _, ok := latest.LookupCommitResult(lsID, hintPos) + end, _, ok := latest.LookupCommitResult(topicID, lsID, hintPos) if !ok { mr.logger.Panic("ls should be exist at latest", zap.Uint64("lsID", uint64(lsID)), ) } - return uint64(end.CommittedLLSNOffset-start.CommittedLLSNOffset) + uint64(end.CommittedGLSNLength) + return uint64(end.CommittedLLSNOffset-start.CommittedLLSNOffset) + end.CommittedGLSNLength } -func (mr *RaftMetadataRepository) calculateCommit(reports *mrpb.LogStreamUncommitReports) (types.GLSN, types.GLSN, uint64) { - var trimHWM types.GLSN = types.MaxGLSN - var knownHWM types.GLSN = types.InvalidGLSN - var beginLLSN types.LLSN = types.InvalidLLSN - var endLLSN types.LLSN = types.InvalidLLSN +func (mr *RaftMetadataRepository) calculateCommit(reports *mrpb.LogStreamUncommitReports) (types.Version, types.Version, types.GLSN, uint64) { + var trimVer = types.MaxVersion + var knownVer = types.InvalidVersion + var beginLLSN = types.InvalidLLSN + var endLLSN = types.InvalidLLSN + var highWatermark = types.InvalidGLSN if reports == nil { - return types.InvalidGLSN, types.InvalidGLSN, 0 + return types.InvalidVersion, types.InvalidVersion, types.InvalidGLSN, 0 } if len(reports.Replicas) < mr.nrReplica { - return types.InvalidGLSN, types.InvalidGLSN, 0 + return types.InvalidVersion, types.InvalidVersion, types.InvalidGLSN, 0 } for _, r := range reports.Replicas { @@ -1071,35 +1166,36 @@ func (mr *RaftMetadataRepository) calculateCommit(reports *mrpb.LogStreamUncommi endLLSN = r.UncommittedLLSNEnd() } - if knownHWM.Invalid() || r.HighWatermark > knownHWM { - // knownHighWatermark 이 다르다면, + if knownVer.Invalid() || r.Version > knownVer { + // knownVersion 이 다르다면, // 일부 SN 이 commitResult 를 받지 못했을 뿐이다. - knownHWM = r.HighWatermark + knownVer = r.Version + highWatermark = r.HighWatermark } - if r.HighWatermark < trimHWM { - trimHWM = r.HighWatermark + if r.Version < trimVer { + trimVer = r.Version } } - if trimHWM == types.MaxGLSN { - trimHWM = types.InvalidGLSN + if trimVer == types.MaxVersion { + trimVer = 0 } if beginLLSN > endLLSN { - return knownHWM, trimHWM, 0 + return knownVer, trimVer, highWatermark, 0 } - return knownHWM, trimHWM, uint64(endLLSN - beginLLSN) + return knownVer, trimVer, highWatermark, uint64(endLLSN - beginLLSN) } -func (mr *RaftMetadataRepository) getLastCommitted(lsID types.LogStreamID) types.GLSN { +func (mr *RaftMetadataRepository) getLastCommitted(topicID types.TopicID, lsID types.LogStreamID, hintPos int) types.GLSN { crs := mr.storage.GetLastCommitResults() if crs == nil { return types.InvalidGLSN } - r, _, ok := crs.LookupCommitResult(lsID, -1) + r, _, ok := crs.LookupCommitResult(topicID, lsID, hintPos) if !ok { // newbie return types.InvalidGLSN @@ -1109,16 +1205,31 @@ func (mr *RaftMetadataRepository) getLastCommitted(lsID types.LogStreamID) types return types.InvalidGLSN } - return r.CommittedGLSNOffset + types.GLSN(r.CommittedGLSNLength) - types.GLSN(1) + return r.CommittedGLSNOffset + types.GLSN(r.CommittedGLSNLength) - types.MinGLSN +} + +func (mr *RaftMetadataRepository) getLastCommitVersion(topicID types.TopicID, lsID types.LogStreamID) types.Version { + crs := mr.storage.GetLastCommitResults() + if crs == nil { + return types.InvalidVersion + } + + _, _, ok := crs.LookupCommitResult(topicID, lsID, -1) + if !ok { + // newbie + return types.InvalidVersion + } + + return crs.Version } -func (mr *RaftMetadataRepository) getLastCommittedLength(lsID types.LogStreamID) uint64 { +func (mr *RaftMetadataRepository) getLastCommittedLength(topicID types.TopicID, lsID types.LogStreamID) uint64 { crs := mr.storage.GetLastCommitResults() if crs == nil { return 0 } - r, _, ok := crs.LookupCommitResult(lsID, -1) + r, _, ok := crs.LookupCommitResult(topicID, lsID, -1) if !ok { return 0 } @@ -1232,6 +1343,27 @@ func (mr *RaftMetadataRepository) UnregisterStorageNode(ctx context.Context, snI return nil } +func (mr *RaftMetadataRepository) RegisterTopic(ctx context.Context, topicID types.TopicID) error { + r := &mrpb.RegisterTopic{ + TopicID: topicID, + } + + return mr.propose(ctx, r, true) +} + +func (mr *RaftMetadataRepository) UnregisterTopic(ctx context.Context, topicID types.TopicID) error { + r := &mrpb.UnregisterTopic{ + TopicID: topicID, + } + + err := mr.propose(ctx, r, true) + if err != verrors.ErrNotExist { + return err + } + + return nil +} + func (mr *RaftMetadataRepository) RegisterLogStream(ctx context.Context, ls *varlogpb.LogStreamDescriptor) error { r := &mrpb.RegisterLogStream{ LogStream: ls, @@ -1266,7 +1398,7 @@ func (mr *RaftMetadataRepository) UpdateLogStream(ctx context.Context, ls *varlo return nil } -func (mr *RaftMetadataRepository) GetMetadata(ctx context.Context) (*varlogpb.MetadataDescriptor, error) { +func (mr *RaftMetadataRepository) GetMetadata(context.Context) (*varlogpb.MetadataDescriptor, error) { if !mr.IsMember() { return nil, verrors.ErrNotMember } @@ -1289,9 +1421,14 @@ func (mr *RaftMetadataRepository) Seal(ctx context.Context, lsID types.LogStream return types.InvalidGLSN, err } - lastCommitted := mr.getLastCommitted(lsID) + ls := mr.storage.LookupLogStream(lsID) + if ls == nil { + mr.logger.Panic("can't find logStream") + } + + lastCommitted := mr.getLastCommitted(ls.TopicID, lsID, -1) mr.logger.Info("seal", - zap.Uint32("lsid", uint32(lsID)), + zap.Int32("lsid", int32(lsID)), zap.Uint64("last", uint64(lastCommitted))) return lastCommitted, nil @@ -1310,7 +1447,7 @@ func (mr *RaftMetadataRepository) Unseal(ctx context.Context, lsID types.LogStre return nil } -func (mr *RaftMetadataRepository) AddPeer(ctx context.Context, clusterID types.ClusterID, nodeID types.NodeID, url string) error { +func (mr *RaftMetadataRepository) AddPeer(ctx context.Context, _ types.ClusterID, nodeID types.NodeID, url string) error { if mr.membership.IsMember(nodeID) || mr.membership.IsLearner(nodeID) { return verrors.ErrAlreadyExists @@ -1342,7 +1479,7 @@ func (mr *RaftMetadataRepository) AddPeer(ctx context.Context, clusterID types.C return nil } -func (mr *RaftMetadataRepository) RemovePeer(ctx context.Context, clusterID types.ClusterID, nodeID types.NodeID) error { +func (mr *RaftMetadataRepository) RemovePeer(ctx context.Context, _ types.ClusterID, nodeID types.NodeID) error { if !mr.membership.IsMember(nodeID) && !mr.membership.IsLearner(nodeID) { return verrors.ErrNotExist @@ -1383,7 +1520,7 @@ func (mr *RaftMetadataRepository) registerEndpoint(ctx context.Context) { mr.propose(ctx, r, true) } -func (mr *RaftMetadataRepository) GetClusterInfo(ctx context.Context, clusterID types.ClusterID) (*mrpb.ClusterInfo, error) { +func (mr *RaftMetadataRepository) GetClusterInfo(context.Context, types.ClusterID) (*mrpb.ClusterInfo, error) { if !mr.IsMember() { return nil, verrors.ErrNotMember } @@ -1427,20 +1564,12 @@ func (mr *RaftMetadataRepository) GetReportCount() uint64 { return atomic.LoadUint64(&mr.nrReport) } -func (mr *RaftMetadataRepository) GetHighWatermark() types.GLSN { - return mr.storage.GetHighWatermark() -} - -func (mr *RaftMetadataRepository) GetPrevHighWatermark() types.GLSN { - r := mr.storage.GetLastCommitResults() - if r == nil { - return types.InvalidGLSN - } - return r.PrevHighWatermark +func (mr *RaftMetadataRepository) GetLastCommitVersion() types.Version { + return mr.storage.GetLastCommitResults().GetVersion() } -func (mr *RaftMetadataRepository) GetMinHighWatermark() types.GLSN { - return mr.storage.GetMinHighWatermark() +func (mr *RaftMetadataRepository) GetOldestCommitVersion() types.Version { + return mr.storage.GetFirstCommitResults().GetVersion() } func (mr *RaftMetadataRepository) IsMember() bool { @@ -1467,8 +1596,8 @@ func (mr *RaftMetadataRepository) GetLastCommitResults() *mrpb.LogStreamCommitRe return mr.storage.GetLastCommitResults() } -func (mr *RaftMetadataRepository) LookupNextCommitResults(glsn types.GLSN) (*mrpb.LogStreamCommitResults, error) { - return mr.storage.LookupNextCommitResults(glsn) +func (mr *RaftMetadataRepository) LookupNextCommitResults(ver types.Version) (*mrpb.LogStreamCommitResults, error) { + return mr.storage.LookupNextCommitResults(ver) } type handler func(ctx context.Context) (interface{}, error) @@ -1488,7 +1617,7 @@ func (mr *RaftMetadataRepository) withTelemetry(ctx context.Context, name string func (mr *RaftMetadataRepository) recoverStateMachine(ctx context.Context) error { storage := NewMetadataStorage(nil, 0, mr.logger.Named("storage")) - err := mr.restoreStateMachineFromStateMachineLog(ctx, storage) + err := mr.restoreStateMachineFromStateMachineLog(storage) if err != nil { return err } @@ -1508,7 +1637,7 @@ func (mr *RaftMetadataRepository) recoverStateMachine(ctx context.Context) error return nil } -func (mr *RaftMetadataRepository) restoreStateMachineFromStateMachineLog(ctx context.Context, storage *MetadataStorage) error { +func (mr *RaftMetadataRepository) restoreStateMachineFromStateMachineLog(storage *MetadataStorage) error { logIndex := uint64(0) snap := mr.raftNode.loadSnapshot() @@ -1541,8 +1670,8 @@ func (mr *RaftMetadataRepository) restoreStateMachineFromStateMachineLog(ctx con storage.UpdateLogStream(r.LogStream, 0, 0) case *mrpb.StateMachineLogCommitResult: storage.AppendLogStreamCommitHistory(r.CommitResult) - if !r.TrimGlsn.Invalid() { - storage.TrimLogStreamCommitHistory(r.TrimGlsn) + if !r.TrimVersion.Invalid() { + storage.TrimLogStreamCommitHistory(r.TrimVersion) } } } diff --git a/internal/metadata_repository/raft_metadata_repository_test.go b/internal/metadata_repository/raft_metadata_repository_test.go index 847c23edb..57f6fc9b4 100644 --- a/internal/metadata_repository/raft_metadata_repository_test.go +++ b/internal/metadata_repository/raft_metadata_repository_test.go @@ -397,12 +397,20 @@ func (clus *metadataRepoCluster) recoverMetadataRepo(idx int) error { return clus.start(idx) } -func (clus *metadataRepoCluster) initDummyStorageNode(nrSN int) error { +func (clus *metadataRepoCluster) initDummyStorageNode(nrSN, nrTopic int) error { + for i := 0; i < nrTopic; i++ { + if err := clus.nodes[0].RegisterTopic(context.TODO(), types.TopicID(i%nrTopic)); err != nil { + return err + } + } + for i := 0; i < nrSN; i++ { snID := types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) @@ -413,7 +421,7 @@ func (clus *metadataRepoCluster) initDummyStorageNode(nrSN int) error { } lsID := types.LogStreamID(snID) - ls := makeLogStream(lsID, []types.StorageNodeID{snID}) + ls := makeLogStream(types.TopicID(i%nrTopic), lsID, []types.StorageNodeID{snID}) rctx, cancel = context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) defer cancel() @@ -444,13 +452,14 @@ func (clus *metadataRepoCluster) descSNRefAll() { } } -func makeUncommitReport(snID types.StorageNodeID, knownHighWatermark types.GLSN, lsID types.LogStreamID, offset types.LLSN, length uint64) *mrpb.Report { +func makeUncommitReport(snID types.StorageNodeID, ver types.Version, hwm types.GLSN, lsID types.LogStreamID, offset types.LLSN, length uint64) *mrpb.Report { report := &mrpb.Report{ StorageNodeID: snID, } u := snpb.LogStreamUncommitReport{ LogStreamID: lsID, - HighWatermark: knownHighWatermark, + Version: ver, + HighWatermark: hwm, UncommittedLLSNOffset: offset, UncommittedLLSNLength: length, } @@ -459,10 +468,11 @@ func makeUncommitReport(snID types.StorageNodeID, knownHighWatermark types.GLSN, return report } -func appendUncommitReport(report *mrpb.Report, knownHighWatermark types.GLSN, lsID types.LogStreamID, offset types.LLSN, length uint64) *mrpb.Report { +func appendUncommitReport(report *mrpb.Report, ver types.Version, hwm types.GLSN, lsID types.LogStreamID, offset types.LLSN, length uint64) *mrpb.Report { u := snpb.LogStreamUncommitReport{ LogStreamID: lsID, - HighWatermark: knownHighWatermark, + Version: ver, + HighWatermark: hwm, UncommittedLLSNOffset: offset, UncommittedLLSNLength: length, } @@ -471,8 +481,9 @@ func appendUncommitReport(report *mrpb.Report, knownHighWatermark types.GLSN, ls return report } -func makeLogStream(lsID types.LogStreamID, snIDs []types.StorageNodeID) *varlogpb.LogStreamDescriptor { +func makeLogStream(topicID types.TopicID, lsID types.LogStreamID, snIDs []types.StorageNodeID) *varlogpb.LogStreamDescriptor { ls := &varlogpb.LogStreamDescriptor{ + TopicID: topicID, LogStreamID: lsID, Status: varlogpb.LogStreamStatusRunning, } @@ -488,13 +499,12 @@ func makeLogStream(lsID types.LogStreamID, snIDs []types.StorageNodeID) *varlogp return ls } -func makeCommitResult(snID types.StorageNodeID, lsID types.LogStreamID, llsn types.LLSN, prevHighwatermark, highWatermark, offset types.GLSN) snpb.CommitRequest { +func makeCommitResult(snID types.StorageNodeID, lsID types.LogStreamID, llsn types.LLSN, ver types.Version, offset types.GLSN) snpb.CommitRequest { return snpb.CommitRequest{ StorageNodeID: snID, CommitResult: snpb.LogStreamCommitResult{ LogStreamID: lsID, - PrevHighWatermark: prevHighwatermark, - HighWatermark: highWatermark, + Version: ver, CommittedGLSNOffset: offset, CommittedLLSNOffset: llsn, CommittedGLSNLength: 1, @@ -512,69 +522,82 @@ func TestMRApplyReport(t *testing.T) { }) mr := clus.nodes[0] + tn := &varlogpb.TopicDescriptor{ + TopicID: types.TopicID(1), + Status: varlogpb.TopicStatusRunning, + } + + err := mr.storage.registerTopic(tn) + So(err, ShouldBeNil) + snIDs := make([]types.StorageNodeID, rep) for i := range snIDs { snIDs[i] = types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := mr.storage.registerStorageNode(sn) So(err, ShouldBeNil) } - lsId := types.LogStreamID(0) + lsID := types.LogStreamID(0) notExistSnID := types.StorageNodeID(rep) - report := makeUncommitReport(snIDs[0], types.InvalidGLSN, lsId, types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0], types.InvalidVersion, types.InvalidGLSN, lsID, types.MinLLSN, 2) mr.applyReport(&mrpb.Reports{Reports: []*mrpb.Report{report}}) - for _, snId := range snIDs { - _, ok := mr.storage.LookupUncommitReport(lsId, snId) + for _, snID := range snIDs { + _, ok := mr.storage.LookupUncommitReport(lsID, snID) So(ok, ShouldBeFalse) } Convey("UncommitReport should register when register LogStream", func(ctx C) { - ls := makeLogStream(lsId, snIDs) - err := mr.storage.registerLogStream(ls) + err := mr.storage.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + err = mr.storage.registerLogStream(ls) So(err, ShouldBeNil) - for _, snId := range snIDs { - _, ok := mr.storage.LookupUncommitReport(lsId, snId) + for _, snID := range snIDs { + _, ok := mr.storage.LookupUncommitReport(lsID, snID) So(ok, ShouldBeTrue) } Convey("Report should not apply if snID is not exist in UncommitReport", func(ctx C) { - report := makeUncommitReport(notExistSnID, types.InvalidGLSN, lsId, types.MinLLSN, 2) + report := makeUncommitReport(notExistSnID, types.InvalidVersion, types.InvalidGLSN, lsID, types.MinLLSN, 2) mr.applyReport(&mrpb.Reports{Reports: []*mrpb.Report{report}}) - _, ok := mr.storage.LookupUncommitReport(lsId, notExistSnID) + _, ok := mr.storage.LookupUncommitReport(lsID, notExistSnID) So(ok, ShouldBeFalse) }) Convey("Report should apply if snID is exist in UncommitReport", func(ctx C) { - snId := snIDs[0] - report := makeUncommitReport(snId, types.InvalidGLSN, lsId, types.MinLLSN, 2) + snID := snIDs[0] + report := makeUncommitReport(snID, types.InvalidVersion, types.InvalidGLSN, lsID, types.MinLLSN, 2) mr.applyReport(&mrpb.Reports{Reports: []*mrpb.Report{report}}) - r, ok := mr.storage.LookupUncommitReport(lsId, snId) + r, ok := mr.storage.LookupUncommitReport(lsID, snID) So(ok, ShouldBeTrue) So(r.UncommittedLLSNEnd(), ShouldEqual, types.MinLLSN+types.LLSN(2)) Convey("Report which have bigger END LLSN Should be applied", func(ctx C) { - report := makeUncommitReport(snId, types.InvalidGLSN, lsId, types.MinLLSN, 3) + report := makeUncommitReport(snID, types.InvalidVersion, types.InvalidGLSN, lsID, types.MinLLSN, 3) mr.applyReport(&mrpb.Reports{Reports: []*mrpb.Report{report}}) - r, ok := mr.storage.LookupUncommitReport(lsId, snId) + r, ok := mr.storage.LookupUncommitReport(lsID, snID) So(ok, ShouldBeTrue) So(r.UncommittedLLSNEnd(), ShouldEqual, types.MinLLSN+types.LLSN(3)) }) Convey("Report which have smaller END LLSN Should Not be applied", func(ctx C) { - report := makeUncommitReport(snId, types.InvalidGLSN, lsId, types.MinLLSN, 1) + report := makeUncommitReport(snID, types.InvalidVersion, types.InvalidGLSN, lsID, types.MinLLSN, 1) mr.applyReport(&mrpb.Reports{Reports: []*mrpb.Report{report}}) - r, ok := mr.storage.LookupUncommitReport(lsId, snId) + r, ok := mr.storage.LookupUncommitReport(lsID, snID) So(ok, ShouldBeTrue) So(r.UncommittedLLSNEnd(), ShouldNotEqual, types.MinLLSN+types.LLSN(1)) }) @@ -595,59 +618,66 @@ func TestMRCalculateCommit(t *testing.T) { for i := range snIDs { snIDs[i] = types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := mr.storage.registerStorageNode(sn) So(err, ShouldBeNil) } - lsId := types.LogStreamID(0) - ls := makeLogStream(lsId, snIDs) - err := mr.storage.registerLogStream(ls) + + err := mr.storage.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + lsID := types.LogStreamID(0) + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + err = mr.storage.registerLogStream(ls) So(err, ShouldBeNil) Convey("LogStream which all reports have not arrived cannot be commit", func(ctx C) { - report := makeUncommitReport(snIDs[0], types.InvalidGLSN, lsId, types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0], types.InvalidVersion, types.InvalidGLSN, lsID, types.MinLLSN, 2) mr.applyReport(&mrpb.Reports{Reports: []*mrpb.Report{report}}) - replicas := mr.storage.LookupUncommitReports(lsId) - _, minHWM, nrCommit := mr.calculateCommit(replicas) + replicas := mr.storage.LookupUncommitReports(lsID) + _, minVer, _, nrCommit := mr.calculateCommit(replicas) So(nrCommit, ShouldEqual, 0) - So(minHWM, ShouldEqual, types.InvalidGLSN) + So(minVer, ShouldEqual, types.InvalidVersion) }) Convey("LogStream which all reports are disjoint cannot be commit", func(ctx C) { - report := makeUncommitReport(snIDs[0], types.GLSN(10), lsId, types.MinLLSN+types.LLSN(5), 1) + report := makeUncommitReport(snIDs[0], types.Version(10), types.GLSN(10), lsID, types.MinLLSN+types.LLSN(5), 1) mr.applyReport(&mrpb.Reports{Reports: []*mrpb.Report{report}}) - report = makeUncommitReport(snIDs[1], types.GLSN(7), lsId, types.MinLLSN+types.LLSN(3), 2) + report = makeUncommitReport(snIDs[1], types.Version(7), types.GLSN(7), lsID, types.MinLLSN+types.LLSN(3), 2) mr.applyReport(&mrpb.Reports{Reports: []*mrpb.Report{report}}) - replicas := mr.storage.LookupUncommitReports(lsId) - knownHWM, minHWM, nrCommit := mr.calculateCommit(replicas) + replicas := mr.storage.LookupUncommitReports(lsID) + knownVer, minVer, _, nrCommit := mr.calculateCommit(replicas) So(nrCommit, ShouldEqual, 0) - So(knownHWM, ShouldEqual, types.GLSN(10)) - So(minHWM, ShouldEqual, types.GLSN(7)) + So(knownVer, ShouldEqual, types.Version(10)) + So(minVer, ShouldEqual, types.Version(7)) }) Convey("LogStream Should be commit where replication is completed", func(ctx C) { - report := makeUncommitReport(snIDs[0], types.GLSN(10), lsId, types.MinLLSN+types.LLSN(3), 3) + report := makeUncommitReport(snIDs[0], types.Version(10), types.GLSN(10), lsID, types.MinLLSN+types.LLSN(3), 3) mr.applyReport(&mrpb.Reports{Reports: []*mrpb.Report{report}}) - report = makeUncommitReport(snIDs[1], types.GLSN(9), lsId, types.MinLLSN+types.LLSN(3), 2) + report = makeUncommitReport(snIDs[1], types.Version(9), types.GLSN(9), lsID, types.MinLLSN+types.LLSN(3), 2) mr.applyReport(&mrpb.Reports{Reports: []*mrpb.Report{report}}) - replicas := mr.storage.LookupUncommitReports(lsId) - knownHWM, minHWM, nrCommit := mr.calculateCommit(replicas) + replicas := mr.storage.LookupUncommitReports(lsID) + knownVer, minVer, _, nrCommit := mr.calculateCommit(replicas) So(nrCommit, ShouldEqual, 2) - So(minHWM, ShouldEqual, types.GLSN(9)) - So(knownHWM, ShouldEqual, types.GLSN(10)) + So(minVer, ShouldEqual, types.Version(9)) + So(knownVer, ShouldEqual, types.Version(10)) }) }) } func TestMRGlobalCommit(t *testing.T) { Convey("Calculate commit", t, func(ctx C) { + topicID := types.TopicID(1) rep := 2 clus := newMetadataRepoCluster(1, rep, false, false) Reset(func() { @@ -663,7 +693,9 @@ func TestMRGlobalCommit(t *testing.T) { snIDs[i][j] = types.StorageNodeID(i*2 + j) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i][j], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i][j], + }, } err := mr.storage.registerStorageNode(sn) @@ -671,13 +703,16 @@ func TestMRGlobalCommit(t *testing.T) { } } + err := mr.storage.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + lsIds := make([]types.LogStreamID, 2) for i := range lsIds { lsIds[i] = types.LogStreamID(i) } - for i, lsId := range lsIds { - ls := makeLogStream(lsId, snIDs[i]) + for i, lsID := range lsIds { + ls := makeLogStream(types.TopicID(1), lsID, snIDs[i]) err := mr.storage.registerLogStream(ls) So(err, ShouldBeNil) } @@ -689,61 +724,64 @@ func TestMRGlobalCommit(t *testing.T) { Convey("global commit", func(ctx C) { So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][0], types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0][0], types.InvalidVersion, types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][1], types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0][1], types.InvalidVersion, types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][0], types.InvalidGLSN, lsIds[1], types.MinLLSN, 4) + report := makeUncommitReport(snIDs[1][0], types.InvalidVersion, types.InvalidGLSN, lsIds[1], types.MinLLSN, 4) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][1], types.InvalidGLSN, lsIds[1], types.MinLLSN, 3) + report := makeUncommitReport(snIDs[1][1], types.InvalidVersion, types.InvalidGLSN, lsIds[1], types.MinLLSN, 3) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) // global commit (2, 3) highest glsn: 5 So(testutil.CompareWaitN(10, func() bool { - return mr.storage.GetHighWatermark() == types.GLSN(5) + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(5) }), ShouldBeTrue) Convey("LogStream should be dedup", func(ctx C) { So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][0], types.InvalidGLSN, lsIds[0], types.MinLLSN, 3) + report := makeUncommitReport(snIDs[0][0], types.InvalidVersion, types.InvalidGLSN, lsIds[0], types.MinLLSN, 3) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][1], types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0][1], types.InvalidVersion, types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) time.Sleep(vtesting.TimeoutUnitTimesFactor(1)) So(testutil.CompareWaitN(50, func() bool { - return mr.storage.GetHighWatermark() == types.GLSN(5) + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(5) }), ShouldBeTrue) }) - Convey("LogStream which have wrong GLSN but have uncommitted should commit", func(ctx C) { + Convey("LogStream which have wrong Version but have uncommitted should commit", func(ctx C) { So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][0], types.InvalidGLSN, lsIds[0], types.MinLLSN, 6) + report := makeUncommitReport(snIDs[0][0], types.InvalidVersion, types.InvalidGLSN, lsIds[0], types.MinLLSN, 6) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][1], types.InvalidGLSN, lsIds[0], types.MinLLSN, 6) + report := makeUncommitReport(snIDs[0][1], types.InvalidVersion, types.InvalidGLSN, lsIds[0], types.MinLLSN, 6) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - return mr.storage.GetHighWatermark() == types.GLSN(9) + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(9) }), ShouldBeTrue) }) }) @@ -755,6 +793,7 @@ func TestMRGlobalCommitConsistency(t *testing.T) { rep := 1 nrNodes := 2 nrLS := 5 + topicID := types.TopicID(1) clus := newMetadataRepoCluster(nrNodes, rep, false, false) Reset(func() { @@ -766,7 +805,9 @@ func TestMRGlobalCommitConsistency(t *testing.T) { snIDs[i] = types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } for j := 0; j < nrNodes; j++ { @@ -775,13 +816,18 @@ func TestMRGlobalCommitConsistency(t *testing.T) { } } + for i := 0; i < nrNodes; i++ { + err := clus.nodes[i].storage.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + } + lsIDs := make([]types.LogStreamID, nrLS) for i := range lsIDs { lsIDs[i] = types.LogStreamID(i) } for _, lsID := range lsIDs { - ls := makeLogStream(lsID, snIDs) + ls := makeLogStream(types.TopicID(1), lsID, snIDs) for i := 0; i < nrNodes; i++ { err := clus.nodes[i].storage.registerLogStream(ls) So(err, ShouldBeNil) @@ -793,14 +839,14 @@ func TestMRGlobalCommitConsistency(t *testing.T) { return clus.healthCheckAll() }), ShouldBeTrue) - Convey("Then, it should calulate same glsn for each log streams", func(ctx C) { + Convey("Then, it should calculate same glsn for each log streams", func(ctx C) { for i := 0; i < 100; i++ { var report *mrpb.Report for j, lsID := range lsIDs { if j == 0 { - report = makeUncommitReport(snIDs[0], types.GLSN(i*nrLS), lsID, types.LLSN(i+1), 1) + report = makeUncommitReport(snIDs[0], types.Version(i), types.GLSN(i*nrLS), lsID, types.LLSN(i+1), 1) } else { - report = appendUncommitReport(report, types.GLSN(i*nrLS), lsID, types.LLSN(i+1), 1) + report = appendUncommitReport(report, types.Version(i), types.GLSN(i*nrLS), lsID, types.LLSN(i+1), 1) } } @@ -810,11 +856,11 @@ func TestMRGlobalCommitConsistency(t *testing.T) { for j := 0; j < nrNodes; j++ { So(testutil.CompareWaitN(10, func() bool { - return clus.nodes[j].storage.GetHighWatermark() == types.GLSN(nrLS*(i+1)) + return clus.nodes[j].storage.GetLastCommitVersion() == types.Version(i+1) }), ShouldBeTrue) for k, lsID := range lsIDs { - So(clus.nodes[j].getLastCommitted(lsID), ShouldEqual, types.GLSN((nrLS*i)+k+1)) + So(clus.nodes[j].getLastCommitted(topicID, lsID, -1), ShouldEqual, types.GLSN((nrLS*i)+k+1)) } } } @@ -840,7 +886,9 @@ func TestMRSimpleReportNCommit(t *testing.T) { lsID := types.LogStreamID(snID) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) @@ -853,9 +901,13 @@ func TestMRSimpleReportNCommit(t *testing.T) { return clus.reporterClientFac.(*DummyStorageNodeClientFactory).lookupClient(snID) != nil }), ShouldBeTrue) - ls := makeLogStream(lsID, snIDs) + err = clus.nodes[0].RegisterTopic(context.TODO(), types.TopicID(1)) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) rctx, cancel = context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) defer cancel() + err = clus.nodes[0].RegisterLogStream(rctx, ls) So(err, ShouldBeNil) @@ -877,7 +929,9 @@ func TestMRRequestMap(t *testing.T) { mr := clus.nodes[0] sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + }, } requestNum := atomic.LoadUint64(&mr.requestNum) @@ -912,7 +966,9 @@ func TestMRRequestMap(t *testing.T) { mr := clus.nodes[0] sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + }, } var st sync.WaitGroup @@ -954,7 +1010,9 @@ func TestMRRequestMap(t *testing.T) { mr := clus.nodes[0] sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(1)) @@ -985,7 +1043,9 @@ func TestMRRequestMap(t *testing.T) { }), ShouldBeTrue) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + }, } requestNum := atomic.LoadUint64(&mr.requestNum) @@ -1000,6 +1060,7 @@ func TestMRRequestMap(t *testing.T) { func TestMRGetLastCommitted(t *testing.T) { Convey("getLastCommitted", t, func(ctx C) { rep := 2 + topicID := types.TopicID(1) clus := newMetadataRepoCluster(1, rep, false, false) Reset(func() { clus.closeNoErrors(t) @@ -1014,7 +1075,9 @@ func TestMRGetLastCommitted(t *testing.T) { snIDs[i][j] = types.StorageNodeID(i*2 + j) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i][j], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i][j], + }, } err := mr.storage.registerStorageNode(sn) @@ -1027,8 +1090,11 @@ func TestMRGetLastCommitted(t *testing.T) { lsIds[i] = types.LogStreamID(i) } - for i, lsId := range lsIds { - ls := makeLogStream(lsId, snIDs[i]) + err := mr.storage.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + for i, lsID := range lsIds { + ls := makeLogStream(types.TopicID(1), lsID, snIDs[i]) err := mr.storage.registerLogStream(ls) So(err, ShouldBeNil) } @@ -1038,63 +1104,65 @@ func TestMRGetLastCommitted(t *testing.T) { return clus.healthCheckAll() }), ShouldBeTrue) - Convey("getLastCommitted should return last committed GLSN", func(ctx C) { - preHighWatermark := mr.storage.GetHighWatermark() + Convey("getLastCommitted should return last committed Version", func(ctx C) { + preVersion := mr.storage.GetLastCommitVersion() So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][0], types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0][0], types.InvalidVersion, types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][1], types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0][1], types.InvalidVersion, types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][0], types.InvalidGLSN, lsIds[1], types.MinLLSN, 4) + report := makeUncommitReport(snIDs[1][0], types.InvalidVersion, types.InvalidGLSN, lsIds[1], types.MinLLSN, 4) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][1], types.InvalidGLSN, lsIds[1], types.MinLLSN, 3) + report := makeUncommitReport(snIDs[1][1], types.InvalidVersion, types.InvalidGLSN, lsIds[1], types.MinLLSN, 3) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) // global commit (2, 3) highest glsn: 5 So(testutil.CompareWaitN(10, func() bool { - return mr.storage.GetHighWatermark() == types.GLSN(5) + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(5) }), ShouldBeTrue) latest := mr.storage.getLastCommitResultsNoLock() - base := mr.storage.lookupNextCommitResultsNoLock(preHighWatermark) + base := mr.storage.lookupNextCommitResultsNoLock(preVersion) - So(mr.numCommitSince(lsIds[0], base, latest, -1), ShouldEqual, 2) - So(mr.numCommitSince(lsIds[1], base, latest, -1), ShouldEqual, 3) + So(mr.numCommitSince(topicID, lsIds[0], base, latest, -1), ShouldEqual, 2) + So(mr.numCommitSince(topicID, lsIds[1], base, latest, -1), ShouldEqual, 3) Convey("getLastCommitted should return same if not committed", func(ctx C) { for i := 0; i < 10; i++ { - preHighWatermark := mr.storage.GetHighWatermark() + preVersion := mr.storage.GetLastCommitVersion() So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][0], types.InvalidGLSN, lsIds[1], types.MinLLSN, uint64(4+i)) + report := makeUncommitReport(snIDs[1][0], types.InvalidVersion, types.InvalidGLSN, lsIds[1], types.MinLLSN, uint64(4+i)) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][1], types.InvalidGLSN, lsIds[1], types.MinLLSN, uint64(4+i)) + report := makeUncommitReport(snIDs[1][1], types.InvalidVersion, types.InvalidGLSN, lsIds[1], types.MinLLSN, uint64(4+i)) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(50, func() bool { - return mr.storage.GetHighWatermark() == types.GLSN(6+i) + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(6+i) }), ShouldBeTrue) latest := mr.storage.getLastCommitResultsNoLock() - base := mr.storage.lookupNextCommitResultsNoLock(preHighWatermark) + base := mr.storage.lookupNextCommitResultsNoLock(preVersion) - So(mr.numCommitSince(lsIds[0], base, latest, -1), ShouldEqual, 0) - So(mr.numCommitSince(lsIds[1], base, latest, -1), ShouldEqual, 1) + So(mr.numCommitSince(topicID, lsIds[0], base, latest, -1), ShouldEqual, 0) + So(mr.numCommitSince(topicID, lsIds[1], base, latest, -1), ShouldEqual, 1) } }) @@ -1105,37 +1173,38 @@ func TestMRGetLastCommitted(t *testing.T) { So(err, ShouldBeNil) for i := 0; i < 10; i++ { - preHighWatermark := mr.storage.GetHighWatermark() + preVersion := mr.storage.GetLastCommitVersion() So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][0], types.InvalidGLSN, lsIds[0], types.MinLLSN, uint64(3+i)) + report := makeUncommitReport(snIDs[0][0], types.InvalidVersion, types.InvalidGLSN, lsIds[0], types.MinLLSN, uint64(3+i)) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][1], types.InvalidGLSN, lsIds[0], types.MinLLSN, uint64(3+i)) + report := makeUncommitReport(snIDs[0][1], types.InvalidVersion, types.InvalidGLSN, lsIds[0], types.MinLLSN, uint64(3+i)) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][0], types.InvalidGLSN, lsIds[1], types.MinLLSN, uint64(4+i)) + report := makeUncommitReport(snIDs[1][0], types.InvalidVersion, types.InvalidGLSN, lsIds[1], types.MinLLSN, uint64(4+i)) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][1], types.InvalidGLSN, lsIds[1], types.MinLLSN, uint64(4+i)) + report := makeUncommitReport(snIDs[1][1], types.InvalidVersion, types.InvalidGLSN, lsIds[1], types.MinLLSN, uint64(4+i)) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - return mr.storage.GetHighWatermark() == types.GLSN(6+i) + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(6+i) }), ShouldBeTrue) latest := mr.storage.getLastCommitResultsNoLock() - base := mr.storage.lookupNextCommitResultsNoLock(preHighWatermark) + base := mr.storage.lookupNextCommitResultsNoLock(preVersion) - So(mr.numCommitSince(lsIds[0], base, latest, -1), ShouldEqual, 1) - So(mr.numCommitSince(lsIds[1], base, latest, -1), ShouldEqual, 0) + So(mr.numCommitSince(topicID, lsIds[0], base, latest, -1), ShouldEqual, 1) + So(mr.numCommitSince(topicID, lsIds[1], base, latest, -1), ShouldEqual, 0) } }) }) @@ -1145,6 +1214,8 @@ func TestMRGetLastCommitted(t *testing.T) { func TestMRSeal(t *testing.T) { Convey("seal", t, func(ctx C) { rep := 2 + topicID := types.TopicID(1) + clus := newMetadataRepoCluster(1, rep, false, false) Reset(func() { clus.closeNoErrors(t) @@ -1159,7 +1230,9 @@ func TestMRSeal(t *testing.T) { snIDs[i][j] = types.StorageNodeID(i*2 + j) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i][j], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i][j], + }, } err := mr.storage.registerStorageNode(sn) @@ -1167,13 +1240,16 @@ func TestMRSeal(t *testing.T) { } } - lsIds := make([]types.LogStreamID, 2) - for i := range lsIds { - lsIds[i] = types.LogStreamID(i) + lsIDs := make([]types.LogStreamID, 2) + for i := range lsIDs { + lsIDs[i] = types.LogStreamID(i) } - for i, lsId := range lsIds { - ls := makeLogStream(lsId, snIDs[i]) + err := mr.storage.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + for i, lsID := range lsIDs { + ls := makeLogStream(types.TopicID(1), lsID, snIDs[i]) err := mr.storage.registerLogStream(ls) So(err, ShouldBeNil) } @@ -1185,36 +1261,36 @@ func TestMRSeal(t *testing.T) { Convey("Seal should commit and return last committed", func(ctx C) { So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][0], types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0][0], types.InvalidVersion, types.InvalidGLSN, lsIDs[0], types.MinLLSN, 2) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][1], types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0][1], types.InvalidVersion, types.InvalidGLSN, lsIDs[0], types.MinLLSN, 2) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][0], types.InvalidGLSN, lsIds[1], types.MinLLSN, 4) + report := makeUncommitReport(snIDs[1][0], types.InvalidVersion, types.InvalidGLSN, lsIDs[1], types.MinLLSN, 4) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][1], types.InvalidGLSN, lsIds[1], types.MinLLSN, 3) + report := makeUncommitReport(snIDs[1][1], types.InvalidVersion, types.InvalidGLSN, lsIDs[1], types.MinLLSN, 3) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) defer cancel() - lc, err := mr.Seal(rctx, lsIds[1]) + lc, err := mr.Seal(rctx, lsIDs[1]) So(err, ShouldBeNil) - So(lc, ShouldEqual, mr.getLastCommitted(lsIds[1])) + So(lc, ShouldEqual, mr.getLastCommitted(topicID, lsIDs[1], -1)) Convey("Seal should return same last committed", func(ctx C) { for i := 0; i < 10; i++ { rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) defer cancel() - lc2, err := mr.Seal(rctx, lsIds[1]) + lc2, err := mr.Seal(rctx, lsIDs[1]) So(err, ShouldBeNil) So(lc2, ShouldEqual, lc) } @@ -1226,6 +1302,8 @@ func TestMRSeal(t *testing.T) { func TestMRUnseal(t *testing.T) { Convey("unseal", t, func(ctx C) { rep := 2 + topicID := types.TopicID(1) + clus := newMetadataRepoCluster(1, rep, false, false) Reset(func() { clus.closeNoErrors(t) @@ -1240,7 +1318,9 @@ func TestMRUnseal(t *testing.T) { snIDs[i][j] = types.StorageNodeID(i*2 + j) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i][j], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i][j], + }, } err := mr.storage.registerStorageNode(sn) @@ -1248,13 +1328,16 @@ func TestMRUnseal(t *testing.T) { } } - lsIds := make([]types.LogStreamID, 2) - for i := range lsIds { - lsIds[i] = types.LogStreamID(i) + lsIDs := make([]types.LogStreamID, 2) + for i := range lsIDs { + lsIDs[i] = types.LogStreamID(i) } - for i, lsId := range lsIds { - ls := makeLogStream(lsId, snIDs[i]) + err := mr.storage.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + for i, lsID := range lsIDs { + ls := makeLogStream(types.TopicID(1), lsID, snIDs[i]) err := mr.storage.registerLogStream(ls) So(err, ShouldBeNil) } @@ -1265,39 +1348,42 @@ func TestMRUnseal(t *testing.T) { }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][0], types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0][0], types.InvalidVersion, types.InvalidGLSN, lsIDs[0], types.MinLLSN, 2) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[0][1], types.InvalidGLSN, lsIds[0], types.MinLLSN, 2) + report := makeUncommitReport(snIDs[0][1], types.InvalidVersion, types.InvalidGLSN, lsIDs[0], types.MinLLSN, 2) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][0], types.InvalidGLSN, lsIds[1], types.MinLLSN, 4) + report := makeUncommitReport(snIDs[1][0], types.InvalidVersion, types.InvalidGLSN, lsIDs[1], types.MinLLSN, 4) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][1], types.InvalidGLSN, lsIds[1], types.MinLLSN, 3) + report := makeUncommitReport(snIDs[1][1], types.InvalidVersion, types.InvalidGLSN, lsIDs[1], types.MinLLSN, 3) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - return mr.getLastCommitted(lsIds[1]) == 5 + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(5) }), ShouldBeTrue) rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) defer cancel() - sealedHWM, err := mr.Seal(rctx, lsIds[1]) + sealedHWM, err := mr.Seal(rctx, lsIDs[1]) So(err, ShouldBeNil) - So(sealedHWM, ShouldEqual, mr.getLastCommitted(lsIds[1])) + So(sealedHWM, ShouldEqual, mr.getLastCommitted(topicID, lsIDs[1], -1)) So(testutil.CompareWaitN(10, func() bool { + sealedVer := mr.getLastCommitVersion(topicID, lsIDs[1]) + for _, snID := range snIDs[1] { - report := makeUncommitReport(snID, sealedHWM, lsIds[1], types.LLSN(4), 0) + report := makeUncommitReport(snID, sealedVer, sealedHWM, lsIDs[1], types.LLSN(4), 0) if err := mr.proposeReport(report.StorageNodeID, report.UncommitReport); err != nil { return false } @@ -1308,37 +1394,40 @@ func TestMRUnseal(t *testing.T) { return false } - ls := meta.GetLogStream(lsIds[1]) + ls := meta.GetLogStream(lsIDs[1]) return ls.Status == varlogpb.LogStreamStatusSealed }), ShouldBeTrue) + sealedVer := mr.getLastCommitVersion(topicID, lsIDs[1]) + Convey("Unealed LS should update report", func(ctx C) { rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) defer cancel() - err := mr.Unseal(rctx, lsIds[1]) + err := mr.Unseal(rctx, lsIDs[1]) So(err, ShouldBeNil) for i := 1; i < 10; i++ { - preHighWatermark := mr.storage.GetHighWatermark() + preVersion := mr.storage.GetLastCommitVersion() So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][0], sealedHWM, lsIds[1], types.LLSN(4), uint64(i)) + report := makeUncommitReport(snIDs[1][0], sealedVer, sealedHWM, lsIDs[1], types.LLSN(4), uint64(i)) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(10, func() bool { - report := makeUncommitReport(snIDs[1][1], sealedHWM, lsIds[1], types.LLSN(4), uint64(i)) + report := makeUncommitReport(snIDs[1][1], sealedVer, sealedHWM, lsIDs[1], types.LLSN(4), uint64(i)) return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil }), ShouldBeTrue) So(testutil.CompareWaitN(50, func() bool { - return mr.storage.GetHighWatermark() == types.GLSN(5+i) + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(5+i) }), ShouldBeTrue) latest := mr.storage.getLastCommitResultsNoLock() - base := mr.storage.lookupNextCommitResultsNoLock(preHighWatermark) + base := mr.storage.lookupNextCommitResultsNoLock(preVersion) - So(mr.numCommitSince(lsIds[1], base, latest, -1), ShouldEqual, 1) + So(mr.numCommitSince(topicID, lsIDs[1], base, latest, -1), ShouldEqual, 1) } }) }) @@ -1365,16 +1454,21 @@ func TestMRUpdateLogStream(t *testing.T) { snIDs[i] = types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := mr.RegisterStorageNode(context.TODO(), sn) So(err, ShouldBeNil) } + err := mr.RegisterTopic(context.TODO(), types.TopicID(1)) + So(err, ShouldBeNil) + lsID := types.LogStreamID(0) - ls := makeLogStream(lsID, snIDs[0:1]) - err := mr.RegisterLogStream(context.TODO(), ls) + ls := makeLogStream(types.TopicID(1), lsID, snIDs[0:1]) + err = mr.RegisterLogStream(context.TODO(), ls) So(err, ShouldBeNil) So(testutil.CompareWaitN(1, func() bool { return mr.reportCollector.NumCommitter() == 1 @@ -1386,7 +1480,7 @@ func TestMRUpdateLogStream(t *testing.T) { _, err = mr.Seal(rctx, lsID) So(err, ShouldBeNil) - updatedls := makeLogStream(lsID, snIDs[1:2]) + updatedls := makeLogStream(types.TopicID(1), lsID, snIDs[1:2]) err = mr.UpdateLogStream(context.TODO(), updatedls) So(err, ShouldBeNil) @@ -1433,7 +1527,9 @@ func TestMRFailoverLeaderElection(t *testing.T) { snIDs[i] = types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) @@ -1443,17 +1539,21 @@ func TestMRFailoverLeaderElection(t *testing.T) { So(err, ShouldBeNil) } + err := clus.nodes[0].RegisterTopic(context.TODO(), types.TopicID(1)) + So(err, ShouldBeNil) + lsID := types.LogStreamID(0) + ls := makeLogStream(types.TopicID(1), lsID, snIDs) - ls := makeLogStream(lsID, snIDs) rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) defer cancel() - err := clus.nodes[0].RegisterLogStream(rctx, ls) + + err = clus.nodes[0].RegisterLogStream(rctx, ls) So(err, ShouldBeNil) reporterClient := clus.reporterClientFac.(*DummyStorageNodeClientFactory).lookupClient(snIDs[0]) So(testutil.CompareWaitN(50, func() bool { - return !reporterClient.getKnownHighWatermark(0).Invalid() + return !reporterClient.getKnownVersion(0).Invalid() }), ShouldBeTrue) Convey("When node fail", func(ctx C) { @@ -1466,10 +1566,10 @@ func TestMRFailoverLeaderElection(t *testing.T) { return leader != clus.leader() }), ShouldBeTrue) - prev := reporterClient.getKnownHighWatermark(0) + prev := reporterClient.getKnownVersion(0) So(testutil.CompareWaitN(50, func() bool { - return reporterClient.getKnownHighWatermark(0) > prev + return reporterClient.getKnownVersion(0) > prev }), ShouldBeTrue) }) }) @@ -1496,7 +1596,9 @@ func TestMRFailoverJoinNewNode(t *testing.T) { snIDs[i] = types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(200)) @@ -1506,12 +1608,16 @@ func TestMRFailoverJoinNewNode(t *testing.T) { So(err, ShouldBeNil) } + err := clus.nodes[0].RegisterTopic(context.TODO(), types.TopicID(1)) + So(err, ShouldBeNil) + lsID := types.LogStreamID(0) + ls := makeLogStream(types.TopicID(1), lsID, snIDs) - ls := makeLogStream(lsID, snIDs) rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(200)) defer cancel() - err := clus.nodes[0].RegisterLogStream(rctx, ls) + + err = clus.nodes[0].RegisterLogStream(rctx, ls) So(err, ShouldBeNil) Convey("When new node join", func(ctx C) { @@ -1545,7 +1651,9 @@ func TestMRFailoverJoinNewNode(t *testing.T) { snID := snIDs[nrRep-1] + types.StorageNodeID(1) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(200)) @@ -1605,7 +1713,9 @@ func TestMRFailoverJoinNewNode(t *testing.T) { snID := snIDs[nrRep-1] + types.StorageNodeID(1) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(200)) @@ -1822,7 +1932,9 @@ func TestMRLoadSnapshot(t *testing.T) { snIDs[i] = types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) @@ -1896,7 +2008,9 @@ func TestMRRemoteSnapshot(t *testing.T) { snIDs[i] = types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) @@ -2141,7 +2255,9 @@ func TestMRFailoverRecoverReportCollector(t *testing.T) { snIDs[i][j] = types.StorageNodeID(i*nrStorageNode + j) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i][j], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i][j], + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) @@ -2156,9 +2272,12 @@ func TestMRFailoverRecoverReportCollector(t *testing.T) { return clus.nodes[leader].reportCollector.NumExecutors() == nrStorageNode*nrRep }), ShouldBeTrue) + err := clus.nodes[0].RegisterTopic(context.TODO(), types.TopicID(1)) + So(err, ShouldBeNil) + for i := 0; i < nrLogStream; i++ { lsID := types.LogStreamID(i) - ls := makeLogStream(lsID, snIDs[i%nrStorageNode]) + ls := makeLogStream(types.TopicID(1), lsID, snIDs[i%nrStorageNode]) rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) defer cancel() @@ -2208,7 +2327,9 @@ func TestMRProposeTimeout(t *testing.T) { Convey("When cli register SN with timeout", func(ctx C) { snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(snID), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) @@ -2239,7 +2360,9 @@ func TestMRProposeRetry(t *testing.T) { snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(snID), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) @@ -2326,6 +2449,7 @@ func TestMRScaleOutJoin(t *testing.T) { } func TestMRUnsafeNoWal(t *testing.T) { + t.Skip() Convey("Given MR cluster with unsafeNoWal", t, func(ctx C) { testSnapCount = 10 defer func() { testSnapCount = 0 }() @@ -2349,7 +2473,9 @@ func TestMRUnsafeNoWal(t *testing.T) { snIDs[i] = types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } rctx, cancel := context.WithTimeout(context.Background(), vtesting.TimeoutUnitTimesFactor(50)) @@ -2432,7 +2558,9 @@ func TestMRFailoverRecoverFromStateMachineLog(t *testing.T) { for i := range snIDs { snIDs[i] = types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } sm.Metadata.InsertStorageNode(sn) @@ -2454,12 +2582,14 @@ func TestMRFailoverRecoverFromStateMachineLog(t *testing.T) { } func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { + t.Skip() Convey("Given MR cluster with unsafeNoWal", t, func(ctx C) { testSnapCount = 10 defer func() { testSnapCount = 0 }() nrRep := 1 nrNode := 1 nrSN := 5 + topicID := types.TopicID(1) clus := newMetadataRepoCluster(nrNode, nrRep, false, true) clus.Start() @@ -2474,7 +2604,7 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { So(leader, ShouldBeGreaterThan, -1) // register SN & LS - err := clus.initDummyStorageNode(nrSN) + err := clus.initDummyStorageNode(nrSN, 1) So(err, ShouldBeNil) So(testutil.CompareWaitN(50, func() bool { @@ -2495,16 +2625,16 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { written++ } - highWatermark := types.GLSN(written) - for _, snID := range snIDs { snCli := clus.reporterClientFac.(*DummyStorageNodeClientFactory).lookupClient(snID) So(testutil.CompareWaitN(50, func() bool { - return snCli.getKnownHighWatermark(0) == highWatermark + return snCli.numUncommitted(0) == 0 }), ShouldBeTrue) } + version := clus.nodes[leader].GetLastCommitVersion() + Convey("When mr crashed and then commit to sn before restart mr. (it's for simulating lost data)", func(ctx C) { /* LS0 LS1 LS2 LS3 LS4 @@ -2515,23 +2645,22 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { So(clus.stop(leader), ShouldBeNil) //dummy commit result to SN direct - prevHighwatermark := highWatermark - highWatermark = highWatermark + types.GLSN(nrSN) + version++ - offset := prevHighwatermark + types.GLSN(1) + offset := types.GLSN(written + 1) expected := make(map[types.LogStreamID]uint64) for _, snID := range snIDs { snCli := clus.reporterClientFac.(*DummyStorageNodeClientFactory).lookupClient(snID) lsID := snCli.logStreamID(0) - cr := makeCommitResult(snID, lsID, snCli.uncommittedLLSNOffset[0], prevHighwatermark, highWatermark, offset) + cr := makeCommitResult(snID, lsID, snCli.uncommittedLLSNOffset[0], version, offset) snCli.increaseUncommitted(0) expected[lsID] = 1 offset += types.GLSN(1) snCli.Commit(cr) - So(snCli.getKnownHighWatermark(0), ShouldEqual, highWatermark) + So(snCli.getKnownVersion(0), ShouldEqual, version) } So(clus.recoverMetadataRepo(leader), ShouldBeNil) @@ -2541,8 +2670,8 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { return clus.healthCheck(leader) }), ShouldBeTrue) - So(clus.nodes[leader].GetHighWatermark(), ShouldEqual, highWatermark) - fmt.Printf("recover highWatermark:%v\n", clus.nodes[leader].GetHighWatermark()) + So(clus.nodes[leader].GetLastCommitVersion(), ShouldEqual, version) + fmt.Printf("recover version:%v\n", clus.nodes[leader].GetLastCommitVersion()) /* LS0 LS1 LS2 LS3 LS4 @@ -2555,7 +2684,7 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { l, ok := expected[lsID] So(ok, ShouldBeTrue) - So(clus.nodes[leader].getLastCommittedLength(lsID), ShouldEqual, l) + So(clus.nodes[leader].getLastCommittedLength(topicID, lsID), ShouldEqual, l) } }) }) @@ -2570,17 +2699,16 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { So(clus.stop(leader), ShouldBeNil) //dummy commit result to SN direct - prevHighwatermark := highWatermark - highWatermark = highWatermark + types.GLSN(nrSN) + version++ - offset := prevHighwatermark + types.GLSN(1) + offset := types.GLSN(written + 1) expected := make(map[types.LogStreamID]uint64) for i, snID := range snIDs { snCli := clus.reporterClientFac.(*DummyStorageNodeClientFactory).lookupClient(snID) lsID := snCli.logStreamID(0) - cr := makeCommitResult(snID, lsID, snCli.uncommittedLLSNOffset[0], prevHighwatermark, highWatermark, offset) + cr := makeCommitResult(snID, lsID, snCli.uncommittedLLSNOffset[0], version, offset) snCli.increaseUncommitted(0) expected[lsID] = 1 @@ -2601,7 +2729,7 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { } snCli.Commit(cr) - So(snCli.getKnownHighWatermark(0), ShouldEqual, highWatermark) + So(snCli.getKnownVersion(0), ShouldEqual, version) } So(clus.recoverMetadataRepo(leader), ShouldBeNil) @@ -2611,8 +2739,8 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { return clus.healthCheck(leader) }), ShouldBeTrue) - So(clus.nodes[leader].GetHighWatermark(), ShouldEqual, highWatermark) - fmt.Printf("recover highWatermark:%v\n", clus.nodes[leader].GetHighWatermark()) + So(clus.nodes[leader].GetLastCommitVersion(), ShouldEqual, version) + fmt.Printf("recover version:%v\n", clus.nodes[leader].GetLastCommitVersion()) /* LS0 LS1 LS2 LS3 LS4 @@ -2625,7 +2753,7 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { l, ok := expected[lsID] So(ok, ShouldBeTrue) - So(clus.nodes[leader].getLastCommittedLength(lsID), ShouldEqual, l) + So(clus.nodes[leader].getLastCommittedLength(topicID, lsID), ShouldEqual, l) } }) }) @@ -2647,10 +2775,9 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { //dummy commit result to SN direct nrEmpty := 2 - prevHighwatermark := highWatermark - highWatermark = highWatermark + types.GLSN(nrSN-nrEmpty) + version++ - offset := prevHighwatermark + types.GLSN(1) + offset := types.GLSN(written + 1) for i, snID := range snIDs { snCli := clus.reporterClientFac.(*DummyStorageNodeClientFactory).lookupClient(snID) @@ -2661,31 +2788,30 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { continue } - cr := makeCommitResult(snID, lsID, snCli.uncommittedLLSNOffset[0], prevHighwatermark, highWatermark, offset) + cr := makeCommitResult(snID, lsID, snCli.uncommittedLLSNOffset[0], version, offset) offset += types.GLSN(1) snCli.increaseUncommitted(0) snCli.Commit(cr) - So(snCli.getKnownHighWatermark(0), ShouldEqual, highWatermark) + So(snCli.getKnownVersion(0), ShouldEqual, version) } - prevHighwatermark = highWatermark - highWatermark = highWatermark + types.GLSN(nrSN) + version = version + types.Version(1) expected := make(map[types.LogStreamID]uint64) for _, snID := range snIDs { snCli := clus.reporterClientFac.(*DummyStorageNodeClientFactory).lookupClient(snID) lsID := snCli.logStreamID(0) - cr := makeCommitResult(snID, lsID, snCli.uncommittedLLSNOffset[0], prevHighwatermark, highWatermark, offset) + cr := makeCommitResult(snID, lsID, snCli.uncommittedLLSNOffset[0], version, offset) offset += types.GLSN(1) expected[lsID] = 1 snCli.increaseUncommitted(0) snCli.Commit(cr) - So(snCli.getKnownHighWatermark(0), ShouldEqual, highWatermark) + So(snCli.getKnownVersion(0), ShouldEqual, version) } So(clus.recoverMetadataRepo(leader), ShouldBeNil) @@ -2695,8 +2821,8 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { return clus.healthCheck(leader) }), ShouldBeTrue) - So(clus.nodes[leader].GetHighWatermark(), ShouldEqual, highWatermark) - fmt.Printf("recover highWatermark:%v\n", clus.nodes[leader].GetHighWatermark()) + So(clus.nodes[leader].GetLastCommitVersion(), ShouldEqual, version) + fmt.Printf("recover version:%v\n", clus.nodes[leader].GetLastCommitVersion()) /* LS0 LS1 LS2 LS3 LS4 @@ -2712,9 +2838,267 @@ func TestMRFailoverRecoverFromStateMachineLogWithIncompleteLog(t *testing.T) { l, ok := expected[lsID] So(ok, ShouldBeTrue) - So(clus.nodes[leader].getLastCommittedLength(lsID), ShouldEqual, l) + So(clus.nodes[leader].getLastCommittedLength(topicID, lsID), ShouldEqual, l) } }) }) }) } + +func TestMRUnregisterTopic(t *testing.T) { + Convey("Given 1 topic & 5 log streams", t, func(ctx C) { + rep := 1 + nrNodes := 1 + nrLS := 5 + topicID := types.TopicID(1) + + clus := newMetadataRepoCluster(nrNodes, rep, false, false) + Reset(func() { + clus.closeNoErrors(t) + }) + + So(clus.Start(), ShouldBeNil) + So(testutil.CompareWaitN(10, func() bool { + return clus.healthCheckAll() + }), ShouldBeTrue) + + snIDs := make([]types.StorageNodeID, rep) + for i := range snIDs { + snIDs[i] = types.StorageNodeID(i) + + sn := &varlogpb.StorageNodeDescriptor{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, + } + + err := clus.nodes[0].RegisterStorageNode(context.TODO(), sn) + So(err, ShouldBeNil) + } + + err := clus.nodes[0].RegisterTopic(context.TODO(), topicID) + So(err, ShouldBeNil) + + lsIDs := make([]types.LogStreamID, nrLS) + for i := range lsIDs { + lsIDs[i] = types.LogStreamID(i) + } + + for _, lsID := range lsIDs { + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + err := clus.nodes[0].RegisterLogStream(context.TODO(), ls) + So(err, ShouldBeNil) + } + + meta, _ := clus.nodes[0].GetMetadata(context.TODO()) + So(len(meta.GetLogStreams()), ShouldEqual, nrLS) + + err = clus.nodes[0].UnregisterTopic(context.TODO(), topicID) + So(err, ShouldBeNil) + + meta, _ = clus.nodes[0].GetMetadata(context.TODO()) + So(len(meta.GetLogStreams()), ShouldEqual, 0) + }) +} + +func TestMRTopicLastHighWatermark(t *testing.T) { + Convey("given metadata repository with multiple topics", t, func(ctx C) { + nrTopics := 3 + nrLS := 2 + rep := 2 + clus := newMetadataRepoCluster(1, rep, false, false) + mr := clus.nodes[0] + + snIDs := make([]types.StorageNodeID, rep) + for i := range snIDs { + snIDs[i] = types.StorageNodeID(i) + + sn := &varlogpb.StorageNodeDescriptor{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, + } + + err := mr.storage.registerStorageNode(sn) + So(err, ShouldBeNil) + } + + topicLogStreamID := make(map[types.TopicID][]types.LogStreamID) + topicID := types.TopicID(1) + lsID := types.LogStreamID(1) + for i := 0; i < nrTopics; i++ { + err := mr.storage.registerTopic(&varlogpb.TopicDescriptor{TopicID: topicID}) + So(err, ShouldBeNil) + + lsIds := make([]types.LogStreamID, nrLS) + for i := range lsIds { + ls := makeLogStream(topicID, lsID, snIDs) + err := mr.storage.registerLogStream(ls) + So(err, ShouldBeNil) + + lsIds[i] = lsID + lsID++ + } + topicLogStreamID[topicID] = lsIds + topicID++ + } + + Convey("getLastCommitted should return last committed Version", func(ctx C) { + So(clus.Start(), ShouldBeNil) + Reset(func() { + clus.closeNoErrors(t) + }) + + So(testutil.CompareWaitN(10, func() bool { + return clus.healthCheckAll() + }), ShouldBeTrue) + + for topicID, lsIds := range topicLogStreamID { + preVersion := mr.storage.GetLastCommitVersion() + + for _, lsID := range lsIds { + for i := 0; i < rep; i++ { + So(testutil.CompareWaitN(10, func() bool { + report := makeUncommitReport(snIDs[i], types.InvalidVersion, types.InvalidGLSN, lsID, types.MinLLSN, uint64(2+i)) + return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil + }), ShouldBeTrue) + } + } + + // global commit (2, 2) highest glsn: 4 + So(testutil.CompareWaitN(10, func() bool { + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(4) + }), ShouldBeTrue) + + latest := mr.storage.getLastCommitResultsNoLock() + base := mr.storage.lookupNextCommitResultsNoLock(preVersion) + + for _, lsID := range lsIds { + So(mr.numCommitSince(topicID, lsID, base, latest, -1), ShouldEqual, 2) + } + } + }) + + Convey("add logStream into topic", func(ctx C) { + for topicID, lsIds := range topicLogStreamID { + ls := makeLogStream(topicID, lsID, snIDs) + err := mr.storage.registerLogStream(ls) + So(err, ShouldBeNil) + + lsIds = append(lsIds, lsID) + lsID++ + + topicLogStreamID[topicID] = lsIds + } + + So(clus.Start(), ShouldBeNil) + Reset(func() { + clus.closeNoErrors(t) + }) + + So(testutil.CompareWaitN(10, func() bool { + return clus.healthCheckAll() + }), ShouldBeTrue) + + for topicID, lsIds := range topicLogStreamID { + preVersion := mr.storage.GetLastCommitVersion() + + for _, lsID := range lsIds { + for i := 0; i < rep; i++ { + So(testutil.CompareWaitN(10, func() bool { + report := makeUncommitReport(snIDs[i], types.InvalidVersion, types.InvalidGLSN, lsID, types.MinLLSN, uint64(2+i)) + return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil + }), ShouldBeTrue) + } + } + + // global commit (2, 2, 2) highest glsn: 6 + So(testutil.CompareWaitN(10, func() bool { + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(6) + }), ShouldBeTrue) + + latest := mr.storage.getLastCommitResultsNoLock() + base := mr.storage.lookupNextCommitResultsNoLock(preVersion) + + for _, lsID := range lsIds { + So(mr.numCommitSince(topicID, lsID, base, latest, -1), ShouldEqual, 2) + } + } + + for topicID, lsIds := range topicLogStreamID { + preVersion := mr.storage.GetLastCommitVersion() + + for _, lsID := range lsIds { + for i := 0; i < rep; i++ { + So(testutil.CompareWaitN(10, func() bool { + report := makeUncommitReport(snIDs[i], types.InvalidVersion, types.InvalidGLSN, lsID, types.MinLLSN, uint64(5+i)) + return mr.proposeReport(report.StorageNodeID, report.UncommitReport) == nil + }), ShouldBeTrue) + } + } + + // global commit (5, 5, 5) highest glsn: 15 + So(testutil.CompareWaitN(10, func() bool { + hwm, _ := mr.GetLastCommitResults().LastHighWatermark(topicID, -1) + return hwm == types.GLSN(15) + }), ShouldBeTrue) + + latest := mr.storage.getLastCommitResultsNoLock() + base := mr.storage.lookupNextCommitResultsNoLock(preVersion) + + for _, lsID := range lsIds { + So(mr.numCommitSince(topicID, lsID, base, latest, -1), ShouldEqual, 3) + } + } + }) + }) +} + +func TestMRTopicCatchup(t *testing.T) { + Convey("Given MR cluster with multi topic", t, func(ctx C) { + nrRep := 1 + nrNode := 1 + nrTopic := 2 + nrLSPerTopic := 2 + nrSN := nrTopic * nrLSPerTopic + + clus := newMetadataRepoCluster(nrNode, nrRep, false, true) + clus.Start() + Reset(func() { + clus.closeNoErrors(t) + }) + So(testutil.CompareWaitN(10, func() bool { + return clus.healthCheckAll() + }), ShouldBeTrue) + + leader := clus.leader() + So(leader, ShouldBeGreaterThan, -1) + + // register SN & LS + err := clus.initDummyStorageNode(nrSN, nrTopic) + So(err, ShouldBeNil) + + So(testutil.CompareWaitN(50, func() bool { + return len(clus.getSNIDs()) == nrSN + }), ShouldBeTrue) + + // append to SN + snIDs := clus.getSNIDs() + for i := 0; i < 100; i++ { + for _, snID := range snIDs { + snCli := clus.reporterClientFac.(*DummyStorageNodeClientFactory).lookupClient(snID) + snCli.increaseUncommitted(0) + } + + for _, snID := range snIDs { + snCli := clus.reporterClientFac.(*DummyStorageNodeClientFactory).lookupClient(snID) + + So(testutil.CompareWaitN(50, func() bool { + return snCli.numUncommitted(0) == 0 + }), ShouldBeTrue) + } + } + }) +} diff --git a/internal/metadata_repository/raft_test.go b/internal/metadata_repository/raft_test.go index c358c2152..961e2a57d 100644 --- a/internal/metadata_repository/raft_test.go +++ b/internal/metadata_repository/raft_test.go @@ -154,11 +154,8 @@ func TestProposeOnFollower(t *testing.T) { go func(pC chan<- string, cC <-chan *raftCommittedEntry) { Loop: - for { - select { - case <-cC: - break Loop - } + for range cC { + break Loop } donec <- struct{}{} for range cC { diff --git a/internal/metadata_repository/report_collector.go b/internal/metadata_repository/report_collector.go index 0791c0a8c..db03c3a5d 100644 --- a/internal/metadata_repository/report_collector.go +++ b/internal/metadata_repository/report_collector.go @@ -29,13 +29,13 @@ type ReportCollector interface { Close() - Recover([]*varlogpb.StorageNodeDescriptor, []*varlogpb.LogStreamDescriptor, types.GLSN) error + Recover([]*varlogpb.StorageNodeDescriptor, []*varlogpb.LogStreamDescriptor, types.Version) error RegisterStorageNode(*varlogpb.StorageNodeDescriptor) error UnregisterStorageNode(types.StorageNodeID) error - RegisterLogStream(types.StorageNodeID, types.LogStreamID, types.GLSN, varlogpb.LogStreamStatus) error + RegisterLogStream(types.TopicID, types.StorageNodeID, types.LogStreamID, types.Version, varlogpb.LogStreamStatus) error UnregisterLogStream(types.StorageNodeID, types.LogStreamID) error @@ -43,7 +43,7 @@ type ReportCollector interface { Seal(types.LogStreamID) - Unseal(types.LogStreamID, types.GLSN) + Unseal(types.LogStreamID, types.Version) NumExecutors() int @@ -53,30 +53,32 @@ type ReportCollector interface { type commitHelper interface { getClient(ctx context.Context) (reportcommitter.Client, error) - getReportedHighWatermark(types.LogStreamID) (types.GLSN, bool) + getReportedVersion(types.LogStreamID) (types.Version, bool) getLastCommitResults() *mrpb.LogStreamCommitResults - lookupNextCommitResults(types.GLSN) (*mrpb.LogStreamCommitResults, error) + lookupNextCommitResults(types.Version) (*mrpb.LogStreamCommitResults, error) commit(context.Context, snpb.LogStreamCommitResult) error } type logStreamCommitter struct { - lsID types.LogStreamID - helper commitHelper + topicID types.TopicID + lsID types.LogStreamID + helper commitHelper commitStatus struct { - mu sync.RWMutex - status varlogpb.LogStreamStatus - beginHighWatermark types.GLSN + mu sync.RWMutex + status varlogpb.LogStreamStatus + beginVersion types.Version } catchupHelper struct { - cli reportcommitter.Client - sentHighWatermark types.GLSN - sentAt time.Time - expectedPos int + cli reportcommitter.Client + sentVersion types.Version + sentAt time.Time + expectedPos int + expectedEndPos int } triggerC chan struct{} @@ -215,7 +217,7 @@ func (rc *reportCollector) Close() { rc.closed = true } -func (rc *reportCollector) Recover(sns []*varlogpb.StorageNodeDescriptor, lss []*varlogpb.LogStreamDescriptor, highWatermark types.GLSN) error { +func (rc *reportCollector) Recover(sns []*varlogpb.StorageNodeDescriptor, lss []*varlogpb.LogStreamDescriptor, ver types.Version) error { rc.Run() for _, sn := range sns { @@ -240,7 +242,7 @@ func (rc *reportCollector) Recover(sns []*varlogpb.StorageNodeDescriptor, lss [] } for _, r := range ls.Replicas { - err := rc.RegisterLogStream(r.StorageNodeID, ls.LogStreamID, highWatermark, status) + err := rc.RegisterLogStream(ls.TopicID, r.StorageNodeID, ls.LogStreamID, ver, status) if err != nil { return err } @@ -313,7 +315,7 @@ func (rc *reportCollector) RegisterStorageNode(sn *varlogpb.StorageNodeDescripto return verrors.ErrExist } - logger := rc.logger.Named("executor").With(zap.Uint32("snid", uint32(sn.StorageNodeID))) + logger := rc.logger.Named("executor").With(zap.Int32("snid", int32(sn.StorageNodeID))) executor := &reportCollectExecutor{ storageNodeID: sn.StorageNodeID, helper: rc.helper, @@ -356,7 +358,7 @@ func (rc *reportCollector) UnregisterStorageNode(snID types.StorageNodeID) error return nil } -func (rc *reportCollector) RegisterLogStream(snID types.StorageNodeID, lsID types.LogStreamID, highWatermark types.GLSN, status varlogpb.LogStreamStatus) error { +func (rc *reportCollector) RegisterLogStream(topicID types.TopicID, snID types.StorageNodeID, lsID types.LogStreamID, ver types.Version, status varlogpb.LogStreamStatus) error { rc.mu.RLock() defer rc.mu.RUnlock() @@ -369,7 +371,7 @@ func (rc *reportCollector) RegisterLogStream(snID types.StorageNodeID, lsID type return verrors.ErrNotExist } - return executor.registerLogStream(lsID, highWatermark, status) + return executor.registerLogStream(topicID, lsID, ver, status) } func (rc *reportCollector) UnregisterLogStream(snID types.StorageNodeID, lsID types.LogStreamID) error { @@ -422,7 +424,7 @@ func (rc *reportCollector) Seal(lsID types.LogStreamID) { } } -func (rc *reportCollector) Unseal(lsID types.LogStreamID, highWatermark types.GLSN) { +func (rc *reportCollector) Unseal(lsID types.LogStreamID, ver types.Version) { rc.mu.RLock() defer rc.mu.RUnlock() @@ -431,7 +433,7 @@ func (rc *reportCollector) Unseal(lsID types.LogStreamID, highWatermark types.GL } for _, executor := range rc.executors { - executor.unseal(lsID, highWatermark) + executor.unseal(lsID, ver) } } @@ -535,7 +537,7 @@ func (rce *reportCollectExecutor) insertCommitter(c *logStreamCommitter) error { return nil } -func (rce *reportCollectExecutor) registerLogStream(lsID types.LogStreamID, highWatermark types.GLSN, status varlogpb.LogStreamStatus) error { +func (rce *reportCollectExecutor) registerLogStream(topicID types.TopicID, lsID types.LogStreamID, ver types.Version, status varlogpb.LogStreamStatus) error { rce.cmmu.Lock() defer rce.cmmu.Unlock() @@ -543,7 +545,7 @@ func (rce *reportCollectExecutor) registerLogStream(lsID types.LogStreamID, high return verrors.ErrExist } - c := newLogStreamCommitter(lsID, rce, highWatermark, status, rce.tmStub, rce.logger) + c := newLogStreamCommitter(topicID, lsID, rce, ver, status, rce.tmStub, rce.logger) err := c.run() if err != nil { return err @@ -593,12 +595,12 @@ func (rce *reportCollectExecutor) seal(lsID types.LogStreamID) { } } -func (rce *reportCollectExecutor) unseal(lsID types.LogStreamID, highWatermark types.GLSN) { +func (rce *reportCollectExecutor) unseal(lsID types.LogStreamID, ver types.Version) { rce.cmmu.RLock() defer rce.cmmu.RUnlock() if c := rce.lookupCommitter(lsID); c != nil { - c.unseal(highWatermark) + c.unseal(ver) } } @@ -682,12 +684,12 @@ func (rce *reportCollectExecutor) processReport(response *snpb.GetReportResponse } else if prev.LogStreamID < cur.LogStreamID { j++ } else { - if cur.HighWatermark < prev.HighWatermark { + if cur.Version < prev.Version { fmt.Printf("invalid report prev:%v, cur:%v\n", - prev.HighWatermark, cur.HighWatermark) + prev.Version, cur.Version) rce.logger.Panic("invalid report", - zap.Any("prev", prev.HighWatermark), - zap.Any("cur", cur.HighWatermark)) + zap.Any("prev", prev.Version), + zap.Any("cur", cur.Version)) } if cur.UncommittedLLSNOffset > prev.UncommittedLLSNOffset || @@ -756,32 +758,33 @@ func (rce *reportCollectExecutor) commit(ctx context.Context, cr snpb.LogStreamC return nil } -func (rce *reportCollectExecutor) getReportedHighWatermark(lsID types.LogStreamID) (types.GLSN, bool) { +func (rce *reportCollectExecutor) getReportedVersion(lsID types.LogStreamID) (types.Version, bool) { report := rce.reportCtx.getReport() if report == nil { - return types.InvalidGLSN, false + return types.InvalidVersion, false } r, ok := report.LookupReport(lsID) if !ok { - return types.InvalidGLSN, false + return types.InvalidVersion, false } - return r.HighWatermark, true + return r.Version, true } func (rce *reportCollectExecutor) getLastCommitResults() *mrpb.LogStreamCommitResults { return rce.helper.GetLastCommitResults() } -func (rce *reportCollectExecutor) lookupNextCommitResults(glsn types.GLSN) (*mrpb.LogStreamCommitResults, error) { - return rce.helper.LookupNextCommitResults(glsn) +func (rce *reportCollectExecutor) lookupNextCommitResults(ver types.Version) (*mrpb.LogStreamCommitResults, error) { + return rce.helper.LookupNextCommitResults(ver) } -func newLogStreamCommitter(lsID types.LogStreamID, helper commitHelper, highWatermark types.GLSN, status varlogpb.LogStreamStatus, tmStub *telemetryStub, logger *zap.Logger) *logStreamCommitter { +func newLogStreamCommitter(topicID types.TopicID, lsID types.LogStreamID, helper commitHelper, ver types.Version, status varlogpb.LogStreamStatus, tmStub *telemetryStub, logger *zap.Logger) *logStreamCommitter { triggerC := make(chan struct{}, 1) c := &logStreamCommitter{ + topicID: topicID, lsID: lsID, helper: helper, triggerC: triggerC, @@ -791,7 +794,7 @@ func newLogStreamCommitter(lsID types.LogStreamID, helper commitHelper, highWate } c.commitStatus.status = status - c.commitStatus.beginHighWatermark = highWatermark + c.commitStatus.beginVersion = ver return c } @@ -835,31 +838,31 @@ Loop: close(lc.triggerC) } -func (lc *logStreamCommitter) getCatchupHighWatermark(resetCatchupHelper bool) (types.GLSN, bool) { +func (lc *logStreamCommitter) getCatchupVersion(resetCatchupHelper bool) (types.Version, bool) { if resetCatchupHelper { - lc.setSentHighWatermark(types.InvalidGLSN) + lc.setSentVersion(types.InvalidVersion) } - status, beginHighWatermark := lc.getCommitStatus() + status, beginVer := lc.getCommitStatus() if status.Sealed() { - return types.InvalidGLSN, false + return types.InvalidVersion, false } - highWatermark, ok := lc.helper.getReportedHighWatermark(lc.lsID) + ver, ok := lc.helper.getReportedVersion(lc.lsID) if !ok { - return types.InvalidGLSN, false + return types.InvalidVersion, false } - sent := lc.getSentHighWatermark() - if sent > highWatermark { - highWatermark = sent + sent := lc.getSentVersion() + if sent > ver { + ver = sent } - if beginHighWatermark > highWatermark { - return beginHighWatermark, true + if beginVer > ver { + return beginVer, true } - return highWatermark, true + return ver, true } func (lc *logStreamCommitter) catchup(ctx context.Context) { @@ -879,8 +882,8 @@ func (lc *logStreamCommitter) catchup(ctx context.Context) { return } - highWatermark, ok := lc.getCatchupHighWatermark(resetCatchupHelper) - if !ok || highWatermark >= crs.HighWatermark { + ver, ok := lc.getCatchupVersion(resetCatchupHelper) + if !ok || ver >= crs.Version { return } @@ -894,20 +897,20 @@ func (lc *logStreamCommitter) catchup(ctx context.Context) { }() for ctx.Err() == nil { - if highWatermark != crs.PrevHighWatermark { - crs, err = lc.helper.lookupNextCommitResults(highWatermark) + if ver+1 != crs.Version { + crs, err = lc.helper.lookupNextCommitResults(ver) if err != nil { - latestHighWatermark, ok := lc.getCatchupHighWatermark(false) + latestVersion, ok := lc.getCatchupVersion(false) if !ok { return } - if latestHighWatermark > highWatermark { - highWatermark = latestHighWatermark + if latestVersion > ver { + ver = latestVersion continue } - lc.logger.Warn(fmt.Sprintf("lsid:%v latest:%v err:%v", lc.lsID, latestHighWatermark, err.Error())) + lc.logger.Warn(fmt.Sprintf("lsid:%v latest:%v err:%v", lc.lsID, latestVersion, err.Error())) return } } @@ -916,12 +919,12 @@ func (lc *logStreamCommitter) catchup(ctx context.Context) { return } - cr, expectedPos, ok := crs.LookupCommitResult(lc.lsID, lc.catchupHelper.expectedPos) + cr, expectedPos, ok := crs.LookupCommitResult(lc.topicID, lc.lsID, lc.catchupHelper.expectedPos) if ok { lc.catchupHelper.expectedPos = expectedPos - cr.HighWatermark = crs.HighWatermark - cr.PrevHighWatermark = crs.PrevHighWatermark + cr.Version = crs.Version + cr.HighWatermark, lc.catchupHelper.expectedEndPos = crs.LastHighWatermark(lc.topicID, lc.catchupHelper.expectedEndPos) err := lc.helper.commit(ctx, cr) if err != nil { @@ -929,8 +932,8 @@ func (lc *logStreamCommitter) catchup(ctx context.Context) { } } - lc.setSentHighWatermark(crs.HighWatermark) - highWatermark = crs.HighWatermark + lc.setSentVersion(crs.Version) + ver = crs.Version numCatchups++ } @@ -943,30 +946,30 @@ func (lc *logStreamCommitter) seal() { lc.commitStatus.status = varlogpb.LogStreamStatusSealed } -func (lc *logStreamCommitter) unseal(highWatermark types.GLSN) { +func (lc *logStreamCommitter) unseal(ver types.Version) { lc.commitStatus.mu.Lock() defer lc.commitStatus.mu.Unlock() lc.commitStatus.status = varlogpb.LogStreamStatusRunning - lc.commitStatus.beginHighWatermark = highWatermark + lc.commitStatus.beginVersion = ver } -func (lc *logStreamCommitter) getCommitStatus() (varlogpb.LogStreamStatus, types.GLSN) { +func (lc *logStreamCommitter) getCommitStatus() (varlogpb.LogStreamStatus, types.Version) { lc.commitStatus.mu.RLock() defer lc.commitStatus.mu.RUnlock() - return lc.commitStatus.status, lc.commitStatus.beginHighWatermark + return lc.commitStatus.status, lc.commitStatus.beginVersion } -func (lc *logStreamCommitter) getSentHighWatermark() types.GLSN { +func (lc *logStreamCommitter) getSentVersion() types.Version { if time.Since(lc.catchupHelper.sentAt) > DefaultCatchupRefreshTime { - return types.InvalidGLSN + return types.InvalidVersion } - return lc.catchupHelper.sentHighWatermark + return lc.catchupHelper.sentVersion } -func (lc *logStreamCommitter) setSentHighWatermark(highWatermark types.GLSN) { - lc.catchupHelper.sentHighWatermark = highWatermark +func (lc *logStreamCommitter) setSentVersion(ver types.Version) { + lc.catchupHelper.sentVersion = ver lc.catchupHelper.sentAt = time.Now() } diff --git a/internal/metadata_repository/report_collector_test.go b/internal/metadata_repository/report_collector_test.go index bffec1823..a4bd0950c 100644 --- a/internal/metadata_repository/report_collector_test.go +++ b/internal/metadata_repository/report_collector_test.go @@ -65,7 +65,7 @@ func (mr *dummyMetadataRepository) GetLastCommitResults() *mrpb.LogStreamCommitR return mr.m[len(mr.m)-1] } -func (mr *dummyMetadataRepository) LookupNextCommitResults(glsn types.GLSN) (*mrpb.LogStreamCommitResults, error) { +func (mr *dummyMetadataRepository) LookupNextCommitResults(ver types.Version) (*mrpb.LogStreamCommitResults, error) { mr.mt.Lock() defer mr.mt.Unlock() @@ -74,15 +74,15 @@ func (mr *dummyMetadataRepository) LookupNextCommitResults(glsn types.GLSN) (*mr return nil, err } - if mr.m[0].PrevHighWatermark > glsn { - err = fmt.Errorf("already trimmed glsn:%v, oldest:%v", glsn, mr.m[0].PrevHighWatermark) + if mr.m[0].Version > ver+1 { + err = fmt.Errorf("already trimmed ver:%v, oldest:%v", ver, mr.m[0].Version) } i := sort.Search(len(mr.m), func(i int) bool { - return mr.m[i].PrevHighWatermark >= glsn + return mr.m[i].Version >= ver+1 }) - if i < len(mr.m) && mr.m[i].PrevHighWatermark == glsn { + if i < len(mr.m) && mr.m[i].Version == ver+1 { return mr.m[i], err } @@ -94,15 +94,14 @@ func (mr *dummyMetadataRepository) appendGLS(gls *mrpb.LogStreamCommitResults) { defer mr.mt.Unlock() mr.m = append(mr.m, gls) - sort.Slice(mr.m, func(i, j int) bool { return mr.m[i].HighWatermark < mr.m[j].HighWatermark }) } -func (mr *dummyMetadataRepository) trimGLS(glsn types.GLSN) { +func (mr *dummyMetadataRepository) trimGLS(ver types.Version) { mr.mt.Lock() defer mr.mt.Unlock() for i, gls := range mr.m { - if glsn == gls.HighWatermark { + if ver == gls.Version { if i > 0 { mr.m = mr.m[i-1:] return @@ -135,7 +134,9 @@ func TestRegisterStorageNode(t *testing.T) { defer reportCollector.Close() sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(time.Now().UnixNano()), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(time.Now().UnixNano()), + }, } err := reportCollector.RegisterStorageNode(sn) @@ -164,27 +165,30 @@ func TestRegisterLogStream(t *testing.T) { snID := types.StorageNodeID(0) lsID := types.LogStreamID(0) + topicID := types.TopicID(0) Convey("registeration LogStream with not existing storageNodeID should be failed", func() { - err := reportCollector.RegisterLogStream(snID, lsID, types.InvalidGLSN, varlogpb.LogStreamStatusRunning) + err := reportCollector.RegisterLogStream(topicID, snID, lsID, types.InvalidVersion, varlogpb.LogStreamStatusRunning) So(err, ShouldResemble, verrors.ErrNotExist) }) Convey("registeration LogStream with existing storageNodeID should be succeed", func() { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := reportCollector.RegisterStorageNode(sn) So(err, ShouldBeNil) So(reportCollector.NumExecutors(), ShouldEqual, 1) - err = reportCollector.RegisterLogStream(snID, lsID, types.InvalidGLSN, varlogpb.LogStreamStatusRunning) + err = reportCollector.RegisterLogStream(topicID, snID, lsID, types.InvalidVersion, varlogpb.LogStreamStatusRunning) So(err, ShouldBeNil) So(reportCollector.NumCommitter(), ShouldEqual, 1) Convey("duplicated registeration LogStream should be failed", func() { - err = reportCollector.RegisterLogStream(snID, lsID, types.InvalidGLSN, varlogpb.LogStreamStatusRunning) + err = reportCollector.RegisterLogStream(topicID, snID, lsID, types.InvalidVersion, varlogpb.LogStreamStatusRunning) So(err, ShouldResemble, verrors.ErrExist) }) }) @@ -203,9 +207,12 @@ func TestUnregisterStorageNode(t *testing.T) { snID := types.StorageNodeID(time.Now().UnixNano()) lsID := types.LogStreamID(0) + topicID := types.TopicID(0) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := reportCollector.RegisterStorageNode(sn) @@ -220,7 +227,7 @@ func TestUnregisterStorageNode(t *testing.T) { }) Convey("unregisteration storageNode with logstream should be failed", func() { - err = reportCollector.RegisterLogStream(snID, lsID, types.InvalidGLSN, varlogpb.LogStreamStatusRunning) + err = reportCollector.RegisterLogStream(topicID, snID, lsID, types.InvalidVersion, varlogpb.LogStreamStatusRunning) So(err, ShouldBeNil) So(reportCollector.NumCommitter(), ShouldEqual, 1) @@ -256,6 +263,7 @@ func TestUnregisterLogStream(t *testing.T) { snID := types.StorageNodeID(0) lsID := types.LogStreamID(0) + topicID := types.TopicID(0) Convey("unregisteration LogStream with not existing storageNodeID should be failed", func() { err := reportCollector.UnregisterLogStream(snID, lsID) @@ -264,14 +272,16 @@ func TestUnregisterLogStream(t *testing.T) { Convey("unregisteration LogStream with existing storageNodeID should be succeed", func() { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := reportCollector.RegisterStorageNode(sn) So(err, ShouldBeNil) So(reportCollector.NumExecutors(), ShouldEqual, 1) - err = reportCollector.RegisterLogStream(snID, lsID, types.InvalidGLSN, varlogpb.LogStreamStatusRunning) + err = reportCollector.RegisterLogStream(topicID, snID, lsID, types.InvalidVersion, varlogpb.LogStreamStatusRunning) So(err, ShouldBeNil) So(reportCollector.NumCommitter(), ShouldEqual, 1) @@ -294,15 +304,18 @@ func TestRecoverStorageNode(t *testing.T) { defer reportCollector.Close() nrSN := 5 - hwm := types.MinGLSN + ver := types.MinVersion var SNs []*varlogpb.StorageNodeDescriptor var LSs []*varlogpb.LogStreamDescriptor var sealingLSID types.LogStreamID var sealedLSID types.LogStreamID + var topicID types.TopicID for i := 0; i < nrSN; i++ { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(time.Now().UnixNano()), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(time.Now().UnixNano()), + }, } SNs = append(SNs, sn) @@ -326,7 +339,7 @@ func TestRecoverStorageNode(t *testing.T) { LSs = append(LSs, ls) - err = reportCollector.RegisterLogStream(sn.StorageNodeID, ls.LogStreamID, types.InvalidGLSN, varlogpb.LogStreamStatusRunning) + err = reportCollector.RegisterLogStream(topicID, sn.StorageNodeID, ls.LogStreamID, types.InvalidVersion, varlogpb.LogStreamStatusRunning) So(err, ShouldBeNil) } @@ -354,7 +367,7 @@ func TestRecoverStorageNode(t *testing.T) { } Convey("When ReportCollector Recover", func(ctx C) { - reportCollector.Recover(SNs, LSs, hwm) + reportCollector.Recover(SNs, LSs, ver) Convey("Then there should be ReportCollectExecutor", func(ctx C) { sealing := false @@ -404,7 +417,7 @@ func TestRecoverStorageNode(t *testing.T) { } Convey("When ReportCollector Recover", func(ctx C) { - reportCollector.Recover(SNs, LSs, hwm) + reportCollector.Recover(SNs, LSs, ver) Convey("Then there should be no ReportCollectExecutor", func(ctx C) { for i := 0; i < nrSN; i++ { reportCollector.mu.RLock() @@ -467,7 +480,9 @@ func TestReport(t *testing.T) { for i := 0; i < nrStorage; i++ { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(i), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(i), + }, } err := reportCollector.RegisterStorageNode(sn) @@ -491,7 +506,9 @@ func TestReportDedup(t *testing.T) { defer reportCollector.Close() sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + }, } err := reportCollector.RegisterStorageNode(sn) @@ -542,7 +559,9 @@ func TestReportCollectorSeal(t *testing.T) { Convey("Given ReportCollector", t, func() { nrStorage := 5 nrLogStream := nrStorage - knownHWM := types.InvalidGLSN + knownVer := types.InvalidVersion + glsn := types.MinGLSN + topicID := types.TopicID(0) a := NewDummyStorageNodeClientFactory(1, false) mr := NewDummyMetadataRepository(a) @@ -557,7 +576,9 @@ func TestReportCollectorSeal(t *testing.T) { for i := 0; i < nrStorage; i++ { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(i), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(i), + }, } err := reportCollector.RegisterStorageNode(sn) @@ -573,7 +594,7 @@ func TestReportCollectorSeal(t *testing.T) { var sealedLSID types.LogStreamID for i := 0; i < nrLogStream; i++ { - err := reportCollector.RegisterLogStream(types.StorageNodeID(i%nrStorage), types.LogStreamID(i), types.InvalidGLSN, varlogpb.LogStreamStatusRunning) + err := reportCollector.RegisterLogStream(topicID, types.StorageNodeID(i%nrStorage), types.LogStreamID(i), types.InvalidVersion, varlogpb.LogStreamStatusRunning) if err != nil { t.Fatal(err) } @@ -581,9 +602,10 @@ func TestReportCollectorSeal(t *testing.T) { sealedLSID = types.LogStreamID(i) } - gls := cc.newDummyCommitResults(knownHWM, nrStorage) + gls := cc.newDummyCommitResults(knownVer+1, glsn, nrStorage) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) So(testutil.CompareWaitN(10, func() bool { reportCollector.Commit() @@ -595,7 +617,7 @@ func TestReportCollectorSeal(t *testing.T) { executor.cmmu.RLock() defer executor.cmmu.RUnlock() - if reportedHWM, ok := executor.getReportedHighWatermark(sealedLSID); ok && reportedHWM == knownHWM { + if reportedVer, ok := executor.getReportedVersion(sealedLSID); ok && reportedVer == knownVer { return true } } @@ -610,9 +632,10 @@ func TestReportCollectorSeal(t *testing.T) { time.Sleep(time.Second) Convey("Then it should not commit", func() { - gls = cc.newDummyCommitResults(knownHWM, nrStorage) + gls = cc.newDummyCommitResults(knownVer+1, glsn, nrStorage) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) for i := 0; i < 10; i++ { reportCollector.Commit() @@ -627,26 +650,27 @@ func TestReportCollectorSeal(t *testing.T) { executor.cmmu.RLock() defer executor.cmmu.RUnlock() - reportedHWM, ok := executor.getReportedHighWatermark(sealedLSID) - So(ok && reportedHWM == knownHWM, ShouldBeFalse) + reportedVer, ok := executor.getReportedVersion(sealedLSID) + So(ok && reportedVer == knownVer, ShouldBeFalse) } } Convey("When ReportCollector Unseal", func() { - reportCollector.Unseal(sealedLSID, knownHWM) + reportCollector.Unseal(sealedLSID, knownVer) cc.unseal(sealedLSID) Convey("Then it should commit", func() { - gls = cc.newDummyCommitResults(knownHWM, nrStorage) + gls = cc.newDummyCommitResults(knownVer+1, glsn, nrStorage) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) a.m.Range(func(k, v interface{}) bool { cli := v.(*DummyStorageNodeClient) So(testutil.CompareWaitN(10, func() bool { reportCollector.Commit() - return cli.getKnownHighWatermark(0) == knownHWM + return cli.getKnownVersion(0) == knownVer }), ShouldBeTrue) return true }) @@ -682,17 +706,16 @@ func (cc *dummyCommitContext) sealed(lsID types.LogStreamID) bool { return ok } -func (cc *dummyCommitContext) newDummyCommitResults(prev types.GLSN, nrLogStream int) *mrpb.LogStreamCommitResults { +func (cc *dummyCommitContext) newDummyCommitResults(ver types.Version, baseGLSN types.GLSN, nrLogStream int) *mrpb.LogStreamCommitResults { cr := &mrpb.LogStreamCommitResults{ - HighWatermark: prev + types.GLSN(nrLogStream), - PrevHighWatermark: prev, + Version: ver, } - glsn := prev + types.GLSN(1) for i := len(cc.committedLLSNBeginOffset); i < nrLogStream; i++ { cc.committedLLSNBeginOffset = append(cc.committedLLSNBeginOffset, types.MinLLSN) } + glsn := baseGLSN for i := 0; i < nrLogStream; i++ { numUncommitLen := 0 if !cc.sealed(types.LogStreamID(i)) { @@ -718,7 +741,9 @@ func TestCommit(t *testing.T) { Convey("Given ReportCollector", t, func() { nrStorage := 5 nrLogStream := nrStorage - knownHWM := types.InvalidGLSN + knownVer := types.InvalidVersion + glsn := types.MinGLSN + topicID := types.TopicID(0) a := NewDummyStorageNodeClientFactory(1, false) mr := NewDummyMetadataRepository(a) @@ -733,7 +758,9 @@ func TestCommit(t *testing.T) { for i := 0; i < nrStorage; i++ { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(i), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(i), + }, } err := reportCollector.RegisterStorageNode(sn) @@ -747,16 +774,17 @@ func TestCommit(t *testing.T) { } for i := 0; i < nrLogStream; i++ { - err := reportCollector.RegisterLogStream(types.StorageNodeID(i%nrStorage), types.LogStreamID(i), types.InvalidGLSN, varlogpb.LogStreamStatusRunning) + err := reportCollector.RegisterLogStream(topicID, types.StorageNodeID(i%nrStorage), types.LogStreamID(i), types.InvalidVersion, varlogpb.LogStreamStatusRunning) if err != nil { t.Fatal(err) } } Convey("ReportCollector should broadcast commit result to registered storage node", func() { - gls := cc.newDummyCommitResults(knownHWM, nrStorage) + gls := cc.newDummyCommitResults(knownVer+1, glsn, nrStorage) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) reportCollector.Commit() @@ -765,19 +793,21 @@ func TestCommit(t *testing.T) { So(testutil.CompareWaitN(10, func() bool { reportCollector.Commit() - return cli.getKnownHighWatermark(0) == knownHWM + return cli.getKnownVersion(0) == knownVer }), ShouldBeTrue) return true }) Convey("ReportCollector should send ordered commit result to registered storage node", func() { - gls := cc.newDummyCommitResults(knownHWM, nrStorage) + gls := cc.newDummyCommitResults(knownVer+1, glsn, nrStorage) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) - gls = cc.newDummyCommitResults(knownHWM, nrStorage) + gls = cc.newDummyCommitResults(knownVer+1, glsn, nrStorage) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) reportCollector.Commit() @@ -786,18 +816,18 @@ func TestCommit(t *testing.T) { So(testutil.CompareWaitN(10, func() bool { reportCollector.Commit() - return cli.getKnownHighWatermark(0) == knownHWM + return cli.getKnownVersion(0) == knownVer }), ShouldBeTrue) return true }) - trimHWM := types.MaxGLSN + trimVer := types.MaxVersion reportCollector.mu.RLock() for _, executor := range reportCollector.executors { reports := executor.reportCtx.getReport() for _, report := range reports.UncommitReports { - if !report.HighWatermark.Invalid() && report.HighWatermark < trimHWM { - trimHWM = report.HighWatermark + if !report.Version.Invalid() && report.Version < trimVer { + trimVer = report.Version } } } @@ -805,12 +835,14 @@ func TestCommit(t *testing.T) { // wait for prev catchup job to finish time.Sleep(time.Second) - mr.trimGLS(trimHWM) - logger.Debug("trimGLS", zap.Any("knowHWM", knownHWM), zap.Any("trimHWM", trimHWM), zap.Any("result", len(mr.m))) + mr.trimGLS(trimVer) + logger.Debug("trimGLS", zap.Any("knowVer", knownVer), zap.Any("trimVer", trimVer), zap.Any("result", len(mr.m))) Convey("ReportCollector should send proper commit against new StorageNode", func() { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(nrStorage), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(nrStorage), + }, } err := reportCollector.RegisterStorageNode(sn) @@ -818,14 +850,15 @@ func TestCommit(t *testing.T) { nrStorage += 1 - err = reportCollector.RegisterLogStream(sn.StorageNodeID, types.LogStreamID(nrLogStream), knownHWM, varlogpb.LogStreamStatusRunning) + err = reportCollector.RegisterLogStream(topicID, sn.StorageNodeID, types.LogStreamID(nrLogStream), knownVer, varlogpb.LogStreamStatusRunning) So(err, ShouldBeNil) nrLogStream += 1 - gls := cc.newDummyCommitResults(knownHWM, nrStorage) + gls := cc.newDummyCommitResults(knownVer+1, glsn, nrStorage) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) So(testutil.CompareWaitN(10, func() bool { nrCli := 0 @@ -834,7 +867,7 @@ func TestCommit(t *testing.T) { So(testutil.CompareWaitN(10, func() bool { reportCollector.Commit() - return cli.getKnownHighWatermark(0) == knownHWM + return cli.getKnownVersion(0) == knownVer }), ShouldBeTrue) nrCli++ return true @@ -850,7 +883,9 @@ func TestCommit(t *testing.T) { func TestCommitWithDelay(t *testing.T) { Convey("Given ReportCollector", t, func() { - knownHWM := types.InvalidGLSN + knownVer := types.InvalidVersion + glsn := types.MinGLSN + topicID := types.TopicID(0) a := NewDummyStorageNodeClientFactory(1, false) mr := NewDummyMetadataRepository(a) @@ -864,7 +899,9 @@ func TestCommitWithDelay(t *testing.T) { }) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + }, } err := reportCollector.RegisterStorageNode(sn) @@ -876,7 +913,7 @@ func TestCommitWithDelay(t *testing.T) { return a.lookupClient(sn.StorageNodeID) != nil }), ShouldBeTrue) - err = reportCollector.RegisterLogStream(types.StorageNodeID(0), types.LogStreamID(0), types.InvalidGLSN, varlogpb.LogStreamStatusRunning) + err = reportCollector.RegisterLogStream(topicID, types.StorageNodeID(0), types.LogStreamID(0), types.InvalidVersion, varlogpb.LogStreamStatusRunning) if err != nil { t.Fatal(err) } @@ -896,37 +933,40 @@ func TestCommitWithDelay(t *testing.T) { dummySN := a.lookupClient(sn.StorageNodeID) Convey("disable report to catchup using old hwm", func() { - gls := cc.newDummyCommitResults(knownHWM, 1) + gls := cc.newDummyCommitResults(knownVer+1, glsn, 1) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) reportCollector.Commit() So(testutil.CompareWaitN(10, func() bool { - return executor.reportCtx.getReport().UncommitReports[0].HighWatermark == knownHWM + return executor.reportCtx.getReport().UncommitReports[0].Version == knownVer }), ShouldBeTrue) - reportedHWM := executor.reportCtx.getReport().UncommitReports[0].HighWatermark + reportedVer := executor.reportCtx.getReport().UncommitReports[0].Version dummySN.DisableReport() time.Sleep(10 * time.Millisecond) - gls = cc.newDummyCommitResults(knownHWM, 1) + gls = cc.newDummyCommitResults(knownVer+1, glsn, 1) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) - gls = cc.newDummyCommitResults(knownHWM, 1) + gls = cc.newDummyCommitResults(knownVer+1, glsn, 1) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) reportCollector.Commit() So(testutil.CompareWaitN(10, func() bool { - return dummySN.getKnownHighWatermark(0) == knownHWM + return dummySN.getKnownVersion(0) == knownVer }), ShouldBeTrue) time.Sleep(10 * time.Millisecond) - So(executor.reportCtx.getReport().UncommitReports[0].HighWatermark, ShouldEqual, reportedHWM) + So(executor.reportCtx.getReport().UncommitReports[0].Version, ShouldEqual, reportedVer) Convey("set commit delay & enable report to trim during catchup", func() { dummySN.SetCommitDelay(30 * time.Millisecond) @@ -937,23 +977,24 @@ func TestCommitWithDelay(t *testing.T) { So(testutil.CompareWaitN(10, func() bool { reports := executor.reportCtx.getReport() - return reports.UncommitReports[0].HighWatermark == knownHWM + return reports.UncommitReports[0].Version == knownVer }), ShouldBeTrue) // wait for prev catchup job to finish time.Sleep(time.Second) - mr.trimGLS(knownHWM) + mr.trimGLS(knownVer) - gls = cc.newDummyCommitResults(knownHWM, 1) + gls = cc.newDummyCommitResults(knownVer+1, glsn, 1) mr.appendGLS(gls) - knownHWM = gls.HighWatermark + knownVer = gls.Version + glsn += types.GLSN(len(gls.CommitResults)) Convey("then it should catchup", func() { reportCollector.Commit() So(testutil.CompareWaitN(10, func() bool { reports := executor.reportCtx.getReport() - return reports.UncommitReports[0].HighWatermark == knownHWM + return reports.UncommitReports[0].Version == knownVer }), ShouldBeTrue) }) }) @@ -963,7 +1004,7 @@ func TestCommitWithDelay(t *testing.T) { func TestRPCFail(t *testing.T) { Convey("Given ReportCollector", t, func(ctx C) { - //knownHWM := types.InvalidGLSN + //knownVer := types.InvalidVersion clientFac := NewDummyStorageNodeClientFactory(1, false) mr := NewDummyMetadataRepository(clientFac) @@ -976,7 +1017,9 @@ func TestRPCFail(t *testing.T) { }) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + }, } err := reportCollector.RegisterStorageNode(sn) @@ -1035,7 +1078,9 @@ func TestReporterClientReconnect(t *testing.T) { mr := NewDummyMetadataRepository(clientFac) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + }, } logger, _ := zap.NewDevelopment() diff --git a/internal/metadata_repository/state_machine_log_test.go b/internal/metadata_repository/state_machine_log_test.go index 795d407c2..c0d501916 100644 --- a/internal/metadata_repository/state_machine_log_test.go +++ b/internal/metadata_repository/state_machine_log_test.go @@ -153,8 +153,10 @@ func TestStateMachineLogCut(t *testing.T) { for i := 0; int64(total) < segmentSizeBytes; i++ { l := &mrpb.RegisterStorageNode{ StorageNode: &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(1), - Address: "127.0.0.1:50000", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(1), + Address: "127.0.0.1:50000", + }, }, } @@ -243,8 +245,10 @@ func TestStateMachineLogReadFrom(t *testing.T) { for { l := &mrpb.RegisterStorageNode{ StorageNode: &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(1), - Address: "127.0.0.1:50000", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(1), + Address: "127.0.0.1:50000", + }, }, } @@ -301,8 +305,10 @@ func TestStateMachineLogReadFromHole(t *testing.T) { for { l := &mrpb.RegisterStorageNode{ StorageNode: &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(1), - Address: "127.0.0.1:50000", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(1), + Address: "127.0.0.1:50000", + }, }, } @@ -360,8 +366,10 @@ func TestStateMachineLogReadFromWithDirty(t *testing.T) { for ; appliedIndex < uint64(5); appliedIndex++ { l := &mrpb.RegisterStorageNode{ StorageNode: &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(1), - Address: "127.0.0.1:50000", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(1), + Address: "127.0.0.1:50000", + }, }, } diff --git a/internal/metadata_repository/state_machine_syncer.go b/internal/metadata_repository/state_machine_syncer.go index fffbac007..720dab075 100644 --- a/internal/metadata_repository/state_machine_syncer.go +++ b/internal/metadata_repository/state_machine_syncer.go @@ -2,14 +2,11 @@ package metadata_repository import ( "context" - "fmt" "sort" "time" "github.com/kakao/varlog/pkg/snc" "github.com/kakao/varlog/pkg/types" - "github.com/kakao/varlog/pkg/util/mathutil" - "github.com/kakao/varlog/pkg/verrors" "github.com/kakao/varlog/proto/mrpb" "github.com/kakao/varlog/proto/snpb" "github.com/kakao/varlog/proto/varlogpb" @@ -74,110 +71,116 @@ func (s *StateMachineSyncer) Close() { } func (s *StateMachineSyncer) syncMetadata(ctx context.Context, storage *MetadataStorage) error { - var err error + /* + var err error - collectedLSs := make(map[types.LogStreamID][]*varlogpb.LogStreamMetadataDescriptor) - for _, cli := range s.clients { - meta, err := cli.GetMetadata(ctx) - fmt.Printf("syncMetadata:: sn[%v] GetMetadata %+v. err:%v\n", - cli.PeerStorageNodeID(), meta, err) - - if err != nil { - return err - } + collectedLSs := make(map[types.LogStreamID][]*varlogpb.LogStreamMetadataDescriptor) + for _, cli := range s.clients { + meta, err := cli.GetMetadata(ctx) + fmt.Printf("syncMetadata:: sn[%v] GetMetadata %+v. err:%v\n", + cli.PeerStorageNodeID(), meta, err) - sn := meta.GetStorageNode() - if sn == nil { - continue - } + if err != nil { + return err + } - // sync StorageNodeDescriptor - if err := storage.RegisterStorageNode(sn, 0, 0); err != nil && err != verrors.ErrAlreadyExists { - return err - } + sn := meta.GetStorageNode() + if sn == nil { + continue + } - // collect LogStreamDescriptor - for _, tmp := range meta.GetLogStreams() { - ls := tmp - ls.StorageNodeID = sn.StorageNodeID + // sync StorageNodeDescriptor + if err := storage.RegisterStorageNode(sn, 0, 0); err != nil && err != verrors.ErrAlreadyExists { + return err + } - collectedLSs[ls.LogStreamID] = append(collectedLSs[ls.LogStreamID], &ls) - } - } + // collect LogStreamDescriptor + for _, tmp := range meta.GetLogStreams() { + ls := tmp + ls.StorageNodeID = sn.StorageNodeID - for _, ls := range storage.GetLogStreams() { - if _, ok := collectedLSs[ls.LogStreamID]; !ok { - return fmt.Errorf("sync metadata error. ls[%v] should exist in the collected log streams", ls.LogStreamID) + collectedLSs[ls.LogStreamID] = append(collectedLSs[ls.LogStreamID], &ls) + } } - } - // sync LogStreamDescriptor - for lsID, collectedLS := range collectedLSs { - oldLS := storage.LookupLogStream(lsID) - if oldLS != nil { - if compareLogStreamReplica(oldLS.Replicas, collectedLS) { - // already exist logstream - continue + for _, ls := range storage.GetLogStreams() { + if _, ok := collectedLSs[ls.LogStreamID]; !ok { + return fmt.Errorf("sync metadata error. ls[%v] should exist in the collected log streams", ls.LogStreamID) } } - if len(collectedLS) < s.nrReplica { + // sync LogStreamDescriptor + for lsID, collectedLS := range collectedLSs { + oldLS := storage.LookupLogStream(lsID) if oldLS != nil { - return fmt.Errorf("sync metadata error. ls[%d] # of collectedLS < repFactor", lsID) + if compareLogStreamReplica(oldLS.Replicas, collectedLS) { + // already exist logstream + continue + } } - for _, r := range collectedLS { - if !r.HighWatermark.Invalid() { - return fmt.Errorf("sync metadata error. newbie ls[%d] # of collectedLS < repFactor & has valid HWM", lsID) + if len(collectedLS) < s.nrReplica { + if oldLS != nil { + return fmt.Errorf("sync metadata error. ls[%d] # of collectedLS < repFactor", lsID) } - } - // not yet created logstream - continue - } + for _, r := range collectedLS { + if !r.HighWatermark.Invalid() { + return fmt.Errorf("sync metadata error. newbie ls[%d] # of collectedLS < repFactor & has valid HWM", lsID) + } + } - if len(collectedLS) > s.nrReplica { - collectedLS = s.selectReplicas(collectedLS) - } + // not yet created logstream + continue + } - ls := &varlogpb.LogStreamDescriptor{ - LogStreamID: lsID, - Status: varlogpb.LogStreamStatusSealed, - } + if len(collectedLS) > s.nrReplica { + collectedLS = s.selectReplicas(collectedLS) + } - for _, collectedReplica := range collectedLS { - r := &varlogpb.ReplicaDescriptor{ - StorageNodeID: collectedReplica.StorageNodeID, - Path: collectedReplica.Path, + ls := &varlogpb.LogStreamDescriptor{ + LogStreamID: lsID, + Status: varlogpb.LogStreamStatusSealed, } - ls.Replicas = append(ls.Replicas, r) - } + for _, collectedReplica := range collectedLS { + r := &varlogpb.ReplicaDescriptor{ + StorageNodeID: collectedReplica.StorageNodeID, + Path: collectedReplica.Path, + } - if oldLS == nil { - err = storage.RegisterLogStream(ls, 0, 0) - } else { - err = storage.UpdateLogStream(ls, 0, 0) + ls.Replicas = append(ls.Replicas, r) + } + + if oldLS == nil { + err = storage.RegisterLogStream(ls, 0, 0) + } else { + err = storage.UpdateLogStream(ls, 0, 0) + } } - } - return err + return err + */ + return nil } func (s *StateMachineSyncer) selectReplicas(replicas []*varlogpb.LogStreamMetadataDescriptor) []*varlogpb.LogStreamMetadataDescriptor { - if len(replicas) <= s.nrReplica { - return replicas - } - - sort.Slice(replicas, func(i, j int) bool { - if replicas[i].HighWatermark == replicas[j].HighWatermark { - return replicas[i].UpdatedTime.After(replicas[j].UpdatedTime) + /* + if len(replicas) <= s.nrReplica { + return replicas } - return replicas[i].HighWatermark > replicas[j].HighWatermark - }) + sort.Slice(replicas, func(i, j int) bool { + if replicas[i].HighWatermark == replicas[j].HighWatermark { + return replicas[i].UpdatedTime.After(replicas[j].UpdatedTime) + } + + return replicas[i].HighWatermark > replicas[j].HighWatermark + }) - return replicas[:s.nrReplica] + return replicas[:s.nrReplica] + */ + return nil } func compareLogStreamReplica(orig []*varlogpb.ReplicaDescriptor, diff []*varlogpb.LogStreamMetadataDescriptor) bool { @@ -198,32 +201,34 @@ func compareLogStreamReplica(orig []*varlogpb.ReplicaDescriptor, diff []*varlogp } func (s *StateMachineSyncer) SyncCommitResults(ctx context.Context, storage *MetadataStorage) error { - if err := s.syncMetadata(ctx, storage); err != nil { - return err - } - - for { - cc, err := s.initCommitResultContext(ctx, storage.GetLastCommitResults()) - if err != nil { - return fmt.Errorf("sync commit result init fail. %v", err) + /* + if err := s.syncMetadata(ctx, storage); err != nil { + return err } - if cc.commitResults.HighWatermark.Invalid() { - break - } + for { + cc, err := s.initCommitResultContext(ctx, storage.GetLastCommitResults()) + if err != nil { + return fmt.Errorf("sync commit result init fail. %v", err) + } - if err := cc.buildCommitResults(); err != nil { - return fmt.Errorf("sync commit result build fail. %v. info:%+v, prev:%+v", - err, cc.commitInfos, cc.prevCommitResults) - } + if cc.commitResults.HighWatermark.Invalid() { + break + } - if err := cc.validate(); err != nil { - return fmt.Errorf("sync commit result validate fail. %v. info:%+v, prev:%+v, cur:%+v", - err, cc.commitInfos, cc.prevCommitResults, cc.commitResults) - } + if err := cc.buildCommitResults(); err != nil { + return fmt.Errorf("sync commit result build fail. %v. info:%+v, prev:%+v", + err, cc.commitInfos, cc.prevCommitResults) + } - storage.AppendLogStreamCommitHistory(cc.commitResults) - } + if err := cc.validate(); err != nil { + return fmt.Errorf("sync commit result validate fail. %v. info:%+v, prev:%+v, cur:%+v", + err, cc.commitInfos, cc.prevCommitResults, cc.commitResults) + } + + storage.AppendLogStreamCommitHistory(cc.commitResults) + } + */ return nil } @@ -235,175 +240,181 @@ func (s *StateMachineSyncer) initCommitResultContext(ctx context.Context, prevCo commitInfos: make(map[types.LogStreamID]map[types.StorageNodeID]snpb.LogStreamCommitInfo), highestLLSNs: make(map[types.LogStreamID]types.LLSN), } + /* - for _, cli := range s.clients { - snID := cli.PeerStorageNodeID() - commitInfo, err := cli.GetPrevCommitInfo(ctx, prevCommitResults.GetHighWatermark()) - - if err != nil { - return nil, err - } + for _, cli := range s.clients { + snID := cli.PeerStorageNodeID() + commitInfo, err := cli.GetPrevCommitInfo(ctx, prevCommitResults.GetHighWatermark()) - for _, lsCommitInfo := range commitInfo.CommitInfos { - if lsCommitInfo.Status == snpb.GetPrevCommitStatusOK && - cc.commitResults.HighWatermark < lsCommitInfo.HighWatermark { - cc.commitResults.HighWatermark = lsCommitInfo.HighWatermark - cc.commitResults.PrevHighWatermark = lsCommitInfo.PrevHighWatermark + if err != nil { + return nil, err } - r, ok := cc.commitInfos[lsCommitInfo.LogStreamID] - if !ok { - r = make(map[types.StorageNodeID]snpb.LogStreamCommitInfo) - cc.commitInfos[lsCommitInfo.LogStreamID] = r - cc.sortedLSIDs = append(cc.sortedLSIDs, lsCommitInfo.LogStreamID) - } + for _, lsCommitInfo := range commitInfo.CommitInfos { + if lsCommitInfo.Status == snpb.GetPrevCommitStatusOK && + cc.commitResults.HighWatermark < lsCommitInfo.HighWatermark { + cc.commitResults.HighWatermark = lsCommitInfo.HighWatermark + } + + r, ok := cc.commitInfos[lsCommitInfo.LogStreamID] + if !ok { + r = make(map[types.StorageNodeID]snpb.LogStreamCommitInfo) + cc.commitInfos[lsCommitInfo.LogStreamID] = r + cc.sortedLSIDs = append(cc.sortedLSIDs, lsCommitInfo.LogStreamID) + } - r[snID] = *lsCommitInfo + r[snID] = *lsCommitInfo - if highestLLSN, ok := cc.highestLLSNs[lsCommitInfo.LogStreamID]; !ok || highestLLSN > lsCommitInfo.HighestWrittenLLSN { - cc.highestLLSNs[lsCommitInfo.LogStreamID] = lsCommitInfo.HighestWrittenLLSN + if highestLLSN, ok := cc.highestLLSNs[lsCommitInfo.LogStreamID]; !ok || highestLLSN > lsCommitInfo.HighestWrittenLLSN { + cc.highestLLSNs[lsCommitInfo.LogStreamID] = lsCommitInfo.HighestWrittenLLSN + } } } - } - if !cc.commitResults.HighWatermark.Invalid() { - sort.Slice(cc.sortedLSIDs, func(i, j int) bool { return cc.sortedLSIDs[i] < cc.sortedLSIDs[j] }) - cc.commitResults.CommitResults = make([]snpb.LogStreamCommitResult, 0, len(cc.sortedLSIDs)) - cc.expectedCommit = uint64(cc.commitResults.HighWatermark - cc.commitResults.PrevHighWatermark) - } + if !cc.commitResults.HighWatermark.Invalid() { + sort.Slice(cc.sortedLSIDs, func(i, j int) bool { return cc.sortedLSIDs[i] < cc.sortedLSIDs[j] }) + cc.commitResults.CommitResults = make([]snpb.LogStreamCommitResult, 0, len(cc.sortedLSIDs)) + //cc.expectedCommit = uint64(cc.commitResults.HighWatermark - cc.commitResults.PrevHighWatermark) + } + */ return cc, nil } func (cc *commitResultContext) buildCommitResults() error { - for _, lsID := range cc.sortedLSIDs { - c := snpb.LogStreamCommitResult{ - LogStreamID: lsID, - CommittedLLSNOffset: types.InvalidLLSN, - CommittedGLSNOffset: types.InvalidGLSN, - CommittedGLSNLength: 0, - HighWatermark: cc.commitResults.HighWatermark, - PrevHighWatermark: cc.commitResults.PrevHighWatermark, - } + /* + for _, lsID := range cc.sortedLSIDs { + c := snpb.LogStreamCommitResult{ + LogStreamID: lsID, + CommittedLLSNOffset: types.InvalidLLSN, + CommittedGLSNOffset: types.InvalidGLSN, + CommittedGLSNLength: 0, + HighWatermark: cc.commitResults.HighWatermark, + } - commitInfo, _ := cc.commitInfos[lsID] - - SET_COMMIT_INFO: - for _, r := range commitInfo { - if r.Status == snpb.GetPrevCommitStatusOK { - if c.HighWatermark == r.HighWatermark { - c.CommittedLLSNOffset = r.CommittedLLSNOffset - c.CommittedGLSNOffset = r.CommittedGLSNOffset - c.CommittedGLSNLength = r.CommittedGLSNLength - } else { - // empty commit - c.CommittedLLSNOffset = r.CommittedLLSNOffset + types.LLSN(r.CommittedGLSNLength) - c.CommittedGLSNOffset = cc.commitResults.PrevHighWatermark + types.GLSN(cc.numCommit+1) - c.CommittedGLSNLength = 0 + commitInfo, _ := cc.commitInfos[lsID] + + SET_COMMIT_INFO: + for _, r := range commitInfo { + if r.Status == snpb.GetPrevCommitStatusOK { + if c.HighWatermark == r.HighWatermark { + c.CommittedLLSNOffset = r.CommittedLLSNOffset + c.CommittedGLSNOffset = r.CommittedGLSNOffset + c.CommittedGLSNLength = r.CommittedGLSNLength + } else { + // empty commit + c.CommittedLLSNOffset = r.CommittedLLSNOffset + types.LLSN(r.CommittedGLSNLength) + c.CommittedGLSNOffset = cc.commitResults.PrevHighWatermark + types.GLSN(cc.numCommit+1) + c.CommittedGLSNLength = 0 + } + + break SET_COMMIT_INFO } - - break SET_COMMIT_INFO } + cc.numCommit += c.CommittedGLSNLength + cc.commitResults.CommitResults = append(cc.commitResults.CommitResults, c) } - cc.numCommit += c.CommittedGLSNLength - cc.commitResults.CommitResults = append(cc.commitResults.CommitResults, c) - } - if err := cc.fillCommitResult(); err != nil { - return err - } + if err := cc.fillCommitResult(); err != nil { + return err + } + */ return nil } func (cc *commitResultContext) validate() error { - i := 0 - j := 0 - - nrCommitted := uint64(0) - for i < len(cc.prevCommitResults.GetCommitResults()) && j < len(cc.commitResults.GetCommitResults()) { - prev := cc.prevCommitResults.CommitResults[i] - cur := cc.commitResults.CommitResults[j] - if prev.LogStreamID < cur.LogStreamID { + /* + i := 0 + j := 0 + + nrCommitted := uint64(0) + for i < len(cc.prevCommitResults.GetCommitResults()) && j < len(cc.commitResults.GetCommitResults()) { + prev := cc.prevCommitResults.CommitResults[i] + cur := cc.commitResults.CommitResults[j] + if prev.LogStreamID < cur.LogStreamID { + return fmt.Errorf("new commit reuslts should include all prev commit results") + } else if prev.LogStreamID > cur.LogStreamID { + if cur.CommittedLLSNOffset != types.MinLLSN { + return fmt.Errorf("newbie LS[%v] should start from MinLLSN", cur.LogStreamID) + } + + nrCommitted += cur.CommittedGLSNLength + j++ + } else { + if prev.CommittedLLSNOffset+types.LLSN(prev.CommittedGLSNLength) != cur.CommittedLLSNOffset { + return fmt.Errorf("invalid commit result") + } + + nrCommitted += cur.CommittedGLSNLength + i++ + j++ + } + } + + if i < len(cc.prevCommitResults.GetCommitResults()) { return fmt.Errorf("new commit reuslts should include all prev commit results") - } else if prev.LogStreamID > cur.LogStreamID { + } + + for j < len(cc.commitResults.CommitResults) { + cur := cc.commitResults.CommitResults[j] if cur.CommittedLLSNOffset != types.MinLLSN { return fmt.Errorf("newbie LS[%v] should start from MinLLSN", cur.LogStreamID) } nrCommitted += cur.CommittedGLSNLength j++ - } else { - if prev.CommittedLLSNOffset+types.LLSN(prev.CommittedGLSNLength) != cur.CommittedLLSNOffset { - return fmt.Errorf("invalid commit result") - } - - nrCommitted += cur.CommittedGLSNLength - i++ - j++ } - } - - if i < len(cc.prevCommitResults.GetCommitResults()) { - return fmt.Errorf("new commit reuslts should include all prev commit results") - } - for j < len(cc.commitResults.CommitResults) { - cur := cc.commitResults.CommitResults[j] - if cur.CommittedLLSNOffset != types.MinLLSN { - return fmt.Errorf("newbie LS[%v] should start from MinLLSN", cur.LogStreamID) + if nrCommitted != uint64(cc.commitResults.HighWatermark-cc.commitResults.PrevHighWatermark) { + return fmt.Errorf("invalid commit length") } - - nrCommitted += cur.CommittedGLSNLength - j++ - } - - if nrCommitted != uint64(cc.commitResults.HighWatermark-cc.commitResults.PrevHighWatermark) { - return fmt.Errorf("invalid commit length") - } + */ return nil } func (cc *commitResultContext) fillCommitResult() error { - committedGLSNOffset := cc.prevCommitResults.GetHighWatermark() + 1 - for i, commitResult := range cc.commitResults.CommitResults { - if !commitResult.CommittedGLSNOffset.Invalid() { - if committedGLSNOffset != commitResult.CommittedGLSNOffset { - return fmt.Errorf("committedGLSNOffset mismatch. lsid:%v, expectedGLSN:%v, recvGLSN:%v", - commitResult.GetLogStreamID(), committedGLSNOffset, commitResult.GetCommittedGLSNOffset()) - } + /* + committedGLSNOffset := cc.prevCommitResults.GetHighWatermark() + 1 + for i, commitResult := range cc.commitResults.CommitResults { + if !commitResult.CommittedGLSNOffset.Invalid() { + if committedGLSNOffset != commitResult.CommittedGLSNOffset { + return fmt.Errorf("committedGLSNOffset mismatch. lsid:%v, expectedGLSN:%v, recvGLSN:%v", + commitResult.GetLogStreamID(), committedGLSNOffset, commitResult.GetCommittedGLSNOffset()) + } - committedGLSNOffset = commitResult.CommittedGLSNOffset + types.GLSN(commitResult.CommittedGLSNLength) - continue - } + committedGLSNOffset = commitResult.CommittedGLSNOffset + types.GLSN(commitResult.CommittedGLSNLength) + continue + } - lastCommittedLLSN := types.InvalidLLSN - highestLLSN, _ := cc.highestLLSNs[commitResult.LogStreamID] + lastCommittedLLSN := types.InvalidLLSN + highestLLSN, _ := cc.highestLLSNs[commitResult.LogStreamID] - prevCommitResult, _, ok := cc.prevCommitResults.LookupCommitResult(commitResult.LogStreamID, i) - if ok { - lastCommittedLLSN = prevCommitResult.CommittedLLSNOffset + types.LLSN(prevCommitResult.CommittedGLSNLength) - 1 - } + prevCommitResult, _, ok := cc.prevCommitResults.LookupCommitResult(commitResult.LogStreamID, i) + if ok { + lastCommittedLLSN = prevCommitResult.CommittedLLSNOffset + types.LLSN(prevCommitResult.CommittedGLSNLength) - 1 + } - if highestLLSN < lastCommittedLLSN { - return fmt.Errorf("invalid commit info. ls:%v, highestLLSN:%v, lastCommittedLLSN:%v", - commitResult.LogStreamID, highestLLSN, lastCommittedLLSN) - } + if highestLLSN < lastCommittedLLSN { + return fmt.Errorf("invalid commit info. ls:%v, highestLLSN:%v, lastCommittedLLSN:%v", + commitResult.LogStreamID, highestLLSN, lastCommittedLLSN) + } - numUncommit := uint64(highestLLSN - lastCommittedLLSN) - boundary := uint64(boundaryCommittedGLSNOffset(cc.commitResults.CommitResults[i+1:]) - committedGLSNOffset) + numUncommit := uint64(highestLLSN - lastCommittedLLSN) + boundary := uint64(boundaryCommittedGLSNOffset(cc.commitResults.CommitResults[i+1:]) - committedGLSNOffset) - commitResult.CommittedGLSNLength = mathutil.MinUint64(cc.expectedCommit-cc.numCommit, - mathutil.MinUint64(numUncommit, boundary)) - commitResult.CommittedLLSNOffset = lastCommittedLLSN + 1 - commitResult.CommittedGLSNOffset = committedGLSNOffset + commitResult.CommittedGLSNLength = mathutil.MinUint64(cc.expectedCommit-cc.numCommit, + mathutil.MinUint64(numUncommit, boundary)) + commitResult.CommittedLLSNOffset = lastCommittedLLSN + 1 + commitResult.CommittedGLSNOffset = committedGLSNOffset - cc.commitResults.CommitResults[i] = commitResult + cc.commitResults.CommitResults[i] = commitResult - cc.numCommit += commitResult.CommittedGLSNLength - committedGLSNOffset += types.GLSN(commitResult.CommittedGLSNLength) - } + cc.numCommit += commitResult.CommittedGLSNLength + committedGLSNOffset += types.GLSN(commitResult.CommittedGLSNLength) + } + */ return nil } diff --git a/internal/metadata_repository/storage.go b/internal/metadata_repository/storage.go index 80b76b761..104abbcc4 100644 --- a/internal/metadata_repository/storage.go +++ b/internal/metadata_repository/storage.go @@ -4,6 +4,7 @@ import ( "context" "errors" "fmt" + "math" "sort" "sync" "sync/atomic" @@ -66,6 +67,11 @@ type Membership interface { Clear() } +type TopicLSID struct { + TopicID types.TopicID + LogStreamID types.LogStreamID +} + // TODO:: refactoring type MetadataStorage struct { // make orig immutable @@ -75,7 +81,7 @@ type MetadataStorage struct { diffStateMachine *mrpb.MetadataRepositoryDescriptor copyOnWrite atomicutil.AtomicBool - sortedLSIDs []types.LogStreamID + sortedTopicLSIDs []TopicLSID // snapshot orig for raft snap []byte @@ -215,9 +221,8 @@ func (ms *MetadataStorage) lookupStorageNode(snID types.StorageNodeID) *varlogpb if sn != nil { if sn.Status.Deleted() { return nil - } else { - return sn } + return sn } if pre == cur { @@ -240,9 +245,8 @@ func (ms *MetadataStorage) lookupLogStream(lsID types.LogStreamID) *varlogpb.Log if ls != nil { if ls.Status.Deleted() { return nil - } else { - return ls } + return ls } if pre == cur { @@ -259,6 +263,23 @@ func (ms *MetadataStorage) LookupLogStream(lsID types.LogStreamID) *varlogpb.Log return ms.lookupLogStream(lsID) } +func (ms *MetadataStorage) lookupTopic(topicID types.TopicID) *varlogpb.TopicDescriptor { + pre, cur := ms.getStateMachine() + topic := cur.Metadata.GetTopic(topicID) + if topic != nil { + if topic.Status.Deleted() { + return nil + } + return topic + } + + if pre == cur { + return nil + } + + return pre.Metadata.GetTopic(topicID) +} + func (ms *MetadataStorage) registerStorageNode(sn *varlogpb.StorageNodeDescriptor) error { old := ms.lookupStorageNode(sn.StorageNodeID) equal := old.Equal(sn) @@ -327,8 +348,10 @@ func (ms *MetadataStorage) unregisterStorageNode(snID types.StorageNodeID) error cur.Metadata.DeleteStorageNode(snID) if pre != cur { deleted := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, - Status: varlogpb.StorageNodeStatusDeleted, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, + Status: varlogpb.StorageNodeStatusDeleted, } cur.Metadata.InsertStorageNode(deleted) @@ -338,28 +361,39 @@ func (ms *MetadataStorage) unregisterStorageNode(snID types.StorageNodeID) error return nil } -func (ms *MetadataStorage) insertSortedLSIDs(lsID types.LogStreamID) { - i := sort.Search(len(ms.sortedLSIDs), func(i int) bool { - return ms.sortedLSIDs[i] >= lsID +func (ms *MetadataStorage) insertSortedLSIDs(topicID types.TopicID, lsID types.LogStreamID) { + i := sort.Search(len(ms.sortedTopicLSIDs), func(i int) bool { + if ms.sortedTopicLSIDs[i].TopicID == topicID { + return ms.sortedTopicLSIDs[i].LogStreamID >= lsID + } + + return ms.sortedTopicLSIDs[i].TopicID > topicID }) - if i < len(ms.sortedLSIDs) && ms.sortedLSIDs[i] == lsID { + if i < len(ms.sortedTopicLSIDs) && ms.sortedTopicLSIDs[i].LogStreamID == lsID { return } - ms.sortedLSIDs = append(ms.sortedLSIDs, 0) - copy(ms.sortedLSIDs[i+1:], ms.sortedLSIDs[i:]) - ms.sortedLSIDs[i] = lsID + ms.sortedTopicLSIDs = append(ms.sortedTopicLSIDs, TopicLSID{}) + copy(ms.sortedTopicLSIDs[i+1:], ms.sortedTopicLSIDs[i:]) + ms.sortedTopicLSIDs[i] = TopicLSID{ + TopicID: topicID, + LogStreamID: lsID, + } } -func (ms *MetadataStorage) deleteSortedLSIDs(lsID types.LogStreamID) { - i := sort.Search(len(ms.sortedLSIDs), func(i int) bool { - return ms.sortedLSIDs[i] >= lsID +func (ms *MetadataStorage) deleteSortedLSIDs(topicID types.TopicID, lsID types.LogStreamID) { + i := sort.Search(len(ms.sortedTopicLSIDs), func(i int) bool { + if ms.sortedTopicLSIDs[i].TopicID == topicID { + return ms.sortedTopicLSIDs[i].LogStreamID >= lsID + } + + return ms.sortedTopicLSIDs[i].TopicID > topicID }) - if i < len(ms.sortedLSIDs) && ms.sortedLSIDs[i] == lsID { - copy(ms.sortedLSIDs[i:], ms.sortedLSIDs[i+1:]) - ms.sortedLSIDs = ms.sortedLSIDs[:len(ms.sortedLSIDs)-1] + if i < len(ms.sortedTopicLSIDs) && ms.sortedTopicLSIDs[i].LogStreamID == lsID { + copy(ms.sortedTopicLSIDs[i:], ms.sortedTopicLSIDs[i+1:]) + ms.sortedTopicLSIDs = ms.sortedTopicLSIDs[:len(ms.sortedTopicLSIDs)-1] } } @@ -381,6 +415,11 @@ func (ms *MetadataStorage) registerLogStream(ls *varlogpb.LogStreamDescriptor) e return verrors.ErrInvalidArgument } + topic := ms.lookupTopic(ls.TopicID) + if topic == nil { + return verrors.ErrInvalidArgument + } + for _, r := range ls.Replicas { if ms.lookupStorageNode(r.StorageNodeID) == nil { return verrors.ErrInvalidArgument @@ -393,11 +432,6 @@ func (ms *MetadataStorage) registerLogStream(ls *varlogpb.LogStreamDescriptor) e return verrors.ErrAlreadyExists } - if equal { - // To ensure that it is applied to the meta cache - return nil - } - _, cur := ms.getStateMachine() ms.mtMu.Lock() @@ -419,14 +453,20 @@ func (ms *MetadataStorage) registerLogStream(ls *varlogpb.LogStreamDescriptor) e for _, r := range ls.Replicas { lm.Replicas[r.StorageNodeID] = snpb.LogStreamUncommitReport{ LogStreamID: ls.LogStreamID, - HighWatermark: ms.getHighWatermarkNoLock(), + Version: ms.getLastCommitVersionNoLock(), UncommittedLLSNOffset: types.MinLLSN, UncommittedLLSNLength: 0, } } cur.LogStream.UncommitReports[ls.LogStreamID] = lm - ms.insertSortedLSIDs(ls.LogStreamID) + ms.insertSortedLSIDs(ls.TopicID, ls.LogStreamID) + + topic = proto.Clone(topic).(*varlogpb.TopicDescriptor) + topic.InsertLogStream(ls.LogStreamID) + if err := cur.Metadata.UpsertTopic(topic); err != nil { + return err + } ms.metaAppliedIndex++ @@ -447,7 +487,8 @@ func (ms *MetadataStorage) RegisterLogStream(ls *varlogpb.LogStreamDescriptor, n } func (ms *MetadataStorage) unregisterLogStream(lsID types.LogStreamID) error { - if ms.lookupLogStream(lsID) == nil { + ls := ms.lookupLogStream(lsID) + if ls == nil { return verrors.ErrNotExist } @@ -474,7 +515,7 @@ func (ms *MetadataStorage) unregisterLogStream(lsID types.LogStreamID) error { cur.LogStream.UncommitReports[lsID] = lm } - ms.deleteSortedLSIDs(lsID) + ms.deleteSortedLSIDs(ls.TopicID, lsID) ms.metaAppliedIndex++ return nil @@ -528,6 +569,83 @@ func (ms *MetadataStorage) updateLogStream(ls *varlogpb.LogStreamDescriptor) err return err } +func (ms *MetadataStorage) RegisterTopic(topic *varlogpb.TopicDescriptor, nodeIndex, requestIndex uint64) error { + err := ms.registerTopic(topic) + if err != nil { + if ms.cacheCompleteCB != nil { + ms.cacheCompleteCB(nodeIndex, requestIndex, err) + } + return err + } + + ms.triggerMetadataCache(nodeIndex, requestIndex) + return nil +} + +func (ms *MetadataStorage) registerTopic(topic *varlogpb.TopicDescriptor) error { + old := ms.lookupTopic(topic.TopicID) + equal := old.Equal(topic) + if old != nil && !equal { + return verrors.ErrAlreadyExists + } + + if equal { + // To ensure that it is applied to the meta cache + return nil + } + + _, cur := ms.getStateMachine() + + ms.mtMu.Lock() + defer ms.mtMu.Unlock() + + if err := cur.Metadata.UpsertTopic(topic); err != nil { + return err + } + + ms.metaAppliedIndex++ + return nil +} + +func (ms *MetadataStorage) UnregisterTopic(topicID types.TopicID, nodeIndex, requestIndex uint64) error { + err := ms.unregisterTopic(topicID) + if err != nil { + if ms.cacheCompleteCB != nil { + ms.cacheCompleteCB(nodeIndex, requestIndex, err) + } + return err + } + + ms.triggerMetadataCache(nodeIndex, requestIndex) + return nil +} + +func (ms *MetadataStorage) unregisterTopic(topicID types.TopicID) error { + if ms.lookupTopic(topicID) == nil { + return verrors.ErrNotExist + } + + pre, cur := ms.getStateMachine() + + ms.mtMu.Lock() + defer ms.mtMu.Unlock() + + cur.Metadata.DeleteTopic(topicID) + + if pre != cur { + deleted := &varlogpb.TopicDescriptor{ + TopicID: topicID, + Status: varlogpb.TopicStatusDeleted, + } + + cur.Metadata.InsertTopic(deleted) + } + + ms.metaAppliedIndex++ + + return nil +} + func (ms *MetadataStorage) updateUncommitReport(ls *varlogpb.LogStreamDescriptor) error { pre, cur := ms.getStateMachine() @@ -651,9 +769,9 @@ func (ms *MetadataStorage) updateUncommitReportStatus(lsID types.LogStreamID, st lls.Replicas[storageNodeID] = r } } else { - highWatermark := ms.getHighWatermarkNoLock() + version := ms.getLastCommitVersionNoLock() for storageNodeID, r := range lls.Replicas { - r.HighWatermark = highWatermark + r.Version = version lls.Replicas[storageNodeID] = r } } @@ -855,28 +973,28 @@ func (ms *MetadataStorage) LookupEndpoint(nodeID types.NodeID) string { return "" } -func (ms *MetadataStorage) lookupNextCommitResultsNoLock(glsn types.GLSN) *mrpb.LogStreamCommitResults { +func (ms *MetadataStorage) lookupNextCommitResultsNoLock(ver types.Version) *mrpb.LogStreamCommitResults { pre, cur := ms.getStateMachine() if pre != cur { - r := cur.LookupCommitResultsByPrev(glsn) + r := cur.LookupCommitResults(ver + 1) if r != nil { return r } } - return pre.LookupCommitResultsByPrev(glsn) + return pre.LookupCommitResults(ver + 1) } -func (ms *MetadataStorage) lookupCommitResultsNoLock(glsn types.GLSN) *mrpb.LogStreamCommitResults { +func (ms *MetadataStorage) lookupCommitResultsNoLock(ver types.Version) *mrpb.LogStreamCommitResults { pre, cur := ms.getStateMachine() if pre != cur { - r := cur.LookupCommitResults(glsn) + r := cur.LookupCommitResults(ver) if r != nil { return r } } - return pre.LookupCommitResults(glsn) + return pre.LookupCommitResults(ver) } func (ms *MetadataStorage) getLastCommitResultsNoLock() *mrpb.LogStreamCommitResults { @@ -947,13 +1065,13 @@ func (ms *MetadataStorage) verifyUncommitReport(s snpb.LogStreamUncommitReport) return true } - if fgls.PrevHighWatermark > s.HighWatermark || - lgls.HighWatermark < s.HighWatermark { + if fgls.Version > s.Version+1 || + lgls.Version < s.Version { return false } - return s.HighWatermark == fgls.PrevHighWatermark || - ms.lookupCommitResultsNoLock(s.HighWatermark) != nil + return s.Version+1 == fgls.Version || + ms.lookupCommitResultsNoLock(s.Version) != nil } func (ms *MetadataStorage) UpdateUncommitReport(lsID types.LogStreamID, snID types.StorageNodeID, s snpb.LogStreamUncommitReport) { @@ -980,18 +1098,18 @@ func (ms *MetadataStorage) UpdateUncommitReport(lsID types.LogStreamID, snID typ } if !ms.verifyUncommitReport(s) { - ms.logger.Warn("could not apply report: invalid hwm", - zap.Uint32("lsid", uint32(lsID)), - zap.Uint32("snid", uint32(snID)), - zap.Uint64("knownHWM", uint64(s.HighWatermark)), - zap.Uint64("first", uint64(ms.getFirstCommitResultsNoLock().GetHighWatermark())), - zap.Uint64("last", uint64(ms.getLastCommitResultsNoLock().GetHighWatermark())), + ms.logger.Warn("could not apply report: invalid ver", + zap.Int32("lsid", int32(lsID)), + zap.Int32("snid", int32(snID)), + zap.Uint64("ver", uint64(s.Version)), + zap.Uint64("first", uint64(ms.getFirstCommitResultsNoLock().GetVersion())), + zap.Uint64("last", uint64(ms.getLastCommitResultsNoLock().GetVersion())), ) return } if lm.Status.Sealed() { - if r.HighWatermark >= s.HighWatermark || + if r.Version >= s.Version || s.UncommittedLLSNOffset > r.UncommittedLLSNEnd() { return } @@ -1003,8 +1121,8 @@ func (ms *MetadataStorage) UpdateUncommitReport(lsID types.LogStreamID, snID typ ms.nrUpdateSinceCommit++ } -func (ms *MetadataStorage) GetSortedLogStreamIDs() []types.LogStreamID { - return ms.sortedLSIDs +func (ms *MetadataStorage) GetSortedTopicLogStreamIDs() []TopicLSID { + return ms.sortedTopicLSIDs } func (ms *MetadataStorage) AppendLogStreamCommitHistory(cr *mrpb.LogStreamCommitResults) { @@ -1024,10 +1142,10 @@ func (ms *MetadataStorage) AppendLogStreamCommitHistory(cr *mrpb.LogStreamCommit cur.LogStream.CommitHistory = append(cur.LogStream.CommitHistory, cr) } -func (ms *MetadataStorage) TrimLogStreamCommitHistory(trimGLSN types.GLSN) error { +func (ms *MetadataStorage) TrimLogStreamCommitHistory(ver types.Version) error { _, cur := ms.getStateMachine() - if trimGLSN != types.MaxGLSN && cur.LogStream.TrimGLSN < trimGLSN { - cur.LogStream.TrimGLSN = trimGLSN + if ver != math.MaxUint64 && cur.LogStream.TrimVersion < ver { + cur.LogStream.TrimVersion = ver } return nil } @@ -1042,32 +1160,24 @@ func (ms *MetadataStorage) UpdateAppliedIndex(appliedIndex uint64) { ms.triggerSnapshot(appliedIndex) } -func (ms *MetadataStorage) getHighWatermarkNoLock() types.GLSN { +func (ms *MetadataStorage) getLastCommitVersionNoLock() types.Version { gls := ms.getLastCommitResultsNoLock() - if gls == nil { - return types.InvalidGLSN - } - - return gls.HighWatermark + return gls.GetVersion() } -func (ms *MetadataStorage) GetHighWatermark() types.GLSN { +func (ms *MetadataStorage) GetLastCommitVersion() types.Version { ms.lsMu.RLock() defer ms.lsMu.RUnlock() - return ms.getHighWatermarkNoLock() + return ms.getLastCommitVersionNoLock() } -func (ms *MetadataStorage) GetMinHighWatermark() types.GLSN { +func (ms *MetadataStorage) GetMinCommitVersion() types.Version { ms.lsMu.RLock() defer ms.lsMu.RUnlock() gls := ms.getFirstCommitResultsNoLock() - if gls == nil { - return types.InvalidGLSN - } - - return gls.HighWatermark + return gls.GetVersion() } func (ms *MetadataStorage) NumUpdateSinceCommit() uint64 { @@ -1078,16 +1188,15 @@ func (ms *MetadataStorage) ResetUpdateSinceCommit() { ms.nrUpdateSinceCommit = 0 } -func (ms *MetadataStorage) LookupNextCommitResults(glsn types.GLSN) (*mrpb.LogStreamCommitResults, error) { +func (ms *MetadataStorage) LookupNextCommitResults(ver types.Version) (*mrpb.LogStreamCommitResults, error) { ms.lsMu.RLock() defer ms.lsMu.RUnlock() - var err error - if oldest := ms.getFirstCommitResultsNoLock(); oldest != nil && oldest.PrevHighWatermark > glsn { - err = fmt.Errorf("already trimmed glsn:%v, oldest:%v", glsn, oldest.PrevHighWatermark) + if oldest := ms.getFirstCommitResultsNoLock(); oldest != nil && oldest.Version > ver+1 { + return nil, fmt.Errorf("already trimmed ver:%v, oldest:%v", ver, oldest.Version) } - return ms.lookupNextCommitResultsNoLock(glsn), err + return ms.lookupNextCommitResultsNoLock(ver), nil } func (ms *MetadataStorage) GetFirstCommitResults() *mrpb.LogStreamCommitResults { @@ -1112,20 +1221,20 @@ func (ms *MetadataStorage) GetMetadata() *varlogpb.MetadataDescriptor { } func (ms *MetadataStorage) GetLogStreamCommitResults() []*mrpb.LogStreamCommitResults { - trimGLSN := ms.origStateMachine.LogStream.TrimGLSN - if ms.origStateMachine.LogStream.TrimGLSN < ms.diffStateMachine.LogStream.TrimGLSN { - trimGLSN = ms.diffStateMachine.LogStream.TrimGLSN + ver := ms.origStateMachine.LogStream.TrimVersion + if ms.origStateMachine.LogStream.TrimVersion < ms.diffStateMachine.LogStream.TrimVersion { + ver = ms.diffStateMachine.LogStream.TrimVersion } crs := append(ms.origStateMachine.LogStream.CommitHistory, ms.diffStateMachine.LogStream.CommitHistory...) i := sort.Search(len(crs), func(i int) bool { - return crs[i].HighWatermark >= trimGLSN + return crs[i].Version >= ver }) if 0 < i && i < len(crs) && - crs[i].HighWatermark == trimGLSN { + crs[i].Version == ver { crs = crs[i-1:] } @@ -1238,9 +1347,8 @@ func (ms *MetadataStorage) RecoverStateMachine(stateMachine *mrpb.MetadataReposi ms.trimLogStreamCommitHistory() - fmt.Printf("recover commit result from [hwm:%v, prev:%v]\n", - ms.GetFirstCommitResults().GetHighWatermark(), - ms.GetFirstCommitResults().GetPrevHighWatermark(), + fmt.Printf("recover commit result from [ver:%v]\n", + ms.GetFirstCommitResults().GetVersion(), ) ms.jobC = make(chan *storageAsyncJob, 4096) @@ -1265,13 +1373,13 @@ func (ms *MetadataStorage) recoverLogStreams(stateMachine *mrpb.MetadataReposito lm.Status = varlogpb.LogStreamStatusSealing uncommittedLLSNLength := uint64(0) - cr, _, ok := commitResults.LookupCommitResult(ls.LogStreamID, -1) + cr, _, ok := commitResults.LookupCommitResult(ls.TopicID, ls.LogStreamID, -1) if ok { uncommittedLLSNLength = uint64(cr.CommittedLLSNOffset) + cr.CommittedGLSNLength - 1 } for storageNodeID, r := range lm.Replicas { - r.HighWatermark = types.InvalidGLSN + r.Version = types.InvalidVersion r.UncommittedLLSNOffset = types.MinLLSN r.UncommittedLLSNLength = uncommittedLLSNLength lm.Replicas[storageNodeID] = r @@ -1305,15 +1413,20 @@ func (ms *MetadataStorage) recoverStateMachine(stateMachine *mrpb.MetadataReposi ms.metaAppliedIndex = appliedIndex ms.appliedIndex = appliedIndex - ms.sortedLSIDs = nil + ms.sortedTopicLSIDs = nil for _, ls := range stateMachine.Metadata.LogStreams { if !ls.Status.Deleted() { - ms.sortedLSIDs = append(ms.sortedLSIDs, ls.LogStreamID) + ms.sortedTopicLSIDs = append(ms.sortedTopicLSIDs, TopicLSID{TopicID: ls.TopicID, LogStreamID: ls.LogStreamID}) } } - sort.Slice(ms.sortedLSIDs, func(i, j int) bool { return ms.sortedLSIDs[i] < ms.sortedLSIDs[j] }) + sort.Slice(ms.sortedTopicLSIDs, func(i, j int) bool { + if ms.sortedTopicLSIDs[i].TopicID == ms.sortedTopicLSIDs[j].TopicID { + return ms.sortedTopicLSIDs[i].LogStreamID < ms.sortedTopicLSIDs[j].LogStreamID + } + return ms.sortedTopicLSIDs[i].TopicID < ms.sortedTopicLSIDs[j].TopicID + }) ms.prMu.Unlock() ms.lsMu.Unlock() @@ -1466,6 +1579,14 @@ func (ms *MetadataStorage) createMetadataCache(job *jobMetadataCache) { } } + for _, topic := range ms.diffStateMachine.Metadata.Topics { + if topic.Status.Deleted() { + cache.DeleteTopic(topic.TopicID) + } else if cache.InsertTopic(topic) != nil { + cache.UpdateTopic(topic) + } + } + for _, ls := range ms.diffStateMachine.Metadata.LogStreams { if ls.Status.Deleted() { cache.DeleteLogStream(ls.LogStreamID) @@ -1483,7 +1604,8 @@ func (ms *MetadataStorage) createMetadataCache(job *jobMetadataCache) { func (ms *MetadataStorage) mergeMetadata() { if len(ms.diffStateMachine.Metadata.StorageNodes) == 0 && - len(ms.diffStateMachine.Metadata.LogStreams) == 0 { + len(ms.diffStateMachine.Metadata.LogStreams) == 0 && + len(ms.diffStateMachine.Metadata.Topics) == 0 { return } @@ -1499,6 +1621,14 @@ func (ms *MetadataStorage) mergeMetadata() { } } + for _, topic := range ms.diffStateMachine.Metadata.Topics { + if topic.Status.Deleted() { + ms.origStateMachine.Metadata.DeleteTopic(topic.TopicID) + } else if ms.origStateMachine.Metadata.InsertTopic(topic) != nil { + ms.origStateMachine.Metadata.UpdateTopic(topic) + } + } + for _, ls := range ms.diffStateMachine.Metadata.LogStreams { if ls.Status.Deleted() { ms.origStateMachine.Metadata.DeleteLogStream(ls.LogStreamID) @@ -1526,8 +1656,8 @@ func (ms *MetadataStorage) mergeLogStream() { ms.lsMu.Lock() defer ms.lsMu.Unlock() - if ms.origStateMachine.LogStream.TrimGLSN < ms.diffStateMachine.LogStream.TrimGLSN { - ms.origStateMachine.LogStream.TrimGLSN = ms.diffStateMachine.LogStream.TrimGLSN + if ms.origStateMachine.LogStream.TrimVersion < ms.diffStateMachine.LogStream.TrimVersion { + ms.origStateMachine.LogStream.TrimVersion = ms.diffStateMachine.LogStream.TrimVersion } ms.origStateMachine.LogStream.CommitHistory = append(ms.origStateMachine.LogStream.CommitHistory, ms.diffStateMachine.LogStream.CommitHistory...) @@ -1584,13 +1714,13 @@ func (ms *MetadataStorage) setConfState(cs *raftpb.ConfState) { func (ms *MetadataStorage) trimLogStreamCommitHistory() { s := ms.origStateMachine i := sort.Search(len(s.LogStream.CommitHistory), func(i int) bool { - return s.LogStream.CommitHistory[i].HighWatermark >= s.LogStream.TrimGLSN + return s.LogStream.CommitHistory[i].Version >= s.LogStream.TrimVersion }) if 1 < i && i < len(s.LogStream.CommitHistory) && - s.LogStream.CommitHistory[i].HighWatermark == s.LogStream.TrimGLSN { - ms.logger.Info("trim", zap.Any("glsn", s.LogStream.TrimGLSN)) + s.LogStream.CommitHistory[i].Version == s.LogStream.TrimVersion { + ms.logger.Info("trim", zap.Any("ver", s.LogStream.TrimVersion)) s.LogStream.CommitHistory = s.LogStream.CommitHistory[i-1:] } } @@ -1610,7 +1740,6 @@ func (ms *MetadataStorage) mergeStateMachine() { ms.mergeConfState() ms.releaseCopyOnWrite() - return } func (ms *MetadataStorage) needSnapshot() bool { diff --git a/internal/metadata_repository/storage_test.go b/internal/metadata_repository/storage_test.go index ed4a14b17..09ad9bfe2 100644 --- a/internal/metadata_repository/storage_test.go +++ b/internal/metadata_repository/storage_test.go @@ -3,6 +3,7 @@ package metadata_repository import ( "bytes" "math/rand" + "sort" "sync/atomic" "testing" "time" @@ -24,8 +25,10 @@ func TestStorageRegisterSN(t *testing.T) { ms := NewMetadataStorage(nil, DefaultSnapshotCount, nil) snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, - Address: "mt_addr", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + Address: "mt_addr", + }, } err := ms.registerStorageNode(sn) @@ -35,12 +38,14 @@ func TestStorageRegisterSN(t *testing.T) { err := ms.registerStorageNode(sn) So(err, ShouldBeNil) - dup_sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, - Address: "diff_addr", + dupSN := &varlogpb.StorageNodeDescriptor{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + Address: "diff_addr", + }, } - err = ms.registerStorageNode(dup_sn) + err = ms.registerStorageNode(dupSN) So(err, ShouldResemble, verrors.ErrAlreadyExists) }) }) @@ -60,7 +65,9 @@ func TestStoragUnregisterSN(t *testing.T) { Convey("Wnen SN is exist", func(ctx C) { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -92,14 +99,18 @@ func TestStoragUnregisterSN(t *testing.T) { Convey("And LS which have the SN as replica is exist", func(ctx C) { rep := 1 + + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + lsID := types.LogStreamID(time.Now().UnixNano()) snIDs := make([]types.StorageNodeID, rep) for i := 0; i < rep; i++ { snIDs[i] = snID + types.StorageNodeID(i) } - ls := makeLogStream(lsID, snIDs) + ls := makeLogStream(types.TopicID(1), lsID, snIDs) - err := ms.registerLogStream(ls) + err = ms.registerLogStream(ls) So(err, ShouldBeNil) Convey("Then SN should not be unregistered", func(ctx C) { @@ -124,7 +135,9 @@ func TestStoragGetAllSN(t *testing.T) { Convey("Wnen SN register", func(ctx C) { snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -139,7 +152,9 @@ func TestStoragGetAllSN(t *testing.T) { Convey("Wnen one more SN register to diff", func(ctx C) { snID2 := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID2, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID2, + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -192,13 +207,19 @@ func TestStoragGetAllLS(t *testing.T) { snID := types.StorageNodeID(time.Now().UnixNano()) lsID := types.LogStreamID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := ms.RegisterStorageNode(sn, 0, 0) So(err, ShouldBeNil) + err = ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + ls := &varlogpb.LogStreamDescriptor{ + TopicID: types.TopicID(1), LogStreamID: lsID, } @@ -216,7 +237,9 @@ func TestStoragGetAllLS(t *testing.T) { Convey("Wnen update LS to diff", func(ctx C) { snID2 := snID + 1 sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID2, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID2, + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -264,33 +287,42 @@ func TestStoragGetAllLS(t *testing.T) { } func TestStorageRegisterLS(t *testing.T) { - Convey("LS which has no SN should not be registerd", t, func(ctx C) { + Convey("LS which has no SN should not be registered", t, func(ctx C) { ms := NewMetadataStorage(nil, DefaultSnapshotCount, nil) + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + lsID := types.LogStreamID(time.Now().UnixNano()) - ls := makeLogStream(lsID, nil) + ls := makeLogStream(types.TopicID(1), lsID, nil) - err := ms.registerLogStream(ls) + err = ms.registerLogStream(ls) So(err, ShouldResemble, verrors.ErrInvalidArgument) }) - Convey("LS should not be registerd if not exist proper SN", t, func(ctx C) { + Convey("LS should not be registered if not exist proper SN", t, func(ctx C) { ms := NewMetadataStorage(nil, DefaultSnapshotCount, nil) rep := 2 + + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + lsID := types.LogStreamID(time.Now().UnixNano()) tmp := types.StorageNodeID(time.Now().UnixNano()) snIDs := make([]types.StorageNodeID, rep) for i := 0; i < rep; i++ { snIDs[i] = tmp + types.StorageNodeID(i) } - ls := makeLogStream(lsID, snIDs) + ls := makeLogStream(types.TopicID(1), lsID, snIDs) - err := ms.registerLogStream(ls) + err = ms.registerLogStream(ls) So(err, ShouldResemble, verrors.ErrInvalidArgument) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[0], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[0], + }, } err = ms.registerStorageNode(sn) @@ -299,9 +331,11 @@ func TestStorageRegisterLS(t *testing.T) { err = ms.registerLogStream(ls) So(err, ShouldResemble, verrors.ErrInvalidArgument) - Convey("LS should be registerd if exist all SN", func(ctx C) { + Convey("LS should be registered if exist all SN", func(ctx C) { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[1], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[1], + }, } err := ms.registerStorageNode(sn) @@ -331,33 +365,39 @@ func TestStoragUnregisterLS(t *testing.T) { Convey("LS which is exist should be unregistered", func(ctx C) { rep := 1 + + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + snIDs := make([]types.StorageNodeID, rep) tmp := types.StorageNodeID(time.Now().UnixNano()) for i := 0; i < rep; i++ { snIDs[i] = tmp + types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) } - ls := makeLogStream(lsID, snIDs) + ls := makeLogStream(types.TopicID(1), lsID, snIDs) - err := ms.RegisterLogStream(ls, 0, 0) + err = ms.RegisterLogStream(ls, 0, 0) So(err, ShouldBeNil) ms.setCopyOnWrite() - So(len(ms.GetSortedLogStreamIDs()), ShouldEqual, 1) + So(len(ms.GetSortedTopicLogStreamIDs()), ShouldEqual, 1) err = ms.unregisterLogStream(lsID) So(err, ShouldBeNil) So(ms.lookupLogStream(lsID), ShouldBeNil) So(ms.LookupUncommitReports(lsID), ShouldBeNil) - So(len(ms.GetSortedLogStreamIDs()), ShouldEqual, 0) + So(len(ms.GetSortedTopicLogStreamIDs()), ShouldEqual, 0) Convey("unregistered SN should not be found after merge", func(ctx C) { ms.mergeMetadata() @@ -367,7 +407,7 @@ func TestStoragUnregisterLS(t *testing.T) { So(ms.lookupLogStream(lsID), ShouldBeNil) So(ms.LookupUncommitReports(lsID), ShouldBeNil) - So(len(ms.GetSortedLogStreamIDs()), ShouldEqual, 0) + So(len(ms.GetSortedTopicLogStreamIDs()), ShouldEqual, 0) }) }) }) @@ -388,22 +428,28 @@ func TestStorageUpdateLS(t *testing.T) { for i := 0; i < rep; i++ { snIDs[i] = types.StorageNodeID(lsID) + types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) } - ls := makeLogStream(lsID, snIDs) - err := ms.registerLogStream(ls) + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + + err = ms.registerLogStream(ls) So(err, ShouldBeNil) updateSnIDs := make([]types.StorageNodeID, rep) for i := 0; i < rep; i++ { updateSnIDs[i] = snIDs[i] + types.StorageNodeID(rep) } - updateLS := makeLogStream(lsID, updateSnIDs) + updateLS := makeLogStream(types.TopicID(1), lsID, updateSnIDs) err = ms.UpdateLogStream(updateLS, 0, 0) So(err, ShouldResemble, verrors.ErrInvalidArgument) @@ -411,7 +457,9 @@ func TestStorageUpdateLS(t *testing.T) { Convey("LS should be updated if exist all SN", func(ctx C) { for i := 0; i < rep; i++ { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: updateSnIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: updateSnIDs[i], + }, } err := ms.registerStorageNode(sn) @@ -466,15 +514,21 @@ func TestStorageUpdateLSUnderCOW(t *testing.T) { for i := 0; i < rep; i++ { snIDs[i] = types.StorageNodeID(lsID) + types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) } - ls := makeLogStream(lsID, snIDs) - err := ms.registerLogStream(ls) + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + + err = ms.registerLogStream(ls) So(err, ShouldBeNil) // set COW @@ -486,14 +540,16 @@ func TestStorageUpdateLSUnderCOW(t *testing.T) { updateSnIDs[i] = snIDs[i] + types.StorageNodeID(rep) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: updateSnIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: updateSnIDs[i], + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) } - updateLS := makeLogStream(lsID, updateSnIDs) + updateLS := makeLogStream(types.TopicID(1), lsID, updateSnIDs) err = ms.UpdateLogStream(updateLS, 0, 0) So(err, ShouldBeNil) @@ -501,10 +557,10 @@ func TestStorageUpdateLSUnderCOW(t *testing.T) { // compare diffls := ms.diffStateMachine.Metadata.GetLogStream(lsID) - difflls, _ := ms.diffStateMachine.LogStream.UncommitReports[lsID] + difflls := ms.diffStateMachine.LogStream.UncommitReports[lsID] origls := ms.origStateMachine.Metadata.GetLogStream(lsID) - origlls, _ := ms.origStateMachine.LogStream.UncommitReports[lsID] + origlls := ms.origStateMachine.LogStream.UncommitReports[lsID] So(diffls.Equal(origls), ShouldBeFalse) So(difflls.Equal(origlls), ShouldBeFalse) @@ -517,7 +573,7 @@ func TestStorageUpdateLSUnderCOW(t *testing.T) { // compare mergedls := ms.origStateMachine.Metadata.GetLogStream(lsID) - mergedlls, _ := ms.origStateMachine.LogStream.UncommitReports[lsID] + mergedlls := ms.origStateMachine.LogStream.UncommitReports[lsID] So(diffls.Equal(mergedls), ShouldBeTrue) So(difflls.Equal(mergedlls), ShouldBeTrue) @@ -547,15 +603,21 @@ func TestStorageSealLS(t *testing.T) { for i := 0; i < rep; i++ { snIDs[i] = types.StorageNodeID(lsID) + types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) } - ls := makeLogStream(lsID, snIDs) - err := ms.registerLogStream(ls) + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + + err = ms.registerLogStream(ls) So(err, ShouldBeNil) Convey("Seal should be success", func(ctx C) { @@ -590,7 +652,7 @@ func TestStorageSealLS(t *testing.T) { r := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN, UncommittedLLSNLength: uint64(i), - HighWatermark: types.InvalidGLSN, + Version: types.InvalidVersion, } ms.UpdateUncommitReport(lsID, snIDs[i], r) @@ -613,7 +675,7 @@ func TestStorageSealLS(t *testing.T) { r := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN, UncommittedLLSNLength: uint64(i + 1), - HighWatermark: types.InvalidGLSN, + Version: types.InvalidVersion, } ms.UpdateUncommitReport(lsID, snIDs[i], r) @@ -638,15 +700,21 @@ func TestStorageSealLSUnderCOW(t *testing.T) { for i := 0; i < rep; i++ { snIDs[i] = types.StorageNodeID(lsID) + types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) } - ls := makeLogStream(lsID, snIDs) - err := ms.registerLogStream(ls) + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + + err = ms.registerLogStream(ls) So(err, ShouldBeNil) // set COW @@ -658,7 +726,7 @@ func TestStorageSealLSUnderCOW(t *testing.T) { // compare diffls := ms.diffStateMachine.Metadata.GetLogStream(lsID) - difflls, _ := ms.diffStateMachine.LogStream.UncommitReports[lsID] + difflls := ms.diffStateMachine.LogStream.UncommitReports[lsID] origls := ms.origStateMachine.Metadata.GetLogStream(lsID) origlls, _ := ms.origStateMachine.LogStream.UncommitReports[lsID] @@ -707,15 +775,21 @@ func TestStorageUnsealLS(t *testing.T) { for i := 0; i < rep; i++ { snIDs[i] = types.StorageNodeID(lsID) + types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) } - ls := makeLogStream(lsID, snIDs) - err := ms.registerLogStream(ls) + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + + err = ms.registerLogStream(ls) So(err, ShouldBeNil) Convey("Unseal to LS which is already Unsealed should return nil", func(ctx C) { @@ -753,7 +827,7 @@ func TestStorageUnsealLS(t *testing.T) { r := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN, UncommittedLLSNLength: uint64(i), - HighWatermark: types.InvalidGLSN, + Version: types.InvalidVersion, } ms.UpdateUncommitReport(lsID, snIDs[i], r) @@ -771,14 +845,13 @@ func TestStorageTrim(t *testing.T) { Convey("Given a GlobalLogStreams", t, func(ctx C) { ms := NewMetadataStorage(nil, DefaultSnapshotCount, nil) - for hwm := types.MinGLSN; hwm < types.GLSN(1024); hwm++ { + for ver := types.MinVersion; ver < types.Version(1024); ver++ { gls := &mrpb.LogStreamCommitResults{ - HighWatermark: hwm, - PrevHighWatermark: hwm - types.GLSN(1), + Version: ver, } gls.CommitResults = append(gls.CommitResults, snpb.LogStreamCommitResult{ - CommittedGLSNOffset: hwm, + CommittedGLSNOffset: types.GLSN(ver), CommittedGLSNLength: 1, }) @@ -786,12 +859,12 @@ func TestStorageTrim(t *testing.T) { } Convey("When operate trim, trimmed gls should not be found", func(ctx C) { - for trim := types.InvalidGLSN; trim < types.GLSN(1024); trim++ { + for trim := types.InvalidVersion; trim < types.Version(1024); trim++ { ms.TrimLogStreamCommitHistory(trim) ms.trimLogStreamCommitHistory() - if trim > types.MinGLSN { - So(ms.getFirstCommitResultsNoLock().GetHighWatermark(), ShouldEqual, trim-types.GLSN(1)) + if trim > types.MinVersion { + So(ms.getFirstCommitResultsNoLock().GetVersion(), ShouldEqual, trim-types.MinVersion) } } }) @@ -809,7 +882,9 @@ func TestStorageReport(t *testing.T) { for i := 0; i < rep; i++ { snIDs[i] = tmp + types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := ms.registerStorageNode(sn) @@ -820,7 +895,7 @@ func TestStorageReport(t *testing.T) { r := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN, UncommittedLLSNLength: 5, - HighWatermark: types.InvalidGLSN, + Version: types.InvalidVersion, } for i := 0; i < rep; i++ { @@ -830,13 +905,16 @@ func TestStorageReport(t *testing.T) { } Convey("storage should not apply report if snID is not exist in LS", func(ctx C) { - ls := makeLogStream(lsID, snIDs) + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) ms.registerLogStream(ls) r := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN, UncommittedLLSNLength: 5, - HighWatermark: types.InvalidGLSN, + Version: types.InvalidVersion, } ms.UpdateUncommitReport(lsID, notExistSnID, r) @@ -847,7 +925,7 @@ func TestStorageReport(t *testing.T) { r := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN, UncommittedLLSNLength: 5, - HighWatermark: types.InvalidGLSN, + Version: types.InvalidVersion, } for i := 0; i < rep; i++ { @@ -883,7 +961,9 @@ func TestStorageCopyOnWrite(t *testing.T) { snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -901,8 +981,10 @@ func TestStorageCopyOnWrite(t *testing.T) { snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, - Address: "my_addr", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + Address: "my_addr", + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -914,15 +996,19 @@ func TestStorageCopyOnWrite(t *testing.T) { So(cur.Metadata.GetStorageNode(snID), ShouldBeNil) sn = &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, - Address: "diff_addr", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + Address: "diff_addr", + }, } err = ms.RegisterStorageNode(sn, 0, 0) So(err, ShouldResemble, verrors.ErrAlreadyExists) snID2 := snID + types.StorageNodeID(1) sn = &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID2, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID2, + }, } err = ms.RegisterStorageNode(sn, 0, 0) @@ -943,16 +1029,22 @@ func TestStorageCopyOnWrite(t *testing.T) { for i := 0; i < rep; i++ { snIDs[i] = types.StorageNodeID(lsID) + types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) } - ls := makeLogStream(lsID, snIDs) - ls2 := makeLogStream(lsID2, snIDs) - err := ms.RegisterLogStream(ls, 0, 0) + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + ls2 := makeLogStream(types.TopicID(1), lsID2, snIDs) + + err = ms.RegisterLogStream(ls, 0, 0) So(err, ShouldBeNil) ms.setCopyOnWrite() @@ -960,7 +1052,7 @@ func TestStorageCopyOnWrite(t *testing.T) { So(pre.Metadata.GetLogStream(lsID), ShouldNotBeNil) So(cur.Metadata.GetLogStream(lsID), ShouldBeNil) - conflict := makeLogStream(lsID, snIDs) + conflict := makeLogStream(types.TopicID(1), lsID, snIDs) ls.Replicas[0].StorageNodeID = ls.Replicas[0].StorageNodeID + 100 err = ms.RegisterLogStream(conflict, 0, 0) So(err, ShouldResemble, verrors.ErrAlreadyExists) @@ -986,19 +1078,25 @@ func TestStorageCopyOnWrite(t *testing.T) { for i := 0; i < rep; i++ { snIDs[i] = types.StorageNodeID(lsID) + types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) } - ls := makeLogStream(lsID, snIDs) + + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) ms.registerLogStream(ls) r := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN, UncommittedLLSNLength: 5, - HighWatermark: types.GLSN(10), + Version: types.MinVersion, } ms.UpdateUncommitReport(lsID, snIDs[0], r) So(ms.isCopyOnWrite(), ShouldBeFalse) @@ -1014,7 +1112,7 @@ func TestStorageCopyOnWrite(t *testing.T) { r := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN, UncommittedLLSNLength: 5, - HighWatermark: types.GLSN(10), + Version: types.MinVersion, } ms.UpdateUncommitReport(lsID, snIDs[1], r) @@ -1029,8 +1127,7 @@ func TestStorageCopyOnWrite(t *testing.T) { ms := NewMetadataStorage(nil, DefaultSnapshotCount, nil) gls := &mrpb.LogStreamCommitResults{ - PrevHighWatermark: types.GLSN(5), - HighWatermark: types.GLSN(10), + Version: types.Version(2), } lsID := types.LogStreamID(time.Now().UnixNano()) @@ -1043,16 +1140,15 @@ func TestStorageCopyOnWrite(t *testing.T) { ms.AppendLogStreamCommitHistory(gls) So(ms.isCopyOnWrite(), ShouldBeFalse) - cr, _ := ms.LookupNextCommitResults(types.GLSN(5)) + cr, _ := ms.LookupNextCommitResults(types.MinVersion) So(cr, ShouldNotBeNil) - So(ms.GetHighWatermark(), ShouldEqual, types.GLSN(10)) + So(ms.GetLastCommitVersion(), ShouldEqual, types.Version(2)) Convey("lookup GlobalLogStream with copyOnWrite should give merged response", func(ctx C) { ms.setCopyOnWrite() gls := &mrpb.LogStreamCommitResults{ - PrevHighWatermark: types.GLSN(10), - HighWatermark: types.GLSN(15), + Version: types.Version(3), } commit := snpb.LogStreamCommitResult{ @@ -1063,11 +1159,11 @@ func TestStorageCopyOnWrite(t *testing.T) { gls.CommitResults = append(gls.CommitResults, commit) ms.AppendLogStreamCommitHistory(gls) - cr, _ := ms.LookupNextCommitResults(types.GLSN(5)) + cr, _ := ms.LookupNextCommitResults(types.MinVersion) So(cr, ShouldNotBeNil) - cr, _ = ms.LookupNextCommitResults(types.GLSN(10)) + cr, _ = ms.LookupNextCommitResults(types.Version(2)) So(cr, ShouldNotBeNil) - So(ms.GetHighWatermark(), ShouldEqual, types.GLSN(15)) + So(ms.GetLastCommitVersion(), ShouldEqual, types.Version(3)) }) }) } @@ -1104,7 +1200,9 @@ func TestStorageMetadataCache(t *testing.T) { snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -1140,7 +1238,9 @@ func TestStorageMetadataCache(t *testing.T) { snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -1152,7 +1252,9 @@ func TestStorageMetadataCache(t *testing.T) { snID2 := snID + types.StorageNodeID(1) sn = &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID2, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID2, + }, } err = ms.RegisterStorageNode(sn, 0, 0) @@ -1198,7 +1300,9 @@ func TestStorageStateMachineMerge(t *testing.T) { snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -1220,7 +1324,9 @@ func TestStorageStateMachineMerge(t *testing.T) { snID = snID + types.StorageNodeID(1) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -1259,7 +1365,7 @@ func TestStorageStateMachineMerge(t *testing.T) { s := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN + types.LLSN(i*3), UncommittedLLSNLength: 1, - HighWatermark: types.InvalidGLSN, + Version: types.InvalidVersion, } ms.UpdateUncommitReport(lsID, snID, s) @@ -1277,7 +1383,7 @@ func TestStorageStateMachineMerge(t *testing.T) { s := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN + types.LLSN(1+i*3), UncommittedLLSNLength: 1, - HighWatermark: types.GLSN(1024), + Version: types.Version(1), } ms.UpdateUncommitReport(lsID, snID, s) @@ -1288,13 +1394,13 @@ func TestStorageStateMachineMerge(t *testing.T) { st := time.Now() ms.mergeStateMachine() - t.Log(time.Now().Sub(st)) + t.Log(time.Since(st)) }) } func TestStorageSnapshot(t *testing.T) { Convey("create snapshot should not operate while job running", t, func(ctx C) { - ch := make(chan struct{}, 0) + ch := make(chan struct{}) cb := func(uint64, uint64, error) { ch <- struct{}{} } @@ -1308,7 +1414,9 @@ func TestStorageSnapshot(t *testing.T) { snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } appliedIndex := uint64(0) @@ -1324,7 +1432,9 @@ func TestStorageSnapshot(t *testing.T) { snID = snID + types.StorageNodeID(1) sn = &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err = ms.RegisterStorageNode(sn, 0, 0) @@ -1349,7 +1459,9 @@ func TestStorageSnapshot(t *testing.T) { snID2 := snID + types.StorageNodeID(1) sn = &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID2, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID2, + }, } err = ms.RegisterStorageNode(sn, 0, 0) @@ -1389,7 +1501,9 @@ func TestStorageApplySnapshot(t *testing.T) { snID := types.StorageNodeID(time.Now().UnixNano()) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -1402,7 +1516,9 @@ func TestStorageApplySnapshot(t *testing.T) { snID = snID + types.StorageNodeID(1) sn = &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err = ms.RegisterStorageNode(sn, 0, 0) @@ -1463,6 +1579,9 @@ func TestStorageSnapshotRace(t *testing.T) { numLS := 128 numRep := 3 + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + lsIDs := make([]types.LogStreamID, numLS) snIDs := make([][]types.StorageNodeID, numLS) for i := 0; i < numLS; i++ { @@ -1472,14 +1591,16 @@ func TestStorageSnapshotRace(t *testing.T) { snIDs[i][j] = types.StorageNodeID(i*numRep + j) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i][j], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i][j], + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) } - ls := makeLogStream(lsIDs[i], snIDs[i]) + ls := makeLogStream(types.TopicID(1), lsIDs[i], snIDs[i]) err := ms.RegisterLogStream(ls, 0, 0) So(err, ShouldBeNil) } @@ -1489,11 +1610,10 @@ func TestStorageSnapshotRace(t *testing.T) { checkLS := 0 for i := 0; i < n; i++ { - preGLSN := types.GLSN(i * numLS) - newGLSN := types.GLSN((i + 1) * numLS) + preVersion := types.Version(i) + newVersion := types.Version(i + 1) gls := &mrpb.LogStreamCommitResults{ - PrevHighWatermark: preGLSN, - HighWatermark: newGLSN, + Version: newVersion, } for j := 0; j < numLS; j++ { @@ -1505,7 +1625,7 @@ func TestStorageSnapshotRace(t *testing.T) { r := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN + types.LLSN(i), UncommittedLLSNLength: 1, - HighWatermark: preGLSN, + Version: preVersion, } ms.UpdateUncommitReport(lsID, snID, r) @@ -1516,7 +1636,7 @@ func TestStorageSnapshotRace(t *testing.T) { commit := snpb.LogStreamCommitResult{ LogStreamID: lsID, - CommittedGLSNOffset: preGLSN + types.GLSN(1), + CommittedGLSNOffset: types.GLSN(preVersion + 1), CommittedGLSNLength: uint64(numLS), } gls.CommitResults = append(gls.CommitResults, commit) @@ -1527,10 +1647,10 @@ func TestStorageSnapshotRace(t *testing.T) { appliedIndex++ ms.UpdateAppliedIndex(appliedIndex) - gls, _ = ms.LookupNextCommitResults(preGLSN) + gls, _ = ms.LookupNextCommitResults(preVersion) if gls != nil && - gls.HighWatermark == newGLSN && - ms.GetHighWatermark() == newGLSN { + gls.Version == newVersion && + ms.GetLastCommitVersion() == newVersion { checkGLS++ } @@ -1547,7 +1667,7 @@ func TestStorageSnapshotRace(t *testing.T) { r, ok := ls.Replicas[snID] if ok && - r.HighWatermark == preGLSN { + r.Version == preVersion { checkLS++ } } @@ -1581,21 +1701,25 @@ func TestStorageVerifyReport(t *testing.T) { snID := types.StorageNodeID(lsID) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, } err := ms.registerStorageNode(sn) So(err, ShouldBeNil) - ls := makeLogStream(lsID, []types.StorageNodeID{snID}) + err = ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + ls := makeLogStream(types.TopicID(1), lsID, []types.StorageNodeID{snID}) err = ms.RegisterLogStream(ls, 0, 0) So(err, ShouldBeNil) - for i := 0; i < 3; i++ { + for i := 1; i < 4; i++ { gls := &mrpb.LogStreamCommitResults{ - PrevHighWatermark: types.GLSN(i*5 + 5), - HighWatermark: types.GLSN(i*5 + 10), + Version: types.Version(i + 1), } commit := snpb.LogStreamCommitResult{ @@ -1608,26 +1732,31 @@ func TestStorageVerifyReport(t *testing.T) { ms.AppendLogStreamCommitHistory(gls) } - Convey("When update report with valid hwm, then it should be succeed", func(ctx C) { - for i := 0; i < 4; i++ { + Convey("When update report with valid version, then it should be succeed", func(ctx C) { + for i := 1; i < 5; i++ { r := snpb.LogStreamUncommitReport{ UncommittedLLSNOffset: types.MinLLSN + types.LLSN(i*5), UncommittedLLSNLength: 5, - HighWatermark: types.GLSN(i*5 + 5), + Version: types.Version(i), } So(ms.verifyUncommitReport(r), ShouldBeTrue) } }) - Convey("When update report with invalid hwm, then it should be succeed", func(ctx C) { - for i := 0; i < 5; i++ { - r := snpb.LogStreamUncommitReport{ - UncommittedLLSNOffset: types.MinLLSN + types.LLSN(i*5), - UncommittedLLSNLength: 5, - HighWatermark: types.GLSN(i*5 + 5 - 1), - } - So(ms.verifyUncommitReport(r), ShouldBeFalse) + Convey("When update report with invalid version, then it should not be succeed", func(ctx C) { + r := snpb.LogStreamUncommitReport{ + UncommittedLLSNOffset: types.MinLLSN + types.LLSN(5), + UncommittedLLSNLength: 5, + Version: types.Version(0), + } + So(ms.verifyUncommitReport(r), ShouldBeFalse) + + r = snpb.LogStreamUncommitReport{ + UncommittedLLSNOffset: types.MinLLSN + types.LLSN(5), + UncommittedLLSNLength: 5, + Version: types.Version(5), } + So(ms.verifyUncommitReport(r), ShouldBeFalse) }) }) } @@ -1649,7 +1778,9 @@ func TestStorageRecoverStateMachine(t *testing.T) { for i := 0; i < nrSN; i++ { snIDs[i] = base + types.StorageNodeID(i) sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snIDs[i], + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, } err := ms.RegisterStorageNode(sn, 0, 0) @@ -1704,3 +1835,135 @@ func TestStorageRecoverStateMachine(t *testing.T) { }) }) } + +func TestStorageRegisterTopic(t *testing.T) { + Convey("Topic should be registered if not existed", t, func(ctx C) { + ms := NewMetadataStorage(nil, DefaultSnapshotCount, nil) + + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + Convey("Topic should be registered even though it already exist", func(ctx C) { + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + }) + }) + + Convey("LS should not be registered if not exist topic", t, func(ctx C) { + rep := 2 + ms := NewMetadataStorage(nil, DefaultSnapshotCount, nil) + + lsID := types.LogStreamID(time.Now().UnixNano()) + tmp := types.StorageNodeID(time.Now().UnixNano()) + + snIDs := make([]types.StorageNodeID, rep) + for i := 0; i < rep; i++ { + snIDs[i] = tmp + types.StorageNodeID(i) + + sn := &varlogpb.StorageNodeDescriptor{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snIDs[i], + }, + } + + err := ms.registerStorageNode(sn) + So(err, ShouldBeNil) + } + + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + + err := ms.registerLogStream(ls) + So(err, ShouldResemble, verrors.ErrInvalidArgument) + + Convey("LS should be registered if exist topic", func(ctx C) { + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + err = ms.registerLogStream(ls) + So(err, ShouldBeNil) + + for i := 1; i < 10; i++ { + lsID += types.LogStreamID(1) + ls := makeLogStream(types.TopicID(1), lsID, snIDs) + + err := ms.registerLogStream(ls) + So(err, ShouldBeNil) + } + + topic := ms.lookupTopic(types.TopicID(1)) + So(topic, ShouldNotBeNil) + + So(len(topic.LogStreams), ShouldEqual, 10) + }) + }) +} + +func TestStoragUnregisterTopic(t *testing.T) { + Convey("unregister non-exist topic should return ErrNotExist", t, func(ctx C) { + ms := NewMetadataStorage(nil, DefaultSnapshotCount, nil) + + err := ms.unregisterTopic(types.TopicID(1)) + So(err, ShouldResemble, verrors.ErrNotExist) + }) + + Convey("unregister exist topic", t, func(ctx C) { + ms := NewMetadataStorage(nil, DefaultSnapshotCount, nil) + + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(1)}) + So(err, ShouldBeNil) + + err = ms.unregisterTopic(types.TopicID(1)) + So(err, ShouldBeNil) + + topic := ms.lookupTopic(types.TopicID(1)) + So(topic, ShouldBeNil) + }) +} + +func TestStorageSortedTopicLogStreamIDs(t *testing.T) { + Convey("UncommitReport should be committed", t, func(ctx C) { + /* + Topic-1 : LS-1, LS-3 + Topic-2 : LS-2, LS-4 + */ + + nrLS := 4 + nrTopic := 2 + + ms := NewMetadataStorage(nil, DefaultSnapshotCount, nil) + + snID := types.StorageNodeID(0) + snIDs := make([]types.StorageNodeID, 1) + snIDs = append(snIDs, snID) + + sn := &varlogpb.StorageNodeDescriptor{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, + } + + err := ms.registerStorageNode(sn) + So(err, ShouldBeNil) + + for i := 0; i < nrTopic; i++ { + err := ms.registerTopic(&varlogpb.TopicDescriptor{TopicID: types.TopicID(i + 1)}) + So(err, ShouldBeNil) + } + + for i := 0; i < nrLS; i++ { + lsID := types.LogStreamID(i + 1) + ls := makeLogStream(types.TopicID(i%2+1), lsID, snIDs) + err = ms.registerLogStream(ls) + So(err, ShouldBeNil) + } + + ids := ms.GetSortedTopicLogStreamIDs() + So(sort.SliceIsSorted(ids, func(i, j int) bool { + if ids[i].TopicID == ids[j].TopicID { + return ids[i].LogStreamID < ids[j].LogStreamID + } + + return ids[i].TopicID < ids[j].TopicID + }), ShouldBeTrue) + }) +} diff --git a/internal/storagenode/config.go b/internal/storagenode/config.go index 3a3412f12..06a5cb0e3 100644 --- a/internal/storagenode/config.go +++ b/internal/storagenode/config.go @@ -4,6 +4,8 @@ import ( "github.com/pkg/errors" "go.uber.org/zap" + "github.com/kakao/varlog/internal/storagenode/volume" + "github.com/kakao/varlog/internal/storagenode/executor" "github.com/kakao/varlog/internal/storagenode/pprof" "github.com/kakao/varlog/internal/storagenode/storage" @@ -140,7 +142,7 @@ func (o volumesOption) apply(c *config) { func WithVolumes(dirs ...string) Option { volumes := set.New(len(dirs)) for _, dir := range dirs { - vol, err := NewVolume(dir) + vol, err := volume.New(dir) if err != nil { panic(err) } diff --git a/internal/storagenode/config_test.go b/internal/storagenode/config_test.go index 745a57542..b9625f9e3 100644 --- a/internal/storagenode/config_test.go +++ b/internal/storagenode/config_test.go @@ -37,5 +37,4 @@ func TestConfig(t *testing.T) { } }) } - } diff --git a/internal/storagenode/executor/commit_task.go b/internal/storagenode/executor/commit_task.go index 8044c5065..a01e66e27 100644 --- a/internal/storagenode/executor/commit_task.go +++ b/internal/storagenode/executor/commit_task.go @@ -15,8 +15,8 @@ var commitTaskPool = sync.Pool{ } type commitTask struct { + version types.Version highWatermark types.GLSN - prevHighWatermark types.GLSN committedGLSNBegin types.GLSN committedGLSNEnd types.GLSN committedLLSNBegin types.LLSN @@ -33,8 +33,7 @@ func newCommitTask() *commitTask { } func (t *commitTask) release() { - t.highWatermark = types.InvalidGLSN - t.prevHighWatermark = types.InvalidGLSN + t.version = types.InvalidVersion t.committedGLSNBegin = types.InvalidGLSN t.committedGLSNEnd = types.InvalidGLSN t.committedLLSNBegin = types.InvalidLLSN @@ -44,8 +43,8 @@ func (t *commitTask) release() { commitTaskPool.Put(t) } -func (t *commitTask) stale(globalHWM types.GLSN) bool { - return t.highWatermark <= globalHWM +func (t *commitTask) stale(ver types.Version) bool { + return t.version <= ver } func (t *commitTask) annotate(ctx context.Context, m MeasurableExecutor, discarded bool) { diff --git a/internal/storagenode/executor/commit_task_test.go b/internal/storagenode/executor/commit_task_test.go index d5ca02b8a..4c6dda192 100644 --- a/internal/storagenode/executor/commit_task_test.go +++ b/internal/storagenode/executor/commit_task_test.go @@ -15,12 +15,10 @@ import ( func TestCommitTaskBlockPool(t *testing.T) { for i := 0; i < 100; i++ { ctb := newCommitTask() - require.Equal(t, types.InvalidGLSN, ctb.highWatermark) - require.Equal(t, types.InvalidGLSN, ctb.prevHighWatermark) + require.Equal(t, types.InvalidVersion, ctb.version) require.Equal(t, types.InvalidGLSN, ctb.committedGLSNBegin) require.Equal(t, types.InvalidGLSN, ctb.committedGLSNEnd) - ctb.highWatermark = 1 - ctb.prevHighWatermark = 1 + ctb.version = 1 ctb.committedGLSNBegin = 1 ctb.committedGLSNEnd = 1 ctb.release() @@ -31,7 +29,7 @@ type testCommitTaskHeap []*commitTask func (b testCommitTaskHeap) Len() int { return len(b) } -func (b testCommitTaskHeap) Less(i, j int) bool { return b[i].highWatermark < b[j].highWatermark } +func (b testCommitTaskHeap) Less(i, j int) bool { return b[i].version < b[j].version } func (b testCommitTaskHeap) Swap(i, j int) { b[i], b[j] = b[j], b[i] } @@ -68,7 +66,7 @@ func BenchmarkCommitTaskBlockBatch(b *testing.B) { commitTasks := make([]*commitTask, 0, b.N) for i := 0; i < b.N; i++ { ct := &commitTask{} - ct.highWatermark = types.GLSN(rand.Uint64()) + ct.version = types.Version(rand.Uint64()) commitTasks = append(commitTasks, ct) } benchFunc.f(b, commitTasks) diff --git a/internal/storagenode/executor/commit_wait_queue.go b/internal/storagenode/executor/commit_wait_queue.go index 9451e766a..a5c92b64c 100644 --- a/internal/storagenode/executor/commit_wait_queue.go +++ b/internal/storagenode/executor/commit_wait_queue.go @@ -48,11 +48,10 @@ type commitWaitQueueImpl struct { var _ commitWaitQueue = (*commitWaitQueueImpl)(nil) -func newCommitWaitQueue() (commitWaitQueue, error) { - cwq := &commitWaitQueueImpl{ +func newCommitWaitQueue() commitWaitQueue { + return &commitWaitQueueImpl{ queue: list.New(), } - return cwq, nil } func (cwq *commitWaitQueueImpl) push(cwt *commitWaitTask) error { diff --git a/internal/storagenode/executor/commit_wait_queue_test.go b/internal/storagenode/executor/commit_wait_queue_test.go index 634173572..4feef1f97 100644 --- a/internal/storagenode/executor/commit_wait_queue_test.go +++ b/internal/storagenode/executor/commit_wait_queue_test.go @@ -11,8 +11,7 @@ import ( func TestCommitWaitQueue(t *testing.T) { const n = 10 - cwq, err := newCommitWaitQueue() - require.NoError(t, err) + cwq := newCommitWaitQueue() require.Zero(t, cwq.size()) iter := cwq.peekIterator() require.False(t, iter.valid()) diff --git a/internal/storagenode/executor/committer.go b/internal/storagenode/executor/committer.go index 87edbe476..e4debfe1b 100644 --- a/internal/storagenode/executor/committer.go +++ b/internal/storagenode/executor/committer.go @@ -126,11 +126,7 @@ func (c *committerImpl) init() error { c.inflightCommitTasks.cv = sync.NewCond(&c.inflightCommitTasks.mu) - commitWaitQ, err := newCommitWaitQueue() - if err != nil { - return err - } - c.commitWaitQ = commitWaitQ + c.commitWaitQ = newCommitWaitQueue() r := runner.New("committer", nil) cancel, err := r.Run(c.commitLoop) @@ -236,8 +232,8 @@ func (c *committerImpl) ready(ctx context.Context) (int64, error) { numPopped++ ct.poppedTime = time.Now() - globalHighWatermark, _ := c.lsc.reportCommitBase() - if ct.stale(globalHighWatermark) { + commitVersion, _, _ := c.lsc.reportCommitBase() + if ct.stale(commitVersion) { ct.annotate(ctx, c.me, true) ct.release() } else { @@ -250,7 +246,7 @@ func (c *committerImpl) ready(ctx context.Context) (int64, error) { numPopped++ ct.poppedTime = time.Now() - if ct.stale(globalHighWatermark) { + if ct.stale(commitVersion) { ct.annotate(ctx, c.me, true) ct.release() continue @@ -279,8 +275,8 @@ func (c *committerImpl) commit(ctx context.Context) error { */ for _, ct := range c.commitTaskBatch { - globalHighWatermark, _ := c.lsc.reportCommitBase() - if ct.stale(globalHighWatermark) { + commitVersion, _, _ := c.lsc.reportCommitBase() + if ct.stale(commitVersion) { ct.annotate(ctx, c.me, true) continue } @@ -300,7 +296,7 @@ func (c *committerImpl) commit(ctx context.Context) error { } func (c *committerImpl) commitInternal(ctx context.Context, ct *commitTask) error { - _, uncommittedLLSNBegin := c.lsc.reportCommitBase() + _, _, uncommittedLLSNBegin := c.lsc.reportCommitBase() if uncommittedLLSNBegin != ct.committedLLSNBegin { // skip this commit // See #VARLOG-453 (VARLOG-453). @@ -338,8 +334,8 @@ func (c *committerImpl) commitInternal(ctx context.Context, ct *commitTask) erro } commitContext := storage.CommitContext{ + Version: ct.version, HighWatermark: ct.highWatermark, - PrevHighWatermark: ct.prevHighWatermark, CommittedGLSNBegin: ct.committedGLSNBegin, CommittedGLSNEnd: ct.committedGLSNEnd, CommittedLLSNBegin: uncommittedLLSNBegin, @@ -494,7 +490,7 @@ func (c *committerImpl) resetBatch() { // NOTE: (bool, error) = (processed, err) func (c *committerImpl) commitDirectly(commitContext storage.CommitContext, requireCommitWaitTasks bool) (bool, error) { - _, uncommittedLLSNBegin := c.lsc.reportCommitBase() + _, _, uncommittedLLSNBegin := c.lsc.reportCommitBase() numCommits := int(commitContext.CommittedGLSNEnd - commitContext.CommittedGLSNBegin) // NOTE: It seems to be similar to the above condition. The actual purpose of this @@ -569,8 +565,9 @@ func (c *committerImpl) commitDirectly(commitContext storage.CommitContext, requ c.lsc.localGLSN.localHighWatermark.Store(commitContext.CommittedGLSNEnd - 1) } uncommittedLLSNBegin += types.LLSN(numCommits) + c.decider.change(func() { - c.lsc.storeReportCommitBase(commitContext.HighWatermark, uncommittedLLSNBegin) + c.lsc.storeReportCommitBase(commitContext.Version, commitContext.HighWatermark, uncommittedLLSNBegin) }) // NOTE: Notifying the completion of append should be happened after assigning a new diff --git a/internal/storagenode/executor/committer_test.go b/internal/storagenode/executor/committer_test.go index 9c48838ac..5fb2c9ad1 100644 --- a/internal/storagenode/executor/committer_test.go +++ b/internal/storagenode/executor/committer_test.go @@ -188,8 +188,7 @@ func TestCommitterStop(t *testing.T) { lsc.uncommittedLLSNEnd.Add(1) err = committer.sendCommitTask(context.TODO(), &commitTask{ - highWatermark: 1, - prevHighWatermark: 0, + version: 1, committedGLSNBegin: 1, committedGLSNEnd: 2, committedLLSNBegin: 1, @@ -261,8 +260,7 @@ func TestCommitter(t *testing.T) { // commit, err = committer.sendCommitTask(context.TODO(), &commitTask{ - highWatermark: 2, - prevHighWatermark: 0, + version: 1, committedGLSNBegin: 1, committedGLSNEnd: 3, committedLLSNBegin: 1, @@ -296,8 +294,7 @@ func TestCommitter(t *testing.T) { // commit, err = committer.sendCommitTask(context.TODO(), &commitTask{ - highWatermark: 4, - prevHighWatermark: 2, + version: 2, committedGLSNBegin: 3, committedGLSNEnd: 5, committedLLSNBegin: 3, @@ -352,16 +349,14 @@ func TestCommitterCatchupCommitVarlog459(t *testing.T) { for i := 0; i < goal; i++ { committer.sendCommitTask(context.Background(), &commitTask{ - highWatermark: types.GLSN(i + 1), - prevHighWatermark: types.GLSN(i), + version: types.Version(i + 1), committedGLSNBegin: types.MinGLSN, committedGLSNEnd: types.MinGLSN, committedLLSNBegin: types.MinLLSN, }) if i > 0 { committer.sendCommitTask(context.Background(), &commitTask{ - highWatermark: types.GLSN(i), - prevHighWatermark: types.GLSN(i - 1), + version: types.Version(i), committedGLSNBegin: types.MinGLSN, committedGLSNEnd: types.MinGLSN, committedLLSNBegin: types.MinLLSN, @@ -374,8 +369,8 @@ func TestCommitterCatchupCommitVarlog459(t *testing.T) { }, 5*time.Second, 10*time.Millisecond) require.Eventually(t, func() bool { - hwm, _ := lsc.reportCommitBase() - return hwm == goal + ver, _, _ := lsc.reportCommitBase() + return ver == goal }, 5*time.Second, 10*time.Millisecond) } @@ -445,8 +440,7 @@ func TestCommitterState(t *testing.T) { // push commitTask require.NoError(t, committer.sendCommitTask(context.Background(), &commitTask{ - highWatermark: 1, - prevHighWatermark: 0, + version: 1, committedGLSNBegin: 1, committedGLSNEnd: 2, committedLLSNBegin: 1, @@ -454,8 +448,8 @@ func TestCommitterState(t *testing.T) { // committed require.Eventually(t, func() bool { - hwm, _ := lsc.reportCommitBase() - return committer.commitWaitQ.size() == 1 && hwm == 1 + ver, _, _ := lsc.reportCommitBase() + return committer.commitWaitQ.size() == 1 && ver == 1 }, time.Second, 10*time.Millisecond) // state == sealing @@ -469,8 +463,7 @@ func TestCommitterState(t *testing.T) { require.Error(t, committer.sendCommitWaitTask(context.Background(), cwt)) require.NoError(t, committer.sendCommitTask(context.Background(), &commitTask{ - highWatermark: 2, - prevHighWatermark: 1, + version: 2, committedGLSNBegin: 2, committedGLSNEnd: 3, committedLLSNBegin: 2, @@ -478,8 +471,8 @@ func TestCommitterState(t *testing.T) { // committed require.Eventually(t, func() bool { - hwm, _ := lsc.reportCommitBase() - return committer.commitWaitQ.size() == 0 && hwm == 2 + ver, _, _ := lsc.reportCommitBase() + return committer.commitWaitQ.size() == 0 && ver == 2 }, time.Second, 10*time.Millisecond) // state == learning | sealed @@ -492,8 +485,7 @@ func TestCommitterState(t *testing.T) { require.Error(t, committer.sendCommitWaitTask(context.Background(), cwt)) require.Error(t, committer.sendCommitTask(context.Background(), &commitTask{ - highWatermark: 3, - prevHighWatermark: 2, + version: 3, committedGLSNBegin: 3, committedGLSNEnd: 3, committedLLSNBegin: 3, diff --git a/internal/storagenode/executor/config.go b/internal/storagenode/executor/config.go index ddce41b92..ec213054c 100644 --- a/internal/storagenode/executor/config.go +++ b/internal/storagenode/executor/config.go @@ -23,6 +23,7 @@ const ( type config struct { storageNodeID types.StorageNodeID logStreamID types.LogStreamID + topicID types.TopicID storage storage.Storage writeQueueSize int @@ -112,6 +113,16 @@ func WithLogStreamID(lsid types.LogStreamID) Option { return logStreamIDOption(lsid) } +type topicIDOption types.TopicID + +func (o topicIDOption) apply(c *config) { + c.topicID = types.TopicID(o) +} + +func WithTopicID(topicID types.TopicID) Option { + return topicIDOption(topicID) +} + /* type StorageOption interface { WriterOption diff --git a/internal/storagenode/executor/executor.go b/internal/storagenode/executor/executor.go index 8b56e0060..fadce5d11 100644 --- a/internal/storagenode/executor/executor.go +++ b/internal/storagenode/executor/executor.go @@ -19,7 +19,7 @@ import ( "github.com/kakao/varlog/internal/storagenode/timestamper" "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/verrors" - "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) type Executor interface { @@ -80,7 +80,7 @@ type executor struct { // The primaryBackups is a slice of replicas of a log stream. It is updated by Unseal // and is read by many codes. - primaryBackups []snpb.Replica + primaryBackups []varlogpb.Replica } var _ Executor = (*executor)(nil) @@ -111,12 +111,9 @@ func New(opts ...Option) (*executor, error) { // restore LogStreamContext // NOTE: LogStreamContext should be restored before initLogPipeline is called. - lsc, err := lse.restoreLogStreamContext(ri) - if err != nil { - return nil, err - } - lse.lsc = lsc - lse.decider = newDecidableCondition(lsc) + lse.lsc = lse.restoreLogStreamContext(ri) + + lse.decider = newDecidableCondition(lse.lsc) // init log pipeline if err := lse.initLogPipeline(); err != nil { @@ -207,9 +204,9 @@ func (e *executor) Close() (err error) { return multierr.Append(err, e.storage.Close()) } -func (e *executor) restoreLogStreamContext(ri storage.RecoveryInfo) (*logStreamContext, error) { +func (e *executor) restoreLogStreamContext(ri storage.RecoveryInfo) *logStreamContext { lsc := newLogStreamContext() - globalHighWatermark, uncommittedLLSNBegin := lsc.reportCommitBase() + commitVersion, highWatermark, uncommittedLLSNBegin := lsc.reportCommitBase() uncommittedLLSNEnd := lsc.uncommittedLLSNEnd.Load() lsc.commitProgress.mu.RLock() committedLLSNEnd := lsc.commitProgress.committedLLSNEnd @@ -218,7 +215,8 @@ func (e *executor) restoreLogStreamContext(ri storage.RecoveryInfo) (*logStreamC localLowWatermark := lsc.localGLSN.localLowWatermark.Load() if ri.LastCommitContext.Found { - globalHighWatermark = ri.LastCommitContext.CC.HighWatermark + commitVersion = ri.LastCommitContext.CC.Version + highWatermark = ri.LastCommitContext.CC.HighWatermark } if ri.LogEntryBoundary.Found { lastLLSN := ri.LogEntryBoundary.Last.LLSN @@ -230,14 +228,14 @@ func (e *executor) restoreLogStreamContext(ri storage.RecoveryInfo) (*logStreamC localLowWatermark = ri.LogEntryBoundary.First.GLSN } - lsc.storeReportCommitBase(globalHighWatermark, uncommittedLLSNBegin) + lsc.storeReportCommitBase(commitVersion, highWatermark, uncommittedLLSNBegin) lsc.uncommittedLLSNEnd.Store(uncommittedLLSNEnd) lsc.commitProgress.mu.Lock() lsc.commitProgress.committedLLSNEnd = committedLLSNEnd lsc.commitProgress.mu.Unlock() lsc.localGLSN.localHighWatermark.Store(localHighWatermark) lsc.localGLSN.localLowWatermark.Store(localLowWatermark) - return lsc, nil + return lsc } func (e *executor) regenerateCommitWaitTasks(ri storage.RecoveryInfo) error { @@ -297,7 +295,7 @@ func (e *executor) isPrimay() bool { // NOTE: A new log stream replica that has not received Unseal request is not primary // replica. return len(e.primaryBackups) > 0 && - e.primaryBackups[0].StorageNodeID == e.storageNodeID && + e.primaryBackups[0].StorageNode.StorageNodeID == e.storageNodeID && e.primaryBackups[0].LogStreamID == e.logStreamID } @@ -305,6 +303,10 @@ func (e *executor) StorageNodeID() types.StorageNodeID { return e.storageNodeID } +func (e *executor) TopicID() types.TopicID { + return e.topicID +} + func (e *executor) LogStreamID() types.LogStreamID { return e.logStreamID } diff --git a/internal/storagenode/executor/executor_mock.go b/internal/storagenode/executor/executor_mock.go index 4c9ba32ee..28509cb61 100644 --- a/internal/storagenode/executor/executor_mock.go +++ b/internal/storagenode/executor/executor_mock.go @@ -40,7 +40,7 @@ func (m *MockExecutor) EXPECT() *MockExecutorMockRecorder { } // Append mocks base method. -func (m *MockExecutor) Append(arg0 context.Context, arg1 []byte, arg2 ...snpb.Replica) (types.GLSN, error) { +func (m *MockExecutor) Append(arg0 context.Context, arg1 []byte, arg2 ...varlogpb.Replica) (types.GLSN, error) { m.ctrl.T.Helper() varargs := []interface{}{arg0, arg1} for _, a := range arg2 { @@ -88,7 +88,7 @@ func (mr *MockExecutorMockRecorder) Commit(arg0, arg1 interface{}) *gomock.Call } // GetPrevCommitInfo mocks base method. -func (m *MockExecutor) GetPrevCommitInfo(arg0 types.GLSN) (*snpb.LogStreamCommitInfo, error) { +func (m *MockExecutor) GetPrevCommitInfo(arg0 types.Version) (*snpb.LogStreamCommitInfo, error) { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "GetPrevCommitInfo", arg0) ret0, _ := ret[0].(*snpb.LogStreamCommitInfo) @@ -160,10 +160,10 @@ func (mr *MockExecutorMockRecorder) Path() *gomock.Call { } // Read mocks base method. -func (m *MockExecutor) Read(arg0 context.Context, arg1 types.GLSN) (types.LogEntry, error) { +func (m *MockExecutor) Read(arg0 context.Context, arg1 types.GLSN) (varlogpb.LogEntry, error) { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "Read", arg0, arg1) - ret0, _ := ret[0].(types.LogEntry) + ret0, _ := ret[0].(varlogpb.LogEntry) ret1, _ := ret[1].(error) return ret0, ret1 } @@ -234,7 +234,7 @@ func (mr *MockExecutorMockRecorder) Subscribe(arg0, arg1, arg2 interface{}) *gom } // Sync mocks base method. -func (m *MockExecutor) Sync(arg0 context.Context, arg1 snpb.Replica) (*snpb.SyncStatus, error) { +func (m *MockExecutor) Sync(arg0 context.Context, arg1 varlogpb.Replica) (*snpb.SyncStatus, error) { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "Sync", arg0, arg1) ret0, _ := ret[0].(*snpb.SyncStatus) @@ -292,7 +292,7 @@ func (mr *MockExecutorMockRecorder) Trim(arg0, arg1 interface{}) *gomock.Call { } // Unseal mocks base method. -func (m *MockExecutor) Unseal(arg0 context.Context, arg1 []snpb.Replica) error { +func (m *MockExecutor) Unseal(arg0 context.Context, arg1 []varlogpb.Replica) error { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "Unseal", arg0, arg1) ret0, _ := ret[0].(error) diff --git a/internal/storagenode/executor/executor_test.go b/internal/storagenode/executor/executor_test.go index 5f7d98da5..cccf27b85 100644 --- a/internal/storagenode/executor/executor_test.go +++ b/internal/storagenode/executor/executor_test.go @@ -20,7 +20,6 @@ import ( "github.com/kakao/varlog/internal/storagenode/replication" "github.com/kakao/varlog/internal/storagenode/stopchannel" "github.com/kakao/varlog/internal/storagenode/storage" - "github.com/kakao/varlog/internal/storagenode/telemetry" "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/util/syncutil/atomicutil" "github.com/kakao/varlog/pkg/verrors" @@ -38,7 +37,7 @@ func TestExecutorClose(t *testing.T) { lse, err := New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -123,7 +122,10 @@ func newTestStorage(ctrl *gomock.Controller, cfg *testStorageConfig) storage.Sto } func TestExecutorAppend(t *testing.T) { - const numAppends = 100 + const ( + numAppends = 100 + topicID = 1 + ) defer goleak.VerifyNone(t) @@ -135,7 +137,8 @@ func TestExecutorAppend(t *testing.T) { lse, err := New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithTopicID(topicID), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -149,22 +152,24 @@ func TestExecutorAppend(t *testing.T) { require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) - for hwm := types.GLSN(1); hwm <= types.GLSN(numAppends); hwm++ { + for ver := types.Version(1); ver <= types.Version(numAppends); ver++ { wg := sync.WaitGroup{} wg.Add(2) go func() { defer wg.Done() glsn, err := lse.Append(context.TODO(), []byte("foo")) require.NoError(t, err) - require.Equal(t, hwm, glsn) + require.Equal(t, types.GLSN(ver), glsn) }() go func() { defer wg.Done() @@ -174,11 +179,10 @@ func TestExecutorAppend(t *testing.T) { return report.UncommittedLLSNLength > 0 }, time.Second, time.Millisecond) err := lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ - HighWatermark: hwm, - PrevHighWatermark: hwm - 1, - CommittedGLSNOffset: hwm, + Version: ver, + CommittedGLSNOffset: types.GLSN(ver), CommittedGLSNLength: 1, - CommittedLLSNOffset: types.LLSN(hwm), + CommittedLLSNOffset: types.LLSN(ver), }) require.NoError(t, err) }() @@ -186,14 +190,17 @@ func TestExecutorAppend(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == hwm && report.UncommittedLLSNOffset == types.LLSN(hwm)+1 && + return report.Version == ver && report.UncommittedLLSNOffset == types.LLSN(ver)+1 && report.UncommittedLLSNLength == 0 }, time.Second, time.Millisecond) } } func TestExecutorRead(t *testing.T) { - const numAppends = 100 + const ( + numAppends = 100 + topicID = 1 + ) defer goleak.VerifyNone(t) @@ -205,7 +212,8 @@ func TestExecutorRead(t *testing.T) { lse, err := New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithTopicID(topicID), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -219,10 +227,12 @@ func TestExecutorRead(t *testing.T) { require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) @@ -239,6 +249,7 @@ func TestExecutorRead(t *testing.T) { expectedLLSN := types.LLSN(i) expectedHWM := types.GLSN(i * 3) expectedGLSN := expectedHWM - 1 + expectedVer := types.Version(i) wg := sync.WaitGroup{} wg.Add(5) @@ -273,8 +284,8 @@ func TestExecutorRead(t *testing.T) { return report.UncommittedLLSNLength > 0 }, time.Second, time.Millisecond) err := lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: expectedVer, HighWatermark: expectedHWM, - PrevHighWatermark: expectedHWM - 3, CommittedGLSNOffset: expectedGLSN, CommittedGLSNLength: 1, CommittedLLSNOffset: expectedLLSN, @@ -282,10 +293,11 @@ func TestExecutorRead(t *testing.T) { require.NoError(t, err) }() wg.Wait() + require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == expectedHWM && + return report.Version == expectedVer && report.UncommittedLLSNOffset == expectedLLSN+1 && report.UncommittedLLSNLength == 0 }, time.Second, time.Millisecond) @@ -293,7 +305,11 @@ func TestExecutorRead(t *testing.T) { } func TestExecutorTrim(t *testing.T) { - const numAppends = 10 + const ( + numAppends = 10 + topicID = 1 + ) + defer goleak.VerifyNone(t) ctrl := gomock.NewController(t) @@ -304,7 +320,8 @@ func TestExecutorTrim(t *testing.T) { lse, err := New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithTopicID(topicID), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -318,10 +335,12 @@ func TestExecutorTrim(t *testing.T) { require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) @@ -334,6 +353,7 @@ func TestExecutorTrim(t *testing.T) { expectedHWM := types.GLSN(i * 5) expectedLLSN := types.LLSN(i) expectedGLSN := expectedHWM - 2 + expectedVer := types.Version(i) wg := sync.WaitGroup{} wg.Add(2) @@ -351,8 +371,8 @@ func TestExecutorTrim(t *testing.T) { return report.UncommittedLLSNLength > 0 }, time.Second, time.Millisecond) err := lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: expectedVer, HighWatermark: expectedHWM, - PrevHighWatermark: expectedHWM - 5, CommittedGLSNOffset: expectedGLSN, CommittedGLSNLength: 1, CommittedLLSNOffset: expectedLLSN, @@ -363,7 +383,7 @@ func TestExecutorTrim(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == expectedHWM && report.UncommittedLLSNOffset == expectedLLSN+1 && + return report.Version == expectedVer && report.UncommittedLLSNOffset == expectedLLSN+1 && report.UncommittedLLSNLength == 0 }, time.Second, time.Millisecond) } @@ -405,7 +425,11 @@ func TestExecutorTrim(t *testing.T) { } func TestExecutorSubscribe(t *testing.T) { - const numAppends = 10 + const ( + numAppends = 10 + topicID = 1 + ) + defer goleak.VerifyNone(t) ctrl := gomock.NewController(t) @@ -416,7 +440,8 @@ func TestExecutorSubscribe(t *testing.T) { lse, err := New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithTopicID(topicID), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -430,10 +455,12 @@ func TestExecutorSubscribe(t *testing.T) { require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) @@ -446,6 +473,7 @@ func TestExecutorSubscribe(t *testing.T) { expectedHWM := types.GLSN(i * 5) expectedLLSN := types.LLSN(i) expectedGLSN := expectedHWM - 2 + expectedVer := types.Version(i) wg := sync.WaitGroup{} wg.Add(2) @@ -463,8 +491,8 @@ func TestExecutorSubscribe(t *testing.T) { return report.UncommittedLLSNLength > 0 }, time.Second, time.Millisecond) err := lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: expectedVer, HighWatermark: expectedHWM, - PrevHighWatermark: expectedHWM - 5, CommittedGLSNOffset: expectedGLSN, CommittedGLSNLength: 1, CommittedLLSNOffset: expectedLLSN, @@ -475,7 +503,7 @@ func TestExecutorSubscribe(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == expectedHWM && report.UncommittedLLSNOffset == expectedLLSN+1 && + return report.Version == expectedVer && report.UncommittedLLSNOffset == expectedLLSN+1 && report.UncommittedLLSNLength == 0 }, time.Second, time.Millisecond) } @@ -486,11 +514,11 @@ func TestExecutorSubscribe(t *testing.T) { ) // subscribe [1,1) - subEnv, err = lse.Subscribe(context.TODO(), 1, 1) + _, err = lse.Subscribe(context.TODO(), 1, 1) require.Error(t, err) // subscribe [2,1) - subEnv, err = lse.Subscribe(context.TODO(), 2, 1) + _, err = lse.Subscribe(context.TODO(), 2, 1) require.Error(t, err) // subscribe [1,2) @@ -533,7 +561,7 @@ func TestExecutorSubscribe(t *testing.T) { require.True(t, ok) require.Equal(t, types.GLSN(48), sr.LogEntry.GLSN) - sr, ok = <-subEnv.ScanResultC() + _, ok = <-subEnv.ScanResultC() require.False(t, ok) require.ErrorIs(t, subEnv.Err(), io.EOF) }() @@ -551,8 +579,8 @@ func TestExecutorSubscribe(t *testing.T) { return report.UncommittedLLSNLength > 0 }, time.Second, time.Millisecond) err := lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: 11, HighWatermark: 55, - PrevHighWatermark: 50, CommittedGLSNOffset: 53, CommittedGLSNLength: 1, CommittedLLSNOffset: 11, @@ -597,7 +625,7 @@ func TestExecutorReplicate(t *testing.T) { WithStorage(strg), WithStorageNodeID(backupSNID), WithLogStreamID(logStreamID), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -611,14 +639,18 @@ func TestExecutorReplicate(t *testing.T) { require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: primarySNID, - LogStreamID: logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: primarySNID, + }, + LogStreamID: logStreamID, }, { - StorageNodeID: backupSNID, - LogStreamID: logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: backupSNID, + }, + LogStreamID: logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) @@ -627,6 +659,7 @@ func TestExecutorReplicate(t *testing.T) { expectedHWM := types.GLSN(i * 5) expectedLLSN := types.LLSN(i) expectedGLSN := expectedHWM - 2 + expectedVer := types.Version(i) assert.NoError(t, lse.Replicate(context.TODO(), expectedLLSN, []byte("foo"))) @@ -637,8 +670,8 @@ func TestExecutorReplicate(t *testing.T) { }, time.Second, 10*time.Millisecond) require.NoError(t, lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: expectedVer, HighWatermark: expectedHWM, - PrevHighWatermark: expectedHWM - 5, CommittedGLSNOffset: expectedGLSN, CommittedGLSNLength: 1, CommittedLLSNOffset: expectedLLSN, @@ -647,7 +680,7 @@ func TestExecutorReplicate(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == expectedHWM && report.UncommittedLLSNOffset == expectedLLSN+1 && + return report.Version == expectedVer && report.UncommittedLLSNOffset == expectedLLSN+1 && report.UncommittedLLSNLength == 0 }, time.Second, 10*time.Millisecond) } @@ -661,6 +694,7 @@ func TestExecutorSealSuddenly(t *testing.T) { const ( numWriters = 10 + topicID = 1 ) strg, err := storage.NewStorage(storage.WithPath(t.TempDir())) @@ -668,7 +702,7 @@ func TestExecutorSealSuddenly(t *testing.T) { lse, err := New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -681,10 +715,12 @@ func TestExecutorSealSuddenly(t *testing.T) { require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) @@ -733,10 +769,10 @@ func TestExecutorSealSuddenly(t *testing.T) { var cr snpb.LogStreamCommitResult i := sort.Search(len(commitResults), func(i int) bool { - return report.GetHighWatermark() <= commitResults[i].GetPrevHighWatermark() + return report.GetVersion() <= commitResults[i].GetVersion()-1 }) if i < len(commitResults) { - assert.Equal(t, commitResults[i].GetPrevHighWatermark(), report.GetHighWatermark()) + assert.Equal(t, commitResults[i].GetVersion()-1, report.GetVersion()) for i < len(commitResults) { lse.Commit(context.TODO(), commitResults[i]) i++ @@ -751,13 +787,13 @@ func TestExecutorSealSuddenly(t *testing.T) { } cr = snpb.LogStreamCommitResult{ - HighWatermark: report.GetHighWatermark() + types.GLSN(report.GetUncommittedLLSNLength()), - PrevHighWatermark: report.GetHighWatermark(), + Version: report.GetVersion() + 1, + HighWatermark: types.GLSN(report.GetUncommittedLLSNOffset()) + types.GLSN(report.GetUncommittedLLSNLength()) - 1, CommittedGLSNOffset: types.GLSN(report.GetUncommittedLLSNOffset()), CommittedGLSNLength: report.GetUncommittedLLSNLength(), CommittedLLSNOffset: report.GetUncommittedLLSNOffset(), } - lastCommittedGLSN = cr.GetHighWatermark() + lastCommittedGLSN = cr.GetCommittedGLSNOffset() + types.GLSN(cr.GetCommittedGLSNLength()) - 1 commitResults = append(commitResults, cr) lse.Commit(context.TODO(), cr) muSealed.Unlock() @@ -771,6 +807,7 @@ func TestExecutorSealSuddenly(t *testing.T) { require.Eventually(t, func() bool { status, glsn, err := lse.Seal(context.TODO(), lastCommittedGLSN) + require.NoError(t, err) return status == varlogpb.LogStreamStatusSealed && glsn == lastCommittedGLSN }, time.Second, 10*time.Millisecond) @@ -780,6 +817,8 @@ func TestExecutorSealSuddenly(t *testing.T) { } func TestExecutorSeal(t *testing.T) { + const topicID = 1 + defer goleak.VerifyNone(t) ctrl := gomock.NewController(t) @@ -790,7 +829,8 @@ func TestExecutorSeal(t *testing.T) { lse, err := New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithTopicID(topicID), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -804,10 +844,12 @@ func TestExecutorSeal(t *testing.T) { require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) @@ -831,8 +873,8 @@ func TestExecutorSeal(t *testing.T) { }, time.Second, time.Millisecond) err = lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: 1, HighWatermark: 2, - PrevHighWatermark: 0, CommittedGLSNOffset: 1, CommittedGLSNLength: 2, CommittedLLSNOffset: 1, @@ -842,7 +884,7 @@ func TestExecutorSeal(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == 2 && report.UncommittedLLSNOffset == 3 && report.UncommittedLLSNLength == 8 + return report.Version == 1 && report.UncommittedLLSNOffset == 3 && report.UncommittedLLSNLength == 8 }, time.Second, time.Millisecond) // sealing @@ -852,8 +894,8 @@ func TestExecutorSeal(t *testing.T) { assert.Equal(t, varlogpb.LogStreamStatusSealing, status) err = lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: 2, HighWatermark: 3, - PrevHighWatermark: 2, CommittedGLSNOffset: 3, CommittedGLSNLength: 1, CommittedLLSNOffset: 3, @@ -863,7 +905,7 @@ func TestExecutorSeal(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == 3 && report.UncommittedLLSNOffset == 4 && report.UncommittedLLSNLength == 7 + return report.Version == 2 && report.UncommittedLLSNOffset == 4 && report.UncommittedLLSNLength == 7 }, time.Second, time.Millisecond) // sealed @@ -884,17 +926,19 @@ func TestExecutorSeal(t *testing.T) { assert.Equal(t, 7, numErrs) // unseal - err = lse.Unseal(context.TODO(), []snpb.Replica{ + err = lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, }) assert.NoError(t, err) report, err := lse.GetReport() require.NoError(t, err) - assert.Equal(t, types.GLSN(3), report.HighWatermark) + assert.Equal(t, types.Version(2), report.Version) assert.Equal(t, types.LLSN(4), report.UncommittedLLSNOffset) assert.Zero(t, report.UncommittedLLSNLength) @@ -915,8 +959,8 @@ func TestExecutorSeal(t *testing.T) { }, time.Second, time.Millisecond) err = lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: 3, HighWatermark: 13, - PrevHighWatermark: 3, CommittedGLSNOffset: 4, CommittedGLSNLength: 10, CommittedLLSNOffset: 4, @@ -928,7 +972,7 @@ func TestExecutorSeal(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == 13 && report.UncommittedLLSNOffset == 14 && report.UncommittedLLSNLength == 0 + return report.Version == 3 && report.UncommittedLLSNOffset == 14 && report.UncommittedLLSNLength == 0 }, time.Second, time.Millisecond) // check LLSN is sequential @@ -955,7 +999,7 @@ func TestExecutorWithRecover(t *testing.T) { lse, err := New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -969,10 +1013,12 @@ func TestExecutorWithRecover(t *testing.T) { require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) @@ -1003,6 +1049,7 @@ func TestExecutorCloseSuddenly(t *testing.T) { const ( numWriter = 100 numReader = 10 + topicID = 1 ) strg, err := storage.NewStorage( @@ -1014,7 +1061,7 @@ func TestExecutorCloseSuddenly(t *testing.T) { lse, err := New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -1023,10 +1070,12 @@ func TestExecutorCloseSuddenly(t *testing.T) { require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) @@ -1070,8 +1119,8 @@ func TestExecutorCloseSuddenly(t *testing.T) { continue } if err := lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ - HighWatermark: report.GetHighWatermark() + types.GLSN(report.GetUncommittedLLSNLength()), - PrevHighWatermark: report.GetHighWatermark(), + Version: report.GetVersion() + 1, + HighWatermark: types.GLSN(report.GetUncommittedLLSNOffset()) + types.GLSN(report.GetUncommittedLLSNLength()) - 1, CommittedGLSNOffset: types.GLSN(report.GetUncommittedLLSNOffset()), CommittedGLSNLength: report.GetUncommittedLLSNLength(), CommittedLLSNOffset: report.GetUncommittedLLSNOffset(), @@ -1123,6 +1172,8 @@ func TestExecutorCloseSuddenly(t *testing.T) { } func TestExecutorNew(t *testing.T) { + const topicID = 1 + defer goleak.VerifyNone(t) ctrl := gomock.NewController(t) @@ -1134,7 +1185,8 @@ func TestExecutorNew(t *testing.T) { require.NoError(t, err) lse, err := New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithTopicID(topicID), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) @@ -1142,10 +1194,12 @@ func TestExecutorNew(t *testing.T) { require.NoError(t, err) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) @@ -1171,8 +1225,8 @@ func TestExecutorNew(t *testing.T) { return report.UncommittedLLSNLength == 10 }, time.Second, time.Millisecond) err := lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: 1, HighWatermark: 5, - PrevHighWatermark: 0, CommittedGLSNOffset: 1, CommittedGLSNLength: 5, CommittedLLSNOffset: 1, @@ -1181,7 +1235,7 @@ func TestExecutorNew(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == 5 + return report.Version == 1 }, time.Second, time.Millisecond) }() @@ -1189,7 +1243,7 @@ func TestExecutorNew(t *testing.T) { report, err := lse.GetReport() require.NoError(t, err) - require.Equal(t, types.GLSN(5), report.HighWatermark) + require.Equal(t, types.Version(1), report.Version) require.Equal(t, types.LLSN(6), report.UncommittedLLSNOffset) require.EqualValues(t, 5, report.UncommittedLLSNLength) @@ -1201,19 +1255,19 @@ func TestExecutorNew(t *testing.T) { require.NoError(t, err) lse, err = New( WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) report, err = lse.GetReport() require.NoError(t, err) - require.Equal(t, types.GLSN(5), report.HighWatermark) + require.Equal(t, types.Version(1), report.Version) require.Equal(t, types.LLSN(6), report.UncommittedLLSNOffset) require.EqualValues(t, 5, report.UncommittedLLSNLength) err = lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: 2, HighWatermark: 8, - PrevHighWatermark: 5, CommittedGLSNOffset: 6, CommittedGLSNLength: 3, CommittedLLSNOffset: 6, @@ -1222,7 +1276,7 @@ func TestExecutorNew(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == 8 + return report.Version == 2 }, time.Second, time.Millisecond) // Seal @@ -1233,15 +1287,17 @@ func TestExecutorNew(t *testing.T) { // Check if uncommitted logs are deleted report, err = lse.GetReport() require.NoError(t, err) - require.Equal(t, types.GLSN(8), report.HighWatermark) + require.Equal(t, types.Version(2), report.Version) require.Equal(t, types.LLSN(9), report.UncommittedLLSNOffset) require.EqualValues(t, 0, report.UncommittedLLSNLength) // Unseal - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) @@ -1255,14 +1311,18 @@ func TestExecutorGetPrevCommitInfo(t *testing.T) { ctrl := gomock.NewController(t) defer ctrl.Finish() - const logStreamID = types.LogStreamID(1) + const ( + logStreamID = types.LogStreamID(1) + topicID = 1 + ) strg, err := storage.NewStorage(storage.WithPath(t.TempDir())) require.NoError(t, err) lse, err := New( WithLogStreamID(logStreamID), + WithTopicID(topicID), WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { @@ -1273,10 +1333,12 @@ func TestExecutorGetPrevCommitInfo(t *testing.T) { require.NoError(t, err) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) @@ -1299,8 +1361,8 @@ func TestExecutorGetPrevCommitInfo(t *testing.T) { }, time.Second, time.Millisecond) err := lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: 1, HighWatermark: 5, - PrevHighWatermark: 0, CommittedGLSNOffset: 1, CommittedGLSNLength: 5, CommittedLLSNOffset: 1, @@ -1310,12 +1372,12 @@ func TestExecutorGetPrevCommitInfo(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == 5 + return report.Version == 1 }, time.Second, time.Millisecond) err = lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: 2, HighWatermark: 20, - PrevHighWatermark: 5, CommittedGLSNOffset: 11, CommittedGLSNLength: 5, CommittedLLSNOffset: 6, @@ -1325,7 +1387,7 @@ func TestExecutorGetPrevCommitInfo(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == 20 + return report.Version == 2 }, time.Second, time.Millisecond) }() wg.Wait() @@ -1351,25 +1413,11 @@ func TestExecutorGetPrevCommitInfo(t *testing.T) { CommittedGLSNOffset: 1, CommittedGLSNLength: 5, HighestWrittenLLSN: 10, - HighWatermark: 5, - PrevHighWatermark: 0, + Version: 1, }, commitInfo) commitInfo, err = lse.GetPrevCommitInfo(1) require.NoError(t, err) - require.Equal(t, &snpb.LogStreamCommitInfo{ - LogStreamID: logStreamID, - Status: snpb.GetPrevCommitStatusOK, - CommittedLLSNOffset: 1, - CommittedGLSNOffset: 1, - CommittedGLSNLength: 5, - HighestWrittenLLSN: 10, - HighWatermark: 5, - PrevHighWatermark: 0, - }, commitInfo) - - commitInfo, err = lse.GetPrevCommitInfo(5) - require.NoError(t, err) require.Equal(t, &snpb.LogStreamCommitInfo{ LogStreamID: logStreamID, Status: snpb.GetPrevCommitStatusOK, @@ -1377,24 +1425,22 @@ func TestExecutorGetPrevCommitInfo(t *testing.T) { CommittedGLSNOffset: 11, CommittedGLSNLength: 5, HighestWrittenLLSN: 10, - HighWatermark: 20, - PrevHighWatermark: 5, + Version: 2, }, commitInfo) - commitInfo, err = lse.GetPrevCommitInfo(6) + commitInfo, err = lse.GetPrevCommitInfo(2) require.NoError(t, err) require.Equal(t, &snpb.LogStreamCommitInfo{ LogStreamID: logStreamID, - Status: snpb.GetPrevCommitStatusOK, - CommittedLLSNOffset: 6, - CommittedGLSNOffset: 11, - CommittedGLSNLength: 5, + Status: snpb.GetPrevCommitStatusNotFound, + CommittedLLSNOffset: types.InvalidLLSN, + CommittedGLSNOffset: types.InvalidGLSN, + CommittedGLSNLength: 0, HighestWrittenLLSN: 10, - HighWatermark: 20, - PrevHighWatermark: 5, + Version: 0, }, commitInfo) - commitInfo, err = lse.GetPrevCommitInfo(20) + commitInfo, err = lse.GetPrevCommitInfo(3) require.NoError(t, err) require.Equal(t, &snpb.LogStreamCommitInfo{ LogStreamID: logStreamID, @@ -1403,8 +1449,7 @@ func TestExecutorGetPrevCommitInfo(t *testing.T) { CommittedGLSNOffset: types.InvalidGLSN, CommittedGLSNLength: 0, HighestWrittenLLSN: 10, - HighWatermark: types.InvalidGLSN, - PrevHighWatermark: types.InvalidGLSN, + Version: 0, }, commitInfo) } @@ -1421,7 +1466,7 @@ func TestExecutorGetPrevCommitInfoWithEmptyCommitContext(t *testing.T) { lse, err := New( WithLogStreamID(logStreamID), WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { @@ -1432,16 +1477,18 @@ func TestExecutorGetPrevCommitInfoWithEmptyCommitContext(t *testing.T) { require.NoError(t, err) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse.storageNodeID, - LogStreamID: lse.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse.storageNodeID, + }, + LogStreamID: lse.logStreamID, }, })) require.NoError(t, lse.Commit(context.TODO(), snpb.LogStreamCommitResult{ + Version: 1, HighWatermark: 5, - PrevHighWatermark: 0, CommittedGLSNOffset: 1, CommittedGLSNLength: 0, CommittedLLSNOffset: 1, @@ -1450,7 +1497,7 @@ func TestExecutorGetPrevCommitInfoWithEmptyCommitContext(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == 5 + return report.Version == 1 }, time.Second, time.Millisecond) commitInfo, err := lse.GetPrevCommitInfo(0) @@ -1463,8 +1510,7 @@ func TestExecutorGetPrevCommitInfoWithEmptyCommitContext(t *testing.T) { CommittedGLSNOffset: 1, CommittedGLSNLength: 0, HighestWrittenLLSN: 0, - HighWatermark: 5, - PrevHighWatermark: 0, + Version: 1, }, commitInfo) } @@ -1479,7 +1525,7 @@ func TestExecutorUnsealWithInvalidReplicas(t *testing.T) { lse, err := New( WithLogStreamID(1), WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { @@ -1491,14 +1537,34 @@ func TestExecutorUnsealWithInvalidReplicas(t *testing.T) { require.Equal(t, varlogpb.LogStreamStatusSealed, status) require.Error(t, lse.Unseal(context.TODO(), nil)) - require.Error(t, lse.Unseal(context.TODO(), []snpb.Replica{})) - require.Error(t, lse.Unseal(context.TODO(), []snpb.Replica{ - {StorageNodeID: 1, LogStreamID: 1}, - {StorageNodeID: 2, LogStreamID: 2}, + require.Error(t, lse.Unseal(context.TODO(), []varlogpb.Replica{})) + require.Error(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, + }, + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 2, + }, + LogStreamID: 2, + }, })) - require.Error(t, lse.Unseal(context.TODO(), []snpb.Replica{ - {StorageNodeID: 1, LogStreamID: 1}, - {StorageNodeID: 1, LogStreamID: 1}, + require.Error(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, + }, + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, + }, })) } @@ -1514,7 +1580,7 @@ func TestExecutorPrimaryBackup(t *testing.T) { WithStorageNodeID(1), WithLogStreamID(1), WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { @@ -1525,8 +1591,13 @@ func TestExecutorPrimaryBackup(t *testing.T) { status, _, err := lse.Seal(context.TODO(), types.InvalidGLSN) require.NoError(t, err) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ - {StorageNodeID: 1, LogStreamID: 1}, + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, + }, })) require.True(t, lse.isPrimay()) @@ -1534,9 +1605,19 @@ func TestExecutorPrimaryBackup(t *testing.T) { status, _, err = lse.Seal(context.TODO(), types.InvalidGLSN) require.NoError(t, err) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ - {StorageNodeID: 2, LogStreamID: 1}, - {StorageNodeID: 1, LogStreamID: 1}, + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 2, + }, + LogStreamID: 1, + }, + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, + }, })) require.False(t, lse.isPrimay()) @@ -1544,11 +1625,21 @@ func TestExecutorPrimaryBackup(t *testing.T) { status, _, err = lse.Seal(context.TODO(), types.InvalidGLSN) require.NoError(t, err) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.Error(t, lse.Unseal(context.TODO(), []snpb.Replica{ - {StorageNodeID: 2, LogStreamID: 1}, + require.Error(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 2, + }, + LogStreamID: 1, + }, })) - require.Error(t, lse.Unseal(context.TODO(), []snpb.Replica{ - {StorageNodeID: 1, LogStreamID: 2}, + require.Error(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 2, + }, })) } @@ -1564,7 +1655,7 @@ func TestExecutorSyncInitNewReplica(t *testing.T) { WithStorageNodeID(1), WithLogStreamID(1), WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { @@ -1593,7 +1684,7 @@ func TestExecutorSyncInitInvalidState(t *testing.T) { WithStorageNodeID(1), WithLogStreamID(1), WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { @@ -1611,8 +1702,13 @@ func TestExecutorSyncInitInvalidState(t *testing.T) { require.Error(t, err) // RUNNING - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ - {StorageNodeID: 1, LogStreamID: 1}, + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, + }, })) // FIXME (jun): use varlogpb.LogStreamStatus! require.Equal(t, executorMutable, lse.stateBarrier.state.load()) @@ -1635,7 +1731,7 @@ func TestExecutorSyncBackupReplica(t *testing.T) { WithStorageNodeID(1), WithLogStreamID(1), WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { @@ -1646,15 +1742,26 @@ func TestExecutorSyncBackupReplica(t *testing.T) { require.NoError(t, err) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ - {StorageNodeID: 2, LogStreamID: 1}, - {StorageNodeID: 1, LogStreamID: 1}, + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 2, + }, + LogStreamID: 1, + }, + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, + }, })) require.False(t, lse.isPrimay()) for i := 1; i <= 2; i++ { llsn := types.LLSN(i) glsn := types.GLSN(i) + ver := types.Version(i) assert.NoError(t, lse.Replicate(context.Background(), llsn, []byte("foo"))) @@ -1665,8 +1772,8 @@ func TestExecutorSyncBackupReplica(t *testing.T) { }, time.Second, 10*time.Millisecond) assert.NoError(t, lse.Commit(context.Background(), snpb.LogStreamCommitResult{ + Version: ver, HighWatermark: glsn, - PrevHighWatermark: glsn - 1, CommittedGLSNOffset: glsn, CommittedGLSNLength: 1, CommittedLLSNOffset: llsn, @@ -1675,7 +1782,7 @@ func TestExecutorSyncBackupReplica(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() require.NoError(t, err) - return report.HighWatermark == glsn && report.UncommittedLLSNOffset == llsn+1 && + return report.Version == ver && report.UncommittedLLSNOffset == llsn+1 && report.UncommittedLLSNLength == 0 }, time.Second, 10*time.Millisecond) } @@ -1688,7 +1795,7 @@ func TestExecutorSyncBackupReplica(t *testing.T) { require.Eventually(t, func() bool { rpt, err := lse.GetReport() assert.NoError(t, err) - return 3 == rpt.GetUncommittedLLSNOffset() && 2 == rpt.GetUncommittedLLSNLength() + return rpt.GetUncommittedLLSNOffset() == 3 && rpt.GetUncommittedLLSNLength() == 2 }, time.Second, 10*time.Millisecond) // LLSN | GLSN @@ -1709,8 +1816,8 @@ func TestExecutorSyncBackupReplica(t *testing.T) { // Replica in learning state should not accept commit request. assert.Error(t, lse.Commit(context.Background(), snpb.LogStreamCommitResult{ + Version: 3, HighWatermark: 4, - PrevHighWatermark: 2, CommittedGLSNOffset: 3, CommittedGLSNLength: 2, CommittedLLSNOffset: 3, @@ -1723,8 +1830,7 @@ func TestExecutorSyncBackupReplica(t *testing.T) { require.NoError(t, lse.SyncReplicate(context.Background(), snpb.SyncPayload{ CommitContext: &varlogpb.CommitContext{ - HighWatermark: 4, - PrevHighWatermark: 2, + Version: 3, CommittedGLSNBegin: 3, CommittedGLSNEnd: 5, CommittedLLSNBegin: 3, @@ -1741,6 +1847,8 @@ func TestExecutorSyncBackupReplica(t *testing.T) { } func TestExecutorSyncPrimaryReplica(t *testing.T) { + const topicID = 1 + defer goleak.VerifyNone(t) ctrl := gomock.NewController(t) @@ -1751,8 +1859,9 @@ func TestExecutorSyncPrimaryReplica(t *testing.T) { lse, err := New( WithStorageNodeID(1), WithLogStreamID(1), + WithTopicID(topicID), WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { @@ -1763,8 +1872,13 @@ func TestExecutorSyncPrimaryReplica(t *testing.T) { require.NoError(t, err) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ - {StorageNodeID: 1, LogStreamID: 1}, + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, + }, })) require.True(t, lse.isPrimay()) @@ -1772,6 +1886,7 @@ func TestExecutorSyncPrimaryReplica(t *testing.T) { var wg sync.WaitGroup llsn := types.LLSN(i) glsn := types.GLSN(i) + ver := types.Version(i) wg.Add(1) go func() { @@ -1790,8 +1905,8 @@ func TestExecutorSyncPrimaryReplica(t *testing.T) { }, time.Second, 10*time.Millisecond) assert.NoError(t, lse.Commit(context.Background(), snpb.LogStreamCommitResult{ + Version: ver, HighWatermark: glsn, - PrevHighWatermark: glsn - 1, CommittedGLSNOffset: glsn, CommittedGLSNLength: 1, CommittedLLSNOffset: llsn, @@ -1799,7 +1914,7 @@ func TestExecutorSyncPrimaryReplica(t *testing.T) { assert.Eventually(t, func() bool { report, err := lse.GetReport() assert.NoError(t, err) - return report.HighWatermark == glsn && report.UncommittedLLSNOffset == llsn+1 && + return report.Version == ver && report.UncommittedLLSNOffset == llsn+1 && report.UncommittedLLSNLength == 0 }, time.Second, 10*time.Millisecond) }() @@ -1840,8 +1955,8 @@ func TestExecutorSyncPrimaryReplica(t *testing.T) { // Replica in learning state should not accept commit request. assert.Error(t, lse.Commit(context.Background(), snpb.LogStreamCommitResult{ + Version: 3, HighWatermark: 4, - PrevHighWatermark: 2, CommittedGLSNOffset: 3, CommittedGLSNLength: 2, CommittedLLSNOffset: 3, @@ -1854,8 +1969,7 @@ func TestExecutorSyncPrimaryReplica(t *testing.T) { require.NoError(t, lse.SyncReplicate(context.Background(), snpb.SyncPayload{ CommitContext: &varlogpb.CommitContext{ - HighWatermark: 4, - PrevHighWatermark: 2, + Version: 3, CommittedGLSNBegin: 3, CommittedGLSNEnd: 5, CommittedLLSNBegin: 3, @@ -1872,6 +1986,8 @@ func TestExecutorSyncPrimaryReplica(t *testing.T) { } func TestExecutorSync(t *testing.T) { + const topicID = 1 + defer goleak.VerifyNone(t) ctrl := gomock.NewController(t) @@ -1882,8 +1998,9 @@ func TestExecutorSync(t *testing.T) { lse, err := New( WithStorageNodeID(1), WithLogStreamID(1), + WithTopicID(topicID), WithStorage(strg), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { @@ -1909,16 +2026,26 @@ func TestExecutorSync(t *testing.T) { require.NoError(t, err) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse.Unseal(context.TODO(), []snpb.Replica{ - {StorageNodeID: 1, LogStreamID: 1}, - {StorageNodeID: 2, LogStreamID: 1}, + require.NoError(t, lse.Unseal(context.TODO(), []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, + }, + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 2, + }, + LogStreamID: 1, + }, })) require.True(t, lse.isPrimay()) require.Equal(t, varlogpb.LogStreamStatusRunning, lse.Metadata().Status) require.NoError(t, lse.Commit(context.Background(), snpb.LogStreamCommitResult{ + Version: 1, HighWatermark: 5, - PrevHighWatermark: 0, CommittedGLSNOffset: 1, CommittedGLSNLength: 0, CommittedLLSNOffset: 1, @@ -1927,12 +2054,12 @@ func TestExecutorSync(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() assert.NoError(t, err) - return report.HighWatermark == 5 + return report.Version == 1 }, time.Second, 10*time.Millisecond) require.NoError(t, lse.Commit(context.Background(), snpb.LogStreamCommitResult{ + Version: 2, HighWatermark: 10, - PrevHighWatermark: 5, CommittedGLSNOffset: 1, CommittedGLSNLength: 0, CommittedLLSNOffset: 1, @@ -1941,18 +2068,24 @@ func TestExecutorSync(t *testing.T) { require.Eventually(t, func() bool { report, err := lse.GetReport() assert.NoError(t, err) - return report.HighWatermark == 10 + return report.Version == 2 }, time.Second, 10*time.Millisecond) for i := 1; i <= 2; i++ { llsn := types.LLSN(i) glsn := types.GLSN(10 + i) + ver := types.Version(2 + i) var wg sync.WaitGroup wg.Add(1) go func() { defer wg.Done() - _, err := lse.Append(context.Background(), []byte("foo"), snpb.Replica{StorageNodeID: 2, LogStreamID: 1}) + _, err := lse.Append(context.Background(), []byte("foo"), varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 2, + }, + LogStreamID: 1, + }) assert.NoError(t, err) }() @@ -1966,8 +2099,8 @@ func TestExecutorSync(t *testing.T) { }, time.Second, 10*time.Millisecond) assert.NoError(t, lse.Commit(context.Background(), snpb.LogStreamCommitResult{ + Version: ver, HighWatermark: glsn, - PrevHighWatermark: glsn - 1, CommittedGLSNOffset: glsn, CommittedGLSNLength: 1, CommittedLLSNOffset: llsn, @@ -1975,7 +2108,7 @@ func TestExecutorSync(t *testing.T) { assert.Eventually(t, func() bool { report, err := lse.GetReport() assert.NoError(t, err) - return report.HighWatermark == glsn && report.UncommittedLLSNOffset == llsn+1 && + return report.Version == ver && report.UncommittedLLSNOffset == llsn+1 && report.UncommittedLLSNLength == 0 }, time.Second, 10*time.Millisecond) }() @@ -1990,7 +2123,7 @@ func TestExecutorSync(t *testing.T) { // * cc | | | 12 | 11 | 12 | 13 | 2 // * le | 2 | 12 | | | | | - done := make(chan struct{}, 0) + done := make(chan struct{}) dstClient.EXPECT().SyncInit(gomock.Any(), gomock.Any()).DoAndReturn( func(ctx context.Context, srcRnage snpb.SyncRange) (snpb.SyncRange, error) { select { @@ -2005,22 +2138,23 @@ func TestExecutorSync(t *testing.T) { step := 0 expectedLLSN := types.LLSN(1) exptectedGLSN := types.GLSN(11) + expectedVer := types.Version(3) dstClient.EXPECT().SyncReplicate(gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn( - func(ctx context.Context, replica snpb.Replica, payload snpb.SyncPayload) error { + func(ctx context.Context, replica varlogpb.Replica, payload snpb.SyncPayload) error { defer func() { step++ }() if step%2 == 0 { // cc assert.NotNil(t, payload.CommitContext) - assert.Equal(t, exptectedGLSN, payload.CommitContext.HighWatermark) - assert.Equal(t, exptectedGLSN-1, payload.CommitContext.PrevHighWatermark) + assert.Equal(t, expectedVer, payload.CommitContext.Version) } else { // le assert.NotNil(t, payload.LogEntry) assert.Equal(t, exptectedGLSN, payload.LogEntry.GLSN) assert.Equal(t, expectedLLSN, payload.LogEntry.LLSN) expectedLLSN++ exptectedGLSN++ + expectedVer++ } if expectedLLSN > types.LLSN(2) { close(done) @@ -2034,7 +2168,7 @@ func TestExecutorSync(t *testing.T) { require.Equal(t, varlogpb.LogStreamStatusSealed, status) require.Eventually(t, func() bool { - sts, err := lse.Sync(context.Background(), snpb.Replica{}) + sts, err := lse.Sync(context.Background(), varlogpb.Replica{}) assert.NoError(t, err) return sts != nil && sts.State == snpb.SyncStateComplete }, time.Second, 10*time.Millisecond) diff --git a/internal/storagenode/executor/log_io.go b/internal/storagenode/executor/log_io.go index ac5177fa4..df83875d4 100644 --- a/internal/storagenode/executor/log_io.go +++ b/internal/storagenode/executor/log_io.go @@ -11,10 +11,10 @@ import ( "github.com/kakao/varlog/internal/storagenode/storage" "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/verrors" - "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) -func (e *executor) Append(ctx context.Context, data []byte, backups ...snpb.Replica) (types.GLSN, error) { +func (e *executor) Append(ctx context.Context, data []byte, backups ...varlogpb.Replica) (types.GLSN, error) { // FIXME: e.guard() can be removed, but doing ops to storage after closing should be // handled. Mostly, trim and read can be occurred after clsoing storage. if err := e.guard(); err != nil { @@ -40,7 +40,7 @@ func (e *executor) Append(ctx context.Context, data []byte, backups ...snpb.Repl if !e.isPrimay() { return errors.Wrapf(verrors.ErrInvalid, "backup replica") } - if !snpb.EqualReplicas(e.primaryBackups[1:], wt.backups) { + if !varlogpb.EqualReplicas(e.primaryBackups[1:], wt.backups) { return errors.Wrapf(verrors.ErrInvalid, "replicas mismatch: expected=%+v, actual=%+v", e.primaryBackups[1:], wt.backups) } return nil @@ -58,19 +58,19 @@ func (e *executor) Append(ctx context.Context, data []byte, backups ...snpb.Repl return glsn, err } -func (e *executor) Read(ctx context.Context, glsn types.GLSN) (logEntry types.LogEntry, err error) { +func (e *executor) Read(ctx context.Context, glsn types.GLSN) (logEntry varlogpb.LogEntry, err error) { if glsn.Invalid() { - return types.InvalidLogEntry, errors.WithStack(verrors.ErrInvalid) + return varlogpb.InvalidLogEntry(), errors.WithStack(verrors.ErrInvalid) } if err := e.guard(); err != nil { - return types.InvalidLogEntry, err + return varlogpb.InvalidLogEntry(), err } defer e.unguard() // TODO: consider context to cancel waiting if err := e.decider.waitC(ctx, glsn); err != nil { - return types.InvalidLogEntry, err + return varlogpb.InvalidLogEntry(), err } // TODO: check trimmed @@ -79,7 +79,7 @@ func (e *executor) Read(ctx context.Context, glsn types.GLSN) (logEntry types.Lo trimGLSN := e.deferredTrim.glsn e.deferredTrim.mu.RUnlock() if glsn <= trimGLSN { - return types.InvalidLogEntry, errors.WithStack(verrors.ErrTrimmed) + return varlogpb.InvalidLogEntry(), errors.WithStack(verrors.ErrTrimmed) } // TODO: trivial optimization, is it needed? @@ -269,8 +269,8 @@ func (e *executor) Trim(_ context.Context, glsn types.GLSN) error { return trimGLSN } - globalHighWatermark, _ := e.lsc.reportCommitBase() - if glsn >= globalHighWatermark-e.deferredTrim.safetyGap { + _, highWatermark, _ := e.lsc.reportCommitBase() + if glsn >= highWatermark-e.deferredTrim.safetyGap { return errors.New("too high prefix") } diff --git a/internal/storagenode/executor/log_stream_context.go b/internal/storagenode/executor/log_stream_context.go index 33096e50c..e821b3e12 100644 --- a/internal/storagenode/executor/log_stream_context.go +++ b/internal/storagenode/executor/log_stream_context.go @@ -17,7 +17,8 @@ import ( // against to the atomic.Load. For these reasons, the shared mutex is used. type reportCommitBase struct { mu sync.RWMutex - globalHighWatermark types.GLSN + commitVersion types.Version + highWatermark types.GLSN uncommittedLLSNBegin types.LLSN } @@ -40,7 +41,7 @@ type logStreamContext struct { func newLogStreamContext() *logStreamContext { lsc := &logStreamContext{} - lsc.storeReportCommitBase(types.InvalidGLSN, types.MinLLSN) + lsc.storeReportCommitBase(types.InvalidVersion, types.MinGLSN, types.MinLLSN) lsc.uncommittedLLSNEnd.Store(types.MinLLSN) @@ -54,17 +55,19 @@ func newLogStreamContext() *logStreamContext { return lsc } -func (lsc *logStreamContext) reportCommitBase() (globalHighWatermark types.GLSN, uncommittedLLSNBegin types.LLSN) { +func (lsc *logStreamContext) reportCommitBase() (commitVersion types.Version, highWatermark types.GLSN, uncommittedLLSNBegin types.LLSN) { lsc.base.mu.RLock() - globalHighWatermark = lsc.base.globalHighWatermark + commitVersion = lsc.base.commitVersion + highWatermark = lsc.base.highWatermark uncommittedLLSNBegin = lsc.base.uncommittedLLSNBegin lsc.base.mu.RUnlock() return } -func (lsc *logStreamContext) storeReportCommitBase(globalHighWatermark types.GLSN, uncommittedLLSNBegin types.LLSN) { +func (lsc *logStreamContext) storeReportCommitBase(commitVersion types.Version, highWatermark types.GLSN, uncommittedLLSNBegin types.LLSN) { lsc.base.mu.Lock() - lsc.base.globalHighWatermark = globalHighWatermark + lsc.base.commitVersion = commitVersion + lsc.base.highWatermark = highWatermark lsc.base.uncommittedLLSNBegin = uncommittedLLSNBegin lsc.base.mu.Unlock() } @@ -90,8 +93,8 @@ func newDecidableCondition(lsc *logStreamContext) *decidableCondition { // If true, the LSE must know the log entry is in this LSE or not. // If false, the LSE can't guarantee whether the log entry is in this LSE or not. func (dc *decidableCondition) decidable(glsn types.GLSN) bool { - globalHighWatermark, _ := dc.lsc.reportCommitBase() - return glsn <= globalHighWatermark + _, highWatermark, _ := dc.lsc.reportCommitBase() + return glsn <= highWatermark } // NOTE: Canceling ctx is not a guarantee that this waitC is wakeup immediately. diff --git a/internal/storagenode/executor/log_stream_context_test.go b/internal/storagenode/executor/log_stream_context_test.go index ebfcba1c0..252310ed9 100644 --- a/internal/storagenode/executor/log_stream_context_test.go +++ b/internal/storagenode/executor/log_stream_context_test.go @@ -11,7 +11,8 @@ import ( func TestLogStreamContext(t *testing.T) { lsc := newLogStreamContext() - globalHighWatermark, uncommittedLLSNBegin := lsc.reportCommitBase() - require.Equal(t, types.InvalidGLSN, globalHighWatermark) + version, highWatermark, uncommittedLLSNBegin := lsc.reportCommitBase() + require.Equal(t, types.InvalidVersion, version) + require.Equal(t, types.MinGLSN, highWatermark) require.Equal(t, types.MinLLSN, uncommittedLLSNBegin) } diff --git a/internal/storagenode/executor/metadata.go b/internal/storagenode/executor/metadata.go index 3f4438537..f977e58c5 100644 --- a/internal/storagenode/executor/metadata.go +++ b/internal/storagenode/executor/metadata.go @@ -10,7 +10,7 @@ import ( type MetadataProvider interface { Metadata() varlogpb.LogStreamMetadataDescriptor Path() string - GetPrevCommitInfo(hwm types.GLSN) (*snpb.LogStreamCommitInfo, error) + GetPrevCommitInfo(ver types.Version) (*snpb.LogStreamCommitInfo, error) } func (e *executor) Path() string { @@ -27,9 +27,12 @@ func (e *executor) Metadata() varlogpb.LogStreamMetadataDescriptor { case executorSealed: status = varlogpb.LogStreamStatusSealed } + version, _, _ := e.lsc.reportCommitBase() return varlogpb.LogStreamMetadataDescriptor{ StorageNodeID: e.storageNodeID, LogStreamID: e.logStreamID, + TopicID: e.topicID, + Version: version, HighWatermark: e.lsc.localGLSN.localHighWatermark.Load(), Status: status, Path: e.storage.Path(), @@ -38,13 +41,13 @@ func (e *executor) Metadata() varlogpb.LogStreamMetadataDescriptor { } } -func (e *executor) GetPrevCommitInfo(prevHWM types.GLSN) (*snpb.LogStreamCommitInfo, error) { +func (e *executor) GetPrevCommitInfo(ver types.Version) (*snpb.LogStreamCommitInfo, error) { info := &snpb.LogStreamCommitInfo{ LogStreamID: e.logStreamID, HighestWrittenLLSN: e.lsc.uncommittedLLSNEnd.Load() - 1, } - cc, err := e.storage.ReadFloorCommitContext(prevHWM) + cc, err := e.storage.ReadFloorCommitContext(ver) switch err { case storage.ErrNotFoundCommitContext: info.Status = snpb.GetPrevCommitStatusNotFound @@ -59,7 +62,6 @@ func (e *executor) GetPrevCommitInfo(prevHWM types.GLSN) (*snpb.LogStreamCommitI info.CommittedLLSNOffset = cc.CommittedLLSNBegin info.CommittedGLSNOffset = cc.CommittedGLSNBegin info.CommittedGLSNLength = uint64(cc.CommittedGLSNEnd - cc.CommittedGLSNBegin) - info.HighWatermark = cc.HighWatermark - info.PrevHighWatermark = cc.PrevHighWatermark + info.Version = cc.Version return info, nil } diff --git a/internal/storagenode/executor/replicate.go b/internal/storagenode/executor/replicate.go index e515642fd..2cb3432e2 100644 --- a/internal/storagenode/executor/replicate.go +++ b/internal/storagenode/executor/replicate.go @@ -59,7 +59,7 @@ func (e *executor) SyncInit(ctx context.Context, srcRange snpb.SyncRange) (syncR // TODO: check range of sync // if the executor already has last position committed, this RPC should be rejected. - _, uncommittedLLSNBegin := e.lsc.reportCommitBase() + _, _, uncommittedLLSNBegin := e.lsc.reportCommitBase() lastCommittedLLSN := uncommittedLLSNBegin - 1 if lastCommittedLLSN > srcRange.LastLLSN { panic("oops") @@ -163,8 +163,8 @@ func (e *executor) SyncReplicate(ctx context.Context, payload snpb.SyncPayload) e.lsc.uncommittedLLSNEnd.Add(numCommits) _, err = e.committer.commitDirectly(storage.CommitContext{ + Version: e.srs.cc.Version, HighWatermark: e.srs.cc.HighWatermark, - PrevHighWatermark: e.srs.cc.PrevHighWatermark, CommittedGLSNBegin: e.srs.cc.CommittedGLSNBegin, CommittedGLSNEnd: e.srs.cc.CommittedGLSNEnd, CommittedLLSNBegin: e.srs.cc.CommittedLLSNBegin, @@ -194,7 +194,7 @@ func (e *executor) SyncReplicate(ctx context.Context, payload snpb.SyncPayload) // TODO: Add unit tests for various situations. // NOTE: // - What if it has no entries to sync because they are trimmed? -func (e *executor) Sync(ctx context.Context, replica snpb.Replica) (*snpb.SyncStatus, error) { +func (e *executor) Sync(ctx context.Context, replica varlogpb.Replica) (*snpb.SyncStatus, error) { if err := e.guard(); err != nil { return nil, err } @@ -207,7 +207,7 @@ func (e *executor) Sync(ctx context.Context, replica snpb.Replica) (*snpb.SyncSt e.syncTrackers.mu.Lock() defer e.syncTrackers.mu.Unlock() - if state, ok := e.syncTrackers.trk.get(replica.GetStorageNodeID()); ok { + if state, ok := e.syncTrackers.trk.get(replica.StorageNode.StorageNodeID); ok { return state.ToSyncStatus(), nil } @@ -280,8 +280,8 @@ func (e *executor) sync(ctx context.Context, state *syncState) (err error) { // send cc err = client.SyncReplicate(ctx, state.dst, snpb.SyncPayload{ CommitContext: &varlogpb.CommitContext{ + Version: cc.Version, HighWatermark: cc.HighWatermark, - PrevHighWatermark: cc.PrevHighWatermark, CommittedGLSNBegin: cc.CommittedGLSNBegin, CommittedGLSNEnd: cc.CommittedGLSNEnd, CommittedLLSNBegin: cc.CommittedLLSNBegin, diff --git a/internal/storagenode/executor/replicate_task.go b/internal/storagenode/executor/replicate_task.go index a38e003ec..96c7638d4 100644 --- a/internal/storagenode/executor/replicate_task.go +++ b/internal/storagenode/executor/replicate_task.go @@ -6,7 +6,7 @@ import ( "time" "github.com/kakao/varlog/pkg/types" - "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) var replicateTaskPool = sync.Pool{ @@ -18,7 +18,7 @@ var replicateTaskPool = sync.Pool{ type replicateTask struct { llsn types.LLSN data []byte - replicas []snpb.Replica + replicas []varlogpb.Replica createdTime time.Time poppedTime time.Time @@ -30,13 +30,13 @@ func newReplicateTask() *replicateTask { return rt } -func (t *replicateTask) release() { - t.llsn = types.InvalidLLSN - t.data = nil - t.replicas = nil - t.createdTime = time.Time{} - t.poppedTime = time.Time{} - replicateTaskPool.Put(t) +func (rt *replicateTask) release() { + rt.llsn = types.InvalidLLSN + rt.data = nil + rt.replicas = nil + rt.createdTime = time.Time{} + rt.poppedTime = time.Time{} + replicateTaskPool.Put(rt) } func (rt *replicateTask) annotate(ctx context.Context, m MeasurableExecutor) { diff --git a/internal/storagenode/executor/replicator.go b/internal/storagenode/executor/replicator.go index 178d40f9c..076f87b60 100644 --- a/internal/storagenode/executor/replicator.go +++ b/internal/storagenode/executor/replicator.go @@ -15,7 +15,7 @@ import ( "github.com/kakao/varlog/internal/storagenode/replication" "github.com/kakao/varlog/pkg/util/runner" "github.com/kakao/varlog/pkg/verrors" - "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) type replicatorConfig struct { @@ -48,7 +48,7 @@ type replicator interface { // connector has no clients. resetConnector() error - clientOf(ctx context.Context, replica snpb.Replica) (replication.Client, error) + clientOf(ctx context.Context, replica varlogpb.Replica) (replication.Client, error) } type replicatorImpl struct { @@ -225,17 +225,6 @@ func (r *replicatorImpl) generateReplicateCallback(ctx context.Context, startTim } } -func (r *replicatorImpl) replicateCallback(err error) { - // NOTE: `inflight` should be decreased when the callback is called since all responses - // // either success and failure should be come before unsealing. - defer func() { - atomic.AddInt64(&r.inflight, -1) - }() - if err != nil { - r.state.setSealing() - } -} - func (r *replicatorImpl) stop() { r.running.mu.Lock() r.running.val = false @@ -294,6 +283,6 @@ func (r *replicatorImpl) resetConnector() error { } // TODO (jun): Is this good method? If not, replicator can have interface for sync. -func (r *replicatorImpl) clientOf(ctx context.Context, replica snpb.Replica) (replication.Client, error) { +func (r *replicatorImpl) clientOf(ctx context.Context, replica varlogpb.Replica) (replication.Client, error) { return r.connector.Get(ctx, replica) } diff --git a/internal/storagenode/executor/replicator_mock.go b/internal/storagenode/executor/replicator_mock.go index e0e96b166..3dd3e4fff 100644 --- a/internal/storagenode/executor/replicator_mock.go +++ b/internal/storagenode/executor/replicator_mock.go @@ -11,7 +11,7 @@ import ( gomock "github.com/golang/mock/gomock" replication "github.com/kakao/varlog/internal/storagenode/replication" - snpb "github.com/kakao/varlog/proto/snpb" + varlogpb "github.com/kakao/varlog/proto/varlogpb" ) // MockReplicator is a mock of replicator interface. @@ -38,7 +38,7 @@ func (m *MockReplicator) EXPECT() *MockReplicatorMockRecorder { } // clientOf mocks base method. -func (m *MockReplicator) clientOf(ctx context.Context, replica snpb.Replica) (replication.Client, error) { +func (m *MockReplicator) clientOf(ctx context.Context, replica varlogpb.Replica) (replication.Client, error) { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "clientOf", ctx, replica) ret0, _ := ret[0].(replication.Client) diff --git a/internal/storagenode/executor/replicator_test.go b/internal/storagenode/executor/replicator_test.go index 8e0f5d4f5..d8f3bdccc 100644 --- a/internal/storagenode/executor/replicator_test.go +++ b/internal/storagenode/executor/replicator_test.go @@ -18,12 +18,11 @@ import ( "github.com/kakao/varlog/internal/storagenode/id" "github.com/kakao/varlog/internal/storagenode/replication" - "github.com/kakao/varlog/internal/storagenode/telemetry" "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/util/netutil" "github.com/kakao/varlog/pkg/util/syncutil/atomicutil" "github.com/kakao/varlog/pkg/verrors" - "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) func TestReplicationProcessorFailure(t *testing.T) { @@ -94,11 +93,13 @@ func TestReplicationProcessorNoClient(t *testing.T) { rtb := newReplicateTask() rtb.llsn = types.LLSN(1) - rtb.replicas = []snpb.Replica{ + rtb.replicas = []varlogpb.Replica{ { - StorageNodeID: 1, - LogStreamID: 1, - Address: "localhost:12345", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + Address: "localhost:12345", + }, + LogStreamID: 1, }, } @@ -187,11 +188,13 @@ func TestReplicationProcessor(t *testing.T) { for i := 1; i <= numLogs; i++ { rt := newReplicateTask() rt.llsn = types.LLSN(i) - rt.replicas = []snpb.Replica{ + rt.replicas = []varlogpb.Replica{ { - StorageNodeID: 1, - LogStreamID: 1, - Address: "localhost:12345", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + Address: "localhost:12345", + }, + LogStreamID: 1, }, } if err := tc.rp.send(context.TODO(), rt); err != nil { @@ -235,8 +238,8 @@ func TestReplicatorResetConnector(t *testing.T) { // mock replicator: backup's executor replicator := replication.NewMockReplicator(ctrl) replicatorGetter := replication.NewMockGetter(ctrl) - replicatorGetter.EXPECT().Replicator(gomock.Any()).DoAndReturn( - func(lsid types.LogStreamID) (replication.Replicator, bool) { + replicatorGetter.EXPECT().Replicator(gomock.Any(), gomock.Any()).DoAndReturn( + func(types.TopicID, types.LogStreamID) (replication.Replicator, bool) { return replicator, true }, ).AnyTimes() @@ -257,10 +260,8 @@ func TestReplicatorResetConnector(t *testing.T) { } blockedLogs = append(blockedLogs, llsn) mu.Unlock() - select { - case <-ctx.Done(): - return ctx.Err() - } + <-ctx.Done() + return ctx.Err() }, ).AnyTimes() @@ -269,7 +270,7 @@ func TestReplicatorResetConnector(t *testing.T) { server := replication.NewServer( replication.WithStorageNodeIDGetter(snidGetter), replication.WithLogReplicatorGetter(replicatorGetter), - replication.WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + replication.WithMeasurable(NewTestMeasurable(ctrl)), ) grpcServer := grpc.NewServer() @@ -304,10 +305,12 @@ func TestReplicatorResetConnector(t *testing.T) { for llsn := types.MinLLSN; llsn <= maxReplicatedLLSN+1; llsn++ { rt := newReplicateTask() rt.llsn = llsn - rt.replicas = []snpb.Replica{{ - StorageNodeID: backupSNID, - LogStreamID: logStreamID, - Address: addrs[0], + rt.replicas = []varlogpb.Replica{{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: backupSNID, + Address: addrs[0], + }, + LogStreamID: logStreamID, }} require.NoError(t, rp.send(context.Background(), rt)) } @@ -324,7 +327,7 @@ func TestReplicatorResetConnector(t *testing.T) { // (replicatedLogs) logs were replicated, but (maxTestLLSN-maxReplicatedLLSN) logs are still // waiting for replicated completely. require.Eventually(t, func() bool { - return 1 == atomic.LoadInt64(&rp.inflight) + return atomic.LoadInt64(&rp.inflight) == 1 }, time.Second, 10*time.Millisecond) // Resetting connector cancels inflight replications. @@ -336,5 +339,4 @@ func TestReplicatorResetConnector(t *testing.T) { require.NoError(t, server.Close()) grpcServer.GracefulStop() wg.Wait() - } diff --git a/internal/storagenode/executor/reportcommit.go b/internal/storagenode/executor/reportcommit.go index 0ef80794b..270709ec9 100644 --- a/internal/storagenode/executor/reportcommit.go +++ b/internal/storagenode/executor/reportcommit.go @@ -16,11 +16,12 @@ func (e *executor) GetReport() (snpb.LogStreamUncommitReport, error) { } defer e.unguard() - globalHighWatermark, uncommittedLLSNBegin := e.lsc.reportCommitBase() + version, highWatermark, uncommittedLLSNBegin := e.lsc.reportCommitBase() uncommittedLLSNEnd := e.lsc.uncommittedLLSNEnd.Load() return snpb.LogStreamUncommitReport{ LogStreamID: e.logStreamID, - HighWatermark: globalHighWatermark, + Version: version, + HighWatermark: highWatermark, UncommittedLLSNOffset: uncommittedLLSNBegin, UncommittedLLSNLength: uint64(uncommittedLLSNEnd - uncommittedLLSNBegin), }, nil @@ -33,16 +34,16 @@ func (e *executor) Commit(ctx context.Context, commitResult snpb.LogStreamCommit defer e.unguard() // TODO: check validate logic again - globalHighWatermark, _ := e.lsc.reportCommitBase() - if commitResult.HighWatermark <= globalHighWatermark { + version, _, _ := e.lsc.reportCommitBase() + if commitResult.Version <= version { // too old // return errors.New("too old commit result") return errOldCommit } ct := newCommitTask() + ct.version = commitResult.Version ct.highWatermark = commitResult.HighWatermark - ct.prevHighWatermark = commitResult.PrevHighWatermark ct.committedGLSNBegin = commitResult.CommittedGLSNOffset ct.committedGLSNEnd = commitResult.CommittedGLSNOffset + types.GLSN(commitResult.CommittedGLSNLength) ct.committedLLSNBegin = commitResult.CommittedLLSNOffset diff --git a/internal/storagenode/executor/reportcommit_test.go b/internal/storagenode/executor/reportcommit_test.go index c870f1eb5..484c0d35a 100644 --- a/internal/storagenode/executor/reportcommit_test.go +++ b/internal/storagenode/executor/reportcommit_test.go @@ -13,7 +13,6 @@ import ( "github.com/kakao/varlog/internal/storagenode/id" "github.com/kakao/varlog/internal/storagenode/reportcommitter" "github.com/kakao/varlog/internal/storagenode/storage" - "github.com/kakao/varlog/internal/storagenode/telemetry" "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/proto/snpb" "github.com/kakao/varlog/proto/varlogpb" @@ -21,9 +20,10 @@ import ( func TestLogStreamReporter(t *testing.T) { const ( - snid = types.StorageNodeID(1) - lsid1 = types.LogStreamID(1) - lsid2 = types.LogStreamID(2) + snid = types.StorageNodeID(1) + lsid1 = types.LogStreamID(1) + lsid2 = types.LogStreamID(2) + topicID = 1 ) defer goleak.VerifyNone(t) @@ -37,7 +37,7 @@ func TestLogStreamReporter(t *testing.T) { WithStorageNodeID(snid), WithLogStreamID(lsid1), WithStorage(strg1), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { require.NoError(t, lse1.Close()) }() @@ -46,10 +46,12 @@ func TestLogStreamReporter(t *testing.T) { require.NoError(t, err) require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse1.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse1.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse1.storageNodeID, - LogStreamID: lse1.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse1.storageNodeID, + }, + LogStreamID: lse1.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse1.Metadata().Status) @@ -60,7 +62,7 @@ func TestLogStreamReporter(t *testing.T) { WithStorageNodeID(snid), WithLogStreamID(lsid2), WithStorage(strg2), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) require.NoError(t, err) defer func() { require.NoError(t, lse2.Close()) }() @@ -69,17 +71,19 @@ func TestLogStreamReporter(t *testing.T) { require.NoError(t, err) require.Equal(t, types.InvalidGLSN, sealedGLSN) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, lse2.Unseal(context.TODO(), []snpb.Replica{ + require.NoError(t, lse2.Unseal(context.TODO(), []varlogpb.Replica{ { - StorageNodeID: lse2.storageNodeID, - LogStreamID: lse2.logStreamID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: lse2.storageNodeID, + }, + LogStreamID: lse2.logStreamID, }, })) require.Equal(t, varlogpb.LogStreamStatusRunning, lse2.Metadata().Status) rcg := reportcommitter.NewMockGetter(ctrl) - rcg.EXPECT().ReportCommitter(gomock.Any()).DoAndReturn( - func(lsid types.LogStreamID) (reportcommitter.ReportCommitter, bool) { + rcg.EXPECT().ReportCommitter(gomock.Any(), gomock.Any()).DoAndReturn( + func(_ types.TopicID, lsid types.LogStreamID) (reportcommitter.ReportCommitter, bool) { switch lsid { case lsid1: return lse1, true @@ -126,11 +130,11 @@ func TestLogStreamReporter(t *testing.T) { for _, report := range reports { switch report.GetLogStreamID() { case lsid1: - require.Equal(t, types.GLSN(0), report.GetHighWatermark()) + require.Equal(t, types.Version(0), report.GetVersion()) require.Equal(t, types.LLSN(1), report.GetUncommittedLLSNOffset()) require.EqualValues(t, 0, report.GetUncommittedLLSNLength()) case lsid2: - require.Equal(t, types.GLSN(0), report.GetHighWatermark()) + require.Equal(t, types.Version(0), report.GetVersion()) require.Equal(t, types.LLSN(1), report.GetUncommittedLLSNOffset()) require.EqualValues(t, 0, report.GetUncommittedLLSNLength()) } @@ -167,8 +171,7 @@ func TestLogStreamReporter(t *testing.T) { require.NoError(t, lsr.Commit(context.TODO(), snpb.LogStreamCommitResult{ LogStreamID: lsid1, - HighWatermark: 2, - PrevHighWatermark: 0, + Version: 1, CommittedGLSNOffset: 1, CommittedGLSNLength: 1, CommittedLLSNOffset: 1, @@ -176,8 +179,7 @@ func TestLogStreamReporter(t *testing.T) { require.NoError(t, lsr.Commit(context.TODO(), snpb.LogStreamCommitResult{ LogStreamID: lsid2, - HighWatermark: 2, - PrevHighWatermark: 0, + Version: 1, CommittedGLSNOffset: 2, CommittedGLSNLength: 1, CommittedLLSNOffset: 1, @@ -201,8 +203,7 @@ func TestLogStreamReporter(t *testing.T) { require.Error(t, lsr.Commit(context.TODO(), snpb.LogStreamCommitResult{ LogStreamID: lsid1, - HighWatermark: 4, - PrevHighWatermark: 2, + Version: 2, CommittedGLSNOffset: 3, CommittedGLSNLength: 1, CommittedLLSNOffset: 2, @@ -211,8 +212,7 @@ func TestLogStreamReporter(t *testing.T) { require.Error(t, lsr.Commit(context.TODO(), snpb.LogStreamCommitResult{ LogStreamID: lsid2, - HighWatermark: 4, - PrevHighWatermark: 2, + Version: 2, CommittedGLSNOffset: 4, CommittedGLSNLength: 1, CommittedLLSNOffset: 2, diff --git a/internal/storagenode/executor/seal.go b/internal/storagenode/executor/seal.go index ae33cdcae..0f0e59927 100644 --- a/internal/storagenode/executor/seal.go +++ b/internal/storagenode/executor/seal.go @@ -7,13 +7,12 @@ import ( "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/verrors" - "github.com/kakao/varlog/proto/snpb" "github.com/kakao/varlog/proto/varlogpb" ) type SealUnsealer interface { Seal(ctx context.Context, lastCommittedGLSN types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) - Unseal(ctx context.Context, replicas []snpb.Replica) error + Unseal(ctx context.Context, replicas []varlogpb.Replica) error } func (e *executor) Seal(ctx context.Context, lastCommittedGLSN types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { @@ -81,10 +80,9 @@ func (e *executor) sealInternal(lastCommittedGLSN types.GLSN) (varlogpb.LogStrea return varlogpb.LogStreamStatusSealed, lastCommittedGLSN, nil } -func (e *executor) Unseal(_ context.Context, replicas []snpb.Replica) error { - if err := snpb.ValidReplicas(replicas); err != nil { +func (e *executor) Unseal(_ context.Context, replicas []varlogpb.Replica) error { + if err := varlogpb.ValidReplicas(replicas); err != nil { return err - } if err := e.guard(); err != nil { @@ -97,7 +95,7 @@ func (e *executor) Unseal(_ context.Context, replicas []snpb.Replica) error { found := false for _, replica := range replicas { - if replica.StorageNodeID == e.storageNodeID && replica.LogStreamID == e.logStreamID { + if replica.StorageNode.StorageNodeID == e.storageNodeID && replica.LogStreamID == e.logStreamID { found = true break } diff --git a/internal/storagenode/executor/sync.go b/internal/storagenode/executor/sync.go index 0f78db16f..3e2c51869 100644 --- a/internal/storagenode/executor/sync.go +++ b/internal/storagenode/executor/sync.go @@ -15,22 +15,22 @@ import ( // progressing. type syncState struct { cancel context.CancelFunc - dst snpb.Replica - first types.LogEntry - last types.LogEntry + dst varlogpb.Replica + first varlogpb.LogEntry + last varlogpb.LogEntry mu sync.Mutex - curr types.LogEntry + curr varlogpb.LogEntry err error } -func newSyncState(cancel context.CancelFunc, dstReplica snpb.Replica, first, last types.LogEntry) *syncState { +func newSyncState(cancel context.CancelFunc, dstReplica varlogpb.Replica, first, last varlogpb.LogEntry) *syncState { return &syncState{ cancel: cancel, dst: dstReplica, first: first, last: last, - curr: types.InvalidLogEntry, + curr: varlogpb.InvalidLogEntry(), } } @@ -74,7 +74,7 @@ func (st *syncTracker) get(snID types.StorageNodeID) (*syncState, bool) { // not multi goroutine-safe, thus it must be called within mutex. func (st *syncTracker) run(ctx context.Context, state *syncState, locker sync.Locker) { state.mu.Lock() - replicaSNID := state.dst.GetStorageNodeID() + replicaSNID := state.dst.StorageNode.StorageNodeID state.mu.Unlock() st.tracker[replicaSNID] = state diff --git a/internal/storagenode/executor/testing_test.go b/internal/storagenode/executor/testing_test.go index 8e59f071f..3a1907131 100644 --- a/internal/storagenode/executor/testing_test.go +++ b/internal/storagenode/executor/testing_test.go @@ -14,3 +14,10 @@ func NewTestMeasurableExecutor(ctrl *gomock.Controller, snid types.StorageNodeID ret.EXPECT().Stub().Return(telemetry.NewNopTelmetryStub()).AnyTimes() return ret } + +func NewTestMeasurable(ctrl *gomock.Controller) *telemetry.MockMeasurable { + m := telemetry.NewMockMeasurable(ctrl) + nop := telemetry.NewNopTelmetryStub() + m.EXPECT().Stub().Return(nop).AnyTimes() + return m +} diff --git a/internal/storagenode/executor/write_task.go b/internal/storagenode/executor/write_task.go index 6025cf8ad..dd6b55bf2 100644 --- a/internal/storagenode/executor/write_task.go +++ b/internal/storagenode/executor/write_task.go @@ -6,7 +6,7 @@ import ( "time" "github.com/kakao/varlog/pkg/types" - "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) var writeTaskPool = sync.Pool{ @@ -24,7 +24,7 @@ type writeTask struct { // backups is a list of backups of the log stream. The first element is the primary // replica, and the others are backup backups. - backups []snpb.Replica + backups []varlogpb.Replica // NOTE: primary can be removed by using isPrimary method of executor. primary bool @@ -50,13 +50,13 @@ func newWriteTaskInternal(twg *taskWaitGroup, data []byte) *writeTask { return wt } -func newPrimaryWriteTask(twg *taskWaitGroup, data []byte, backups []snpb.Replica) *writeTask { +func newPrimaryWriteTask(twg *taskWaitGroup, data []byte, backups []varlogpb.Replica) *writeTask { wt := newWriteTaskInternal(twg, data) wt.primary = true wt.backups = backups return wt - } + func newBackupWriteTask(twg *taskWaitGroup, data []byte, llsn types.LLSN) *writeTask { if llsn.Invalid() { panic("invalid LLSN") diff --git a/internal/storagenode/executor/writer.go b/internal/storagenode/executor/writer.go index 1406860e7..369dc400c 100644 --- a/internal/storagenode/executor/writer.go +++ b/internal/storagenode/executor/writer.go @@ -60,21 +60,6 @@ type WriterOption interface { applyWriter(*writerConfig) } -/* -type writerState int32 - -const ( - writerStateInit writerState = iota - writerStateRun - writerStateClosed -) -*/ -const ( - writerStateInit = 0 - writerStateRun = 1 - writerStateClosed = 2 -) - type writer interface { send(ctx context.Context, tb *writeTask) error stop() @@ -361,7 +346,7 @@ func (w *writerImpl) fanout(ctx context.Context, oldLLSN, newLLSN types.LLSN) er // Assumes that the below expression is executed after commitWaitTask is passed to // committer and committer calls the Done of twg very quickly. Since the Done of twg // is called, the Append RPC tries to release the writeTask. At that times, if the - // conditions are executed, the race condition will happend. + // conditions are executed, the race condition will happened. for _, wt := range w.writeTaskBatch { // Replication tasks succeed after writing the logs into the // storage. Thus here notifies the Replicate RPC handler to return @@ -385,7 +370,6 @@ func (w *writerImpl) fanout(ctx context.Context, oldLLSN, newLLSN types.LLSN) er w.lsc.uncommittedLLSNEnd.Load(), oldLLSN, newLLSN, )) } - }() return w.sendToCommitter(ctx) }) diff --git a/internal/storagenode/executor/writer_test.go b/internal/storagenode/executor/writer_test.go index 195063b50..96aacddc5 100644 --- a/internal/storagenode/executor/writer_test.go +++ b/internal/storagenode/executor/writer_test.go @@ -555,5 +555,4 @@ func TestWriterVarlog444(t *testing.T) { require.Eventually(t, func() bool { return lsc.uncommittedLLSNEnd.Load() == types.LLSN(4) }, time.Second, 10*time.Millisecond) - } diff --git a/internal/storagenode/executorsmap/executors_map.go b/internal/storagenode/executorsmap/executors_map.go index 6f5bf132f..aa1664e5a 100644 --- a/internal/storagenode/executorsmap/executors_map.go +++ b/internal/storagenode/executorsmap/executors_map.go @@ -12,28 +12,29 @@ import ( type executorSlot struct { extor executor.Executor - id types.LogStreamID + id logStreamTopicID } var nilSlot = executorSlot{} type ExecutorsMap struct { slots []executorSlot - hash map[types.LogStreamID]executorSlot + hash map[logStreamTopicID]executorSlot mu sync.RWMutex } func New(initSize int) *ExecutorsMap { return &ExecutorsMap{ slots: make([]executorSlot, 0, initSize), - hash: make(map[types.LogStreamID]executorSlot, initSize), + hash: make(map[logStreamTopicID]executorSlot, initSize), } } // Load returns the executor stored in the map for a lsid, or nil if the executor is not present. -func (m *ExecutorsMap) Load(lsid types.LogStreamID) (extor executor.Executor, loaded bool) { +func (m *ExecutorsMap) Load(tpid types.TopicID, lsid types.LogStreamID) (extor executor.Executor, loaded bool) { + id := packLogStreamTopicID(lsid, tpid) m.mu.RLock() - slot, ok := m.fastLookup(lsid) + slot, ok := m.fastLookup(id) m.mu.RUnlock() if !ok { return nil, false @@ -43,15 +44,16 @@ func (m *ExecutorsMap) Load(lsid types.LogStreamID) (extor executor.Executor, lo // Store stores the executor for a lsid. Setting nil as an executor is also possible. However, // overwriting a non-nil executor is not possible. -func (m *ExecutorsMap) Store(lsid types.LogStreamID, extor executor.Executor) (err error) { +func (m *ExecutorsMap) Store(tpid types.TopicID, lsid types.LogStreamID, extor executor.Executor) (err error) { + id := packLogStreamTopicID(lsid, tpid) m.mu.Lock() - slot, idx, ok := m.lookup(lsid) + slot, idx, ok := m.lookup(id) if ok { if slot.extor == nil { m.slots[idx].extor = extor slot.extor = extor - m.hash[lsid] = slot + m.hash[id] = slot } else { // Overwriting the executor is not allowed. err = errors.Errorf("try to overwrite executor: %d", lsid) @@ -59,7 +61,7 @@ func (m *ExecutorsMap) Store(lsid types.LogStreamID, extor executor.Executor) (e m.mu.Unlock() return err } - m.store(lsid, extor) + m.store(id, extor) m.mu.Unlock() return err @@ -67,15 +69,16 @@ func (m *ExecutorsMap) Store(lsid types.LogStreamID, extor executor.Executor) (e // LoadOrStore returns the existing executor for the lsid if present. If not, it stores the // executor. The loaded result is true if the executor is loaded, otherwise, false. -func (m *ExecutorsMap) LoadOrStore(lsid types.LogStreamID, extor executor.Executor) (actual executor.Executor, loaded bool) { +func (m *ExecutorsMap) LoadOrStore(tpid types.TopicID, lsid types.LogStreamID, extor executor.Executor) (actual executor.Executor, loaded bool) { + id := packLogStreamTopicID(lsid, tpid) m.mu.Lock() - slot, ok := m.fastLookup(lsid) + slot, ok := m.fastLookup(id) if ok { m.mu.Unlock() return slot.extor, true } - m.store(lsid, extor) + m.store(id, extor) m.mu.Unlock() return extor, false @@ -83,16 +86,17 @@ func (m *ExecutorsMap) LoadOrStore(lsid types.LogStreamID, extor executor.Execut // LoadAndDelete deletes the executor for a lsid, and returns the old executor. The loaded result is // true if the executor is loaded, otherwise, false. -func (m *ExecutorsMap) LoadAndDelete(lsid types.LogStreamID) (executor.Executor, bool) { +func (m *ExecutorsMap) LoadAndDelete(tpid types.TopicID, lsid types.LogStreamID) (executor.Executor, bool) { + id := packLogStreamTopicID(lsid, tpid) m.mu.Lock() - slot, idx, ok := m.lookup(lsid) + slot, idx, ok := m.lookup(id) if !ok { m.mu.Unlock() return nil, false } m.delete(idx) - delete(m.hash, lsid) + delete(m.hash, id) m.mu.Unlock() return slot.extor, true @@ -102,7 +106,8 @@ func (m *ExecutorsMap) LoadAndDelete(lsid types.LogStreamID) (executor.Executor, func (m *ExecutorsMap) Range(f func(types.LogStreamID, executor.Executor) bool) { m.mu.RLock() for i := 0; i < len(m.slots); i++ { - lsid := m.slots[i].id + id := m.slots[i].id + lsid, _ := id.unpack() extor := m.slots[i].extor if !f(lsid, extor) { break @@ -118,12 +123,12 @@ func (m *ExecutorsMap) Size() int { return ret } -func (m *ExecutorsMap) fastLookup(lsid types.LogStreamID) (extor executorSlot, ok bool) { +func (m *ExecutorsMap) fastLookup(lsid logStreamTopicID) (extor executorSlot, ok bool) { extor, ok = m.hash[lsid] return } -func (m *ExecutorsMap) lookup(lsid types.LogStreamID) (extor executorSlot, idx int, ok bool) { +func (m *ExecutorsMap) lookup(lsid logStreamTopicID) (extor executorSlot, idx int, ok bool) { n := len(m.slots) idx = m.search(lsid) if idx < n && m.slots[idx].id == lsid { @@ -132,14 +137,14 @@ func (m *ExecutorsMap) lookup(lsid types.LogStreamID) (extor executorSlot, idx i return nilSlot, n, false } -func (m *ExecutorsMap) store(lsid types.LogStreamID, extor executor.Executor) { +func (m *ExecutorsMap) store(lsid logStreamTopicID, extor executor.Executor) { idx := m.search(lsid) slot := executorSlot{extor: extor, id: lsid} m.insert(idx, slot) m.hash[lsid] = slot } -func (m *ExecutorsMap) search(lsid types.LogStreamID) int { +func (m *ExecutorsMap) search(lsid logStreamTopicID) int { return sort.Search(len(m.slots), func(idx int) bool { return lsid <= m.slots[idx].id }) diff --git a/internal/storagenode/executorsmap/executors_map_test.go b/internal/storagenode/executorsmap/executors_map_test.go index 168d74645..ea2f12a57 100644 --- a/internal/storagenode/executorsmap/executors_map_test.go +++ b/internal/storagenode/executorsmap/executors_map_test.go @@ -18,12 +18,12 @@ func TestExecutorsMapEmpty(t *testing.T) { emap := New(10) require.Zero(t, emap.Size()) - extor, ok := emap.Load(1) + extor, ok := emap.Load(1, 1) require.Nil(t, extor) require.False(t, ok) numCalled := 0 - emap.Range(func(_ types.LogStreamID, _ executor.Executor) bool { + emap.Range(func(types.LogStreamID, executor.Executor) bool { numCalled++ return true }) @@ -36,17 +36,17 @@ func TestExecutorsMapStore(t *testing.T) { emap := New(10) - require.NoError(t, emap.Store(1, nil)) + require.NoError(t, emap.Store(1, 1, nil)) require.Equal(t, 1, emap.Size()) - loadedExtor, loaded := emap.Load(1) + loadedExtor, loaded := emap.Load(1, 1) require.True(t, loaded) require.Nil(t, loadedExtor) extor := executor.NewMockExecutor(ctrl) - require.NoError(t, emap.Store(1, extor)) + require.NoError(t, emap.Store(1, 1, extor)) require.Equal(t, 1, emap.Size()) - loadedExtor, loaded = emap.Load(1) + loadedExtor, loaded = emap.Load(1, 1) require.True(t, loaded) require.Equal(t, extor, loadedExtor) } @@ -57,28 +57,28 @@ func TestExecutorsMapLoadOrStore(t *testing.T) { emap := New(10) - extor, loaded := emap.LoadOrStore(1, nil) + extor, loaded := emap.LoadOrStore(1, 1, nil) require.False(t, loaded) require.Nil(t, extor) require.Equal(t, 1, emap.Size()) - loadedExtor, loaded := emap.Load(1) + loadedExtor, loaded := emap.Load(1, 1) require.True(t, loaded) require.Nil(t, loadedExtor) - actualExtor, loaded := emap.LoadOrStore(1, executor.NewMockExecutor(ctrl)) + actualExtor, loaded := emap.LoadOrStore(1, 1, executor.NewMockExecutor(ctrl)) require.True(t, loaded) require.Equal(t, loadedExtor, actualExtor) extor = executor.NewMockExecutor(ctrl) - require.NoError(t, emap.Store(1, extor)) + require.NoError(t, emap.Store(1, 1, extor)) require.Equal(t, 1, emap.Size()) - loadedExtor, loaded = emap.Load(1) + loadedExtor, loaded = emap.Load(1, 1) require.True(t, loaded) require.Equal(t, extor, loadedExtor) - extor, loaded = emap.LoadAndDelete(1) + extor, loaded = emap.LoadAndDelete(1, 1) require.True(t, loaded) require.Equal(t, extor, loadedExtor) } @@ -89,7 +89,7 @@ func TestExecutorsMapLoadAndDelete(t *testing.T) { emap := New(10) - extor, loaded := emap.LoadAndDelete(1) + extor, loaded := emap.LoadAndDelete(1, 1) require.Nil(t, extor) require.False(t, loaded) } @@ -101,11 +101,11 @@ func TestExecutorsMapOverwrite(t *testing.T) { emap := New(10) extor := executor.NewMockExecutor(ctrl) - require.NoError(t, emap.Store(1, extor)) + require.NoError(t, emap.Store(1, 1, extor)) require.Equal(t, 1, emap.Size()) - require.Error(t, emap.Store(1, extor)) - require.Error(t, emap.Store(1, executor.NewMockExecutor(ctrl))) + require.Error(t, emap.Store(1, 1, extor)) + require.Error(t, emap.Store(1, 1, executor.NewMockExecutor(ctrl))) } func TestExecutorsMapMultipleExecutors(t *testing.T) { @@ -114,8 +114,11 @@ func TestExecutorsMapMultipleExecutors(t *testing.T) { ctrl := gomock.NewController(t) defer ctrl.Finish() + rng := rand.New(rand.NewSource(time.Now().UnixNano())) + emap := New(10) extors := make([]executor.Executor, numExtors) + lstpMap := make(map[types.LogStreamID]types.TopicID, numExtors) for i := 0; i < numExtors; i++ { require.Equal(t, i, emap.Size()) @@ -124,19 +127,25 @@ func TestExecutorsMapMultipleExecutors(t *testing.T) { extor = nil } extors[i] = extor - emap.Store(types.LogStreamID(i), extor) + tpid := types.TopicID(rng.Int31()) + lsid := types.LogStreamID(i) + lstpMap[lsid] = tpid + require.NoError(t, emap.Store(tpid, lsid, extor)) require.Equal(t, i+1, emap.Size()) } require.Equal(t, numExtors, emap.Size()) for i := 0; i < numExtors; i++ { - actual, loaded := emap.LoadOrStore(types.LogStreamID(i), executor.NewMockExecutor(ctrl)) + lsid := types.LogStreamID(i) + require.Contains(t, lstpMap, lsid) + tpid := lstpMap[lsid] + actual, loaded := emap.LoadOrStore(tpid, lsid, executor.NewMockExecutor(ctrl)) require.True(t, loaded) require.Equal(t, extors[i], actual) if i%2 != 0 { - require.Error(t, emap.Store(types.LogStreamID(1), executor.NewMockExecutor(ctrl))) + require.Error(t, emap.Store(tpid, lsid, executor.NewMockExecutor(ctrl))) } } @@ -170,21 +179,29 @@ func TestExecutorsMapOrdred(t *testing.T) { rand.Seed(time.Now().UnixNano()) emap := New(numExtors) + lstpMap := make(map[types.LogStreamID]types.TopicID) var stored [numExtors]bool for i := 0; i < numExtors*10; i++ { - lsid := rand.Intn(numExtors) + pos := rand.Intn(numExtors) + lsid := types.LogStreamID(pos) + tpid := types.TopicID(rand.Int31()) + if _, ok := lstpMap[lsid]; ok { + tpid = lstpMap[lsid] + } else { + lstpMap[lsid] = tpid + } ok := !stored[lsid] if ok { - require.NoError(t, emap.Store(types.LogStreamID(lsid), executor.NewMockExecutor(ctrl))) + require.NoError(t, emap.Store(tpid, lsid, executor.NewMockExecutor(ctrl))) stored[lsid] = true } else { - require.Error(t, emap.Store(types.LogStreamID(lsid), executor.NewMockExecutor(ctrl))) + require.Error(t, emap.Store(tpid, lsid, executor.NewMockExecutor(ctrl))) } } iteratedLSIDs := make([]types.LogStreamID, 0, numExtors) - emap.Range(func(lsid types.LogStreamID, extor executor.Executor) bool { + emap.Range(func(lsid types.LogStreamID, _ executor.Executor) bool { iteratedLSIDs = append(iteratedLSIDs, lsid) return true }) @@ -195,8 +212,8 @@ func TestExecutorsMapOrdred(t *testing.T) { func BenchmarkExecutorsMap(b *testing.B) { const ( + topicID = types.TopicID(1) numExtors = 1e5 - initSize = 128 ) ctrl := gomock.NewController(b) @@ -208,7 +225,7 @@ func BenchmarkExecutorsMap(b *testing.B) { for i := 0; i < numExtors; i++ { lsid := types.LogStreamID(i) extor := executor.NewMockExecutor(ctrl) - require.NoError(b, ordmap.Store(lsid, extor)) + require.NoError(b, ordmap.Store(topicID, lsid, extor)) stdmap[lsid] = extor } @@ -249,7 +266,7 @@ func BenchmarkExecutorsMap(b *testing.B) { b.ResetTimer() for i := 0; i < b.N; i++ { - ordmap.Store(types.LogStreamID(i), mockExecutor) + _ = ordmap.Store(topicID, types.LogStreamID(i), mockExecutor) } }, }, @@ -272,7 +289,7 @@ func BenchmarkExecutorsMap(b *testing.B) { benchfunc: func(b *testing.B) { for i := 0; i < b.N; i++ { lsid := loadIDs[i%len(loadIDs)] - extor, ok := ordmap.Load(lsid) + extor, ok := ordmap.Load(topicID, lsid) if ok { callback(lsid, extor) } diff --git a/internal/storagenode/executorsmap/id.go b/internal/storagenode/executorsmap/id.go new file mode 100644 index 000000000..77a968e08 --- /dev/null +++ b/internal/storagenode/executorsmap/id.go @@ -0,0 +1,19 @@ +package executorsmap + +import "github.com/kakao/varlog/pkg/types" + +const halfBits = 32 + +type logStreamTopicID int64 + +func packLogStreamTopicID(lsid types.LogStreamID, tpid types.TopicID) logStreamTopicID { + return logStreamTopicID(int64(lsid)<> halfBits) & mask) + tpid = types.TopicID(n & mask) + return lsid, tpid +} diff --git a/internal/storagenode/executorsmap/id_test.go b/internal/storagenode/executorsmap/id_test.go new file mode 100644 index 000000000..d49577d09 --- /dev/null +++ b/internal/storagenode/executorsmap/id_test.go @@ -0,0 +1,24 @@ +package executorsmap + +import ( + "math/rand" + "testing" + "time" + + "github.com/stretchr/testify/require" + + "github.com/kakao/varlog/pkg/types" +) + +func TestLogStreamTopicID(t *testing.T) { + const n = 10 + rng := rand.New(rand.NewSource(time.Now().UnixNano())) + for i := 0; i < n; i++ { + expectedLogStreamID := types.LogStreamID(rng.Int31()) + expectedTopicID := types.TopicID(rng.Int31()) + + actualLogStreamID, actualTopicID := packLogStreamTopicID(expectedLogStreamID, expectedTopicID).unpack() + require.Equal(t, expectedLogStreamID, actualLogStreamID) + require.Equal(t, expectedTopicID, actualTopicID) + } +} diff --git a/internal/storagenode/logio/readwriter.go b/internal/storagenode/logio/readwriter.go index 3200c9430..00ae24481 100644 --- a/internal/storagenode/logio/readwriter.go +++ b/internal/storagenode/logio/readwriter.go @@ -5,13 +5,21 @@ import ( "github.com/kakao/varlog/internal/storagenode/storage" "github.com/kakao/varlog/pkg/types" - "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) +// ReadWriter represents methods to read or write logs in a log stream. type ReadWriter interface { - Append(ctx context.Context, data []byte, backups ...snpb.Replica) (types.GLSN, error) - Read(ctx context.Context, glsn types.GLSN) (types.LogEntry, error) + // Append writes a log to the log stream. + Append(ctx context.Context, data []byte, backups ...varlogpb.Replica) (types.GLSN, error) + + // Read reads a log with the given glsn. + Read(ctx context.Context, glsn types.GLSN) (varlogpb.LogEntry, error) + + // Subscribe scans logs from the inclusive begin to the exclusive end. Subscribe(ctx context.Context, begin, end types.GLSN) (SubscribeEnv, error) + + // Trim removes logs until glsn. Trim(ctx context.Context, glsn types.GLSN) error } @@ -21,7 +29,8 @@ type SubscribeEnv interface { Err() error } +// Getter is the interface that wraps basic methods to access ReadWriter. type Getter interface { - ReadWriter(logStreamID types.LogStreamID) (ReadWriter, bool) + ReadWriter(topicID types.TopicID, logStreamID types.LogStreamID) (ReadWriter, bool) ForEachReadWriters(f func(ReadWriter)) } diff --git a/internal/storagenode/logio/server.go b/internal/storagenode/logio/server.go index ee1b8fcd0..d6d956a3e 100644 --- a/internal/storagenode/logio/server.go +++ b/internal/storagenode/logio/server.go @@ -19,6 +19,7 @@ import ( "github.com/kakao/varlog/pkg/util/telemetry/attribute" "github.com/kakao/varlog/pkg/verrors" "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) type Server interface { @@ -67,26 +68,28 @@ func (s *server) Append(ctx context.Context, req *snpb.AppendRequest) (*snpb.App startTime := time.Now() defer func() { dur := time.Since(startTime) - s.measurable.Stub().Metrics().RpcServerAppendDuration.Record( + s.measurable.Stub().Metrics().RPCServerAppendDuration.Record( ctx, float64(dur.Microseconds())/1000.0, ) }() - req := reqI.(*snpb.AppendRequest) var rsp *snpb.AppendResponse - lse, ok := s.readWriterGetter.ReadWriter(req.GetLogStreamID()) + lse, ok := s.readWriterGetter.ReadWriter(req.GetTopicID(), req.GetLogStreamID()) if !ok { code = codes.NotFound return rsp, errors.WithStack(verrors.ErrInvalid) } - backups := make([]snpb.Replica, 0, len(req.Backups)) + backups := make([]varlogpb.Replica, 0, len(req.Backups)) for i := range req.Backups { - backups = append(backups, snpb.Replica{ - StorageNodeID: req.Backups[i].GetStorageNodeID(), - Address: req.Backups[i].GetAddress(), - LogStreamID: req.GetLogStreamID(), + backups = append(backups, varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: req.Backups[i].GetStorageNodeID(), + Address: req.Backups[i].GetAddress(), + }, + TopicID: req.GetTopicID(), + LogStreamID: req.GetLogStreamID(), }) } @@ -105,9 +108,8 @@ func (s *server) Read(ctx context.Context, req *snpb.ReadRequest) (*snpb.ReadRes code := codes.Internal rspI, err := s.withTelemetry(ctx, "varlog.snpb.LogIO/Read", req, func(ctx context.Context, reqI interface{}) (interface{}, error) { - req := reqI.(*snpb.ReadRequest) var rsp *snpb.ReadResponse - lse, ok := s.readWriterGetter.ReadWriter(req.GetLogStreamID()) + lse, ok := s.readWriterGetter.ReadWriter(req.GetTopicID(), req.GetLogStreamID()) if !ok { code = codes.NotFound return rsp, errors.WithStack(verrors.ErrInvalid) @@ -154,13 +156,11 @@ func (s *server) Subscribe(req *snpb.SubscribeRequest, stream snpb.LogIO_Subscri code := codes.Internal _, err := s.withTelemetry(stream.Context(), "varlog.snpb.LogIO/Subscribe", req, func(ctx context.Context, reqI interface{}) (interface{}, error) { - req := reqI.(*snpb.SubscribeRequest) - if req.GetGLSNBegin() >= req.GetGLSNEnd() { code = codes.InvalidArgument return nil, errors.New("storagenode: invalid subscription range") } - reader, ok := s.readWriterGetter.ReadWriter(req.GetLogStreamID()) + reader, ok := s.readWriterGetter.ReadWriter(req.GetTopicID(), req.GetLogStreamID()) if !ok { code = codes.NotFound return nil, errors.WithStack(verrors.ErrInvalid) @@ -198,7 +198,6 @@ func (s *server) Trim(ctx context.Context, req *snpb.TrimRequest) (*pbtypes.Empt code := codes.Internal rspI, err := s.withTelemetry(ctx, "varlog.snpb.LogIO/Trim", req, func(ctx context.Context, reqI interface{}) (interface{}, error) { - req := reqI.(*snpb.TrimRequest) trimGLSN := req.GetGLSN() // TODO diff --git a/internal/storagenode/replication/client.go b/internal/storagenode/replication/client.go index 5149d01c7..827ad99fe 100644 --- a/internal/storagenode/replication/client.go +++ b/internal/storagenode/replication/client.go @@ -17,6 +17,7 @@ import ( "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/verrors" "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) type Client interface { @@ -24,7 +25,7 @@ type Client interface { Replicate(ctx context.Context, llsn types.LLSN, data []byte, cb func(error)) PeerStorageNodeID() types.StorageNodeID SyncInit(ctx context.Context, srcRnage snpb.SyncRange) (snpb.SyncRange, error) - SyncReplicate(ctx context.Context, replica snpb.Replica, payload snpb.SyncPayload) error + SyncReplicate(ctx context.Context, replica varlogpb.Replica, payload snpb.SyncPayload) error } type client struct { @@ -93,7 +94,7 @@ func (c *client) run(ctx context.Context) (err error) { } func (c *client) PeerStorageNodeID() types.StorageNodeID { - return c.replica.GetStorageNodeID() + return c.replica.StorageNode.StorageNodeID } func (c *client) Replicate(ctx context.Context, llsn types.LLSN, data []byte, callback func(error)) { @@ -122,6 +123,7 @@ func (c *client) Replicate(ctx context.Context, llsn types.LLSN, data []byte, ca } req := &snpb.ReplicationRequest{ + TopicID: c.replica.GetTopicID(), LogStreamID: c.replica.GetLogStreamID(), LLSN: llsn, Payload: data, @@ -228,7 +230,7 @@ func (c *client) SyncInit(ctx context.Context, srcRnage snpb.SyncRange) (snpb.Sy return rsp.GetRange(), errors.WithStack(verrors.FromStatusError(err)) } -func (c *client) SyncReplicate(ctx context.Context, replica snpb.Replica, payload snpb.SyncPayload) error { +func (c *client) SyncReplicate(ctx context.Context, replica varlogpb.Replica, payload snpb.SyncPayload) error { c.closed.mu.RLock() if c.closed.val { c.closed.mu.RUnlock() diff --git a/internal/storagenode/replication/client_mock.go b/internal/storagenode/replication/client_mock.go index 2090e4008..f6d89fcbf 100644 --- a/internal/storagenode/replication/client_mock.go +++ b/internal/storagenode/replication/client_mock.go @@ -12,6 +12,7 @@ import ( types "github.com/kakao/varlog/pkg/types" snpb "github.com/kakao/varlog/proto/snpb" + varlogpb "github.com/kakao/varlog/proto/varlogpb" ) // MockClient is a mock of Client interface. @@ -93,7 +94,7 @@ func (mr *MockClientMockRecorder) SyncInit(arg0, arg1 interface{}) *gomock.Call } // SyncReplicate mocks base method. -func (m *MockClient) SyncReplicate(arg0 context.Context, arg1 snpb.Replica, arg2 snpb.SyncPayload) error { +func (m *MockClient) SyncReplicate(arg0 context.Context, arg1 varlogpb.Replica, arg2 snpb.SyncPayload) error { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "SyncReplicate", arg0, arg1, arg2) ret0, _ := ret[0].(error) diff --git a/internal/storagenode/replication/config.go b/internal/storagenode/replication/config.go index 7dd64971b..f51025c27 100644 --- a/internal/storagenode/replication/config.go +++ b/internal/storagenode/replication/config.go @@ -7,7 +7,7 @@ import ( "github.com/kakao/varlog/internal/storagenode/id" "github.com/kakao/varlog/internal/storagenode/telemetry" "github.com/kakao/varlog/pkg/verrors" - "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) const ( @@ -16,7 +16,7 @@ const ( ) type clientConfig struct { - replica snpb.Replica + replica varlogpb.Replica requestQueueSize int measure telemetry.Measurable logger *zap.Logger @@ -124,13 +124,13 @@ type ConnectorOption interface { applyConnector(*connectorConfig) } -type replicaOption snpb.Replica +type replicaOption varlogpb.Replica func (o replicaOption) applyClient(c *clientConfig) { - c.replica = snpb.Replica(o) + c.replica = varlogpb.Replica(o) } -func WithReplica(replica snpb.Replica) ClientOption { +func WithReplica(replica varlogpb.Replica) ClientOption { return replicaOption(replica) } diff --git a/internal/storagenode/replication/connector.go b/internal/storagenode/replication/connector.go index 19772ab5b..70f3bab8f 100644 --- a/internal/storagenode/replication/connector.go +++ b/internal/storagenode/replication/connector.go @@ -9,16 +9,15 @@ import ( "github.com/pkg/errors" "go.uber.org/multierr" - "go.uber.org/zap" "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/verrors" - "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) type Connector interface { io.Closer - Get(ctx context.Context, replica snpb.Replica) (Client, error) + Get(ctx context.Context, replica varlogpb.Replica) (Client, error) } type connector struct { @@ -26,7 +25,6 @@ type connector struct { clients map[types.StorageNodeID]*client closed bool mu sync.Mutex - logger *zap.Logger } func NewConnector(opts ...ConnectorOption) (Connector, error) { @@ -41,7 +39,7 @@ func NewConnector(opts ...ConnectorOption) (Connector, error) { return c, nil } -func (c *connector) Get(ctx context.Context, replica snpb.Replica) (Client, error) { +func (c *connector) Get(ctx context.Context, replica varlogpb.Replica) (Client, error) { c.mu.Lock() defer c.mu.Unlock() @@ -49,7 +47,7 @@ func (c *connector) Get(ctx context.Context, replica snpb.Replica) (Client, erro return nil, errors.WithStack(verrors.ErrClosed) } - cl, ok := c.clients[replica.StorageNodeID] + cl, ok := c.clients[replica.StorageNode.StorageNodeID] if ok { return cl, nil } @@ -57,7 +55,7 @@ func (c *connector) Get(ctx context.Context, replica snpb.Replica) (Client, erro if err != nil { return nil, err } - c.clients[replica.StorageNodeID] = cl + c.clients[replica.StorageNode.StorageNodeID] = cl return cl, nil } @@ -80,9 +78,8 @@ func (c *connector) Close() (err error) { return err } -func (c *connector) newClient(ctx context.Context, replica snpb.Replica) (*client, error) { - opts := append(c.clientOptions, WithReplica(replica)) - cl, err := newClient(ctx, opts...) +func (c *connector) newClient(ctx context.Context, replica varlogpb.Replica) (*client, error) { + cl, err := newClient(ctx, append(c.clientOptions, WithReplica(replica))...) if err != nil { return nil, err } @@ -93,5 +90,5 @@ func (c *connector) newClient(ctx context.Context, replica snpb.Replica) (*clien func (c *connector) delClient(client *client) { c.mu.Lock() defer c.mu.Unlock() - delete(c.clients, client.replica.GetStorageNodeID()) + delete(c.clients, client.replica.StorageNode.StorageNodeID) } diff --git a/internal/storagenode/replication/connector_mock.go b/internal/storagenode/replication/connector_mock.go index 87f38e1e8..7026a0cee 100644 --- a/internal/storagenode/replication/connector_mock.go +++ b/internal/storagenode/replication/connector_mock.go @@ -10,7 +10,7 @@ import ( gomock "github.com/golang/mock/gomock" - snpb "github.com/kakao/varlog/proto/snpb" + varlogpb "github.com/kakao/varlog/proto/varlogpb" ) // MockConnector is a mock of Connector interface. @@ -51,7 +51,7 @@ func (mr *MockConnectorMockRecorder) Close() *gomock.Call { } // Get mocks base method. -func (m *MockConnector) Get(arg0 context.Context, arg1 snpb.Replica) (Client, error) { +func (m *MockConnector) Get(arg0 context.Context, arg1 varlogpb.Replica) (Client, error) { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "Get", arg0, arg1) ret0, _ := ret[0].(Client) diff --git a/internal/storagenode/replication/replication.go b/internal/storagenode/replication/replication.go index c617045c1..2e1ccfc0e 100644 --- a/internal/storagenode/replication/replication.go +++ b/internal/storagenode/replication/replication.go @@ -8,10 +8,11 @@ import ( "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) type SyncTaskStatus struct { - Replica snpb.Replica + Replica varlogpb.Replica State snpb.SyncState Span snpb.SyncRange Curr types.LLSN @@ -24,9 +25,9 @@ type Replicator interface { Replicate(ctx context.Context, llsn types.LLSN, data []byte) error SyncInit(ctx context.Context, srcRnage snpb.SyncRange) (snpb.SyncRange, error) SyncReplicate(ctx context.Context, payload snpb.SyncPayload) error - Sync(ctx context.Context, replica snpb.Replica) (*snpb.SyncStatus, error) + Sync(ctx context.Context, replica varlogpb.Replica) (*snpb.SyncStatus, error) } type Getter interface { - Replicator(logStreamID types.LogStreamID) (Replicator, bool) + Replicator(topicID types.TopicID, logStreamID types.LogStreamID) (Replicator, bool) } diff --git a/internal/storagenode/replication/replication_mock.go b/internal/storagenode/replication/replication_mock.go index ec916b304..e457c35b0 100644 --- a/internal/storagenode/replication/replication_mock.go +++ b/internal/storagenode/replication/replication_mock.go @@ -12,6 +12,7 @@ import ( types "github.com/kakao/varlog/pkg/types" snpb "github.com/kakao/varlog/proto/snpb" + varlogpb "github.com/kakao/varlog/proto/varlogpb" ) // MockReplicator is a mock of Replicator interface. @@ -52,7 +53,7 @@ func (mr *MockReplicatorMockRecorder) Replicate(arg0, arg1, arg2 interface{}) *g } // Sync mocks base method. -func (m *MockReplicator) Sync(arg0 context.Context, arg1 snpb.Replica) (*snpb.SyncStatus, error) { +func (m *MockReplicator) Sync(arg0 context.Context, arg1 varlogpb.Replica) (*snpb.SyncStatus, error) { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "Sync", arg0, arg1) ret0, _ := ret[0].(*snpb.SyncStatus) @@ -119,16 +120,16 @@ func (m *MockGetter) EXPECT() *MockGetterMockRecorder { } // Replicator mocks base method. -func (m *MockGetter) Replicator(arg0 types.LogStreamID) (Replicator, bool) { +func (m *MockGetter) Replicator(arg0 types.TopicID, arg1 types.LogStreamID) (Replicator, bool) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Replicator", arg0) + ret := m.ctrl.Call(m, "Replicator", arg0, arg1) ret0, _ := ret[0].(Replicator) ret1, _ := ret[1].(bool) return ret0, ret1 } // Replicator indicates an expected call of Replicator. -func (mr *MockGetterMockRecorder) Replicator(arg0 interface{}) *gomock.Call { +func (mr *MockGetterMockRecorder) Replicator(arg0, arg1 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Replicator", reflect.TypeOf((*MockGetter)(nil).Replicator), arg0) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Replicator", reflect.TypeOf((*MockGetter)(nil).Replicator), arg0, arg1) } diff --git a/internal/storagenode/replication/replication_test.go b/internal/storagenode/replication/replication_test.go index 7cf4d728c..d9dcd92b9 100644 --- a/internal/storagenode/replication/replication_test.go +++ b/internal/storagenode/replication/replication_test.go @@ -16,10 +16,9 @@ import ( "google.golang.org/grpc" "github.com/kakao/varlog/internal/storagenode/id" - "github.com/kakao/varlog/internal/storagenode/telemetry" "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/util/netutil" - "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) func TestReplicationBadConnectorClient(t *testing.T) { @@ -34,11 +33,11 @@ func TestReplicationBadConnectorClient(t *testing.T) { // no address connector, err = NewConnector( WithClientOptions( - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ), ) require.NoError(t, err) - _, err = connector.Get(context.TODO(), snpb.Replica{}) + _, err = connector.Get(context.TODO(), varlogpb.Replica{}) require.Error(t, err) require.NoError(t, connector.Close()) @@ -46,12 +45,14 @@ func TestReplicationBadConnectorClient(t *testing.T) { connector, err = NewConnector( WithClientOptions( WithRequestQueueSize(-1), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), )) require.NoError(t, err) _, err = connector.Get(context.TODO(), - snpb.Replica{ - Address: "localhost:12345", + varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + Address: "localhost:12345", + }, }, ) require.Error(t, err) @@ -60,11 +61,15 @@ func TestReplicationBadConnectorClient(t *testing.T) { // bad address connector, err = NewConnector(WithClientOptions( WithRequestQueueSize(1), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), )) require.NoError(t, err) _, err = connector.Get(context.TODO(), - snpb.Replica{Address: "bad-address"}, + varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + Address: "bad-address", + }, + }, ) require.Error(t, err) require.NoError(t, connector.Close()) @@ -77,7 +82,7 @@ func TestReplicationClosedClient(t *testing.T) { // logReplicator mock replicator := NewMockReplicator(ctrl) replicatorGetter := NewMockGetter(ctrl) - replicatorGetter.EXPECT().Replicator(gomock.Any()).Return(replicator, true).AnyTimes() + replicatorGetter.EXPECT().Replicator(gomock.Any(), gomock.Any()).Return(replicator, true).AnyTimes() replicator.EXPECT().Replicate(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil).AnyTimes() // replicator server @@ -92,7 +97,7 @@ func TestReplicationClosedClient(t *testing.T) { server := NewServer( WithStorageNodeIDGetter(snidGetter), WithLogReplicatorGetter(replicatorGetter), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) grpcServer := grpc.NewServer() @@ -109,16 +114,18 @@ func TestReplicationClosedClient(t *testing.T) { // connector connector, err := NewConnector( WithClientOptions( - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ), ) require.NoError(t, err) // replicator client - replica := snpb.Replica{ - StorageNodeID: 1, - LogStreamID: 1, - Address: addrs[0], + replica := varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + Address: addrs[0], + }, + LogStreamID: 1, } client, err := connector.Get(context.TODO(), replica) @@ -175,8 +182,8 @@ func TestReplication(t *testing.T) { // replicator mock replicator := NewMockReplicator(ctrl) replicatorGetter := NewMockGetter(ctrl) - replicatorGetter.EXPECT().Replicator(gomock.Any()).DoAndReturn( - func(lsid types.LogStreamID) (Replicator, bool) { + replicatorGetter.EXPECT().Replicator(gomock.Any(), gomock.Any()).DoAndReturn( + func(_ types.TopicID, lsid types.LogStreamID) (Replicator, bool) { switch lsid { case logStreamID: return replicator, true @@ -214,7 +221,7 @@ func TestReplication(t *testing.T) { server := NewServer( WithStorageNodeIDGetter(snidGetter), WithLogReplicatorGetter(replicatorGetter), - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ) grpcServer := grpc.NewServer() @@ -231,7 +238,7 @@ func TestReplication(t *testing.T) { // connector connector, err := NewConnector( WithClientOptions( - WithMeasurable(telemetry.NewTestMeasurable(ctrl)), + WithMeasurable(NewTestMeasurable(ctrl)), ), ) require.NoError(t, err) @@ -241,10 +248,12 @@ func TestReplication(t *testing.T) { }() // replicator client - replica := snpb.Replica{ - StorageNodeID: storageNodeID, - LogStreamID: logStreamID, - Address: addrs[0], + replica := varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: storageNodeID, + Address: addrs[0], + }, + LogStreamID: logStreamID, } client, err := connector.Get(context.TODO(), replica) diff --git a/internal/storagenode/replication/server.go b/internal/storagenode/replication/server.go index 14ee77ee2..a747dcd78 100644 --- a/internal/storagenode/replication/server.go +++ b/internal/storagenode/replication/server.go @@ -136,14 +136,15 @@ func (s *serverImpl) replicate(ctx context.Context, repCtxC <-chan *replicateTas err = repCtx.err if repCtx.err == nil { startTime := time.Now() + tpid := repCtx.req.GetTopicID() lsid := repCtx.req.GetLogStreamID() - if logReplicator, ok := s.logReplicatorGetter.Replicator(lsid); ok { + if logReplicator, ok := s.logReplicatorGetter.Replicator(tpid, lsid); ok { err = logReplicator.Replicate(ctx, repCtx.req.GetLLSN(), repCtx.req.GetPayload()) } else { err = fmt.Errorf("no executor: %v", lsid) } repCtx.err = err - s.measure.Stub().Metrics().RpcServerReplicateDuration.Record( + s.measure.Stub().Metrics().RPCServerReplicateDuration.Record( ctx, float64(time.Since(startTime).Microseconds())/1000.0, ) @@ -220,8 +221,9 @@ func (s *serverImpl) SyncInit(ctx context.Context, req *snpb.SyncInitRequest) (r span.End() }() + tpID := req.GetDestination().TopicID lsID := req.GetDestination().LogStreamID - logReplicator, ok := s.logReplicatorGetter.Replicator(lsID) + logReplicator, ok := s.logReplicatorGetter.Replicator(tpID, lsID) if !ok { err = errors.Errorf("no executor: %v", lsID) return rsp, err @@ -258,8 +260,9 @@ func (s *serverImpl) SyncReplicate(ctx context.Context, req *snpb.SyncReplicateR span.End() }() + tpID := req.GetDestination().TopicID lsID := req.GetDestination().LogStreamID - logReplicator, ok := s.logReplicatorGetter.Replicator(lsID) + logReplicator, ok := s.logReplicatorGetter.Replicator(tpID, lsID) if !ok { err = errors.Errorf("no executor: %v", lsID) return rsp, err diff --git a/internal/storagenode/replication/testing_test.go b/internal/storagenode/replication/testing_test.go new file mode 100644 index 000000000..d38013179 --- /dev/null +++ b/internal/storagenode/replication/testing_test.go @@ -0,0 +1,14 @@ +package replication + +import ( + gomock "github.com/golang/mock/gomock" + + "github.com/kakao/varlog/internal/storagenode/telemetry" +) + +func NewTestMeasurable(ctrl *gomock.Controller) *telemetry.MockMeasurable { + m := telemetry.NewMockMeasurable(ctrl) + nop := telemetry.NewNopTelmetryStub() + m.EXPECT().Stub().Return(nop).AnyTimes() + return m +} diff --git a/internal/storagenode/reportcommitter/reportcommitter.go b/internal/storagenode/reportcommitter/reportcommitter.go index 4200266a8..fb68f6ace 100644 --- a/internal/storagenode/reportcommitter/reportcommitter.go +++ b/internal/storagenode/reportcommitter/reportcommitter.go @@ -17,7 +17,7 @@ type ReportCommitter interface { type Getter interface { // ReportCommitter returns reportCommitter corresponded with given logStreamID. If the // reportCommitter does not exist, the result ok is false. - ReportCommitter(logStreamID types.LogStreamID) (reportCommitter ReportCommitter, ok bool) + ReportCommitter(topicID types.TopicID, logStreamID types.LogStreamID) (reportCommitter ReportCommitter, ok bool) // GetReports stores reports of all reportCommitters to given the rsp. GetReports(rsp *snpb.GetReportResponse, f func(ReportCommitter, *snpb.GetReportResponse)) diff --git a/internal/storagenode/reportcommitter/reportcommitter_mock.go b/internal/storagenode/reportcommitter/reportcommitter_mock.go index 514e30454..f7c228f24 100644 --- a/internal/storagenode/reportcommitter/reportcommitter_mock.go +++ b/internal/storagenode/reportcommitter/reportcommitter_mock.go @@ -102,16 +102,16 @@ func (mr *MockGetterMockRecorder) GetReports(arg0, arg1 interface{}) *gomock.Cal } // ReportCommitter mocks base method. -func (m *MockGetter) ReportCommitter(arg0 types.LogStreamID) (ReportCommitter, bool) { +func (m *MockGetter) ReportCommitter(arg0 types.TopicID, arg1 types.LogStreamID) (ReportCommitter, bool) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "ReportCommitter", arg0) + ret := m.ctrl.Call(m, "ReportCommitter", arg0, arg1) ret0, _ := ret[0].(ReportCommitter) ret1, _ := ret[1].(bool) return ret0, ret1 } // ReportCommitter indicates an expected call of ReportCommitter. -func (mr *MockGetterMockRecorder) ReportCommitter(arg0 interface{}) *gomock.Call { +func (mr *MockGetterMockRecorder) ReportCommitter(arg0, arg1 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ReportCommitter", reflect.TypeOf((*MockGetter)(nil).ReportCommitter), arg0) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ReportCommitter", reflect.TypeOf((*MockGetter)(nil).ReportCommitter), arg0, arg1) } diff --git a/internal/storagenode/reportcommitter/reporter.go b/internal/storagenode/reportcommitter/reporter.go index 12e44c38c..b22cc020c 100644 --- a/internal/storagenode/reportcommitter/reporter.go +++ b/internal/storagenode/reportcommitter/reporter.go @@ -87,7 +87,7 @@ func (r *reporter) Commit(ctx context.Context, commitResult snpb.LogStreamCommit return errors.WithStack(verrors.ErrClosed) } - committer, ok := r.reportCommitterGetter.ReportCommitter(commitResult.LogStreamID) + committer, ok := r.reportCommitterGetter.ReportCommitter(commitResult.TopicID, commitResult.LogStreamID) if !ok { return errors.Errorf("no such committer: %d", commitResult.LogStreamID) } diff --git a/internal/storagenode/reportcommitter/reporter_test.go b/internal/storagenode/reportcommitter/reporter_test.go index cb1287515..aceb6ac73 100644 --- a/internal/storagenode/reportcommitter/reporter_test.go +++ b/internal/storagenode/reportcommitter/reporter_test.go @@ -20,7 +20,7 @@ func TestLogStreamReporterEmptyStorageNode(t *testing.T) { defer ctrl.Finish() rcg := NewMockGetter(ctrl) - rcg.EXPECT().ReportCommitter(gomock.Any()).Return(nil, false).AnyTimes() + rcg.EXPECT().ReportCommitter(gomock.Any(), gomock.Any()).Return(nil, false).AnyTimes() rcg.EXPECT().GetReports(gomock.Any(), gomock.Any()).Return().AnyTimes() getter := id.NewMockStorageNodeIDGetter(ctrl) diff --git a/internal/storagenode/reportcommitter/server.go b/internal/storagenode/reportcommitter/server.go index b1d452104..7eec69f8b 100644 --- a/internal/storagenode/reportcommitter/server.go +++ b/internal/storagenode/reportcommitter/server.go @@ -3,17 +3,13 @@ package reportcommitter //go:generate mockgen -build_flags -mod=vendor -self_package github.com/kakao/varlog/internal/storagenode/reportcommitter -package reportcommitter -destination server_mock.go . Server import ( - "context" - "fmt" "io" - oteltrace "go.opentelemetry.io/otel/trace" "go.uber.org/zap" "google.golang.org/grpc" "github.com/kakao/varlog/internal/storagenode/rpcserver" "github.com/kakao/varlog/internal/storagenode/telemetry" - "github.com/kakao/varlog/pkg/util/telemetry/attribute" "github.com/kakao/varlog/proto/snpb" ) @@ -45,31 +41,6 @@ func (s *server) Register(server *grpc.Server) { s.logger.Info("register to rpc server") } -func (s *server) withTelemetry(ctx context.Context, spanName string, req interface{}, h rpcserver.Handler) (rsp interface{}, err error) { - ctx, span := s.measure.Stub().StartSpan(ctx, spanName, - oteltrace.WithAttributes(attribute.StorageNodeID(s.lsr.StorageNodeID())), - oteltrace.WithSpanKind(oteltrace.SpanKindServer), - ) - - rsp, err = h(ctx, req) - if err == nil { - s.logger.Debug(spanName, - zap.Stringer("request", req.(fmt.Stringer)), - zap.Stringer("response", rsp.(fmt.Stringer)), - ) - } else { - span.RecordError(err) - s.logger.Error(spanName, - zap.Error(err), - zap.Stringer("request", req.(fmt.Stringer)), - ) - } - - // s.measure.Stub().Metrics().ActiveRequests.Add(ctx, -1, attributes...) - span.End() - return rsp, err -} - func (s *server) GetReport(stream snpb.LogStreamReporter_GetReportServer) (err error) { req := snpb.GetReportRequest{} rsp := snpb.GetReportResponse{ diff --git a/internal/storagenode/server.go b/internal/storagenode/server.go index b9bf84840..abb35eac7 100644 --- a/internal/storagenode/server.go +++ b/internal/storagenode/server.go @@ -85,13 +85,13 @@ func (s *server) GetMetadata(ctx context.Context, req *snpb.GetMetadataRequest) } // AddLogStream implements the ManagementServer AddLogStream method. -func (s *server) AddLogStream(ctx context.Context, req *snpb.AddLogStreamRequest) (*snpb.AddLogStreamResponse, error) { +func (s *server) AddLogStreamReplica(ctx context.Context, req *snpb.AddLogStreamReplicaRequest) (*snpb.AddLogStreamReplicaResponse, error) { rspI, err := s.withTelemetry(ctx, "varlog.snpb.Server/AddLogStream", req, - func(ctx context.Context, reqI interface{}) (interface{}, error) { - req := reqI.(*snpb.AddLogStreamRequest) - path, err := s.storageNode.AddLogStream(ctx, req.GetLogStreamID(), req.GetStorage().GetPath()) - return &snpb.AddLogStreamResponse{ + func(ctx context.Context, _ interface{}) (interface{}, error) { + path, err := s.storageNode.AddLogStream(ctx, req.GetTopicID(), req.GetLogStreamID(), req.GetStorage().GetPath()) + return &snpb.AddLogStreamReplicaResponse{ LogStream: &varlogpb.LogStreamDescriptor{ + TopicID: req.GetTopicID(), LogStreamID: req.GetLogStreamID(), Status: varlogpb.LogStreamStatusRunning, Replicas: []*varlogpb.ReplicaDescriptor{{ @@ -105,15 +105,14 @@ func (s *server) AddLogStream(ctx context.Context, req *snpb.AddLogStreamRequest if err != nil { return nil, verrors.ToStatusError(err) } - return rspI.(*snpb.AddLogStreamResponse), nil + return rspI.(*snpb.AddLogStreamReplicaResponse), nil } // RemoveLogStream implements the ManagementServer RemoveLogStream method. func (s *server) RemoveLogStream(ctx context.Context, req *snpb.RemoveLogStreamRequest) (*pbtypes.Empty, error) { rspI, err := s.withTelemetry(ctx, "varlog.snpb.Server/RemoveLogStream", req, - func(ctx context.Context, reqI interface{}) (interface{}, error) { - req := reqI.(*snpb.RemoveLogStreamRequest) - err := s.storageNode.RemoveLogStream(ctx, req.GetLogStreamID()) + func(ctx context.Context, _ interface{}) (interface{}, error) { + err := s.storageNode.RemoveLogStream(ctx, req.GetTopicID(), req.GetLogStreamID()) return &pbtypes.Empty{}, err }, ) @@ -126,9 +125,8 @@ func (s *server) RemoveLogStream(ctx context.Context, req *snpb.RemoveLogStreamR // Seal implements the ManagementServer Seal method. func (s *server) Seal(ctx context.Context, req *snpb.SealRequest) (*snpb.SealResponse, error) { rspI, err := s.withTelemetry(ctx, "varlog.snpb.Server/Seal", req, - func(ctx context.Context, reqI interface{}) (interface{}, error) { - req := reqI.(*snpb.SealRequest) - status, maxGLSN, err := s.storageNode.Seal(ctx, req.GetLogStreamID(), req.GetLastCommittedGLSN()) + func(ctx context.Context, _ interface{}) (interface{}, error) { + status, maxGLSN, err := s.storageNode.Seal(ctx, req.GetTopicID(), req.GetLogStreamID(), req.GetLastCommittedGLSN()) return &snpb.SealResponse{ Status: status, LastCommittedGLSN: maxGLSN, @@ -144,9 +142,8 @@ func (s *server) Seal(ctx context.Context, req *snpb.SealRequest) (*snpb.SealRes // Unseal implements the ManagementServer Unseal method. func (s *server) Unseal(ctx context.Context, req *snpb.UnsealRequest) (*pbtypes.Empty, error) { rspI, err := s.withTelemetry(ctx, "varlog.snpb.Server/Unseal", req, - func(ctx context.Context, reqI interface{}) (interface{}, error) { - req := reqI.(*snpb.UnsealRequest) - err := s.storageNode.Unseal(ctx, req.GetLogStreamID(), req.GetReplicas()) + func(ctx context.Context, _ interface{}) (interface{}, error) { + err := s.storageNode.Unseal(ctx, req.GetTopicID(), req.GetLogStreamID(), req.GetReplicas()) return &pbtypes.Empty{}, err }, ) @@ -159,14 +156,16 @@ func (s *server) Unseal(ctx context.Context, req *snpb.UnsealRequest) (*pbtypes. // Sync implements the ManagementServer Sync method. func (s *server) Sync(ctx context.Context, req *snpb.SyncRequest) (*snpb.SyncResponse, error) { rspI, err := s.withTelemetry(ctx, "varlog.snpb.Server/Sync", req, - func(ctx context.Context, reqI interface{}) (interface{}, error) { - req := reqI.(*snpb.SyncRequest) - replica := snpb.Replica{ - StorageNodeID: req.GetBackup().GetStorageNodeID(), - LogStreamID: req.GetLogStreamID(), - Address: req.GetBackup().GetAddress(), + func(ctx context.Context, _ interface{}) (interface{}, error) { + replica := varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: req.GetBackup().GetStorageNodeID(), + Address: req.GetBackup().GetAddress(), + }, + TopicID: req.GetTopicID(), + LogStreamID: req.GetLogStreamID(), } - status, err := s.storageNode.Sync(ctx, req.GetLogStreamID(), replica) + status, err := s.storageNode.Sync(ctx, req.GetTopicID(), req.GetLogStreamID(), replica) return &snpb.SyncResponse{Status: status}, err }, ) @@ -178,9 +177,8 @@ func (s *server) Sync(ctx context.Context, req *snpb.SyncRequest) (*snpb.SyncRes func (s *server) GetPrevCommitInfo(ctx context.Context, req *snpb.GetPrevCommitInfoRequest) (*snpb.GetPrevCommitInfoResponse, error) { rspI, err := s.withTelemetry(ctx, "varlog.snpb.Server/GetPrevCommitInfo", req, - func(ctx context.Context, reqI interface{}) (interface{}, error) { - req := reqI.(*snpb.GetPrevCommitInfoRequest) - info, err := s.storageNode.GetPrevCommitInfo(ctx, req.GetPrevHighWatermark()) + func(ctx context.Context, _ interface{}) (interface{}, error) { + info, err := s.storageNode.GetPrevCommitInfo(ctx, req.GetPrevVersion()) rsp := &snpb.GetPrevCommitInfoResponse{ StorageNodeID: s.storageNode.StorageNodeID(), CommitInfos: info, diff --git a/internal/storagenode/server_test.go b/internal/storagenode/server_test.go deleted file mode 100644 index 6a1381111..000000000 --- a/internal/storagenode/server_test.go +++ /dev/null @@ -1,259 +0,0 @@ -package storagenode - -// -//import ( -// "context" -// "github.com/kakao/varlog/internal/storagenode" -// "testing" -// -// "github.com/golang/mock/gomock" -// . "github.com/smartystreets/goconvey/convey" -// "go.uber.org/zap" -// -// "github.com/kakao/varlog/pkg/types" -// "github.com/kakao/varlog/pkg/verrors" -// "github.com/kakao/varlog/proto/snpb" -// "github.com/kakao/varlog/proto/varlogpb" -//) -// -//func TestManagementServiceGetMetadata(t *testing.T) { -// Convey("Given a ManagementService", t, func() { -// const clusterID = types.ClusterID(1) -// ctrl := gomock.NewController(t) -// defer ctrl.Finish() -// -// mock := NewMockManagement(ctrl) -// mock.EXPECT().ClusterID().Return(clusterID).AnyTimes() -// mock.EXPECT().StorageNodeID().Return(types.StorageNodeID(1)).AnyTimes() -// service := New(mock, storagenode.newNopTelmetryStub(), zap.NewNop()) -// -// gmReq := &snpb.GetMetadataRequest{ClusterID: clusterID} -// -// Convey("When the passed clusterID is not the same", func() { -// Convey("Then the GetMetadata should return an error", func() { -// gmReq.ClusterID += 1 -// _, err := service.GetMetadata(context.TODO(), gmReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the underlying Server failed to get metadata", func() { -// mock.EXPECT().GetMetadata(gomock.Any()).Return(nil, verrors.ErrInternal) -// Convey("Then the GetMetadata should return an error", func() { -// _, err := service.GetMetadata(context.TODO(), gmReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the underlying Server succeeds to get metadata", func() { -// mock.EXPECT().GetMetadata(gomock.Any()).Return(&varlogpb.StorageNodeMetadataDescriptor{}, nil) -// Convey("Then the GetMetadata should return the metadata", func() { -// _, err := service.GetMetadata(context.TODO(), gmReq) -// So(err, ShouldBeNil) -// }) -// }) -// }) -//} -// -//func TestManagementServiceAddLogStream(t *testing.T) { -// Convey("Given a ManagementService", t, func() { -// const ( -// clusterID = types.ClusterID(1) -// storageNodeID = types.StorageNodeID(1) -// ) -// ctrl := gomock.NewController(t) -// defer ctrl.Finish() -// -// mock := NewMockManagement(ctrl) -// mock.EXPECT().ClusterID().Return(clusterID).AnyTimes() -// mock.EXPECT().StorageNodeID().Return(storageNodeID).AnyTimes() -// service := New(mock, storagenode.newNopTelmetryStub(), zap.NewNop()) -// -// alsReq := &snpb.AddLogStreamRequest{ClusterID: clusterID, StorageNodeID: storageNodeID} -// -// Convey("When the underlying Server failed to add the LogStream", func() { -// mock.EXPECT().AddLogStream(gomock.Any(), gomock.Any(), gomock.Any()).Return("", verrors.ErrInternal) -// Convey("Then the AddLogStream should return an error", func() { -// _, err := service.AddLogStream(context.TODO(), alsReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the passed ClusterID is not the same", func() { -// Convey("Then the AddLogStream should return an error", func() { -// alsReq.ClusterID += 1 -// _, err := service.AddLogStream(context.TODO(), alsReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the passed StorageNodeID is not the same", func() { -// Convey("Then the AddLogStream should return an error", func() { -// alsReq.StorageNodeID += 1 -// _, err := service.AddLogStream(context.TODO(), alsReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the underlying Server succeeds to add the LogStream", func() { -// mock.EXPECT().AddLogStream(gomock.Any(), gomock.Any(), gomock.Any()).Return("/tmp", nil) -// Convey("Then the AddLogStream should return a response message about LogStream", func() { -// _, err := service.AddLogStream(context.TODO(), alsReq) -// So(err, ShouldBeNil) -// }) -// }) -// }) -//} -// -//func TestManagementServiceRemoveLogStream(t *testing.T) { -// Convey("Given a ManagementService", t, func() { -// const ( -// clusterID = types.ClusterID(1) -// storageNodeID = types.StorageNodeID(1) -// ) -// -// ctrl := gomock.NewController(t) -// defer ctrl.Finish() -// -// mock := NewMockManagement(ctrl) -// mock.EXPECT().ClusterID().Return(clusterID).AnyTimes() -// mock.EXPECT().StorageNodeID().Return(storageNodeID).AnyTimes() -// service := New(mock, storagenode.newNopTelmetryStub(), zap.NewNop()) -// -// rmReq := &snpb.RemoveLogStreamRequest{ClusterID: clusterID, StorageNodeID: storageNodeID} -// -// Convey("When the underlying Server failed to remove the LogStream", func() { -// mock.EXPECT().RemoveLogStream(gomock.Any(), gomock.Any()).Return(verrors.ErrInternal) -// Convey("Then the RemoveLogStream should return an error", func() { -// _, err := service.RemoveLogStream(context.TODO(), rmReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the passed ClusterID is invalid", func() { -// Convey("Then the RemoveLogStream should return an error", func() { -// rmReq.ClusterID += 1 -// _, err := service.RemoveLogStream(context.TODO(), rmReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the passed StorageNodeID is invalid", func() { -// Convey("Then the RemoveLogStream should return an error", func() { -// rmReq.StorageNodeID += 1 -// _, err := service.RemoveLogStream(context.TODO(), rmReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the underlying Server succeeds to remove the LogStream", func() { -// mock.EXPECT().RemoveLogStream(gomock.Any(), gomock.Any()).Return(nil) -// Convey("Then the RemoveLogStream should not return an error", func() { -// _, err := service.RemoveLogStream(context.TODO(), rmReq) -// So(err, ShouldBeNil) -// }) -// }) -// }) -//} -// -//func TestManagementServiceSeal(t *testing.T) { -// Convey("Given a ManagementService", t, func() { -// const ( -// clusterID = types.ClusterID(1) -// storageNodeID = types.StorageNodeID(1) -// ) -// -// ctrl := gomock.NewController(t) -// defer ctrl.Finish() -// -// mock := NewMockManagement(ctrl) -// mock.EXPECT().ClusterID().Return(clusterID).AnyTimes() -// mock.EXPECT().StorageNodeID().Return(storageNodeID).AnyTimes() -// service := New(mock, storagenode.newNopTelmetryStub(), zap.NewNop()) -// -// sealReq := &snpb.SealRequest{ClusterID: clusterID, StorageNodeID: storageNodeID} -// -// Convey("When the underlying Server failed to seal the LogStream", func() { -// mock.EXPECT().Seal(gomock.Any(), gomock.Any(), gomock.Any()).Return(varlogpb.LogStreamStatusRunning, types.GLSN(1), verrors.ErrInternal) -// Convey("Then the Seal should return an error", func() { -// _, err := service.Seal(context.TODO(), sealReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the underlying Server succeeds to seal the LogStream", func() { -// mock.EXPECT().Seal(gomock.Any(), gomock.Any(), gomock.Any()).Return(varlogpb.LogStreamStatusSealed, types.GLSN(1), nil) -// Convey("Then the Seal should not return an error", func() { -// _, err := service.Seal(context.TODO(), sealReq) -// So(err, ShouldBeNil) -// }) -// }) -// -// Convey("When the passed ClusterID is not the same", func() { -// Convey("Then the Seal should return an error", func() { -// sealReq.ClusterID += 1 -// _, err := service.Seal(context.TODO(), sealReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the passed StorageNodeID is not the same", func() { -// Convey("Then the Seal should return an error", func() { -// sealReq.StorageNodeID += 1 -// _, err := service.Seal(context.TODO(), sealReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// }) -//} -// -//func TestManagementServiceUnseal(t *testing.T) { -// Convey("Given that a ManagementService handles Unseal RPC call", t, func() { -// const ( -// clusterID = types.ClusterID(1) -// storageNodeID = types.StorageNodeID(1) -// ) -// -// ctrl := gomock.NewController(t) -// defer ctrl.Finish() -// -// mock := NewMockManagement(ctrl) -// mock.EXPECT().ClusterID().Return(clusterID).AnyTimes() -// mock.EXPECT().StorageNodeID().Return(storageNodeID).AnyTimes() -// service := New(mock, storagenode.newNopTelmetryStub(), zap.NewNop()) -// -// unsealReq := &snpb.UnsealRequest{ClusterID: clusterID, StorageNodeID: storageNodeID} -// -// Convey("When the underlying Server failed to unseal the LogStream", func() { -// mock.EXPECT().Unseal(gomock.Any(), gomock.Any()).Return(verrors.ErrInternal) -// Convey("Then the Unseal should return an error", func() { -// _, err := service.Unseal(context.TODO(), unsealReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the underlying Server succeeds to unseal the LogStream", func() { -// mock.EXPECT().Unseal(gomock.Any(), gomock.Any()).Return(nil) -// Convey("Then the ManagementService should not return an error", func() { -// _, err := service.Unseal(context.TODO(), unsealReq) -// So(err, ShouldBeNil) -// }) -// }) -// -// Convey("When the passed ClusterID is not the same", func() { -// Convey("Then the Unseal should return an error", func() { -// unsealReq.ClusterID += 1 -// _, err := service.Unseal(context.TODO(), unsealReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// -// Convey("When the passed StorageNodeID is not the same", func() { -// Convey("Then the Unseal should return an error", func() { -// unsealReq.StorageNodeID += 1 -// _, err := service.Unseal(context.TODO(), unsealReq) -// So(err, ShouldNotBeNil) -// }) -// }) -// }) -//} diff --git a/internal/storagenode/stopchannel/stop_channel_test.go b/internal/storagenode/stopchannel/stop_channel_test.go index ec4f0f8d5..685b02a8b 100644 --- a/internal/storagenode/stopchannel/stop_channel_test.go +++ b/internal/storagenode/stopchannel/stop_channel_test.go @@ -28,13 +28,8 @@ func TestStopChannelGoroutines(t *testing.T) { wg.Add(concurrency) for i := 0; i < concurrency; i++ { go func() { - for { - select { - case <-sc.StopC(): - wg.Done() - return - } - } + <-sc.StopC() + wg.Done() }() } go func() { diff --git a/internal/storagenode/storage/encode.go b/internal/storagenode/storage/encode.go index 1791b1a55..5fa8974ac 100644 --- a/internal/storagenode/storage/encode.go +++ b/internal/storagenode/storage/encode.go @@ -98,11 +98,8 @@ func encodeCommitContextKey(cc CommitContext) commitContextKey { func encodeCommitContextKeyInternal(cc CommitContext, key []byte) commitContextKey { key[0] = commitContextKeyPrefix - offset := 1 sz := types.GLSNLen - binary.BigEndian.PutUint64(key[offset:offset+sz], uint64(cc.PrevHighWatermark)) - - offset += sz + offset := 1 binary.BigEndian.PutUint64(key[offset:offset+sz], uint64(cc.HighWatermark)) offset += sz @@ -114,6 +111,10 @@ func encodeCommitContextKeyInternal(cc CommitContext, key []byte) commitContextK offset += sz binary.BigEndian.PutUint64(key[offset:offset+sz], uint64(cc.CommittedLLSNBegin)) + offset += sz + sz = types.VersionLen + binary.BigEndian.PutUint64(key[offset:offset+sz], uint64(cc.Version)) + return key } @@ -123,9 +124,6 @@ func decodeCommitContextKey(k commitContextKey) (cc CommitContext) { } sz := types.GLSNLen offset := 1 - cc.PrevHighWatermark = types.GLSN(binary.BigEndian.Uint64(k[offset : offset+sz])) - - offset += sz cc.HighWatermark = types.GLSN(binary.BigEndian.Uint64(k[offset : offset+sz])) offset += sz @@ -137,5 +135,9 @@ func decodeCommitContextKey(k commitContextKey) (cc CommitContext) { offset += sz cc.CommittedLLSNBegin = types.LLSN(binary.BigEndian.Uint64(k[offset : offset+sz])) + offset += sz + sz = types.VersionLen + cc.Version = types.Version(binary.BigEndian.Uint64(k[offset : offset+sz])) + return cc } diff --git a/internal/storagenode/storage/pebble_commit_batch.go b/internal/storagenode/storage/pebble_commit_batch.go index b439891e8..554a8138b 100644 --- a/internal/storagenode/storage/pebble_commit_batch.go +++ b/internal/storagenode/storage/pebble_commit_batch.go @@ -41,16 +41,16 @@ func newPebbleCommitBatch() *pebbleCommitBatch { return pebbleCommitBatchPool.Get().(*pebbleCommitBatch) } -func (cb *pebbleCommitBatch) release() { - cb.b = nil - cb.ps = nil - cb.cc = InvalidCommitContext - cb.snapshot.prevWrittenLLSN = types.InvalidLLSN - cb.snapshot.prevCommittedLLSN = types.InvalidLLSN - cb.snapshot.prevCommittedGLSN = types.InvalidGLSN - cb.progress.prevCommittedLLSN = types.InvalidLLSN - cb.progress.prevCommittedGLSN = types.InvalidGLSN - pebbleCommitBatchPool.Put(cb) +func (pcb *pebbleCommitBatch) release() { + pcb.b = nil + pcb.ps = nil + pcb.cc = InvalidCommitContext + pcb.snapshot.prevWrittenLLSN = types.InvalidLLSN + pcb.snapshot.prevCommittedLLSN = types.InvalidLLSN + pcb.snapshot.prevCommittedGLSN = types.InvalidGLSN + pcb.progress.prevCommittedLLSN = types.InvalidLLSN + pcb.progress.prevCommittedGLSN = types.InvalidGLSN + pebbleCommitBatchPool.Put(pcb) } func (pcb *pebbleCommitBatch) Put(llsn types.LLSN, glsn types.GLSN) error { diff --git a/internal/storagenode/storage/pebble_scanner.go b/internal/storagenode/storage/pebble_scanner.go index 731d25fd7..17a1cfee7 100644 --- a/internal/storagenode/storage/pebble_scanner.go +++ b/internal/storagenode/storage/pebble_scanner.go @@ -6,7 +6,7 @@ import ( "github.com/cockroachdb/pebble" "go.uber.org/zap" - "github.com/kakao/varlog/pkg/types" + "github.com/kakao/varlog/proto/varlogpb" ) type pebbleScanner struct { @@ -35,7 +35,7 @@ func (scanner *pebbleScanner) Next() ScanResult { scanner.logger.Warn("error while closing scanner", zap.Error(err)) } }() - logEntry := types.LogEntry{ + logEntry := varlogpb.LogEntry{ GLSN: decodeCommitKey(ck), LLSN: decodeDataKey(dk), } diff --git a/internal/storagenode/storage/pebble_storage.go b/internal/storagenode/storage/pebble_storage.go index 15711ac43..29352d8dd 100644 --- a/internal/storagenode/storage/pebble_storage.go +++ b/internal/storagenode/storage/pebble_storage.go @@ -9,6 +9,7 @@ import ( "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/verrors" + "github.com/kakao/varlog/proto/varlogpb" ) const PebbleStorageName = "pebble" @@ -108,12 +109,11 @@ func (ps *pebbleStorage) readLastCommitContext(onlyNonEmpty bool) (CommitContext continue } return cc, true - } return InvalidCommitContext, false } -func (ps *pebbleStorage) readLogEntryBoundary() (types.LogEntry, types.LogEntry, bool, error) { +func (ps *pebbleStorage) readLogEntryBoundary() (varlogpb.LogEntry, varlogpb.LogEntry, bool, error) { iter := ps.db.NewIter(&pebble.IterOptions{ LowerBound: []byte{commitKeyPrefix}, UpperBound: []byte{commitKeySentinelPrefix}, @@ -123,12 +123,12 @@ func (ps *pebbleStorage) readLogEntryBoundary() (types.LogEntry, types.LogEntry, }() if !iter.First() { - return types.InvalidLogEntry, types.InvalidLogEntry, false, nil + return varlogpb.InvalidLogEntry(), varlogpb.InvalidLogEntry(), false, nil } firstGLSN := decodeCommitKey(iter.Key()) firstLE, err := ps.Read(firstGLSN) if err != nil { - return types.InvalidLogEntry, types.InvalidLogEntry, true, err + return varlogpb.InvalidLogEntry(), varlogpb.InvalidLogEntry(), true, err } iter.Last() @@ -137,7 +137,7 @@ func (ps *pebbleStorage) readLogEntryBoundary() (types.LogEntry, types.LogEntry, return firstLE, lastLE, true, err } -func (ps *pebbleStorage) readUncommittedLogEntryBoundary(lastCommittedLogEntry types.LogEntry) (types.LLSN, types.LLSN) { +func (ps *pebbleStorage) readUncommittedLogEntryBoundary(lastCommittedLogEntry varlogpb.LogEntry) (types.LLSN, types.LLSN) { dk := encodeDataKey(lastCommittedLogEntry.LLSN + 1) iter := ps.db.NewIter(&pebble.IterOptions{ LowerBound: dk, @@ -270,7 +270,7 @@ func (ps *pebbleStorage) Name() string { return PebbleStorageName } -func (ps *pebbleStorage) Read(glsn types.GLSN) (types.LogEntry, error) { +func (ps *pebbleStorage) Read(glsn types.GLSN) (varlogpb.LogEntry, error) { rkb := newCommitKeyBuffer() defer rkb.release() @@ -280,7 +280,7 @@ func (ps *pebbleStorage) Read(glsn types.GLSN) (types.LogEntry, error) { if err == pebble.ErrNotFound { err = verrors.ErrNoEntry } - return types.InvalidLogEntry, errors.WithStack(err) + return varlogpb.InvalidLogEntry(), errors.WithStack(err) } data, dcloser, err := ps.db.Get(dk) @@ -288,10 +288,10 @@ func (ps *pebbleStorage) Read(glsn types.GLSN) (types.LogEntry, error) { if err == pebble.ErrNotFound { err = verrors.ErrNoEntry } - return types.InvalidLogEntry, errors.WithStack(err) + return varlogpb.InvalidLogEntry(), errors.WithStack(err) } - logEntry := types.LogEntry{ + logEntry := varlogpb.LogEntry{ GLSN: glsn, LLSN: decodeDataKey(dk), } @@ -300,12 +300,12 @@ func (ps *pebbleStorage) Read(glsn types.GLSN) (types.LogEntry, error) { copy(logEntry.Data, data) } if err := multierr.Append(errors.WithStack(ccloser.Close()), errors.WithStack(dcloser.Close())); err != nil { - return types.InvalidLogEntry, err + return varlogpb.InvalidLogEntry(), err } return logEntry, nil } -func (ps *pebbleStorage) ReadAt(llsn types.LLSN) (types.LogEntry, error) { +func (ps *pebbleStorage) ReadAt(llsn types.LLSN) (varlogpb.LogEntry, error) { // NOTE: Scanning by commit context can be better. iter := ps.db.NewIter(&pebble.IterOptions{ LowerBound: []byte{commitKeyPrefix}, @@ -323,7 +323,7 @@ func (ps *pebbleStorage) ReadAt(llsn types.LLSN) (types.LogEntry, error) { } iter.Next() } - return types.InvalidLogEntry, errors.WithStack(verrors.ErrNoEntry) + return varlogpb.InvalidLogEntry(), errors.WithStack(verrors.ErrNoEntry) } func (ps *pebbleStorage) Scan(begin, end types.GLSN) Scanner { @@ -462,7 +462,7 @@ func (ps *pebbleStorage) StoreCommitContext(cc CommitContext) error { return ps.db.Set(cck, nil, ps.commitContextOption) } -func (ps *pebbleStorage) ReadFloorCommitContext(prevHighWatermark types.GLSN) (CommitContext, error) { +func (ps *pebbleStorage) ReadFloorCommitContext(ver types.Version) (CommitContext, error) { iter := ps.db.NewIter(&pebble.IterOptions{ LowerBound: []byte{commitContextKeyPrefix}, UpperBound: []byte{commitContextKeySentinelPrefix}, @@ -475,19 +475,20 @@ func (ps *pebbleStorage) ReadFloorCommitContext(prevHighWatermark types.GLSN) (C return InvalidCommitContext, ErrNotFoundCommitContext } - last := decodeCommitContextKey(iter.Key()) - if last.HighWatermark <= prevHighWatermark { + cc := decodeCommitContextKey(iter.Key()) + if cc.Version <= ver { return InvalidCommitContext, ErrNotFoundCommitContext } - // NotFound - if !iter.SeekLT(encodeCommitContextKey(CommitContext{ - PrevHighWatermark: prevHighWatermark + 1, - })) { - return InvalidCommitContext, ErrNotFoundCommitContext + for iter.Prev() { + prev := decodeCommitContextKey(iter.Key()) + if prev.Version <= ver { + return cc, nil + } + + cc = prev } - cc := decodeCommitContextKey(iter.Key()) return cc, nil } @@ -495,20 +496,21 @@ func (ps *pebbleStorage) CommitContextOf(glsn types.GLSN) (CommitContext, error) if glsn.Invalid() { return InvalidCommitContext, ErrNotFoundCommitContext } - upperKey := encodeCommitContextKey(CommitContext{ - PrevHighWatermark: glsn, + lowerKey := encodeCommitContextKey(CommitContext{ + HighWatermark: glsn, }) iter := ps.db.NewIter(&pebble.IterOptions{ - LowerBound: []byte{commitContextKeyPrefix}, - UpperBound: upperKey, + LowerBound: lowerKey, + UpperBound: []byte{commitContextKeySentinelPrefix}, }) defer func() { _ = iter.Close() }() - if !iter.Last() { + if !iter.First() { return InvalidCommitContext, ErrNotFoundCommitContext } + if cc := decodeCommitContextKey(iter.Key()); cc.CommittedGLSNBegin <= glsn && glsn < cc.CommittedGLSNEnd { return cc, nil } diff --git a/internal/storagenode/storage/pebble_write_batch.go b/internal/storagenode/storage/pebble_write_batch.go index 5fb80c359..17c6a59c9 100644 --- a/internal/storagenode/storage/pebble_write_batch.go +++ b/internal/storagenode/storage/pebble_write_batch.go @@ -27,11 +27,11 @@ func newPebbleWriteBatch() *pebbleWriteBatch { return pebbleWriteBatchPool.Get().(*pebbleWriteBatch) } -func (wb *pebbleWriteBatch) release() { - wb.b = nil - wb.ps = nil - wb.prevWrittenLLSN = types.InvalidLLSN - pebbleWriteBatchPool.Put(wb) +func (pwb *pebbleWriteBatch) release() { + pwb.b = nil + pwb.ps = nil + pwb.prevWrittenLLSN = types.InvalidLLSN + pebbleWriteBatchPool.Put(pwb) } func (pwb *pebbleWriteBatch) Put(llsn types.LLSN, data []byte) error { diff --git a/internal/storagenode/storage/storage.go b/internal/storagenode/storage/storage.go index 0f077362e..e6ea65d49 100644 --- a/internal/storagenode/storage/storage.go +++ b/internal/storagenode/storage/storage.go @@ -6,6 +6,7 @@ import ( "errors" "github.com/kakao/varlog/pkg/types" + "github.com/kakao/varlog/proto/varlogpb" ) var ( @@ -23,8 +24,8 @@ type RecoveryInfo struct { Found bool } LogEntryBoundary struct { - First types.LogEntry - Last types.LogEntry + First varlogpb.LogEntry + Last varlogpb.LogEntry Found bool } UncommittedLogEntryBoundary struct { @@ -35,13 +36,13 @@ type RecoveryInfo struct { // ScanResult represents a result of Scanner.Next() method. It should be immutable. type ScanResult struct { - LogEntry types.LogEntry + LogEntry varlogpb.LogEntry Err error } func NewInvalidScanResult(err error) ScanResult { return ScanResult{ - LogEntry: types.InvalidLogEntry, + LogEntry: varlogpb.InvalidLogEntry(), Err: err, } } @@ -72,31 +73,28 @@ type CommitBatch interface { } var InvalidCommitContext = CommitContext{ - HighWatermark: types.InvalidGLSN, - PrevHighWatermark: types.InvalidGLSN, + Version: types.InvalidVersion, CommittedGLSNBegin: types.InvalidGLSN, CommittedGLSNEnd: types.InvalidGLSN, } type CommitContext struct { + Version types.Version HighWatermark types.GLSN - PrevHighWatermark types.GLSN CommittedGLSNBegin types.GLSN CommittedGLSNEnd types.GLSN CommittedLLSNBegin types.LLSN } func (cc CommitContext) Empty() bool { - numCommits := cc.CommittedGLSNEnd - cc.CommittedGLSNBegin - if numCommits < 0 { + if cc.CommittedGLSNEnd < cc.CommittedGLSNBegin { panic("invalid commit context") } - return numCommits == 0 + return cc.CommittedGLSNEnd-cc.CommittedGLSNBegin == 0 } func (cc CommitContext) Equal(other CommitContext) bool { - return cc.HighWatermark == other.HighWatermark && - cc.PrevHighWatermark == other.PrevHighWatermark && + return cc.Version == other.Version && cc.CommittedGLSNBegin == other.CommittedGLSNBegin && cc.CommittedGLSNEnd == other.CommittedGLSNEnd && cc.CommittedLLSNBegin == other.CommittedLLSNBegin @@ -111,10 +109,10 @@ type Storage interface { // Read reads the log entry at the glsn. // If there is no entry at the given position, it returns varlog.ErrNoEntry. - Read(glsn types.GLSN) (types.LogEntry, error) + Read(glsn types.GLSN) (varlogpb.LogEntry, error) // ReadAt reads the log entry at the llsn. - ReadAt(llsn types.LLSN) (types.LogEntry, error) + ReadAt(llsn types.LLSN) (varlogpb.LogEntry, error) // Scan returns Scanner that reads log entries from the glsn. Scan(begin, end types.GLSN) Scanner @@ -134,7 +132,7 @@ type Storage interface { // ReadFloorCommitContext returns a commit context whose member prevHighWatermark is the // greatest commit context less than or equal to the given parameter prevHighWatermark. - ReadFloorCommitContext(prevHighWatermark types.GLSN) (CommitContext, error) + ReadFloorCommitContext(ver types.Version) (CommitContext, error) // CommitContextOf looks up a commit context that contains the log entry positioned at the // given glsn. diff --git a/internal/storagenode/storage/storage_mock.go b/internal/storagenode/storage/storage_mock.go index 711b756a6..d7e4cb942 100644 --- a/internal/storagenode/storage/storage_mock.go +++ b/internal/storagenode/storage/storage_mock.go @@ -10,6 +10,7 @@ import ( gomock "github.com/golang/mock/gomock" types "github.com/kakao/varlog/pkg/types" + varlogpb "github.com/kakao/varlog/proto/varlogpb" ) // MockScanner is a mock of Scanner interface. @@ -346,10 +347,10 @@ func (mr *MockStorageMockRecorder) Path() *gomock.Call { } // Read mocks base method. -func (m *MockStorage) Read(arg0 types.GLSN) (types.LogEntry, error) { +func (m *MockStorage) Read(arg0 types.GLSN) (varlogpb.LogEntry, error) { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "Read", arg0) - ret0, _ := ret[0].(types.LogEntry) + ret0, _ := ret[0].(varlogpb.LogEntry) ret1, _ := ret[1].(error) return ret0, ret1 } @@ -361,10 +362,10 @@ func (mr *MockStorageMockRecorder) Read(arg0 interface{}) *gomock.Call { } // ReadAt mocks base method. -func (m *MockStorage) ReadAt(arg0 types.LLSN) (types.LogEntry, error) { +func (m *MockStorage) ReadAt(arg0 types.LLSN) (varlogpb.LogEntry, error) { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "ReadAt", arg0) - ret0, _ := ret[0].(types.LogEntry) + ret0, _ := ret[0].(varlogpb.LogEntry) ret1, _ := ret[1].(error) return ret0, ret1 } @@ -376,7 +377,7 @@ func (mr *MockStorageMockRecorder) ReadAt(arg0 interface{}) *gomock.Call { } // ReadFloorCommitContext mocks base method. -func (m *MockStorage) ReadFloorCommitContext(arg0 types.GLSN) (CommitContext, error) { +func (m *MockStorage) ReadFloorCommitContext(arg0 types.Version) (CommitContext, error) { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "ReadFloorCommitContext", arg0) ret0, _ := ret[0].(CommitContext) diff --git a/internal/storagenode/storage/storage_test.go b/internal/storagenode/storage/storage_test.go index b4bdeceee..8ec9de30f 100644 --- a/internal/storagenode/storage/storage_test.go +++ b/internal/storagenode/storage/storage_test.go @@ -10,6 +10,7 @@ import ( "go.uber.org/zap" "github.com/kakao/varlog/pkg/types" + "github.com/kakao/varlog/proto/varlogpb" ) type testStorage struct { @@ -198,7 +199,7 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { var ( err error - le types.LogEntry + le varlogpb.LogEntry wb WriteBatch cb CommitBatch cc CommitContext @@ -227,8 +228,8 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { // Commit // Invalid commit context cc = CommitContext{ + Version: 1, HighWatermark: 1, - PrevHighWatermark: 0, CommittedGLSNBegin: 2, CommittedGLSNEnd: 1, } @@ -237,8 +238,8 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { // Invalid commit: good CC, but no entries cb, err = strg.NewCommitBatch(CommitContext{ + Version: 1, HighWatermark: 3, - PrevHighWatermark: 0, CommittedGLSNBegin: 2, CommittedGLSNEnd: 4, }) @@ -248,8 +249,8 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { // (LLSN,GLSN): (1,2), (2,3) cc = CommitContext{ + Version: 1, HighWatermark: 3, - PrevHighWatermark: 0, CommittedGLSNBegin: 2, CommittedGLSNEnd: 4, } @@ -283,8 +284,8 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { // Commit // (LLSN,GLSN): (3,6), (4,7) cc = CommitContext{ + Version: 2, HighWatermark: 8, - PrevHighWatermark: 3, CommittedGLSNBegin: 6, CommittedGLSNEnd: 8, } @@ -302,8 +303,8 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { // Commit (invalid commit context) overlapped with previous committed range cc = CommitContext{ + Version: 3, HighWatermark: 9, - PrevHighWatermark: 8, CommittedGLSNBegin: 7, CommittedGLSNEnd: 8, } @@ -312,8 +313,8 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { // Commit (not written log) cc = CommitContext{ + Version: 3, HighWatermark: 9, - PrevHighWatermark: 8, CommittedGLSNBegin: 9, CommittedGLSNEnd: 10, } @@ -340,7 +341,7 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { sc = strg.Scan(2, 8) sr = sc.Next() require.True(t, sr.Valid()) - require.Equal(t, types.LogEntry{ + require.Equal(t, varlogpb.LogEntry{ GLSN: 2, LLSN: 1, Data: nil, @@ -348,7 +349,7 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { sr = sc.Next() require.True(t, sr.Valid()) - require.Equal(t, types.LogEntry{ + require.Equal(t, varlogpb.LogEntry{ GLSN: 3, LLSN: 2, Data: []byte("foo"), @@ -356,7 +357,7 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { sr = sc.Next() require.True(t, sr.Valid()) - require.Equal(t, types.LogEntry{ + require.Equal(t, varlogpb.LogEntry{ GLSN: 6, LLSN: 3, Data: []byte("bar"), @@ -364,7 +365,7 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { sr = sc.Next() require.True(t, sr.Valid()) - require.Equal(t, types.LogEntry{ + require.Equal(t, varlogpb.LogEntry{ GLSN: 7, LLSN: 4, Data: nil, @@ -402,7 +403,7 @@ func TestStorageWriteCommitReadScanDelete(t *testing.T) { sc = strg.Scan(0, 7) sr = sc.Next() require.True(t, sr.Valid()) - require.Equal(t, types.LogEntry{ + require.Equal(t, varlogpb.LogEntry{ GLSN: 6, LLSN: 3, Data: []byte("bar"), @@ -453,7 +454,7 @@ func TestStorageInterleavedCommit(t *testing.T) { require.NoError(t, wb.Close()) cc1 := CommitContext{ - PrevHighWatermark: 0, + Version: 1, HighWatermark: 4, CommittedGLSNBegin: 1, CommittedGLSNEnd: 5, @@ -464,7 +465,7 @@ func TestStorageInterleavedCommit(t *testing.T) { require.NoError(t, cb1.Put(2, 2)) cc2 := CommitContext{ - PrevHighWatermark: 0, + Version: 1, HighWatermark: 5, CommittedGLSNBegin: 3, CommittedGLSNEnd: 6, @@ -503,8 +504,8 @@ func TestStorageReadRecoveryInfoOnlyEmptyCommitContext(t *testing.T) { testEachStorage(t, func(t *testing.T, strg Storage) { // empty cc, hwm=1 cb, err := strg.NewCommitBatch(CommitContext{ + Version: 1, HighWatermark: 1, - PrevHighWatermark: 0, CommittedGLSNBegin: 1, CommittedGLSNEnd: 1, }) @@ -524,8 +525,8 @@ func TestStorageReadRecoveryInfoOnlyEmptyCommitContext(t *testing.T) { // empty cc, hwm=2 cb, err = strg.NewCommitBatch(CommitContext{ + Version: 2, HighWatermark: 2, - PrevHighWatermark: 1, CommittedGLSNBegin: 1, CommittedGLSNEnd: 1, }) @@ -556,8 +557,8 @@ func TestStorageReadRecoveryInfoNonEmptyCommitContext(t *testing.T) { require.NoError(t, wb.Apply()) require.NoError(t, wb.Close()) cb, err := strg.NewCommitBatch(CommitContext{ + Version: 1, HighWatermark: 5, - PrevHighWatermark: 0, CommittedGLSNBegin: 3, CommittedGLSNEnd: 5, }) @@ -591,8 +592,8 @@ func TestStorageReadRecoveryInfoMixed(t *testing.T) { require.NoError(t, wb.Apply()) require.NoError(t, wb.Close()) cb, err := strg.NewCommitBatch(CommitContext{ + Version: 1, HighWatermark: 5, - PrevHighWatermark: 0, CommittedGLSNBegin: 3, CommittedGLSNEnd: 5, }) @@ -604,8 +605,8 @@ func TestStorageReadRecoveryInfoMixed(t *testing.T) { // empty cc, hwm=6 cb, err = strg.NewCommitBatch(CommitContext{ + Version: 2, HighWatermark: 6, - PrevHighWatermark: 5, CommittedGLSNBegin: 5, CommittedGLSNEnd: 5, }) @@ -615,8 +616,8 @@ func TestStorageReadRecoveryInfoMixed(t *testing.T) { // empty cc, hwm=7 cb, err = strg.NewCommitBatch(CommitContext{ + Version: 3, HighWatermark: 7, - PrevHighWatermark: 6, CommittedGLSNBegin: 6, // or 5? TODO: clarify it CommittedGLSNEnd: 6, // or 5? TODO: clarify it }) @@ -651,8 +652,8 @@ func TestStorageRecoveryInfoUncommitted(t *testing.T) { require.NoError(t, wb.Close()) cb, err := strg.NewCommitBatch(CommitContext{ + Version: 1, HighWatermark: 5, - PrevHighWatermark: 0, CommittedGLSNBegin: 1, CommittedGLSNEnd: 3, }) @@ -696,8 +697,8 @@ func TestStorageReadFloorCommitContext(t *testing.T) { require.NoError(t, wb.Close()) cb, err := strg.NewCommitBatch(CommitContext{ + Version: 1, HighWatermark: 6, - PrevHighWatermark: 0, CommittedGLSNBegin: 5, CommittedGLSNEnd: 7, }) @@ -708,8 +709,8 @@ func TestStorageReadFloorCommitContext(t *testing.T) { require.NoError(t, cb.Close()) cb, err = strg.NewCommitBatch(CommitContext{ + Version: 2, HighWatermark: 10, - PrevHighWatermark: 6, CommittedGLSNBegin: 9, CommittedGLSNEnd: 11, }) @@ -724,26 +725,14 @@ func TestStorageReadFloorCommitContext(t *testing.T) { require.NoError(t, err) require.Equal(t, cc.HighWatermark, types.GLSN(6)) - cc, err = strg.ReadFloorCommitContext(4) - require.NoError(t, err) - require.Equal(t, cc.HighWatermark, types.GLSN(6)) - - cc, err = strg.ReadFloorCommitContext(5) - require.NoError(t, err) - require.Equal(t, cc.HighWatermark, types.GLSN(6)) - - cc, err = strg.ReadFloorCommitContext(6) - require.NoError(t, err) - require.Equal(t, cc.HighWatermark, types.GLSN(10)) - - cc, err = strg.ReadFloorCommitContext(7) + cc, err = strg.ReadFloorCommitContext(1) require.NoError(t, err) require.Equal(t, cc.HighWatermark, types.GLSN(10)) - _, err = strg.ReadFloorCommitContext(10) + _, err = strg.ReadFloorCommitContext(2) require.ErrorIs(t, ErrNotFoundCommitContext, err) - _, err = strg.ReadFloorCommitContext(11) + _, err = strg.ReadFloorCommitContext(3) require.ErrorIs(t, ErrNotFoundCommitContext, err) require.NoError(t, strg.Close()) @@ -775,7 +764,7 @@ func TestStorageCommitContextOf(t *testing.T) { require.NoError(t, wb.Close()) cb, err := strg.NewCommitBatch(CommitContext{ - PrevHighWatermark: 0, + Version: 1, HighWatermark: 5, CommittedGLSNBegin: 1, CommittedGLSNEnd: 1, @@ -786,7 +775,7 @@ func TestStorageCommitContextOf(t *testing.T) { require.NoError(t, cb.Close()) cc1 := CommitContext{ - PrevHighWatermark: 5, + Version: 2, HighWatermark: 10, CommittedGLSNBegin: 9, CommittedGLSNEnd: 11, @@ -800,7 +789,7 @@ func TestStorageCommitContextOf(t *testing.T) { require.NoError(t, cb.Close()) cb, err = strg.NewCommitBatch(CommitContext{ - PrevHighWatermark: 10, + Version: 3, HighWatermark: 15, CommittedGLSNBegin: 11, CommittedGLSNEnd: 11, @@ -811,7 +800,7 @@ func TestStorageCommitContextOf(t *testing.T) { require.NoError(t, cb.Close()) cb, err = strg.NewCommitBatch(CommitContext{ - PrevHighWatermark: 15, + Version: 4, HighWatermark: 20, CommittedGLSNBegin: 11, CommittedGLSNEnd: 11, @@ -822,7 +811,7 @@ func TestStorageCommitContextOf(t *testing.T) { require.NoError(t, cb.Close()) cc2 := CommitContext{ - PrevHighWatermark: 20, + Version: 5, HighWatermark: 25, CommittedGLSNBegin: 21, CommittedGLSNEnd: 23, diff --git a/internal/storagenode/storage_node.go b/internal/storagenode/storage_node.go index 15a5fb90a..77271c8c4 100644 --- a/internal/storagenode/storage_node.go +++ b/internal/storagenode/storage_node.go @@ -20,6 +20,8 @@ import ( "google.golang.org/grpc/health" "google.golang.org/grpc/health/grpc_health_v1" + "github.com/kakao/varlog/internal/storagenode/volume" + "github.com/kakao/varlog/internal/storagenode/executor" "github.com/kakao/varlog/internal/storagenode/executorsmap" "github.com/kakao/varlog/internal/storagenode/id" @@ -89,6 +91,8 @@ var _ fmt.Stringer = (*StorageNode)(nil) var _ telemetry.Measurable = (*StorageNode)(nil) func New(ctx context.Context, opts ...Option) (*StorageNode, error) { + const hintNumExecutors = 32 + cfg, err := newConfig(opts) if err != nil { return nil, err @@ -97,7 +101,7 @@ func New(ctx context.Context, opts ...Option) (*StorageNode, error) { config: *cfg, storageNodePaths: set.New(len(cfg.volumes)), tsp: timestamper.New(), - executors: executorsmap.New(32), + executors: executorsmap.New(hintNumExecutors), } sn.pprofServer = pprof.New(sn.pprofOpts...) @@ -112,12 +116,15 @@ func New(ctx context.Context, opts ...Option) (*StorageNode, error) { sn.logger = sn.logger.Named("storagenode").With( zap.Uint32("cid", uint32(sn.cid)), - zap.Uint32("snid", uint32(sn.snid)), + zap.Int32("snid", int32(sn.snid)), ) for v := range sn.volumes { - vol := v.(Volume) - snPath, err := vol.CreateStorageNodePath(sn.cid, sn.snid) + vol, ok := v.(volume.Volume) + if !ok { + continue + } + snPath, err := volume.CreateStorageNodePath(vol, sn.cid, sn.snid) if err != nil { return nil, err } @@ -152,7 +159,10 @@ func New(ctx context.Context, opts ...Option) (*StorageNode, error) { // log stream path logStreamPaths := set.New(0) for v := range sn.volumes { - vol := v.(Volume) + vol, ok := v.(volume.Volume) + if !ok { + continue + } paths := vol.ReadLogStreamPaths(sn.cid, sn.snid) for _, path := range paths { if logStreamPaths.Contains(path) { @@ -163,17 +173,20 @@ func New(ctx context.Context, opts ...Option) (*StorageNode, error) { } for logStreamPathIf := range logStreamPaths { - logStreamPath := logStreamPathIf.(string) - _, _, _, logStreamID, err := ParseLogStreamPath(logStreamPath) + logStreamPath, ok := logStreamPathIf.(string) + if !ok { + continue + } + _, _, _, topicID, logStreamID, err := volume.ParseLogStreamPath(logStreamPath) if err != nil { return nil, err } - strg, err := sn.createStorage(context.Background(), logStreamPath) + strg, err := sn.createStorage(logStreamPath) if err != nil { return nil, err } - if err := sn.startLogStream(context.Background(), logStreamID, strg); err != nil { + if err := sn.startLogStream(topicID, logStreamID, strg); err != nil { return nil, err } } @@ -214,8 +227,6 @@ func (sn *StorageNode) Run() error { sn.stopper.state = storageNodeRunning } - sn.healthServer.SetServingStatus("", grpc_health_v1.HealthCheckResponse_SERVING) - // mux mux := cmux.New(sn.listener) httpL := mux.Match(cmux.HTTP1Fast()) @@ -238,6 +249,7 @@ func (sn *StorageNode) Run() error { return nil }) + sn.healthServer.SetServingStatus("", grpc_health_v1.HealthCheckResponse_SERVING) sn.stopper.mu.Unlock() sn.logger.Info("start") return sn.servers.Wait() @@ -247,8 +259,7 @@ func (sn *StorageNode) Close() { sn.stopper.mu.Lock() defer sn.stopper.mu.Unlock() - switch sn.stopper.state { - case storageNodeClosed: + if sn.stopper.state == storageNodeClosed { return } sn.stopper.state = storageNodeClosed @@ -283,9 +294,9 @@ func (sn *StorageNode) StorageNodeID() types.StorageNodeID { return sn.snid } -// GetMeGetMetadata implements the Server GetMetadata method. +// GetMetadata implements the Server GetMetadata method. func (sn *StorageNode) GetMetadata(ctx context.Context) (*varlogpb.StorageNodeMetadataDescriptor, error) { - ctx, span := sn.tmStub.StartSpan(ctx, "storagenode.GetMetadata") + _, span := sn.tmStub.StartSpan(ctx, "storagenode.GetMetadata") defer span.End() sn.muAddr.RLock() @@ -295,15 +306,20 @@ func (sn *StorageNode) GetMetadata(ctx context.Context) (*varlogpb.StorageNodeMe snmeta := &varlogpb.StorageNodeMetadataDescriptor{ ClusterID: sn.cid, StorageNode: &varlogpb.StorageNodeDescriptor{ - StorageNodeID: sn.snid, - Address: addr, - Status: varlogpb.StorageNodeStatusRunning, // TODO (jun), Ready, Running, Stopping, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: sn.snid, + Address: addr, + }, + Status: varlogpb.StorageNodeStatusRunning, // TODO (jun), Ready, Running, Stopping, }, CreatedTime: sn.tsp.Created(), UpdatedTime: sn.tsp.LastUpdated(), } for snPathIf := range sn.storageNodePaths { - snPath := snPathIf.(string) + snPath, ok := snPathIf.(string) + if !ok { + continue + } snmeta.StorageNode.Storages = append(snmeta.StorageNode.Storages, &varlogpb.StorageDescriptor{ Path: snPath, Used: 0, @@ -311,11 +327,11 @@ func (sn *StorageNode) GetMetadata(ctx context.Context) (*varlogpb.StorageNodeMe }) } - snmeta.LogStreams = sn.logStreamMetadataDescriptors(ctx) + snmeta.LogStreams = sn.logStreamMetadataDescriptors() return snmeta, nil } -func (sn *StorageNode) logStreamMetadataDescriptors(ctx context.Context) []varlogpb.LogStreamMetadataDescriptor { +func (sn *StorageNode) logStreamMetadataDescriptors() []varlogpb.LogStreamMetadataDescriptor { lsmetas := make([]varlogpb.LogStreamMetadataDescriptor, 0, sn.estimatedNumberOfExecutors()) sn.forEachExecutors(func(_ types.LogStreamID, extor executor.Executor) { lsmetas = append(lsmetas, extor.Metadata()) @@ -324,8 +340,8 @@ func (sn *StorageNode) logStreamMetadataDescriptors(ctx context.Context) []varlo } // AddLogStream implements the Server AddLogStream method. -func (sn *StorageNode) AddLogStream(ctx context.Context, logStreamID types.LogStreamID, storageNodePath string) (string, error) { - logStreamPath, err := sn.addLogStream(ctx, logStreamID, storageNodePath) +func (sn *StorageNode) AddLogStream(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, storageNodePath string) (string, error) { + logStreamPath, err := sn.addLogStream(topicID, logStreamID, storageNodePath) if err != nil { return "", err } @@ -334,69 +350,69 @@ func (sn *StorageNode) AddLogStream(ctx context.Context, logStreamID types.LogSt return logStreamPath, nil } -func (sn *StorageNode) addLogStream(ctx context.Context, logStreamID types.LogStreamID, storageNodePath string) (lsPath string, err error) { +func (sn *StorageNode) addLogStream(topicID types.TopicID, logStreamID types.LogStreamID, storageNodePath string) (lsPath string, err error) { if !sn.storageNodePaths.Contains(storageNodePath) { return "", errors.New("storagenode: no such storage path") } - lsPath, err = CreateLogStreamPath(storageNodePath, logStreamID) + lsPath, err = volume.CreateLogStreamPath(storageNodePath, topicID, logStreamID) if err != nil { return "", err } - _, loaded := sn.executors.LoadOrStore(logStreamID, executor.Executor(nil)) + _, loaded := sn.executors.LoadOrStore(topicID, logStreamID, executor.Executor(nil)) if loaded { return "", errors.New("storagenode: log stream already exists") } defer func() { if err != nil { - _, _ = sn.executors.LoadAndDelete(logStreamID) + _, _ = sn.executors.LoadAndDelete(topicID, logStreamID) } }() - strg, err := sn.createStorage(ctx, lsPath) + strg, err := sn.createStorage(lsPath) if err != nil { return "", err } - err = sn.startLogStream(ctx, logStreamID, strg) + err = sn.startLogStream(topicID, logStreamID, strg) if err != nil { return "", err } return lsPath, nil } -func (sn *StorageNode) createStorage(ctx context.Context, logStreamPath string) (storage.Storage, error) { - opts := append(sn.storageOpts, storage.WithPath(logStreamPath), storage.WithLogger(sn.logger)) - return storage.NewStorage(opts...) +func (sn *StorageNode) createStorage(logStreamPath string) (storage.Storage, error) { + return storage.NewStorage(append( + sn.storageOpts, + storage.WithPath(logStreamPath), + storage.WithLogger(sn.logger), + )...) } -func (sn *StorageNode) startLogStream(ctx context.Context, logStreamID types.LogStreamID, storage storage.Storage) (err error) { - opts := append(sn.executorOpts, +func (sn *StorageNode) startLogStream(topicID types.TopicID, logStreamID types.LogStreamID, storage storage.Storage) (err error) { + lse, err := executor.New(append(sn.executorOpts, executor.WithStorage(storage), executor.WithStorageNodeID(sn.snid), + executor.WithTopicID(topicID), executor.WithLogStreamID(logStreamID), executor.WithMeasurable(sn), executor.WithLogger(sn.logger), - ) - lse, err := executor.New(opts...) + )...) if err != nil { return err } sn.tsp.Touch() - sn.executors.Store(logStreamID, lse) - return nil + return sn.executors.Store(topicID, logStreamID, lse) } // RemoveLogStream implements the Server RemoveLogStream method. -func (sn *StorageNode) RemoveLogStream(ctx context.Context, logStreamID types.LogStreamID) error { - ifaceExecutor, loaded := sn.executors.LoadAndDelete(logStreamID) +func (sn *StorageNode) RemoveLogStream(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) error { + lse, loaded := sn.executors.LoadAndDelete(topicID, logStreamID) if !loaded { return verrors.ErrNotExist } - lse := ifaceExecutor.(executor.Executor) - lse.Close() // TODO (jun): Is removing data path optional or default behavior? @@ -409,20 +425,20 @@ func (sn *StorageNode) RemoveLogStream(ctx context.Context, logStreamID types.Lo return nil } -func (sn *StorageNode) lookupExecutor(logStreamID types.LogStreamID) (executor.Executor, error) { - ifaceExecutor, ok := sn.executors.Load(logStreamID) +func (sn *StorageNode) lookupExecutor(topicID types.TopicID, logStreamID types.LogStreamID) (executor.Executor, error) { + extor, ok := sn.executors.Load(topicID, logStreamID) if !ok { return nil, errors.WithStack(errNoLogStream) } - if ifaceExecutor == nil { + if extor == nil { return nil, errors.WithStack(errNotReadyLogStream) } - return ifaceExecutor.(executor.Executor), nil + return extor, nil } // Seal implements the Server Seal method. -func (sn *StorageNode) Seal(ctx context.Context, logStreamID types.LogStreamID, lastCommittedGLSN types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { - lse, err := sn.lookupExecutor(logStreamID) +func (sn *StorageNode) Seal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, lastCommittedGLSN types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { + lse, err := sn.lookupExecutor(topicID, logStreamID) if err != nil { return varlogpb.LogStreamStatusRunning, types.InvalidGLSN, err } @@ -435,8 +451,8 @@ func (sn *StorageNode) Seal(ctx context.Context, logStreamID types.LogStreamID, } // Unseal implements the Server Unseal method. -func (sn *StorageNode) Unseal(ctx context.Context, logStreamID types.LogStreamID, replicas []snpb.Replica) error { - lse, err := sn.lookupExecutor(logStreamID) +func (sn *StorageNode) Unseal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, replicas []varlogpb.Replica) error { + lse, err := sn.lookupExecutor(topicID, logStreamID) if err != nil { return err } @@ -445,8 +461,8 @@ func (sn *StorageNode) Unseal(ctx context.Context, logStreamID types.LogStreamID return lse.Unseal(ctx, replicas) } -func (sn *StorageNode) Sync(ctx context.Context, logStreamID types.LogStreamID, replica snpb.Replica) (*snpb.SyncStatus, error) { - lse, err := sn.lookupExecutor(logStreamID) +func (sn *StorageNode) Sync(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, replica varlogpb.Replica) (*snpb.SyncStatus, error) { + lse, err := sn.lookupExecutor(topicID, logStreamID) if err != nil { return nil, err } @@ -455,7 +471,7 @@ func (sn *StorageNode) Sync(ctx context.Context, logStreamID types.LogStreamID, return sts, err } -func (sn *StorageNode) GetPrevCommitInfo(ctx context.Context, hwm types.GLSN) (infos []*snpb.LogStreamCommitInfo, err error) { +func (sn *StorageNode) GetPrevCommitInfo(ctx context.Context, version types.Version) (infos []*snpb.LogStreamCommitInfo, err error) { var mu sync.Mutex var wg sync.WaitGroup infos = make([]*snpb.LogStreamCommitInfo, 0, sn.estimatedNumberOfExecutors()) @@ -464,7 +480,7 @@ func (sn *StorageNode) GetPrevCommitInfo(ctx context.Context, hwm types.GLSN) (i wg.Add(1) go func() { defer wg.Done() - info, cerr := extor.GetPrevCommitInfo(hwm) + info, cerr := extor.GetPrevCommitInfo(version) mu.Lock() infos = append(infos, info) err = multierr.Append(err, cerr) @@ -479,16 +495,8 @@ func (sn *StorageNode) GetPrevCommitInfo(ctx context.Context, hwm types.GLSN) (i return infos, nil } -func (sn *StorageNode) verifyClusterID(cid types.ClusterID) bool { - return sn.cid == cid -} - -func (sn *StorageNode) verifyStorageNodeID(snid types.StorageNodeID) bool { - return sn.snid == snid -} - -func (sn *StorageNode) ReportCommitter(logStreamID types.LogStreamID) (reportcommitter.ReportCommitter, bool) { - extor, err := sn.lookupExecutor(logStreamID) +func (sn *StorageNode) ReportCommitter(topicID types.TopicID, logStreamID types.LogStreamID) (reportcommitter.ReportCommitter, bool) { + extor, err := sn.lookupExecutor(topicID, logStreamID) if err != nil { return nil, false } @@ -515,16 +523,16 @@ func (sn *StorageNode) GetReports(rsp *snpb.GetReportResponse, f func(reportcomm }) } -func (sn *StorageNode) Replicator(logStreamID types.LogStreamID) (replication.Replicator, bool) { - extor, err := sn.lookupExecutor(logStreamID) +func (sn *StorageNode) Replicator(topicID types.TopicID, logStreamID types.LogStreamID) (replication.Replicator, bool) { + extor, err := sn.lookupExecutor(topicID, logStreamID) if err != nil { return nil, false } return extor, true } -func (sn *StorageNode) ReadWriter(logStreamID types.LogStreamID) (logio.ReadWriter, bool) { - extor, err := sn.lookupExecutor(logStreamID) +func (sn *StorageNode) ReadWriter(topicID types.TopicID, logStreamID types.LogStreamID) (logio.ReadWriter, bool) { + extor, err := sn.lookupExecutor(topicID, logStreamID) if err != nil { return nil, false } diff --git a/internal/storagenode/storage_node_test.go b/internal/storagenode/storage_node_test.go index 9170f465a..42fc7d110 100644 --- a/internal/storagenode/storage_node_test.go +++ b/internal/storagenode/storage_node_test.go @@ -15,6 +15,13 @@ import ( "github.com/kakao/varlog/proto/varlogpb" ) +func TestStorageNodeBadConfig(t *testing.T) { + defer goleak.VerifyNone(t) + + _, err := New(context.Background(), WithListenAddress("localhost:0")) + require.Error(t, err) +} + func TestStorageNodeRunAndClose(t *testing.T) { defer goleak.VerifyNone(t) @@ -38,49 +45,29 @@ func TestStorageNodeAddLogStream(t *testing.T) { const ( storageNodeID = types.StorageNodeID(1) logStreamID = types.LogStreamID(1) + topicID = types.TopicID(1) ) - // create storage node - sn, err := New( - context.TODO(), - WithListenAddress("localhost:0"), - WithVolumes(t.TempDir()), - WithStorageNodeID(storageNodeID), - ) - require.NoError(t, err) + tsn := newTestStorageNode(t, storageNodeID, 1) + defer tsn.close() + sn := tsn.sn var ( ok bool - wg sync.WaitGroup snmd *varlogpb.StorageNodeMetadataDescriptor lsmd varlogpb.LogStreamMetadataDescriptor ) - // run storage node - wg.Add(1) - go func() { - defer wg.Done() - err := sn.Run() - assert.NoError(t, err) - }() - - // wait for listening - assert.Eventually(t, func() bool { - snmd, err := sn.GetMetadata(context.TODO()) - assert.NoError(t, err) - return len(snmd.GetStorageNode().GetAddress()) > 0 - - }, time.Second, 10*time.Millisecond) - snmd, err = sn.GetMetadata(context.TODO()) - assert.NoError(t, err) - // AddLogStream: LSID=1, expected=ok + snmd, err := sn.GetMetadata(context.Background()) + require.NoError(t, err) assert.Len(t, snmd.GetStorageNode().GetStorages(), 1) snPath := snmd.GetStorageNode().GetStorages()[0].GetPath() - lsPath, err := sn.AddLogStream(context.TODO(), logStreamID, snPath) + lsPath, err := sn.AddLogStream(context.TODO(), topicID, logStreamID, snPath) assert.NoError(t, err) assert.Positive(t, len(lsPath)) + // GetMetadata: Check if the log stream is created. snmd, err = sn.GetMetadata(context.TODO()) assert.NoError(t, err) assert.Len(t, snmd.GetLogStreams(), 1) @@ -90,16 +77,67 @@ func TestStorageNodeAddLogStream(t *testing.T) { assert.Equal(t, lsPath, lsmd.GetPath()) // AddLogStream: LSID=1, expected=error - _, err = sn.AddLogStream(context.TODO(), logStreamID, snPath) + _, err = sn.AddLogStream(context.TODO(), topicID, logStreamID, snPath) assert.Error(t, err) // FIXME: RemoveLogStream doesn't care about liveness of log stream, so this results in // resource leak. - err = sn.RemoveLogStream(context.TODO(), logStreamID) + err = sn.RemoveLogStream(context.TODO(), topicID, logStreamID) assert.NoError(t, err) +} - sn.Close() - wg.Wait() +func TestStorageNodeIncorrectTopic(t *testing.T) { + defer goleak.VerifyNone(t) + + const ( + snID = types.StorageNodeID(1) + tpID = types.TopicID(1) + lsID = types.LogStreamID(1) + ) + + tsn := newTestStorageNode(t, snID, 1) + defer tsn.close() + sn := tsn.sn + + // AddLogStream + snmd, err := sn.GetMetadata(context.Background()) + require.NoError(t, err) + assert.Len(t, snmd.GetStorageNode().GetStorages(), 1) + snPath := snmd.GetStorageNode().GetStorages()[0].GetPath() + lsPath, err := sn.AddLogStream(context.Background(), tpID, lsID, snPath) + assert.NoError(t, err) + assert.Positive(t, len(lsPath)) + + // Seal: ERROR (incorrect topicID) + _, _, err = sn.Seal(context.Background(), tpID+1, lsID, types.InvalidGLSN) + require.Error(t, err) + + // Seal: OK + status, _, err := sn.Seal(context.Background(), tpID, lsID, types.InvalidGLSN) + require.NoError(t, err) + assert.Equal(t, varlogpb.LogStreamStatusSealed, status) + + // Unseal: ERROR (incorrect topicID) + assert.Error(t, sn.Unseal(context.Background(), tpID+1, lsID, []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, + TopicID: tpID, + LogStreamID: lsID, + }, + })) + + // Unseal: OK + assert.NoError(t, sn.Unseal(context.Background(), tpID, lsID, []varlogpb.Replica{ + { + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, + TopicID: tpID, + LogStreamID: lsID, + }, + })) } func TestStorageNodeGetPrevCommitInfo(t *testing.T) { @@ -107,71 +145,53 @@ func TestStorageNodeGetPrevCommitInfo(t *testing.T) { const ( storageNodeID = types.StorageNodeID(1) + topicID = types.TopicID(1) logStreamID1 = types.LogStreamID(1) logStreamID2 = types.LogStreamID(2) ) - // create storage node - sn, err := New( - context.TODO(), - WithListenAddress("localhost:0"), - WithVolumes(t.TempDir(), t.TempDir()), - WithStorageNodeID(storageNodeID), - ) - require.NoError(t, err) + tsn := newTestStorageNode(t, storageNodeID, 2) + defer tsn.close() + sn := tsn.sn var wg sync.WaitGroup - defer func() { - sn.Close() - wg.Wait() - }() - - // run storage node - wg.Add(1) - go func() { - defer wg.Done() - err := sn.Run() - assert.NoError(t, err) - }() - - // wait for listening - assert.Eventually(t, func() bool { - snmd, err := sn.GetMetadata(context.TODO()) - assert.NoError(t, err) - return len(snmd.GetStorageNode().GetAddress()) > 0 - - }, time.Second, 10*time.Millisecond) - snmd, err := sn.GetMetadata(context.TODO()) - assert.NoError(t, err) // AddLogStream: LSID=1, expected=ok // AddLogStream: LSID=2, expected=ok + snmd, err := sn.GetMetadata(context.TODO()) + assert.NoError(t, err) assert.Len(t, snmd.GetStorageNode().GetStorages(), 2) - lsPath, err := sn.AddLogStream(context.TODO(), logStreamID1, snmd.GetStorageNode().GetStorages()[0].GetPath()) + lsPath, err := sn.AddLogStream(context.TODO(), topicID, logStreamID1, snmd.GetStorageNode().GetStorages()[0].GetPath()) assert.NoError(t, err) assert.Positive(t, len(lsPath)) - lsPath, err = sn.AddLogStream(context.TODO(), logStreamID2, snmd.GetStorageNode().GetStorages()[1].GetPath()) + lsPath, err = sn.AddLogStream(context.TODO(), topicID, logStreamID2, snmd.GetStorageNode().GetStorages()[1].GetPath()) assert.NoError(t, err) assert.Positive(t, len(lsPath)) - status, _, err := sn.Seal(context.TODO(), logStreamID1, types.InvalidGLSN) + status, _, err := sn.Seal(context.TODO(), topicID, logStreamID1, types.InvalidGLSN) + require.NoError(t, err) assert.Equal(t, varlogpb.LogStreamStatusSealed, status) - status, _, err = sn.Seal(context.TODO(), logStreamID2, types.InvalidGLSN) + status, _, err = sn.Seal(context.TODO(), topicID, logStreamID2, types.InvalidGLSN) + require.NoError(t, err) assert.Equal(t, varlogpb.LogStreamStatusSealed, status) - assert.NoError(t, sn.Unseal(context.TODO(), logStreamID1, []snpb.Replica{ + assert.NoError(t, sn.Unseal(context.TODO(), topicID, logStreamID1, []varlogpb.Replica{ { - StorageNodeID: storageNodeID, - LogStreamID: logStreamID1, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: storageNodeID, + }, + LogStreamID: logStreamID1, }, })) - assert.NoError(t, sn.Unseal(context.TODO(), logStreamID2, []snpb.Replica{ + assert.NoError(t, sn.Unseal(context.TODO(), topicID, logStreamID2, []varlogpb.Replica{ { - StorageNodeID: storageNodeID, - LogStreamID: logStreamID2, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: storageNodeID, + }, + LogStreamID: logStreamID2, }, })) @@ -196,7 +216,7 @@ func TestStorageNodeGetPrevCommitInfo(t *testing.T) { if i%2 != 0 { lsid = logStreamID2 } - writer, ok := sn.ReadWriter(lsid) + writer, ok := sn.ReadWriter(topicID, lsid) require.True(t, ok) _, err := writer.Append(context.TODO(), []byte("foo")) require.NoError(t, err) @@ -213,43 +233,46 @@ func TestStorageNodeGetPrevCommitInfo(t *testing.T) { return reports[0].GetUncommittedLLSNLength() == 5 && reports[1].GetUncommittedLLSNLength() == 5 }, time.Second, 10*time.Millisecond) - // LSID | LLSN | GLSN | HWM | PrevHWM - // 1 | 1 | 5 | 20 | 0 - // 1 | 2 | 6 | 20 | 0 - // 1 | 3 | 7 | 20 | 0 - // 1 | 4 | 8 | 20 | 0 - // 1 | 5 | 9 | 20 | 0 - // 2 | 1 | 11 | 20 | 0 - // 2 | 2 | 12 | 20 | 0 - // 2 | 3 | 13 | 20 | 0 - // 2 | 4 | 14 | 20 | 0 - // 2 | 5 | 15 | 20 | 0 + // LSID | LLSN | GLSN | Ver + // 1 | 1 | 5 | 2 + // 1 | 2 | 6 | 2 + // 1 | 3 | 7 | 2 + // 1 | 4 | 8 | 2 + // 1 | 5 | 9 | 2 + // 2 | 1 | 11 | 2 + // 2 | 2 | 12 | 2 + // 2 | 3 | 13 | 2 + // 2 | 4 | 14 | 2 + // 2 | 5 | 15 | 2 sn.lsr.Commit(context.TODO(), snpb.LogStreamCommitResult{ + TopicID: topicID, LogStreamID: logStreamID1, CommittedLLSNOffset: 1, CommittedGLSNOffset: 5, CommittedGLSNLength: 5, - HighWatermark: 20, - PrevHighWatermark: 0, + Version: 2, }) sn.lsr.Commit(context.TODO(), snpb.LogStreamCommitResult{ + TopicID: topicID, LogStreamID: logStreamID2, CommittedLLSNOffset: 1, CommittedGLSNOffset: 11, CommittedGLSNLength: 5, - HighWatermark: 20, - PrevHighWatermark: 0, + Version: 2, }) require.Eventually(t, func() bool { rsp := snpb.GetReportResponse{} err := sn.lsr.GetReport(context.TODO(), &rsp) - require.NoError(t, err) + if !assert.NoError(t, err) { + return false + } reports := rsp.UncommitReports - require.Len(t, reports, 2) - return reports[0].GetHighWatermark() == 20 && reports[1].GetHighWatermark() == 20 + return len(reports) == 2 && + reports[0].GetVersion() == 2 && + reports[1].GetVersion() == 2 }, time.Second, 10*time.Millisecond) infos, err := sn.GetPrevCommitInfo(context.TODO(), 0) @@ -262,8 +285,7 @@ func TestStorageNodeGetPrevCommitInfo(t *testing.T) { CommittedGLSNOffset: 5, CommittedGLSNLength: 5, HighestWrittenLLSN: 5, - HighWatermark: 20, - PrevHighWatermark: 0, + Version: 2, }) require.Contains(t, infos, &snpb.LogStreamCommitInfo{ LogStreamID: logStreamID2, @@ -272,8 +294,7 @@ func TestStorageNodeGetPrevCommitInfo(t *testing.T) { CommittedGLSNOffset: 11, CommittedGLSNLength: 5, HighestWrittenLLSN: 5, - HighWatermark: 20, - PrevHighWatermark: 0, + Version: 2, }) infos, err = sn.GetPrevCommitInfo(context.TODO(), 1) @@ -286,32 +307,7 @@ func TestStorageNodeGetPrevCommitInfo(t *testing.T) { CommittedGLSNOffset: 5, CommittedGLSNLength: 5, HighestWrittenLLSN: 5, - HighWatermark: 20, - PrevHighWatermark: 0, - }) - require.Contains(t, infos, &snpb.LogStreamCommitInfo{ - LogStreamID: logStreamID2, - Status: snpb.GetPrevCommitStatusOK, - CommittedLLSNOffset: 1, - CommittedGLSNOffset: 11, - CommittedGLSNLength: 5, - HighestWrittenLLSN: 5, - HighWatermark: 20, - PrevHighWatermark: 0, - }) - - infos, err = sn.GetPrevCommitInfo(context.TODO(), 5) - require.NoError(t, err) - require.Len(t, infos, 2) - require.Contains(t, infos, &snpb.LogStreamCommitInfo{ - LogStreamID: logStreamID1, - Status: snpb.GetPrevCommitStatusOK, - CommittedLLSNOffset: 1, - CommittedGLSNOffset: 5, - CommittedGLSNLength: 5, - HighestWrittenLLSN: 5, - HighWatermark: 20, - PrevHighWatermark: 0, + Version: 2, }) require.Contains(t, infos, &snpb.LogStreamCommitInfo{ LogStreamID: logStreamID2, @@ -320,35 +316,10 @@ func TestStorageNodeGetPrevCommitInfo(t *testing.T) { CommittedGLSNOffset: 11, CommittedGLSNLength: 5, HighestWrittenLLSN: 5, - HighWatermark: 20, - PrevHighWatermark: 0, - }) - - infos, err = sn.GetPrevCommitInfo(context.TODO(), 20) - require.NoError(t, err) - require.Len(t, infos, 2) - require.Contains(t, infos, &snpb.LogStreamCommitInfo{ - LogStreamID: logStreamID1, - Status: snpb.GetPrevCommitStatusNotFound, - CommittedLLSNOffset: 0, - CommittedGLSNOffset: 0, - CommittedGLSNLength: 0, - HighestWrittenLLSN: 5, - HighWatermark: 0, - PrevHighWatermark: 0, - }) - require.Contains(t, infos, &snpb.LogStreamCommitInfo{ - LogStreamID: logStreamID2, - Status: snpb.GetPrevCommitStatusNotFound, - CommittedLLSNOffset: 0, - CommittedGLSNOffset: 0, - CommittedGLSNLength: 0, - HighestWrittenLLSN: 5, - HighWatermark: 0, - PrevHighWatermark: 0, + Version: 2, }) - infos, err = sn.GetPrevCommitInfo(context.TODO(), 21) + infos, err = sn.GetPrevCommitInfo(context.TODO(), 2) require.NoError(t, err) require.Len(t, infos, 2) require.Contains(t, infos, &snpb.LogStreamCommitInfo{ @@ -358,8 +329,7 @@ func TestStorageNodeGetPrevCommitInfo(t *testing.T) { CommittedGLSNOffset: 0, CommittedGLSNLength: 0, HighestWrittenLLSN: 5, - HighWatermark: 0, - PrevHighWatermark: 0, + Version: 0, }) require.Contains(t, infos, &snpb.LogStreamCommitInfo{ LogStreamID: logStreamID2, @@ -368,7 +338,6 @@ func TestStorageNodeGetPrevCommitInfo(t *testing.T) { CommittedGLSNOffset: 0, CommittedGLSNLength: 0, HighestWrittenLLSN: 5, - HighWatermark: 0, - PrevHighWatermark: 0, + Version: 0, }) } diff --git a/internal/storagenode/storagenodetest_test.go b/internal/storagenode/storagenodetest_test.go new file mode 100644 index 000000000..69bc98a38 --- /dev/null +++ b/internal/storagenode/storagenodetest_test.go @@ -0,0 +1,57 @@ +package storagenode + +import ( + "context" + "sync" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + "github.com/kakao/varlog/pkg/types" +) + +type testStorageNode struct { + sn *StorageNode + wg sync.WaitGroup +} + +func newTestStorageNode(t *testing.T, snID types.StorageNodeID, numVolumes int) *testStorageNode { + volumes := make([]string, 0, numVolumes) + for i := 0; i < numVolumes; i++ { + volumes = append(volumes, t.TempDir()) + } + + sn, err := New(context.Background(), + WithListenAddress("localhost:0"), + WithVolumes(volumes...), + WithStorageNodeID(snID), + ) + require.NoError(t, err) + + tsn := &testStorageNode{sn: sn} + + tsn.wg.Add(1) + go func() { + defer tsn.wg.Done() + err := tsn.sn.Run() + assert.NoError(t, err) + }() + + // wait for listening + assert.Eventually(t, func() bool { + snmd, err := tsn.sn.GetMetadata(context.Background()) + if err != nil { + return false + } + return len(snmd.GetStorageNode().GetAddress()) > 0 + }, time.Second, 10*time.Millisecond) + + return tsn +} + +func (tsn *testStorageNode) close() { + tsn.sn.Close() + tsn.wg.Wait() +} diff --git a/internal/storagenode/telemetry/storage_node_metrics.go b/internal/storagenode/telemetry/storage_node_metrics.go index 2eab6023c..ab791318a 100644 --- a/internal/storagenode/telemetry/storage_node_metrics.go +++ b/internal/storagenode/telemetry/storage_node_metrics.go @@ -6,8 +6,8 @@ import ( ) type MetricsBag struct { - RpcServerAppendDuration metric.Float64ValueRecorder - RpcServerReplicateDuration metric.Float64ValueRecorder + RPCServerAppendDuration metric.Float64ValueRecorder + RPCServerReplicateDuration metric.Float64ValueRecorder ExecutorWriteQueueTime metric.Float64ValueRecorder ExecutorWriteQueueTasks metric.Int64ValueRecorder @@ -34,11 +34,11 @@ type MetricsBag struct { func newMetricsBag(ts *TelemetryStub) *MetricsBag { meter := metric.Must(ts.mt) return &MetricsBag{ - RpcServerAppendDuration: meter.NewFloat64ValueRecorder( + RPCServerAppendDuration: meter.NewFloat64ValueRecorder( "rpc.server.append.duration", metric.WithUnit(unit.Milliseconds), ), - RpcServerReplicateDuration: meter.NewFloat64ValueRecorder( + RPCServerReplicateDuration: meter.NewFloat64ValueRecorder( "rpc.server.replicate.duration", metric.WithUnit(unit.Milliseconds), ), diff --git a/internal/storagenode/telemetry/testing.go b/internal/storagenode/telemetry/testing.go deleted file mode 100644 index d65f170cc..000000000 --- a/internal/storagenode/telemetry/testing.go +++ /dev/null @@ -1,10 +0,0 @@ -package telemetry - -import gomock "github.com/golang/mock/gomock" - -func NewTestMeasurable(ctrl *gomock.Controller) *MockMeasurable { - m := NewMockMeasurable(ctrl) - nop := NewNopTelmetryStub() - m.EXPECT().Stub().Return(nop).AnyTimes() - return m -} diff --git a/internal/storagenode/volume.go b/internal/storagenode/volume.go deleted file mode 100644 index b4470744b..000000000 --- a/internal/storagenode/volume.go +++ /dev/null @@ -1,162 +0,0 @@ -package storagenode - -import ( - "fmt" - "io/ioutil" - "os" - "path/filepath" - "strings" - - "github.com/pkg/errors" - - "github.com/kakao/varlog/pkg/types" - "github.com/kakao/varlog/pkg/util/fputil" - "github.com/kakao/varlog/pkg/verrors" -) - -// //cid_/snid_/lsid_ -const ( - clusterDirPrefix = "cid" - storageDirPrefix = "snid" - logStreamDirPrefix = "lsid" - - VolumeFileMode = os.FileMode(0700) -) - -// Volume is an absolute directory to store varlog data. -type Volume string - -// NewVolume returns volume that should already exists. If the given volume does not exist, it -// returns os.ErrNotExist. -func NewVolume(volume string) (Volume, error) { - volume, err := filepath.Abs(volume) - if err != nil { - return "", err - } - if err := ValidDir(volume); err != nil { - return "", err - } - return Volume(volume), nil -} - -func (vol Volume) Valid() error { - dir := string(vol) - if !filepath.IsAbs(dir) { - return errors.Wrapf(verrors.ErrInvalid, "not absolute path: %s", dir) - } - return ValidDir(dir) -} - -// ValidDir check if the volume (or path) is valid. Valid volume (or path) should be: -// - absolute path -// - existing directory -// - writable directory -func ValidDir(dir string) error { - if !filepath.IsAbs(dir) { - return errors.New("storagenode: not absolute path") - } - fi, err := os.Stat(dir) - if err != nil { - return errors.WithStack(err) - } - if !fi.IsDir() { - return errors.New("storagenode: not directory") - } - err = fputil.IsWritableDir(dir) - return errors.WithMessage(err, "storagenode") -} - -// CreateStorageNodePath creates a new directory to store various data related to the storage node. -// If creating the new directory fails, it returns an error. -func (vol Volume) CreateStorageNodePath(clusterID types.ClusterID, storageNodeID types.StorageNodeID) (string, error) { - clusterDir := fmt.Sprintf("%s_%v", clusterDirPrefix, clusterID) - storageNodeDir := fmt.Sprintf("%s_%v", storageDirPrefix, storageNodeID) - snPath := filepath.Join(string(vol), clusterDir, storageNodeDir) - snPath, err := filepath.Abs(snPath) - if err != nil { - return "", errors.Wrapf(err, "storagenode") - } - snPath, err = createPath(snPath) - return snPath, err -} - -func (vol Volume) ReadLogStreamPaths(clusterID types.ClusterID, storageNodeID types.StorageNodeID) []string { - clusterDir := fmt.Sprintf("%s_%v", clusterDirPrefix, clusterID) - storageNodeDir := fmt.Sprintf("%s_%v", storageDirPrefix, storageNodeID) - storageNodePath := filepath.Join(string(vol), clusterDir, storageNodeDir) - - var logStreamPaths []string - fis, err := ioutil.ReadDir(storageNodePath) - if err != nil { - return nil - } - for _, fi := range fis { - if !fi.IsDir() { - continue - } - toks := strings.SplitN(fi.Name(), "_", 2) - if toks[0] != logStreamDirPrefix { - continue - } - if _, err := types.ParseLogStreamID(toks[1]); err != nil { - continue - } - path := filepath.Join(storageNodePath, fi.Name()) - logStreamPaths = append(logStreamPaths, path) - } - return logStreamPaths -} - -// CreateLogStreamPath creates a new directory to store various data related to the log stream -// replica. If creating the new directory fails, it returns an error. -func CreateLogStreamPath(storageNodePath string, logStreamID types.LogStreamID) (string, error) { - logStreamDir := fmt.Sprintf("%s_%v", logStreamDirPrefix, logStreamID) - lsPath := filepath.Join(storageNodePath, logStreamDir) - lsPath, err := filepath.Abs(lsPath) - if err != nil { - return "", errors.WithStack(err) - } - return createPath(lsPath) -} - -func createPath(dir string) (string, error) { - if err := os.MkdirAll(dir, VolumeFileMode); err != nil { - return "", errors.WithStack(err) - } - if err := ValidDir(dir); err != nil { - return "", err - } - return dir, nil -} - -func ParseLogStreamPath(path string) (vol Volume, cid types.ClusterID, snid types.StorageNodeID, lsid types.LogStreamID, err error) { - path = filepath.Clean(path) - if !filepath.IsAbs(path) { - return "", 0, 0, 0, errors.New("not absolute path") - } - - toks := strings.Split(path, string(filepath.Separator)) - if len(toks) < 4 { - return "", 0, 0, 0, errors.New("invalid path") - } - - lsidPath := toks[len(toks)-1] - snidPath := toks[len(toks)-2] - cidPath := toks[len(toks)-3] - volPath := filepath.Join("/", filepath.Join(toks[0:len(toks)-3]...)) - - vol = Volume(volPath) - if lsid, err = types.ParseLogStreamID(strings.TrimPrefix(lsidPath, logStreamDirPrefix+"_")); err != nil { - goto errOut - } - if snid, err = types.ParseStorageNodeID(strings.TrimPrefix(snidPath, storageDirPrefix+"_")); err != nil { - goto errOut - } - if cid, err = types.ParseClusterID(strings.TrimPrefix(cidPath, clusterDirPrefix+"_")); err != nil { - goto errOut - } - return vol, cid, snid, lsid, nil - -errOut: - return "", 0, 0, 0, errors.New("invalid path") -} diff --git a/internal/storagenode/volume/volume.go b/internal/storagenode/volume/volume.go new file mode 100644 index 000000000..cf9be2ec9 --- /dev/null +++ b/internal/storagenode/volume/volume.go @@ -0,0 +1,168 @@ +package volume + +import ( + "fmt" + "io/ioutil" + "os" + "path/filepath" + "strings" + + "github.com/pkg/errors" + + "github.com/kakao/varlog/pkg/types" + "github.com/kakao/varlog/pkg/util/fputil" +) + +// //cid_/snid_/tpid__lsid_ +const ( + clusterDirPrefix = "cid" + storageDirPrefix = "snid" + topicDirPrefix = "tpid" + logStreamDirPrefix = "lsid" + + VolumeFileMode = os.FileMode(0700) +) + +// Volume is an absolute directory to store varlog data. +type Volume string + +// New returns volume that should already exist and be a writable directory. +// The result will be converted to absolute if the given volume is relative. +// If the given volume does not exist, it returns os.ErrNotExist. +func New(volume string) (Volume, error) { + volume, err := filepath.Abs(volume) + if err != nil { + return "", err + } + if err := validDirPath(volume); err != nil { + return "", err + } + return Volume(volume), nil +} + +// ReadLogStreamPaths returns all of log stream paths under the given clusterID and storageNodeID. +func (vol Volume) ReadLogStreamPaths(clusterID types.ClusterID, storageNodeID types.StorageNodeID) []string { + clusterDir := fmt.Sprintf("%s_%d", clusterDirPrefix, clusterID) + storageNodeDir := fmt.Sprintf("%s_%d", storageDirPrefix, storageNodeID) + storageNodePath := filepath.Join(string(vol), clusterDir, storageNodeDir) + + fis, err := ioutil.ReadDir(storageNodePath) + if err != nil { + return nil + } + logStreamPaths := make([]string, 0, len(fis)) + for _, fi := range fis { + if !fi.IsDir() { + continue + } + if _, _, err := parseLogStreamDirName(fi.Name()); err != nil { + continue + } + path := filepath.Join(storageNodePath, fi.Name()) + logStreamPaths = append(logStreamPaths, path) + } + return logStreamPaths +} + +// ParseLogStreamPath parses the given path into volume, ClusterID, StorageNodeID, TopicID and LogStreamID. +func ParseLogStreamPath(path string) (vol Volume, cid types.ClusterID, snid types.StorageNodeID, tpid types.TopicID, lsid types.LogStreamID, err error) { + const minParts = 4 + path = filepath.Clean(path) + if !filepath.IsAbs(path) { + return "", 0, 0, 0, 0, errors.Errorf("not absolute path: %s", path) + } + + toks := strings.Split(path, string(filepath.Separator)) + if len(toks) < minParts { + return "", 0, 0, 0, 0, errors.Errorf("invalid path: %s", path) + } + + lsidPath := toks[len(toks)-1] + snidPath := toks[len(toks)-2] + cidPath := toks[len(toks)-3] + volPath := filepath.Join("/", filepath.Join(toks[0:len(toks)-3]...)) + + vol = Volume(volPath) + + if tpid, lsid, err = parseLogStreamDirName(lsidPath); err != nil { + goto errOut + } + if snid, err = types.ParseStorageNodeID(strings.TrimPrefix(snidPath, storageDirPrefix+"_")); err != nil { + goto errOut + } + if cid, err = types.ParseClusterID(strings.TrimPrefix(cidPath, clusterDirPrefix+"_")); err != nil { + goto errOut + } + return vol, cid, snid, tpid, lsid, nil + +errOut: + return "", 0, 0, 0, 0, err +} + +// CreateStorageNodePath creates a new directory to store various data related to the storage node. +// If creating the new directory fails, it returns an error. +// StorageNodePath = //cid_/snid_ +func CreateStorageNodePath(vol Volume, clusterID types.ClusterID, storageNodeID types.StorageNodeID) (string, error) { + clusterDir := fmt.Sprintf("%s_%d", clusterDirPrefix, clusterID) + storageNodeDir := fmt.Sprintf("%s_%d", storageDirPrefix, storageNodeID) + snPath := filepath.Join(string(vol), clusterDir, storageNodeDir) + snPath, err := filepath.Abs(snPath) + if err != nil { + return "", errors.Wrapf(err, "storagenode") + } + return createPath(snPath) +} + +// CreateLogStreamPath creates a new directory to store various data related to the log stream +// replica. If creating the new directory fails, it returns an error. +// LogStreamPath = //cid_/snid_/tpid__lsid_ +func CreateLogStreamPath(storageNodePath string, topicID types.TopicID, logStreamID types.LogStreamID) (string, error) { + logStreamDir := fmt.Sprintf("%s_%d_%s_%d", topicDirPrefix, topicID, logStreamDirPrefix, logStreamID) + lsPath := filepath.Join(storageNodePath, logStreamDir) + lsPath, err := filepath.Abs(lsPath) + if err != nil { + return "", errors.WithStack(err) + } + return createPath(lsPath) +} + +// validDirPath checks if the parameter dir is an absolute path and existed writable directory. +func validDirPath(dir string) error { + if !filepath.IsAbs(dir) { + return errors.Errorf("not absolute path: %s", dir) + } + fi, err := os.Stat(dir) + if err != nil { + return errors.WithStack(err) + } + if !fi.IsDir() { + return errors.Errorf("not directory: %s", dir) + } + return fputil.IsWritableDir(dir) +} + +func createPath(dir string) (string, error) { + if err := os.MkdirAll(dir, VolumeFileMode); err != nil { + return "", errors.WithStack(err) + } + if err := validDirPath(dir); err != nil { + return "", err + } + return dir, nil +} + +func parseLogStreamDirName(dirName string) (tpid types.TopicID, lsid types.LogStreamID, err error) { + toks := strings.Split(dirName, "_") + if len(toks) != 4 || toks[0] != topicDirPrefix || toks[2] != logStreamDirPrefix { + goto Out + } + if tpid, err = types.ParseTopicID(toks[1]); err != nil { + goto Out + } + if lsid, err = types.ParseLogStreamID(toks[3]); err != nil { + goto Out + } + return tpid, lsid, nil +Out: + return tpid, lsid, errors.Errorf("invalid log stream directory name: %s", dirName) +} diff --git a/internal/storagenode/volume_test.go b/internal/storagenode/volume/volume_test.go similarity index 77% rename from internal/storagenode/volume_test.go rename to internal/storagenode/volume/volume_test.go index 042cf3f84..891478b2e 100644 --- a/internal/storagenode/volume_test.go +++ b/internal/storagenode/volume/volume_test.go @@ -1,4 +1,4 @@ -package storagenode +package volume import ( "io/ioutil" @@ -15,7 +15,7 @@ import ( func newTempVolume(t *testing.T) Volume { t.Helper() - volume, err := NewVolume(t.TempDir()) + volume, err := New(t.TempDir()) if err != nil { t.Error(err) } @@ -82,18 +82,18 @@ func TestVolume(t *testing.T) { name: "snid_1", isDir: true, children: []pathEntry{ - {name: "lsid_1", isDir: true}, - {name: "lsid_2", isDir: true}, - {name: "lsid_3"}, + {name: "tpid_1_lsid_1", isDir: true}, + {name: "tpid_2_lsid_2", isDir: true}, + {name: "tpid_3_lsid_3"}, }, }, { name: "snid_2", isDir: true, children: []pathEntry{ - {name: "lsid_1", isDir: true}, - {name: "lsid_2", isDir: true}, - {name: "lsid_3"}, + {name: "tpid_1_lsid_1", isDir: true}, + {name: "tpid_2_lsid_2", isDir: true}, + {name: "tpid_3_lsid_3"}, }, }, { @@ -109,8 +109,8 @@ func TestVolume(t *testing.T) { createPathEntries(string(volume), pathEntries, t) logStreamPaths := volume.ReadLogStreamPaths(types.ClusterID(1), types.StorageNodeID(1)) So(len(logStreamPaths), ShouldEqual, 2) - So(logStreamPaths, ShouldContain, filepath.Join(string(volume), "cid_1", "snid_1", "lsid_1")) - So(logStreamPaths, ShouldContain, filepath.Join(string(volume), "cid_1", "snid_1", "lsid_2")) + So(logStreamPaths, ShouldContain, filepath.Join(string(volume), "cid_1", "snid_1", "tpid_1_lsid_1")) + So(logStreamPaths, ShouldContain, filepath.Join(string(volume), "cid_1", "snid_1", "tpid_2_lsid_2")) }) }) } @@ -142,7 +142,6 @@ func TestValidDir(t *testing.T) { if err := os.RemoveAll(string(notWritableDir)); err != nil { t.Error(err) } - }() var tests = []struct { @@ -158,7 +157,7 @@ func TestValidDir(t *testing.T) { for i := range tests { test := tests[i] t.Run(test.in, func(t *testing.T) { - actual := ValidDir(test.in) + actual := validDirPath(test.in) if test.ok != (actual == nil) { t.Errorf("input=%v, expected=%v, actual=%v", test.in, test.ok, actual) } @@ -171,62 +170,79 @@ func TestParseLogStreamPath(t *testing.T) { volume Volume cid types.ClusterID snid types.StorageNodeID + tpid types.TopicID lsid types.LogStreamID isErr bool } tests := []struct { + name string input string output outputST }{ { + name: "RelativePath", input: "abc", output: outputST{isErr: true}, }, { + name: "OnlyClusterID", input: "/abc/cid_1", output: outputST{isErr: true}, }, { + name: "NoTopicIDLogStreamID", input: "/abc/cid_1/snid_2", output: outputST{isErr: true}, }, { - input: "/abc/cid_1/snid_2/lsid_3", + name: "NoTopicID", + input: "/abc/cid_1/snid_2/lsid_3", + output: outputST{isErr: true}, + }, + { + name: "GoodPath", + input: "/abc/cid_1/snid_2/tpid_3_lsid_4", output: outputST{ volume: Volume("/abc"), cid: types.ClusterID(1), snid: types.StorageNodeID(2), - lsid: types.LogStreamID(3), + tpid: types.TopicID(3), + lsid: types.LogStreamID(4), }, }, { - input: "/cid_1/snid_2/lsid_3", + name: "GoodPath", + input: "/cid_1/snid_2/tpid_3_lsid_4", output: outputST{ volume: Volume("/"), cid: types.ClusterID(1), snid: types.StorageNodeID(2), - lsid: types.LogStreamID(3), + tpid: types.TopicID(3), + lsid: types.LogStreamID(4), }, }, { - input: "/abc/cid_1/snid_2/lsid_", + name: "BadTopicID", + input: "/abc/cid_1/snid_2/tpid_lsid_4", output: outputST{isErr: true}, }, { - input: "/abc/cid_1/snid_2/lsid_", + name: "BadSeperator", + input: "/abc/cid_1/snid_2/tpid_3__lsid_1", output: outputST{isErr: true}, }, { - input: "/abc/cid_1/snid_2/lsid_" + strconv.FormatUint(uint64(math.MaxUint32)+1, 10), + name: "BadLogStreamID", + input: "/abc/cid_1/snid_2/tpid_3_lsid_" + strconv.FormatUint(uint64(math.MaxUint32)+1, 10), output: outputST{isErr: true}, }, } for i := range tests { test := tests[i] - t.Run(test.input, func(t *testing.T) { - vol, cid, snid, lsid, err := ParseLogStreamPath(test.input) + t.Run(test.name, func(t *testing.T) { + vol, cid, snid, tpid, lsid, err := ParseLogStreamPath(test.input) if test.output.isErr != (err != nil) { t.Errorf("expected error=%v, actual error=%+v", test.output.isErr, err) } @@ -242,6 +258,9 @@ func TestParseLogStreamPath(t *testing.T) { if test.output.snid != snid { t.Errorf("expected snid=%v, actual snid=%v", test.output.snid, snid) } + if test.output.tpid != tpid { + t.Errorf("expected tpid=%v, actual tpid=%v", test.output.tpid, tpid) + } if test.output.lsid != lsid { t.Errorf("expected lsid=%v, actual lsid=%v", test.output.lsid, lsid) } diff --git a/internal/vms/cluster_manager.go b/internal/vms/cluster_manager.go index 3f7e3deb5..c46e43411 100644 --- a/internal/vms/cluster_manager.go +++ b/internal/vms/cluster_manager.go @@ -43,16 +43,20 @@ type ClusterManager interface { UnregisterStorageNode(ctx context.Context, storageNodeID types.StorageNodeID) error - AddLogStream(ctx context.Context, replicas []*varlogpb.ReplicaDescriptor) (*varlogpb.LogStreamDescriptor, error) + AddTopic(ctx context.Context) (*varlogpb.TopicDescriptor, error) - UnregisterLogStream(ctx context.Context, logStreamID types.LogStreamID) error + UnregisterTopic(ctx context.Context, topicID types.TopicID) error - RemoveLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, logStreamID types.LogStreamID) error + AddLogStream(ctx context.Context, topicID types.TopicID, replicas []*varlogpb.ReplicaDescriptor) (*varlogpb.LogStreamDescriptor, error) + + UnregisterLogStream(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) error + + RemoveLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, topicID types.TopicID, logStreamID types.LogStreamID) error UpdateLogStream(ctx context.Context, logStreamID types.LogStreamID, poppedReplica, pushedReplica *varlogpb.ReplicaDescriptor) (*varlogpb.LogStreamDescriptor, error) // Seal seals the log stream replicas corresponded with the given logStreamID. - Seal(ctx context.Context, logStreamID types.LogStreamID) ([]varlogpb.LogStreamMetadataDescriptor, types.GLSN, error) + Seal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) ([]varlogpb.LogStreamMetadataDescriptor, types.GLSN, error) // Sync copies the log entries of the src to the dst. Sync may be long-running, thus it // returns immediately without waiting for the completion of sync. Callers of Sync @@ -64,10 +68,10 @@ type ClusterManager interface { // To start sync, the log stream status of the src must be LogStreamStatusSealed and the log // stream status of the dst must be LogStreamStatusSealing. If either of the statuses is not // correct, Sync returns ErrSyncInvalidStatus. - Sync(ctx context.Context, logStreamID types.LogStreamID, srcID, dstID types.StorageNodeID) (*snpb.SyncStatus, error) + Sync(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, srcID, dstID types.StorageNodeID) (*snpb.SyncStatus, error) // Unseal unseals the log stream replicas corresponded with the given logStreamID. - Unseal(ctx context.Context, logStreamID types.LogStreamID) (*varlogpb.LogStreamDescriptor, error) + Unseal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) (*varlogpb.LogStreamDescriptor, error) Metadata(ctx context.Context) (*varlogpb.MetadataDescriptor, error) @@ -112,6 +116,7 @@ type clusterManager struct { snWatcher StorageNodeWatcher statRepository StatRepository logStreamIDGen LogStreamIDGenerator + topicIDGen TopicIDGenerator logger *zap.Logger options *Options @@ -140,6 +145,11 @@ func NewClusterManager(ctx context.Context, opts *Options) (ClusterManager, erro return nil, err } + topicIDGen, err := NewSequentialTopicIDGenerator(ctx, cmView, snMgr) + if err != nil { + return nil, err + } + snSelector, err := newRandomReplicaSelector(cmView, opts.ReplicationFactor) if err != nil { return nil, err @@ -154,6 +164,7 @@ func NewClusterManager(ctx context.Context, opts *Options) (ClusterManager, erro snSelector: snSelector, statRepository: NewStatRepository(ctx, cmView), logStreamIDGen: logStreamIDGen, + topicIDGen: topicIDGen, logger: opts.Logger, options: opts, } @@ -367,8 +378,49 @@ func (cm *clusterManager) UnregisterStorageNode(ctx context.Context, storageNode return nil } -func (cm *clusterManager) AddLogStream(ctx context.Context, replicas []*varlogpb.ReplicaDescriptor) (*varlogpb.LogStreamDescriptor, error) { - lsdesc, err := cm.addLogStreamInternal(ctx, replicas) +func (cm *clusterManager) AddTopic(ctx context.Context) (*varlogpb.TopicDescriptor, error) { + cm.mu.Lock() + defer cm.mu.Unlock() + + var err error + + topicID := cm.topicIDGen.Generate() + if err = cm.mrMgr.RegisterTopic(ctx, topicID); err != nil { + goto errOut + } + + return &varlogpb.TopicDescriptor{TopicID: topicID}, nil + +errOut: + return nil, err +} + +func (cm *clusterManager) UnregisterTopic(ctx context.Context, topicID types.TopicID) error { + cm.mu.Lock() + defer cm.mu.Unlock() + + clusmeta, err := cm.cmView.ClusterMetadata(ctx) + if err != nil { + return err + } + + topicdesc, err := clusmeta.MustHaveTopic(topicID) + if err != nil { + return err + } + + status := topicdesc.GetStatus() + if status.Deleted() { + return errors.Errorf("invalid topic status: %s", status) + } + + //TODO:: seal logStreams and refresh metadata + + return cm.mrMgr.UnregisterTopic(ctx, topicID) +} + +func (cm *clusterManager) AddLogStream(ctx context.Context, topicID types.TopicID, replicas []*varlogpb.ReplicaDescriptor) (*varlogpb.LogStreamDescriptor, error) { + lsdesc, err := cm.addLogStreamInternal(ctx, topicID, replicas) if err != nil { return lsdesc, err } @@ -378,10 +430,10 @@ func (cm *clusterManager) AddLogStream(ctx context.Context, replicas []*varlogpb return lsdesc, err } - return cm.Unseal(ctx, lsdesc.LogStreamID) + return cm.Unseal(ctx, topicID, lsdesc.LogStreamID) } -func (cm *clusterManager) addLogStreamInternal(ctx context.Context, replicas []*varlogpb.ReplicaDescriptor) (*varlogpb.LogStreamDescriptor, error) { +func (cm *clusterManager) addLogStreamInternal(ctx context.Context, topicID types.TopicID, replicas []*varlogpb.ReplicaDescriptor) (*varlogpb.LogStreamDescriptor, error) { cm.mu.Lock() defer cm.mu.Unlock() @@ -410,6 +462,7 @@ func (cm *clusterManager) addLogStreamInternal(ctx context.Context, replicas []* } logStreamDesc := &varlogpb.LogStreamDescriptor{ + TopicID: topicID, LogStreamID: logStreamID, Status: varlogpb.LogStreamStatusSealing, Replicas: replicas, @@ -440,7 +493,7 @@ func (cm *clusterManager) waitSealed(ctx context.Context, logStreamID types.LogS } } -func (cm *clusterManager) UnregisterLogStream(ctx context.Context, logStreamID types.LogStreamID) error { +func (cm *clusterManager) UnregisterLogStream(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) error { cm.mu.Lock() defer cm.mu.Unlock() @@ -491,7 +544,7 @@ func (cm *clusterManager) addLogStream(ctx context.Context, lsdesc *varlogpb.Log return lsdesc, cm.mrMgr.RegisterLogStream(ctx, lsdesc) } -func (cm *clusterManager) RemoveLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, logStreamID types.LogStreamID) error { +func (cm *clusterManager) RemoveLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, topicID types.TopicID, logStreamID types.LogStreamID) error { cm.mu.Lock() defer cm.mu.Unlock() @@ -504,7 +557,7 @@ func (cm *clusterManager) RemoveLogStreamReplica(ctx context.Context, storageNod return err } - return cm.snMgr.RemoveLogStream(ctx, storageNodeID, logStreamID) + return cm.snMgr.RemoveLogStream(ctx, storageNodeID, topicID, logStreamID) } func (cm *clusterManager) UpdateLogStream(ctx context.Context, logStreamID types.LogStreamID, poppedReplica, pushedReplica *varlogpb.ReplicaDescriptor) (*varlogpb.LogStreamDescriptor, error) { @@ -570,7 +623,7 @@ func (cm *clusterManager) UpdateLogStream(ctx context.Context, logStreamID types cm.logger.Panic("logstream push/pop error") } - if err := cm.snMgr.AddLogStreamReplica(ctx, pushedReplica.GetStorageNodeID(), logStreamID, pushedReplica.GetPath()); err != nil { + if err := cm.snMgr.AddLogStreamReplica(ctx, pushedReplica.GetStorageNodeID(), newLSDesc.TopicID, logStreamID, pushedReplica.GetPath()); err != nil { return nil, err } @@ -602,7 +655,7 @@ func (cm *clusterManager) removableLogStreamReplica(clusmeta *varlogpb.MetadataD return nil } -func (cm *clusterManager) Seal(ctx context.Context, logStreamID types.LogStreamID) ([]varlogpb.LogStreamMetadataDescriptor, types.GLSN, error) { +func (cm *clusterManager) Seal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) ([]varlogpb.LogStreamMetadataDescriptor, types.GLSN, error) { cm.mu.Lock() defer cm.mu.Unlock() @@ -614,7 +667,7 @@ func (cm *clusterManager) Seal(ctx context.Context, logStreamID types.LogStreamI return nil, types.InvalidGLSN, err } - result, err := cm.snMgr.Seal(ctx, logStreamID, lastGLSN) + result, err := cm.snMgr.Seal(ctx, topicID, logStreamID, lastGLSN) if err != nil { cm.statRepository.SetLogStreamStatus(logStreamID, varlogpb.LogStreamStatusRunning) } @@ -622,7 +675,7 @@ func (cm *clusterManager) Seal(ctx context.Context, logStreamID types.LogStreamI return result, lastGLSN, err } -func (cm *clusterManager) Sync(ctx context.Context, logStreamID types.LogStreamID, srcID, dstID types.StorageNodeID) (*snpb.SyncStatus, error) { +func (cm *clusterManager) Sync(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, srcID, dstID types.StorageNodeID) (*snpb.SyncStatus, error) { cm.mu.Lock() defer cm.mu.Unlock() @@ -630,10 +683,10 @@ func (cm *clusterManager) Sync(ctx context.Context, logStreamID types.LogStreamI if err != nil { return nil, err } - return cm.snMgr.Sync(ctx, logStreamID, srcID, dstID, lastGLSN) + return cm.snMgr.Sync(ctx, topicID, logStreamID, srcID, dstID, lastGLSN) } -func (cm *clusterManager) Unseal(ctx context.Context, logStreamID types.LogStreamID) (*varlogpb.LogStreamDescriptor, error) { +func (cm *clusterManager) Unseal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) (*varlogpb.LogStreamDescriptor, error) { cm.mu.Lock() defer cm.mu.Unlock() @@ -641,7 +694,7 @@ func (cm *clusterManager) Unseal(ctx context.Context, logStreamID types.LogStrea var clusmeta *varlogpb.MetadataDescriptor cm.statRepository.SetLogStreamStatus(logStreamID, varlogpb.LogStreamStatusUnsealing) - if err = cm.snMgr.Unseal(ctx, logStreamID); err != nil { + if err = cm.snMgr.Unseal(ctx, topicID, logStreamID); err != nil { goto errOut } @@ -671,31 +724,39 @@ func (cm *clusterManager) HandleHeartbeatTimeout(ctx context.Context, snID types for _, ls := range meta.GetLogStreams() { if ls.IsReplica(snID) { cm.logger.Debug("seal due to heartbeat timeout", zap.Any("snid", snID), zap.Any("lsid", ls.LogStreamID)) - cm.Seal(ctx, ls.LogStreamID) + cm.Seal(ctx, ls.TopicID, ls.LogStreamID) } } } -func (cm *clusterManager) checkLogStreamStatus(ctx context.Context, logStreamID types.LogStreamID, mrStatus, replicaStatus varlogpb.LogStreamStatus) { +func (cm *clusterManager) checkLogStreamStatus(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, mrStatus, replicaStatus varlogpb.LogStreamStatus) { lsStat := cm.statRepository.GetLogStream(logStreamID).Copy() switch lsStat.Status() { case varlogpb.LogStreamStatusRunning: if mrStatus.Sealed() || replicaStatus.Sealed() { cm.logger.Info("seal due to status mismatch", zap.Any("lsid", logStreamID)) - cm.Seal(ctx, logStreamID) + cm.Seal(ctx, topicID, logStreamID) } case varlogpb.LogStreamStatusSealing: for _, r := range lsStat.Replicas() { if r.Status != varlogpb.LogStreamStatusSealed { cm.logger.Info("seal due to status", zap.Any("lsid", logStreamID)) - cm.Seal(ctx, logStreamID) + cm.Seal(ctx, topicID, logStreamID) return } } cm.statRepository.SetLogStreamStatus(logStreamID, varlogpb.LogStreamStatusSealed) + case varlogpb.LogStreamStatusSealed: + for _, r := range lsStat.Replicas() { + if r.Status != varlogpb.LogStreamStatusSealed { + cm.statRepository.SetLogStreamStatus(logStreamID, varlogpb.LogStreamStatusSealing) + return + } + } + case varlogpb.LogStreamStatusUnsealing: for _, r := range lsStat.Replicas() { if r.Status == varlogpb.LogStreamStatusRunning { @@ -704,7 +765,7 @@ func (cm *clusterManager) checkLogStreamStatus(ctx context.Context, logStreamID return } else if r.Status == varlogpb.LogStreamStatusSealing { cm.logger.Info("seal due to unexpected status", zap.Any("lsid", logStreamID)) - cm.Seal(ctx, logStreamID) + cm.Seal(ctx, topicID, logStreamID) return } } @@ -712,8 +773,8 @@ func (cm *clusterManager) checkLogStreamStatus(ctx context.Context, logStreamID } } -func (cm *clusterManager) syncLogStream(ctx context.Context, logStreamID types.LogStreamID) { - min, max := types.MaxGLSN, types.InvalidGLSN +func (cm *clusterManager) syncLogStream(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) { + min, max := types.MaxVersion, types.InvalidVersion var src, tgt types.StorageNodeID lsStat := cm.statRepository.GetLogStream(logStreamID).Copy() @@ -737,19 +798,19 @@ func (cm *clusterManager) syncLogStream(ctx context.Context, logStreamID types.L return } - if i == 0 || r.HighWatermark < min { - min = r.HighWatermark + if i == 0 || r.Version < min { + min = r.Version tgt = snID } - if i == 0 || r.HighWatermark > max { - max = r.HighWatermark + if i == 0 || r.Version > max { + max = r.Version src = snID } } if src != tgt { - status, err := cm.Sync(ctx, logStreamID, src, tgt) + status, err := cm.Sync(ctx, topicID, logStreamID, src, tgt) cm.logger.Debug("sync", zap.Any("lsid", logStreamID), zap.Any("src", src), zap.Any("dst", tgt), zap.String("status", status.String()), zap.Error(err)) //TODO: Unseal @@ -772,18 +833,18 @@ func (cm *clusterManager) HandleReport(ctx context.Context, snm *varlogpb.Storag for _, ls := range snm.GetLogStreams() { mls := meta.GetLogStream(ls.LogStreamID) if mls != nil { - cm.checkLogStreamStatus(ctx, ls.LogStreamID, mls.Status, ls.Status) + cm.checkLogStreamStatus(ctx, ls.TopicID, ls.LogStreamID, mls.Status, ls.Status) continue } if time.Since(ls.CreatedTime) > cm.options.WatcherOptions.GCTimeout { - cm.RemoveLogStreamReplica(ctx, snm.StorageNode.StorageNodeID, ls.LogStreamID) + cm.RemoveLogStreamReplica(ctx, snm.StorageNode.StorageNodeID, ls.TopicID, ls.LogStreamID) } } // Sync LogStream for _, ls := range snm.GetLogStreams() { if ls.Status.Sealed() { - cm.syncLogStream(ctx, ls.LogStreamID) + cm.syncLogStream(ctx, ls.TopicID, ls.LogStreamID) } } } diff --git a/internal/vms/cluster_manager_service.go b/internal/vms/cluster_manager_service.go index 0a969b362..6a81c2b2b 100644 --- a/internal/vms/cluster_manager_service.go +++ b/internal/vms/cluster_manager_service.go @@ -70,10 +70,30 @@ func (s *clusterManagerService) UnregisterStorageNode(ctx context.Context, req * return rspI.(*vmspb.UnregisterStorageNodeResponse), verrors.ToStatusError(err) } +func (s *clusterManagerService) AddTopic(ctx context.Context, req *vmspb.AddTopicRequest) (*vmspb.AddTopicResponse, error) { + rspI, err := s.withTelemetry(ctx, "varlog.vmspb.ClusterManager/AddTopic", req, + func(ctx context.Context, reqI interface{}) (interface{}, error) { + topicDesc, err := s.clusManager.AddTopic(ctx) + return &vmspb.AddTopicResponse{Topic: topicDesc}, err + }, + ) + return rspI.(*vmspb.AddTopicResponse), verrors.ToStatusErrorWithCode(err, codes.Unavailable) +} + +func (s *clusterManagerService) UnregisterTopic(ctx context.Context, req *vmspb.UnregisterTopicRequest) (*vmspb.UnregisterTopicResponse, error) { + rspI, err := s.withTelemetry(ctx, "varlog.vmspb.ClusterManager/UnregisterTopic", req, + func(ctx context.Context, reqI interface{}) (interface{}, error) { + err := s.clusManager.UnregisterTopic(ctx, req.GetTopicID()) + return &vmspb.UnregisterTopicResponse{}, err + }, + ) + return rspI.(*vmspb.UnregisterTopicResponse), verrors.ToStatusError(err) +} + func (s *clusterManagerService) AddLogStream(ctx context.Context, req *vmspb.AddLogStreamRequest) (*vmspb.AddLogStreamResponse, error) { rspI, err := s.withTelemetry(ctx, "varlog.vmspb.ClusterManager/AddLogStream", req, func(ctx context.Context, reqI interface{}) (interface{}, error) { - logStreamDesc, err := s.clusManager.AddLogStream(ctx, req.GetReplicas()) + logStreamDesc, err := s.clusManager.AddLogStream(ctx, req.GetTopicID(), req.GetReplicas()) return &vmspb.AddLogStreamResponse{LogStream: logStreamDesc}, err }, ) @@ -83,7 +103,7 @@ func (s *clusterManagerService) AddLogStream(ctx context.Context, req *vmspb.Add func (s *clusterManagerService) UnregisterLogStream(ctx context.Context, req *vmspb.UnregisterLogStreamRequest) (*vmspb.UnregisterLogStreamResponse, error) { rspI, err := s.withTelemetry(ctx, "varlog.vmspb.ClusterManager/UnregisterLogStream", req, func(ctx context.Context, reqI interface{}) (interface{}, error) { - err := s.clusManager.UnregisterLogStream(ctx, req.GetLogStreamID()) + err := s.clusManager.UnregisterLogStream(ctx, req.GetTopicID(), req.GetLogStreamID()) return &vmspb.UnregisterLogStreamResponse{}, err }, ) @@ -93,7 +113,7 @@ func (s *clusterManagerService) UnregisterLogStream(ctx context.Context, req *vm func (s *clusterManagerService) RemoveLogStreamReplica(ctx context.Context, req *vmspb.RemoveLogStreamReplicaRequest) (*vmspb.RemoveLogStreamReplicaResponse, error) { rspI, err := s.withTelemetry(ctx, "varlog.vmspb.ClusterManager/RemoveLogStreamReplica", req, func(ctx context.Context, reqI interface{}) (interface{}, error) { - err := s.clusManager.RemoveLogStreamReplica(ctx, req.GetStorageNodeID(), req.GetLogStreamID()) + err := s.clusManager.RemoveLogStreamReplica(ctx, req.GetStorageNodeID(), req.GetTopicID(), req.GetLogStreamID()) return &vmspb.RemoveLogStreamReplicaResponse{}, err }, ) @@ -113,7 +133,7 @@ func (s *clusterManagerService) UpdateLogStream(ctx context.Context, req *vmspb. func (s *clusterManagerService) Seal(ctx context.Context, req *vmspb.SealRequest) (*vmspb.SealResponse, error) { rspI, err := s.withTelemetry(ctx, "varlog.vmspb.ClusterManager/Seal", req, func(ctx context.Context, reqI interface{}) (interface{}, error) { - lsmetas, sealedGLSN, err := s.clusManager.Seal(ctx, req.GetLogStreamID()) + lsmetas, sealedGLSN, err := s.clusManager.Seal(ctx, req.GetTopicID(), req.GetLogStreamID()) return &vmspb.SealResponse{ LogStreams: lsmetas, SealedGLSN: sealedGLSN, @@ -126,7 +146,7 @@ func (s *clusterManagerService) Seal(ctx context.Context, req *vmspb.SealRequest func (s *clusterManagerService) Sync(ctx context.Context, req *vmspb.SyncRequest) (*vmspb.SyncResponse, error) { rspI, err := s.withTelemetry(ctx, "varlog.vmspb.ClusterManager/Sync", req, func(ctx context.Context, reqI interface{}) (interface{}, error) { - status, err := s.clusManager.Sync(ctx, req.GetLogStreamID(), req.GetSrcStorageNodeID(), req.GetDstStorageNodeID()) + status, err := s.clusManager.Sync(ctx, req.GetTopicID(), req.GetLogStreamID(), req.GetSrcStorageNodeID(), req.GetDstStorageNodeID()) return &vmspb.SyncResponse{Status: status}, err }, ) @@ -136,7 +156,7 @@ func (s *clusterManagerService) Sync(ctx context.Context, req *vmspb.SyncRequest func (s *clusterManagerService) Unseal(ctx context.Context, req *vmspb.UnsealRequest) (*vmspb.UnsealResponse, error) { rspI, err := s.withTelemetry(ctx, "varlog.vmspb.ClusterManager/Unseal", req, func(ctx context.Context, reqI interface{}) (interface{}, error) { - lsdesc, err := s.clusManager.Unseal(ctx, req.GetLogStreamID()) + lsdesc, err := s.clusManager.Unseal(ctx, req.GetTopicID(), req.GetLogStreamID()) return &vmspb.UnsealResponse{LogStream: lsdesc}, err }, ) diff --git a/internal/vms/id_generator.go b/internal/vms/id_generator.go index 374135f1b..ce133c352 100644 --- a/internal/vms/id_generator.go +++ b/internal/vms/id_generator.go @@ -18,6 +18,15 @@ type LogStreamIDGenerator interface { Refresh(ctx context.Context) error } +type TopicIDGenerator interface { + // Generate returns conflict-free TopicID. If the returned identifier is duplicated, it + // means that the varlog cluster consistency is broken. + Generate() types.TopicID + + // Refresh renews TopicIDGenerator to update the latest cluster metadata. + Refresh(ctx context.Context) error +} + // TODO: seqLSIDGen does not consider the restart of VMS. type seqLSIDGen struct { seq types.LogStreamID @@ -107,3 +116,60 @@ func getLocalMaxLogStreamID(ctx context.Context, storageNodeID types.StorageNode } return maxID, nil } + +// TODO: seqTopicIDGen does not consider the restart of VMS. +type seqTopicIDGen struct { + seq types.TopicID + mu sync.Mutex + + cmView ClusterMetadataView + snMgr StorageNodeManager +} + +func NewSequentialTopicIDGenerator(ctx context.Context, cmView ClusterMetadataView, snMgr StorageNodeManager) (TopicIDGenerator, error) { + gen := &seqTopicIDGen{ + cmView: cmView, + snMgr: snMgr, + } + if err := gen.Refresh(ctx); err != nil { + return nil, err + } + return gen, nil +} + +func (gen *seqTopicIDGen) Generate() types.TopicID { + gen.mu.Lock() + defer gen.mu.Unlock() + gen.seq++ + return gen.seq +} + +func (gen *seqTopicIDGen) Refresh(ctx context.Context) error { + maxID, err := gen.getMaxTopicID(ctx) + if err != nil { + return err + } + + gen.mu.Lock() + defer gen.mu.Unlock() + if gen.seq < maxID { + gen.seq = maxID + } + return nil +} + +func (gen *seqTopicIDGen) getMaxTopicID(ctx context.Context) (maxID types.TopicID, err error) { + clusmeta, err := gen.cmView.ClusterMetadata(ctx) + if err != nil { + return maxID, err + } + + topicDescs := clusmeta.GetTopics() + for _, topicDesc := range topicDescs { + if maxID < topicDesc.TopicID { + maxID = topicDesc.TopicID + } + } + + return maxID, nil +} diff --git a/internal/vms/mr_manager.go b/internal/vms/mr_manager.go index 4dc4d17b1..353baccb7 100644 --- a/internal/vms/mr_manager.go +++ b/internal/vms/mr_manager.go @@ -40,7 +40,7 @@ type ClusterMetadataViewGetter interface { } const ( - RELOAD_INTERVAL = time.Second + ReloadInterval = time.Second ) type MetadataRepositoryManager interface { @@ -51,6 +51,10 @@ type MetadataRepositoryManager interface { UnregisterStorageNode(ctx context.Context, storageNodeID types.StorageNodeID) error + RegisterTopic(ctx context.Context, topicID types.TopicID) error + + UnregisterTopic(ctx context.Context, topicID types.TopicID) error + RegisterLogStream(ctx context.Context, logStreamDesc *varlogpb.LogStreamDescriptor) error UnregisterLogStream(ctx context.Context, logStreamID types.LogStreamID) error @@ -93,8 +97,6 @@ type mrManager struct { } const ( - // TODO (jun): Fix code styles (See https://golang.org/doc/effective_go.html#mixed-caps) - MRMANAGER_INIT_TIMEOUT = 5 * time.Second RPCAddrsFetchRetryInterval = 100 * time.Millisecond ) @@ -215,6 +217,44 @@ func (mrm *mrManager) UnregisterStorageNode(ctx context.Context, storageNodeID t return err } +func (mrm *mrManager) RegisterTopic(ctx context.Context, topicID types.TopicID) error { + mrm.mu.Lock() + defer func() { + mrm.dirty = true + mrm.mu.Unlock() + }() + + cli, err := mrm.c() + if err != nil { + return errors.WithMessage(err, "mrmanager: not accessible") + } + + if err := cli.RegisterTopic(ctx, topicID); err != nil { + return multierr.Append(err, cli.Close()) + } + + return err +} + +func (mrm *mrManager) UnregisterTopic(ctx context.Context, topicID types.TopicID) error { + mrm.mu.Lock() + defer func() { + mrm.dirty = true + mrm.mu.Unlock() + }() + + cli, err := mrm.c() + if err != nil { + return errors.WithMessage(err, "mrmanager: not accessible") + } + + if err := cli.UnregisterTopic(ctx, topicID); err != nil { + return multierr.Append(err, cli.Close()) + } + + return err +} + func (mrm *mrManager) RegisterLogStream(ctx context.Context, logStreamDesc *varlogpb.LogStreamDescriptor) error { mrm.mu.Lock() defer func() { @@ -364,7 +404,7 @@ func (mrm *mrManager) ClusterMetadata(ctx context.Context) (*varlogpb.MetadataDe mrm.mu.Lock() defer mrm.mu.Unlock() - if mrm.dirty || time.Since(mrm.updated) > RELOAD_INTERVAL { + if mrm.dirty || time.Since(mrm.updated) > ReloadInterval { meta, err := mrm.clusterMetadata(ctx) if err != nil { return nil, err diff --git a/internal/vms/replica_selector_test.go b/internal/vms/replica_selector_test.go index c8a6ed430..702e0a725 100644 --- a/internal/vms/replica_selector_test.go +++ b/internal/vms/replica_selector_test.go @@ -21,7 +21,9 @@ func TestRandomSelector(t *testing.T) { &varlogpb.MetadataDescriptor{ StorageNodes: []*varlogpb.StorageNodeDescriptor{ { - StorageNodeID: types.StorageNodeID(1), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(1), + }, Storages: []*varlogpb.StorageDescriptor{ { Path: "/tmp", @@ -29,7 +31,9 @@ func TestRandomSelector(t *testing.T) { }, }, { - StorageNodeID: types.StorageNodeID(2), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(2), + }, Storages: []*varlogpb.StorageDescriptor{ { Path: "/tmp", @@ -87,7 +91,6 @@ func TestRandomSelector(t *testing.T) { So(replicas[0].GetStorageNodeID(), ShouldEqual, types.StorageNodeID(2)) }) }) - }) } @@ -115,7 +118,9 @@ func TestVictimSelector(t *testing.T) { func(_ context.Context, snid types.StorageNodeID) (*varlogpb.StorageNodeMetadataDescriptor, error) { return &varlogpb.StorageNodeMetadataDescriptor{ StorageNode: &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snid, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snid, + }, }, LogStreams: []varlogpb.LogStreamMetadataDescriptor{ { @@ -125,7 +130,6 @@ func TestVictimSelector(t *testing.T) { }, }, }, nil - }, ).AnyTimes() @@ -135,7 +139,6 @@ func TestVictimSelector(t *testing.T) { So(err, ShouldNotBeNil) So(err.Error(), ShouldEqual, "victimselector: no victim") }) - }) Convey("When all replicas are not LogStreamStatusSealed, thus all are victims", func() { @@ -143,7 +146,9 @@ func TestVictimSelector(t *testing.T) { func(_ context.Context, snid types.StorageNodeID) (*varlogpb.StorageNodeMetadataDescriptor, error) { return &varlogpb.StorageNodeMetadataDescriptor{ StorageNode: &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snid, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snid, + }, }, LogStreams: []varlogpb.LogStreamMetadataDescriptor{ { @@ -177,7 +182,9 @@ func TestVictimSelector(t *testing.T) { } return &varlogpb.StorageNodeMetadataDescriptor{ StorageNode: &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snid, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snid, + }, }, LogStreams: []varlogpb.LogStreamMetadataDescriptor{ { diff --git a/internal/vms/sn_manager.go b/internal/vms/sn_manager.go index 71b938718..b782333bb 100644 --- a/internal/vms/sn_manager.go +++ b/internal/vms/sn_manager.go @@ -36,17 +36,17 @@ type StorageNodeManager interface { AddLogStream(ctx context.Context, logStreamDesc *varlogpb.LogStreamDescriptor) error - AddLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, logStreamID types.LogStreamID, path string) error + AddLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, topicID types.TopicID, logStreamID types.LogStreamID, path string) error - RemoveLogStream(ctx context.Context, storageNodeID types.StorageNodeID, logStreamID types.LogStreamID) error + RemoveLogStream(ctx context.Context, storageNodeID types.StorageNodeID, topicID types.TopicID, logStreamID types.LogStreamID) error // Seal seals logstream replicas of storage nodes corresponded with the logStreamID. It // passes the last committed GLSN to the logstream replicas. - Seal(ctx context.Context, logStreamID types.LogStreamID, lastCommittedGLSN types.GLSN) ([]varlogpb.LogStreamMetadataDescriptor, error) + Seal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, lastCommittedGLSN types.GLSN) ([]varlogpb.LogStreamMetadataDescriptor, error) - Sync(ctx context.Context, logStreamID types.LogStreamID, srcID, dstID types.StorageNodeID, lastGLSN types.GLSN) (*snpb.SyncStatus, error) + Sync(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, srcID, dstID types.StorageNodeID, lastGLSN types.GLSN) (*snpb.SyncStatus, error) - Unseal(ctx context.Context, logStreamID types.LogStreamID) error + Unseal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) error Close() error } @@ -204,7 +204,7 @@ func (sm *snManager) AddStorageNode(snmcl snc.StorageNodeManagementClient) { func (sm *snManager) addStorageNode(snmcl snc.StorageNodeManagementClient) { storageNodeID := snmcl.PeerStorageNodeID() if _, ok := sm.cs[storageNodeID]; ok { - sm.logger.Panic("already registered storagenode", zap.Uint32("snid", uint32(storageNodeID))) + sm.logger.Panic("already registered storagenode", zap.Int32("snid", int32(storageNodeID))) } sm.cs[storageNodeID] = snmcl } @@ -223,29 +223,30 @@ func (sm *snManager) RemoveStorageNode(storageNodeID types.StorageNodeID) { } } -func (sm *snManager) AddLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, logStreamID types.LogStreamID, path string) error { +func (sm *snManager) AddLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, topicID types.TopicID, logStreamID types.LogStreamID, path string) error { sm.mu.Lock() defer sm.mu.Unlock() - return sm.addLogStreamReplica(ctx, storageNodeID, logStreamID, path) + return sm.addLogStreamReplica(ctx, storageNodeID, topicID, logStreamID, path) } -func (sm *snManager) addLogStreamReplica(ctx context.Context, snid types.StorageNodeID, lsid types.LogStreamID, path string) error { +func (sm *snManager) addLogStreamReplica(ctx context.Context, snid types.StorageNodeID, topicid types.TopicID, lsid types.LogStreamID, path string) error { snmcl, ok := sm.cs[snid] if !ok { sm.refresh(ctx) return errors.Wrap(verrors.ErrNotExist, "storage node") } - return snmcl.AddLogStream(ctx, lsid, path) + return snmcl.AddLogStreamReplica(ctx, topicid, lsid, path) } func (sm *snManager) AddLogStream(ctx context.Context, logStreamDesc *varlogpb.LogStreamDescriptor) error { sm.mu.Lock() defer sm.mu.Unlock() + topicID := logStreamDesc.GetTopicID() logStreamID := logStreamDesc.GetLogStreamID() for _, replica := range logStreamDesc.GetReplicas() { - err := sm.addLogStreamReplica(ctx, replica.GetStorageNodeID(), logStreamID, replica.GetPath()) + err := sm.addLogStreamReplica(ctx, replica.GetStorageNodeID(), topicID, logStreamID, replica.GetPath()) if err != nil { return err } @@ -253,7 +254,7 @@ func (sm *snManager) AddLogStream(ctx context.Context, logStreamDesc *varlogpb.L return nil } -func (sm *snManager) RemoveLogStream(ctx context.Context, storageNodeID types.StorageNodeID, logStreamID types.LogStreamID) error { +func (sm *snManager) RemoveLogStream(ctx context.Context, storageNodeID types.StorageNodeID, topicID types.TopicID, logStreamID types.LogStreamID) error { sm.mu.Lock() defer sm.mu.Unlock() @@ -262,10 +263,10 @@ func (sm *snManager) RemoveLogStream(ctx context.Context, storageNodeID types.St sm.refresh(ctx) return errors.Wrap(verrors.ErrNotExist, "storage node") } - return snmcl.RemoveLogStream(ctx, logStreamID) + return snmcl.RemoveLogStream(ctx, topicID, logStreamID) } -func (sm *snManager) Seal(ctx context.Context, logStreamID types.LogStreamID, lastCommittedGLSN types.GLSN) ([]varlogpb.LogStreamMetadataDescriptor, error) { +func (sm *snManager) Seal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, lastCommittedGLSN types.GLSN) ([]varlogpb.LogStreamMetadataDescriptor, error) { sm.mu.Lock() defer sm.mu.Unlock() @@ -283,25 +284,26 @@ func (sm *snManager) Seal(ctx context.Context, logStreamID types.LogStreamID, la sm.refresh(ctx) return nil, errors.Wrap(verrors.ErrNotExist, "storage node") } - status, highWatermark, errSeal := cli.Seal(ctx, logStreamID, lastCommittedGLSN) + //status, highWatermark, errSeal := cli.Seal(ctx, logStreamID, lastCommittedGLSN) + status, _, errSeal := cli.Seal(ctx, topicID, logStreamID, lastCommittedGLSN) if errSeal != nil { // NOTE: The sealing log stream ignores the failure of sealing its replica. - sm.logger.Warn("could not seal replica", zap.Uint32("snid", uint32(storageNodeID)), zap.Uint32("lsid", uint32(logStreamID))) + sm.logger.Warn("could not seal replica", zap.Int32("snid", int32(storageNodeID)), zap.Int32("lsid", int32(logStreamID))) continue } lsmetaDesc = append(lsmetaDesc, varlogpb.LogStreamMetadataDescriptor{ StorageNodeID: storageNodeID, LogStreamID: logStreamID, Status: status, - HighWatermark: highWatermark, - Path: replica.GetPath(), + //HighWatermark: highWatermark, + Path: replica.GetPath(), }) } sm.logger.Info("seal result", zap.Reflect("logstream_meta", lsmetaDesc)) return lsmetaDesc, err } -func (sm *snManager) Sync(ctx context.Context, logStreamID types.LogStreamID, srcID, dstID types.StorageNodeID, lastGLSN types.GLSN) (*snpb.SyncStatus, error) { +func (sm *snManager) Sync(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, srcID, dstID types.StorageNodeID, lastGLSN types.GLSN) (*snpb.SyncStatus, error) { sm.mu.Lock() defer sm.mu.Unlock() @@ -327,10 +329,10 @@ func (sm *snManager) Sync(ctx context.Context, logStreamID types.LogStreamID, sr return nil, errors.Wrap(verrors.ErrNotExist, "storage node") } // TODO: check cluster meta if snids exist - return srcCli.Sync(ctx, logStreamID, dstID, dstCli.PeerAddress(), lastGLSN) + return srcCli.Sync(ctx, topicID, logStreamID, dstID, dstCli.PeerAddress(), lastGLSN) } -func (sm *snManager) Unseal(ctx context.Context, logStreamID types.LogStreamID) error { +func (sm *snManager) Unseal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) error { sm.mu.Lock() defer sm.mu.Unlock() @@ -339,11 +341,13 @@ func (sm *snManager) Unseal(ctx context.Context, logStreamID types.LogStreamID) return err } - replicas := make([]snpb.Replica, 0, len(rds)) + replicas := make([]varlogpb.Replica, 0, len(rds)) for _, rd := range rds { - replicas = append(replicas, snpb.Replica{ - StorageNodeID: rd.StorageNodeID, - LogStreamID: logStreamID, + replicas = append(replicas, varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: rd.StorageNodeID, + }, + LogStreamID: logStreamID, // TODO: need address field? }) } @@ -356,7 +360,7 @@ func (sm *snManager) Unseal(ctx context.Context, logStreamID types.LogStreamID) sm.refresh(ctx) return errors.Wrap(verrors.ErrNotExist, "storage node") } - if err := cli.Unseal(ctx, logStreamID, replicas); err != nil { + if err := cli.Unseal(ctx, topicID, logStreamID, replicas); err != nil { return err } } diff --git a/internal/vms/sn_manager_test.go b/internal/vms/sn_manager_test.go index b92dadd1e..9d1fb0c22 100644 --- a/internal/vms/sn_manager_test.go +++ b/internal/vms/sn_manager_test.go @@ -42,7 +42,6 @@ func TestAddStorageNode(t *testing.T) { snmcl.EXPECT().PeerStorageNodeID().Return(types.StorageNodeID(1)).Times(2) Convey("Then the StorageNode should be added to it", func() { - snManager.AddStorageNode(snmcl) Convey("When the StorageNodeID of StorageNode already exists in it", func() { @@ -97,10 +96,10 @@ func TestAddLogStream(t *testing.T) { }) Convey("When at least one of AddLogStream rpc to storage node fails", func() { - snmclList[0].EXPECT().AddLogStream(gomock.Any(), gomock.Any(), gomock.Any()).Return(verrors.ErrInternal).MaxTimes(1) + snmclList[0].EXPECT().AddLogStreamReplica(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(verrors.ErrInternal).MaxTimes(1) for i := 1; i < len(snmclList); i++ { snmcl := snmclList[i] - snmcl.EXPECT().AddLogStream(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil).MaxTimes(1) + snmcl.EXPECT().AddLogStreamReplica(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil).MaxTimes(1) } for i := 0; i < len(snmclList); i++ { snmcl := snmclList[i] @@ -117,7 +116,7 @@ func TestAddLogStream(t *testing.T) { for i := 0; i < len(snmclList); i++ { snmcl := snmclList[i] snManager.AddStorageNode(snmcl) - snmcl.EXPECT().AddLogStream(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil) + snmcl.EXPECT().AddLogStreamReplica(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil) } Convey("Then LogStream should be added", func() { @@ -133,6 +132,7 @@ func TestSeal(t *testing.T) { Convey("Given a StorageNodeManager and StorageNodes", t, withTestStorageNodeManager(t, func(ctrl *gomock.Controller, snManager StorageNodeManager, cmView *MockClusterMetadataView) { const ( nrSN = 3 + topicID = types.TopicID(1) logStreamID = types.LogStreamID(1) ) @@ -149,9 +149,11 @@ func TestSeal(t *testing.T) { snmclList = append(snmclList, snmcl) sndescList = append(sndescList, &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snid, - Address: "127.0.0.1:" + strconv.Itoa(10000+int(snid)), - Status: varlogpb.StorageNodeStatusRunning, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snid, + Address: "127.0.0.1:" + strconv.Itoa(10000+int(snid)), + }, + Status: varlogpb.StorageNodeStatusRunning, Storages: []*varlogpb.StorageDescriptor{ {Path: "/tmp", Used: 0, Total: 1}, }, @@ -178,7 +180,7 @@ func TestSeal(t *testing.T) { cmView.EXPECT().ClusterMetadata(gomock.Any()).Return(nil, verrors.ErrInternal).AnyTimes() Convey("Then Seal shoud return error", func() { - _, err := snManager.Seal(context.TODO(), logStreamID, types.MinGLSN) + _, err := snManager.Seal(context.TODO(), topicID, logStreamID, types.MinGLSN) So(err, ShouldNotBeNil) }) }) @@ -189,17 +191,12 @@ func TestSeal(t *testing.T) { for i := 0; i < len(snmclList)-1; i++ { snmcl := snmclList[i] - snmcl.EXPECT().Seal(gomock.Any(), gomock.Any(), lastGLSN).Return(varlogpb.LogStreamStatusSealed, lastGLSN, nil) + snmcl.EXPECT().Seal(gomock.Any(), gomock.Any(), gomock.Any(), lastGLSN).Return(varlogpb.LogStreamStatusSealed, lastGLSN, nil) } - snmclList[len(sndescList)-1].EXPECT().Seal(gomock.Any(), gomock.Any(), lastGLSN).Return(varlogpb.LogStreamStatusRunning, types.InvalidGLSN, verrors.ErrInternal) + snmclList[len(sndescList)-1].EXPECT().Seal(gomock.Any(), gomock.Any(), gomock.Any(), lastGLSN).Return(varlogpb.LogStreamStatusRunning, types.InvalidGLSN, verrors.ErrInternal) Convey("Then Seal should return response not having the failed node", func() { - - lsMetaDescList, err := snManager.Seal(context.TODO(), logStreamID, lastGLSN) - /* - So(err, ShouldNotBeNil) - */ - + lsMetaDescList, err := snManager.Seal(context.TODO(), topicID, logStreamID, lastGLSN) So(err, ShouldBeNil) So(len(lsMetaDescList), ShouldEqual, nrSN-1) @@ -220,6 +217,7 @@ func TestUnseal(t *testing.T) { Convey("Given a StorageNodeManager and StorageNodes", t, withTestStorageNodeManager(t, func(ctrl *gomock.Controller, snManager StorageNodeManager, cmView *MockClusterMetadataView) { const ( nrSN = 3 + topicID = types.TopicID(1) logStreamID = types.LogStreamID(1) ) @@ -236,9 +234,11 @@ func TestUnseal(t *testing.T) { snmclList = append(snmclList, snmcl) sndescList = append(sndescList, &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snid, - Address: "127.0.0.1:" + strconv.Itoa(10000+int(snid)), - Status: varlogpb.StorageNodeStatusRunning, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snid, + Address: "127.0.0.1:" + strconv.Itoa(10000+int(snid)), + }, + Status: varlogpb.StorageNodeStatusRunning, Storages: []*varlogpb.StorageDescriptor{ {Path: "/tmp", Used: 0, Total: 1}, }, @@ -265,7 +265,7 @@ func TestUnseal(t *testing.T) { cmView.EXPECT().ClusterMetadata(gomock.Any()).Return(nil, verrors.ErrInternal).AnyTimes() Convey("Then Unseal should return error", func() { - err := snManager.Unseal(context.TODO(), logStreamID) + err := snManager.Unseal(context.TODO(), topicID, logStreamID) So(err, ShouldNotBeNil) }) }) @@ -274,12 +274,12 @@ func TestUnseal(t *testing.T) { cmView.EXPECT().ClusterMetadata(gomock.Any()).Return(metaDesc, nil).AnyTimes() for i := 0; i < len(snmclList)-1; i++ { snmcl := snmclList[i] - snmcl.EXPECT().Unseal(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil) + snmcl.EXPECT().Unseal(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil) } - snmclList[len(sndescList)-1].EXPECT().Unseal(gomock.Any(), gomock.Any(), gomock.Any()).Return(verrors.ErrInternal) + snmclList[len(sndescList)-1].EXPECT().Unseal(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(verrors.ErrInternal) Convey("Then Unseal should fail", func() { - err := snManager.Unseal(context.TODO(), logStreamID) + err := snManager.Unseal(context.TODO(), topicID, logStreamID) So(err, ShouldNotBeNil) }) }) @@ -288,11 +288,11 @@ func TestUnseal(t *testing.T) { cmView.EXPECT().ClusterMetadata(gomock.Any()).Return(metaDesc, nil).AnyTimes() for i := 0; i < len(snmclList); i++ { snmcl := snmclList[i] - snmcl.EXPECT().Unseal(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil) + snmcl.EXPECT().Unseal(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil) } Convey("Then Unseal should succeed", func() { - err := snManager.Unseal(context.TODO(), logStreamID) + err := snManager.Unseal(context.TODO(), topicID, logStreamID) So(err, ShouldBeNil) }) }) diff --git a/internal/vms/vms_mock.go b/internal/vms/vms_mock.go index 0ade294b7..7a3a738b5 100644 --- a/internal/vms/vms_mock.go +++ b/internal/vms/vms_mock.go @@ -107,17 +107,17 @@ func (mr *MockStorageNodeManagerMockRecorder) AddLogStream(arg0, arg1 interface{ } // AddLogStreamReplica mocks base method. -func (m *MockStorageNodeManager) AddLogStreamReplica(arg0 context.Context, arg1 types.StorageNodeID, arg2 types.LogStreamID, arg3 string) error { +func (m *MockStorageNodeManager) AddLogStreamReplica(arg0 context.Context, arg1 types.StorageNodeID, arg2 types.TopicID, arg3 types.LogStreamID, arg4 string) error { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "AddLogStreamReplica", arg0, arg1, arg2, arg3) + ret := m.ctrl.Call(m, "AddLogStreamReplica", arg0, arg1, arg2, arg3, arg4) ret0, _ := ret[0].(error) return ret0 } // AddLogStreamReplica indicates an expected call of AddLogStreamReplica. -func (mr *MockStorageNodeManagerMockRecorder) AddLogStreamReplica(arg0, arg1, arg2, arg3 interface{}) *gomock.Call { +func (mr *MockStorageNodeManagerMockRecorder) AddLogStreamReplica(arg0, arg1, arg2, arg3, arg4 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AddLogStreamReplica", reflect.TypeOf((*MockStorageNodeManager)(nil).AddLogStreamReplica), arg0, arg1, arg2, arg3) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AddLogStreamReplica", reflect.TypeOf((*MockStorageNodeManager)(nil).AddLogStreamReplica), arg0, arg1, arg2, arg3, arg4) } // AddStorageNode mocks base method. @@ -220,17 +220,17 @@ func (mr *MockStorageNodeManagerMockRecorder) Refresh(arg0 interface{}) *gomock. } // RemoveLogStream mocks base method. -func (m *MockStorageNodeManager) RemoveLogStream(arg0 context.Context, arg1 types.StorageNodeID, arg2 types.LogStreamID) error { +func (m *MockStorageNodeManager) RemoveLogStream(arg0 context.Context, arg1 types.StorageNodeID, arg2 types.TopicID, arg3 types.LogStreamID) error { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "RemoveLogStream", arg0, arg1, arg2) + ret := m.ctrl.Call(m, "RemoveLogStream", arg0, arg1, arg2, arg3) ret0, _ := ret[0].(error) return ret0 } // RemoveLogStream indicates an expected call of RemoveLogStream. -func (mr *MockStorageNodeManagerMockRecorder) RemoveLogStream(arg0, arg1, arg2 interface{}) *gomock.Call { +func (mr *MockStorageNodeManagerMockRecorder) RemoveLogStream(arg0, arg1, arg2, arg3 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RemoveLogStream", reflect.TypeOf((*MockStorageNodeManager)(nil).RemoveLogStream), arg0, arg1, arg2) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RemoveLogStream", reflect.TypeOf((*MockStorageNodeManager)(nil).RemoveLogStream), arg0, arg1, arg2, arg3) } // RemoveStorageNode mocks base method. @@ -246,45 +246,45 @@ func (mr *MockStorageNodeManagerMockRecorder) RemoveStorageNode(arg0 interface{} } // Seal mocks base method. -func (m *MockStorageNodeManager) Seal(arg0 context.Context, arg1 types.LogStreamID, arg2 types.GLSN) ([]varlogpb.LogStreamMetadataDescriptor, error) { +func (m *MockStorageNodeManager) Seal(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID, arg3 types.GLSN) ([]varlogpb.LogStreamMetadataDescriptor, error) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Seal", arg0, arg1, arg2) + ret := m.ctrl.Call(m, "Seal", arg0, arg1, arg2, arg3) ret0, _ := ret[0].([]varlogpb.LogStreamMetadataDescriptor) ret1, _ := ret[1].(error) return ret0, ret1 } // Seal indicates an expected call of Seal. -func (mr *MockStorageNodeManagerMockRecorder) Seal(arg0, arg1, arg2 interface{}) *gomock.Call { +func (mr *MockStorageNodeManagerMockRecorder) Seal(arg0, arg1, arg2, arg3 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Seal", reflect.TypeOf((*MockStorageNodeManager)(nil).Seal), arg0, arg1, arg2) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Seal", reflect.TypeOf((*MockStorageNodeManager)(nil).Seal), arg0, arg1, arg2, arg3) } // Sync mocks base method. -func (m *MockStorageNodeManager) Sync(arg0 context.Context, arg1 types.LogStreamID, arg2, arg3 types.StorageNodeID, arg4 types.GLSN) (*snpb.SyncStatus, error) { +func (m *MockStorageNodeManager) Sync(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID, arg3, arg4 types.StorageNodeID, arg5 types.GLSN) (*snpb.SyncStatus, error) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Sync", arg0, arg1, arg2, arg3, arg4) + ret := m.ctrl.Call(m, "Sync", arg0, arg1, arg2, arg3, arg4, arg5) ret0, _ := ret[0].(*snpb.SyncStatus) ret1, _ := ret[1].(error) return ret0, ret1 } // Sync indicates an expected call of Sync. -func (mr *MockStorageNodeManagerMockRecorder) Sync(arg0, arg1, arg2, arg3, arg4 interface{}) *gomock.Call { +func (mr *MockStorageNodeManagerMockRecorder) Sync(arg0, arg1, arg2, arg3, arg4, arg5 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Sync", reflect.TypeOf((*MockStorageNodeManager)(nil).Sync), arg0, arg1, arg2, arg3, arg4) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Sync", reflect.TypeOf((*MockStorageNodeManager)(nil).Sync), arg0, arg1, arg2, arg3, arg4, arg5) } // Unseal mocks base method. -func (m *MockStorageNodeManager) Unseal(arg0 context.Context, arg1 types.LogStreamID) error { +func (m *MockStorageNodeManager) Unseal(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID) error { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Unseal", arg0, arg1) + ret := m.ctrl.Call(m, "Unseal", arg0, arg1, arg2) ret0, _ := ret[0].(error) return ret0 } // Unseal indicates an expected call of Unseal. -func (mr *MockStorageNodeManagerMockRecorder) Unseal(arg0, arg1 interface{}) *gomock.Call { +func (mr *MockStorageNodeManagerMockRecorder) Unseal(arg0, arg1, arg2 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Unseal", reflect.TypeOf((*MockStorageNodeManager)(nil).Unseal), arg0, arg1) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Unseal", reflect.TypeOf((*MockStorageNodeManager)(nil).Unseal), arg0, arg1, arg2) } diff --git a/pkg/benchmark/benchmark.go b/pkg/benchmark/benchmark.go index 345f3bf53..ce6a12a17 100644 --- a/pkg/benchmark/benchmark.go +++ b/pkg/benchmark/benchmark.go @@ -150,7 +150,7 @@ func (b *benchmarkImpl) clientLoop(ctx context.Context, idx int) error { for i := 1; i <= b.maxOpsPerClient; i++ { begin := time.Now() - _, err := client.Append(ctx, b.data) + _, err := client.Append(ctx, 0, b.data) end := time.Now() records = append(records, record{ err: err, diff --git a/pkg/logc/log_io_client.go b/pkg/logc/log_io_client.go index af9f90e06..a688f26e8 100644 --- a/pkg/logc/log_io_client.go +++ b/pkg/logc/log_io_client.go @@ -13,38 +13,32 @@ import ( "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/verrors" "github.com/kakao/varlog/proto/snpb" + "github.com/kakao/varlog/proto/varlogpb" ) -// StorageNode is a structure to represent identifier and address of storage node. -type StorageNode struct { - ID types.StorageNodeID - Addr string -} - type SubscribeResult struct { - types.LogEntry + varlogpb.LogEntry Error error } var InvalidSubscribeResult = SubscribeResult{ - LogEntry: types.InvalidLogEntry, + LogEntry: varlogpb.InvalidLogEntry(), Error: stderrors.New("invalid subscribe result"), } // LogIOClient contains methods to use basic operations - append, read, subscribe, trim of // single storage node. type LogIOClient interface { - Append(ctx context.Context, logStreamID types.LogStreamID, data []byte, backups ...StorageNode) (types.GLSN, error) - Read(ctx context.Context, logStreamID types.LogStreamID, glsn types.GLSN) (*types.LogEntry, error) - Subscribe(ctx context.Context, logStreamID types.LogStreamID, begin, end types.GLSN) (<-chan SubscribeResult, error) - Trim(ctx context.Context, glsn types.GLSN) error + Append(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, data []byte, backups ...varlogpb.StorageNode) (types.GLSN, error) + Read(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, glsn types.GLSN) (*varlogpb.LogEntry, error) + Subscribe(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, begin, end types.GLSN) (<-chan SubscribeResult, error) + Trim(ctx context.Context, topicID types.TopicID, glsn types.GLSN) error io.Closer } type logIOClient struct { rpcConn *rpc.Conn rpcClient snpb.LogIOClient - s StorageNode } func NewLogIOClient(ctx context.Context, address string) (LogIOClient, error) { @@ -52,49 +46,42 @@ func NewLogIOClient(ctx context.Context, address string) (LogIOClient, error) { if err != nil { return nil, errors.WithMessage(err, "logiocl") } - return NewLogIOClientFromRpcConn(rpcConn) + return NewLogIOClientFromRPCConn(rpcConn) } -func NewLogIOClientFromRpcConn(rpcConn *rpc.Conn) (LogIOClient, error) { +func NewLogIOClientFromRPCConn(rpcConn *rpc.Conn) (LogIOClient, error) { return &logIOClient{ rpcConn: rpcConn, rpcClient: snpb.NewLogIOClient(rpcConn.Conn), }, nil } -// Append sends given data to the log stream in the storage node. To replicate the data, it -// provides argument backups that indicate backup storage nodes. If append operation completes -// successfully, valid GLSN is sent to the caller. When it goes wrong, zero is returned. -func (c *logIOClient) Append(ctx context.Context, logStreamID types.LogStreamID, data []byte, backups ...StorageNode) (types.GLSN, error) { +// Append stores data to the log stream specified with the topicID and the logStreamID. +// The backup indicates the storage nodes that have backup replicas of that log stream. +// It returns valid GLSN if the append completes successfully. +func (c *logIOClient) Append(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, data []byte, backups ...varlogpb.StorageNode) (types.GLSN, error) { req := &snpb.AppendRequest{ Payload: data, + TopicID: topicID, LogStreamID: logStreamID, - } - - for _, b := range backups { - req.Backups = append(req.Backups, snpb.AppendRequest_BackupNode{ - StorageNodeID: b.ID, - Address: b.Addr, - }) + Backups: backups, } rsp, err := c.rpcClient.Append(ctx, req) - if err != nil { - return types.InvalidGLSN, errors.Wrap(verrors.FromStatusError(err), "logiocl") - } - return rsp.GetGLSN(), nil + return rsp.GetGLSN(), errors.Wrap(verrors.FromStatusError(err), "logiocl") } // Read operation asks the storage node to retrieve data at a given log position in the log stream. -func (c *logIOClient) Read(ctx context.Context, logStreamID types.LogStreamID, glsn types.GLSN) (*types.LogEntry, error) { +func (c *logIOClient) Read(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, glsn types.GLSN) (*varlogpb.LogEntry, error) { req := &snpb.ReadRequest{ GLSN: glsn, + TopicID: topicID, LogStreamID: logStreamID, } rsp, err := c.rpcClient.Read(ctx, req) if err != nil { return nil, errors.Wrap(verrors.FromStatusError(err), "logiocl") } - return &types.LogEntry{ + return &varlogpb.LogEntry{ GLSN: rsp.GetGLSN(), LLSN: rsp.GetLLSN(), Data: rsp.GetPayload(), @@ -103,12 +90,13 @@ func (c *logIOClient) Read(ctx context.Context, logStreamID types.LogStreamID, g // Subscribe gets log entries continuously from the storage node. It guarantees that LLSNs of log // entries taken are sequential. -func (c *logIOClient) Subscribe(ctx context.Context, logStreamID types.LogStreamID, begin, end types.GLSN) (<-chan SubscribeResult, error) { +func (c *logIOClient) Subscribe(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, begin, end types.GLSN) (<-chan SubscribeResult, error) { if begin >= end { return nil, errors.New("logiocl: invalid argument") } req := &snpb.SubscribeRequest{ + TopicID: topicID, LogStreamID: logStreamID, GLSNBegin: begin, GLSNEnd: end, @@ -126,7 +114,7 @@ func (c *logIOClient) Subscribe(ctx context.Context, logStreamID types.LogStream err := verrors.FromStatusError(rpcErr) result := SubscribeResult{Error: err} if err == nil { - result.LogEntry = types.LogEntry{ + result.LogEntry = varlogpb.LogEntry{ GLSN: rsp.GetGLSN(), LLSN: rsp.GetLLSN(), Data: rsp.GetPayload(), @@ -147,8 +135,11 @@ func (c *logIOClient) Subscribe(ctx context.Context, logStreamID types.LogStream // Trim deletes log entries greater than or equal to given GLSN in the storage node. The number of // deleted log entries are returned. -func (c *logIOClient) Trim(ctx context.Context, glsn types.GLSN) error { - req := &snpb.TrimRequest{GLSN: glsn} +func (c *logIOClient) Trim(ctx context.Context, topicID types.TopicID, glsn types.GLSN) error { + req := &snpb.TrimRequest{ + TopicID: topicID, + GLSN: glsn, + } _, err := c.rpcClient.Trim(ctx, req) return errors.Wrap(verrors.FromStatusError(err), "logiocl") } diff --git a/pkg/logc/log_io_client_mock.go b/pkg/logc/log_io_client_mock.go index c3a1670c0..dcea8a3e1 100644 --- a/pkg/logc/log_io_client_mock.go +++ b/pkg/logc/log_io_client_mock.go @@ -11,6 +11,7 @@ import ( gomock "github.com/golang/mock/gomock" types "github.com/kakao/varlog/pkg/types" + varlogpb "github.com/kakao/varlog/proto/varlogpb" ) // MockLogIOClient is a mock of LogIOClient interface. @@ -37,10 +38,10 @@ func (m *MockLogIOClient) EXPECT() *MockLogIOClientMockRecorder { } // Append mocks base method. -func (m *MockLogIOClient) Append(arg0 context.Context, arg1 types.LogStreamID, arg2 []byte, arg3 ...StorageNode) (types.GLSN, error) { +func (m *MockLogIOClient) Append(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID, arg3 []byte, arg4 ...varlogpb.StorageNode) (types.GLSN, error) { m.ctrl.T.Helper() - varargs := []interface{}{arg0, arg1, arg2} - for _, a := range arg3 { + varargs := []interface{}{arg0, arg1, arg2, arg3} + for _, a := range arg4 { varargs = append(varargs, a) } ret := m.ctrl.Call(m, "Append", varargs...) @@ -50,9 +51,9 @@ func (m *MockLogIOClient) Append(arg0 context.Context, arg1 types.LogStreamID, a } // Append indicates an expected call of Append. -func (mr *MockLogIOClientMockRecorder) Append(arg0, arg1, arg2 interface{}, arg3 ...interface{}) *gomock.Call { +func (mr *MockLogIOClientMockRecorder) Append(arg0, arg1, arg2, arg3 interface{}, arg4 ...interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - varargs := append([]interface{}{arg0, arg1, arg2}, arg3...) + varargs := append([]interface{}{arg0, arg1, arg2, arg3}, arg4...) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Append", reflect.TypeOf((*MockLogIOClient)(nil).Append), varargs...) } @@ -71,45 +72,45 @@ func (mr *MockLogIOClientMockRecorder) Close() *gomock.Call { } // Read mocks base method. -func (m *MockLogIOClient) Read(arg0 context.Context, arg1 types.LogStreamID, arg2 types.GLSN) (*types.LogEntry, error) { +func (m *MockLogIOClient) Read(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID, arg3 types.GLSN) (*varlogpb.LogEntry, error) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Read", arg0, arg1, arg2) - ret0, _ := ret[0].(*types.LogEntry) + ret := m.ctrl.Call(m, "Read", arg0, arg1, arg2, arg3) + ret0, _ := ret[0].(*varlogpb.LogEntry) ret1, _ := ret[1].(error) return ret0, ret1 } // Read indicates an expected call of Read. -func (mr *MockLogIOClientMockRecorder) Read(arg0, arg1, arg2 interface{}) *gomock.Call { +func (mr *MockLogIOClientMockRecorder) Read(arg0, arg1, arg2, arg3 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Read", reflect.TypeOf((*MockLogIOClient)(nil).Read), arg0, arg1, arg2) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Read", reflect.TypeOf((*MockLogIOClient)(nil).Read), arg0, arg1, arg2, arg3) } // Subscribe mocks base method. -func (m *MockLogIOClient) Subscribe(arg0 context.Context, arg1 types.LogStreamID, arg2, arg3 types.GLSN) (<-chan SubscribeResult, error) { +func (m *MockLogIOClient) Subscribe(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID, arg3, arg4 types.GLSN) (<-chan SubscribeResult, error) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Subscribe", arg0, arg1, arg2, arg3) + ret := m.ctrl.Call(m, "Subscribe", arg0, arg1, arg2, arg3, arg4) ret0, _ := ret[0].(<-chan SubscribeResult) ret1, _ := ret[1].(error) return ret0, ret1 } // Subscribe indicates an expected call of Subscribe. -func (mr *MockLogIOClientMockRecorder) Subscribe(arg0, arg1, arg2, arg3 interface{}) *gomock.Call { +func (mr *MockLogIOClientMockRecorder) Subscribe(arg0, arg1, arg2, arg3, arg4 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Subscribe", reflect.TypeOf((*MockLogIOClient)(nil).Subscribe), arg0, arg1, arg2, arg3) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Subscribe", reflect.TypeOf((*MockLogIOClient)(nil).Subscribe), arg0, arg1, arg2, arg3, arg4) } // Trim mocks base method. -func (m *MockLogIOClient) Trim(arg0 context.Context, arg1 types.GLSN) error { +func (m *MockLogIOClient) Trim(arg0 context.Context, arg1 types.TopicID, arg2 types.GLSN) error { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Trim", arg0, arg1) + ret := m.ctrl.Call(m, "Trim", arg0, arg1, arg2) ret0, _ := ret[0].(error) return ret0 } // Trim indicates an expected call of Trim. -func (mr *MockLogIOClientMockRecorder) Trim(arg0, arg1 interface{}) *gomock.Call { +func (mr *MockLogIOClientMockRecorder) Trim(arg0, arg1, arg2 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Trim", reflect.TypeOf((*MockLogIOClient)(nil).Trim), arg0, arg1) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Trim", reflect.TypeOf((*MockLogIOClient)(nil).Trim), arg0, arg1, arg2) } diff --git a/pkg/logc/log_io_client_test.go b/pkg/logc/log_io_client_test.go index fc2fa4ade..8862d1cb3 100644 --- a/pkg/logc/log_io_client_test.go +++ b/pkg/logc/log_io_client_test.go @@ -15,6 +15,7 @@ import ( "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/proto/snpb" "github.com/kakao/varlog/proto/snpb/mock" + "github.com/kakao/varlog/proto/varlogpb" ) type byGLSN []types.GLSN @@ -139,32 +140,33 @@ func TestBasicOperations(t *testing.T) { mockClient := newMockStorageNodeServiceClient(ctrl, &sn) const logStreamID = types.LogStreamID(0) + const topicID = types.TopicID(1) client := &logIOClient{rpcClient: mockClient} Convey("Simple Append/Read/Subscribe/Trim operations should work", t, func() { var prevGLSN types.GLSN var currGLSN types.GLSN - var currLogEntry *types.LogEntry + var currLogEntry *varlogpb.LogEntry var err error var msg string msg = "msg-1" - currGLSN, err = client.Append(context.TODO(), logStreamID, []byte(msg)) + currGLSN, err = client.Append(context.TODO(), topicID, logStreamID, []byte(msg)) So(err, ShouldBeNil) - currLogEntry, err = client.Read(context.TODO(), logStreamID, currGLSN) + currLogEntry, err = client.Read(context.TODO(), topicID, logStreamID, currGLSN) So(err, ShouldBeNil) So(string(currLogEntry.Data), ShouldEqual, msg) prevGLSN = currGLSN msg = "msg-2" - currGLSN, err = client.Append(context.TODO(), logStreamID, []byte(msg)) + currGLSN, err = client.Append(context.TODO(), topicID, logStreamID, []byte(msg)) So(err, ShouldBeNil) So(currGLSN, ShouldBeGreaterThan, prevGLSN) - currLogEntry, err = client.Read(context.TODO(), logStreamID, currGLSN) + currLogEntry, err = client.Read(context.TODO(), topicID, logStreamID, currGLSN) So(err, ShouldBeNil) So(string(currLogEntry.Data), ShouldEqual, msg) prevGLSN = currGLSN - ch, err := client.Subscribe(context.TODO(), logStreamID, types.GLSN(0), types.GLSN(10)) + ch, err := client.Subscribe(context.TODO(), topicID, logStreamID, types.GLSN(0), types.GLSN(10)) So(err, ShouldBeNil) subRes := <-ch So(subRes.Error, ShouldBeNil) @@ -178,10 +180,10 @@ func TestBasicOperations(t *testing.T) { So(subRes.LLSN, ShouldEqual, types.LLSN(1)) So(string(subRes.Data), ShouldEqual, "msg-2") - err = client.Trim(context.TODO(), types.GLSN(0)) + err = client.Trim(context.TODO(), topicID, types.GLSN(0)) So(subRes.Error, ShouldBeNil) - currLogEntry, err = client.Read(context.TODO(), logStreamID, types.GLSN(0)) + currLogEntry, err = client.Read(context.TODO(), topicID, logStreamID, types.GLSN(0)) So(err, ShouldNotBeNil) }) } diff --git a/pkg/logc/log_io_proxy.go b/pkg/logc/log_io_proxy.go index 35d9c1653..d00aeb443 100644 --- a/pkg/logc/log_io_proxy.go +++ b/pkg/logc/log_io_proxy.go @@ -4,6 +4,7 @@ import ( "context" "github.com/kakao/varlog/pkg/types" + "github.com/kakao/varlog/proto/varlogpb" ) type logClientProxy struct { @@ -20,20 +21,20 @@ func newLogIOProxy(client LogIOClient, closer func() error) *logClientProxy { } } -func (l *logClientProxy) Append(ctx context.Context, logStreamID types.LogStreamID, data []byte, backups ...StorageNode) (types.GLSN, error) { - return l.client.Append(ctx, logStreamID, data, backups...) +func (l *logClientProxy) Append(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, data []byte, backups ...varlogpb.StorageNode) (types.GLSN, error) { + return l.client.Append(ctx, topicID, logStreamID, data, backups...) } -func (l *logClientProxy) Read(ctx context.Context, logStreamID types.LogStreamID, glsn types.GLSN) (*types.LogEntry, error) { - return l.client.Read(ctx, logStreamID, glsn) +func (l *logClientProxy) Read(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, glsn types.GLSN) (*varlogpb.LogEntry, error) { + return l.client.Read(ctx, topicID, logStreamID, glsn) } -func (l *logClientProxy) Subscribe(ctx context.Context, logStreamID types.LogStreamID, begin, end types.GLSN) (<-chan SubscribeResult, error) { - return l.client.Subscribe(ctx, logStreamID, begin, end) +func (l *logClientProxy) Subscribe(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, begin, end types.GLSN) (<-chan SubscribeResult, error) { + return l.client.Subscribe(ctx, topicID, logStreamID, begin, end) } -func (l *logClientProxy) Trim(ctx context.Context, glsn types.GLSN) error { - return l.client.Trim(ctx, glsn) +func (l *logClientProxy) Trim(ctx context.Context, topicID types.TopicID, glsn types.GLSN) error { + return l.client.Trim(ctx, topicID, glsn) } func (l *logClientProxy) Close() error { diff --git a/pkg/mrc/metadata_repository_client.go b/pkg/mrc/metadata_repository_client.go index 953b46429..435a98a4c 100644 --- a/pkg/mrc/metadata_repository_client.go +++ b/pkg/mrc/metadata_repository_client.go @@ -19,6 +19,8 @@ import ( type MetadataRepositoryClient interface { RegisterStorageNode(context.Context, *varlogpb.StorageNodeDescriptor) error UnregisterStorageNode(context.Context, types.StorageNodeID) error + RegisterTopic(context.Context, types.TopicID) error + UnregisterTopic(context.Context, types.TopicID) error RegisterLogStream(context.Context, *varlogpb.LogStreamDescriptor) error UnregisterLogStream(context.Context, types.LogStreamID) error UpdateLogStream(context.Context, *varlogpb.LogStreamDescriptor) error @@ -51,10 +53,10 @@ func NewMetadataRepositoryClient(ctx context.Context, address string) (MetadataR err := errors.Errorf("mrmcl: not ready (%+v)", status) return nil, multierr.Append(err, rpcConn.Close()) } - return NewMetadataRepositoryClientFromRpcConn(rpcConn) + return NewMetadataRepositoryClientFromRPCConn(rpcConn) } -func NewMetadataRepositoryClientFromRpcConn(rpcConn *rpc.Conn) (MetadataRepositoryClient, error) { +func NewMetadataRepositoryClientFromRPCConn(rpcConn *rpc.Conn) (MetadataRepositoryClient, error) { client := &metadataRepositoryClient{ rpcConn: rpcConn, client: mrpb.NewMetadataRepositoryServiceClient(rpcConn.Conn), @@ -82,7 +84,9 @@ func (c *metadataRepositoryClient) RegisterStorageNode(ctx context.Context, sn * func (c *metadataRepositoryClient) UnregisterStorageNode(ctx context.Context, snID types.StorageNodeID) error { req := &mrpb.StorageNodeRequest{ StorageNode: &varlogpb.StorageNodeDescriptor{ - StorageNodeID: snID, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + }, }, } @@ -90,6 +94,24 @@ func (c *metadataRepositoryClient) UnregisterStorageNode(ctx context.Context, sn return verrors.FromStatusError(errors.WithStack(err)) } +func (c *metadataRepositoryClient) RegisterTopic(ctx context.Context, topicID types.TopicID) error { + req := &mrpb.TopicRequest{ + TopicID: topicID, + } + + _, err := c.client.RegisterTopic(ctx, req) + return verrors.FromStatusError(errors.WithStack(err)) +} + +func (c *metadataRepositoryClient) UnregisterTopic(ctx context.Context, topicID types.TopicID) error { + req := &mrpb.TopicRequest{ + TopicID: topicID, + } + + _, err := c.client.UnregisterTopic(ctx, req) + return verrors.FromStatusError(errors.WithStack(err)) +} + func (c *metadataRepositoryClient) RegisterLogStream(ctx context.Context, ls *varlogpb.LogStreamDescriptor) error { if !ls.Valid() { return errors.WithStack(verrors.ErrInvalid) diff --git a/pkg/mrc/metadata_repository_client_mock.go b/pkg/mrc/metadata_repository_client_mock.go index 07c9c6b01..e8248a9c1 100644 --- a/pkg/mrc/metadata_repository_client_mock.go +++ b/pkg/mrc/metadata_repository_client_mock.go @@ -94,6 +94,20 @@ func (mr *MockMetadataRepositoryClientMockRecorder) RegisterStorageNode(arg0, ar return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RegisterStorageNode", reflect.TypeOf((*MockMetadataRepositoryClient)(nil).RegisterStorageNode), arg0, arg1) } +// RegisterTopic mocks base method. +func (m *MockMetadataRepositoryClient) RegisterTopic(arg0 context.Context, arg1 types.TopicID) error { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "RegisterTopic", arg0, arg1) + ret0, _ := ret[0].(error) + return ret0 +} + +// RegisterTopic indicates an expected call of RegisterTopic. +func (mr *MockMetadataRepositoryClientMockRecorder) RegisterTopic(arg0, arg1 interface{}) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RegisterTopic", reflect.TypeOf((*MockMetadataRepositoryClient)(nil).RegisterTopic), arg0, arg1) +} + // Seal mocks base method. func (m *MockMetadataRepositoryClient) Seal(arg0 context.Context, arg1 types.LogStreamID) (types.GLSN, error) { m.ctrl.T.Helper() @@ -137,6 +151,20 @@ func (mr *MockMetadataRepositoryClientMockRecorder) UnregisterStorageNode(arg0, return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UnregisterStorageNode", reflect.TypeOf((*MockMetadataRepositoryClient)(nil).UnregisterStorageNode), arg0, arg1) } +// UnregisterTopic mocks base method. +func (m *MockMetadataRepositoryClient) UnregisterTopic(arg0 context.Context, arg1 types.TopicID) error { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "UnregisterTopic", arg0, arg1) + ret0, _ := ret[0].(error) + return ret0 +} + +// UnregisterTopic indicates an expected call of UnregisterTopic. +func (mr *MockMetadataRepositoryClientMockRecorder) UnregisterTopic(arg0, arg1 interface{}) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UnregisterTopic", reflect.TypeOf((*MockMetadataRepositoryClient)(nil).UnregisterTopic), arg0, arg1) +} + // Unseal mocks base method. func (m *MockMetadataRepositoryClient) Unseal(arg0 context.Context, arg1 types.LogStreamID) error { m.ctrl.T.Helper() diff --git a/pkg/mrc/metadata_repository_client_test.go b/pkg/mrc/metadata_repository_client_test.go index 8af183ac1..3a3f2b9c2 100644 --- a/pkg/mrc/metadata_repository_client_test.go +++ b/pkg/mrc/metadata_repository_client_test.go @@ -61,7 +61,9 @@ func TestMRClientRegisterStorageNode(t *testing.T) { Convey("When passed Address in StorageNodeDescriptor is empty", func() { Convey("Then the MRClient should return an ErrInvalid", func() { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + }, Storages: []*varlogpb.StorageDescriptor{ { Path: "path", @@ -78,8 +80,10 @@ func TestMRClientRegisterStorageNode(t *testing.T) { Convey("When passed Storages in StorageNodeDescriptor is empty", func() { Convey("Then the MRClient should return an ErrInvalid", func() { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), - Address: "address", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + Address: "address", + }, } err := mc.RegisterStorageNode(context.TODO(), sn) So(err, assert.ShouldWrap, verrors.ErrInvalid) @@ -89,8 +93,10 @@ func TestMRClientRegisterStorageNode(t *testing.T) { Convey("When passed Path in StorageDescriptor is empty", func() { Convey("Then the MRClient should return an ErrInvalid", func() { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), - Address: "address", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + Address: "address", + }, Storages: []*varlogpb.StorageDescriptor{ { Used: 0, @@ -106,8 +112,10 @@ func TestMRClientRegisterStorageNode(t *testing.T) { Convey("When passed Used > Total in StorageDescriptor", func() { Convey("Then the MRClient should return an ErrInvalid", func() { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), - Address: "address", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + Address: "address", + }, Storages: []*varlogpb.StorageDescriptor{ { Path: "path", @@ -125,8 +133,10 @@ func TestMRClientRegisterStorageNode(t *testing.T) { mockClient.EXPECT().RegisterStorageNode(gomock.Any(), gomock.Any()).Return(nil, verrors.ErrInternal) Convey("Then the MRClient should return the error", func() { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), - Address: "address", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + Address: "address", + }, Storages: []*varlogpb.StorageDescriptor{ { Path: "path", @@ -146,8 +156,10 @@ func TestMRClientRegisterStorageNode(t *testing.T) { mockClient.EXPECT().RegisterStorageNode(gomock.Any(), gomock.Any()).Return(&pbtypes.Empty{}, nil) Convey("Then the MRClient should return success", func() { sn := &varlogpb.StorageNodeDescriptor{ - StorageNodeID: types.StorageNodeID(0), - Address: "address", + StorageNode: varlogpb.StorageNode{ + StorageNodeID: types.StorageNodeID(0), + Address: "address", + }, Storages: []*varlogpb.StorageDescriptor{ { Path: "path", diff --git a/pkg/mrc/metadata_repository_management_client.go b/pkg/mrc/metadata_repository_management_client.go index 78cb52087..3312907bc 100644 --- a/pkg/mrc/metadata_repository_management_client.go +++ b/pkg/mrc/metadata_repository_management_client.go @@ -45,10 +45,10 @@ func NewMetadataRepositoryManagementClient(ctx context.Context, address string) return nil, multierr.Append(err, rpcConn.Close()) } - return NewMetadataRepositoryManagementClientFromRpcConn(rpcConn) + return NewMetadataRepositoryManagementClientFromRPCConn(rpcConn) } -func NewMetadataRepositoryManagementClientFromRpcConn(rpcConn *rpc.Conn) (MetadataRepositoryManagementClient, error) { +func NewMetadataRepositoryManagementClientFromRPCConn(rpcConn *rpc.Conn) (MetadataRepositoryManagementClient, error) { c := &metadataRepositoryManagementClient{ rpcConn: rpcConn, client: mrpb.NewManagementClient(rpcConn.Conn), diff --git a/pkg/mrc/mrconnector/mr_connector.go b/pkg/mrc/mrconnector/mr_connector.go index c046f2643..7675051ef 100644 --- a/pkg/mrc/mrconnector/mr_connector.go +++ b/pkg/mrc/mrconnector/mr_connector.go @@ -242,11 +242,9 @@ func (c *connectorImpl) connect(ctx context.Context) (*mrProxy, error) { } if err != nil { _ = c.updateClusterInfoFromSeed(ctx) - } else { - if !c.casProxy(nil, proxy) { - _ = proxy.Close() - proxy, err = c.loadProxy() - } + } else if !c.casProxy(nil, proxy) { + _ = proxy.Close() + proxy, err = c.loadProxy() } return proxy, err }) @@ -270,8 +268,8 @@ func (c *connectorImpl) connectToMR(ctx context.Context, addr string) (cl mrc.Me } // It always returns nil as error value. - cl, _ = mrc.NewMetadataRepositoryClientFromRpcConn(conn) - mcl, _ = mrc.NewMetadataRepositoryManagementClientFromRpcConn(conn) + cl, _ = mrc.NewMetadataRepositoryClientFromRPCConn(conn) + mcl, _ = mrc.NewMetadataRepositoryManagementClientFromRPCConn(conn) return cl, mcl, nil } diff --git a/pkg/mrc/mrconnector/mrc_proxy.go b/pkg/mrc/mrconnector/mrc_proxy.go index 190adec6f..a979a0787 100644 --- a/pkg/mrc/mrconnector/mrc_proxy.go +++ b/pkg/mrc/mrconnector/mrc_proxy.go @@ -76,6 +76,30 @@ func (m *mrProxy) UnregisterStorageNode(ctx context.Context, id types.StorageNod return m.cl.UnregisterStorageNode(ctx, id) } +func (m *mrProxy) RegisterTopic(ctx context.Context, id types.TopicID) error { + m.mu.RLock() + defer func() { + atomic.AddInt64(&m.inflight, -1) + m.mu.RUnlock() + m.cond.Signal() + }() + atomic.AddInt64(&m.inflight, 1) + + return m.cl.RegisterTopic(ctx, id) +} + +func (m *mrProxy) UnregisterTopic(ctx context.Context, id types.TopicID) error { + m.mu.RLock() + defer func() { + atomic.AddInt64(&m.inflight, -1) + m.mu.RUnlock() + m.cond.Signal() + }() + atomic.AddInt64(&m.inflight, 1) + + return m.cl.UnregisterTopic(ctx, id) +} + func (m *mrProxy) RegisterLogStream(ctx context.Context, descriptor *varlogpb.LogStreamDescriptor) error { m.mu.RLock() defer func() { diff --git a/pkg/rpc/rpc_conn.go b/pkg/rpc/rpc_conn.go index ab5f4496c..cd70a2857 100644 --- a/pkg/rpc/rpc_conn.go +++ b/pkg/rpc/rpc_conn.go @@ -18,8 +18,7 @@ type Conn struct { } func NewConn(ctx context.Context, address string, opts ...grpc.DialOption) (*Conn, error) { - dialOpts := append(defaultDialOption, opts...) - conn, err := grpc.DialContext(ctx, address, dialOpts...) + conn, err := grpc.DialContext(ctx, address, append(defaultDialOption, opts...)...) if err != nil { return nil, errors.Wrapf(err, "rpc: %s", address) } diff --git a/pkg/snc/snc_mock.go b/pkg/snc/snc_mock.go index 7168e4101..5bef1f1ff 100644 --- a/pkg/snc/snc_mock.go +++ b/pkg/snc/snc_mock.go @@ -38,18 +38,18 @@ func (m *MockStorageNodeManagementClient) EXPECT() *MockStorageNodeManagementCli return m.recorder } -// AddLogStream mocks base method. -func (m *MockStorageNodeManagementClient) AddLogStream(arg0 context.Context, arg1 types.LogStreamID, arg2 string) error { +// AddLogStreamReplica mocks base method. +func (m *MockStorageNodeManagementClient) AddLogStreamReplica(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID, arg3 string) error { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "AddLogStream", arg0, arg1, arg2) + ret := m.ctrl.Call(m, "AddLogStreamReplica", arg0, arg1, arg2, arg3) ret0, _ := ret[0].(error) return ret0 } -// AddLogStream indicates an expected call of AddLogStream. -func (mr *MockStorageNodeManagementClientMockRecorder) AddLogStream(arg0, arg1, arg2 interface{}) *gomock.Call { +// AddLogStreamReplica indicates an expected call of AddLogStreamReplica. +func (mr *MockStorageNodeManagementClientMockRecorder) AddLogStreamReplica(arg0, arg1, arg2, arg3 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AddLogStream", reflect.TypeOf((*MockStorageNodeManagementClient)(nil).AddLogStream), arg0, arg1, arg2) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AddLogStreamReplica", reflect.TypeOf((*MockStorageNodeManagementClient)(nil).AddLogStreamReplica), arg0, arg1, arg2, arg3) } // Close mocks base method. @@ -82,7 +82,7 @@ func (mr *MockStorageNodeManagementClientMockRecorder) GetMetadata(arg0 interfac } // GetPrevCommitInfo mocks base method. -func (m *MockStorageNodeManagementClient) GetPrevCommitInfo(arg0 context.Context, arg1 types.GLSN) (*snpb.GetPrevCommitInfoResponse, error) { +func (m *MockStorageNodeManagementClient) GetPrevCommitInfo(arg0 context.Context, arg1 types.Version) (*snpb.GetPrevCommitInfoResponse, error) { m.ctrl.T.Helper() ret := m.ctrl.Call(m, "GetPrevCommitInfo", arg0, arg1) ret0, _ := ret[0].(*snpb.GetPrevCommitInfoResponse) @@ -125,23 +125,23 @@ func (mr *MockStorageNodeManagementClientMockRecorder) PeerStorageNodeID() *gomo } // RemoveLogStream mocks base method. -func (m *MockStorageNodeManagementClient) RemoveLogStream(arg0 context.Context, arg1 types.LogStreamID) error { +func (m *MockStorageNodeManagementClient) RemoveLogStream(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID) error { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "RemoveLogStream", arg0, arg1) + ret := m.ctrl.Call(m, "RemoveLogStream", arg0, arg1, arg2) ret0, _ := ret[0].(error) return ret0 } // RemoveLogStream indicates an expected call of RemoveLogStream. -func (mr *MockStorageNodeManagementClientMockRecorder) RemoveLogStream(arg0, arg1 interface{}) *gomock.Call { +func (mr *MockStorageNodeManagementClientMockRecorder) RemoveLogStream(arg0, arg1, arg2 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RemoveLogStream", reflect.TypeOf((*MockStorageNodeManagementClient)(nil).RemoveLogStream), arg0, arg1) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RemoveLogStream", reflect.TypeOf((*MockStorageNodeManagementClient)(nil).RemoveLogStream), arg0, arg1, arg2) } // Seal mocks base method. -func (m *MockStorageNodeManagementClient) Seal(arg0 context.Context, arg1 types.LogStreamID, arg2 types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { +func (m *MockStorageNodeManagementClient) Seal(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID, arg3 types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Seal", arg0, arg1, arg2) + ret := m.ctrl.Call(m, "Seal", arg0, arg1, arg2, arg3) ret0, _ := ret[0].(varlogpb.LogStreamStatus) ret1, _ := ret[1].(types.GLSN) ret2, _ := ret[2].(error) @@ -149,36 +149,36 @@ func (m *MockStorageNodeManagementClient) Seal(arg0 context.Context, arg1 types. } // Seal indicates an expected call of Seal. -func (mr *MockStorageNodeManagementClientMockRecorder) Seal(arg0, arg1, arg2 interface{}) *gomock.Call { +func (mr *MockStorageNodeManagementClientMockRecorder) Seal(arg0, arg1, arg2, arg3 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Seal", reflect.TypeOf((*MockStorageNodeManagementClient)(nil).Seal), arg0, arg1, arg2) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Seal", reflect.TypeOf((*MockStorageNodeManagementClient)(nil).Seal), arg0, arg1, arg2, arg3) } // Sync mocks base method. -func (m *MockStorageNodeManagementClient) Sync(arg0 context.Context, arg1 types.LogStreamID, arg2 types.StorageNodeID, arg3 string, arg4 types.GLSN) (*snpb.SyncStatus, error) { +func (m *MockStorageNodeManagementClient) Sync(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID, arg3 types.StorageNodeID, arg4 string, arg5 types.GLSN) (*snpb.SyncStatus, error) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Sync", arg0, arg1, arg2, arg3, arg4) + ret := m.ctrl.Call(m, "Sync", arg0, arg1, arg2, arg3, arg4, arg5) ret0, _ := ret[0].(*snpb.SyncStatus) ret1, _ := ret[1].(error) return ret0, ret1 } // Sync indicates an expected call of Sync. -func (mr *MockStorageNodeManagementClientMockRecorder) Sync(arg0, arg1, arg2, arg3, arg4 interface{}) *gomock.Call { +func (mr *MockStorageNodeManagementClientMockRecorder) Sync(arg0, arg1, arg2, arg3, arg4, arg5 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Sync", reflect.TypeOf((*MockStorageNodeManagementClient)(nil).Sync), arg0, arg1, arg2, arg3, arg4) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Sync", reflect.TypeOf((*MockStorageNodeManagementClient)(nil).Sync), arg0, arg1, arg2, arg3, arg4, arg5) } // Unseal mocks base method. -func (m *MockStorageNodeManagementClient) Unseal(arg0 context.Context, arg1 types.LogStreamID, arg2 []snpb.Replica) error { +func (m *MockStorageNodeManagementClient) Unseal(arg0 context.Context, arg1 types.TopicID, arg2 types.LogStreamID, arg3 []varlogpb.Replica) error { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Unseal", arg0, arg1, arg2) + ret := m.ctrl.Call(m, "Unseal", arg0, arg1, arg2, arg3) ret0, _ := ret[0].(error) return ret0 } // Unseal indicates an expected call of Unseal. -func (mr *MockStorageNodeManagementClientMockRecorder) Unseal(arg0, arg1, arg2 interface{}) *gomock.Call { +func (mr *MockStorageNodeManagementClientMockRecorder) Unseal(arg0, arg1, arg2, arg3 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Unseal", reflect.TypeOf((*MockStorageNodeManagementClient)(nil).Unseal), arg0, arg1, arg2) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Unseal", reflect.TypeOf((*MockStorageNodeManagementClient)(nil).Unseal), arg0, arg1, arg2, arg3) } diff --git a/pkg/snc/storage_node_management_client.go b/pkg/snc/storage_node_management_client.go index 21fba8160..ca043a1c2 100644 --- a/pkg/snc/storage_node_management_client.go +++ b/pkg/snc/storage_node_management_client.go @@ -21,12 +21,12 @@ type StorageNodeManagementClient interface { PeerAddress() string PeerStorageNodeID() types.StorageNodeID GetMetadata(ctx context.Context) (*varlogpb.StorageNodeMetadataDescriptor, error) - AddLogStream(ctx context.Context, logStreamID types.LogStreamID, path string) error - RemoveLogStream(ctx context.Context, logStreamID types.LogStreamID) error - Seal(ctx context.Context, logStreamID types.LogStreamID, lastCommittedGLSN types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) - Unseal(ctx context.Context, logStreamID types.LogStreamID, replicas []snpb.Replica) error - Sync(ctx context.Context, logStreamID types.LogStreamID, backupStorageNodeID types.StorageNodeID, backupAddress string, lastGLSN types.GLSN) (*snpb.SyncStatus, error) - GetPrevCommitInfo(ctx context.Context, prevHWM types.GLSN) (*snpb.GetPrevCommitInfoResponse, error) + AddLogStreamReplica(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, path string) error + RemoveLogStream(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) error + Seal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, lastCommittedGLSN types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) + Unseal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, replicas []varlogpb.Replica) error + Sync(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, backupStorageNodeID types.StorageNodeID, backupAddress string, lastGLSN types.GLSN) (*snpb.SyncStatus, error) + GetPrevCommitInfo(ctx context.Context, ver types.Version) (*snpb.GetPrevCommitInfoResponse, error) Close() error } @@ -86,14 +86,15 @@ func (c *snManagementClient) GetMetadata(ctx context.Context) (*varlogpb.Storage return rsp.GetStorageNodeMetadata(), errors.Wrap(verrors.FromStatusError(err), "snmcl") } -func (c *snManagementClient) AddLogStream(ctx context.Context, lsid types.LogStreamID, path string) error { +func (c *snManagementClient) AddLogStreamReplica(ctx context.Context, tpid types.TopicID, lsid types.LogStreamID, path string) error { if stringsutil.Empty(path) { return errors.New("snmcl: invalid argument") } // FIXME(jun): Does the return value of AddLogStream need? - _, err := c.rpcClient.AddLogStream(ctx, &snpb.AddLogStreamRequest{ + _, err := c.rpcClient.AddLogStreamReplica(ctx, &snpb.AddLogStreamReplicaRequest{ ClusterID: c.clusterID, StorageNodeID: c.storageNodeID, + TopicID: tpid, LogStreamID: lsid, Storage: &varlogpb.StorageDescriptor{ Path: path, @@ -102,40 +103,44 @@ func (c *snManagementClient) AddLogStream(ctx context.Context, lsid types.LogStr return errors.Wrap(verrors.FromStatusError(err), "snmcl") } -func (c *snManagementClient) RemoveLogStream(ctx context.Context, lsid types.LogStreamID) error { +func (c *snManagementClient) RemoveLogStream(ctx context.Context, tpid types.TopicID, lsid types.LogStreamID) error { _, err := c.rpcClient.RemoveLogStream(ctx, &snpb.RemoveLogStreamRequest{ ClusterID: c.clusterID, StorageNodeID: c.storageNodeID, + TopicID: tpid, LogStreamID: lsid, }) return errors.Wrap(verrors.FromStatusError(err), "snmcl") } -func (c *snManagementClient) Seal(ctx context.Context, lsid types.LogStreamID, lastCommittedGLSN types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { +func (c *snManagementClient) Seal(ctx context.Context, tpid types.TopicID, lsid types.LogStreamID, lastCommittedGLSN types.GLSN) (varlogpb.LogStreamStatus, types.GLSN, error) { rsp, err := c.rpcClient.Seal(ctx, &snpb.SealRequest{ ClusterID: c.clusterID, StorageNodeID: c.storageNodeID, + TopicID: tpid, LogStreamID: lsid, LastCommittedGLSN: lastCommittedGLSN, }) return rsp.GetStatus(), rsp.GetLastCommittedGLSN(), errors.Wrap(verrors.FromStatusError(err), "snmcl") } -func (c *snManagementClient) Unseal(ctx context.Context, lsid types.LogStreamID, replicas []snpb.Replica) error { +func (c *snManagementClient) Unseal(ctx context.Context, tpid types.TopicID, lsid types.LogStreamID, replicas []varlogpb.Replica) error { // TODO(jun): Check ranges CID, SNID and LSID _, err := c.rpcClient.Unseal(ctx, &snpb.UnsealRequest{ ClusterID: c.clusterID, StorageNodeID: c.storageNodeID, + TopicID: tpid, LogStreamID: lsid, Replicas: replicas, }) return errors.Wrap(verrors.FromStatusError(err), "snmcl") } -func (c *snManagementClient) Sync(ctx context.Context, logStreamID types.LogStreamID, backupStorageNodeID types.StorageNodeID, backupAddress string, lastGLSN types.GLSN) (*snpb.SyncStatus, error) { +func (c *snManagementClient) Sync(ctx context.Context, tpid types.TopicID, logStreamID types.LogStreamID, backupStorageNodeID types.StorageNodeID, backupAddress string, lastGLSN types.GLSN) (*snpb.SyncStatus, error) { rsp, err := c.rpcClient.Sync(ctx, &snpb.SyncRequest{ ClusterID: c.clusterID, StorageNodeID: c.storageNodeID, + TopicID: tpid, LogStreamID: logStreamID, Backup: &snpb.SyncRequest_BackupNode{ StorageNodeID: backupStorageNodeID, @@ -145,9 +150,9 @@ func (c *snManagementClient) Sync(ctx context.Context, logStreamID types.LogStre return rsp.GetStatus(), errors.Wrap(verrors.FromStatusError(err), "snmcl") } -func (c *snManagementClient) GetPrevCommitInfo(ctx context.Context, prevHWM types.GLSN) (*snpb.GetPrevCommitInfoResponse, error) { +func (c *snManagementClient) GetPrevCommitInfo(ctx context.Context, prevVer types.Version) (*snpb.GetPrevCommitInfoResponse, error) { rsp, err := c.rpcClient.GetPrevCommitInfo(ctx, &snpb.GetPrevCommitInfoRequest{ - PrevHighWatermark: prevHWM, + PrevVersion: prevVer, }) return rsp, errors.WithStack(verrors.FromStatusError(err)) } diff --git a/pkg/snc/storage_node_management_client_test.go b/pkg/snc/storage_node_management_client_test.go index ca00fb466..728a3f215 100644 --- a/pkg/snc/storage_node_management_client_test.go +++ b/pkg/snc/storage_node_management_client_test.go @@ -12,6 +12,7 @@ import ( "github.com/kakao/varlog/pkg/verrors" "github.com/kakao/varlog/proto/snpb" "github.com/kakao/varlog/proto/snpb/mock" + varlogpb "github.com/kakao/varlog/proto/varlogpb" ) func TestManagementClientGetMetadata(t *testing.T) { @@ -56,23 +57,23 @@ func TestManagementClientAddLogStream(t *testing.T) { Convey("When the length of passed path is zero", func() { Convey("Then the ManagementClient should return an error", func() { - err := mc.AddLogStream(context.TODO(), types.LogStreamID(1), "") + err := mc.AddLogStreamReplica(context.TODO(), types.TopicID(1), types.LogStreamID(1), "") So(err, ShouldNotBeNil) }) }) Convey("When the ManagementService returns an error", func() { - mockClient.EXPECT().AddLogStream(gomock.Any(), gomock.Any()).Return(nil, verrors.ErrInternal) + mockClient.EXPECT().AddLogStreamReplica(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil, verrors.ErrInternal) Convey("Then the ManagementClient should return the error", func() { - err := mc.AddLogStream(context.TODO(), types.LogStreamID(1), "/tmp") + err := mc.AddLogStreamReplica(context.TODO(), types.TopicID(1), types.LogStreamID(1), "/tmp") So(err, ShouldNotBeNil) }) }) Convey("When the ManagementService succeeds to add the LogStream", func() { - mockClient.EXPECT().AddLogStream(gomock.Any(), gomock.Any()).Return(&snpb.AddLogStreamResponse{}, nil) + mockClient.EXPECT().AddLogStreamReplica(gomock.Any(), gomock.Any(), gomock.Any()).Return(&snpb.AddLogStreamReplicaResponse{}, nil) Convey("Then the ManagementClient should return the path of the LogStream", func() { - err := mc.AddLogStream(context.TODO(), types.LogStreamID(1), "/tmp") + err := mc.AddLogStreamReplica(context.TODO(), types.TopicID(1), types.LogStreamID(1), "/tmp") So(err, ShouldBeNil) // TODO(jun) // Check returned path @@ -97,7 +98,7 @@ func TestManagementClientRemoveLogStream(t *testing.T) { Convey("When the ManagementService returns an error", func() { mockClient.EXPECT().RemoveLogStream(gomock.Any(), gomock.Any()).Return(nil, verrors.ErrInternal) Convey("Then the ManagementClient should return the error", func() { - err := mc.RemoveLogStream(context.TODO(), types.LogStreamID(1)) + err := mc.RemoveLogStream(context.TODO(), types.TopicID(1), types.LogStreamID(1)) So(err, ShouldNotBeNil) }) }) @@ -105,7 +106,7 @@ func TestManagementClientRemoveLogStream(t *testing.T) { Convey("When the ManagementService succeeds to remove the LogStream", func() { mockClient.EXPECT().RemoveLogStream(gomock.Any(), gomock.Any()).Return(&pbtypes.Empty{}, nil) Convey("Then the ManagementClient should not return an error", func() { - err := mc.RemoveLogStream(context.TODO(), types.LogStreamID(1)) + err := mc.RemoveLogStream(context.TODO(), types.TopicID(1), types.LogStreamID(1)) So(err, ShouldBeNil) }) }) @@ -123,7 +124,7 @@ func TestManagementClientSeal(t *testing.T) { Convey("When the ManagementService returns an error", func() { mockClient.EXPECT().Seal(gomock.Any(), gomock.Any()).Return(nil, verrors.ErrInternal) Convey("Then the ManagementClient should return the error", func() { - _, _, err := mc.Seal(context.TODO(), types.LogStreamID(1), types.GLSN(1)) + _, _, err := mc.Seal(context.TODO(), types.TopicID(1), types.LogStreamID(1), types.GLSN(1)) So(err, ShouldNotBeNil) }) }) @@ -131,7 +132,7 @@ func TestManagementClientSeal(t *testing.T) { Convey("When the ManagementService succeeds to seal the LogStream", func() { mockClient.EXPECT().Seal(gomock.Any(), gomock.Any()).Return(&snpb.SealResponse{}, nil) Convey("Then the ManagementClient should not return an error", func() { - _, _, err := mc.Seal(context.TODO(), types.LogStreamID(1), types.GLSN(1)) + _, _, err := mc.Seal(context.TODO(), types.TopicID(1), types.LogStreamID(1), types.GLSN(1)) So(err, ShouldBeNil) }) }) @@ -149,10 +150,12 @@ func TestManagementClientUnseal(t *testing.T) { Convey("When the ManagementService returns an error", func() { mockClient.EXPECT().Unseal(gomock.Any(), gomock.Any()).Return(nil, verrors.ErrInternal) Convey("Then the ManagementClient should return the error", func() { - err := mc.Unseal(context.TODO(), types.LogStreamID(1), []snpb.Replica{ + err := mc.Unseal(context.TODO(), types.TopicID(1), types.LogStreamID(1), []varlogpb.Replica{ { - StorageNodeID: 1, - LogStreamID: 1, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, }, }) So(err, ShouldNotBeNil) @@ -162,10 +165,12 @@ func TestManagementClientUnseal(t *testing.T) { Convey("When the ManagementService succeeds to unseal the LogStream", func() { mockClient.EXPECT().Unseal(gomock.Any(), gomock.Any()).Return(&pbtypes.Empty{}, nil) Convey("Then the ManagementClient should not return an error", func() { - err := mc.Unseal(context.TODO(), types.LogStreamID(1), []snpb.Replica{ + err := mc.Unseal(context.TODO(), types.TopicID(1), types.LogStreamID(1), []varlogpb.Replica{ { - StorageNodeID: 1, - LogStreamID: 1, + StorageNode: varlogpb.StorageNode{ + StorageNodeID: 1, + }, + LogStreamID: 1, }, }) So(err, ShouldBeNil) diff --git a/pkg/types/log_entry.go b/pkg/types/log_entry.go deleted file mode 100644 index 12ee7076a..000000000 --- a/pkg/types/log_entry.go +++ /dev/null @@ -1,17 +0,0 @@ -package types - -type LogEntry struct { - GLSN GLSN - LLSN LLSN - Data []byte -} - -var InvalidLogEntry = LogEntry{ - GLSN: InvalidGLSN, - LLSN: InvalidLLSN, - Data: nil, -} - -func (le LogEntry) Invalid() bool { - return le.GLSN.Invalid() && le.LLSN.Invalid() && len(le.Data) == 0 -} diff --git a/pkg/types/types.go b/pkg/types/types.go index 17e21ed43..8eb858bc5 100644 --- a/pkg/types/types.go +++ b/pkg/types/types.go @@ -3,7 +3,6 @@ package types import ( "encoding/binary" "fmt" - "hash/fnv" "math" "math/rand" "net" @@ -33,46 +32,83 @@ func (cid ClusterID) String() string { return strconv.FormatUint(uint64(cid), 10) } -type StorageNodeID uint32 +type StorageNodeID int32 var _ fmt.Stringer = (*StorageNodeID)(nil) -func NewStorageNodeIDFromUint(u uint) (StorageNodeID, error) { - if u > math.MaxUint32 { - return 0, fmt.Errorf("storage node id overflow %v", u) - } - return StorageNodeID(u), nil -} - -func NewStorageNodeID() StorageNodeID { +func RandomStorageNodeID() StorageNodeID { r := rand.New(rand.NewSource(time.Now().UnixNano())) - buf := make([]byte, 4) - r.Read(buf) // (*Rand).Read always returns a nil error. - h := fnv.New32a() // (*Hash).Write always returns a nil error. - h.Write(buf) - return StorageNodeID(h.Sum32()) + return StorageNodeID(r.Int31()) } func ParseStorageNodeID(s string) (StorageNodeID, error) { - id, err := strconv.ParseUint(s, 10, 32) + id, err := strconv.ParseInt(s, 10, 32) return StorageNodeID(id), err } func (snid StorageNodeID) String() string { - return strconv.FormatUint(uint64(snid), 10) + return strconv.FormatInt(int64(snid), 10) } -type LogStreamID uint32 +type LogStreamID int32 + +const MaxLogStreamID = math.MaxInt32 var _ fmt.Stringer = (*LogStreamID)(nil) func ParseLogStreamID(s string) (LogStreamID, error) { - id, err := strconv.ParseUint(s, 10, 32) + id, err := strconv.ParseInt(s, 10, 32) return LogStreamID(id), err } func (lsid LogStreamID) String() string { - return strconv.FormatUint(uint64(lsid), 10) + return strconv.FormatInt(int64(lsid), 10) +} + +type TopicID int32 + +var _ fmt.Stringer = (*TopicID)(nil) + +func ParseTopicID(s string) (TopicID, error) { + id, err := strconv.ParseInt(s, 10, 32) + return TopicID(id), err +} + +func (tpid TopicID) String() string { + return strconv.FormatInt(int64(tpid), 10) +} + +type Version uint64 + +const ( + InvalidVersion = Version(0) + MinVersion = Version(1) + MaxVersion = Version(math.MaxUint64) +) + +var VersionLen = binary.Size(InvalidVersion) + +func (ver Version) Invalid() bool { + return ver == InvalidVersion +} + +type AtomicVersion uint64 + +func (ver *AtomicVersion) Add(delta uint64) Version { + return Version(atomic.AddUint64((*uint64)(ver), delta)) +} + +func (ver *AtomicVersion) Load() Version { + return Version(atomic.LoadUint64((*uint64)(ver))) +} + +func (ver *AtomicVersion) Store(val Version) { + atomic.StoreUint64((*uint64)(ver), uint64(val)) +} + +func (ver *AtomicVersion) CompareAndSwap(old, new Version) (swapped bool) { + swapped = atomic.CompareAndSwapUint64((*uint64)(ver), uint64(old), uint64(new)) + return swapped } type GLSN uint64 diff --git a/pkg/types/types_test.go b/pkg/types/types_test.go index b8cee2456..03c01c2ff 100644 --- a/pkg/types/types_test.go +++ b/pkg/types/types_test.go @@ -29,25 +29,12 @@ func TestClusterID(t *testing.T) { func TestStorageNodeID(t *testing.T) { Convey("StorageNodeID", t, func() { - Convey("Too large number", func() { // 64bit processor - var number uint = math.MaxUint32 + 1 - _, err := NewStorageNodeIDFromUint(number) - So(err, ShouldNotBeNil) - }) - - Convey("Valid number", func() { - for i := 0; i < 10000; i++ { - _, err := NewStorageNodeIDFromUint(uint(rand.Uint32())) - So(err, ShouldBeNil) - } - }) - Convey("Random generator (non-deterministic test)", func() { idset := make(map[StorageNodeID]bool) for i := 0; i < 10000; i++ { var id StorageNodeID testutil.CompareWait1(func() bool { - id = NewStorageNodeID() + id = RandomStorageNodeID() return !idset[id] }) So(idset[id], ShouldBeFalse) diff --git a/pkg/util/netutil/netutil.go b/pkg/util/netutil/netutil.go index f0ee2db85..4bb831cbe 100644 --- a/pkg/util/netutil/netutil.go +++ b/pkg/util/netutil/netutil.go @@ -18,7 +18,6 @@ import ( var ( errNotSupportedNetwork = errors.New("not supported network") - errNotLocalAdddress = errors.New("not local address") errNotGlobalUnicastAddress = errors.New("not global unicast address") ) diff --git a/pkg/util/telemetry/telemetry.go b/pkg/util/telemetry/telemetry.go index c9a31a024..df7972322 100644 --- a/pkg/util/telemetry/telemetry.go +++ b/pkg/util/telemetry/telemetry.go @@ -162,6 +162,6 @@ func NewNopTelemetry() *nopTelemetry { return &nopTelemetry{} } -func (_ nopTelemetry) Close(_ context.Context) error { +func (nopTelemetry) Close(context.Context) error { return nil } diff --git a/pkg/varlog/allowlist.go b/pkg/varlog/allowlist.go index 9e8606908..f6a0095d0 100644 --- a/pkg/varlog/allowlist.go +++ b/pkg/varlog/allowlist.go @@ -5,7 +5,6 @@ import ( "io" "math/rand" "sync" - "sync/atomic" "time" "go.uber.org/zap" @@ -19,9 +18,9 @@ import ( // Allowlist represents selectable log streams. type Allowlist interface { - Pick() (types.LogStreamID, bool) - Deny(logStreamID types.LogStreamID) - Contains(logStreamID types.LogStreamID) bool + Pick(topicID types.TopicID) (types.LogStreamID, bool) + Deny(topicID types.TopicID, logStreamID types.LogStreamID) + Contains(topicID types.TopicID, logStreamID types.LogStreamID) bool } // RenewableAllowlist expands Allowlist and it provides Renew method to update allowlist. @@ -31,8 +30,6 @@ type RenewableAllowlist interface { io.Closer } -const initialCacheSize = 32 - type allowlistItem struct { denied bool ts time.Time @@ -41,8 +38,8 @@ type allowlistItem struct { // transientAllowlist provides allowlist and denylist of log streams. It can provide stale // information. type transientAllowlist struct { - allowlist sync.Map // map[types.LogStreamID]allowlistItem - cache atomic.Value // []types.LogStreamID + allowlist sync.Map // map[types.TopicID]map[types.LogStreamID]allowlistItem + cache sync.Map // map[types.TopicID][]types.LogStreamID group singleflight.Group denyTTL time.Duration expireInterval time.Duration @@ -65,7 +62,7 @@ func newTransientAllowlist(denyTTL time.Duration, expireInterval time.Duration, runner: runner.New("denylist", logger), logger: logger, } - adl.cache.Store(make([]types.LogStreamID, 0, initialCacheSize)) + cancel, err := adl.runner.Run(adl.expireDenyTTL) if err != nil { adl.runner.Stop() @@ -92,18 +89,23 @@ func (adl *transientAllowlist) expireDenyTTL(ctx context.Context) { case <-tick.C: changed := false adl.allowlist.Range(func(k, v interface{}) bool { - logStreamID := k.(types.LogStreamID) - item := v.(allowlistItem) + lsMap := v.(*sync.Map) - if !item.denied { - return true - } + lsMap.Range(func(k, v interface{}) bool { + logStreamID := k.(types.LogStreamID) + item := v.(allowlistItem) - if time.Since(item.ts) >= adl.denyTTL { - item.denied = false - adl.allowlist.Store(logStreamID, item) - changed = true - } + if !item.denied { + return true + } + + if time.Since(item.ts) >= adl.denyTTL { + item.denied = false + lsMap.Store(logStreamID, item) + changed = true + } + return true + }) return true }) if changed { @@ -116,23 +118,42 @@ func (adl *transientAllowlist) expireDenyTTL(ctx context.Context) { func (adl *transientAllowlist) warmup() { adl.group.Do("warmup", func() (interface{}, error) { - oldCache := adl.cache.Load().([]types.LogStreamID) - newCache := make([]types.LogStreamID, 0, cap(oldCache)) adl.allowlist.Range(func(k, v interface{}) bool { - logStreamID := k.(types.LogStreamID) - item := v.(allowlistItem) - if !item.denied { - newCache = append(newCache, logStreamID) + topicID := k.(types.TopicID) + lsMap := v.(*sync.Map) + + cacheCap := 0 + oldCache, ok := adl.cache.Load(topicID) + if ok { + cacheCap = cap(oldCache.([]types.LogStreamID)) } + newCache := make([]types.LogStreamID, 0, cacheCap) + + lsMap.Range(func(lsidIf, itemIf interface{}) bool { + logStreamID := lsidIf.(types.LogStreamID) + item := itemIf.(allowlistItem) + + if !item.denied { + newCache = append(newCache, logStreamID) + } + + return true + }) + + adl.cache.Store(topicID, newCache) return true }) - adl.cache.Store(newCache) return nil, nil }) } -func (adl *transientAllowlist) Pick() (types.LogStreamID, bool) { - cache := adl.cache.Load().([]types.LogStreamID) +func (adl *transientAllowlist) Pick(topicID types.TopicID) (types.LogStreamID, bool) { + cacheIf, ok := adl.cache.Load(topicID) + if !ok { + return 0, false + } + + cache := cacheIf.([]types.LogStreamID) cacheLen := len(cache) if cacheLen == 0 { return 0, false @@ -141,42 +162,86 @@ func (adl *transientAllowlist) Pick() (types.LogStreamID, bool) { return cache[idx], true } -func (adl *transientAllowlist) Deny(logStreamID types.LogStreamID) { +func (adl *transientAllowlist) Deny(topicID types.TopicID, logStreamID types.LogStreamID) { item := allowlistItem{denied: true, ts: time.Now()} // NB: Storing denied LogStreamID without any checking may result in saving unknown // LogStreamID. But it can be deleted by Renew. - adl.allowlist.Store(logStreamID, item) + lsMapIf, ok := adl.allowlist.Load(topicID) + if !ok { + lsMap := new(sync.Map) + lsMap.Store(logStreamID, item) + adl.allowlist.Store(topicID, lsMap) + } else { + lsMap := lsMapIf.(*sync.Map) + lsMap.Store(logStreamID, item) + } + adl.warmup() } -func (adl *transientAllowlist) Contains(logStreamID types.LogStreamID) bool { - item, ok := adl.allowlist.Load(logStreamID) +func (adl *transientAllowlist) Contains(topicID types.TopicID, logStreamID types.LogStreamID) bool { + lsMapIf, ok := adl.allowlist.Load(topicID) + if !ok { + return false + } + + lsMap := lsMapIf.(*sync.Map) + item, ok := lsMap.Load(logStreamID) return ok && !item.(allowlistItem).denied } func (adl *transientAllowlist) Renew(metadata *varlogpb.MetadataDescriptor) { lsdescs := metadata.GetLogStreams() + topicdescs := metadata.GetTopics() recentLSIDs := set.New(len(lsdescs)) - for _, lsdesc := range lsdescs { - recentLSIDs.Add(lsdesc.GetLogStreamID()) + recentTopicIDs := set.New(len(topicdescs)) + + for _, topicdesc := range topicdescs { + recentTopicIDs.Add(topicdesc.GetTopicID()) + for _, lsid := range topicdesc.GetLogStreams() { + recentLSIDs.Add(lsid) + } } changed := false adl.allowlist.Range(func(k, v interface{}) bool { - logStreamID := k.(types.LogStreamID) - item := v.(allowlistItem) - if !recentLSIDs.Contains(logStreamID) { - adl.allowlist.Delete(logStreamID) - changed = changed || !item.denied + topicID := k.(types.TopicID) + lsMap := v.(*sync.Map) + + if !recentTopicIDs.Contains(topicID) { + adl.allowlist.Delete(topicID) + changed = true + return true } + + lsMap.Range(func(k, v interface{}) bool { + logStreamID := k.(types.LogStreamID) + item := v.(allowlistItem) + if !recentLSIDs.Contains(logStreamID) { + lsMap.Delete(logStreamID) + changed = changed || !item.denied + } + return true + }) + return true }) now := time.Now() - for logStreamID := range recentLSIDs { - aitem := allowlistItem{denied: false, ts: now} - _, loaded := adl.allowlist.LoadOrStore(logStreamID, aitem) + for _, topicdesc := range topicdescs { + lsMap := new(sync.Map) + lsMapIf, loaded := adl.allowlist.LoadOrStore(topicdesc.TopicID, lsMap) changed = changed || !loaded + + if loaded { + lsMap = lsMapIf.(*sync.Map) + } + + for _, logStreamID := range topicdesc.GetLogStreams() { + aitem := allowlistItem{denied: false, ts: now} + _, loaded := lsMap.LoadOrStore(logStreamID, aitem) + changed = changed || !loaded + } } if changed { diff --git a/pkg/varlog/allowlist_test.go b/pkg/varlog/allowlist_test.go index 5e053b7c1..3f357a2bb 100644 --- a/pkg/varlog/allowlist_test.go +++ b/pkg/varlog/allowlist_test.go @@ -31,7 +31,7 @@ func TestAllowlistPick(t *testing.T) { allowlist.Renew(metadata) Convey("Then any log stream should not be picked", func() { - _, picked := allowlist.Pick() + _, picked := allowlist.Pick(types.TopicID(1)) So(picked, ShouldBeFalse) }) }) @@ -42,11 +42,20 @@ func TestAllowlistPick(t *testing.T) { {LogStreamID: types.LogStreamID(1)}, {LogStreamID: types.LogStreamID(2)}, }, + Topics: []*varlogpb.TopicDescriptor{ + { + TopicID: types.TopicID(1), + LogStreams: []types.LogStreamID{ + types.LogStreamID(1), + types.LogStreamID(2), + }, + }, + }, } allowlist.Renew(metadata) Convey("Then a log stream should be picked", func() { - _, picked := allowlist.Pick() + _, picked := allowlist.Pick(types.TopicID(1)) So(picked, ShouldBeTrue) }) }) @@ -59,6 +68,7 @@ func TestAllowlistDeny(t *testing.T) { denyTTL = 10 * time.Second expireInterval = 10 * time.Minute // not expire logStreamID = types.LogStreamID(1) + topicID = types.TopicID(1) ) allowlist, err := newTransientAllowlist(denyTTL, expireInterval, zap.L()) @@ -73,13 +83,21 @@ func TestAllowlistDeny(t *testing.T) { LogStreams: []*varlogpb.LogStreamDescriptor{ {LogStreamID: logStreamID}, }, + Topics: []*varlogpb.TopicDescriptor{ + { + TopicID: topicID, + LogStreams: []types.LogStreamID{ + logStreamID, + }, + }, + }, }) - allowlist.Deny(logStreamID) + allowlist.Deny(topicID, logStreamID) Convey("Then the log stream should not be picked", func() { - _, picked := allowlist.Pick() + _, picked := allowlist.Pick(topicID) So(picked, ShouldBeFalse) - So(allowlist.Contains(logStreamID), ShouldBeFalse) + So(allowlist.Contains(topicID, logStreamID), ShouldBeFalse) }) }) @@ -89,15 +107,24 @@ func TestAllowlistDeny(t *testing.T) { {LogStreamID: logStreamID}, {LogStreamID: logStreamID + 1}, }, + Topics: []*varlogpb.TopicDescriptor{ + { + TopicID: topicID, + LogStreams: []types.LogStreamID{ + logStreamID, + logStreamID + 1, + }, + }, + }, }) - allowlist.Deny(logStreamID + 1) - So(allowlist.Contains(logStreamID+1), ShouldBeFalse) + allowlist.Deny(topicID, logStreamID+1) + So(allowlist.Contains(topicID, logStreamID+1), ShouldBeFalse) Convey("Then the other should be picked", func() { - pickedLogStreamID, picked := allowlist.Pick() + pickedLogStreamID, picked := allowlist.Pick(topicID) So(picked, ShouldBeTrue) So(pickedLogStreamID, ShouldEqual, logStreamID) - So(allowlist.Contains(logStreamID), ShouldBeTrue) + So(allowlist.Contains(topicID, logStreamID), ShouldBeTrue) }) }) }) @@ -109,12 +136,21 @@ func TestAllowlistExpire(t *testing.T) { denyTTL = 100 * time.Millisecond expireInterval = 100 * time.Millisecond logStreamID = types.LogStreamID(1) + topicID = types.TopicID(1) ) metadata := &varlogpb.MetadataDescriptor{ LogStreams: []*varlogpb.LogStreamDescriptor{ {LogStreamID: logStreamID}, }, + Topics: []*varlogpb.TopicDescriptor{ + { + TopicID: topicID, + LogStreams: []types.LogStreamID{ + logStreamID, + }, + }, + }, } allowlist, err := newTransientAllowlist(denyTTL, expireInterval, zap.L()) @@ -125,18 +161,18 @@ func TestAllowlistExpire(t *testing.T) { }) allowlist.Renew(metadata) - _, picked := allowlist.Pick() + _, picked := allowlist.Pick(topicID) So(picked, ShouldBeTrue) - allowlist.Deny(logStreamID) - _, picked = allowlist.Pick() + allowlist.Deny(topicID, logStreamID) + _, picked = allowlist.Pick(topicID) So(picked, ShouldBeFalse) // wait for some expiration loops log.Println("after deny") time.Sleep(3 * time.Second) - _, picked = allowlist.Pick() + _, picked = allowlist.Pick(topicID) So(picked, ShouldBeTrue) }) } @@ -145,7 +181,8 @@ func BenchmarkAllowlistPick(b *testing.B) { const ( denyTTL = 1 * time.Second expireInterval = 1 * time.Second - numLogStreams = 1000 + numLogStreams = 100 + numTopics = 100 ) allowlist, err := newTransientAllowlist(denyTTL, expireInterval, zap.L()) @@ -159,16 +196,25 @@ func BenchmarkAllowlistPick(b *testing.B) { }() metadata := &varlogpb.MetadataDescriptor{} - for i := 0; i < numLogStreams; i++ { - metadata.LogStreams = append(metadata.LogStreams, - &varlogpb.LogStreamDescriptor{LogStreamID: types.LogStreamID(i + 1)}, - ) + for i := 0; i < numTopics; i++ { + topicdesc := &varlogpb.TopicDescriptor{ + TopicID: types.TopicID(i + 1), + } + + for j := 0; j < numLogStreams; j++ { + metadata.LogStreams = append(metadata.LogStreams, + &varlogpb.LogStreamDescriptor{LogStreamID: types.LogStreamID(i*numLogStreams + j + 1)}, + ) + topicdesc.LogStreams = append(topicdesc.LogStreams, types.LogStreamID(i*numLogStreams+j+1)) + } + metadata.Topics = append(metadata.Topics, topicdesc) } + allowlist.Renew(metadata) b.ResetTimer() for i := 0; i < b.N; i++ { - if _, picked := allowlist.Pick(); !picked { + if _, picked := allowlist.Pick(types.TopicID(i%numTopics + 1)); !picked { b.Fatal("pick error") } } diff --git a/pkg/varlog/cluster_manager_client.go b/pkg/varlog/cluster_manager_client.go index 6d0e9f715..01212ca01 100644 --- a/pkg/varlog/cluster_manager_client.go +++ b/pkg/varlog/cluster_manager_client.go @@ -15,13 +15,15 @@ import ( type ClusterManagerClient interface { AddStorageNode(ctx context.Context, addr string) (*vmspb.AddStorageNodeResponse, error) UnregisterStorageNode(ctx context.Context, storageNodeID types.StorageNodeID) (*vmspb.UnregisterStorageNodeResponse, error) - AddLogStream(ctx context.Context, logStreamReplicas []*varlogpb.ReplicaDescriptor) (*vmspb.AddLogStreamResponse, error) - UnregisterLogStream(ctx context.Context, logStreamID types.LogStreamID) (*vmspb.UnregisterLogStreamResponse, error) - RemoveLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, logStreamID types.LogStreamID) (*vmspb.RemoveLogStreamReplicaResponse, error) - UpdateLogStream(ctx context.Context, logStreamID types.LogStreamID, poppedReplica *varlogpb.ReplicaDescriptor, pushedReplica *varlogpb.ReplicaDescriptor) (*vmspb.UpdateLogStreamResponse, error) - Seal(ctx context.Context, logStreamID types.LogStreamID) (*vmspb.SealResponse, error) - Unseal(ctx context.Context, logStreamID types.LogStreamID) (*vmspb.UnsealResponse, error) - Sync(ctx context.Context, logStreamID types.LogStreamID, srcStorageNodeId, dstStorageNodeId types.StorageNodeID) (*vmspb.SyncResponse, error) + AddTopic(ctx context.Context) (*vmspb.AddTopicResponse, error) + UnregisterTopic(ctx context.Context, topicID types.TopicID) (*vmspb.UnregisterTopicResponse, error) + AddLogStream(ctx context.Context, topicID types.TopicID, logStreamReplicas []*varlogpb.ReplicaDescriptor) (*vmspb.AddLogStreamResponse, error) + UnregisterLogStream(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) (*vmspb.UnregisterLogStreamResponse, error) + RemoveLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, topicID types.TopicID, logStreamID types.LogStreamID) (*vmspb.RemoveLogStreamReplicaResponse, error) + UpdateLogStream(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, poppedReplica *varlogpb.ReplicaDescriptor, pushedReplica *varlogpb.ReplicaDescriptor) (*vmspb.UpdateLogStreamResponse, error) + Seal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) (*vmspb.SealResponse, error) + Unseal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) (*vmspb.UnsealResponse, error) + Sync(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, srcStorageNodeID, dstStorageNodeID types.StorageNodeID) (*vmspb.SyncResponse, error) GetMRMembers(ctx context.Context) (*vmspb.GetMRMembersResponse, error) AddMRPeer(ctx context.Context, raftURL, rpcAddr string) (*vmspb.AddMRPeerResponse, error) RemoveMRPeer(ctx context.Context, raftURL string) (*vmspb.RemoveMRPeerResponse, error) @@ -62,26 +64,44 @@ func (c *clusterManagerClient) UnregisterStorageNode(ctx context.Context, storag return rsp, verrors.FromStatusError(err) } -func (c *clusterManagerClient) AddLogStream(ctx context.Context, logStreamReplicas []*varlogpb.ReplicaDescriptor) (*vmspb.AddLogStreamResponse, error) { - rsp, err := c.rpcClient.AddLogStream(ctx, &vmspb.AddLogStreamRequest{Replicas: logStreamReplicas}) +func (c *clusterManagerClient) AddTopic(ctx context.Context) (*vmspb.AddTopicResponse, error) { + rsp, err := c.rpcClient.AddTopic(ctx, &vmspb.AddTopicRequest{}) return rsp, verrors.FromStatusError(err) } -func (c *clusterManagerClient) UnregisterLogStream(ctx context.Context, logStreamID types.LogStreamID) (*vmspb.UnregisterLogStreamResponse, error) { - rsp, err := c.rpcClient.UnregisterLogStream(ctx, &vmspb.UnregisterLogStreamRequest{LogStreamID: logStreamID}) +func (c *clusterManagerClient) UnregisterTopic(ctx context.Context, topicID types.TopicID) (*vmspb.UnregisterTopicResponse, error) { + rsp, err := c.rpcClient.UnregisterTopic(ctx, &vmspb.UnregisterTopicRequest{TopicID: topicID}) return rsp, verrors.FromStatusError(err) } -func (c *clusterManagerClient) RemoveLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, logStreamID types.LogStreamID) (*vmspb.RemoveLogStreamReplicaResponse, error) { +func (c *clusterManagerClient) AddLogStream(ctx context.Context, topicID types.TopicID, logStreamReplicas []*varlogpb.ReplicaDescriptor) (*vmspb.AddLogStreamResponse, error) { + rsp, err := c.rpcClient.AddLogStream(ctx, &vmspb.AddLogStreamRequest{ + TopicID: topicID, + Replicas: logStreamReplicas, + }) + return rsp, verrors.FromStatusError(err) +} + +func (c *clusterManagerClient) UnregisterLogStream(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) (*vmspb.UnregisterLogStreamResponse, error) { + rsp, err := c.rpcClient.UnregisterLogStream(ctx, &vmspb.UnregisterLogStreamRequest{ + TopicID: topicID, + LogStreamID: logStreamID, + }) + return rsp, verrors.FromStatusError(err) +} + +func (c *clusterManagerClient) RemoveLogStreamReplica(ctx context.Context, storageNodeID types.StorageNodeID, topicID types.TopicID, logStreamID types.LogStreamID) (*vmspb.RemoveLogStreamReplicaResponse, error) { rsp, err := c.rpcClient.RemoveLogStreamReplica(ctx, &vmspb.RemoveLogStreamReplicaRequest{ StorageNodeID: storageNodeID, + TopicID: topicID, LogStreamID: logStreamID, }) return rsp, verrors.FromStatusError(err) } -func (c *clusterManagerClient) UpdateLogStream(ctx context.Context, logStreamID types.LogStreamID, poppedReplica, pushedReplica *varlogpb.ReplicaDescriptor) (*vmspb.UpdateLogStreamResponse, error) { +func (c *clusterManagerClient) UpdateLogStream(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, poppedReplica, pushedReplica *varlogpb.ReplicaDescriptor) (*vmspb.UpdateLogStreamResponse, error) { rsp, err := c.rpcClient.UpdateLogStream(ctx, &vmspb.UpdateLogStreamRequest{ + TopicID: topicID, LogStreamID: logStreamID, PoppedReplica: poppedReplica, PushedReplica: pushedReplica, @@ -89,21 +109,28 @@ func (c *clusterManagerClient) UpdateLogStream(ctx context.Context, logStreamID return rsp, verrors.FromStatusError(err) } -func (c *clusterManagerClient) Seal(ctx context.Context, logStreamID types.LogStreamID) (*vmspb.SealResponse, error) { - rsp, err := c.rpcClient.Seal(ctx, &vmspb.SealRequest{LogStreamID: logStreamID}) +func (c *clusterManagerClient) Seal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) (*vmspb.SealResponse, error) { + rsp, err := c.rpcClient.Seal(ctx, &vmspb.SealRequest{ + TopicID: topicID, + LogStreamID: logStreamID, + }) return rsp, verrors.FromStatusError(err) } -func (c *clusterManagerClient) Unseal(ctx context.Context, logStreamID types.LogStreamID) (*vmspb.UnsealResponse, error) { - rsp, err := c.rpcClient.Unseal(ctx, &vmspb.UnsealRequest{LogStreamID: logStreamID}) +func (c *clusterManagerClient) Unseal(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID) (*vmspb.UnsealResponse, error) { + rsp, err := c.rpcClient.Unseal(ctx, &vmspb.UnsealRequest{ + TopicID: topicID, + LogStreamID: logStreamID, + }) return rsp, verrors.FromStatusError(err) } -func (c *clusterManagerClient) Sync(ctx context.Context, logStreamID types.LogStreamID, srcStorageNodeId, dstStorageNodeId types.StorageNodeID) (*vmspb.SyncResponse, error) { +func (c *clusterManagerClient) Sync(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, srcStorageNodeID, dstStorageNodeID types.StorageNodeID) (*vmspb.SyncResponse, error) { rsp, err := c.rpcClient.Sync(ctx, &vmspb.SyncRequest{ + TopicID: topicID, LogStreamID: logStreamID, - SrcStorageNodeID: srcStorageNodeId, - DstStorageNodeID: dstStorageNodeId, + SrcStorageNodeID: srcStorageNodeID, + DstStorageNodeID: dstStorageNodeID, }) return rsp, verrors.FromStatusError(err) } diff --git a/pkg/varlog/log_stream_selector.go b/pkg/varlog/log_stream_selector.go index eafdc5bc3..81ee52ab6 100644 --- a/pkg/varlog/log_stream_selector.go +++ b/pkg/varlog/log_stream_selector.go @@ -14,7 +14,7 @@ var ( // // Select selects a log stream, but if there is no log stream to choose it returns false. type LogStreamSelector interface { - Select() (types.LogStreamID, bool) + Select(topicID types.TopicID) (types.LogStreamID, bool) } // alsSelector implements LogStreamSelector. It uses allowlist to select an appendable log stream. @@ -29,6 +29,6 @@ func newAppendableLogStreamSelector(allowlist Allowlist) *alsSelector { } // Select implements (LogStreamSelector).Select method. -func (als *alsSelector) Select() (types.LogStreamID, bool) { - return als.allowlist.Pick() +func (als *alsSelector) Select(topicID types.TopicID) (types.LogStreamID, bool) { + return als.allowlist.Pick(topicID) } diff --git a/pkg/varlog/metadata_refresher.go b/pkg/varlog/metadata_refresher.go index 2600bbcd7..b20a31880 100644 --- a/pkg/varlog/metadata_refresher.go +++ b/pkg/varlog/metadata_refresher.go @@ -50,7 +50,6 @@ func newMetadataRefresher( refreshInterval, refreshTimeout time.Duration, logger *zap.Logger) (*metadataRefresher, error) { - if logger == nil { logger = zap.NewNop() } diff --git a/pkg/varlog/operations.go b/pkg/varlog/operations.go index 3a7009403..7644bac32 100644 --- a/pkg/varlog/operations.go +++ b/pkg/varlog/operations.go @@ -13,7 +13,7 @@ import ( ) // TODO: use ops-accumulator? -func (v *varlog) append(ctx context.Context, logStreamID types.LogStreamID, data []byte, opts ...AppendOption) (glsn types.GLSN, err error) { +func (v *varlog) append(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, data []byte, opts ...AppendOption) (glsn types.GLSN, err error) { appendOpts := defaultAppendOptions() for _, opt := range opts { opt.apply(&appendOpts) @@ -28,12 +28,12 @@ func (v *varlog) append(ctx context.Context, logStreamID types.LogStreamID, data var ok bool var currErr error if appendOpts.selectLogStream { - if logStreamID, ok = v.lsSelector.Select(); !ok { + if logStreamID, ok = v.lsSelector.Select(topicID); !ok { err = multierr.Append(err, errors.New("no usable log stream")) continue } } - replicas, ok = v.replicasRetriever.Retrieve(logStreamID) + replicas, ok = v.replicasRetriever.Retrieve(topicID, logStreamID) if !ok { err = multierr.Append(err, errors.New("no such log stream replicas")) continue @@ -42,15 +42,15 @@ func (v *varlog) append(ctx context.Context, logStreamID types.LogStreamID, data primaryLogCL, currErr = v.logCLManager.GetOrConnect(ctx, primarySNID, replicas[0].GetAddress()) if currErr != nil { err = multierr.Append(err, currErr) - v.allowlist.Deny(logStreamID) + v.allowlist.Deny(topicID, logStreamID) continue } - snList := make([]logc.StorageNode, len(replicas)-1) + snList := make([]varlogpb.StorageNode, len(replicas)-1) for i := range replicas[1:] { - snList[i].Addr = replicas[i+1].GetAddress() - snList[i].ID = replicas[i+1].GetStorageNodeID() + snList[i].Address = replicas[i+1].GetAddress() + snList[i].StorageNodeID = replicas[i+1].GetStorageNodeID() } - glsn, currErr = primaryLogCL.Append(ctx, logStreamID, data, snList...) + glsn, currErr = primaryLogCL.Append(ctx, topicID, logStreamID, data, snList...) if currErr != nil { replicasInfo := make([]string, 0, len(replicas)) for _, replica := range replicas { @@ -60,7 +60,7 @@ func (v *varlog) append(ctx context.Context, logStreamID types.LogStreamID, data // FIXME (jun): It affects other goroutines that are doing I/O. // Close a client only when err is related to the connection. primaryLogCL.Close() - v.allowlist.Deny(logStreamID) + v.allowlist.Deny(topicID, logStreamID) continue } return glsn, nil @@ -68,22 +68,22 @@ func (v *varlog) append(ctx context.Context, logStreamID types.LogStreamID, data return glsn, err } -func (v *varlog) read(ctx context.Context, logStreamID types.LogStreamID, glsn types.GLSN) (types.LogEntry, error) { - replicas, ok := v.replicasRetriever.Retrieve(logStreamID) +func (v *varlog) read(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, glsn types.GLSN) (varlogpb.LogEntry, error) { + replicas, ok := v.replicasRetriever.Retrieve(topicID, logStreamID) if !ok { - return types.InvalidLogEntry, errNoLogStream + return varlogpb.InvalidLogEntry(), errNoLogStream } primarySNID := replicas[0].GetStorageNodeID() primaryLogCL, err := v.logCLManager.GetOrConnect(ctx, primarySNID, replicas[0].GetAddress()) if err != nil { - return types.InvalidLogEntry, errNoLogIOClient + return varlogpb.InvalidLogEntry(), errNoLogIOClient } // FIXME (jun // 1) LogEntry -> non-nullable field // 2) deepcopy LogEntry - logEntry, err := primaryLogCL.Read(ctx, logStreamID, glsn) + logEntry, err := primaryLogCL.Read(ctx, topicID, logStreamID, glsn) if err != nil { - return types.InvalidLogEntry, err + return varlogpb.InvalidLogEntry(), err } return *logEntry, nil } diff --git a/pkg/varlog/replicas_retriever.go b/pkg/varlog/replicas_retriever.go index 1ff5dd12e..337ada631 100644 --- a/pkg/varlog/replicas_retriever.go +++ b/pkg/varlog/replicas_retriever.go @@ -4,7 +4,6 @@ package varlog import ( "errors" - "sync" "sync/atomic" "github.com/kakao/varlog/pkg/types" @@ -19,8 +18,8 @@ var ( // // Retrieve searches replicas belongs to the log stream. type ReplicasRetriever interface { - Retrieve(logStreamID types.LogStreamID) ([]varlogpb.LogStreamReplicaDescriptor, bool) - All() map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor + Retrieve(topicID types.TopicID, logStreamID types.LogStreamID) ([]varlogpb.LogStreamReplicaDescriptor, bool) + All(topicID types.TopicID) map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor } type RenewableReplicasRetriever interface { @@ -29,57 +28,72 @@ type RenewableReplicasRetriever interface { } type renewableReplicasRetriever struct { - lsreplicas atomic.Value // *sync.Map // map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor + topic atomic.Value // map[types.TopicID]map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor } -func (r *renewableReplicasRetriever) Retrieve(logStreamID types.LogStreamID) ([]varlogpb.LogStreamReplicaDescriptor, bool) { - lsReplicasMapIf := r.lsreplicas.Load() - if lsReplicasMapIf == nil { +func (r *renewableReplicasRetriever) Retrieve(topicID types.TopicID, logStreamID types.LogStreamID) ([]varlogpb.LogStreamReplicaDescriptor, bool) { + topicMapIf := r.topic.Load() + if topicMapIf == nil { return nil, false } - lsReplicasMap := lsReplicasMapIf.(*sync.Map) - if lsreplicas, ok := lsReplicasMap.Load(logStreamID); ok { - return lsreplicas.([]varlogpb.LogStreamReplicaDescriptor), true + topicMap := topicMapIf.(map[types.TopicID]map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor) + if lsReplicasMap, ok := topicMap[topicID]; ok { + if lsreplicas, ok := lsReplicasMap[logStreamID]; ok { + return lsreplicas, true + } } return nil, false } -func (r *renewableReplicasRetriever) All() map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor { - lsReplicasMapIf := r.lsreplicas.Load() - if lsReplicasMapIf == nil { +func (r *renewableReplicasRetriever) All(topicID types.TopicID) map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor { + topicMapIf := r.topic.Load() + if topicMapIf == nil { return nil } - lsReplicasMap := lsReplicasMapIf.(*sync.Map) + topicMap := topicMapIf.(map[types.TopicID]map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor) + + lsReplicasMap, ok := topicMap[topicID] + if !ok { + return nil + } + ret := make(map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor) - lsReplicasMap.Range(func(logStreamID interface{}, replicas interface{}) bool { - ret[logStreamID.(types.LogStreamID)] = replicas.([]varlogpb.LogStreamReplicaDescriptor) - return true - }) + for lsID, replicas := range lsReplicasMap { + ret[lsID] = replicas + } return ret } func (r *renewableReplicasRetriever) Renew(metadata *varlogpb.MetadataDescriptor) { - newLSReplicasMap := new(sync.Map) - storageNodes := metadata.GetStorageNodes() snMap := make(map[types.StorageNodeID]string, len(storageNodes)) for _, storageNode := range storageNodes { snMap[storageNode.GetStorageNodeID()] = storageNode.GetAddress() } - lsdescs := metadata.GetLogStreams() - for _, lsdesc := range lsdescs { - logStreamID := lsdesc.GetLogStreamID() - replicas := lsdesc.GetReplicas() - lsreplicas := make([]varlogpb.LogStreamReplicaDescriptor, len(replicas)) - for i, replica := range replicas { - storageNodeID := replica.GetStorageNodeID() - lsreplicas[i].StorageNodeID = storageNodeID - lsreplicas[i].LogStreamID = logStreamID - lsreplicas[i].Address = snMap[storageNodeID] + newTopicMap := make(map[types.TopicID]map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor) + topicdescs := metadata.GetTopics() + for _, topicdesc := range topicdescs { + topicID := topicdesc.TopicID + + newLSReplicasMap := make(map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor) + for _, lsid := range topicdesc.LogStreams { + lsdesc := metadata.GetLogStream(lsid) + + logStreamID := lsdesc.GetLogStreamID() + replicas := lsdesc.GetReplicas() + lsreplicas := make([]varlogpb.LogStreamReplicaDescriptor, len(replicas)) + for i, replica := range replicas { + storageNodeID := replica.GetStorageNodeID() + lsreplicas[i].StorageNodeID = storageNodeID + lsreplicas[i].LogStreamID = logStreamID + lsreplicas[i].Address = snMap[storageNodeID] + } + newLSReplicasMap[logStreamID] = lsreplicas } - newLSReplicasMap.Store(logStreamID, lsreplicas) + + newTopicMap[topicID] = newLSReplicasMap } - r.lsreplicas.Store(newLSReplicasMap) + r.topic.Store(newTopicMap) } diff --git a/pkg/varlog/replicas_retriever_mock.go b/pkg/varlog/replicas_retriever_mock.go index 6cb794db0..cfb6918f3 100644 --- a/pkg/varlog/replicas_retriever_mock.go +++ b/pkg/varlog/replicas_retriever_mock.go @@ -37,32 +37,32 @@ func (m *MockReplicasRetriever) EXPECT() *MockReplicasRetrieverMockRecorder { } // All mocks base method. -func (m *MockReplicasRetriever) All() map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor { +func (m *MockReplicasRetriever) All(arg0 types.TopicID) map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "All") + ret := m.ctrl.Call(m, "All", arg0) ret0, _ := ret[0].(map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor) return ret0 } // All indicates an expected call of All. -func (mr *MockReplicasRetrieverMockRecorder) All() *gomock.Call { +func (mr *MockReplicasRetrieverMockRecorder) All(arg0 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "All", reflect.TypeOf((*MockReplicasRetriever)(nil).All)) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "All", reflect.TypeOf((*MockReplicasRetriever)(nil).All), arg0) } // Retrieve mocks base method. -func (m *MockReplicasRetriever) Retrieve(arg0 types.LogStreamID) ([]varlogpb.LogStreamReplicaDescriptor, bool) { +func (m *MockReplicasRetriever) Retrieve(arg0 types.TopicID, arg1 types.LogStreamID) ([]varlogpb.LogStreamReplicaDescriptor, bool) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Retrieve", arg0) + ret := m.ctrl.Call(m, "Retrieve", arg0, arg1) ret0, _ := ret[0].([]varlogpb.LogStreamReplicaDescriptor) ret1, _ := ret[1].(bool) return ret0, ret1 } // Retrieve indicates an expected call of Retrieve. -func (mr *MockReplicasRetrieverMockRecorder) Retrieve(arg0 interface{}) *gomock.Call { +func (mr *MockReplicasRetrieverMockRecorder) Retrieve(arg0, arg1 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Retrieve", reflect.TypeOf((*MockReplicasRetriever)(nil).Retrieve), arg0) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Retrieve", reflect.TypeOf((*MockReplicasRetriever)(nil).Retrieve), arg0, arg1) } // MockRenewableReplicasRetriever is a mock of RenewableReplicasRetriever interface. @@ -89,17 +89,17 @@ func (m *MockRenewableReplicasRetriever) EXPECT() *MockRenewableReplicasRetrieve } // All mocks base method. -func (m *MockRenewableReplicasRetriever) All() map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor { +func (m *MockRenewableReplicasRetriever) All(arg0 types.TopicID) map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "All") + ret := m.ctrl.Call(m, "All", arg0) ret0, _ := ret[0].(map[types.LogStreamID][]varlogpb.LogStreamReplicaDescriptor) return ret0 } // All indicates an expected call of All. -func (mr *MockRenewableReplicasRetrieverMockRecorder) All() *gomock.Call { +func (mr *MockRenewableReplicasRetrieverMockRecorder) All(arg0 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "All", reflect.TypeOf((*MockRenewableReplicasRetriever)(nil).All)) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "All", reflect.TypeOf((*MockRenewableReplicasRetriever)(nil).All), arg0) } // Renew mocks base method. @@ -115,16 +115,16 @@ func (mr *MockRenewableReplicasRetrieverMockRecorder) Renew(arg0 interface{}) *g } // Retrieve mocks base method. -func (m *MockRenewableReplicasRetriever) Retrieve(arg0 types.LogStreamID) ([]varlogpb.LogStreamReplicaDescriptor, bool) { +func (m *MockRenewableReplicasRetriever) Retrieve(arg0 types.TopicID, arg1 types.LogStreamID) ([]varlogpb.LogStreamReplicaDescriptor, bool) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "Retrieve", arg0) + ret := m.ctrl.Call(m, "Retrieve", arg0, arg1) ret0, _ := ret[0].([]varlogpb.LogStreamReplicaDescriptor) ret1, _ := ret[1].(bool) return ret0, ret1 } // Retrieve indicates an expected call of Retrieve. -func (mr *MockRenewableReplicasRetrieverMockRecorder) Retrieve(arg0 interface{}) *gomock.Call { +func (mr *MockRenewableReplicasRetrieverMockRecorder) Retrieve(arg0, arg1 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Retrieve", reflect.TypeOf((*MockRenewableReplicasRetriever)(nil).Retrieve), arg0) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Retrieve", reflect.TypeOf((*MockRenewableReplicasRetriever)(nil).Retrieve), arg0, arg1) } diff --git a/pkg/varlog/subscribe.go b/pkg/varlog/subscribe.go index b6e6cd920..ed5d128a0 100644 --- a/pkg/varlog/subscribe.go +++ b/pkg/varlog/subscribe.go @@ -16,11 +16,12 @@ import ( "github.com/kakao/varlog/pkg/util/runner" "github.com/kakao/varlog/pkg/util/syncutil/atomicutil" "github.com/kakao/varlog/pkg/verrors" + "github.com/kakao/varlog/proto/varlogpb" ) type SubscribeCloser func() -func (v *varlog) subscribe(ctx context.Context, begin, end types.GLSN, onNext OnNext, opts ...SubscribeOption) (closer SubscribeCloser, err error) { +func (v *varlog) subscribe(ctx context.Context, topicID types.TopicID, begin, end types.GLSN, onNext OnNext, opts ...SubscribeOption) (closer SubscribeCloser, err error) { if begin >= end { return nil, verrors.ErrInvalid } @@ -41,6 +42,7 @@ func (v *varlog) subscribe(ctx context.Context, begin, end types.GLSN, onNext On tlogger := v.logger.Named("transmitter") tsm := &transmitter{ + topicID: topicID, subscribers: make(map[types.LogStreamID]*subscriber), refresher: v.refresher, replicasRetriever: v.replicasRetriever, @@ -152,6 +154,7 @@ func (tq *transmitQueue) Front() (transmitResult, bool) { } type subscriber struct { + topicID types.TopicID logStreamID types.LogStreamID storageNodeID types.StorageNodeID logCL logc.LogIOClient @@ -169,12 +172,13 @@ type subscriber struct { logger *zap.Logger } -func newSubscriber(ctx context.Context, logStreamID types.LogStreamID, storageNodeID types.StorageNodeID, logCL logc.LogIOClient, begin, end types.GLSN, transmitQ *transmitQueue, transmitCV chan struct{}, logger *zap.Logger) (*subscriber, error) { - resultC, err := logCL.Subscribe(ctx, logStreamID, begin, end) +func newSubscriber(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, storageNodeID types.StorageNodeID, logCL logc.LogIOClient, begin, end types.GLSN, transmitQ *transmitQueue, transmitCV chan struct{}, logger *zap.Logger) (*subscriber, error) { + resultC, err := logCL.Subscribe(ctx, topicID, logStreamID, begin, end) if err != nil { return nil, err } s := &subscriber{ + topicID: topicID, logStreamID: logStreamID, storageNodeID: storageNodeID, logCL: logCL, @@ -182,7 +186,7 @@ func newSubscriber(ctx context.Context, logStreamID types.LogStreamID, storageNo transmitQ: transmitQ, transmitCV: transmitCV, done: make(chan struct{}), - logger: logger.Named("subscriber").With(zap.Uint32("lsid", uint32(logStreamID))), + logger: logger.Named("subscriber").With(zap.Int32("lsid", int32(logStreamID))), } s.lastSubscribeAt.Store(time.Now()) s.closed.Store(false) @@ -244,6 +248,7 @@ func (s *subscriber) getLastSubscribeAt() time.Time { } type transmitter struct { + topicID types.TopicID subscribers map[types.LogStreamID]*subscriber refresher MetadataRefresher replicasRetriever ReplicasRetriever @@ -292,8 +297,8 @@ func (p *transmitter) transmit(ctx context.Context) { func (p *transmitter) refreshSubscriber(ctx context.Context) error { p.refresher.Refresh(ctx) - replicasMap := p.replicasRetriever.All() + replicasMap := p.replicasRetriever.All(p.topicID) for logStreamID, replicas := range replicasMap { idx := 0 if s, ok := p.subscribers[logStreamID]; ok { @@ -322,7 +327,7 @@ func (p *transmitter) refreshSubscriber(ctx context.Context) error { continue CONNECT } - s, err = newSubscriber(ctx, logStreamID, snid, logCL, p.wanted, p.end, p.transmitQ, p.transmitCV, p.logger) + s, err = newSubscriber(ctx, p.topicID, logStreamID, snid, logCL, p.wanted, p.end, p.transmitQ, p.transmitCV, p.logger) if err != nil { logCL.Close() continue CONNECT @@ -488,6 +493,6 @@ func (p *dispatcher) dispatch(_ context.Context) { sentErr = sentErr || res.Error != nil } if !sentErr { - p.onNextFunc(types.InvalidLogEntry, io.EOF) + p.onNextFunc(varlogpb.InvalidLogEntry(), io.EOF) } } diff --git a/pkg/varlog/subscribe_test.go b/pkg/varlog/subscribe_test.go index 72add5ec6..eab4d11d5 100644 --- a/pkg/varlog/subscribe_test.go +++ b/pkg/varlog/subscribe_test.go @@ -27,6 +27,7 @@ func TestSubscribe(t *testing.T) { numLogStreams = 10 numLogs = 100 minLogStreamID = types.LogStreamID(1) + topicID = types.TopicID(1) ) var ( begin = types.InvalidGLSN @@ -51,14 +52,14 @@ func TestSubscribe(t *testing.T) { }, } } - replicasRetriever.EXPECT().All().Return(replicasMap).MaxTimes(1) + replicasRetriever.EXPECT().All(topicID).Return(replicasMap).MaxTimes(1) createMockLogClientManager := func(results map[types.LogStreamID][]logc.SubscribeResult) *logc.MockLogClientManager { logCLManager := logc.NewMockLogClientManager(ctrl) logCLManager.EXPECT().GetOrConnect(gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn( func(_ context.Context, storageNodeID types.StorageNodeID, addr string) (logc.LogIOClient, error) { logCL := logc.NewMockLogIOClient(ctrl) - logCL.EXPECT().Subscribe(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn(func(_ context.Context, logStreamID types.LogStreamID, _ types.GLSN, _ types.GLSN) (<-chan logc.SubscribeResult, error) { + logCL.EXPECT().Subscribe(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn(func(_ context.Context, _ types.TopicID, logStreamID types.LogStreamID, _ types.GLSN, _ types.GLSN) (<-chan logc.SubscribeResult, error) { result := results[logStreamID] c := make(chan logc.SubscribeResult, len(result)) for _, res := range result { @@ -84,7 +85,7 @@ func TestSubscribe(t *testing.T) { end = types.GLSN(1) Convey("Then subscribe should return an error", func() { - _, err := vlg.subscribe(context.TODO(), begin, end, func(_ types.LogEntry, _ error) {}) + _, err := vlg.subscribe(context.TODO(), topicID, begin, end, func(_ varlogpb.LogEntry, _ error) {}) So(err, ShouldNotBeNil) }) }) @@ -102,14 +103,14 @@ func TestSubscribe(t *testing.T) { Convey("Then subscribe should work well", func() { var wg sync.WaitGroup wg.Add(1) - onNext := func(logEntry types.LogEntry, err error) { + onNext := func(logEntry varlogpb.LogEntry, err error) { if err == io.EOF { wg.Done() return } t.Error("no log entries are expected") } - closer, err := vlg.subscribe(context.TODO(), begin, end, onNext) + closer, err := vlg.subscribe(context.TODO(), topicID, begin, end, onNext) So(err, ShouldBeNil) wg.Wait() closer() @@ -127,7 +128,7 @@ func TestSubscribe(t *testing.T) { lastLLSN := lastLLSNs[logStreamID] lastLLSN++ results[logStreamID] = append(results[logStreamID], logc.SubscribeResult{ - LogEntry: types.LogEntry{ + LogEntry: varlogpb.LogEntry{ GLSN: glsn, LLSN: lastLLSN, Data: []byte("foo"), @@ -142,7 +143,7 @@ func TestSubscribe(t *testing.T) { var wg sync.WaitGroup wg.Add(1) expectedGLSN := begin - onNext := func(logEntry types.LogEntry, err error) { + onNext := func(logEntry varlogpb.LogEntry, err error) { if err == io.EOF { wg.Done() return @@ -157,7 +158,7 @@ func TestSubscribe(t *testing.T) { expectedGLSN++ } } - closer, err := vlg.subscribe(context.TODO(), begin, end, onNext) + closer, err := vlg.subscribe(context.TODO(), topicID, begin, end, onNext) So(err, ShouldBeNil) wg.Wait() closer() @@ -168,7 +169,7 @@ func TestSubscribe(t *testing.T) { var wg sync.WaitGroup wg.Add(1) expectedGLSN := begin - onNext := func(logEntry types.LogEntry, err error) { + onNext := func(logEntry varlogpb.LogEntry, err error) { if err == io.EOF { wg.Done() return @@ -183,7 +184,7 @@ func TestSubscribe(t *testing.T) { expectedGLSN++ } } - closer, err := vlg.subscribe(context.TODO(), begin, end+numMoreLogs, onNext) + closer, err := vlg.subscribe(context.TODO(), topicID, begin, end+numMoreLogs, onNext) So(err, ShouldBeNil) wg.Wait() closer() @@ -195,7 +196,7 @@ func TestSubscribe(t *testing.T) { wg.Add(1) expectedGLSN := begin glsnC := make(chan types.GLSN) - onNext := func(logEntry types.LogEntry, err error) { + onNext := func(logEntry varlogpb.LogEntry, err error) { if err != nil { // NOTE: Regardless of context error or EOF, an // error should be raised only once. @@ -211,15 +212,12 @@ func TestSubscribe(t *testing.T) { expectedGLSN++ } } - closer, err := vlg.subscribe(context.TODO(), begin, end, onNext) + closer, err := vlg.subscribe(context.TODO(), topicID, begin, end, onNext) So(err, ShouldBeNil) go func() { - for { - select { - case glsn := <-glsnC: - if glsn == closePoint { - closer() - } + for glsn := range glsnC { + if glsn == closePoint { + closer() } } }() diff --git a/pkg/varlog/trim.go b/pkg/varlog/trim.go index f36217507..f6dac91c4 100644 --- a/pkg/varlog/trim.go +++ b/pkg/varlog/trim.go @@ -21,8 +21,8 @@ type trimArgument struct { err error } -func (v *varlog) trim(ctx context.Context, until types.GLSN, opts TrimOption) error { - trimArgs := createTrimArguments(v.replicasRetriever.All()) +func (v *varlog) trim(ctx context.Context, topicID types.TopicID, until types.GLSN, opts TrimOption) error { + trimArgs := createTrimArguments(v.replicasRetriever.All(topicID)) if len(trimArgs) == 0 { return errors.New("no storage node") } @@ -33,7 +33,7 @@ func (v *varlog) trim(ctx context.Context, until types.GLSN, opts TrimOption) er wg := new(sync.WaitGroup) wg.Add(len(trimArgs)) for _, trimArg := range trimArgs { - trimmer := v.makeTrimmer(trimArg, until, wg) + trimmer := v.makeTrimmer(trimArg, topicID, until, wg) v.runner.RunC(mctx, trimmer) } wg.Wait() @@ -47,7 +47,7 @@ func (v *varlog) trim(ctx context.Context, until types.GLSN, opts TrimOption) er return trimArgs[0].err } -func (v *varlog) makeTrimmer(trimArg *trimArgument, until types.GLSN, wg *sync.WaitGroup) func(context.Context) { +func (v *varlog) makeTrimmer(trimArg *trimArgument, topicID types.TopicID, until types.GLSN, wg *sync.WaitGroup) func(context.Context) { return func(ctx context.Context) { defer wg.Done() logCL, err := v.logCLManager.GetOrConnect(ctx, trimArg.storageNodeID, trimArg.address) @@ -55,7 +55,7 @@ func (v *varlog) makeTrimmer(trimArg *trimArgument, until types.GLSN, wg *sync.W trimArg.err = err return } - trimArg.err = logCL.Trim(ctx, until) + trimArg.err = logCL.Trim(ctx, topicID, until) // TODO (jun): Like subscribe, `ErrUndecidable` is ignored since the local // highwatermark of some log streams are less than the `until` of trim. // It is a sign of the need to clarify undecidable error in the log stream executor. diff --git a/pkg/varlog/trim_test.go b/pkg/varlog/trim_test.go index 2cd6fd374..0e8c34d7a 100644 --- a/pkg/varlog/trim_test.go +++ b/pkg/varlog/trim_test.go @@ -22,6 +22,7 @@ func TestTrim(t *testing.T) { const ( numStorageNodes = 10 minStorageNodeID = 0 + topicID = types.TopicID(1) ) ctrl := gomock.NewController(t) @@ -38,7 +39,7 @@ func TestTrim(t *testing.T) { }, } } - replicasRetriever.EXPECT().All().Return(replicasMap).MaxTimes(1) + replicasRetriever.EXPECT().All(topicID).Return(replicasMap).MaxTimes(1) createMockLogClientManager := func(expectedTrimResults []error) *logc.MockLogClientManager { logCLManager := logc.NewMockLogClientManager(ctrl) @@ -46,7 +47,7 @@ func TestTrim(t *testing.T) { func(_ context.Context, storageNodeID types.StorageNodeID, storagNodeAddr string) (logc.LogIOClient, error) { logCL := logc.NewMockLogIOClient(ctrl) expectedTrimResult := expectedTrimResults[int(storageNodeID)] - logCL.EXPECT().Trim(gomock.Any(), gomock.Any()).Return(expectedTrimResult) + logCL.EXPECT().Trim(gomock.Any(), gomock.Any(), gomock.Any()).Return(expectedTrimResult) return logCL, nil }, ).Times(numStorageNodes) @@ -66,7 +67,7 @@ func TestTrim(t *testing.T) { vlg.logCLManager = createMockLogClientManager(errs) Convey("Then the trim should fail", func() { - err := vlg.Trim(context.TODO(), types.GLSN(1), TrimOption{}) + err := vlg.Trim(context.TODO(), topicID, types.GLSN(1), TrimOption{}) So(err, ShouldNotBeNil) }) }) @@ -80,7 +81,7 @@ func TestTrim(t *testing.T) { vlg.logCLManager = createMockLogClientManager(errs) Convey("Then the trim should succeed", func() { - err := vlg.Trim(context.TODO(), types.GLSN(1), TrimOption{}) + err := vlg.Trim(context.TODO(), topicID, types.GLSN(1), TrimOption{}) So(err, ShouldBeNil) }) }) @@ -94,7 +95,7 @@ func TestTrim(t *testing.T) { vlg.logCLManager = createMockLogClientManager(errs) Convey("Then the trim should succeed", func() { - err := vlg.Trim(context.TODO(), types.GLSN(1), TrimOption{}) + err := vlg.Trim(context.TODO(), topicID, types.GLSN(1), TrimOption{}) So(err, ShouldBeNil) }) }) @@ -108,7 +109,7 @@ func TestTrim(t *testing.T) { vlg.logCLManager = createMockLogClientManager(errs) Convey("Then the trim should succeed", func() { - err := vlg.Trim(context.TODO(), types.GLSN(1), TrimOption{}) + err := vlg.Trim(context.TODO(), topicID, types.GLSN(1), TrimOption{}) So(err, ShouldBeNil) }) }) diff --git a/pkg/varlog/varlog.go b/pkg/varlog/varlog.go index 5c714bf4d..dcc430e82 100644 --- a/pkg/varlog/varlog.go +++ b/pkg/varlog/varlog.go @@ -11,24 +11,25 @@ import ( "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/util/runner" "github.com/kakao/varlog/pkg/util/syncutil/atomicutil" + "github.com/kakao/varlog/proto/varlogpb" ) // Varlog is a log interface with thread-safety. Many goroutines can share the same varlog object. type Varlog interface { io.Closer - Append(ctx context.Context, data []byte, opts ...AppendOption) (types.GLSN, error) + Append(ctx context.Context, topicID types.TopicID, data []byte, opts ...AppendOption) (types.GLSN, error) - AppendTo(ctx context.Context, logStreamID types.LogStreamID, data []byte, opts ...AppendOption) (types.GLSN, error) + AppendTo(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, data []byte, opts ...AppendOption) (types.GLSN, error) - Read(ctx context.Context, logStreamID types.LogStreamID, glsn types.GLSN) ([]byte, error) + Read(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, glsn types.GLSN) ([]byte, error) - Subscribe(ctx context.Context, begin types.GLSN, end types.GLSN, onNextFunc OnNext, opts ...SubscribeOption) (SubscribeCloser, error) + Subscribe(ctx context.Context, topicID types.TopicID, begin types.GLSN, end types.GLSN, onNextFunc OnNext, opts ...SubscribeOption) (SubscribeCloser, error) - Trim(ctx context.Context, until types.GLSN, opts TrimOption) error + Trim(ctx context.Context, topicID types.TopicID, until types.GLSN, opts TrimOption) error } -type OnNext func(logEntry types.LogEntry, err error) +type OnNext func(logEntry varlogpb.LogEntry, err error) type varlog struct { clusterID types.ClusterID @@ -121,29 +122,29 @@ func Open(ctx context.Context, clusterID types.ClusterID, mrAddrs []string, opts return v, nil } -func (v *varlog) Append(ctx context.Context, data []byte, opts ...AppendOption) (types.GLSN, error) { - return v.append(ctx, 0, data, opts...) +func (v *varlog) Append(ctx context.Context, topicID types.TopicID, data []byte, opts ...AppendOption) (types.GLSN, error) { + return v.append(ctx, topicID, 0, data, opts...) } -func (v *varlog) AppendTo(ctx context.Context, logStreamID types.LogStreamID, data []byte, opts ...AppendOption) (types.GLSN, error) { +func (v *varlog) AppendTo(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, data []byte, opts ...AppendOption) (types.GLSN, error) { opts = append(opts, withoutSelectLogStream()) - return v.append(ctx, logStreamID, data, opts...) + return v.append(ctx, topicID, logStreamID, data, opts...) } -func (v *varlog) Read(ctx context.Context, logStreamID types.LogStreamID, glsn types.GLSN) ([]byte, error) { - logEntry, err := v.read(ctx, logStreamID, glsn) +func (v *varlog) Read(ctx context.Context, topicID types.TopicID, logStreamID types.LogStreamID, glsn types.GLSN) ([]byte, error) { + logEntry, err := v.read(ctx, topicID, logStreamID, glsn) if err != nil { return nil, err } return logEntry.Data, nil } -func (v *varlog) Subscribe(ctx context.Context, begin types.GLSN, end types.GLSN, onNextFunc OnNext, opts ...SubscribeOption) (SubscribeCloser, error) { - return v.subscribe(ctx, begin, end, onNextFunc, opts...) +func (v *varlog) Subscribe(ctx context.Context, topicID types.TopicID, begin types.GLSN, end types.GLSN, onNextFunc OnNext, opts ...SubscribeOption) (SubscribeCloser, error) { + return v.subscribe(ctx, topicID, begin, end, onNextFunc, opts...) } -func (v *varlog) Trim(ctx context.Context, until types.GLSN, opts TrimOption) error { - return v.trim(ctx, until, opts) +func (v *varlog) Trim(ctx context.Context, topicID types.TopicID, until types.GLSN, opts TrimOption) error { + return v.trim(ctx, topicID, until, opts) } func (v *varlog) Close() (err error) { diff --git a/proto/errpb/errors.pb.go b/proto/errpb/errors.pb.go index 45f05e735..f251a2e6f 100644 --- a/proto/errpb/errors.pb.go +++ b/proto/errpb/errors.pb.go @@ -7,6 +7,7 @@ import ( fmt "fmt" math "math" + _ "github.com/gogo/protobuf/gogoproto" proto "github.com/gogo/protobuf/proto" ) @@ -22,10 +23,7 @@ var _ = math.Inf const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package type ErrorDetail struct { - ErrorString string `protobuf:"bytes,1,opt,name=error_string,json=errorString,proto3" json:"error_string,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ErrorString string `protobuf:"bytes,1,opt,name=error_string,json=errorString,proto3" json:"error_string,omitempty"` } func (m *ErrorDetail) Reset() { *m = ErrorDetail{} } @@ -66,14 +64,16 @@ func init() { func init() { proto.RegisterFile("proto/errpb/errors.proto", fileDescriptor_1428f469e232257f) } var fileDescriptor_1428f469e232257f = []byte{ - // 130 bytes of a gzipped FileDescriptorProto + // 166 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x28, 0x28, 0xca, 0x2f, 0xc9, 0xd7, 0x4f, 0x2d, 0x2a, 0x2a, 0x48, 0x02, 0x91, 0xf9, 0x45, 0xc5, 0x7a, 0x60, 0x21, 0x21, - 0x9e, 0xb2, 0xc4, 0xa2, 0x9c, 0xfc, 0x74, 0x3d, 0xb0, 0x94, 0x92, 0x01, 0x17, 0xb7, 0x2b, 0x48, - 0xd6, 0x25, 0xb5, 0x24, 0x31, 0x33, 0x47, 0x48, 0x91, 0x8b, 0x07, 0xac, 0x38, 0xbe, 0xb8, 0xa4, - 0x28, 0x33, 0x2f, 0x5d, 0x82, 0x51, 0x81, 0x51, 0x83, 0x33, 0x88, 0x1b, 0x2c, 0x16, 0x0c, 0x16, - 0x72, 0x32, 0x88, 0xd2, 0x4b, 0xcf, 0x2c, 0xc9, 0x28, 0x4d, 0xd2, 0x4b, 0x49, 0x2c, 0xcd, 0xcd, - 0x4e, 0xcc, 0x4e, 0xcc, 0xd7, 0x4b, 0xce, 0xcf, 0xd5, 0x87, 0x18, 0x0b, 0xa3, 0x90, 0xac, 0x4f, - 0x62, 0x03, 0x73, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0xa6, 0xa8, 0x83, 0xfb, 0x94, 0x00, - 0x00, 0x00, + 0x9e, 0xb2, 0xc4, 0xa2, 0x9c, 0xfc, 0x74, 0x3d, 0xb0, 0x94, 0x94, 0x6e, 0x7a, 0x66, 0x49, 0x46, + 0x69, 0x92, 0x5e, 0x72, 0x7e, 0xae, 0x7e, 0x7a, 0x7e, 0x7a, 0xbe, 0x3e, 0x58, 0x51, 0x52, 0x69, + 0x1a, 0x98, 0x07, 0x31, 0x04, 0xc4, 0x82, 0x68, 0x56, 0x32, 0xe0, 0xe2, 0x76, 0x05, 0x19, 0xe6, + 0x92, 0x5a, 0x92, 0x98, 0x99, 0x23, 0xa4, 0xc8, 0xc5, 0x03, 0x36, 0x3b, 0xbe, 0xb8, 0xa4, 0x28, + 0x33, 0x2f, 0x5d, 0x82, 0x51, 0x81, 0x51, 0x83, 0x33, 0x88, 0x1b, 0x2c, 0x16, 0x0c, 0x16, 0x72, + 0xb2, 0x99, 0xf0, 0x58, 0x8e, 0xe1, 0xc2, 0x63, 0x39, 0x86, 0x1b, 0x8f, 0xe5, 0x18, 0xa2, 0xf4, + 0xa0, 0xd6, 0xa5, 0x24, 0x96, 0xe6, 0x66, 0x27, 0x66, 0x27, 0xe6, 0x83, 0x2d, 0x86, 0xb8, 0x08, + 0x46, 0x21, 0xb9, 0x3c, 0x89, 0x0d, 0xcc, 0x31, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0x5f, 0x86, + 0xbb, 0xbf, 0xcf, 0x00, 0x00, 0x00, } diff --git a/proto/errpb/errors.proto b/proto/errpb/errors.proto index 59995153e..0f49bc043 100644 --- a/proto/errpb/errors.proto +++ b/proto/errpb/errors.proto @@ -2,8 +2,14 @@ syntax = "proto3"; package varlog.errpb; +import "github.com/gogo/protobuf/gogoproto/gogo.proto"; + option go_package = "github.com/kakao/varlog/proto/errpb"; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; + message ErrorDetail { string error_string = 1; } diff --git a/proto/mrpb/management.pb.go b/proto/mrpb/management.pb.go index 81621e597..d56dee877 100644 --- a/proto/mrpb/management.pb.go +++ b/proto/mrpb/management.pb.go @@ -32,12 +32,9 @@ var _ = math.Inf const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package type AddPeerRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,2,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` - Url string `protobuf:"bytes,3,opt,name=url,proto3" json:"url,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,2,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` + Url string `protobuf:"bytes,3,opt,name=url,proto3" json:"url,omitempty"` } func (m *AddPeerRequest) Reset() { *m = AddPeerRequest{} } @@ -95,11 +92,8 @@ func (m *AddPeerRequest) GetUrl() string { } type RemovePeerRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,2,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,2,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` } func (m *RemovePeerRequest) Reset() { *m = RemovePeerRequest{} } @@ -150,10 +144,7 @@ func (m *RemovePeerRequest) GetNodeID() github_daumkakao_com_varlog_varlog_pkg_t } type GetClusterInfoRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` } func (m *GetClusterInfoRequest) Reset() { *m = GetClusterInfoRequest{} } @@ -205,10 +196,7 @@ type ClusterInfo struct { // applied_index is the AppliedIndex of RAFT that is updated by changing // configuration of members. For example, AddPeer and RemovePeer result in // increasing applied_index. - AppliedIndex uint64 `protobuf:"varint,6,opt,name=applied_index,json=appliedIndex,proto3" json:"applied_index,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + AppliedIndex uint64 `protobuf:"varint,6,opt,name=applied_index,json=appliedIndex,proto3" json:"applied_index,omitempty"` } func (m *ClusterInfo) Reset() { *m = ClusterInfo{} } @@ -287,12 +275,9 @@ func (m *ClusterInfo) GetAppliedIndex() uint64 { } type ClusterInfo_Member struct { - Peer string `protobuf:"bytes,1,opt,name=peer,proto3" json:"peer,omitempty"` - Endpoint string `protobuf:"bytes,2,opt,name=endpoint,proto3" json:"endpoint,omitempty"` - Learner bool `protobuf:"varint,3,opt,name=learner,proto3" json:"learner,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Peer string `protobuf:"bytes,1,opt,name=peer,proto3" json:"peer,omitempty"` + Endpoint string `protobuf:"bytes,2,opt,name=endpoint,proto3" json:"endpoint,omitempty"` + Learner bool `protobuf:"varint,3,opt,name=learner,proto3" json:"learner,omitempty"` } func (m *ClusterInfo_Member) Reset() { *m = ClusterInfo_Member{} } @@ -350,10 +335,7 @@ func (m *ClusterInfo_Member) GetLearner() bool { } type GetClusterInfoResponse struct { - ClusterInfo *ClusterInfo `protobuf:"bytes,1,opt,name=cluster_info,json=clusterInfo,proto3" json:"cluster_info,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterInfo *ClusterInfo `protobuf:"bytes,1,opt,name=cluster_info,json=clusterInfo,proto3" json:"cluster_info,omitempty"` } func (m *GetClusterInfoResponse) Reset() { *m = GetClusterInfoResponse{} } @@ -409,46 +391,47 @@ func init() { func init() { proto.RegisterFile("proto/mrpb/management.proto", fileDescriptor_8658321b298c6927) } var fileDescriptor_8658321b298c6927 = []byte{ - // 618 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x55, 0x3d, 0x6f, 0x13, 0x4d, - 0x10, 0xce, 0x25, 0xfe, 0x88, 0xc7, 0x49, 0xf4, 0x66, 0xa5, 0x37, 0x3a, 0x5d, 0x24, 0xdb, 0xba, - 0x08, 0xc9, 0x4d, 0xce, 0x92, 0x23, 0x08, 0x22, 0x0d, 0x31, 0x24, 0x91, 0x8b, 0x00, 0x5a, 0x89, - 0x26, 0x29, 0xa2, 0xb3, 0x77, 0x7c, 0x9c, 0x7c, 0xb7, 0x7b, 0xac, 0xf7, 0xa2, 0xb8, 0x42, 0xe2, - 0x97, 0xc0, 0xbf, 0xa1, 0xa4, 0xa3, 0x0b, 0x92, 0x91, 0xf8, 0x03, 0x74, 0x54, 0xe8, 0x76, 0x6d, - 0xc7, 0xe6, 0x23, 0x40, 0x0a, 0x8a, 0x54, 0x9e, 0x9d, 0x99, 0x7d, 0x9e, 0x99, 0xb9, 0xd9, 0xc7, - 0xb0, 0x99, 0x48, 0xa1, 0x44, 0x23, 0x96, 0x49, 0xa7, 0x11, 0xfb, 0xdc, 0x0f, 0x30, 0x46, 0xae, - 0x3c, 0xed, 0x25, 0xe5, 0x73, 0x5f, 0x46, 0x22, 0xf0, 0xb2, 0xa8, 0xb3, 0x1d, 0x84, 0xea, 0x45, - 0xda, 0xf1, 0xba, 0x22, 0x6e, 0x04, 0x22, 0x10, 0x0d, 0x9d, 0xd3, 0x49, 0x7b, 0xfa, 0x64, 0x60, - 0x32, 0xcb, 0xdc, 0x75, 0x36, 0x03, 0x21, 0x82, 0x08, 0xaf, 0xb2, 0x30, 0x4e, 0xd4, 0xd0, 0x04, - 0xdd, 0xcf, 0x16, 0xac, 0xed, 0x33, 0xf6, 0x0c, 0x51, 0x52, 0x7c, 0x99, 0xe2, 0x40, 0x91, 0x1e, - 0x40, 0x37, 0x4a, 0x07, 0x0a, 0xe5, 0x59, 0xc8, 0x6c, 0xab, 0x66, 0xd5, 0x57, 0x5b, 0x47, 0xa3, - 0xcb, 0x6a, 0xe9, 0x91, 0xf1, 0xb6, 0x1f, 0x7f, 0xbd, 0xac, 0xde, 0x1b, 0xd7, 0xc0, 0xfc, 0x34, - 0xee, 0xfb, 0x7d, 0x5f, 0xe8, 0x6a, 0x4c, 0x95, 0x93, 0x9f, 0xa4, 0x1f, 0x34, 0xd4, 0x30, 0xc1, - 0x81, 0x37, 0xbd, 0x49, 0x4b, 0x63, 0xe8, 0x36, 0x23, 0x27, 0x50, 0xe4, 0x82, 0x61, 0x46, 0xb2, - 0x58, 0xb3, 0xea, 0xb9, 0xd6, 0xfe, 0xe8, 0xb2, 0x5a, 0x78, 0x22, 0x18, 0x6a, 0x86, 0x9d, 0xbf, - 0x62, 0x30, 0xd7, 0x68, 0x21, 0x43, 0x6c, 0x33, 0xf2, 0x1f, 0x2c, 0xa5, 0x32, 0xb2, 0x97, 0x6a, - 0x56, 0xbd, 0x44, 0x33, 0xd3, 0xfd, 0x60, 0xc1, 0x3a, 0xc5, 0x58, 0x9c, 0xe3, 0x2d, 0xeb, 0xd5, - 0x7d, 0x05, 0xff, 0x1f, 0xa1, 0x9a, 0xd0, 0xf2, 0x9e, 0xf8, 0xc7, 0xcd, 0xb9, 0x6f, 0xf3, 0x50, - 0x9e, 0xa1, 0xbf, 0x15, 0x0b, 0xf4, 0x14, 0x0a, 0x11, 0xfa, 0x0c, 0xa5, 0xde, 0xa1, 0x5c, 0x6b, - 0xf7, 0xc6, 0x80, 0x06, 0x86, 0x6c, 0x03, 0x91, 0x98, 0x44, 0x61, 0xd7, 0x57, 0xa1, 0xe0, 0x67, - 0x3d, 0xbf, 0xab, 0x84, 0xb4, 0x73, 0x35, 0xab, 0x9e, 0xa7, 0xeb, 0x33, 0x91, 0x43, 0x1d, 0x20, - 0x17, 0x50, 0x8c, 0x31, 0xee, 0xa0, 0x1c, 0xd8, 0xf9, 0xda, 0x52, 0xbd, 0xdc, 0xbc, 0xe3, 0xcd, - 0x48, 0x80, 0x37, 0x33, 0x6e, 0xef, 0xd8, 0xe4, 0x1d, 0x70, 0x25, 0x87, 0xad, 0xdd, 0xd7, 0x1f, - 0x6f, 0x56, 0xe7, 0x84, 0x8e, 0x6c, 0xc1, 0xaa, 0x9f, 0x24, 0x51, 0x88, 0xec, 0x2c, 0xe4, 0x0c, - 0x2f, 0xec, 0x42, 0x36, 0x00, 0xba, 0x32, 0x76, 0xb6, 0x33, 0x9f, 0x43, 0xa1, 0x60, 0x68, 0x09, - 0x81, 0x5c, 0x82, 0x28, 0xf5, 0x67, 0x2e, 0x51, 0x6d, 0x13, 0x07, 0x96, 0x91, 0xb3, 0x44, 0x84, - 0x5c, 0xe9, 0x2f, 0x53, 0xa2, 0xd3, 0x33, 0xb1, 0xa1, 0x18, 0xa1, 0x2f, 0xf9, 0x78, 0xb2, 0xcb, - 0x74, 0x72, 0x74, 0x4e, 0x61, 0x65, 0xb6, 0x95, 0xec, 0x0d, 0xf7, 0x71, 0xa8, 0x81, 0x73, 0x34, - 0x33, 0xc9, 0x5d, 0xc8, 0x9f, 0xfb, 0x51, 0x8a, 0x1a, 0xb4, 0xdc, 0xac, 0xfe, 0x66, 0x24, 0xd4, - 0x64, 0x3f, 0x58, 0xbc, 0x6f, 0xb9, 0xcf, 0x61, 0xe3, 0xfb, 0x47, 0x32, 0x48, 0x04, 0x1f, 0x20, - 0xd9, 0x83, 0x95, 0xe9, 0xb6, 0xf2, 0x9e, 0xd0, 0x7c, 0xe5, 0xa6, 0xfd, 0x2b, 0x6c, 0x5a, 0xee, - 0x5e, 0x1d, 0x9a, 0x5f, 0x2c, 0x80, 0xe3, 0xa9, 0x58, 0x93, 0x87, 0x50, 0x1c, 0x8b, 0x29, 0xd9, - 0x9c, 0x03, 0x98, 0x97, 0x58, 0x67, 0xc3, 0x33, 0x9a, 0xec, 0x4d, 0x34, 0xd9, 0x3b, 0xc8, 0x34, - 0xd9, 0x5d, 0x20, 0x87, 0x00, 0x57, 0x2a, 0x45, 0x2a, 0x73, 0x20, 0x3f, 0xc8, 0xd7, 0x35, 0x38, - 0xa7, 0xb0, 0x36, 0xdf, 0x2f, 0x71, 0xe7, 0xb0, 0x7e, 0xaa, 0x18, 0xce, 0xd6, 0xb5, 0x39, 0x66, - 0x60, 0xee, 0x42, 0x6b, 0xef, 0xdd, 0xa8, 0x62, 0xbd, 0x1f, 0x55, 0xac, 0x37, 0x9f, 0x2a, 0xd6, - 0xc9, 0xf6, 0x9f, 0xac, 0xdb, 0xf4, 0xbf, 0xad, 0x53, 0xd0, 0xf6, 0xce, 0xb7, 0x00, 0x00, 0x00, - 0xff, 0xff, 0x91, 0x94, 0xfa, 0xd8, 0xf0, 0x06, 0x00, 0x00, + // 630 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x55, 0xcd, 0x6e, 0xd3, 0x40, + 0x10, 0x8e, 0xdb, 0xfc, 0x34, 0x93, 0xb6, 0xa2, 0x2b, 0x51, 0x59, 0xae, 0xe4, 0x44, 0xae, 0x90, + 0x72, 0xa9, 0x23, 0xa5, 0x82, 0x22, 0xb8, 0xd0, 0x40, 0x5b, 0xe5, 0x50, 0x40, 0x2b, 0x71, 0x69, + 0x0f, 0x95, 0x13, 0x4f, 0x8c, 0x15, 0xdb, 0x6b, 0xd6, 0xeb, 0xaa, 0x39, 0x21, 0xf1, 0x04, 0x3c, + 0x02, 0xbc, 0x0d, 0xc7, 0xde, 0xe0, 0x54, 0xa4, 0x44, 0xe2, 0x05, 0xb8, 0x71, 0x42, 0xde, 0x4d, + 0xd2, 0x84, 0x9f, 0x02, 0x3d, 0x70, 0xe8, 0x29, 0xb3, 0x33, 0xb3, 0xdf, 0x37, 0x33, 0x9e, 0xfd, + 0x02, 0x1b, 0x31, 0x67, 0x82, 0x35, 0x42, 0x1e, 0x77, 0x1a, 0xa1, 0x13, 0x39, 0x1e, 0x86, 0x18, + 0x09, 0x5b, 0x7a, 0x49, 0xe5, 0xd4, 0xe1, 0x01, 0xf3, 0xec, 0x2c, 0x6a, 0x6c, 0x79, 0xbe, 0x78, + 0x99, 0x76, 0xec, 0x2e, 0x0b, 0x1b, 0x1e, 0xf3, 0x58, 0x43, 0xe6, 0x74, 0xd2, 0x9e, 0x3c, 0x29, + 0x98, 0xcc, 0x52, 0x77, 0x8d, 0x0d, 0x8f, 0x31, 0x2f, 0xc0, 0xcb, 0x2c, 0x0c, 0x63, 0x31, 0x50, + 0x41, 0xeb, 0x8b, 0x06, 0xab, 0xbb, 0xae, 0xfb, 0x1c, 0x91, 0x53, 0x7c, 0x95, 0x62, 0x22, 0x48, + 0x0f, 0xa0, 0x1b, 0xa4, 0x89, 0x40, 0x7e, 0xe2, 0xbb, 0xba, 0x56, 0xd3, 0xea, 0x2b, 0xad, 0x83, + 0xe1, 0x45, 0xb5, 0xfc, 0x58, 0x79, 0xdb, 0x4f, 0xbe, 0x5d, 0x54, 0xef, 0x8d, 0x6b, 0x70, 0x9d, + 0x34, 0xec, 0x3b, 0x7d, 0x87, 0xc9, 0x6a, 0x54, 0x95, 0x93, 0x9f, 0xb8, 0xef, 0x35, 0xc4, 0x20, + 0xc6, 0xc4, 0x9e, 0xde, 0xa4, 0xe5, 0x31, 0x74, 0xdb, 0x25, 0x47, 0x50, 0x8a, 0x98, 0x8b, 0x19, + 0xc9, 0x42, 0x4d, 0xab, 0xe7, 0x5b, 0xbb, 0xc3, 0x8b, 0x6a, 0xf1, 0x29, 0x73, 0x51, 0x32, 0x6c, + 0xff, 0x13, 0x83, 0xba, 0x46, 0x8b, 0x19, 0x62, 0xdb, 0x25, 0xb7, 0x60, 0x31, 0xe5, 0x81, 0xbe, + 0x58, 0xd3, 0xea, 0x65, 0x9a, 0x99, 0xd6, 0x47, 0x0d, 0xd6, 0x28, 0x86, 0xec, 0x14, 0x6f, 0x58, + 0xaf, 0xd6, 0x6b, 0xb8, 0x7d, 0x80, 0x62, 0x42, 0x1b, 0xf5, 0xd8, 0x7f, 0x6e, 0xce, 0x7a, 0x5f, + 0x80, 0xca, 0x0c, 0xfd, 0x8d, 0x58, 0xa0, 0x67, 0x50, 0x0c, 0xd0, 0x71, 0x91, 0xcb, 0x1d, 0xca, + 0xb7, 0x76, 0xae, 0x0d, 0xa8, 0x60, 0xc8, 0x16, 0x10, 0x8e, 0x71, 0xe0, 0x77, 0x1d, 0xe1, 0xb3, + 0xe8, 0xa4, 0xe7, 0x74, 0x05, 0xe3, 0x7a, 0xbe, 0xa6, 0xd5, 0x0b, 0x74, 0x6d, 0x26, 0xb2, 0x2f, + 0x03, 0xe4, 0x0c, 0x4a, 0x21, 0x86, 0x1d, 0xe4, 0x89, 0x5e, 0xa8, 0x2d, 0xd6, 0x2b, 0xcd, 0x3b, + 0xf6, 0x8c, 0x04, 0xd8, 0x33, 0xe3, 0xb6, 0x0f, 0x55, 0xde, 0x5e, 0x24, 0xf8, 0xa0, 0xb5, 0xf3, + 0xe6, 0xf3, 0xf5, 0xea, 0x9c, 0xd0, 0x91, 0x4d, 0x58, 0x71, 0xe2, 0x38, 0xf0, 0xd1, 0x3d, 0xf1, + 0x23, 0x17, 0xcf, 0xf4, 0x62, 0x36, 0x00, 0xba, 0x3c, 0x76, 0xb6, 0x33, 0x9f, 0x41, 0xa1, 0xa8, + 0x68, 0x09, 0x81, 0x7c, 0x8c, 0xc8, 0xe5, 0x67, 0x2e, 0x53, 0x69, 0x13, 0x03, 0x96, 0x30, 0x72, + 0x63, 0xe6, 0x47, 0x42, 0x7e, 0x99, 0x32, 0x9d, 0x9e, 0x89, 0x0e, 0xa5, 0x00, 0x1d, 0x1e, 0x8d, + 0x27, 0xbb, 0x44, 0x27, 0x47, 0xe3, 0x18, 0x96, 0x67, 0x5b, 0xc9, 0xde, 0x70, 0x1f, 0x07, 0x12, + 0x38, 0x4f, 0x33, 0x93, 0xdc, 0x85, 0xc2, 0xa9, 0x13, 0xa4, 0x28, 0x41, 0x2b, 0xcd, 0xea, 0x1f, + 0x46, 0x42, 0x55, 0xf6, 0x83, 0x85, 0xfb, 0x9a, 0xf5, 0x02, 0xd6, 0x7f, 0x7c, 0x24, 0x49, 0xcc, + 0xa2, 0x04, 0xc9, 0x43, 0x58, 0x9e, 0x6e, 0x6b, 0xd4, 0x63, 0x92, 0xaf, 0xd2, 0xd4, 0x7f, 0x87, + 0x4d, 0x2b, 0xdd, 0xcb, 0x43, 0xf3, 0xab, 0x06, 0x70, 0x38, 0x15, 0x6b, 0xf2, 0x08, 0x4a, 0x63, + 0x31, 0x25, 0x1b, 0x73, 0x00, 0xf3, 0x12, 0x6b, 0xac, 0xdb, 0x4a, 0x93, 0xed, 0x89, 0x26, 0xdb, + 0x7b, 0x99, 0x26, 0x5b, 0x39, 0xb2, 0x0f, 0x70, 0xa9, 0x52, 0xc4, 0x9c, 0x03, 0xf9, 0x49, 0xbe, + 0xae, 0xc0, 0x39, 0x86, 0xd5, 0xf9, 0x7e, 0x89, 0x35, 0x87, 0xf5, 0x4b, 0xc5, 0x30, 0x36, 0xaf, + 0xcc, 0x51, 0x03, 0xb3, 0x72, 0xad, 0x83, 0x0f, 0x43, 0x53, 0x3b, 0x1f, 0x9a, 0xda, 0xdb, 0x91, + 0x99, 0x7b, 0x37, 0x32, 0xb5, 0xf3, 0x91, 0x99, 0xfb, 0x34, 0x32, 0x73, 0x47, 0x5b, 0x7f, 0xb3, + 0x7a, 0xd3, 0xff, 0xb9, 0x4e, 0x51, 0xda, 0xdb, 0xdf, 0x03, 0x00, 0x00, 0xff, 0xff, 0xd8, 0x52, + 0x16, 0xa7, 0xfc, 0x06, 0x00, 0x00, } // Reference imports to suppress errors if they are not otherwise used. @@ -623,10 +606,6 @@ func (m *AddPeerRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Url) > 0 { i -= len(m.Url) copy(dAtA[i:], m.Url) @@ -667,10 +646,6 @@ func (m *RemovePeerRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.NodeID != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.NodeID)) i-- @@ -704,10 +679,6 @@ func (m *GetClusterInfoRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.ClusterID != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.ClusterID)) i-- @@ -736,10 +707,6 @@ func (m *ClusterInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.AppliedIndex != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.AppliedIndex)) i-- @@ -812,10 +779,6 @@ func (m *ClusterInfo_Member) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Learner { i-- if m.Learner { @@ -863,10 +826,6 @@ func (m *GetClusterInfoResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.ClusterInfo != nil { { size, err := m.ClusterInfo.MarshalToSizedBuffer(dAtA[:i]) @@ -909,9 +868,6 @@ func (m *AddPeerRequest) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovManagement(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -927,9 +883,6 @@ func (m *RemovePeerRequest) ProtoSize() (n int) { if m.NodeID != 0 { n += 1 + sovManagement(uint64(m.NodeID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -942,9 +895,6 @@ func (m *GetClusterInfoRequest) ProtoSize() (n int) { if m.ClusterID != 0 { n += 1 + sovManagement(uint64(m.ClusterID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -982,9 +932,6 @@ func (m *ClusterInfo) ProtoSize() (n int) { if m.AppliedIndex != 0 { n += 1 + sovManagement(uint64(m.AppliedIndex)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1005,9 +952,6 @@ func (m *ClusterInfo_Member) ProtoSize() (n int) { if m.Learner { n += 2 } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1021,9 +965,6 @@ func (m *GetClusterInfoResponse) ProtoSize() (n int) { l = m.ClusterInfo.ProtoSize() n += 1 + l + sovManagement(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1144,7 +1085,6 @@ func (m *AddPeerRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1233,7 +1173,6 @@ func (m *RemovePeerRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1303,7 +1242,6 @@ func (m *GetClusterInfoRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1564,7 +1502,6 @@ func (m *ClusterInfo) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1699,7 +1636,6 @@ func (m *ClusterInfo_Member) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1786,7 +1722,6 @@ func (m *GetClusterInfoResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/mrpb/management.proto b/proto/mrpb/management.proto index 4972e0afb..10397717d 100644 --- a/proto/mrpb/management.proto +++ b/proto/mrpb/management.proto @@ -10,6 +10,9 @@ option go_package = "github.com/kakao/varlog/proto/mrpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; message AddPeerRequest { uint32 cluster_id = 1 [ diff --git a/proto/mrpb/metadata_repository.pb.go b/proto/mrpb/metadata_repository.pb.go index ef922f6da..b42650038 100644 --- a/proto/mrpb/metadata_repository.pb.go +++ b/proto/mrpb/metadata_repository.pb.go @@ -33,9 +33,6 @@ var _ = math.Inf const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package type GetMetadataRequest struct { - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` } func (m *GetMetadataRequest) Reset() { *m = GetMetadataRequest{} } @@ -72,10 +69,7 @@ func (m *GetMetadataRequest) XXX_DiscardUnknown() { var xxx_messageInfo_GetMetadataRequest proto.InternalMessageInfo type GetMetadataResponse struct { - Metadata *varlogpb.MetadataDescriptor `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Metadata *varlogpb.MetadataDescriptor `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` } func (m *GetMetadataResponse) Reset() { *m = GetMetadataResponse{} } @@ -119,10 +113,7 @@ func (m *GetMetadataResponse) GetMetadata() *varlogpb.MetadataDescriptor { } type StorageNodeRequest struct { - StorageNode *varlogpb.StorageNodeDescriptor `protobuf:"bytes,1,opt,name=storage_node,json=storageNode,proto3" json:"storage_node,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNode *varlogpb.StorageNodeDescriptor `protobuf:"bytes,1,opt,name=storage_node,json=storageNode,proto3" json:"storage_node,omitempty"` } func (m *StorageNodeRequest) Reset() { *m = StorageNodeRequest{} } @@ -166,10 +157,7 @@ func (m *StorageNodeRequest) GetStorageNode() *varlogpb.StorageNodeDescriptor { } type LogStreamRequest struct { - LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` } func (m *LogStreamRequest) Reset() { *m = LogStreamRequest{} } @@ -213,11 +201,8 @@ func (m *LogStreamRequest) GetLogStream() *varlogpb.LogStreamDescriptor { } type SealRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *SealRequest) Reset() { *m = SealRequest{} } @@ -268,11 +253,8 @@ func (m *SealRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_ty } type SealResponse struct { - Status varlogpb.LogStreamStatus `protobuf:"varint,1,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` - LastCommittedGLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=last_committed_glsn,json=lastCommittedGlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"last_committed_glsn,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Status varlogpb.LogStreamStatus `protobuf:"varint,1,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` + LastCommittedGLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=last_committed_glsn,json=lastCommittedGlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"last_committed_glsn,omitempty"` } func (m *SealResponse) Reset() { *m = SealResponse{} } @@ -323,11 +305,8 @@ func (m *SealResponse) GetLastCommittedGLSN() github_daumkakao_com_varlog_varlog } type UnsealRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *UnsealRequest) Reset() { *m = UnsealRequest{} } @@ -378,10 +357,7 @@ func (m *UnsealRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_ } type UnsealResponse struct { - Status varlogpb.LogStreamStatus `protobuf:"varint,1,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Status varlogpb.LogStreamStatus `protobuf:"varint,1,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` } func (m *UnsealResponse) Reset() { *m = UnsealResponse{} } @@ -424,6 +400,50 @@ func (m *UnsealResponse) GetStatus() varlogpb.LogStreamStatus { return varlogpb.LogStreamStatusRunning } +type TopicRequest struct { + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` +} + +func (m *TopicRequest) Reset() { *m = TopicRequest{} } +func (m *TopicRequest) String() string { return proto.CompactTextString(m) } +func (*TopicRequest) ProtoMessage() {} +func (*TopicRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffe516e0fdff161, []int{8} +} +func (m *TopicRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *TopicRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_TopicRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *TopicRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_TopicRequest.Merge(m, src) +} +func (m *TopicRequest) XXX_Size() int { + return m.ProtoSize() +} +func (m *TopicRequest) XXX_DiscardUnknown() { + xxx_messageInfo_TopicRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_TopicRequest proto.InternalMessageInfo + +func (m *TopicRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func init() { proto.RegisterType((*GetMetadataRequest)(nil), "varlog.mrpb.GetMetadataRequest") proto.RegisterType((*GetMetadataResponse)(nil), "varlog.mrpb.GetMetadataResponse") @@ -433,6 +453,7 @@ func init() { proto.RegisterType((*SealResponse)(nil), "varlog.mrpb.SealResponse") proto.RegisterType((*UnsealRequest)(nil), "varlog.mrpb.UnsealRequest") proto.RegisterType((*UnsealResponse)(nil), "varlog.mrpb.UnsealResponse") + proto.RegisterType((*TopicRequest)(nil), "varlog.mrpb.TopicRequest") } func init() { @@ -440,48 +461,53 @@ func init() { } var fileDescriptor_0ffe516e0fdff161 = []byte{ - // 650 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe4, 0x55, 0xdd, 0x4e, 0xd4, 0x40, - 0x14, 0xa6, 0x06, 0x89, 0x9c, 0x02, 0xca, 0xac, 0x3f, 0x50, 0x22, 0x25, 0x95, 0x18, 0x6f, 0xe8, - 0x46, 0x4c, 0x0c, 0x89, 0x31, 0x26, 0x0b, 0x86, 0x2c, 0xae, 0x68, 0xba, 0xc1, 0x0b, 0x13, 0xb3, - 0x99, 0xdd, 0x0e, 0x63, 0x43, 0xbb, 0x53, 0x67, 0x66, 0x49, 0xf0, 0xd6, 0x17, 0xf1, 0x61, 0xbc, - 0xe0, 0xd2, 0x27, 0xd8, 0x8b, 0x35, 0x31, 0x3e, 0x03, 0x57, 0xa6, 0xd3, 0x4e, 0x7f, 0x58, 0x30, - 0x12, 0xf7, 0xce, 0xab, 0xed, 0x9e, 0x73, 0xbe, 0xef, 0x9b, 0xd3, 0x33, 0xe7, 0x2b, 0xac, 0xc7, - 0x9c, 0x49, 0x56, 0x8f, 0x78, 0xdc, 0xad, 0x47, 0x44, 0x62, 0x1f, 0x4b, 0xdc, 0xe1, 0x24, 0x66, - 0x22, 0x90, 0x8c, 0x9f, 0xb8, 0x2a, 0x8d, 0xcc, 0x63, 0xcc, 0x43, 0x46, 0xdd, 0xa4, 0xcc, 0xda, - 0xa0, 0x81, 0xfc, 0x38, 0xe8, 0xba, 0x3d, 0x16, 0xd5, 0x29, 0xa3, 0xac, 0xae, 0x6a, 0xba, 0x83, - 0x43, 0xf5, 0x2f, 0xe5, 0x4b, 0x9e, 0x52, 0xac, 0xb5, 0x42, 0x19, 0xa3, 0x21, 0x29, 0xaa, 0x48, - 0x14, 0xcb, 0x8c, 0xd8, 0xba, 0x97, 0x12, 0x97, 0xc4, 0xd3, 0x84, 0x73, 0x1b, 0xd0, 0x2e, 0x91, - 0xaf, 0xb3, 0xa0, 0x47, 0x3e, 0x0d, 0x88, 0x90, 0xce, 0x3b, 0xa8, 0x55, 0xa2, 0x22, 0x66, 0x7d, - 0x41, 0xd0, 0x0b, 0xb8, 0xa1, 0xe1, 0x4b, 0xc6, 0x9a, 0xf1, 0xc8, 0xdc, 0x7c, 0xe0, 0x66, 0x27, - 0xd6, 0xfc, 0xae, 0x06, 0xed, 0x10, 0xd1, 0xe3, 0x41, 0x2c, 0x19, 0xf7, 0x72, 0x90, 0x43, 0x00, - 0xb5, 0x25, 0xe3, 0x98, 0x92, 0x7d, 0xe6, 0x93, 0x4c, 0x0d, 0xbd, 0x81, 0x39, 0x91, 0x46, 0x3b, - 0x7d, 0xe6, 0x93, 0x8c, 0xfa, 0xe1, 0x18, 0x75, 0x09, 0x5a, 0xb0, 0x37, 0xa6, 0x4f, 0x87, 0xb6, - 0xe1, 0x99, 0xa2, 0x48, 0x3a, 0x1f, 0xe0, 0x56, 0x8b, 0xd1, 0xb6, 0xe4, 0x04, 0x47, 0x5a, 0xa4, - 0x09, 0x10, 0x32, 0xda, 0x11, 0x2a, 0x98, 0x49, 0xac, 0x8f, 0x49, 0xe4, 0xb0, 0x31, 0x81, 0xd9, - 0x50, 0xa7, 0x9c, 0x9f, 0x06, 0x98, 0x6d, 0x82, 0x43, 0x4d, 0x7d, 0x08, 0xd0, 0x0b, 0x07, 0x42, - 0x12, 0xde, 0x09, 0x7c, 0x45, 0x3d, 0xdf, 0xd8, 0x1d, 0x0d, 0xed, 0xd9, 0xed, 0x34, 0xda, 0xdc, - 0x39, 0x1b, 0xda, 0x4f, 0xb3, 0x69, 0xfa, 0x78, 0x10, 0x1d, 0xe1, 0x23, 0xcc, 0xd4, 0x5c, 0x53, - 0x61, 0xfd, 0x13, 0x1f, 0xd1, 0xba, 0x3c, 0x89, 0x89, 0x70, 0x73, 0xa4, 0x37, 0x9b, 0x51, 0x37, - 0x7d, 0xc4, 0x60, 0xbe, 0x68, 0x21, 0x91, 0xba, 0xa6, 0xa4, 0x5e, 0x8d, 0x86, 0xb6, 0x99, 0x1f, - 0x5c, 0x89, 0x6d, 0x5d, 0x49, 0xac, 0x84, 0xf5, 0xcc, 0xbc, 0xcd, 0xa6, 0xef, 0x7c, 0x33, 0x60, - 0x2e, 0x6d, 0x34, 0xbb, 0x00, 0x5b, 0x30, 0x23, 0x24, 0x96, 0x03, 0xa1, 0xba, 0x5c, 0xd8, 0x5c, - 0xbb, 0xfc, 0x05, 0xb6, 0x55, 0x9d, 0x97, 0xd5, 0xa3, 0xcf, 0x50, 0x0b, 0xb1, 0x90, 0x9d, 0x1e, - 0x8b, 0xa2, 0x40, 0x4a, 0xe2, 0x77, 0x68, 0x28, 0xfa, 0xaa, 0x83, 0xe9, 0xc6, 0xde, 0x68, 0x68, - 0x2f, 0xb6, 0xb0, 0x90, 0xdb, 0x3a, 0xbb, 0xdb, 0x6a, 0xef, 0x9f, 0x0d, 0xed, 0xc7, 0x57, 0xea, - 0x23, 0x01, 0x79, 0x8b, 0x61, 0x85, 0x27, 0x14, 0x7d, 0xe7, 0x97, 0x01, 0xf3, 0x07, 0x7d, 0xf1, - 0x3f, 0x4c, 0x6c, 0x0f, 0x16, 0x74, 0xa7, 0xff, 0x3a, 0xb2, 0xcd, 0x2f, 0xd7, 0x61, 0xb9, 0xb0, - 0x00, 0xed, 0x54, 0x6d, 0xc2, 0x8f, 0x83, 0x1e, 0x41, 0x6f, 0xa1, 0xe6, 0x11, 0x1a, 0x24, 0x8d, - 0x96, 0xf6, 0x12, 0xd9, 0x6e, 0xc9, 0xc2, 0xdc, 0xf1, 0x65, 0xb7, 0xee, 0xba, 0xa9, 0x4f, 0xb9, - 0xda, 0xa7, 0xdc, 0x97, 0x89, 0x4f, 0x39, 0x53, 0xc8, 0x83, 0x3b, 0x07, 0x7d, 0x3e, 0x59, 0xce, - 0x16, 0x2c, 0xea, 0x53, 0xe6, 0x6d, 0xa2, 0xfb, 0x15, 0xbe, 0xf3, 0x4e, 0xf1, 0x07, 0xb6, 0x7d, - 0xa8, 0x15, 0x27, 0x9c, 0x00, 0xdf, 0x1e, 0xdc, 0x3c, 0x88, 0x7d, 0x2c, 0xc9, 0x04, 0xb8, 0x3c, - 0x30, 0x4b, 0x96, 0x7d, 0xee, 0x9d, 0x8d, 0x5b, 0xbc, 0xb5, 0x76, 0x79, 0x41, 0x7a, 0x73, 0x9c, - 0x29, 0xf4, 0x1c, 0xa6, 0x93, 0xf5, 0x47, 0x4b, 0xd5, 0x01, 0x14, 0x8b, 0x64, 0x2d, 0x5f, 0x90, - 0xc9, 0xe1, 0xdb, 0x30, 0x93, 0x5e, 0x46, 0x64, 0x55, 0xca, 0x2a, 0xbb, 0x68, 0xad, 0x5c, 0x98, - 0xd3, 0x24, 0x8d, 0x67, 0xa7, 0xa3, 0x55, 0xe3, 0xfb, 0x68, 0xd5, 0xf8, 0xfa, 0x63, 0xd5, 0x78, - 0xbf, 0xf1, 0x37, 0x6b, 0x92, 0x7f, 0x69, 0xbb, 0x33, 0xea, 0xf9, 0xc9, 0xef, 0x00, 0x00, 0x00, - 0xff, 0xff, 0x64, 0xe9, 0xdd, 0x6d, 0x7e, 0x07, 0x00, 0x00, + // 729 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe4, 0x96, 0xcf, 0x4e, 0xdb, 0x4e, + 0x10, 0xc7, 0xe3, 0x9f, 0x20, 0xc0, 0x84, 0xc0, 0x8f, 0x4d, 0xff, 0x80, 0x51, 0xe3, 0xc8, 0x45, + 0x15, 0x17, 0x1c, 0x95, 0x56, 0x15, 0x97, 0xaa, 0x52, 0x42, 0x1b, 0x85, 0xa6, 0xb4, 0x72, 0x4a, + 0x0f, 0x95, 0xda, 0x68, 0x13, 0x2f, 0xae, 0x85, 0x9d, 0x75, 0xbd, 0x1b, 0x24, 0xfa, 0x14, 0xed, + 0x1b, 0xf4, 0x61, 0x7a, 0xe0, 0xc8, 0xb1, 0xa7, 0x1c, 0x12, 0xa9, 0xea, 0x33, 0x70, 0xaa, 0xbc, + 0xf6, 0xda, 0x09, 0x01, 0x54, 0x04, 0xb7, 0x9e, 0x62, 0xef, 0xcc, 0xf7, 0x33, 0x3b, 0x3b, 0xf6, + 0xd7, 0x81, 0x35, 0x3f, 0xa0, 0x9c, 0x96, 0xbd, 0xc0, 0x6f, 0x97, 0x3d, 0xc2, 0xb1, 0x85, 0x39, + 0x6e, 0x05, 0xc4, 0xa7, 0xcc, 0xe1, 0x34, 0x38, 0x32, 0x44, 0x18, 0xe5, 0x0e, 0x71, 0xe0, 0x52, + 0xdb, 0x08, 0xd3, 0xd4, 0x0d, 0xdb, 0xe1, 0x9f, 0x7a, 0x6d, 0xa3, 0x43, 0xbd, 0xb2, 0x4d, 0x6d, + 0x5a, 0x16, 0x39, 0xed, 0xde, 0xbe, 0xb8, 0x8b, 0x78, 0xe1, 0x55, 0xa4, 0x55, 0x57, 0x6d, 0x4a, + 0x6d, 0x97, 0xa4, 0x59, 0xc4, 0xf3, 0x79, 0x0c, 0x56, 0xef, 0x46, 0xe0, 0x91, 0xe2, 0x51, 0x40, + 0xbf, 0x05, 0xa8, 0x46, 0xf8, 0xab, 0x78, 0xd1, 0x24, 0x9f, 0x7b, 0x84, 0x71, 0xfd, 0x1d, 0x14, + 0xc6, 0x56, 0x99, 0x4f, 0xbb, 0x8c, 0xa0, 0x67, 0x30, 0x2b, 0xe5, 0xcb, 0x4a, 0x49, 0x59, 0xcf, + 0x6d, 0xde, 0x37, 0xe2, 0x1d, 0x4b, 0xbe, 0x21, 0x45, 0xdb, 0x84, 0x75, 0x02, 0xc7, 0xe7, 0x34, + 0x30, 0x13, 0x91, 0x4e, 0x00, 0x35, 0x39, 0x0d, 0xb0, 0x4d, 0x76, 0xa9, 0x45, 0xe2, 0x6a, 0xe8, + 0x35, 0xcc, 0xb3, 0x68, 0xb5, 0xd5, 0xa5, 0x16, 0x89, 0xd1, 0x0f, 0x26, 0xd0, 0x23, 0xd2, 0x94, + 0x5e, 0x99, 0x3a, 0xee, 0x6b, 0x8a, 0x99, 0x63, 0x69, 0x50, 0xff, 0x00, 0xff, 0x37, 0xa8, 0xdd, + 0xe4, 0x01, 0xc1, 0x9e, 0x2c, 0x52, 0x07, 0x70, 0xa9, 0xdd, 0x62, 0x62, 0x31, 0x2e, 0xb1, 0x36, + 0x51, 0x22, 0x91, 0x4d, 0x14, 0x98, 0x73, 0x65, 0x48, 0xff, 0xa5, 0x40, 0xae, 0x49, 0xb0, 0x2b, + 0xd1, 0xfb, 0x00, 0x1d, 0xb7, 0xc7, 0x38, 0x09, 0x5a, 0x8e, 0x25, 0xd0, 0xf9, 0x4a, 0x6d, 0xd0, + 0xd7, 0xe6, 0xaa, 0xd1, 0x6a, 0x7d, 0xfb, 0xb4, 0xaf, 0x3d, 0x89, 0xa7, 0x69, 0xe1, 0x9e, 0x77, + 0x80, 0x0f, 0x30, 0x15, 0x73, 0x8d, 0x0a, 0xcb, 0x1f, 0xff, 0xc0, 0x2e, 0xf3, 0x23, 0x9f, 0x30, + 0x23, 0x51, 0x9a, 0x73, 0x31, 0xba, 0x6e, 0x21, 0x0a, 0xf9, 0xb4, 0x85, 0xb0, 0xd4, 0x7f, 0x25, + 0x65, 0x7d, 0xba, 0xf2, 0x72, 0xd0, 0xd7, 0x72, 0xc9, 0xc6, 0x45, 0xb1, 0xad, 0x2b, 0x15, 0x1b, + 0xd1, 0x9a, 0xb9, 0xa4, 0xcd, 0xba, 0xa5, 0xff, 0x50, 0x60, 0x3e, 0x6a, 0x34, 0x7e, 0x00, 0xb6, + 0x20, 0xcb, 0x38, 0xe6, 0x3d, 0x26, 0xba, 0x5c, 0xd8, 0x2c, 0x5d, 0x7c, 0x80, 0x4d, 0x91, 0x67, + 0xc6, 0xf9, 0xe8, 0x0b, 0x14, 0x5c, 0xcc, 0x78, 0xab, 0x43, 0x3d, 0xcf, 0xe1, 0x9c, 0x58, 0x2d, + 0xdb, 0x65, 0x5d, 0xd1, 0xc1, 0x54, 0x65, 0x67, 0xd0, 0xd7, 0x96, 0x1a, 0x98, 0xf1, 0xaa, 0x8c, + 0xd6, 0x1a, 0xcd, 0xdd, 0xd3, 0xbe, 0xf6, 0xf0, 0x4a, 0x7d, 0x84, 0x22, 0x73, 0xc9, 0x1d, 0xe3, + 0xb8, 0xac, 0xab, 0xff, 0x56, 0x20, 0xbf, 0xd7, 0x65, 0xff, 0xc2, 0xc4, 0x76, 0x60, 0x41, 0x76, + 0x7a, 0xdd, 0x91, 0xe9, 0x5d, 0x98, 0x7f, 0x4b, 0x7d, 0xa7, 0x23, 0x0f, 0xed, 0x23, 0xcc, 0xf2, + 0xf0, 0x5e, 0x1e, 0xd9, 0x74, 0xa5, 0x3a, 0xe8, 0x6b, 0x33, 0x22, 0x47, 0xf4, 0xf0, 0xf8, 0x4a, + 0x3d, 0xc4, 0x3a, 0x73, 0x46, 0x40, 0xeb, 0xd6, 0xe6, 0xb7, 0x2c, 0xac, 0xa4, 0x96, 0x23, 0x9d, + 0xb1, 0x49, 0x82, 0x43, 0xa7, 0x43, 0xd0, 0x1b, 0x28, 0x98, 0xc4, 0x76, 0xc2, 0x83, 0x1d, 0xf1, + 0x01, 0xa4, 0x19, 0x23, 0x96, 0x69, 0x4c, 0x9a, 0x8b, 0x7a, 0xc7, 0x88, 0x7c, 0xd1, 0x90, 0xbe, + 0x68, 0x3c, 0x0f, 0x7d, 0x51, 0xcf, 0x20, 0x13, 0x6e, 0xef, 0x75, 0x83, 0x9b, 0x65, 0x6e, 0x43, + 0x5e, 0xee, 0x52, 0xf4, 0x87, 0x56, 0xc6, 0x58, 0xa3, 0xe7, 0x79, 0x09, 0xe5, 0x05, 0x2c, 0xa6, + 0x3b, 0xbb, 0x06, 0xa7, 0x01, 0x4b, 0x72, 0x37, 0xc9, 0x90, 0xd1, 0xbd, 0x31, 0xd2, 0x59, 0x9f, + 0xbc, 0x84, 0xb6, 0x0b, 0x85, 0x74, 0x57, 0x37, 0xc0, 0xdb, 0x81, 0xc5, 0x3d, 0xdf, 0xc2, 0x9c, + 0xdc, 0x00, 0xcb, 0x84, 0xdc, 0xc8, 0x07, 0xeb, 0xcc, 0x04, 0x27, 0x3f, 0x70, 0x6a, 0xe9, 0xe2, + 0x84, 0xe8, 0xbd, 0xd1, 0x33, 0xe8, 0x29, 0x4c, 0x85, 0xe6, 0x87, 0x96, 0xc7, 0x1f, 0x87, 0xd4, + 0x46, 0xd4, 0x95, 0x73, 0x22, 0x89, 0xbc, 0x0a, 0xd9, 0xe8, 0x55, 0x44, 0xea, 0x58, 0xda, 0x98, + 0x13, 0xa9, 0xab, 0xe7, 0xc6, 0x24, 0xa4, 0x52, 0x3b, 0x1e, 0x14, 0x95, 0x93, 0x41, 0x51, 0xf9, + 0x3a, 0x2c, 0x66, 0xbe, 0x0f, 0x8b, 0xca, 0xc9, 0xb0, 0x98, 0xf9, 0x39, 0x2c, 0x66, 0xde, 0x6f, + 0xfc, 0xcd, 0xcb, 0x96, 0xfc, 0xe7, 0x68, 0x67, 0xc5, 0xf5, 0xa3, 0x3f, 0x01, 0x00, 0x00, 0xff, + 0xff, 0x97, 0xbe, 0xc8, 0x4a, 0x88, 0x08, 0x00, 0x00, } // Reference imports to suppress errors if they are not otherwise used. @@ -498,6 +524,8 @@ const _ = grpc.SupportPackageIsVersion4 type MetadataRepositoryServiceClient interface { RegisterStorageNode(ctx context.Context, in *StorageNodeRequest, opts ...grpc.CallOption) (*types.Empty, error) UnregisterStorageNode(ctx context.Context, in *StorageNodeRequest, opts ...grpc.CallOption) (*types.Empty, error) + RegisterTopic(ctx context.Context, in *TopicRequest, opts ...grpc.CallOption) (*types.Empty, error) + UnregisterTopic(ctx context.Context, in *TopicRequest, opts ...grpc.CallOption) (*types.Empty, error) RegisterLogStream(ctx context.Context, in *LogStreamRequest, opts ...grpc.CallOption) (*types.Empty, error) UnregisterLogStream(ctx context.Context, in *LogStreamRequest, opts ...grpc.CallOption) (*types.Empty, error) UpdateLogStream(ctx context.Context, in *LogStreamRequest, opts ...grpc.CallOption) (*types.Empty, error) @@ -532,6 +560,24 @@ func (c *metadataRepositoryServiceClient) UnregisterStorageNode(ctx context.Cont return out, nil } +func (c *metadataRepositoryServiceClient) RegisterTopic(ctx context.Context, in *TopicRequest, opts ...grpc.CallOption) (*types.Empty, error) { + out := new(types.Empty) + err := c.cc.Invoke(ctx, "/varlog.mrpb.MetadataRepositoryService/RegisterTopic", in, out, opts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *metadataRepositoryServiceClient) UnregisterTopic(ctx context.Context, in *TopicRequest, opts ...grpc.CallOption) (*types.Empty, error) { + out := new(types.Empty) + err := c.cc.Invoke(ctx, "/varlog.mrpb.MetadataRepositoryService/UnregisterTopic", in, out, opts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *metadataRepositoryServiceClient) RegisterLogStream(ctx context.Context, in *LogStreamRequest, opts ...grpc.CallOption) (*types.Empty, error) { out := new(types.Empty) err := c.cc.Invoke(ctx, "/varlog.mrpb.MetadataRepositoryService/RegisterLogStream", in, out, opts...) @@ -590,6 +636,8 @@ func (c *metadataRepositoryServiceClient) Unseal(ctx context.Context, in *Unseal type MetadataRepositoryServiceServer interface { RegisterStorageNode(context.Context, *StorageNodeRequest) (*types.Empty, error) UnregisterStorageNode(context.Context, *StorageNodeRequest) (*types.Empty, error) + RegisterTopic(context.Context, *TopicRequest) (*types.Empty, error) + UnregisterTopic(context.Context, *TopicRequest) (*types.Empty, error) RegisterLogStream(context.Context, *LogStreamRequest) (*types.Empty, error) UnregisterLogStream(context.Context, *LogStreamRequest) (*types.Empty, error) UpdateLogStream(context.Context, *LogStreamRequest) (*types.Empty, error) @@ -608,6 +656,12 @@ func (*UnimplementedMetadataRepositoryServiceServer) RegisterStorageNode(ctx con func (*UnimplementedMetadataRepositoryServiceServer) UnregisterStorageNode(ctx context.Context, req *StorageNodeRequest) (*types.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method UnregisterStorageNode not implemented") } +func (*UnimplementedMetadataRepositoryServiceServer) RegisterTopic(ctx context.Context, req *TopicRequest) (*types.Empty, error) { + return nil, status.Errorf(codes.Unimplemented, "method RegisterTopic not implemented") +} +func (*UnimplementedMetadataRepositoryServiceServer) UnregisterTopic(ctx context.Context, req *TopicRequest) (*types.Empty, error) { + return nil, status.Errorf(codes.Unimplemented, "method UnregisterTopic not implemented") +} func (*UnimplementedMetadataRepositoryServiceServer) RegisterLogStream(ctx context.Context, req *LogStreamRequest) (*types.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method RegisterLogStream not implemented") } @@ -667,6 +721,42 @@ func _MetadataRepositoryService_UnregisterStorageNode_Handler(srv interface{}, c return interceptor(ctx, in, info, handler) } +func _MetadataRepositoryService_RegisterTopic_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(TopicRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(MetadataRepositoryServiceServer).RegisterTopic(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/varlog.mrpb.MetadataRepositoryService/RegisterTopic", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(MetadataRepositoryServiceServer).RegisterTopic(ctx, req.(*TopicRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _MetadataRepositoryService_UnregisterTopic_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(TopicRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(MetadataRepositoryServiceServer).UnregisterTopic(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/varlog.mrpb.MetadataRepositoryService/UnregisterTopic", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(MetadataRepositoryServiceServer).UnregisterTopic(ctx, req.(*TopicRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _MetadataRepositoryService_RegisterLogStream_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(LogStreamRequest) if err := dec(in); err != nil { @@ -787,6 +877,14 @@ var _MetadataRepositoryService_serviceDesc = grpc.ServiceDesc{ MethodName: "UnregisterStorageNode", Handler: _MetadataRepositoryService_UnregisterStorageNode_Handler, }, + { + MethodName: "RegisterTopic", + Handler: _MetadataRepositoryService_RegisterTopic_Handler, + }, + { + MethodName: "UnregisterTopic", + Handler: _MetadataRepositoryService_UnregisterTopic_Handler, + }, { MethodName: "RegisterLogStream", Handler: _MetadataRepositoryService_RegisterLogStream_Handler, @@ -836,10 +934,6 @@ func (m *GetMetadataRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } return len(dAtA) - i, nil } @@ -863,10 +957,6 @@ func (m *GetMetadataResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Metadata != nil { { size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) @@ -902,10 +992,6 @@ func (m *StorageNodeRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.StorageNode != nil { { size, err := m.StorageNode.MarshalToSizedBuffer(dAtA[:i]) @@ -941,10 +1027,6 @@ func (m *LogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStream != nil { { size, err := m.LogStream.MarshalToSizedBuffer(dAtA[:i]) @@ -980,10 +1062,6 @@ func (m *SealRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintMetadataRepository(dAtA, i, uint64(m.LogStreamID)) i-- @@ -1017,10 +1095,6 @@ func (m *SealResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LastCommittedGLSN != 0 { i = encodeVarintMetadataRepository(dAtA, i, uint64(m.LastCommittedGLSN)) i-- @@ -1054,10 +1128,6 @@ func (m *UnsealRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintMetadataRepository(dAtA, i, uint64(m.LogStreamID)) i-- @@ -1091,10 +1161,6 @@ func (m *UnsealResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Status != 0 { i = encodeVarintMetadataRepository(dAtA, i, uint64(m.Status)) i-- @@ -1103,6 +1169,34 @@ func (m *UnsealResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *TopicRequest) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *TopicRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.ProtoSize() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *TopicRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.TopicID != 0 { + i = encodeVarintMetadataRepository(dAtA, i, uint64(m.TopicID)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + func encodeVarintMetadataRepository(dAtA []byte, offset int, v uint64) int { offset -= sovMetadataRepository(v) base := offset @@ -1120,9 +1214,6 @@ func (m *GetMetadataRequest) ProtoSize() (n int) { } var l int _ = l - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1136,9 +1227,6 @@ func (m *GetMetadataResponse) ProtoSize() (n int) { l = m.Metadata.ProtoSize() n += 1 + l + sovMetadataRepository(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1152,9 +1240,6 @@ func (m *StorageNodeRequest) ProtoSize() (n int) { l = m.StorageNode.ProtoSize() n += 1 + l + sovMetadataRepository(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1168,9 +1253,6 @@ func (m *LogStreamRequest) ProtoSize() (n int) { l = m.LogStream.ProtoSize() n += 1 + l + sovMetadataRepository(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1186,9 +1268,6 @@ func (m *SealRequest) ProtoSize() (n int) { if m.LogStreamID != 0 { n += 1 + sovMetadataRepository(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1204,9 +1283,6 @@ func (m *SealResponse) ProtoSize() (n int) { if m.LastCommittedGLSN != 0 { n += 1 + sovMetadataRepository(uint64(m.LastCommittedGLSN)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1222,9 +1298,6 @@ func (m *UnsealRequest) ProtoSize() (n int) { if m.LogStreamID != 0 { n += 1 + sovMetadataRepository(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1237,8 +1310,17 @@ func (m *UnsealResponse) ProtoSize() (n int) { if m.Status != 0 { n += 1 + sovMetadataRepository(uint64(m.Status)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + return n +} + +func (m *TopicRequest) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.TopicID != 0 { + n += 1 + sovMetadataRepository(uint64(m.TopicID)) } return n } @@ -1290,7 +1372,6 @@ func (m *GetMetadataRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1377,7 +1458,6 @@ func (m *GetMetadataResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1464,7 +1544,6 @@ func (m *StorageNodeRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1551,7 +1630,6 @@ func (m *LogStreamRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1640,7 +1718,6 @@ func (m *SealRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1729,7 +1806,6 @@ func (m *SealResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1818,7 +1894,6 @@ func (m *UnsealRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1888,7 +1963,75 @@ func (m *UnsealResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *TopicRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadataRepository + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: TopicRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: TopicRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadataRepository + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipMetadataRepository(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthMetadataRepository + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } iNdEx += skippy } } diff --git a/proto/mrpb/metadata_repository.proto b/proto/mrpb/metadata_repository.proto index 148ecbfa0..aa71398f8 100644 --- a/proto/mrpb/metadata_repository.proto +++ b/proto/mrpb/metadata_repository.proto @@ -11,6 +11,9 @@ option go_package = "github.com/kakao/varlog/proto/mrpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; message GetMetadataRequest {} @@ -33,7 +36,7 @@ message SealRequest { "github.com/kakao/varlog/pkg/types.ClusterID", (gogoproto.customname) = "ClusterID" ]; - uint32 log_stream_id = 2 [ + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -55,7 +58,7 @@ message UnsealRequest { "github.com/kakao/varlog/pkg/types.ClusterID", (gogoproto.customname) = "ClusterID" ]; - uint32 log_stream_id = 2 [ + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -66,11 +69,21 @@ message UnsealResponse { varlogpb.LogStreamStatus status = 1; } +message TopicRequest { + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; +} + service MetadataRepositoryService { rpc RegisterStorageNode(StorageNodeRequest) returns (google.protobuf.Empty) {} rpc UnregisterStorageNode(StorageNodeRequest) returns (google.protobuf.Empty) {} + rpc RegisterTopic(TopicRequest) returns (google.protobuf.Empty) {} + rpc UnregisterTopic(TopicRequest) returns (google.protobuf.Empty) {} rpc RegisterLogStream(LogStreamRequest) returns (google.protobuf.Empty) {} rpc UnregisterLogStream(LogStreamRequest) returns (google.protobuf.Empty) {} rpc UpdateLogStream(LogStreamRequest) returns (google.protobuf.Empty) {} diff --git a/proto/mrpb/mock/mrpb_mock.go b/proto/mrpb/mock/mrpb_mock.go index 52be90c5a..539e2795e 100644 --- a/proto/mrpb/mock/mrpb_mock.go +++ b/proto/mrpb/mock/mrpb_mock.go @@ -249,6 +249,26 @@ func (mr *MockMetadataRepositoryServiceClientMockRecorder) RegisterStorageNode(a return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RegisterStorageNode", reflect.TypeOf((*MockMetadataRepositoryServiceClient)(nil).RegisterStorageNode), varargs...) } +// RegisterTopic mocks base method. +func (m *MockMetadataRepositoryServiceClient) RegisterTopic(arg0 context.Context, arg1 *mrpb.TopicRequest, arg2 ...grpc.CallOption) (*types.Empty, error) { + m.ctrl.T.Helper() + varargs := []interface{}{arg0, arg1} + for _, a := range arg2 { + varargs = append(varargs, a) + } + ret := m.ctrl.Call(m, "RegisterTopic", varargs...) + ret0, _ := ret[0].(*types.Empty) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// RegisterTopic indicates an expected call of RegisterTopic. +func (mr *MockMetadataRepositoryServiceClientMockRecorder) RegisterTopic(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call { + mr.mock.ctrl.T.Helper() + varargs := append([]interface{}{arg0, arg1}, arg2...) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RegisterTopic", reflect.TypeOf((*MockMetadataRepositoryServiceClient)(nil).RegisterTopic), varargs...) +} + // Seal mocks base method. func (m *MockMetadataRepositoryServiceClient) Seal(arg0 context.Context, arg1 *mrpb.SealRequest, arg2 ...grpc.CallOption) (*mrpb.SealResponse, error) { m.ctrl.T.Helper() @@ -309,6 +329,26 @@ func (mr *MockMetadataRepositoryServiceClientMockRecorder) UnregisterStorageNode return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UnregisterStorageNode", reflect.TypeOf((*MockMetadataRepositoryServiceClient)(nil).UnregisterStorageNode), varargs...) } +// UnregisterTopic mocks base method. +func (m *MockMetadataRepositoryServiceClient) UnregisterTopic(arg0 context.Context, arg1 *mrpb.TopicRequest, arg2 ...grpc.CallOption) (*types.Empty, error) { + m.ctrl.T.Helper() + varargs := []interface{}{arg0, arg1} + for _, a := range arg2 { + varargs = append(varargs, a) + } + ret := m.ctrl.Call(m, "UnregisterTopic", varargs...) + ret0, _ := ret[0].(*types.Empty) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// UnregisterTopic indicates an expected call of UnregisterTopic. +func (mr *MockMetadataRepositoryServiceClientMockRecorder) UnregisterTopic(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call { + mr.mock.ctrl.T.Helper() + varargs := append([]interface{}{arg0, arg1}, arg2...) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UnregisterTopic", reflect.TypeOf((*MockMetadataRepositoryServiceClient)(nil).UnregisterTopic), varargs...) +} + // Unseal mocks base method. func (m *MockMetadataRepositoryServiceClient) Unseal(arg0 context.Context, arg1 *mrpb.UnsealRequest, arg2 ...grpc.CallOption) (*mrpb.UnsealResponse, error) { m.ctrl.T.Helper() @@ -417,6 +457,21 @@ func (mr *MockMetadataRepositoryServiceServerMockRecorder) RegisterStorageNode(a return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RegisterStorageNode", reflect.TypeOf((*MockMetadataRepositoryServiceServer)(nil).RegisterStorageNode), arg0, arg1) } +// RegisterTopic mocks base method. +func (m *MockMetadataRepositoryServiceServer) RegisterTopic(arg0 context.Context, arg1 *mrpb.TopicRequest) (*types.Empty, error) { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "RegisterTopic", arg0, arg1) + ret0, _ := ret[0].(*types.Empty) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// RegisterTopic indicates an expected call of RegisterTopic. +func (mr *MockMetadataRepositoryServiceServerMockRecorder) RegisterTopic(arg0, arg1 interface{}) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RegisterTopic", reflect.TypeOf((*MockMetadataRepositoryServiceServer)(nil).RegisterTopic), arg0, arg1) +} + // Seal mocks base method. func (m *MockMetadataRepositoryServiceServer) Seal(arg0 context.Context, arg1 *mrpb.SealRequest) (*mrpb.SealResponse, error) { m.ctrl.T.Helper() @@ -462,6 +517,21 @@ func (mr *MockMetadataRepositoryServiceServerMockRecorder) UnregisterStorageNode return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UnregisterStorageNode", reflect.TypeOf((*MockMetadataRepositoryServiceServer)(nil).UnregisterStorageNode), arg0, arg1) } +// UnregisterTopic mocks base method. +func (m *MockMetadataRepositoryServiceServer) UnregisterTopic(arg0 context.Context, arg1 *mrpb.TopicRequest) (*types.Empty, error) { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "UnregisterTopic", arg0, arg1) + ret0, _ := ret[0].(*types.Empty) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// UnregisterTopic indicates an expected call of UnregisterTopic. +func (mr *MockMetadataRepositoryServiceServerMockRecorder) UnregisterTopic(arg0, arg1 interface{}) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UnregisterTopic", reflect.TypeOf((*MockMetadataRepositoryServiceServer)(nil).UnregisterTopic), arg0, arg1) +} + // Unseal mocks base method. func (m *MockMetadataRepositoryServiceServer) Unseal(arg0 context.Context, arg1 *mrpb.UnsealRequest) (*mrpb.UnsealResponse, error) { m.ctrl.T.Helper() diff --git a/proto/mrpb/raft_entry.pb.go b/proto/mrpb/raft_entry.pb.go index 8e8239e16..539c3641e 100644 --- a/proto/mrpb/raft_entry.pb.go +++ b/proto/mrpb/raft_entry.pb.go @@ -33,10 +33,7 @@ var _ = time.Kitchen const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package type RegisterStorageNode struct { - StorageNode *varlogpb.StorageNodeDescriptor `protobuf:"bytes,1,opt,name=storage_node,json=storageNode,proto3" json:"storage_node,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNode *varlogpb.StorageNodeDescriptor `protobuf:"bytes,1,opt,name=storage_node,json=storageNode,proto3" json:"storage_node,omitempty"` } func (m *RegisterStorageNode) Reset() { *m = RegisterStorageNode{} } @@ -80,10 +77,7 @@ func (m *RegisterStorageNode) GetStorageNode() *varlogpb.StorageNodeDescriptor { } type UnregisterStorageNode struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` } func (m *UnregisterStorageNode) Reset() { *m = UnregisterStorageNode{} } @@ -126,18 +120,103 @@ func (m *UnregisterStorageNode) GetStorageNodeID() github_daumkakao_com_varlog_v return 0 } +type RegisterTopic struct { + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` +} + +func (m *RegisterTopic) Reset() { *m = RegisterTopic{} } +func (m *RegisterTopic) String() string { return proto.CompactTextString(m) } +func (*RegisterTopic) ProtoMessage() {} +func (*RegisterTopic) Descriptor() ([]byte, []int) { + return fileDescriptor_9661c8402dd472d1, []int{2} +} +func (m *RegisterTopic) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *RegisterTopic) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_RegisterTopic.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *RegisterTopic) XXX_Merge(src proto.Message) { + xxx_messageInfo_RegisterTopic.Merge(m, src) +} +func (m *RegisterTopic) XXX_Size() int { + return m.ProtoSize() +} +func (m *RegisterTopic) XXX_DiscardUnknown() { + xxx_messageInfo_RegisterTopic.DiscardUnknown(m) +} + +var xxx_messageInfo_RegisterTopic proto.InternalMessageInfo + +func (m *RegisterTopic) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + +type UnregisterTopic struct { + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` +} + +func (m *UnregisterTopic) Reset() { *m = UnregisterTopic{} } +func (m *UnregisterTopic) String() string { return proto.CompactTextString(m) } +func (*UnregisterTopic) ProtoMessage() {} +func (*UnregisterTopic) Descriptor() ([]byte, []int) { + return fileDescriptor_9661c8402dd472d1, []int{3} +} +func (m *UnregisterTopic) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *UnregisterTopic) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_UnregisterTopic.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *UnregisterTopic) XXX_Merge(src proto.Message) { + xxx_messageInfo_UnregisterTopic.Merge(m, src) +} +func (m *UnregisterTopic) XXX_Size() int { + return m.ProtoSize() +} +func (m *UnregisterTopic) XXX_DiscardUnknown() { + xxx_messageInfo_UnregisterTopic.DiscardUnknown(m) +} + +var xxx_messageInfo_UnregisterTopic proto.InternalMessageInfo + +func (m *UnregisterTopic) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + type RegisterLogStream struct { - LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` } func (m *RegisterLogStream) Reset() { *m = RegisterLogStream{} } func (m *RegisterLogStream) String() string { return proto.CompactTextString(m) } func (*RegisterLogStream) ProtoMessage() {} func (*RegisterLogStream) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{2} + return fileDescriptor_9661c8402dd472d1, []int{4} } func (m *RegisterLogStream) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -174,17 +253,14 @@ func (m *RegisterLogStream) GetLogStream() *varlogpb.LogStreamDescriptor { } type UnregisterLogStream struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *UnregisterLogStream) Reset() { *m = UnregisterLogStream{} } func (m *UnregisterLogStream) String() string { return proto.CompactTextString(m) } func (*UnregisterLogStream) ProtoMessage() {} func (*UnregisterLogStream) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{3} + return fileDescriptor_9661c8402dd472d1, []int{5} } func (m *UnregisterLogStream) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -221,17 +297,14 @@ func (m *UnregisterLogStream) GetLogStreamID() github_daumkakao_com_varlog_varlo } type UpdateLogStream struct { - LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` } func (m *UpdateLogStream) Reset() { *m = UpdateLogStream{} } func (m *UpdateLogStream) String() string { return proto.CompactTextString(m) } func (*UpdateLogStream) ProtoMessage() {} func (*UpdateLogStream) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{4} + return fileDescriptor_9661c8402dd472d1, []int{6} } func (m *UpdateLogStream) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -268,18 +341,15 @@ func (m *UpdateLogStream) GetLogStream() *varlogpb.LogStreamDescriptor { } type Report struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - UncommitReport []snpb.LogStreamUncommitReport `protobuf:"bytes,2,rep,name=uncommit_report,json=uncommitReport,proto3" json:"uncommit_report"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + UncommitReport []snpb.LogStreamUncommitReport `protobuf:"bytes,2,rep,name=uncommit_report,json=uncommitReport,proto3" json:"uncommit_report"` } func (m *Report) Reset() { *m = Report{} } func (m *Report) String() string { return proto.CompactTextString(m) } func (*Report) ProtoMessage() {} func (*Report) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{5} + return fileDescriptor_9661c8402dd472d1, []int{7} } func (m *Report) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -323,17 +393,14 @@ func (m *Report) GetUncommitReport() []snpb.LogStreamUncommitReport { } type Reports struct { - Reports []*Report `protobuf:"bytes,1,rep,name=reports,proto3" json:"reports,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Reports []*Report `protobuf:"bytes,1,rep,name=reports,proto3" json:"reports,omitempty"` } func (m *Reports) Reset() { *m = Reports{} } func (m *Reports) String() string { return proto.CompactTextString(m) } func (*Reports) ProtoMessage() {} func (*Reports) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{6} + return fileDescriptor_9661c8402dd472d1, []int{8} } func (m *Reports) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -370,18 +437,15 @@ func (m *Reports) GetReports() []*Report { } type Commit struct { - NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` - CreatedTime time.Time `protobuf:"bytes,2,opt,name=created_time,json=createdTime,proto3,stdtime" json:"created_time"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` + CreatedTime time.Time `protobuf:"bytes,2,opt,name=created_time,json=createdTime,proto3,stdtime" json:"created_time"` } func (m *Commit) Reset() { *m = Commit{} } func (m *Commit) String() string { return proto.CompactTextString(m) } func (*Commit) ProtoMessage() {} func (*Commit) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{7} + return fileDescriptor_9661c8402dd472d1, []int{9} } func (m *Commit) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -425,17 +489,14 @@ func (m *Commit) GetCreatedTime() time.Time { } type Seal struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *Seal) Reset() { *m = Seal{} } func (m *Seal) String() string { return proto.CompactTextString(m) } func (*Seal) ProtoMessage() {} func (*Seal) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{8} + return fileDescriptor_9661c8402dd472d1, []int{10} } func (m *Seal) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -472,17 +533,14 @@ func (m *Seal) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.Log } type Unseal struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *Unseal) Reset() { *m = Unseal{} } func (m *Unseal) String() string { return proto.CompactTextString(m) } func (*Unseal) ProtoMessage() {} func (*Unseal) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{9} + return fileDescriptor_9661c8402dd472d1, []int{11} } func (m *Unseal) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -519,19 +577,16 @@ func (m *Unseal) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.L } type AddPeer struct { - NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` - Url string `protobuf:"bytes,2,opt,name=url,proto3" json:"url,omitempty"` - IsLearner bool `protobuf:"varint,3,opt,name=is_learner,json=isLearner,proto3" json:"is_learner,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` + Url string `protobuf:"bytes,2,opt,name=url,proto3" json:"url,omitempty"` + IsLearner bool `protobuf:"varint,3,opt,name=is_learner,json=isLearner,proto3" json:"is_learner,omitempty"` } func (m *AddPeer) Reset() { *m = AddPeer{} } func (m *AddPeer) String() string { return proto.CompactTextString(m) } func (*AddPeer) ProtoMessage() {} func (*AddPeer) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{10} + return fileDescriptor_9661c8402dd472d1, []int{12} } func (m *AddPeer) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -582,17 +637,14 @@ func (m *AddPeer) GetIsLearner() bool { } type RemovePeer struct { - NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` } func (m *RemovePeer) Reset() { *m = RemovePeer{} } func (m *RemovePeer) String() string { return proto.CompactTextString(m) } func (*RemovePeer) ProtoMessage() {} func (*RemovePeer) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{11} + return fileDescriptor_9661c8402dd472d1, []int{13} } func (m *RemovePeer) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -629,18 +681,15 @@ func (m *RemovePeer) GetNodeID() github_daumkakao_com_varlog_varlog_pkg_types.No } type Endpoint struct { - NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` - Url string `protobuf:"bytes,2,opt,name=url,proto3" json:"url,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` + Url string `protobuf:"bytes,2,opt,name=url,proto3" json:"url,omitempty"` } func (m *Endpoint) Reset() { *m = Endpoint{} } func (m *Endpoint) String() string { return proto.CompactTextString(m) } func (*Endpoint) ProtoMessage() {} func (*Endpoint) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{12} + return fileDescriptor_9661c8402dd472d1, []int{14} } func (m *Endpoint) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -684,17 +733,14 @@ func (m *Endpoint) GetUrl() string { } type RecoverStateMachine struct { - StateMachine *MetadataRepositoryDescriptor `protobuf:"bytes,1,opt,name=state_machine,json=stateMachine,proto3" json:"state_machine,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StateMachine *MetadataRepositoryDescriptor `protobuf:"bytes,1,opt,name=state_machine,json=stateMachine,proto3" json:"state_machine,omitempty"` } func (m *RecoverStateMachine) Reset() { *m = RecoverStateMachine{} } func (m *RecoverStateMachine) String() string { return proto.CompactTextString(m) } func (*RecoverStateMachine) ProtoMessage() {} func (*RecoverStateMachine) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{13} + return fileDescriptor_9661c8402dd472d1, []int{15} } func (m *RecoverStateMachine) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -731,20 +777,17 @@ func (m *RecoverStateMachine) GetStateMachine() *MetadataRepositoryDescriptor { } type RaftEntry struct { - NodeIndex uint64 `protobuf:"varint,1,opt,name=node_index,json=nodeIndex,proto3" json:"node_index,omitempty"` - RequestIndex uint64 `protobuf:"varint,2,opt,name=request_index,json=requestIndex,proto3" json:"request_index,omitempty"` - AppliedIndex uint64 `protobuf:"varint,3,opt,name=applied_index,json=appliedIndex,proto3" json:"applied_index,omitempty"` - Request RaftEntry_Request `protobuf:"bytes,4,opt,name=request,proto3" json:"request"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + NodeIndex uint64 `protobuf:"varint,1,opt,name=node_index,json=nodeIndex,proto3" json:"node_index,omitempty"` + RequestIndex uint64 `protobuf:"varint,2,opt,name=request_index,json=requestIndex,proto3" json:"request_index,omitempty"` + AppliedIndex uint64 `protobuf:"varint,3,opt,name=applied_index,json=appliedIndex,proto3" json:"applied_index,omitempty"` + Request RaftEntry_Request `protobuf:"bytes,4,opt,name=request,proto3" json:"request"` } func (m *RaftEntry) Reset() { *m = RaftEntry{} } func (m *RaftEntry) String() string { return proto.CompactTextString(m) } func (*RaftEntry) ProtoMessage() {} func (*RaftEntry) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{14} + return fileDescriptor_9661c8402dd472d1, []int{16} } func (m *RaftEntry) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -815,16 +858,15 @@ type RaftEntry_Request struct { RemovePeer *RemovePeer `protobuf:"bytes,11,opt,name=remove_peer,json=removePeer,proto3" json:"remove_peer,omitempty"` Endpoint *Endpoint `protobuf:"bytes,12,opt,name=endpoint,proto3" json:"endpoint,omitempty"` RecoverStateMachine *RecoverStateMachine `protobuf:"bytes,13,opt,name=recover_state_machine,json=recoverStateMachine,proto3" json:"recover_state_machine,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + RegisterTopic *RegisterTopic `protobuf:"bytes,14,opt,name=register_topic,json=registerTopic,proto3" json:"register_topic,omitempty"` + UnregisterTopic *UnregisterTopic `protobuf:"bytes,15,opt,name=unregister_topic,json=unregisterTopic,proto3" json:"unregister_topic,omitempty"` } func (m *RaftEntry_Request) Reset() { *m = RaftEntry_Request{} } func (m *RaftEntry_Request) String() string { return proto.CompactTextString(m) } func (*RaftEntry_Request) ProtoMessage() {} func (*RaftEntry_Request) Descriptor() ([]byte, []int) { - return fileDescriptor_9661c8402dd472d1, []int{14, 0} + return fileDescriptor_9661c8402dd472d1, []int{16, 0} } func (m *RaftEntry_Request) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -944,9 +986,25 @@ func (m *RaftEntry_Request) GetRecoverStateMachine() *RecoverStateMachine { return nil } +func (m *RaftEntry_Request) GetRegisterTopic() *RegisterTopic { + if m != nil { + return m.RegisterTopic + } + return nil +} + +func (m *RaftEntry_Request) GetUnregisterTopic() *UnregisterTopic { + if m != nil { + return m.UnregisterTopic + } + return nil +} + func init() { proto.RegisterType((*RegisterStorageNode)(nil), "varlog.mrpb.RegisterStorageNode") proto.RegisterType((*UnregisterStorageNode)(nil), "varlog.mrpb.UnregisterStorageNode") + proto.RegisterType((*RegisterTopic)(nil), "varlog.mrpb.RegisterTopic") + proto.RegisterType((*UnregisterTopic)(nil), "varlog.mrpb.UnregisterTopic") proto.RegisterType((*RegisterLogStream)(nil), "varlog.mrpb.RegisterLogStream") proto.RegisterType((*UnregisterLogStream)(nil), "varlog.mrpb.UnregisterLogStream") proto.RegisterType((*UpdateLogStream)(nil), "varlog.mrpb.UpdateLogStream") @@ -966,70 +1024,76 @@ func init() { func init() { proto.RegisterFile("proto/mrpb/raft_entry.proto", fileDescriptor_9661c8402dd472d1) } var fileDescriptor_9661c8402dd472d1 = []byte{ - // 1006 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xc4, 0x56, 0x4f, 0x6f, 0xe3, 0x44, - 0x14, 0xc7, 0x6d, 0x70, 0x92, 0x97, 0x84, 0x52, 0x87, 0xaa, 0x56, 0x81, 0xa4, 0xf2, 0x02, 0x5a, - 0x04, 0x75, 0x04, 0x7b, 0xa9, 0x16, 0x09, 0x69, 0xc3, 0xae, 0xa0, 0x62, 0xb7, 0x8b, 0xa6, 0xed, - 0xa5, 0x42, 0x58, 0x93, 0x78, 0xea, 0x5a, 0x8d, 0x3d, 0x66, 0x3c, 0xee, 0x6e, 0xbf, 0x00, 0x17, - 0x2e, 0x7b, 0xe2, 0xbc, 0x5f, 0x80, 0xef, 0xd1, 0x23, 0x07, 0xc4, 0x31, 0x48, 0xe1, 0x5b, 0x70, - 0x42, 0xf3, 0xc7, 0x8e, 0xdd, 0xf8, 0xc0, 0x1e, 0xba, 0x7b, 0xca, 0x78, 0xe6, 0xf7, 0xde, 0xbc, - 0xdf, 0x9b, 0x99, 0xdf, 0x2f, 0xf0, 0x7e, 0xc2, 0x28, 0xa7, 0xa3, 0x88, 0x25, 0x93, 0x11, 0xc3, - 0x67, 0xdc, 0x23, 0x31, 0x67, 0x57, 0xae, 0x9c, 0xb5, 0x3a, 0x97, 0x98, 0xcd, 0x68, 0xe0, 0x8a, - 0xd5, 0x9d, 0x61, 0x40, 0x69, 0x30, 0x23, 0x23, 0xb9, 0x34, 0xc9, 0xce, 0x46, 0x3c, 0x8c, 0x48, - 0xca, 0x71, 0x94, 0x28, 0xf4, 0xce, 0x5e, 0x10, 0xf2, 0xf3, 0x6c, 0xe2, 0x4e, 0x69, 0x34, 0x0a, - 0x68, 0x40, 0x97, 0x48, 0xf1, 0xa5, 0xf6, 0x11, 0x23, 0x0d, 0xdf, 0x56, 0xc9, 0x93, 0xc9, 0x28, - 0x22, 0x1c, 0xfb, 0x98, 0x63, 0xbd, 0x30, 0x48, 0xe3, 0x64, 0x32, 0x9a, 0xd1, 0xc0, 0x4b, 0x39, - 0x23, 0x38, 0xf2, 0x18, 0x49, 0x28, 0xe3, 0x84, 0xe9, 0xf5, 0x3b, 0xcb, 0x62, 0xf3, 0x48, 0x09, - 0x49, 0x43, 0x4e, 0xf3, 0xd2, 0x9d, 0x33, 0xe8, 0x23, 0x12, 0x84, 0x29, 0x27, 0xec, 0x88, 0x53, - 0x86, 0x03, 0x72, 0x48, 0x7d, 0x62, 0x3d, 0x85, 0x6e, 0xaa, 0x3e, 0xbd, 0x98, 0xfa, 0xc4, 0x36, - 0x76, 0x8d, 0xbb, 0x9d, 0x2f, 0x3f, 0x71, 0x35, 0xd1, 0xbc, 0x24, 0xb7, 0x14, 0xf3, 0x90, 0xa4, - 0x53, 0x16, 0x26, 0x9c, 0xb2, 0x71, 0xe3, 0x7a, 0x3e, 0x34, 0x50, 0x27, 0x5d, 0x2e, 0x3a, 0x2f, - 0x0c, 0xd8, 0x3a, 0x89, 0x59, 0xcd, 0x56, 0xcf, 0x60, 0xa3, 0xbc, 0x95, 0x17, 0xfa, 0x72, 0xb7, - 0xde, 0xf8, 0xe9, 0x62, 0x3e, 0xec, 0x95, 0x90, 0x07, 0x0f, 0xff, 0x9d, 0x0f, 0xef, 0xeb, 0xe6, - 0xf9, 0x38, 0x8b, 0x2e, 0xf0, 0x05, 0xa6, 0xb2, 0x8d, 0xaa, 0x9e, 0xfc, 0x27, 0xb9, 0x08, 0x46, - 0xfc, 0x2a, 0x21, 0xa9, 0x5b, 0x89, 0x46, 0xbd, 0x52, 0x41, 0x07, 0xbe, 0xf3, 0x13, 0x6c, 0xe6, - 0xd4, 0x1f, 0xd3, 0xe0, 0x48, 0xf6, 0xd0, 0x3a, 0x00, 0x58, 0x76, 0x54, 0xd3, 0xfe, 0x68, 0x85, - 0x76, 0x81, 0x5f, 0x21, 0xdd, 0x9e, 0xe5, 0x4b, 0xce, 0x2f, 0x06, 0xf4, 0x97, 0x94, 0x97, 0x5b, - 0x50, 0xe8, 0x95, 0x0e, 0xad, 0xa0, 0xfb, 0xfd, 0x62, 0x3e, 0xec, 0x14, 0x28, 0x49, 0x76, 0xff, - 0x95, 0xc8, 0x96, 0x62, 0x51, 0xa7, 0x28, 0xe3, 0xc0, 0x77, 0x7e, 0x84, 0x8d, 0x93, 0xc4, 0xc7, - 0x9c, 0xdc, 0x0a, 0xcd, 0xbf, 0x0c, 0x30, 0x91, 0xbc, 0x79, 0x6f, 0xec, 0x28, 0xad, 0x23, 0xd8, - 0xc8, 0xe2, 0x29, 0x8d, 0xa2, 0x90, 0xeb, 0x57, 0x60, 0xaf, 0xed, 0xae, 0x97, 0x39, 0x89, 0xb7, - 0xb2, 0xe4, 0x73, 0xa2, 0xc1, 0xaa, 0x6e, 0xc9, 0xe9, 0x2d, 0xf4, 0x4e, 0x56, 0x99, 0x75, 0xf6, - 0xa1, 0xa9, 0x46, 0xa9, 0xb5, 0x07, 0x4d, 0x95, 0x36, 0xb5, 0x0d, 0x99, 0xb7, 0xef, 0x96, 0x9e, - 0xbc, 0xab, 0x60, 0x28, 0xc7, 0x38, 0xbf, 0x1b, 0x60, 0x7e, 0x23, 0x53, 0x59, 0xa7, 0xd0, 0x2c, - 0xb7, 0xa2, 0x31, 0x7e, 0xb0, 0x98, 0x0f, 0xcd, 0xa2, 0x07, 0xf7, 0x5e, 0xa9, 0x07, 0x9a, 0xbc, - 0x19, 0x2b, 0xd6, 0xdf, 0x42, 0x77, 0xca, 0x08, 0xe6, 0xc4, 0xf7, 0x84, 0xc6, 0xd8, 0x6b, 0xf2, - 0x18, 0x77, 0x5c, 0x25, 0x40, 0x6e, 0x2e, 0x2b, 0xee, 0x71, 0x2e, 0x40, 0xe3, 0x96, 0x20, 0xfa, - 0xe2, 0x6f, 0xf1, 0x38, 0x75, 0xa4, 0x58, 0x73, 0x9e, 0x41, 0xe3, 0x88, 0xe0, 0xd9, 0xeb, 0xbf, - 0x99, 0x57, 0x60, 0x9e, 0xc4, 0xe9, 0x1b, 0xd9, 0xfa, 0x37, 0x03, 0x9a, 0x0f, 0x7c, 0xff, 0x07, - 0x42, 0xd8, 0xad, 0x1e, 0xd2, 0xbb, 0xb0, 0x9e, 0xb1, 0x99, 0x3c, 0x9b, 0x36, 0x12, 0x43, 0xeb, - 0x43, 0x80, 0x30, 0xf5, 0x66, 0x04, 0xb3, 0x98, 0x30, 0x7b, 0x7d, 0xd7, 0xb8, 0xdb, 0x42, 0xed, - 0x30, 0x7d, 0xac, 0x26, 0x9c, 0x73, 0x00, 0x44, 0x22, 0x7a, 0x49, 0x6e, 0xbb, 0x34, 0xe7, 0x39, - 0xb4, 0x1e, 0xc5, 0x7e, 0x42, 0xc3, 0x98, 0xbf, 0xde, 0x16, 0x38, 0x44, 0xb8, 0xce, 0x94, 0x5e, - 0x0a, 0x27, 0xc0, 0x9c, 0x3c, 0xc1, 0xd3, 0xf3, 0x30, 0x26, 0xd6, 0x21, 0xf4, 0x52, 0xf1, 0xed, - 0x45, 0x6a, 0x42, 0x0b, 0xd3, 0xa7, 0x95, 0xc7, 0xf6, 0x44, 0x7b, 0x19, 0x2a, 0xac, 0x6c, 0xa9, - 0x4e, 0xa8, 0x9b, 0x96, 0xf2, 0x39, 0xbf, 0xb6, 0xa0, 0x8d, 0xf0, 0x19, 0x7f, 0x24, 0xbc, 0x5a, - 0xf4, 0x5d, 0x51, 0x8c, 0x7d, 0xf2, 0x5c, 0xb1, 0x44, 0x6d, 0x59, 0xa2, 0x98, 0xb0, 0xee, 0x40, - 0x8f, 0x91, 0x9f, 0x33, 0x92, 0x72, 0x8d, 0x58, 0x93, 0x88, 0xae, 0x9e, 0x2c, 0x40, 0x38, 0x49, - 0x66, 0x21, 0xf1, 0x35, 0x68, 0x5d, 0x81, 0xf4, 0xa4, 0x02, 0x7d, 0x2d, 0xd4, 0x42, 0x06, 0xd9, - 0x0d, 0x49, 0x60, 0x50, 0x55, 0x8b, 0xbc, 0x22, 0x17, 0x29, 0x94, 0xd6, 0x9f, 0x3c, 0x68, 0xe7, - 0x4f, 0x53, 0x28, 0x8f, 0x1c, 0x5b, 0xc7, 0xb0, 0x95, 0x3b, 0x88, 0x57, 0xe3, 0xc8, 0xbb, 0x37, - 0x74, 0x68, 0xc5, 0x5e, 0x51, 0xbf, 0xce, 0x73, 0x4f, 0x61, 0x3b, 0x8b, 0xeb, 0xf3, 0x2a, 0x11, - 0x71, 0x2a, 0x79, 0x6b, 0x8d, 0x1b, 0x6d, 0x65, 0xb5, 0x7e, 0x7e, 0x08, 0xc5, 0x96, 0x5e, 0xc9, - 0x63, 0xd6, 0xeb, 0x3a, 0x71, 0xd3, 0x1b, 0xd1, 0xe6, 0xaa, 0x5d, 0x1e, 0x43, 0x69, 0xa3, 0x72, - 0xc6, 0x46, 0x4d, 0x07, 0x6a, 0xfc, 0x16, 0xf5, 0xb3, 0x1a, 0x13, 0xfe, 0x0e, 0x36, 0x33, 0xe9, - 0x89, 0xe5, 0x8c, 0x6f, 0xcb, 0x8c, 0x1f, 0x54, 0x33, 0x56, 0x9d, 0x13, 0x6d, 0x64, 0x37, 0xac, - 0xf4, 0x73, 0x30, 0xb5, 0xe5, 0x98, 0x32, 0xfc, 0xbd, 0x1a, 0x6b, 0x48, 0x91, 0xc6, 0x58, 0x9f, - 0x81, 0xa9, 0x4c, 0xc6, 0x6e, 0x4a, 0x74, 0xd5, 0x48, 0x94, 0x69, 0x20, 0x0d, 0xb1, 0x3e, 0x86, - 0x86, 0x10, 0x47, 0xbb, 0x25, 0xa1, 0x9b, 0x15, 0xa8, 0x10, 0x6c, 0x24, 0x97, 0x45, 0xce, 0x4c, - 0xaa, 0xa8, 0xdd, 0xae, 0xc9, 0xa9, 0x04, 0x16, 0x69, 0x88, 0x35, 0x82, 0x16, 0xf6, 0x7d, 0x2f, - 0x21, 0x84, 0xd9, 0x50, 0x53, 0xb0, 0xd6, 0x44, 0xd4, 0xc4, 0x5a, 0x1c, 0xf7, 0xa1, 0xc3, 0xa4, - 0x1e, 0xa9, 0x98, 0x8e, 0x8c, 0xd9, 0xbe, 0x41, 0x32, 0xd7, 0x2b, 0x04, 0x6c, 0xa9, 0x5d, 0x5f, - 0x40, 0x8b, 0x68, 0x7d, 0xb1, 0xbb, 0x32, 0x6c, 0xab, 0x12, 0x96, 0x8b, 0x0f, 0x2a, 0x60, 0xea, - 0xba, 0x4b, 0x61, 0xf0, 0xaa, 0x4a, 0xd0, 0xab, 0xbd, 0xee, 0x2b, 0x12, 0x22, 0xae, 0xfb, 0xca, - 0xe4, 0xfd, 0xc6, 0xf5, 0xcb, 0xa1, 0x31, 0xfe, 0xea, 0x7a, 0x31, 0x30, 0xfe, 0x58, 0x0c, 0x8c, - 0x97, 0xff, 0x0c, 0x8c, 0xd3, 0xbd, 0xff, 0xa3, 0x68, 0xc5, 0x7f, 0xfe, 0x89, 0x29, 0xc7, 0xf7, - 0xfe, 0x0b, 0x00, 0x00, 0xff, 0xff, 0x7f, 0x06, 0xaf, 0x5d, 0x08, 0x0c, 0x00, 0x00, + // 1104 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xc4, 0x57, 0x4d, 0x4f, 0xdc, 0xc6, + 0x1b, 0xc7, 0x61, 0xb3, 0x2f, 0xcf, 0xb2, 0x6c, 0x30, 0x7f, 0x84, 0xc5, 0xbf, 0xdd, 0x45, 0x4e, + 0x5b, 0xa5, 0x6a, 0xf1, 0xaa, 0x4d, 0x0f, 0x28, 0x87, 0x4a, 0x90, 0x44, 0x14, 0x35, 0x21, 0xd5, + 0x00, 0x17, 0x54, 0xc5, 0xf2, 0xae, 0x07, 0x63, 0xb1, 0xf6, 0x38, 0xe3, 0x31, 0x09, 0x5f, 0xa0, + 0x67, 0x4e, 0xed, 0x35, 0x5f, 0xa0, 0xdf, 0x83, 0x63, 0x4e, 0x55, 0x4f, 0xdb, 0x6a, 0xf9, 0x16, + 0x3d, 0x55, 0xf3, 0xe2, 0x37, 0xd6, 0x87, 0x72, 0x80, 0x9c, 0x98, 0x9d, 0xf9, 0x3d, 0x6f, 0x33, + 0x8f, 0x7f, 0xbf, 0x07, 0xf8, 0x7f, 0x44, 0x09, 0x23, 0x83, 0x80, 0x46, 0xc3, 0x01, 0x75, 0x8e, + 0x99, 0x8d, 0x43, 0x46, 0xcf, 0x2d, 0xb1, 0xab, 0xb7, 0xcf, 0x1c, 0x3a, 0x26, 0x9e, 0xc5, 0x4f, + 0xd7, 0xfa, 0x1e, 0x21, 0xde, 0x18, 0x0f, 0xc4, 0xd1, 0x30, 0x39, 0x1e, 0x30, 0x3f, 0xc0, 0x31, + 0x73, 0x82, 0x48, 0xa2, 0xd7, 0x36, 0x3c, 0x9f, 0x9d, 0x24, 0x43, 0x6b, 0x44, 0x82, 0x81, 0x47, + 0x3c, 0x92, 0x23, 0xf9, 0x2f, 0x19, 0x87, 0xaf, 0x14, 0x7c, 0x55, 0x3a, 0x8f, 0x86, 0x83, 0x00, + 0x33, 0xc7, 0x75, 0x98, 0xa3, 0x0e, 0x7a, 0x71, 0x18, 0x0d, 0x07, 0x63, 0xe2, 0xd9, 0x31, 0xa3, + 0xd8, 0x09, 0x6c, 0x8a, 0x23, 0x42, 0x19, 0xa6, 0xea, 0xfc, 0x61, 0x9e, 0x6c, 0x6a, 0x29, 0x20, + 0xb1, 0xcf, 0x48, 0x9a, 0xba, 0x79, 0x0c, 0xcb, 0x08, 0x7b, 0x7e, 0xcc, 0x30, 0xdd, 0x67, 0x84, + 0x3a, 0x1e, 0xde, 0x23, 0x2e, 0xd6, 0x5f, 0xc1, 0x42, 0x2c, 0x7f, 0xda, 0x21, 0x71, 0xb1, 0xa1, + 0xad, 0x6b, 0x8f, 0xda, 0xdf, 0x7e, 0x61, 0xa9, 0x42, 0xd3, 0x94, 0xac, 0x82, 0xcd, 0x33, 0x1c, + 0x8f, 0xa8, 0x1f, 0x31, 0x42, 0xb7, 0x6b, 0x97, 0x93, 0xbe, 0x86, 0xda, 0x71, 0x7e, 0x68, 0x5e, + 0x68, 0xb0, 0x72, 0x18, 0xd2, 0x8a, 0x50, 0x6f, 0xa1, 0x5b, 0x0c, 0x65, 0xfb, 0xae, 0x88, 0x76, + 0x7f, 0xfb, 0xd5, 0x74, 0xd2, 0xef, 0x14, 0x90, 0xbb, 0xcf, 0xfe, 0x99, 0xf4, 0x9f, 0xa8, 0xcb, + 0x73, 0x9d, 0x24, 0x38, 0x75, 0x4e, 0x1d, 0x22, 0xae, 0x51, 0xe6, 0x93, 0xfe, 0x89, 0x4e, 0xbd, + 0x01, 0x3b, 0x8f, 0x70, 0x6c, 0x95, 0xac, 0x51, 0xa7, 0x90, 0xd0, 0xae, 0x6b, 0x12, 0xe8, 0xa4, + 0xa5, 0x1f, 0x90, 0xc8, 0x1f, 0xe9, 0xaf, 0xa1, 0xc9, 0xf8, 0x22, 0x4f, 0xe1, 0xe9, 0x74, 0xd2, + 0x6f, 0x88, 0x43, 0x11, 0xfc, 0xbb, 0x1b, 0x05, 0x57, 0x76, 0xa8, 0x21, 0x9c, 0xee, 0xba, 0xe6, + 0x1b, 0xe8, 0xe6, 0x57, 0x70, 0x37, 0x21, 0x5f, 0xc3, 0x52, 0x5a, 0xe3, 0x0b, 0xe2, 0xed, 0x8b, + 0x3e, 0xd1, 0x77, 0x01, 0xf2, 0xae, 0x51, 0x4f, 0xfb, 0xd9, 0xcc, 0xd3, 0x66, 0xf8, 0x99, 0x87, + 0x6d, 0x8d, 0xd3, 0x23, 0xf3, 0x17, 0x0d, 0x96, 0xf3, 0x9a, 0xf2, 0x10, 0x04, 0x3a, 0x85, 0xc6, + 0xcc, 0x8a, 0xfb, 0x71, 0x3a, 0xe9, 0xb7, 0x33, 0x94, 0x28, 0x70, 0xf3, 0x46, 0x05, 0x16, 0x6c, + 0x51, 0x3b, 0x4b, 0x63, 0xd7, 0x35, 0x7f, 0x86, 0xee, 0x61, 0xe4, 0x3a, 0x0c, 0xdf, 0x4a, 0x99, + 0x7f, 0x68, 0x50, 0x47, 0xe2, 0xeb, 0xfa, 0x68, 0xed, 0xaa, 0xef, 0x43, 0x37, 0x09, 0x47, 0x24, + 0x08, 0x7c, 0xa6, 0xbe, 0x74, 0xe3, 0xde, 0xfa, 0x7c, 0xb1, 0x26, 0xce, 0x07, 0x79, 0x3d, 0x87, + 0x0a, 0x2c, 0xf3, 0x16, 0x35, 0xcd, 0xa1, 0xc5, 0xa4, 0xb4, 0x6b, 0x6e, 0x42, 0x43, 0xae, 0x62, + 0x7d, 0x03, 0x1a, 0xd2, 0x6d, 0x6c, 0x68, 0xc2, 0xef, 0xb2, 0x55, 0xa0, 0x35, 0x4b, 0xc2, 0x50, + 0x8a, 0x31, 0x7f, 0xd7, 0xa0, 0xfe, 0x54, 0xb8, 0xd2, 0x8f, 0xa0, 0x51, 0xbc, 0x8a, 0xda, 0xf6, + 0xd6, 0x74, 0xd2, 0xaf, 0x67, 0x77, 0xf0, 0xf8, 0x46, 0x77, 0xa0, 0x8a, 0xaf, 0x87, 0xb2, 0xea, + 0x1d, 0x58, 0x18, 0x51, 0xec, 0x30, 0xec, 0xda, 0x9c, 0x47, 0x8d, 0x7b, 0xe2, 0x19, 0xd7, 0x2c, + 0x49, 0xb2, 0x56, 0x4a, 0x9d, 0xd6, 0x41, 0x4a, 0xb2, 0xdb, 0x4d, 0x5e, 0xe8, 0xc5, 0x5f, 0x9c, + 0x80, 0x94, 0x25, 0x3f, 0x33, 0xdf, 0x42, 0x6d, 0x1f, 0x3b, 0xe3, 0xbb, 0xef, 0xcc, 0x73, 0xa8, + 0x1f, 0x86, 0xf1, 0x47, 0x09, 0xfd, 0xab, 0x06, 0x8d, 0x2d, 0xd7, 0xfd, 0x09, 0x63, 0x7a, 0xab, + 0x8f, 0xf4, 0x00, 0xe6, 0x13, 0x3a, 0x16, 0x6f, 0xd3, 0x42, 0x7c, 0xa9, 0x7f, 0x0a, 0xe0, 0xc7, + 0xf6, 0x18, 0x3b, 0x34, 0xc4, 0xd4, 0x98, 0x5f, 0xd7, 0x1e, 0x35, 0x51, 0xcb, 0x8f, 0x5f, 0xc8, + 0x0d, 0xf3, 0x04, 0x00, 0xe1, 0x80, 0x9c, 0xe1, 0xdb, 0x4e, 0xcd, 0x7c, 0x07, 0xcd, 0xe7, 0xa1, + 0x1b, 0x11, 0x3f, 0x64, 0x77, 0x7b, 0x05, 0x26, 0xe6, 0xca, 0x3a, 0x22, 0x67, 0x5c, 0xed, 0x1c, + 0x86, 0x5f, 0x3a, 0xa3, 0x13, 0x3f, 0xc4, 0xfa, 0x1e, 0x74, 0x62, 0xfe, 0xdb, 0x0e, 0xe4, 0x86, + 0x22, 0xa6, 0x2f, 0x4b, 0x1f, 0xdb, 0x4b, 0xa5, 0xd7, 0x28, 0x93, 0xeb, 0x9c, 0x9d, 0xd0, 0x42, + 0x5c, 0xf0, 0x67, 0xfe, 0xd6, 0x82, 0x16, 0x72, 0x8e, 0xd9, 0x73, 0x3e, 0x8f, 0xf0, 0x7b, 0x97, + 0x25, 0x86, 0x2e, 0x7e, 0x27, 0xab, 0x44, 0x2d, 0x91, 0x22, 0xdf, 0xd0, 0x1f, 0x42, 0x87, 0xe2, + 0x37, 0x09, 0x8e, 0x99, 0x42, 0xdc, 0x13, 0x88, 0x05, 0xb5, 0x99, 0x81, 0x9c, 0x28, 0x1a, 0xfb, + 0xd8, 0x55, 0xa0, 0x79, 0x09, 0x52, 0x9b, 0x12, 0xf4, 0x3d, 0x67, 0x0b, 0x61, 0x64, 0xd4, 0x44, + 0x01, 0xbd, 0x32, 0x5b, 0xa4, 0x19, 0x59, 0x48, 0xa2, 0x14, 0xff, 0xa4, 0x46, 0x6b, 0x7f, 0x37, + 0x38, 0xf3, 0x88, 0xb5, 0x7e, 0x00, 0x2b, 0xa9, 0x82, 0xd8, 0x15, 0x53, 0xc7, 0xfa, 0x35, 0x1e, + 0x9a, 0x19, 0x21, 0xd0, 0x72, 0xd5, 0x5c, 0x71, 0x04, 0xab, 0x49, 0x58, 0xed, 0x57, 0x92, 0x88, + 0x59, 0xf2, 0x5b, 0x39, 0x9c, 0xa0, 0x95, 0xa4, 0x72, 0x66, 0xd9, 0x83, 0x2c, 0xa4, 0x5d, 0xd0, + 0x98, 0xf9, 0xaa, 0x9b, 0xb8, 0xae, 0x8d, 0x68, 0x69, 0x56, 0x2e, 0x0f, 0xa0, 0x10, 0xa8, 0xe8, + 0xb1, 0x56, 0x71, 0x03, 0x15, 0x7a, 0x8b, 0x96, 0x93, 0x0a, 0x11, 0xfe, 0x01, 0x96, 0x12, 0xa1, + 0x89, 0x45, 0x8f, 0xf7, 0x85, 0xc7, 0x4f, 0xca, 0x1e, 0xcb, 0xca, 0x89, 0xba, 0xc9, 0x35, 0x29, + 0xfd, 0x1a, 0xea, 0x4a, 0x72, 0xea, 0xc2, 0xfc, 0x7f, 0x15, 0xd2, 0x10, 0x23, 0x85, 0xd1, 0xbf, + 0x82, 0xba, 0x14, 0x19, 0xa3, 0x21, 0xd0, 0x65, 0x21, 0x91, 0xa2, 0x81, 0x14, 0x44, 0xff, 0x1c, + 0x6a, 0x9c, 0x1c, 0x8d, 0xa6, 0x80, 0x2e, 0x95, 0xa0, 0x9c, 0xb0, 0x91, 0x38, 0xe6, 0x3e, 0x13, + 0xc1, 0xa2, 0x46, 0xab, 0xc2, 0xa7, 0x24, 0x58, 0xa4, 0x20, 0xfa, 0x00, 0x9a, 0x8e, 0xeb, 0xda, + 0x11, 0xc6, 0xd4, 0x80, 0x8a, 0x84, 0x15, 0x27, 0xa2, 0x86, 0xa3, 0xc8, 0x71, 0x13, 0xda, 0x54, + 0xf0, 0x91, 0xb4, 0x69, 0x0b, 0x9b, 0xd5, 0x6b, 0x45, 0xa6, 0x7c, 0x85, 0x80, 0xe6, 0xdc, 0xf5, + 0x0d, 0x34, 0xb1, 0xe2, 0x17, 0x63, 0x41, 0x98, 0xad, 0x94, 0xcc, 0x52, 0xf2, 0x41, 0x19, 0x4c, + 0xb6, 0xbb, 0x20, 0x06, 0xbb, 0xcc, 0x04, 0x9d, 0xca, 0x76, 0x9f, 0xa1, 0x10, 0xde, 0xee, 0xb3, + 0xbc, 0xb2, 0x05, 0x8b, 0x59, 0x03, 0x89, 0xe9, 0xcf, 0x58, 0x54, 0x52, 0x59, 0xd5, 0x8d, 0x62, + 0x50, 0x44, 0x9d, 0xf2, 0x30, 0xba, 0x03, 0x0f, 0x0a, 0x5d, 0x28, 0x9d, 0x74, 0xab, 0xda, 0xa5, + 0x3c, 0xc4, 0xa2, 0x6e, 0x52, 0xde, 0x78, 0x52, 0xbb, 0x7c, 0xdf, 0xd7, 0xb6, 0x77, 0x2e, 0xa7, + 0x3d, 0xed, 0xc3, 0xb4, 0xa7, 0x5d, 0x5c, 0xf5, 0xe6, 0xde, 0x5f, 0xf5, 0xb4, 0x0f, 0x57, 0xbd, + 0xb9, 0x3f, 0xaf, 0x7a, 0x73, 0x47, 0x1b, 0xff, 0x85, 0x69, 0xb3, 0xff, 0xb7, 0x86, 0x75, 0xb1, + 0x7e, 0xfc, 0x6f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x50, 0x50, 0x97, 0x6e, 0x84, 0x0d, 0x00, 0x00, } func (m *RegisterStorageNode) Marshal() (dAtA []byte, err error) { @@ -1052,10 +1116,6 @@ func (m *RegisterStorageNode) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.StorageNode != nil { { size, err := m.StorageNode.MarshalToSizedBuffer(dAtA[:i]) @@ -1091,10 +1151,6 @@ func (m *UnregisterStorageNode) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.StorageNodeID != 0 { i = encodeVarintRaftEntry(dAtA, i, uint64(m.StorageNodeID)) i-- @@ -1103,6 +1159,62 @@ func (m *UnregisterStorageNode) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *RegisterTopic) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *RegisterTopic) MarshalTo(dAtA []byte) (int, error) { + size := m.ProtoSize() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *RegisterTopic) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.TopicID != 0 { + i = encodeVarintRaftEntry(dAtA, i, uint64(m.TopicID)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *UnregisterTopic) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *UnregisterTopic) MarshalTo(dAtA []byte) (int, error) { + size := m.ProtoSize() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UnregisterTopic) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.TopicID != 0 { + i = encodeVarintRaftEntry(dAtA, i, uint64(m.TopicID)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + func (m *RegisterLogStream) Marshal() (dAtA []byte, err error) { size := m.ProtoSize() dAtA = make([]byte, size) @@ -1123,10 +1235,6 @@ func (m *RegisterLogStream) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStream != nil { { size, err := m.LogStream.MarshalToSizedBuffer(dAtA[:i]) @@ -1162,10 +1270,6 @@ func (m *UnregisterLogStream) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintRaftEntry(dAtA, i, uint64(m.LogStreamID)) i-- @@ -1194,10 +1298,6 @@ func (m *UpdateLogStream) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStream != nil { { size, err := m.LogStream.MarshalToSizedBuffer(dAtA[:i]) @@ -1233,10 +1333,6 @@ func (m *Report) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.UncommitReport) > 0 { for iNdEx := len(m.UncommitReport) - 1; iNdEx >= 0; iNdEx-- { { @@ -1279,10 +1375,6 @@ func (m *Reports) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Reports) > 0 { for iNdEx := len(m.Reports) - 1; iNdEx >= 0; iNdEx-- { { @@ -1320,10 +1412,6 @@ func (m *Commit) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } n4, err4 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedTime):]) if err4 != nil { return 0, err4 @@ -1360,10 +1448,6 @@ func (m *Seal) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintRaftEntry(dAtA, i, uint64(m.LogStreamID)) i-- @@ -1392,10 +1476,6 @@ func (m *Unseal) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintRaftEntry(dAtA, i, uint64(m.LogStreamID)) i-- @@ -1424,10 +1504,6 @@ func (m *AddPeer) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.IsLearner { i-- if m.IsLearner { @@ -1473,10 +1549,6 @@ func (m *RemovePeer) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.NodeID != 0 { i = encodeVarintRaftEntry(dAtA, i, uint64(m.NodeID)) i-- @@ -1505,10 +1577,6 @@ func (m *Endpoint) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Url) > 0 { i -= len(m.Url) copy(dAtA[i:], m.Url) @@ -1544,10 +1612,6 @@ func (m *RecoverStateMachine) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.StateMachine != nil { { size, err := m.StateMachine.MarshalToSizedBuffer(dAtA[:i]) @@ -1583,10 +1647,6 @@ func (m *RaftEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } { size, err := m.Request.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -1635,9 +1695,29 @@ func (m *RaftEntry_Request) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) + if m.UnregisterTopic != nil { + { + size, err := m.UnregisterTopic.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintRaftEntry(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x7a + } + if m.RegisterTopic != nil { + { + size, err := m.RegisterTopic.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintRaftEntry(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x72 } if m.RecoverStateMachine != nil { { @@ -1819,9 +1899,6 @@ func (m *RegisterStorageNode) ProtoSize() (n int) { l = m.StorageNode.ProtoSize() n += 1 + l + sovRaftEntry(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1834,8 +1911,29 @@ func (m *UnregisterStorageNode) ProtoSize() (n int) { if m.StorageNodeID != 0 { n += 1 + sovRaftEntry(uint64(m.StorageNodeID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + return n +} + +func (m *RegisterTopic) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.TopicID != 0 { + n += 1 + sovRaftEntry(uint64(m.TopicID)) + } + return n +} + +func (m *UnregisterTopic) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.TopicID != 0 { + n += 1 + sovRaftEntry(uint64(m.TopicID)) } return n } @@ -1850,9 +1948,6 @@ func (m *RegisterLogStream) ProtoSize() (n int) { l = m.LogStream.ProtoSize() n += 1 + l + sovRaftEntry(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1865,9 +1960,6 @@ func (m *UnregisterLogStream) ProtoSize() (n int) { if m.LogStreamID != 0 { n += 1 + sovRaftEntry(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1881,9 +1973,6 @@ func (m *UpdateLogStream) ProtoSize() (n int) { l = m.LogStream.ProtoSize() n += 1 + l + sovRaftEntry(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1902,9 +1991,6 @@ func (m *Report) ProtoSize() (n int) { n += 1 + l + sovRaftEntry(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1920,9 +2006,6 @@ func (m *Reports) ProtoSize() (n int) { n += 1 + l + sovRaftEntry(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1937,9 +2020,6 @@ func (m *Commit) ProtoSize() (n int) { } l = github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedTime) n += 1 + l + sovRaftEntry(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1952,9 +2032,6 @@ func (m *Seal) ProtoSize() (n int) { if m.LogStreamID != 0 { n += 1 + sovRaftEntry(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1967,9 +2044,6 @@ func (m *Unseal) ProtoSize() (n int) { if m.LogStreamID != 0 { n += 1 + sovRaftEntry(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1989,9 +2063,6 @@ func (m *AddPeer) ProtoSize() (n int) { if m.IsLearner { n += 2 } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2004,9 +2075,6 @@ func (m *RemovePeer) ProtoSize() (n int) { if m.NodeID != 0 { n += 1 + sovRaftEntry(uint64(m.NodeID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2023,9 +2091,6 @@ func (m *Endpoint) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovRaftEntry(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2039,9 +2104,6 @@ func (m *RecoverStateMachine) ProtoSize() (n int) { l = m.StateMachine.ProtoSize() n += 1 + l + sovRaftEntry(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2062,9 +2124,6 @@ func (m *RaftEntry) ProtoSize() (n int) { } l = m.Request.ProtoSize() n += 1 + l + sovRaftEntry(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2126,8 +2185,13 @@ func (m *RaftEntry_Request) ProtoSize() (n int) { l = m.RecoverStateMachine.ProtoSize() n += 1 + l + sovRaftEntry(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.RegisterTopic != nil { + l = m.RegisterTopic.ProtoSize() + n += 1 + l + sovRaftEntry(uint64(l)) + } + if m.UnregisterTopic != nil { + l = m.UnregisterTopic.ProtoSize() + n += 1 + l + sovRaftEntry(uint64(l)) } return n } @@ -2178,6 +2242,12 @@ func (this *RaftEntry_Request) GetValue() interface{} { if this.RecoverStateMachine != nil { return this.RecoverStateMachine } + if this.RegisterTopic != nil { + return this.RegisterTopic + } + if this.UnregisterTopic != nil { + return this.UnregisterTopic + } return nil } @@ -2209,6 +2279,10 @@ func (this *RaftEntry_Request) SetValue(value interface{}) bool { this.Endpoint = vt case *RecoverStateMachine: this.RecoverStateMachine = vt + case *RegisterTopic: + this.RegisterTopic = vt + case *UnregisterTopic: + this.UnregisterTopic = vt default: return false } @@ -2291,7 +2365,6 @@ func (m *RegisterStorageNode) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2361,7 +2434,144 @@ func (m *UnregisterStorageNode) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RegisterTopic) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRaftEntry + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RegisterTopic: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RegisterTopic: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRaftEntry + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipRaftEntry(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthRaftEntry + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UnregisterTopic) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRaftEntry + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UnregisterTopic: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UnregisterTopic: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRaftEntry + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipRaftEntry(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthRaftEntry + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } iNdEx += skippy } } @@ -2448,7 +2658,6 @@ func (m *RegisterLogStream) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2518,7 +2727,6 @@ func (m *UnregisterLogStream) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2605,7 +2813,6 @@ func (m *UpdateLogStream) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2709,7 +2916,6 @@ func (m *Report) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2794,7 +3000,6 @@ func (m *Reports) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2897,7 +3102,6 @@ func (m *Commit) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2967,7 +3171,6 @@ func (m *Seal) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3037,7 +3240,6 @@ func (m *Unseal) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3159,7 +3361,6 @@ func (m *AddPeer) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3229,7 +3430,6 @@ func (m *RemovePeer) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3331,7 +3531,6 @@ func (m *Endpoint) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3418,7 +3617,6 @@ func (m *RecoverStateMachine) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3559,7 +3757,6 @@ func (m *RaftEntry) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -4066,6 +4263,78 @@ func (m *RaftEntry_Request) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 14: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RegisterTopic", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRaftEntry + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthRaftEntry + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthRaftEntry + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.RegisterTopic == nil { + m.RegisterTopic = &RegisterTopic{} + } + if err := m.RegisterTopic.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 15: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UnregisterTopic", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRaftEntry + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthRaftEntry + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthRaftEntry + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.UnregisterTopic == nil { + m.UnregisterTopic = &UnregisterTopic{} + } + if err := m.UnregisterTopic.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipRaftEntry(dAtA[iNdEx:]) @@ -4078,7 +4347,6 @@ func (m *RaftEntry_Request) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/mrpb/raft_entry.proto b/proto/mrpb/raft_entry.proto index efdb82280..5b88858df 100644 --- a/proto/mrpb/raft_entry.proto +++ b/proto/mrpb/raft_entry.proto @@ -14,6 +14,9 @@ option go_package = "github.com/kakao/varlog/proto/mrpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; message RegisterStorageNode { varlogpb.StorageNodeDescriptor storage_node = 1 @@ -21,19 +24,35 @@ message RegisterStorageNode { } message UnregisterStorageNode { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" ]; } +message RegisterTopic { + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; +} + +message UnregisterTopic { + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; +} + message RegisterLogStream { varlogpb.LogStreamDescriptor log_stream = 1 [(gogoproto.nullable) = true]; } message UnregisterLogStream { - uint32 log_stream_id = 1 [ + int32 log_stream_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -45,7 +64,7 @@ message UpdateLogStream { } message Report { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" @@ -69,7 +88,7 @@ message Commit { } message Seal { - uint32 log_stream_id = 1 [ + int32 log_stream_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -77,7 +96,7 @@ message Seal { } message Unseal { - uint32 log_stream_id = 1 [ + int32 log_stream_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -132,6 +151,8 @@ message RaftEntry { RemovePeer remove_peer = 11; Endpoint endpoint = 12; RecoverStateMachine recover_state_machine = 13; + RegisterTopic register_topic = 14; + UnregisterTopic unregister_topic = 15; } uint64 node_index = 1; uint64 request_index = 2; diff --git a/proto/mrpb/raft_metadata_repository.go b/proto/mrpb/raft_metadata_repository.go index fc8d4e189..413087fc8 100644 --- a/proto/mrpb/raft_metadata_repository.go +++ b/proto/mrpb/raft_metadata_repository.go @@ -8,24 +8,12 @@ import ( "github.com/kakao/varlog/proto/varlogpb" ) -func (s *MetadataRepositoryDescriptor) LookupCommitResultsByPrev(glsn types.GLSN) *LogStreamCommitResults { +func (s *MetadataRepositoryDescriptor) LookupCommitResults(ver types.Version) *LogStreamCommitResults { i := sort.Search(len(s.LogStream.CommitHistory), func(i int) bool { - return s.LogStream.CommitHistory[i].PrevHighWatermark >= glsn + return s.LogStream.CommitHistory[i].Version >= ver }) - if i < len(s.LogStream.CommitHistory) && s.LogStream.CommitHistory[i].PrevHighWatermark == glsn { - return s.LogStream.CommitHistory[i] - } - - return nil -} - -func (s *MetadataRepositoryDescriptor) LookupCommitResults(glsn types.GLSN) *LogStreamCommitResults { - i := sort.Search(len(s.LogStream.CommitHistory), func(i int) bool { - return s.LogStream.CommitHistory[i].HighWatermark >= glsn - }) - - if i < len(s.LogStream.CommitHistory) && s.LogStream.CommitHistory[i].HighWatermark == glsn { + if i < len(s.LogStream.CommitHistory) && s.LogStream.CommitHistory[i].Version == ver { return s.LogStream.CommitHistory[i] } @@ -50,7 +38,7 @@ func (s *MetadataRepositoryDescriptor) GetFirstCommitResults() *LogStreamCommitR return s.LogStream.CommitHistory[0] } -func (crs *LogStreamCommitResults) LookupCommitResult(lsID types.LogStreamID, hintPos int) (snpb.LogStreamCommitResult, int, bool) { +func (crs *LogStreamCommitResults) LookupCommitResult(topicID types.TopicID, lsID types.LogStreamID, hintPos int) (snpb.LogStreamCommitResult, int, bool) { if crs == nil { return snpb.InvalidLogStreamCommitResult, -1, false } @@ -61,7 +49,11 @@ func (crs *LogStreamCommitResults) LookupCommitResult(lsID types.LogStreamID, hi } i := sort.Search(len(crs.CommitResults), func(i int) bool { - return crs.CommitResults[i].LogStreamID >= lsID + if crs.CommitResults[i].TopicID == topicID { + return crs.CommitResults[i].LogStreamID >= lsID + } + + return crs.CommitResults[i].TopicID >= topicID }) if i < len(crs.CommitResults) && crs.CommitResults[i].LogStreamID == lsID { @@ -110,3 +102,34 @@ func (l *StorageNodeUncommitReport) LookupReport(lsID types.LogStreamID) (snpb.L } return snpb.InvalidLogStreamUncommitReport, false } + +// return HighWatermark of the topic +// TODO:: lookup last logStream of the topic +func (crs *LogStreamCommitResults) LastHighWatermark(topicID types.TopicID, hintPos int) (types.GLSN, int) { + if crs == nil { + return types.InvalidGLSN, -1 + } + + n := len(crs.GetCommitResults()) + if n == 0 { + return types.InvalidGLSN, -1 + } + + cr, ok := crs.getCommitResultByIdx(hintPos) + if ok && cr.TopicID == topicID { + nxt, ok := crs.getCommitResultByIdx(hintPos + 1) + if !ok || nxt.TopicID != topicID { + return cr.GetHighWatermark(), hintPos + } + } + + i := sort.Search(len(crs.CommitResults), func(i int) bool { + return crs.CommitResults[i].TopicID >= topicID+1 + }) + + if i > 0 { + return crs.GetCommitResults()[i-1].GetHighWatermark(), i - 1 + } + + return types.InvalidGLSN, -1 +} diff --git a/proto/mrpb/raft_metadata_repository.pb.go b/proto/mrpb/raft_metadata_repository.pb.go index 9ecb94936..962205323 100644 --- a/proto/mrpb/raft_metadata_repository.pb.go +++ b/proto/mrpb/raft_metadata_repository.pb.go @@ -4,7 +4,6 @@ package mrpb import ( - bytes "bytes" fmt "fmt" io "io" math "math" @@ -30,12 +29,8 @@ var _ = math.Inf const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package type LogStreamCommitResults struct { - HighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=high_watermark,json=highWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"high_watermark,omitempty"` - PrevHighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=prev_high_watermark,json=prevHighWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"prev_high_watermark,omitempty"` - CommitResults []snpb.LogStreamCommitResult `protobuf:"bytes,3,rep,name=commit_results,json=commitResults,proto3" json:"commit_results"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Version github_daumkakao_com_varlog_varlog_pkg_types.Version `protobuf:"varint,1,opt,name=version,proto3,casttype=github.com/kakao/varlog/pkg/types.Version" json:"version,omitempty"` + CommitResults []snpb.LogStreamCommitResult `protobuf:"bytes,3,rep,name=commit_results,json=commitResults,proto3" json:"commit_results"` } func (m *LogStreamCommitResults) Reset() { *m = LogStreamCommitResults{} } @@ -71,16 +66,9 @@ func (m *LogStreamCommitResults) XXX_DiscardUnknown() { var xxx_messageInfo_LogStreamCommitResults proto.InternalMessageInfo -func (m *LogStreamCommitResults) GetHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { +func (m *LogStreamCommitResults) GetVersion() github_daumkakao_com_varlog_varlog_pkg_types.Version { if m != nil { - return m.HighWatermark - } - return 0 -} - -func (m *LogStreamCommitResults) GetPrevHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { - if m != nil { - return m.PrevHighWatermark + return m.Version } return 0 } @@ -93,11 +81,8 @@ func (m *LogStreamCommitResults) GetCommitResults() []snpb.LogStreamCommitResult } type StorageNodeUncommitReport struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - UncommitReports []snpb.LogStreamUncommitReport `protobuf:"bytes,2,rep,name=uncommit_reports,json=uncommitReports,proto3" json:"uncommit_reports"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + UncommitReports []snpb.LogStreamUncommitReport `protobuf:"bytes,2,rep,name=uncommit_reports,json=uncommitReports,proto3" json:"uncommit_reports"` } func (m *StorageNodeUncommitReport) Reset() { *m = StorageNodeUncommitReport{} } @@ -148,11 +133,8 @@ func (m *StorageNodeUncommitReport) GetUncommitReports() []snpb.LogStreamUncommi } type LogStreamUncommitReports struct { - Replicas map[github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID]snpb.LogStreamUncommitReport `protobuf:"bytes,1,rep,name=replicas,proto3,castkey=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"replicas" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` - Status varlogpb.LogStreamStatus `protobuf:"varint,2,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Replicas map[github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID]snpb.LogStreamUncommitReport `protobuf:"bytes,1,rep,name=replicas,proto3,castkey=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"replicas" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + Status varlogpb.LogStreamStatus `protobuf:"varint,2,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` } func (m *LogStreamUncommitReports) Reset() { *m = LogStreamUncommitReports{} } @@ -203,13 +185,10 @@ func (m *LogStreamUncommitReports) GetStatus() varlogpb.LogStreamStatus { } type MetadataRepositoryDescriptor struct { - Metadata *varlogpb.MetadataDescriptor `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` - LogStream *MetadataRepositoryDescriptor_LogStreamDescriptor `protobuf:"bytes,2,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` - PeersMap MetadataRepositoryDescriptor_PeerDescriptorMap `protobuf:"bytes,3,opt,name=peers_map,json=peersMap,proto3" json:"peers_map"` - Endpoints map[github_daumkakao_com_varlog_varlog_pkg_types.NodeID]string `protobuf:"bytes,4,rep,name=endpoints,proto3,castkey=github.com/kakao/varlog/pkg/types.NodeID" json:"endpoints,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Metadata *varlogpb.MetadataDescriptor `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` + LogStream *MetadataRepositoryDescriptor_LogStreamDescriptor `protobuf:"bytes,2,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` + PeersMap MetadataRepositoryDescriptor_PeerDescriptorMap `protobuf:"bytes,3,opt,name=peers_map,json=peersMap,proto3" json:"peers_map"` + Endpoints map[github_daumkakao_com_varlog_varlog_pkg_types.NodeID]string `protobuf:"bytes,4,rep,name=endpoints,proto3,castkey=github.com/kakao/varlog/pkg/types.NodeID" json:"endpoints,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` } func (m *MetadataRepositoryDescriptor) Reset() { *m = MetadataRepositoryDescriptor{} } @@ -274,12 +253,9 @@ func (m *MetadataRepositoryDescriptor) GetEndpoints() map[github_daumkakao_com_v } type MetadataRepositoryDescriptor_LogStreamDescriptor struct { - TrimGLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=trim_glsn,json=trimGlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"trim_glsn,omitempty"` - CommitHistory []*LogStreamCommitResults `protobuf:"bytes,2,rep,name=commit_history,json=commitHistory,proto3" json:"commit_history,omitempty"` - UncommitReports map[github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID]*LogStreamUncommitReports `protobuf:"bytes,3,rep,name=uncommit_reports,json=uncommitReports,proto3,castkey=github.com/kakao/varlog/pkg/types.LogStreamID" json:"uncommit_reports,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + TrimVersion github_daumkakao_com_varlog_varlog_pkg_types.Version `protobuf:"varint,1,opt,name=trim_version,json=trimVersion,proto3,casttype=github.com/kakao/varlog/pkg/types.Version" json:"trim_version,omitempty"` + CommitHistory []*LogStreamCommitResults `protobuf:"bytes,2,rep,name=commit_history,json=commitHistory,proto3" json:"commit_history,omitempty"` + UncommitReports map[github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID]*LogStreamUncommitReports `protobuf:"bytes,3,rep,name=uncommit_reports,json=uncommitReports,proto3,castkey=github.com/kakao/varlog/pkg/types.LogStreamID" json:"uncommit_reports,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` } func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) Reset() { @@ -319,9 +295,9 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) XXX_DiscardUnknown() var xxx_messageInfo_MetadataRepositoryDescriptor_LogStreamDescriptor proto.InternalMessageInfo -func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) GetTrimGLSN() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { +func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) GetTrimVersion() github_daumkakao_com_varlog_varlog_pkg_types.Version { if m != nil { - return m.TrimGLSN + return m.TrimVersion } return 0 } @@ -341,11 +317,8 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) GetUncommitReports() } type MetadataRepositoryDescriptor_PeerDescriptor struct { - URL string `protobuf:"bytes,1,opt,name=url,proto3" json:"url,omitempty"` - IsLearner bool `protobuf:"varint,2,opt,name=is_learner,json=isLearner,proto3" json:"is_learner,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + URL string `protobuf:"bytes,1,opt,name=url,proto3" json:"url,omitempty"` + IsLearner bool `protobuf:"varint,2,opt,name=is_learner,json=isLearner,proto3" json:"is_learner,omitempty"` } func (m *MetadataRepositoryDescriptor_PeerDescriptor) Reset() { @@ -404,10 +377,7 @@ type MetadataRepositoryDescriptor_PeerDescriptorMap struct { // applied_index is the AppliedIndex of RAFT that is updated by changing // configuration of members. For example, AddPeer and RemovePeer result // in increasing applied_index. - AppliedIndex uint64 `protobuf:"varint,2,opt,name=applied_index,json=appliedIndex,proto3" json:"applied_index,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + AppliedIndex uint64 `protobuf:"varint,2,opt,name=applied_index,json=appliedIndex,proto3" json:"applied_index,omitempty"` } func (m *MetadataRepositoryDescriptor_PeerDescriptorMap) Reset() { @@ -480,64 +450,62 @@ func init() { } var fileDescriptor_60447af781d89487 = []byte{ - // 903 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x56, 0xcf, 0x6f, 0x1b, 0x45, - 0x14, 0x66, 0x6d, 0xb7, 0xd8, 0x2f, 0xd8, 0x6d, 0xa7, 0x15, 0xb8, 0x16, 0x64, 0xa3, 0x04, 0xa4, - 0x72, 0xe8, 0x5a, 0x24, 0x42, 0x58, 0x49, 0x39, 0x60, 0x12, 0xda, 0xa0, 0x24, 0xad, 0x36, 0x44, - 0x48, 0x08, 0xb1, 0x1a, 0x7b, 0xa7, 0x9b, 0x95, 0xf7, 0xc7, 0x68, 0x66, 0x36, 0xc5, 0xd7, 0xfc, - 0x03, 0x1c, 0x10, 0xf7, 0xfe, 0x11, 0xdc, 0xb8, 0x72, 0xe8, 0x11, 0x89, 0xbb, 0x23, 0x99, 0x0b, - 0x12, 0x17, 0xce, 0xbd, 0x80, 0x76, 0x66, 0xd6, 0xde, 0xad, 0xdd, 0xca, 0x89, 0x4f, 0xd9, 0xf9, - 0xf1, 0xde, 0xf7, 0xde, 0x37, 0xdf, 0xf7, 0x62, 0xf8, 0x98, 0xb2, 0x58, 0xc4, 0xed, 0x90, 0xd1, - 0x5e, 0x9b, 0xe1, 0xa7, 0xc2, 0x09, 0x89, 0xc0, 0x2e, 0x16, 0xd8, 0x61, 0x84, 0xc6, 0xdc, 0x17, - 0x31, 0x1b, 0x5a, 0xf2, 0x0e, 0x5a, 0x39, 0xc3, 0x2c, 0x88, 0x3d, 0x2b, 0xbd, 0xdb, 0xba, 0xef, - 0xf9, 0xe2, 0x34, 0xe9, 0x59, 0xfd, 0x38, 0x6c, 0x7b, 0xb1, 0x17, 0xb7, 0xe5, 0x9d, 0x5e, 0xf2, - 0x54, 0xae, 0x54, 0xd2, 0xf4, 0x4b, 0xc5, 0xb6, 0xde, 0x53, 0xb1, 0xb4, 0xd7, 0xce, 0xf2, 0xeb, - 0x83, 0x55, 0x1e, 0xd1, 0x5e, 0x3b, 0x88, 0x3d, 0x87, 0x0b, 0x46, 0x70, 0x28, 0x61, 0x99, 0x20, - 0x4c, 0x9d, 0xaf, 0xff, 0x5a, 0x82, 0x77, 0x0f, 0x62, 0xef, 0x58, 0x1e, 0x7e, 0x19, 0x87, 0xa1, - 0x2f, 0x6c, 0xc2, 0x93, 0x40, 0x70, 0xf4, 0x3d, 0x34, 0x4e, 0x7d, 0xef, 0xd4, 0x79, 0x86, 0x05, - 0x61, 0x21, 0x66, 0x83, 0xa6, 0xb1, 0x66, 0xdc, 0xab, 0x74, 0x3f, 0x7d, 0x39, 0x32, 0x3f, 0xd1, - 0xe5, 0xb9, 0x38, 0x09, 0x07, 0x78, 0x80, 0x63, 0x59, 0xa8, 0x2a, 0x22, 0xfb, 0x43, 0x07, 0x5e, - 0x5b, 0x0c, 0x29, 0xe1, 0xd6, 0xc3, 0x83, 0xe3, 0x23, 0xbb, 0x9e, 0x26, 0xfb, 0x36, 0xcb, 0x85, - 0x08, 0xdc, 0xa6, 0x8c, 0x9c, 0x39, 0xaf, 0x40, 0x94, 0x96, 0x81, 0xb8, 0x95, 0x66, 0x7c, 0x54, - 0x80, 0x79, 0x0c, 0x8d, 0xbe, 0xec, 0xca, 0x61, 0xaa, 0xad, 0x66, 0x79, 0xad, 0x7c, 0x6f, 0x65, - 0x73, 0xdd, 0xd2, 0x6c, 0xa7, 0xfc, 0x58, 0x73, 0x19, 0xe8, 0x56, 0x5e, 0x8c, 0xcc, 0xb7, 0xec, - 0x7a, 0x3f, 0xcf, 0xca, 0x76, 0xe5, 0xef, 0xe7, 0xa6, 0xb1, 0xfe, 0x8f, 0x01, 0x77, 0x8f, 0x45, - 0xcc, 0xb0, 0x47, 0x8e, 0x62, 0x97, 0x9c, 0x44, 0xd9, 0xa5, 0x94, 0x5c, 0xf4, 0x0c, 0x6e, 0x70, - 0x75, 0xe8, 0x44, 0xb1, 0x4b, 0x1c, 0xdf, 0x95, 0xd4, 0xd5, 0xbb, 0x8f, 0xc7, 0x23, 0xb3, 0x9e, - 0x8b, 0xdb, 0xdf, 0x7d, 0x39, 0x32, 0xb7, 0x2f, 0xd5, 0x68, 0x21, 0xda, 0xae, 0xf3, 0xdc, 0xd2, - 0x45, 0x27, 0x70, 0x33, 0x89, 0x26, 0xfd, 0xa6, 0xb5, 0xf0, 0x66, 0x49, 0xf6, 0xfb, 0xe1, 0xfc, - 0x7e, 0x8b, 0x85, 0xeb, 0x8e, 0x6f, 0x24, 0x85, 0x5d, 0xbe, 0xfe, 0x67, 0x09, 0x9a, 0xaf, 0x09, - 0xe1, 0xe8, 0x67, 0x03, 0xaa, 0x8c, 0xd0, 0xc0, 0xef, 0x63, 0xde, 0x34, 0x24, 0xd8, 0x96, 0x95, - 0x93, 0xf2, 0xeb, 0xc0, 0xb8, 0x65, 0xeb, 0xa8, 0xbd, 0x48, 0xb0, 0x61, 0xb7, 0x9b, 0x62, 0x9f, - 0x5f, 0x2c, 0x45, 0xc7, 0xa4, 0x10, 0xd4, 0x81, 0xeb, 0x5c, 0x60, 0x91, 0x70, 0xa9, 0xa8, 0xc6, - 0xe6, 0x5a, 0x56, 0x52, 0x66, 0x94, 0x69, 0x59, 0xc7, 0xf2, 0x9e, 0xad, 0xef, 0xb7, 0x30, 0xd4, - 0x0b, 0x85, 0xa1, 0x9b, 0x50, 0x1e, 0x90, 0xa1, 0x7a, 0x41, 0x3b, 0xfd, 0x44, 0xdb, 0x70, 0xed, - 0x0c, 0x07, 0x09, 0x91, 0xb9, 0x17, 0xe4, 0xd6, 0x56, 0x21, 0xdb, 0xa5, 0x8e, 0xa1, 0x35, 0xf4, - 0x1f, 0xc0, 0xfb, 0x87, 0xda, 0xad, 0xf6, 0x64, 0x18, 0xec, 0x12, 0xde, 0x67, 0x3e, 0x15, 0x31, - 0x43, 0x7b, 0x50, 0xcd, 0xdc, 0x2c, 0xd1, 0x57, 0x36, 0x37, 0x66, 0xba, 0xc8, 0x12, 0x4c, 0xc3, - 0xe4, 0x23, 0x1a, 0xf6, 0x24, 0x14, 0xf5, 0x00, 0xa6, 0xfe, 0xd7, 0x25, 0x7f, 0x5e, 0x78, 0xa1, - 0x37, 0x55, 0x31, 0xed, 0x67, 0x06, 0xa2, 0x16, 0x64, 0x47, 0xe8, 0x07, 0xa8, 0x51, 0x42, 0x18, - 0x77, 0x42, 0x4c, 0x9b, 0x65, 0x09, 0xb1, 0xb3, 0x38, 0xc4, 0x13, 0x42, 0xd8, 0x74, 0x79, 0x88, - 0xa9, 0x16, 0x62, 0x55, 0xe6, 0x3c, 0xc4, 0x14, 0xfd, 0x64, 0x40, 0x8d, 0x44, 0x2e, 0x8d, 0xfd, - 0x48, 0xf0, 0x66, 0x45, 0xaa, 0xac, 0xb3, 0x38, 0xc0, 0x5e, 0x16, 0xaa, 0xa4, 0xf6, 0xd9, 0xf9, - 0x85, 0xb9, 0x75, 0x29, 0x99, 0x69, 0x7d, 0x4d, 0x6b, 0x68, 0xfd, 0x5b, 0x86, 0xdb, 0x73, 0xa8, - 0x49, 0x99, 0x10, 0xcc, 0x0f, 0x1d, 0x2f, 0xe0, 0x91, 0x1e, 0x98, 0x5f, 0x8c, 0x47, 0x66, 0xf5, - 0x1b, 0xe6, 0x87, 0xe9, 0x90, 0xba, 0xda, 0x64, 0xab, 0xa6, 0x39, 0x1f, 0x06, 0x3c, 0x42, 0x4f, - 0x26, 0x03, 0xed, 0xd4, 0x4f, 0xcd, 0x3f, 0xd4, 0x06, 0xdf, 0x98, 0xef, 0xb9, 0xc2, 0x48, 0xd7, - 0xef, 0xa6, 0x27, 0xda, 0x23, 0x15, 0x8f, 0x7e, 0x33, 0xe6, 0x4c, 0x0d, 0x35, 0x25, 0xed, 0xa5, - 0x64, 0x62, 0xbd, 0x62, 0x78, 0x45, 0xfe, 0x83, 0xf3, 0x0b, 0xb3, 0x73, 0x29, 0x06, 0x26, 0xa9, - 0xf7, 0x77, 0x67, 0x66, 0x53, 0xcb, 0x87, 0x3b, 0xf3, 0x60, 0xe6, 0xb8, 0x76, 0xa7, 0xe8, 0xda, - 0x8f, 0x16, 0x1a, 0x52, 0x39, 0xdb, 0xb6, 0xbe, 0x86, 0x46, 0x51, 0xa9, 0xe8, 0x2e, 0x94, 0x13, - 0x16, 0x48, 0x90, 0x5a, 0xf7, 0xed, 0xf1, 0xc8, 0x2c, 0x9f, 0xd8, 0x07, 0x76, 0xba, 0x87, 0x3e, - 0x00, 0xf0, 0xb9, 0x13, 0x10, 0xcc, 0x22, 0xc2, 0x24, 0x64, 0xd5, 0xae, 0xf9, 0xfc, 0x40, 0x6d, - 0xb4, 0x7e, 0x2f, 0xc1, 0xad, 0x19, 0xd9, 0xa3, 0x5f, 0x0c, 0xb8, 0x26, 0x35, 0xaf, 0x07, 0xe9, - 0x57, 0x4b, 0x78, 0x48, 0xee, 0x2c, 0x2b, 0x78, 0x55, 0x0d, 0xda, 0x80, 0x3a, 0xa6, 0x34, 0xf0, - 0x89, 0xeb, 0xf8, 0x91, 0x4b, 0x7e, 0x54, 0xff, 0xa6, 0xed, 0x77, 0xf4, 0xe6, 0x7e, 0xba, 0xd7, - 0x62, 0x00, 0x53, 0xc8, 0x3c, 0xff, 0x15, 0xc5, 0xff, 0x51, 0x91, 0xff, 0xce, 0x55, 0x7b, 0xcb, - 0x3f, 0xc9, 0x03, 0x68, 0x14, 0xbd, 0x3d, 0x07, 0xf7, 0x4e, 0x1e, 0xb7, 0x96, 0x8b, 0xee, 0xee, - 0xbc, 0x18, 0xaf, 0x1a, 0x7f, 0x8c, 0x57, 0x8d, 0xe7, 0x7f, 0xad, 0x1a, 0xdf, 0xdd, 0x5f, 0x84, - 0xa0, 0xc9, 0xaf, 0xb9, 0xde, 0x75, 0xf9, 0xbd, 0xf5, 0x7f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x56, - 0xa5, 0x42, 0x08, 0xe2, 0x09, 0x00, 0x00, + // 871 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x56, 0xcf, 0x6f, 0x1b, 0x45, + 0x14, 0xce, 0xc6, 0x4e, 0x1b, 0xbf, 0xd4, 0x69, 0x19, 0x2a, 0x70, 0x2d, 0xb0, 0xa3, 0x04, 0xa4, + 0x70, 0xe8, 0x5a, 0x4a, 0x90, 0xb0, 0xd2, 0x72, 0x31, 0x09, 0x10, 0x94, 0xb4, 0xd5, 0x44, 0xe1, + 0x00, 0x12, 0xab, 0xb1, 0x77, 0xea, 0xae, 0xb2, 0xbb, 0x33, 0x9a, 0x99, 0x0d, 0xf8, 0x9a, 0x7f, + 0x00, 0x24, 0xc4, 0xbd, 0xff, 0x07, 0x47, 0x38, 0xf4, 0x58, 0x89, 0x0b, 0x27, 0x47, 0x4a, 0x2e, + 0x48, 0x88, 0x7f, 0xa0, 0xe2, 0x80, 0x76, 0x66, 0xd6, 0xde, 0x25, 0x1b, 0xe4, 0x90, 0x9e, 0xbc, + 0xf3, 0xe3, 0xbd, 0xef, 0x7b, 0x6f, 0xbe, 0xef, 0xc9, 0xf0, 0x01, 0x17, 0x4c, 0xb1, 0x4e, 0x24, + 0x78, 0xbf, 0x23, 0xc8, 0x53, 0xe5, 0x45, 0x54, 0x11, 0x9f, 0x28, 0xe2, 0x09, 0xca, 0x99, 0x0c, + 0x14, 0x13, 0x23, 0x57, 0xdf, 0x41, 0x4b, 0xc7, 0x44, 0x84, 0x6c, 0xe8, 0xa6, 0x77, 0x9b, 0xf7, + 0x87, 0x81, 0x7a, 0x96, 0xf4, 0xdd, 0x01, 0x8b, 0x3a, 0x43, 0x36, 0x64, 0x1d, 0x7d, 0xa7, 0x9f, + 0x3c, 0xd5, 0x2b, 0x93, 0x34, 0xfd, 0x32, 0xb1, 0xcd, 0xb7, 0x4d, 0x2c, 0xef, 0x77, 0xb2, 0xfc, + 0xf6, 0xa0, 0x25, 0x63, 0xde, 0xef, 0x84, 0x6c, 0xe8, 0x49, 0x25, 0x28, 0x89, 0x34, 0xac, 0x50, + 0x54, 0x98, 0xf3, 0xd5, 0x5f, 0x1c, 0x78, 0x6b, 0x8f, 0x0d, 0x0f, 0xf4, 0xe1, 0x27, 0x2c, 0x8a, + 0x02, 0x85, 0xa9, 0x4c, 0x42, 0x25, 0x11, 0x86, 0x9b, 0xc7, 0x54, 0xc8, 0x80, 0xc5, 0x0d, 0x67, + 0xc5, 0x59, 0xaf, 0xf6, 0xba, 0xaf, 0xc6, 0xed, 0x0f, 0x2d, 0x2f, 0x9f, 0x24, 0xd1, 0x11, 0x39, + 0x22, 0x4c, 0x33, 0x34, 0xe8, 0xd9, 0x0f, 0x3f, 0x1a, 0x76, 0xd4, 0x88, 0x53, 0xe9, 0x7e, 0x69, + 0xe2, 0x71, 0x96, 0x08, 0x3d, 0x86, 0xe5, 0x81, 0x06, 0xf1, 0x84, 0x41, 0x69, 0x54, 0x56, 0x2a, + 0xeb, 0x4b, 0x1b, 0xab, 0xae, 0x2d, 0x3e, 0xa5, 0xeb, 0x96, 0x12, 0xea, 0x55, 0x5f, 0x8c, 0xdb, + 0x73, 0xb8, 0x3e, 0xc8, 0x93, 0xdc, 0xaa, 0xfe, 0xf1, 0xbc, 0xed, 0xac, 0xfe, 0xe9, 0xc0, 0xbd, + 0x03, 0xc5, 0x04, 0x19, 0xd2, 0x47, 0xcc, 0xa7, 0x87, 0x71, 0x76, 0x29, 0xad, 0x15, 0x7d, 0x0b, + 0xb7, 0xa5, 0x39, 0xf4, 0x62, 0xe6, 0x53, 0x2f, 0xf0, 0x75, 0x41, 0x0b, 0xbd, 0xc7, 0x67, 0xe3, + 0x76, 0x3d, 0x17, 0xb7, 0xbb, 0xfd, 0x6a, 0xdc, 0xde, 0xba, 0x52, 0x85, 0x85, 0x68, 0x5c, 0x97, + 0xb9, 0xa5, 0x8f, 0x0e, 0xe1, 0x4e, 0x12, 0x4f, 0xea, 0x4d, 0xb9, 0xc8, 0xc6, 0xbc, 0xae, 0xf7, + 0xbd, 0xf2, 0x7a, 0x8b, 0xc4, 0x6d, 0xc5, 0xb7, 0x93, 0xc2, 0xae, 0x5c, 0xfd, 0x6d, 0x1e, 0x1a, + 0x97, 0x84, 0x48, 0xf4, 0xa3, 0x03, 0x8b, 0x82, 0xf2, 0x30, 0x18, 0x10, 0xd9, 0x70, 0x34, 0xd8, + 0xa6, 0x9b, 0x53, 0xd6, 0x65, 0x60, 0xd2, 0xc5, 0x36, 0x6a, 0x27, 0x56, 0x62, 0xd4, 0xeb, 0xa5, + 0xd8, 0x27, 0xa7, 0xd7, 0x6a, 0xc7, 0x84, 0x08, 0xea, 0xc2, 0x0d, 0xa9, 0x88, 0x4a, 0xd2, 0xfa, + 0x9d, 0xf5, 0xe5, 0x8d, 0x95, 0x8c, 0x52, 0xa6, 0xdb, 0x29, 0xad, 0x03, 0x7d, 0x0f, 0xdb, 0xfb, + 0x4d, 0x02, 0xf5, 0x02, 0x31, 0x74, 0x07, 0x2a, 0x47, 0x74, 0x64, 0x5e, 0x10, 0xa7, 0x9f, 0x68, + 0x0b, 0x16, 0x8e, 0x49, 0x98, 0x50, 0x9d, 0x7b, 0xc6, 0xde, 0x62, 0x13, 0xb2, 0x35, 0xdf, 0x75, + 0xac, 0x86, 0xfe, 0x06, 0x78, 0x67, 0xdf, 0x9a, 0x07, 0x4f, 0xbc, 0xb9, 0x4d, 0xe5, 0x40, 0x04, + 0x5c, 0x31, 0x81, 0x76, 0x60, 0x31, 0x33, 0x97, 0x46, 0x5f, 0xda, 0x58, 0xbb, 0x50, 0x45, 0x96, + 0x60, 0x1a, 0xa6, 0x1f, 0xd1, 0xc1, 0x93, 0x50, 0xd4, 0x07, 0x98, 0xda, 0xd1, 0x52, 0xfe, 0xb8, + 0xf0, 0x42, 0xff, 0xc5, 0x62, 0x5a, 0xcf, 0x05, 0x88, 0x5a, 0x98, 0x1d, 0xa1, 0x6f, 0xa0, 0xc6, + 0x29, 0x15, 0xd2, 0x8b, 0x08, 0x6f, 0x54, 0x34, 0xc4, 0x83, 0xd9, 0x21, 0x9e, 0x50, 0x2a, 0xa6, + 0xcb, 0x7d, 0xc2, 0xad, 0x10, 0x17, 0x75, 0xce, 0x7d, 0xc2, 0xd1, 0xf7, 0x0e, 0xd4, 0x68, 0xec, + 0x73, 0x16, 0xc4, 0x4a, 0x36, 0xaa, 0x5a, 0x65, 0xdd, 0xd9, 0x01, 0x76, 0xb2, 0x50, 0x23, 0xb5, + 0x8f, 0x4e, 0x4e, 0xdb, 0x9b, 0x57, 0x92, 0x99, 0xd5, 0xd7, 0x94, 0x43, 0xf3, 0xaf, 0x0a, 0xbc, + 0x59, 0xd2, 0x1a, 0xf4, 0x35, 0xdc, 0x52, 0x22, 0x88, 0xbc, 0xd7, 0x35, 0xc9, 0x96, 0xd2, 0x6c, + 0x76, 0x81, 0x9e, 0x4c, 0xa6, 0xd9, 0xb3, 0x20, 0x75, 0xfe, 0xc8, 0xba, 0x7b, 0xad, 0xdc, 0x70, + 0x85, 0xf1, 0x6a, 0x1f, 0xcd, 0x8e, 0xb3, 0xcf, 0x4d, 0x3c, 0xfa, 0xd9, 0x29, 0x19, 0x19, 0x66, + 0x44, 0xe2, 0x6b, 0x69, 0xc4, 0xfd, 0x97, 0xdb, 0x4d, 0xe7, 0x1f, 0x9e, 0x9c, 0xb6, 0xbb, 0x57, + 0xea, 0xc3, 0x24, 0xf5, 0xee, 0xf6, 0x85, 0xc1, 0xd4, 0x0c, 0xe0, 0x6e, 0x19, 0x4c, 0x89, 0x65, + 0x1f, 0x14, 0x2d, 0xfb, 0xfe, 0x4c, 0x13, 0x2a, 0xe7, 0xd9, 0xe6, 0x17, 0xb0, 0x5c, 0x94, 0x29, + 0xba, 0x07, 0x95, 0x44, 0x84, 0x1a, 0xa4, 0xd6, 0xbb, 0x79, 0x36, 0x6e, 0x57, 0x0e, 0xf1, 0x1e, + 0x4e, 0xf7, 0xd0, 0xbb, 0x00, 0x81, 0xf4, 0x42, 0x4a, 0x44, 0x4c, 0x85, 0x86, 0x5c, 0xc4, 0xb5, + 0x40, 0xee, 0x99, 0x8d, 0xe6, 0xaf, 0xf3, 0xf0, 0xc6, 0x05, 0xcd, 0xa3, 0x9f, 0x1c, 0x58, 0xd0, + 0x82, 0xb7, 0x53, 0xf4, 0xd3, 0x6b, 0x18, 0x48, 0xef, 0x5c, 0x57, 0xed, 0x86, 0x0d, 0x5a, 0x83, + 0x3a, 0xe1, 0x3c, 0x0c, 0xa8, 0xef, 0x05, 0xb1, 0x4f, 0xbf, 0xd3, 0xf5, 0x54, 0xf1, 0x2d, 0xbb, + 0xb9, 0x9b, 0xee, 0x35, 0x05, 0xc0, 0x14, 0x32, 0xdf, 0xff, 0xaa, 0xe9, 0xff, 0xa3, 0x62, 0xff, + 0xbb, 0xff, 0xb7, 0xb6, 0xfc, 0x93, 0x3c, 0x84, 0xe5, 0xa2, 0xb1, 0x4b, 0x70, 0xef, 0xe6, 0x71, + 0x6b, 0xb9, 0xe8, 0xde, 0x67, 0x2f, 0xce, 0x5a, 0xce, 0xcb, 0xb3, 0x96, 0xf3, 0xc3, 0x79, 0x6b, + 0xee, 0xf9, 0x79, 0xcb, 0x79, 0x79, 0xde, 0x9a, 0xfb, 0xfd, 0xbc, 0x35, 0xf7, 0xd5, 0xfd, 0x59, + 0x9a, 0x35, 0xf9, 0x97, 0xd5, 0xbf, 0xa1, 0xbf, 0x37, 0xff, 0x09, 0x00, 0x00, 0xff, 0xff, 0x64, + 0xd8, 0xc2, 0xf8, 0x7a, 0x09, 0x00, 0x00, } func (this *LogStreamCommitResults) Equal(that interface{}) bool { @@ -559,10 +527,7 @@ func (this *LogStreamCommitResults) Equal(that interface{}) bool { } else if this == nil { return false } - if this.HighWatermark != that1.HighWatermark { - return false - } - if this.PrevHighWatermark != that1.PrevHighWatermark { + if this.Version != that1.Version { return false } if len(this.CommitResults) != len(that1.CommitResults) { @@ -573,9 +538,6 @@ func (this *LogStreamCommitResults) Equal(that interface{}) bool { return false } } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { - return false - } return true } func (this *LogStreamUncommitReports) Equal(that interface{}) bool { @@ -610,9 +572,6 @@ func (this *LogStreamUncommitReports) Equal(that interface{}) bool { if this.Status != that1.Status { return false } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { - return false - } return true } func (m *LogStreamCommitResults) Marshal() (dAtA []byte, err error) { @@ -635,10 +594,6 @@ func (m *LogStreamCommitResults) MarshalToSizedBuffer(dAtA []byte) (int, error) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.CommitResults) > 0 { for iNdEx := len(m.CommitResults) - 1; iNdEx >= 0; iNdEx-- { { @@ -653,13 +608,8 @@ func (m *LogStreamCommitResults) MarshalToSizedBuffer(dAtA []byte) (int, error) dAtA[i] = 0x1a } } - if m.PrevHighWatermark != 0 { - i = encodeVarintRaftMetadataRepository(dAtA, i, uint64(m.PrevHighWatermark)) - i-- - dAtA[i] = 0x10 - } - if m.HighWatermark != 0 { - i = encodeVarintRaftMetadataRepository(dAtA, i, uint64(m.HighWatermark)) + if m.Version != 0 { + i = encodeVarintRaftMetadataRepository(dAtA, i, uint64(m.Version)) i-- dAtA[i] = 0x8 } @@ -686,10 +636,6 @@ func (m *StorageNodeUncommitReport) MarshalToSizedBuffer(dAtA []byte) (int, erro _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.UncommitReports) > 0 { for iNdEx := len(m.UncommitReports) - 1; iNdEx >= 0; iNdEx-- { { @@ -732,10 +678,6 @@ func (m *LogStreamUncommitReports) MarshalToSizedBuffer(dAtA []byte) (int, error _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Status != 0 { i = encodeVarintRaftMetadataRepository(dAtA, i, uint64(m.Status)) i-- @@ -786,10 +728,6 @@ func (m *MetadataRepositoryDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, e _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Endpoints) > 0 { for k := range m.Endpoints { v := m.Endpoints[k] @@ -864,10 +802,6 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) MarshalToSizedBuffer( _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.UncommitReports) > 0 { for k := range m.UncommitReports { v := m.UncommitReports[k] @@ -906,8 +840,8 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) MarshalToSizedBuffer( dAtA[i] = 0x12 } } - if m.TrimGLSN != 0 { - i = encodeVarintRaftMetadataRepository(dAtA, i, uint64(m.TrimGLSN)) + if m.TrimVersion != 0 { + i = encodeVarintRaftMetadataRepository(dAtA, i, uint64(m.TrimVersion)) i-- dAtA[i] = 0x8 } @@ -934,10 +868,6 @@ func (m *MetadataRepositoryDescriptor_PeerDescriptor) MarshalToSizedBuffer(dAtA _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.IsLearner { i-- if m.IsLearner { @@ -978,10 +908,6 @@ func (m *MetadataRepositoryDescriptor_PeerDescriptorMap) MarshalToSizedBuffer(dA _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.AppliedIndex != 0 { i = encodeVarintRaftMetadataRepository(dAtA, i, uint64(m.AppliedIndex)) i-- @@ -1031,11 +957,8 @@ func (m *LogStreamCommitResults) ProtoSize() (n int) { } var l int _ = l - if m.HighWatermark != 0 { - n += 1 + sovRaftMetadataRepository(uint64(m.HighWatermark)) - } - if m.PrevHighWatermark != 0 { - n += 1 + sovRaftMetadataRepository(uint64(m.PrevHighWatermark)) + if m.Version != 0 { + n += 1 + sovRaftMetadataRepository(uint64(m.Version)) } if len(m.CommitResults) > 0 { for _, e := range m.CommitResults { @@ -1043,9 +966,6 @@ func (m *LogStreamCommitResults) ProtoSize() (n int) { n += 1 + l + sovRaftMetadataRepository(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1064,9 +984,6 @@ func (m *StorageNodeUncommitReport) ProtoSize() (n int) { n += 1 + l + sovRaftMetadataRepository(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1088,9 +1005,6 @@ func (m *LogStreamUncommitReports) ProtoSize() (n int) { if m.Status != 0 { n += 1 + sovRaftMetadataRepository(uint64(m.Status)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1118,9 +1032,6 @@ func (m *MetadataRepositoryDescriptor) ProtoSize() (n int) { n += mapEntrySize + 1 + sovRaftMetadataRepository(uint64(mapEntrySize)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1130,8 +1041,8 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) ProtoSize() (n int) { } var l int _ = l - if m.TrimGLSN != 0 { - n += 1 + sovRaftMetadataRepository(uint64(m.TrimGLSN)) + if m.TrimVersion != 0 { + n += 1 + sovRaftMetadataRepository(uint64(m.TrimVersion)) } if len(m.CommitHistory) > 0 { for _, e := range m.CommitHistory { @@ -1152,9 +1063,6 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) ProtoSize() (n int) { n += mapEntrySize + 1 + sovRaftMetadataRepository(uint64(mapEntrySize)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1171,9 +1079,6 @@ func (m *MetadataRepositoryDescriptor_PeerDescriptor) ProtoSize() (n int) { if m.IsLearner { n += 2 } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1199,9 +1104,6 @@ func (m *MetadataRepositoryDescriptor_PeerDescriptorMap) ProtoSize() (n int) { if m.AppliedIndex != 0 { n += 1 + sovRaftMetadataRepository(uint64(m.AppliedIndex)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1242,28 +1144,9 @@ func (m *LogStreamCommitResults) Unmarshal(dAtA []byte) error { switch fieldNum { case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field HighWatermark", wireType) - } - m.HighWatermark = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowRaftMetadataRepository - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.HighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field PrevHighWatermark", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) } - m.PrevHighWatermark = 0 + m.Version = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowRaftMetadataRepository @@ -1273,7 +1156,7 @@ func (m *LogStreamCommitResults) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.PrevHighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift + m.Version |= github_daumkakao_com_varlog_varlog_pkg_types.Version(b&0x7F) << shift if b < 0x80 { break } @@ -1324,7 +1207,6 @@ func (m *LogStreamCommitResults) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1428,7 +1310,6 @@ func (m *StorageNodeUncommitReport) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1499,7 +1380,7 @@ func (m *LogStreamUncommitReports) Unmarshal(dAtA []byte) error { if m.Replicas == nil { m.Replicas = make(map[github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID]snpb.LogStreamUncommitReport) } - var mapkey uint32 + var mapkey int32 mapvalue := &snpb.LogStreamUncommitReport{} for iNdEx < postIndex { entryPreIndex := iNdEx @@ -1529,7 +1410,7 @@ func (m *LogStreamUncommitReports) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - mapkey |= uint32(b&0x7F) << shift + mapkey |= int32(b&0x7F) << shift if b < 0x80 { break } @@ -1613,7 +1494,6 @@ func (m *LogStreamUncommitReports) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1882,7 +1762,6 @@ func (m *MetadataRepositoryDescriptor) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1923,9 +1802,9 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) Unmarshal(dAtA []byte switch fieldNum { case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TrimGLSN", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TrimVersion", wireType) } - m.TrimGLSN = 0 + m.TrimVersion = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowRaftMetadataRepository @@ -1935,7 +1814,7 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) Unmarshal(dAtA []byte } b := dAtA[iNdEx] iNdEx++ - m.TrimGLSN |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift + m.TrimVersion |= github_daumkakao_com_varlog_varlog_pkg_types.Version(b&0x7F) << shift if b < 0x80 { break } @@ -2006,7 +1885,7 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) Unmarshal(dAtA []byte if m.UncommitReports == nil { m.UncommitReports = make(map[github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID]*LogStreamUncommitReports) } - var mapkey uint32 + var mapkey int32 var mapvalue *LogStreamUncommitReports for iNdEx < postIndex { entryPreIndex := iNdEx @@ -2036,7 +1915,7 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) Unmarshal(dAtA []byte } b := dAtA[iNdEx] iNdEx++ - mapkey |= uint32(b&0x7F) << shift + mapkey |= int32(b&0x7F) << shift if b < 0x80 { break } @@ -2101,7 +1980,6 @@ func (m *MetadataRepositoryDescriptor_LogStreamDescriptor) Unmarshal(dAtA []byte if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2204,7 +2082,6 @@ func (m *MetadataRepositoryDescriptor_PeerDescriptor) Unmarshal(dAtA []byte) err if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2389,7 +2266,6 @@ func (m *MetadataRepositoryDescriptor_PeerDescriptorMap) Unmarshal(dAtA []byte) if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/mrpb/raft_metadata_repository.proto b/proto/mrpb/raft_metadata_repository.proto index 248486d01..0e79d4883 100644 --- a/proto/mrpb/raft_metadata_repository.proto +++ b/proto/mrpb/raft_metadata_repository.proto @@ -12,22 +12,22 @@ option go_package = "github.com/kakao/varlog/proto/mrpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; message LogStreamCommitResults { option (gogoproto.equal) = true; - uint64 high_watermark = 1 + uint64 version = 1 [(gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN"]; - uint64 prev_high_watermark = 2 - [(gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN"]; + "github.com/kakao/varlog/pkg/types.Version"]; repeated snpb.LogStreamCommitResult commit_results = 3 [(gogoproto.nullable) = false]; } message StorageNodeUncommitReport { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" @@ -39,7 +39,7 @@ message StorageNodeUncommitReport { message LogStreamUncommitReports { option (gogoproto.equal) = true; - map replicas = 1 [ + map replicas = 1 [ (gogoproto.castkey) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.nullable) = false @@ -49,14 +49,12 @@ message LogStreamUncommitReports { message MetadataRepositoryDescriptor { message LogStreamDescriptor { - uint64 trim_glsn = 1 [ - (gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN", - (gogoproto.customname) = "TrimGLSN" - ]; + uint64 trim_version = 1 + [(gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.Version"]; repeated LogStreamCommitResults commit_history = 2 [(gogoproto.nullable) = true]; - map uncommit_reports = 3 + map uncommit_reports = 3 [(gogoproto.castkey) = "github.com/kakao/varlog/pkg/types.LogStreamID"]; } diff --git a/proto/mrpb/state_machine_log.go b/proto/mrpb/state_machine_log.go index 1dc0f3ca4..d9b37fb67 100644 --- a/proto/mrpb/state_machine_log.go +++ b/proto/mrpb/state_machine_log.go @@ -1,8 +1,5 @@ package mrpb func (rec *StateMachineLogRecord) Validate(crc uint32) bool { - if rec.Crc == crc { - return true - } - return false + return rec.Crc == crc } diff --git a/proto/mrpb/state_machine_log.pb.go b/proto/mrpb/state_machine_log.pb.go index 477b75f47..b86d72e41 100644 --- a/proto/mrpb/state_machine_log.pb.go +++ b/proto/mrpb/state_machine_log.pb.go @@ -52,11 +52,8 @@ func (StateMechineLogRecordType) EnumDescriptor() ([]byte, []int) { } type StateMachineLogCommitResult struct { - TrimGlsn github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=trim_glsn,json=trimGlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"trim_glsn,omitempty"` - CommitResult *LogStreamCommitResults `protobuf:"bytes,2,opt,name=commit_result,json=commitResult,proto3" json:"commit_result,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + TrimVersion github_daumkakao_com_varlog_varlog_pkg_types.Version `protobuf:"varint,1,opt,name=trim_version,json=trimVersion,proto3,casttype=github.com/kakao/varlog/pkg/types.Version" json:"trim_version,omitempty"` + CommitResult *LogStreamCommitResults `protobuf:"bytes,2,opt,name=commit_result,json=commitResult,proto3" json:"commit_result,omitempty"` } func (m *StateMachineLogCommitResult) Reset() { *m = StateMachineLogCommitResult{} } @@ -92,9 +89,9 @@ func (m *StateMachineLogCommitResult) XXX_DiscardUnknown() { var xxx_messageInfo_StateMachineLogCommitResult proto.InternalMessageInfo -func (m *StateMachineLogCommitResult) GetTrimGlsn() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { +func (m *StateMachineLogCommitResult) GetTrimVersion() github_daumkakao_com_varlog_varlog_pkg_types.Version { if m != nil { - return m.TrimGlsn + return m.TrimVersion } return 0 } @@ -107,11 +104,8 @@ func (m *StateMachineLogCommitResult) GetCommitResult() *LogStreamCommitResults } type StateMachineLogEntry struct { - AppliedIndex uint64 `protobuf:"varint,1,opt,name=applied_index,json=appliedIndex,proto3" json:"applied_index,omitempty"` - Payload StateMachineLogEntry_Payload `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + AppliedIndex uint64 `protobuf:"varint,1,opt,name=applied_index,json=appliedIndex,proto3" json:"applied_index,omitempty"` + Payload StateMachineLogEntry_Payload `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload"` } func (m *StateMachineLogEntry) Reset() { *m = StateMachineLogEntry{} } @@ -168,9 +162,6 @@ type StateMachineLogEntry_Payload struct { UnregisterLogStream *UnregisterLogStream `protobuf:"bytes,4,opt,name=unregister_log_stream,json=unregisterLogStream,proto3" json:"unregister_log_stream,omitempty"` UpdateLogStream *UpdateLogStream `protobuf:"bytes,5,opt,name=update_log_stream,json=updateLogStream,proto3" json:"update_log_stream,omitempty"` CommitResult *StateMachineLogCommitResult `protobuf:"bytes,6,opt,name=commit_result,json=commitResult,proto3" json:"commit_result,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` } func (m *StateMachineLogEntry_Payload) Reset() { *m = StateMachineLogEntry_Payload{} } @@ -249,12 +240,9 @@ func (m *StateMachineLogEntry_Payload) GetCommitResult() *StateMachineLogCommitR } type StateMachineLogRecord struct { - Type StateMechineLogRecordType `protobuf:"varint,1,opt,name=type,proto3,enum=varlog.mrpb.StateMechineLogRecordType" json:"type,omitempty"` - Crc uint32 `protobuf:"varint,2,opt,name=crc,proto3" json:"crc,omitempty"` - Data []byte `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Type StateMechineLogRecordType `protobuf:"varint,1,opt,name=type,proto3,enum=varlog.mrpb.StateMechineLogRecordType" json:"type,omitempty"` + Crc uint32 `protobuf:"varint,2,opt,name=crc,proto3" json:"crc,omitempty"` + Data []byte `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"` } func (m *StateMachineLogRecord) Reset() { *m = StateMachineLogRecord{} } @@ -324,44 +312,45 @@ func init() { } var fileDescriptor_f38190a80937bd89 = []byte{ - // 583 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x94, 0xc1, 0x6e, 0xd3, 0x30, - 0x18, 0xc7, 0x17, 0x96, 0xae, 0xcc, 0x6d, 0xa1, 0xf3, 0xa8, 0x28, 0x05, 0xb5, 0x55, 0x2b, 0xa1, - 0x82, 0xb4, 0x44, 0x14, 0x71, 0x19, 0xb7, 0x22, 0xb4, 0x4d, 0xea, 0x2a, 0xe4, 0x8e, 0xcb, 0x2e, - 0x91, 0x9b, 0x78, 0x59, 0xd4, 0x24, 0x8e, 0x6c, 0x07, 0xd1, 0x1b, 0x57, 0xde, 0x62, 0xef, 0xc1, - 0x0b, 0xf4, 0xc8, 0x13, 0xec, 0x30, 0xde, 0x82, 0x13, 0xb2, 0x93, 0xb6, 0x49, 0x17, 0x10, 0xa7, - 0xda, 0xff, 0x7c, 0xdf, 0xef, 0xfb, 0xf7, 0xfb, 0x6c, 0x83, 0x5e, 0xc4, 0xa8, 0xa0, 0x66, 0xc0, - 0xa2, 0x99, 0xc9, 0x05, 0x16, 0xc4, 0x0a, 0xb0, 0x7d, 0xed, 0x85, 0xc4, 0xf2, 0xa9, 0x6b, 0xa8, - 0x8f, 0xb0, 0xf2, 0x05, 0x33, 0xb9, 0x93, 0x41, 0xad, 0x23, 0xd7, 0x13, 0xd7, 0xf1, 0xcc, 0xb0, - 0x69, 0x60, 0xba, 0xd4, 0xa5, 0xa6, 0x8a, 0x99, 0xc5, 0x57, 0x6a, 0x97, 0xd0, 0xe4, 0x2a, 0xc9, - 0x6d, 0x35, 0x14, 0x99, 0xe1, 0x2b, 0x61, 0x91, 0x50, 0xb0, 0x45, 0x2a, 0xf7, 0x37, 0x72, 0x40, - 0x04, 0x76, 0xb0, 0xc0, 0x16, 0x23, 0x11, 0xe5, 0x9e, 0xa0, 0xab, 0xa0, 0xde, 0x0f, 0x0d, 0x3c, - 0x9f, 0x4a, 0x4f, 0xe7, 0x89, 0xa5, 0x31, 0x75, 0x3f, 0xd0, 0x20, 0xf0, 0x04, 0x22, 0x3c, 0xf6, - 0x05, 0x44, 0x60, 0x5f, 0x30, 0x2f, 0xb0, 0x5c, 0x9f, 0x87, 0x4d, 0xad, 0xab, 0x0d, 0xf4, 0xd1, - 0xbb, 0xdf, 0xb7, 0x9d, 0x37, 0xa9, 0x43, 0x07, 0xc7, 0xc1, 0x1c, 0xcf, 0x31, 0x55, 0x5e, 0x93, - 0xff, 0xb0, 0xfa, 0x89, 0xe6, 0xae, 0x29, 0x16, 0x11, 0xe1, 0xc6, 0xc9, 0x78, 0x3a, 0x41, 0x0f, - 0x25, 0xe7, 0xc4, 0xe7, 0x21, 0x3c, 0x05, 0x35, 0x5b, 0xd5, 0xb0, 0x98, 0x2a, 0xd2, 0x7c, 0xd0, - 0xd5, 0x06, 0x95, 0x61, 0xdf, 0xc8, 0xf4, 0xc0, 0x18, 0x53, 0x77, 0x2a, 0x18, 0xc1, 0x41, 0xd6, - 0x0e, 0x47, 0x55, 0x3b, 0xb3, 0xed, 0x7d, 0x2f, 0x81, 0x27, 0x5b, 0xee, 0x3f, 0xca, 0x0e, 0xc0, - 0x3e, 0xa8, 0xe1, 0x28, 0xf2, 0x3d, 0xe2, 0x58, 0x5e, 0xe8, 0x90, 0xaf, 0x89, 0x75, 0x54, 0x4d, - 0xc5, 0x33, 0xa9, 0xc1, 0x33, 0x50, 0x8e, 0xf0, 0xc2, 0xa7, 0xd8, 0x69, 0xee, 0x2a, 0x07, 0xaf, - 0x72, 0x0e, 0x8a, 0xc0, 0xc6, 0xa7, 0x24, 0x61, 0xa4, 0x2f, 0x6f, 0x3b, 0x3b, 0x68, 0x95, 0xdf, - 0xfa, 0xa6, 0x83, 0x72, 0xfa, 0x09, 0x5e, 0x80, 0x06, 0x23, 0xae, 0xc7, 0x05, 0x61, 0x16, 0x17, - 0x94, 0x61, 0x97, 0x58, 0x21, 0x75, 0x88, 0xf2, 0x50, 0x19, 0x76, 0x73, 0x45, 0x50, 0x1a, 0x39, - 0x4d, 0x02, 0x27, 0xd4, 0x21, 0xe8, 0x90, 0xdd, 0x17, 0xe1, 0x25, 0x78, 0x1a, 0x87, 0xc5, 0xdc, - 0xa4, 0x7d, 0xbd, 0x1c, 0xf7, 0x73, 0x58, 0x00, 0x41, 0x8d, 0xb8, 0x48, 0x86, 0x13, 0xb0, 0x2e, - 0x29, 0x8f, 0xa4, 0xc5, 0x55, 0xe3, 0xd3, 0xa6, 0xb4, 0x0b, 0xfd, 0xae, 0xc7, 0x83, 0x0e, 0xd8, - 0xb6, 0x24, 0x3b, 0x90, 0xf1, 0x9a, 0x21, 0xea, 0x05, 0x1d, 0xd8, 0x38, 0xdd, 0x30, 0x0f, 0xe3, - 0xfb, 0x22, 0x3c, 0x05, 0x07, 0x71, 0xe4, 0xc8, 0xeb, 0x93, 0x21, 0x96, 0x14, 0xf1, 0x45, 0x9e, - 0xa8, 0xa2, 0x36, 0xb4, 0xc7, 0x71, 0x5e, 0x80, 0xe7, 0xdb, 0x07, 0x70, 0x4f, 0x51, 0x06, 0xff, - 0x1a, 0x7f, 0xf6, 0x18, 0xe6, 0x4f, 0xe1, 0xb1, 0xbe, 0xbc, 0xe9, 0x68, 0xbd, 0x05, 0x68, 0x6c, - 0xa5, 0x20, 0x62, 0x53, 0xe6, 0xc0, 0x63, 0xa0, 0xcb, 0x6b, 0xa0, 0xc6, 0xff, 0x68, 0xf8, 0xb2, - 0xa0, 0x08, 0xc9, 0x67, 0x5c, 0x2c, 0x22, 0x82, 0x54, 0x0e, 0xac, 0x83, 0x5d, 0x9b, 0xd9, 0x6a, - 0xc2, 0x35, 0x24, 0x97, 0x10, 0x02, 0x5d, 0xde, 0x64, 0x35, 0x9c, 0x2a, 0x52, 0xeb, 0xd7, 0x26, - 0x78, 0xf6, 0x57, 0x10, 0xdc, 0x07, 0x25, 0xf5, 0x2a, 0xd4, 0x77, 0x60, 0x59, 0xd1, 0xea, 0xda, - 0xe8, 0xfd, 0xf2, 0xae, 0xad, 0xfd, 0xbc, 0x6b, 0x6b, 0x37, 0xbf, 0xda, 0xda, 0xe5, 0xd1, 0xff, - 0x5c, 0xe6, 0xf5, 0x13, 0x36, 0xdb, 0x53, 0xeb, 0xb7, 0x7f, 0x02, 0x00, 0x00, 0xff, 0xff, 0x6d, - 0x95, 0x38, 0xf2, 0xd7, 0x04, 0x00, 0x00, + // 595 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x54, 0x3f, 0x6f, 0xd3, 0x40, + 0x14, 0x8f, 0xa9, 0xd3, 0x88, 0x4b, 0x02, 0xe9, 0x95, 0x88, 0x10, 0x90, 0x13, 0x25, 0x12, 0x0a, + 0x48, 0xb5, 0xa5, 0xc0, 0x80, 0x3a, 0x06, 0x21, 0x5a, 0xa9, 0xad, 0xd0, 0xa5, 0x30, 0x94, 0xc1, + 0xba, 0xd8, 0x57, 0xd7, 0x4a, 0xec, 0xb3, 0xce, 0xe7, 0x0a, 0x6f, 0xac, 0x6c, 0x7c, 0x84, 0x7e, + 0x18, 0x86, 0x8c, 0x1d, 0x99, 0x3a, 0x34, 0xdf, 0x82, 0x09, 0xdd, 0xd9, 0x49, 0xec, 0xd4, 0xa0, + 0x4e, 0xbe, 0xf7, 0xf3, 0x7b, 0xbf, 0xf7, 0xbb, 0xf7, 0xe7, 0x40, 0x2f, 0x60, 0x94, 0x53, 0xc3, + 0x63, 0xc1, 0xc4, 0x08, 0x39, 0xe6, 0xc4, 0xf4, 0xb0, 0x75, 0xe1, 0xfa, 0xc4, 0x9c, 0x51, 0x47, + 0x97, 0x3f, 0x61, 0xf5, 0x12, 0x33, 0x61, 0x09, 0xa7, 0xf6, 0x9e, 0xe3, 0xf2, 0x8b, 0x68, 0xa2, + 0x5b, 0xd4, 0x33, 0x1c, 0xea, 0x50, 0x43, 0xfa, 0x4c, 0xa2, 0x73, 0x69, 0x25, 0x6c, 0xe2, 0x94, + 0xc4, 0xb6, 0x9b, 0x92, 0x99, 0xe1, 0x73, 0x6e, 0x12, 0x9f, 0xb3, 0x38, 0x85, 0xfb, 0x6b, 0xd8, + 0x23, 0x1c, 0xdb, 0x98, 0x63, 0x93, 0x91, 0x80, 0x86, 0x2e, 0xa7, 0x4b, 0xa7, 0xde, 0x2f, 0x05, + 0x3c, 0x1f, 0x0b, 0x4d, 0xc7, 0x89, 0xa4, 0x23, 0xea, 0xbc, 0xa7, 0x9e, 0xe7, 0x72, 0x44, 0xc2, + 0x68, 0xc6, 0xe1, 0x57, 0x50, 0xe3, 0xcc, 0xf5, 0xcc, 0x4b, 0xc2, 0x42, 0x97, 0xfa, 0x2d, 0xa5, + 0xab, 0x0c, 0xd4, 0xd1, 0xbb, 0x3f, 0x37, 0x9d, 0xb7, 0xa9, 0x48, 0x1b, 0x47, 0xde, 0x14, 0x4f, + 0x31, 0x95, 0x72, 0x93, 0x6b, 0x2c, 0x3f, 0xc1, 0xd4, 0x31, 0x78, 0x1c, 0x90, 0x50, 0xff, 0x92, + 0xc4, 0xa3, 0xaa, 0x60, 0x4b, 0x0d, 0x78, 0x00, 0xea, 0x96, 0x4c, 0x66, 0x32, 0x99, 0xad, 0xf5, + 0xa0, 0xab, 0x0c, 0xaa, 0xc3, 0xbe, 0x9e, 0x29, 0x86, 0x7e, 0x44, 0x9d, 0x31, 0x67, 0x04, 0x7b, + 0x59, 0x5d, 0x21, 0xaa, 0x59, 0x19, 0xb3, 0xf7, 0xa3, 0x0c, 0x9e, 0x6c, 0x5c, 0xe3, 0x83, 0x28, + 0x05, 0xec, 0x83, 0x3a, 0x0e, 0x82, 0x99, 0x4b, 0x6c, 0xd3, 0xf5, 0x6d, 0xf2, 0x2d, 0xb9, 0x00, + 0xaa, 0xa5, 0xe0, 0xa1, 0xc0, 0xe0, 0x21, 0xa8, 0x04, 0x38, 0x9e, 0x51, 0x6c, 0xb7, 0xb6, 0xa4, + 0x82, 0x57, 0x39, 0x05, 0x45, 0xc4, 0xfa, 0xa7, 0x24, 0x60, 0xa4, 0xce, 0x6f, 0x3a, 0x25, 0xb4, + 0x8c, 0x6f, 0x7f, 0x57, 0x41, 0x25, 0xfd, 0x05, 0x4f, 0x41, 0x93, 0x11, 0xc7, 0x0d, 0x39, 0x61, + 0x66, 0xc8, 0x29, 0xc3, 0x0e, 0x31, 0x7d, 0x6a, 0x13, 0xa9, 0xa1, 0x3a, 0xec, 0xe6, 0x92, 0xa0, + 0xd4, 0x73, 0x9c, 0x38, 0x9e, 0x50, 0x9b, 0xa0, 0x5d, 0x76, 0x17, 0x84, 0x67, 0xe0, 0x69, 0xe4, + 0x17, 0xf3, 0x26, 0xe5, 0xeb, 0xe5, 0x78, 0x3f, 0xfb, 0x05, 0x24, 0xa8, 0x19, 0x15, 0xc1, 0xf0, + 0x04, 0xac, 0x52, 0x8a, 0xd9, 0x34, 0x43, 0x59, 0xf8, 0xb4, 0x28, 0x5a, 0xa1, 0xde, 0x55, 0x7b, + 0xd0, 0x0e, 0xdb, 0x84, 0x44, 0x05, 0x32, 0x5a, 0x33, 0x8c, 0x6a, 0x41, 0x05, 0xd6, 0x4a, 0xd7, + 0x9c, 0xbb, 0xd1, 0x5d, 0x10, 0x1e, 0x80, 0x9d, 0x28, 0xb0, 0xc5, 0x1e, 0x65, 0x18, 0xcb, 0x92, + 0xf1, 0x45, 0x9e, 0x51, 0x7a, 0xad, 0xd9, 0x1e, 0x47, 0x79, 0x00, 0x1e, 0x6f, 0x0e, 0xe0, 0xb6, + 0x64, 0x19, 0xfc, 0xaf, 0xfd, 0xd9, 0x31, 0xcc, 0x4f, 0xe1, 0xbe, 0x3a, 0xbf, 0xea, 0x28, 0xbd, + 0x18, 0x34, 0x37, 0x42, 0x10, 0xb1, 0x28, 0xb3, 0xe1, 0x3e, 0x50, 0xc5, 0x32, 0xc8, 0xf6, 0x3f, + 0x1a, 0xbe, 0x2c, 0x48, 0x42, 0xf2, 0x11, 0xa7, 0x71, 0x40, 0x90, 0x8c, 0x81, 0x0d, 0xb0, 0x65, + 0x31, 0x4b, 0x76, 0xb8, 0x8e, 0xc4, 0x11, 0x42, 0xa0, 0x8a, 0x95, 0x96, 0xcd, 0xa9, 0x21, 0x79, + 0x7e, 0x6d, 0x80, 0x67, 0xff, 0x24, 0x82, 0x0f, 0x41, 0x59, 0x3e, 0x0f, 0x8d, 0x12, 0xac, 0x48, + 0xb6, 0x86, 0x32, 0xfa, 0x38, 0xbf, 0xd5, 0x94, 0xeb, 0x5b, 0x4d, 0xf9, 0xb9, 0xd0, 0x4a, 0x57, + 0x0b, 0x4d, 0xb9, 0x5e, 0x68, 0xa5, 0xdf, 0x0b, 0xad, 0x74, 0xb6, 0x77, 0x9f, 0xf5, 0x5e, 0xbd, + 0x6b, 0x93, 0x6d, 0x79, 0x7e, 0xf3, 0x37, 0x00, 0x00, 0xff, 0xff, 0x8e, 0xf0, 0x0a, 0x8a, 0xec, + 0x04, 0x00, 0x00, } func (m *StateMachineLogCommitResult) Marshal() (dAtA []byte, err error) { @@ -384,10 +373,6 @@ func (m *StateMachineLogCommitResult) MarshalToSizedBuffer(dAtA []byte) (int, er _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.CommitResult != nil { { size, err := m.CommitResult.MarshalToSizedBuffer(dAtA[:i]) @@ -400,8 +385,8 @@ func (m *StateMachineLogCommitResult) MarshalToSizedBuffer(dAtA []byte) (int, er i-- dAtA[i] = 0x12 } - if m.TrimGlsn != 0 { - i = encodeVarintStateMachineLog(dAtA, i, uint64(m.TrimGlsn)) + if m.TrimVersion != 0 { + i = encodeVarintStateMachineLog(dAtA, i, uint64(m.TrimVersion)) i-- dAtA[i] = 0x8 } @@ -428,10 +413,6 @@ func (m *StateMachineLogEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } { size, err := m.Payload.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -470,10 +451,6 @@ func (m *StateMachineLogEntry_Payload) MarshalToSizedBuffer(dAtA []byte) (int, e _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.CommitResult != nil { { size, err := m.CommitResult.MarshalToSizedBuffer(dAtA[:i]) @@ -569,10 +546,6 @@ func (m *StateMachineLogRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Data) > 0 { i -= len(m.Data) copy(dAtA[i:], m.Data) @@ -610,16 +583,13 @@ func (m *StateMachineLogCommitResult) ProtoSize() (n int) { } var l int _ = l - if m.TrimGlsn != 0 { - n += 1 + sovStateMachineLog(uint64(m.TrimGlsn)) + if m.TrimVersion != 0 { + n += 1 + sovStateMachineLog(uint64(m.TrimVersion)) } if m.CommitResult != nil { l = m.CommitResult.ProtoSize() n += 1 + l + sovStateMachineLog(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -634,9 +604,6 @@ func (m *StateMachineLogEntry) ProtoSize() (n int) { } l = m.Payload.ProtoSize() n += 1 + l + sovStateMachineLog(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -670,9 +637,6 @@ func (m *StateMachineLogEntry_Payload) ProtoSize() (n int) { l = m.CommitResult.ProtoSize() n += 1 + l + sovStateMachineLog(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -692,9 +656,6 @@ func (m *StateMachineLogRecord) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovStateMachineLog(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -776,9 +737,9 @@ func (m *StateMachineLogCommitResult) Unmarshal(dAtA []byte) error { switch fieldNum { case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TrimGlsn", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TrimVersion", wireType) } - m.TrimGlsn = 0 + m.TrimVersion = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowStateMachineLog @@ -788,7 +749,7 @@ func (m *StateMachineLogCommitResult) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.TrimGlsn |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift + m.TrimVersion |= github_daumkakao_com_varlog_varlog_pkg_types.Version(b&0x7F) << shift if b < 0x80 { break } @@ -841,7 +802,6 @@ func (m *StateMachineLogCommitResult) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -944,7 +904,6 @@ func (m *StateMachineLogEntry) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1211,7 +1170,6 @@ func (m *StateMachineLogEntry_Payload) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1334,7 +1292,6 @@ func (m *StateMachineLogRecord) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/mrpb/state_machine_log.proto b/proto/mrpb/state_machine_log.proto index 2d08a7682..118084949 100644 --- a/proto/mrpb/state_machine_log.proto +++ b/proto/mrpb/state_machine_log.proto @@ -12,11 +12,14 @@ option go_package = "github.com/kakao/varlog/proto/mrpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; message StateMachineLogCommitResult { - uint64 trim_glsn = 1 + uint64 trim_version = 1 [(gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN"]; + "github.com/kakao/varlog/pkg/types.Version"]; LogStreamCommitResults commit_result = 2; } diff --git a/proto/rpcbenchpb/rpcbench.pb.go b/proto/rpcbenchpb/rpcbench.pb.go index 645b4cc0b..95ed63756 100644 --- a/proto/rpcbenchpb/rpcbench.pb.go +++ b/proto/rpcbenchpb/rpcbench.pb.go @@ -29,10 +29,7 @@ var _ = math.Inf const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package type Request struct { - Data []byte `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Data []byte `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"` } func (m *Request) Reset() { *m = Request{} } @@ -76,10 +73,7 @@ func (m *Request) GetData() []byte { } type Response struct { - Seq uint64 `protobuf:"varint,1,opt,name=seq,proto3" json:"seq,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Seq uint64 `protobuf:"varint,1,opt,name=seq,proto3" json:"seq,omitempty"` } func (m *Response) Reset() { *m = Response{} } @@ -130,7 +124,7 @@ func init() { func init() { proto.RegisterFile("proto/rpcbenchpb/rpcbench.proto", fileDescriptor_a93c466267d65d69) } var fileDescriptor_a93c466267d65d69 = []byte{ - // 217 bytes of a gzipped FileDescriptorProto + // 230 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0xd7, 0x2f, 0x2a, 0x48, 0x4e, 0x4a, 0xcd, 0x4b, 0xce, 0x28, 0x48, 0x82, 0x33, 0xf5, 0xc0, 0x32, 0x42, 0x82, 0x65, 0x89, 0x45, 0x39, 0xf9, 0xe9, 0x7a, 0x08, 0x15, 0x52, 0xba, 0xe9, 0x99, @@ -141,10 +135,11 @@ var fileDescriptor_a93c466267d65d69 = []byte{ 0x57, 0x9c, 0x2a, 0x24, 0xc0, 0xc5, 0x5c, 0x9c, 0x5a, 0x08, 0x96, 0x66, 0x09, 0x02, 0x31, 0x8d, 0x7c, 0xb9, 0x38, 0x82, 0x02, 0x9c, 0x9d, 0x40, 0x36, 0x0b, 0x39, 0x72, 0xb1, 0x38, 0x27, 0xe6, 0xe4, 0x08, 0x49, 0xe9, 0x61, 0xb8, 0x49, 0x0f, 0x6a, 0x83, 0x94, 0x34, 0x56, 0x39, 0x88, 0xf1, - 0x4a, 0x0c, 0x4e, 0x8e, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78, 0x24, 0xc7, 0xb8, 0xe0, 0xb1, 0x1c, - 0x63, 0x94, 0x31, 0xd4, 0x23, 0x29, 0x89, 0xa5, 0xb9, 0xd9, 0x89, 0xd9, 0x89, 0xf9, 0x60, 0x2f, - 0x41, 0x0c, 0x80, 0x51, 0xe8, 0xe1, 0x93, 0xc4, 0x06, 0x16, 0x31, 0x06, 0x04, 0x00, 0x00, 0xff, - 0xff, 0x96, 0x74, 0x76, 0x2f, 0x3a, 0x01, 0x00, 0x00, + 0x4a, 0x0c, 0x4e, 0xbe, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78, 0x24, 0xc7, 0x38, 0xe1, 0xb1, 0x1c, + 0xc3, 0x82, 0xc7, 0x72, 0x8c, 0x17, 0x1e, 0xcb, 0x31, 0xdc, 0x78, 0x2c, 0xc7, 0x10, 0x65, 0x0c, + 0xf5, 0x54, 0x4a, 0x62, 0x69, 0x6e, 0x76, 0x62, 0x76, 0x62, 0x3e, 0xd8, 0x7b, 0x10, 0xc3, 0x60, + 0x14, 0x7a, 0x58, 0x25, 0xb1, 0x81, 0x45, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0xdf, 0x77, + 0x8a, 0xaa, 0x46, 0x01, 0x00, 0x00, } // Reference imports to suppress errors if they are not otherwise used. @@ -247,10 +242,6 @@ func (m *Request) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Data) > 0 { i -= len(m.Data) copy(dAtA[i:], m.Data) @@ -281,10 +272,6 @@ func (m *Response) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Seq != 0 { i = encodeVarintRpcbench(dAtA, i, uint64(m.Seq)) i-- @@ -314,9 +301,6 @@ func (m *Request) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovRpcbench(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -329,9 +313,6 @@ func (m *Response) ProtoSize() (n int) { if m.Seq != 0 { n += 1 + sovRpcbench(uint64(m.Seq)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -416,7 +397,6 @@ func (m *Request) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -486,7 +466,6 @@ func (m *Response) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/rpcbenchpb/rpcbench.proto b/proto/rpcbenchpb/rpcbench.proto index 4b09a07df..7de022630 100644 --- a/proto/rpcbenchpb/rpcbench.proto +++ b/proto/rpcbenchpb/rpcbench.proto @@ -9,6 +9,9 @@ option go_package = "github.com/kakao/varlog/proto/rpcbenchpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; message Request { bytes data = 1; diff --git a/proto/snpb/log_io.pb.go b/proto/snpb/log_io.pb.go index bfef5e435..daaca4140 100644 --- a/proto/snpb/log_io.pb.go +++ b/proto/snpb/log_io.pb.go @@ -18,6 +18,7 @@ import ( status "google.golang.org/grpc/status" github_daumkakao_com_varlog_varlog_pkg_types "github.com/kakao/varlog/pkg/types" + varlogpb "github.com/kakao/varlog/proto/varlogpb" ) // Reference imports to suppress errors if they are not otherwise used. @@ -34,12 +35,10 @@ const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package // AppendRequest is a message to send a payload to a storage node. It contains // a vector of storage nodes to replicate the payload. type AppendRequest struct { - Payload []byte `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - Backups []AppendRequest_BackupNode `protobuf:"bytes,3,rep,name=backups,proto3" json:"backups"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Payload []byte `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,2,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + Backups []varlogpb.StorageNode `protobuf:"bytes,4,rep,name=backups,proto3" json:"backups"` } func (m *AppendRequest) Reset() { *m = AppendRequest{} } @@ -82,83 +81,33 @@ func (m *AppendRequest) GetPayload() []byte { return nil } -func (m *AppendRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { +func (m *AppendRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { if m != nil { - return m.LogStreamID + return m.TopicID } return 0 } -func (m *AppendRequest) GetBackups() []AppendRequest_BackupNode { - if m != nil { - return m.Backups - } - return nil -} - -type AppendRequest_BackupNode struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - Address string `protobuf:"bytes,2,opt,name=address,proto3" json:"address,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` -} - -func (m *AppendRequest_BackupNode) Reset() { *m = AppendRequest_BackupNode{} } -func (m *AppendRequest_BackupNode) String() string { return proto.CompactTextString(m) } -func (*AppendRequest_BackupNode) ProtoMessage() {} -func (*AppendRequest_BackupNode) Descriptor() ([]byte, []int) { - return fileDescriptor_7692726f23e518ee, []int{0, 0} -} -func (m *AppendRequest_BackupNode) XXX_Unmarshal(b []byte) error { - return m.Unmarshal(b) -} -func (m *AppendRequest_BackupNode) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - if deterministic { - return xxx_messageInfo_AppendRequest_BackupNode.Marshal(b, m, deterministic) - } else { - b = b[:cap(b)] - n, err := m.MarshalToSizedBuffer(b) - if err != nil { - return nil, err - } - return b[:n], nil - } -} -func (m *AppendRequest_BackupNode) XXX_Merge(src proto.Message) { - xxx_messageInfo_AppendRequest_BackupNode.Merge(m, src) -} -func (m *AppendRequest_BackupNode) XXX_Size() int { - return m.ProtoSize() -} -func (m *AppendRequest_BackupNode) XXX_DiscardUnknown() { - xxx_messageInfo_AppendRequest_BackupNode.DiscardUnknown(m) -} - -var xxx_messageInfo_AppendRequest_BackupNode proto.InternalMessageInfo - -func (m *AppendRequest_BackupNode) GetStorageNodeID() github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID { +func (m *AppendRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { - return m.StorageNodeID + return m.LogStreamID } return 0 } -func (m *AppendRequest_BackupNode) GetAddress() string { +func (m *AppendRequest) GetBackups() []varlogpb.StorageNode { if m != nil { - return m.Address + return m.Backups } - return "" + return nil } // AppendResponse contains GLSN (Global Log Sequence Number) that indicates log // position in global log space. type AppendResponse struct { - GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,2,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *AppendResponse) Reset() { *m = AppendResponse{} } @@ -201,6 +150,13 @@ func (m *AppendResponse) GetGLSN() github_daumkakao_com_varlog_varlog_pkg_types. return 0 } +func (m *AppendResponse) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *AppendResponse) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -210,11 +166,9 @@ func (m *AppendResponse) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg // ReadRequest asks a storage node to retrieve log entry at the GLSN. type ReadRequest struct { - GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,2,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *ReadRequest) Reset() { *m = ReadRequest{} } @@ -257,6 +211,13 @@ func (m *ReadRequest) GetGLSN() github_daumkakao_com_varlog_varlog_pkg_types.GLS return 0 } +func (m *ReadRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *ReadRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -267,12 +228,9 @@ func (m *ReadRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_ty // ReadResponse contains the contents of the log entry which is retrieved by // the ReadRequest. type ReadResponse struct { - GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` - LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` - Payload []byte `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` + LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` + Payload []byte `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` } func (m *ReadResponse) Reset() { *m = ReadResponse{} } @@ -332,12 +290,10 @@ func (m *ReadResponse) GetPayload() []byte { // SubscribeRequest has GLSN which indicates an inclusive starting position // from which a client wants to receive. type SubscribeRequest struct { - GLSNBegin github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn_begin,json=glsnBegin,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn_begin,omitempty"` - GLSNEnd github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=glsn_end,json=glsnEnd,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn_end,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + GLSNBegin github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn_begin,json=glsnBegin,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn_begin,omitempty"` + GLSNEnd github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=glsn_end,json=glsnEnd,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn_end,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,3,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,4,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *SubscribeRequest) Reset() { *m = SubscribeRequest{} } @@ -387,6 +343,13 @@ func (m *SubscribeRequest) GetGLSNEnd() github_daumkakao_com_varlog_varlog_pkg_t return 0 } +func (m *SubscribeRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *SubscribeRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -396,12 +359,9 @@ func (m *SubscribeRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_p // SubscribeResponse comprises the contents of the log entry and its GLSN. type SubscribeResponse struct { - GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` - LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` - Payload []byte `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` + LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` + Payload []byte `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` } func (m *SubscribeResponse) Reset() { *m = SubscribeResponse{} } @@ -462,10 +422,8 @@ func (m *SubscribeResponse) GetPayload() []byte { // If async field is true, the trim operation returns immediately and the // storage node removes its log entry in the background. type TrimRequest struct { - GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` } func (m *TrimRequest) Reset() { *m = TrimRequest{} } @@ -501,6 +459,13 @@ func (m *TrimRequest) XXX_DiscardUnknown() { var xxx_messageInfo_TrimRequest proto.InternalMessageInfo +func (m *TrimRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *TrimRequest) GetGLSN() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { if m != nil { return m.GLSN @@ -510,7 +475,6 @@ func (m *TrimRequest) GetGLSN() github_daumkakao_com_varlog_varlog_pkg_types.GLS func init() { proto.RegisterType((*AppendRequest)(nil), "varlog.snpb.AppendRequest") - proto.RegisterType((*AppendRequest_BackupNode)(nil), "varlog.snpb.AppendRequest.BackupNode") proto.RegisterType((*AppendResponse)(nil), "varlog.snpb.AppendResponse") proto.RegisterType((*ReadRequest)(nil), "varlog.snpb.ReadRequest") proto.RegisterType((*ReadResponse)(nil), "varlog.snpb.ReadResponse") @@ -522,47 +486,48 @@ func init() { func init() { proto.RegisterFile("proto/snpb/log_io.proto", fileDescriptor_7692726f23e518ee) } var fileDescriptor_7692726f23e518ee = []byte{ - // 625 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x56, 0xcf, 0x6e, 0xd3, 0x4e, - 0x10, 0xee, 0x26, 0xfe, 0x35, 0xbf, 0x4c, 0x1a, 0xfe, 0xec, 0x01, 0x82, 0x2b, 0xe2, 0x2a, 0x12, - 0x52, 0x2f, 0x75, 0xa0, 0x5c, 0x50, 0x51, 0x25, 0x30, 0x44, 0xa8, 0x22, 0x6a, 0x55, 0x87, 0x13, - 0x1c, 0xc2, 0xba, 0xbb, 0x2c, 0x51, 0x1c, 0xaf, 0xf1, 0xda, 0xa0, 0x9e, 0x79, 0x08, 0x8e, 0xf0, - 0x2e, 0x5c, 0x7a, 0x42, 0x20, 0x38, 0xfb, 0x60, 0xde, 0xa2, 0x27, 0xb4, 0xeb, 0x3a, 0x75, 0xaa, - 0x56, 0xa2, 0x95, 0xc2, 0xa1, 0xa7, 0xec, 0xec, 0xcc, 0x7c, 0xf9, 0xbe, 0xd9, 0x9d, 0x59, 0xc3, - 0xcd, 0x30, 0x12, 0xb1, 0xe8, 0xca, 0x20, 0xf4, 0xba, 0xbe, 0xe0, 0xc3, 0x91, 0xb0, 0xf5, 0x0e, - 0x6e, 0xbc, 0x27, 0x91, 0x2f, 0xb8, 0xad, 0x3c, 0xe6, 0x1a, 0x1f, 0xc5, 0x6f, 0x13, 0xcf, 0xde, - 0x13, 0x93, 0x2e, 0x17, 0x5c, 0x74, 0x75, 0x8c, 0x97, 0xbc, 0xd1, 0x56, 0x0e, 0xa1, 0x56, 0x79, - 0xae, 0xb9, 0xcc, 0x85, 0xe0, 0x3e, 0x3b, 0x8e, 0x62, 0x93, 0x30, 0xde, 0xcf, 0x9d, 0x9d, 0x8f, - 0x55, 0x68, 0x3e, 0x0e, 0x43, 0x16, 0x50, 0x97, 0xbd, 0x4b, 0x98, 0x8c, 0x71, 0x0b, 0x6a, 0x21, - 0xd9, 0xf7, 0x05, 0xa1, 0x2d, 0xb4, 0x82, 0x56, 0x97, 0xdc, 0xc2, 0xc4, 0x02, 0x9a, 0x8a, 0x94, - 0x8c, 0x23, 0x46, 0x26, 0xc3, 0x11, 0x6d, 0x55, 0x56, 0xd0, 0x6a, 0xd3, 0x79, 0x9e, 0xa5, 0x56, - 0xa3, 0x2f, 0xf8, 0x40, 0xef, 0x6f, 0x3d, 0x3d, 0x4c, 0xad, 0x07, 0x47, 0x0c, 0x29, 0x49, 0x26, - 0x63, 0x32, 0x26, 0x42, 0x73, 0xcd, 0x35, 0x14, 0x3f, 0xe1, 0x98, 0x77, 0xe3, 0xfd, 0x90, 0x49, - 0xbb, 0x94, 0xeb, 0x36, 0xfc, 0xa9, 0x41, 0x71, 0x0f, 0x6a, 0x1e, 0xd9, 0x1b, 0x27, 0xa1, 0x6c, - 0x55, 0x57, 0xaa, 0xab, 0x8d, 0xf5, 0x3b, 0x76, 0xa9, 0x0e, 0xf6, 0x0c, 0x6f, 0xdb, 0xd1, 0x91, - 0xdb, 0x82, 0x32, 0xc7, 0x38, 0x48, 0xad, 0x05, 0xb7, 0xc8, 0x35, 0x3f, 0x23, 0x80, 0x63, 0x2f, - 0xfe, 0x00, 0x57, 0x65, 0x2c, 0x22, 0xc2, 0xd9, 0x30, 0x10, 0x94, 0x29, 0x21, 0x48, 0x0b, 0xd9, - 0xc9, 0x52, 0xab, 0x39, 0xc8, 0x5d, 0x2a, 0x52, 0x4b, 0xd9, 0x38, 0x97, 0x94, 0x99, 0x6c, 0xb7, - 0x29, 0x4b, 0x26, 0x55, 0x95, 0x25, 0x94, 0x46, 0x4c, 0x4a, 0x5d, 0xb9, 0xba, 0x5b, 0x98, 0x9d, - 0x5f, 0x08, 0xae, 0x14, 0x6a, 0x64, 0x28, 0x02, 0xc9, 0xf0, 0x2e, 0x18, 0xdc, 0x97, 0x81, 0xa6, - 0x66, 0x38, 0x9b, 0x59, 0x6a, 0x19, 0xcf, 0xfa, 0x83, 0xed, 0xc3, 0xd4, 0xba, 0x77, 0x2e, 0x46, - 0x2a, 0xc9, 0xd5, 0x50, 0xff, 0xfc, 0xfc, 0x3a, 0x3f, 0x10, 0x34, 0x5c, 0x46, 0xa6, 0x57, 0xeb, - 0x32, 0x68, 0xfa, 0x86, 0x60, 0x29, 0xd7, 0x34, 0xbf, 0x83, 0xda, 0x05, 0xc3, 0x57, 0x90, 0x95, - 0x63, 0xc8, 0xfe, 0x45, 0x20, 0xfb, 0x1a, 0x52, 0x41, 0x95, 0xbb, 0xba, 0x3a, 0xd3, 0xd5, 0x9d, - 0xaf, 0x15, 0xb8, 0x36, 0x48, 0x3c, 0xb9, 0x17, 0x8d, 0x3c, 0x56, 0x9c, 0x14, 0x01, 0x50, 0x4c, - 0x86, 0x1e, 0xe3, 0xa3, 0x42, 0x9a, 0x93, 0xa5, 0x56, 0x5d, 0xb1, 0x74, 0xd4, 0xe6, 0xc5, 0xf4, - 0xd5, 0x15, 0xaa, 0xce, 0xc7, 0xaf, 0xe0, 0x7f, 0xfd, 0x17, 0x2c, 0xa0, 0x47, 0x42, 0x1f, 0x65, - 0xa9, 0x55, 0x53, 0x61, 0xbd, 0x80, 0x5e, 0x0c, 0xbe, 0xa6, 0x10, 0x7b, 0xc1, 0x29, 0xa3, 0xaa, - 0x3a, 0xe7, 0x6b, 0xf1, 0x13, 0xc1, 0xf5, 0x52, 0x15, 0x2f, 0xc9, 0xdd, 0x78, 0x0d, 0x8d, 0x17, - 0xd1, 0x68, 0x32, 0xbf, 0xfe, 0x5d, 0xff, 0x54, 0x81, 0xff, 0xfa, 0x82, 0x6f, 0xed, 0xe0, 0x27, - 0xb0, 0x98, 0x8f, 0x40, 0x6c, 0x9e, 0x3d, 0xe5, 0xcd, 0xe5, 0x53, 0x7d, 0x79, 0xb9, 0x3b, 0x0b, - 0x78, 0x13, 0x0c, 0xd5, 0x9c, 0xb8, 0x35, 0x13, 0x56, 0x9a, 0x41, 0xe6, 0xad, 0x53, 0x3c, 0xd3, - 0xf4, 0x6d, 0xa8, 0x4f, 0x0f, 0x11, 0xdf, 0x9e, 0x89, 0x3c, 0xd9, 0x22, 0x66, 0xfb, 0x2c, 0x77, - 0x81, 0x76, 0x17, 0xe1, 0x0d, 0x30, 0x54, 0xfd, 0x4e, 0xd0, 0x29, 0x95, 0xd4, 0xbc, 0x61, 0xe7, - 0xaf, 0xb3, 0x5d, 0xbc, 0xce, 0x76, 0x4f, 0xbd, 0xce, 0x9d, 0x05, 0xe7, 0xe1, 0x41, 0xd6, 0x46, - 0xdf, 0xb3, 0x36, 0xfa, 0xf2, 0xbb, 0x8d, 0x5e, 0xae, 0xfd, 0x4d, 0x81, 0xa7, 0x1f, 0x0f, 0xde, - 0xa2, 0x5e, 0xdf, 0xff, 0x13, 0x00, 0x00, 0xff, 0xff, 0x97, 0xa4, 0xb3, 0xd9, 0x51, 0x08, 0x00, - 0x00, + // 654 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x56, 0x4f, 0x4f, 0xd4, 0x40, + 0x14, 0xdf, 0x76, 0x0b, 0x0b, 0xb3, 0x60, 0x74, 0x0e, 0xb2, 0x16, 0x6d, 0x49, 0x4f, 0x5c, 0x68, + 0x15, 0x3d, 0x18, 0x23, 0x89, 0x16, 0x09, 0x21, 0x6e, 0x30, 0x74, 0x39, 0x69, 0x22, 0x99, 0x6e, + 0xc7, 0xb1, 0xd9, 0xb6, 0x33, 0xf6, 0x8f, 0x09, 0xdf, 0xc2, 0x9b, 0x57, 0x13, 0x3f, 0x81, 0x89, + 0x07, 0x8f, 0x1e, 0x39, 0x19, 0x12, 0x2f, 0x9e, 0x7a, 0xe8, 0x7e, 0x0b, 0x4e, 0x66, 0xa6, 0xdb, + 0xa5, 0x4b, 0x30, 0x0a, 0x11, 0x0f, 0xc4, 0xd3, 0xb6, 0xf3, 0xde, 0xfb, 0xbd, 0x37, 0xbf, 0xf7, + 0xdb, 0xd7, 0x07, 0x16, 0x58, 0x4c, 0x53, 0x6a, 0x25, 0x11, 0x73, 0xad, 0x80, 0x92, 0x3d, 0x9f, + 0x9a, 0xe2, 0x04, 0xb6, 0xdf, 0xa2, 0x38, 0xa0, 0xc4, 0xe4, 0x16, 0x75, 0x85, 0xf8, 0xe9, 0xeb, + 0xcc, 0x35, 0xfb, 0x34, 0xb4, 0x08, 0x25, 0xd4, 0x12, 0x3e, 0x6e, 0xf6, 0x4a, 0xbc, 0x95, 0x10, + 0xfc, 0xa9, 0x8c, 0x55, 0x17, 0x09, 0xa5, 0x24, 0xc0, 0xc7, 0x5e, 0x38, 0x64, 0xe9, 0xfe, 0xc8, + 0xb8, 0x50, 0x02, 0x33, 0xd7, 0x0a, 0x71, 0x8a, 0x3c, 0x94, 0xa2, 0xd2, 0x60, 0x7c, 0x91, 0xc1, + 0xfc, 0x63, 0xc6, 0x70, 0xe4, 0x39, 0xf8, 0x4d, 0x86, 0x93, 0x14, 0x76, 0x40, 0x8b, 0xa1, 0xfd, + 0x80, 0x22, 0xaf, 0x23, 0x2d, 0x49, 0xcb, 0x73, 0x4e, 0xf5, 0x0a, 0x5f, 0x82, 0x99, 0x94, 0x32, + 0xbf, 0xbf, 0xe7, 0x7b, 0x1d, 0x79, 0x49, 0x5a, 0x9e, 0xb2, 0xd7, 0x8b, 0x5c, 0x6f, 0xed, 0xf2, + 0xb3, 0xad, 0x27, 0x47, 0xb9, 0x7e, 0x6f, 0x54, 0xb1, 0x87, 0xb2, 0x70, 0x80, 0x06, 0x88, 0x8a, + 0xda, 0xcb, 0xd4, 0xd5, 0x0f, 0x1b, 0x10, 0x2b, 0xdd, 0x67, 0x38, 0x31, 0x47, 0x71, 0x4e, 0x4b, + 0x80, 0x6e, 0x79, 0x90, 0x82, 0x79, 0xce, 0x46, 0x92, 0xc6, 0x18, 0x85, 0x3c, 0x49, 0x53, 0x24, + 0x79, 0x5a, 0xe4, 0x7a, 0xbb, 0x4b, 0x49, 0x4f, 0x9c, 0x8b, 0x44, 0xf7, 0xcf, 0x94, 0xa8, 0x16, + 0xeb, 0xb4, 0x83, 0xf1, 0x8b, 0x07, 0x1f, 0x82, 0x96, 0x8b, 0xfa, 0x83, 0x8c, 0x25, 0x1d, 0x65, + 0xa9, 0xb9, 0xdc, 0x5e, 0xbd, 0x69, 0x8e, 0x1a, 0x50, 0xd1, 0x65, 0xf6, 0x52, 0x1a, 0x23, 0x82, + 0xb7, 0xa9, 0x87, 0x6d, 0xe5, 0x20, 0xd7, 0x1b, 0x4e, 0x15, 0x62, 0x7c, 0x96, 0xc1, 0x95, 0x8a, + 0xba, 0x84, 0xd1, 0x28, 0xc1, 0x70, 0x07, 0x28, 0x24, 0x48, 0x22, 0x41, 0x9c, 0x62, 0xaf, 0x15, + 0xb9, 0xae, 0x6c, 0x76, 0x7b, 0xdb, 0x47, 0xb9, 0x7e, 0xe7, 0x4c, 0x15, 0xf3, 0x20, 0x47, 0x40, + 0x5d, 0x3a, 0xd2, 0x8d, 0x4f, 0x32, 0x68, 0x3b, 0x18, 0x8d, 0xf5, 0xf6, 0x9f, 0xb3, 0xdf, 0x73, + 0xf6, 0x4d, 0x02, 0x73, 0x25, 0x67, 0x17, 0x27, 0xb4, 0x1d, 0xa0, 0x04, 0x1c, 0x52, 0x3e, 0x86, + 0xec, 0x9e, 0x07, 0xb2, 0x2b, 0x20, 0x39, 0x54, 0x7d, 0x94, 0x34, 0x27, 0x46, 0x89, 0xf1, 0xb1, + 0x09, 0xae, 0xf6, 0x32, 0x37, 0xe9, 0xc7, 0xbe, 0x8b, 0x2b, 0x25, 0x20, 0x00, 0x78, 0x25, 0x7b, + 0x2e, 0x26, 0x7e, 0x75, 0x35, 0xbb, 0xc8, 0xf5, 0x59, 0x5e, 0xa5, 0xcd, 0x0f, 0xcf, 0x77, 0xbf, + 0x59, 0x8e, 0x2a, 0xe2, 0xe1, 0x0b, 0x30, 0x23, 0x52, 0xe0, 0xc8, 0x1b, 0x5d, 0xf4, 0x11, 0x57, + 0x06, 0x77, 0xdb, 0x88, 0xbc, 0xf3, 0xc1, 0xb7, 0x38, 0xe2, 0x46, 0x34, 0x39, 0x1f, 0x9b, 0xff, + 0x42, 0x76, 0xca, 0x05, 0xcb, 0xee, 0xbb, 0x04, 0xae, 0xd5, 0xba, 0x74, 0x49, 0xb4, 0xf7, 0x55, + 0x02, 0xed, 0xdd, 0xd8, 0x0f, 0x2b, 0xd9, 0xd5, 0xdb, 0x26, 0x5d, 0x40, 0xdb, 0x2a, 0xbe, 0xe4, + 0xbf, 0xc6, 0xd7, 0xea, 0x7b, 0x19, 0x4c, 0x75, 0x29, 0xd9, 0x7a, 0x06, 0xd7, 0xc1, 0x74, 0xf9, + 0x0d, 0x82, 0xaa, 0x59, 0x5b, 0x1e, 0xcc, 0x89, 0x6f, 0xba, 0xba, 0x78, 0xaa, 0xad, 0xec, 0xa7, + 0xd1, 0x80, 0x6b, 0x40, 0xe1, 0xd3, 0x05, 0x76, 0x26, 0xdc, 0x6a, 0x43, 0x5a, 0xbd, 0x71, 0x8a, + 0x65, 0x1c, 0xbe, 0x0d, 0x66, 0xc7, 0x2a, 0x81, 0xb7, 0x26, 0x3c, 0x4f, 0xfe, 0xc7, 0x55, 0xed, + 0x57, 0xe6, 0x0a, 0xed, 0xb6, 0x04, 0x1f, 0x00, 0x85, 0xf7, 0xe7, 0x44, 0x39, 0xb5, 0x96, 0xa9, + 0xd7, 0xcd, 0x72, 0xd9, 0x31, 0xab, 0x65, 0xc7, 0xdc, 0xe0, 0xcb, 0x8e, 0xd1, 0xb0, 0x37, 0x0f, + 0x0a, 0x4d, 0x3a, 0x2c, 0x34, 0xe9, 0xdd, 0x50, 0x6b, 0x7c, 0x18, 0x6a, 0xd2, 0xe1, 0x50, 0x6b, + 0xfc, 0x18, 0x6a, 0x8d, 0xe7, 0x2b, 0x7f, 0x42, 0xf6, 0x78, 0x2f, 0x73, 0xa7, 0xc5, 0xf3, 0xdd, + 0x9f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xbd, 0x65, 0xf4, 0x07, 0xac, 0x09, 0x00, 0x00, } // Reference imports to suppress errors if they are not otherwise used. @@ -801,10 +766,6 @@ func (m *AppendRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Backups) > 0 { for iNdEx := len(m.Backups) - 1; iNdEx >= 0; iNdEx-- { { @@ -816,12 +777,17 @@ func (m *AppendRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintLogIo(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1a + dAtA[i] = 0x22 } } if m.LogStreamID != 0 { i = encodeVarintLogIo(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x18 + } + if m.TopicID != 0 { + i = encodeVarintLogIo(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x10 } if len(m.Payload) > 0 { @@ -834,45 +800,6 @@ func (m *AppendRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AppendRequest_BackupNode) Marshal() (dAtA []byte, err error) { - size := m.ProtoSize() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *AppendRequest_BackupNode) MarshalTo(dAtA []byte) (int, error) { - size := m.ProtoSize() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *AppendRequest_BackupNode) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.Address) > 0 { - i -= len(m.Address) - copy(dAtA[i:], m.Address) - i = encodeVarintLogIo(dAtA, i, uint64(len(m.Address))) - i-- - dAtA[i] = 0x12 - } - if m.StorageNodeID != 0 { - i = encodeVarintLogIo(dAtA, i, uint64(m.StorageNodeID)) - i-- - dAtA[i] = 0x8 - } - return len(dAtA) - i, nil -} - func (m *AppendResponse) Marshal() (dAtA []byte, err error) { size := m.ProtoSize() dAtA = make([]byte, size) @@ -893,13 +820,14 @@ func (m *AppendResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintLogIo(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x18 + } + if m.TopicID != 0 { + i = encodeVarintLogIo(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x10 } if m.GLSN != 0 { @@ -930,13 +858,14 @@ func (m *ReadRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintLogIo(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x18 + } + if m.TopicID != 0 { + i = encodeVarintLogIo(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x10 } if m.GLSN != 0 { @@ -967,10 +896,6 @@ func (m *ReadResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Payload) > 0 { i -= len(m.Payload) copy(dAtA[i:], m.Payload) @@ -1011,13 +936,14 @@ func (m *SubscribeRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintLogIo(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x20 + } + if m.TopicID != 0 { + i = encodeVarintLogIo(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x18 } if m.GLSNEnd != 0 { @@ -1053,10 +979,6 @@ func (m *SubscribeResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Payload) > 0 { i -= len(m.Payload) copy(dAtA[i:], m.Payload) @@ -1097,13 +1019,14 @@ func (m *TrimRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.GLSN != 0 { i = encodeVarintLogIo(dAtA, i, uint64(m.GLSN)) i-- + dAtA[i] = 0x10 + } + if m.TopicID != 0 { + i = encodeVarintLogIo(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x8 } return len(dAtA) - i, nil @@ -1130,6 +1053,9 @@ func (m *AppendRequest) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovLogIo(uint64(l)) } + if m.TopicID != 0 { + n += 1 + sovLogIo(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovLogIo(uint64(m.LogStreamID)) } @@ -1139,28 +1065,6 @@ func (m *AppendRequest) ProtoSize() (n int) { n += 1 + l + sovLogIo(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } - return n -} - -func (m *AppendRequest_BackupNode) ProtoSize() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if m.StorageNodeID != 0 { - n += 1 + sovLogIo(uint64(m.StorageNodeID)) - } - l = len(m.Address) - if l > 0 { - n += 1 + l + sovLogIo(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1173,12 +1077,12 @@ func (m *AppendResponse) ProtoSize() (n int) { if m.GLSN != 0 { n += 1 + sovLogIo(uint64(m.GLSN)) } + if m.TopicID != 0 { + n += 1 + sovLogIo(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovLogIo(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1191,12 +1095,12 @@ func (m *ReadRequest) ProtoSize() (n int) { if m.GLSN != 0 { n += 1 + sovLogIo(uint64(m.GLSN)) } + if m.TopicID != 0 { + n += 1 + sovLogIo(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovLogIo(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1216,9 +1120,6 @@ func (m *ReadResponse) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovLogIo(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1234,12 +1135,12 @@ func (m *SubscribeRequest) ProtoSize() (n int) { if m.GLSNEnd != 0 { n += 1 + sovLogIo(uint64(m.GLSNEnd)) } + if m.TopicID != 0 { + n += 1 + sovLogIo(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovLogIo(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1259,9 +1160,6 @@ func (m *SubscribeResponse) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovLogIo(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1271,12 +1169,12 @@ func (m *TrimRequest) ProtoSize() (n int) { } var l int _ = l + if m.TopicID != 0 { + n += 1 + sovLogIo(uint64(m.TopicID)) + } if m.GLSN != 0 { n += 1 + sovLogIo(uint64(m.GLSN)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1350,6 +1248,25 @@ func (m *AppendRequest) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowLogIo + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -1368,7 +1285,7 @@ func (m *AppendRequest) Unmarshal(dAtA []byte) error { break } } - case 3: + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Backups", wireType) } @@ -1397,7 +1314,7 @@ func (m *AppendRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Backups = append(m.Backups, AppendRequest_BackupNode{}) + m.Backups = append(m.Backups, varlogpb.StorageNode{}) if err := m.Backups[len(m.Backups)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } @@ -1414,7 +1331,6 @@ func (m *AppendRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1424,7 +1340,7 @@ func (m *AppendRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *AppendRequest_BackupNode) Unmarshal(dAtA []byte) error { +func (m *AppendResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -1447,17 +1363,17 @@ func (m *AppendRequest_BackupNode) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: BackupNode: wiretype end group for non-group") + return fmt.Errorf("proto: AppendResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: BackupNode: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AppendResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StorageNodeID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GLSN", wireType) } - m.StorageNodeID = 0 + m.GLSN = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowLogIo @@ -1467,99 +1383,16 @@ func (m *AppendRequest_BackupNode) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.StorageNodeID |= github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID(b&0x7F) << shift + m.GLSN |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift if b < 0x80 { break } } case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowLogIo - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthLogIo - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthLogIo - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Address = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipLogIo(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthLogIo - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AppendResponse) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowLogIo - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AppendResponse: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AppendResponse: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field GLSN", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) } - m.GLSN = 0 + m.TopicID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowLogIo @@ -1569,12 +1402,12 @@ func (m *AppendResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.GLSN |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift if b < 0x80 { break } } - case 2: + case 3: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -1605,7 +1438,6 @@ func (m *AppendResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1664,6 +1496,25 @@ func (m *ReadRequest) Unmarshal(dAtA []byte) error { } } case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowLogIo + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -1694,7 +1545,6 @@ func (m *ReadRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1817,7 +1667,6 @@ func (m *ReadResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1895,6 +1744,25 @@ func (m *SubscribeRequest) Unmarshal(dAtA []byte) error { } } case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowLogIo + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -1925,7 +1793,6 @@ func (m *SubscribeRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2048,7 +1915,6 @@ func (m *SubscribeResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2088,6 +1954,25 @@ func (m *TrimRequest) Unmarshal(dAtA []byte) error { } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowLogIo + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field GLSN", wireType) } @@ -2118,7 +2003,6 @@ func (m *TrimRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/snpb/log_io.proto b/proto/snpb/log_io.proto index 6b1f0f4f8..8f7a8e21a 100644 --- a/proto/snpb/log_io.proto +++ b/proto/snpb/log_io.proto @@ -5,30 +5,32 @@ package varlog.snpb; import "github.com/gogo/protobuf/gogoproto/gogo.proto"; import "google/protobuf/empty.proto"; +import "varlogpb/metadata.proto"; + option go_package = "github.com/kakao/varlog/proto/snpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; // AppendRequest is a message to send a payload to a storage node. It contains // a vector of storage nodes to replicate the payload. message AppendRequest { - message BackupNode { - uint32 storage_node_id = 1 [ - (gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.StorageNodeID", - (gogoproto.customname) = "StorageNodeID" - ]; - string address = 2; - } bytes payload = 1; - uint32 log_stream_id = 2 [ + int32 topic_id = 2 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 3 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - repeated BackupNode backups = 3 [(gogoproto.nullable) = false]; + repeated varlogpb.StorageNode backups = 4 [(gogoproto.nullable) = false]; } // AppendResponse contains GLSN (Global Log Sequence Number) that indicates log @@ -39,7 +41,12 @@ message AppendResponse { "github.com/kakao/varlog/pkg/types.GLSN", (gogoproto.customname) = "GLSN" ]; - uint32 log_stream_id = 2 [ + int32 topic_id = 2 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 3 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -53,7 +60,12 @@ message ReadRequest { "github.com/kakao/varlog/pkg/types.GLSN", (gogoproto.customname) = "GLSN" ]; - uint32 log_stream_id = 2 [ + int32 topic_id = 2 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 3 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -89,7 +101,12 @@ message SubscribeRequest { "github.com/kakao/varlog/pkg/types.GLSN", (gogoproto.customname) = "GLSNEnd" ]; - uint32 log_stream_id = 3 [ + int32 topic_id = 3 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 4 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -115,7 +132,12 @@ message SubscribeResponse { // If async field is true, the trim operation returns immediately and the // storage node removes its log entry in the background. message TrimRequest { - uint64 glsn = 1 [ + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + uint64 glsn = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.GLSN", (gogoproto.customname) = "GLSN" diff --git a/proto/snpb/log_stream_reporter.pb.go b/proto/snpb/log_stream_reporter.pb.go index 6829b0f7c..1252c5e0b 100644 --- a/proto/snpb/log_stream_reporter.pb.go +++ b/proto/snpb/log_stream_reporter.pb.go @@ -4,7 +4,6 @@ package snpb import ( - bytes "bytes" context "context" fmt "fmt" io "io" @@ -37,10 +36,8 @@ type LogStreamUncommitReport struct { LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` UncommittedLLSNOffset github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=uncommitted_llsn_offset,json=uncommittedLlsnOffset,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"uncommitted_llsn_offset,omitempty"` UncommittedLLSNLength uint64 `protobuf:"varint,3,opt,name=uncommitted_llsn_length,json=uncommittedLlsnLength,proto3" json:"uncommitted_llsn_length,omitempty"` - HighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,4,opt,name=high_watermark,json=highWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"high_watermark,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Version github_daumkakao_com_varlog_varlog_pkg_types.Version `protobuf:"varint,4,opt,name=version,proto3,casttype=github.com/kakao/varlog/pkg/types.Version" json:"version,omitempty"` + HighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,5,opt,name=high_watermark,json=highWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"high_watermark,omitempty"` } func (m *LogStreamUncommitReport) Reset() { *m = LogStreamUncommitReport{} } @@ -97,6 +94,13 @@ func (m *LogStreamUncommitReport) GetUncommittedLLSNLength() uint64 { return 0 } +func (m *LogStreamUncommitReport) GetVersion() github_daumkakao_com_varlog_varlog_pkg_types.Version { + if m != nil { + return m.Version + } + return 0 +} + func (m *LogStreamUncommitReport) GetHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { if m != nil { return m.HighWatermark @@ -105,9 +109,6 @@ func (m *LogStreamUncommitReport) GetHighWatermark() github_daumkakao_com_varlog } type GetReportRequest struct { - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` } func (m *GetReportRequest) Reset() { *m = GetReportRequest{} } @@ -144,11 +145,8 @@ func (m *GetReportRequest) XXX_DiscardUnknown() { var xxx_messageInfo_GetReportRequest proto.InternalMessageInfo type GetReportResponse struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - UncommitReports []LogStreamUncommitReport `protobuf:"bytes,2,rep,name=uncommit_reports,json=uncommitReports,proto3" json:"uncommit_reports"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + UncommitReports []LogStreamUncommitReport `protobuf:"bytes,2,rep,name=uncommit_reports,json=uncommitReports,proto3" json:"uncommit_reports"` } func (m *GetReportResponse) Reset() { *m = GetReportResponse{} } @@ -204,15 +202,13 @@ func (m *GetReportResponse) GetUncommitReports() []LogStreamUncommitReport { // Field commit_result contains positions of all log entries of log streams in // a storage node which is a receiver of this GlobalLogStreamDescriptor. type LogStreamCommitResult struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - CommittedLLSNOffset github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=committed_llsn_offset,json=committedLlsnOffset,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"committed_llsn_offset,omitempty"` - CommittedGLSNOffset github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,3,opt,name=committed_glsn_offset,json=committedGlsnOffset,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"committed_glsn_offset,omitempty"` - CommittedGLSNLength uint64 `protobuf:"varint,4,opt,name=committed_glsn_length,json=committedGlsnLength,proto3" json:"committed_glsn_length,omitempty"` - HighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,5,opt,name=high_watermark,json=highWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"high_watermark,omitempty"` - PrevHighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,6,opt,name=prev_high_watermark,json=prevHighWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"prev_high_watermark,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,2,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + CommittedLLSNOffset github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,3,opt,name=committed_llsn_offset,json=committedLlsnOffset,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"committed_llsn_offset,omitempty"` + CommittedGLSNOffset github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,4,opt,name=committed_glsn_offset,json=committedGlsnOffset,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"committed_glsn_offset,omitempty"` + CommittedGLSNLength uint64 `protobuf:"varint,5,opt,name=committed_glsn_length,json=committedGlsnLength,proto3" json:"committed_glsn_length,omitempty"` + Version github_daumkakao_com_varlog_varlog_pkg_types.Version `protobuf:"varint,6,opt,name=version,proto3,casttype=github.com/kakao/varlog/pkg/types.Version" json:"version,omitempty"` + HighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,7,opt,name=high_watermark,json=highWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"high_watermark,omitempty"` } func (m *LogStreamCommitResult) Reset() { *m = LogStreamCommitResult{} } @@ -255,6 +251,13 @@ func (m *LogStreamCommitResult) GetLogStreamID() github_daumkakao_com_varlog_var return 0 } +func (m *LogStreamCommitResult) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *LogStreamCommitResult) GetCommittedLLSNOffset() github_daumkakao_com_varlog_varlog_pkg_types.LLSN { if m != nil { return m.CommittedLLSNOffset @@ -276,26 +279,23 @@ func (m *LogStreamCommitResult) GetCommittedGLSNLength() uint64 { return 0 } -func (m *LogStreamCommitResult) GetHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { +func (m *LogStreamCommitResult) GetVersion() github_daumkakao_com_varlog_varlog_pkg_types.Version { if m != nil { - return m.HighWatermark + return m.Version } return 0 } -func (m *LogStreamCommitResult) GetPrevHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { +func (m *LogStreamCommitResult) GetHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { if m != nil { - return m.PrevHighWatermark + return m.HighWatermark } return 0 } type CommitRequest struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - CommitResult LogStreamCommitResult `protobuf:"bytes,2,opt,name=commit_result,json=commitResult,proto3" json:"commit_result"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + CommitResult LogStreamCommitResult `protobuf:"bytes,2,opt,name=commit_result,json=commitResult,proto3" json:"commit_result"` } func (m *CommitRequest) Reset() { *m = CommitRequest{} } @@ -346,9 +346,6 @@ func (m *CommitRequest) GetCommitResult() LogStreamCommitResult { } type CommitResponse struct { - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` } func (m *CommitResponse) Reset() { *m = CommitResponse{} } @@ -398,47 +395,52 @@ func init() { } var fileDescriptor_b6a839cf0bdc32d5 = []byte{ - // 637 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xc4, 0x55, 0x3f, 0x6f, 0xd3, 0x40, - 0x1c, 0xed, 0x35, 0x69, 0x25, 0x2e, 0xb8, 0x7f, 0xae, 0x8a, 0x1a, 0x82, 0x88, 0x2b, 0xab, 0x43, - 0x96, 0x3a, 0x50, 0x84, 0x84, 0xca, 0x96, 0x82, 0x02, 0xaa, 0x49, 0xc1, 0x51, 0x85, 0x84, 0x90, - 0x2c, 0xa7, 0xbe, 0x3a, 0x51, 0x6c, 0x9f, 0xf1, 0x9d, 0x5b, 0xb1, 0x31, 0xf1, 0x39, 0x3a, 0x31, - 0xf1, 0x41, 0x2a, 0x26, 0x36, 0x36, 0x0f, 0xee, 0xc2, 0x67, 0xe8, 0x84, 0x7c, 0x76, 0x1c, 0xbb, - 0x71, 0x10, 0x81, 0x4a, 0x9d, 0x92, 0xbb, 0xdf, 0xef, 0xf7, 0xde, 0xd3, 0xdd, 0x7b, 0x67, 0xb8, - 0xed, 0x7a, 0x84, 0x91, 0x16, 0x75, 0xdc, 0x7e, 0xcb, 0x22, 0xa6, 0x46, 0x99, 0x87, 0x75, 0x5b, - 0xf3, 0xb0, 0x4b, 0x3c, 0x86, 0x3d, 0x99, 0x97, 0x51, 0xe5, 0x54, 0xf7, 0x2c, 0x62, 0xca, 0x51, - 0x5b, 0x7d, 0xc7, 0x1c, 0xb2, 0x81, 0xdf, 0x97, 0x8f, 0x89, 0xdd, 0x32, 0x89, 0x49, 0x5a, 0xbc, - 0xa7, 0xef, 0x9f, 0xf0, 0x55, 0x8c, 0x17, 0xfd, 0x8b, 0x67, 0xa5, 0xef, 0x25, 0xb8, 0xa9, 0x10, - 0xb3, 0xc7, 0x81, 0x8f, 0x9c, 0x63, 0x62, 0xdb, 0x43, 0xa6, 0x72, 0x7c, 0x44, 0xa0, 0x90, 0x21, - 0x1d, 0x1a, 0x35, 0xb0, 0x05, 0x9a, 0x42, 0xfb, 0x20, 0x0c, 0xc4, 0x4a, 0x3a, 0xf3, 0xea, 0xf9, - 0x55, 0x20, 0x3e, 0x4d, 0x48, 0x0d, 0xdd, 0xb7, 0x47, 0xfa, 0x48, 0x27, 0x9c, 0x3e, 0x96, 0x35, - 0xfe, 0x71, 0x47, 0x66, 0x8b, 0x7d, 0x72, 0x31, 0x95, 0x33, 0xb3, 0x6a, 0xc5, 0x4a, 0x17, 0x06, - 0xfa, 0x02, 0xe0, 0xa6, 0x9f, 0x68, 0x60, 0xd8, 0xd0, 0x2c, 0x8b, 0x3a, 0x1a, 0x39, 0x39, 0xa1, - 0x98, 0xd5, 0x16, 0xb7, 0x40, 0xb3, 0xdc, 0xee, 0x86, 0x81, 0x58, 0x3d, 0x9a, 0xb4, 0x28, 0x4a, - 0xaf, 0x7b, 0xc8, 0x1b, 0xae, 0x02, 0xf1, 0xd1, 0x7c, 0x2a, 0x94, 0x5e, 0x57, 0xad, 0x66, 0xe8, - 0x14, 0x8b, 0x3a, 0x31, 0x16, 0x7a, 0x5b, 0xa0, 0xc3, 0xc2, 0x8e, 0xc9, 0x06, 0xb5, 0x12, 0xd7, - 0x71, 0xaf, 0x40, 0x87, 0xc2, 0x1b, 0xa6, 0x20, 0xe3, 0x6d, 0xf4, 0x01, 0xae, 0x0c, 0x86, 0xe6, - 0x40, 0x3b, 0xd3, 0x19, 0xf6, 0x6c, 0xdd, 0x1b, 0xd5, 0xca, 0x1c, 0xe9, 0xc9, 0xdc, 0xc2, 0x3b, - 0x91, 0x70, 0x21, 0x02, 0x7b, 0x37, 0xc6, 0xda, 0x2b, 0xff, 0x3a, 0x17, 0x81, 0x84, 0xe0, 0x5a, - 0x07, 0x27, 0xb7, 0xa7, 0xe2, 0x8f, 0x3e, 0xa6, 0x4c, 0xba, 0x04, 0x70, 0x3d, 0xb3, 0x49, 0x5d, - 0xe2, 0x50, 0x8c, 0xce, 0xe0, 0x2a, 0x65, 0xc4, 0xd3, 0x4d, 0xac, 0x39, 0xc4, 0xc0, 0x93, 0xcb, - 0x3d, 0x0c, 0x03, 0x51, 0xe8, 0xc5, 0xa5, 0x2e, 0x31, 0x30, 0xbf, 0xde, 0xbd, 0xb9, 0xf4, 0xe5, - 0xa6, 0x55, 0x81, 0x66, 0x96, 0x06, 0x3a, 0x82, 0x6b, 0xe3, 0xf3, 0x49, 0x6c, 0x4c, 0x6b, 0x8b, - 0x5b, 0xa5, 0x66, 0x65, 0x77, 0x5b, 0xce, 0xd8, 0x58, 0x9e, 0xe1, 0xc9, 0x76, 0xf9, 0x22, 0x10, - 0x17, 0xd4, 0x55, 0x3f, 0xb7, 0x4b, 0xa5, 0xaf, 0x4b, 0xb0, 0x9a, 0x8e, 0xec, 0x27, 0x25, 0xea, - 0x5b, 0xb7, 0x60, 0xe2, 0xcf, 0x00, 0x56, 0xff, 0x64, 0x61, 0x25, 0x0c, 0xc4, 0x8d, 0xfd, 0x9b, - 0x32, 0xf0, 0x46, 0x91, 0x7d, 0xf3, 0x12, 0xcc, 0x8c, 0x84, 0x52, 0x81, 0x84, 0xce, 0xbf, 0x4b, - 0xe8, 0xe4, 0x25, 0x74, 0x26, 0x12, 0x0e, 0xa6, 0x14, 0x24, 0xf9, 0x89, 0x5d, 0xbf, 0x39, 0xa5, - 0x20, 0x49, 0x4f, 0x1e, 0x6c, 0x66, 0x76, 0x96, 0x6e, 0x2e, 0x3b, 0x08, 0xc3, 0x0d, 0xd7, 0xc3, - 0xa7, 0xda, 0x35, 0x8a, 0xe5, 0xff, 0xa1, 0x58, 0x8f, 0x10, 0x5f, 0x16, 0x44, 0xf4, 0x27, 0x80, - 0xc2, 0xd8, 0x9f, 0x3c, 0xa0, 0xb7, 0x17, 0xc5, 0xd7, 0x50, 0x48, 0x83, 0x18, 0x45, 0x85, 0xfb, - 0xb3, 0xb2, 0x2b, 0x15, 0xe7, 0x30, 0x1b, 0xaa, 0x24, 0x85, 0x77, 0x8f, 0x33, 0x7b, 0xd2, 0x1a, - 0x5c, 0x49, 0x7b, 0xf8, 0x23, 0xb3, 0xfb, 0x0d, 0xc0, 0xf5, 0x74, 0x5e, 0x4d, 0xbe, 0x59, 0xe8, - 0x0d, 0xbc, 0x93, 0xbe, 0x47, 0xe8, 0x41, 0x8e, 0xec, 0xfa, 0xe3, 0x55, 0x6f, 0xcc, 0x2a, 0xc7, - 0x0c, 0xd2, 0x42, 0x13, 0x3c, 0x04, 0xe8, 0x05, 0x5c, 0x8e, 0x99, 0x51, 0x3d, 0xd7, 0x9f, 0x3b, - 0xe7, 0xfa, 0xfd, 0xc2, 0xda, 0x04, 0xa8, 0xfd, 0xec, 0x22, 0x6c, 0x80, 0x1f, 0x61, 0x03, 0x9c, - 0x5f, 0x36, 0xc0, 0xfb, 0x9d, 0xbf, 0x39, 0xe8, 0xf4, 0xeb, 0xdc, 0x5f, 0xe6, 0xff, 0x1f, 0xff, - 0x0e, 0x00, 0x00, 0xff, 0xff, 0x6b, 0x75, 0x43, 0xcc, 0xb2, 0x07, 0x00, 0x00, + // 705 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xc4, 0x96, 0xcf, 0x6e, 0xd3, 0x4c, + 0x14, 0xc5, 0x33, 0x6d, 0x9a, 0x7c, 0xdf, 0x84, 0xf4, 0xcf, 0x54, 0x51, 0x43, 0x10, 0x76, 0x65, + 0x75, 0x91, 0x4d, 0x13, 0x28, 0x20, 0x55, 0x5d, 0xa6, 0x40, 0xa8, 0x6a, 0x52, 0x70, 0x28, 0x48, + 0x08, 0x11, 0x39, 0xf1, 0xd4, 0x89, 0x62, 0x7b, 0x8c, 0xc7, 0x6e, 0xc5, 0x8e, 0x05, 0x62, 0xcd, + 0x13, 0xa0, 0x3e, 0x00, 0x0f, 0xd2, 0x65, 0x77, 0xb0, 0xf2, 0xc2, 0xd9, 0xf0, 0x0c, 0x5d, 0x21, + 0x8f, 0x1d, 0xc7, 0x26, 0xae, 0x44, 0x50, 0xa4, 0xae, 0xea, 0x99, 0xb9, 0xf7, 0x9c, 0xa3, 0xd1, + 0xfd, 0x4d, 0x03, 0xb7, 0x4c, 0x8b, 0xd8, 0xa4, 0x4e, 0x0d, 0xb3, 0x5b, 0xd7, 0x88, 0xda, 0xa1, + 0xb6, 0x85, 0x65, 0xbd, 0x63, 0x61, 0x93, 0x58, 0x36, 0xb6, 0x6a, 0xec, 0x18, 0x15, 0x4e, 0x65, + 0x4b, 0x23, 0x6a, 0xcd, 0x2f, 0xab, 0x6c, 0xab, 0x03, 0xbb, 0xef, 0x74, 0x6b, 0x3d, 0xa2, 0xd7, + 0x55, 0xa2, 0x92, 0x3a, 0xab, 0xe9, 0x3a, 0x27, 0x6c, 0x15, 0xe8, 0xf9, 0x5f, 0x41, 0xaf, 0xf0, + 0x2d, 0x0b, 0x37, 0x44, 0xa2, 0xb6, 0x99, 0xf0, 0xb1, 0xd1, 0x23, 0xba, 0x3e, 0xb0, 0x25, 0xa6, + 0x8f, 0x08, 0x2c, 0xc6, 0x4c, 0x07, 0x4a, 0x19, 0x6c, 0x82, 0xea, 0x52, 0xe3, 0xd0, 0x73, 0xf9, + 0x42, 0xd4, 0x73, 0xf0, 0xf8, 0xca, 0xe5, 0x77, 0x43, 0x53, 0x45, 0x76, 0xf4, 0xa1, 0x3c, 0x94, + 0x09, 0xb3, 0x0f, 0x62, 0x8d, 0xff, 0x98, 0x43, 0xb5, 0x6e, 0x7f, 0x34, 0x31, 0xad, 0xc5, 0x7a, + 0xa5, 0x82, 0x16, 0x2d, 0x14, 0xf4, 0x05, 0xc0, 0x0d, 0x27, 0xcc, 0x60, 0x63, 0xa5, 0xa3, 0x69, + 0xd4, 0xe8, 0x90, 0x93, 0x13, 0x8a, 0xed, 0xf2, 0xc2, 0x26, 0xa8, 0x66, 0x1b, 0x2d, 0xcf, 0xe5, + 0x4b, 0xc7, 0x93, 0x12, 0x51, 0x6c, 0xb7, 0x8e, 0x58, 0xc1, 0x95, 0xcb, 0xdf, 0x9f, 0x2d, 0x85, + 0xd8, 0x6e, 0x49, 0xa5, 0x98, 0x9d, 0xa8, 0x51, 0x23, 0xd0, 0x42, 0x2f, 0x53, 0x72, 0x68, 0xd8, + 0x50, 0xed, 0x7e, 0x79, 0x91, 0xe5, 0xb8, 0x9d, 0x92, 0x43, 0x64, 0x05, 0x53, 0x92, 0xc1, 0x36, + 0x92, 0x60, 0xfe, 0x14, 0x5b, 0x74, 0x40, 0x8c, 0x72, 0x96, 0x49, 0xec, 0x5e, 0xb9, 0xfc, 0xc3, + 0x99, 0x12, 0xbf, 0x0e, 0xfa, 0xa5, 0xb1, 0x10, 0x7a, 0x07, 0x97, 0xfb, 0x03, 0xb5, 0xdf, 0x39, + 0x93, 0x6d, 0x6c, 0xe9, 0xb2, 0x35, 0x2c, 0x2f, 0x31, 0xe9, 0x47, 0x33, 0x5f, 0x46, 0xd3, 0xbf, + 0x8c, 0xa2, 0x2f, 0xf6, 0x66, 0xac, 0xb5, 0x97, 0xfd, 0x75, 0xce, 0x03, 0x01, 0xc1, 0xd5, 0x26, + 0x0e, 0x27, 0x42, 0xc2, 0x1f, 0x1c, 0x4c, 0x6d, 0x61, 0x04, 0xe0, 0x5a, 0x6c, 0x93, 0x9a, 0xc4, + 0xa0, 0x18, 0x9d, 0xc1, 0x15, 0x6a, 0x13, 0x4b, 0x56, 0x71, 0xc7, 0x20, 0x0a, 0x9e, 0x0c, 0xcc, + 0x91, 0xe7, 0xf2, 0xc5, 0x76, 0x70, 0xd4, 0x22, 0x0a, 0x66, 0x23, 0xb3, 0x37, 0x53, 0xbe, 0x44, + 0xb7, 0x54, 0xa4, 0xb1, 0xa5, 0x82, 0x8e, 0xe1, 0xea, 0xf8, 0xce, 0x43, 0x34, 0x68, 0x79, 0x61, + 0x73, 0xb1, 0x5a, 0xd8, 0xd9, 0xaa, 0xc5, 0xd0, 0xa8, 0x5d, 0x33, 0xe7, 0x8d, 0xec, 0x85, 0xcb, + 0x67, 0xa4, 0x15, 0x27, 0xb1, 0x4b, 0x85, 0xcf, 0x39, 0x58, 0x8a, 0x5a, 0xf6, 0xc3, 0x23, 0xea, + 0x68, 0x37, 0x00, 0xc6, 0x7b, 0xf8, 0x9f, 0x4d, 0xcc, 0x41, 0xcf, 0xf7, 0x5a, 0x60, 0x5e, 0xfb, + 0x9e, 0xcb, 0xe7, 0x5f, 0xf9, 0x7b, 0xcc, 0x67, 0xb6, 0x41, 0x0a, 0xfb, 0xa4, 0x3c, 0x13, 0x3d, + 0x50, 0xd0, 0x27, 0x00, 0x4b, 0xe9, 0xd8, 0x05, 0xe3, 0x2e, 0x7a, 0x2e, 0xbf, 0xbe, 0x3f, 0x2f, + 0xe8, 0xd6, 0xd3, 0x90, 0x4b, 0x46, 0x50, 0x63, 0x11, 0xb2, 0x29, 0x11, 0x9a, 0xff, 0x1e, 0xa1, + 0x99, 0x8c, 0xd0, 0x9c, 0x44, 0x38, 0x9c, 0x4a, 0x10, 0x32, 0x1f, 0x50, 0xb5, 0x31, 0x95, 0x20, + 0x24, 0x3e, 0x29, 0x36, 0xcd, 0x7b, 0x6e, 0x5e, 0xbc, 0xeb, 0x53, 0xbc, 0xe7, 0x99, 0xf4, 0x53, + 0x1f, 0xb0, 0x67, 0x71, 0x78, 0xe7, 0xf8, 0x00, 0xfc, 0x00, 0xb0, 0x38, 0x9e, 0x7e, 0x86, 0xff, + 0xcd, 0x81, 0xfe, 0x1c, 0x16, 0x23, 0xcc, 0x7d, 0x10, 0x19, 0x0b, 0x85, 0x1d, 0x21, 0x9d, 0xf2, + 0x38, 0xb2, 0x21, 0xe3, 0xb7, 0x7a, 0xb1, 0x3d, 0x61, 0x15, 0x2e, 0x47, 0x35, 0xec, 0x09, 0xdb, + 0xf9, 0x0e, 0xe0, 0x5a, 0xd4, 0x2f, 0x85, 0xff, 0x65, 0xd1, 0x0b, 0xf8, 0x7f, 0xf4, 0xda, 0xa1, + 0xbb, 0x09, 0xb3, 0x3f, 0x9f, 0xc6, 0x0a, 0x77, 0xdd, 0x71, 0xe0, 0x20, 0x64, 0xaa, 0xe0, 0x1e, + 0x40, 0x4f, 0x60, 0x2e, 0x70, 0x46, 0x95, 0x44, 0x7d, 0xe2, 0x9e, 0x2b, 0x77, 0x52, 0xcf, 0x26, + 0x42, 0x8d, 0xe6, 0x85, 0xc7, 0x81, 0x4b, 0x8f, 0x03, 0x5f, 0x47, 0x5c, 0xe6, 0x7c, 0xc4, 0x81, + 0xcb, 0x11, 0x97, 0xf9, 0x39, 0xe2, 0x32, 0x6f, 0xb7, 0xff, 0xe6, 0xd2, 0xa3, 0xdf, 0x16, 0xdd, + 0x1c, 0xfb, 0x7e, 0xf0, 0x3b, 0x00, 0x00, 0xff, 0xff, 0x66, 0x70, 0x5f, 0x25, 0x70, 0x08, 0x00, + 0x00, } func (this *LogStreamUncommitReport) Equal(that interface{}) bool { @@ -469,10 +471,10 @@ func (this *LogStreamUncommitReport) Equal(that interface{}) bool { if this.UncommittedLLSNLength != that1.UncommittedLLSNLength { return false } - if this.HighWatermark != that1.HighWatermark { + if this.Version != that1.Version { return false } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + if this.HighWatermark != that1.HighWatermark { return false } return true @@ -499,6 +501,9 @@ func (this *LogStreamCommitResult) Equal(that interface{}) bool { if this.LogStreamID != that1.LogStreamID { return false } + if this.TopicID != that1.TopicID { + return false + } if this.CommittedLLSNOffset != that1.CommittedLLSNOffset { return false } @@ -508,13 +513,10 @@ func (this *LogStreamCommitResult) Equal(that interface{}) bool { if this.CommittedGLSNLength != that1.CommittedGLSNLength { return false } - if this.HighWatermark != that1.HighWatermark { - return false - } - if this.PrevHighWatermark != that1.PrevHighWatermark { + if this.Version != that1.Version { return false } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + if this.HighWatermark != that1.HighWatermark { return false } return true @@ -722,13 +724,14 @@ func (m *LogStreamUncommitReport) MarshalToSizedBuffer(dAtA []byte) (int, error) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.HighWatermark != 0 { i = encodeVarintLogStreamReporter(dAtA, i, uint64(m.HighWatermark)) i-- + dAtA[i] = 0x28 + } + if m.Version != 0 { + i = encodeVarintLogStreamReporter(dAtA, i, uint64(m.Version)) + i-- dAtA[i] = 0x20 } if m.UncommittedLLSNLength != 0 { @@ -769,10 +772,6 @@ func (m *GetReportRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } return len(dAtA) - i, nil } @@ -796,10 +795,6 @@ func (m *GetReportResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.UncommitReports) > 0 { for iNdEx := len(m.UncommitReports) - 1; iNdEx >= 0; iNdEx-- { { @@ -842,33 +837,34 @@ func (m *LogStreamCommitResult) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if m.PrevHighWatermark != 0 { - i = encodeVarintLogStreamReporter(dAtA, i, uint64(m.PrevHighWatermark)) - i-- - dAtA[i] = 0x30 - } if m.HighWatermark != 0 { i = encodeVarintLogStreamReporter(dAtA, i, uint64(m.HighWatermark)) i-- - dAtA[i] = 0x28 + dAtA[i] = 0x38 + } + if m.Version != 0 { + i = encodeVarintLogStreamReporter(dAtA, i, uint64(m.Version)) + i-- + dAtA[i] = 0x30 } if m.CommittedGLSNLength != 0 { i = encodeVarintLogStreamReporter(dAtA, i, uint64(m.CommittedGLSNLength)) i-- - dAtA[i] = 0x20 + dAtA[i] = 0x28 } if m.CommittedGLSNOffset != 0 { i = encodeVarintLogStreamReporter(dAtA, i, uint64(m.CommittedGLSNOffset)) i-- - dAtA[i] = 0x18 + dAtA[i] = 0x20 } if m.CommittedLLSNOffset != 0 { i = encodeVarintLogStreamReporter(dAtA, i, uint64(m.CommittedLLSNOffset)) i-- + dAtA[i] = 0x18 + } + if m.TopicID != 0 { + i = encodeVarintLogStreamReporter(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x10 } if m.LogStreamID != 0 { @@ -899,10 +895,6 @@ func (m *CommitRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } { size, err := m.CommitResult.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -941,10 +933,6 @@ func (m *CommitResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } return len(dAtA) - i, nil } @@ -974,12 +962,12 @@ func (m *LogStreamUncommitReport) ProtoSize() (n int) { if m.UncommittedLLSNLength != 0 { n += 1 + sovLogStreamReporter(uint64(m.UncommittedLLSNLength)) } + if m.Version != 0 { + n += 1 + sovLogStreamReporter(uint64(m.Version)) + } if m.HighWatermark != 0 { n += 1 + sovLogStreamReporter(uint64(m.HighWatermark)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -989,9 +977,6 @@ func (m *GetReportRequest) ProtoSize() (n int) { } var l int _ = l - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1010,9 +995,6 @@ func (m *GetReportResponse) ProtoSize() (n int) { n += 1 + l + sovLogStreamReporter(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1025,6 +1007,9 @@ func (m *LogStreamCommitResult) ProtoSize() (n int) { if m.LogStreamID != 0 { n += 1 + sovLogStreamReporter(uint64(m.LogStreamID)) } + if m.TopicID != 0 { + n += 1 + sovLogStreamReporter(uint64(m.TopicID)) + } if m.CommittedLLSNOffset != 0 { n += 1 + sovLogStreamReporter(uint64(m.CommittedLLSNOffset)) } @@ -1034,15 +1019,12 @@ func (m *LogStreamCommitResult) ProtoSize() (n int) { if m.CommittedGLSNLength != 0 { n += 1 + sovLogStreamReporter(uint64(m.CommittedGLSNLength)) } + if m.Version != 0 { + n += 1 + sovLogStreamReporter(uint64(m.Version)) + } if m.HighWatermark != 0 { n += 1 + sovLogStreamReporter(uint64(m.HighWatermark)) } - if m.PrevHighWatermark != 0 { - n += 1 + sovLogStreamReporter(uint64(m.PrevHighWatermark)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1057,9 +1039,6 @@ func (m *CommitRequest) ProtoSize() (n int) { } l = m.CommitResult.ProtoSize() n += 1 + l + sovLogStreamReporter(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1069,9 +1048,6 @@ func (m *CommitResponse) ProtoSize() (n int) { } var l int _ = l - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1168,6 +1144,25 @@ func (m *LogStreamUncommitReport) Unmarshal(dAtA []byte) error { } } case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) + } + m.Version = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowLogStreamReporter + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Version |= github_daumkakao_com_varlog_varlog_pkg_types.Version(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 5: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field HighWatermark", wireType) } @@ -1198,7 +1193,6 @@ func (m *LogStreamUncommitReport) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1249,7 +1243,6 @@ func (m *GetReportRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1353,7 +1346,6 @@ func (m *GetReportResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1412,6 +1404,25 @@ func (m *LogStreamCommitResult) Unmarshal(dAtA []byte) error { } } case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowLogStreamReporter + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field CommittedLLSNOffset", wireType) } @@ -1430,7 +1441,7 @@ func (m *LogStreamCommitResult) Unmarshal(dAtA []byte) error { break } } - case 3: + case 4: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field CommittedGLSNOffset", wireType) } @@ -1449,7 +1460,7 @@ func (m *LogStreamCommitResult) Unmarshal(dAtA []byte) error { break } } - case 4: + case 5: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field CommittedGLSNLength", wireType) } @@ -1468,11 +1479,11 @@ func (m *LogStreamCommitResult) Unmarshal(dAtA []byte) error { break } } - case 5: + case 6: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field HighWatermark", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) } - m.HighWatermark = 0 + m.Version = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowLogStreamReporter @@ -1482,16 +1493,16 @@ func (m *LogStreamCommitResult) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.HighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift + m.Version |= github_daumkakao_com_varlog_varlog_pkg_types.Version(b&0x7F) << shift if b < 0x80 { break } } - case 6: + case 7: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field PrevHighWatermark", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field HighWatermark", wireType) } - m.PrevHighWatermark = 0 + m.HighWatermark = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowLogStreamReporter @@ -1501,7 +1512,7 @@ func (m *LogStreamCommitResult) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.PrevHighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift + m.HighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift if b < 0x80 { break } @@ -1518,7 +1529,6 @@ func (m *LogStreamCommitResult) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1621,7 +1631,6 @@ func (m *CommitRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1672,7 +1681,6 @@ func (m *CommitResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/snpb/log_stream_reporter.proto b/proto/snpb/log_stream_reporter.proto index bd107c868..49a0ac43d 100644 --- a/proto/snpb/log_stream_reporter.proto +++ b/proto/snpb/log_stream_reporter.proto @@ -9,13 +9,16 @@ option go_package = "github.com/kakao/varlog/proto/snpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; // LogStreamUncommitReport is manifest that log stream reports to metadata // repository about log entries those are waiting to commit. message LogStreamUncommitReport { option (gogoproto.equal) = true; - uint32 log_stream_id = 1 [ + int32 log_stream_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -27,14 +30,17 @@ message LogStreamUncommitReport { ]; uint64 uncommitted_llsn_length = 3 [(gogoproto.customname) = "UncommittedLLSNLength"]; - uint64 high_watermark = 4 + uint64 version = 4 + [(gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.Version"]; + uint64 high_watermark = 5 [(gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.GLSN"]; } message GetReportRequest {} message GetReportResponse { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" @@ -51,33 +57,40 @@ message GetReportResponse { message LogStreamCommitResult { option (gogoproto.equal) = true; - uint32 log_stream_id = 1 [ + int32 log_stream_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - uint64 committed_llsn_offset = 2 [ + int32 topic_id = 2 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + uint64 committed_llsn_offset = 3 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LLSN", (gogoproto.customname) = "CommittedLLSNOffset" ]; - uint64 committed_glsn_offset = 3 [ + uint64 committed_glsn_offset = 4 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.GLSN", (gogoproto.customname) = "CommittedGLSNOffset" ]; - uint64 committed_glsn_length = 4 + uint64 committed_glsn_length = 5 [(gogoproto.customname) = "CommittedGLSNLength"]; - uint64 high_watermark = 5 + uint64 version = 6 [(gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN"]; - uint64 prev_high_watermark = 6 - [(gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN"]; + "github.com/kakao/varlog/pkg/types.Version"]; + uint64 high_watermark = 7 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.GLSN", + (gogoproto.customname) = "HighWatermark" + ]; } message CommitRequest { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" diff --git a/proto/snpb/management.pb.go b/proto/snpb/management.pb.go index 1a8273a0c..8168c3e6e 100644 --- a/proto/snpb/management.pb.go +++ b/proto/snpb/management.pb.go @@ -4,7 +4,6 @@ package snpb import ( - bytes "bytes" context "context" fmt "fmt" io "io" @@ -62,10 +61,7 @@ func (LogStreamCommitInfo_Status) EnumDescriptor() ([]byte, []int) { } type GetMetadataRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` } func (m *GetMetadataRequest) Reset() { *m = GetMetadataRequest{} } @@ -109,10 +105,7 @@ func (m *GetMetadataRequest) GetClusterID() github_daumkakao_com_varlog_varlog_p } type GetMetadataResponse struct { - StorageNodeMetadata *varlogpb.StorageNodeMetadataDescriptor `protobuf:"bytes,1,opt,name=storage_node_metadata,json=storageNodeMetadata,proto3" json:"storage_node_metadata,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeMetadata *varlogpb.StorageNodeMetadataDescriptor `protobuf:"bytes,1,opt,name=storage_node_metadata,json=storageNodeMetadata,proto3" json:"storage_node_metadata,omitempty"` } func (m *GetMetadataResponse) Reset() { *m = GetMetadataResponse{} } @@ -155,28 +148,26 @@ func (m *GetMetadataResponse) GetStorageNodeMetadata() *varlogpb.StorageNodeMeta return nil } -type AddLogStreamRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - Storage *varlogpb.StorageDescriptor `protobuf:"bytes,4,opt,name=storage,proto3" json:"storage,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` +type AddLogStreamReplicaRequest struct { + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,3,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,4,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + Storage *varlogpb.StorageDescriptor `protobuf:"bytes,5,opt,name=storage,proto3" json:"storage,omitempty"` } -func (m *AddLogStreamRequest) Reset() { *m = AddLogStreamRequest{} } -func (m *AddLogStreamRequest) String() string { return proto.CompactTextString(m) } -func (*AddLogStreamRequest) ProtoMessage() {} -func (*AddLogStreamRequest) Descriptor() ([]byte, []int) { +func (m *AddLogStreamReplicaRequest) Reset() { *m = AddLogStreamReplicaRequest{} } +func (m *AddLogStreamReplicaRequest) String() string { return proto.CompactTextString(m) } +func (*AddLogStreamReplicaRequest) ProtoMessage() {} +func (*AddLogStreamReplicaRequest) Descriptor() ([]byte, []int) { return fileDescriptor_b2a108895042472a, []int{2} } -func (m *AddLogStreamRequest) XXX_Unmarshal(b []byte) error { +func (m *AddLogStreamReplicaRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } -func (m *AddLogStreamRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { +func (m *AddLogStreamReplicaRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { if deterministic { - return xxx_messageInfo_AddLogStreamRequest.Marshal(b, m, deterministic) + return xxx_messageInfo_AddLogStreamReplicaRequest.Marshal(b, m, deterministic) } else { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) @@ -186,66 +177,70 @@ func (m *AddLogStreamRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, return b[:n], nil } } -func (m *AddLogStreamRequest) XXX_Merge(src proto.Message) { - xxx_messageInfo_AddLogStreamRequest.Merge(m, src) +func (m *AddLogStreamReplicaRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_AddLogStreamReplicaRequest.Merge(m, src) } -func (m *AddLogStreamRequest) XXX_Size() int { +func (m *AddLogStreamReplicaRequest) XXX_Size() int { return m.ProtoSize() } -func (m *AddLogStreamRequest) XXX_DiscardUnknown() { - xxx_messageInfo_AddLogStreamRequest.DiscardUnknown(m) +func (m *AddLogStreamReplicaRequest) XXX_DiscardUnknown() { + xxx_messageInfo_AddLogStreamReplicaRequest.DiscardUnknown(m) } -var xxx_messageInfo_AddLogStreamRequest proto.InternalMessageInfo +var xxx_messageInfo_AddLogStreamReplicaRequest proto.InternalMessageInfo -func (m *AddLogStreamRequest) GetClusterID() github_daumkakao_com_varlog_varlog_pkg_types.ClusterID { +func (m *AddLogStreamReplicaRequest) GetClusterID() github_daumkakao_com_varlog_varlog_pkg_types.ClusterID { if m != nil { return m.ClusterID } return 0 } -func (m *AddLogStreamRequest) GetStorageNodeID() github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID { +func (m *AddLogStreamReplicaRequest) GetStorageNodeID() github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID { if m != nil { return m.StorageNodeID } return 0 } -func (m *AddLogStreamRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { +func (m *AddLogStreamReplicaRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + +func (m *AddLogStreamReplicaRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID } return 0 } -func (m *AddLogStreamRequest) GetStorage() *varlogpb.StorageDescriptor { +func (m *AddLogStreamReplicaRequest) GetStorage() *varlogpb.StorageDescriptor { if m != nil { return m.Storage } return nil } -type AddLogStreamResponse struct { +type AddLogStreamReplicaResponse struct { // TODO (jun): Use LogStreamMetadataDescriptor - LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` } -func (m *AddLogStreamResponse) Reset() { *m = AddLogStreamResponse{} } -func (m *AddLogStreamResponse) String() string { return proto.CompactTextString(m) } -func (*AddLogStreamResponse) ProtoMessage() {} -func (*AddLogStreamResponse) Descriptor() ([]byte, []int) { +func (m *AddLogStreamReplicaResponse) Reset() { *m = AddLogStreamReplicaResponse{} } +func (m *AddLogStreamReplicaResponse) String() string { return proto.CompactTextString(m) } +func (*AddLogStreamReplicaResponse) ProtoMessage() {} +func (*AddLogStreamReplicaResponse) Descriptor() ([]byte, []int) { return fileDescriptor_b2a108895042472a, []int{3} } -func (m *AddLogStreamResponse) XXX_Unmarshal(b []byte) error { +func (m *AddLogStreamReplicaResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } -func (m *AddLogStreamResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { +func (m *AddLogStreamReplicaResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { if deterministic { - return xxx_messageInfo_AddLogStreamResponse.Marshal(b, m, deterministic) + return xxx_messageInfo_AddLogStreamReplicaResponse.Marshal(b, m, deterministic) } else { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) @@ -255,19 +250,19 @@ func (m *AddLogStreamResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte return b[:n], nil } } -func (m *AddLogStreamResponse) XXX_Merge(src proto.Message) { - xxx_messageInfo_AddLogStreamResponse.Merge(m, src) +func (m *AddLogStreamReplicaResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_AddLogStreamReplicaResponse.Merge(m, src) } -func (m *AddLogStreamResponse) XXX_Size() int { +func (m *AddLogStreamReplicaResponse) XXX_Size() int { return m.ProtoSize() } -func (m *AddLogStreamResponse) XXX_DiscardUnknown() { - xxx_messageInfo_AddLogStreamResponse.DiscardUnknown(m) +func (m *AddLogStreamReplicaResponse) XXX_DiscardUnknown() { + xxx_messageInfo_AddLogStreamReplicaResponse.DiscardUnknown(m) } -var xxx_messageInfo_AddLogStreamResponse proto.InternalMessageInfo +var xxx_messageInfo_AddLogStreamReplicaResponse proto.InternalMessageInfo -func (m *AddLogStreamResponse) GetLogStream() *varlogpb.LogStreamDescriptor { +func (m *AddLogStreamReplicaResponse) GetLogStream() *varlogpb.LogStreamDescriptor { if m != nil { return m.LogStream } @@ -275,12 +270,10 @@ func (m *AddLogStreamResponse) GetLogStream() *varlogpb.LogStreamDescriptor { } type RemoveLogStreamRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,3,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,4,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *RemoveLogStreamRequest) Reset() { *m = RemoveLogStreamRequest{} } @@ -330,6 +323,13 @@ func (m *RemoveLogStreamRequest) GetStorageNodeID() github_daumkakao_com_varlog_ return 0 } +func (m *RemoveLogStreamRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *RemoveLogStreamRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -338,13 +338,11 @@ func (m *RemoveLogStreamRequest) GetLogStreamID() github_daumkakao_com_varlog_va } type SealRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - LastCommittedGLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,4,opt,name=last_committed_glsn,json=lastCommittedGlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"last_committed_glsn,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,3,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,4,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + LastCommittedGLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,5,opt,name=last_committed_glsn,json=lastCommittedGlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"last_committed_glsn,omitempty"` } func (m *SealRequest) Reset() { *m = SealRequest{} } @@ -394,6 +392,13 @@ func (m *SealRequest) GetStorageNodeID() github_daumkakao_com_varlog_varlog_pkg_ return 0 } +func (m *SealRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *SealRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -409,11 +414,8 @@ func (m *SealRequest) GetLastCommittedGLSN() github_daumkakao_com_varlog_varlog_ } type SealResponse struct { - Status varlogpb.LogStreamStatus `protobuf:"varint,1,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` - LastCommittedGLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=last_committed_glsn,json=lastCommittedGlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"last_committed_glsn,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Status varlogpb.LogStreamStatus `protobuf:"varint,1,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` + LastCommittedGLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=last_committed_glsn,json=lastCommittedGlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"last_committed_glsn,omitempty"` } func (m *SealResponse) Reset() { *m = SealResponse{} } @@ -464,13 +466,11 @@ func (m *SealResponse) GetLastCommittedGLSN() github_daumkakao_com_varlog_varlog } type UnsealRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - Replicas []Replica `protobuf:"bytes,4,rep,name=replicas,proto3" json:"replicas"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,3,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,4,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + Replicas []varlogpb.Replica `protobuf:"bytes,5,rep,name=replicas,proto3" json:"replicas"` } func (m *UnsealRequest) Reset() { *m = UnsealRequest{} } @@ -520,6 +520,13 @@ func (m *UnsealRequest) GetStorageNodeID() github_daumkakao_com_varlog_varlog_pk return 0 } +func (m *UnsealRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *UnsealRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -527,7 +534,7 @@ func (m *UnsealRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_ return 0 } -func (m *UnsealRequest) GetReplicas() []Replica { +func (m *UnsealRequest) GetReplicas() []varlogpb.Replica { if m != nil { return m.Replicas } @@ -535,13 +542,11 @@ func (m *UnsealRequest) GetReplicas() []Replica { } type SyncRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - Backup *SyncRequest_BackupNode `protobuf:"bytes,4,opt,name=backup,proto3" json:"backup,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,3,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,4,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + Backup *SyncRequest_BackupNode `protobuf:"bytes,5,opt,name=backup,proto3" json:"backup,omitempty"` } func (m *SyncRequest) Reset() { *m = SyncRequest{} } @@ -591,6 +596,13 @@ func (m *SyncRequest) GetStorageNodeID() github_daumkakao_com_varlog_varlog_pkg_ return 0 } +func (m *SyncRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *SyncRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -607,11 +619,8 @@ func (m *SyncRequest) GetBackup() *SyncRequest_BackupNode { // FIXME: Use Replica instead of BackupNode type SyncRequest_BackupNode struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - Address string `protobuf:"bytes,2,opt,name=address,proto3" json:"address,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + Address string `protobuf:"bytes,2,opt,name=address,proto3" json:"address,omitempty"` } func (m *SyncRequest_BackupNode) Reset() { *m = SyncRequest_BackupNode{} } @@ -662,10 +671,7 @@ func (m *SyncRequest_BackupNode) GetAddress() string { } type SyncResponse struct { - Status *SyncStatus `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Status *SyncStatus `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"` } func (m *SyncResponse) Reset() { *m = SyncResponse{} } @@ -709,10 +715,7 @@ func (m *SyncResponse) GetStatus() *SyncStatus { } type GetPrevCommitInfoRequest struct { - PrevHighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=prev_high_watermark,json=prevHighWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"prev_high_watermark,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + PrevVersion github_daumkakao_com_varlog_varlog_pkg_types.Version `protobuf:"varint,1,opt,name=prev_version,json=prevVersion,proto3,casttype=github.com/kakao/varlog/pkg/types.Version" json:"prev_version,omitempty"` } func (m *GetPrevCommitInfoRequest) Reset() { *m = GetPrevCommitInfoRequest{} } @@ -748,25 +751,21 @@ func (m *GetPrevCommitInfoRequest) XXX_DiscardUnknown() { var xxx_messageInfo_GetPrevCommitInfoRequest proto.InternalMessageInfo -func (m *GetPrevCommitInfoRequest) GetPrevHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { +func (m *GetPrevCommitInfoRequest) GetPrevVersion() github_daumkakao_com_varlog_varlog_pkg_types.Version { if m != nil { - return m.PrevHighWatermark + return m.PrevVersion } return 0 } type LogStreamCommitInfo struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - Status LogStreamCommitInfo_Status `protobuf:"varint,2,opt,name=status,proto3,enum=varlog.snpb.LogStreamCommitInfo_Status" json:"status,omitempty"` - CommittedLLSNOffset github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,3,opt,name=committed_llsn_offset,json=committedLlsnOffset,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"committed_llsn_offset,omitempty"` - CommittedGLSNOffset github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,4,opt,name=committed_glsn_offset,json=committedGlsnOffset,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"committed_glsn_offset,omitempty"` - CommittedGLSNLength uint64 `protobuf:"varint,5,opt,name=committed_glsn_length,json=committedGlsnLength,proto3" json:"committed_glsn_length,omitempty"` - HighestWrittenLLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,6,opt,name=highest_written_llsn,json=highestWrittenLlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"highest_written_llsn,omitempty"` - HighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,7,opt,name=high_watermark,json=highWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"high_watermark,omitempty"` - PrevHighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,8,opt,name=prev_high_watermark,json=prevHighWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"prev_high_watermark,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + Status LogStreamCommitInfo_Status `protobuf:"varint,2,opt,name=status,proto3,enum=varlog.snpb.LogStreamCommitInfo_Status" json:"status,omitempty"` + CommittedLLSNOffset github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,3,opt,name=committed_llsn_offset,json=committedLlsnOffset,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"committed_llsn_offset,omitempty"` + CommittedGLSNOffset github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,4,opt,name=committed_glsn_offset,json=committedGlsnOffset,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"committed_glsn_offset,omitempty"` + CommittedGLSNLength uint64 `protobuf:"varint,5,opt,name=committed_glsn_length,json=committedGlsnLength,proto3" json:"committed_glsn_length,omitempty"` + HighestWrittenLLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,6,opt,name=highest_written_llsn,json=highestWrittenLlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"highest_written_llsn,omitempty"` + Version github_daumkakao_com_varlog_varlog_pkg_types.Version `protobuf:"varint,7,opt,name=version,proto3,casttype=github.com/kakao/varlog/pkg/types.Version" json:"version,omitempty"` } func (m *LogStreamCommitInfo) Reset() { *m = LogStreamCommitInfo{} } @@ -844,26 +843,16 @@ func (m *LogStreamCommitInfo) GetHighestWrittenLLSN() github_daumkakao_com_varlo return 0 } -func (m *LogStreamCommitInfo) GetHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { - if m != nil { - return m.HighWatermark - } - return 0 -} - -func (m *LogStreamCommitInfo) GetPrevHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { +func (m *LogStreamCommitInfo) GetVersion() github_daumkakao_com_varlog_varlog_pkg_types.Version { if m != nil { - return m.PrevHighWatermark + return m.Version } return 0 } type GetPrevCommitInfoResponse struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - CommitInfos []*LogStreamCommitInfo `protobuf:"bytes,2,rep,name=commit_infos,json=commitInfos,proto3" json:"commit_infos,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + CommitInfos []*LogStreamCommitInfo `protobuf:"bytes,2,rep,name=commit_infos,json=commitInfos,proto3" json:"commit_infos,omitempty"` } func (m *GetPrevCommitInfoResponse) Reset() { *m = GetPrevCommitInfoResponse{} } @@ -917,8 +906,8 @@ func init() { proto.RegisterEnum("varlog.snpb.LogStreamCommitInfo_Status", LogStreamCommitInfo_Status_name, LogStreamCommitInfo_Status_value) proto.RegisterType((*GetMetadataRequest)(nil), "varlog.snpb.GetMetadataRequest") proto.RegisterType((*GetMetadataResponse)(nil), "varlog.snpb.GetMetadataResponse") - proto.RegisterType((*AddLogStreamRequest)(nil), "varlog.snpb.AddLogStreamRequest") - proto.RegisterType((*AddLogStreamResponse)(nil), "varlog.snpb.AddLogStreamResponse") + proto.RegisterType((*AddLogStreamReplicaRequest)(nil), "varlog.snpb.AddLogStreamReplicaRequest") + proto.RegisterType((*AddLogStreamReplicaResponse)(nil), "varlog.snpb.AddLogStreamReplicaResponse") proto.RegisterType((*RemoveLogStreamRequest)(nil), "varlog.snpb.RemoveLogStreamRequest") proto.RegisterType((*SealRequest)(nil), "varlog.snpb.SealRequest") proto.RegisterType((*SealResponse)(nil), "varlog.snpb.SealResponse") @@ -934,83 +923,85 @@ func init() { func init() { proto.RegisterFile("proto/snpb/management.proto", fileDescriptor_b2a108895042472a) } var fileDescriptor_b2a108895042472a = []byte{ - // 1215 bytes of a gzipped FileDescriptorProto + // 1248 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x58, 0xcf, 0x6f, 0x1b, 0xc5, - 0x17, 0xf7, 0x3a, 0xfb, 0x75, 0x9a, 0xe7, 0xb8, 0x3f, 0xc6, 0x4d, 0xeb, 0x6e, 0xa5, 0xac, 0xbb, - 0xfd, 0x02, 0xbd, 0x74, 0x2d, 0x82, 0x5a, 0x55, 0xe5, 0x47, 0x55, 0xa7, 0xad, 0x6b, 0xe2, 0xda, - 0xd5, 0xba, 0x51, 0x25, 0x40, 0x5a, 0xad, 0xbd, 0xe3, 0xb5, 0x95, 0xf5, 0x8e, 0xd9, 0x19, 0x27, - 0x32, 0x02, 0x09, 0x6e, 0x28, 0x7f, 0x01, 0x1c, 0x22, 0x2a, 0x21, 0x8e, 0xfc, 0x17, 0x3d, 0xf4, - 0xc8, 0x15, 0x21, 0xf9, 0x60, 0x2e, 0x88, 0x3f, 0xa1, 0x17, 0xd0, 0xce, 0xae, 0xd7, 0xbb, 0xb6, - 0x43, 0x48, 0x20, 0xe1, 0x92, 0x93, 0xed, 0x79, 0xbf, 0x3e, 0xef, 0xbd, 0xcf, 0xbc, 0x19, 0x0f, - 0x5c, 0xed, 0xb9, 0x84, 0x91, 0x02, 0x75, 0x7a, 0x8d, 0x42, 0xd7, 0x70, 0x0c, 0x0b, 0x77, 0xb1, - 0xc3, 0x54, 0xbe, 0x8a, 0xd2, 0xdb, 0x86, 0x6b, 0x13, 0x4b, 0xf5, 0xa4, 0xd2, 0x4d, 0xab, 0xc3, - 0xda, 0xfd, 0x86, 0xda, 0x24, 0xdd, 0x82, 0x45, 0x2c, 0x52, 0xe0, 0x3a, 0x8d, 0x7e, 0x8b, 0xff, - 0xf2, 0xdd, 0x78, 0xdf, 0x7c, 0x5b, 0xe9, 0xaa, 0x45, 0x88, 0x65, 0xe3, 0x89, 0x16, 0xee, 0xf6, - 0xd8, 0x20, 0x10, 0x5e, 0xf6, 0x1d, 0x7b, 0x31, 0x31, 0x33, 0x4c, 0x83, 0x19, 0x81, 0x60, 0x85, - 0x03, 0x71, 0x71, 0xcf, 0xee, 0x34, 0x0d, 0x46, 0xdc, 0x60, 0x19, 0x45, 0x97, 0xfd, 0x35, 0xe5, - 0x73, 0x40, 0x25, 0xcc, 0x9e, 0x04, 0xf6, 0x1a, 0xfe, 0xb4, 0x8f, 0x29, 0x43, 0x2d, 0x80, 0xa6, - 0xdd, 0xa7, 0x0c, 0xbb, 0x7a, 0xc7, 0xcc, 0x09, 0x79, 0xe1, 0x46, 0xa6, 0x58, 0x1a, 0x0d, 0xe5, - 0xa5, 0x75, 0x7f, 0xb5, 0xfc, 0xe0, 0xf5, 0x50, 0xbe, 0x1d, 0xa4, 0x62, 0x1a, 0xfd, 0xee, 0x96, - 0xb1, 0x65, 0x10, 0x9e, 0x94, 0x8f, 0x69, 0xfc, 0xd1, 0xdb, 0xb2, 0x0a, 0x6c, 0xd0, 0xc3, 0x54, - 0x0d, 0x2d, 0xb5, 0xa5, 0xc0, 0x75, 0xd9, 0x54, 0x06, 0x90, 0x8d, 0x45, 0xa7, 0x3d, 0xe2, 0x50, - 0x8c, 0x1a, 0xb0, 0x42, 0x19, 0x71, 0x0d, 0x0b, 0xeb, 0x0e, 0x31, 0xb1, 0x3e, 0x4e, 0x8f, 0x23, - 0x49, 0xaf, 0xa9, 0x6a, 0x50, 0xd1, 0x71, 0xfe, 0x6a, 0xdd, 0xd7, 0xae, 0x12, 0x13, 0x8f, 0x9d, - 0x3d, 0xc0, 0xb4, 0xe9, 0x76, 0x7a, 0x8c, 0xb8, 0x5a, 0x96, 0xce, 0x8a, 0x95, 0x1f, 0x16, 0x20, - 0x7b, 0xdf, 0x34, 0x2b, 0xc4, 0xaa, 0x33, 0x17, 0x1b, 0xdd, 0x13, 0x4e, 0x1d, 0xed, 0xc0, 0xb9, - 0x58, 0x8e, 0x1d, 0x33, 0x97, 0xe4, 0xc1, 0x6a, 0xa3, 0xa1, 0x9c, 0x89, 0x24, 0xc4, 0x03, 0xde, - 0x3d, 0x54, 0xc0, 0x98, 0xb5, 0x96, 0x89, 0xa4, 0x5f, 0x36, 0x11, 0x81, 0x8c, 0x4d, 0x2c, 0x9d, - 0xf2, 0xac, 0xbd, 0xb0, 0x0b, 0x3c, 0xec, 0xc6, 0x68, 0x28, 0xa7, 0xc3, 0x6a, 0xf0, 0xa0, 0x77, - 0x0e, 0x15, 0x34, 0x62, 0xab, 0xa5, 0xed, 0xf0, 0x87, 0x89, 0xde, 0x83, 0xc5, 0x00, 0x41, 0x4e, - 0xe4, 0xfd, 0x53, 0xf6, 0xeb, 0x5f, 0xa4, 0x67, 0x63, 0x13, 0xe5, 0x63, 0xb8, 0x18, 0x6f, 0x53, - 0xc0, 0x91, 0x75, 0x80, 0x49, 0x1a, 0x01, 0x31, 0xfe, 0x3f, 0xe3, 0x38, 0xb4, 0x8b, 0xb8, 0x5e, - 0x0a, 0xc1, 0x29, 0xbf, 0x27, 0xe1, 0x92, 0x86, 0xbb, 0x64, 0x1b, 0x9f, 0xf2, 0xe0, 0xb8, 0x79, - 0xa0, 0xfc, 0xb2, 0x00, 0xe9, 0x3a, 0x36, 0xec, 0xd3, 0x0a, 0x1f, 0xd7, 0x4e, 0xfb, 0x0c, 0xb2, - 0xb6, 0x41, 0x99, 0xde, 0x24, 0xdd, 0x6e, 0x87, 0x31, 0x6c, 0xea, 0x96, 0x4d, 0x1d, 0xbe, 0xeb, - 0xc4, 0xe2, 0x87, 0xa3, 0xa1, 0x7c, 0xa1, 0x62, 0x50, 0xb6, 0x3e, 0x96, 0x96, 0x2a, 0xf5, 0xea, - 0xeb, 0xa1, 0xfc, 0xf6, 0xa1, 0x82, 0x7b, 0x46, 0xda, 0x05, 0x3b, 0xe6, 0xc7, 0xa6, 0x8e, 0xf2, - 0x52, 0x80, 0x65, 0xbf, 0xbb, 0xc1, 0x06, 0xbd, 0x03, 0x29, 0xca, 0x0c, 0xd6, 0xa7, 0xbc, 0xb5, - 0x67, 0xd7, 0xf2, 0xfb, 0x6f, 0xce, 0x3a, 0xd7, 0xd3, 0x02, 0xfd, 0xfd, 0xd2, 0x48, 0x9e, 0x44, - 0x1a, 0xdf, 0x2e, 0x40, 0x66, 0xd3, 0xa1, 0xa7, 0x34, 0x3d, 0x46, 0x9a, 0xde, 0x86, 0x33, 0xc1, - 0x25, 0x84, 0xe6, 0xc4, 0xfc, 0xc2, 0x8d, 0xf4, 0xda, 0x45, 0x35, 0x72, 0x47, 0x52, 0x35, 0x5f, - 0x58, 0x14, 0x5f, 0x0d, 0xe5, 0x84, 0x16, 0xea, 0x2a, 0x2f, 0x45, 0x48, 0xd7, 0x07, 0x4e, 0xf3, - 0xb4, 0x33, 0xc7, 0xd5, 0x99, 0xfb, 0x90, 0x6a, 0x18, 0xcd, 0xad, 0x7e, 0x2f, 0x38, 0xa9, 0xaf, - 0xc7, 0xfa, 0x12, 0xa9, 0xbd, 0x5a, 0xe4, 0x6a, 0x1e, 0x4e, 0xde, 0x26, 0x41, 0x0b, 0x0c, 0xa5, - 0xef, 0x04, 0x80, 0x89, 0x70, 0x5e, 0xed, 0x84, 0x13, 0xa9, 0x5d, 0x0e, 0x16, 0x0d, 0xd3, 0x74, - 0x31, 0xa5, 0xbc, 0x59, 0x4b, 0xda, 0xf8, 0xa7, 0x72, 0x0f, 0x96, 0xfd, 0x4c, 0x82, 0x41, 0x55, - 0x88, 0x0d, 0xaa, 0xf4, 0xda, 0xe5, 0x99, 0xa4, 0xe3, 0xf3, 0x49, 0xf9, 0x4a, 0x80, 0x5c, 0x09, - 0xb3, 0xa7, 0x2e, 0xde, 0xf6, 0x87, 0x47, 0xd9, 0x69, 0x91, 0x31, 0x29, 0x31, 0x64, 0x7b, 0x2e, - 0xde, 0xd6, 0xdb, 0x1d, 0xab, 0xad, 0xef, 0x18, 0x0c, 0xbb, 0x5d, 0xc3, 0xdd, 0xe2, 0xae, 0xc5, - 0xe2, 0xad, 0x23, 0xce, 0x29, 0xcf, 0xe3, 0xe3, 0x8e, 0xd5, 0x7e, 0x3e, 0xf6, 0xa7, 0xfc, 0xb1, - 0x08, 0xd9, 0xb0, 0x8d, 0x13, 0x14, 0xb3, 0x94, 0x11, 0x8e, 0x99, 0x32, 0xf7, 0xc2, 0xea, 0x25, - 0xf9, 0x98, 0x7f, 0x2b, 0x56, 0xbd, 0x39, 0x10, 0xd5, 0xa9, 0x69, 0xff, 0xa5, 0x00, 0x2b, 0x93, - 0x49, 0x6f, 0xdb, 0xd4, 0xd1, 0x49, 0xab, 0x45, 0x31, 0xe3, 0x6c, 0x17, 0x8b, 0x95, 0xd1, 0x50, - 0xce, 0x86, 0x43, 0xba, 0x52, 0xa9, 0x57, 0x6b, 0x5c, 0x7c, 0xe8, 0x52, 0x7a, 0xa6, 0x5a, 0x36, - 0x0c, 0x55, 0xb1, 0xa9, 0xe3, 0x7b, 0x9a, 0x82, 0x60, 0x45, 0x20, 0x88, 0x73, 0x20, 0x94, 0x8e, - 0x0e, 0xa1, 0x14, 0x87, 0x50, 0x9a, 0x40, 0xd8, 0x98, 0x41, 0x60, 0x63, 0xc7, 0x62, 0xed, 0xdc, - 0xff, 0x38, 0x82, 0xcb, 0x33, 0x08, 0x2a, 0x5c, 0x3c, 0xe5, 0xcc, 0x5f, 0x44, 0x5f, 0xc0, 0x45, - 0x8f, 0x7e, 0x98, 0x32, 0x7d, 0xc7, 0xf5, 0x84, 0x0e, 0xaf, 0x6b, 0x2e, 0xc5, 0x7d, 0x79, 0x5c, - 0x40, 0x8f, 0x7d, 0xf9, 0x73, 0x5f, 0x5c, 0x39, 0xca, 0x11, 0xca, 0xeb, 0x89, 0xda, 0x71, 0x47, - 0x36, 0x75, 0xd0, 0x27, 0x70, 0x76, 0x8a, 0xfd, 0x8b, 0xff, 0x84, 0xfd, 0x99, 0x76, 0x94, 0xf9, - 0xfb, 0x6d, 0xb0, 0x33, 0xff, 0xf2, 0x06, 0xfb, 0x46, 0x80, 0x94, 0xcf, 0x54, 0x74, 0x0d, 0x92, - 0xb5, 0x8d, 0xf3, 0x09, 0xe9, 0xca, 0xee, 0x5e, 0x7e, 0x25, 0xb6, 0xf1, 0x7d, 0x85, 0xda, 0x06, - 0x52, 0x61, 0xa9, 0x5a, 0x7b, 0xa6, 0x3f, 0xaa, 0x6d, 0x56, 0x1f, 0x9c, 0x17, 0x24, 0x79, 0x77, - 0x2f, 0x7f, 0x75, 0x8e, 0x66, 0x95, 0xb0, 0x47, 0xa4, 0xef, 0x98, 0xe8, 0x16, 0x2c, 0x97, 0xab, - 0xeb, 0xb5, 0x6a, 0xbd, 0x5c, 0x7f, 0xf6, 0xb0, 0xfa, 0xec, 0x7c, 0x52, 0xba, 0xbe, 0xbb, 0x97, - 0x97, 0xe7, 0x98, 0x94, 0x9d, 0x26, 0x71, 0x68, 0x87, 0x32, 0xec, 0x30, 0x49, 0xfc, 0xfa, 0xfb, - 0xd5, 0xc4, 0x5d, 0xf1, 0xb7, 0x17, 0xb2, 0xa0, 0xfc, 0x2c, 0xc0, 0x95, 0x39, 0x53, 0x28, 0x18, - 0x6a, 0xff, 0xd9, 0xdc, 0x5d, 0x87, 0x65, 0x9f, 0x92, 0x7a, 0xc7, 0x69, 0x11, 0x6f, 0x2a, 0x78, - 0x07, 0x7c, 0xfe, 0xa0, 0xa9, 0xa0, 0xa5, 0x9b, 0xe1, 0x77, 0xba, 0xf6, 0xa3, 0x08, 0xf0, 0x24, - 0x7c, 0x47, 0x41, 0x1a, 0xa4, 0x23, 0xcf, 0x04, 0x48, 0x8e, 0x39, 0x9b, 0x7d, 0xbe, 0x90, 0xf2, - 0xfb, 0x2b, 0xf8, 0xe5, 0x51, 0x12, 0x68, 0x13, 0x96, 0xa3, 0xff, 0x2b, 0x51, 0xdc, 0x66, 0xce, - 0xcb, 0x80, 0x74, 0xed, 0x2f, 0x34, 0x42, 0xb7, 0x4f, 0xe1, 0xdc, 0xd4, 0x1f, 0x4a, 0x74, 0x7d, - 0xea, 0x72, 0x33, 0xef, 0xef, 0xa6, 0x74, 0x49, 0xf5, 0x5f, 0x7a, 0xd4, 0xf1, 0x4b, 0x8f, 0xfa, - 0xb0, 0xdb, 0x63, 0x03, 0x25, 0x81, 0xde, 0x07, 0xd1, 0xbb, 0x57, 0xa3, 0x5c, 0xfc, 0x58, 0x9a, - 0xdc, 0x50, 0xa5, 0x2b, 0x73, 0x24, 0x21, 0xa0, 0x0f, 0x20, 0xe5, 0xdf, 0x67, 0x91, 0x14, 0x53, - 0x8b, 0x5d, 0x72, 0x0f, 0x08, 0x3f, 0x70, 0x9a, 0xd3, 0xe1, 0x27, 0x57, 0x81, 0xe9, 0xf0, 0x91, - 0xa3, 0x55, 0x49, 0x20, 0x13, 0x2e, 0xcc, 0x90, 0x14, 0xbd, 0x31, 0xdd, 0x9f, 0xb9, 0x47, 0xa9, - 0xf4, 0xe6, 0x41, 0x6a, 0xe3, 0x28, 0xc5, 0x77, 0x5f, 0x8d, 0x56, 0x85, 0x9f, 0x46, 0xab, 0xc2, - 0x8b, 0x5f, 0x57, 0x85, 0x8f, 0x6e, 0xfe, 0x1d, 0x36, 0x87, 0x0f, 0x76, 0x8d, 0x14, 0xff, 0xfe, - 0xce, 0x9f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x3e, 0xac, 0x2f, 0xbc, 0xc5, 0x13, 0x00, 0x00, + 0x17, 0xf7, 0x26, 0x1b, 0xbb, 0x79, 0x4e, 0xbe, 0x6d, 0xc7, 0xdf, 0xb4, 0xee, 0x46, 0xf2, 0x9a, + 0x2d, 0x3f, 0x72, 0xe9, 0x5a, 0x84, 0x1f, 0xaa, 0x2a, 0xa0, 0xaa, 0xdd, 0xd6, 0x98, 0xb8, 0x76, + 0xb5, 0x4e, 0x41, 0x02, 0x09, 0x6b, 0xed, 0x1d, 0x6f, 0x4c, 0xd6, 0x3b, 0xcb, 0xce, 0x38, 0x95, + 0x11, 0x48, 0x1c, 0x51, 0x85, 0x10, 0x47, 0x2e, 0x15, 0x95, 0xe0, 0xbf, 0xe0, 0x86, 0x38, 0x54, + 0x9c, 0x7a, 0x84, 0x8b, 0x0f, 0xce, 0x85, 0xbf, 0xa1, 0x12, 0x12, 0xda, 0xd9, 0x1f, 0xf6, 0xda, + 0x8e, 0x82, 0x8b, 0x28, 0x8a, 0x94, 0x93, 0xbd, 0xfb, 0x7e, 0x7c, 0xde, 0x7b, 0xf3, 0x99, 0xf7, + 0x66, 0x07, 0x36, 0x1d, 0x97, 0x30, 0x52, 0xa0, 0xb6, 0xd3, 0x2a, 0xf4, 0x74, 0x5b, 0x37, 0x71, + 0x0f, 0xdb, 0x4c, 0xe5, 0x6f, 0x51, 0xfa, 0x40, 0x77, 0x2d, 0x62, 0xaa, 0x9e, 0x54, 0xba, 0x62, + 0x76, 0xd9, 0x5e, 0xbf, 0xa5, 0xb6, 0x49, 0xaf, 0x60, 0x12, 0x93, 0x14, 0xb8, 0x4e, 0xab, 0xdf, + 0xe1, 0x4f, 0xbe, 0x1b, 0xef, 0x9f, 0x6f, 0x2b, 0x6d, 0x9a, 0x84, 0x98, 0x16, 0x1e, 0x6b, 0xe1, + 0x9e, 0xc3, 0x06, 0x81, 0xf0, 0xa2, 0xef, 0xd8, 0xc3, 0xc4, 0x4c, 0x37, 0x74, 0xa6, 0x07, 0x82, + 0x0d, 0x1e, 0x88, 0x8b, 0x1d, 0xab, 0xdb, 0xd6, 0x19, 0x71, 0xfd, 0xd7, 0xca, 0xe7, 0x80, 0xca, + 0x98, 0xdd, 0x09, 0x74, 0x35, 0xfc, 0x69, 0x1f, 0x53, 0x86, 0x3a, 0x00, 0x6d, 0xab, 0x4f, 0x19, + 0x76, 0x9b, 0x5d, 0x23, 0x2b, 0xe4, 0x85, 0xad, 0xf5, 0x62, 0x79, 0x34, 0x94, 0x57, 0x4b, 0xfe, + 0xdb, 0xca, 0xcd, 0xa7, 0x43, 0xf9, 0xcd, 0x20, 0x6c, 0x43, 0xef, 0xf7, 0xf6, 0xf5, 0x7d, 0x9d, + 0xf0, 0x04, 0x7c, 0xfc, 0xf0, 0xc7, 0xd9, 0x37, 0x0b, 0x6c, 0xe0, 0x60, 0xaa, 0x46, 0x96, 0xda, + 0x6a, 0xe0, 0xba, 0x62, 0x28, 0x03, 0xc8, 0xc4, 0xd0, 0xa9, 0x43, 0x6c, 0x8a, 0x51, 0x0b, 0x36, + 0x28, 0x23, 0xae, 0x6e, 0xe2, 0xa6, 0x4d, 0x0c, 0xdc, 0x0c, 0x53, 0xe1, 0x91, 0xa4, 0xb7, 0x55, + 0x35, 0xa8, 0x5e, 0x98, 0xab, 0xda, 0xf0, 0xb5, 0x6b, 0xc4, 0xc0, 0xa1, 0xb3, 0x9b, 0x98, 0xb6, + 0xdd, 0xae, 0xc3, 0x88, 0xab, 0x65, 0xe8, 0xac, 0x58, 0xf9, 0x5a, 0x04, 0xe9, 0x86, 0x61, 0x54, + 0x89, 0xd9, 0x60, 0x2e, 0xd6, 0x7b, 0x9a, 0x5f, 0x99, 0xe7, 0x5c, 0x01, 0x74, 0x1f, 0xce, 0xc6, + 0x52, 0xed, 0x1a, 0xd9, 0xa5, 0xbc, 0xb0, 0xb5, 0x52, 0xac, 0x8f, 0x86, 0xf2, 0xfa, 0x44, 0x5e, + 0x1c, 0xf0, 0xda, 0x42, 0x80, 0x31, 0x6b, 0x6d, 0x7d, 0xa2, 0x0a, 0x15, 0x03, 0x7d, 0x0c, 0x67, + 0x18, 0x71, 0xba, 0x6d, 0x0f, 0x71, 0x99, 0x23, 0x96, 0x46, 0x43, 0x39, 0xb5, 0xeb, 0xbd, 0xe3, + 0x58, 0xaf, 0x2f, 0x84, 0x15, 0xd8, 0x69, 0x29, 0xee, 0xb4, 0x62, 0x20, 0x02, 0xeb, 0x16, 0x31, + 0x9b, 0x94, 0x17, 0xd7, 0x03, 0x11, 0x39, 0xc8, 0xce, 0x68, 0x28, 0xa7, 0xa3, 0xa2, 0x73, 0xa0, + 0xab, 0x0b, 0x01, 0x4d, 0xd8, 0x6a, 0x69, 0x2b, 0x7a, 0x30, 0xd0, 0x5b, 0x90, 0x0a, 0x32, 0xcc, + 0xae, 0x70, 0x9a, 0x28, 0x47, 0xd1, 0x64, 0x82, 0x1a, 0xa1, 0x89, 0xd2, 0x82, 0xcd, 0xb9, 0x6c, + 0x08, 0x18, 0x59, 0x02, 0x18, 0x67, 0x13, 0xd0, 0xf0, 0xc5, 0x19, 0xff, 0x91, 0xf9, 0x04, 0xc2, + 0x6a, 0x14, 0xa3, 0xf2, 0xeb, 0x32, 0x5c, 0xd0, 0x70, 0x8f, 0x1c, 0xe0, 0x09, 0x9c, 0x53, 0xba, + 0x9d, 0x48, 0xba, 0x29, 0x3f, 0x89, 0x90, 0x6e, 0x60, 0xdd, 0x3a, 0x5d, 0xc1, 0x93, 0xda, 0x30, + 0x3e, 0x83, 0x8c, 0xa5, 0x53, 0xd6, 0x6c, 0x93, 0x5e, 0xaf, 0xcb, 0x18, 0x36, 0x9a, 0xa6, 0x45, + 0x6d, 0xde, 0x3c, 0xc4, 0xe2, 0x7b, 0xa3, 0xa1, 0x7c, 0xbe, 0xaa, 0x53, 0x56, 0x0a, 0xa5, 0xe5, + 0x6a, 0xa3, 0xf6, 0x74, 0x28, 0xbf, 0xba, 0x10, 0xb8, 0x67, 0xa4, 0x9d, 0xb7, 0x62, 0x7e, 0x2c, + 0x6a, 0x2b, 0xbf, 0x08, 0xb0, 0xe6, 0xb3, 0x27, 0x68, 0x30, 0x57, 0x21, 0x49, 0x99, 0xce, 0xfa, + 0x94, 0x53, 0xe7, 0x7f, 0xdb, 0xf9, 0xa3, 0x9b, 0x4b, 0x83, 0xeb, 0x69, 0x81, 0xfe, 0x51, 0x69, + 0x2c, 0x3d, 0x8f, 0x34, 0xfe, 0x5c, 0x86, 0xf5, 0x7b, 0x36, 0x3d, 0xdd, 0x06, 0x27, 0x78, 0x1b, + 0x5c, 0x83, 0x33, 0xc1, 0xa9, 0x90, 0x66, 0x57, 0xf2, 0xcb, 0x5b, 0xe9, 0xed, 0xec, 0x0c, 0xf7, + 0x82, 0x71, 0x58, 0x14, 0x1f, 0x0f, 0xe5, 0x84, 0x16, 0xe9, 0x2b, 0x3f, 0xae, 0x40, 0xba, 0x31, + 0xb0, 0xdb, 0xa7, 0xab, 0x7f, 0x52, 0x57, 0xff, 0x06, 0x24, 0x5b, 0x7a, 0x7b, 0xbf, 0xef, 0x04, + 0x87, 0xa6, 0xcb, 0xea, 0xc4, 0x97, 0x89, 0x3a, 0xb1, 0xb6, 0x6a, 0x91, 0xab, 0x79, 0x75, 0xe0, + 0x34, 0x10, 0xb4, 0xc0, 0x50, 0xfa, 0x5e, 0x00, 0x18, 0x0b, 0xe7, 0xad, 0x8d, 0xf0, 0x5c, 0xd6, + 0x26, 0x0b, 0x29, 0xdd, 0x30, 0x5c, 0x4c, 0x29, 0x27, 0xc3, 0xaa, 0x16, 0x3e, 0x2a, 0xd7, 0x61, + 0xcd, 0xcf, 0x24, 0x68, 0xb6, 0x85, 0x58, 0xb3, 0x4d, 0x6f, 0x5f, 0x9c, 0x49, 0x3a, 0xde, 0x63, + 0x95, 0xfb, 0x90, 0x2d, 0x63, 0x76, 0xd7, 0xc5, 0x07, 0x7e, 0xff, 0xab, 0xd8, 0x1d, 0x12, 0x72, + 0xfe, 0x23, 0x58, 0x73, 0x5c, 0x7c, 0xd0, 0x3c, 0xc0, 0x2e, 0xed, 0x12, 0x9b, 0xbb, 0x14, 0x8b, + 0x57, 0x17, 0xe6, 0xc2, 0xfb, 0xbe, 0xbd, 0x96, 0xf6, 0xbc, 0x05, 0x0f, 0xca, 0x37, 0x29, 0xc8, + 0x44, 0x6b, 0x37, 0xc6, 0x9e, 0xe5, 0x89, 0xf0, 0x2f, 0xf3, 0xe4, 0x7a, 0x54, 0xb2, 0x25, 0x3e, + 0x9f, 0x5e, 0x89, 0x95, 0x6c, 0x4e, 0x88, 0xea, 0xd4, 0x98, 0xfa, 0x52, 0x80, 0x8d, 0xf1, 0x88, + 0xb2, 0x2c, 0x6a, 0x37, 0x49, 0xa7, 0x43, 0x31, 0xe3, 0xfb, 0x48, 0x2c, 0x56, 0x47, 0x43, 0x39, + 0x13, 0x4d, 0x97, 0x6a, 0xb5, 0x51, 0xab, 0x73, 0xf1, 0xc2, 0xb3, 0xca, 0x33, 0xd5, 0x32, 0x11, + 0x54, 0xd5, 0xa2, 0xb6, 0xef, 0x69, 0x2a, 0x04, 0x73, 0x22, 0x04, 0x71, 0x4e, 0x08, 0xe5, 0x67, + 0x0f, 0xa1, 0x1c, 0x0f, 0xa1, 0x3c, 0x0e, 0x61, 0x67, 0x26, 0x02, 0x0b, 0xdb, 0x26, 0xdb, 0x0b, + 0x4e, 0x1d, 0x17, 0x67, 0x22, 0xa8, 0x72, 0xf1, 0x94, 0x33, 0xff, 0x25, 0xfa, 0x02, 0xfe, 0xbf, + 0xd7, 0x35, 0xf7, 0x30, 0x65, 0xcd, 0xfb, 0xae, 0x27, 0xb4, 0x79, 0x5d, 0xb3, 0x49, 0xee, 0xcb, + 0xe3, 0x02, 0x7a, 0xd7, 0x97, 0x7f, 0xe0, 0x8b, 0xab, 0xcf, 0x32, 0xfb, 0x79, 0x3d, 0xd1, 0x5e, + 0xdc, 0x91, 0x45, 0x6d, 0xa4, 0x41, 0x2a, 0xe4, 0x7c, 0xea, 0x1f, 0x72, 0x3e, 0x74, 0xa4, 0x7c, + 0x27, 0x40, 0xd2, 0x27, 0x0e, 0x7a, 0x01, 0x96, 0xea, 0x3b, 0xe7, 0x12, 0xd2, 0xa5, 0x07, 0x0f, + 0xf3, 0x1b, 0xb1, 0xdd, 0xe7, 0x2b, 0xd4, 0x77, 0x90, 0x0a, 0xab, 0xb5, 0xfa, 0x6e, 0xf3, 0x76, + 0xfd, 0x5e, 0xed, 0xe6, 0x39, 0x41, 0x92, 0x1f, 0x3c, 0xcc, 0x6f, 0xce, 0xd1, 0xac, 0x11, 0x76, + 0x9b, 0xf4, 0x6d, 0x03, 0xbd, 0x01, 0x6b, 0x95, 0x5a, 0xa9, 0x5e, 0x6b, 0x54, 0x1a, 0xbb, 0xb7, + 0x6a, 0xbb, 0xe7, 0x96, 0xa4, 0xcb, 0x0f, 0x1e, 0xe6, 0xe5, 0x39, 0x26, 0x15, 0xbb, 0x4d, 0x6c, + 0xda, 0xa5, 0x0c, 0xdb, 0x4c, 0x12, 0xbf, 0xfa, 0x21, 0x97, 0xb8, 0x26, 0xfe, 0xf1, 0x48, 0x16, + 0x94, 0xdf, 0x05, 0xb8, 0x34, 0xa7, 0x15, 0x04, 0x8d, 0xe5, 0x3f, 0xeb, 0x7d, 0x25, 0x58, 0xf3, + 0x19, 0xd2, 0xec, 0xda, 0x1d, 0xe2, 0x6d, 0x52, 0x6f, 0x90, 0xe7, 0x8f, 0xdb, 0xa4, 0x5a, 0xba, + 0x1d, 0xfd, 0xa7, 0xdb, 0x3f, 0x8b, 0x00, 0x77, 0xa2, 0x9b, 0x2a, 0xa4, 0x41, 0x7a, 0xe2, 0x72, + 0x06, 0xc9, 0x31, 0x67, 0xb3, 0x97, 0x46, 0x52, 0xfe, 0x68, 0x05, 0xbf, 0x3c, 0x4a, 0x02, 0x7d, + 0x02, 0x99, 0x39, 0x9f, 0xd9, 0x28, 0xde, 0x4d, 0x8e, 0xbe, 0x96, 0x91, 0xb6, 0x8e, 0x57, 0x8c, + 0xb0, 0xee, 0xc2, 0xd9, 0xa9, 0xaf, 0x6d, 0x14, 0x9f, 0x6e, 0xf3, 0xbf, 0xc5, 0xa5, 0x0b, 0xaa, + 0x7f, 0xc1, 0xa6, 0x86, 0x17, 0x6c, 0xea, 0xad, 0x9e, 0xc3, 0x06, 0x4a, 0x02, 0xbd, 0x0d, 0xa2, + 0x77, 0x68, 0x47, 0xd9, 0xf8, 0xbc, 0x18, 0x1f, 0x7f, 0xa5, 0x4b, 0x73, 0x24, 0x51, 0x40, 0xef, + 0x40, 0xd2, 0x3f, 0x2c, 0x23, 0x29, 0xa6, 0x16, 0x3b, 0x41, 0x1f, 0x03, 0x3f, 0xb0, 0xdb, 0xd3, + 0xf0, 0xe3, 0x19, 0x3d, 0x0d, 0x3f, 0x31, 0xf3, 0x94, 0x04, 0x32, 0xe0, 0xfc, 0x0c, 0x73, 0xd1, + 0x4b, 0xd3, 0x8b, 0x36, 0x77, 0xc8, 0x49, 0x2f, 0x1f, 0xa7, 0x16, 0xa2, 0x14, 0xcb, 0x8f, 0x47, + 0x39, 0xe1, 0xc9, 0x28, 0x27, 0x7c, 0x7b, 0x98, 0x4b, 0x3c, 0x3a, 0xcc, 0x09, 0x4f, 0x0e, 0x73, + 0x89, 0xdf, 0x0e, 0x73, 0x89, 0x0f, 0xaf, 0xfc, 0x1d, 0xba, 0x47, 0x77, 0xa6, 0xad, 0x24, 0xff, + 0xff, 0xda, 0x5f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x62, 0xb7, 0x11, 0x43, 0x48, 0x15, 0x00, 0x00, } func (this *LogStreamCommitInfo) Equal(that interface{}) bool { @@ -1050,13 +1041,7 @@ func (this *LogStreamCommitInfo) Equal(that interface{}) bool { if this.HighestWrittenLLSN != that1.HighestWrittenLLSN { return false } - if this.HighWatermark != that1.HighWatermark { - return false - } - if this.PrevHighWatermark != that1.PrevHighWatermark { - return false - } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + if this.Version != that1.Version { return false } return true @@ -1077,7 +1062,7 @@ type ManagementClient interface { // GetMetadata returns metadata of StorageNode. GetMetadata(ctx context.Context, in *GetMetadataRequest, opts ...grpc.CallOption) (*GetMetadataResponse, error) // AddLogStream adds a new LogStream to StorageNode. - AddLogStream(ctx context.Context, in *AddLogStreamRequest, opts ...grpc.CallOption) (*AddLogStreamResponse, error) + AddLogStreamReplica(ctx context.Context, in *AddLogStreamReplicaRequest, opts ...grpc.CallOption) (*AddLogStreamReplicaResponse, error) // RemoveLogStream removes a LogStream from StorageNode. RemoveLogStream(ctx context.Context, in *RemoveLogStreamRequest, opts ...grpc.CallOption) (*types.Empty, error) // Seal changes the status of LogStreamExecutor to LogStreamStatusSealing or @@ -1107,9 +1092,9 @@ func (c *managementClient) GetMetadata(ctx context.Context, in *GetMetadataReque return out, nil } -func (c *managementClient) AddLogStream(ctx context.Context, in *AddLogStreamRequest, opts ...grpc.CallOption) (*AddLogStreamResponse, error) { - out := new(AddLogStreamResponse) - err := c.cc.Invoke(ctx, "/varlog.snpb.Management/AddLogStream", in, out, opts...) +func (c *managementClient) AddLogStreamReplica(ctx context.Context, in *AddLogStreamReplicaRequest, opts ...grpc.CallOption) (*AddLogStreamReplicaResponse, error) { + out := new(AddLogStreamReplicaResponse) + err := c.cc.Invoke(ctx, "/varlog.snpb.Management/AddLogStreamReplica", in, out, opts...) if err != nil { return nil, err } @@ -1166,7 +1151,7 @@ type ManagementServer interface { // GetMetadata returns metadata of StorageNode. GetMetadata(context.Context, *GetMetadataRequest) (*GetMetadataResponse, error) // AddLogStream adds a new LogStream to StorageNode. - AddLogStream(context.Context, *AddLogStreamRequest) (*AddLogStreamResponse, error) + AddLogStreamReplica(context.Context, *AddLogStreamReplicaRequest) (*AddLogStreamReplicaResponse, error) // RemoveLogStream removes a LogStream from StorageNode. RemoveLogStream(context.Context, *RemoveLogStreamRequest) (*types.Empty, error) // Seal changes the status of LogStreamExecutor to LogStreamStatusSealing or @@ -1186,8 +1171,8 @@ type UnimplementedManagementServer struct { func (*UnimplementedManagementServer) GetMetadata(ctx context.Context, req *GetMetadataRequest) (*GetMetadataResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method GetMetadata not implemented") } -func (*UnimplementedManagementServer) AddLogStream(ctx context.Context, req *AddLogStreamRequest) (*AddLogStreamResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method AddLogStream not implemented") +func (*UnimplementedManagementServer) AddLogStreamReplica(ctx context.Context, req *AddLogStreamReplicaRequest) (*AddLogStreamReplicaResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method AddLogStreamReplica not implemented") } func (*UnimplementedManagementServer) RemoveLogStream(ctx context.Context, req *RemoveLogStreamRequest) (*types.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method RemoveLogStream not implemented") @@ -1227,20 +1212,20 @@ func _Management_GetMetadata_Handler(srv interface{}, ctx context.Context, dec f return interceptor(ctx, in, info, handler) } -func _Management_AddLogStream_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(AddLogStreamRequest) +func _Management_AddLogStreamReplica_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(AddLogStreamReplicaRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { - return srv.(ManagementServer).AddLogStream(ctx, in) + return srv.(ManagementServer).AddLogStreamReplica(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/varlog.snpb.Management/AddLogStream", + FullMethod: "/varlog.snpb.Management/AddLogStreamReplica", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ManagementServer).AddLogStream(ctx, req.(*AddLogStreamRequest)) + return srv.(ManagementServer).AddLogStreamReplica(ctx, req.(*AddLogStreamReplicaRequest)) } return interceptor(ctx, in, info, handler) } @@ -1344,8 +1329,8 @@ var _Management_serviceDesc = grpc.ServiceDesc{ Handler: _Management_GetMetadata_Handler, }, { - MethodName: "AddLogStream", - Handler: _Management_AddLogStream_Handler, + MethodName: "AddLogStreamReplica", + Handler: _Management_AddLogStreamReplica_Handler, }, { MethodName: "RemoveLogStream", @@ -1392,10 +1377,6 @@ func (m *GetMetadataRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.ClusterID != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.ClusterID)) i-- @@ -1424,10 +1405,6 @@ func (m *GetMetadataResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.StorageNodeMetadata != nil { { size, err := m.StorageNodeMetadata.MarshalToSizedBuffer(dAtA[:i]) @@ -1443,7 +1420,7 @@ func (m *GetMetadataResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AddLogStreamRequest) Marshal() (dAtA []byte, err error) { +func (m *AddLogStreamReplicaRequest) Marshal() (dAtA []byte, err error) { size := m.ProtoSize() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -1453,20 +1430,16 @@ func (m *AddLogStreamRequest) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AddLogStreamRequest) MarshalTo(dAtA []byte) (int, error) { +func (m *AddLogStreamReplicaRequest) MarshalTo(dAtA []byte) (int, error) { size := m.ProtoSize() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AddLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AddLogStreamReplicaRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Storage != nil { { size, err := m.Storage.MarshalToSizedBuffer(dAtA[:i]) @@ -1477,11 +1450,16 @@ func (m *AddLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintManagement(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x2a } if m.LogStreamID != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x20 + } + if m.TopicID != 0 { + i = encodeVarintManagement(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x18 } if m.StorageNodeID != 0 { @@ -1497,7 +1475,7 @@ func (m *AddLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AddLogStreamResponse) Marshal() (dAtA []byte, err error) { +func (m *AddLogStreamReplicaResponse) Marshal() (dAtA []byte, err error) { size := m.ProtoSize() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -1507,20 +1485,16 @@ func (m *AddLogStreamResponse) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AddLogStreamResponse) MarshalTo(dAtA []byte) (int, error) { +func (m *AddLogStreamReplicaResponse) MarshalTo(dAtA []byte) (int, error) { size := m.ProtoSize() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AddLogStreamResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AddLogStreamReplicaResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStream != nil { { size, err := m.LogStream.MarshalToSizedBuffer(dAtA[:i]) @@ -1556,13 +1530,14 @@ func (m *RemoveLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x20 + } + if m.TopicID != 0 { + i = encodeVarintManagement(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x18 } if m.StorageNodeID != 0 { @@ -1598,18 +1573,19 @@ func (m *SealRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LastCommittedGLSN != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.LastCommittedGLSN)) i-- - dAtA[i] = 0x20 + dAtA[i] = 0x28 } if m.LogStreamID != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x20 + } + if m.TopicID != 0 { + i = encodeVarintManagement(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x18 } if m.StorageNodeID != 0 { @@ -1645,10 +1621,6 @@ func (m *SealResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LastCommittedGLSN != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.LastCommittedGLSN)) i-- @@ -1682,10 +1654,6 @@ func (m *UnsealRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Replicas) > 0 { for iNdEx := len(m.Replicas) - 1; iNdEx >= 0; iNdEx-- { { @@ -1697,12 +1665,17 @@ func (m *UnsealRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintManagement(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x2a } } if m.LogStreamID != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x20 + } + if m.TopicID != 0 { + i = encodeVarintManagement(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x18 } if m.StorageNodeID != 0 { @@ -1738,10 +1711,6 @@ func (m *SyncRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Backup != nil { { size, err := m.Backup.MarshalToSizedBuffer(dAtA[:i]) @@ -1752,11 +1721,16 @@ func (m *SyncRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintManagement(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x2a } if m.LogStreamID != 0 { i = encodeVarintManagement(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x20 + } + if m.TopicID != 0 { + i = encodeVarintManagement(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x18 } if m.StorageNodeID != 0 { @@ -1792,10 +1766,6 @@ func (m *SyncRequest_BackupNode) MarshalToSizedBuffer(dAtA []byte) (int, error) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Address) > 0 { i -= len(m.Address) copy(dAtA[i:], m.Address) @@ -1831,10 +1801,6 @@ func (m *SyncResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Status != nil { { size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) @@ -1870,12 +1836,8 @@ func (m *GetPrevCommitInfoRequest) MarshalToSizedBuffer(dAtA []byte) (int, error _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if m.PrevHighWatermark != 0 { - i = encodeVarintManagement(dAtA, i, uint64(m.PrevHighWatermark)) + if m.PrevVersion != 0 { + i = encodeVarintManagement(dAtA, i, uint64(m.PrevVersion)) i-- dAtA[i] = 0x8 } @@ -1902,17 +1864,8 @@ func (m *LogStreamCommitInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if m.PrevHighWatermark != 0 { - i = encodeVarintManagement(dAtA, i, uint64(m.PrevHighWatermark)) - i-- - dAtA[i] = 0x40 - } - if m.HighWatermark != 0 { - i = encodeVarintManagement(dAtA, i, uint64(m.HighWatermark)) + if m.Version != 0 { + i = encodeVarintManagement(dAtA, i, uint64(m.Version)) i-- dAtA[i] = 0x38 } @@ -1969,10 +1922,6 @@ func (m *GetPrevCommitInfoResponse) MarshalToSizedBuffer(dAtA []byte) (int, erro _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.CommitInfos) > 0 { for iNdEx := len(m.CommitInfos) - 1; iNdEx >= 0; iNdEx-- { { @@ -2015,9 +1964,6 @@ func (m *GetMetadataRequest) ProtoSize() (n int) { if m.ClusterID != 0 { n += 1 + sovManagement(uint64(m.ClusterID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2031,13 +1977,10 @@ func (m *GetMetadataResponse) ProtoSize() (n int) { l = m.StorageNodeMetadata.ProtoSize() n += 1 + l + sovManagement(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } -func (m *AddLogStreamRequest) ProtoSize() (n int) { +func (m *AddLogStreamReplicaRequest) ProtoSize() (n int) { if m == nil { return 0 } @@ -2049,6 +1992,9 @@ func (m *AddLogStreamRequest) ProtoSize() (n int) { if m.StorageNodeID != 0 { n += 1 + sovManagement(uint64(m.StorageNodeID)) } + if m.TopicID != 0 { + n += 1 + sovManagement(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovManagement(uint64(m.LogStreamID)) } @@ -2056,13 +2002,10 @@ func (m *AddLogStreamRequest) ProtoSize() (n int) { l = m.Storage.ProtoSize() n += 1 + l + sovManagement(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } -func (m *AddLogStreamResponse) ProtoSize() (n int) { +func (m *AddLogStreamReplicaResponse) ProtoSize() (n int) { if m == nil { return 0 } @@ -2072,9 +2015,6 @@ func (m *AddLogStreamResponse) ProtoSize() (n int) { l = m.LogStream.ProtoSize() n += 1 + l + sovManagement(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2090,12 +2030,12 @@ func (m *RemoveLogStreamRequest) ProtoSize() (n int) { if m.StorageNodeID != 0 { n += 1 + sovManagement(uint64(m.StorageNodeID)) } + if m.TopicID != 0 { + n += 1 + sovManagement(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovManagement(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2111,15 +2051,15 @@ func (m *SealRequest) ProtoSize() (n int) { if m.StorageNodeID != 0 { n += 1 + sovManagement(uint64(m.StorageNodeID)) } + if m.TopicID != 0 { + n += 1 + sovManagement(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovManagement(uint64(m.LogStreamID)) } if m.LastCommittedGLSN != 0 { n += 1 + sovManagement(uint64(m.LastCommittedGLSN)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2135,9 +2075,6 @@ func (m *SealResponse) ProtoSize() (n int) { if m.LastCommittedGLSN != 0 { n += 1 + sovManagement(uint64(m.LastCommittedGLSN)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2153,6 +2090,9 @@ func (m *UnsealRequest) ProtoSize() (n int) { if m.StorageNodeID != 0 { n += 1 + sovManagement(uint64(m.StorageNodeID)) } + if m.TopicID != 0 { + n += 1 + sovManagement(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovManagement(uint64(m.LogStreamID)) } @@ -2162,9 +2102,6 @@ func (m *UnsealRequest) ProtoSize() (n int) { n += 1 + l + sovManagement(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2180,6 +2117,9 @@ func (m *SyncRequest) ProtoSize() (n int) { if m.StorageNodeID != 0 { n += 1 + sovManagement(uint64(m.StorageNodeID)) } + if m.TopicID != 0 { + n += 1 + sovManagement(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovManagement(uint64(m.LogStreamID)) } @@ -2187,9 +2127,6 @@ func (m *SyncRequest) ProtoSize() (n int) { l = m.Backup.ProtoSize() n += 1 + l + sovManagement(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2206,9 +2143,6 @@ func (m *SyncRequest_BackupNode) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovManagement(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2222,9 +2156,6 @@ func (m *SyncResponse) ProtoSize() (n int) { l = m.Status.ProtoSize() n += 1 + l + sovManagement(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2234,11 +2165,8 @@ func (m *GetPrevCommitInfoRequest) ProtoSize() (n int) { } var l int _ = l - if m.PrevHighWatermark != 0 { - n += 1 + sovManagement(uint64(m.PrevHighWatermark)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.PrevVersion != 0 { + n += 1 + sovManagement(uint64(m.PrevVersion)) } return n } @@ -2267,14 +2195,8 @@ func (m *LogStreamCommitInfo) ProtoSize() (n int) { if m.HighestWrittenLLSN != 0 { n += 1 + sovManagement(uint64(m.HighestWrittenLLSN)) } - if m.HighWatermark != 0 { - n += 1 + sovManagement(uint64(m.HighWatermark)) - } - if m.PrevHighWatermark != 0 { - n += 1 + sovManagement(uint64(m.PrevHighWatermark)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.Version != 0 { + n += 1 + sovManagement(uint64(m.Version)) } return n } @@ -2294,9 +2216,6 @@ func (m *GetPrevCommitInfoResponse) ProtoSize() (n int) { n += 1 + l + sovManagement(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2366,7 +2285,6 @@ func (m *GetMetadataRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2453,7 +2371,6 @@ func (m *GetMetadataResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2463,7 +2380,7 @@ func (m *GetMetadataResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *AddLogStreamRequest) Unmarshal(dAtA []byte) error { +func (m *AddLogStreamReplicaRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -2486,10 +2403,10 @@ func (m *AddLogStreamRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AddLogStreamRequest: wiretype end group for non-group") + return fmt.Errorf("proto: AddLogStreamReplicaRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AddLogStreamRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AddLogStreamReplicaRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -2531,6 +2448,25 @@ func (m *AddLogStreamRequest) Unmarshal(dAtA []byte) error { } } case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowManagement + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -2549,7 +2485,7 @@ func (m *AddLogStreamRequest) Unmarshal(dAtA []byte) error { break } } - case 4: + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Storage", wireType) } @@ -2597,7 +2533,6 @@ func (m *AddLogStreamRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2607,7 +2542,7 @@ func (m *AddLogStreamRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *AddLogStreamResponse) Unmarshal(dAtA []byte) error { +func (m *AddLogStreamReplicaResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -2630,10 +2565,10 @@ func (m *AddLogStreamResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AddLogStreamResponse: wiretype end group for non-group") + return fmt.Errorf("proto: AddLogStreamReplicaResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AddLogStreamResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AddLogStreamReplicaResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -2684,7 +2619,6 @@ func (m *AddLogStreamResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2762,6 +2696,25 @@ func (m *RemoveLogStreamRequest) Unmarshal(dAtA []byte) error { } } case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowManagement + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -2792,7 +2745,6 @@ func (m *RemoveLogStreamRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2870,6 +2822,25 @@ func (m *SealRequest) Unmarshal(dAtA []byte) error { } } case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowManagement + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -2888,7 +2859,7 @@ func (m *SealRequest) Unmarshal(dAtA []byte) error { break } } - case 4: + case 5: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LastCommittedGLSN", wireType) } @@ -2919,7 +2890,6 @@ func (m *SealRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3008,7 +2978,6 @@ func (m *SealResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3086,6 +3055,25 @@ func (m *UnsealRequest) Unmarshal(dAtA []byte) error { } } case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowManagement + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -3104,7 +3092,7 @@ func (m *UnsealRequest) Unmarshal(dAtA []byte) error { break } } - case 4: + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Replicas", wireType) } @@ -3133,7 +3121,7 @@ func (m *UnsealRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Replicas = append(m.Replicas, Replica{}) + m.Replicas = append(m.Replicas, varlogpb.Replica{}) if err := m.Replicas[len(m.Replicas)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } @@ -3150,7 +3138,6 @@ func (m *UnsealRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3228,6 +3215,25 @@ func (m *SyncRequest) Unmarshal(dAtA []byte) error { } } case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowManagement + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -3246,7 +3252,7 @@ func (m *SyncRequest) Unmarshal(dAtA []byte) error { break } } - case 4: + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Backup", wireType) } @@ -3294,7 +3300,6 @@ func (m *SyncRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3396,7 +3401,6 @@ func (m *SyncRequest_BackupNode) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3483,7 +3487,6 @@ func (m *SyncResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3524,9 +3527,9 @@ func (m *GetPrevCommitInfoRequest) Unmarshal(dAtA []byte) error { switch fieldNum { case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field PrevHighWatermark", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PrevVersion", wireType) } - m.PrevHighWatermark = 0 + m.PrevVersion = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowManagement @@ -3536,7 +3539,7 @@ func (m *GetPrevCommitInfoRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.PrevHighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift + m.PrevVersion |= github_daumkakao_com_varlog_varlog_pkg_types.Version(b&0x7F) << shift if b < 0x80 { break } @@ -3553,7 +3556,6 @@ func (m *GetPrevCommitInfoRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3708,28 +3710,9 @@ func (m *LogStreamCommitInfo) Unmarshal(dAtA []byte) error { } case 7: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field HighWatermark", wireType) - } - m.HighWatermark = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowManagement - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.HighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 8: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field PrevHighWatermark", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) } - m.PrevHighWatermark = 0 + m.Version = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowManagement @@ -3739,7 +3722,7 @@ func (m *LogStreamCommitInfo) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.PrevHighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift + m.Version |= github_daumkakao_com_varlog_varlog_pkg_types.Version(b&0x7F) << shift if b < 0x80 { break } @@ -3756,7 +3739,6 @@ func (m *LogStreamCommitInfo) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3860,7 +3842,6 @@ func (m *GetPrevCommitInfoResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/snpb/management.proto b/proto/snpb/management.proto index 19b686bef..38f1e3bb9 100644 --- a/proto/snpb/management.proto +++ b/proto/snpb/management.proto @@ -7,13 +7,15 @@ import "google/protobuf/empty.proto"; import "varlogpb/metadata.proto"; import "snpb/replicator.proto"; -import "snpb/replica.proto"; option go_package = "github.com/kakao/varlog/proto/snpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; message GetMetadataRequest { uint32 cluster_id = 1 [ @@ -27,25 +29,30 @@ message GetMetadataResponse { varlogpb.StorageNodeMetadataDescriptor storage_node_metadata = 1; } -message AddLogStreamRequest { +message AddLogStreamReplicaRequest { uint32 cluster_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.ClusterID", (gogoproto.customname) = "ClusterID" ]; - uint32 storage_node_id = 2 [ + int32 storage_node_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" ]; - uint32 log_stream_id = 3 [ + int32 topic_id = 3 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 4 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - varlogpb.StorageDescriptor storage = 4; + varlogpb.StorageDescriptor storage = 5; } -message AddLogStreamResponse { +message AddLogStreamReplicaResponse { // TODO (jun): Use LogStreamMetadataDescriptor varlogpb.LogStreamDescriptor log_stream = 1; } @@ -56,12 +63,17 @@ message RemoveLogStreamRequest { "github.com/kakao/varlog/pkg/types.ClusterID", (gogoproto.customname) = "ClusterID" ]; - uint32 storage_node_id = 2 [ + int32 storage_node_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" ]; - uint32 log_stream_id = 3 [ + int32 topic_id = 3 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 4 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -74,17 +86,22 @@ message SealRequest { "github.com/kakao/varlog/pkg/types.ClusterID", (gogoproto.customname) = "ClusterID" ]; - uint32 storage_node_id = 2 [ + int32 storage_node_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" ]; - uint32 log_stream_id = 3 [ + int32 topic_id = 3 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 4 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - uint64 last_committed_glsn = 4 [ + uint64 last_committed_glsn = 5 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.GLSN", (gogoproto.customname) = "LastCommittedGLSN" @@ -106,23 +123,28 @@ message UnsealRequest { "github.com/kakao/varlog/pkg/types.ClusterID", (gogoproto.customname) = "ClusterID" ]; - uint32 storage_node_id = 2 [ + int32 storage_node_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" ]; - uint32 log_stream_id = 3 [ + int32 topic_id = 3 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 4 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - repeated Replica replicas = 4 [(gogoproto.nullable) = false]; + repeated varlogpb.Replica replicas = 5 [(gogoproto.nullable) = false]; } message SyncRequest { // FIXME: Use Replica instead of BackupNode message BackupNode { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" @@ -134,17 +156,22 @@ message SyncRequest { "github.com/kakao/varlog/pkg/types.ClusterID", (gogoproto.customname) = "ClusterID" ]; - uint32 storage_node_id = 2 [ + int32 storage_node_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" ]; - uint32 log_stream_id = 3 [ + int32 topic_id = 3 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 4 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - BackupNode backup = 4 [(gogoproto.nullable) = true]; + BackupNode backup = 5 [(gogoproto.nullable) = true]; } message SyncResponse { @@ -152,9 +179,9 @@ message SyncResponse { } message GetPrevCommitInfoRequest { - uint64 prev_high_watermark = 1 + uint64 prev_version = 1 [(gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN"]; + "github.com/kakao/varlog/pkg/types.Version"]; } message LogStreamCommitInfo { @@ -169,7 +196,7 @@ message LogStreamCommitInfo { "GetPrevCommitStatusInconsistent"]; } - uint32 log_stream_id = 1 [ + int32 log_stream_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -192,16 +219,13 @@ message LogStreamCommitInfo { "github.com/kakao/varlog/pkg/types.LLSN", (gogoproto.customname) = "HighestWrittenLLSN" ]; - uint64 high_watermark = 7 - [(gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN"]; - uint64 prev_high_watermark = 8 + uint64 version = 7 [(gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN"]; + "github.com/kakao/varlog/pkg/types.Version"]; } message GetPrevCommitInfoResponse { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" @@ -214,7 +238,8 @@ service Management { // GetMetadata returns metadata of StorageNode. rpc GetMetadata(GetMetadataRequest) returns (GetMetadataResponse) {} // AddLogStream adds a new LogStream to StorageNode. - rpc AddLogStream(AddLogStreamRequest) returns (AddLogStreamResponse) {} + rpc AddLogStreamReplica(AddLogStreamReplicaRequest) + returns (AddLogStreamReplicaResponse) {} // RemoveLogStream removes a LogStream from StorageNode. rpc RemoveLogStream(RemoveLogStreamRequest) returns (google.protobuf.Empty) {} diff --git a/proto/snpb/mock/snpb_mock.go b/proto/snpb/mock/snpb_mock.go index dcd794118..079db1d9f 100644 --- a/proto/snpb/mock/snpb_mock.go +++ b/proto/snpb/mock/snpb_mock.go @@ -1001,24 +1001,24 @@ func (m *MockManagementClient) EXPECT() *MockManagementClientMockRecorder { return m.recorder } -// AddLogStream mocks base method. -func (m *MockManagementClient) AddLogStream(arg0 context.Context, arg1 *snpb.AddLogStreamRequest, arg2 ...grpc.CallOption) (*snpb.AddLogStreamResponse, error) { +// AddLogStreamReplica mocks base method. +func (m *MockManagementClient) AddLogStreamReplica(arg0 context.Context, arg1 *snpb.AddLogStreamReplicaRequest, arg2 ...grpc.CallOption) (*snpb.AddLogStreamReplicaResponse, error) { m.ctrl.T.Helper() varargs := []interface{}{arg0, arg1} for _, a := range arg2 { varargs = append(varargs, a) } - ret := m.ctrl.Call(m, "AddLogStream", varargs...) - ret0, _ := ret[0].(*snpb.AddLogStreamResponse) + ret := m.ctrl.Call(m, "AddLogStreamReplica", varargs...) + ret0, _ := ret[0].(*snpb.AddLogStreamReplicaResponse) ret1, _ := ret[1].(error) return ret0, ret1 } -// AddLogStream indicates an expected call of AddLogStream. -func (mr *MockManagementClientMockRecorder) AddLogStream(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call { +// AddLogStreamReplica indicates an expected call of AddLogStreamReplica. +func (mr *MockManagementClientMockRecorder) AddLogStreamReplica(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() varargs := append([]interface{}{arg0, arg1}, arg2...) - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AddLogStream", reflect.TypeOf((*MockManagementClient)(nil).AddLogStream), varargs...) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AddLogStreamReplica", reflect.TypeOf((*MockManagementClient)(nil).AddLogStreamReplica), varargs...) } // GetMetadata mocks base method. @@ -1164,19 +1164,19 @@ func (m *MockManagementServer) EXPECT() *MockManagementServerMockRecorder { return m.recorder } -// AddLogStream mocks base method. -func (m *MockManagementServer) AddLogStream(arg0 context.Context, arg1 *snpb.AddLogStreamRequest) (*snpb.AddLogStreamResponse, error) { +// AddLogStreamReplica mocks base method. +func (m *MockManagementServer) AddLogStreamReplica(arg0 context.Context, arg1 *snpb.AddLogStreamReplicaRequest) (*snpb.AddLogStreamReplicaResponse, error) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "AddLogStream", arg0, arg1) - ret0, _ := ret[0].(*snpb.AddLogStreamResponse) + ret := m.ctrl.Call(m, "AddLogStreamReplica", arg0, arg1) + ret0, _ := ret[0].(*snpb.AddLogStreamReplicaResponse) ret1, _ := ret[1].(error) return ret0, ret1 } -// AddLogStream indicates an expected call of AddLogStream. -func (mr *MockManagementServerMockRecorder) AddLogStream(arg0, arg1 interface{}) *gomock.Call { +// AddLogStreamReplica indicates an expected call of AddLogStreamReplica. +func (mr *MockManagementServerMockRecorder) AddLogStreamReplica(arg0, arg1 interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AddLogStream", reflect.TypeOf((*MockManagementServer)(nil).AddLogStream), arg0, arg1) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AddLogStreamReplica", reflect.TypeOf((*MockManagementServer)(nil).AddLogStreamReplica), arg0, arg1) } // GetMetadata mocks base method. diff --git a/proto/snpb/replica.pb.go b/proto/snpb/replica.pb.go index b6f59492a..cfe1c374c 100644 --- a/proto/snpb/replica.pb.go +++ b/proto/snpb/replica.pb.go @@ -104,7 +104,7 @@ var fileDescriptor_c78e5eec0018720c = []byte{ 0x4b, 0x32, 0x4a, 0x93, 0xf4, 0x92, 0xf3, 0x73, 0xf5, 0xd3, 0xf3, 0xd3, 0xf3, 0xf5, 0xc1, 0x6a, 0x92, 0x4a, 0xd3, 0xc0, 0x3c, 0x88, 0x19, 0x20, 0x16, 0x44, 0xaf, 0x52, 0x2f, 0x13, 0x17, 0x7b, 0x10, 0xc4, 0x34, 0xa1, 0x72, 0x2e, 0xfe, 0xe2, 0x92, 0xfc, 0xa2, 0xc4, 0xf4, 0xd4, 0xf8, 0xbc, - 0xfc, 0x94, 0xd4, 0xf8, 0xcc, 0x14, 0x09, 0x46, 0x05, 0x46, 0x0d, 0x5e, 0x27, 0xff, 0x47, 0xf7, + 0xfc, 0x94, 0xd4, 0xf8, 0xcc, 0x14, 0x09, 0x46, 0x05, 0x46, 0x0d, 0x56, 0x27, 0xff, 0x47, 0xf7, 0xe4, 0x79, 0x83, 0x21, 0x52, 0x7e, 0xf9, 0x29, 0xa9, 0x9e, 0x2e, 0xbf, 0xee, 0xc9, 0x5b, 0x41, 0x2d, 0x4a, 0x49, 0x2c, 0xcd, 0xcd, 0x4e, 0xcc, 0x4e, 0xcc, 0x07, 0x5b, 0x09, 0x71, 0x0a, 0x8c, 0x2a, 0xc8, 0x4e, 0xd7, 0x2f, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x43, 0xd1, 0x1d, 0xc4, 0x5b, 0x8c, @@ -114,7 +114,7 @@ var fileDescriptor_c78e5eec0018720c = []byte{ 0x70, 0xb1, 0x27, 0xa6, 0xa4, 0x14, 0xa5, 0x16, 0x17, 0x4b, 0x30, 0x2b, 0x30, 0x6a, 0x70, 0x06, 0xc1, 0xb8, 0x4e, 0xf6, 0x2b, 0x1e, 0xc9, 0x31, 0x9e, 0x78, 0x24, 0xc7, 0x78, 0xe1, 0x91, 0x1c, 0xe3, 0x82, 0xc7, 0x72, 0x8c, 0x51, 0xba, 0xc4, 0x58, 0x09, 0x8f, 0x9a, 0x24, 0x36, 0x30, 0xdb, - 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0x24, 0x97, 0xde, 0xd3, 0xaf, 0x01, 0x00, 0x00, + 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0xe0, 0xf7, 0x78, 0x96, 0xaf, 0x01, 0x00, 0x00, } func (this *Replica) Equal(that interface{}) bool { @@ -338,7 +338,10 @@ func (m *Replica) Unmarshal(dAtA []byte) error { if err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { + if skippy < 0 { + return ErrInvalidLengthReplica + } + if (iNdEx + skippy) < 0 { return ErrInvalidLengthReplica } if (iNdEx + skippy) > l { diff --git a/proto/snpb/replica.proto b/proto/snpb/replica.proto deleted file mode 100644 index d8dc74fe2..000000000 --- a/proto/snpb/replica.proto +++ /dev/null @@ -1,26 +0,0 @@ -syntax = "proto3"; - -package varlog.snpb; - -import "github.com/gogo/protobuf/gogoproto/gogo.proto"; - -option go_package = "github.com/kakao/varlog/proto/snpb"; - -option (gogoproto.protosizer_all) = true; -option (gogoproto.marshaler_all) = true; -option (gogoproto.unmarshaler_all) = true; -option (gogoproto.equal_all) = true; - -message Replica { - uint32 storage_node_id = 1 [ - (gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.StorageNodeID", - (gogoproto.customname) = "StorageNodeID" - ]; - uint32 log_stream_id = 2 [ - (gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.LogStreamID", - (gogoproto.customname) = "LogStreamID" - ]; - string address = 3; -} diff --git a/proto/snpb/replicator.pb.go b/proto/snpb/replicator.pb.go index f8642f39a..d08a6c5b6 100644 --- a/proto/snpb/replicator.pb.go +++ b/proto/snpb/replicator.pb.go @@ -4,7 +4,6 @@ package snpb import ( - bytes "bytes" context "context" fmt "fmt" io "io" @@ -64,13 +63,11 @@ func (SyncState) EnumDescriptor() ([]byte, []int) { // ReplicationRequest contains LLSN (Local Log Sequence Number) that indicates // a log position in the local log stream of the primary storage node. type ReplicationRequest struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` - Payload []byte `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` - CreatedTime time.Time `protobuf:"bytes,4,opt,name=created_time,json=createdTime,proto3,stdtime" json:"created_time"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,3,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` + Payload []byte `protobuf:"bytes,4,opt,name=payload,proto3" json:"payload,omitempty"` + CreatedTime time.Time `protobuf:"bytes,5,opt,name=created_time,json=createdTime,proto3,stdtime" json:"created_time"` } func (m *ReplicationRequest) Reset() { *m = ReplicationRequest{} } @@ -106,6 +103,13 @@ func (m *ReplicationRequest) XXX_DiscardUnknown() { var xxx_messageInfo_ReplicationRequest proto.InternalMessageInfo +func (m *ReplicationRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *ReplicationRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -136,13 +140,10 @@ func (m *ReplicationRequest) GetCreatedTime() time.Time { // ReplicationResponse indicates that a log entry at given LLSN is replicated. type ReplicationResponse struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,3,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` - CreatedTime time.Time `protobuf:"bytes,4,opt,name=created_time,json=createdTime,proto3,stdtime" json:"created_time"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,3,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` + CreatedTime time.Time `protobuf:"bytes,4,opt,name=created_time,json=createdTime,proto3,stdtime" json:"created_time"` } func (m *ReplicationResponse) Reset() { *m = ReplicationResponse{} } @@ -207,11 +208,8 @@ func (m *ReplicationResponse) GetCreatedTime() time.Time { } type SyncPosition struct { - LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,1,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` - GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,1,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` + GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` } func (m *SyncPosition) Reset() { *m = SyncPosition{} } @@ -262,11 +260,8 @@ func (m *SyncPosition) GetGLSN() github_daumkakao_com_varlog_varlog_pkg_types.GL } type SyncRange struct { - FirstLLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,1,opt,name=first_llsn,json=firstLlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"first_llsn,omitempty"` - LastLLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=last_llsn,json=lastLlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"last_llsn,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + FirstLLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,1,opt,name=first_llsn,json=firstLlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"first_llsn,omitempty"` + LastLLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=last_llsn,json=lastLlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"last_llsn,omitempty"` } func (m *SyncRange) Reset() { *m = SyncRange{} } @@ -317,13 +312,10 @@ func (m *SyncRange) GetLastLLSN() github_daumkakao_com_varlog_varlog_pkg_types.L } type SyncStatus struct { - State SyncState `protobuf:"varint,1,opt,name=state,proto3,enum=varlog.snpb.SyncState" json:"state,omitempty"` - First SyncPosition `protobuf:"bytes,2,opt,name=first,proto3" json:"first"` - Last SyncPosition `protobuf:"bytes,3,opt,name=last,proto3" json:"last"` - Current SyncPosition `protobuf:"bytes,4,opt,name=current,proto3" json:"current"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + State SyncState `protobuf:"varint,1,opt,name=state,proto3,enum=varlog.snpb.SyncState" json:"state,omitempty"` + First SyncPosition `protobuf:"bytes,2,opt,name=first,proto3" json:"first"` + Last SyncPosition `protobuf:"bytes,3,opt,name=last,proto3" json:"last"` + Current SyncPosition `protobuf:"bytes,4,opt,name=current,proto3" json:"current"` } func (m *SyncStatus) Reset() { *m = SyncStatus{} } @@ -388,11 +380,8 @@ func (m *SyncStatus) GetCurrent() SyncPosition { } type SyncPayload struct { - CommitContext *varlogpb.CommitContext `protobuf:"bytes,1,opt,name=commit_context,json=commitContext,proto3" json:"commit_context,omitempty"` - LogEntry *varlogpb.LogEntry `protobuf:"bytes,2,opt,name=log_entry,json=logEntry,proto3" json:"log_entry,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + CommitContext *varlogpb.CommitContext `protobuf:"bytes,1,opt,name=commit_context,json=commitContext,proto3" json:"commit_context,omitempty"` + LogEntry *varlogpb.LogEntry `protobuf:"bytes,2,opt,name=log_entry,json=logEntry,proto3" json:"log_entry,omitempty"` } func (m *SyncPayload) Reset() { *m = SyncPayload{} } @@ -443,13 +432,10 @@ func (m *SyncPayload) GetLogEntry() *varlogpb.LogEntry { } type SyncInitRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - Source Replica `protobuf:"bytes,2,opt,name=source,proto3" json:"source"` - Destination Replica `protobuf:"bytes,3,opt,name=destination,proto3" json:"destination"` - Range SyncRange `protobuf:"bytes,4,opt,name=range,proto3" json:"range"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + Source varlogpb.Replica `protobuf:"bytes,2,opt,name=source,proto3" json:"source"` + Destination varlogpb.Replica `protobuf:"bytes,3,opt,name=destination,proto3" json:"destination"` + Range SyncRange `protobuf:"bytes,4,opt,name=range,proto3" json:"range"` } func (m *SyncInitRequest) Reset() { *m = SyncInitRequest{} } @@ -492,18 +478,18 @@ func (m *SyncInitRequest) GetClusterID() github_daumkakao_com_varlog_varlog_pkg_ return 0 } -func (m *SyncInitRequest) GetSource() Replica { +func (m *SyncInitRequest) GetSource() varlogpb.Replica { if m != nil { return m.Source } - return Replica{} + return varlogpb.Replica{} } -func (m *SyncInitRequest) GetDestination() Replica { +func (m *SyncInitRequest) GetDestination() varlogpb.Replica { if m != nil { return m.Destination } - return Replica{} + return varlogpb.Replica{} } func (m *SyncInitRequest) GetRange() SyncRange { @@ -514,10 +500,7 @@ func (m *SyncInitRequest) GetRange() SyncRange { } type SyncInitResponse struct { - Range SyncRange `protobuf:"bytes,1,opt,name=range,proto3" json:"range"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Range SyncRange `protobuf:"bytes,1,opt,name=range,proto3" json:"range"` } func (m *SyncInitResponse) Reset() { *m = SyncInitResponse{} } @@ -561,13 +544,10 @@ func (m *SyncInitResponse) GetRange() SyncRange { } type SyncReplicateRequest struct { - ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` - Source Replica `protobuf:"bytes,2,opt,name=source,proto3" json:"source"` - Destination Replica `protobuf:"bytes,3,opt,name=destination,proto3" json:"destination"` - Payload SyncPayload `protobuf:"bytes,4,opt,name=payload,proto3" json:"payload"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` + Source varlogpb.Replica `protobuf:"bytes,2,opt,name=source,proto3" json:"source"` + Destination varlogpb.Replica `protobuf:"bytes,3,opt,name=destination,proto3" json:"destination"` + Payload SyncPayload `protobuf:"bytes,4,opt,name=payload,proto3" json:"payload"` } func (m *SyncReplicateRequest) Reset() { *m = SyncReplicateRequest{} } @@ -610,18 +590,18 @@ func (m *SyncReplicateRequest) GetClusterID() github_daumkakao_com_varlog_varlog return 0 } -func (m *SyncReplicateRequest) GetSource() Replica { +func (m *SyncReplicateRequest) GetSource() varlogpb.Replica { if m != nil { return m.Source } - return Replica{} + return varlogpb.Replica{} } -func (m *SyncReplicateRequest) GetDestination() Replica { +func (m *SyncReplicateRequest) GetDestination() varlogpb.Replica { if m != nil { return m.Destination } - return Replica{} + return varlogpb.Replica{} } func (m *SyncReplicateRequest) GetPayload() SyncPayload { @@ -632,10 +612,7 @@ func (m *SyncReplicateRequest) GetPayload() SyncPayload { } type SyncReplicateResponse struct { - Status *SyncStatus `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Status *SyncStatus `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"` } func (m *SyncReplicateResponse) Reset() { *m = SyncReplicateResponse{} } @@ -695,70 +672,73 @@ func init() { func init() { proto.RegisterFile("proto/snpb/replicator.proto", fileDescriptor_85705cb817486b63) } var fileDescriptor_85705cb817486b63 = []byte{ - // 1008 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x56, 0x41, 0x6f, 0x1b, 0x45, - 0x14, 0xce, 0x38, 0x4e, 0x6a, 0x3f, 0xc7, 0x69, 0x98, 0xb6, 0xc4, 0x18, 0xea, 0x35, 0xe6, 0x62, - 0x21, 0xba, 0x06, 0x57, 0x54, 0x25, 0xc0, 0x01, 0x07, 0x37, 0x58, 0x98, 0x24, 0x1d, 0xe7, 0x80, - 0x38, 0x60, 0x6d, 0x76, 0x27, 0xcb, 0x2a, 0xeb, 0x9d, 0x65, 0x67, 0x0c, 0xe4, 0x17, 0x14, 0xe5, - 0x84, 0xb8, 0xa2, 0x88, 0x4a, 0xf4, 0x80, 0x38, 0x71, 0x86, 0x3f, 0x90, 0x13, 0xe2, 0x17, 0x18, - 0x61, 0x2e, 0xfc, 0x86, 0x9e, 0xd0, 0xcc, 0xce, 0xae, 0x9d, 0xb8, 0x2d, 0x49, 0x89, 0x38, 0xf4, - 0xe4, 0x9d, 0x79, 0xdf, 0x7c, 0xef, 0xbd, 0xef, 0xbd, 0x79, 0x1e, 0x78, 0x31, 0x8c, 0x98, 0x60, - 0x0d, 0x1e, 0x84, 0xbb, 0x8d, 0x88, 0x86, 0xbe, 0x67, 0x5b, 0x82, 0x45, 0xa6, 0xda, 0xc5, 0x85, - 0x2f, 0xac, 0xc8, 0x67, 0xae, 0x29, 0xad, 0x65, 0xc3, 0x65, 0xcc, 0xf5, 0x69, 0x43, 0x99, 0x76, - 0x87, 0x7b, 0x0d, 0xe1, 0x0d, 0x28, 0x17, 0xd6, 0x20, 0x8c, 0xd1, 0xe5, 0x1b, 0xae, 0x27, 0x3e, - 0x1b, 0xee, 0x9a, 0x36, 0x1b, 0x34, 0x5c, 0xe6, 0xb2, 0x09, 0x52, 0xae, 0x62, 0x3f, 0xf2, 0x4b, - 0xc3, 0xf1, 0xb4, 0x4f, 0xbd, 0xb7, 0x1a, 0x3b, 0x0c, 0x77, 0x1b, 0x03, 0x2a, 0x2c, 0xc7, 0x12, - 0xda, 0x50, 0xfb, 0x35, 0x03, 0x98, 0xe8, 0xf0, 0x3c, 0x16, 0x10, 0xfa, 0xf9, 0x90, 0x72, 0x81, - 0x19, 0x14, 0x7d, 0xe6, 0xf6, 0xb9, 0x88, 0xa8, 0x35, 0xe8, 0x7b, 0x4e, 0x09, 0x55, 0x51, 0xbd, - 0xd8, 0xfa, 0x70, 0x3c, 0x32, 0x0a, 0x5d, 0xe6, 0xf6, 0xd4, 0x7e, 0xe7, 0xfd, 0x87, 0x23, 0xe3, - 0xb6, 0x0e, 0xce, 0xb1, 0x86, 0x83, 0x7d, 0x6b, 0xdf, 0x62, 0x2a, 0xcc, 0xd8, 0x5d, 0xf2, 0x13, - 0xee, 0xbb, 0x0d, 0x71, 0x10, 0x52, 0x6e, 0x4e, 0x9d, 0x25, 0x05, 0x3f, 0x5d, 0x38, 0xf8, 0x2e, - 0x64, 0x7d, 0x9f, 0x07, 0xa5, 0x4c, 0x15, 0xd5, 0xb3, 0xad, 0x77, 0xc7, 0x23, 0x23, 0xdb, 0xed, - 0xf6, 0x36, 0x1f, 0x8e, 0x8c, 0x37, 0xce, 0xe7, 0xa0, 0xdb, 0xdb, 0x24, 0x8a, 0x0a, 0x97, 0xe0, - 0x52, 0x68, 0x1d, 0xf8, 0xcc, 0x72, 0x4a, 0xf3, 0x55, 0x54, 0x5f, 0x22, 0xc9, 0x12, 0x6f, 0xc0, - 0x92, 0x1d, 0x51, 0x4b, 0x50, 0xa7, 0x2f, 0xb5, 0x2e, 0x65, 0xab, 0xa8, 0x5e, 0x68, 0x96, 0xcd, - 0xb8, 0x10, 0x66, 0x22, 0xaf, 0xb9, 0x93, 0x14, 0xa2, 0x95, 0x3b, 0x1e, 0x19, 0x73, 0xdf, 0xfc, - 0x61, 0x20, 0x52, 0xd0, 0x27, 0xa5, 0xad, 0xf6, 0xdd, 0x3c, 0x5c, 0x39, 0xa1, 0x1e, 0x0f, 0x59, - 0xc0, 0x29, 0xfe, 0x12, 0x2e, 0x73, 0xc1, 0x22, 0xcb, 0xa5, 0xfd, 0x80, 0x39, 0x74, 0x22, 0xe0, - 0xd6, 0x78, 0x64, 0x14, 0x7b, 0xb1, 0x69, 0x93, 0x39, 0x54, 0x49, 0xb8, 0x76, 0xae, 0x0c, 0x4f, - 0x9c, 0x26, 0x45, 0x3e, 0xb5, 0x74, 0x66, 0xeb, 0x96, 0xf9, 0x9f, 0xea, 0x36, 0x7f, 0x71, 0x75, - 0xbb, 0xb0, 0xea, 0xfc, 0x82, 0x60, 0xa9, 0x77, 0x10, 0xd8, 0xdb, 0x8c, 0x7b, 0xb2, 0x3c, 0x69, - 0xb0, 0xe8, 0xe2, 0x82, 0xbd, 0x0b, 0x59, 0xf7, 0x54, 0xdf, 0x6e, 0x3c, 0x0d, 0xe5, 0x86, 0xa2, - 0x94, 0x54, 0x6b, 0xd9, 0xbf, 0xef, 0x1b, 0xa8, 0xf6, 0x1b, 0x82, 0xbc, 0x0c, 0x9e, 0x58, 0x81, - 0x4b, 0xb1, 0x05, 0xb0, 0xe7, 0x45, 0x5c, 0xf4, 0xa7, 0xe2, 0x6f, 0x8d, 0x47, 0x46, 0xfe, 0x8e, - 0xdc, 0x7d, 0xfa, 0x24, 0xf2, 0x8a, 0xb5, 0x2b, 0x33, 0xf9, 0x14, 0xf2, 0xbe, 0x95, 0x78, 0x88, - 0xd3, 0x79, 0x6f, 0x3c, 0x32, 0x72, 0x5d, 0xeb, 0xbf, 0x38, 0xc8, 0x49, 0x4e, 0xc9, 0x5f, 0xfb, - 0x13, 0x01, 0xc8, 0x84, 0x7a, 0xc2, 0x12, 0x43, 0x8e, 0x5f, 0x83, 0x05, 0x2e, 0x2c, 0x41, 0x55, - 0x32, 0xcb, 0xcd, 0xe7, 0xcd, 0xa9, 0x91, 0x68, 0x26, 0x38, 0x4a, 0x62, 0x10, 0x7e, 0x13, 0x16, - 0x54, 0xa4, 0x2a, 0xb0, 0x42, 0xf3, 0x85, 0x19, 0x74, 0x52, 0xe3, 0x56, 0x56, 0xf6, 0x02, 0x89, - 0xd1, 0xf8, 0x26, 0x64, 0xa5, 0x7f, 0xd5, 0x9d, 0x67, 0x38, 0xa5, 0xc0, 0xf8, 0x2d, 0xb8, 0x64, - 0x0f, 0xa3, 0x88, 0x06, 0x42, 0xb7, 0xde, 0xbf, 0x9e, 0x4b, 0xf0, 0xb5, 0x6f, 0x11, 0x14, 0x94, - 0x5d, 0x0f, 0x9a, 0x36, 0x2c, 0xdb, 0x6c, 0x30, 0xf0, 0x44, 0xdf, 0x66, 0x81, 0xa0, 0x5f, 0x09, - 0x95, 0x6d, 0xa1, 0x59, 0x49, 0x18, 0x93, 0xb1, 0x6c, 0xae, 0x2b, 0xd8, 0x7a, 0x8c, 0x22, 0x45, - 0x7b, 0x7a, 0x89, 0x6f, 0x41, 0x5e, 0xde, 0x6a, 0x1a, 0x88, 0xe8, 0xe0, 0xb4, 0x02, 0x29, 0x43, - 0x97, 0xb9, 0x6d, 0x09, 0x20, 0x39, 0x5f, 0x7f, 0xad, 0x65, 0x8f, 0x65, 0x27, 0x7d, 0x9f, 0x81, - 0xcb, 0x32, 0xa8, 0x4e, 0xe0, 0x89, 0x64, 0xbe, 0xef, 0x01, 0xd8, 0xfe, 0x90, 0x0b, 0x1a, 0x4d, - 0x66, 0xd3, 0x86, 0xec, 0xa7, 0xf5, 0x78, 0x57, 0x8d, 0x88, 0x5b, 0xe7, 0x2a, 0x77, 0x7a, 0x92, - 0xe4, 0x35, 0x75, 0xc7, 0xc1, 0x4d, 0x58, 0xe4, 0x6c, 0x18, 0xd9, 0x54, 0x87, 0x7d, 0xf5, 0x84, - 0x94, 0x7a, 0x74, 0x6a, 0x15, 0x35, 0x12, 0xbf, 0x03, 0x05, 0x87, 0x72, 0xe1, 0x05, 0x6a, 0xa6, - 0xea, 0xda, 0x3d, 0xe9, 0xe0, 0x34, 0x1c, 0x37, 0x61, 0x21, 0x92, 0x57, 0x46, 0xd7, 0x6e, 0xb6, - 0xaf, 0xd4, 0x85, 0x4a, 0xda, 0x44, 0x41, 0x6b, 0x77, 0x60, 0x65, 0x22, 0x90, 0x1e, 0xe1, 0x29, - 0x0f, 0x3a, 0x3b, 0xcf, 0x4f, 0x19, 0xb8, 0xaa, 0x4c, 0xfa, 0x2f, 0x81, 0x3e, 0xfb, 0x72, 0xdf, - 0x9e, 0xfc, 0xc9, 0xc6, 0x82, 0x97, 0x66, 0x2f, 0x4b, 0x6c, 0x4f, 0xee, 0x8a, 0x86, 0xd7, 0x3e, - 0x80, 0x6b, 0xa7, 0xb4, 0xd2, 0xca, 0x37, 0x60, 0x91, 0xab, 0x19, 0xa1, 0xa5, 0x5f, 0x7d, 0xe4, - 0x68, 0x18, 0x72, 0xa2, 0x61, 0xaf, 0xde, 0xd3, 0xa3, 0x52, 0x4d, 0x0c, 0x7c, 0x1d, 0x16, 0xda, - 0x84, 0x6c, 0x91, 0x95, 0xb9, 0x32, 0x3e, 0x3c, 0xaa, 0x2e, 0xa7, 0x96, 0x76, 0x14, 0xb1, 0x08, - 0xd7, 0xa1, 0xd0, 0xd9, 0xec, 0x6f, 0x93, 0xad, 0x0d, 0xd2, 0xee, 0xf5, 0x56, 0x50, 0x79, 0xf5, - 0xf0, 0xa8, 0x7a, 0x25, 0x05, 0x75, 0x82, 0xed, 0x88, 0xb9, 0x11, 0xe5, 0x1c, 0xbf, 0x02, 0xb9, - 0xf5, 0xad, 0x8f, 0xb6, 0xbb, 0xed, 0x9d, 0xf6, 0x4a, 0xa6, 0x7c, 0xed, 0xf0, 0xa8, 0xfa, 0x5c, - 0x0a, 0x5b, 0x67, 0x83, 0xd0, 0xa7, 0x82, 0x96, 0x97, 0xbe, 0xfe, 0xa1, 0x32, 0xf7, 0xe3, 0x83, - 0xca, 0xdc, 0xcf, 0x0f, 0x2a, 0xa8, 0x79, 0x2f, 0x03, 0x40, 0xd2, 0xc7, 0x1e, 0xde, 0x81, 0x7c, - 0x9a, 0x1e, 0x36, 0x1e, 0x25, 0xe9, 0xd4, 0x9b, 0xab, 0x5c, 0x7d, 0x3c, 0x20, 0x56, 0xa6, 0x36, - 0x57, 0x47, 0xaf, 0x23, 0xdc, 0x81, 0x5c, 0xd2, 0xad, 0xf8, 0xa5, 0x19, 0x6d, 0xa6, 0x6e, 0x79, - 0xf9, 0xfa, 0x63, 0xac, 0x09, 0x1d, 0xfe, 0x18, 0x8a, 0x27, 0x6a, 0x80, 0x5f, 0x9e, 0x6d, 0xf3, - 0x53, 0xbd, 0x5c, 0xae, 0x3d, 0x09, 0x92, 0x30, 0xb7, 0xde, 0x3e, 0x1e, 0x57, 0xd0, 0xef, 0xe3, - 0x0a, 0xba, 0xff, 0x57, 0x05, 0x7d, 0x72, 0xe3, 0x2c, 0xad, 0x9d, 0xbe, 0x97, 0x77, 0x17, 0xd5, - 0xf7, 0xcd, 0x7f, 0x02, 0x00, 0x00, 0xff, 0xff, 0x6e, 0x62, 0xcb, 0xf7, 0x44, 0x0b, 0x00, 0x00, + // 1042 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe4, 0x56, 0x41, 0x6f, 0x1b, 0x45, + 0x14, 0xf6, 0xba, 0x76, 0x62, 0x3f, 0xc7, 0x69, 0x98, 0x52, 0x62, 0x0c, 0xf5, 0x1a, 0x73, 0xb1, + 0x10, 0x5d, 0x83, 0x0b, 0x51, 0x89, 0x84, 0x04, 0x36, 0xae, 0xb1, 0x30, 0x49, 0x3a, 0xce, 0x01, + 0x71, 0xa8, 0xb5, 0xd9, 0x9d, 0x2c, 0xab, 0xac, 0x77, 0x96, 0x9d, 0x31, 0x90, 0x5f, 0x50, 0x94, + 0x53, 0xc5, 0x15, 0x45, 0xaa, 0x44, 0x25, 0x38, 0x22, 0x8e, 0xfc, 0x82, 0x9c, 0x50, 0x8f, 0x9c, + 0x8c, 0x70, 0x2e, 0xfc, 0x86, 0x9e, 0xd0, 0xcc, 0xce, 0x6e, 0x9c, 0x98, 0xd2, 0x24, 0x70, 0x82, + 0xd3, 0xee, 0xcc, 0xfb, 0xde, 0xf7, 0xde, 0x9b, 0xef, 0xcd, 0xdb, 0x85, 0x97, 0x82, 0x90, 0x72, + 0xda, 0x60, 0x7e, 0xb0, 0xd3, 0x08, 0x49, 0xe0, 0xb9, 0x96, 0xc9, 0x69, 0x68, 0xc8, 0x5d, 0x54, + 0xf8, 0xc2, 0x0c, 0x3d, 0xea, 0x18, 0xc2, 0x5a, 0xd6, 0x1d, 0x4a, 0x1d, 0x8f, 0x34, 0xa4, 0x69, + 0x67, 0xbc, 0xdb, 0xe0, 0xee, 0x88, 0x30, 0x6e, 0x8e, 0x82, 0x08, 0x5d, 0xbe, 0xe9, 0xb8, 0xfc, + 0xb3, 0xf1, 0x8e, 0x61, 0xd1, 0x51, 0xc3, 0xa1, 0x0e, 0x3d, 0x41, 0x8a, 0x55, 0x14, 0x47, 0xbc, + 0x29, 0xf8, 0x6a, 0x44, 0x1e, 0xec, 0x34, 0x46, 0x84, 0x9b, 0xb6, 0xc9, 0xcd, 0xc8, 0x50, 0xfb, + 0xf6, 0x0a, 0x20, 0xac, 0x52, 0x71, 0xa9, 0x8f, 0xc9, 0xe7, 0x63, 0xc2, 0x38, 0xba, 0x07, 0x39, + 0x4e, 0x03, 0xd7, 0x1a, 0xba, 0x76, 0x49, 0xab, 0x6a, 0xf5, 0x6c, 0xab, 0x3d, 0x9d, 0xe8, 0x8b, + 0xdb, 0x62, 0xaf, 0xf7, 0xc1, 0x93, 0x89, 0xfe, 0x96, 0x8a, 0x6f, 0x9b, 0xe3, 0xd1, 0x9e, 0xb9, + 0x67, 0x52, 0x99, 0x49, 0x14, 0x25, 0x7e, 0x04, 0x7b, 0x4e, 0x83, 0xef, 0x07, 0x84, 0x19, 0xca, + 0x0f, 0x2f, 0x4a, 0xd2, 0x9e, 0x8d, 0x28, 0x14, 0x3d, 0xea, 0x0c, 0x19, 0x0f, 0x89, 0x39, 0x12, + 0x41, 0xd2, 0x32, 0xc8, 0x47, 0xd3, 0x89, 0x5e, 0xe8, 0x53, 0x67, 0x20, 0xf7, 0x65, 0xa0, 0xdb, + 0x17, 0x0a, 0x34, 0xe3, 0x8b, 0x0b, 0x5e, 0xb2, 0xb0, 0xd1, 0x5d, 0xc8, 0x78, 0x1e, 0xf3, 0x4b, + 0x57, 0xaa, 0x5a, 0x3d, 0xd3, 0x7a, 0x77, 0x3a, 0xd1, 0x33, 0xfd, 0xfe, 0x60, 0xe3, 0xc9, 0x44, + 0x7f, 0xf3, 0x62, 0x01, 0xfa, 0x83, 0x0d, 0x2c, 0xa9, 0x50, 0x09, 0x16, 0x03, 0x73, 0xdf, 0xa3, + 0xa6, 0x5d, 0xca, 0x54, 0xb5, 0xfa, 0x12, 0x8e, 0x97, 0xa8, 0x0b, 0x4b, 0x56, 0x48, 0x4c, 0x4e, + 0xec, 0xa1, 0xd0, 0xad, 0x94, 0xad, 0x6a, 0xf5, 0x42, 0xb3, 0x6c, 0x44, 0xa2, 0x1a, 0xb1, 0x54, + 0xc6, 0x76, 0x2c, 0x6a, 0x2b, 0x77, 0x34, 0xd1, 0x53, 0x0f, 0x7e, 0xd3, 0x35, 0x5c, 0x50, 0x9e, + 0xc2, 0x26, 0xd4, 0xb9, 0x76, 0x4a, 0x1d, 0x16, 0x50, 0x9f, 0x11, 0xf4, 0x25, 0x5c, 0x65, 0x9c, + 0x86, 0xa6, 0x43, 0x86, 0x3e, 0xb5, 0xc9, 0x89, 0x4a, 0x9b, 0xd3, 0x89, 0x5e, 0x1c, 0x44, 0xa6, + 0x0d, 0x6a, 0x13, 0x79, 0x84, 0xeb, 0x17, 0xaa, 0xf0, 0x94, 0x37, 0x2e, 0xb2, 0x99, 0xe5, 0x7f, + 0x43, 0xb7, 0xb3, 0xea, 0x64, 0x2e, 0xab, 0xce, 0xcf, 0x1a, 0x2c, 0x0d, 0xf6, 0x7d, 0x6b, 0x8b, + 0x32, 0x57, 0xc8, 0x93, 0x24, 0xab, 0xfd, 0x7b, 0xc9, 0xde, 0x85, 0x8c, 0x23, 0x28, 0xd3, 0x27, + 0x94, 0xdd, 0xcb, 0x50, 0x76, 0x25, 0xa5, 0xa0, 0x5a, 0xcf, 0xfc, 0xf1, 0x50, 0xd7, 0x6a, 0xbf, + 0x68, 0x90, 0x17, 0xc9, 0x63, 0xd3, 0x77, 0x08, 0x32, 0x01, 0x76, 0xdd, 0x90, 0xf1, 0xe1, 0x4c, + 0xfe, 0xad, 0xe9, 0x44, 0xcf, 0xdf, 0x11, 0xbb, 0x97, 0x2f, 0x22, 0x2f, 0x59, 0xfb, 0xa2, 0x92, + 0x7b, 0x90, 0xf7, 0xcc, 0x38, 0x42, 0x54, 0xce, 0xfb, 0xd3, 0x89, 0x9e, 0xeb, 0x9b, 0xff, 0x24, + 0x40, 0x4e, 0x70, 0x0a, 0xfe, 0xda, 0xef, 0x1a, 0x80, 0x28, 0x68, 0xc0, 0x4d, 0x3e, 0x66, 0xe8, + 0x75, 0xc8, 0x32, 0x6e, 0x72, 0x22, 0x8b, 0x59, 0x6e, 0xbe, 0x60, 0xcc, 0x8c, 0x57, 0x23, 0xc6, + 0x11, 0x1c, 0x81, 0xd0, 0xdb, 0x90, 0x95, 0x99, 0xca, 0xc4, 0x0a, 0xcd, 0x17, 0xe7, 0xd0, 0xb1, + 0xc6, 0xad, 0x8c, 0xe8, 0x05, 0x1c, 0xa1, 0xd1, 0x2d, 0xc8, 0x88, 0xf8, 0xb2, 0x3b, 0xcf, 0xe1, + 0x25, 0xc1, 0xe8, 0x1d, 0x58, 0xb4, 0xc6, 0x61, 0x48, 0x7c, 0xae, 0x5a, 0xef, 0x99, 0x7e, 0x31, + 0xbe, 0xf6, 0x8d, 0x06, 0x05, 0x69, 0x57, 0x83, 0xa6, 0x03, 0xcb, 0x16, 0x1d, 0x8d, 0x5c, 0x3e, + 0xb4, 0xa8, 0xcf, 0xc9, 0x57, 0x5c, 0x56, 0x5b, 0x68, 0x56, 0x62, 0xc6, 0x78, 0xec, 0x1b, 0x6d, + 0x09, 0x6b, 0x47, 0x28, 0x5c, 0xb4, 0x66, 0x97, 0x68, 0x0d, 0xf2, 0xe2, 0x56, 0x13, 0x9f, 0x87, + 0xfb, 0x67, 0x4f, 0x20, 0x61, 0xe8, 0x53, 0xa7, 0x23, 0x00, 0x38, 0xe7, 0xa9, 0xb7, 0xf5, 0xcc, + 0x91, 0xe8, 0xa4, 0xef, 0xd3, 0x70, 0x55, 0x24, 0xd5, 0xf3, 0x5d, 0x1e, 0x7f, 0x3f, 0x76, 0x01, + 0x2c, 0x6f, 0xcc, 0x38, 0x09, 0xe3, 0xd9, 0x54, 0x6c, 0x75, 0x45, 0x3f, 0xb5, 0xa3, 0x5d, 0x39, + 0x22, 0xd6, 0x2e, 0x24, 0x77, 0xe2, 0x89, 0xf3, 0x8a, 0xba, 0x67, 0xa3, 0x35, 0x58, 0x60, 0x74, + 0x1c, 0x5a, 0x44, 0xa5, 0x5d, 0x9a, 0x4b, 0x5b, 0x8d, 0x4f, 0x75, 0x92, 0x0a, 0x8d, 0xde, 0x83, + 0x82, 0x4d, 0x18, 0x77, 0x7d, 0x39, 0x57, 0x95, 0x7e, 0xcf, 0x72, 0x9e, 0x75, 0x41, 0x4d, 0xc8, + 0x86, 0xe2, 0xea, 0x28, 0x0d, 0xe7, 0xfb, 0x4b, 0x5e, 0xac, 0xb8, 0x5d, 0x24, 0xb4, 0x76, 0x07, + 0x56, 0x4e, 0x0e, 0x4a, 0x8d, 0xf2, 0x84, 0x47, 0x3b, 0x3f, 0xcf, 0x4f, 0x69, 0x78, 0x5e, 0x9a, + 0xd4, 0xa7, 0x81, 0xfc, 0x7f, 0x8e, 0xfd, 0xf6, 0xe9, 0x8f, 0xee, 0x8c, 0xf7, 0xc9, 0xe5, 0x89, + 0xec, 0xf1, 0xdd, 0x51, 0xf0, 0xda, 0x87, 0x70, 0xfd, 0xcc, 0x99, 0x29, 0x05, 0x1a, 0xb0, 0xc0, + 0xe4, 0xcc, 0x50, 0x12, 0xac, 0xfe, 0xe5, 0xa8, 0x18, 0x33, 0xac, 0x60, 0xaf, 0xdd, 0x57, 0xa3, + 0x53, 0x4e, 0x10, 0x74, 0x03, 0xb2, 0x1d, 0x8c, 0x37, 0xf1, 0x4a, 0xaa, 0x8c, 0x0e, 0x0e, 0xab, + 0xcb, 0x89, 0xa5, 0x13, 0x86, 0x34, 0x44, 0x75, 0x28, 0xf4, 0x36, 0x86, 0x5b, 0x78, 0xb3, 0x8b, + 0x3b, 0x83, 0xc1, 0x8a, 0x56, 0x5e, 0x3d, 0x38, 0xac, 0x5e, 0x4b, 0x40, 0x3d, 0x7f, 0x2b, 0xa4, + 0x4e, 0x48, 0x18, 0x43, 0xaf, 0x42, 0xae, 0xbd, 0xf9, 0xf1, 0x56, 0xbf, 0xb3, 0xdd, 0x59, 0x49, + 0x97, 0xaf, 0x1f, 0x1c, 0x56, 0x9f, 0x4b, 0x60, 0x6d, 0x3a, 0x0a, 0x3c, 0xc2, 0x49, 0x79, 0xe9, + 0xeb, 0xef, 0x2a, 0xa9, 0x1f, 0x1e, 0x55, 0x52, 0x3f, 0x3e, 0xaa, 0x68, 0xcd, 0xfb, 0x69, 0x00, + 0x9c, 0xfc, 0x48, 0xa2, 0x6d, 0xc8, 0x27, 0xe5, 0x21, 0xfd, 0x54, 0x19, 0xf3, 0xff, 0x78, 0xe5, + 0xea, 0xd3, 0x01, 0xd1, 0xc9, 0xd4, 0x52, 0x75, 0xed, 0x0d, 0x0d, 0xf5, 0x20, 0x17, 0x77, 0x2d, + 0x7a, 0x79, 0xee, 0x6c, 0x66, 0x6e, 0x7d, 0xf9, 0xc6, 0x53, 0xac, 0x31, 0x1d, 0xfa, 0x04, 0x8a, + 0xa7, 0x34, 0x40, 0xaf, 0xcc, 0xb7, 0xfb, 0x99, 0x9e, 0x2e, 0xd7, 0xfe, 0x0e, 0x12, 0x33, 0xb7, + 0xba, 0x47, 0xd3, 0x8a, 0xf6, 0x78, 0x5a, 0xd1, 0x1e, 0x1c, 0x57, 0x52, 0x0f, 0x8f, 0x2b, 0xda, + 0xe3, 0xe3, 0x4a, 0xea, 0xd7, 0xe3, 0x4a, 0xea, 0xd3, 0x9b, 0xe7, 0x69, 0xf7, 0xe4, 0xbf, 0x7c, + 0x67, 0x41, 0xbe, 0xdf, 0xfa, 0x33, 0x00, 0x00, 0xff, 0xff, 0x3f, 0x84, 0xc8, 0x7a, 0xac, 0x0b, + 0x00, 0x00, } func (x SyncState) String() string { @@ -793,9 +773,6 @@ func (this *SyncPosition) Equal(that interface{}) bool { if this.GLSN != that1.GLSN { return false } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { - return false - } return true } @@ -1004,10 +981,6 @@ func (m *ReplicationRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } n1, err1 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedTime):]) if err1 != nil { return 0, err1 @@ -1015,22 +988,27 @@ func (m *ReplicationRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= n1 i = encodeVarintReplicator(dAtA, i, uint64(n1)) i-- - dAtA[i] = 0x22 + dAtA[i] = 0x2a if len(m.Payload) > 0 { i -= len(m.Payload) copy(dAtA[i:], m.Payload) i = encodeVarintReplicator(dAtA, i, uint64(len(m.Payload))) i-- - dAtA[i] = 0x1a + dAtA[i] = 0x22 } if m.LLSN != 0 { i = encodeVarintReplicator(dAtA, i, uint64(m.LLSN)) i-- - dAtA[i] = 0x10 + dAtA[i] = 0x18 } if m.LogStreamID != 0 { i = encodeVarintReplicator(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x10 + } + if m.TopicID != 0 { + i = encodeVarintReplicator(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x8 } return len(dAtA) - i, nil @@ -1056,10 +1034,6 @@ func (m *ReplicationResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } n2, err2 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedTime):]) if err2 != nil { return 0, err2 @@ -1106,10 +1080,6 @@ func (m *SyncPosition) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.GLSN != 0 { i = encodeVarintReplicator(dAtA, i, uint64(m.GLSN)) i-- @@ -1143,10 +1113,6 @@ func (m *SyncRange) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LastLLSN != 0 { i = encodeVarintReplicator(dAtA, i, uint64(m.LastLLSN)) i-- @@ -1180,10 +1146,6 @@ func (m *SyncStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } { size, err := m.Current.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -1242,10 +1204,6 @@ func (m *SyncPayload) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogEntry != nil { { size, err := m.LogEntry.MarshalToSizedBuffer(dAtA[:i]) @@ -1293,10 +1251,6 @@ func (m *SyncInitRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } { size, err := m.Range.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -1355,10 +1309,6 @@ func (m *SyncInitResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } { size, err := m.Range.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -1392,10 +1342,6 @@ func (m *SyncReplicateRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } { size, err := m.Payload.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -1454,10 +1400,6 @@ func (m *SyncReplicateResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Status != nil { { size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) @@ -1490,6 +1432,9 @@ func (m *ReplicationRequest) ProtoSize() (n int) { } var l int _ = l + if m.TopicID != 0 { + n += 1 + sovReplicator(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovReplicator(uint64(m.LogStreamID)) } @@ -1502,9 +1447,6 @@ func (m *ReplicationRequest) ProtoSize() (n int) { } l = github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedTime) n += 1 + l + sovReplicator(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1525,9 +1467,6 @@ func (m *ReplicationResponse) ProtoSize() (n int) { } l = github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedTime) n += 1 + l + sovReplicator(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1543,9 +1482,6 @@ func (m *SyncPosition) ProtoSize() (n int) { if m.GLSN != 0 { n += 1 + sovReplicator(uint64(m.GLSN)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1561,9 +1497,6 @@ func (m *SyncRange) ProtoSize() (n int) { if m.LastLLSN != 0 { n += 1 + sovReplicator(uint64(m.LastLLSN)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1582,9 +1515,6 @@ func (m *SyncStatus) ProtoSize() (n int) { n += 1 + l + sovReplicator(uint64(l)) l = m.Current.ProtoSize() n += 1 + l + sovReplicator(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1602,9 +1532,6 @@ func (m *SyncPayload) ProtoSize() (n int) { l = m.LogEntry.ProtoSize() n += 1 + l + sovReplicator(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1623,9 +1550,6 @@ func (m *SyncInitRequest) ProtoSize() (n int) { n += 1 + l + sovReplicator(uint64(l)) l = m.Range.ProtoSize() n += 1 + l + sovReplicator(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1637,9 +1561,6 @@ func (m *SyncInitResponse) ProtoSize() (n int) { _ = l l = m.Range.ProtoSize() n += 1 + l + sovReplicator(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1658,9 +1579,6 @@ func (m *SyncReplicateRequest) ProtoSize() (n int) { n += 1 + l + sovReplicator(uint64(l)) l = m.Payload.ProtoSize() n += 1 + l + sovReplicator(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1674,9 +1592,6 @@ func (m *SyncReplicateResponse) ProtoSize() (n int) { l = m.Status.ProtoSize() n += 1 + l + sovReplicator(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1737,6 +1652,25 @@ func (m *ReplicationRequest) Unmarshal(dAtA []byte) error { } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowReplicator + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -1755,7 +1689,7 @@ func (m *ReplicationRequest) Unmarshal(dAtA []byte) error { break } } - case 2: + case 3: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LLSN", wireType) } @@ -1774,7 +1708,7 @@ func (m *ReplicationRequest) Unmarshal(dAtA []byte) error { break } } - case 3: + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Payload", wireType) } @@ -1808,7 +1742,7 @@ func (m *ReplicationRequest) Unmarshal(dAtA []byte) error { m.Payload = []byte{} } iNdEx = postIndex - case 4: + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field CreatedTime", wireType) } @@ -1853,7 +1787,6 @@ func (m *ReplicationRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -1994,7 +1927,6 @@ func (m *ReplicationResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2083,7 +2015,6 @@ func (m *SyncPosition) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2172,7 +2103,6 @@ func (m *SyncRange) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2341,7 +2271,6 @@ func (m *SyncStatus) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2464,7 +2393,6 @@ func (m *SyncPayload) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2633,7 +2561,6 @@ func (m *SyncInitRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2717,7 +2644,6 @@ func (m *SyncInitResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2886,7 +2812,6 @@ func (m *SyncReplicateRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2973,7 +2898,6 @@ func (m *SyncReplicateResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/snpb/replicator.proto b/proto/snpb/replicator.proto index df9b63801..8db15f0e1 100644 --- a/proto/snpb/replicator.proto +++ b/proto/snpb/replicator.proto @@ -5,7 +5,6 @@ package varlog.snpb; import "google/protobuf/timestamp.proto"; import "github.com/gogo/protobuf/gogoproto/gogo.proto"; -import "snpb/replica.proto"; import "varlogpb/metadata.proto"; option go_package = "github.com/kakao/varlog/proto/snpb"; @@ -13,33 +12,41 @@ option go_package = "github.com/kakao/varlog/proto/snpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; // ReplicationRequest contains LLSN (Local Log Sequence Number) that indicates // a log position in the local log stream of the primary storage node. message ReplicationRequest { - uint32 log_stream_id = 1 [ + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - uint64 llsn = 2 [ + uint64 llsn = 3 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LLSN", (gogoproto.customname) = "LLSN" ]; - bytes payload = 3; - google.protobuf.Timestamp created_time = 4 + bytes payload = 4; + google.protobuf.Timestamp created_time = 5 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false]; } // ReplicationResponse indicates that a log entry at given LLSN is replicated. message ReplicationResponse { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" ]; - uint32 log_stream_id = 2 [ + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -109,8 +116,8 @@ message SyncInitRequest { "github.com/kakao/varlog/pkg/types.ClusterID", (gogoproto.customname) = "ClusterID" ]; - Replica source = 2 [(gogoproto.nullable) = false]; - Replica destination = 3 [(gogoproto.nullable) = false]; + varlogpb.Replica source = 2 [(gogoproto.nullable) = false]; + varlogpb.Replica destination = 3 [(gogoproto.nullable) = false]; SyncRange range = 4 [(gogoproto.nullable) = false]; } @@ -124,8 +131,8 @@ message SyncReplicateRequest { "github.com/kakao/varlog/pkg/types.ClusterID", (gogoproto.customname) = "ClusterID" ]; - Replica source = 2 [(gogoproto.nullable) = false]; - Replica destination = 3 [(gogoproto.nullable) = false]; + varlogpb.Replica source = 2 [(gogoproto.nullable) = false]; + varlogpb.Replica destination = 3 [(gogoproto.nullable) = false]; SyncPayload payload = 4 [(gogoproto.nullable) = false]; } diff --git a/proto/varlogpb/log_entry.go b/proto/varlogpb/log_entry.go new file mode 100644 index 000000000..3fe849f7e --- /dev/null +++ b/proto/varlogpb/log_entry.go @@ -0,0 +1,17 @@ +package varlogpb + +import "github.com/kakao/varlog/pkg/types" + +var invalidLogEntry = LogEntry{ + GLSN: types.InvalidGLSN, + LLSN: types.InvalidLLSN, + Data: nil, +} + +func InvalidLogEntry() LogEntry { + return invalidLogEntry +} + +func (le LogEntry) Invalid() bool { + return le.GLSN.Invalid() && le.LLSN.Invalid() && len(le.Data) == 0 +} diff --git a/proto/varlogpb/metadata.go b/proto/varlogpb/metadata.go index e53e0d97c..22c2d3a9b 100644 --- a/proto/varlogpb/metadata.go +++ b/proto/varlogpb/metadata.go @@ -29,6 +29,10 @@ func (s StorageNodeStatus) Deleted() bool { return s == StorageNodeStatusDeleted } +func (s TopicStatus) Deleted() bool { + return s == TopicStatusDeleted +} + func (s *StorageNodeDescriptor) Valid() bool { if s == nil || len(s.Address) == 0 || @@ -81,9 +85,7 @@ func DiffReplicaDescriptorSet(xs []*ReplicaDescriptor, ys []*ReplicaDescriptor) xss := makeReplicaDescriptorDiffSet(xs) yss := makeReplicaDescriptorDiffSet(ys) for s := range yss { - if _, ok := xss[s]; ok { - delete(xss, s) - } + delete(xss, s) } if len(xss) == 0 { return nil @@ -375,3 +377,166 @@ func (snmd *StorageNodeMetadataDescriptor) GetLogStream(logStreamID types.LogStr } return LogStreamMetadataDescriptor{}, false } + +func (m *MetadataDescriptor) searchTopic(id types.TopicID) (int, bool) { + i := sort.Search(len(m.Topics), func(i int) bool { + return m.Topics[i].TopicID >= id + }) + + if i < len(m.Topics) && m.Topics[i].TopicID == id { + return i, true + } + + return i, false +} + +func (m *MetadataDescriptor) insertTopicAt(idx int, topic *TopicDescriptor) { + l := m.Topics + l = append(l, &TopicDescriptor{}) + copy(l[idx+1:], l[idx:]) + + l[idx] = topic + m.Topics = l +} + +func (m *MetadataDescriptor) updateTopicAt(idx int, topic *TopicDescriptor) { + m.Topics[idx] = topic +} + +func (m *MetadataDescriptor) GetTopic(id types.TopicID) *TopicDescriptor { + if m == nil { + return nil + } + + idx, match := m.searchTopic(id) + if match { + return m.Topics[idx] + } + + return nil +} + +func (m *MetadataDescriptor) InsertTopic(topic *TopicDescriptor) error { + if m == nil || topic == nil { + return nil + } + + idx, match := m.searchTopic(topic.TopicID) + if match { + return errors.New("already exist") + } + + m.insertTopicAt(idx, topic) + return nil +} + +func (m *MetadataDescriptor) DeleteTopic(id types.TopicID) error { + if m == nil { + return nil + } + + idx, match := m.searchTopic(id) + if !match { + return errors.New("not exists") + } + + l := m.Topics + + copy(l[idx:], l[idx+1:]) + m.Topics = l[:len(l)-1] + + return nil +} + +func (m *MetadataDescriptor) UpdateTopic(topic *TopicDescriptor) error { + if m == nil || topic == nil { + return errors.New("not exist") + } + + idx, match := m.searchTopic(topic.TopicID) + if !match { + return errors.New("not exist") + } + + m.updateTopicAt(idx, topic) + return nil +} + +func (m *MetadataDescriptor) UpsertTopic(topic *TopicDescriptor) error { + if err := m.InsertTopic(topic); err != nil { + return m.UpdateTopic(topic) + } + + return nil +} + +func (m *MetadataDescriptor) HaveTopic(id types.TopicID) (*TopicDescriptor, error) { + if m == nil { + return nil, errors.New("MetadataDescriptor is nil") + } + if tnd := m.GetTopic(id); tnd != nil { + return tnd, nil + } + return nil, errors.Wrap(verrors.ErrNotExist, "storage node") +} + +func (m *MetadataDescriptor) MustHaveTopic(id types.TopicID) (*TopicDescriptor, error) { + return m.Must().HaveTopic(id) +} + +func (m *MetadataDescriptor) NotHaveTopic(id types.TopicID) error { + if m == nil { + return errors.New("MetadataDescriptor is nil") + } + if tnd := m.GetTopic(id); tnd == nil { + return nil + } + return errors.Wrap(verrors.ErrExist, "storage node") +} + +func (m *MetadataDescriptor) MustNotHaveTopic(id types.TopicID) error { + return m.Must().NotHaveTopic(id) +} + +func (t *TopicDescriptor) searchLogStream(id types.LogStreamID) (int, bool) { + i := sort.Search(len(t.LogStreams), func(i int) bool { + return t.LogStreams[i] >= id + }) + + if i < len(t.LogStreams) && t.LogStreams[i] == id { + return i, true + } + + return i, false +} + +func (t *TopicDescriptor) insertLogStreamAt(idx int, lsID types.LogStreamID) { + l := t.LogStreams + l = append(l, types.LogStreamID(0)) + copy(l[idx+1:], l[idx:]) + + l[idx] = lsID + t.LogStreams = l +} + +func (t *TopicDescriptor) InsertLogStream(lsID types.LogStreamID) { + if t == nil { + return + } + + idx, match := t.searchLogStream(lsID) + if match { + return + } + + t.insertLogStreamAt(idx, lsID) +} + +func (t *TopicDescriptor) HasLogStream(lsID types.LogStreamID) bool { + if t == nil { + return false + } + + _, match := t.searchLogStream(lsID) + return match +} diff --git a/proto/varlogpb/metadata.pb.go b/proto/varlogpb/metadata.pb.go index 199c4119b..1b9a54d57 100644 --- a/proto/varlogpb/metadata.pb.go +++ b/proto/varlogpb/metadata.pb.go @@ -4,7 +4,6 @@ package varlogpb import ( - bytes "bytes" fmt "fmt" io "io" math "math" @@ -90,20 +89,149 @@ func (LogStreamStatus) EnumDescriptor() ([]byte, []int) { return fileDescriptor_eb4411772ca3492a, []int{1} } +type TopicStatus int32 + +const ( + TopicStatusRunning TopicStatus = 0 + TopicStatusDeleted TopicStatus = 1 +) + +var TopicStatus_name = map[int32]string{ + 0: "TOPIC_STATUS_RUNNING", + 1: "TOPIC_STATUS_DELETED", +} + +var TopicStatus_value = map[string]int32{ + "TOPIC_STATUS_RUNNING": 0, + "TOPIC_STATUS_DELETED": 1, +} + +func (x TopicStatus) String() string { + return proto.EnumName(TopicStatus_name, int32(x)) +} + +func (TopicStatus) EnumDescriptor() ([]byte, []int) { + return fileDescriptor_eb4411772ca3492a, []int{2} +} + +// StorageNode is a structure to represent identifier and address of storage +// node. +type StorageNode struct { + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + Address string `protobuf:"bytes,2,opt,name=address,proto3" json:"address,omitempty"` +} + +func (m *StorageNode) Reset() { *m = StorageNode{} } +func (m *StorageNode) String() string { return proto.CompactTextString(m) } +func (*StorageNode) ProtoMessage() {} +func (*StorageNode) Descriptor() ([]byte, []int) { + return fileDescriptor_eb4411772ca3492a, []int{0} +} +func (m *StorageNode) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *StorageNode) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_StorageNode.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *StorageNode) XXX_Merge(src proto.Message) { + xxx_messageInfo_StorageNode.Merge(m, src) +} +func (m *StorageNode) XXX_Size() int { + return m.ProtoSize() +} +func (m *StorageNode) XXX_DiscardUnknown() { + xxx_messageInfo_StorageNode.DiscardUnknown(m) +} + +var xxx_messageInfo_StorageNode proto.InternalMessageInfo + +func (m *StorageNode) GetStorageNodeID() github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID { + if m != nil { + return m.StorageNodeID + } + return 0 +} + +func (m *StorageNode) GetAddress() string { + if m != nil { + return m.Address + } + return "" +} + +type Replica struct { + StorageNode `protobuf:"bytes,1,opt,name=storage_node,json=storageNode,proto3,embedded=storage_node" json:"storage_node"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,2,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` +} + +func (m *Replica) Reset() { *m = Replica{} } +func (m *Replica) String() string { return proto.CompactTextString(m) } +func (*Replica) ProtoMessage() {} +func (*Replica) Descriptor() ([]byte, []int) { + return fileDescriptor_eb4411772ca3492a, []int{1} +} +func (m *Replica) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *Replica) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_Replica.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *Replica) XXX_Merge(src proto.Message) { + xxx_messageInfo_Replica.Merge(m, src) +} +func (m *Replica) XXX_Size() int { + return m.ProtoSize() +} +func (m *Replica) XXX_DiscardUnknown() { + xxx_messageInfo_Replica.DiscardUnknown(m) +} + +var xxx_messageInfo_Replica proto.InternalMessageInfo + +func (m *Replica) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + +func (m *Replica) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { + if m != nil { + return m.LogStreamID + } + return 0 +} + type StorageDescriptor struct { - Path string `protobuf:"bytes,1,opt,name=path,proto3" json:"path,omitempty"` - Used uint64 `protobuf:"varint,2,opt,name=used,proto3" json:"used,omitempty"` - Total uint64 `protobuf:"varint,3,opt,name=total,proto3" json:"total,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Path string `protobuf:"bytes,1,opt,name=path,proto3" json:"path,omitempty"` + Used uint64 `protobuf:"varint,2,opt,name=used,proto3" json:"used,omitempty"` + Total uint64 `protobuf:"varint,3,opt,name=total,proto3" json:"total,omitempty"` } func (m *StorageDescriptor) Reset() { *m = StorageDescriptor{} } func (m *StorageDescriptor) String() string { return proto.CompactTextString(m) } func (*StorageDescriptor) ProtoMessage() {} func (*StorageDescriptor) Descriptor() ([]byte, []int) { - return fileDescriptor_eb4411772ca3492a, []int{0} + return fileDescriptor_eb4411772ca3492a, []int{2} } func (m *StorageDescriptor) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -154,20 +282,16 @@ func (m *StorageDescriptor) GetTotal() uint64 { } type StorageNodeDescriptor struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - Address string `protobuf:"bytes,2,opt,name=address,proto3" json:"address,omitempty"` - Status StorageNodeStatus `protobuf:"varint,3,opt,name=status,proto3,enum=varlog.varlogpb.StorageNodeStatus" json:"status,omitempty"` - Storages []*StorageDescriptor `protobuf:"bytes,4,rep,name=storages,proto3" json:"storages,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNode `protobuf:"bytes,1,opt,name=storage_node,json=storageNode,proto3,embedded=storage_node" json:"storage_node"` + Status StorageNodeStatus `protobuf:"varint,2,opt,name=status,proto3,enum=varlog.varlogpb.StorageNodeStatus" json:"status,omitempty"` + Storages []*StorageDescriptor `protobuf:"bytes,3,rep,name=storages,proto3" json:"storages,omitempty"` } func (m *StorageNodeDescriptor) Reset() { *m = StorageNodeDescriptor{} } func (m *StorageNodeDescriptor) String() string { return proto.CompactTextString(m) } func (*StorageNodeDescriptor) ProtoMessage() {} func (*StorageNodeDescriptor) Descriptor() ([]byte, []int) { - return fileDescriptor_eb4411772ca3492a, []int{1} + return fileDescriptor_eb4411772ca3492a, []int{3} } func (m *StorageNodeDescriptor) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -196,20 +320,6 @@ func (m *StorageNodeDescriptor) XXX_DiscardUnknown() { var xxx_messageInfo_StorageNodeDescriptor proto.InternalMessageInfo -func (m *StorageNodeDescriptor) GetStorageNodeID() github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID { - if m != nil { - return m.StorageNodeID - } - return 0 -} - -func (m *StorageNodeDescriptor) GetAddress() string { - if m != nil { - return m.Address - } - return "" -} - func (m *StorageNodeDescriptor) GetStatus() StorageNodeStatus { if m != nil { return m.Status @@ -225,18 +335,15 @@ func (m *StorageNodeDescriptor) GetStorages() []*StorageDescriptor { } type ReplicaDescriptor struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - Path string `protobuf:"bytes,2,opt,name=path,proto3" json:"path,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + Path string `protobuf:"bytes,2,opt,name=path,proto3" json:"path,omitempty"` } func (m *ReplicaDescriptor) Reset() { *m = ReplicaDescriptor{} } func (m *ReplicaDescriptor) String() string { return proto.CompactTextString(m) } func (*ReplicaDescriptor) ProtoMessage() {} func (*ReplicaDescriptor) Descriptor() ([]byte, []int) { - return fileDescriptor_eb4411772ca3492a, []int{2} + return fileDescriptor_eb4411772ca3492a, []int{4} } func (m *ReplicaDescriptor) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -280,19 +387,17 @@ func (m *ReplicaDescriptor) GetPath() string { } type LogStreamDescriptor struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - Status LogStreamStatus `protobuf:"varint,2,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` - Replicas []*ReplicaDescriptor `protobuf:"bytes,3,rep,name=replicas,proto3" json:"replicas,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,2,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + Status LogStreamStatus `protobuf:"varint,3,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` + Replicas []*ReplicaDescriptor `protobuf:"bytes,4,rep,name=replicas,proto3" json:"replicas,omitempty"` } func (m *LogStreamDescriptor) Reset() { *m = LogStreamDescriptor{} } func (m *LogStreamDescriptor) String() string { return proto.CompactTextString(m) } func (*LogStreamDescriptor) ProtoMessage() {} func (*LogStreamDescriptor) Descriptor() ([]byte, []int) { - return fileDescriptor_eb4411772ca3492a, []int{3} + return fileDescriptor_eb4411772ca3492a, []int{5} } func (m *LogStreamDescriptor) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -328,6 +433,13 @@ func (m *LogStreamDescriptor) GetLogStreamID() github_daumkakao_com_varlog_varlo return 0 } +func (m *LogStreamDescriptor) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *LogStreamDescriptor) GetStatus() LogStreamStatus { if m != nil { return m.Status @@ -342,20 +454,78 @@ func (m *LogStreamDescriptor) GetReplicas() []*ReplicaDescriptor { return nil } +type TopicDescriptor struct { + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + Status TopicStatus `protobuf:"varint,2,opt,name=status,proto3,enum=varlog.varlogpb.TopicStatus" json:"status,omitempty"` + LogStreams []github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,rep,packed,name=log_streams,json=logStreams,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_streams,omitempty"` +} + +func (m *TopicDescriptor) Reset() { *m = TopicDescriptor{} } +func (m *TopicDescriptor) String() string { return proto.CompactTextString(m) } +func (*TopicDescriptor) ProtoMessage() {} +func (*TopicDescriptor) Descriptor() ([]byte, []int) { + return fileDescriptor_eb4411772ca3492a, []int{6} +} +func (m *TopicDescriptor) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *TopicDescriptor) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_TopicDescriptor.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *TopicDescriptor) XXX_Merge(src proto.Message) { + xxx_messageInfo_TopicDescriptor.Merge(m, src) +} +func (m *TopicDescriptor) XXX_Size() int { + return m.ProtoSize() +} +func (m *TopicDescriptor) XXX_DiscardUnknown() { + xxx_messageInfo_TopicDescriptor.DiscardUnknown(m) +} + +var xxx_messageInfo_TopicDescriptor proto.InternalMessageInfo + +func (m *TopicDescriptor) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + +func (m *TopicDescriptor) GetStatus() TopicStatus { + if m != nil { + return m.Status + } + return TopicStatusRunning +} + +func (m *TopicDescriptor) GetLogStreams() []github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { + if m != nil { + return m.LogStreams + } + return nil +} + type MetadataDescriptor struct { - AppliedIndex uint64 `protobuf:"varint,1,opt,name=applied_index,json=appliedIndex,proto3" json:"applied_index,omitempty"` - StorageNodes []*StorageNodeDescriptor `protobuf:"bytes,2,rep,name=storage_nodes,json=storageNodes,proto3" json:"storage_nodes,omitempty"` - LogStreams []*LogStreamDescriptor `protobuf:"bytes,3,rep,name=log_streams,json=logStreams,proto3" json:"log_streams,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + AppliedIndex uint64 `protobuf:"varint,1,opt,name=applied_index,json=appliedIndex,proto3" json:"applied_index,omitempty"` + StorageNodes []*StorageNodeDescriptor `protobuf:"bytes,2,rep,name=storage_nodes,json=storageNodes,proto3" json:"storage_nodes,omitempty"` + LogStreams []*LogStreamDescriptor `protobuf:"bytes,3,rep,name=log_streams,json=logStreams,proto3" json:"log_streams,omitempty"` + Topics []*TopicDescriptor `protobuf:"bytes,4,rep,name=topics,proto3" json:"topics,omitempty"` } func (m *MetadataDescriptor) Reset() { *m = MetadataDescriptor{} } func (m *MetadataDescriptor) String() string { return proto.CompactTextString(m) } func (*MetadataDescriptor) ProtoMessage() {} func (*MetadataDescriptor) Descriptor() ([]byte, []int) { - return fileDescriptor_eb4411772ca3492a, []int{4} + return fileDescriptor_eb4411772ca3492a, []int{7} } func (m *MetadataDescriptor) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -405,26 +575,30 @@ func (m *MetadataDescriptor) GetLogStreams() []*LogStreamDescriptor { return nil } +func (m *MetadataDescriptor) GetTopics() []*TopicDescriptor { + if m != nil { + return m.Topics + } + return nil +} + // StorageNodeMetadataDescriptor represents metadata of stroage node. type StorageNodeMetadataDescriptor struct { // ClusterID is the identifier of the cluster that the storage node belongs // to. ClusterID github_daumkakao_com_varlog_varlog_pkg_types.ClusterID `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,casttype=github.com/kakao/varlog/pkg/types.ClusterID" json:"cluster_id,omitempty"` // StorageNode is detailed information about the storage node. - StorageNode *StorageNodeDescriptor `protobuf:"bytes,2,opt,name=storage_node,json=storageNode,proto3" json:"storage_node,omitempty"` - LogStreams []LogStreamMetadataDescriptor `protobuf:"bytes,3,rep,name=log_streams,json=logStreams,proto3" json:"log_streams"` - CreatedTime time.Time `protobuf:"bytes,4,opt,name=created_time,json=createdTime,proto3,stdtime" json:"created_time"` - UpdatedTime time.Time `protobuf:"bytes,5,opt,name=updated_time,json=updatedTime,proto3,stdtime" json:"updated_time"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNode *StorageNodeDescriptor `protobuf:"bytes,2,opt,name=storage_node,json=storageNode,proto3" json:"storage_node,omitempty"` + LogStreams []LogStreamMetadataDescriptor `protobuf:"bytes,3,rep,name=log_streams,json=logStreams,proto3" json:"log_streams"` + CreatedTime time.Time `protobuf:"bytes,4,opt,name=created_time,json=createdTime,proto3,stdtime" json:"created_time"` + UpdatedTime time.Time `protobuf:"bytes,5,opt,name=updated_time,json=updatedTime,proto3,stdtime" json:"updated_time"` } func (m *StorageNodeMetadataDescriptor) Reset() { *m = StorageNodeMetadataDescriptor{} } func (m *StorageNodeMetadataDescriptor) String() string { return proto.CompactTextString(m) } func (*StorageNodeMetadataDescriptor) ProtoMessage() {} func (*StorageNodeMetadataDescriptor) Descriptor() ([]byte, []int) { - return fileDescriptor_eb4411772ca3492a, []int{5} + return fileDescriptor_eb4411772ca3492a, []int{8} } func (m *StorageNodeMetadataDescriptor) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -489,23 +663,22 @@ func (m *StorageNodeMetadataDescriptor) GetUpdatedTime() time.Time { } type LogStreamMetadataDescriptor struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - Status LogStreamStatus `protobuf:"varint,3,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` - HighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,4,opt,name=high_watermark,json=highWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"high_watermark,omitempty"` - Path string `protobuf:"bytes,5,opt,name=path,proto3" json:"path,omitempty"` - CreatedTime time.Time `protobuf:"bytes,6,opt,name=created_time,json=createdTime,proto3,stdtime" json:"created_time"` - UpdatedTime time.Time `protobuf:"bytes,7,opt,name=updated_time,json=updatedTime,proto3,stdtime" json:"updated_time"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,3,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + Status LogStreamStatus `protobuf:"varint,4,opt,name=status,proto3,enum=varlog.varlogpb.LogStreamStatus" json:"status,omitempty"` + Version github_daumkakao_com_varlog_varlog_pkg_types.Version `protobuf:"varint,5,opt,name=version,proto3,casttype=github.com/kakao/varlog/pkg/types.Version" json:"version,omitempty"` + HighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,6,opt,name=high_watermark,json=highWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"high_watermark,omitempty"` + Path string `protobuf:"bytes,7,opt,name=path,proto3" json:"path,omitempty"` + CreatedTime time.Time `protobuf:"bytes,8,opt,name=created_time,json=createdTime,proto3,stdtime" json:"created_time"` + UpdatedTime time.Time `protobuf:"bytes,9,opt,name=updated_time,json=updatedTime,proto3,stdtime" json:"updated_time"` } func (m *LogStreamMetadataDescriptor) Reset() { *m = LogStreamMetadataDescriptor{} } func (m *LogStreamMetadataDescriptor) String() string { return proto.CompactTextString(m) } func (*LogStreamMetadataDescriptor) ProtoMessage() {} func (*LogStreamMetadataDescriptor) Descriptor() ([]byte, []int) { - return fileDescriptor_eb4411772ca3492a, []int{6} + return fileDescriptor_eb4411772ca3492a, []int{9} } func (m *LogStreamMetadataDescriptor) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -548,6 +721,13 @@ func (m *LogStreamMetadataDescriptor) GetLogStreamID() github_daumkakao_com_varl return 0 } +func (m *LogStreamMetadataDescriptor) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *LogStreamMetadataDescriptor) GetStatus() LogStreamStatus { if m != nil { return m.Status @@ -555,6 +735,13 @@ func (m *LogStreamMetadataDescriptor) GetStatus() LogStreamStatus { return LogStreamStatusRunning } +func (m *LogStreamMetadataDescriptor) GetVersion() github_daumkakao_com_varlog_varlog_pkg_types.Version { + if m != nil { + return m.Version + } + return 0 +} + func (m *LogStreamMetadataDescriptor) GetHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { if m != nil { return m.HighWatermark @@ -584,19 +771,17 @@ func (m *LogStreamMetadataDescriptor) GetUpdatedTime() time.Time { } type LogStreamReplicaDescriptor struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - Address string `protobuf:"bytes,3,opt,name=address,proto3" json:"address,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,3,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + Address string `protobuf:"bytes,4,opt,name=address,proto3" json:"address,omitempty"` } func (m *LogStreamReplicaDescriptor) Reset() { *m = LogStreamReplicaDescriptor{} } func (m *LogStreamReplicaDescriptor) String() string { return proto.CompactTextString(m) } func (*LogStreamReplicaDescriptor) ProtoMessage() {} func (*LogStreamReplicaDescriptor) Descriptor() ([]byte, []int) { - return fileDescriptor_eb4411772ca3492a, []int{7} + return fileDescriptor_eb4411772ca3492a, []int{10} } func (m *LogStreamReplicaDescriptor) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -639,6 +824,13 @@ func (m *LogStreamReplicaDescriptor) GetLogStreamID() github_daumkakao_com_varlo return 0 } +func (m *LogStreamReplicaDescriptor) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *LogStreamReplicaDescriptor) GetAddress() string { if m != nil { return m.Address @@ -647,19 +839,16 @@ func (m *LogStreamReplicaDescriptor) GetAddress() string { } type LogEntry struct { - GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` - LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` - Data []byte `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + GLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=glsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"glsn,omitempty"` + LLSN github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,2,opt,name=llsn,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"llsn,omitempty"` + Data []byte `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"` } func (m *LogEntry) Reset() { *m = LogEntry{} } func (m *LogEntry) String() string { return proto.CompactTextString(m) } func (*LogEntry) ProtoMessage() {} func (*LogEntry) Descriptor() ([]byte, []int) { - return fileDescriptor_eb4411772ca3492a, []int{8} + return fileDescriptor_eb4411772ca3492a, []int{11} } func (m *LogEntry) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -710,21 +899,18 @@ func (m *LogEntry) GetData() []byte { } type CommitContext struct { - HighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,1,opt,name=high_watermark,json=highWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"high_watermark,omitempty"` - PrevHighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=prev_high_watermark,json=prevHighWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"prev_high_watermark,omitempty"` - CommittedGLSNBegin github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,3,opt,name=committed_glsn_begin,json=committedGlsnBegin,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"committed_glsn_begin,omitempty"` - CommittedGLSNEnd github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,4,opt,name=committed_glsn_end,json=committedGlsnEnd,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"committed_glsn_end,omitempty"` - CommittedLLSNBegin github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,5,opt,name=committed_llsn_begin,json=committedLlsnBegin,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"committed_llsn_begin,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Version github_daumkakao_com_varlog_varlog_pkg_types.Version `protobuf:"varint,1,opt,name=version,proto3,casttype=github.com/kakao/varlog/pkg/types.Version" json:"version,omitempty"` + HighWatermark github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=high_watermark,json=highWatermark,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"high_watermark,omitempty"` + CommittedGLSNBegin github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,3,opt,name=committed_glsn_begin,json=committedGlsnBegin,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"committed_glsn_begin,omitempty"` + CommittedGLSNEnd github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,4,opt,name=committed_glsn_end,json=committedGlsnEnd,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"committed_glsn_end,omitempty"` + CommittedLLSNBegin github_daumkakao_com_varlog_varlog_pkg_types.LLSN `protobuf:"varint,5,opt,name=committed_llsn_begin,json=committedLlsnBegin,proto3,casttype=github.com/kakao/varlog/pkg/types.LLSN" json:"committed_llsn_begin,omitempty"` } func (m *CommitContext) Reset() { *m = CommitContext{} } func (m *CommitContext) String() string { return proto.CompactTextString(m) } func (*CommitContext) ProtoMessage() {} func (*CommitContext) Descriptor() ([]byte, []int) { - return fileDescriptor_eb4411772ca3492a, []int{9} + return fileDescriptor_eb4411772ca3492a, []int{12} } func (m *CommitContext) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -753,16 +939,16 @@ func (m *CommitContext) XXX_DiscardUnknown() { var xxx_messageInfo_CommitContext proto.InternalMessageInfo -func (m *CommitContext) GetHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { +func (m *CommitContext) GetVersion() github_daumkakao_com_varlog_varlog_pkg_types.Version { if m != nil { - return m.HighWatermark + return m.Version } return 0 } -func (m *CommitContext) GetPrevHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { +func (m *CommitContext) GetHighWatermark() github_daumkakao_com_varlog_varlog_pkg_types.GLSN { if m != nil { - return m.PrevHighWatermark + return m.HighWatermark } return 0 } @@ -791,10 +977,14 @@ func (m *CommitContext) GetCommittedLLSNBegin() github_daumkakao_com_varlog_varl func init() { proto.RegisterEnum("varlog.varlogpb.StorageNodeStatus", StorageNodeStatus_name, StorageNodeStatus_value) proto.RegisterEnum("varlog.varlogpb.LogStreamStatus", LogStreamStatus_name, LogStreamStatus_value) + proto.RegisterEnum("varlog.varlogpb.TopicStatus", TopicStatus_name, TopicStatus_value) + proto.RegisterType((*StorageNode)(nil), "varlog.varlogpb.StorageNode") + proto.RegisterType((*Replica)(nil), "varlog.varlogpb.Replica") proto.RegisterType((*StorageDescriptor)(nil), "varlog.varlogpb.StorageDescriptor") proto.RegisterType((*StorageNodeDescriptor)(nil), "varlog.varlogpb.StorageNodeDescriptor") proto.RegisterType((*ReplicaDescriptor)(nil), "varlog.varlogpb.ReplicaDescriptor") proto.RegisterType((*LogStreamDescriptor)(nil), "varlog.varlogpb.LogStreamDescriptor") + proto.RegisterType((*TopicDescriptor)(nil), "varlog.varlogpb.TopicDescriptor") proto.RegisterType((*MetadataDescriptor)(nil), "varlog.varlogpb.MetadataDescriptor") proto.RegisterType((*StorageNodeMetadataDescriptor)(nil), "varlog.varlogpb.StorageNodeMetadataDescriptor") proto.RegisterType((*LogStreamMetadataDescriptor)(nil), "varlog.varlogpb.LogStreamMetadataDescriptor") @@ -806,78 +996,148 @@ func init() { func init() { proto.RegisterFile("proto/varlogpb/metadata.proto", fileDescriptor_eb4411772ca3492a) } var fileDescriptor_eb4411772ca3492a = []byte{ - // 1098 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x57, 0xcf, 0x6b, 0x1b, 0x47, - 0x14, 0xce, 0xae, 0xd6, 0x8e, 0x3d, 0xb2, 0x62, 0x79, 0x9c, 0x14, 0x55, 0x6d, 0xbc, 0x42, 0x2d, - 0xc5, 0x84, 0x56, 0x22, 0x0d, 0x29, 0xc1, 0x50, 0x4a, 0x64, 0xa9, 0xae, 0xc8, 0x56, 0x25, 0xbb, - 0x36, 0x81, 0x52, 0x10, 0x63, 0xcd, 0x64, 0xb5, 0x78, 0xb5, 0xb3, 0xec, 0x8e, 0xe2, 0xe4, 0xd0, - 0x53, 0x2f, 0xc5, 0xf4, 0xd0, 0x63, 0x2f, 0x06, 0x43, 0xa1, 0x7f, 0x40, 0x6f, 0xbd, 0xf7, 0x10, - 0xe8, 0xa5, 0xa7, 0x1e, 0x15, 0x50, 0x2f, 0xfd, 0x1b, 0x72, 0x2a, 0x33, 0xfb, 0x5b, 0x2b, 0x87, - 0x58, 0x0d, 0x69, 0x4e, 0xda, 0x99, 0xd9, 0xef, 0x7b, 0xef, 0xfb, 0xde, 0xdb, 0xb7, 0x2b, 0x70, - 0xdd, 0xf5, 0x28, 0xa3, 0xcd, 0x47, 0xc8, 0xb3, 0xa9, 0xe9, 0x1e, 0x36, 0x47, 0x84, 0x21, 0x8c, - 0x18, 0x6a, 0x88, 0x7d, 0xb8, 0x1e, 0x1c, 0x34, 0xa2, 0xf3, 0xaa, 0x6a, 0x52, 0x6a, 0xda, 0xa4, - 0x29, 0x8e, 0x0f, 0xc7, 0x0f, 0x9b, 0xcc, 0x1a, 0x11, 0x9f, 0xa1, 0x91, 0x1b, 0x20, 0xaa, 0x1f, - 0x99, 0x16, 0x1b, 0x8e, 0x0f, 0x1b, 0x03, 0x3a, 0x6a, 0x9a, 0xd4, 0xa4, 0xc9, 0x9d, 0x7c, 0x15, - 0x44, 0xe3, 0x57, 0xc1, 0xed, 0xf5, 0x07, 0x60, 0xc3, 0x60, 0xd4, 0x43, 0x26, 0x69, 0x13, 0x7f, - 0xe0, 0x59, 0x2e, 0xa3, 0x1e, 0x84, 0x40, 0x71, 0x11, 0x1b, 0x56, 0xa4, 0x9a, 0xb4, 0xbd, 0xaa, - 0x8b, 0x6b, 0xbe, 0x37, 0xf6, 0x09, 0xae, 0xc8, 0x35, 0x69, 0x5b, 0xd1, 0xc5, 0x35, 0xbc, 0x0a, - 0x96, 0x18, 0x65, 0xc8, 0xae, 0x14, 0xc4, 0x66, 0xb0, 0xd8, 0x51, 0xfe, 0x39, 0x53, 0xa5, 0xfa, - 0xaf, 0x32, 0xb8, 0x16, 0x32, 0xf7, 0x28, 0x4e, 0xb3, 0x1f, 0x83, 0x75, 0x3f, 0x38, 0xe8, 0x3b, - 0x14, 0x93, 0xbe, 0x85, 0x45, 0xa0, 0x52, 0xeb, 0xab, 0xe9, 0x44, 0x2d, 0xa5, 0x30, 0xdd, 0xf6, - 0xf3, 0x89, 0xba, 0x13, 0xea, 0xc1, 0x68, 0x3c, 0x3a, 0x42, 0x47, 0x88, 0x0a, 0x65, 0x81, 0x1f, - 0xd1, 0x8f, 0x7b, 0x64, 0x36, 0xd9, 0x13, 0x97, 0xf8, 0x8d, 0x0c, 0x5a, 0x2f, 0xf9, 0xa9, 0x25, - 0x86, 0x15, 0x70, 0x19, 0x61, 0xec, 0x11, 0xdf, 0x17, 0x2a, 0x56, 0xf5, 0x68, 0x09, 0x77, 0xc0, - 0xb2, 0xcf, 0x10, 0x1b, 0xfb, 0x42, 0xc9, 0x95, 0x8f, 0xeb, 0x8d, 0x19, 0xdf, 0xd3, 0xc4, 0x86, - 0xb8, 0x53, 0x0f, 0x11, 0xb0, 0x0d, 0x56, 0xc2, 0x30, 0x7e, 0x45, 0xa9, 0x15, 0xb6, 0x8b, 0xe7, - 0xa3, 0x13, 0x13, 0x5a, 0xca, 0xd3, 0x89, 0x2a, 0xe9, 0x31, 0x32, 0x34, 0xed, 0x17, 0x09, 0x6c, - 0xe8, 0xc4, 0xb5, 0xad, 0x01, 0x7a, 0x13, 0x0c, 0x8b, 0xfa, 0x40, 0x4e, 0xfa, 0x20, 0x4c, 0xf4, - 0x07, 0x19, 0x6c, 0x6a, 0xd4, 0x34, 0x98, 0x47, 0xd0, 0x28, 0x95, 0x2a, 0x05, 0x25, 0x9b, 0x9a, - 0x7d, 0x5f, 0xec, 0x27, 0x89, 0xde, 0x9b, 0x4e, 0xd4, 0x62, 0x7c, 0xbf, 0x48, 0xf3, 0xce, 0x85, - 0xd2, 0x4c, 0x61, 0xf5, 0xa2, 0x1d, 0x2f, 0x30, 0xbc, 0x13, 0x57, 0x4e, 0x16, 0x95, 0xab, 0xe5, - 0xbc, 0x8f, 0xa1, 0xf9, 0xba, 0x79, 0x81, 0xd5, 0xbc, 0xea, 0xf3, 0xeb, 0x96, 0xab, 0x45, 0x54, - 0xb7, 0x08, 0x19, 0xda, 0xf1, 0x4c, 0x02, 0xf0, 0xcb, 0xf0, 0xc9, 0x4d, 0xb9, 0xf1, 0x1e, 0x28, - 0x21, 0xd7, 0xb5, 0x2d, 0x82, 0xfb, 0x96, 0x83, 0xc9, 0x63, 0xe1, 0x86, 0xa2, 0xaf, 0x85, 0x9b, - 0x5d, 0xbe, 0x07, 0xef, 0x83, 0x52, 0xba, 0xba, 0x5c, 0x08, 0x4f, 0xe6, 0x83, 0x17, 0xb5, 0x60, - 0x2e, 0xa1, 0xb5, 0x54, 0xe1, 0x7c, 0x78, 0x0f, 0x14, 0x93, 0x2a, 0x44, 0xea, 0xde, 0x3f, 0xdf, - 0x99, 0x1c, 0x1d, 0x88, 0x2d, 0x8e, 0x14, 0xfe, 0x56, 0x00, 0xd7, 0x53, 0x09, 0xcc, 0x11, 0xfb, - 0x10, 0x80, 0x81, 0x3d, 0xf6, 0x19, 0xf1, 0x92, 0xba, 0xef, 0x4d, 0x27, 0xea, 0xea, 0x6e, 0xb0, - 0x2b, 0xaa, 0xfe, 0xc9, 0x85, 0xaa, 0x1e, 0x23, 0xf5, 0xd5, 0x90, 0xba, 0x8b, 0x61, 0x17, 0xac, - 0xa5, 0xfd, 0x12, 0x75, 0x7f, 0x69, 0xbb, 0xf4, 0x62, 0xca, 0x28, 0x68, 0xcc, 0xf3, 0xe9, 0xc3, - 0xf3, 0x7d, 0xca, 0xab, 0x16, 0x7e, 0x5d, 0x4a, 0xfb, 0x05, 0xf7, 0xc0, 0xda, 0xc0, 0x23, 0x88, - 0x11, 0xdc, 0xe7, 0xb3, 0xb9, 0xa2, 0x88, 0xfc, 0xaa, 0x8d, 0x60, 0x70, 0x37, 0xa2, 0x71, 0xdc, - 0xd8, 0x8f, 0x06, 0x77, 0x6b, 0x85, 0x73, 0xfc, 0xf8, 0x4c, 0x95, 0xf4, 0x62, 0x88, 0xe4, 0x67, - 0x9c, 0x68, 0xec, 0xe2, 0x84, 0x68, 0xe9, 0x22, 0x44, 0x21, 0x92, 0x9f, 0xd5, 0xff, 0x52, 0xc0, - 0x3b, 0x2f, 0xd0, 0xf0, 0xff, 0xcd, 0x97, 0xdc, 0xb4, 0x90, 0x5f, 0xdb, 0xb4, 0x28, 0x5c, 0x70, - 0x5a, 0x8c, 0xc0, 0x95, 0xa1, 0x65, 0x0e, 0xfb, 0xc7, 0x88, 0x11, 0x6f, 0x84, 0xbc, 0x23, 0x51, - 0x57, 0xa5, 0xf5, 0x39, 0xb7, 0xe8, 0x0b, 0xcb, 0x1c, 0x3e, 0x88, 0x0e, 0x9e, 0x4f, 0xd4, 0x9b, - 0x17, 0xca, 0x76, 0x4f, 0x33, 0x7a, 0x7a, 0x69, 0x98, 0xe6, 0x88, 0x27, 0xef, 0x52, 0xea, 0x0d, - 0x3c, 0xdb, 0x58, 0xcb, 0xaf, 0xaa, 0xb1, 0x2e, 0x2f, 0xda, 0x58, 0x67, 0x32, 0xa8, 0xc6, 0x86, - 0xbd, 0x41, 0xef, 0xad, 0xd7, 0xde, 0x57, 0xa9, 0x2f, 0x8b, 0x42, 0xe6, 0xcb, 0xa2, 0xfe, 0xbb, - 0x04, 0x56, 0x34, 0x6a, 0x76, 0x1c, 0xe6, 0x3d, 0x81, 0xf7, 0x81, 0x62, 0xda, 0xbe, 0x13, 0xbc, - 0x06, 0x5a, 0x9f, 0x4e, 0x27, 0xaa, 0xc2, 0x8b, 0xbf, 0x58, 0xc7, 0x08, 0x2a, 0x4e, 0x69, 0x73, - 0x4a, 0x39, 0xa1, 0xd4, 0x16, 0xa1, 0xd4, 0x04, 0x25, 0xa7, 0xe2, 0xbd, 0xc7, 0x07, 0x84, 0x50, - 0xb2, 0xa6, 0x8b, 0xeb, 0xfa, 0x1f, 0x0a, 0x28, 0xed, 0xd2, 0xd1, 0xc8, 0x62, 0xbb, 0xd4, 0x61, - 0xe4, 0x31, 0x83, 0xdf, 0xe4, 0x1e, 0x88, 0x40, 0xd5, 0xed, 0x57, 0xd2, 0xff, 0x04, 0x6c, 0xba, - 0x1e, 0x79, 0xd4, 0x9f, 0x09, 0x21, 0xff, 0x97, 0x10, 0x1b, 0x9c, 0x31, 0xf3, 0xa8, 0xc2, 0x6f, - 0xc1, 0xd5, 0x81, 0x50, 0xc5, 0x9f, 0x05, 0xee, 0x67, 0xff, 0x90, 0x98, 0x96, 0x13, 0x7c, 0xcf, - 0x8a, 0x7e, 0x81, 0xbb, 0xd1, 0x39, 0xe7, 0x68, 0xf1, 0xd3, 0xc5, 0xa2, 0xc3, 0x38, 0xd0, 0x9e, - 0xed, 0x3b, 0x82, 0x08, 0x1e, 0x03, 0x38, 0x13, 0x9e, 0x38, 0x38, 0x1c, 0x2c, 0xdd, 0xe9, 0x44, - 0x2d, 0x67, 0x82, 0x77, 0x1c, 0xbc, 0x58, 0xe8, 0x72, 0x26, 0x74, 0xc7, 0xc1, 0x59, 0xdd, 0x76, - 0xa2, 0x7b, 0x69, 0x8e, 0x6e, 0x6d, 0x61, 0xdd, 0x5a, 0x56, 0xb7, 0x16, 0xe9, 0xbe, 0xf1, 0x9d, - 0x14, 0xff, 0xeb, 0x48, 0x3e, 0xa8, 0xe1, 0x2d, 0xb0, 0x61, 0xf4, 0xfa, 0xc6, 0xfe, 0xdd, 0xfd, - 0x03, 0xa3, 0xaf, 0x1f, 0xf4, 0x7a, 0xdd, 0xde, 0x5e, 0xf9, 0x52, 0xf5, 0xdd, 0x93, 0xd3, 0x5a, - 0x25, 0xff, 0xf9, 0x3d, 0x76, 0x1c, 0xcb, 0x31, 0xb3, 0xa0, 0x76, 0x47, 0xeb, 0xec, 0x77, 0xda, - 0x65, 0xe9, 0x1c, 0x50, 0x9b, 0xd8, 0x84, 0x11, 0x5c, 0x55, 0xbe, 0xff, 0x79, 0xeb, 0xd2, 0x8d, - 0x9f, 0x64, 0xb0, 0x3e, 0x33, 0xee, 0xe1, 0x4d, 0xb0, 0xa1, 0x19, 0xf9, 0x1c, 0xaa, 0x27, 0xa7, - 0xb5, 0xb7, 0x66, 0x5f, 0x0d, 0x61, 0x06, 0x19, 0x88, 0xd1, 0xb9, 0xab, 0x71, 0x88, 0x34, 0x17, - 0x62, 0x10, 0x64, 0x73, 0x48, 0x13, 0x94, 0xb3, 0x90, 0x4e, 0xbb, 0x2c, 0x57, 0xdf, 0x3e, 0x39, - 0xad, 0x5d, 0x9b, 0x83, 0x20, 0x38, 0x1b, 0x23, 0x52, 0x59, 0x98, 0x1b, 0x23, 0xd4, 0x08, 0x6f, - 0x83, 0xcd, 0x04, 0x72, 0xd0, 0x8b, 0x12, 0x53, 0x02, 0x6b, 0x66, 0x40, 0x07, 0x8e, 0x1f, 0xa4, - 0x16, 0x58, 0xd3, 0xfa, 0xec, 0xe9, 0x74, 0x4b, 0xfa, 0x73, 0xba, 0x25, 0x9d, 0xfd, 0xbd, 0x25, - 0x7d, 0xfd, 0x52, 0x55, 0xcf, 0xfc, 0x8d, 0x3d, 0x5c, 0x16, 0xeb, 0x5b, 0xff, 0x06, 0x00, 0x00, - 0xff, 0xff, 0x1e, 0xa8, 0xb2, 0x01, 0xdf, 0x0e, 0x00, 0x00, + // 1306 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x58, 0xcd, 0x6f, 0x1b, 0x45, + 0x14, 0xf7, 0xae, 0xd7, 0x71, 0x32, 0x8e, 0x1b, 0x67, 0xfa, 0x21, 0x63, 0x5a, 0xaf, 0x65, 0x10, + 0x8a, 0x2a, 0xb0, 0x69, 0xda, 0xa2, 0x28, 0x12, 0x88, 0xfa, 0x83, 0x60, 0xc5, 0xb8, 0x74, 0xed, + 0x50, 0x09, 0x21, 0xac, 0x8d, 0x77, 0xba, 0x59, 0x65, 0xbd, 0xbb, 0xda, 0x1d, 0x37, 0xed, 0x01, + 0x09, 0x89, 0x0b, 0xca, 0x29, 0x47, 0x2e, 0x11, 0x95, 0x90, 0x7a, 0xe1, 0x2f, 0xe0, 0xce, 0x21, + 0xdc, 0x72, 0xe4, 0x64, 0x24, 0xe7, 0x82, 0x10, 0xff, 0x00, 0x3d, 0xa1, 0x99, 0xdd, 0xb5, 0x67, + 0xbd, 0x4e, 0x54, 0x9b, 0x84, 0x22, 0x4e, 0xf1, 0xcc, 0xec, 0xef, 0xbd, 0xf7, 0x7b, 0xef, 0xf7, + 0xe6, 0x23, 0xe0, 0x86, 0x65, 0x9b, 0xd8, 0x2c, 0x3e, 0x96, 0x6d, 0xdd, 0x54, 0xad, 0xed, 0x62, + 0x17, 0x61, 0x59, 0x91, 0xb1, 0x5c, 0xa0, 0xf3, 0x70, 0xc9, 0x5d, 0x28, 0xf8, 0xeb, 0x19, 0x51, + 0x35, 0x4d, 0x55, 0x47, 0x45, 0xba, 0xbc, 0xdd, 0x7b, 0x54, 0xc4, 0x5a, 0x17, 0x39, 0x58, 0xee, + 0x5a, 0x2e, 0x22, 0xf3, 0x8e, 0xaa, 0xe1, 0x9d, 0xde, 0x76, 0xa1, 0x63, 0x76, 0x8b, 0xaa, 0xa9, + 0x9a, 0xa3, 0x2f, 0xc9, 0xc8, 0xf5, 0x46, 0x7e, 0xb9, 0x9f, 0xe7, 0x9f, 0x73, 0x20, 0xd1, 0xc4, + 0xa6, 0x2d, 0xab, 0xa8, 0x61, 0x2a, 0x08, 0xee, 0x81, 0x25, 0xc7, 0x1d, 0xb6, 0x0d, 0x53, 0x41, + 0x6d, 0x4d, 0x49, 0x73, 0x39, 0x6e, 0x25, 0x56, 0xba, 0x3f, 0xe8, 0x8b, 0x49, 0xe6, 0xcb, 0x5a, + 0xe5, 0x45, 0x5f, 0x5c, 0xf7, 0x9c, 0x29, 0x72, 0xaf, 0xbb, 0x2b, 0xef, 0xca, 0x26, 0x75, 0xeb, + 0x06, 0xeb, 0xff, 0xb1, 0x76, 0xd5, 0x22, 0x7e, 0x6a, 0x21, 0xa7, 0x10, 0x40, 0x4b, 0x49, 0x87, + 0x19, 0x2a, 0x30, 0x0d, 0xe2, 0xb2, 0xa2, 0xd8, 0xc8, 0x71, 0xd2, 0x7c, 0x8e, 0x5b, 0x59, 0x90, + 0xfc, 0xe1, 0xba, 0xf0, 0xfb, 0x33, 0x91, 0xcb, 0xff, 0xc8, 0x83, 0xb8, 0x84, 0x2c, 0x5d, 0xeb, + 0xc8, 0xb0, 0x06, 0x16, 0xd9, 0x20, 0x69, 0x84, 0x89, 0xd5, 0xeb, 0x85, 0xb1, 0x64, 0xb1, 0x0e, + 0x4b, 0xf3, 0x47, 0x7d, 0x31, 0x72, 0xdc, 0x17, 0x39, 0x29, 0xc1, 0x38, 0x86, 0x5f, 0x82, 0x79, + 0x6c, 0x5a, 0x5a, 0x87, 0x10, 0xe5, 0x29, 0xd1, 0xf2, 0xa0, 0x2f, 0xc6, 0x5b, 0x64, 0x8e, 0x52, + 0xbc, 0x33, 0x15, 0x45, 0x0f, 0x27, 0xc5, 0xa9, 0xd1, 0x9a, 0x02, 0x4d, 0x90, 0xd4, 0x4d, 0xb5, + 0xed, 0x60, 0x1b, 0xc9, 0x5d, 0xe2, 0x24, 0x4a, 0x9d, 0x6c, 0x0e, 0xfa, 0x62, 0xa2, 0x6e, 0xaa, + 0x4d, 0x3a, 0x4f, 0x1d, 0xad, 0x4d, 0xe5, 0x88, 0xc1, 0x4a, 0x09, 0x7d, 0x38, 0x50, 0xbc, 0x6c, + 0x3d, 0x04, 0xcb, 0x1e, 0xf9, 0x0a, 0x72, 0x3a, 0xb6, 0x66, 0x61, 0xd3, 0x86, 0x10, 0x08, 0x96, + 0x8c, 0x77, 0x68, 0xba, 0x16, 0x24, 0xfa, 0x9b, 0xcc, 0xf5, 0x1c, 0xe4, 0x72, 0x17, 0x24, 0xfa, + 0x1b, 0x5e, 0x01, 0x31, 0x6c, 0x62, 0x59, 0xa7, 0xb1, 0x0a, 0x92, 0x3b, 0xf0, 0x0c, 0xff, 0xc1, + 0x81, 0xab, 0x4c, 0x5a, 0x19, 0xeb, 0xe7, 0x58, 0x94, 0x75, 0x30, 0xe7, 0x60, 0x19, 0xf7, 0x5c, + 0x29, 0x5c, 0x5a, 0xcd, 0x9f, 0x65, 0xa4, 0x49, 0xbf, 0x94, 0x3c, 0x04, 0xac, 0x80, 0x79, 0xcf, + 0x94, 0x93, 0x8e, 0xe6, 0xa2, 0x2b, 0x89, 0xd3, 0xd1, 0xa3, 0xe0, 0x4b, 0xc2, 0x11, 0x09, 0x62, + 0x88, 0xf4, 0xc8, 0x3e, 0xe7, 0xc0, 0xb2, 0xa7, 0x39, 0x86, 0xe8, 0x2b, 0x6b, 0x11, 0xbf, 0x7e, + 0xfc, 0xa8, 0x7e, 0x5e, 0xa0, 0x7f, 0xf2, 0xe0, 0xf2, 0x50, 0x11, 0x4c, 0xa8, 0x21, 0xf5, 0x71, + 0x17, 0xab, 0xbe, 0x0b, 0x6f, 0xa7, 0xb5, 0xa1, 0x32, 0xa2, 0x54, 0x19, 0xb9, 0x50, 0x6d, 0x87, + 0xa1, 0x85, 0x75, 0x61, 0xbb, 0xa5, 0x74, 0xd2, 0xc2, 0x29, 0xba, 0x08, 0xd5, 0xda, 0xd7, 0x85, + 0x8f, 0xf4, 0xd2, 0x7d, 0xc0, 0x83, 0x25, 0x1a, 0x1a, 0x93, 0x6a, 0x96, 0x39, 0x77, 0x01, 0xcc, + 0xef, 0x8c, 0xf5, 0x44, 0xb8, 0xb1, 0x28, 0x64, 0x8c, 0xb5, 0x0c, 0x12, 0x23, 0x01, 0xb8, 0x0d, + 0x11, 0x2b, 0x7d, 0x48, 0x48, 0xfd, 0xa3, 0x9a, 0x83, 0x61, 0xcd, 0xfd, 0x94, 0x7c, 0xcf, 0x03, + 0xf8, 0x89, 0x77, 0x76, 0x31, 0x59, 0x79, 0x03, 0x24, 0x65, 0xcb, 0xd2, 0x35, 0xa4, 0xb4, 0x35, + 0x43, 0x41, 0x4f, 0x68, 0x6a, 0x04, 0x69, 0xd1, 0x9b, 0xac, 0x91, 0x39, 0xf8, 0x00, 0x24, 0xd9, + 0x86, 0x22, 0x0c, 0x49, 0x7d, 0xde, 0x3a, 0xab, 0xeb, 0x43, 0x35, 0x5a, 0x64, 0x7a, 0xc5, 0x81, + 0x9b, 0x61, 0xde, 0x89, 0xd5, 0x37, 0x4f, 0x17, 0x4b, 0xc8, 0x1c, 0xc3, 0x10, 0x7e, 0x00, 0xe6, + 0x68, 0x15, 0x7c, 0xe1, 0xe4, 0x26, 0xa7, 0x3e, 0x64, 0xc3, 0x43, 0x79, 0x19, 0xfa, 0x29, 0x0a, + 0x6e, 0x30, 0x04, 0x26, 0x24, 0xeb, 0x11, 0x00, 0x1d, 0xbd, 0xe7, 0x60, 0x64, 0xfb, 0x22, 0x4a, + 0x96, 0x36, 0x06, 0x7d, 0x71, 0xa1, 0xec, 0xce, 0x52, 0x19, 0xbd, 0x37, 0x55, 0xd1, 0x86, 0x48, + 0x69, 0xc1, 0x33, 0x5d, 0x53, 0x42, 0x3b, 0x35, 0x4f, 0x77, 0xea, 0x97, 0x4c, 0x77, 0x70, 0xa7, + 0x6e, 0x4e, 0xca, 0xf3, 0xdb, 0xa7, 0xe7, 0x39, 0xcc, 0x9a, 0xe6, 0x2a, 0x12, 0xc8, 0xf7, 0x06, + 0x58, 0xec, 0xd8, 0x48, 0xc6, 0x48, 0x69, 0x93, 0xdb, 0x4d, 0x5a, 0xa0, 0xf1, 0x65, 0x0a, 0xee, + 0xd5, 0xa7, 0xe0, 0x5f, 0x68, 0x0a, 0x2d, 0xff, 0xea, 0xe3, 0x9e, 0x23, 0x07, 0xbf, 0x91, 0x73, + 0xc4, 0x43, 0x92, 0x35, 0x62, 0xa8, 0x67, 0x29, 0x23, 0x43, 0xb1, 0x69, 0x0c, 0x79, 0x48, 0xb2, + 0x96, 0xff, 0x2b, 0x06, 0x5e, 0x3f, 0x83, 0xc3, 0xab, 0x3b, 0x12, 0x42, 0x1b, 0x3c, 0xff, 0x2f, + 0x6e, 0xf0, 0xd1, 0x0b, 0xdd, 0xe0, 0x85, 0x29, 0x37, 0x78, 0x09, 0xc4, 0x1f, 0x23, 0xdb, 0xd1, + 0x4c, 0x83, 0xd6, 0x59, 0x28, 0xad, 0x4d, 0x1d, 0xcd, 0x67, 0x2e, 0x5e, 0xf2, 0x0d, 0xc1, 0x2f, + 0xc0, 0xa5, 0x1d, 0x4d, 0xdd, 0x69, 0xef, 0xc9, 0x18, 0xd9, 0x5d, 0xd9, 0xde, 0x4d, 0xcf, 0x51, + 0xd3, 0x77, 0x5f, 0xf4, 0xc5, 0x5b, 0x53, 0x99, 0xde, 0xa8, 0x37, 0x1b, 0x52, 0x92, 0x18, 0x7b, + 0xe8, 0xdb, 0x1a, 0x9e, 0xe7, 0x71, 0xe6, 0x3e, 0x36, 0xae, 0xfd, 0xf9, 0xf3, 0xd2, 0xfe, 0xc2, + 0xac, 0xda, 0xff, 0x3a, 0x0a, 0x32, 0xc3, 0x9c, 0xff, 0x87, 0x6e, 0x43, 0xff, 0x3b, 0xe9, 0x33, + 0x2f, 0x20, 0x21, 0xf0, 0x02, 0xca, 0xff, 0xcc, 0x81, 0xf9, 0xba, 0xa9, 0x56, 0x0d, 0x6c, 0x3f, + 0x85, 0x0f, 0x80, 0xa0, 0xea, 0x8e, 0xe1, 0x9e, 0xa4, 0xa5, 0xf7, 0x07, 0x7d, 0x51, 0x20, 0xe2, + 0x9a, 0x4d, 0x91, 0xd4, 0x14, 0x31, 0xa9, 0x13, 0x93, 0xfc, 0xc8, 0x64, 0x7d, 0x16, 0x93, 0x75, + 0x6a, 0x92, 0x98, 0x22, 0xda, 0x26, 0x7b, 0x24, 0x4d, 0xd4, 0xa2, 0x44, 0x7f, 0xe7, 0x7f, 0x11, + 0x40, 0xb2, 0x6c, 0x76, 0xbb, 0x1a, 0x2e, 0x9b, 0x06, 0x46, 0x4f, 0x30, 0xdb, 0xb3, 0xdc, 0x79, + 0xf5, 0x6c, 0x37, 0xd4, 0xb3, 0x2e, 0xad, 0x8f, 0x88, 0x1e, 0x3f, 0x66, 0x1b, 0xf0, 0x5c, 0x9a, + 0xf8, 0x2b, 0x70, 0xa5, 0x43, 0x39, 0x91, 0x4e, 0x23, 0xd9, 0x6c, 0x6f, 0x23, 0x55, 0x33, 0xdc, + 0xb7, 0x13, 0x55, 0x23, 0x2c, 0xfb, 0xeb, 0x04, 0x5f, 0x22, 0xab, 0xb3, 0x79, 0x86, 0x43, 0x47, + 0x1b, 0xba, 0x63, 0x50, 0x43, 0x70, 0x0f, 0xc0, 0x31, 0xf7, 0xc8, 0x50, 0xa8, 0x7e, 0x84, 0x52, + 0x6d, 0xd0, 0x17, 0x53, 0x01, 0xe7, 0x55, 0x43, 0x99, 0xcd, 0x75, 0x2a, 0xe0, 0xba, 0x6a, 0x28, + 0x41, 0xde, 0xfa, 0x88, 0x77, 0x6c, 0x02, 0xef, 0xfa, 0xcc, 0xbc, 0xeb, 0x41, 0xde, 0x75, 0x9f, + 0xf7, 0xcd, 0x6f, 0xb8, 0xe1, 0x0b, 0x77, 0xf4, 0x08, 0x84, 0xb7, 0xc1, 0x72, 0xb3, 0xd1, 0x6e, + 0xb6, 0xee, 0xb5, 0xb6, 0x9a, 0x6d, 0x69, 0xab, 0xd1, 0xa8, 0x35, 0x36, 0x52, 0x91, 0xcc, 0xf5, + 0xfd, 0xc3, 0x5c, 0x3a, 0xfc, 0x64, 0xec, 0x19, 0x86, 0x66, 0xa8, 0x41, 0x50, 0xa5, 0x5a, 0xaf, + 0xb6, 0xaa, 0x95, 0x14, 0x77, 0x0a, 0xa8, 0x82, 0x74, 0x84, 0x91, 0x92, 0x11, 0xbe, 0xfd, 0x21, + 0x1b, 0xb9, 0xf9, 0x1d, 0x0f, 0x96, 0xc6, 0xce, 0x23, 0x78, 0x0b, 0x2c, 0xd7, 0x9b, 0xe1, 0x18, + 0x32, 0xfb, 0x87, 0xb9, 0x6b, 0xe3, 0x67, 0x97, 0x17, 0x41, 0x00, 0xd2, 0xac, 0xde, 0xab, 0x13, + 0x08, 0x37, 0x11, 0xd2, 0x44, 0xb2, 0x4e, 0x20, 0x45, 0x90, 0x0a, 0x42, 0xaa, 0x95, 0x14, 0x9f, + 0x79, 0x6d, 0xff, 0x30, 0x77, 0x75, 0x02, 0x02, 0x29, 0x41, 0x1f, 0x3e, 0xcb, 0xe8, 0x44, 0x1f, + 0x1e, 0x47, 0x78, 0x17, 0x5c, 0x1e, 0x41, 0xb6, 0x1a, 0x7e, 0x60, 0x82, 0x9b, 0x9a, 0x31, 0xd0, + 0x96, 0xe1, 0xb8, 0xa1, 0x79, 0xa9, 0xd9, 0x03, 0x09, 0xe6, 0x41, 0x02, 0xdf, 0x05, 0x57, 0x5a, + 0xf7, 0x3f, 0xad, 0x95, 0xc3, 0x89, 0xb9, 0xb6, 0x7f, 0x98, 0x83, 0xec, 0xdb, 0xc5, 0x4b, 0xca, + 0x38, 0x62, 0x54, 0x99, 0x71, 0x44, 0xa0, 0x26, 0xa5, 0xcd, 0xa3, 0x41, 0x96, 0x3b, 0x1e, 0x64, + 0xb9, 0x83, 0x93, 0x6c, 0xe4, 0xd9, 0x49, 0x96, 0x3b, 0x3e, 0xc9, 0x46, 0x7e, 0x3d, 0xc9, 0x46, + 0x3e, 0x7f, 0x29, 0xe9, 0x05, 0xfe, 0x1d, 0xb7, 0x3d, 0x47, 0xc7, 0xb7, 0xff, 0x0e, 0x00, 0x00, + 0xff, 0xff, 0x2c, 0xae, 0xd3, 0x64, 0xa7, 0x13, 0x00, 0x00, +} + +func (this *StorageNode) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*StorageNode) + if !ok { + that2, ok := that.(StorageNode) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.StorageNodeID != that1.StorageNodeID { + return false + } + if this.Address != that1.Address { + return false + } + return true } +func (this *Replica) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + that1, ok := that.(*Replica) + if !ok { + that2, ok := that.(Replica) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !this.StorageNode.Equal(&that1.StorageNode) { + return false + } + if this.TopicID != that1.TopicID { + return false + } + if this.LogStreamID != that1.LogStreamID { + return false + } + return true +} func (this *StorageDescriptor) Equal(that interface{}) bool { if that == nil { return this == nil @@ -906,9 +1166,6 @@ func (this *StorageDescriptor) Equal(that interface{}) bool { if this.Total != that1.Total { return false } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { - return false - } return true } func (this *StorageNodeDescriptor) Equal(that interface{}) bool { @@ -930,10 +1187,7 @@ func (this *StorageNodeDescriptor) Equal(that interface{}) bool { } else if this == nil { return false } - if this.StorageNodeID != that1.StorageNodeID { - return false - } - if this.Address != that1.Address { + if !this.StorageNode.Equal(&that1.StorageNode) { return false } if this.Status != that1.Status { @@ -947,9 +1201,6 @@ func (this *StorageNodeDescriptor) Equal(that interface{}) bool { return false } } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { - return false - } return true } func (this *ReplicaDescriptor) Equal(that interface{}) bool { @@ -977,9 +1228,6 @@ func (this *ReplicaDescriptor) Equal(that interface{}) bool { if this.Path != that1.Path { return false } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { - return false - } return true } func (this *LogStreamDescriptor) Equal(that interface{}) bool { @@ -1004,6 +1252,9 @@ func (this *LogStreamDescriptor) Equal(that interface{}) bool { if this.LogStreamID != that1.LogStreamID { return false } + if this.TopicID != that1.TopicID { + return false + } if this.Status != that1.Status { return false } @@ -1015,19 +1266,16 @@ func (this *LogStreamDescriptor) Equal(that interface{}) bool { return false } } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { - return false - } return true } -func (this *MetadataDescriptor) Equal(that interface{}) bool { +func (this *TopicDescriptor) Equal(that interface{}) bool { if that == nil { return this == nil } - that1, ok := that.(*MetadataDescriptor) + that1, ok := that.(*TopicDescriptor) if !ok { - that2, ok := that.(MetadataDescriptor) + that2, ok := that.(TopicDescriptor) if ok { that1 = &that2 } else { @@ -1039,35 +1287,153 @@ func (this *MetadataDescriptor) Equal(that interface{}) bool { } else if this == nil { return false } - if this.AppliedIndex != that1.AppliedIndex { + if this.TopicID != that1.TopicID { return false } - if len(this.StorageNodes) != len(that1.StorageNodes) { + if this.Status != that1.Status { return false } - for i := range this.StorageNodes { - if !this.StorageNodes[i].Equal(that1.StorageNodes[i]) { - return false - } - } if len(this.LogStreams) != len(that1.LogStreams) { return false } for i := range this.LogStreams { - if !this.LogStreams[i].Equal(that1.LogStreams[i]) { + if this.LogStreams[i] != that1.LogStreams[i] { return false } } - if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { - return false - } return true } -func (m *StorageDescriptor) Marshal() (dAtA []byte, err error) { - size := m.ProtoSize() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { +func (this *MetadataDescriptor) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*MetadataDescriptor) + if !ok { + that2, ok := that.(MetadataDescriptor) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.AppliedIndex != that1.AppliedIndex { + return false + } + if len(this.StorageNodes) != len(that1.StorageNodes) { + return false + } + for i := range this.StorageNodes { + if !this.StorageNodes[i].Equal(that1.StorageNodes[i]) { + return false + } + } + if len(this.LogStreams) != len(that1.LogStreams) { + return false + } + for i := range this.LogStreams { + if !this.LogStreams[i].Equal(that1.LogStreams[i]) { + return false + } + } + if len(this.Topics) != len(that1.Topics) { + return false + } + for i := range this.Topics { + if !this.Topics[i].Equal(that1.Topics[i]) { + return false + } + } + return true +} +func (m *StorageNode) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *StorageNode) MarshalTo(dAtA []byte) (int, error) { + size := m.ProtoSize() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *StorageNode) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Address) > 0 { + i -= len(m.Address) + copy(dAtA[i:], m.Address) + i = encodeVarintMetadata(dAtA, i, uint64(len(m.Address))) + i-- + dAtA[i] = 0x12 + } + if m.StorageNodeID != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.StorageNodeID)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *Replica) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *Replica) MarshalTo(dAtA []byte) (int, error) { + size := m.ProtoSize() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Replica) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.LogStreamID != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.LogStreamID)) + i-- + dAtA[i] = 0x18 + } + if m.TopicID != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.TopicID)) + i-- + dAtA[i] = 0x10 + } + { + size, err := m.StorageNode.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintMetadata(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *StorageDescriptor) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { return nil, err } return dAtA[:n], nil @@ -1083,10 +1449,6 @@ func (m *StorageDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Total != 0 { i = encodeVarintMetadata(dAtA, i, uint64(m.Total)) i-- @@ -1127,10 +1489,6 @@ func (m *StorageNodeDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Storages) > 0 { for iNdEx := len(m.Storages) - 1; iNdEx >= 0; iNdEx-- { { @@ -1142,26 +1500,24 @@ func (m *StorageNodeDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintMetadata(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x1a } } if m.Status != 0 { i = encodeVarintMetadata(dAtA, i, uint64(m.Status)) i-- - dAtA[i] = 0x18 - } - if len(m.Address) > 0 { - i -= len(m.Address) - copy(dAtA[i:], m.Address) - i = encodeVarintMetadata(dAtA, i, uint64(len(m.Address))) - i-- - dAtA[i] = 0x12 + dAtA[i] = 0x10 } - if m.StorageNodeID != 0 { - i = encodeVarintMetadata(dAtA, i, uint64(m.StorageNodeID)) - i-- - dAtA[i] = 0x8 + { + size, err := m.StorageNode.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintMetadata(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } @@ -1185,10 +1541,6 @@ func (m *ReplicaDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Path) > 0 { i -= len(m.Path) copy(dAtA[i:], m.Path) @@ -1224,10 +1576,6 @@ func (m *LogStreamDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Replicas) > 0 { for iNdEx := len(m.Replicas) - 1; iNdEx >= 0; iNdEx-- { { @@ -1239,12 +1587,17 @@ func (m *LogStreamDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintMetadata(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1a + dAtA[i] = 0x22 } } if m.Status != 0 { i = encodeVarintMetadata(dAtA, i, uint64(m.Status)) i-- + dAtA[i] = 0x18 + } + if m.TopicID != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x10 } if m.LogStreamID != 0 { @@ -1255,6 +1608,58 @@ func (m *LogStreamDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *TopicDescriptor) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *TopicDescriptor) MarshalTo(dAtA []byte) (int, error) { + size := m.ProtoSize() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *TopicDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.LogStreams) > 0 { + dAtA4 := make([]byte, len(m.LogStreams)*10) + var j3 int + for _, num1 := range m.LogStreams { + num := uint64(num1) + for num >= 1<<7 { + dAtA4[j3] = uint8(uint64(num)&0x7f | 0x80) + num >>= 7 + j3++ + } + dAtA4[j3] = uint8(num) + j3++ + } + i -= j3 + copy(dAtA[i:], dAtA4[:j3]) + i = encodeVarintMetadata(dAtA, i, uint64(j3)) + i-- + dAtA[i] = 0x1a + } + if m.Status != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.Status)) + i-- + dAtA[i] = 0x10 + } + if m.TopicID != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.TopicID)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + func (m *MetadataDescriptor) Marshal() (dAtA []byte, err error) { size := m.ProtoSize() dAtA = make([]byte, size) @@ -1275,9 +1680,19 @@ func (m *MetadataDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) + if len(m.Topics) > 0 { + for iNdEx := len(m.Topics) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Topics[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintMetadata(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + } } if len(m.LogStreams) > 0 { for iNdEx := len(m.LogStreams) - 1; iNdEx >= 0; iNdEx-- { @@ -1335,24 +1750,20 @@ func (m *StorageNodeMetadataDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - n1, err1 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.UpdatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.UpdatedTime):]) - if err1 != nil { - return 0, err1 + n5, err5 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.UpdatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.UpdatedTime):]) + if err5 != nil { + return 0, err5 } - i -= n1 - i = encodeVarintMetadata(dAtA, i, uint64(n1)) + i -= n5 + i = encodeVarintMetadata(dAtA, i, uint64(n5)) i-- dAtA[i] = 0x2a - n2, err2 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedTime):]) - if err2 != nil { - return 0, err2 + n6, err6 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedTime):]) + if err6 != nil { + return 0, err6 } - i -= n2 - i = encodeVarintMetadata(dAtA, i, uint64(n2)) + i -= n6 + i = encodeVarintMetadata(dAtA, i, uint64(n6)) i-- dAtA[i] = 0x22 if len(m.LogStreams) > 0 { @@ -1409,41 +1820,47 @@ func (m *LogStreamMetadataDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, er _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - n4, err4 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.UpdatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.UpdatedTime):]) - if err4 != nil { - return 0, err4 + n8, err8 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.UpdatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.UpdatedTime):]) + if err8 != nil { + return 0, err8 } - i -= n4 - i = encodeVarintMetadata(dAtA, i, uint64(n4)) + i -= n8 + i = encodeVarintMetadata(dAtA, i, uint64(n8)) i-- - dAtA[i] = 0x3a - n5, err5 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedTime):]) - if err5 != nil { - return 0, err5 + dAtA[i] = 0x4a + n9, err9 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreatedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedTime):]) + if err9 != nil { + return 0, err9 } - i -= n5 - i = encodeVarintMetadata(dAtA, i, uint64(n5)) + i -= n9 + i = encodeVarintMetadata(dAtA, i, uint64(n9)) i-- - dAtA[i] = 0x32 + dAtA[i] = 0x42 if len(m.Path) > 0 { i -= len(m.Path) copy(dAtA[i:], m.Path) i = encodeVarintMetadata(dAtA, i, uint64(len(m.Path))) i-- - dAtA[i] = 0x2a + dAtA[i] = 0x3a } if m.HighWatermark != 0 { i = encodeVarintMetadata(dAtA, i, uint64(m.HighWatermark)) i-- - dAtA[i] = 0x20 + dAtA[i] = 0x30 + } + if m.Version != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.Version)) + i-- + dAtA[i] = 0x28 } if m.Status != 0 { i = encodeVarintMetadata(dAtA, i, uint64(m.Status)) i-- + dAtA[i] = 0x20 + } + if m.TopicID != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x18 } if m.LogStreamID != 0 { @@ -1479,16 +1896,17 @@ func (m *LogStreamReplicaDescriptor) MarshalToSizedBuffer(dAtA []byte) (int, err _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Address) > 0 { i -= len(m.Address) copy(dAtA[i:], m.Address) i = encodeVarintMetadata(dAtA, i, uint64(len(m.Address))) i-- - dAtA[i] = 0x1a + dAtA[i] = 0x22 + } + if m.TopicID != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.TopicID)) + i-- + dAtA[i] = 0x18 } if m.LogStreamID != 0 { i = encodeVarintMetadata(dAtA, i, uint64(m.LogStreamID)) @@ -1523,10 +1941,6 @@ func (m *LogEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Data) > 0 { i -= len(m.Data) copy(dAtA[i:], m.Data) @@ -1567,10 +1981,6 @@ func (m *CommitContext) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.CommittedLLSNBegin != 0 { i = encodeVarintMetadata(dAtA, i, uint64(m.CommittedLLSNBegin)) i-- @@ -1586,13 +1996,13 @@ func (m *CommitContext) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x18 } - if m.PrevHighWatermark != 0 { - i = encodeVarintMetadata(dAtA, i, uint64(m.PrevHighWatermark)) + if m.HighWatermark != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.HighWatermark)) i-- dAtA[i] = 0x10 } - if m.HighWatermark != 0 { - i = encodeVarintMetadata(dAtA, i, uint64(m.HighWatermark)) + if m.Version != 0 { + i = encodeVarintMetadata(dAtA, i, uint64(m.Version)) i-- dAtA[i] = 0x8 } @@ -1610,6 +2020,39 @@ func encodeVarintMetadata(dAtA []byte, offset int, v uint64) int { dAtA[offset] = uint8(v) return base } +func (m *StorageNode) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.StorageNodeID != 0 { + n += 1 + sovMetadata(uint64(m.StorageNodeID)) + } + l = len(m.Address) + if l > 0 { + n += 1 + l + sovMetadata(uint64(l)) + } + return n +} + +func (m *Replica) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.StorageNode.ProtoSize() + n += 1 + l + sovMetadata(uint64(l)) + if m.TopicID != 0 { + n += 1 + sovMetadata(uint64(m.TopicID)) + } + if m.LogStreamID != 0 { + n += 1 + sovMetadata(uint64(m.LogStreamID)) + } + return n +} + func (m *StorageDescriptor) ProtoSize() (n int) { if m == nil { return 0 @@ -1626,9 +2069,6 @@ func (m *StorageDescriptor) ProtoSize() (n int) { if m.Total != 0 { n += 1 + sovMetadata(uint64(m.Total)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1638,13 +2078,8 @@ func (m *StorageNodeDescriptor) ProtoSize() (n int) { } var l int _ = l - if m.StorageNodeID != 0 { - n += 1 + sovMetadata(uint64(m.StorageNodeID)) - } - l = len(m.Address) - if l > 0 { - n += 1 + l + sovMetadata(uint64(l)) - } + l = m.StorageNode.ProtoSize() + n += 1 + l + sovMetadata(uint64(l)) if m.Status != 0 { n += 1 + sovMetadata(uint64(m.Status)) } @@ -1654,9 +2089,6 @@ func (m *StorageNodeDescriptor) ProtoSize() (n int) { n += 1 + l + sovMetadata(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1673,9 +2105,6 @@ func (m *ReplicaDescriptor) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovMetadata(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1688,6 +2117,9 @@ func (m *LogStreamDescriptor) ProtoSize() (n int) { if m.LogStreamID != 0 { n += 1 + sovMetadata(uint64(m.LogStreamID)) } + if m.TopicID != 0 { + n += 1 + sovMetadata(uint64(m.TopicID)) + } if m.Status != 0 { n += 1 + sovMetadata(uint64(m.Status)) } @@ -1697,8 +2129,27 @@ func (m *LogStreamDescriptor) ProtoSize() (n int) { n += 1 + l + sovMetadata(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + return n +} + +func (m *TopicDescriptor) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.TopicID != 0 { + n += 1 + sovMetadata(uint64(m.TopicID)) + } + if m.Status != 0 { + n += 1 + sovMetadata(uint64(m.Status)) + } + if len(m.LogStreams) > 0 { + l = 0 + for _, e := range m.LogStreams { + l += sovMetadata(uint64(e)) + } + n += 1 + sovMetadata(uint64(l)) + l } return n } @@ -1724,8 +2175,11 @@ func (m *MetadataDescriptor) ProtoSize() (n int) { n += 1 + l + sovMetadata(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if len(m.Topics) > 0 { + for _, e := range m.Topics { + l = e.ProtoSize() + n += 1 + l + sovMetadata(uint64(l)) + } } return n } @@ -1753,9 +2207,6 @@ func (m *StorageNodeMetadataDescriptor) ProtoSize() (n int) { n += 1 + l + sovMetadata(uint64(l)) l = github_com_gogo_protobuf_types.SizeOfStdTime(m.UpdatedTime) n += 1 + l + sovMetadata(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1771,9 +2222,15 @@ func (m *LogStreamMetadataDescriptor) ProtoSize() (n int) { if m.LogStreamID != 0 { n += 1 + sovMetadata(uint64(m.LogStreamID)) } + if m.TopicID != 0 { + n += 1 + sovMetadata(uint64(m.TopicID)) + } if m.Status != 0 { n += 1 + sovMetadata(uint64(m.Status)) } + if m.Version != 0 { + n += 1 + sovMetadata(uint64(m.Version)) + } if m.HighWatermark != 0 { n += 1 + sovMetadata(uint64(m.HighWatermark)) } @@ -1785,9 +2242,6 @@ func (m *LogStreamMetadataDescriptor) ProtoSize() (n int) { n += 1 + l + sovMetadata(uint64(l)) l = github_com_gogo_protobuf_types.SizeOfStdTime(m.UpdatedTime) n += 1 + l + sovMetadata(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1803,13 +2257,13 @@ func (m *LogStreamReplicaDescriptor) ProtoSize() (n int) { if m.LogStreamID != 0 { n += 1 + sovMetadata(uint64(m.LogStreamID)) } + if m.TopicID != 0 { + n += 1 + sovMetadata(uint64(m.TopicID)) + } l = len(m.Address) if l > 0 { n += 1 + l + sovMetadata(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1829,9 +2283,6 @@ func (m *LogEntry) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovMetadata(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -1841,12 +2292,12 @@ func (m *CommitContext) ProtoSize() (n int) { } var l int _ = l + if m.Version != 0 { + n += 1 + sovMetadata(uint64(m.Version)) + } if m.HighWatermark != 0 { n += 1 + sovMetadata(uint64(m.HighWatermark)) } - if m.PrevHighWatermark != 0 { - n += 1 + sovMetadata(uint64(m.PrevHighWatermark)) - } if m.CommittedGLSNBegin != 0 { n += 1 + sovMetadata(uint64(m.CommittedGLSNBegin)) } @@ -1856,17 +2307,236 @@ func (m *CommitContext) ProtoSize() (n int) { if m.CommittedLLSNBegin != 0 { n += 1 + sovMetadata(uint64(m.CommittedLLSNBegin)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + return n +} + +func sovMetadata(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} +func sozMetadata(x uint64) (n int) { + return sovMetadata(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} +func (m *StorageNode) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: StorageNode: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: StorageNode: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StorageNodeID", wireType) + } + m.StorageNodeID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.StorageNodeID |= github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthMetadata + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthMetadata + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Address = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipMetadata(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthMetadata + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *Replica) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Replica: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: Replica: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field StorageNode", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthMetadata + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthMetadata + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.StorageNode.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) + } + m.LogStreamID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.LogStreamID |= github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipMetadata(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthMetadata + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } - return n -} -func sovMetadata(x uint64) (n int) { - return (math_bits.Len64(x|1) + 6) / 7 -} -func sozMetadata(x uint64) (n int) { - return sovMetadata(uint64((x << 1) ^ uint64((int64(x) >> 63)))) + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil } func (m *StorageDescriptor) Unmarshal(dAtA []byte) error { l := len(dAtA) @@ -1979,7 +2649,6 @@ func (m *StorageDescriptor) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2019,29 +2688,10 @@ func (m *StorageNodeDescriptor) Unmarshal(dAtA []byte) error { } switch fieldNum { case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StorageNodeID", wireType) - } - m.StorageNodeID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowMetadata - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.StorageNodeID |= github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StorageNode", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowMetadata @@ -2051,25 +2701,26 @@ func (m *StorageNodeDescriptor) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthMetadata } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthMetadata } if postIndex > l { return io.ErrUnexpectedEOF } - m.Address = string(dAtA[iNdEx:postIndex]) + if err := m.StorageNode.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 3: + case 2: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } @@ -2088,7 +2739,7 @@ func (m *StorageNodeDescriptor) Unmarshal(dAtA []byte) error { break } } - case 4: + case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Storages", wireType) } @@ -2134,7 +2785,6 @@ func (m *StorageNodeDescriptor) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2236,7 +2886,6 @@ func (m *ReplicaDescriptor) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2295,6 +2944,25 @@ func (m *LogStreamDescriptor) Unmarshal(dAtA []byte) error { } } case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } @@ -2313,7 +2981,7 @@ func (m *LogStreamDescriptor) Unmarshal(dAtA []byte) error { break } } - case 3: + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Replicas", wireType) } @@ -2359,7 +3027,170 @@ func (m *LogStreamDescriptor) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *TopicDescriptor) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: TopicDescriptor: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: TopicDescriptor: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + m.Status = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Status |= TopicStatus(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: + if wireType == 0 { + var v github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.LogStreams = append(m.LogStreams, v) + } else if wireType == 2 { + var packedLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + packedLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if packedLen < 0 { + return ErrInvalidLengthMetadata + } + postIndex := iNdEx + packedLen + if postIndex < 0 { + return ErrInvalidLengthMetadata + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + var elementCount int + var count int + for _, integer := range dAtA[iNdEx:postIndex] { + if integer < 128 { + count++ + } + } + elementCount = count + if elementCount != 0 && len(m.LogStreams) == 0 { + m.LogStreams = make([]github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID, 0, elementCount) + } + for iNdEx < postIndex { + var v github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.LogStreams = append(m.LogStreams, v) + } + } else { + return fmt.Errorf("proto: wrong wireType = %d for field LogStreams", wireType) + } + default: + iNdEx = preIndex + skippy, err := skipMetadata(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthMetadata + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } iNdEx += skippy } } @@ -2485,6 +3316,40 @@ func (m *MetadataDescriptor) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Topics", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthMetadata + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthMetadata + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Topics = append(m.Topics, &TopicDescriptor{}) + if err := m.Topics[len(m.Topics)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipMetadata(dAtA[iNdEx:]) @@ -2497,7 +3362,6 @@ func (m *MetadataDescriptor) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2703,7 +3567,6 @@ func (m *StorageNodeMetadataDescriptor) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -2781,6 +3644,25 @@ func (m *LogStreamMetadataDescriptor) Unmarshal(dAtA []byte) error { } } case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } @@ -2799,7 +3681,26 @@ func (m *LogStreamMetadataDescriptor) Unmarshal(dAtA []byte) error { break } } - case 4: + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) + } + m.Version = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Version |= github_daumkakao_com_varlog_varlog_pkg_types.Version(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 6: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field HighWatermark", wireType) } @@ -2818,7 +3719,7 @@ func (m *LogStreamMetadataDescriptor) Unmarshal(dAtA []byte) error { break } } - case 5: + case 7: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } @@ -2850,7 +3751,7 @@ func (m *LogStreamMetadataDescriptor) Unmarshal(dAtA []byte) error { } m.Path = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 8: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field CreatedTime", wireType) } @@ -2883,7 +3784,7 @@ func (m *LogStreamMetadataDescriptor) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 7: + case 9: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field UpdatedTime", wireType) } @@ -2928,7 +3829,6 @@ func (m *LogStreamMetadataDescriptor) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3006,6 +3906,25 @@ func (m *LogStreamReplicaDescriptor) Unmarshal(dAtA []byte) error { } } case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType) } @@ -3049,7 +3968,6 @@ func (m *LogStreamReplicaDescriptor) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3172,7 +4090,6 @@ func (m *LogEntry) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3213,9 +4130,9 @@ func (m *CommitContext) Unmarshal(dAtA []byte) error { switch fieldNum { case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field HighWatermark", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) } - m.HighWatermark = 0 + m.Version = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowMetadata @@ -3225,16 +4142,16 @@ func (m *CommitContext) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.HighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift + m.Version |= github_daumkakao_com_varlog_varlog_pkg_types.Version(b&0x7F) << shift if b < 0x80 { break } } case 2: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field PrevHighWatermark", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field HighWatermark", wireType) } - m.PrevHighWatermark = 0 + m.HighWatermark = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowMetadata @@ -3244,7 +4161,7 @@ func (m *CommitContext) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.PrevHighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift + m.HighWatermark |= github_daumkakao_com_varlog_varlog_pkg_types.GLSN(b&0x7F) << shift if b < 0x80 { break } @@ -3318,7 +4235,6 @@ func (m *CommitContext) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/varlogpb/metadata.proto b/proto/varlogpb/metadata.proto index 38dfc5f03..f11facb9a 100644 --- a/proto/varlogpb/metadata.proto +++ b/proto/varlogpb/metadata.proto @@ -10,6 +10,39 @@ option go_package = "github.com/kakao/varlog/proto/varlogpb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; + +// StorageNode is a structure to represent identifier and address of storage +// node. +message StorageNode { + option (gogoproto.equal) = true; + + int32 storage_node_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.StorageNodeID", + (gogoproto.customname) = "StorageNodeID" + ]; + string address = 2; +} + +message Replica { + option (gogoproto.equal) = true; + + StorageNode storage_node = 1 + [(gogoproto.nullable) = false, (gogoproto.embed) = true]; + int32 topic_id = 2 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 3 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.LogStreamID", + (gogoproto.customname) = "LogStreamID" + ]; +} enum StorageNodeStatus { option (gogoproto.goproto_enum_prefix) = false; @@ -35,6 +68,15 @@ enum LogStreamStatus { [(gogoproto.enumvalue_customname) = "LogStreamStatusUnsealing"]; } +enum TopicStatus { + option (gogoproto.goproto_enum_prefix) = false; + + TOPIC_STATUS_RUNNING = 0 + [(gogoproto.enumvalue_customname) = "TopicStatusRunning"]; + TOPIC_STATUS_DELETED = 1 + [(gogoproto.enumvalue_customname) = "TopicStatusDeleted"]; +} + message StorageDescriptor { option (gogoproto.equal) = true; @@ -46,20 +88,16 @@ message StorageDescriptor { message StorageNodeDescriptor { option (gogoproto.equal) = true; - uint32 storage_node_id = 1 [ - (gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.StorageNodeID", - (gogoproto.customname) = "StorageNodeID" - ]; - string address = 2; - StorageNodeStatus status = 3; - repeated StorageDescriptor storages = 4 [(gogoproto.nullable) = true]; + StorageNode storage_node = 1 + [(gogoproto.nullable) = false, (gogoproto.embed) = true]; + StorageNodeStatus status = 2; + repeated StorageDescriptor storages = 3 [(gogoproto.nullable) = true]; } message ReplicaDescriptor { option (gogoproto.equal) = true; - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" @@ -70,14 +108,37 @@ message ReplicaDescriptor { message LogStreamDescriptor { option (gogoproto.equal) = true; - uint32 log_stream_id = 1 [ + int32 log_stream_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - LogStreamStatus status = 2; + int32 topic_id = 2 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + LogStreamStatus status = 3; + + repeated ReplicaDescriptor replicas = 4 [(gogoproto.nullable) = true]; +} + +message TopicDescriptor { + option (gogoproto.equal) = true; + + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; - repeated ReplicaDescriptor replicas = 3 [(gogoproto.nullable) = true]; + TopicStatus status = 2; + + repeated int32 log_streams = 3 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.LogStreamID", + (gogoproto.nullable) = true + ]; } message MetadataDescriptor { @@ -87,6 +148,7 @@ message MetadataDescriptor { repeated StorageNodeDescriptor storage_nodes = 2 [(gogoproto.nullable) = true]; repeated LogStreamDescriptor log_streams = 3 [(gogoproto.nullable) = true]; + repeated TopicDescriptor topics = 4 [(gogoproto.nullable) = true]; } // StorageNodeMetadataDescriptor represents metadata of stroage node. @@ -109,41 +171,52 @@ message StorageNodeMetadataDescriptor { } message LogStreamMetadataDescriptor { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" ]; - uint32 log_stream_id = 2 [ + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - LogStreamStatus status = 3; - uint64 high_watermark = 4 [ + int32 topic_id = 3 [ (gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN", - (gogoproto.customname) = "HighWatermark" + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" ]; - string path = 5; - google.protobuf.Timestamp created_time = 6 + LogStreamStatus status = 4; + uint64 version = 5 + [(gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.Version"]; + uint64 high_watermark = 6 + [(gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.GLSN"]; + string path = 7; + google.protobuf.Timestamp created_time = 8 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false]; - google.protobuf.Timestamp updated_time = 7 + google.protobuf.Timestamp updated_time = 9 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false]; } message LogStreamReplicaDescriptor { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" ]; - uint32 log_stream_id = 2 [ + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - string address = 3; + int32 topic_id = 3 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + string address = 4; } message LogEntry { @@ -161,12 +234,14 @@ message LogEntry { } message CommitContext { - uint64 high_watermark = 1 + uint64 version = 1 [(gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN"]; - uint64 prev_high_watermark = 2 - [(gogoproto.casttype) = - "github.com/kakao/varlog/pkg/types.GLSN"]; + "github.com/kakao/varlog/pkg/types.Version"]; + uint64 high_watermark = 2 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.GLSN", + (gogoproto.customname) = "HighWatermark" + ]; uint64 committed_glsn_begin = 3 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.GLSN", diff --git a/proto/snpb/replica.go b/proto/varlogpb/replica.go similarity index 87% rename from proto/snpb/replica.go rename to proto/varlogpb/replica.go index cf4ee9d46..740cb87cc 100644 --- a/proto/snpb/replica.go +++ b/proto/varlogpb/replica.go @@ -1,4 +1,4 @@ -package snpb +package varlogpb import ( "github.com/pkg/errors" @@ -21,7 +21,7 @@ func EqualReplicas(xs []Replica, ys []Replica) bool { return false } */ - if x.StorageNodeID != y.StorageNodeID || x.LogStreamID != y.LogStreamID { + if x.StorageNode.StorageNodeID != y.StorageNode.StorageNodeID || x.LogStreamID != y.LogStreamID { return false } } @@ -39,7 +39,7 @@ func ValidReplicas(replicas []Replica) error { snidSet := set.New(len(replicas)) for _, replica := range replicas { lsidSet.Add(replica.LogStreamID) - snidSet.Add(replica.StorageNodeID) + snidSet.Add(replica.StorageNode.StorageNodeID) } if lsidSet.Size() != 1 { return errors.Wrap(verrors.ErrInvalid, "LogStreamID mismatch") diff --git a/proto/vmspb/vms.pb.go b/proto/vmspb/vms.pb.go index 1b16818b4..2a13d33c2 100644 --- a/proto/vmspb/vms.pb.go +++ b/proto/vmspb/vms.pb.go @@ -35,10 +35,7 @@ const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package type AddStorageNodeRequest struct { // address is IP of a node to be added to the cluster. - Address string `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Address string `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"` } func (m *AddStorageNodeRequest) Reset() { *m = AddStorageNodeRequest{} } @@ -82,10 +79,7 @@ func (m *AddStorageNodeRequest) GetAddress() string { } type AddStorageNodeResponse struct { - StorageNode *varlogpb.StorageNodeMetadataDescriptor `protobuf:"bytes,1,opt,name=storage_node,json=storageNode,proto3" json:"storage_node,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNode *varlogpb.StorageNodeMetadataDescriptor `protobuf:"bytes,1,opt,name=storage_node,json=storageNode,proto3" json:"storage_node,omitempty"` } func (m *AddStorageNodeResponse) Reset() { *m = AddStorageNodeResponse{} } @@ -129,10 +123,7 @@ func (m *AddStorageNodeResponse) GetStorageNode() *varlogpb.StorageNodeMetadataD } type UnregisterStorageNodeRequest struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` } func (m *UnregisterStorageNodeRequest) Reset() { *m = UnregisterStorageNodeRequest{} } @@ -176,9 +167,6 @@ func (m *UnregisterStorageNodeRequest) GetStorageNodeID() github_daumkakao_com_v } type UnregisterStorageNodeResponse struct { - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` } func (m *UnregisterStorageNodeResponse) Reset() { *m = UnregisterStorageNodeResponse{} } @@ -214,19 +202,185 @@ func (m *UnregisterStorageNodeResponse) XXX_DiscardUnknown() { var xxx_messageInfo_UnregisterStorageNodeResponse proto.InternalMessageInfo +type AddTopicRequest struct { + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` +} + +func (m *AddTopicRequest) Reset() { *m = AddTopicRequest{} } +func (m *AddTopicRequest) String() string { return proto.CompactTextString(m) } +func (*AddTopicRequest) ProtoMessage() {} +func (*AddTopicRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_682aff4a3f93d15c, []int{4} +} +func (m *AddTopicRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *AddTopicRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AddTopicRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *AddTopicRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_AddTopicRequest.Merge(m, src) +} +func (m *AddTopicRequest) XXX_Size() int { + return m.ProtoSize() +} +func (m *AddTopicRequest) XXX_DiscardUnknown() { + xxx_messageInfo_AddTopicRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_AddTopicRequest proto.InternalMessageInfo + +func (m *AddTopicRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + +type AddTopicResponse struct { + Topic *varlogpb.TopicDescriptor `protobuf:"bytes,1,opt,name=topic,proto3" json:"topic,omitempty"` +} + +func (m *AddTopicResponse) Reset() { *m = AddTopicResponse{} } +func (m *AddTopicResponse) String() string { return proto.CompactTextString(m) } +func (*AddTopicResponse) ProtoMessage() {} +func (*AddTopicResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_682aff4a3f93d15c, []int{5} +} +func (m *AddTopicResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *AddTopicResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AddTopicResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *AddTopicResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_AddTopicResponse.Merge(m, src) +} +func (m *AddTopicResponse) XXX_Size() int { + return m.ProtoSize() +} +func (m *AddTopicResponse) XXX_DiscardUnknown() { + xxx_messageInfo_AddTopicResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_AddTopicResponse proto.InternalMessageInfo + +func (m *AddTopicResponse) GetTopic() *varlogpb.TopicDescriptor { + if m != nil { + return m.Topic + } + return nil +} + +type UnregisterTopicRequest struct { + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` +} + +func (m *UnregisterTopicRequest) Reset() { *m = UnregisterTopicRequest{} } +func (m *UnregisterTopicRequest) String() string { return proto.CompactTextString(m) } +func (*UnregisterTopicRequest) ProtoMessage() {} +func (*UnregisterTopicRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_682aff4a3f93d15c, []int{6} +} +func (m *UnregisterTopicRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *UnregisterTopicRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_UnregisterTopicRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *UnregisterTopicRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_UnregisterTopicRequest.Merge(m, src) +} +func (m *UnregisterTopicRequest) XXX_Size() int { + return m.ProtoSize() +} +func (m *UnregisterTopicRequest) XXX_DiscardUnknown() { + xxx_messageInfo_UnregisterTopicRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_UnregisterTopicRequest proto.InternalMessageInfo + +func (m *UnregisterTopicRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + +type UnregisterTopicResponse struct { +} + +func (m *UnregisterTopicResponse) Reset() { *m = UnregisterTopicResponse{} } +func (m *UnregisterTopicResponse) String() string { return proto.CompactTextString(m) } +func (*UnregisterTopicResponse) ProtoMessage() {} +func (*UnregisterTopicResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_682aff4a3f93d15c, []int{7} +} +func (m *UnregisterTopicResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *UnregisterTopicResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_UnregisterTopicResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *UnregisterTopicResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_UnregisterTopicResponse.Merge(m, src) +} +func (m *UnregisterTopicResponse) XXX_Size() int { + return m.ProtoSize() +} +func (m *UnregisterTopicResponse) XXX_DiscardUnknown() { + xxx_messageInfo_UnregisterTopicResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_UnregisterTopicResponse proto.InternalMessageInfo + type AddLogStreamRequest struct { + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` // TODO: nullable = false - Replicas []*varlogpb.ReplicaDescriptor `protobuf:"bytes,1,rep,name=replicas,proto3" json:"replicas,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Replicas []*varlogpb.ReplicaDescriptor `protobuf:"bytes,2,rep,name=replicas,proto3" json:"replicas,omitempty"` } func (m *AddLogStreamRequest) Reset() { *m = AddLogStreamRequest{} } func (m *AddLogStreamRequest) String() string { return proto.CompactTextString(m) } func (*AddLogStreamRequest) ProtoMessage() {} func (*AddLogStreamRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{4} + return fileDescriptor_682aff4a3f93d15c, []int{8} } func (m *AddLogStreamRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -255,6 +409,13 @@ func (m *AddLogStreamRequest) XXX_DiscardUnknown() { var xxx_messageInfo_AddLogStreamRequest proto.InternalMessageInfo +func (m *AddLogStreamRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *AddLogStreamRequest) GetReplicas() []*varlogpb.ReplicaDescriptor { if m != nil { return m.Replicas @@ -263,17 +424,14 @@ func (m *AddLogStreamRequest) GetReplicas() []*varlogpb.ReplicaDescriptor { } type AddLogStreamResponse struct { - LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` } func (m *AddLogStreamResponse) Reset() { *m = AddLogStreamResponse{} } func (m *AddLogStreamResponse) String() string { return proto.CompactTextString(m) } func (*AddLogStreamResponse) ProtoMessage() {} func (*AddLogStreamResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{5} + return fileDescriptor_682aff4a3f93d15c, []int{9} } func (m *AddLogStreamResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -310,17 +468,15 @@ func (m *AddLogStreamResponse) GetLogStream() *varlogpb.LogStreamDescriptor { } type UnregisterLogStreamRequest struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *UnregisterLogStreamRequest) Reset() { *m = UnregisterLogStreamRequest{} } func (m *UnregisterLogStreamRequest) String() string { return proto.CompactTextString(m) } func (*UnregisterLogStreamRequest) ProtoMessage() {} func (*UnregisterLogStreamRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{6} + return fileDescriptor_682aff4a3f93d15c, []int{10} } func (m *UnregisterLogStreamRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -349,6 +505,13 @@ func (m *UnregisterLogStreamRequest) XXX_DiscardUnknown() { var xxx_messageInfo_UnregisterLogStreamRequest proto.InternalMessageInfo +func (m *UnregisterLogStreamRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *UnregisterLogStreamRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -357,16 +520,13 @@ func (m *UnregisterLogStreamRequest) GetLogStreamID() github_daumkakao_com_varlo } type UnregisterLogStreamResponse struct { - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` } func (m *UnregisterLogStreamResponse) Reset() { *m = UnregisterLogStreamResponse{} } func (m *UnregisterLogStreamResponse) String() string { return proto.CompactTextString(m) } func (*UnregisterLogStreamResponse) ProtoMessage() {} func (*UnregisterLogStreamResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{7} + return fileDescriptor_682aff4a3f93d15c, []int{11} } func (m *UnregisterLogStreamResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -396,18 +556,16 @@ func (m *UnregisterLogStreamResponse) XXX_DiscardUnknown() { var xxx_messageInfo_UnregisterLogStreamResponse proto.InternalMessageInfo type RemoveLogStreamReplicaRequest struct { - StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,1,opt,name=storage_node_id,json=storageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storage_node_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,2,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,3,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *RemoveLogStreamReplicaRequest) Reset() { *m = RemoveLogStreamReplicaRequest{} } func (m *RemoveLogStreamReplicaRequest) String() string { return proto.CompactTextString(m) } func (*RemoveLogStreamReplicaRequest) ProtoMessage() {} func (*RemoveLogStreamReplicaRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{8} + return fileDescriptor_682aff4a3f93d15c, []int{12} } func (m *RemoveLogStreamReplicaRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -443,6 +601,13 @@ func (m *RemoveLogStreamReplicaRequest) GetStorageNodeID() github_daumkakao_com_ return 0 } +func (m *RemoveLogStreamReplicaRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *RemoveLogStreamReplicaRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -451,16 +616,13 @@ func (m *RemoveLogStreamReplicaRequest) GetLogStreamID() github_daumkakao_com_va } type RemoveLogStreamReplicaResponse struct { - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` } func (m *RemoveLogStreamReplicaResponse) Reset() { *m = RemoveLogStreamReplicaResponse{} } func (m *RemoveLogStreamReplicaResponse) String() string { return proto.CompactTextString(m) } func (*RemoveLogStreamReplicaResponse) ProtoMessage() {} func (*RemoveLogStreamReplicaResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{9} + return fileDescriptor_682aff4a3f93d15c, []int{13} } func (m *RemoveLogStreamReplicaResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -490,7 +652,8 @@ func (m *RemoveLogStreamReplicaResponse) XXX_DiscardUnknown() { var xxx_messageInfo_RemoveLogStreamReplicaResponse proto.InternalMessageInfo type UpdateLogStreamRequest struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` // //// NOTE: popped_replica need not be varlog.ReplicaDescriptor, but it is //// natural. Though it is awkward, popped_storage_node_id is used here. @@ -499,18 +662,15 @@ type UpdateLogStreamRequest struct { //"github.com/kakao/varlog/pkg/types.StorageNodeID", //(gogoproto.customname) = "PoppedStorageNodeID" //]; - PoppedReplica *varlogpb.ReplicaDescriptor `protobuf:"bytes,2,opt,name=popped_replica,json=poppedReplica,proto3" json:"popped_replica,omitempty"` - PushedReplica *varlogpb.ReplicaDescriptor `protobuf:"bytes,3,opt,name=pushed_replica,json=pushedReplica,proto3" json:"pushed_replica,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + PoppedReplica *varlogpb.ReplicaDescriptor `protobuf:"bytes,3,opt,name=popped_replica,json=poppedReplica,proto3" json:"popped_replica,omitempty"` + PushedReplica *varlogpb.ReplicaDescriptor `protobuf:"bytes,4,opt,name=pushed_replica,json=pushedReplica,proto3" json:"pushed_replica,omitempty"` } func (m *UpdateLogStreamRequest) Reset() { *m = UpdateLogStreamRequest{} } func (m *UpdateLogStreamRequest) String() string { return proto.CompactTextString(m) } func (*UpdateLogStreamRequest) ProtoMessage() {} func (*UpdateLogStreamRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{10} + return fileDescriptor_682aff4a3f93d15c, []int{14} } func (m *UpdateLogStreamRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -539,6 +699,13 @@ func (m *UpdateLogStreamRequest) XXX_DiscardUnknown() { var xxx_messageInfo_UpdateLogStreamRequest proto.InternalMessageInfo +func (m *UpdateLogStreamRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *UpdateLogStreamRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -561,17 +728,14 @@ func (m *UpdateLogStreamRequest) GetPushedReplica() *varlogpb.ReplicaDescriptor } type UpdateLogStreamResponse struct { - LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` } func (m *UpdateLogStreamResponse) Reset() { *m = UpdateLogStreamResponse{} } func (m *UpdateLogStreamResponse) String() string { return proto.CompactTextString(m) } func (*UpdateLogStreamResponse) ProtoMessage() {} func (*UpdateLogStreamResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{11} + return fileDescriptor_682aff4a3f93d15c, []int{15} } func (m *UpdateLogStreamResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -608,17 +772,15 @@ func (m *UpdateLogStreamResponse) GetLogStream() *varlogpb.LogStreamDescriptor { } type SealRequest struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *SealRequest) Reset() { *m = SealRequest{} } func (m *SealRequest) String() string { return proto.CompactTextString(m) } func (*SealRequest) ProtoMessage() {} func (*SealRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{12} + return fileDescriptor_682aff4a3f93d15c, []int{16} } func (m *SealRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -647,6 +809,13 @@ func (m *SealRequest) XXX_DiscardUnknown() { var xxx_messageInfo_SealRequest proto.InternalMessageInfo +func (m *SealRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *SealRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -655,18 +824,15 @@ func (m *SealRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_ty } type SealResponse struct { - LogStreams []varlogpb.LogStreamMetadataDescriptor `protobuf:"bytes,1,rep,name=log_streams,json=logStreams,proto3" json:"log_streams"` - SealedGLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=sealed_glsn,json=sealedGlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"sealed_glsn,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStreams []varlogpb.LogStreamMetadataDescriptor `protobuf:"bytes,1,rep,name=log_streams,json=logStreams,proto3" json:"log_streams"` + SealedGLSN github_daumkakao_com_varlog_varlog_pkg_types.GLSN `protobuf:"varint,2,opt,name=sealed_glsn,json=sealedGlsn,proto3,casttype=github.com/kakao/varlog/pkg/types.GLSN" json:"sealed_glsn,omitempty"` } func (m *SealResponse) Reset() { *m = SealResponse{} } func (m *SealResponse) String() string { return proto.CompactTextString(m) } func (*SealResponse) ProtoMessage() {} func (*SealResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{13} + return fileDescriptor_682aff4a3f93d15c, []int{17} } func (m *SealResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -710,19 +876,17 @@ func (m *SealResponse) GetSealedGLSN() github_daumkakao_com_varlog_varlog_pkg_ty } type SyncRequest struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - SrcStorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,2,opt,name=src_storage_node_id,json=srcStorageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"src_storage_node_id,omitempty"` - DstStorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,3,opt,name=dst_storage_node_id,json=dstStorageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"dst_storage_node_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` + SrcStorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,3,opt,name=src_storage_node_id,json=srcStorageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"src_storage_node_id,omitempty"` + DstStorageNodeID github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID `protobuf:"varint,4,opt,name=dst_storage_node_id,json=dstStorageNodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"dst_storage_node_id,omitempty"` } func (m *SyncRequest) Reset() { *m = SyncRequest{} } func (m *SyncRequest) String() string { return proto.CompactTextString(m) } func (*SyncRequest) ProtoMessage() {} func (*SyncRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{14} + return fileDescriptor_682aff4a3f93d15c, []int{18} } func (m *SyncRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -751,6 +915,13 @@ func (m *SyncRequest) XXX_DiscardUnknown() { var xxx_messageInfo_SyncRequest proto.InternalMessageInfo +func (m *SyncRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *SyncRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -773,17 +944,14 @@ func (m *SyncRequest) GetDstStorageNodeID() github_daumkakao_com_varlog_varlog_p } type SyncResponse struct { - Status *snpb.SyncStatus `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Status *snpb.SyncStatus `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"` } func (m *SyncResponse) Reset() { *m = SyncResponse{} } func (m *SyncResponse) String() string { return proto.CompactTextString(m) } func (*SyncResponse) ProtoMessage() {} func (*SyncResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{15} + return fileDescriptor_682aff4a3f93d15c, []int{19} } func (m *SyncResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -820,17 +988,15 @@ func (m *SyncResponse) GetStatus() *snpb.SyncStatus { } type UnsealRequest struct { - LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,1,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + TopicID github_daumkakao_com_varlog_varlog_pkg_types.TopicID `protobuf:"varint,1,opt,name=topic_id,json=topicId,proto3,casttype=github.com/kakao/varlog/pkg/types.TopicID" json:"topic_id,omitempty"` + LogStreamID github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID `protobuf:"varint,2,opt,name=log_stream_id,json=logStreamId,proto3,casttype=github.com/kakao/varlog/pkg/types.LogStreamID" json:"log_stream_id,omitempty"` } func (m *UnsealRequest) Reset() { *m = UnsealRequest{} } func (m *UnsealRequest) String() string { return proto.CompactTextString(m) } func (*UnsealRequest) ProtoMessage() {} func (*UnsealRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{16} + return fileDescriptor_682aff4a3f93d15c, []int{20} } func (m *UnsealRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -859,6 +1025,13 @@ func (m *UnsealRequest) XXX_DiscardUnknown() { var xxx_messageInfo_UnsealRequest proto.InternalMessageInfo +func (m *UnsealRequest) GetTopicID() github_daumkakao_com_varlog_varlog_pkg_types.TopicID { + if m != nil { + return m.TopicID + } + return 0 +} + func (m *UnsealRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_types.LogStreamID { if m != nil { return m.LogStreamID @@ -867,17 +1040,14 @@ func (m *UnsealRequest) GetLogStreamID() github_daumkakao_com_varlog_varlog_pkg_ } type UnsealResponse struct { - LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LogStream *varlogpb.LogStreamDescriptor `protobuf:"bytes,1,opt,name=log_stream,json=logStream,proto3" json:"log_stream,omitempty"` } func (m *UnsealResponse) Reset() { *m = UnsealResponse{} } func (m *UnsealResponse) String() string { return proto.CompactTextString(m) } func (*UnsealResponse) ProtoMessage() {} func (*UnsealResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{17} + return fileDescriptor_682aff4a3f93d15c, []int{21} } func (m *UnsealResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -914,19 +1084,16 @@ func (m *UnsealResponse) GetLogStream() *varlogpb.LogStreamDescriptor { } type GetMRMembersResponse struct { - Leader github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=leader,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"leader,omitempty"` - ReplicationFactor int32 `protobuf:"varint,2,opt,name=replication_factor,json=replicationFactor,proto3" json:"replication_factor,omitempty"` - Members map[github_daumkakao_com_varlog_varlog_pkg_types.NodeID]string `protobuf:"bytes,3,rep,name=members,proto3,castkey=github.com/kakao/varlog/pkg/types.NodeID" json:"members,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Leader github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=leader,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"leader,omitempty"` + ReplicationFactor int32 `protobuf:"varint,2,opt,name=replication_factor,json=replicationFactor,proto3" json:"replication_factor,omitempty"` + Members map[github_daumkakao_com_varlog_varlog_pkg_types.NodeID]string `protobuf:"bytes,3,rep,name=members,proto3,castkey=github.com/kakao/varlog/pkg/types.NodeID" json:"members,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` } func (m *GetMRMembersResponse) Reset() { *m = GetMRMembersResponse{} } func (m *GetMRMembersResponse) String() string { return proto.CompactTextString(m) } func (*GetMRMembersResponse) ProtoMessage() {} func (*GetMRMembersResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{18} + return fileDescriptor_682aff4a3f93d15c, []int{22} } func (m *GetMRMembersResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -977,17 +1144,14 @@ func (m *GetMRMembersResponse) GetMembers() map[github_daumkakao_com_varlog_varl } type GetStorageNodesResponse struct { - Storagenodes map[github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID]string `protobuf:"bytes,1,rep,name=storagenodes,proto3,castkey=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storagenodes,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Storagenodes map[github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID]string `protobuf:"bytes,1,rep,name=storagenodes,proto3,castkey=github.com/kakao/varlog/pkg/types.StorageNodeID" json:"storagenodes,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` } func (m *GetStorageNodesResponse) Reset() { *m = GetStorageNodesResponse{} } func (m *GetStorageNodesResponse) String() string { return proto.CompactTextString(m) } func (*GetStorageNodesResponse) ProtoMessage() {} func (*GetStorageNodesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{19} + return fileDescriptor_682aff4a3f93d15c, []int{23} } func (m *GetStorageNodesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1024,18 +1188,15 @@ func (m *GetStorageNodesResponse) GetStoragenodes() map[github_daumkakao_com_var } type AddMRPeerRequest struct { - RaftURL string `protobuf:"bytes,1,opt,name=raft_url,json=raftUrl,proto3" json:"raft_url,omitempty"` - RPCAddr string `protobuf:"bytes,2,opt,name=rpc_addr,json=rpcAddr,proto3" json:"rpc_addr,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + RaftURL string `protobuf:"bytes,1,opt,name=raft_url,json=raftUrl,proto3" json:"raft_url,omitempty"` + RPCAddr string `protobuf:"bytes,2,opt,name=rpc_addr,json=rpcAddr,proto3" json:"rpc_addr,omitempty"` } func (m *AddMRPeerRequest) Reset() { *m = AddMRPeerRequest{} } func (m *AddMRPeerRequest) String() string { return proto.CompactTextString(m) } func (*AddMRPeerRequest) ProtoMessage() {} func (*AddMRPeerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{20} + return fileDescriptor_682aff4a3f93d15c, []int{24} } func (m *AddMRPeerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1079,17 +1240,14 @@ func (m *AddMRPeerRequest) GetRPCAddr() string { } type AddMRPeerResponse struct { - NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + NodeID github_daumkakao_com_varlog_varlog_pkg_types.NodeID `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3,casttype=github.com/kakao/varlog/pkg/types.NodeID" json:"node_id,omitempty"` } func (m *AddMRPeerResponse) Reset() { *m = AddMRPeerResponse{} } func (m *AddMRPeerResponse) String() string { return proto.CompactTextString(m) } func (*AddMRPeerResponse) ProtoMessage() {} func (*AddMRPeerResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{21} + return fileDescriptor_682aff4a3f93d15c, []int{25} } func (m *AddMRPeerResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1126,17 +1284,14 @@ func (m *AddMRPeerResponse) GetNodeID() github_daumkakao_com_varlog_varlog_pkg_t } type RemoveMRPeerRequest struct { - RaftURL string `protobuf:"bytes,1,opt,name=raft_url,json=raftUrl,proto3" json:"raft_url,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + RaftURL string `protobuf:"bytes,1,opt,name=raft_url,json=raftUrl,proto3" json:"raft_url,omitempty"` } func (m *RemoveMRPeerRequest) Reset() { *m = RemoveMRPeerRequest{} } func (m *RemoveMRPeerRequest) String() string { return proto.CompactTextString(m) } func (*RemoveMRPeerRequest) ProtoMessage() {} func (*RemoveMRPeerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{22} + return fileDescriptor_682aff4a3f93d15c, []int{26} } func (m *RemoveMRPeerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1173,16 +1328,13 @@ func (m *RemoveMRPeerRequest) GetRaftURL() string { } type RemoveMRPeerResponse struct { - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` } func (m *RemoveMRPeerResponse) Reset() { *m = RemoveMRPeerResponse{} } func (m *RemoveMRPeerResponse) String() string { return proto.CompactTextString(m) } func (*RemoveMRPeerResponse) ProtoMessage() {} func (*RemoveMRPeerResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_682aff4a3f93d15c, []int{23} + return fileDescriptor_682aff4a3f93d15c, []int{27} } func (m *RemoveMRPeerResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1216,6 +1368,10 @@ func init() { proto.RegisterType((*AddStorageNodeResponse)(nil), "varlog.vmspb.AddStorageNodeResponse") proto.RegisterType((*UnregisterStorageNodeRequest)(nil), "varlog.vmspb.UnregisterStorageNodeRequest") proto.RegisterType((*UnregisterStorageNodeResponse)(nil), "varlog.vmspb.UnregisterStorageNodeResponse") + proto.RegisterType((*AddTopicRequest)(nil), "varlog.vmspb.AddTopicRequest") + proto.RegisterType((*AddTopicResponse)(nil), "varlog.vmspb.AddTopicResponse") + proto.RegisterType((*UnregisterTopicRequest)(nil), "varlog.vmspb.UnregisterTopicRequest") + proto.RegisterType((*UnregisterTopicResponse)(nil), "varlog.vmspb.UnregisterTopicResponse") proto.RegisterType((*AddLogStreamRequest)(nil), "varlog.vmspb.AddLogStreamRequest") proto.RegisterType((*AddLogStreamResponse)(nil), "varlog.vmspb.AddLogStreamResponse") proto.RegisterType((*UnregisterLogStreamRequest)(nil), "varlog.vmspb.UnregisterLogStreamRequest") @@ -1243,87 +1399,95 @@ func init() { func init() { proto.RegisterFile("proto/vmspb/vms.proto", fileDescriptor_682aff4a3f93d15c) } var fileDescriptor_682aff4a3f93d15c = []byte{ - // 1273 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xc4, 0x58, 0x4f, 0x73, 0xdb, 0x44, - 0x14, 0xb7, 0xe2, 0x34, 0x6e, 0x9e, 0xed, 0x34, 0xdd, 0xfc, 0x2b, 0x6a, 0x1b, 0x15, 0xd1, 0x32, - 0x05, 0x5a, 0x79, 0xda, 0x1e, 0xda, 0xe9, 0x00, 0x9d, 0x38, 0x29, 0x99, 0x0c, 0x49, 0x5b, 0xe4, - 0xf1, 0x30, 0xd3, 0x0e, 0x18, 0xd9, 0xbb, 0x51, 0x3d, 0x91, 0x25, 0xb1, 0x2b, 0x87, 0xf1, 0x85, - 0x81, 0x1c, 0x98, 0xe1, 0xc0, 0xb9, 0x07, 0x0e, 0xf0, 0x71, 0xca, 0x8d, 0x1b, 0x37, 0x77, 0xc6, - 0x7c, 0x09, 0x26, 0x27, 0x46, 0xbb, 0x92, 0x2c, 0x59, 0x72, 0xea, 0x84, 0x30, 0xb9, 0x24, 0xde, - 0x7d, 0xff, 0x7e, 0xef, 0xed, 0x7b, 0x6f, 0xdf, 0x0a, 0x96, 0x5c, 0xea, 0x78, 0x4e, 0x65, 0xbf, - 0xc3, 0xdc, 0xa6, 0xff, 0x57, 0xe3, 0x6b, 0x54, 0xda, 0x37, 0xa8, 0xe5, 0x98, 0x1a, 0xdf, 0x97, - 0x6f, 0x9b, 0x6d, 0xef, 0x65, 0xb7, 0xa9, 0xb5, 0x9c, 0x4e, 0xc5, 0x74, 0x4c, 0xa7, 0xc2, 0x99, - 0x9a, 0xdd, 0x5d, 0xbe, 0x12, 0x1a, 0xfc, 0x5f, 0x42, 0x58, 0xbe, 0x6c, 0x3a, 0x8e, 0x69, 0x91, - 0x21, 0x17, 0xe9, 0xb8, 0x5e, 0x2f, 0x20, 0xae, 0x08, 0xcd, 0x6e, 0xb3, 0xd2, 0x21, 0x9e, 0x81, - 0x0d, 0xcf, 0x08, 0x08, 0x4b, 0xcc, 0x76, 0x9b, 0x15, 0x4a, 0x5c, 0xab, 0xdd, 0x32, 0x3c, 0x87, - 0x8a, 0x6d, 0xf5, 0x0e, 0x2c, 0xad, 0x61, 0x5c, 0xf3, 0x1c, 0x6a, 0x98, 0xe4, 0x89, 0x83, 0x89, - 0x4e, 0xbe, 0xed, 0x12, 0xe6, 0xa1, 0x4b, 0x50, 0x30, 0x30, 0xa6, 0x84, 0xb1, 0x4b, 0xd2, 0x35, - 0xe9, 0xe6, 0xac, 0x1e, 0x2e, 0xd5, 0x3d, 0x58, 0x1e, 0x15, 0x61, 0xae, 0x63, 0x33, 0x82, 0xbe, - 0x80, 0x12, 0x13, 0xdb, 0x0d, 0xdb, 0xc1, 0x84, 0x0b, 0x16, 0xef, 0x6a, 0x5a, 0xe8, 0x6d, 0x00, - 0x4d, 0x8b, 0xc9, 0xee, 0x04, 0x28, 0x37, 0x08, 0x6b, 0xd1, 0xb6, 0xeb, 0x39, 0x54, 0x2f, 0xb2, - 0x21, 0x59, 0x7d, 0x25, 0xc1, 0x95, 0xba, 0x4d, 0x89, 0xd9, 0x66, 0x1e, 0xa1, 0x19, 0x38, 0xbf, - 0x83, 0x0b, 0x71, 0x9b, 0x8d, 0x36, 0xe6, 0x66, 0xcb, 0xd5, 0xa7, 0x83, 0xbe, 0x52, 0x8e, 0x09, - 0x6c, 0x6d, 0x1c, 0xf6, 0x95, 0x87, 0x41, 0xa8, 0xb1, 0xd1, 0xed, 0xec, 0x19, 0x7b, 0x86, 0xc3, - 0x83, 0x2e, 0x80, 0x85, 0xff, 0xdc, 0x3d, 0xb3, 0xe2, 0xf5, 0x5c, 0xc2, 0xb4, 0x84, 0xb4, 0x5e, - 0x8e, 0xe1, 0xda, 0xc2, 0xaa, 0x02, 0x57, 0xc7, 0x00, 0x13, 0xd1, 0x50, 0x5f, 0xc0, 0xc2, 0x1a, - 0xc6, 0xdb, 0x8e, 0x59, 0xf3, 0x28, 0x31, 0x3a, 0x21, 0xe0, 0x0d, 0x38, 0x1f, 0x9c, 0x82, 0x1f, - 0xd9, 0xfc, 0xcd, 0xe2, 0x5d, 0x35, 0x15, 0x20, 0x5d, 0x30, 0x0c, 0x83, 0x52, 0x9d, 0x7e, 0xdd, - 0x57, 0x24, 0x3d, 0x92, 0x54, 0x5f, 0xc0, 0x62, 0x52, 0x79, 0x70, 0x04, 0xeb, 0x00, 0x96, 0x63, - 0x36, 0x18, 0xdf, 0x0d, 0x0e, 0xe0, 0x7a, 0x4a, 0x7f, 0x24, 0x17, 0x0b, 0xfb, 0xac, 0x15, 0x6e, - 0xaa, 0xbf, 0x48, 0x20, 0x0f, 0x7d, 0x4b, 0x79, 0xe0, 0x40, 0x79, 0x68, 0x63, 0x18, 0xf0, 0xcf, - 0x07, 0x7d, 0xa5, 0x18, 0x31, 0xf3, 0x70, 0x3f, 0x38, 0x56, 0xb8, 0x63, 0xb2, 0x7a, 0x31, 0x42, - 0xb3, 0x85, 0xd5, 0xab, 0x70, 0x39, 0x13, 0x4e, 0x10, 0xe8, 0x9f, 0xa7, 0xe0, 0xaa, 0x4e, 0x3a, - 0xce, 0x3e, 0x89, 0xd1, 0x78, 0x9c, 0xce, 0x3a, 0x49, 0xd2, 0xa1, 0x9a, 0xfa, 0x9f, 0x43, 0x75, - 0x0d, 0x56, 0xc7, 0x85, 0x22, 0x88, 0xd6, 0x6f, 0x53, 0xb0, 0x5c, 0x77, 0xb1, 0xe1, 0x91, 0x33, - 0x3f, 0x58, 0xb4, 0x05, 0x73, 0xae, 0xe3, 0xba, 0x04, 0x37, 0x82, 0xc4, 0xe6, 0xf1, 0x99, 0xa8, - 0x22, 0xf4, 0xb2, 0x90, 0x0c, 0x08, 0x5c, 0x55, 0x97, 0xbd, 0x8c, 0xa9, 0xca, 0x1f, 0x43, 0x15, - 0x97, 0x0c, 0x08, 0xea, 0xd7, 0xb0, 0x92, 0x0a, 0xd0, 0x69, 0x96, 0xd7, 0xf7, 0x50, 0xac, 0x11, - 0xc3, 0x3a, 0xb3, 0x72, 0xfa, 0x43, 0x82, 0x92, 0x00, 0x10, 0x78, 0x55, 0x83, 0xe2, 0x10, 0x41, - 0xd8, 0x95, 0x6e, 0x8d, 0x77, 0x2b, 0xdd, 0xb4, 0x79, 0x7f, 0xca, 0xe9, 0x10, 0x99, 0x61, 0x08, - 0x43, 0x91, 0x11, 0xc3, 0x22, 0xb8, 0x61, 0x5a, 0xcc, 0xe6, 0x07, 0x3b, 0x5d, 0x5d, 0x1f, 0xf4, - 0x15, 0xa8, 0xf1, 0xed, 0xcd, 0xed, 0xda, 0x93, 0xc3, 0xbe, 0x72, 0xe7, 0x58, 0x3e, 0xf9, 0x42, - 0x3a, 0x08, 0xbd, 0x9b, 0x16, 0xb3, 0xd5, 0x9f, 0xf2, 0x50, 0xac, 0xf5, 0xec, 0xd6, 0x99, 0xa5, - 0xf0, 0x8f, 0x12, 0x2c, 0x30, 0xda, 0x6a, 0x8c, 0xf6, 0x17, 0x51, 0xe8, 0xfa, 0xa0, 0xaf, 0xcc, - 0xd7, 0x68, 0xeb, 0x34, 0x5b, 0xcc, 0x3c, 0x4b, 0xea, 0x13, 0x18, 0x30, 0xf3, 0x52, 0x18, 0xf2, - 0x43, 0x0c, 0x1b, 0xcc, 0x3b, 0x55, 0x0c, 0x38, 0xa9, 0x0f, 0xab, 0x8f, 0xa0, 0x24, 0xce, 0x21, - 0xc8, 0xa9, 0x0a, 0xcc, 0x30, 0xcf, 0xf0, 0xba, 0x2c, 0xa8, 0x92, 0x95, 0x30, 0x9d, 0xfc, 0x39, - 0x44, 0xf3, 0x59, 0x6b, 0x9c, 0xac, 0x07, 0x6c, 0xea, 0x0f, 0x12, 0x94, 0xeb, 0x36, 0x3b, 0xcb, - 0xc2, 0xa8, 0xc3, 0x5c, 0x88, 0xe0, 0x34, 0xeb, 0xfd, 0xaf, 0x29, 0x58, 0xdc, 0x24, 0xde, 0x8e, - 0xbe, 0x43, 0x3a, 0x4d, 0x42, 0x59, 0xa4, 0xfd, 0x29, 0xcc, 0x58, 0xc4, 0xc0, 0x84, 0x72, 0xcd, - 0xd3, 0xd5, 0xfb, 0x87, 0x7d, 0xe5, 0xde, 0xb1, 0x5c, 0x09, 0x8e, 0x23, 0x50, 0x83, 0x6e, 0x03, - 0x0a, 0x27, 0xbc, 0xb6, 0x63, 0x37, 0x76, 0x8d, 0x96, 0xe7, 0x50, 0x9e, 0x8a, 0xe7, 0xf4, 0x8b, - 0x31, 0xca, 0x67, 0x9c, 0x80, 0x0e, 0x24, 0x28, 0x74, 0x04, 0xa6, 0x4b, 0x79, 0x5e, 0xf4, 0x15, - 0x2d, 0x3e, 0x99, 0x6a, 0x59, 0xa8, 0xb5, 0x60, 0xfd, 0xd8, 0xf6, 0x68, 0xaf, 0x7a, 0xff, 0xe0, - 0xcd, 0xc9, 0x20, 0x87, 0x86, 0xe5, 0x87, 0x50, 0x8a, 0x6b, 0x44, 0xf3, 0x90, 0xdf, 0x23, 0x3d, - 0x11, 0x11, 0xdd, 0xff, 0x89, 0x16, 0xe1, 0xdc, 0xbe, 0x61, 0x75, 0x09, 0x77, 0x64, 0x56, 0x17, - 0x8b, 0x87, 0x53, 0x0f, 0x24, 0xf5, 0x1f, 0x09, 0x56, 0x36, 0x49, 0x3c, 0x13, 0x87, 0xc1, 0xfd, - 0x55, 0x8a, 0xa6, 0x51, 0xbf, 0x1e, 0xc2, 0xb6, 0x76, 0x3f, 0xe5, 0x61, 0x96, 0x74, 0x98, 0xee, - 0x5c, 0x52, 0x78, 0xfa, 0xe9, 0xc1, 0x9b, 0xff, 0x54, 0x32, 0x09, 0x30, 0xf2, 0x23, 0xb8, 0x98, - 0x32, 0x11, 0x77, 0xbd, 0xfc, 0x36, 0xd7, 0x9b, 0x30, 0xbf, 0x86, 0xf1, 0x8e, 0xfe, 0x8c, 0x10, - 0x1a, 0x16, 0xcc, 0xfb, 0x70, 0x9e, 0x1a, 0xbb, 0x5e, 0xa3, 0x4b, 0x2d, 0x31, 0xb4, 0x57, 0x8b, - 0x83, 0xbe, 0x52, 0xd0, 0x8d, 0x5d, 0xaf, 0xae, 0x6f, 0xeb, 0x05, 0x9f, 0x58, 0xa7, 0x16, 0xe7, - 0x73, 0x5b, 0x0d, 0x7f, 0xa0, 0x17, 0x8a, 0x03, 0xbe, 0x67, 0xeb, 0x6b, 0x18, 0x53, 0xbd, 0x40, - 0xdd, 0x96, 0xff, 0x43, 0x75, 0xe0, 0x62, 0xcc, 0x46, 0x10, 0xd7, 0xe7, 0x50, 0x88, 0xcf, 0x50, - 0xd3, 0xd5, 0xb5, 0x41, 0x5f, 0x99, 0x89, 0xba, 0xca, 0xc9, 0xf2, 0xd7, 0x16, 0x4d, 0xe4, 0x13, - 0x58, 0x10, 0xd3, 0xcb, 0x89, 0xfc, 0x52, 0x97, 0x61, 0x31, 0x29, 0x2e, 0x20, 0xdf, 0x7d, 0x35, - 0x0b, 0x73, 0xeb, 0x56, 0xd7, 0x9f, 0x1e, 0x77, 0x0c, 0xdb, 0x30, 0x09, 0x45, 0x5f, 0xc1, 0x5c, - 0xf2, 0x11, 0x83, 0xde, 0x4b, 0x26, 0x46, 0xe6, 0xab, 0x48, 0xbe, 0x7e, 0x34, 0x53, 0x30, 0x62, - 0xe5, 0x10, 0x85, 0xa5, 0xcc, 0xc7, 0x01, 0xfa, 0x30, 0xa9, 0xe0, 0xa8, 0xa7, 0x8d, 0xfc, 0xd1, - 0x44, 0xbc, 0x91, 0xcd, 0x2f, 0xa1, 0x14, 0x7f, 0x12, 0xa0, 0x77, 0x53, 0x58, 0x47, 0x07, 0x3e, - 0x59, 0x3d, 0x8a, 0x25, 0x52, 0x6c, 0xc1, 0x42, 0xc6, 0xf8, 0x8d, 0x6e, 0x8e, 0x83, 0x97, 0x32, - 0xf3, 0xc1, 0x04, 0x9c, 0x91, 0xb5, 0x2e, 0x2c, 0x67, 0x4f, 0xb0, 0x68, 0x24, 0x1e, 0x47, 0x8e, - 0xfc, 0xf2, 0xad, 0xc9, 0x98, 0x23, 0xb3, 0xdf, 0xc0, 0x85, 0x91, 0xa1, 0x0f, 0x8d, 0x1c, 0x76, - 0xf6, 0xd0, 0x2c, 0xdf, 0x78, 0x0b, 0x57, 0x64, 0xe1, 0x11, 0x4c, 0xfb, 0x93, 0x0f, 0x7a, 0x27, - 0x29, 0x10, 0x1b, 0x05, 0x65, 0x39, 0x8b, 0x94, 0x50, 0xd0, 0xb3, 0x5b, 0x29, 0x05, 0xc3, 0xf1, - 0x27, 0xa5, 0x20, 0x76, 0x23, 0xab, 0x39, 0xf4, 0x18, 0x66, 0xc4, 0xfd, 0x86, 0x2e, 0x8f, 0x9e, - 0x48, 0xec, 0xde, 0x95, 0xaf, 0x64, 0x13, 0x23, 0x35, 0xdb, 0x50, 0x8a, 0x5f, 0x0c, 0x68, 0x59, - 0x13, 0x5f, 0x24, 0xb4, 0xf0, 0x8b, 0x84, 0xf6, 0xb8, 0xe3, 0x7a, 0xbd, 0xd1, 0xec, 0xca, 0xba, - 0x4c, 0xd4, 0x1c, 0x7a, 0x02, 0xb3, 0x51, 0x93, 0x41, 0xab, 0xa9, 0x84, 0x4c, 0x74, 0x02, 0x59, - 0x19, 0x4b, 0x8f, 0x97, 0x41, 0xbc, 0x09, 0x8c, 0x96, 0x41, 0x46, 0x7f, 0x19, 0x05, 0x9a, 0xd5, - 0x43, 0xd4, 0x1c, 0xd2, 0xe1, 0xc2, 0xc8, 0x6d, 0x31, 0xd6, 0xf3, 0x1b, 0x13, 0x5d, 0x32, 0x6a, - 0xae, 0xfa, 0xf1, 0xeb, 0xc1, 0xaa, 0xf4, 0xe7, 0x60, 0x55, 0xfa, 0xfd, 0xef, 0x55, 0xe9, 0xb9, - 0x36, 0x49, 0xef, 0x1c, 0x7e, 0x50, 0x6a, 0xce, 0xf0, 0xc5, 0xbd, 0x7f, 0x03, 0x00, 0x00, 0xff, - 0xff, 0x43, 0x1b, 0x64, 0x41, 0x66, 0x12, 0x00, 0x00, + // 1408 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x58, 0xcf, 0x6f, 0x1b, 0xc5, + 0x17, 0xf7, 0x26, 0x4e, 0x9c, 0x3c, 0x3b, 0x3f, 0x3a, 0xf9, 0xd9, 0x6d, 0x63, 0xe7, 0xbb, 0xdf, + 0x16, 0x15, 0x68, 0xd7, 0x6a, 0x8b, 0x68, 0x55, 0x09, 0xaa, 0x38, 0x29, 0x21, 0x34, 0x69, 0xcb, + 0x9a, 0x08, 0xa9, 0x88, 0x9a, 0xb5, 0x67, 0xe2, 0x5a, 0x59, 0x7b, 0xb7, 0x3b, 0xeb, 0x80, 0x8f, + 0xf4, 0x2f, 0xe0, 0xc6, 0x81, 0x0b, 0xff, 0x07, 0x77, 0x54, 0x6e, 0xbd, 0xc1, 0xc9, 0x45, 0x8e, + 0x84, 0xc4, 0x81, 0x3b, 0xea, 0x05, 0xb4, 0x33, 0xb3, 0xbf, 0xbc, 0xeb, 0x34, 0x29, 0x21, 0x48, + 0xb9, 0x24, 0x9e, 0x7d, 0xbf, 0x3e, 0xef, 0xbd, 0x79, 0x6f, 0xe6, 0x0d, 0xcc, 0x59, 0xb6, 0xe9, + 0x98, 0xc5, 0xbd, 0x26, 0xb5, 0xaa, 0xee, 0x5f, 0x95, 0xad, 0x51, 0x6e, 0x4f, 0xb7, 0x0d, 0xb3, + 0xae, 0xb2, 0xef, 0xf2, 0x95, 0x7a, 0xc3, 0x79, 0xdc, 0xae, 0xaa, 0x35, 0xb3, 0x59, 0xac, 0x9b, + 0x75, 0xb3, 0xc8, 0x98, 0xaa, 0xed, 0x1d, 0xb6, 0xe2, 0x1a, 0xdc, 0x5f, 0x5c, 0x58, 0x3e, 0x57, + 0x37, 0xcd, 0xba, 0x41, 0x02, 0x2e, 0xd2, 0xb4, 0x9c, 0x8e, 0x20, 0x2e, 0x70, 0xcd, 0x56, 0xb5, + 0xd8, 0x24, 0x8e, 0x8e, 0x75, 0x47, 0x17, 0x84, 0x39, 0xda, 0xb2, 0xaa, 0x45, 0x9b, 0x58, 0x46, + 0xa3, 0xa6, 0x3b, 0xa6, 0xcd, 0x3f, 0x2b, 0x57, 0x61, 0x6e, 0x05, 0xe3, 0xb2, 0x63, 0xda, 0x7a, + 0x9d, 0xdc, 0x33, 0x31, 0xd1, 0xc8, 0x93, 0x36, 0xa1, 0x0e, 0x5a, 0x84, 0x8c, 0x8e, 0xb1, 0x4d, + 0x28, 0x5d, 0x94, 0x96, 0xa5, 0x4b, 0xe3, 0x9a, 0xb7, 0x54, 0x76, 0x61, 0xbe, 0x5f, 0x84, 0x5a, + 0x66, 0x8b, 0x12, 0xf4, 0x31, 0xe4, 0x28, 0xff, 0x5c, 0x69, 0x99, 0x98, 0x30, 0xc1, 0xec, 0x35, + 0x55, 0xf5, 0xbc, 0x15, 0xd0, 0xd4, 0x90, 0xec, 0x96, 0x40, 0xb9, 0x46, 0x68, 0xcd, 0x6e, 0x58, + 0x8e, 0x69, 0x6b, 0x59, 0x1a, 0x90, 0x95, 0x6f, 0x25, 0x38, 0xbf, 0xdd, 0xb2, 0x49, 0xbd, 0x41, + 0x1d, 0x62, 0x27, 0xe0, 0xfc, 0x12, 0xa6, 0xc2, 0x36, 0x2b, 0x0d, 0xcc, 0xcc, 0x8e, 0x94, 0xee, + 0xf7, 0xba, 0x85, 0x89, 0x90, 0xc0, 0xc6, 0xda, 0xcb, 0x6e, 0xe1, 0x96, 0x08, 0x35, 0xd6, 0xdb, + 0xcd, 0x5d, 0x7d, 0x57, 0x37, 0x59, 0xd0, 0x39, 0x30, 0xef, 0x9f, 0xb5, 0x5b, 0x2f, 0x3a, 0x1d, + 0x8b, 0x50, 0x35, 0x22, 0xad, 0x4d, 0x84, 0x70, 0x6d, 0x60, 0xa5, 0x00, 0x4b, 0x03, 0x80, 0xf1, + 0x68, 0x28, 0x4f, 0x60, 0x6a, 0x05, 0xe3, 0x4f, 0x4c, 0xab, 0x51, 0xf3, 0xc0, 0x3e, 0x82, 0x31, + 0xc7, 0x5d, 0x07, 0x28, 0x57, 0x7b, 0xdd, 0x42, 0x86, 0xf1, 0x30, 0x7c, 0xef, 0x1c, 0x09, 0x9f, + 0x90, 0xd3, 0x32, 0x4c, 0xe9, 0x06, 0x56, 0x3e, 0x82, 0xe9, 0xc0, 0xa4, 0x48, 0xca, 0xbb, 0x30, + 0xc2, 0xc8, 0x22, 0x1b, 0xcb, 0xb1, 0x6c, 0x30, 0xf6, 0x50, 0xfc, 0x39, 0xbb, 0xf2, 0x15, 0xcc, + 0x07, 0xfe, 0x9d, 0xa8, 0x17, 0x67, 0x61, 0x21, 0x66, 0x59, 0xc4, 0xf4, 0x07, 0x09, 0x66, 0x56, + 0x30, 0xde, 0x34, 0xeb, 0x65, 0xc7, 0x26, 0x7a, 0xf3, 0x84, 0x20, 0xa1, 0x35, 0x18, 0x13, 0xa5, + 0x43, 0x17, 0x87, 0x96, 0x87, 0x2f, 0x65, 0xaf, 0x29, 0xb1, 0x38, 0x6a, 0x9c, 0x21, 0x88, 0x64, + 0x29, 0xfd, 0xac, 0x5b, 0x90, 0x34, 0x5f, 0x52, 0xf9, 0x0c, 0x66, 0xa3, 0xe0, 0x45, 0x8a, 0x56, + 0x01, 0x0c, 0xb3, 0x5e, 0xa1, 0xec, 0xab, 0xc8, 0xd3, 0x85, 0x98, 0x7e, 0x5f, 0x2e, 0x94, 0xab, + 0x71, 0xc3, 0xfb, 0xa8, 0xfc, 0x21, 0x81, 0x1c, 0x84, 0xed, 0xc4, 0x23, 0x64, 0xc2, 0x44, 0xe0, + 0x83, 0x6b, 0x64, 0x88, 0x19, 0xb9, 0xdb, 0xeb, 0x16, 0xb2, 0x3e, 0x18, 0x66, 0xe8, 0xe6, 0x91, + 0x0c, 0x85, 0x64, 0xb5, 0xac, 0xef, 0xed, 0x06, 0x56, 0x96, 0xe0, 0x5c, 0xa2, 0xbb, 0x62, 0xa7, + 0xfc, 0x3e, 0x04, 0x4b, 0x1a, 0x69, 0x9a, 0x7b, 0x24, 0x44, 0x63, 0x79, 0xf8, 0xaf, 0x3b, 0x47, + 0x24, 0x15, 0x43, 0x27, 0x91, 0x8a, 0xe1, 0x7f, 0x39, 0x15, 0xcb, 0x90, 0x1f, 0x14, 0x6a, 0x91, + 0x8d, 0xbf, 0x86, 0x60, 0x7e, 0xdb, 0xc2, 0xba, 0x43, 0x4e, 0xfd, 0xc6, 0x44, 0x1b, 0x30, 0x69, + 0x99, 0x96, 0x45, 0x70, 0x45, 0x14, 0x3e, 0x8b, 0xff, 0xa1, 0x3a, 0x86, 0x36, 0xc1, 0x25, 0x05, + 0x81, 0xa9, 0x6a, 0xd3, 0xc7, 0x21, 0x55, 0xe9, 0x23, 0xa8, 0x62, 0x92, 0x82, 0xa0, 0x3c, 0x82, + 0x85, 0x58, 0x02, 0x8e, 0xb3, 0xfd, 0x74, 0x25, 0xc8, 0x96, 0x89, 0x6e, 0x9c, 0xda, 0x7e, 0xf3, + 0x93, 0x04, 0x39, 0xee, 0xa0, 0x08, 0x5b, 0x19, 0xb2, 0x01, 0x02, 0xf7, 0x96, 0xe4, 0x1e, 0x0b, + 0x97, 0x07, 0xc7, 0x2d, 0x7e, 0xd5, 0x61, 0x07, 0x44, 0x4a, 0x03, 0xdf, 0x0c, 0x45, 0x18, 0xb2, + 0x94, 0xe8, 0x06, 0xc1, 0x95, 0xba, 0x41, 0x5b, 0xcc, 0xa9, 0x34, 0x8b, 0x1c, 0x94, 0xd9, 0xe7, + 0xf5, 0xcd, 0xf2, 0xbd, 0x97, 0xdd, 0xc2, 0xd5, 0x23, 0xf9, 0xe4, 0x0a, 0x69, 0xc0, 0xf5, 0xae, + 0x1b, 0xb4, 0xa5, 0xfc, 0x36, 0x0c, 0xd9, 0x72, 0xa7, 0x55, 0x3b, 0xb5, 0x35, 0xf8, 0xb5, 0x04, + 0x33, 0xd4, 0xae, 0x55, 0xfa, 0x1b, 0x3c, 0xef, 0x84, 0x5a, 0xaf, 0x5b, 0x98, 0x2e, 0xdb, 0xb5, + 0xe3, 0xec, 0xf1, 0xd3, 0x34, 0xaa, 0x8f, 0x63, 0xc0, 0xd4, 0x89, 0x61, 0x48, 0x07, 0x18, 0xd6, + 0xa8, 0x73, 0xac, 0x18, 0x70, 0x54, 0x1f, 0x56, 0x6e, 0x43, 0x8e, 0xe7, 0x59, 0xec, 0xd9, 0x22, + 0x8c, 0x52, 0x47, 0x77, 0xda, 0x54, 0x94, 0xf9, 0x82, 0xb7, 0x5d, 0xdd, 0xe9, 0x40, 0x75, 0x59, + 0xcb, 0x8c, 0xac, 0x09, 0x36, 0xe5, 0x57, 0x09, 0x26, 0xb6, 0x5b, 0xf4, 0x34, 0x17, 0xf6, 0x36, + 0x4c, 0x7a, 0x1e, 0x1e, 0x67, 0x43, 0xfc, 0x79, 0x08, 0x66, 0xd7, 0x89, 0xb3, 0xa5, 0x6d, 0x91, + 0x66, 0x95, 0xd8, 0xd4, 0xd7, 0x7e, 0x1f, 0x46, 0x0d, 0xa2, 0x63, 0x62, 0x33, 0xcd, 0xe9, 0xd2, + 0x8d, 0x97, 0xdd, 0xc2, 0xf5, 0x23, 0xb9, 0x22, 0xd2, 0x2d, 0xd4, 0xa0, 0x2b, 0x80, 0xbc, 0xb9, + 0xae, 0x61, 0xb6, 0x2a, 0x3b, 0x7a, 0xcd, 0x31, 0x6d, 0x1e, 0x36, 0xed, 0x4c, 0x88, 0xf2, 0x01, + 0x23, 0xa0, 0xa7, 0x12, 0x64, 0x9a, 0x1c, 0xd3, 0xe2, 0x30, 0x6b, 0x5a, 0x45, 0x35, 0x3c, 0x8f, + 0xaa, 0x49, 0xa8, 0x55, 0xb1, 0xbe, 0xd3, 0x72, 0xec, 0x4e, 0xe9, 0xc6, 0xd3, 0x17, 0xaf, 0x07, + 0xd9, 0x33, 0x2c, 0xdf, 0x82, 0x5c, 0x58, 0x23, 0x9a, 0x86, 0xe1, 0x5d, 0xd2, 0xe1, 0x11, 0xd1, + 0xdc, 0x9f, 0x68, 0x16, 0x46, 0xf6, 0x74, 0xa3, 0x4d, 0x98, 0x23, 0xe3, 0x1a, 0x5f, 0xdc, 0x1a, + 0xba, 0x29, 0x29, 0x7f, 0x4a, 0xb0, 0xb0, 0x4e, 0xc2, 0x3b, 0x3d, 0x08, 0xee, 0x77, 0x92, 0x3f, + 0x83, 0xba, 0xf5, 0xe6, 0xb5, 0xe5, 0x1b, 0x31, 0x0f, 0x93, 0xa4, 0xbd, 0x72, 0x62, 0x92, 0xdc, + 0xd3, 0xf7, 0x9f, 0xbe, 0xf8, 0x47, 0x25, 0x19, 0x01, 0x23, 0xdf, 0x86, 0x33, 0x31, 0x13, 0x61, + 0xd7, 0x47, 0x5e, 0xe5, 0x7a, 0x95, 0x0d, 0x78, 0x5b, 0xda, 0x03, 0x42, 0x6c, 0xaf, 0x20, 0xdf, + 0x80, 0x31, 0x5b, 0xdf, 0x71, 0x2a, 0x6d, 0xdb, 0xe0, 0xa3, 0x7a, 0x29, 0xeb, 0x16, 0xa4, 0xa6, + 0xef, 0x38, 0xdb, 0xda, 0xa6, 0x96, 0x71, 0x89, 0xdb, 0xb6, 0xc1, 0xf8, 0xac, 0x5a, 0xc5, 0x1d, + 0xe3, 0xb9, 0x62, 0xc1, 0xf7, 0x60, 0x75, 0x05, 0x63, 0x5b, 0xcb, 0xd8, 0x56, 0xcd, 0xfd, 0xa1, + 0x98, 0x70, 0x26, 0x64, 0x43, 0xc4, 0xf5, 0x21, 0x64, 0xc2, 0x97, 0xe4, 0x74, 0x69, 0xa5, 0xd7, + 0x2d, 0x8c, 0xfa, 0x5d, 0xeb, 0xf5, 0xf6, 0x6f, 0x8b, 0x37, 0xa9, 0xf7, 0x60, 0x86, 0x5f, 0x1f, + 0x5f, 0xcb, 0x2f, 0x65, 0x1e, 0x66, 0xa3, 0xe2, 0x1c, 0xf2, 0xb5, 0x1f, 0x01, 0x26, 0x57, 0x8d, + 0xb6, 0x3b, 0x1e, 0x6c, 0xe9, 0x2d, 0xbd, 0x4e, 0x6c, 0xf4, 0x39, 0x4c, 0x46, 0x9f, 0x2e, 0xd0, + 0xff, 0xa3, 0x1b, 0x23, 0xf1, 0x2d, 0x44, 0xbe, 0x70, 0x30, 0x93, 0xb8, 0xe3, 0xa6, 0x90, 0x0d, + 0x73, 0x89, 0x4f, 0x02, 0xe8, 0xad, 0xa8, 0x82, 0x83, 0x1e, 0x34, 0xe4, 0xb7, 0x0f, 0xc5, 0xeb, + 0xdb, 0xbc, 0x0b, 0x63, 0xde, 0xc8, 0x8f, 0x96, 0x62, 0x38, 0xc3, 0x73, 0xbb, 0x9c, 0x1f, 0x44, + 0xf6, 0x95, 0x7d, 0x01, 0x53, 0x7d, 0x93, 0x37, 0xba, 0x30, 0x08, 0x4e, 0x44, 0xf5, 0xc5, 0x57, + 0x70, 0xf9, 0x16, 0x3e, 0x85, 0x5c, 0x78, 0x04, 0x46, 0xff, 0x8b, 0x61, 0xea, 0x1f, 0x10, 0x64, + 0xe5, 0x20, 0x16, 0x5f, 0xb1, 0x01, 0x33, 0x09, 0xe3, 0x20, 0xba, 0x34, 0x08, 0x58, 0xcc, 0xcc, + 0x9b, 0x87, 0xe0, 0xf4, 0xad, 0xb5, 0x61, 0x3e, 0x79, 0xe2, 0x41, 0x7d, 0xe9, 0x3b, 0x70, 0x04, + 0x95, 0x2f, 0x1f, 0x8e, 0x39, 0x92, 0x9f, 0xe8, 0x25, 0x3e, 0x96, 0x9f, 0xc4, 0x21, 0x2b, 0x96, + 0x9f, 0xe4, 0x49, 0x40, 0x49, 0xa1, 0xdb, 0x90, 0x76, 0x2f, 0x9a, 0xe8, 0x6c, 0x54, 0x20, 0x74, + 0xb3, 0x97, 0xe5, 0x24, 0x52, 0x44, 0x41, 0xa7, 0x55, 0x8b, 0x29, 0x08, 0x6e, 0x9b, 0x31, 0x05, + 0xa1, 0x0b, 0x8a, 0x92, 0x42, 0x77, 0x60, 0x94, 0x1f, 0xc7, 0xe8, 0x5c, 0x7f, 0x46, 0x42, 0xd7, + 0x10, 0xf9, 0x7c, 0x32, 0xd1, 0x57, 0xb3, 0x09, 0xb9, 0xf0, 0x39, 0x86, 0xe6, 0x55, 0xfe, 0x6c, + 0xaa, 0x7a, 0xcf, 0xa6, 0xea, 0x9d, 0xa6, 0xe5, 0x74, 0xfa, 0x77, 0x57, 0xd2, 0xd9, 0xa7, 0xa4, + 0xd0, 0x3d, 0x18, 0xf7, 0x7b, 0x22, 0x8a, 0xd7, 0x51, 0xa4, 0x71, 0xc9, 0x85, 0x81, 0xf4, 0x70, + 0x19, 0x84, 0x7b, 0x56, 0x7f, 0x19, 0x24, 0xb4, 0xc3, 0x7e, 0xa0, 0x49, 0x2d, 0x4f, 0x49, 0x21, + 0x0d, 0xa6, 0xfa, 0x0e, 0xb7, 0x81, 0x9e, 0x5f, 0x3c, 0xd4, 0x99, 0xa8, 0xa4, 0x4a, 0x1f, 0x3e, + 0xeb, 0xe5, 0xa5, 0xe7, 0xbd, 0xbc, 0xf4, 0xcd, 0x7e, 0x3e, 0xf5, 0xfd, 0x7e, 0x5e, 0x7a, 0xbe, + 0x9f, 0x4f, 0xfd, 0xb2, 0x9f, 0x4f, 0x3d, 0x54, 0x0f, 0xd3, 0xf6, 0x83, 0x17, 0xf0, 0xea, 0x28, + 0x5b, 0x5c, 0xff, 0x3b, 0x00, 0x00, 0xff, 0xff, 0xa5, 0x49, 0xbf, 0xe5, 0x17, 0x17, 0x00, 0x00, } // Reference imports to suppress errors if they are not otherwise used. @@ -1340,6 +1504,8 @@ const _ = grpc.SupportPackageIsVersion4 type ClusterManagerClient interface { AddStorageNode(ctx context.Context, in *AddStorageNodeRequest, opts ...grpc.CallOption) (*AddStorageNodeResponse, error) UnregisterStorageNode(ctx context.Context, in *UnregisterStorageNodeRequest, opts ...grpc.CallOption) (*UnregisterStorageNodeResponse, error) + AddTopic(ctx context.Context, in *AddTopicRequest, opts ...grpc.CallOption) (*AddTopicResponse, error) + UnregisterTopic(ctx context.Context, in *UnregisterTopicRequest, opts ...grpc.CallOption) (*UnregisterTopicResponse, error) AddLogStream(ctx context.Context, in *AddLogStreamRequest, opts ...grpc.CallOption) (*AddLogStreamResponse, error) UnregisterLogStream(ctx context.Context, in *UnregisterLogStreamRequest, opts ...grpc.CallOption) (*UnregisterLogStreamResponse, error) RemoveLogStreamReplica(ctx context.Context, in *RemoveLogStreamReplicaRequest, opts ...grpc.CallOption) (*RemoveLogStreamReplicaResponse, error) @@ -1379,6 +1545,24 @@ func (c *clusterManagerClient) UnregisterStorageNode(ctx context.Context, in *Un return out, nil } +func (c *clusterManagerClient) AddTopic(ctx context.Context, in *AddTopicRequest, opts ...grpc.CallOption) (*AddTopicResponse, error) { + out := new(AddTopicResponse) + err := c.cc.Invoke(ctx, "/varlog.vmspb.ClusterManager/AddTopic", in, out, opts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *clusterManagerClient) UnregisterTopic(ctx context.Context, in *UnregisterTopicRequest, opts ...grpc.CallOption) (*UnregisterTopicResponse, error) { + out := new(UnregisterTopicResponse) + err := c.cc.Invoke(ctx, "/varlog.vmspb.ClusterManager/UnregisterTopic", in, out, opts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *clusterManagerClient) AddLogStream(ctx context.Context, in *AddLogStreamRequest, opts ...grpc.CallOption) (*AddLogStreamResponse, error) { out := new(AddLogStreamResponse) err := c.cc.Invoke(ctx, "/varlog.vmspb.ClusterManager/AddLogStream", in, out, opts...) @@ -1482,6 +1666,8 @@ func (c *clusterManagerClient) GetStorageNodes(ctx context.Context, in *types.Em type ClusterManagerServer interface { AddStorageNode(context.Context, *AddStorageNodeRequest) (*AddStorageNodeResponse, error) UnregisterStorageNode(context.Context, *UnregisterStorageNodeRequest) (*UnregisterStorageNodeResponse, error) + AddTopic(context.Context, *AddTopicRequest) (*AddTopicResponse, error) + UnregisterTopic(context.Context, *UnregisterTopicRequest) (*UnregisterTopicResponse, error) AddLogStream(context.Context, *AddLogStreamRequest) (*AddLogStreamResponse, error) UnregisterLogStream(context.Context, *UnregisterLogStreamRequest) (*UnregisterLogStreamResponse, error) RemoveLogStreamReplica(context.Context, *RemoveLogStreamReplicaRequest) (*RemoveLogStreamReplicaResponse, error) @@ -1505,6 +1691,12 @@ func (*UnimplementedClusterManagerServer) AddStorageNode(ctx context.Context, re func (*UnimplementedClusterManagerServer) UnregisterStorageNode(ctx context.Context, req *UnregisterStorageNodeRequest) (*UnregisterStorageNodeResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method UnregisterStorageNode not implemented") } +func (*UnimplementedClusterManagerServer) AddTopic(ctx context.Context, req *AddTopicRequest) (*AddTopicResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method AddTopic not implemented") +} +func (*UnimplementedClusterManagerServer) UnregisterTopic(ctx context.Context, req *UnregisterTopicRequest) (*UnregisterTopicResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UnregisterTopic not implemented") +} func (*UnimplementedClusterManagerServer) AddLogStream(ctx context.Context, req *AddLogStreamRequest) (*AddLogStreamResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method AddLogStream not implemented") } @@ -1579,6 +1771,42 @@ func _ClusterManager_UnregisterStorageNode_Handler(srv interface{}, ctx context. return interceptor(ctx, in, info, handler) } +func _ClusterManager_AddTopic_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(AddTopicRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ClusterManagerServer).AddTopic(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/varlog.vmspb.ClusterManager/AddTopic", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ClusterManagerServer).AddTopic(ctx, req.(*AddTopicRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ClusterManager_UnregisterTopic_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UnregisterTopicRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ClusterManagerServer).UnregisterTopic(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/varlog.vmspb.ClusterManager/UnregisterTopic", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ClusterManagerServer).UnregisterTopic(ctx, req.(*UnregisterTopicRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _ClusterManager_AddLogStream_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(AddLogStreamRequest) if err := dec(in); err != nil { @@ -1789,6 +2017,14 @@ var _ClusterManager_serviceDesc = grpc.ServiceDesc{ MethodName: "UnregisterStorageNode", Handler: _ClusterManager_UnregisterStorageNode_Handler, }, + { + MethodName: "AddTopic", + Handler: _ClusterManager_AddTopic_Handler, + }, + { + MethodName: "UnregisterTopic", + Handler: _ClusterManager_UnregisterTopic_Handler, + }, { MethodName: "AddLogStream", Handler: _ClusterManager_AddLogStream_Handler, @@ -1858,10 +2094,6 @@ func (m *AddStorageNodeRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Address) > 0 { i -= len(m.Address) copy(dAtA[i:], m.Address) @@ -1892,10 +2124,6 @@ func (m *AddStorageNodeResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.StorageNode != nil { { size, err := m.StorageNode.MarshalToSizedBuffer(dAtA[:i]) @@ -1931,10 +2159,6 @@ func (m *UnregisterStorageNodeRequest) MarshalToSizedBuffer(dAtA []byte) (int, e _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.StorageNodeID != 0 { i = encodeVarintVms(dAtA, i, uint64(m.StorageNodeID)) i-- @@ -1963,14 +2187,10 @@ func (m *UnregisterStorageNodeResponse) MarshalToSizedBuffer(dAtA []byte) (int, _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } return len(dAtA) - i, nil } -func (m *AddLogStreamRequest) Marshal() (dAtA []byte, err error) { +func (m *AddTopicRequest) Marshal() (dAtA []byte, err error) { size := m.ProtoSize() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -1980,38 +2200,25 @@ func (m *AddLogStreamRequest) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AddLogStreamRequest) MarshalTo(dAtA []byte) (int, error) { +func (m *AddTopicRequest) MarshalTo(dAtA []byte) (int, error) { size := m.ProtoSize() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AddLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AddTopicRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.Replicas) > 0 { - for iNdEx := len(m.Replicas) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.Replicas[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintVms(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0xa - } + if m.TopicID != 0 { + i = encodeVarintVms(dAtA, i, uint64(m.TopicID)) + i-- + dAtA[i] = 0x8 } return len(dAtA) - i, nil } -func (m *AddLogStreamResponse) Marshal() (dAtA []byte, err error) { +func (m *AddTopicResponse) Marshal() (dAtA []byte, err error) { size := m.ProtoSize() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -2021,23 +2228,19 @@ func (m *AddLogStreamResponse) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AddLogStreamResponse) MarshalTo(dAtA []byte) (int, error) { +func (m *AddTopicResponse) MarshalTo(dAtA []byte) (int, error) { size := m.ProtoSize() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AddLogStreamResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AddTopicResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if m.LogStream != nil { + if m.Topic != nil { { - size, err := m.LogStream.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Topic.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -2050,7 +2253,7 @@ func (m *AddLogStreamResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *UnregisterLogStreamRequest) Marshal() (dAtA []byte, err error) { +func (m *UnregisterTopicRequest) Marshal() (dAtA []byte, err error) { size := m.ProtoSize() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -2060,29 +2263,25 @@ func (m *UnregisterLogStreamRequest) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *UnregisterLogStreamRequest) MarshalTo(dAtA []byte) (int, error) { +func (m *UnregisterTopicRequest) MarshalTo(dAtA []byte) (int, error) { size := m.ProtoSize() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *UnregisterLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *UnregisterTopicRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if m.LogStreamID != 0 { - i = encodeVarintVms(dAtA, i, uint64(m.LogStreamID)) + if m.TopicID != 0 { + i = encodeVarintVms(dAtA, i, uint64(m.TopicID)) i-- dAtA[i] = 0x8 } return len(dAtA) - i, nil } -func (m *UnregisterLogStreamResponse) Marshal() (dAtA []byte, err error) { +func (m *UnregisterTopicResponse) Marshal() (dAtA []byte, err error) { size := m.ProtoSize() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -2092,23 +2291,152 @@ func (m *UnregisterLogStreamResponse) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *UnregisterLogStreamResponse) MarshalTo(dAtA []byte) (int, error) { +func (m *UnregisterTopicResponse) MarshalTo(dAtA []byte) (int, error) { size := m.ProtoSize() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *UnregisterLogStreamResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *UnregisterTopicResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) + return len(dAtA) - i, nil +} + +func (m *AddLogStreamRequest) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AddLogStreamRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.ProtoSize() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AddLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Replicas) > 0 { + for iNdEx := len(m.Replicas) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Replicas[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintVms(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + } + if m.TopicID != 0 { + i = encodeVarintVms(dAtA, i, uint64(m.TopicID)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *AddLogStreamResponse) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AddLogStreamResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.ProtoSize() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AddLogStreamResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.LogStream != nil { + { + size, err := m.LogStream.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintVms(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa } return len(dAtA) - i, nil } +func (m *UnregisterLogStreamRequest) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *UnregisterLogStreamRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.ProtoSize() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UnregisterLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.LogStreamID != 0 { + i = encodeVarintVms(dAtA, i, uint64(m.LogStreamID)) + i-- + dAtA[i] = 0x10 + } + if m.TopicID != 0 { + i = encodeVarintVms(dAtA, i, uint64(m.TopicID)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *UnregisterLogStreamResponse) Marshal() (dAtA []byte, err error) { + size := m.ProtoSize() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *UnregisterLogStreamResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.ProtoSize() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UnregisterLogStreamResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + func (m *RemoveLogStreamReplicaRequest) Marshal() (dAtA []byte, err error) { size := m.ProtoSize() dAtA = make([]byte, size) @@ -2129,13 +2457,14 @@ func (m *RemoveLogStreamReplicaRequest) MarshalToSizedBuffer(dAtA []byte) (int, _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintVms(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x18 + } + if m.TopicID != 0 { + i = encodeVarintVms(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x10 } if m.StorageNodeID != 0 { @@ -2166,10 +2495,6 @@ func (m *RemoveLogStreamReplicaResponse) MarshalToSizedBuffer(dAtA []byte) (int, _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } return len(dAtA) - i, nil } @@ -2193,10 +2518,6 @@ func (m *UpdateLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.PushedReplica != nil { { size, err := m.PushedReplica.MarshalToSizedBuffer(dAtA[:i]) @@ -2207,7 +2528,7 @@ func (m *UpdateLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) i = encodeVarintVms(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1a + dAtA[i] = 0x22 } if m.PoppedReplica != nil { { @@ -2219,11 +2540,16 @@ func (m *UpdateLogStreamRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) i = encodeVarintVms(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x12 + dAtA[i] = 0x1a } if m.LogStreamID != 0 { i = encodeVarintVms(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x10 + } + if m.TopicID != 0 { + i = encodeVarintVms(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x8 } return len(dAtA) - i, nil @@ -2249,10 +2575,6 @@ func (m *UpdateLogStreamResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStream != nil { { size, err := m.LogStream.MarshalToSizedBuffer(dAtA[:i]) @@ -2288,13 +2610,14 @@ func (m *SealRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintVms(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x10 + } + if m.TopicID != 0 { + i = encodeVarintVms(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x8 } return len(dAtA) - i, nil @@ -2320,10 +2643,6 @@ func (m *SealResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.SealedGLSN != 0 { i = encodeVarintVms(dAtA, i, uint64(m.SealedGLSN)) i-- @@ -2366,23 +2685,24 @@ func (m *SyncRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.DstStorageNodeID != 0 { i = encodeVarintVms(dAtA, i, uint64(m.DstStorageNodeID)) i-- - dAtA[i] = 0x18 + dAtA[i] = 0x20 } if m.SrcStorageNodeID != 0 { i = encodeVarintVms(dAtA, i, uint64(m.SrcStorageNodeID)) i-- - dAtA[i] = 0x10 + dAtA[i] = 0x18 } if m.LogStreamID != 0 { i = encodeVarintVms(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x10 + } + if m.TopicID != 0 { + i = encodeVarintVms(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x8 } return len(dAtA) - i, nil @@ -2408,10 +2728,6 @@ func (m *SyncResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.Status != nil { { size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) @@ -2447,13 +2763,14 @@ func (m *UnsealRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStreamID != 0 { i = encodeVarintVms(dAtA, i, uint64(m.LogStreamID)) i-- + dAtA[i] = 0x10 + } + if m.TopicID != 0 { + i = encodeVarintVms(dAtA, i, uint64(m.TopicID)) + i-- dAtA[i] = 0x8 } return len(dAtA) - i, nil @@ -2479,10 +2796,6 @@ func (m *UnsealResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.LogStream != nil { { size, err := m.LogStream.MarshalToSizedBuffer(dAtA[:i]) @@ -2518,10 +2831,6 @@ func (m *GetMRMembersResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Members) > 0 { for k := range m.Members { v := m.Members[k] @@ -2572,10 +2881,6 @@ func (m *GetStorageNodesResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.Storagenodes) > 0 { for k := range m.Storagenodes { v := m.Storagenodes[k] @@ -2616,10 +2921,6 @@ func (m *AddMRPeerRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.RPCAddr) > 0 { i -= len(m.RPCAddr) copy(dAtA[i:], m.RPCAddr) @@ -2657,10 +2958,6 @@ func (m *AddMRPeerResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if m.NodeID != 0 { i = encodeVarintVms(dAtA, i, uint64(m.NodeID)) i-- @@ -2689,10 +2986,6 @@ func (m *RemoveMRPeerRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } if len(m.RaftURL) > 0 { i -= len(m.RaftURL) copy(dAtA[i:], m.RaftURL) @@ -2723,10 +3016,6 @@ func (m *RemoveMRPeerResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } return len(dAtA) - i, nil } @@ -2751,9 +3040,6 @@ func (m *AddStorageNodeRequest) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovVms(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2767,9 +3053,6 @@ func (m *AddStorageNodeResponse) ProtoSize() (n int) { l = m.StorageNode.ProtoSize() n += 1 + l + sovVms(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2782,9 +3065,6 @@ func (m *UnregisterStorageNodeRequest) ProtoSize() (n int) { if m.StorageNodeID != 0 { n += 1 + sovVms(uint64(m.StorageNodeID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2794,27 +3074,70 @@ func (m *UnregisterStorageNodeResponse) ProtoSize() (n int) { } var l int _ = l - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + return n +} + +func (m *AddTopicRequest) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.TopicID != 0 { + n += 1 + sovVms(uint64(m.TopicID)) + } + return n +} + +func (m *AddTopicResponse) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Topic != nil { + l = m.Topic.ProtoSize() + n += 1 + l + sovVms(uint64(l)) + } + return n +} + +func (m *UnregisterTopicRequest) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.TopicID != 0 { + n += 1 + sovVms(uint64(m.TopicID)) } return n } +func (m *UnregisterTopicResponse) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + return n +} + func (m *AddLogStreamRequest) ProtoSize() (n int) { if m == nil { return 0 } var l int _ = l + if m.TopicID != 0 { + n += 1 + sovVms(uint64(m.TopicID)) + } if len(m.Replicas) > 0 { for _, e := range m.Replicas { l = e.ProtoSize() n += 1 + l + sovVms(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2828,9 +3151,6 @@ func (m *AddLogStreamResponse) ProtoSize() (n int) { l = m.LogStream.ProtoSize() n += 1 + l + sovVms(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2840,12 +3160,12 @@ func (m *UnregisterLogStreamRequest) ProtoSize() (n int) { } var l int _ = l + if m.TopicID != 0 { + n += 1 + sovVms(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovVms(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2855,9 +3175,6 @@ func (m *UnregisterLogStreamResponse) ProtoSize() (n int) { } var l int _ = l - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2870,12 +3187,12 @@ func (m *RemoveLogStreamReplicaRequest) ProtoSize() (n int) { if m.StorageNodeID != 0 { n += 1 + sovVms(uint64(m.StorageNodeID)) } + if m.TopicID != 0 { + n += 1 + sovVms(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovVms(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2885,9 +3202,6 @@ func (m *RemoveLogStreamReplicaResponse) ProtoSize() (n int) { } var l int _ = l - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2897,6 +3211,9 @@ func (m *UpdateLogStreamRequest) ProtoSize() (n int) { } var l int _ = l + if m.TopicID != 0 { + n += 1 + sovVms(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovVms(uint64(m.LogStreamID)) } @@ -2908,9 +3225,6 @@ func (m *UpdateLogStreamRequest) ProtoSize() (n int) { l = m.PushedReplica.ProtoSize() n += 1 + l + sovVms(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2924,9 +3238,6 @@ func (m *UpdateLogStreamResponse) ProtoSize() (n int) { l = m.LogStream.ProtoSize() n += 1 + l + sovVms(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2936,12 +3247,12 @@ func (m *SealRequest) ProtoSize() (n int) { } var l int _ = l + if m.TopicID != 0 { + n += 1 + sovVms(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovVms(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2960,9 +3271,6 @@ func (m *SealResponse) ProtoSize() (n int) { if m.SealedGLSN != 0 { n += 1 + sovVms(uint64(m.SealedGLSN)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2972,6 +3280,9 @@ func (m *SyncRequest) ProtoSize() (n int) { } var l int _ = l + if m.TopicID != 0 { + n += 1 + sovVms(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovVms(uint64(m.LogStreamID)) } @@ -2981,9 +3292,6 @@ func (m *SyncRequest) ProtoSize() (n int) { if m.DstStorageNodeID != 0 { n += 1 + sovVms(uint64(m.DstStorageNodeID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -2997,9 +3305,6 @@ func (m *SyncResponse) ProtoSize() (n int) { l = m.Status.ProtoSize() n += 1 + l + sovVms(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -3009,12 +3314,12 @@ func (m *UnsealRequest) ProtoSize() (n int) { } var l int _ = l + if m.TopicID != 0 { + n += 1 + sovVms(uint64(m.TopicID)) + } if m.LogStreamID != 0 { n += 1 + sovVms(uint64(m.LogStreamID)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -3028,9 +3333,6 @@ func (m *UnsealResponse) ProtoSize() (n int) { l = m.LogStream.ProtoSize() n += 1 + l + sovVms(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -3054,9 +3356,6 @@ func (m *GetMRMembersResponse) ProtoSize() (n int) { n += mapEntrySize + 1 + sovVms(uint64(mapEntrySize)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -3074,9 +3373,6 @@ func (m *GetStorageNodesResponse) ProtoSize() (n int) { n += mapEntrySize + 1 + sovVms(uint64(mapEntrySize)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } @@ -3094,62 +3390,337 @@ func (m *AddMRPeerRequest) ProtoSize() (n int) { if l > 0 { n += 1 + l + sovVms(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + return n +} + +func (m *AddMRPeerResponse) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.NodeID != 0 { + n += 1 + sovVms(uint64(m.NodeID)) + } + return n +} + +func (m *RemoveMRPeerRequest) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.RaftURL) + if l > 0 { + n += 1 + l + sovVms(uint64(l)) + } + return n +} + +func (m *RemoveMRPeerResponse) ProtoSize() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + return n +} + +func sovVms(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} +func sozVms(x uint64) (n int) { + return sovVms(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} +func (m *AddStorageNodeRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AddStorageNodeRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AddStorageNodeRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthVms + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthVms + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Address = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipVms(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthVms + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AddStorageNodeResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AddStorageNodeResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AddStorageNodeResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field StorageNode", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthVms + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthVms + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.StorageNode == nil { + m.StorageNode = &varlogpb.StorageNodeMetadataDescriptor{} + } + if err := m.StorageNode.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipVms(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthVms + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UnregisterStorageNodeRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UnregisterStorageNodeRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UnregisterStorageNodeRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StorageNodeID", wireType) + } + m.StorageNodeID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.StorageNodeID |= github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipVms(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthVms + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } - return n -} -func (m *AddMRPeerResponse) ProtoSize() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if m.NodeID != 0 { - n += 1 + sovVms(uint64(m.NodeID)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if iNdEx > l { + return io.ErrUnexpectedEOF } - return n + return nil } - -func (m *RemoveMRPeerRequest) ProtoSize() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = len(m.RaftURL) - if l > 0 { - n += 1 + l + sovVms(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) +func (m *UnregisterStorageNodeResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UnregisterStorageNodeResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UnregisterStorageNodeResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipVms(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthVms + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } - return n -} -func (m *RemoveMRPeerResponse) ProtoSize() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if iNdEx > l { + return io.ErrUnexpectedEOF } - return n -} - -func sovVms(x uint64) (n int) { - return (math_bits.Len64(x|1) + 6) / 7 -} -func sozVms(x uint64) (n int) { - return sovVms(uint64((x << 1) ^ uint64((int64(x) >> 63)))) + return nil } -func (m *AddStorageNodeRequest) Unmarshal(dAtA []byte) error { +func (m *AddTopicRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -3172,17 +3743,17 @@ func (m *AddStorageNodeRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AddStorageNodeRequest: wiretype end group for non-group") + return fmt.Errorf("proto: AddTopicRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AddStorageNodeRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AddTopicRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) } - var stringLen uint64 + m.TopicID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowVms @@ -3192,24 +3763,11 @@ func (m *AddStorageNodeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthVms - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthVms - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Address = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipVms(dAtA[iNdEx:]) @@ -3222,7 +3780,6 @@ func (m *AddStorageNodeRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3232,7 +3789,7 @@ func (m *AddStorageNodeRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *AddStorageNodeResponse) Unmarshal(dAtA []byte) error { +func (m *AddTopicResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -3255,15 +3812,15 @@ func (m *AddStorageNodeResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AddStorageNodeResponse: wiretype end group for non-group") + return fmt.Errorf("proto: AddTopicResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AddStorageNodeResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AddTopicResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StorageNode", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Topic", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -3290,10 +3847,10 @@ func (m *AddStorageNodeResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.StorageNode == nil { - m.StorageNode = &varlogpb.StorageNodeMetadataDescriptor{} + if m.Topic == nil { + m.Topic = &varlogpb.TopicDescriptor{} } - if err := m.StorageNode.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Topic.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -3309,7 +3866,6 @@ func (m *AddStorageNodeResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3319,7 +3875,7 @@ func (m *AddStorageNodeResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *UnregisterStorageNodeRequest) Unmarshal(dAtA []byte) error { +func (m *UnregisterTopicRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -3342,17 +3898,17 @@ func (m *UnregisterStorageNodeRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UnregisterStorageNodeRequest: wiretype end group for non-group") + return fmt.Errorf("proto: UnregisterTopicRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UnregisterStorageNodeRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UnregisterTopicRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StorageNodeID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) } - m.StorageNodeID = 0 + m.TopicID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowVms @@ -3362,7 +3918,7 @@ func (m *UnregisterStorageNodeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.StorageNodeID |= github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID(b&0x7F) << shift + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift if b < 0x80 { break } @@ -3379,7 +3935,6 @@ func (m *UnregisterStorageNodeRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3389,7 +3944,7 @@ func (m *UnregisterStorageNodeRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *UnregisterStorageNodeResponse) Unmarshal(dAtA []byte) error { +func (m *UnregisterTopicResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -3412,10 +3967,10 @@ func (m *UnregisterStorageNodeResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UnregisterStorageNodeResponse: wiretype end group for non-group") + return fmt.Errorf("proto: UnregisterTopicResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UnregisterStorageNodeResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UnregisterTopicResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { default: @@ -3430,7 +3985,6 @@ func (m *UnregisterStorageNodeResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3470,6 +4024,25 @@ func (m *AddLogStreamRequest) Unmarshal(dAtA []byte) error { } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Replicas", wireType) } @@ -3515,7 +4088,6 @@ func (m *AddLogStreamRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3602,7 +4174,6 @@ func (m *AddLogStreamResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3642,6 +4213,25 @@ func (m *UnregisterLogStreamRequest) Unmarshal(dAtA []byte) error { } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -3672,7 +4262,6 @@ func (m *UnregisterLogStreamRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3723,7 +4312,6 @@ func (m *UnregisterLogStreamResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3782,6 +4370,25 @@ func (m *RemoveLogStreamReplicaRequest) Unmarshal(dAtA []byte) error { } } case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -3812,7 +4419,6 @@ func (m *RemoveLogStreamReplicaRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3863,7 +4469,6 @@ func (m *RemoveLogStreamReplicaResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -3903,6 +4508,25 @@ func (m *UpdateLogStreamRequest) Unmarshal(dAtA []byte) error { } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -3921,7 +4545,7 @@ func (m *UpdateLogStreamRequest) Unmarshal(dAtA []byte) error { break } } - case 2: + case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field PoppedReplica", wireType) } @@ -3957,7 +4581,7 @@ func (m *UpdateLogStreamRequest) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field PushedReplica", wireType) } @@ -4005,7 +4629,6 @@ func (m *UpdateLogStreamRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -4092,7 +4715,6 @@ func (m *UpdateLogStreamResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -4132,6 +4754,25 @@ func (m *SealRequest) Unmarshal(dAtA []byte) error { } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -4162,7 +4803,6 @@ func (m *SealRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -4266,7 +4906,6 @@ func (m *SealResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -4306,6 +4945,25 @@ func (m *SyncRequest) Unmarshal(dAtA []byte) error { } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -4324,7 +4982,7 @@ func (m *SyncRequest) Unmarshal(dAtA []byte) error { break } } - case 2: + case 3: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field SrcStorageNodeID", wireType) } @@ -4343,7 +5001,7 @@ func (m *SyncRequest) Unmarshal(dAtA []byte) error { break } } - case 3: + case 4: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field DstStorageNodeID", wireType) } @@ -4374,7 +5032,6 @@ func (m *SyncRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -4461,7 +5118,6 @@ func (m *SyncResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -4501,6 +5157,25 @@ func (m *UnsealRequest) Unmarshal(dAtA []byte) error { } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TopicID", wireType) + } + m.TopicID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowVms + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TopicID |= github_daumkakao_com_varlog_varlog_pkg_types.TopicID(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field LogStreamID", wireType) } @@ -4531,7 +5206,6 @@ func (m *UnsealRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -4618,7 +5292,6 @@ func (m *UnsealResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -4820,7 +5493,6 @@ func (m *GetMRMembersResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -4891,7 +5563,7 @@ func (m *GetStorageNodesResponse) Unmarshal(dAtA []byte) error { if m.Storagenodes == nil { m.Storagenodes = make(map[github_daumkakao_com_varlog_varlog_pkg_types.StorageNodeID]string) } - var mapkey uint32 + var mapkey int32 var mapvalue string for iNdEx < postIndex { entryPreIndex := iNdEx @@ -4921,7 +5593,7 @@ func (m *GetStorageNodesResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - mapkey |= uint32(b&0x7F) << shift + mapkey |= int32(b&0x7F) << shift if b < 0x80 { break } @@ -4984,7 +5656,6 @@ func (m *GetStorageNodesResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -5099,7 +5770,6 @@ func (m *AddMRPeerRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -5169,7 +5839,6 @@ func (m *AddMRPeerResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -5252,7 +5921,6 @@ func (m *RemoveMRPeerRequest) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } @@ -5303,7 +5971,6 @@ func (m *RemoveMRPeerResponse) Unmarshal(dAtA []byte) error { if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) iNdEx += skippy } } diff --git a/proto/vmspb/vms.proto b/proto/vmspb/vms.proto index 93c65adc0..b7a73a280 100644 --- a/proto/vmspb/vms.proto +++ b/proto/vmspb/vms.proto @@ -12,6 +12,9 @@ option go_package = "github.com/kakao/varlog/proto/vmspb"; option (gogoproto.protosizer_all) = true; option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unkeyed_all) = false; +option (gogoproto.goproto_unrecognized_all) = false; +option (gogoproto.goproto_sizecache_all) = false; message AddStorageNodeRequest { // address is IP of a node to be added to the cluster. @@ -23,7 +26,7 @@ message AddStorageNodeResponse { } message UnregisterStorageNodeRequest { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" @@ -32,9 +35,36 @@ message UnregisterStorageNodeRequest { message UnregisterStorageNodeResponse {} +message AddTopicRequest { + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; +} + +message AddTopicResponse { + varlogpb.TopicDescriptor topic = 1; +} + +message UnregisterTopicRequest { + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; +} + +message UnregisterTopicResponse {} + message AddLogStreamRequest { + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; // TODO: nullable = false - repeated varlogpb.ReplicaDescriptor replicas = 1 + repeated varlogpb.ReplicaDescriptor replicas = 2 [(gogoproto.nullable) = true]; } @@ -43,7 +73,12 @@ message AddLogStreamResponse { } message UnregisterLogStreamRequest { - uint32 log_stream_id = 1 [ + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -53,12 +88,17 @@ message UnregisterLogStreamRequest { message UnregisterLogStreamResponse {} message RemoveLogStreamReplicaRequest { - uint32 storage_node_id = 1 [ + int32 storage_node_id = 1 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "StorageNodeID" ]; - uint32 log_stream_id = 2 [ + int32 topic_id = 2 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 3 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -68,7 +108,12 @@ message RemoveLogStreamReplicaRequest { message RemoveLogStreamReplicaResponse {} message UpdateLogStreamRequest { - uint32 log_stream_id = 1 [ + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -82,8 +127,8 @@ message UpdateLogStreamRequest { (gogoproto.customname) = "PoppedStorageNodeID" ]; */ - varlogpb.ReplicaDescriptor popped_replica = 2; - varlogpb.ReplicaDescriptor pushed_replica = 3; + varlogpb.ReplicaDescriptor popped_replica = 3; + varlogpb.ReplicaDescriptor pushed_replica = 4; } message UpdateLogStreamResponse { @@ -91,7 +136,12 @@ message UpdateLogStreamResponse { } message SealRequest { - uint32 log_stream_id = 1 [ + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -109,17 +159,22 @@ message SealResponse { } message SyncRequest { - uint32 log_stream_id = 1 [ + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" ]; - uint32 src_storage_node_id = 2 [ + int32 src_storage_node_id = 3 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "SrcStorageNodeID" ]; - uint32 dst_storage_node_id = 3 [ + int32 dst_storage_node_id = 4 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.StorageNodeID", (gogoproto.customname) = "DstStorageNodeID" @@ -131,7 +186,12 @@ message SyncResponse { } message UnsealRequest { - uint32 log_stream_id = 1 [ + int32 topic_id = 1 [ + (gogoproto.casttype) = + "github.com/kakao/varlog/pkg/types.TopicID", + (gogoproto.customname) = "TopicID" + ]; + int32 log_stream_id = 2 [ (gogoproto.casttype) = "github.com/kakao/varlog/pkg/types.LogStreamID", (gogoproto.customname) = "LogStreamID" @@ -155,7 +215,7 @@ message GetMRMembersResponse { } message GetStorageNodesResponse { - map storagenodes = 1 + map storagenodes = 1 [(gogoproto.castkey) = "github.com/kakao/varlog/pkg/types.StorageNodeID"]; } @@ -186,6 +246,11 @@ service ClusterManager { rpc UnregisterStorageNode(UnregisterStorageNodeRequest) returns (UnregisterStorageNodeResponse) {} + rpc AddTopic(AddTopicRequest) returns (AddTopicResponse) {} + + rpc UnregisterTopic(UnregisterTopicRequest) + returns (UnregisterTopicResponse) {} + rpc AddLogStream(AddLogStreamRequest) returns (AddLogStreamResponse) {} rpc UnregisterLogStream(UnregisterLogStreamRequest) diff --git a/reports/.gitignore b/reports/.gitignore new file mode 100644 index 000000000..42bcc6229 --- /dev/null +++ b/reports/.gitignore @@ -0,0 +1,7 @@ +test.out +test.xml +coverage.out.tmp +coverage.out +coverage.xml +bench.out +bench.xml diff --git a/test/e2e/action.go b/test/e2e/action.go index 70f6b4b41..9943e7331 100644 --- a/test/e2e/action.go +++ b/test/e2e/action.go @@ -14,6 +14,7 @@ import ( "github.com/kakao/varlog/pkg/util/runner" "github.com/kakao/varlog/pkg/util/testutil" "github.com/kakao/varlog/pkg/varlog" + "github.com/kakao/varlog/proto/varlogpb" ) type Action interface { @@ -155,7 +156,7 @@ func (act *action) subscribe(ctx context.Context) error { } defer vcli.Close() - limit, err := vcli.Append(ctx, []byte("foo"), varlog.WithRetryCount(5)) + limit, err := vcli.Append(ctx, act.topicID, []byte("foo"), varlog.WithRetryCount(5)) if err != nil { return errors.Wrap(err, "append") } @@ -163,7 +164,7 @@ func (act *action) subscribe(ctx context.Context) error { var received atomic.Value received.Store(types.InvalidGLSN) serrC := make(chan error) - nopOnNext := func(le types.LogEntry, err error) { + nopOnNext := func(le varlogpb.LogEntry, err error) { if err != nil { serrC <- err close(serrC) @@ -175,7 +176,7 @@ func (act *action) subscribe(ctx context.Context) error { fmt.Printf("[%v] Sub ~%v\n", time.Now(), limit) defer fmt.Printf("[%v] Sub ~%v Close\n", time.Now(), limit) - closer, err := vcli.Subscribe(ctx, types.MinGLSN, limit+types.GLSN(1), nopOnNext) + closer, err := vcli.Subscribe(ctx, act.topicID, types.MinGLSN, limit+types.GLSN(1), nopOnNext) if err != nil { return err } @@ -228,7 +229,7 @@ func (act *action) append(ctx context.Context) error { case <-ctx.Done(): return nil default: - glsn, err := vcli.Append(ctx, []byte("foo"), varlog.WithRetryCount(5)) + glsn, err := vcli.Append(ctx, act.topicID, []byte("foo"), varlog.WithRetryCount(5)) if err != nil { return err } diff --git a/test/e2e/action_helper.go b/test/e2e/action_helper.go index 65ae76c99..fe3c1c971 100644 --- a/test/e2e/action_helper.go +++ b/test/e2e/action_helper.go @@ -226,7 +226,7 @@ func RecoverMRCheck(k8s *K8sVarlogCluster) func() error { } } -func InitLogStream(k8s *K8sVarlogCluster) func() error { +func InitLogStream(k8s *K8sVarlogCluster, topicID types.TopicID) func() error { return func() error { vmsaddr, err := k8s.VMSAddress() if err != nil { @@ -243,7 +243,7 @@ func InitLogStream(k8s *K8sVarlogCluster) func() error { for i := 0; i < k8s.NrLS; i++ { ctx, cancel := context.WithTimeout(context.Background(), k8s.timeout) - _, err = mcli.AddLogStream(ctx, nil) + _, err = mcli.AddLogStream(ctx, topicID, nil) cancel() if err != nil { return err @@ -254,7 +254,7 @@ func InitLogStream(k8s *K8sVarlogCluster) func() error { } } -func AddLogStream(k8s *K8sVarlogCluster) func() error { +func AddLogStream(k8s *K8sVarlogCluster, topicID types.TopicID) func() error { return func() error { vmsaddr, err := k8s.VMSAddress() if err != nil { @@ -270,7 +270,7 @@ func AddLogStream(k8s *K8sVarlogCluster) func() error { defer mcli.Close() ctx, cancel := context.WithTimeout(context.Background(), k8s.timeout) - _, err = mcli.AddLogStream(ctx, nil) + _, err = mcli.AddLogStream(ctx, topicID, nil) defer cancel() return err @@ -331,7 +331,7 @@ func SealAnyLogStream(k8s *K8sVarlogCluster) func() error { mctx, mcancel := context.WithTimeout(context.Background(), k8s.timeout) defer mcancel() - _, err = mcli.Seal(mctx, lsdescs[idx].LogStreamID) + _, err = mcli.Seal(mctx, lsdescs[idx].TopicID, lsdescs[idx].LogStreamID) if err != nil { return err } @@ -385,6 +385,7 @@ func updateSealedLogStream(k8s *K8sVarlogCluster, meta *varlogpb.MetadataDescrip return err } + tpID := lsdesc.TopicID lsID := lsdesc.LogStreamID pushReplica, popReplica := getPushPopReplicas(k8s, meta, lsdesc.LogStreamID) @@ -402,7 +403,7 @@ func updateSealedLogStream(k8s *K8sVarlogCluster, meta *varlogpb.MetadataDescrip mctx, mcancel := context.WithTimeout(context.Background(), k8s.timeout) defer mcancel() - _, err = mcli.UpdateLogStream(mctx, lsID, popReplica, pushReplica) + _, err = mcli.UpdateLogStream(mctx, tpID, lsID, popReplica, pushReplica) if err != nil { return err } diff --git a/test/e2e/e2e_long_test.go b/test/e2e/e2e_long_test.go index ee8ce462f..a71d9227f 100644 --- a/test/e2e/e2e_long_test.go +++ b/test/e2e/e2e_long_test.go @@ -1,3 +1,4 @@ +//go:build long_e2e // +build long_e2e package e2e diff --git a/test/e2e/e2e_simple_test.go b/test/e2e/e2e_simple_test.go index c4ecc3c32..10a563a0f 100644 --- a/test/e2e/e2e_simple_test.go +++ b/test/e2e/e2e_simple_test.go @@ -1,3 +1,4 @@ +//go:build e2e // +build e2e package e2e diff --git a/test/e2e/k8s_util.go b/test/e2e/k8s_util.go index 6fdd033d9..19dbfb108 100644 --- a/test/e2e/k8s_util.go +++ b/test/e2e/k8s_util.go @@ -48,7 +48,7 @@ const ( VarlogNamespace = "default" IngressNginxNamespace = "ingress-nginx" - ENV_REP_FACTOR = "REP_FACTOR" + EnvRepFactor = "REP_FACTOR" // telemetry TelemetryLabelValue = "telemetry" @@ -175,11 +175,11 @@ func (k8s *K8sVarlogCluster) Reset() error { } rep := fmt.Sprintf("%d", k8s.RepFactor) - if err := k8s.ReplaceEnvToDeployment(VMSRSName, ENV_REP_FACTOR, rep); err != nil { + if err := k8s.ReplaceEnvToDeployment(VMSRSName, EnvRepFactor, rep); err != nil { return err } - if err := k8s.ReplaceEnvToDaemonset(MRLabel, ENV_REP_FACTOR, rep); err != nil { + if err := k8s.ReplaceEnvToDaemonset(MRLabel, EnvRepFactor, rep); err != nil { return err } @@ -804,7 +804,7 @@ func (view *k8sVarlogView) GetMRNodeName(mrID vtypes.NodeID) (string, error) { if err := view.Renew(); err != nil { return "", err } - nodeID, _ = view.mrs[mrID] + nodeID = view.mrs[mrID] } return nodeID, nil @@ -819,7 +819,7 @@ func (view *k8sVarlogView) GetSNNodeName(snID vtypes.StorageNodeID) (string, err if err := view.Renew(); err != nil { return "", err } - nodeID, _ = view.sns[snID] + nodeID = view.sns[snID] } return nodeID, nil diff --git a/test/e2e/k8s_util_test.go b/test/e2e/k8s_util_test.go index 8f95d663f..5659c963d 100644 --- a/test/e2e/k8s_util_test.go +++ b/test/e2e/k8s_util_test.go @@ -1,3 +1,4 @@ +//go:build e2e // +build e2e package e2e diff --git a/test/e2e/options.go b/test/e2e/options.go index 70e4cdd94..e372f3e20 100644 --- a/test/e2e/options.go +++ b/test/e2e/options.go @@ -10,16 +10,16 @@ import ( ) const ( - E2E_MASTERURL = "master-url" - E2E_CLUSTER = "cluster" - E2E_CONTEXT = "context" - E2E_USER = "user" - E2E_TOKEN = "token" + MasterURL = "master-url" + Cluster = "cluster" + Context = "context" + User = "user" + Token = "token" - DEFAULT_MR_CNT = 3 - DEFAULT_SN_CNT = 3 - DEFAULT_LS_CNT = 2 - DEFAULT_REP_FACTOR = 3 + DefaultMRCnt = 3 + DefaultSNCnt = 3 + DefaultLSCnt = 2 + DefaultRepFactor = 3 defaultClientCnt = 10 defaultSubscriberCnt = 10 @@ -29,7 +29,7 @@ const ( ) type K8sVarlogClusterOptions struct { - MasterUrl string + MasterURL string User string Token string Cluster string @@ -49,30 +49,30 @@ func getK8sVarlogClusterOpts() K8sVarlogClusterOptions { } opts := K8sVarlogClusterOptions{} - if f, ok := info[E2E_MASTERURL]; ok { - opts.MasterUrl = f.(string) + if f, ok := info[MasterURL]; ok { + opts.MasterURL = f.(string) } - if f, ok := info[E2E_CLUSTER]; ok { + if f, ok := info[Cluster]; ok { opts.Cluster = f.(string) } - if f, ok := info[E2E_CONTEXT]; ok { + if f, ok := info[Context]; ok { opts.Context = f.(string) } - if f, ok := info[E2E_USER]; ok { + if f, ok := info[User]; ok { opts.User = f.(string) } - if f, ok := info[E2E_TOKEN]; ok { + if f, ok := info[Token]; ok { opts.Token = f.(string) } - opts.NrMR = DEFAULT_MR_CNT - opts.NrSN = DEFAULT_SN_CNT - opts.NrLS = DEFAULT_LS_CNT - opts.RepFactor = DEFAULT_REP_FACTOR + opts.NrMR = DefaultMRCnt + opts.NrSN = DefaultSNCnt + opts.NrLS = DefaultLSCnt + opts.RepFactor = DefaultRepFactor opts.Reset = true opts.timeout = defaultTimeout @@ -98,7 +98,7 @@ func optsToConfigBytes(opts K8sVarlogClusterOptions) []byte { "- name: %s\n"+ " user:\n"+ " token: %s", - opts.MasterUrl, + opts.MasterURL, opts.Cluster, opts.Cluster, opts.User, @@ -113,6 +113,7 @@ type actionOptions struct { prevf func() error postf func() error clusterID vtypes.ClusterID + topicID vtypes.TopicID mrAddr string nrCli int nrSub int diff --git a/test/e2e/vault_util_test.go b/test/e2e/vault_util_test.go index 9c1b66c27..a5bb50745 100644 --- a/test/e2e/vault_util_test.go +++ b/test/e2e/vault_util_test.go @@ -1,3 +1,4 @@ +//go:build e2e // +build e2e package e2e @@ -11,9 +12,9 @@ import ( func TestGetVarlogK8sConnInfo(t *testing.T) { data, err := getVarlogK8sConnInfo() require.NoError(t, err) - require.Contains(t, data, E2E_MASTERURL) - require.Contains(t, data, E2E_CLUSTER) - require.Contains(t, data, E2E_CONTEXT) - require.Contains(t, data, E2E_USER) - require.Contains(t, data, E2E_TOKEN) + require.Contains(t, data, MasterURL) + require.Contains(t, data, Cluster) + require.Contains(t, data, Context) + require.Contains(t, data, User) + require.Contains(t, data, Token) } diff --git a/test/it/cluster/client_test.go b/test/it/cluster/client_test.go index 40e94634f..9a9e1e68d 100644 --- a/test/it/cluster/client_test.go +++ b/test/it/cluster/client_test.go @@ -28,6 +28,7 @@ func TestClientNoLogStream(t *testing.T) { it.WithReplicationFactor(3), it.WithNumberOfStorageNodes(3), it.WithNumberOfClients(1), + it.WithNumberOfTopics(1), it.WithVMSOptions(it.NewTestVMSOptions()), ) @@ -36,8 +37,9 @@ func TestClientNoLogStream(t *testing.T) { testutil.GC() }() + topicID := clus.TopicIDs()[0] client := clus.ClientAtIndex(t, 0) - _, err := client.Append(context.TODO(), []byte("foo")) + _, err := client.Append(context.TODO(), topicID, []byte("foo")) require.Error(t, err) } @@ -49,6 +51,7 @@ func TestClientAppendTo(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithNumberOfClients(1), it.WithVMSOptions(it.NewTestVMSOptions()), + it.WithNumberOfTopics(1), ) defer func() { @@ -57,16 +60,17 @@ func TestClientAppendTo(t *testing.T) { }() // FIXME: remove this ugly code - lsID := clus.LogStreamID(t, 0) + topicID := clus.TopicIDs()[0] + lsID := clus.LogStreamID(t, topicID, 0) client := clus.ClientAtIndex(t, 0) - _, err := client.AppendTo(context.TODO(), lsID+1, []byte("foo")) + _, err := client.AppendTo(context.TODO(), topicID, lsID+1, []byte("foo")) require.Error(t, err) - glsn, err := client.AppendTo(context.TODO(), lsID, []byte("foo")) + glsn, err := client.AppendTo(context.TODO(), topicID, lsID, []byte("foo")) require.NoError(t, err) - data, err := client.Read(context.Background(), lsID, glsn) + data, err := client.Read(context.Background(), topicID, lsID, glsn) require.NoError(t, err) require.EqualValues(t, []byte("foo"), data) } @@ -79,6 +83,7 @@ func TestClientAppend(t *testing.T) { it.WithNumberOfLogStreams(3), it.WithNumberOfClients(1), it.WithVMSOptions(it.NewTestVMSOptions()), + it.WithNumberOfTopics(1), ) defer func() { @@ -86,19 +91,20 @@ func TestClientAppend(t *testing.T) { testutil.GC() }() + topicID := clus.TopicIDs()[0] client := clus.ClientAtIndex(t, 0) expectedGLSN := types.MinGLSN for i := 0; i < 10; i++ { - glsn, err := client.Append(context.TODO(), []byte("foo")) + glsn, err := client.Append(context.TODO(), topicID, []byte("foo")) require.NoError(t, err) require.Equal(t, expectedGLSN, glsn) expectedGLSN++ } require.Condition(t, func() bool { - for _, lsid := range clus.LogStreamIDs() { - if _, errRead := client.Read(context.TODO(), lsid, 1); errRead == nil { + for _, lsid := range clus.LogStreamIDs(topicID) { + if _, errRead := client.Read(context.TODO(), topicID, lsid, 1); errRead == nil { return true } } @@ -114,6 +120,7 @@ func TestClientAppendCancel(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithNumberOfClients(1), it.WithVMSOptions(it.NewTestVMSOptions()), + it.WithNumberOfTopics(1), ) defer func() { @@ -121,6 +128,7 @@ func TestClientAppendCancel(t *testing.T) { testutil.GC() }() + topicID := clus.TopicIDs()[0] client := clus.ClientAtIndex(t, 0) var ( @@ -133,7 +141,7 @@ func TestClientAppendCancel(t *testing.T) { defer wg.Done() expectedGLSN := types.MinGLSN for { - glsn, err := client.Append(ctx, []byte("foo")) + glsn, err := client.Append(ctx, topicID, []byte("foo")) if err == nil { require.Equal(t, expectedGLSN, glsn) expectedGLSN++ @@ -161,6 +169,7 @@ func TestClientSubscribe(t *testing.T) { it.WithNumberOfLogStreams(3), it.WithNumberOfClients(1), it.WithVMSOptions(it.NewTestVMSOptions()), + it.WithNumberOfTopics(1), ) defer func() { @@ -168,15 +177,16 @@ func TestClientSubscribe(t *testing.T) { testutil.GC() }() + topicID := clus.TopicIDs()[0] client := clus.ClientAtIndex(t, 0) for i := 0; i < nrLogs; i++ { - _, err := client.Append(context.TODO(), []byte("foo")) + _, err := client.Append(context.TODO(), topicID, []byte("foo")) require.NoError(t, err) } errc := make(chan error, nrLogs) expectedGLSN := types.GLSN(1) - subscribeCloser, err := client.Subscribe(context.TODO(), types.GLSN(1), types.GLSN(nrLogs+1), func(le types.LogEntry, err error) { + subscribeCloser, err := client.Subscribe(context.TODO(), topicID, types.GLSN(1), types.GLSN(nrLogs+1), func(le varlogpb.LogEntry, err error) { if err != nil { require.ErrorIs(t, io.EOF, err) defer close(errc) @@ -208,6 +218,7 @@ func TestClientTrim(t *testing.T) { it.WithNumberOfLogStreams(3), it.WithNumberOfClients(1), it.WithVMSOptions(it.NewTestVMSOptions()), + it.WithNumberOfTopics(1), ) defer func() { @@ -215,29 +226,30 @@ func TestClientTrim(t *testing.T) { testutil.GC() }() + topicID := clus.TopicIDs()[0] client := clus.ClientAtIndex(t, 0) expectedGLSN := types.GLSN(1) for i := 0; i < nrLogs; i++ { - glsn, err := client.Append(context.TODO(), []byte("foo")) + glsn, err := client.Append(context.TODO(), topicID, []byte("foo")) require.NoError(t, err) require.Equal(t, expectedGLSN, glsn) expectedGLSN++ } - err := client.Trim(context.Background(), trimPos, varlog.TrimOption{}) + err := client.Trim(context.Background(), topicID, trimPos, varlog.TrimOption{}) require.NoError(t, err) // actual deletion in SN is asynchronous. require.Eventually(t, func() bool { errC := make(chan error) - nopOnNext := func(le types.LogEntry, err error) { + nopOnNext := func(le varlogpb.LogEntry, err error) { isErr := err != nil errC <- err if isErr { close(errC) } } - closer, err := client.Subscribe(context.TODO(), types.MinGLSN, trimPos, nopOnNext) + closer, err := client.Subscribe(context.TODO(), topicID, types.MinGLSN, trimPos, nopOnNext) require.NoError(t, err) defer closer() @@ -249,15 +261,15 @@ func TestClientTrim(t *testing.T) { }, time.Second, 10*time.Millisecond) // subscribe remains - ch := make(chan types.LogEntry) - onNext := func(logEntry types.LogEntry, err error) { + ch := make(chan varlogpb.LogEntry) + onNext := func(logEntry varlogpb.LogEntry, err error) { if err != nil { close(ch) return } ch <- logEntry } - closer, err := client.Subscribe(context.TODO(), trimPos+1, types.GLSN(nrLogs), onNext) + closer, err := client.Subscribe(context.TODO(), topicID, trimPos+1, types.GLSN(nrLogs), onNext) require.NoError(t, err) defer closer() expectedGLSN = trimPos + 1 @@ -275,14 +287,16 @@ func TestVarlogSubscribeWithSNFail(t *testing.T) { it.WithNumberOfStorageNodes(3), it.WithNumberOfLogStreams(3), it.WithNumberOfClients(5), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, 0) nrLogs := 64 for i := 0; i < nrLogs; i++ { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) So(err, ShouldBeNil) } @@ -295,7 +309,7 @@ func TestVarlogSubscribeWithSNFail(t *testing.T) { Convey("Then it should be able to subscribe", func(ctx C) { errc := make(chan error, nrLogs) expectedGLSN := types.GLSN(1) - subscribeCloser, err := client.Subscribe(context.TODO(), types.GLSN(1), types.GLSN(nrLogs+1), func(le types.LogEntry, err error) { + subscribeCloser, err := client.Subscribe(context.TODO(), topicID, types.GLSN(1), types.GLSN(nrLogs+1), func(le varlogpb.LogEntry, err error) { if err != nil { require.ErrorIs(t, io.EOF, err) defer close(errc) @@ -320,22 +334,23 @@ func TestVarlogSubscribeWithSNFail(t *testing.T) { func TestVarlogSubscribeWithAddLS(t *testing.T) { //defer goleak.VerifyNone(t) - opts := []it.Option{ it.WithReplicationFactor(2), it.WithNumberOfStorageNodes(5), it.WithNumberOfLogStreams(3), - it.WithNumberOfClients(5), + it.WithNumberOfClients(2), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { - nrLogs := 128 + nrLogs := 10 Convey("When add LogStream during subscribe", func(ctx C) { + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, 0) errc := make(chan error, nrLogs) expectedGLSN := types.GLSN(1) - subscribeCloser, err := client.Subscribe(context.TODO(), types.GLSN(1), types.GLSN(nrLogs+1), func(le types.LogEntry, err error) { + subscribeCloser, err := client.Subscribe(context.TODO(), topicID, types.GLSN(1), types.GLSN(nrLogs+1), func(le varlogpb.LogEntry, err error) { if err != nil { require.ErrorIs(t, io.EOF, err) defer close(errc) @@ -348,25 +363,26 @@ func TestVarlogSubscribeWithAddLS(t *testing.T) { require.NoError(t, err) defer subscribeCloser() + var wg sync.WaitGroup + wg.Add(1) go func() { + defer wg.Done() client := env.ClientAtIndex(t, 1) for i := 0; i < nrLogs/2; i++ { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) require.NoError(t, err) } - snID := env.StorageNodeIDAtIndex(t, 0) - env.CloseSN(t, snID) - env.CloseSNClientOf(t, snID) - - env.AddLS(t) + topicID := env.TopicIDs()[0] + env.AddLS(t, topicID) for i := 0; i < nrLogs/2; i++ { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) require.NoError(t, err) } }() + wg.Wait() Convey("Then it should be able to subscribe", func(ctx C) { for e := range errc { @@ -381,22 +397,23 @@ func TestVarlogSubscribeWithAddLS(t *testing.T) { func TestVarlogSubscribeWithUpdateLS(t *testing.T) { //defer goleak.VerifyNone(t) - opts := []it.Option{ it.WithReplicationFactor(2), it.WithNumberOfStorageNodes(5), it.WithNumberOfLogStreams(3), it.WithNumberOfClients(5), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { nrLogs := 128 Convey("When update LogStream during subscribe", func(ctx C) { + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, 0) errc := make(chan error, nrLogs) expectedGLSN := types.GLSN(1) - subscribeCloser, err := client.Subscribe(context.TODO(), types.GLSN(1), types.GLSN(nrLogs+1), func(le types.LogEntry, err error) { + subscribeCloser, err := client.Subscribe(context.TODO(), topicID, types.GLSN(1), types.GLSN(nrLogs+1), func(le varlogpb.LogEntry, err error) { if err != nil { require.ErrorIs(t, io.EOF, err) defer close(errc) @@ -413,13 +430,14 @@ func TestVarlogSubscribeWithUpdateLS(t *testing.T) { client := env.ClientAtIndex(t, 1) for i := 0; i < nrLogs/2; i++ { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) require.NoError(t, err) } addedSN := env.AddSN(t) - lsID := env.LogStreamID(t, 0) + topicID := env.TopicIDs()[0] + lsID := env.LogStreamID(t, topicID, 0) snID := env.PrimaryStorageNodeIDOf(t, lsID) env.CloseSN(t, snID) @@ -431,10 +449,10 @@ func TestVarlogSubscribeWithUpdateLS(t *testing.T) { return lsdesc.Status == varlogpb.LogStreamStatusSealed }, 5*time.Second, 10*time.Millisecond) - env.UpdateLS(t, lsID, snID, addedSN) + env.UpdateLS(t, topicID, lsID, snID, addedSN) for i := 0; i < nrLogs/2; i++ { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) require.NoError(t, err) } }() diff --git a/test/it/cluster/cluster_test.go b/test/it/cluster/cluster_test.go index a213343d9..73d16bcc5 100644 --- a/test/it/cluster/cluster_test.go +++ b/test/it/cluster/cluster_test.go @@ -15,6 +15,7 @@ import ( "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/util/testutil" "github.com/kakao/varlog/pkg/verrors" + "github.com/kakao/varlog/proto/varlogpb" "github.com/kakao/varlog/test/it" ) @@ -26,6 +27,7 @@ func TestAppendLogs(t *testing.T) { it.WithNumberOfStorageNodes(2), it.WithNumberOfLogStreams(10), it.WithNumberOfClients(10), + it.WithNumberOfTopics(1), ) defer clus.Close(t) @@ -38,9 +40,10 @@ func TestAppendLogs(t *testing.T) { idx := i grp.Go(func() error { max := types.InvalidGLSN + topicID := clus.TopicIDs()[0] client := clus.ClientAtIndex(t, idx) for i := 0; i < numAppend; i++ { - glsn, err := client.Append(ctx, []byte("foo")) + glsn, err := client.Append(ctx, topicID, []byte("foo")) if err != nil { return err } @@ -59,7 +62,7 @@ func TestAppendLogs(t *testing.T) { require.Equal(t, types.GLSN(numAppend*clus.NumberOfClients()), maxGLSN) subC := make(chan types.GLSN, maxGLSN) - onNext := func(logEntry types.LogEntry, err error) { + onNext := func(logEntry varlogpb.LogEntry, err error) { if err != nil { close(subC) return @@ -67,8 +70,9 @@ func TestAppendLogs(t *testing.T) { subC <- logEntry.GLSN } + topicID := clus.TopicIDs()[0] client := clus.ClientAtIndex(t, rand.Intn(clus.NumberOfClients())) - closer, err := client.Subscribe(context.Background(), types.MinGLSN, maxGLSN+1, onNext) + closer, err := client.Subscribe(context.Background(), topicID, types.MinGLSN, maxGLSN+1, onNext) require.NoError(t, err) defer closer() @@ -87,6 +91,7 @@ func TestReadSealedLogStream(t *testing.T) { it.WithNumberOfStorageNodes(1), it.WithNumberOfLogStreams(1), it.WithNumberOfClients(5), + it.WithNumberOfTopics(1), ) defer clus.Close(t) @@ -99,9 +104,10 @@ func TestReadSealedLogStream(t *testing.T) { wg.Add(1) go func(idx int) { defer wg.Done() + topicID := clus.TopicIDs()[0] client := clus.ClientAtIndex(t, idx) for { - glsn, err := client.Append(context.Background(), []byte("foo")) + glsn, err := client.Append(context.Background(), topicID, []byte("foo")) if err != nil { assert.ErrorIs(t, err, verrors.ErrSealed) errC <- err @@ -115,13 +121,14 @@ func TestReadSealedLogStream(t *testing.T) { // seal numSealedErr := 0 sealedGLSN := types.InvalidGLSN - lsID := clus.LogStreamIDs()[0] + topicID := clus.TopicIDs()[0] + lsID := clus.LogStreamIDs(topicID)[0] for numSealedErr < clus.NumberOfClients() { select { case glsn := <-glsnC: if sealedGLSN.Invalid() && glsn > boundary { - rsp, err := clus.GetVMSClient(t).Seal(context.Background(), lsID) + rsp, err := clus.GetVMSClient(t).Seal(context.Background(), topicID, lsID) require.NoError(t, err) sealedGLSN = rsp.GetSealedGLSN() t.Logf("SealedGLSN: %v", sealedGLSN) @@ -142,7 +149,7 @@ func TestReadSealedLogStream(t *testing.T) { for glsn := types.MinGLSN; glsn <= sealedGLSN; glsn++ { idx := rand.Intn(clus.NumberOfClients()) client := clus.ClientAtIndex(t, idx) - _, err := client.Read(context.TODO(), lsID, glsn) + _, err := client.Read(context.TODO(), topicID, lsID, glsn) require.NoError(t, err) } } @@ -155,10 +162,12 @@ func TestTrimGLS(t *testing.T) { it.WithNumberOfClients(1), it.WithReporterClientFactory(metadata_repository.NewReporterClientFactory()), it.WithStorageNodeManagementClientFactory(metadata_repository.NewEmptyStorageNodeClientFactory()), + it.WithNumberOfTopics(1), } Convey("Given cluster, when a client appends logs", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { - lsIDs := env.LogStreamIDs() + topicID := env.TopicIDs()[0] + lsIDs := env.LogStreamIDs(topicID) client := env.ClientAtIndex(t, 0) var ( @@ -166,24 +175,24 @@ func TestTrimGLS(t *testing.T) { glsn types.GLSN ) for i := 0; i < 10; i++ { - glsn, err = client.AppendTo(context.Background(), lsIDs[0], []byte("foo")) + glsn, err = client.AppendTo(context.Background(), topicID, lsIDs[0], []byte("foo")) So(err, ShouldBeNil) So(glsn, ShouldNotEqual, types.InvalidGLSN) } for i := 0; i < 10; i++ { - glsn, err = client.AppendTo(context.Background(), lsIDs[1], []byte("foo")) + glsn, err = client.AppendTo(context.Background(), topicID, lsIDs[1], []byte("foo")) So(err, ShouldBeNil) So(glsn, ShouldNotEqual, types.InvalidGLSN) } // TODO: Use RPC mr := env.GetMR(t) - hwm := mr.GetHighWatermark() - So(hwm, ShouldEqual, glsn) + ver := mr.GetLastCommitVersion() + So(ver, ShouldEqual, types.Version(glsn)) Convey("Then GLS history of MR should be trimmed", func(ctx C) { So(testutil.CompareWaitN(50, func() bool { - return mr.GetMinHighWatermark() == mr.GetPrevHighWatermark() + return mr.GetOldestCommitVersion() == mr.GetLastCommitVersion()-1 }), ShouldBeTrue) }) })) @@ -198,17 +207,19 @@ func TestTrimGLSWithSealedLS(t *testing.T) { it.WithNumberOfClients(1), it.WithReporterClientFactory(metadata_repository.NewReporterClientFactory()), it.WithStorageNodeManagementClientFactory(metadata_repository.NewEmptyStorageNodeClientFactory()), + it.WithNumberOfTopics(1), } Convey("When a client appends logs", it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { - lsIDs := env.LogStreamIDs() + topicID := env.TopicIDs()[0] + lsIDs := env.LogStreamIDs(topicID) client := env.ClientAtIndex(t, 0) var err error glsn := types.InvalidGLSN for i := 0; i < 32; i++ { - lsid := lsIDs[i%env.NumberOfLogStreams()] - glsn, err = client.AppendTo(context.Background(), lsid, []byte("foo")) + lsid := lsIDs[i%env.NumberOfLogStreams(topicID)] + glsn, err = client.AppendTo(context.Background(), topicID, lsid, []byte("foo")) So(err, ShouldBeNil) So(glsn, ShouldNotEqual, types.InvalidGLSN) } @@ -216,23 +227,23 @@ func TestTrimGLSWithSealedLS(t *testing.T) { sealedLSID := lsIDs[0] runningLSID := lsIDs[1] - _, err = env.GetVMSClient(t).Seal(context.Background(), sealedLSID) + _, err = env.GetVMSClient(t).Seal(context.Background(), topicID, sealedLSID) So(err, ShouldBeNil) for i := 0; i < 10; i++ { - glsn, err = client.AppendTo(context.Background(), runningLSID, []byte("foo")) + glsn, err = client.AppendTo(context.Background(), topicID, runningLSID, []byte("foo")) So(err, ShouldBeNil) So(glsn, ShouldNotEqual, types.InvalidGLSN) } // TODO: Use RPC mr := env.GetMR(t) - hwm := mr.GetHighWatermark() - So(hwm, ShouldEqual, glsn) + ver := mr.GetLastCommitVersion() + So(ver, ShouldEqual, types.Version(glsn)) Convey("Then GLS history of MR should be trimmed", func(ctx C) { So(testutil.CompareWaitN(50, func() bool { - return mr.GetMinHighWatermark() == mr.GetPrevHighWatermark() + return mr.GetOldestCommitVersion() == ver-1 }), ShouldBeTrue) }) })) @@ -247,34 +258,37 @@ func TestNewbieLogStream(t *testing.T) { it.WithNumberOfClients(1), it.WithReporterClientFactory(metadata_repository.NewReporterClientFactory()), it.WithStorageNodeManagementClientFactory(metadata_repository.NewEmptyStorageNodeClientFactory()), + it.WithNumberOfTopics(1), } Convey("Given LogStream", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { - lsIDs := env.LogStreamIDs() + topicID := env.TopicIDs()[0] + lsIDs := env.LogStreamIDs(topicID) client := env.ClientAtIndex(t, 0) var err error glsn := types.InvalidGLSN for i := 0; i < 32; i++ { - lsid := lsIDs[i%env.NumberOfLogStreams()] - glsn, err = client.AppendTo(context.Background(), lsid, []byte("foo")) + lsid := lsIDs[i%env.NumberOfLogStreams(topicID)] + glsn, err = client.AppendTo(context.Background(), topicID, lsid, []byte("foo")) So(err, ShouldBeNil) So(glsn, ShouldNotEqual, types.InvalidGLSN) } Convey("When add new logStream", func(ctx C) { - env.AddLS(t) + env.AddLS(t, topicID) env.ClientRefresh(t) Convey("Then it should be appendable", func(ctx C) { - lsIDs := env.LogStreamIDs() + topicID := env.TopicIDs()[0] + lsIDs := env.LogStreamIDs(topicID) client := env.ClientAtIndex(t, 0) var err error glsn := types.InvalidGLSN for i := 0; i < 32; i++ { - lsid := lsIDs[i%env.NumberOfLogStreams()] - glsn, err = client.AppendTo(context.Background(), lsid, []byte("foo")) + lsid := lsIDs[i%env.NumberOfLogStreams(topicID)] + glsn, err = client.AppendTo(context.Background(), topicID, lsid, []byte("foo")) So(err, ShouldBeNil) So(glsn, ShouldNotEqual, types.InvalidGLSN) } diff --git a/test/it/config.go b/test/it/config.go index 9f90f858d..0223e5b2e 100644 --- a/test/it/config.go +++ b/test/it/config.go @@ -21,10 +21,6 @@ const ( defaultUnsafeNoWAL = false defaultPortBase = 10000 - defaultNumSN = 0 - defaultNumLS = 0 - defaultNumCL = 0 - defaultVMSPortOffset = ports.ReservationSize - 1 defaultStartVMS = true ) @@ -47,9 +43,10 @@ type config struct { reporterClientFac metadata_repository.ReporterClientFactory snManagementClientFac metadata_repository.StorageNodeManagementClientFactory - numSN int - numLS int - numCL int + numSN int + numLS int + numCL int + numTopic int VMSOpts *vms.Options logger *zap.Logger @@ -176,6 +173,12 @@ func WithNumberOfLogStreams(numLS int) Option { } } +func WithNumberOfTopics(numTopic int) Option { + return func(c *config) { + c.numTopic = numTopic + } +} + func WithNumberOfClients(numCL int) Option { return func(c *config) { c.numCL = numCL diff --git a/test/it/failover/failover_test.go b/test/it/failover/failover_test.go index 1422e0717..ae89d406f 100644 --- a/test/it/failover/failover_test.go +++ b/test/it/failover/failover_test.go @@ -24,6 +24,7 @@ func TestVarlogFailoverMRLeaderFail(t *testing.T) { it.WithNumberOfStorageNodes(1), it.WithNumberOfLogStreams(1), it.WithNumberOfClients(5), + it.WithNumberOfTopics(1), } Convey("cluster", it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { @@ -32,7 +33,8 @@ func TestVarlogFailoverMRLeaderFail(t *testing.T) { errC := make(chan error, 1024) glsnC := make(chan types.GLSN, 1024) - lsID := env.LogStreamIDs()[0] + topicID := env.TopicIDs()[0] + lsID := env.LogStreamIDs(topicID)[0] var ( wg sync.WaitGroup @@ -50,7 +52,7 @@ func TestVarlogFailoverMRLeaderFail(t *testing.T) { default: } - glsn, err := client.Append(context.Background(), []byte("foo")) + glsn, err := client.Append(context.Background(), topicID, []byte("foo")) if err != nil { errC <- err } else { @@ -102,11 +104,11 @@ func TestVarlogFailoverMRLeaderFail(t *testing.T) { Convey("Then it should be able to keep appending log", func(ctx C) { client := env.ClientAtIndex(t, 0) - _, err := client.Append(context.Background(), []byte("bar")) + _, err := client.Append(context.Background(), topicID, []byte("bar")) So(err, ShouldBeNil) for glsn := types.MinGLSN; glsn <= maxGLSN; glsn += types.GLSN(1) { - _, err := client.Read(context.TODO(), lsID, glsn) + _, err := client.Read(context.TODO(), topicID, lsID, glsn) So(err, ShouldBeNil) } }) @@ -122,6 +124,7 @@ func TestVarlogFailoverSNBackupInitialFault(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithNumberOfClients(1), it.WithVMSOptions(it.NewTestVMSOptions()), + it.WithNumberOfTopics(1), ) defer func() { @@ -129,10 +132,11 @@ func TestVarlogFailoverSNBackupInitialFault(t *testing.T) { testutil.GC() }() - _, err := clus.ClientAtIndex(t, 0).Append(context.Background(), []byte("foo")) + topicID := clus.TopicIDs()[0] + _, err := clus.ClientAtIndex(t, 0).Append(context.Background(), topicID, []byte("foo")) require.NoError(t, err) - lsID := clus.LogStreamID(t, 0) + lsID := clus.LogStreamID(t, topicID, 0) backupSNID := clus.BackupStorageNodeIDOf(t, lsID) clus.CloseSN(t, backupSNID) @@ -156,11 +160,11 @@ func TestVarlogFailoverSNBackupInitialFault(t *testing.T) { return varlogpb.LogStreamStatusSealed == lsmd.GetStatus() }, 10*time.Second, 100*time.Millisecond) - _, err = clus.GetVMSClient(t).Unseal(context.Background(), lsID) + _, err = clus.GetVMSClient(t).Unseal(context.Background(), topicID, lsID) require.NoError(t, err) clus.ClientRefresh(t) - _, err = clus.ClientAtIndex(t, 0).Append(context.Background(), []byte("foo")) + _, err = clus.ClientAtIndex(t, 0).Append(context.Background(), topicID, []byte("foo")) require.NoError(t, err) } @@ -171,6 +175,7 @@ func TestVarlogFailoverSNBackupFail(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithNumberOfClients(5), it.WithVMSOptions(it.NewTestVMSOptions()), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { @@ -185,6 +190,7 @@ func TestVarlogFailoverSNBackupFail(t *testing.T) { wg.Add(1) go func(idx int) { defer wg.Done() + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, idx) for { select { @@ -192,7 +198,7 @@ func TestVarlogFailoverSNBackupFail(t *testing.T) { return default: } - glsn, err := client.Append(context.Background(), []byte("foo")) + glsn, err := client.Append(context.Background(), topicID, []byte("foo")) if err != nil { errC <- err return @@ -206,7 +212,8 @@ func TestVarlogFailoverSNBackupFail(t *testing.T) { errCnt := 0 maxGLSN := types.InvalidGLSN - lsID := env.LogStreamID(t, 0) + topicID := env.TopicIDs()[0] + lsID := env.LogStreamID(t, topicID, 0) backupSNID := env.BackupStorageNodeIDOf(t, lsID) timer := time.NewTimer(vtesting.TimeoutUnitTimesFactor(100)) @@ -231,7 +238,7 @@ func TestVarlogFailoverSNBackupFail(t *testing.T) { } Convey("Then it should not be able to append", func(ctx C) { - rsp, err := env.GetVMSClient(t).Seal(context.Background(), lsID) + rsp, err := env.GetVMSClient(t).Seal(context.Background(), topicID, lsID) So(err, ShouldBeNil) sealedGLSN := rsp.GetSealedGLSN() So(sealedGLSN, ShouldBeGreaterThanOrEqualTo, maxGLSN) @@ -262,7 +269,7 @@ func TestVarlogFailoverSNBackupFail(t *testing.T) { return ok && lsmd.GetStatus() == varlogpb.LogStreamStatusSealed }), ShouldBeTrue) - _, err := env.GetVMSClient(t).Unseal(context.TODO(), lsID) + _, err := env.GetVMSClient(t).Unseal(context.TODO(), topicID, lsID) So(err, ShouldBeNil) Convey("Then it should be able to append", func(ctx C) { @@ -272,7 +279,7 @@ func TestVarlogFailoverSNBackupFail(t *testing.T) { env.ClientRefresh(t) client := env.ClientAtIndex(t, 0) So(testutil.CompareWaitN(10, func() bool { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) return err == nil }), ShouldBeTrue) }) @@ -283,16 +290,20 @@ func TestVarlogFailoverSNBackupFail(t *testing.T) { } func TestVarlogFailoverRecoverFromSML(t *testing.T) { + t.Skip() + opts := []it.Option{ it.WithoutWAL(), it.WithNumberOfStorageNodes(1), it.WithNumberOfLogStreams(1), it.WithMRCount(1), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster with StateMachineLog", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { leader := env.IndexOfLeaderMR() - lsID := env.LogStreamID(t, 0) + topicID := env.TopicIDs()[0] + lsID := env.LogStreamID(t, topicID, 0) cli := env.NewLogIOClient(t, lsID) defer cli.Close() @@ -302,9 +313,10 @@ func TestVarlogFailoverRecoverFromSML(t *testing.T) { glsn types.GLSN ) for i := 0; i < 5; i++ { - glsn, err = cli.Append(context.TODO(), lsID, []byte("foo")) + glsn, err = cli.Append(context.TODO(), topicID, lsID, []byte("foo")) So(err, ShouldBeNil) } + ver := types.Version(glsn) Convey("When MR leader restart", func(ctx C) { env.RestartMR(t, leader) @@ -313,7 +325,7 @@ func TestVarlogFailoverRecoverFromSML(t *testing.T) { Convey("Then it should be recovered", func(ctx C) { mr := env.GetMR(t) - So(mr.GetHighWatermark(), ShouldEqual, glsn) + So(mr.GetLastCommitVersion(), ShouldEqual, ver) metadata, err := mr.GetMetadata(context.TODO()) So(err, ShouldBeNil) @@ -351,11 +363,11 @@ func TestVarlogFailoverRecoverFromSML(t *testing.T) { recoveredGLSN := types.InvalidGLSN So(testutil.CompareWaitN(10, func() bool { - cmCli.Unseal(context.TODO(), lsID) + cmCli.Unseal(context.TODO(), topicID, lsID) rctx, cancel := context.WithTimeout(context.TODO(), vtesting.TimeoutUnitTimesFactor(10)) defer cancel() - recoveredGLSN, err = cli.Append(rctx, lsID, []byte("foo")) + recoveredGLSN, err = cli.Append(rctx, topicID, lsID, []byte("foo")) return err == nil }), ShouldBeTrue) @@ -367,6 +379,8 @@ func TestVarlogFailoverRecoverFromSML(t *testing.T) { } func TestVarlogFailoverRecoverFromIncompleteSML(t *testing.T) { + t.Skip() + opts := []it.Option{ it.WithoutWAL(), it.WithReplicationFactor(1), @@ -374,6 +388,7 @@ func TestVarlogFailoverRecoverFromIncompleteSML(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithMRCount(1), it.WithNumberOfClients(1), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster with StateMachineLog", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { @@ -382,31 +397,34 @@ func TestVarlogFailoverRecoverFromIncompleteSML(t *testing.T) { meta := env.GetMetadata(t) So(meta, ShouldNotBeNil) + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, 0) var ( err error glsn types.GLSN + ver types.Version ) for i := 0; i < nrAppend; i++ { - glsn, err = client.Append(context.TODO(), []byte("foo")) + glsn, err = client.Append(context.TODO(), topicID, []byte("foo")) So(err, ShouldBeNil) } + ver = types.Version(glsn) - lsID := env.LogStreamID(t, 0) - env.WaitCommit(t, lsID, glsn) + lsID := env.LogStreamID(t, topicID, 0) + env.WaitCommit(t, lsID, ver) Convey("When commit happens during MR close all without writing SML", func(ctx C) { env.CloseMRAllForRestart(t) for i := 0; i < nrAppend; i++ { - env.AppendUncommittedLog(t, lsID, []byte("foo")) + env.AppendUncommittedLog(t, topicID, lsID, []byte("foo")) } - prev := glsn offset := glsn + 1 - glsn = glsn + types.GLSN(nrAppend) - env.CommitWithoutMR(t, lsID, types.LLSN(offset), offset, nrAppend, prev, glsn) + glsn += types.GLSN(nrAppend) + ver++ + env.CommitWithoutMR(t, lsID, types.LLSN(offset), offset, nrAppend, ver, glsn) env.RecoverMR(t) @@ -414,7 +432,7 @@ func TestVarlogFailoverRecoverFromIncompleteSML(t *testing.T) { Convey("Then it should be recovered", func(ctx C) { mr := env.GetMR(t) - So(mr.GetHighWatermark(), ShouldEqual, glsn) + So(mr.GetLastCommitVersion(), ShouldEqual, ver) metadata, err := mr.GetMetadata(context.TODO()) So(err, ShouldBeNil) @@ -431,12 +449,12 @@ func TestVarlogFailoverRecoverFromIncompleteSML(t *testing.T) { recoveredGLSN := types.InvalidGLSN So(testutil.CompareWaitN(10, func() bool { - cmCli.Unseal(context.TODO(), lsID) + cmCli.Unseal(context.TODO(), topicID, lsID) rctx, cancel := context.WithTimeout(context.TODO(), vtesting.TimeoutUnitTimesFactor(10)) defer cancel() - recoveredGLSN, err = client.Append(rctx, []byte("foo")) + recoveredGLSN, err = client.Append(rctx, topicID, []byte("foo")) return err == nil }), ShouldBeTrue) @@ -448,6 +466,8 @@ func TestVarlogFailoverRecoverFromIncompleteSML(t *testing.T) { } func TestVarlogFailoverRecoverFromIncompleteSMLWithEmptyCommit(t *testing.T) { + t.Skip() + opts := []it.Option{ it.WithoutWAL(), it.WithReplicationFactor(1), @@ -455,6 +475,7 @@ func TestVarlogFailoverRecoverFromIncompleteSMLWithEmptyCommit(t *testing.T) { it.WithNumberOfLogStreams(2), it.WithMRCount(1), it.WithNumberOfClients(1), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster with StateMachineLog", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { @@ -463,19 +484,22 @@ func TestVarlogFailoverRecoverFromIncompleteSMLWithEmptyCommit(t *testing.T) { meta := env.GetMetadata(t) So(meta, ShouldNotBeNil) + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, 0) var ( err error glsn types.GLSN + ver types.Version ) for i := 0; i < nrAppend; i++ { - glsn, err = client.Append(context.TODO(), []byte("foo")) + glsn, err = client.Append(context.TODO(), topicID, []byte("foo")) So(err, ShouldBeNil) } + ver = types.Version(glsn) - for _, lsID := range env.LogStreamIDs() { - env.WaitCommit(t, lsID, glsn) + for _, lsID := range env.LogStreamIDs(topicID) { + env.WaitCommit(t, lsID, ver) } Convey("When empty commit happens during MR close all without writing SML", func(ctx C) { @@ -497,15 +521,16 @@ func TestVarlogFailoverRecoverFromIncompleteSMLWithEmptyCommit(t *testing.T) { */ for i := 0; i < 2; i++ { - for _, lsID := range env.LogStreamIDs() { + for _, lsID := range env.LogStreamIDs(topicID) { llsn := env.GetUncommittedLLSNOffset(t, lsID) - env.AppendUncommittedLog(t, lsID, []byte("foo")) + env.AppendUncommittedLog(t, topicID, lsID, []byte("foo")) - env.CommitWithoutMR(t, lsID, llsn, glsn+types.GLSN(1), 1, glsn, glsn+types.GLSN(1)) - glsn += types.GLSN(1) + ver++ + glsn++ + env.CommitWithoutMR(t, lsID, llsn, glsn+types.GLSN(1), 1, ver, glsn) - env.WaitCommit(t, lsID, glsn) + env.WaitCommit(t, lsID, ver) } } @@ -515,15 +540,13 @@ func TestVarlogFailoverRecoverFromIncompleteSMLWithEmptyCommit(t *testing.T) { Convey("Then it should be recovered", func(ctx C) { mr := env.GetMR(t) - So(mr.GetHighWatermark(), ShouldEqual, glsn) - - prevHWM := mr.GetPrevHighWatermark() + So(mr.GetLastCommitVersion(), ShouldEqual, ver) - crs, _ := mr.LookupNextCommitResults(prevHWM) + crs, _ := mr.LookupNextCommitResults(ver - 1) So(crs, ShouldNotBeNil) - for _, lsID := range env.LogStreamIDs() { - cr, _, ok := crs.LookupCommitResult(lsID, -1) + for _, lsID := range env.LogStreamIDs(topicID) { + cr, _, ok := crs.LookupCommitResult(topicID, lsID, -1) So(ok, ShouldBeTrue) llsn := env.GetUncommittedLLSNOffset(t, lsID) @@ -537,6 +560,8 @@ func TestVarlogFailoverRecoverFromIncompleteSMLWithEmptyCommit(t *testing.T) { } func TestVarlogFailoverSyncLogStream(t *testing.T) { + t.Skip() + opts := []it.Option{ it.WithoutWAL(), it.WithReplicationFactor(1), @@ -544,6 +569,7 @@ func TestVarlogFailoverSyncLogStream(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithMRCount(1), it.WithNumberOfClients(1), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster with StateMachineLog", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { @@ -552,38 +578,41 @@ func TestVarlogFailoverSyncLogStream(t *testing.T) { meta := env.GetMetadata(t) So(meta, ShouldNotBeNil) + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, 0) var ( err error glsn types.GLSN + ver types.Version ) for i := 0; i < nrAppend; i++ { - glsn, err = client.Append(context.TODO(), []byte("foo")) + glsn, err = client.Append(context.TODO(), topicID, []byte("foo")) So(err, ShouldBeNil) } + ver = types.Version(glsn) - lsID := env.LogStreamID(t, 0) - env.WaitCommit(t, lsID, glsn) + lsID := env.LogStreamID(t, topicID, 0) + env.WaitCommit(t, lsID, ver) Convey("When add log stream without writeing SML", func(ctx C) { env.CloseMRAllForRestart(t) - addedLSID := env.AddLSWithoutMR(t) + addedLSID := env.AddLSWithoutMR(t, topicID) for i := 0; i < nrAppend; i++ { - env.AppendUncommittedLog(t, addedLSID, []byte("foo")) + env.AppendUncommittedLog(t, topicID, addedLSID, []byte("foo")) } - prev := glsn offset := glsn + 1 - glsn = glsn + types.GLSN(nrAppend) + glsn += types.GLSN(nrAppend) + ver++ - env.CommitWithoutMR(t, lsID, types.LLSN(offset), offset, 0, prev, glsn) - env.WaitCommit(t, lsID, glsn) + env.CommitWithoutMR(t, lsID, types.LLSN(offset), offset, 0, ver, glsn) + env.WaitCommit(t, lsID, ver) - env.CommitWithoutMR(t, addedLSID, types.MinLLSN, offset, nrAppend, prev, glsn) - env.WaitCommit(t, addedLSID, glsn) + env.CommitWithoutMR(t, addedLSID, types.MinLLSN, offset, nrAppend, ver, glsn) + env.WaitCommit(t, addedLSID, ver) env.RecoverMR(t) @@ -591,7 +620,7 @@ func TestVarlogFailoverSyncLogStream(t *testing.T) { Convey("Then it should be recovered", func(ctx C) { mr := env.GetMR(t) - So(mr.GetHighWatermark(), ShouldEqual, glsn) + So(mr.GetLastCommitVersion(), ShouldEqual, ver) metadata, err := mr.GetMetadata(context.TODO()) So(err, ShouldBeNil) @@ -609,7 +638,7 @@ func TestVarlogFailoverSyncLogStream(t *testing.T) { recoveredGLSN := types.InvalidGLSN So(testutil.CompareWaitN(10, func() bool { - cmCli.Unseal(context.TODO(), addedLSID) + cmCli.Unseal(context.TODO(), topicID, addedLSID) rctx, cancel := context.WithTimeout(context.TODO(), vtesting.TimeoutUnitTimesFactor(10)) defer cancel() @@ -617,7 +646,7 @@ func TestVarlogFailoverSyncLogStream(t *testing.T) { env.ClientRefresh(t) client := env.ClientAtIndex(t, 0) - recoveredGLSN, err = client.Append(rctx, []byte("foo")) + recoveredGLSN, err = client.Append(rctx, topicID, []byte("foo")) return err == nil }), ShouldBeTrue) @@ -631,7 +660,7 @@ func TestVarlogFailoverSyncLogStream(t *testing.T) { env.CloseMRAllForRestart(t) - env.UpdateLSWithoutMR(t, lsID, addedSNID, true) + env.UpdateLSWithoutMR(t, topicID, lsID, addedSNID, true) env.RecoverMR(t) @@ -651,6 +680,8 @@ func TestVarlogFailoverSyncLogStream(t *testing.T) { } func TestVarlogFailoverSyncLogStreamSelectReplica(t *testing.T) { + t.Skip() + opts := []it.Option{ it.WithoutWAL(), it.WithReplicationFactor(1), @@ -658,6 +689,7 @@ func TestVarlogFailoverSyncLogStreamSelectReplica(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithMRCount(1), it.WithNumberOfClients(1), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster with StateMachineLog", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { @@ -667,7 +699,8 @@ func TestVarlogFailoverSyncLogStreamSelectReplica(t *testing.T) { So(meta, ShouldNotBeNil) snID := env.StorageNodeIDAtIndex(t, 0) - lsID := env.LogStreamID(t, 0) + topicID := env.TopicIDs()[0] + lsID := env.LogStreamID(t, topicID, 0) Convey("When update log stream without writing SML, and do not clear victim replica", func(ctx C) { client := env.ClientAtIndex(t, 0) @@ -675,18 +708,20 @@ func TestVarlogFailoverSyncLogStreamSelectReplica(t *testing.T) { var ( err error glsn types.GLSN + ver types.Version ) for i := 0; i < nrAppend; i++ { - glsn, err = client.Append(context.TODO(), []byte("foo")) + glsn, err = client.Append(context.TODO(), topicID, []byte("foo")) So(err, ShouldBeNil) } - env.WaitCommit(t, lsID, glsn) + ver = types.Version(glsn) + env.WaitCommit(t, lsID, ver) addedSNID := env.AddSN(t) env.CloseMRAllForRestart(t) - env.UpdateLSWithoutMR(t, lsID, addedSNID, false) + env.UpdateLSWithoutMR(t, topicID, lsID, addedSNID, false) env.RecoverMR(t) @@ -708,17 +743,18 @@ func TestVarlogFailoverSyncLogStreamSelectReplica(t *testing.T) { env.CloseMRAllForRestart(t) - env.UpdateLSWithoutMR(t, lsID, addedSNID, false) - env.UnsealWithoutMR(t, lsID, types.InvalidGLSN) + env.UpdateLSWithoutMR(t, topicID, lsID, addedSNID, false) + env.UnsealWithoutMR(t, topicID, lsID, types.InvalidGLSN) for i := 0; i < nrAppend; i++ { - env.AppendUncommittedLog(t, lsID, []byte("foo")) + env.AppendUncommittedLog(t, topicID, lsID, []byte("foo")) } glsn := types.GLSN(nrAppend) + ver := types.MinVersion - env.CommitWithoutMR(t, lsID, types.MinLLSN, types.MinGLSN, nrAppend, types.InvalidGLSN, glsn) - env.WaitCommit(t, lsID, glsn) + env.CommitWithoutMR(t, lsID, types.MinLLSN, types.MinGLSN, nrAppend, ver, glsn) + env.WaitCommit(t, lsID, ver) env.RecoverMR(t) @@ -738,6 +774,8 @@ func TestVarlogFailoverSyncLogStreamSelectReplica(t *testing.T) { } func TestVarlogFailoverSyncLogStreamIgnore(t *testing.T) { + t.Skip() + opts := []it.Option{ it.WithoutWAL(), it.WithReplicationFactor(2), @@ -745,6 +783,7 @@ func TestVarlogFailoverSyncLogStreamIgnore(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithMRCount(1), it.WithNumberOfClients(1), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster with StateMachineLog", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { @@ -753,24 +792,27 @@ func TestVarlogFailoverSyncLogStreamIgnore(t *testing.T) { meta := env.GetMetadata(t) So(meta, ShouldNotBeNil) + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, 0) var ( err error glsn types.GLSN + ver types.Version ) for i := 0; i < nrAppend; i++ { - glsn, err = client.Append(context.TODO(), []byte("foo")) + glsn, err = client.Append(context.TODO(), topicID, []byte("foo")) So(err, ShouldBeNil) } + ver = types.Version(glsn) - lsID := env.LogStreamID(t, 0) - env.WaitCommit(t, lsID, glsn) + lsID := env.LogStreamID(t, topicID, 0) + env.WaitCommit(t, lsID, ver) Convey("When add log stream incomplete without writeing SML", func(ctx C) { env.CloseMRAllForRestart(t) - incompleteLSID := env.AddLSIncomplete(t) + incompleteLSID := env.AddLSIncomplete(t, topicID) env.RecoverMR(t) @@ -798,6 +840,7 @@ func TestVarlogFailoverSyncLogStreamError(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithMRCount(1), it.WithNumberOfClients(1), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster with StateMachineLog", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { @@ -806,19 +849,22 @@ func TestVarlogFailoverSyncLogStreamError(t *testing.T) { meta := env.GetMetadata(t) So(meta, ShouldNotBeNil) + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, 0) var ( err error glsn types.GLSN + ver types.Version ) for i := 0; i < nrAppend; i++ { - glsn, err = client.Append(context.TODO(), []byte("foo")) + glsn, err = client.Append(context.TODO(), topicID, []byte("foo")) So(err, ShouldBeNil) } + ver = types.Version(glsn) - lsID := env.LogStreamID(t, 0) - env.WaitCommit(t, lsID, glsn) + lsID := env.LogStreamID(t, topicID, 0) + env.WaitCommit(t, lsID, ver) Convey("When remove replica without writing SML", func(ctx C) { env.CloseMRAllForRestart(t) @@ -826,7 +872,7 @@ func TestVarlogFailoverSyncLogStreamError(t *testing.T) { snID := env.StorageNodeIDAtIndex(t, 0) snMCL := env.SNClientOf(t, snID) - snMCL.RemoveLogStream(context.Background(), lsID) + snMCL.RemoveLogStream(context.Background(), topicID, lsID) env.RecoverMR(t) @@ -840,22 +886,22 @@ func TestVarlogFailoverSyncLogStreamError(t *testing.T) { Convey("When remove replica without writing SML", func(ctx C) { env.CloseMRAllForRestart(t) - addedLSID := env.AddLSWithoutMR(t) + addedLSID := env.AddLSWithoutMR(t, topicID) for i := 0; i < nrAppend; i++ { - env.AppendUncommittedLog(t, addedLSID, []byte("foo")) + env.AppendUncommittedLog(t, topicID, addedLSID, []byte("foo")) } - prev := glsn offset := glsn + 1 - glsn = glsn + types.GLSN(nrAppend) - env.CommitWithoutMR(t, addedLSID, types.MinLLSN, offset, nrAppend, prev, glsn) - env.WaitCommit(t, addedLSID, glsn) + glsn += types.GLSN(nrAppend) + ver++ + env.CommitWithoutMR(t, addedLSID, types.MinLLSN, offset, nrAppend, ver, glsn) + env.WaitCommit(t, addedLSID, ver) snID := env.StorageNodeIDAtIndex(t, 0) snMCL := env.SNClientOf(t, snID) - snMCL.RemoveLogStream(context.Background(), addedLSID) + snMCL.RemoveLogStream(context.Background(), topicID, addedLSID) env.RecoverMR(t) @@ -874,20 +920,22 @@ func TestVarlogFailoverUpdateLS(t *testing.T) { it.WithNumberOfStorageNodes(3), it.WithNumberOfLogStreams(2), it.WithNumberOfClients(5), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, 0) for i := 0; i < 32; i++ { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) So(err, ShouldBeNil) } Convey("When SN fail", func(ctx C) { var victim types.StorageNodeID var updateLS types.LogStreamID - for _, lsID := range env.LogStreamIDs() { + for _, lsID := range env.LogStreamIDs(topicID) { sn := env.PrimaryStorageNodeIDOf(t, lsID) snCL := env.SNClientOf(t, sn) snmd, _ := snCL.GetMetadata(context.TODO()) @@ -913,7 +961,7 @@ func TestVarlogFailoverUpdateLS(t *testing.T) { env.CloseSNClientOf(t, victim) for i := 0; i < 32; i++ { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) So(err, ShouldBeNil) } @@ -924,10 +972,10 @@ func TestVarlogFailoverUpdateLS(t *testing.T) { return lsdesc.Status == varlogpb.LogStreamStatusSealed }), ShouldBeTrue) - env.UpdateLS(t, updateLS, victim, addedSN) + env.UpdateLS(t, topicID, updateLS, victim, addedSN) for i := 0; i < 32; i++ { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) So(err, ShouldBeNil) } @@ -947,7 +995,7 @@ func TestVarlogFailoverUpdateLS(t *testing.T) { }), ShouldBeTrue) for i := 0; i < 32; i++ { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) So(err, ShouldBeNil) } }) diff --git a/test/it/management/management_test.go b/test/it/management/management_test.go index 75be11aff..80548b580 100644 --- a/test/it/management/management_test.go +++ b/test/it/management/management_test.go @@ -2,15 +2,18 @@ package management import ( "context" + "fmt" "testing" "time" . "github.com/smartystreets/goconvey/convey" "github.com/stretchr/testify/require" + "golang.org/x/sync/errgroup" "github.com/kakao/varlog/internal/metadata_repository" "github.com/kakao/varlog/pkg/types" "github.com/kakao/varlog/pkg/util/testutil" + "github.com/kakao/varlog/pkg/varlog" "github.com/kakao/varlog/proto/varlogpb" "github.com/kakao/varlog/test/it" ) @@ -29,7 +32,7 @@ func TestUnregisterInactiveStorageNode(t *testing.T) { } func TestUnregisterActiveStorageNode(t *testing.T) { - clus := it.NewVarlogCluster(t, it.WithNumberOfStorageNodes(1), it.WithNumberOfLogStreams(1)) + clus := it.NewVarlogCluster(t, it.WithNumberOfStorageNodes(1), it.WithNumberOfLogStreams(1), it.WithNumberOfTopics(1)) defer clus.Close(t) snID := clus.StorageNodeIDAtIndex(t, 0) @@ -48,22 +51,23 @@ func TestAddAlreadyExistedStorageNode(t *testing.T) { } func TestUnregisterLogStream(t *testing.T) { - clus := it.NewVarlogCluster(t, it.WithNumberOfStorageNodes(1), it.WithNumberOfLogStreams(1)) + clus := it.NewVarlogCluster(t, it.WithNumberOfStorageNodes(1), it.WithNumberOfLogStreams(1), it.WithNumberOfTopics(1)) defer clus.Close(t) - lsID := clus.LogStreamIDs()[0] - _, err := clus.GetVMSClient(t).UnregisterLogStream(context.Background(), lsID) + topicID := clus.TopicIDs()[0] + lsID := clus.LogStreamIDs(topicID)[0] + _, err := clus.GetVMSClient(t).UnregisterLogStream(context.Background(), topicID, lsID) require.Error(t, err) - _, err = clus.GetVMSClient(t).Seal(context.Background(), lsID) + _, err = clus.GetVMSClient(t).Seal(context.Background(), topicID, lsID) require.NoError(t, err) - _, err = clus.GetVMSClient(t).UnregisterLogStream(context.Background(), lsID) + _, err = clus.GetVMSClient(t).UnregisterLogStream(context.Background(), topicID, lsID) require.NoError(t, err) } func TestAddLogStreamWithNotExistedNode(t *testing.T) { - clus := it.NewVarlogCluster(t) + clus := it.NewVarlogCluster(t, it.WithNumberOfTopics(1)) defer clus.Close(t) replicas := []*varlogpb.ReplicaDescriptor{ @@ -72,7 +76,8 @@ func TestAddLogStreamWithNotExistedNode(t *testing.T) { Path: "/fake", }, } - _, err := clus.GetVMSClient(t).AddLogStream(context.Background(), replicas) + topicID := clus.TopicIDs()[0] + _, err := clus.GetVMSClient(t).AddLogStream(context.Background(), topicID, replicas) require.Error(t, err) } @@ -80,6 +85,7 @@ func TestAddLogStreamManually(t *testing.T) { clus := it.NewVarlogCluster(t, it.WithReplicationFactor(2), it.WithNumberOfStorageNodes(2), + it.WithNumberOfTopics(1), ) defer clus.Close(t) @@ -94,7 +100,8 @@ func TestAddLogStreamManually(t *testing.T) { }) } - _, err := clus.GetVMSClient(t).AddLogStream(context.Background(), replicas) + topicID := clus.TopicIDs()[0] + _, err := clus.GetVMSClient(t).AddLogStream(context.Background(), topicID, replicas) require.NoError(t, err) } @@ -104,6 +111,7 @@ func TestAddLogStreamPartiallyRegistered(t *testing.T) { clus := it.NewVarlogCluster(t, it.WithReplicationFactor(2), it.WithNumberOfStorageNodes(2), + it.WithNumberOfTopics(1), ) defer clus.Close(t) @@ -114,7 +122,9 @@ func TestAddLogStreamPartiallyRegistered(t *testing.T) { sn1 := clus.SNClientOf(t, snid1) snmd1, err := sn1.GetMetadata(context.Background()) require.NoError(t, err) - err = sn1.AddLogStream(context.Background(), lsID, snmd1.GetStorageNode().GetStorages()[0].GetPath()) + + topicID := clus.TopicIDs()[0] + err = sn1.AddLogStreamReplica(context.Background(), topicID, lsID, snmd1.GetStorageNode().GetStorages()[0].GetPath()) require.NoError(t, err) snid2 := clus.StorageNodeIDAtIndex(t, 1) @@ -134,11 +144,11 @@ func TestAddLogStreamPartiallyRegistered(t *testing.T) { Path: snmd2.GetStorageNode().GetStorages()[0].GetPath(), }, } - _, err = clus.GetVMSClient(t).AddLogStream(context.Background(), replicas) + _, err = clus.GetVMSClient(t).AddLogStream(context.Background(), topicID, replicas) require.Error(t, err) // Retring add new log stream will be succeed, since VMS refreshes its ID pool. - _, err = clus.GetVMSClient(t).AddLogStream(context.Background(), replicas) + _, err = clus.GetVMSClient(t).AddLogStream(context.Background(), topicID, replicas) require.NoError(t, err) } @@ -148,6 +158,7 @@ func TestRemoveLogStreamReplica(t *testing.T) { clus := it.NewVarlogCluster(t, it.WithReplicationFactor(1), it.WithNumberOfStorageNodes(1), + it.WithNumberOfTopics(1), ) defer clus.Close(t) @@ -156,10 +167,11 @@ func TestRemoveLogStreamReplica(t *testing.T) { sn := clus.SNClientOf(t, snid) snmd, err := sn.GetMetadata(context.Background()) require.NoError(t, err) - err = sn.AddLogStream(context.Background(), lsID, snmd.GetStorageNode().GetStorages()[0].GetPath()) + topicID := clus.TopicIDs()[0] + err = sn.AddLogStreamReplica(context.Background(), topicID, lsID, snmd.GetStorageNode().GetStorages()[0].GetPath()) require.NoError(t, err) - _, err = clus.GetVMSClient(t).RemoveLogStreamReplica(context.TODO(), snid, lsID) + _, err = clus.GetVMSClient(t).RemoveLogStreamReplica(context.TODO(), snid, topicID, lsID) require.NoError(t, err) } @@ -169,15 +181,17 @@ func TestSealUnseal(t *testing.T) { it.WithNumberOfStorageNodes(2), it.WithNumberOfLogStreams(1), it.WithNumberOfClients(1), + it.WithNumberOfTopics(1), ) defer clus.Close(t) - lsID := clus.LogStreamIDs()[0] + topicID := clus.TopicIDs()[0] + lsID := clus.LogStreamIDs(topicID)[0] - _, err := clus.GetVMSClient(t).Seal(context.Background(), lsID) + _, err := clus.GetVMSClient(t).Seal(context.Background(), topicID, lsID) require.NoError(t, err) - _, err = clus.GetVMSClient(t).Unseal(context.Background(), lsID) + _, err = clus.GetVMSClient(t).Unseal(context.Background(), topicID, lsID) require.NoError(t, err) } @@ -191,18 +205,20 @@ func TestSyncLogStream(t *testing.T) { it.WithNumberOfClients(1), it.WithReporterClientFactory(metadata_repository.NewReporterClientFactory()), it.WithStorageNodeManagementClientFactory(metadata_repository.NewEmptyStorageNodeClientFactory()), + it.WithNumberOfTopics(1), } Convey("Given LogStream", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { + topicID := env.TopicIDs()[0] client := env.ClientAtIndex(t, 0) for i := 0; i < numLogs; i++ { - _, err := client.Append(context.Background(), []byte("foo")) + _, err := client.Append(context.Background(), topicID, []byte("foo")) So(err, ShouldBeNil) } Convey("Seal", func(ctx C) { - lsID := env.LogStreamID(t, 0) - rsp, err := env.GetVMSClient(t).Seal(context.Background(), lsID) + lsID := env.LogStreamID(t, topicID, 0) + rsp, err := env.GetVMSClient(t).Seal(context.Background(), topicID, lsID) So(err, ShouldBeNil) So(rsp.GetSealedGLSN(), ShouldEqual, types.GLSN(numLogs)) @@ -223,7 +239,7 @@ func TestSyncLogStream(t *testing.T) { So(snidmap, ShouldContainKey, victimSNID) // update LS - env.UpdateLS(t, lsID, victimSNID, newSNID) + env.UpdateLS(t, topicID, lsID, victimSNID, newSNID) // test if victimSNID does not exist in the logstream and newSNID exists // in the log stream @@ -252,13 +268,13 @@ func TestSyncLogStream(t *testing.T) { return false } rpt := rsp.GetUncommitReports()[0] - return rpt.GetHighWatermark() == types.GLSN(numLogs) && + return rpt.GetVersion() == types.Version(numLogs) && rpt.GetUncommittedLLSNOffset() == types.LLSN(numLogs+1) && rpt.GetUncommittedLLSNLength() == 0 && lsmd.Status == varlogpb.LogStreamStatusSealed }), ShouldBeTrue) - _, err := env.GetVMSClient(t).Unseal(context.Background(), lsID) + _, err := env.GetVMSClient(t).Unseal(context.Background(), topicID, lsID) So(err, ShouldBeNil) }) }) @@ -273,6 +289,7 @@ func TestSealLogStreamSealedIncompletely(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithReporterClientFactory(metadata_repository.NewReporterClientFactory()), it.WithStorageNodeManagementClientFactory(metadata_repository.NewEmptyStorageNodeClientFactory()), + it.WithNumberOfTopics(1), } Convey("Given cluster", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { @@ -285,13 +302,14 @@ func TestSealLogStreamSealedIncompletely(t *testing.T) { } failedSN := env.SNClientOf(t, failedSNID) - // remove replica to make Seal LS imcomplete - lsID := env.LogStreamIDs()[0] - err := failedSN.RemoveLogStream(context.TODO(), lsID) + // remove replica to make Seal LS incomplete + topicID := env.TopicIDs()[0] + lsID := env.LogStreamIDs(topicID)[0] + err := failedSN.RemoveLogStream(context.TODO(), topicID, lsID) So(err, ShouldBeNil) vmsCL := env.GetVMSClient(t) - rsp, err := vmsCL.Seal(context.TODO(), lsID) + rsp, err := vmsCL.Seal(context.TODO(), topicID, lsID) // So(err, ShouldNotBeNil) So(err, ShouldBeNil) So(len(rsp.GetLogStreams()), ShouldBeLessThan, env.ReplicationFactor()) @@ -303,7 +321,7 @@ func TestSealLogStreamSealedIncompletely(t *testing.T) { path := snmeta.GetStorageNode().GetStorages()[0].GetPath() So(len(path), ShouldBeGreaterThan, 0) - err = failedSN.AddLogStream(context.TODO(), lsID, path) + err = failedSN.AddLogStreamReplica(context.TODO(), topicID, lsID, path) So(err, ShouldBeNil) So(testutil.CompareWaitN(100, func() bool { @@ -330,13 +348,15 @@ func TestUnsealLogStreamUnsealedIncompletely(t *testing.T) { it.WithNumberOfLogStreams(1), it.WithReporterClientFactory(metadata_repository.NewReporterClientFactory()), it.WithStorageNodeManagementClientFactory(metadata_repository.NewEmptyStorageNodeClientFactory()), + it.WithNumberOfTopics(1), } Convey("Given Sealed LogStream", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { - lsID := env.LogStreamIDs()[0] + topicID := env.TopicIDs()[0] + lsID := env.LogStreamIDs(topicID)[0] vmsCL := env.GetVMSClient(t) - _, err := vmsCL.Seal(context.TODO(), lsID) + _, err := vmsCL.Seal(context.TODO(), topicID, lsID) So(err, ShouldBeNil) Convey("When Unseal is incomplete", func(ctx C) { @@ -348,11 +368,11 @@ func TestUnsealLogStreamUnsealedIncompletely(t *testing.T) { } failedSN := env.SNClientOf(t, failedSNID) - // remove replica to make Unseal LS imcomplete - err := failedSN.RemoveLogStream(context.TODO(), lsID) + // remove replica to make Unseal LS incomplete + err := failedSN.RemoveLogStream(context.TODO(), topicID, lsID) So(err, ShouldBeNil) - _, err = vmsCL.Unseal(context.TODO(), lsID) + _, err = vmsCL.Unseal(context.TODO(), topicID, lsID) So(err, ShouldNotBeNil) Convey("Then SN Watcher make LS sealed", func(ctx C) { @@ -362,7 +382,7 @@ func TestUnsealLogStreamUnsealedIncompletely(t *testing.T) { path := snmeta.GetStorageNode().GetStorages()[0].GetPath() So(len(path), ShouldBeGreaterThan, 0) - err = failedSN.AddLogStream(context.TODO(), lsID, path) + err = failedSN.AddLogStreamReplica(context.TODO(), topicID, lsID, path) So(err, ShouldBeNil) So(testutil.CompareWaitN(100, func() bool { @@ -390,11 +410,13 @@ func TestGCZombieLogStream(t *testing.T) { it.WithReporterClientFactory(metadata_repository.NewReporterClientFactory()), it.WithStorageNodeManagementClientFactory(metadata_repository.NewEmptyStorageNodeClientFactory()), it.WithVMSOptions(vmsOpts), + it.WithNumberOfTopics(1), } Convey("Given Varlog cluster", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { snID := env.StorageNodeIDAtIndex(t, 0) lsID := types.LogStreamID(1) + topicID := env.TopicIDs()[0] Convey("When AddLogStream to SN but do not register MR", func(ctx C) { snMCL := env.SNClientOf(t, snID) @@ -403,7 +425,7 @@ func TestGCZombieLogStream(t *testing.T) { So(err, ShouldBeNil) path := meta.GetStorageNode().GetStorages()[0].GetPath() - err = snMCL.AddLogStream(context.TODO(), lsID, path) + err = snMCL.AddLogStreamReplica(context.TODO(), topicID, lsID, path) So(err, ShouldBeNil) meta, err = snMCL.GetMetadata(context.TODO()) @@ -432,3 +454,179 @@ func TestGCZombieLogStream(t *testing.T) { }) })) } + +func TestAddLogStreamTopic(t *testing.T) { + opts := []it.Option{ + it.WithReplicationFactor(2), + it.WithNumberOfStorageNodes(2), + it.WithNumberOfLogStreams(1), + it.WithReporterClientFactory(metadata_repository.NewReporterClientFactory()), + it.WithStorageNodeManagementClientFactory(metadata_repository.NewEmptyStorageNodeClientFactory()), + it.WithNumberOfTopics(10), + it.WithNumberOfClients(1), + } + + Convey("Given Topic", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { + numLogs := 16 + + client := env.ClientAtIndex(t, 0) + for _, topicID := range env.TopicIDs() { + for i := 0; i < numLogs; i++ { + _, err := client.Append(context.Background(), topicID, []byte("foo")) + So(err, ShouldBeNil) + } + } + + env.ClientRefresh(t) + client = env.ClientAtIndex(t, 0) + + Convey("When AddLogStream", func(ctx C) { + vmsCL := env.GetVMSClient(t) + for _, topicID := range env.TopicIDs() { + _, err := vmsCL.AddLogStream(context.TODO(), topicID, nil) + So(err, ShouldBeNil) + } + + Convey("Then it should Appendable", func(ctx C) { + for _, topicID := range env.TopicIDs() { + for i := 0; i < numLogs; i++ { + _, err := client.Append(context.Background(), topicID, []byte("foo")) + So(err, ShouldBeNil) + } + } + }) + }) + })) +} + +func TestRemoveTopic(t *testing.T) { + opts := []it.Option{ + it.WithReplicationFactor(2), + it.WithNumberOfStorageNodes(2), + it.WithNumberOfLogStreams(2), + it.WithReporterClientFactory(metadata_repository.NewReporterClientFactory()), + it.WithStorageNodeManagementClientFactory(metadata_repository.NewEmptyStorageNodeClientFactory()), + it.WithNumberOfTopics(10), + it.WithNumberOfClients(1), + } + + Convey("Given Topic", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { + numLogs := 8 + + client := env.ClientAtIndex(t, 0) + for _, topicID := range env.TopicIDs() { + for i := 0; i < numLogs; i++ { + _, err := client.Append(context.Background(), topicID, []byte("foo")) + So(err, ShouldBeNil) + } + } + + Convey("When RemoveTopic", func(ctx C) { + vmsCL := env.GetVMSClient(t) + rmTopicID := env.TopicIDs()[0] + _, err := vmsCL.UnregisterTopic(context.TODO(), rmTopicID) + So(err, ShouldBeNil) + + meta := env.GetMetadata(t) + So(meta.GetTopic(rmTopicID), ShouldBeNil) + + Convey("Then unregistered topic should be Unappendable", func(ctx C) { + actx, cancel := context.WithTimeout(context.Background(), time.Second) + defer cancel() + _, err := client.Append(actx, rmTopicID, []byte("foo")) + So(err, ShouldNotBeNil) + + Convey("And other topics should Appendable", func(ctx C) { + for _, topicID := range env.TopicIDs() { + if topicID == rmTopicID { + continue + } + + for i := 0; i < numLogs; i++ { + _, err := client.Append(context.Background(), topicID, []byte("foo")) + So(err, ShouldBeNil) + } + } + }) + }) + }) + })) +} + +func TestAddTopic(t *testing.T) { + opts := []it.Option{ + it.WithReplicationFactor(2), + it.WithNumberOfStorageNodes(2), + it.WithNumberOfLogStreams(2), + it.WithReporterClientFactory(metadata_repository.NewReporterClientFactory()), + it.WithStorageNodeManagementClientFactory(metadata_repository.NewEmptyStorageNodeClientFactory()), + it.WithNumberOfTopics(3), + } + + Convey("Given Topic", t, it.WithTestCluster(t, opts, func(env *it.VarlogCluster) { + testTimeout := 5 * time.Second + + tctx, tcancel := context.WithTimeout(context.TODO(), testTimeout) + defer tcancel() + + grp, gctx := errgroup.WithContext(tctx) + for _, topicID := range env.TopicIDs() { + tid := topicID + grp.Go(func() (err error) { + cl, err := varlog.Open(context.Background(), env.ClusterID(), env.MRRPCEndpoints()) + if err != nil { + return err + } + defer cl.Close() + + var glsn types.GLSN + for gctx.Err() == nil { + glsn, err = cl.Append(context.Background(), tid, []byte("foo")) + if err != nil { + err = fmt.Errorf("topic=%v,err=%v", tid, err) + break + } + } + + t.Logf("topic=%v, glsn:%v\n", tid, glsn) + return + }) + } + + Convey("When AddTopic", func(ctx C) { + vmsCL := env.GetVMSClient(t) + topicDesc, err := vmsCL.AddTopic(context.TODO()) + So(err, ShouldBeNil) + + addTopicID := topicDesc.Topic.TopicID + + _, err = vmsCL.AddLogStream(context.TODO(), addTopicID, nil) + So(err, ShouldBeNil) + + grp.Go(func() (err error) { + cl, err := varlog.Open(context.Background(), env.ClusterID(), env.MRRPCEndpoints()) + if err != nil { + return err + } + defer cl.Close() + + var glsn types.GLSN + for gctx.Err() == nil { + glsn, err = cl.Append(context.Background(), addTopicID, []byte("foo")) + if err != nil { + err = fmt.Errorf("topic=%v,err=%v", addTopicID, err) + break + } + } + + t.Logf("topic=%v, glsn:%v\n", addTopicID, glsn) + return + }) + + Convey("Then it should appendable", func(ctx C) { + err = grp.Wait() + So(err, ShouldBeNil) + }) + }) + })) +} diff --git a/test/it/management/vms_test.go b/test/it/management/vms_test.go index 40fedef5f..a77f86dcd 100644 --- a/test/it/management/vms_test.go +++ b/test/it/management/vms_test.go @@ -24,7 +24,7 @@ import ( "github.com/kakao/varlog/vtesting" ) -// FIXME: This test checks MRManager, move unit test or something similiar. +// FIXME: This test checks MRManager, move unit test or something similar. func TestVarlogNewMRManager(t *testing.T) { Convey("Given that MRManager runs without any running MR", t, func(c C) { const ( @@ -278,8 +278,8 @@ func TestVarlogSNWatcher(t *testing.T) { mrAddr := mr.GetServerAddr() snID := env.AddSN(t) - - lsID := env.AddLS(t) + topicID := env.AddTopic(t) + lsID := env.AddLS(t, topicID) vmsOpts := it.NewTestVMSOptions() vmsOpts.MRManagerOptions.MetadataRepositoryAddresses = []string{mrAddr} @@ -304,7 +304,7 @@ func TestVarlogSNWatcher(t *testing.T) { Convey("When seal LS", func(ctx C) { snID := env.PrimaryStorageNodeIDOf(t, lsID) - _, _, err := env.SNClientOf(t, snID).Seal(context.TODO(), lsID, types.InvalidGLSN) + _, _, err := env.SNClientOf(t, snID).Seal(context.TODO(), topicID, lsID, types.InvalidGLSN) So(err, ShouldBeNil) Convey("Then it should be reported by watcher", func(ctx C) { @@ -390,8 +390,8 @@ func TestVarlogStatRepositoryRefresh(t *testing.T) { mrAddr := mr.GetServerAddr() snID := env.AddSN(t) - - lsID := env.AddLS(t) + topicID := env.AddTopic(t) + lsID := env.AddLS(t, topicID) cmView := &dummyCMView{ clusterID: env.ClusterID(), @@ -457,7 +457,7 @@ func TestVarlogStatRepositoryRefresh(t *testing.T) { }) Convey("When AddLS", func(ctx C) { - lsID2 := env.AddLS(t) + lsID2 := env.AddLS(t, topicID) Convey("Then refresh the statRepository and it should be updated", func(ctx C) { statRepository.Refresh(context.TODO()) @@ -521,8 +521,8 @@ func TestVarlogStatRepositoryReport(t *testing.T) { mrAddr := mr.GetServerAddr() snID := env.AddSN(t) - - lsID := env.AddLS(t) + topicID := env.AddTopic(t) + lsID := env.AddLS(t, topicID) cmView := &dummyCMView{ clusterID: env.ClusterID(), @@ -537,7 +537,7 @@ func TestVarlogStatRepositoryReport(t *testing.T) { Convey("When Report", func(ctx C) { sn := env.LookupSN(t, snID) - _, _, err := sn.Seal(context.TODO(), lsID, types.InvalidGLSN) + _, _, err := sn.Seal(context.TODO(), topicID, lsID, types.InvalidGLSN) So(err, ShouldBeNil) snm, err := sn.GetMetadata(context.TODO()) diff --git a/test/it/mrconnector/mr_connector_test.go b/test/it/mrconnector/mr_connector_test.go index ccb155c00..6a78144b9 100644 --- a/test/it/mrconnector/mr_connector_test.go +++ b/test/it/mrconnector/mr_connector_test.go @@ -158,7 +158,7 @@ func TestMRConnector(t *testing.T) { } // NOTE (jun): Does not HealthCheck imply this? - mcl, err := mrc.NewMetadataRepositoryManagementClientFromRpcConn(conn) + mcl, err := mrc.NewMetadataRepositoryManagementClientFromRPCConn(conn) if err != nil { return false } @@ -229,11 +229,9 @@ func TestMRConnector(t *testing.T) { if cl, err := mrConn.Client(context.TODO()); err != nil { return err - } else { - if _, err := cl.GetMetadata(ctx); err != nil { - cl.Close() - return err - } + } else if _, err := cl.GetMetadata(ctx); err != nil { + cl.Close() + return err } return nil diff --git a/test/it/testenv.go b/test/it/testenv.go index 0d46491e1..641e0cacc 100644 --- a/test/it/testenv.go +++ b/test/it/testenv.go @@ -4,9 +4,9 @@ import ( "context" "fmt" "log" - "math" "math/rand" "os" + "sort" "sync" "testing" "time" @@ -16,6 +16,8 @@ import ( "go.uber.org/zap" "google.golang.org/grpc/health/grpc_health_v1" + "github.com/kakao/varlog/internal/storagenode/volume" + "github.com/kakao/varlog/internal/metadata_repository" "github.com/kakao/varlog/internal/storagenode" "github.com/kakao/varlog/internal/storagenode/reportcommitter" @@ -51,7 +53,7 @@ type VarlogCluster struct { storageNodes map[types.StorageNodeID]*storagenode.StorageNode snMCLs map[types.StorageNodeID]snc.StorageNodeManagementClient reportCommitters map[types.StorageNodeID]reportcommitter.Client - volumes map[types.StorageNodeID]storagenode.Volume + volumes map[types.StorageNodeID]volume.Volume snAddrs map[types.StorageNodeID]string storageNodeIDs []types.StorageNodeID nextSNID types.StorageNodeID @@ -59,8 +61,8 @@ type VarlogCluster struct { snWGs map[types.StorageNodeID]*sync.WaitGroup // log streams - muLS sync.Mutex - logStreamIDs []types.LogStreamID + muLS sync.Mutex + topicLogStreamIDs map[types.TopicID][]types.LogStreamID // FIXME: type of value replicas map[types.LogStreamID][]*varlogpb.ReplicaDescriptor @@ -89,13 +91,14 @@ func NewVarlogCluster(t *testing.T, opts ...Option) *VarlogCluster { mrMCLs: make(map[types.NodeID]mrc.MetadataRepositoryManagementClient), storageNodes: make(map[types.StorageNodeID]*storagenode.StorageNode), snMCLs: make(map[types.StorageNodeID]snc.StorageNodeManagementClient), - volumes: make(map[types.StorageNodeID]storagenode.Volume), + volumes: make(map[types.StorageNodeID]volume.Volume), snAddrs: make(map[types.StorageNodeID]string), reportCommitters: make(map[types.StorageNodeID]reportcommitter.Client), replicas: make(map[types.LogStreamID][]*varlogpb.ReplicaDescriptor), snWGs: make(map[types.StorageNodeID]*sync.WaitGroup), + topicLogStreamIDs: make(map[types.TopicID][]types.LogStreamID), nextSNID: types.StorageNodeID(1), - manualNextLSID: types.LogStreamID(math.MaxUint32), + manualNextLSID: types.MaxLogStreamID, rng: rand.New(rand.NewSource(time.Now().UnixNano())), } @@ -117,6 +120,9 @@ func NewVarlogCluster(t *testing.T, opts ...Option) *VarlogCluster { // sn clus.initSN(t) + // topic + clus.initTopic(t) + // ls clus.initLS(t) @@ -160,9 +166,17 @@ func (clus *VarlogCluster) initSN(t *testing.T) { } } +func (clus *VarlogCluster) initTopic(t *testing.T) { + for i := 0; i < clus.numTopic; i++ { + clus.AddTopic(t) + } +} + func (clus *VarlogCluster) initLS(t *testing.T) { - for i := 0; i < clus.numLS; i++ { - clus.AddLS(t) + for _, topicID := range clus.TopicIDs() { + for i := 0; i < clus.numLS; i++ { + clus.AddLS(t, topicID) + } } } @@ -398,7 +412,7 @@ func (clus *VarlogCluster) Close(t *testing.T) { for _, wg := range clus.snWGs { wg.Wait() } - clus.logStreamIDs = nil + clus.topicLogStreamIDs = nil require.NoError(t, clus.portLease.Release()) } @@ -470,7 +484,7 @@ func (clus *VarlogCluster) AddSN(t *testing.T) types.StorageNodeID { clus.nextSNID++ volumeDir := t.TempDir() - volume, err := storagenode.NewVolume(volumeDir) + volume, err := volume.New(volumeDir) require.NoError(t, err) sn, err := storagenode.New(context.TODO(), @@ -600,18 +614,38 @@ func (clus *VarlogCluster) RecoverSN(t *testing.T, snID types.StorageNodeID) *st return sn } -func (clus *VarlogCluster) AddLS(t *testing.T) types.LogStreamID { +func (clus *VarlogCluster) AddTopic(t *testing.T) types.TopicID { + clus.muSN.Lock() + defer clus.muSN.Unlock() + + clus.muLS.Lock() + defer clus.muLS.Unlock() + + log.Println("AddTopic") + + rsp, err := clus.vmsCL.AddTopic(context.Background()) + require.NoError(t, err) + log.Printf("AddTopic: %+v", rsp) + + topicDesc := rsp.GetTopic() + topicID := topicDesc.GetTopicID() + + clus.topicLogStreamIDs[topicID] = nil + return topicID +} + +func (clus *VarlogCluster) AddLS(t *testing.T, topicID types.TopicID) types.LogStreamID { clus.muSN.Lock() defer clus.muSN.Unlock() clus.muLS.Lock() defer clus.muLS.Unlock() - log.Println("AddLS") + log.Printf("AddLS[topidID:%v]\n", topicID) require.GreaterOrEqual(t, len(clus.storageNodes), clus.nrRep) - rsp, err := clus.vmsCL.AddLogStream(context.Background(), nil) + rsp, err := clus.vmsCL.AddLogStream(context.Background(), topicID, nil) require.NoError(t, err) log.Printf("AddLS: AddLogStream: %+v", rsp) @@ -619,12 +653,14 @@ func (clus *VarlogCluster) AddLS(t *testing.T) types.LogStreamID { logStreamID := logStreamDesc.GetLogStreamID() // FIXME: use map to store logstream and its replicas - clus.logStreamIDs = append(clus.logStreamIDs, logStreamID) + logStreamIDs, _ := clus.topicLogStreamIDs[topicID] + clus.topicLogStreamIDs[topicID] = append(logStreamIDs, logStreamID) clus.replicas[logStreamID] = logStreamDesc.GetReplicas() + return logStreamID } -func (clus *VarlogCluster) UpdateLS(t *testing.T, lsID types.LogStreamID, oldsn, newsn types.StorageNodeID) { +func (clus *VarlogCluster) UpdateLS(t *testing.T, tpID types.TopicID, lsID types.LogStreamID, oldsn, newsn types.StorageNodeID) { clus.muSN.Lock() defer clus.muSN.Unlock() @@ -656,7 +692,7 @@ func (clus *VarlogCluster) UpdateLS(t *testing.T, lsID types.LogStreamID, oldsn, StorageNodeID: oldsn, } - _, err = clus.vmsCL.UpdateLogStream(context.Background(), lsID, oldReplica, newReplica) + _, err = clus.vmsCL.UpdateLogStream(context.Background(), tpID, lsID, oldReplica, newReplica) require.NoError(t, err) // update replicas @@ -667,7 +703,7 @@ func (clus *VarlogCluster) UpdateLS(t *testing.T, lsID types.LogStreamID, oldsn, } } -func (clus *VarlogCluster) AddLSWithoutMR(t *testing.T) types.LogStreamID { +func (clus *VarlogCluster) AddLSWithoutMR(t *testing.T, topicID types.TopicID) types.LogStreamID { clus.muSN.Lock() defer clus.muSN.Unlock() @@ -680,13 +716,15 @@ func (clus *VarlogCluster) AddLSWithoutMR(t *testing.T) types.LogStreamID { clus.manualNextLSID-- rds := make([]*varlogpb.ReplicaDescriptor, 0, clus.nrRep) - replicas := make([]snpb.Replica, 0, clus.nrRep) + replicas := make([]varlogpb.Replica, 0, clus.nrRep) for idx := range clus.rng.Perm(len(clus.storageNodeIDs))[:clus.nrRep] { snID := clus.storageNodeIDs[idx] - replicas = append(replicas, snpb.Replica{ - StorageNodeID: snID, - LogStreamID: lsID, - Address: clus.snAddrs[snID], + replicas = append(replicas, varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + Address: clus.snAddrs[snID], + }, + LogStreamID: lsID, }) snmd, err := clus.storageNodeManagementClientOf(t, snID).GetMetadata(context.Background()) @@ -701,27 +739,29 @@ func (clus *VarlogCluster) AddLSWithoutMR(t *testing.T) types.LogStreamID { for _, rd := range rds { snID := rd.StorageNodeID path := rd.Path - require.NoError(t, clus.storageNodeManagementClientOf(t, snID).AddLogStream( + require.NoError(t, clus.storageNodeManagementClientOf(t, snID).AddLogStreamReplica( context.Background(), + topicID, lsID, path, )) - status, _, err := clus.storageNodeManagementClientOf(t, snID).Seal(context.Background(), lsID, types.InvalidGLSN) + status, _, err := clus.storageNodeManagementClientOf(t, snID).Seal(context.Background(), topicID, lsID, types.InvalidGLSN) require.NoError(t, err) require.Equal(t, varlogpb.LogStreamStatusSealed, status) - require.NoError(t, clus.storageNodeManagementClientOf(t, snID).Unseal(context.Background(), lsID, replicas)) + require.NoError(t, clus.storageNodeManagementClientOf(t, snID).Unseal(context.Background(), topicID, lsID, replicas)) } // FIXME: use map to store logstream and its replicas - clus.logStreamIDs = append(clus.logStreamIDs, lsID) + logStreamIDs, _ := clus.topicLogStreamIDs[topicID] + clus.topicLogStreamIDs[topicID] = append(logStreamIDs, lsID) clus.replicas[lsID] = rds t.Logf("AddLS without MR: lsid=%d, replicas=%+v", lsID, replicas) return lsID } -func (clus *VarlogCluster) AddLSIncomplete(t *testing.T) types.LogStreamID { +func (clus *VarlogCluster) AddLSIncomplete(t *testing.T, topicID types.TopicID) types.LogStreamID { clus.muSN.Lock() defer clus.muSN.Unlock() @@ -735,24 +775,27 @@ func (clus *VarlogCluster) AddLSIncomplete(t *testing.T) types.LogStreamID { lsID := clus.manualNextLSID clus.manualNextLSID-- - replicas := make([]snpb.Replica, 0, clus.nrRep-1) + replicas := make([]varlogpb.Replica, 0, clus.nrRep-1) for idx := range clus.rng.Perm(len(clus.storageNodeIDs))[:clus.nrRep-1] { snID := clus.storageNodeIDs[idx] - replicas = append(replicas, snpb.Replica{ - StorageNodeID: snID, - LogStreamID: lsID, - Address: clus.snAddrs[snID], + replicas = append(replicas, varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snID, + Address: clus.snAddrs[snID], + }, + LogStreamID: lsID, }) } for _, replica := range replicas { - snID := replica.StorageNodeID + snID := replica.StorageNode.StorageNodeID snmd, err := clus.storageNodeManagementClientOf(t, snID).GetMetadata(context.Background()) require.NoError(t, err) path := snmd.GetStorageNode().GetStorages()[0].GetPath() - require.NoError(t, clus.storageNodeManagementClientOf(t, snID).AddLogStream( + require.NoError(t, clus.storageNodeManagementClientOf(t, snID).AddLogStreamReplica( context.Background(), + topicID, lsID, path, )) @@ -761,7 +804,7 @@ func (clus *VarlogCluster) AddLSIncomplete(t *testing.T) types.LogStreamID { return lsID } -func (clus *VarlogCluster) UpdateLSWithoutMR(t *testing.T, logStreamID types.LogStreamID, storageNodeID types.StorageNodeID, clear bool) { +func (clus *VarlogCluster) UpdateLSWithoutMR(t *testing.T, topicID types.TopicID, logStreamID types.LogStreamID, storageNodeID types.StorageNodeID, clear bool) { clus.muSN.Lock() defer clus.muSN.Unlock() @@ -792,7 +835,7 @@ func (clus *VarlogCluster) UpdateLSWithoutMR(t *testing.T, logStreamID types.Log return false } - _, _, err = clus.snMCLs[snid].Seal(context.Background(), logStreamID, lsmd.HighWatermark) + _, _, err = clus.snMCLs[snid].Seal(context.Background(), topicID, logStreamID, lsmd.HighWatermark) require.NoError(t, err) } return true @@ -805,7 +848,7 @@ func (clus *VarlogCluster) UpdateLSWithoutMR(t *testing.T, logStreamID types.Log path := meta.GetStorageNode().GetStorages()[0].GetPath() - require.NoError(t, clus.snMCLs[storageNodeID].AddLogStream(context.Background(), logStreamID, path)) + require.NoError(t, clus.snMCLs[storageNodeID].AddLogStreamReplica(context.Background(), topicID, logStreamID, path)) replicas[0] = &varlogpb.ReplicaDescriptor{ StorageNodeID: storageNodeID, @@ -815,11 +858,11 @@ func (clus *VarlogCluster) UpdateLSWithoutMR(t *testing.T, logStreamID types.Log clus.replicas[logStreamID] = replicas if clear { - require.NoError(t, clus.snMCLs[victim.StorageNodeID].RemoveLogStream(context.Background(), logStreamID)) + require.NoError(t, clus.snMCLs[victim.StorageNodeID].RemoveLogStream(context.Background(), topicID, logStreamID)) } } -func (clus *VarlogCluster) UnsealWithoutMR(t *testing.T, logStreamID types.LogStreamID, expectedHighWatermark types.GLSN) { +func (clus *VarlogCluster) UnsealWithoutMR(t *testing.T, topicID types.TopicID, logStreamID types.LogStreamID, expectedHighWatermark types.GLSN) { clus.muSN.Lock() defer clus.muSN.Unlock() @@ -831,7 +874,7 @@ func (clus *VarlogCluster) UnsealWithoutMR(t *testing.T, logStreamID types.LogSt rds, ok := clus.replicas[logStreamID] require.Equal(t, ok, true) - replicas := make([]snpb.Replica, 0, len(rds)) + replicas := make([]varlogpb.Replica, 0, len(rds)) for _, rd := range rds { snid := rd.GetStorageNodeID() require.Contains(t, clus.snMCLs, snid) @@ -845,9 +888,11 @@ func (clus *VarlogCluster) UnsealWithoutMR(t *testing.T, logStreamID types.LogSt require.Equal(t, expectedHighWatermark, lsmd.GetHighWatermark()) - replicas = append(replicas, snpb.Replica{ - StorageNodeID: snid, - LogStreamID: logStreamID, + replicas = append(replicas, varlogpb.Replica{ + StorageNode: varlogpb.StorageNode{ + StorageNodeID: snid, + }, + LogStreamID: logStreamID, }) } @@ -861,11 +906,11 @@ func (clus *VarlogCluster) UnsealWithoutMR(t *testing.T, logStreamID types.LogSt require.True(t, ok) if lsmd.GetStatus() == varlogpb.LogStreamStatusSealing { - _, _, err = clus.snMCLs[snid].Seal(context.Background(), logStreamID, types.InvalidGLSN) + _, _, err = clus.snMCLs[snid].Seal(context.Background(), topicID, logStreamID, types.InvalidGLSN) require.NoError(t, err) } - err = clus.snMCLs[snid].Unseal(context.Background(), logStreamID, replicas) + err = clus.snMCLs[snid].Unseal(context.Background(), topicID, logStreamID, replicas) require.NoError(t, err) } } @@ -917,10 +962,10 @@ func (clus *VarlogCluster) newMRClient(t *testing.T, idx int) { rpcConn, err := rpc.NewConn(context.Background(), addr) require.NoError(t, err) - cl, err := mrc.NewMetadataRepositoryClientFromRpcConn(rpcConn) + cl, err := mrc.NewMetadataRepositoryClientFromRPCConn(rpcConn) require.NoError(t, err) - mcl, err := mrc.NewMetadataRepositoryManagementClientFromRpcConn(rpcConn) + mcl, err := mrc.NewMetadataRepositoryManagementClientFromRPCConn(rpcConn) require.NoError(t, err) clus.mrCLs[id] = cl @@ -1157,21 +1202,41 @@ func (clus *VarlogCluster) Logger() *zap.Logger { return clus.logger } -func (clus *VarlogCluster) LogStreamIDs() []types.LogStreamID { +func (clus *VarlogCluster) TopicIDs() []types.TopicID { clus.muLS.Lock() defer clus.muLS.Unlock() - ret := make([]types.LogStreamID, len(clus.logStreamIDs)) - copy(ret, clus.logStreamIDs) + var ret []types.TopicID + for topicID := range clus.topicLogStreamIDs { + ret = append(ret, topicID) + } + + sort.Slice(ret, func(i, j int) bool { return ret[i] < ret[j] }) + return ret } -func (clus *VarlogCluster) LogStreamID(t *testing.T, idx int) types.LogStreamID { +func (clus *VarlogCluster) LogStreamIDs(topicID types.TopicID) []types.LogStreamID { clus.muLS.Lock() defer clus.muLS.Unlock() - require.Greater(t, len(clus.logStreamIDs), idx) - return clus.logStreamIDs[idx] + logStreamIDs, ok := clus.topicLogStreamIDs[topicID] + if !ok { + return nil + } + ret := make([]types.LogStreamID, len(logStreamIDs)) + copy(ret, logStreamIDs) + return ret +} + +func (clus *VarlogCluster) LogStreamID(t *testing.T, topicID types.TopicID, idx int) types.LogStreamID { + clus.muLS.Lock() + defer clus.muLS.Unlock() + + logStreamIDs, _ := clus.topicLogStreamIDs[topicID] + require.Greater(t, len(logStreamIDs), idx) + + return logStreamIDs[idx] } func (clus *VarlogCluster) CloseMRClientAt(t *testing.T, idx int) { @@ -1283,8 +1348,9 @@ func (clus *VarlogCluster) NumberOfStorageNodes() int { return len(clus.storageNodes) } -func (clus *VarlogCluster) NumberOfLogStreams() int { - return len(clus.logStreamIDs) +func (clus *VarlogCluster) NumberOfLogStreams(topicID types.TopicID) int { + logStreamIDs, _ := clus.topicLogStreamIDs[topicID] + return len(logStreamIDs) } func (clus *VarlogCluster) NumberOfClients() int { @@ -1360,7 +1426,7 @@ func (clus *VarlogCluster) getCachedMetadata() *varlogpb.MetadataDescriptor { return clus.cachedMetadata } -func (clus *VarlogCluster) AppendUncommittedLog(t *testing.T, lsID types.LogStreamID, data []byte) { +func (clus *VarlogCluster) AppendUncommittedLog(t *testing.T, topicID types.TopicID, lsID types.LogStreamID, data []byte) { clus.muSN.Lock() defer clus.muSN.Unlock() @@ -1406,7 +1472,7 @@ func (clus *VarlogCluster) AppendUncommittedLog(t *testing.T, lsID types.LogStre } defer cli.Close() - _, err = cli.Append(ctx, lsID, data) + _, err = cli.Append(ctx, topicID, lsID, data) assert.Error(t, err) }() @@ -1428,7 +1494,7 @@ func (clus *VarlogCluster) AppendUncommittedLog(t *testing.T, lsID types.LogStre func (clus *VarlogCluster) CommitWithoutMR(t *testing.T, lsID types.LogStreamID, committedLLSNOffset types.LLSN, committedGLSNOffset types.GLSN, committedGLSNLen uint64, - prevHighWatermark, highWatermark types.GLSN) { + version types.Version, highWatermark types.GLSN) { clus.muSN.Lock() defer clus.muSN.Unlock() @@ -1441,10 +1507,10 @@ func (clus *VarlogCluster) CommitWithoutMR(t *testing.T, lsID types.LogStreamID, StorageNodeID: r.StorageNodeID, CommitResult: snpb.LogStreamCommitResult{ LogStreamID: lsID, + Version: version, CommittedLLSNOffset: committedLLSNOffset, CommittedGLSNOffset: committedGLSNOffset, CommittedGLSNLength: committedGLSNLen, - PrevHighWatermark: prevHighWatermark, HighWatermark: highWatermark, }, } @@ -1455,7 +1521,7 @@ func (clus *VarlogCluster) CommitWithoutMR(t *testing.T, lsID types.LogStreamID, } } -func (clus *VarlogCluster) WaitCommit(t *testing.T, lsID types.LogStreamID, highWatermark types.GLSN) { +func (clus *VarlogCluster) WaitCommit(t *testing.T, lsID types.LogStreamID, version types.Version) { clus.muSN.Lock() defer clus.muSN.Unlock() @@ -1480,7 +1546,7 @@ func (clus *VarlogCluster) WaitCommit(t *testing.T, lsID types.LogStreamID, high reports := rsp.GetUncommitReports() for _, report := range reports { - if report.GetLogStreamID() == lsID && report.GetHighWatermark() == highWatermark { + if report.GetLogStreamID() == lsID && report.GetVersion() == version { committed++ break } @@ -1496,7 +1562,7 @@ func (clus *VarlogCluster) WaitSealed(t *testing.T, lsID types.LogStreamID) { require.Eventually(t, func() bool { vmsMeta, err := clus.vmsServer.Metadata(context.Background()) return err == nil && vmsMeta.GetLogStream(lsID) != nil - }, vms.RELOAD_INTERVAL*10, 100*time.Millisecond) + }, vms.ReloadInterval*10, 100*time.Millisecond) require.Eventually(t, func() bool { snMCLs := clus.StorageNodesManagementClients() diff --git a/test/it/testenv_test.go b/test/it/testenv_test.go index 57b52082f..1dbf379d8 100644 --- a/test/it/testenv_test.go +++ b/test/it/testenv_test.go @@ -14,6 +14,7 @@ func TestVarlogRegisterStorageNode(t *testing.T) { env := NewVarlogCluster(t, WithNumberOfStorageNodes(1), WithNumberOfLogStreams(1), + WithNumberOfTopics(1), ) defer env.Close(t) diff --git a/test/marshal_test.go b/test/marshal_test.go index 05de4bc0e..ed8c66824 100644 --- a/test/marshal_test.go +++ b/test/marshal_test.go @@ -16,13 +16,12 @@ func TestSnapshotMarshal(t *testing.T) { for i := 0; i < 128; i++ { gls := &mrpb.LogStreamCommitResults{} - gls.HighWatermark = types.GLSN((i + 1) * 16) - gls.PrevHighWatermark = types.GLSN(i * 16) + gls.Version = types.Version(i + 1) for j := 0; j < 1024; j++ { lls := snpb.LogStreamCommitResult{} lls.LogStreamID = types.LogStreamID(j) - lls.CommittedGLSNOffset = gls.PrevHighWatermark + types.GLSN(j*2) + lls.CommittedGLSNOffset = types.GLSN(1024*i + j*2) lls.CommittedGLSNLength = uint64(lls.CommittedGLSNOffset) + 1 gls.CommitResults = append(gls.CommitResults, lls) @@ -35,18 +34,17 @@ func TestSnapshotMarshal(t *testing.T) { smr.Marshal() - log.Println(time.Now().Sub(st)) + log.Println(time.Since(st)) } func TestGlobalLogStreamMarshal(t *testing.T) { gls := &mrpb.LogStreamCommitResults{} - gls.HighWatermark = types.GLSN(1000) - gls.PrevHighWatermark = types.GLSN(16) + gls.Version = types.Version(1000) for i := 0; i < 128*1024; i++ { lls := snpb.LogStreamCommitResult{} lls.LogStreamID = types.LogStreamID(i) - lls.CommittedGLSNOffset = gls.PrevHighWatermark + types.GLSN(i*2) + lls.CommittedGLSNOffset = types.GLSN(i) lls.CommittedGLSNLength = uint64(lls.CommittedGLSNOffset) + 1 gls.CommitResults = append(gls.CommitResults, lls) @@ -55,5 +53,5 @@ func TestGlobalLogStreamMarshal(t *testing.T) { gls.Marshal() - log.Println(time.Now().Sub(st)) + log.Println(time.Since(st)) } diff --git a/test/rpc_e2e/rpc_test.go b/test/rpc_e2e/rpc_test.go index ee0292c2b..75e4b6285 100644 --- a/test/rpc_e2e/rpc_test.go +++ b/test/rpc_e2e/rpc_test.go @@ -1,3 +1,4 @@ +//go:build rpc_e2e // +build rpc_e2e package rpc_e2e diff --git a/tools/tools.go b/tools/tools.go new file mode 100644 index 000000000..89b45d086 --- /dev/null +++ b/tools/tools.go @@ -0,0 +1,11 @@ +//go:build tools +// +build tools + +package tools + +import ( + _ "github.com/golang/mock/gomock" + _ "golang.org/x/lint/golint" + _ "golang.org/x/tools/cmd/goimports" + _ "golang.org/x/tools/cmd/stringer" +) diff --git a/vendor/github.com/cenkalti/backoff/v4/go.mod b/vendor/github.com/cenkalti/backoff/v4/go.mod deleted file mode 100644 index f811bead9..000000000 --- a/vendor/github.com/cenkalti/backoff/v4/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module github.com/cenkalti/backoff/v4 - -go 1.13 diff --git a/vendor/github.com/cespare/xxhash/v2/go.mod b/vendor/github.com/cespare/xxhash/v2/go.mod deleted file mode 100644 index 49f67608b..000000000 --- a/vendor/github.com/cespare/xxhash/v2/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module github.com/cespare/xxhash/v2 - -go 1.11 diff --git a/vendor/github.com/cespare/xxhash/v2/go.sum b/vendor/github.com/cespare/xxhash/v2/go.sum deleted file mode 100644 index e69de29bb..000000000 diff --git a/vendor/github.com/cockroachdb/errors/go.mod b/vendor/github.com/cockroachdb/errors/go.mod deleted file mode 100644 index 6e2cb2213..000000000 --- a/vendor/github.com/cockroachdb/errors/go.mod +++ /dev/null @@ -1,17 +0,0 @@ -module github.com/cockroachdb/errors - -go 1.13 - -require ( - github.com/cockroachdb/datadriven v1.0.0 - github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f - github.com/cockroachdb/redact v1.0.8 - github.com/cockroachdb/sentry-go v0.6.1-cockroachdb.2 - github.com/gogo/protobuf v1.3.1 - github.com/gogo/status v1.1.0 - github.com/golang/protobuf v1.4.2 - github.com/hydrogen18/memlistener v0.0.0-20141126152155-54553eb933fb - github.com/kr/pretty v0.1.0 - github.com/pkg/errors v0.9.1 - google.golang.org/grpc v1.29.1 -) diff --git a/vendor/github.com/cockroachdb/errors/go.sum b/vendor/github.com/cockroachdb/errors/go.sum deleted file mode 100644 index 0cddf217e..000000000 --- a/vendor/github.com/cockroachdb/errors/go.sum +++ /dev/null @@ -1,272 +0,0 @@ -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/CloudyKit/fastprinter v0.0.0-20170127035650-74b38d55f37a/go.mod h1:EFZQ978U7x8IRnstaskI3IysnWY5Ao3QgZUKOXlsAdw= -github.com/CloudyKit/jet v2.1.3-0.20180809161101-62edd43e4f88+incompatible/go.mod h1:HPYO+50pSWkPoj9Q/eq0aRGByCL6ScRlUmiEX5Zgm+w= -github.com/Joker/hpp v1.0.0/go.mod h1:8x5n+M1Hp5hC0g8okX3sR3vFQwynaX/UgSOM9MeBKzY= -github.com/Joker/jade v1.0.1-0.20190614124447-d475f43051e7/go.mod h1:6E6s8o2AE4KhCrqr6GRJjdC/gNfTdxkIXvuGZZda2VM= -github.com/Shopify/goreferrer v0.0.0-20181106222321-ec9c9a553398/go.mod h1:a1uqRtAwp2Xwc6WNPJEufxJ7fx3npB4UV/JOLmbu5I0= -github.com/ajg/form v1.5.1/go.mod h1:uL1WgH+h2mgNtvBq0339dVnzXdBETtL2LeUXaIv25UY= -github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= -github.com/aymerick/raymond v2.0.3-0.20180322193309-b565731e1464+incompatible/go.mod h1:osfaiScAUVup+UC9Nfq76eWqDhXlp+4UYaA8uhTBO6g= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/cockroachdb/datadriven v1.0.0 h1:uhZrAfEayBecH2w2tZmhe20HJ7hDvrrA4x2Bg9YdZKM= -github.com/cockroachdb/datadriven v1.0.0/go.mod h1:5Ib8Meh+jk1RlHIXej6Pzevx/NLlNvQB9pmSBZErGA4= -github.com/cockroachdb/errors v1.6.1/go.mod h1:tm6FTP5G81vwJ5lC0SizQo374JNCOPrHyXGitRJoDqM= -github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f h1:o/kfcElHqOiXqcou5a3rIlMc7oJbMQkeLk0VQJ7zgqY= -github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI= -github.com/cockroachdb/redact v1.0.8 h1:8QG/764wK+vmEYoOlfobpe12EQcS81ukx/a4hdVMxNw= -github.com/cockroachdb/redact v1.0.8/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg= -github.com/cockroachdb/sentry-go v0.6.1-cockroachdb.2 h1:IKgmqgMQlVJIZj19CdocBeSfSaiCbEBZGKODaixqtHM= -github.com/cockroachdb/sentry-go v0.6.1-cockroachdb.2/go.mod h1:8BT+cPK6xvFOcRlk0R8eg+OTkcqI6baNH4xAkpiYVvQ= -github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0/go.mod h1:4Zcjuz89kmFXt9morQgcfYZAYZ5n8WHjt81YYWIwtTM= -github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= -github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= -github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= -github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/dgraph-io/badger v1.6.0/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhYy6psNgSztDR4= -github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= -github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw= -github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= -github.com/eknkc/amber v0.0.0-20171010120322-cdade1c07385/go.mod h1:0vRUJqYpeSZifjYj7uP3BG/gKcuzL9xWVV/Y+cK33KM= -github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/etcd-io/bbolt v1.3.3/go.mod h1:ZF2nL25h33cCyBtcyWeZ2/I3HQOfTP+0PIEvHjkjCrw= -github.com/fasthttp-contrib/websocket v0.0.0-20160511215533-1f3b11f56072/go.mod h1:duJ4Jxv5lDcvg4QuQr0oowTf7dz4/CR8NtyCooz9HL8= -github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M= -github.com/flosch/pongo2 v0.0.0-20190707114632-bbf5a6c351f4/go.mod h1:T9YF2M40nIgbVgp3rreNmTged+9HrbNTIQf1PsaIiTA= -github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= -github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= -github.com/gavv/httpexpect v2.0.0+incompatible/go.mod h1:x+9tiU1YnrOvnB725RkpoLv1M62hOWzwo5OXotisrKc= -github.com/gin-contrib/sse v0.0.0-20190301062529-5545eab6dad3/go.mod h1:VJ0WA2NBN22VlZ2dKZQPAPnyWw5XTlK1KymzLKsr59s= -github.com/gin-gonic/gin v1.4.0/go.mod h1:OW2EZn3DO8Ln9oIKOvM++LBO+5UPHJJDH72/q/3rZdM= -github.com/go-check/check v0.0.0-20180628173108-788fd7840127/go.mod h1:9ES+weclKsC9YodN5RgxqK/VD9HM9JsCSh7rNhMZE98= -github.com/go-errors/errors v1.0.1 h1:LUHzmkK3GUKUrL/1gfBUxAHzcev3apQlezX/+O7ma6w= -github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q= -github.com/go-martini/martini v0.0.0-20170121215854-22fa46961aab/go.mod h1:/P9AEU963A2AYjv4d1V5eVL1CQbEJq6aCNHDDjibzu8= -github.com/gobwas/httphead v0.0.0-20180130184737-2c6c146eadee/go.mod h1:L0fX3K22YWvt/FAX9NnzrNzcI4wNYi9Yku4O0LKYflo= -github.com/gobwas/pool v0.2.0/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw= -github.com/gobwas/ws v1.0.2/go.mod h1:szmBTxLgaFppYjEmNtny/v3w89xOydFnnZMcgRRu/EM= -github.com/gogo/googleapis v0.0.0-20180223154316-0cd9801be74a h1:dR8+Q0uO5S2ZBcs2IH6VBKYwSxPo2vYCYq0ot0mu7xA= -github.com/gogo/googleapis v0.0.0-20180223154316-0cd9801be74a/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s= -github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= -github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls= -github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= -github.com/gogo/status v1.1.0 h1:+eIkrewn5q6b30y+g/BJINVVdi2xH7je5MPJ3ZPK3JA= -github.com/gogo/status v1.1.0/go.mod h1:BFv9nrluPLmrS0EmGVvLaPNmRosr9KapBYd5/hpY1WM= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= -github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.3 h1:gyjaxf+svBWX08ZjK86iN9geUJF0H6gp2IRKX6Nf6/I= -github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0= -github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/gomodule/redigo v1.7.1-0.20190724094224-574c33c3df38/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= -github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= -github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= -github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= -github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= -github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= -github.com/hydrogen18/memlistener v0.0.0-20141126152155-54553eb933fb h1:EPRgaDqXpLFUJLXZdGLnBTy1l6CLiNAPnvn2l+kHit0= -github.com/hydrogen18/memlistener v0.0.0-20141126152155-54553eb933fb/go.mod h1:qEIFzExnS6016fRpRfxrExeVn2gbClQA99gQhnIcdhE= -github.com/imkira/go-interpol v1.1.0/go.mod h1:z0h2/2T3XF8kyEPpRgJ3kmNv+C43p+I/CoI+jC3w2iA= -github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= -github.com/iris-contrib/blackfriday v2.0.0+incompatible/go.mod h1:UzZ2bDEoaSGPbkg6SAB4att1aAwTmVIx/5gCVqeyUdI= -github.com/iris-contrib/go.uuid v2.0.0+incompatible/go.mod h1:iz2lgM/1UnEf1kP0L/+fafWORmlnuysV2EMP8MW+qe0= -github.com/iris-contrib/i18n v0.0.0-20171121225848-987a633949d0/go.mod h1:pMCz62A0xJL6I+umB2YTlFRwWXaDFA0jy+5HzGiJjqI= -github.com/iris-contrib/schema v0.0.1/go.mod h1:urYA3uvUNG1TIIjOSCzHr9/LmbQo8LrOcOqfqxa4hXw= -github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= -github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= -github.com/juju/errors v0.0.0-20181118221551-089d3ea4e4d5/go.mod h1:W54LbzXuIE0boCoNJfwqpmkKJ1O4TCTZMetAt6jGk7Q= -github.com/juju/loggo v0.0.0-20180524022052-584905176618/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U= -github.com/juju/testing v0.0.0-20180920084828-472a3e8b2073/go.mod h1:63prj8cnj0tU0S9OHjGJn+b1h0ZghCndfnbQolrYTwA= -github.com/k0kubun/colorstring v0.0.0-20150214042306-9440f1994b88/go.mod h1:3w7q1U84EfirKl04SVQ/s7nPm1ZPhiXd34z40TNz36k= -github.com/kataras/golog v0.0.9/go.mod h1:12HJgwBIZFNGL0EJnMRhmvGA0PQGx8VFwrZtM4CqbAk= -github.com/kataras/iris/v12 v12.0.1/go.mod h1:udK4vLQKkdDqMGJJVd/msuMtN6hpYJhg/lSzuxjhO+U= -github.com/kataras/neffos v0.0.10/go.mod h1:ZYmJC07hQPW67eKuzlfY7SO3bC0mw83A3j6im82hfqw= -github.com/kataras/pio v0.0.0-20190103105442-ea782b38602d/go.mod h1:NV88laa9UiiDuX9AhMbDPkGYSPugBOV6yTZB1l2K9Z0= -github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00= -github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= -github.com/klauspost/compress v1.8.2/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= -github.com/klauspost/compress v1.9.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= -github.com/klauspost/cpuid v1.2.1/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek= -github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= -github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= -github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= -github.com/labstack/echo/v4 v4.1.11/go.mod h1:i541M3Fj6f76NZtHSj7TXnyM8n2gaodfvfxNnFqi74g= -github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k= -github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= -github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= -github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= -github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= -github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ= -github.com/mattn/goveralls v0.0.2/go.mod h1:8d1ZMHsd7fW6IRPKQh46F2WRpyib5/X4FOpevwGNQEw= -github.com/mediocregopher/mediocre-go-lib v0.0.0-20181029021733-cb65787f37ed/go.mod h1:dSsfyI2zABAdhcbvkXqgxOxrCsbYeHCPgrZkku60dSg= -github.com/mediocregopher/radix/v3 v3.3.0/go.mod h1:EmfVyvspXz1uZEyPBMyGK+kjWiKQGvsUt6O3Pj+LDCQ= -github.com/microcosm-cc/bluemonday v1.0.2/go.mod h1:iVP4YcDBq+n/5fb23BhYFvIMq/leAFZyRl6bYmGDlGc= -github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= -github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= -github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= -github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= -github.com/moul/http2curl v1.0.0/go.mod h1:8UbvGypXm98wA/IqH45anm5Y2Z6ep6O31QGOAZ3H0fQ= -github.com/nats-io/nats.go v1.8.1/go.mod h1:BrFz9vVn0fU3AcH9Vn4Kd7W0NpJ651tD5omQ3M8LwxM= -github.com/nats-io/nkeys v0.0.2/go.mod h1:dab7URMsZm6Z/jp9Z5UGa87Uutgc2mVpXLC4B7TDb/4= -github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c= -github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A= -github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= -github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk= -github.com/onsi/ginkgo v1.13.0/go.mod h1:+REjRxOmWfHCjfv9TTWB1jD1Frx4XydAD3zm1lskyM0= -github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY= -github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo= -github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= -github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4= -github.com/pingcap/errors v0.11.4/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8= -github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I= -github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= -github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= -github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= -github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= -github.com/sclevine/agouti v3.0.0+incompatible/go.mod h1:b4WX9W9L1sfQKXeJf1mUTLZKJ48R1S7H23Ji7oFO5Bw= -github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= -github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= -github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc= -github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA= -github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= -github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= -github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= -github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= -github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= -github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= -github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= -github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= -github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc= -github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= -github.com/urfave/negroni v1.0.0/go.mod h1:Meg73S6kFm/4PpbYdq35yYWoCZ9mS/YSx+lKnmiohz4= -github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc= -github.com/valyala/fasthttp v1.6.0/go.mod h1:FstJa9V+Pj9vQ7OJie2qMHdwemEDaDiSdBnvPM1Su9w= -github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8= -github.com/valyala/tcplisten v0.0.0-20161114210144-ceec8f93295a/go.mod h1:v3UYOV9WzVtRmSR+PDvWpU/qWl4Wa5LApYYX4ZtKbio= -github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= -github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= -github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= -github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= -github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0/go.mod h1:/LWChgwKmvncFJFHJ7Gvn9wZArjbV5/FppcK2fKk/tI= -github.com/yudai/gojsondiff v1.0.0/go.mod h1:AY32+k2cwILAkW1fbgxQ5mUmMiZFgLIV+FBNExI05xg= -github.com/yudai/golcs v0.0.0-20170316035057-ecda9a501e82/go.mod h1:lgjkn3NuSvDfVJdfcVVdX+jpBxNmX4rDAzaS45IcYoM= -github.com/yudai/pp v2.0.1+incompatible/go.mod h1:PuxR/8QJ7cyCkFp/aUDS+JY727OFEZkTdatxwunjIkc= -golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190327091125-710a502c58a2/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= -golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7 h1:AeiKBIuRw3UomYXSbLy0Mc2dDLfdtbT/IVn4keq83P0= -golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a h1:aYOabOQFp6Vj6W1F80affTUvO9UxmJRx8K0gsfABByQ= -golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200519105757-fe76b779f299 h1:DYfZAGf2WMFjMxbgTjaC+2HC7NkNAQs+6Q8b9WEB/F4= -golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= -golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= -golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20181221001348-537d06c36207/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190327201419-c70d86f8b7cf/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/genproto v0.0.0-20180518175338-11a468237815/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/grpc v1.12.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= -google.golang.org/grpc v1.29.1 h1:EC2SB8S04d2r73uptxphDSUG+kTKVgjRPF+N3xpxRB4= -google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.23.0 h1:4MY060fB1DLGMB/7MBTLnwQUY6+F09GEiz6SsrNqyzM= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= -gopkg.in/go-playground/assert.v1 v1.2.1/go.mod h1:9RXL0bg/zibRAgZUYszZSwO/z8Y/a8bDuhia5mkpMnE= -gopkg.in/go-playground/validator.v8 v8.18.2/go.mod h1:RX2a/7Ha8BgOhfk7j780h4/u/RRjR0eouCJSH80/M2Y= -gopkg.in/mgo.v2 v2.0.0-20180705113604-9856a29383ce/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA= -gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/vendor/github.com/cockroachdb/pebble/.travis.yml b/vendor/github.com/cockroachdb/pebble/.travis.yml index 40598945f..eec07bd70 100644 --- a/vendor/github.com/cockroachdb/pebble/.travis.yml +++ b/vendor/github.com/cockroachdb/pebble/.travis.yml @@ -23,28 +23,32 @@ matrix: go: 1.15.x os: linux script: make test generate - - name: "go1.15.x-linux-race" - go: 1.15.x + - name: "go1.16.x-linux" + go: 1.16.x + os: linux + script: make test generate + - name: "go1.16.x-linux-race" + go: 1.16.x os: linux script: make testrace TAGS= - - name: "go1.15.x-linux-no-invariants" - go: 1.15.x + - name: "go1.16.x-linux-no-invariants" + go: 1.16.x os: linux script: make test TAGS= - - name: "go1.15.x-linux-no-cgo" - go: 1.15.x + - name: "go1.16.x-linux-no-cgo" + go: 1.16.x os: linux script: CGO_ENABLED=0 make test TAGS= - - name: "go1.15.x-darwin" - go: 1.15.x + - name: "go1.16.x-darwin" + go: 1.16.x os: osx script: make test - - name: "go1.15.x-windows" - go: 1.15.x + - name: "go1.16.x-windows" + go: 1.16.x os: windows script: go test ./... - - name: "go1.15.x-freebsd" - go: 1.15.x + - name: "go1.16.x-freebsd" + go: 1.16.x os: linux # NB: "env: GOOS=freebsd" does not have the desired effect. script: GOOS=freebsd go build -v ./... diff --git a/vendor/github.com/cockroachdb/pebble/Makefile b/vendor/github.com/cockroachdb/pebble/Makefile index 1d3bb647b..2a2f92ff5 100644 --- a/vendor/github.com/cockroachdb/pebble/Makefile +++ b/vendor/github.com/cockroachdb/pebble/Makefile @@ -45,7 +45,7 @@ generate: # temporarily hiding those files. mod-update: mkdir -p cmd/pebble/_bak - mv cmd/pebble/{badger}.go cmd/pebble/_bak + mv cmd/pebble/badger.go cmd/pebble/_bak ${GO} get -u ${GO} mod tidy ${GO} mod vendor diff --git a/vendor/github.com/cockroachdb/pebble/commit.go b/vendor/github.com/cockroachdb/pebble/commit.go index 7132daebd..e9bd2dbdf 100644 --- a/vendor/github.com/cockroachdb/pebble/commit.go +++ b/vendor/github.com/cockroachdb/pebble/commit.go @@ -10,7 +10,7 @@ import ( "sync/atomic" "unsafe" - "github.com/cockroachdb/pebble/internal/record" + "github.com/cockroachdb/pebble/record" ) // commitQueue is a lock-free fixed-size single-producer, multi-consumer diff --git a/vendor/github.com/cockroachdb/pebble/compaction.go b/vendor/github.com/cockroachdb/pebble/compaction.go index d20603151..a9e95ef43 100644 --- a/vendor/github.com/cockroachdb/pebble/compaction.go +++ b/vendor/github.com/cockroachdb/pebble/compaction.go @@ -68,17 +68,13 @@ type compactionSplitSuggestion int const ( noSplit compactionSplitSuggestion = iota - splitSoon splitNow ) // String implements the Stringer interface. func (c compactionSplitSuggestion) String() string { - switch c { - case noSplit: + if c == noSplit { return "no-split" - case splitSoon: - return "split-soon" } return "split-now" } @@ -90,12 +86,9 @@ func (c compactionSplitSuggestion) String() string { // compactionOutputSplitters that compose other child compactionOutputSplitters. type compactionOutputSplitter interface { // shouldSplitBefore returns whether we should split outputs before the - // specified "current key". The return value is one of splitNow, splitSoon, - // or noSlit. splitNow means a split is advised before the specified key, - // splitSoon means no split is advised yet but the limit returned in - // onNewOutput can be considered invalidated and a splitNow suggestion will - // be made on an upcoming key shortly, and noSplit means no split is - // advised. + // specified "current key". The return value is splitNow or noSplit. + // splitNow means a split is advised before the specified key, and noSplit + // means no split is advised. shouldSplitBefore(key *InternalKey, tw *sstable.Writer) compactionSplitSuggestion // onNewOutput updates internal splitter state when the compaction switches // to a new sstable, and returns the next limit for the new output which @@ -284,16 +277,12 @@ type splitterGroup struct { func (a *splitterGroup) shouldSplitBefore( key *InternalKey, tw *sstable.Writer, ) (suggestion compactionSplitSuggestion) { - suggestion = noSplit for _, splitter := range a.splitters { - switch splitter.shouldSplitBefore(key, tw) { - case splitNow: + if splitter.shouldSplitBefore(key, tw) == splitNow { return splitNow - case splitSoon: - suggestion = splitSoon } } - return suggestion + return noSplit } func (a *splitterGroup) onNewOutput(key *InternalKey) []byte { @@ -315,7 +304,7 @@ func (a *splitterGroup) onNewOutput(key *InternalKey) []byte { // the compaction output is at the boundary between two user keys (also // the boundary between atomic compaction units). Use this splitter to wrap // any splitters that don't guarantee user key splits (i.e. splitters that make -// their determinatino in ways other than comparing the current key against a +// their determination in ways other than comparing the current key against a // limit key. type userKeyChangeSplitter struct { cmp Compare @@ -344,49 +333,6 @@ func (u *userKeyChangeSplitter) onNewOutput(key *InternalKey) []byte { return u.splitter.onNewOutput(key) } -// nonZeroSeqNumSplitter is a compactionOutputSplitter that takes in a child -// splitter, and advises a split when 1) that child splitter advises a split, -// and 2) the compaction output is at a point where the previous point sequence -// number is nonzero. -type nonZeroSeqNumSplitter struct { - c *compaction - splitter compactionOutputSplitter - prevPointSeqNum uint64 - splitOnNonZeroSeqNum bool -} - -func (n *nonZeroSeqNumSplitter) shouldSplitBefore( - key *InternalKey, tw *sstable.Writer, -) compactionSplitSuggestion { - curSeqNum := key.SeqNum() - keyKind := key.Kind() - prevPointSeqNum := n.prevPointSeqNum - if keyKind != InternalKeyKindRangeDelete { - n.prevPointSeqNum = curSeqNum - } - - if n.splitOnNonZeroSeqNum { - if prevPointSeqNum > 0 || n.c.rangeDelFrag.Empty() { - n.splitOnNonZeroSeqNum = false - return splitNow - } - } else if split := n.splitter.shouldSplitBefore(key, tw); split == splitNow { - userKeyChange := curSeqNum > prevPointSeqNum - if prevPointSeqNum > 0 || n.c.rangeDelFrag.Empty() || userKeyChange { - return splitNow - } - n.splitOnNonZeroSeqNum = true - return splitSoon - } - return noSplit -} - -func (n *nonZeroSeqNumSplitter) onNewOutput(key *InternalKey) []byte { - n.prevPointSeqNum = InternalKeySeqNumMax - n.splitOnNonZeroSeqNum = false - return n.splitter.onNewOutput(key) -} - // compactionFile is a vfs.File wrapper that, on every write, updates a metric // in `versions` on bytes written by in-progress compactions so far. It also // increments a per-compaction `written` int. @@ -951,7 +897,7 @@ func (c *compaction) errorOnUserKeyOverlap(ve *versionEdit) error { // snapshots requiring them to be kept. It performs this determination by // looking for an sstable which overlaps the bounds of the compaction at a // lower level in the LSM. -func (c *compaction) allowZeroSeqNum(iter internalIterator) bool { +func (c *compaction) allowZeroSeqNum() bool { return c.elideRangeTombstone(c.smallest.UserKey, c.largest.UserKey) } @@ -1990,6 +1936,16 @@ func (d *DB) compact1(c *compaction, errChannel chan error) (err error) { func (d *DB) runCompaction( jobID int, c *compaction, pacer pacer, ) (ve *versionEdit, pendingOutputs []*fileMetadata, retErr error) { + // As a sanity check, confirm that the smallest / largest keys for new and + // deleted files in the new versionEdit pass a validation function before + // returning the edit. + defer func() { + err := validateVersionEdit(ve, d.opts.Experimental.KeyValidationFunc, d.opts.Comparer.FormatKey) + if err != nil { + d.opts.Logger.Fatalf("pebble: version edit validation failed: %s", err) + } + }() + // Check for a delete-only compaction. This can occur when wide range // tombstones completely contain sstables. if c.kind == compactionKindDeleteOnly { @@ -2060,7 +2016,7 @@ func (d *DB) runCompaction( if err != nil { return nil, pendingOutputs, err } - c.allowedZeroSeqNum = c.allowZeroSeqNum(iiter) + c.allowedZeroSeqNum = c.allowZeroSeqNum() iter := newCompactionIter(c.cmp, c.formatKey, d.merge, iiter, snapshots, &c.rangeDelFrag, c.allowedZeroSeqNum, c.elideTombstone, c.elideRangeTombstone) @@ -2149,14 +2105,15 @@ func (d *DB) runCompaction( splitL0Outputs := c.outputLevel.level == 0 && d.opts.FlushSplitBytes > 0 - // finishOutput is called for an sstable with the first key of the next sstable, and for the - // last sstable with an empty key. - finishOutput := func(key []byte) error { + // finishOutput is called with the a user key up to which all tombstones + // should be flushed. Typically, this is the first key of the next + // sstable or an empty key if this output is the final sstable. + finishOutput := func(splitKey []byte) error { // If we haven't output any point records to the sstable (tw == nil) // then the sstable will only contain range tombstones. The smallest // key in the sstable will be the start key of the first range // tombstone added. We need to ensure that this start key is distinct - // from the limit (key) passed to finishOutput (if set), otherwise we + // from the splitKey passed to finishOutput (if set), otherwise we // would generate an sstable where the largest key is smaller than the // smallest key due to how the largest key boundary is set below. // NB: It is permissible for the range tombstone start key to be the @@ -2169,15 +2126,15 @@ func (d *DB) runCompaction( if len(iter.tombstones) > 0 { startKey = iter.tombstones[0].Start.UserKey } - if key != nil && d.cmp(startKey, key) == 0 { + if splitKey != nil && d.cmp(startKey, splitKey) == 0 { return nil } } // NB: clone the key because the data can be held on to by the call to // compactionIter.Tombstones via rangedel.Fragmenter.FlushTo. - key = append([]byte(nil), key...) - for _, v := range iter.Tombstones(key, splitL0Outputs) { + splitKey = append([]byte(nil), splitKey...) + for _, v := range iter.Tombstones(splitKey, splitL0Outputs) { if tw == nil { if err := newOutput(); err != nil { return err @@ -2294,7 +2251,7 @@ func (d *DB) runCompaction( } } - if key != nil && writerMeta.LargestRange.UserKey != nil { + if splitKey != nil && writerMeta.LargestRange.UserKey != nil { // The current file is not the last output file and there is a range tombstone in it. // If the tombstone extends into the next file, then truncate it for the purposes of // computing meta.Largest. For example, say the next file's first key is c#7,1 and the @@ -2303,8 +2260,8 @@ func (d *DB) runCompaction( // c#inf where inf is the InternalKeyRangeDeleteSentinel. Note that this is just for // purposes of bounds computation -- the current sstable will end up with a Largest key // of c#7,1 so the range tombstone in the current file will be able to delete c#7. - if d.cmp(writerMeta.LargestRange.UserKey, key) >= 0 { - writerMeta.LargestRange.UserKey = key + if d.cmp(writerMeta.LargestRange.UserKey, splitKey) >= 0 { + writerMeta.LargestRange.UserKey = splitKey writerMeta.LargestRange.Trailer = InternalKeyRangeDeleteSentinel } } @@ -2331,14 +2288,6 @@ func (d *DB) runCompaction( switch v := d.cmp(meta.Largest.UserKey, c.largest.UserKey); { case v <= 0: // Nothing to do. - case v == 0: - if meta.Largest.Trailer >= c.largest.Trailer { - break - } - if c.allowedZeroSeqNum && meta.Largest.SeqNum() == 0 { - break - } - fallthrough case v > 0: return errors.Errorf("pebble: compaction output grew beyond bounds of input: %s > %s", meta.Largest.Pretty(d.opts.Comparer.FormatKey), @@ -2365,27 +2314,11 @@ func (d *DB) runCompaction( // we start off with splitters for file sizes, grandparent limits, and (for // L0 splits) L0 limits, before wrapping them in an splitterGroup. // - // There is a complication here: we can't split outputs where the largest - // key on the left side has a seqnum of zero. This limitation - // exists because a range tombstone which extends into the next sstable - // will cause the smallest key for the next sstable to have the same user - // key, but we need the two tables to be disjoint in key space. Consider - // the scenario: - // - // a#RANGEDEL-c,3 b#SET,0 - // - // If b#SET,0 is the last key added to an sstable, the range tombstone - // [b-c)#3 will extend into the next sstable. The boundary generation - // code in finishOutput() will compute the smallest key for that sstable - // as b#RANGEDEL,3 which sorts before b#SET,0. Normally we just adjust - // the seqnum of this key, but that isn't possible for seqnum 0. To ensure - // we only split where the previous point key has a zero seqnum, we wrap - // our splitters with a nonZeroSeqNumSplitter. - // - // Another case where we may not be able to switch SSTables right away is - // when we are splitting an L0 output. We do not split the same user key - // across different sstables within one flush, so the userKeyChangeSplitter - // ensures we are at a user key change boundary when doing a split. + // There is a complication here: We may not be able to switch SSTables + // right away when we are splitting an L0 output. We do not split the + // same user key across different sstables within one flush, so the + // userKeyChangeSplitter ensures we are at a user key change boundary when + // doing a split. outputSplitters := []compactionOutputSplitter{ &fileSizeSplitter{maxFileSize: c.maxOutputFileSize}, &grandparentLimitSplitter{c: c, ve: ve}, @@ -2405,18 +2338,6 @@ func (d *DB) runCompaction( cmp: c.cmp, splitters: outputSplitters, } - // Compactions to L0 don't need nonzero last-point-key seqnums at split - // boundaries because when writing to L0, we are able to guarantee that - // the end key of tombstones will also be truncated (through the - // TruncateAndFlushTo call), and no user keys will - // be split between sstables. So a nonZeroSeqNumSplitter is unnecessary - // in that case. - if !splitL0Outputs { - splitter = &nonZeroSeqNumSplitter{ - c: c, - splitter: splitter, - } - } // NB: we avoid calling maybeThrottle on a nilPacer because the cost of // dynamic dispatch in the hot loop below is pronounced in CPU profiles (see @@ -2435,36 +2356,22 @@ func (d *DB) runCompaction( // progress guarantees ensure that eventually the input iterator will be // exhausted and the range tombstone fragments will all be flushed. for key, val := iter.First(); key != nil || !c.rangeDelFrag.Empty(); { - limit := splitter.onNewOutput(key) + splitterSuggestion := splitter.onNewOutput(key) // Each inner loop iteration processes one key from the input iterator. prevPointSeqNum := InternalKeySeqNumMax for ; key != nil; key, val = iter.Next() { - if split := splitter.shouldSplitBefore(key, tw); split != noSplit { - if split == splitNow { - limit = key.UserKey - if splitL0Outputs { - // Flush all tombstones up until key.UserKey, and - // truncate them at that key. - // - // The fragmenter could save the passed-in key. As this - // key could live beyond the write into the current - // sstable output file, make a copy. - c.rangeDelFrag.TruncateAndFlushTo(key.Clone().UserKey) - } - break + if split := splitter.shouldSplitBefore(key, tw); split == splitNow { + if splitL0Outputs { + // Flush all tombstones up until key.UserKey, and + // truncate them at that key. + // + // The fragmenter could save the passed-in key. As this + // key could live beyond the write into the current + // sstable output file, make a copy. + c.rangeDelFrag.TruncateAndFlushTo(key.Clone().UserKey) } - // split == splitSoon - // - // Invalidate the limit here. It has probably been exceeded - // by the current key, but we can't split just yet, such as to - // maintain the nonzero sequence number invariant mentioned - // above. Setting limit to nil is okay as it's just a transient - // setting, as when split eventually equals splitNow, we will - // set the limit to the key after that. If the compaction were - // to run out of keys before we get to that point, limit would - // be nil as it should be for all end-of-compaction cases. - limit = nil + break } atomic.StoreUint64(c.atomicBytesIterated, c.bytesIterated) @@ -2492,49 +2399,80 @@ func (d *DB) runCompaction( prevPointSeqNum = key.SeqNum() } + // A splitter requested a split, and we're ready to finish the output. + // We need to choose the key at which to split any pending range + // tombstones. + // + // There's a complication here. We need to ensure that for a user key + // k we never end up with one output's largest key as k#0 and the + // next output's smallest key as k#RANGEDEL,#x where x > 0. This is a + // problem because k#RANGEDEL,#x sorts before k#0. Normally, we just + // adjust the seqnum of the next output's smallest boundary to be + // less, but that's not possible with the zero seqnum. We can avoid + // this case with careful picking of where to split pending range + // tombstones. + var splitKey []byte switch { - case key == nil && prevPointSeqNum == 0 && !c.rangeDelFrag.Empty(): - // We ran out of keys and the last key added to the sstable has a zero - // seqnum and there are buffered range tombstones, so we're unable to use - // the grandparent/flush limit for the sstable boundary. See the example in the - // in the loop above with range tombstones straddling sstables. Setting - // limit to nil ensures that we flush the entirety of the rangedel - // fragmenter when writing the last output. - limit = nil - case key == nil && splitL0Outputs && !c.rangeDelFrag.Empty(): - // We ran out of keys with flush splits enabled, and have remaining - // buffered range tombstones. Set limit to nil so all range - // tombstones get flushed in the current sstable. Consider this - // example: + case key != nil: + // We hit the size, grandparent, or L0 limit for the sstable. + // The next key either has a greater user key than the previous + // key, or if not, the previous key must not have had a zero + // sequence number. + + // TODO(jackson): If we hit the grandparent limit, the next + // grandparent's smallest key may be less than the current key. + // Splitting at the current key will cause this output to overlap + // a potentially unbounded number of grandparents. + splitKey = key.UserKey + case key == nil && splitL0Outputs: + // We ran out of keys with flush splits enabled. Set splitKey to + // nil so all range tombstones get flushed in the current sstable. + // Consider this example: // // a.SET.4 // d.MERGE.5 // d.RANGEDEL.3:f // (no more keys remaining) // - // Where d is a flush split key (i.e. limit = 'd'). Since d.MERGE.5 - // has already been written to this output by this point (as it's - // <= limit), and flushes cannot have user keys split across - // multiple sstables, we have to set limit to a key greater than - // 'd' to ensure the range deletion also gets flushed. Setting - // the limit to nil is the simplest way to ensure that. - limit = nil - case key == nil /* && (prevPointSeqNum != 0 || c.rangeDelFrag.Empty()) */ : - // We ran out of keys. Because of the previous case, either rangeDelFrag - // is empty or the last record added to the sstable has a non-zero - // seqnum. If the rangeDelFragmenter is empty we have no concerns as - // there won't be another sstable generated by this compaction and the - // current limit is fine (it won't apply). Otherwise, if the last key - // added to the sstable had a non-zero seqnum we're also in the clear as - // we can decrement that seqnum to create a boundary key for the next - // sstable (if we end up generating a next sstable). - case key != nil: - // We either hit the size, grandparent, or L0 limit for the sstable. + // Where d is a flush split key (i.e. splitterSuggestion = 'd'). + // Since d.MERGE.5 has already been written to this output by this + // point (as it's <= splitterSuggestion), and flushes cannot have + // user keys split across multiple sstables, we have to set + // splitKey to a key greater than 'd' to ensure the range deletion + // also gets flushed. Setting the splitKey to nil is the simplest + // way to ensure that. + // + // TODO(jackson): This case is only problematic if splitKey equals + // the user key of the last point key added. We don't need to + // flush *all* range tombstones to the current sstable. We could + // flush up to the next grandparent limit greater than + // `splitterSuggestion` instead. + splitKey = nil + case key == nil && prevPointSeqNum != 0: + // The last key added did not have a zero sequence number, so + // we'll always be able to adjust the next table's smallest key. + // NB: Because of the splitter's `onNewOutput` contract, + // `splitterSuggestion` must be >= any key previously added to the + // current output sstable. + splitKey = splitterSuggestion + case key == nil && prevPointSeqNum == 0: + // The last key added did have a zero sequence number. The + // splitters' suggested split point might have the same user key, + // which would cause the next output to have an unadjustable + // smallest key. To prevent that, we ignore the splitter's + // suggestion, leaving splitKey nil to flush all pending range + // tombstones. + // TODO(jackson): This case is only problematic if splitKey equals + // the user key of the last point key added. We don't need to + // flush *all* range tombstones to the current sstable. We could + // flush up to the next grandparent limit greater than + // `splitterSuggestion` instead. + splitKey = nil default: return nil, nil, errors.New("pebble: not reached") } - if err := finishOutput(limit); err != nil { + if err := finishOutput(splitKey); err != nil { return nil, pendingOutputs, err } } @@ -2554,9 +2492,41 @@ func (d *DB) runCompaction( if err := d.dataDir.Sync(); err != nil { return nil, pendingOutputs, err } + return ve, pendingOutputs, nil } +// validateVersionEdit validates that start and end keys across new and deleted +// files in a versionEdit pass the given validation function. +func validateVersionEdit(ve *versionEdit, validateFn func([]byte) error, format base.FormatKey) error { + if validateFn == nil { + return nil + } + + validateMetaFn := func(f *manifest.FileMetadata) error { + for _, key := range []InternalKey{f.Smallest, f.Largest} { + if err := validateFn(key.UserKey); err != nil { + return errors.Wrapf(err, "key=%q; file=%s", format(key.UserKey), f) + } + } + return nil + } + + // Validate both new and deleted files. + for _, f := range ve.NewFiles { + if err := validateMetaFn(f.Meta); err != nil { + return err + } + } + for _, m := range ve.DeletedFiles { + if err := validateMetaFn(m); err != nil { + return err + } + } + + return nil +} + // scanObsoleteFiles scans the filesystem for files that are no longer needed // and adds those to the internal lists of obsolete files. Note that the files // are not actually deleted by this method. A subsequent call to @@ -2755,8 +2725,22 @@ func (d *DB) doDeleteObsoleteFiles(jobID int) { } d.mu.versions.obsoleteTables = nil - obsoleteManifests := d.mu.versions.obsoleteManifests - d.mu.versions.obsoleteManifests = nil + // Sort the manifests cause we want to delete some contiguous prefix + // of the older manifests. + sort.Slice(d.mu.versions.obsoleteManifests, func(i, j int) bool { + return d.mu.versions.obsoleteManifests[i].fileNum < + d.mu.versions.obsoleteManifests[j].fileNum + }) + + var obsoleteManifests []fileInfo + manifestsToDelete := len(d.mu.versions.obsoleteManifests) - d.opts.NumPrevManifest + if manifestsToDelete > 0 { + obsoleteManifests = d.mu.versions.obsoleteManifests[:manifestsToDelete] + d.mu.versions.obsoleteManifests = d.mu.versions.obsoleteManifests[manifestsToDelete:] + if len(d.mu.versions.obsoleteManifests) == 0 { + d.mu.versions.obsoleteManifests = nil + } + } obsoleteOptions := d.mu.versions.obsoleteOptions d.mu.versions.obsoleteOptions = nil diff --git a/vendor/github.com/cockroachdb/pebble/db.go b/vendor/github.com/cockroachdb/pebble/db.go index 703e896ce..60f576d9b 100644 --- a/vendor/github.com/cockroachdb/pebble/db.go +++ b/vendor/github.com/cockroachdb/pebble/db.go @@ -19,7 +19,7 @@ import ( "github.com/cockroachdb/pebble/internal/invariants" "github.com/cockroachdb/pebble/internal/manifest" "github.com/cockroachdb/pebble/internal/manual" - "github.com/cockroachdb/pebble/internal/record" + "github.com/cockroachdb/pebble/record" "github.com/cockroachdb/pebble/sstable" "github.com/cockroachdb/pebble/vfs" ) diff --git a/vendor/github.com/cockroachdb/pebble/event.go b/vendor/github.com/cockroachdb/pebble/event.go index b57801a2d..be1d12eba 100644 --- a/vendor/github.com/cockroachdb/pebble/event.go +++ b/vendor/github.com/cockroachdb/pebble/event.go @@ -577,6 +577,10 @@ func TeeEventListener(a, b EventListener) EventListener { a.CompactionEnd(info) b.CompactionEnd(info) }, + DiskSlow: func(info DiskSlowInfo) { + a.DiskSlow(info) + b.DiskSlow(info) + }, FlushBegin: func(info FlushInfo) { a.FlushBegin(info) b.FlushBegin(info) @@ -605,6 +609,10 @@ func TeeEventListener(a, b EventListener) EventListener { a.TableIngested(info) b.TableIngested(info) }, + TableStatsLoaded: func(info TableStatsInfo) { + a.TableStatsLoaded(info) + b.TableStatsLoaded(info) + }, WALCreated: func(info WALCreateInfo) { a.WALCreated(info) b.WALCreated(info) diff --git a/vendor/github.com/cockroachdb/pebble/go.mod b/vendor/github.com/cockroachdb/pebble/go.mod deleted file mode 100644 index 0e5744d05..000000000 --- a/vendor/github.com/cockroachdb/pebble/go.mod +++ /dev/null @@ -1,21 +0,0 @@ -module github.com/cockroachdb/pebble - -require ( - github.com/DataDog/zstd v1.4.5 - github.com/cespare/xxhash/v2 v2.1.1 - github.com/cockroachdb/errors v1.8.1 - github.com/cockroachdb/redact v1.0.8 - github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd - github.com/ghemawat/stream v0.0.0-20171120220530-696b145b53b9 - github.com/golang/snappy v0.0.3 - github.com/klauspost/compress v1.11.7 - github.com/kr/pretty v0.1.0 - github.com/pmezard/go-difflib v1.0.0 - github.com/spf13/cobra v0.0.5 - github.com/stretchr/testify v1.6.1 - golang.org/x/exp v0.0.0-20200513190911-00229845015e - golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 - golang.org/x/sys v0.0.0-20200519105757-fe76b779f299 -) - -go 1.13 diff --git a/vendor/github.com/cockroachdb/pebble/go.sum b/vendor/github.com/cockroachdb/pebble/go.sum deleted file mode 100644 index dff778214..000000000 --- a/vendor/github.com/cockroachdb/pebble/go.sum +++ /dev/null @@ -1,295 +0,0 @@ -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= -github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= -github.com/CloudyKit/fastprinter v0.0.0-20170127035650-74b38d55f37a/go.mod h1:EFZQ978U7x8IRnstaskI3IysnWY5Ao3QgZUKOXlsAdw= -github.com/CloudyKit/jet v2.1.3-0.20180809161101-62edd43e4f88+incompatible/go.mod h1:HPYO+50pSWkPoj9Q/eq0aRGByCL6ScRlUmiEX5Zgm+w= -github.com/DataDog/zstd v1.4.5 h1:EndNeuB0l9syBZhut0wns3gV1hL8zX8LIu6ZiVHWLIQ= -github.com/DataDog/zstd v1.4.5/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo= -github.com/Joker/hpp v1.0.0/go.mod h1:8x5n+M1Hp5hC0g8okX3sR3vFQwynaX/UgSOM9MeBKzY= -github.com/Joker/jade v1.0.1-0.20190614124447-d475f43051e7/go.mod h1:6E6s8o2AE4KhCrqr6GRJjdC/gNfTdxkIXvuGZZda2VM= -github.com/Shopify/goreferrer v0.0.0-20181106222321-ec9c9a553398/go.mod h1:a1uqRtAwp2Xwc6WNPJEufxJ7fx3npB4UV/JOLmbu5I0= -github.com/ajg/form v1.5.1/go.mod h1:uL1WgH+h2mgNtvBq0339dVnzXdBETtL2LeUXaIv25UY= -github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= -github.com/aymerick/raymond v2.0.3-0.20180322193309-b565731e1464+incompatible/go.mod h1:osfaiScAUVup+UC9Nfq76eWqDhXlp+4UYaA8uhTBO6g= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY= -github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/cockroachdb/datadriven v1.0.0/go.mod h1:5Ib8Meh+jk1RlHIXej6Pzevx/NLlNvQB9pmSBZErGA4= -github.com/cockroachdb/errors v1.6.1/go.mod h1:tm6FTP5G81vwJ5lC0SizQo374JNCOPrHyXGitRJoDqM= -github.com/cockroachdb/errors v1.8.1 h1:A5+txlVZfOqFBDa4mGz2bUWSp0aHElvHX2bKkdbQu+Y= -github.com/cockroachdb/errors v1.8.1/go.mod h1:qGwQn6JmZ+oMjuLwjWzUNqblqk0xl4CVV3SQbGwK7Ac= -github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f h1:o/kfcElHqOiXqcou5a3rIlMc7oJbMQkeLk0VQJ7zgqY= -github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI= -github.com/cockroachdb/redact v1.0.8 h1:8QG/764wK+vmEYoOlfobpe12EQcS81ukx/a4hdVMxNw= -github.com/cockroachdb/redact v1.0.8/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg= -github.com/cockroachdb/sentry-go v0.6.1-cockroachdb.2 h1:IKgmqgMQlVJIZj19CdocBeSfSaiCbEBZGKODaixqtHM= -github.com/cockroachdb/sentry-go v0.6.1-cockroachdb.2/go.mod h1:8BT+cPK6xvFOcRlk0R8eg+OTkcqI6baNH4xAkpiYVvQ= -github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd h1:qMd81Ts1T2OTKmB4acZcyKaMtRnY5Y44NuXGX2GFJ1w= -github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI= -github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0/go.mod h1:4Zcjuz89kmFXt9morQgcfYZAYZ5n8WHjt81YYWIwtTM= -github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= -github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= -github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= -github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/dgraph-io/badger v1.6.0/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhYy6psNgSztDR4= -github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= -github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw= -github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= -github.com/eknkc/amber v0.0.0-20171010120322-cdade1c07385/go.mod h1:0vRUJqYpeSZifjYj7uP3BG/gKcuzL9xWVV/Y+cK33KM= -github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/etcd-io/bbolt v1.3.3/go.mod h1:ZF2nL25h33cCyBtcyWeZ2/I3HQOfTP+0PIEvHjkjCrw= -github.com/fasthttp-contrib/websocket v0.0.0-20160511215533-1f3b11f56072/go.mod h1:duJ4Jxv5lDcvg4QuQr0oowTf7dz4/CR8NtyCooz9HL8= -github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M= -github.com/flosch/pongo2 v0.0.0-20190707114632-bbf5a6c351f4/go.mod h1:T9YF2M40nIgbVgp3rreNmTged+9HrbNTIQf1PsaIiTA= -github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= -github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= -github.com/gavv/httpexpect v2.0.0+incompatible/go.mod h1:x+9tiU1YnrOvnB725RkpoLv1M62hOWzwo5OXotisrKc= -github.com/ghemawat/stream v0.0.0-20171120220530-696b145b53b9 h1:r5GgOLGbza2wVHRzK7aAj6lWZjfbAwiu/RDCVOKjRyM= -github.com/ghemawat/stream v0.0.0-20171120220530-696b145b53b9/go.mod h1:106OIgooyS7OzLDOpUGgm9fA3bQENb/cFSyyBmMoJDs= -github.com/gin-contrib/sse v0.0.0-20190301062529-5545eab6dad3/go.mod h1:VJ0WA2NBN22VlZ2dKZQPAPnyWw5XTlK1KymzLKsr59s= -github.com/gin-gonic/gin v1.4.0/go.mod h1:OW2EZn3DO8Ln9oIKOvM++LBO+5UPHJJDH72/q/3rZdM= -github.com/go-check/check v0.0.0-20180628173108-788fd7840127/go.mod h1:9ES+weclKsC9YodN5RgxqK/VD9HM9JsCSh7rNhMZE98= -github.com/go-errors/errors v1.0.1 h1:LUHzmkK3GUKUrL/1gfBUxAHzcev3apQlezX/+O7ma6w= -github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q= -github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= -github.com/go-martini/martini v0.0.0-20170121215854-22fa46961aab/go.mod h1:/P9AEU963A2AYjv4d1V5eVL1CQbEJq6aCNHDDjibzu8= -github.com/gobwas/httphead v0.0.0-20180130184737-2c6c146eadee/go.mod h1:L0fX3K22YWvt/FAX9NnzrNzcI4wNYi9Yku4O0LKYflo= -github.com/gobwas/pool v0.2.0/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw= -github.com/gobwas/ws v1.0.2/go.mod h1:szmBTxLgaFppYjEmNtny/v3w89xOydFnnZMcgRRu/EM= -github.com/gogo/googleapis v0.0.0-20180223154316-0cd9801be74a/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s= -github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= -github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls= -github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= -github.com/gogo/status v1.1.0/go.mod h1:BFv9nrluPLmrS0EmGVvLaPNmRosr9KapBYd5/hpY1WM= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/golang/snappy v0.0.3 h1:fHPg5GQYlCeLIPB9BZqMVR5nR9A+IM5zcgeTdjMYmLA= -github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= -github.com/gomodule/redigo v1.7.1-0.20190724094224-574c33c3df38/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= -github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= -github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= -github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= -github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= -github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= -github.com/hydrogen18/memlistener v0.0.0-20141126152155-54553eb933fb/go.mod h1:qEIFzExnS6016fRpRfxrExeVn2gbClQA99gQhnIcdhE= -github.com/imkira/go-interpol v1.1.0/go.mod h1:z0h2/2T3XF8kyEPpRgJ3kmNv+C43p+I/CoI+jC3w2iA= -github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM= -github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= -github.com/iris-contrib/blackfriday v2.0.0+incompatible/go.mod h1:UzZ2bDEoaSGPbkg6SAB4att1aAwTmVIx/5gCVqeyUdI= -github.com/iris-contrib/go.uuid v2.0.0+incompatible/go.mod h1:iz2lgM/1UnEf1kP0L/+fafWORmlnuysV2EMP8MW+qe0= -github.com/iris-contrib/i18n v0.0.0-20171121225848-987a633949d0/go.mod h1:pMCz62A0xJL6I+umB2YTlFRwWXaDFA0jy+5HzGiJjqI= -github.com/iris-contrib/schema v0.0.1/go.mod h1:urYA3uvUNG1TIIjOSCzHr9/LmbQo8LrOcOqfqxa4hXw= -github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= -github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= -github.com/juju/errors v0.0.0-20181118221551-089d3ea4e4d5/go.mod h1:W54LbzXuIE0boCoNJfwqpmkKJ1O4TCTZMetAt6jGk7Q= -github.com/juju/loggo v0.0.0-20180524022052-584905176618/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U= -github.com/juju/testing v0.0.0-20180920084828-472a3e8b2073/go.mod h1:63prj8cnj0tU0S9OHjGJn+b1h0ZghCndfnbQolrYTwA= -github.com/k0kubun/colorstring v0.0.0-20150214042306-9440f1994b88/go.mod h1:3w7q1U84EfirKl04SVQ/s7nPm1ZPhiXd34z40TNz36k= -github.com/kataras/golog v0.0.9/go.mod h1:12HJgwBIZFNGL0EJnMRhmvGA0PQGx8VFwrZtM4CqbAk= -github.com/kataras/iris/v12 v12.0.1/go.mod h1:udK4vLQKkdDqMGJJVd/msuMtN6hpYJhg/lSzuxjhO+U= -github.com/kataras/neffos v0.0.10/go.mod h1:ZYmJC07hQPW67eKuzlfY7SO3bC0mw83A3j6im82hfqw= -github.com/kataras/pio v0.0.0-20190103105442-ea782b38602d/go.mod h1:NV88laa9UiiDuX9AhMbDPkGYSPugBOV6yTZB1l2K9Z0= -github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00= -github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= -github.com/klauspost/compress v1.8.2/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= -github.com/klauspost/compress v1.9.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= -github.com/klauspost/compress v1.11.7 h1:0hzRabrMN4tSTvMfnL3SCv1ZGeAP23ynzodBgaHeMeg= -github.com/klauspost/compress v1.11.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs= -github.com/klauspost/cpuid v1.2.1/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek= -github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= -github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= -github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= -github.com/labstack/echo/v4 v4.1.11/go.mod h1:i541M3Fj6f76NZtHSj7TXnyM8n2gaodfvfxNnFqi74g= -github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k= -github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= -github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= -github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= -github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= -github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ= -github.com/mattn/goveralls v0.0.2/go.mod h1:8d1ZMHsd7fW6IRPKQh46F2WRpyib5/X4FOpevwGNQEw= -github.com/mediocregopher/mediocre-go-lib v0.0.0-20181029021733-cb65787f37ed/go.mod h1:dSsfyI2zABAdhcbvkXqgxOxrCsbYeHCPgrZkku60dSg= -github.com/mediocregopher/radix/v3 v3.3.0/go.mod h1:EmfVyvspXz1uZEyPBMyGK+kjWiKQGvsUt6O3Pj+LDCQ= -github.com/microcosm-cc/bluemonday v1.0.2/go.mod h1:iVP4YcDBq+n/5fb23BhYFvIMq/leAFZyRl6bYmGDlGc= -github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= -github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= -github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= -github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= -github.com/moul/http2curl v1.0.0/go.mod h1:8UbvGypXm98wA/IqH45anm5Y2Z6ep6O31QGOAZ3H0fQ= -github.com/nats-io/nats.go v1.8.1/go.mod h1:BrFz9vVn0fU3AcH9Vn4Kd7W0NpJ651tD5omQ3M8LwxM= -github.com/nats-io/nkeys v0.0.2/go.mod h1:dab7URMsZm6Z/jp9Z5UGa87Uutgc2mVpXLC4B7TDb/4= -github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c= -github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A= -github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= -github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk= -github.com/onsi/ginkgo v1.13.0/go.mod h1:+REjRxOmWfHCjfv9TTWB1jD1Frx4XydAD3zm1lskyM0= -github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY= -github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo= -github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= -github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4= -github.com/pingcap/errors v0.11.4/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8= -github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= -github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= -github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= -github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= -github.com/sclevine/agouti v3.0.0+incompatible/go.mod h1:b4WX9W9L1sfQKXeJf1mUTLZKJ48R1S7H23Ji7oFO5Bw= -github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= -github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= -github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc= -github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA= -github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= -github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= -github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s= -github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= -github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= -github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg= -github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= -github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= -github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= -github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= -github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0= -github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc= -github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= -github.com/urfave/negroni v1.0.0/go.mod h1:Meg73S6kFm/4PpbYdq35yYWoCZ9mS/YSx+lKnmiohz4= -github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc= -github.com/valyala/fasthttp v1.6.0/go.mod h1:FstJa9V+Pj9vQ7OJie2qMHdwemEDaDiSdBnvPM1Su9w= -github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8= -github.com/valyala/tcplisten v0.0.0-20161114210144-ceec8f93295a/go.mod h1:v3UYOV9WzVtRmSR+PDvWpU/qWl4Wa5LApYYX4ZtKbio= -github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= -github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= -github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= -github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= -github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0/go.mod h1:/LWChgwKmvncFJFHJ7Gvn9wZArjbV5/FppcK2fKk/tI= -github.com/yudai/gojsondiff v1.0.0/go.mod h1:AY32+k2cwILAkW1fbgxQ5mUmMiZFgLIV+FBNExI05xg= -github.com/yudai/golcs v0.0.0-20170316035057-ecda9a501e82/go.mod h1:lgjkn3NuSvDfVJdfcVVdX+jpBxNmX4rDAzaS45IcYoM= -github.com/yudai/pp v2.0.1+incompatible/go.mod h1:PuxR/8QJ7cyCkFp/aUDS+JY727OFEZkTdatxwunjIkc= -golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/exp v0.0.0-20200513190911-00229845015e h1:rMqLP+9XLy+LdbCXHjJHAmTfXCr93W7oruWA6Hq1Alc= -golang.org/x/exp v0.0.0-20200513190911-00229845015e/go.mod h1:4M0jN8W1tt0AVLNr8HDosyJCDCDuyL9N9+3m7wDWgKw= -golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= -golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o= -golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= -golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190327091125-710a502c58a2/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 h1:SQFwaSi55rU7vdNs9Yr0Z324VNlrF+0wMqRXT4St8ck= -golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200519105757-fe76b779f299 h1:DYfZAGf2WMFjMxbgTjaC+2HC7NkNAQs+6Q8b9WEB/F4= -golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= -golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20181221001348-537d06c36207/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190327201419-c70d86f8b7cf/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/genproto v0.0.0-20180518175338-11a468237815/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/grpc v1.12.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= -google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= -gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= -gopkg.in/go-playground/assert.v1 v1.2.1/go.mod h1:9RXL0bg/zibRAgZUYszZSwO/z8Y/a8bDuhia5mkpMnE= -gopkg.in/go-playground/validator.v8 v8.18.2/go.mod h1:RX2a/7Ha8BgOhfk7j780h4/u/RRjR0eouCJSH80/M2Y= -gopkg.in/mgo.v2 v2.0.0-20180705113604-9856a29383ce/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA= -gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/vendor/github.com/cockroachdb/pebble/internal/manifest/l0_sublevels.go b/vendor/github.com/cockroachdb/pebble/internal/manifest/l0_sublevels.go index 036374173..5d54fecf8 100644 --- a/vendor/github.com/cockroachdb/pebble/internal/manifest/l0_sublevels.go +++ b/vendor/github.com/cockroachdb/pebble/internal/manifest/l0_sublevels.go @@ -127,19 +127,18 @@ type fileInterval struct { // this bool is used as a heuristic (but not as a complete disqualifier). intervalRangeIsBaseCompacting bool - // fileCount - compactingFileCount is the stack depth that requires + // All files in this interval, in increasing sublevel order. + files []*FileMetadata + + // len(files) - compactingFileCount is the stack depth that requires // starting new compactions. This metric is not precise since the // compactingFileCount can include files that are part of N (where N > 1) // intra-L0 compactions, so the stack depth after those complete will be - // fileCount - compactingFileCount + N. We ignore this imprecision since + // len(files) - compactingFileCount + N. We ignore this imprecision since // we don't want to track which files are part of which intra-L0 // compaction. - fileCount int compactingFileCount int - // All files in this interval, in increasing sublevel order. - files []*FileMetadata - // Interpolated from files in this interval. For files spanning multiple // intervals, we assume an equal distribution of bytes across all those // intervals. @@ -292,7 +291,6 @@ func NewL0Sublevels( subLevel <= interval.files[len(interval.files)-1].subLevel { subLevel = interval.files[len(interval.files)-1].subLevel + 1 } - s.orderedIntervals[i].fileCount++ interval.estimatedBytes += interpolatedBytes if f.minIntervalIndex < interval.filesMinIntervalIndex { interval.filesMinIntervalIndex = f.minIntervalIndex @@ -300,9 +298,6 @@ func NewL0Sublevels( if f.maxIntervalIndex > interval.filesMaxIntervalIndex { interval.filesMaxIntervalIndex = f.maxIntervalIndex } - } - for i := f.minIntervalIndex; i <= f.maxIntervalIndex; i++ { - interval := &s.orderedIntervals[i] interval.files = append(interval.files, f) } f.subLevel = subLevel @@ -476,7 +471,7 @@ func (s *L0Sublevels) describe(verbose bool) string { foundBaseCompactingIntervals := false for ; i < len(s.orderedIntervals); i++ { interval := &s.orderedIntervals[i] - if interval.fileCount == 0 { + if len(interval.files) == 0 { continue } if !interval.isBaseCompacting { @@ -514,8 +509,9 @@ func (s *L0Sublevels) ReadAmplification() int { amp := 0 for i := range s.orderedIntervals { interval := &s.orderedIntervals[i] - if amp < interval.fileCount { - amp = interval.fileCount + fileCount := len(interval.files) + if amp < fileCount { + amp = fileCount } } return amp @@ -550,7 +546,7 @@ func (s *L0Sublevels) InUseKeyRanges(smallest, largest []byte) []UserKeyRange { for i := start; i < end; { // Intervals with no files are not in use and can be skipped, once we // end the current UserKeyRange. - if s.orderedIntervals[i].fileCount == 0 { + if len(s.orderedIntervals[i].files) == 0 { curr = nil i++ continue @@ -606,7 +602,7 @@ func (s *L0Sublevels) MaxDepthAfterOngoingCompactions() int { depth := 0 for i := range s.orderedIntervals { interval := &s.orderedIntervals[i] - intervalDepth := interval.fileCount - interval.compactingFileCount + intervalDepth := len(interval.files) - interval.compactingFileCount if depth < intervalDepth { depth = intervalDepth } @@ -907,6 +903,29 @@ func (is intervalSorterByDecreasingScore) Swap(i, j int) { // // Lbase a--------i m---------w // +// Note that when ExtendL0ForBaseCompactionTo is called, the compaction expands +// to the following, given that the [l,o] file can be added without including +// additional files in Lbase: +// +// _____________ +// L0.3 |a--d g-j| _________ +// L0.2 | f--j| | r-t | +// L0.1 | b-d e---j|______| | +// L0.0 |a--d f--j||l--o p-----x| +// +// Lbase a--------i m---------w +// +// If an additional file existed in LBase that overlapped with [l,o], it would +// be excluded from the compaction. Concretely: +// +// _____________ +// L0.3 |a--d g-j| _________ +// L0.2 | f--j| | r-t | +// L0.1 | b-d e---j| | | +// L0.0 |a--d f--j| l--o |p-----x| +// +// Lbase a--------ij--lm---------w +// // Intra-L0: If the L0 score is high, but PickBaseCompaction() is unable to // pick a compaction, PickIntraL0Compaction will be used to pick an intra-L0 // compaction. Similar to L0 -> Lbase compactions, we want to allow for @@ -975,7 +994,7 @@ func (s *L0Sublevels) PickBaseCompaction( sublevelCount := len(s.levelFiles) for i := range s.orderedIntervals { interval := &s.orderedIntervals[i] - depth := interval.fileCount - interval.compactingFileCount + depth := len(interval.files) - interval.compactingFileCount if interval.isBaseCompacting || minCompactionDepth > depth { continue } @@ -1190,7 +1209,7 @@ func (s *L0Sublevels) PickIntraL0Compaction( scoredIntervals := make([]intervalAndScore, len(s.orderedIntervals)) for i := range s.orderedIntervals { interval := &s.orderedIntervals[i] - depth := interval.fileCount - interval.compactingFileCount + depth := len(interval.files) - interval.compactingFileCount if minCompactionDepth > depth { continue } diff --git a/vendor/github.com/cockroachdb/pebble/internal/manifest/version.go b/vendor/github.com/cockroachdb/pebble/internal/manifest/version.go index d9e80d9fa..48bd2cd0c 100644 --- a/vendor/github.com/cockroachdb/pebble/internal/manifest/version.go +++ b/vendor/github.com/cockroachdb/pebble/internal/manifest/version.go @@ -65,6 +65,19 @@ type TableStats struct { // FileMetadata holds the metadata for an on-disk table. type FileMetadata struct { + // Atomic contains fields which are accessed atomically. Go allocations + // are guaranteed to be 64-bit aligned which we take advantage of by + // placing the 64-bit fields which we access atomically at the beginning + // of the FileMetadata struct. For more information, see + // https://golang.org/pkg/sync/atomic/#pkg-note-BUG. + Atomic struct { + // AllowedSeeks is used to determine if a file should be picked for + // a read triggered compaction. It is decremented when read sampling + // in pebble.Iterator after every after every positioning operation + // that returns a user key (eg. Next, Prev, SeekGE, SeekLT, etc). + AllowedSeeks int64 + } + // Reference count for the file: incremented when a file is added to a // version and decremented when the version is unreferenced. The file is // obsolete when the reference count falls to zero. @@ -96,18 +109,10 @@ type FileMetadata struct { // is true and IsIntraL0Compacting is false for an L0 file, the file must // be part of a compaction to Lbase. IsIntraL0Compacting bool - // Fields inside the Atomic struct should be accessed atomically. - Atomic struct { - // AllowedSeeks is used to determine if a file should be picked for - // a read triggered compaction. It is decremented when read sampling - // in pebble.Iterator after every after every positioning operation - // that returns a user key (eg. Next, Prev, SeekGE, SeekLT, etc). - AllowedSeeks int64 - } - subLevel int - l0Index int - minIntervalIndex int - maxIntervalIndex int + subLevel int + l0Index int + minIntervalIndex int + maxIntervalIndex int // True if user asked us to compact this file. This flag is only set and // respected by RocksDB but exists here to preserve its value in the diff --git a/vendor/github.com/cockroachdb/pebble/open.go b/vendor/github.com/cockroachdb/pebble/open.go index 4e90fbfe6..8c0444fee 100644 --- a/vendor/github.com/cockroachdb/pebble/open.go +++ b/vendor/github.com/cockroachdb/pebble/open.go @@ -22,7 +22,7 @@ import ( "github.com/cockroachdb/pebble/internal/invariants" "github.com/cockroachdb/pebble/internal/manual" "github.com/cockroachdb/pebble/internal/rate" - "github.com/cockroachdb/pebble/internal/record" + "github.com/cockroachdb/pebble/record" "github.com/cockroachdb/pebble/vfs" ) @@ -293,6 +293,23 @@ func Open(dirname string, opts *Options) (db *DB, _ error) { if !d.opts.ReadOnly { // Create an empty .log file. newLogNum := d.mu.versions.getNextFileNum() + + // This logic is slightly different than RocksDB's. Specifically, RocksDB + // sets MinUnflushedLogNum to max-recovered-log-num + 1. We set it to the + // newLogNum. There should be no difference in using either value. + ve.MinUnflushedLogNum = newLogNum + + // Create the manifest with the updated MinUnflushedLogNum before + // creating the new log file. If we created the log file first, a + // crash before the manifest is synced could leave two WALs with + // unclean tails. + d.mu.versions.logLock() + if err := d.mu.versions.logAndApply(jobID, &ve, newFileMetrics(ve.NewFiles), d.dataDir, func() []compactionInfo { + return nil + }); err != nil { + return nil, err + } + newLogName := base.MakeFilename(opts.FS, d.walDirname, fileTypeLog, newLogNum) d.mu.log.queue = append(d.mu.log.queue, fileInfo{fileNum: newLogNum, fileSize: 0}) logFile, err := opts.FS.Create(newLogName) @@ -318,17 +335,6 @@ func Open(dirname string, opts *Options) (db *DB, _ error) { d.mu.log.LogWriter = record.NewLogWriter(logFile, newLogNum) d.mu.log.LogWriter.SetMinSyncInterval(d.opts.WALMinSyncInterval) d.mu.versions.metrics.WAL.Files++ - - // This logic is slightly different than RocksDB's. Specifically, RocksDB - // sets MinUnflushedLogNum to max-recovered-log-num + 1. We set it to the - // newLogNum. There should be no difference in using either value. - ve.MinUnflushedLogNum = newLogNum - d.mu.versions.logLock() - if err := d.mu.versions.logAndApply(jobID, &ve, newFileMetrics(ve.NewFiles), d.dataDir, func() []compactionInfo { - return nil - }); err != nil { - return nil, err - } } d.updateReadStateLocked(d.opts.DebugCheck) diff --git a/vendor/github.com/cockroachdb/pebble/options.go b/vendor/github.com/cockroachdb/pebble/options.go index f42206d97..874b5db08 100644 --- a/vendor/github.com/cockroachdb/pebble/options.go +++ b/vendor/github.com/cockroachdb/pebble/options.go @@ -7,6 +7,7 @@ package pebble import ( "bytes" "fmt" + "runtime" "strconv" "strings" "time" @@ -339,6 +340,29 @@ type Options struct { // and disables read triggered compactions. The default is 1 << 4. which // gets multiplied with a constant of 1 << 16 to yield 1 << 20 (1MB). ReadSamplingMultiplier int64 + + // TableCacheShards is the number of shards per table cache. + // Reducing the value can reduce the number of idle goroutines per DB + // instance which can be useful in scenarios with a lot of DB instances + // and a large number of CPUs, but doing so can lead to higher contention + // in the table cache and reduced performance. + // + // The default value is the number of logical CPUs, which can be + // limited by runtime.GOMAXPROCS. + TableCacheShards int + + // KeyValidationFunc is a function to validate a user key in an SSTable. + // + // Currently, this function is used to validate the smallest and largest + // keys in an SSTable undergoing compaction. In this case, returning an + // error from the validation function will result in a panic at runtime, + // given that there is rarely any way of recovering from malformed keys + // present in compacted files. By default, validation is not performed. + // + // Additional use-cases may be added in the future. + // + // NOTE: callers should take care to not mutate the key being validated. + KeyValidationFunc func(userKey []byte) error } // Filters is a map from filter policy name to filter policy. It is used for @@ -427,6 +451,11 @@ type Options struct { // when L0 read-amplification passes the L0CompactionConcurrency threshold. MaxConcurrentCompactions int + // NumPrevManifest is the number of non-current or older manifests which + // we want to keep around for debugging purposes. By default, we're going + // to keep one older manifest. + NumPrevManifest int + // ReadOnly indicates that the DB should be opened in read-only mode. Writes // to the DB will return an error, background compactions are disabled, and // the flush that normally occurs after replaying the WAL at startup is @@ -584,6 +613,10 @@ func (o *Options) EnsureDefaults() *Options { if o.MaxConcurrentCompactions <= 0 { o.MaxConcurrentCompactions = 1 } + if o.NumPrevManifest <= 0 { + o.NumPrevManifest = 1 + } + if o.FS == nil { o.FS = vfs.WithDiskHealthChecks(vfs.Default, 5*time.Second, func(name string, duration time.Duration) { @@ -602,6 +635,9 @@ func (o *Options) EnsureDefaults() *Options { if o.Experimental.ReadSamplingMultiplier == 0 { o.Experimental.ReadSamplingMultiplier = 1 << 4 } + if o.Experimental.TableCacheShards <= 0 { + o.Experimental.TableCacheShards = runtime.GOMAXPROCS(0) + } o.initMaps() return o @@ -688,6 +724,7 @@ func (o *Options) String() string { fmt.Fprintf(&buf, " read_compaction_rate=%d\n", o.Experimental.ReadCompactionRate) fmt.Fprintf(&buf, " read_sampling_multiplier=%d\n", o.Experimental.ReadSamplingMultiplier) fmt.Fprintf(&buf, " strict_wal_tail=%t\n", o.private.strictWALTail) + fmt.Fprintf(&buf, " table_cache_shards=%d\n", o.Experimental.TableCacheShards) fmt.Fprintf(&buf, " table_property_collectors=[") for i := range o.TablePropertyCollectors { if i > 0 { @@ -886,6 +923,8 @@ func (o *Options) Parse(s string, hooks *ParseHooks) error { o.Experimental.ReadCompactionRate, err = strconv.ParseInt(value, 10, 64) case "read_sampling_multiplier": o.Experimental.ReadSamplingMultiplier, err = strconv.ParseInt(value, 10, 64) + case "table_cache_shards": + o.Experimental.TableCacheShards, err = strconv.Atoi(value) case "table_format": switch value { case "leveldb": diff --git a/vendor/github.com/cockroachdb/pebble/internal/record/log_writer.go b/vendor/github.com/cockroachdb/pebble/record/log_writer.go similarity index 100% rename from vendor/github.com/cockroachdb/pebble/internal/record/log_writer.go rename to vendor/github.com/cockroachdb/pebble/record/log_writer.go diff --git a/vendor/github.com/cockroachdb/pebble/internal/record/record.go b/vendor/github.com/cockroachdb/pebble/record/record.go similarity index 99% rename from vendor/github.com/cockroachdb/pebble/internal/record/record.go rename to vendor/github.com/cockroachdb/pebble/record/record.go index ab48ab4ae..194724c0d 100644 --- a/vendor/github.com/cockroachdb/pebble/internal/record/record.go +++ b/vendor/github.com/cockroachdb/pebble/record/record.go @@ -96,7 +96,7 @@ // The wire format allows for limited recovery in the face of data corruption: // on a format error (such as a checksum mismatch), the reader moves to the // next block and looks for the next full or first chunk. -package record // import "github.com/cockroachdb/pebble/internal/record" +package record // The C++ Level-DB code calls this the log, but it has been renamed to record // to avoid clashing with the standard log package, and because it is generally diff --git a/vendor/github.com/cockroachdb/pebble/table_cache.go b/vendor/github.com/cockroachdb/pebble/table_cache.go index 502e02a5f..ab5a836be 100644 --- a/vendor/github.com/cockroachdb/pebble/table_cache.go +++ b/vendor/github.com/cockroachdb/pebble/table_cache.go @@ -8,7 +8,6 @@ import ( "bytes" "context" "fmt" - "runtime" "runtime/debug" "runtime/pprof" "sync" @@ -38,7 +37,7 @@ func (c *tableCache) init(cacheID uint64, dirname string, fs vfs.FS, opts *Optio c.cache = opts.Cache c.cache.Ref() - c.shards = make([]*tableCacheShard, runtime.GOMAXPROCS(0)) + c.shards = make([]*tableCacheShard, opts.Experimental.TableCacheShards) for i := range c.shards { c.shards[i] = &tableCacheShard{} c.shards[i].init(cacheID, dirname, fs, opts, size/len(c.shards)) diff --git a/vendor/github.com/cockroachdb/pebble/version_set.go b/vendor/github.com/cockroachdb/pebble/version_set.go index a638d90cb..e4e4487df 100644 --- a/vendor/github.com/cockroachdb/pebble/version_set.go +++ b/vendor/github.com/cockroachdb/pebble/version_set.go @@ -15,7 +15,7 @@ import ( "github.com/cockroachdb/pebble/internal/base" "github.com/cockroachdb/pebble/internal/invariants" "github.com/cockroachdb/pebble/internal/manifest" - "github.com/cockroachdb/pebble/internal/record" + "github.com/cockroachdb/pebble/record" "github.com/cockroachdb/pebble/vfs" ) diff --git a/vendor/github.com/cockroachdb/pebble/vfs/mem_fs.go b/vendor/github.com/cockroachdb/pebble/vfs/mem_fs.go index 6818b1bd7..424539c2a 100644 --- a/vendor/github.com/cockroachdb/pebble/vfs/mem_fs.go +++ b/vendor/github.com/cockroachdb/pebble/vfs/mem_fs.go @@ -497,7 +497,7 @@ func (*MemFS) PathDir(p string) string { // GetDiskUsage implements FS.GetDiskUsage. func (*MemFS) GetDiskUsage(string) (DiskUsage, error) { - return DiskUsage{}, errors.New("pebble: not supported") + return DiskUsage{}, ErrUnsupported } // memNode holds a file's data or a directory's children, and implements os.FileInfo. diff --git a/vendor/github.com/cockroachdb/pebble/vfs/vfs.go b/vendor/github.com/cockroachdb/pebble/vfs/vfs.go index 3ce5982cd..965bf373d 100644 --- a/vendor/github.com/cockroachdb/pebble/vfs/vfs.go +++ b/vendor/github.com/cockroachdb/pebble/vfs/vfs.go @@ -42,8 +42,9 @@ type OpenOption interface { // The names are filepath names: they may be / separated or \ separated, // depending on the underlying operating system. type FS interface { - // Create creates the named file for writing, truncating it if it already - // exists. + // Create creates the named file for reading and writing. If a file + // already exists at the provided name, it's removed first ensuring the + // resulting file descriptor points to a new inode. Create(name string) (File, error) // Link creates newname as a hard link to the oldname file. @@ -141,7 +142,23 @@ var Default FS = defaultFS{} type defaultFS struct{} func (defaultFS) Create(name string) (File, error) { - f, err := os.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_TRUNC|syscall.O_CLOEXEC, 0666) + const openFlags = os.O_RDWR | os.O_CREATE | os.O_EXCL | syscall.O_CLOEXEC + + f, err := os.OpenFile(name, openFlags, 0666) + // If the file already exists, remove it and try again. + // + // NB: We choose to remove the file instead of truncating it, despite the + // fact that we can't do so atomically, because it's more resistant to + // misuse when using hard links. + + // We must loop in case another goroutine/thread/process is also + // attempting to create the a file at the same path. + for oserror.IsExist(err) { + if removeErr := os.Remove(name); removeErr != nil && !oserror.IsNotExist(removeErr) { + return f, errors.WithStack(removeErr) + } + f, err = os.OpenFile(name, openFlags, 0666) + } return f, errors.WithStack(err) } @@ -324,3 +341,6 @@ func Root(fs FS) FS { } return fs } + +// ErrUnsupported may be returned a FS when it does not support an operation. +var ErrUnsupported = errors.New("pebble: not supported") diff --git a/vendor/github.com/cockroachdb/redact/go.mod b/vendor/github.com/cockroachdb/redact/go.mod deleted file mode 100644 index cadc968c8..000000000 --- a/vendor/github.com/cockroachdb/redact/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module github.com/cockroachdb/redact - -go 1.14 diff --git a/vendor/github.com/cockroachdb/sentry-go/go.mod b/vendor/github.com/cockroachdb/sentry-go/go.mod deleted file mode 100644 index a2e267144..000000000 --- a/vendor/github.com/cockroachdb/sentry-go/go.mod +++ /dev/null @@ -1,33 +0,0 @@ -module github.com/cockroachdb/sentry-go - -go 1.12 - -require ( - github.com/ajg/form v1.5.1 // indirect - github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0 // indirect - github.com/fasthttp-contrib/websocket v0.0.0-20160511215533-1f3b11f56072 // indirect - github.com/gin-gonic/gin v1.4.0 - github.com/go-errors/errors v1.0.1 - github.com/go-martini/martini v0.0.0-20170121215854-22fa46961aab - github.com/google/go-cmp v0.4.0 - github.com/google/go-querystring v1.0.0 // indirect - github.com/imkira/go-interpol v1.1.0 // indirect - github.com/k0kubun/colorstring v0.0.0-20150214042306-9440f1994b88 // indirect - github.com/kataras/iris/v12 v12.0.1 - github.com/labstack/echo/v4 v4.1.11 - github.com/moul/http2curl v1.0.0 // indirect - github.com/onsi/ginkgo v1.13.0 // indirect - github.com/pingcap/errors v0.11.4 - github.com/pkg/errors v0.8.1 - github.com/sergi/go-diff v1.1.0 // indirect - github.com/smartystreets/goconvey v1.6.4 // indirect - github.com/urfave/negroni v1.0.0 - github.com/valyala/fasthttp v1.6.0 - github.com/xeipuuv/gojsonschema v1.2.0 // indirect - github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0 // indirect - github.com/yudai/gojsondiff v1.0.0 // indirect - github.com/yudai/golcs v0.0.0-20170316035057-ecda9a501e82 // indirect - github.com/yudai/pp v2.0.1+incompatible // indirect -) - -replace github.com/ugorji/go v1.1.4 => github.com/ugorji/go/codec v0.0.0-20190204201341-e444a5086c43 diff --git a/vendor/github.com/cockroachdb/sentry-go/go.sum b/vendor/github.com/cockroachdb/sentry-go/go.sum deleted file mode 100644 index 636081d9b..000000000 --- a/vendor/github.com/cockroachdb/sentry-go/go.sum +++ /dev/null @@ -1,285 +0,0 @@ -github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8= -github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/CloudyKit/fastprinter v0.0.0-20170127035650-74b38d55f37a h1:3SgJcK9l5uPdBC/X17wanyJAMxM33+4ZhEIV96MIH8U= -github.com/CloudyKit/fastprinter v0.0.0-20170127035650-74b38d55f37a/go.mod h1:EFZQ978U7x8IRnstaskI3IysnWY5Ao3QgZUKOXlsAdw= -github.com/CloudyKit/jet v2.1.3-0.20180809161101-62edd43e4f88+incompatible h1:rZgFj+Gtf3NMi/U5FvCvhzaxzW/TaPYgUYx3bAPz9DE= -github.com/CloudyKit/jet v2.1.3-0.20180809161101-62edd43e4f88+incompatible/go.mod h1:HPYO+50pSWkPoj9Q/eq0aRGByCL6ScRlUmiEX5Zgm+w= -github.com/Joker/hpp v1.0.0 h1:65+iuJYdRXv/XyN62C1uEmmOx3432rNG/rKlX6V7Kkc= -github.com/Joker/hpp v1.0.0/go.mod h1:8x5n+M1Hp5hC0g8okX3sR3vFQwynaX/UgSOM9MeBKzY= -github.com/Joker/jade v1.0.1-0.20190614124447-d475f43051e7 h1:mreN1m/5VJ/Zc3b4pzj9qU6D9SRQ6Vm+3KfI328t3S8= -github.com/Joker/jade v1.0.1-0.20190614124447-d475f43051e7/go.mod h1:6E6s8o2AE4KhCrqr6GRJjdC/gNfTdxkIXvuGZZda2VM= -github.com/Shopify/goreferrer v0.0.0-20181106222321-ec9c9a553398 h1:WDC6ySpJzbxGWFh4aMxFFC28wwGp5pEuoTtvA4q/qQ4= -github.com/Shopify/goreferrer v0.0.0-20181106222321-ec9c9a553398/go.mod h1:a1uqRtAwp2Xwc6WNPJEufxJ7fx3npB4UV/JOLmbu5I0= -github.com/ajg/form v1.5.1 h1:t9c7v8JUKu/XxOGBU0yjNpaMloxGEJhUkqFRq0ibGeU= -github.com/ajg/form v1.5.1/go.mod h1:uL1WgH+h2mgNtvBq0339dVnzXdBETtL2LeUXaIv25UY= -github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= -github.com/aymerick/raymond v2.0.3-0.20180322193309-b565731e1464+incompatible h1:Ppm0npCCsmuR9oQaBtRuZcmILVE74aXE+AmrJj8L2ns= -github.com/aymerick/raymond v2.0.3-0.20180322193309-b565731e1464+incompatible/go.mod h1:osfaiScAUVup+UC9Nfq76eWqDhXlp+4UYaA8uhTBO6g= -github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0 h1:sDMmm+q/3+BukdIpxwO365v/Rbspp2Nt5XntgQRXq8Q= -github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0/go.mod h1:4Zcjuz89kmFXt9morQgcfYZAYZ5n8WHjt81YYWIwtTM= -github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= -github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= -github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= -github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/dgraph-io/badger v1.6.0/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhYy6psNgSztDR4= -github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= -github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= -github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw= -github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= -github.com/eknkc/amber v0.0.0-20171010120322-cdade1c07385 h1:clC1lXBpe2kTj2VHdaIu9ajZQe4kcEY9j0NsnDDBZ3o= -github.com/eknkc/amber v0.0.0-20171010120322-cdade1c07385/go.mod h1:0vRUJqYpeSZifjYj7uP3BG/gKcuzL9xWVV/Y+cK33KM= -github.com/etcd-io/bbolt v1.3.3/go.mod h1:ZF2nL25h33cCyBtcyWeZ2/I3HQOfTP+0PIEvHjkjCrw= -github.com/fasthttp-contrib/websocket v0.0.0-20160511215533-1f3b11f56072 h1:DddqAaWDpywytcG8w/qoQ5sAN8X12d3Z3koB0C3Rxsc= -github.com/fasthttp-contrib/websocket v0.0.0-20160511215533-1f3b11f56072/go.mod h1:duJ4Jxv5lDcvg4QuQr0oowTf7dz4/CR8NtyCooz9HL8= -github.com/fatih/structs v1.1.0 h1:Q7juDM0QtcnhCpeyLGQKyg4TOIghuNXrkL32pHAUMxo= -github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M= -github.com/flosch/pongo2 v0.0.0-20190707114632-bbf5a6c351f4 h1:GY1+t5Dr9OKADM64SYnQjw/w99HMYvQ0A8/JoUkxVmc= -github.com/flosch/pongo2 v0.0.0-20190707114632-bbf5a6c351f4/go.mod h1:T9YF2M40nIgbVgp3rreNmTged+9HrbNTIQf1PsaIiTA= -github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I= -github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= -github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4= -github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= -github.com/gavv/httpexpect v2.0.0+incompatible h1:1X9kcRshkSKEjNJJxX9Y9mQ5BRfbxU5kORdjhlA1yX8= -github.com/gavv/httpexpect v2.0.0+incompatible/go.mod h1:x+9tiU1YnrOvnB725RkpoLv1M62hOWzwo5OXotisrKc= -github.com/gin-contrib/sse v0.0.0-20190301062529-5545eab6dad3 h1:t8FVkw33L+wilf2QiWkw0UV77qRpcH/JHPKGpKa2E8g= -github.com/gin-contrib/sse v0.0.0-20190301062529-5545eab6dad3/go.mod h1:VJ0WA2NBN22VlZ2dKZQPAPnyWw5XTlK1KymzLKsr59s= -github.com/gin-gonic/gin v1.4.0 h1:3tMoCCfM7ppqsR0ptz/wi1impNpT7/9wQtMZ8lr1mCQ= -github.com/gin-gonic/gin v1.4.0/go.mod h1:OW2EZn3DO8Ln9oIKOvM++LBO+5UPHJJDH72/q/3rZdM= -github.com/go-check/check v0.0.0-20180628173108-788fd7840127 h1:0gkP6mzaMqkmpcJYCFOLkIBwI7xFExG03bbkOkCvUPI= -github.com/go-check/check v0.0.0-20180628173108-788fd7840127/go.mod h1:9ES+weclKsC9YodN5RgxqK/VD9HM9JsCSh7rNhMZE98= -github.com/go-errors/errors v1.0.1 h1:LUHzmkK3GUKUrL/1gfBUxAHzcev3apQlezX/+O7ma6w= -github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q= -github.com/go-martini/martini v0.0.0-20170121215854-22fa46961aab h1:xveKWz2iaueeTaUgdetzel+U7exyigDYBryyVfV/rZk= -github.com/go-martini/martini v0.0.0-20170121215854-22fa46961aab/go.mod h1:/P9AEU963A2AYjv4d1V5eVL1CQbEJq6aCNHDDjibzu8= -github.com/gobwas/httphead v0.0.0-20180130184737-2c6c146eadee/go.mod h1:L0fX3K22YWvt/FAX9NnzrNzcI4wNYi9Yku4O0LKYflo= -github.com/gobwas/pool v0.2.0/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw= -github.com/gobwas/ws v1.0.2/go.mod h1:szmBTxLgaFppYjEmNtny/v3w89xOydFnnZMcgRRu/EM= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg= -github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0= -github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/gomodule/redigo v1.7.1-0.20190724094224-574c33c3df38/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk= -github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= -github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8= -github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= -github.com/gorilla/websocket v1.4.0 h1:WDFjx/TMzVgy9VdMMQi2K2Emtwi2QcUQsztZ/zLaH/Q= -github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= -github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= -github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= -github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI= -github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= -github.com/imkira/go-interpol v1.1.0 h1:KIiKr0VSG2CUW1hl1jpiyuzuJeKUUpC8iM1AIE7N1Vk= -github.com/imkira/go-interpol v1.1.0/go.mod h1:z0h2/2T3XF8kyEPpRgJ3kmNv+C43p+I/CoI+jC3w2iA= -github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= -github.com/iris-contrib/blackfriday v2.0.0+incompatible h1:o5sHQHHm0ToHUlAJSTjW9UWicjJSDDauOOQ2AHuIVp4= -github.com/iris-contrib/blackfriday v2.0.0+incompatible/go.mod h1:UzZ2bDEoaSGPbkg6SAB4att1aAwTmVIx/5gCVqeyUdI= -github.com/iris-contrib/go.uuid v2.0.0+incompatible/go.mod h1:iz2lgM/1UnEf1kP0L/+fafWORmlnuysV2EMP8MW+qe0= -github.com/iris-contrib/i18n v0.0.0-20171121225848-987a633949d0/go.mod h1:pMCz62A0xJL6I+umB2YTlFRwWXaDFA0jy+5HzGiJjqI= -github.com/iris-contrib/schema v0.0.1 h1:10g/WnoRR+U+XXHWKBHeNy/+tZmM2kcAVGLOsz+yaDA= -github.com/iris-contrib/schema v0.0.1/go.mod h1:urYA3uvUNG1TIIjOSCzHr9/LmbQo8LrOcOqfqxa4hXw= -github.com/json-iterator/go v1.1.6 h1:MrUvLMLTMxbqFJ9kzlvat/rYZqZnW3u4wkLzWTaFwKs= -github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= -github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo= -github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= -github.com/juju/errors v0.0.0-20181118221551-089d3ea4e4d5 h1:rhqTjzJlm7EbkELJDKMTU7udov+Se0xZkWmugr6zGok= -github.com/juju/errors v0.0.0-20181118221551-089d3ea4e4d5/go.mod h1:W54LbzXuIE0boCoNJfwqpmkKJ1O4TCTZMetAt6jGk7Q= -github.com/juju/loggo v0.0.0-20180524022052-584905176618 h1:MK144iBQF9hTSwBW/9eJm034bVoG30IshVm688T2hi8= -github.com/juju/loggo v0.0.0-20180524022052-584905176618/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U= -github.com/juju/testing v0.0.0-20180920084828-472a3e8b2073 h1:WQM1NildKThwdP7qWrNAFGzp4ijNLw8RlgENkaI4MJs= -github.com/juju/testing v0.0.0-20180920084828-472a3e8b2073/go.mod h1:63prj8cnj0tU0S9OHjGJn+b1h0ZghCndfnbQolrYTwA= -github.com/k0kubun/colorstring v0.0.0-20150214042306-9440f1994b88 h1:uC1QfSlInpQF+M0ao65imhwqKnz3Q2z/d8PWZRMQvDM= -github.com/k0kubun/colorstring v0.0.0-20150214042306-9440f1994b88/go.mod h1:3w7q1U84EfirKl04SVQ/s7nPm1ZPhiXd34z40TNz36k= -github.com/kataras/golog v0.0.9 h1:J7Dl82843nbKQDrQM/abbNJZvQjS6PfmkkffhOTXEpM= -github.com/kataras/golog v0.0.9/go.mod h1:12HJgwBIZFNGL0EJnMRhmvGA0PQGx8VFwrZtM4CqbAk= -github.com/kataras/iris/v12 v12.0.1 h1:Wo5S7GMWv5OAzJmvFTvss/C4TS1W0uo6LkDlSymT4rM= -github.com/kataras/iris/v12 v12.0.1/go.mod h1:udK4vLQKkdDqMGJJVd/msuMtN6hpYJhg/lSzuxjhO+U= -github.com/kataras/neffos v0.0.10/go.mod h1:ZYmJC07hQPW67eKuzlfY7SO3bC0mw83A3j6im82hfqw= -github.com/kataras/pio v0.0.0-20190103105442-ea782b38602d h1:V5Rs9ztEWdp58oayPq/ulmlqJJZeJP6pP79uP3qjcao= -github.com/kataras/pio v0.0.0-20190103105442-ea782b38602d/go.mod h1:NV88laa9UiiDuX9AhMbDPkGYSPugBOV6yTZB1l2K9Z0= -github.com/klauspost/compress v1.8.2/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= -github.com/klauspost/compress v1.9.0 h1:GhthINjveNZAdFUD8QoQYfjxnOONZgztK/Yr6M23UTY= -github.com/klauspost/compress v1.9.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= -github.com/klauspost/cpuid v1.2.1/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek= -github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= -github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= -github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= -github.com/labstack/echo/v4 v4.1.11 h1:z0BZoArY4FqdpUEl+wlHp4hnr/oSR6MTmQmv8OHSoww= -github.com/labstack/echo/v4 v4.1.11/go.mod h1:i541M3Fj6f76NZtHSj7TXnyM8n2gaodfvfxNnFqi74g= -github.com/labstack/gommon v0.3.0 h1:JEeO0bvc78PKdyHxloTKiF8BD5iGrH8T6MSeGvSgob0= -github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k= -github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= -github.com/mattn/go-colorable v0.1.2 h1:/bC9yWikZXAL9uJdulbSfyVNIR3n3trXl+v8+1sx8mU= -github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= -github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= -github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= -github.com/mattn/go-isatty v0.0.9 h1:d5US/mDsogSGW37IV293h//ZFaeajb69h+EHFsv2xGg= -github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ= -github.com/mattn/goveralls v0.0.2/go.mod h1:8d1ZMHsd7fW6IRPKQh46F2WRpyib5/X4FOpevwGNQEw= -github.com/mediocregopher/mediocre-go-lib v0.0.0-20181029021733-cb65787f37ed/go.mod h1:dSsfyI2zABAdhcbvkXqgxOxrCsbYeHCPgrZkku60dSg= -github.com/mediocregopher/radix/v3 v3.3.0/go.mod h1:EmfVyvspXz1uZEyPBMyGK+kjWiKQGvsUt6O3Pj+LDCQ= -github.com/microcosm-cc/bluemonday v1.0.2 h1:5lPfLTTAvAbtS0VqT+94yOtFnGfUWYyx0+iToC3Os3s= -github.com/microcosm-cc/bluemonday v1.0.2/go.mod h1:iVP4YcDBq+n/5fb23BhYFvIMq/leAFZyRl6bYmGDlGc= -github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= -github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= -github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= -github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= -github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI= -github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= -github.com/moul/http2curl v1.0.0 h1:dRMWoAtb+ePxMlLkrCbAqh4TlPHXvoGUSQ323/9Zahs= -github.com/moul/http2curl v1.0.0/go.mod h1:8UbvGypXm98wA/IqH45anm5Y2Z6ep6O31QGOAZ3H0fQ= -github.com/nats-io/nats.go v1.8.1/go.mod h1:BrFz9vVn0fU3AcH9Vn4Kd7W0NpJ651tD5omQ3M8LwxM= -github.com/nats-io/nkeys v0.0.2/go.mod h1:dab7URMsZm6Z/jp9Z5UGa87Uutgc2mVpXLC4B7TDb/4= -github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c= -github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78= -github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A= -github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= -github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk= -github.com/onsi/ginkgo v1.13.0 h1:M76yO2HkZASFjXL0HSoZJ1AYEmQxNJmY41Jx1zNUq1Y= -github.com/onsi/ginkgo v1.13.0/go.mod h1:+REjRxOmWfHCjfv9TTWB1jD1Frx4XydAD3zm1lskyM0= -github.com/onsi/gomega v1.7.1 h1:K0jcRCwNQM3vFGh1ppMtDh/+7ApJrjldlX8fA0jDTLQ= -github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY= -github.com/onsi/gomega v1.10.1 h1:o0+MgICZLuZ7xjH7Vx6zS/zcu93/BEp1VwkIW1mEXCE= -github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo= -github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= -github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4= -github.com/pingcap/errors v0.11.4/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8= -github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I= -github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= -github.com/ryanuber/columnize v2.1.0+incompatible h1:j1Wcmh8OrK4Q7GXY+V7SVSY8nUWQxHW5TkBe7YUl+2s= -github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= -github.com/sclevine/agouti v3.0.0+incompatible/go.mod h1:b4WX9W9L1sfQKXeJf1mUTLZKJ48R1S7H23Ji7oFO5Bw= -github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0= -github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= -github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo= -github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= -github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM= -github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc= -github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s= -github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA= -github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= -github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= -github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= -github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= -github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= -github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= -github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= -github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk= -github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= -github.com/ugorji/go v1.1.2/go.mod h1:hnLbHMwcvSihnDhEfx2/BzKp2xb0Y+ErdfYcrs9tkJQ= -github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8 h1:3SVOIvH7Ae1KRYyQWRjXWJEA9sS/c/pjvH++55Gr648= -github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= -github.com/ugorji/go/codec v0.0.0-20190204201341-e444a5086c43 h1:BasDe+IErOQKrMVXab7UayvSlIpiyGwRvuX3EKYY7UA= -github.com/ugorji/go/codec v0.0.0-20190204201341-e444a5086c43/go.mod h1:iT03XoTwV7xq/+UGwKO3UbC1nNNlopQiY61beSdrtOA= -github.com/urfave/negroni v1.0.0 h1:kIimOitoypq34K7TG7DUaJ9kq/N4Ofuwi1sjz0KipXc= -github.com/urfave/negroni v1.0.0/go.mod h1:Meg73S6kFm/4PpbYdq35yYWoCZ9mS/YSx+lKnmiohz4= -github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw= -github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc= -github.com/valyala/fasthttp v1.6.0 h1:uWF8lgKmeaIewWVPwi4GRq2P6+R46IgYZdxWtM+GtEY= -github.com/valyala/fasthttp v1.6.0/go.mod h1:FstJa9V+Pj9vQ7OJie2qMHdwemEDaDiSdBnvPM1Su9w= -github.com/valyala/fasttemplate v1.0.1 h1:tY9CJiPnMXf1ERmG2EyK7gNUd+c6RKGD0IfU8WdUSz8= -github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8= -github.com/valyala/tcplisten v0.0.0-20161114210144-ceec8f93295a/go.mod h1:v3UYOV9WzVtRmSR+PDvWpU/qWl4Wa5LApYYX4ZtKbio= -github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= -github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= -github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= -github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= -github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= -github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= -github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= -github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0 h1:6fRhSjgLCkTD3JnJxvaJ4Sj+TYblw757bqYgZaOq5ZY= -github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0/go.mod h1:/LWChgwKmvncFJFHJ7Gvn9wZArjbV5/FppcK2fKk/tI= -github.com/yudai/gojsondiff v1.0.0 h1:27cbfqXLVEJ1o8I6v3y9lg8Ydm53EKqHXAOMxEGlCOA= -github.com/yudai/gojsondiff v1.0.0/go.mod h1:AY32+k2cwILAkW1fbgxQ5mUmMiZFgLIV+FBNExI05xg= -github.com/yudai/golcs v0.0.0-20170316035057-ecda9a501e82 h1:BHyfKlQyqbsFN5p3IfnEUduWvb9is428/nNb5L3U01M= -github.com/yudai/golcs v0.0.0-20170316035057-ecda9a501e82/go.mod h1:lgjkn3NuSvDfVJdfcVVdX+jpBxNmX4rDAzaS45IcYoM= -github.com/yudai/pp v2.0.1+incompatible h1:Q4//iY4pNF6yPLZIigmvcl7k/bPgrcTPIFIcmawg5bI= -github.com/yudai/pp v2.0.1+incompatible/go.mod h1:PuxR/8QJ7cyCkFp/aUDS+JY727OFEZkTdatxwunjIkc= -golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4 h1:HuIa8hRrWRSrqYzx1qI49NNxhdi2PrY7gxVSq1JjLDc= -golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190327091125-710a502c58a2/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= -golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7 h1:AeiKBIuRw3UomYXSbLy0Mc2dDLfdtbT/IVn4keq83P0= -golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a h1:aYOabOQFp6Vj6W1F80affTUvO9UxmJRx8K0gsfABByQ= -golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200519105757-fe76b779f299 h1:DYfZAGf2WMFjMxbgTjaC+2HC7NkNAQs+6Q8b9WEB/F4= -golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= -golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= -golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20181221001348-537d06c36207/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190327201419-c70d86f8b7cf/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.23.0 h1:4MY060fB1DLGMB/7MBTLnwQUY6+F09GEiz6SsrNqyzM= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= -gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4= -gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= -gopkg.in/go-playground/assert.v1 v1.2.1 h1:xoYuJVE7KT85PYWrN730RguIQO0ePzVRfFMXadIrXTM= -gopkg.in/go-playground/assert.v1 v1.2.1/go.mod h1:9RXL0bg/zibRAgZUYszZSwO/z8Y/a8bDuhia5mkpMnE= -gopkg.in/go-playground/validator.v8 v8.18.2 h1:lFB4DoMU6B626w8ny76MV7VX6W2VHct2GVOI3xgiMrQ= -gopkg.in/go-playground/validator.v8 v8.18.2/go.mod h1:RX2a/7Ha8BgOhfk7j780h4/u/RRjR0eouCJSH80/M2Y= -gopkg.in/mgo.v2 v2.0.0-20180705113604-9856a29383ce h1:xcEWjVhvbDy+nHP67nPDDpbYrY+ILlfndk4bRioVHaU= -gopkg.in/mgo.v2 v2.0.0-20180705113604-9856a29383ce/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA= -gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ= -gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= -gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I= -gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU= -gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= diff --git a/vendor/github.com/go-logr/logr/go.mod b/vendor/github.com/go-logr/logr/go.mod deleted file mode 100644 index 591884e91..000000000 --- a/vendor/github.com/go-logr/logr/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module github.com/go-logr/logr - -go 1.14 diff --git a/vendor/github.com/go-ole/go-ole/go.mod b/vendor/github.com/go-ole/go-ole/go.mod deleted file mode 100644 index df98533ea..000000000 --- a/vendor/github.com/go-ole/go-ole/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module github.com/go-ole/go-ole - -go 1.12 diff --git a/vendor/github.com/gogo/status/go.mod b/vendor/github.com/gogo/status/go.mod deleted file mode 100644 index 6d2c363fe..000000000 --- a/vendor/github.com/gogo/status/go.mod +++ /dev/null @@ -1,12 +0,0 @@ -module github.com/gogo/status - -go 1.12 - -require ( - github.com/gogo/googleapis v0.0.0-20180223154316-0cd9801be74a - github.com/gogo/protobuf v1.2.0 - github.com/golang/protobuf v1.2.0 - golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6 // indirect - google.golang.org/genproto v0.0.0-20180518175338-11a468237815 - google.golang.org/grpc v1.12.0 -) diff --git a/vendor/github.com/gogo/status/go.sum b/vendor/github.com/gogo/status/go.sum deleted file mode 100644 index 6938bfb29..000000000 --- a/vendor/github.com/gogo/status/go.sum +++ /dev/null @@ -1,12 +0,0 @@ -github.com/gogo/googleapis v0.0.0-20180223154316-0cd9801be74a h1:dR8+Q0uO5S2ZBcs2IH6VBKYwSxPo2vYCYq0ot0mu7xA= -github.com/gogo/googleapis v0.0.0-20180223154316-0cd9801be74a/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s= -github.com/gogo/protobuf v1.2.0 h1:xU6/SpYbvkNYiptHJYEDRseDLvYE7wSqhYYNy0QSUzI= -github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= -github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6 h1:bjcUS9ztw9kFmmIxJInhon/0Is3p+EHBKNgquIzo1OI= -golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -google.golang.org/genproto v0.0.0-20180518175338-11a468237815 h1:p3qKkjcSW6m32Lr1CInA3jW53vG29/JB6QOvQWie5WI= -google.golang.org/genproto v0.0.0-20180518175338-11a468237815/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/grpc v1.12.0 h1:Mm8atZtkT+P6R43n/dqNDWkPPu5BwRVu/1rJnJCeZH8= -google.golang.org/grpc v1.12.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= diff --git a/vendor/github.com/golang/snappy/go.mod b/vendor/github.com/golang/snappy/go.mod deleted file mode 100644 index f6406bb2c..000000000 --- a/vendor/github.com/golang/snappy/go.mod +++ /dev/null @@ -1 +0,0 @@ -module github.com/golang/snappy diff --git a/vendor/github.com/google/gofuzz/go.mod b/vendor/github.com/google/gofuzz/go.mod deleted file mode 100644 index 8ec4fe9e9..000000000 --- a/vendor/github.com/google/gofuzz/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module github.com/google/gofuzz - -go 1.12 diff --git a/vendor/github.com/json-iterator/go/go.mod b/vendor/github.com/json-iterator/go/go.mod deleted file mode 100644 index e05c42ff5..000000000 --- a/vendor/github.com/json-iterator/go/go.mod +++ /dev/null @@ -1,11 +0,0 @@ -module github.com/json-iterator/go - -go 1.12 - -require ( - github.com/davecgh/go-spew v1.1.1 - github.com/google/gofuzz v1.0.0 - github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 - github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742 - github.com/stretchr/testify v1.3.0 -) diff --git a/vendor/github.com/json-iterator/go/go.sum b/vendor/github.com/json-iterator/go/go.sum deleted file mode 100644 index d778b5a14..000000000 --- a/vendor/github.com/json-iterator/go/go.sum +++ /dev/null @@ -1,14 +0,0 @@ -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/google/gofuzz v1.0.0 h1:A8PeW59pxE9IoFRqBp37U+mSNaQoZ46F1f0f863XSXw= -github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= -github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 h1:ZqeYNhU3OHLH3mGKHDcjJRFFRrJa6eAM5H+CtDdOsPc= -github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= -github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742 h1:Esafd1046DLDQ0W1YjYsBW+p8U2u7vzgW2SQVmlNazg= -github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q= -github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= diff --git a/vendor/github.com/kr/pretty/go.mod b/vendor/github.com/kr/pretty/go.mod deleted file mode 100644 index 9a27b6e96..000000000 --- a/vendor/github.com/kr/pretty/go.mod +++ /dev/null @@ -1,5 +0,0 @@ -module github.com/kr/pretty - -go 1.12 - -require github.com/kr/text v0.1.0 diff --git a/vendor/github.com/kr/pretty/go.sum b/vendor/github.com/kr/pretty/go.sum deleted file mode 100644 index 714f82a20..000000000 --- a/vendor/github.com/kr/pretty/go.sum +++ /dev/null @@ -1,3 +0,0 @@ -github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= -github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= diff --git a/vendor/github.com/kr/text/go.mod b/vendor/github.com/kr/text/go.mod deleted file mode 100644 index fa0528b9a..000000000 --- a/vendor/github.com/kr/text/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module "github.com/kr/text" - -require "github.com/kr/pty" v1.1.1 diff --git a/vendor/github.com/prometheus/procfs/go.mod b/vendor/github.com/prometheus/procfs/go.mod deleted file mode 100644 index 8a1b839fd..000000000 --- a/vendor/github.com/prometheus/procfs/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module github.com/prometheus/procfs - -require golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 diff --git a/vendor/github.com/prometheus/procfs/go.sum b/vendor/github.com/prometheus/procfs/go.sum deleted file mode 100644 index 7827dd3d5..000000000 --- a/vendor/github.com/prometheus/procfs/go.sum +++ /dev/null @@ -1,2 +0,0 @@ -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 h1:YUO/7uOKsKeq9UokNS62b8FYywz3ker1l1vDZRCRefw= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= diff --git a/vendor/github.com/russross/blackfriday/v2/go.mod b/vendor/github.com/russross/blackfriday/v2/go.mod deleted file mode 100644 index 620b74e0a..000000000 --- a/vendor/github.com/russross/blackfriday/v2/go.mod +++ /dev/null @@ -1 +0,0 @@ -module github.com/russross/blackfriday/v2 diff --git a/vendor/github.com/shurcooL/sanitized_anchor_name/go.mod b/vendor/github.com/shurcooL/sanitized_anchor_name/go.mod deleted file mode 100644 index 1e2553475..000000000 --- a/vendor/github.com/shurcooL/sanitized_anchor_name/go.mod +++ /dev/null @@ -1 +0,0 @@ -module github.com/shurcooL/sanitized_anchor_name diff --git a/vendor/github.com/smartystreets/assertions/go.mod b/vendor/github.com/smartystreets/assertions/go.mod deleted file mode 100644 index 3e0f123cb..000000000 --- a/vendor/github.com/smartystreets/assertions/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module github.com/smartystreets/assertions - -go 1.13 diff --git a/vendor/github.com/spf13/pflag/go.mod b/vendor/github.com/spf13/pflag/go.mod deleted file mode 100644 index b2287eec1..000000000 --- a/vendor/github.com/spf13/pflag/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module github.com/spf13/pflag - -go 1.12 diff --git a/vendor/github.com/spf13/pflag/go.sum b/vendor/github.com/spf13/pflag/go.sum deleted file mode 100644 index e69de29bb..000000000 diff --git a/vendor/github.com/urfave/cli/v2/go.mod b/vendor/github.com/urfave/cli/v2/go.mod deleted file mode 100644 index 113966432..000000000 --- a/vendor/github.com/urfave/cli/v2/go.mod +++ /dev/null @@ -1,9 +0,0 @@ -module github.com/urfave/cli/v2 - -go 1.11 - -require ( - github.com/BurntSushi/toml v0.3.1 - github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d - gopkg.in/yaml.v2 v2.2.3 -) diff --git a/vendor/github.com/urfave/cli/v2/go.sum b/vendor/github.com/urfave/cli/v2/go.sum deleted file mode 100644 index 663ad7276..000000000 --- a/vendor/github.com/urfave/cli/v2/go.sum +++ /dev/null @@ -1,14 +0,0 @@ -github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d h1:U+s90UTSYgptZMwQh2aRr3LuazLJIa+Pg3Kc1ylSYVY= -github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/russross/blackfriday/v2 v2.0.1 h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q= -github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= -github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo= -github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.3 h1:fvjTMHxHEw/mxHbtzPi3JCcKXQRAnQTBRo6YCJSVHKI= -gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= diff --git a/vendor/go.opentelemetry.io/contrib/go.mod b/vendor/go.opentelemetry.io/contrib/go.mod deleted file mode 100644 index e59b22fa9..000000000 --- a/vendor/go.opentelemetry.io/contrib/go.mod +++ /dev/null @@ -1,3 +0,0 @@ -module go.opentelemetry.io/contrib - -go 1.15 diff --git a/vendor/go.opentelemetry.io/contrib/go.sum b/vendor/go.opentelemetry.io/contrib/go.sum deleted file mode 100644 index e69de29bb..000000000 diff --git a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/go.mod b/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/go.mod deleted file mode 100644 index 20cf37bbd..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/go.mod +++ /dev/null @@ -1,83 +0,0 @@ -module go.opentelemetry.io/otel/exporters/otlp/otlpmetric - -go 1.15 - -require ( - github.com/cenkalti/backoff/v4 v4.1.1 - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 - go.opentelemetry.io/otel/metric v0.21.0 - go.opentelemetry.io/otel/sdk v1.0.0-RC1 - go.opentelemetry.io/otel/sdk/export/metric v0.21.0 - go.opentelemetry.io/otel/sdk/metric v0.21.0 - go.opentelemetry.io/proto/otlp v0.9.0 - google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 - google.golang.org/grpc v1.38.0 - google.golang.org/protobuf v1.26.0 -) - -replace go.opentelemetry.io/otel => ../../.. - -replace go.opentelemetry.io/otel/sdk => ../../../sdk - -replace go.opentelemetry.io/otel/exporters/otlp => ../ - -replace go.opentelemetry.io/otel/metric => ../../../metric - -replace go.opentelemetry.io/otel/oteltest => ../../../oteltest - -replace go.opentelemetry.io/otel/trace => ../../../trace - -replace go.opentelemetry.io/otel/sdk/export/metric => ../../../sdk/export/metric - -replace go.opentelemetry.io/otel/sdk/metric => ../../../sdk/metric - -replace go.opentelemetry.io/otel/bridge/opencensus => ../../../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../../../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../../../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../../../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../../../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../../../example/otel-collector - -replace go.opentelemetry.io/otel/example/passthrough => ../../../example/passthrough - -replace go.opentelemetry.io/otel/example/prom-collector => ../../../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../../../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../../../example/zipkin - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../../metric/prometheus - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ./ - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ./otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../otlptrace - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ../otlptrace/otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ../otlptrace/otlptracehttp - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../../trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../../trace/zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../../../internal/tools - -replace go.opentelemetry.io/otel/internal/metric => ../../../internal/metric - -replace go.opentelemetry.io/otel/exporters/jaeger => ../../jaeger - -replace go.opentelemetry.io/otel/exporters/prometheus => ../../prometheus - -replace go.opentelemetry.io/otel/exporters/zipkin => ../../zipkin - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ../../stdout/stdoutmetric - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ../../stdout/stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/go.sum b/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/go.sum deleted file mode 100644 index e9bd67196..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/go.sum +++ /dev/null @@ -1,125 +0,0 @@ -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= -github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8= -github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA= -github.com/cenkalti/backoff/v4 v4.1.1 h1:G2HAfAmvm/GcKan2oOQpBXOd2tT2G57ZnZGWa1PxPBQ= -github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= -github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= -github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= -github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= -github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo= -github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -go.opentelemetry.io/proto/otlp v0.9.0 h1:C0g6TWmQYvjKRnljRULLWUVJGy8Uvu0NEL/5frY2/t4= -go.opentelemetry.io/proto/otlp v0.9.0/go.mod h1:1vKfU9rv61e9EVGthD1zNvUbiwPcimSsOPU9brfSHJg= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20200822124328-c89045814202 h1:VvcQYSHwXgi7W+TpUR6A9g6Up98WAHf3f/ulnJ62IyA= -golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd h1:xhmwyvizuTgC2qz7ZlMluP20uW+C3Rm0FD/WLDX8884= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 h1:+kGHl1aib/qcwaRi1CbqBZ1rk19r85MNUf8HaBghugY= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= -google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0= -google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/grpc v1.38.0 h1:/9BgsAsa5nWe26HqOlvlgJnqBuktYOLCgjCPqsa56W0= -google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= -google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= -google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk= -google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/go.mod b/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/go.mod deleted file mode 100644 index b9937ba93..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/go.mod +++ /dev/null @@ -1,81 +0,0 @@ -module go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc - -go 1.15 - -require ( - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 - go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.21.0 - go.opentelemetry.io/otel/metric v0.21.0 - go.opentelemetry.io/otel/sdk/metric v0.21.0 - go.opentelemetry.io/proto/otlp v0.9.0 - google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 - google.golang.org/grpc v1.38.0 - google.golang.org/protobuf v1.26.0 -) - -replace go.opentelemetry.io/otel => ../../../.. - -replace go.opentelemetry.io/otel/sdk => ../../../../sdk - -replace go.opentelemetry.io/otel/sdk/metric => ../../../../sdk/metric - -replace go.opentelemetry.io/otel/exporters/otlp => ../.. - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../ - -replace go.opentelemetry.io/otel/metric => ../../../../metric - -replace go.opentelemetry.io/otel/oteltest => ../../../../oteltest - -replace go.opentelemetry.io/otel/trace => ../../../../trace - -replace go.opentelemetry.io/otel/bridge/opencensus => ../../../../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../../../../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../../../../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../../../../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../../../../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../../../../example/otel-collector - -replace go.opentelemetry.io/otel/example/passthrough => ../../../../example/passthrough - -replace go.opentelemetry.io/otel/example/prom-collector => ../../../../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../../../../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../../../../example/zipkin - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../../../metric/prometheus - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ./ - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../../otlptrace - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ../../otlptrace/otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ../../otlptrace/otlptracehttp - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../../../trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../../../trace/zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../../../../internal/tools - -replace go.opentelemetry.io/otel/sdk/export/metric => ../../../../sdk/export/metric - -replace go.opentelemetry.io/otel/internal/metric => ../../../../internal/metric - -replace go.opentelemetry.io/otel/exporters/jaeger => ../../../jaeger - -replace go.opentelemetry.io/otel/exporters/prometheus => ../../../prometheus - -replace go.opentelemetry.io/otel/exporters/zipkin => ../../../zipkin - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ../../../stdout/stdoutmetric - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ../../../stdout/stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/go.sum b/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/go.sum deleted file mode 100644 index e9bd67196..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/go.sum +++ /dev/null @@ -1,125 +0,0 @@ -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= -github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8= -github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA= -github.com/cenkalti/backoff/v4 v4.1.1 h1:G2HAfAmvm/GcKan2oOQpBXOd2tT2G57ZnZGWa1PxPBQ= -github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= -github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= -github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= -github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= -github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo= -github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -go.opentelemetry.io/proto/otlp v0.9.0 h1:C0g6TWmQYvjKRnljRULLWUVJGy8Uvu0NEL/5frY2/t4= -go.opentelemetry.io/proto/otlp v0.9.0/go.mod h1:1vKfU9rv61e9EVGthD1zNvUbiwPcimSsOPU9brfSHJg= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20200822124328-c89045814202 h1:VvcQYSHwXgi7W+TpUR6A9g6Up98WAHf3f/ulnJ62IyA= -golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd h1:xhmwyvizuTgC2qz7ZlMluP20uW+C3Rm0FD/WLDX8884= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 h1:+kGHl1aib/qcwaRi1CbqBZ1rk19r85MNUf8HaBghugY= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= -google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0= -google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/grpc v1.38.0 h1:/9BgsAsa5nWe26HqOlvlgJnqBuktYOLCgjCPqsa56W0= -google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= -google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= -google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk= -google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/go.mod b/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/go.mod deleted file mode 100644 index 26f2ef360..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/go.mod +++ /dev/null @@ -1,81 +0,0 @@ -module go.opentelemetry.io/otel/exporters/otlp/otlptrace - -go 1.15 - -require ( - github.com/cenkalti/backoff/v4 v4.1.1 - github.com/google/go-cmp v0.5.6 - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 - go.opentelemetry.io/otel/oteltest v1.0.0-RC1 - go.opentelemetry.io/otel/sdk v1.0.0-RC1 - go.opentelemetry.io/otel/trace v1.0.0-RC1 - go.opentelemetry.io/proto/otlp v0.9.0 - google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 - google.golang.org/grpc v1.38.0 - google.golang.org/protobuf v1.26.0 -) - -replace go.opentelemetry.io/otel => ../../.. - -replace go.opentelemetry.io/otel/sdk => ../../../sdk - -replace go.opentelemetry.io/otel/metric => ../../../metric - -replace go.opentelemetry.io/otel/oteltest => ../../../oteltest - -replace go.opentelemetry.io/otel/trace => ../../../trace - -replace go.opentelemetry.io/otel/bridge/opencensus => ../../../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../../../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../../../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../../../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../../../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../../../example/otel-collector - -replace go.opentelemetry.io/otel/example/prom-collector => ../../../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../../../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../../../example/zipkin - -replace go.opentelemetry.io/otel/exporters/prometheus => ../../prometheus - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ./ - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ./otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/jaeger => ../../jaeger - -replace go.opentelemetry.io/otel/exporters/zipkin => ../../zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../../../internal/tools - -replace go.opentelemetry.io/otel/sdk/export/metric => ../../../sdk/export/metric - -replace go.opentelemetry.io/otel/sdk/metric => ../../../sdk/metric - -replace go.opentelemetry.io/otel/example/passthrough => ../../../example/passthrough - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ./otlptracehttp - -replace go.opentelemetry.io/otel/internal/metric => ../../../internal/metric - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../../metric/prometheus - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../../trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../../trace/zipkin - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../otlpmetric - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ../otlpmetric/otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ../../stdout/stdoutmetric - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ../../stdout/stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/go.sum b/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/go.sum deleted file mode 100644 index a40bb40cf..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/go.sum +++ /dev/null @@ -1,123 +0,0 @@ -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= -github.com/cenkalti/backoff/v4 v4.1.1 h1:G2HAfAmvm/GcKan2oOQpBXOd2tT2G57ZnZGWa1PxPBQ= -github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= -github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= -github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= -github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= -github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo= -github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -go.opentelemetry.io/proto/otlp v0.9.0 h1:C0g6TWmQYvjKRnljRULLWUVJGy8Uvu0NEL/5frY2/t4= -go.opentelemetry.io/proto/otlp v0.9.0/go.mod h1:1vKfU9rv61e9EVGthD1zNvUbiwPcimSsOPU9brfSHJg= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20200822124328-c89045814202 h1:VvcQYSHwXgi7W+TpUR6A9g6Up98WAHf3f/ulnJ62IyA= -golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd h1:xhmwyvizuTgC2qz7ZlMluP20uW+C3Rm0FD/WLDX8884= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 h1:+kGHl1aib/qcwaRi1CbqBZ1rk19r85MNUf8HaBghugY= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= -google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0= -google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/grpc v1.38.0 h1:/9BgsAsa5nWe26HqOlvlgJnqBuktYOLCgjCPqsa56W0= -google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= -google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= -google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk= -google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/go.mod b/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/go.mod deleted file mode 100644 index ed492b735..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/go.mod +++ /dev/null @@ -1,78 +0,0 @@ -module go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc - -go 1.15 - -require ( - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 - go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.0.0-RC1 - go.opentelemetry.io/otel/sdk v1.0.0-RC1 - go.opentelemetry.io/proto/otlp v0.9.0 - google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 - google.golang.org/grpc v1.38.0 - google.golang.org/protobuf v1.26.0 -) - -replace go.opentelemetry.io/otel => ../../../.. - -replace go.opentelemetry.io/otel/sdk => ../../../../sdk - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../ - -replace go.opentelemetry.io/otel/metric => ../../../../metric - -replace go.opentelemetry.io/otel/oteltest => ../../../../oteltest - -replace go.opentelemetry.io/otel/trace => ../../../../trace - -replace go.opentelemetry.io/otel/bridge/opencensus => ../../../../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../../../../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../../../../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../../../../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../../../../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../../../../example/otel-collector - -replace go.opentelemetry.io/otel/example/prom-collector => ../../../../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../../../../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../../../../example/zipkin - -replace go.opentelemetry.io/otel/exporters/prometheus => ../../../prometheus - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ./ - -replace go.opentelemetry.io/otel/exporters/jaeger => ../../../jaeger - -replace go.opentelemetry.io/otel/exporters/zipkin => ../../../zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../../../../internal/tools - -replace go.opentelemetry.io/otel/sdk/export/metric => ../../../../sdk/export/metric - -replace go.opentelemetry.io/otel/sdk/metric => ../../../../sdk/metric - -replace go.opentelemetry.io/otel/example/passthrough => ../../../../example/passthrough - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ../otlptracehttp - -replace go.opentelemetry.io/otel/internal/metric => ../../../../internal/metric - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../../../metric/prometheus - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../../../trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../../../trace/zipkin - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../../otlpmetric - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ../../otlpmetric/otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ../../../stdout/stdoutmetric - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ../../../stdout/stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/go.sum b/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/go.sum deleted file mode 100644 index a40bb40cf..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/go.sum +++ /dev/null @@ -1,123 +0,0 @@ -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= -github.com/cenkalti/backoff/v4 v4.1.1 h1:G2HAfAmvm/GcKan2oOQpBXOd2tT2G57ZnZGWa1PxPBQ= -github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= -github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= -github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= -github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= -github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo= -github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -go.opentelemetry.io/proto/otlp v0.9.0 h1:C0g6TWmQYvjKRnljRULLWUVJGy8Uvu0NEL/5frY2/t4= -go.opentelemetry.io/proto/otlp v0.9.0/go.mod h1:1vKfU9rv61e9EVGthD1zNvUbiwPcimSsOPU9brfSHJg= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20200822124328-c89045814202 h1:VvcQYSHwXgi7W+TpUR6A9g6Up98WAHf3f/ulnJ62IyA= -golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd h1:xhmwyvizuTgC2qz7ZlMluP20uW+C3Rm0FD/WLDX8884= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 h1:+kGHl1aib/qcwaRi1CbqBZ1rk19r85MNUf8HaBghugY= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= -google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0= -google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/grpc v1.38.0 h1:/9BgsAsa5nWe26HqOlvlgJnqBuktYOLCgjCPqsa56W0= -google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= -google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= -google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk= -google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/vendor/go.opentelemetry.io/otel/exporters/stdout/stdoutmetric/go.mod b/vendor/go.opentelemetry.io/otel/exporters/stdout/stdoutmetric/go.mod deleted file mode 100644 index d55f8974a..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/stdout/stdoutmetric/go.mod +++ /dev/null @@ -1,77 +0,0 @@ -module go.opentelemetry.io/otel/exporters/stdout/stdoutmetric - -go 1.15 - -replace ( - go.opentelemetry.io/otel => ../../.. - go.opentelemetry.io/otel/sdk => ../../../sdk -) - -require ( - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 - go.opentelemetry.io/otel/metric v0.21.0 - go.opentelemetry.io/otel/sdk v1.0.0-RC1 - go.opentelemetry.io/otel/sdk/export/metric v0.21.0 - go.opentelemetry.io/otel/sdk/metric v0.21.0 -) - -replace go.opentelemetry.io/otel/bridge/opencensus => ../../../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../../../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../../../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../../../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../../../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../../../example/otel-collector - -replace go.opentelemetry.io/otel/example/prom-collector => ../../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../../../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../../../example/zipkin - -replace go.opentelemetry.io/otel/exporters/prometheus => ../../prometheus - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ./ - -replace go.opentelemetry.io/otel/exporters/jaeger => ../../jaeger - -replace go.opentelemetry.io/otel/exporters/zipkin => ../../zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../../../internal/tools - -replace go.opentelemetry.io/otel/metric => ../../../metric - -replace go.opentelemetry.io/otel/oteltest => ../../../oteltest - -replace go.opentelemetry.io/otel/sdk/export/metric => ../../../sdk/export/metric - -replace go.opentelemetry.io/otel/sdk/metric => ../../../sdk/metric - -replace go.opentelemetry.io/otel/trace => ../../../trace - -replace go.opentelemetry.io/otel/example/passthrough => ../../../example/passthrough - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../../otlp/otlptrace - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ../../otlp/otlptrace/otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ../../otlp/otlptrace/otlptracehttp - -replace go.opentelemetry.io/otel/internal/metric => ../../../internal/metric - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../../metric/prometheus - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../../trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../../trace/zipkin - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../../otlp/otlpmetric - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ../../otlp/otlpmetric/otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ../stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/exporters/stdout/stdoutmetric/go.sum b/vendor/go.opentelemetry.io/otel/exporters/stdout/stdoutmetric/go.sum deleted file mode 100644 index bbe08ebba..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/stdout/stdoutmetric/go.sum +++ /dev/null @@ -1,17 +0,0 @@ -github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8= -github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA= -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/vendor/go.opentelemetry.io/otel/exporters/stdout/stdouttrace/go.mod b/vendor/go.opentelemetry.io/otel/exporters/stdout/stdouttrace/go.mod deleted file mode 100644 index 158d08636..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/stdout/stdouttrace/go.mod +++ /dev/null @@ -1,76 +0,0 @@ -module go.opentelemetry.io/otel/exporters/stdout/stdouttrace - -go 1.15 - -replace ( - go.opentelemetry.io/otel => ../../.. - go.opentelemetry.io/otel/sdk => ../../../sdk -) - -require ( - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 - go.opentelemetry.io/otel/oteltest v1.0.0-RC1 - go.opentelemetry.io/otel/sdk v1.0.0-RC1 - go.opentelemetry.io/otel/trace v1.0.0-RC1 -) - -replace go.opentelemetry.io/otel/bridge/opencensus => ../../../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../../../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../../../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../../../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../../../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../../../example/otel-collector - -replace go.opentelemetry.io/otel/example/prom-collector => ../../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../../../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../../../example/zipkin - -replace go.opentelemetry.io/otel/exporters/prometheus => ../../prometheus - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ./ - -replace go.opentelemetry.io/otel/exporters/jaeger => ../../jaeger - -replace go.opentelemetry.io/otel/exporters/zipkin => ../../zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../../../internal/tools - -replace go.opentelemetry.io/otel/metric => ../../../metric - -replace go.opentelemetry.io/otel/oteltest => ../../../oteltest - -replace go.opentelemetry.io/otel/sdk/export/metric => ../../../sdk/export/metric - -replace go.opentelemetry.io/otel/sdk/metric => ../../../sdk/metric - -replace go.opentelemetry.io/otel/trace => ../../../trace - -replace go.opentelemetry.io/otel/example/passthrough => ../../../example/passthrough - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../../otlp/otlptrace - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ../../otlp/otlptrace/otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ../../otlp/otlptrace/otlptracehttp - -replace go.opentelemetry.io/otel/internal/metric => ../../../internal/metric - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../../metric/prometheus - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../../trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../../trace/zipkin - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../../otlp/otlpmetric - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ../../otlp/otlpmetric/otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ../stdoutmetric diff --git a/vendor/go.opentelemetry.io/otel/exporters/stdout/stdouttrace/go.sum b/vendor/go.opentelemetry.io/otel/exporters/stdout/stdouttrace/go.sum deleted file mode 100644 index f212493d5..000000000 --- a/vendor/go.opentelemetry.io/otel/exporters/stdout/stdouttrace/go.sum +++ /dev/null @@ -1,15 +0,0 @@ -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/vendor/go.opentelemetry.io/otel/go.mod b/vendor/go.opentelemetry.io/otel/go.mod deleted file mode 100644 index bcf7a924e..000000000 --- a/vendor/go.opentelemetry.io/otel/go.mod +++ /dev/null @@ -1,74 +0,0 @@ -module go.opentelemetry.io/otel - -go 1.15 - -require ( - github.com/google/go-cmp v0.5.6 - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel/oteltest v1.0.0-RC1 - go.opentelemetry.io/otel/trace v1.0.0-RC1 -) - -replace go.opentelemetry.io/otel => ./ - -replace go.opentelemetry.io/otel/bridge/opencensus => ./bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ./bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ./example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ./example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ./example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ./example/otel-collector - -replace go.opentelemetry.io/otel/example/prom-collector => ./example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ./example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ./example/zipkin - -replace go.opentelemetry.io/otel/exporters/prometheus => ./exporters/prometheus - -replace go.opentelemetry.io/otel/exporters/jaeger => ./exporters/jaeger - -replace go.opentelemetry.io/otel/exporters/zipkin => ./exporters/zipkin - -replace go.opentelemetry.io/otel/internal/tools => ./internal/tools - -replace go.opentelemetry.io/otel/sdk => ./sdk - -replace go.opentelemetry.io/otel/internal/metric => ./internal/metric - -replace go.opentelemetry.io/otel/metric => ./metric - -replace go.opentelemetry.io/otel/oteltest => ./oteltest - -replace go.opentelemetry.io/otel/sdk/export/metric => ./sdk/export/metric - -replace go.opentelemetry.io/otel/sdk/metric => ./sdk/metric - -replace go.opentelemetry.io/otel/trace => ./trace - -replace go.opentelemetry.io/otel/example/passthrough => ./example/passthrough - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ./exporters/otlp/otlptrace - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ./exporters/otlp/otlptrace/otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ./exporters/otlp/otlptrace/otlptracehttp - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ./exporters/metric/prometheus - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ./exporters/trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ./exporters/trace/zipkin - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ./exporters/otlp/otlpmetric - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ./exporters/otlp/otlpmetric/otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ./exporters/stdout/stdoutmetric - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ./exporters/stdout/stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/go.sum b/vendor/go.opentelemetry.io/otel/go.sum deleted file mode 100644 index f212493d5..000000000 --- a/vendor/go.opentelemetry.io/otel/go.sum +++ /dev/null @@ -1,15 +0,0 @@ -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/vendor/go.opentelemetry.io/otel/internal/metric/go.mod b/vendor/go.opentelemetry.io/otel/internal/metric/go.mod deleted file mode 100644 index fc27d30b7..000000000 --- a/vendor/go.opentelemetry.io/otel/internal/metric/go.mod +++ /dev/null @@ -1,73 +0,0 @@ -module go.opentelemetry.io/otel/internal/metric - -go 1.15 - -require ( - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 - go.opentelemetry.io/otel/metric v0.21.0 -) - -replace go.opentelemetry.io/otel => ../.. - -replace go.opentelemetry.io/otel/metric => ../../metric - -replace go.opentelemetry.io/otel/internal/metric => ./ - -replace go.opentelemetry.io/otel/bridge/opencensus => ../../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../../example/otel-collector - -replace go.opentelemetry.io/otel/example/passthrough => ../../example/passthrough - -replace go.opentelemetry.io/otel/example/prom-collector => ../../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../../example/zipkin - -replace go.opentelemetry.io/otel/exporters/prometheus => ../../exporters/prometheus - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../../exporters/otlp/otlptrace - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ../../exporters/otlp/otlptrace/otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ../../exporters/otlp/otlptrace/otlptracehttp - -replace go.opentelemetry.io/otel/exporters/jaeger => ../../exporters/jaeger - -replace go.opentelemetry.io/otel/exporters/zipkin => ../../exporters/zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../tools - -replace go.opentelemetry.io/otel/oteltest => ../../oteltest - -replace go.opentelemetry.io/otel/sdk => ../../sdk - -replace go.opentelemetry.io/otel/sdk/export/metric => ../../sdk/export/metric - -replace go.opentelemetry.io/otel/sdk/metric => ../../sdk/metric - -replace go.opentelemetry.io/otel/trace => ../../trace - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../../exporters/metric/prometheus - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../../exporters/trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../../exporters/trace/zipkin - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../../exporters/otlp/otlpmetric - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ../../exporters/otlp/otlpmetric/otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ../../exporters/stdout/stdoutmetric - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ../../exporters/stdout/stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/internal/metric/go.sum b/vendor/go.opentelemetry.io/otel/internal/metric/go.sum deleted file mode 100644 index f212493d5..000000000 --- a/vendor/go.opentelemetry.io/otel/internal/metric/go.sum +++ /dev/null @@ -1,15 +0,0 @@ -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/vendor/go.opentelemetry.io/otel/metric/go.mod b/vendor/go.opentelemetry.io/otel/metric/go.mod deleted file mode 100644 index 153ac82d4..000000000 --- a/vendor/go.opentelemetry.io/otel/metric/go.mod +++ /dev/null @@ -1,74 +0,0 @@ -module go.opentelemetry.io/otel/metric - -go 1.15 - -replace go.opentelemetry.io/otel => ../ - -replace go.opentelemetry.io/otel/bridge/opencensus => ../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../example/otel-collector - -replace go.opentelemetry.io/otel/example/prom-collector => ../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../example/zipkin - -replace go.opentelemetry.io/otel/exporters/prometheus => ../exporters/prometheus - -replace go.opentelemetry.io/otel/exporters/jaeger => ../exporters/jaeger - -replace go.opentelemetry.io/otel/exporters/zipkin => ../exporters/zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../internal/tools - -replace go.opentelemetry.io/otel/metric => ./ - -replace go.opentelemetry.io/otel/oteltest => ../oteltest - -replace go.opentelemetry.io/otel/sdk => ../sdk - -replace go.opentelemetry.io/otel/sdk/export/metric => ../sdk/export/metric - -replace go.opentelemetry.io/otel/sdk/metric => ../sdk/metric - -replace go.opentelemetry.io/otel/trace => ../trace - -require ( - github.com/google/go-cmp v0.5.6 - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 - go.opentelemetry.io/otel/internal/metric v0.21.0 -) - -replace go.opentelemetry.io/otel/example/passthrough => ../example/passthrough - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../exporters/otlp/otlptrace - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ../exporters/otlp/otlptrace/otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ../exporters/otlp/otlptrace/otlptracehttp - -replace go.opentelemetry.io/otel/internal/metric => ../internal/metric - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../exporters/metric/prometheus - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../exporters/trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../exporters/trace/zipkin - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../exporters/otlp/otlpmetric - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ../exporters/otlp/otlpmetric/otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ../exporters/stdout/stdoutmetric - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ../exporters/stdout/stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/metric/go.sum b/vendor/go.opentelemetry.io/otel/metric/go.sum deleted file mode 100644 index f212493d5..000000000 --- a/vendor/go.opentelemetry.io/otel/metric/go.sum +++ /dev/null @@ -1,15 +0,0 @@ -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/vendor/go.opentelemetry.io/otel/sdk/export/metric/go.mod b/vendor/go.opentelemetry.io/otel/sdk/export/metric/go.mod deleted file mode 100644 index 7a2ca752c..000000000 --- a/vendor/go.opentelemetry.io/otel/sdk/export/metric/go.mod +++ /dev/null @@ -1,74 +0,0 @@ -module go.opentelemetry.io/otel/sdk/export/metric - -go 1.15 - -replace go.opentelemetry.io/otel => ../../.. - -replace go.opentelemetry.io/otel/bridge/opencensus => ../../../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../../../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../../../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../../../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../../../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../../../example/otel-collector - -replace go.opentelemetry.io/otel/example/prom-collector => ../../../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../../../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../../../example/zipkin - -replace go.opentelemetry.io/otel/exporters/prometheus => ../../../exporters/prometheus - -replace go.opentelemetry.io/otel/exporters/jaeger => ../../../exporters/jaeger - -replace go.opentelemetry.io/otel/exporters/zipkin => ../../../exporters/zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../../../internal/tools - -replace go.opentelemetry.io/otel/metric => ../../../metric - -replace go.opentelemetry.io/otel/oteltest => ../../../oteltest - -replace go.opentelemetry.io/otel/sdk => ../.. - -replace go.opentelemetry.io/otel/sdk/export/metric => ./ - -replace go.opentelemetry.io/otel/sdk/metric => ../../metric - -replace go.opentelemetry.io/otel/trace => ../../../trace - -require ( - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 - go.opentelemetry.io/otel/metric v0.21.0 - go.opentelemetry.io/otel/sdk v1.0.0-RC1 -) - -replace go.opentelemetry.io/otel/example/passthrough => ../../../example/passthrough - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../../../exporters/otlp/otlptrace - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ../../../exporters/otlp/otlptrace/otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ../../../exporters/otlp/otlptrace/otlptracehttp - -replace go.opentelemetry.io/otel/internal/metric => ../../../internal/metric - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../../../exporters/metric/prometheus - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../../../exporters/trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../../../exporters/trace/zipkin - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../../../exporters/otlp/otlpmetric - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ../../../exporters/otlp/otlpmetric/otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ../../../exporters/stdout/stdoutmetric - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ../../../exporters/stdout/stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/sdk/export/metric/go.sum b/vendor/go.opentelemetry.io/otel/sdk/export/metric/go.sum deleted file mode 100644 index f212493d5..000000000 --- a/vendor/go.opentelemetry.io/otel/sdk/export/metric/go.sum +++ /dev/null @@ -1,15 +0,0 @@ -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/vendor/go.opentelemetry.io/otel/sdk/metric/go.mod b/vendor/go.opentelemetry.io/otel/sdk/metric/go.mod deleted file mode 100644 index 1fb1cddc6..000000000 --- a/vendor/go.opentelemetry.io/otel/sdk/metric/go.mod +++ /dev/null @@ -1,77 +0,0 @@ -module go.opentelemetry.io/otel/sdk/metric - -go 1.15 - -replace go.opentelemetry.io/otel => ../.. - -replace go.opentelemetry.io/otel/bridge/opencensus => ../../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../../example/otel-collector - -replace go.opentelemetry.io/otel/example/prom-collector => ../../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../../example/zipkin - -replace go.opentelemetry.io/otel/exporters/prometheus => ../../exporters/prometheus - -replace go.opentelemetry.io/otel/exporters/jaeger => ../../exporters/jaeger - -replace go.opentelemetry.io/otel/exporters/zipkin => ../../exporters/zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../../internal/tools - -replace go.opentelemetry.io/otel/metric => ../../metric - -replace go.opentelemetry.io/otel/oteltest => ../../oteltest - -replace go.opentelemetry.io/otel/sdk => ../ - -replace go.opentelemetry.io/otel/sdk/export/metric => ../export/metric - -replace go.opentelemetry.io/otel/sdk/metric => ./ - -replace go.opentelemetry.io/otel/trace => ../../trace - -require ( - github.com/benbjohnson/clock v1.1.0 // do not upgrade to v1.1.x because it would require Go >= 1.15 - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 - go.opentelemetry.io/otel/internal/metric v0.21.0 - go.opentelemetry.io/otel/metric v0.21.0 - go.opentelemetry.io/otel/sdk v1.0.0-RC1 - go.opentelemetry.io/otel/sdk/export/metric v0.21.0 -) - -replace go.opentelemetry.io/otel/example/passthrough => ../../example/passthrough - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../../exporters/otlp/otlptrace - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ../../exporters/otlp/otlptrace/otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ../../exporters/otlp/otlptrace/otlptracehttp - -replace go.opentelemetry.io/otel/internal/metric => ../../internal/metric - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../../exporters/metric/prometheus - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../../exporters/trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../../exporters/trace/zipkin - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../../exporters/otlp/otlpmetric - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ../../exporters/otlp/otlpmetric/otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ../../exporters/stdout/stdoutmetric - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ../../exporters/stdout/stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/sdk/metric/go.sum b/vendor/go.opentelemetry.io/otel/sdk/metric/go.sum deleted file mode 100644 index bbe08ebba..000000000 --- a/vendor/go.opentelemetry.io/otel/sdk/metric/go.sum +++ /dev/null @@ -1,17 +0,0 @@ -github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8= -github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA= -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/vendor/go.opentelemetry.io/otel/trace/go.mod b/vendor/go.opentelemetry.io/otel/trace/go.mod deleted file mode 100644 index 49bea6d3b..000000000 --- a/vendor/go.opentelemetry.io/otel/trace/go.mod +++ /dev/null @@ -1,73 +0,0 @@ -module go.opentelemetry.io/otel/trace - -go 1.15 - -replace go.opentelemetry.io/otel => ../ - -replace go.opentelemetry.io/otel/bridge/opencensus => ../bridge/opencensus - -replace go.opentelemetry.io/otel/bridge/opentracing => ../bridge/opentracing - -replace go.opentelemetry.io/otel/example/jaeger => ../example/jaeger - -replace go.opentelemetry.io/otel/example/namedtracer => ../example/namedtracer - -replace go.opentelemetry.io/otel/example/opencensus => ../example/opencensus - -replace go.opentelemetry.io/otel/example/otel-collector => ../example/otel-collector - -replace go.opentelemetry.io/otel/example/prom-collector => ../example/prom-collector - -replace go.opentelemetry.io/otel/example/prometheus => ../example/prometheus - -replace go.opentelemetry.io/otel/example/zipkin => ../example/zipkin - -replace go.opentelemetry.io/otel/exporters/prometheus => ../exporters/prometheus - -replace go.opentelemetry.io/otel/exporters/jaeger => ../exporters/jaeger - -replace go.opentelemetry.io/otel/exporters/zipkin => ../exporters/zipkin - -replace go.opentelemetry.io/otel/internal/tools => ../internal/tools - -replace go.opentelemetry.io/otel/metric => ../metric - -replace go.opentelemetry.io/otel/oteltest => ../oteltest - -replace go.opentelemetry.io/otel/sdk => ../sdk - -replace go.opentelemetry.io/otel/sdk/export/metric => ../sdk/export/metric - -replace go.opentelemetry.io/otel/sdk/metric => ../sdk/metric - -replace go.opentelemetry.io/otel/trace => ./ - -require ( - github.com/google/go-cmp v0.5.6 - github.com/stretchr/testify v1.7.0 - go.opentelemetry.io/otel v1.0.0-RC1 -) - -replace go.opentelemetry.io/otel/example/passthrough => ../example/passthrough - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../exporters/otlp/otlptrace - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ../exporters/otlp/otlptrace/otlptracegrpc - -replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => ../exporters/otlp/otlptrace/otlptracehttp - -replace go.opentelemetry.io/otel/internal/metric => ../internal/metric - -replace go.opentelemetry.io/otel/exporters/metric/prometheus => ../exporters/metric/prometheus - -replace go.opentelemetry.io/otel/exporters/trace/jaeger => ../exporters/trace/jaeger - -replace go.opentelemetry.io/otel/exporters/trace/zipkin => ../exporters/trace/zipkin - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../exporters/otlp/otlpmetric - -replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ../exporters/otlp/otlpmetric/otlpmetricgrpc - -replace go.opentelemetry.io/otel/exporters/stdout/stdoutmetric => ../exporters/stdout/stdoutmetric - -replace go.opentelemetry.io/otel/exporters/stdout/stdouttrace => ../exporters/stdout/stdouttrace diff --git a/vendor/go.opentelemetry.io/otel/trace/go.sum b/vendor/go.opentelemetry.io/otel/trace/go.sum deleted file mode 100644 index f212493d5..000000000 --- a/vendor/go.opentelemetry.io/otel/trace/go.sum +++ /dev/null @@ -1,15 +0,0 @@ -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/vendor/go.uber.org/atomic/go.mod b/vendor/go.uber.org/atomic/go.mod deleted file mode 100644 index daa7599fe..000000000 --- a/vendor/go.uber.org/atomic/go.mod +++ /dev/null @@ -1,8 +0,0 @@ -module go.uber.org/atomic - -require ( - github.com/davecgh/go-spew v1.1.1 // indirect - github.com/stretchr/testify v1.3.0 -) - -go 1.13 diff --git a/vendor/go.uber.org/atomic/go.sum b/vendor/go.uber.org/atomic/go.sum deleted file mode 100644 index 4f76e62c1..000000000 --- a/vendor/go.uber.org/atomic/go.sum +++ /dev/null @@ -1,9 +0,0 @@ -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q= -github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= diff --git a/vendor/go.uber.org/automaxprocs/go.mod b/vendor/go.uber.org/automaxprocs/go.mod deleted file mode 100644 index 58c5eb34b..000000000 --- a/vendor/go.uber.org/automaxprocs/go.mod +++ /dev/null @@ -1,9 +0,0 @@ -module go.uber.org/automaxprocs - -go 1.13 - -require ( - github.com/kr/pretty v0.1.0 // indirect - github.com/stretchr/testify v1.4.0 - gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect -) diff --git a/vendor/go.uber.org/automaxprocs/go.sum b/vendor/go.uber.org/automaxprocs/go.sum deleted file mode 100644 index d7572c5b0..000000000 --- a/vendor/go.uber.org/automaxprocs/go.sum +++ /dev/null @@ -1,18 +0,0 @@ -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= -github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= -github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk= -github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= -gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= diff --git a/vendor/go.uber.org/goleak/go.mod b/vendor/go.uber.org/goleak/go.mod deleted file mode 100644 index 742547abd..000000000 --- a/vendor/go.uber.org/goleak/go.mod +++ /dev/null @@ -1,11 +0,0 @@ -module go.uber.org/goleak - -go 1.13 - -require ( - github.com/kr/pretty v0.1.0 // indirect - github.com/stretchr/testify v1.4.0 - golang.org/x/lint v0.0.0-20190930215403-16217165b5de - golang.org/x/tools v0.0.0-20191108193012-7d206e10da11 // indirect - gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect -) diff --git a/vendor/go.uber.org/goleak/go.sum b/vendor/go.uber.org/goleak/go.sum deleted file mode 100644 index 09b27d7ee..000000000 --- a/vendor/go.uber.org/goleak/go.sum +++ /dev/null @@ -1,30 +0,0 @@ -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= -github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= -github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk= -github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs= -golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20191108193012-7d206e10da11 h1:Yq9t9jnGoR+dBuitxdo9l6Q7xh/zOyNnYUtDKaQ3x0E= -golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= -gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= diff --git a/vendor/go.uber.org/multierr/go.mod b/vendor/go.uber.org/multierr/go.mod deleted file mode 100644 index 398d6c99e..000000000 --- a/vendor/go.uber.org/multierr/go.mod +++ /dev/null @@ -1,9 +0,0 @@ -module go.uber.org/multierr - -go 1.14 - -require ( - github.com/stretchr/testify v1.7.0 - go.uber.org/atomic v1.7.0 - gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect -) diff --git a/vendor/go.uber.org/multierr/go.sum b/vendor/go.uber.org/multierr/go.sum deleted file mode 100644 index 75edd735e..000000000 --- a/vendor/go.uber.org/multierr/go.sum +++ /dev/null @@ -1,16 +0,0 @@ -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw= -go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo= -gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/vendor/go.uber.org/zap/go.mod b/vendor/go.uber.org/zap/go.mod deleted file mode 100644 index 6ef4db70e..000000000 --- a/vendor/go.uber.org/zap/go.mod +++ /dev/null @@ -1,13 +0,0 @@ -module go.uber.org/zap - -go 1.13 - -require ( - github.com/pkg/errors v0.8.1 - github.com/stretchr/testify v1.4.0 - go.uber.org/atomic v1.6.0 - go.uber.org/multierr v1.5.0 - golang.org/x/lint v0.0.0-20190930215403-16217165b5de - gopkg.in/yaml.v2 v2.2.2 - honnef.co/go/tools v0.0.1-2019.2.3 -) diff --git a/vendor/go.uber.org/zap/go.sum b/vendor/go.uber.org/zap/go.sum deleted file mode 100644 index 99cdb93ea..000000000 --- a/vendor/go.uber.org/zap/go.sum +++ /dev/null @@ -1,56 +0,0 @@ -github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= -github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= -github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= -github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= -github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= -github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I= -github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= -github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk= -github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= -go.uber.org/atomic v1.6.0 h1:Ezj3JGmsOnG1MoRWQkPBsKLe9DwWD9QeXzTRzzldNVk= -go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ= -go.uber.org/multierr v1.5.0 h1:KCa4XfM8CWFCpxXRGok+Q0SS/0XBhMDbHHGABQLvD2A= -go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU= -go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee h1:0mgffUl7nfd+FpvXMVz4IDEaUSmT1ysygQC7qYo7sG4= -go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs= -golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= -golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c h1:IGkKhmfzcztjm6gYkykvu/NiS8kaqbCWAEWWAyf8J5U= -golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5 h1:hKsoRgsbwY1NafxrwTs+k64bikrLBkAgPir1TNCj3Zs= -golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= -gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= -gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM= -honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= diff --git a/vendor/golang.org/x/lint/go.mod b/vendor/golang.org/x/lint/go.mod deleted file mode 100644 index 44179f3a4..000000000 --- a/vendor/golang.org/x/lint/go.mod +++ /dev/null @@ -1,5 +0,0 @@ -module golang.org/x/lint - -go 1.11 - -require golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f diff --git a/vendor/golang.org/x/lint/go.sum b/vendor/golang.org/x/lint/go.sum deleted file mode 100644 index 539c98a94..000000000 --- a/vendor/golang.org/x/lint/go.sum +++ /dev/null @@ -1,8 +0,0 @@ -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f h1:kDxGY2VmgABOe55qheT/TFqUMtcTHnomIPS1iv3G4Ms= -golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= diff --git a/vendor/golang.org/x/mod/LICENSE b/vendor/golang.org/x/mod/LICENSE new file mode 100644 index 000000000..6a66aea5e --- /dev/null +++ b/vendor/golang.org/x/mod/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/golang.org/x/mod/PATENTS b/vendor/golang.org/x/mod/PATENTS new file mode 100644 index 000000000..733099041 --- /dev/null +++ b/vendor/golang.org/x/mod/PATENTS @@ -0,0 +1,22 @@ +Additional IP Rights Grant (Patents) + +"This implementation" means the copyrightable works distributed by +Google as part of the Go project. + +Google hereby grants to You a perpetual, worldwide, non-exclusive, +no-charge, royalty-free, irrevocable (except as stated in this section) +patent license to make, have made, use, offer to sell, sell, import, +transfer and otherwise run, modify and propagate the contents of this +implementation of Go, where such license applies only to those patent +claims, both currently owned or controlled by Google and acquired in +the future, licensable by Google that are necessarily infringed by this +implementation of Go. This grant does not include claims that would be +infringed only as a consequence of further modification of this +implementation. If you or your agent or exclusive licensee institute or +order or agree to the institution of patent litigation against any +entity (including a cross-claim or counterclaim in a lawsuit) alleging +that this implementation of Go or any code incorporated within this +implementation of Go constitutes direct or contributory patent +infringement, or inducement of patent infringement, then any patent +rights granted to you under this License for this implementation of Go +shall terminate as of the date such litigation is filed. diff --git a/vendor/golang.org/x/mod/module/module.go b/vendor/golang.org/x/mod/module/module.go new file mode 100644 index 000000000..6cd37280a --- /dev/null +++ b/vendor/golang.org/x/mod/module/module.go @@ -0,0 +1,718 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package module defines the module.Version type along with support code. +// +// The module.Version type is a simple Path, Version pair: +// +// type Version struct { +// Path string +// Version string +// } +// +// There are no restrictions imposed directly by use of this structure, +// but additional checking functions, most notably Check, verify that +// a particular path, version pair is valid. +// +// Escaped Paths +// +// Module paths appear as substrings of file system paths +// (in the download cache) and of web server URLs in the proxy protocol. +// In general we cannot rely on file systems to be case-sensitive, +// nor can we rely on web servers, since they read from file systems. +// That is, we cannot rely on the file system to keep rsc.io/QUOTE +// and rsc.io/quote separate. Windows and macOS don't. +// Instead, we must never require two different casings of a file path. +// Because we want the download cache to match the proxy protocol, +// and because we want the proxy protocol to be possible to serve +// from a tree of static files (which might be stored on a case-insensitive +// file system), the proxy protocol must never require two different casings +// of a URL path either. +// +// One possibility would be to make the escaped form be the lowercase +// hexadecimal encoding of the actual path bytes. This would avoid ever +// needing different casings of a file path, but it would be fairly illegible +// to most programmers when those paths appeared in the file system +// (including in file paths in compiler errors and stack traces) +// in web server logs, and so on. Instead, we want a safe escaped form that +// leaves most paths unaltered. +// +// The safe escaped form is to replace every uppercase letter +// with an exclamation mark followed by the letter's lowercase equivalent. +// +// For example, +// +// github.com/Azure/azure-sdk-for-go -> github.com/!azure/azure-sdk-for-go. +// github.com/GoogleCloudPlatform/cloudsql-proxy -> github.com/!google!cloud!platform/cloudsql-proxy +// github.com/Sirupsen/logrus -> github.com/!sirupsen/logrus. +// +// Import paths that avoid upper-case letters are left unchanged. +// Note that because import paths are ASCII-only and avoid various +// problematic punctuation (like : < and >), the escaped form is also ASCII-only +// and avoids the same problematic punctuation. +// +// Import paths have never allowed exclamation marks, so there is no +// need to define how to escape a literal !. +// +// Unicode Restrictions +// +// Today, paths are disallowed from using Unicode. +// +// Although paths are currently disallowed from using Unicode, +// we would like at some point to allow Unicode letters as well, to assume that +// file systems and URLs are Unicode-safe (storing UTF-8), and apply +// the !-for-uppercase convention for escaping them in the file system. +// But there are at least two subtle considerations. +// +// First, note that not all case-fold equivalent distinct runes +// form an upper/lower pair. +// For example, U+004B ('K'), U+006B ('k'), and U+212A ('K' for Kelvin) +// are three distinct runes that case-fold to each other. +// When we do add Unicode letters, we must not assume that upper/lower +// are the only case-equivalent pairs. +// Perhaps the Kelvin symbol would be disallowed entirely, for example. +// Or perhaps it would escape as "!!k", or perhaps as "(212A)". +// +// Second, it would be nice to allow Unicode marks as well as letters, +// but marks include combining marks, and then we must deal not +// only with case folding but also normalization: both U+00E9 ('é') +// and U+0065 U+0301 ('e' followed by combining acute accent) +// look the same on the page and are treated by some file systems +// as the same path. If we do allow Unicode marks in paths, there +// must be some kind of normalization to allow only one canonical +// encoding of any character used in an import path. +package module + +// IMPORTANT NOTE +// +// This file essentially defines the set of valid import paths for the go command. +// There are many subtle considerations, including Unicode ambiguity, +// security, network, and file system representations. +// +// This file also defines the set of valid module path and version combinations, +// another topic with many subtle considerations. +// +// Changes to the semantics in this file require approval from rsc. + +import ( + "fmt" + "sort" + "strings" + "unicode" + "unicode/utf8" + + "golang.org/x/mod/semver" + errors "golang.org/x/xerrors" +) + +// A Version (for clients, a module.Version) is defined by a module path and version pair. +// These are stored in their plain (unescaped) form. +type Version struct { + // Path is a module path, like "golang.org/x/text" or "rsc.io/quote/v2". + Path string + + // Version is usually a semantic version in canonical form. + // There are three exceptions to this general rule. + // First, the top-level target of a build has no specific version + // and uses Version = "". + // Second, during MVS calculations the version "none" is used + // to represent the decision to take no version of a given module. + // Third, filesystem paths found in "replace" directives are + // represented by a path with an empty version. + Version string `json:",omitempty"` +} + +// String returns a representation of the Version suitable for logging +// (Path@Version, or just Path if Version is empty). +func (m Version) String() string { + if m.Version == "" { + return m.Path + } + return m.Path + "@" + m.Version +} + +// A ModuleError indicates an error specific to a module. +type ModuleError struct { + Path string + Version string + Err error +} + +// VersionError returns a ModuleError derived from a Version and error, +// or err itself if it is already such an error. +func VersionError(v Version, err error) error { + var mErr *ModuleError + if errors.As(err, &mErr) && mErr.Path == v.Path && mErr.Version == v.Version { + return err + } + return &ModuleError{ + Path: v.Path, + Version: v.Version, + Err: err, + } +} + +func (e *ModuleError) Error() string { + if v, ok := e.Err.(*InvalidVersionError); ok { + return fmt.Sprintf("%s@%s: invalid %s: %v", e.Path, v.Version, v.noun(), v.Err) + } + if e.Version != "" { + return fmt.Sprintf("%s@%s: %v", e.Path, e.Version, e.Err) + } + return fmt.Sprintf("module %s: %v", e.Path, e.Err) +} + +func (e *ModuleError) Unwrap() error { return e.Err } + +// An InvalidVersionError indicates an error specific to a version, with the +// module path unknown or specified externally. +// +// A ModuleError may wrap an InvalidVersionError, but an InvalidVersionError +// must not wrap a ModuleError. +type InvalidVersionError struct { + Version string + Pseudo bool + Err error +} + +// noun returns either "version" or "pseudo-version", depending on whether +// e.Version is a pseudo-version. +func (e *InvalidVersionError) noun() string { + if e.Pseudo { + return "pseudo-version" + } + return "version" +} + +func (e *InvalidVersionError) Error() string { + return fmt.Sprintf("%s %q invalid: %s", e.noun(), e.Version, e.Err) +} + +func (e *InvalidVersionError) Unwrap() error { return e.Err } + +// Check checks that a given module path, version pair is valid. +// In addition to the path being a valid module path +// and the version being a valid semantic version, +// the two must correspond. +// For example, the path "yaml/v2" only corresponds to +// semantic versions beginning with "v2.". +func Check(path, version string) error { + if err := CheckPath(path); err != nil { + return err + } + if !semver.IsValid(version) { + return &ModuleError{ + Path: path, + Err: &InvalidVersionError{Version: version, Err: errors.New("not a semantic version")}, + } + } + _, pathMajor, _ := SplitPathVersion(path) + if err := CheckPathMajor(version, pathMajor); err != nil { + return &ModuleError{Path: path, Err: err} + } + return nil +} + +// firstPathOK reports whether r can appear in the first element of a module path. +// The first element of the path must be an LDH domain name, at least for now. +// To avoid case ambiguity, the domain name must be entirely lower case. +func firstPathOK(r rune) bool { + return r == '-' || r == '.' || + '0' <= r && r <= '9' || + 'a' <= r && r <= 'z' +} + +// pathOK reports whether r can appear in an import path element. +// Paths can be ASCII letters, ASCII digits, and limited ASCII punctuation: + - . _ and ~. +// This matches what "go get" has historically recognized in import paths. +// TODO(rsc): We would like to allow Unicode letters, but that requires additional +// care in the safe encoding (see "escaped paths" above). +func pathOK(r rune) bool { + if r < utf8.RuneSelf { + return r == '+' || r == '-' || r == '.' || r == '_' || r == '~' || + '0' <= r && r <= '9' || + 'A' <= r && r <= 'Z' || + 'a' <= r && r <= 'z' + } + return false +} + +// fileNameOK reports whether r can appear in a file name. +// For now we allow all Unicode letters but otherwise limit to pathOK plus a few more punctuation characters. +// If we expand the set of allowed characters here, we have to +// work harder at detecting potential case-folding and normalization collisions. +// See note about "escaped paths" above. +func fileNameOK(r rune) bool { + if r < utf8.RuneSelf { + // Entire set of ASCII punctuation, from which we remove characters: + // ! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~ + // We disallow some shell special characters: " ' * < > ? ` | + // (Note that some of those are disallowed by the Windows file system as well.) + // We also disallow path separators / : and \ (fileNameOK is only called on path element characters). + // We allow spaces (U+0020) in file names. + const allowed = "!#$%&()+,-.=@[]^_{}~ " + if '0' <= r && r <= '9' || 'A' <= r && r <= 'Z' || 'a' <= r && r <= 'z' { + return true + } + for i := 0; i < len(allowed); i++ { + if rune(allowed[i]) == r { + return true + } + } + return false + } + // It may be OK to add more ASCII punctuation here, but only carefully. + // For example Windows disallows < > \, and macOS disallows :, so we must not allow those. + return unicode.IsLetter(r) +} + +// CheckPath checks that a module path is valid. +// A valid module path is a valid import path, as checked by CheckImportPath, +// with two additional constraints. +// First, the leading path element (up to the first slash, if any), +// by convention a domain name, must contain only lower-case ASCII letters, +// ASCII digits, dots (U+002E), and dashes (U+002D); +// it must contain at least one dot and cannot start with a dash. +// Second, for a final path element of the form /vN, where N looks numeric +// (ASCII digits and dots) must not begin with a leading zero, must not be /v1, +// and must not contain any dots. For paths beginning with "gopkg.in/", +// this second requirement is replaced by a requirement that the path +// follow the gopkg.in server's conventions. +func CheckPath(path string) error { + if err := checkPath(path, false); err != nil { + return fmt.Errorf("malformed module path %q: %v", path, err) + } + i := strings.Index(path, "/") + if i < 0 { + i = len(path) + } + if i == 0 { + return fmt.Errorf("malformed module path %q: leading slash", path) + } + if !strings.Contains(path[:i], ".") { + return fmt.Errorf("malformed module path %q: missing dot in first path element", path) + } + if path[0] == '-' { + return fmt.Errorf("malformed module path %q: leading dash in first path element", path) + } + for _, r := range path[:i] { + if !firstPathOK(r) { + return fmt.Errorf("malformed module path %q: invalid char %q in first path element", path, r) + } + } + if _, _, ok := SplitPathVersion(path); !ok { + return fmt.Errorf("malformed module path %q: invalid version", path) + } + return nil +} + +// CheckImportPath checks that an import path is valid. +// +// A valid import path consists of one or more valid path elements +// separated by slashes (U+002F). (It must not begin with nor end in a slash.) +// +// A valid path element is a non-empty string made up of +// ASCII letters, ASCII digits, and limited ASCII punctuation: + - . _ and ~. +// It must not begin or end with a dot (U+002E), nor contain two dots in a row. +// +// The element prefix up to the first dot must not be a reserved file name +// on Windows, regardless of case (CON, com1, NuL, and so on). +// +// CheckImportPath may be less restrictive in the future, but see the +// top-level package documentation for additional information about +// subtleties of Unicode. +func CheckImportPath(path string) error { + if err := checkPath(path, false); err != nil { + return fmt.Errorf("malformed import path %q: %v", path, err) + } + return nil +} + +// checkPath checks that a general path is valid. +// It returns an error describing why but not mentioning path. +// Because these checks apply to both module paths and import paths, +// the caller is expected to add the "malformed ___ path %q: " prefix. +// fileName indicates whether the final element of the path is a file name +// (as opposed to a directory name). +func checkPath(path string, fileName bool) error { + if !utf8.ValidString(path) { + return fmt.Errorf("invalid UTF-8") + } + if path == "" { + return fmt.Errorf("empty string") + } + if path[0] == '-' { + return fmt.Errorf("leading dash") + } + if strings.Contains(path, "//") { + return fmt.Errorf("double slash") + } + if path[len(path)-1] == '/' { + return fmt.Errorf("trailing slash") + } + elemStart := 0 + for i, r := range path { + if r == '/' { + if err := checkElem(path[elemStart:i], fileName); err != nil { + return err + } + elemStart = i + 1 + } + } + if err := checkElem(path[elemStart:], fileName); err != nil { + return err + } + return nil +} + +// checkElem checks whether an individual path element is valid. +// fileName indicates whether the element is a file name (not a directory name). +func checkElem(elem string, fileName bool) error { + if elem == "" { + return fmt.Errorf("empty path element") + } + if strings.Count(elem, ".") == len(elem) { + return fmt.Errorf("invalid path element %q", elem) + } + if elem[0] == '.' && !fileName { + return fmt.Errorf("leading dot in path element") + } + if elem[len(elem)-1] == '.' { + return fmt.Errorf("trailing dot in path element") + } + charOK := pathOK + if fileName { + charOK = fileNameOK + } + for _, r := range elem { + if !charOK(r) { + return fmt.Errorf("invalid char %q", r) + } + } + + // Windows disallows a bunch of path elements, sadly. + // See https://docs.microsoft.com/en-us/windows/desktop/fileio/naming-a-file + short := elem + if i := strings.Index(short, "."); i >= 0 { + short = short[:i] + } + for _, bad := range badWindowsNames { + if strings.EqualFold(bad, short) { + return fmt.Errorf("%q disallowed as path element component on Windows", short) + } + } + return nil +} + +// CheckFilePath checks that a slash-separated file path is valid. +// The definition of a valid file path is the same as the definition +// of a valid import path except that the set of allowed characters is larger: +// all Unicode letters, ASCII digits, the ASCII space character (U+0020), +// and the ASCII punctuation characters +// “!#$%&()+,-.=@[]^_{}~”. +// (The excluded punctuation characters, " * < > ? ` ' | / \ and :, +// have special meanings in certain shells or operating systems.) +// +// CheckFilePath may be less restrictive in the future, but see the +// top-level package documentation for additional information about +// subtleties of Unicode. +func CheckFilePath(path string) error { + if err := checkPath(path, true); err != nil { + return fmt.Errorf("malformed file path %q: %v", path, err) + } + return nil +} + +// badWindowsNames are the reserved file path elements on Windows. +// See https://docs.microsoft.com/en-us/windows/desktop/fileio/naming-a-file +var badWindowsNames = []string{ + "CON", + "PRN", + "AUX", + "NUL", + "COM1", + "COM2", + "COM3", + "COM4", + "COM5", + "COM6", + "COM7", + "COM8", + "COM9", + "LPT1", + "LPT2", + "LPT3", + "LPT4", + "LPT5", + "LPT6", + "LPT7", + "LPT8", + "LPT9", +} + +// SplitPathVersion returns prefix and major version such that prefix+pathMajor == path +// and version is either empty or "/vN" for N >= 2. +// As a special case, gopkg.in paths are recognized directly; +// they require ".vN" instead of "/vN", and for all N, not just N >= 2. +// SplitPathVersion returns with ok = false when presented with +// a path whose last path element does not satisfy the constraints +// applied by CheckPath, such as "example.com/pkg/v1" or "example.com/pkg/v1.2". +func SplitPathVersion(path string) (prefix, pathMajor string, ok bool) { + if strings.HasPrefix(path, "gopkg.in/") { + return splitGopkgIn(path) + } + + i := len(path) + dot := false + for i > 0 && ('0' <= path[i-1] && path[i-1] <= '9' || path[i-1] == '.') { + if path[i-1] == '.' { + dot = true + } + i-- + } + if i <= 1 || i == len(path) || path[i-1] != 'v' || path[i-2] != '/' { + return path, "", true + } + prefix, pathMajor = path[:i-2], path[i-2:] + if dot || len(pathMajor) <= 2 || pathMajor[2] == '0' || pathMajor == "/v1" { + return path, "", false + } + return prefix, pathMajor, true +} + +// splitGopkgIn is like SplitPathVersion but only for gopkg.in paths. +func splitGopkgIn(path string) (prefix, pathMajor string, ok bool) { + if !strings.HasPrefix(path, "gopkg.in/") { + return path, "", false + } + i := len(path) + if strings.HasSuffix(path, "-unstable") { + i -= len("-unstable") + } + for i > 0 && ('0' <= path[i-1] && path[i-1] <= '9') { + i-- + } + if i <= 1 || path[i-1] != 'v' || path[i-2] != '.' { + // All gopkg.in paths must end in vN for some N. + return path, "", false + } + prefix, pathMajor = path[:i-2], path[i-2:] + if len(pathMajor) <= 2 || pathMajor[2] == '0' && pathMajor != ".v0" { + return path, "", false + } + return prefix, pathMajor, true +} + +// MatchPathMajor reports whether the semantic version v +// matches the path major version pathMajor. +// +// MatchPathMajor returns true if and only if CheckPathMajor returns nil. +func MatchPathMajor(v, pathMajor string) bool { + return CheckPathMajor(v, pathMajor) == nil +} + +// CheckPathMajor returns a non-nil error if the semantic version v +// does not match the path major version pathMajor. +func CheckPathMajor(v, pathMajor string) error { + // TODO(jayconrod): return errors or panic for invalid inputs. This function + // (and others) was covered by integration tests for cmd/go, and surrounding + // code protected against invalid inputs like non-canonical versions. + if strings.HasPrefix(pathMajor, ".v") && strings.HasSuffix(pathMajor, "-unstable") { + pathMajor = strings.TrimSuffix(pathMajor, "-unstable") + } + if strings.HasPrefix(v, "v0.0.0-") && pathMajor == ".v1" { + // Allow old bug in pseudo-versions that generated v0.0.0- pseudoversion for gopkg .v1. + // For example, gopkg.in/yaml.v2@v2.2.1's go.mod requires gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405. + return nil + } + m := semver.Major(v) + if pathMajor == "" { + if m == "v0" || m == "v1" || semver.Build(v) == "+incompatible" { + return nil + } + pathMajor = "v0 or v1" + } else if pathMajor[0] == '/' || pathMajor[0] == '.' { + if m == pathMajor[1:] { + return nil + } + pathMajor = pathMajor[1:] + } + return &InvalidVersionError{ + Version: v, + Err: fmt.Errorf("should be %s, not %s", pathMajor, semver.Major(v)), + } +} + +// PathMajorPrefix returns the major-version tag prefix implied by pathMajor. +// An empty PathMajorPrefix allows either v0 or v1. +// +// Note that MatchPathMajor may accept some versions that do not actually begin +// with this prefix: namely, it accepts a 'v0.0.0-' prefix for a '.v1' +// pathMajor, even though that pathMajor implies 'v1' tagging. +func PathMajorPrefix(pathMajor string) string { + if pathMajor == "" { + return "" + } + if pathMajor[0] != '/' && pathMajor[0] != '.' { + panic("pathMajor suffix " + pathMajor + " passed to PathMajorPrefix lacks separator") + } + if strings.HasPrefix(pathMajor, ".v") && strings.HasSuffix(pathMajor, "-unstable") { + pathMajor = strings.TrimSuffix(pathMajor, "-unstable") + } + m := pathMajor[1:] + if m != semver.Major(m) { + panic("pathMajor suffix " + pathMajor + "passed to PathMajorPrefix is not a valid major version") + } + return m +} + +// CanonicalVersion returns the canonical form of the version string v. +// It is the same as semver.Canonical(v) except that it preserves the special build suffix "+incompatible". +func CanonicalVersion(v string) string { + cv := semver.Canonical(v) + if semver.Build(v) == "+incompatible" { + cv += "+incompatible" + } + return cv +} + +// Sort sorts the list by Path, breaking ties by comparing Version fields. +// The Version fields are interpreted as semantic versions (using semver.Compare) +// optionally followed by a tie-breaking suffix introduced by a slash character, +// like in "v0.0.1/go.mod". +func Sort(list []Version) { + sort.Slice(list, func(i, j int) bool { + mi := list[i] + mj := list[j] + if mi.Path != mj.Path { + return mi.Path < mj.Path + } + // To help go.sum formatting, allow version/file. + // Compare semver prefix by semver rules, + // file by string order. + vi := mi.Version + vj := mj.Version + var fi, fj string + if k := strings.Index(vi, "/"); k >= 0 { + vi, fi = vi[:k], vi[k:] + } + if k := strings.Index(vj, "/"); k >= 0 { + vj, fj = vj[:k], vj[k:] + } + if vi != vj { + return semver.Compare(vi, vj) < 0 + } + return fi < fj + }) +} + +// EscapePath returns the escaped form of the given module path. +// It fails if the module path is invalid. +func EscapePath(path string) (escaped string, err error) { + if err := CheckPath(path); err != nil { + return "", err + } + + return escapeString(path) +} + +// EscapeVersion returns the escaped form of the given module version. +// Versions are allowed to be in non-semver form but must be valid file names +// and not contain exclamation marks. +func EscapeVersion(v string) (escaped string, err error) { + if err := checkElem(v, true); err != nil || strings.Contains(v, "!") { + return "", &InvalidVersionError{ + Version: v, + Err: fmt.Errorf("disallowed version string"), + } + } + return escapeString(v) +} + +func escapeString(s string) (escaped string, err error) { + haveUpper := false + for _, r := range s { + if r == '!' || r >= utf8.RuneSelf { + // This should be disallowed by CheckPath, but diagnose anyway. + // The correctness of the escaping loop below depends on it. + return "", fmt.Errorf("internal error: inconsistency in EscapePath") + } + if 'A' <= r && r <= 'Z' { + haveUpper = true + } + } + + if !haveUpper { + return s, nil + } + + var buf []byte + for _, r := range s { + if 'A' <= r && r <= 'Z' { + buf = append(buf, '!', byte(r+'a'-'A')) + } else { + buf = append(buf, byte(r)) + } + } + return string(buf), nil +} + +// UnescapePath returns the module path for the given escaped path. +// It fails if the escaped path is invalid or describes an invalid path. +func UnescapePath(escaped string) (path string, err error) { + path, ok := unescapeString(escaped) + if !ok { + return "", fmt.Errorf("invalid escaped module path %q", escaped) + } + if err := CheckPath(path); err != nil { + return "", fmt.Errorf("invalid escaped module path %q: %v", escaped, err) + } + return path, nil +} + +// UnescapeVersion returns the version string for the given escaped version. +// It fails if the escaped form is invalid or describes an invalid version. +// Versions are allowed to be in non-semver form but must be valid file names +// and not contain exclamation marks. +func UnescapeVersion(escaped string) (v string, err error) { + v, ok := unescapeString(escaped) + if !ok { + return "", fmt.Errorf("invalid escaped version %q", escaped) + } + if err := checkElem(v, true); err != nil { + return "", fmt.Errorf("invalid escaped version %q: %v", v, err) + } + return v, nil +} + +func unescapeString(escaped string) (string, bool) { + var buf []byte + + bang := false + for _, r := range escaped { + if r >= utf8.RuneSelf { + return "", false + } + if bang { + bang = false + if r < 'a' || 'z' < r { + return "", false + } + buf = append(buf, byte(r+'A'-'a')) + continue + } + if r == '!' { + bang = true + continue + } + if 'A' <= r && r <= 'Z' { + return "", false + } + buf = append(buf, byte(r)) + } + if bang { + return "", false + } + return string(buf), true +} diff --git a/vendor/golang.org/x/mod/semver/semver.go b/vendor/golang.org/x/mod/semver/semver.go new file mode 100644 index 000000000..2988e3cf9 --- /dev/null +++ b/vendor/golang.org/x/mod/semver/semver.go @@ -0,0 +1,388 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package semver implements comparison of semantic version strings. +// In this package, semantic version strings must begin with a leading "v", +// as in "v1.0.0". +// +// The general form of a semantic version string accepted by this package is +// +// vMAJOR[.MINOR[.PATCH[-PRERELEASE][+BUILD]]] +// +// where square brackets indicate optional parts of the syntax; +// MAJOR, MINOR, and PATCH are decimal integers without extra leading zeros; +// PRERELEASE and BUILD are each a series of non-empty dot-separated identifiers +// using only alphanumeric characters and hyphens; and +// all-numeric PRERELEASE identifiers must not have leading zeros. +// +// This package follows Semantic Versioning 2.0.0 (see semver.org) +// with two exceptions. First, it requires the "v" prefix. Second, it recognizes +// vMAJOR and vMAJOR.MINOR (with no prerelease or build suffixes) +// as shorthands for vMAJOR.0.0 and vMAJOR.MINOR.0. +package semver + +// parsed returns the parsed form of a semantic version string. +type parsed struct { + major string + minor string + patch string + short string + prerelease string + build string + err string +} + +// IsValid reports whether v is a valid semantic version string. +func IsValid(v string) bool { + _, ok := parse(v) + return ok +} + +// Canonical returns the canonical formatting of the semantic version v. +// It fills in any missing .MINOR or .PATCH and discards build metadata. +// Two semantic versions compare equal only if their canonical formattings +// are identical strings. +// The canonical invalid semantic version is the empty string. +func Canonical(v string) string { + p, ok := parse(v) + if !ok { + return "" + } + if p.build != "" { + return v[:len(v)-len(p.build)] + } + if p.short != "" { + return v + p.short + } + return v +} + +// Major returns the major version prefix of the semantic version v. +// For example, Major("v2.1.0") == "v2". +// If v is an invalid semantic version string, Major returns the empty string. +func Major(v string) string { + pv, ok := parse(v) + if !ok { + return "" + } + return v[:1+len(pv.major)] +} + +// MajorMinor returns the major.minor version prefix of the semantic version v. +// For example, MajorMinor("v2.1.0") == "v2.1". +// If v is an invalid semantic version string, MajorMinor returns the empty string. +func MajorMinor(v string) string { + pv, ok := parse(v) + if !ok { + return "" + } + i := 1 + len(pv.major) + if j := i + 1 + len(pv.minor); j <= len(v) && v[i] == '.' && v[i+1:j] == pv.minor { + return v[:j] + } + return v[:i] + "." + pv.minor +} + +// Prerelease returns the prerelease suffix of the semantic version v. +// For example, Prerelease("v2.1.0-pre+meta") == "-pre". +// If v is an invalid semantic version string, Prerelease returns the empty string. +func Prerelease(v string) string { + pv, ok := parse(v) + if !ok { + return "" + } + return pv.prerelease +} + +// Build returns the build suffix of the semantic version v. +// For example, Build("v2.1.0+meta") == "+meta". +// If v is an invalid semantic version string, Build returns the empty string. +func Build(v string) string { + pv, ok := parse(v) + if !ok { + return "" + } + return pv.build +} + +// Compare returns an integer comparing two versions according to +// semantic version precedence. +// The result will be 0 if v == w, -1 if v < w, or +1 if v > w. +// +// An invalid semantic version string is considered less than a valid one. +// All invalid semantic version strings compare equal to each other. +func Compare(v, w string) int { + pv, ok1 := parse(v) + pw, ok2 := parse(w) + if !ok1 && !ok2 { + return 0 + } + if !ok1 { + return -1 + } + if !ok2 { + return +1 + } + if c := compareInt(pv.major, pw.major); c != 0 { + return c + } + if c := compareInt(pv.minor, pw.minor); c != 0 { + return c + } + if c := compareInt(pv.patch, pw.patch); c != 0 { + return c + } + return comparePrerelease(pv.prerelease, pw.prerelease) +} + +// Max canonicalizes its arguments and then returns the version string +// that compares greater. +func Max(v, w string) string { + v = Canonical(v) + w = Canonical(w) + if Compare(v, w) > 0 { + return v + } + return w +} + +func parse(v string) (p parsed, ok bool) { + if v == "" || v[0] != 'v' { + p.err = "missing v prefix" + return + } + p.major, v, ok = parseInt(v[1:]) + if !ok { + p.err = "bad major version" + return + } + if v == "" { + p.minor = "0" + p.patch = "0" + p.short = ".0.0" + return + } + if v[0] != '.' { + p.err = "bad minor prefix" + ok = false + return + } + p.minor, v, ok = parseInt(v[1:]) + if !ok { + p.err = "bad minor version" + return + } + if v == "" { + p.patch = "0" + p.short = ".0" + return + } + if v[0] != '.' { + p.err = "bad patch prefix" + ok = false + return + } + p.patch, v, ok = parseInt(v[1:]) + if !ok { + p.err = "bad patch version" + return + } + if len(v) > 0 && v[0] == '-' { + p.prerelease, v, ok = parsePrerelease(v) + if !ok { + p.err = "bad prerelease" + return + } + } + if len(v) > 0 && v[0] == '+' { + p.build, v, ok = parseBuild(v) + if !ok { + p.err = "bad build" + return + } + } + if v != "" { + p.err = "junk on end" + ok = false + return + } + ok = true + return +} + +func parseInt(v string) (t, rest string, ok bool) { + if v == "" { + return + } + if v[0] < '0' || '9' < v[0] { + return + } + i := 1 + for i < len(v) && '0' <= v[i] && v[i] <= '9' { + i++ + } + if v[0] == '0' && i != 1 { + return + } + return v[:i], v[i:], true +} + +func parsePrerelease(v string) (t, rest string, ok bool) { + // "A pre-release version MAY be denoted by appending a hyphen and + // a series of dot separated identifiers immediately following the patch version. + // Identifiers MUST comprise only ASCII alphanumerics and hyphen [0-9A-Za-z-]. + // Identifiers MUST NOT be empty. Numeric identifiers MUST NOT include leading zeroes." + if v == "" || v[0] != '-' { + return + } + i := 1 + start := 1 + for i < len(v) && v[i] != '+' { + if !isIdentChar(v[i]) && v[i] != '.' { + return + } + if v[i] == '.' { + if start == i || isBadNum(v[start:i]) { + return + } + start = i + 1 + } + i++ + } + if start == i || isBadNum(v[start:i]) { + return + } + return v[:i], v[i:], true +} + +func parseBuild(v string) (t, rest string, ok bool) { + if v == "" || v[0] != '+' { + return + } + i := 1 + start := 1 + for i < len(v) { + if !isIdentChar(v[i]) && v[i] != '.' { + return + } + if v[i] == '.' { + if start == i { + return + } + start = i + 1 + } + i++ + } + if start == i { + return + } + return v[:i], v[i:], true +} + +func isIdentChar(c byte) bool { + return 'A' <= c && c <= 'Z' || 'a' <= c && c <= 'z' || '0' <= c && c <= '9' || c == '-' +} + +func isBadNum(v string) bool { + i := 0 + for i < len(v) && '0' <= v[i] && v[i] <= '9' { + i++ + } + return i == len(v) && i > 1 && v[0] == '0' +} + +func isNum(v string) bool { + i := 0 + for i < len(v) && '0' <= v[i] && v[i] <= '9' { + i++ + } + return i == len(v) +} + +func compareInt(x, y string) int { + if x == y { + return 0 + } + if len(x) < len(y) { + return -1 + } + if len(x) > len(y) { + return +1 + } + if x < y { + return -1 + } else { + return +1 + } +} + +func comparePrerelease(x, y string) int { + // "When major, minor, and patch are equal, a pre-release version has + // lower precedence than a normal version. + // Example: 1.0.0-alpha < 1.0.0. + // Precedence for two pre-release versions with the same major, minor, + // and patch version MUST be determined by comparing each dot separated + // identifier from left to right until a difference is found as follows: + // identifiers consisting of only digits are compared numerically and + // identifiers with letters or hyphens are compared lexically in ASCII + // sort order. Numeric identifiers always have lower precedence than + // non-numeric identifiers. A larger set of pre-release fields has a + // higher precedence than a smaller set, if all of the preceding + // identifiers are equal. + // Example: 1.0.0-alpha < 1.0.0-alpha.1 < 1.0.0-alpha.beta < + // 1.0.0-beta < 1.0.0-beta.2 < 1.0.0-beta.11 < 1.0.0-rc.1 < 1.0.0." + if x == y { + return 0 + } + if x == "" { + return +1 + } + if y == "" { + return -1 + } + for x != "" && y != "" { + x = x[1:] // skip - or . + y = y[1:] // skip - or . + var dx, dy string + dx, x = nextIdent(x) + dy, y = nextIdent(y) + if dx != dy { + ix := isNum(dx) + iy := isNum(dy) + if ix != iy { + if ix { + return -1 + } else { + return +1 + } + } + if ix { + if len(dx) < len(dy) { + return -1 + } + if len(dx) > len(dy) { + return +1 + } + } + if dx < dy { + return -1 + } else { + return +1 + } + } + } + if x == "" { + return -1 + } else { + return +1 + } +} + +func nextIdent(x string) (dx, rest string) { + i := 0 + for i < len(x) && x[i] != '.' { + i++ + } + return x[:i], x[i:] +} diff --git a/vendor/golang.org/x/oauth2/go.mod b/vendor/golang.org/x/oauth2/go.mod deleted file mode 100644 index b34578155..000000000 --- a/vendor/golang.org/x/oauth2/go.mod +++ /dev/null @@ -1,10 +0,0 @@ -module golang.org/x/oauth2 - -go 1.11 - -require ( - cloud.google.com/go v0.34.0 - golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e - golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 // indirect - google.golang.org/appengine v1.4.0 -) diff --git a/vendor/golang.org/x/oauth2/go.sum b/vendor/golang.org/x/oauth2/go.sum deleted file mode 100644 index 6f0079b0d..000000000 --- a/vendor/golang.org/x/oauth2/go.sum +++ /dev/null @@ -1,12 +0,0 @@ -cloud.google.com/go v0.34.0 h1:eOI3/cP2VTU6uZLDYAoic+eyzzB9YyGmJ7eIjl8rOPg= -cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e h1:bRhVy7zSSasaqNksaRZiA5EEI+Ei4I1nO5Jh72wfHlg= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 h1:YUO/7uOKsKeq9UokNS62b8FYywz3ker1l1vDZRCRefw= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= diff --git a/vendor/golang.org/x/tools/cmd/goimports/doc.go b/vendor/golang.org/x/tools/cmd/goimports/doc.go new file mode 100644 index 000000000..f344d8014 --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/goimports/doc.go @@ -0,0 +1,47 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +/* + +Command goimports updates your Go import lines, +adding missing ones and removing unreferenced ones. + + $ go get golang.org/x/tools/cmd/goimports + +In addition to fixing imports, goimports also formats +your code in the same style as gofmt so it can be used +as a replacement for your editor's gofmt-on-save hook. + +For emacs, make sure you have the latest go-mode.el: + https://github.com/dominikh/go-mode.el +Then in your .emacs file: + (setq gofmt-command "goimports") + (add-hook 'before-save-hook 'gofmt-before-save) + +For vim, set "gofmt_command" to "goimports": + https://golang.org/change/39c724dd7f252 + https://golang.org/wiki/IDEsAndTextEditorPlugins + etc + +For GoSublime, follow the steps described here: + http://michaelwhatcott.com/gosublime-goimports/ + +For other editors, you probably know what to do. + +To exclude directories in your $GOPATH from being scanned for Go +files, goimports respects a configuration file at +$GOPATH/src/.goimportsignore which may contain blank lines, comment +lines (beginning with '#'), or lines naming a directory relative to +the configuration file to ignore when scanning. No globbing or regex +patterns are allowed. Use the "-v" verbose flag to verify it's +working and see what goimports is doing. + +File bugs or feature requests at: + + https://golang.org/issues/new?title=x/tools/cmd/goimports:+ + +Happy hacking! + +*/ +package main // import "golang.org/x/tools/cmd/goimports" diff --git a/vendor/golang.org/x/tools/cmd/goimports/goimports.go b/vendor/golang.org/x/tools/cmd/goimports/goimports.go new file mode 100644 index 000000000..27708972d --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/goimports/goimports.go @@ -0,0 +1,380 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package main + +import ( + "bufio" + "bytes" + "errors" + "flag" + "fmt" + "go/scanner" + "io" + "io/ioutil" + "log" + "os" + "os/exec" + "path/filepath" + "runtime" + "runtime/pprof" + "strings" + + "golang.org/x/tools/internal/gocommand" + "golang.org/x/tools/internal/imports" +) + +var ( + // main operation modes + list = flag.Bool("l", false, "list files whose formatting differs from goimport's") + write = flag.Bool("w", false, "write result to (source) file instead of stdout") + doDiff = flag.Bool("d", false, "display diffs instead of rewriting files") + srcdir = flag.String("srcdir", "", "choose imports as if source code is from `dir`. When operating on a single file, dir may instead be the complete file name.") + + verbose bool // verbose logging + + cpuProfile = flag.String("cpuprofile", "", "CPU profile output") + memProfile = flag.String("memprofile", "", "memory profile output") + memProfileRate = flag.Int("memrate", 0, "if > 0, sets runtime.MemProfileRate") + + options = &imports.Options{ + TabWidth: 8, + TabIndent: true, + Comments: true, + Fragment: true, + Env: &imports.ProcessEnv{ + GocmdRunner: &gocommand.Runner{}, + }, + } + exitCode = 0 +) + +func init() { + flag.BoolVar(&options.AllErrors, "e", false, "report all errors (not just the first 10 on different lines)") + flag.StringVar(&options.LocalPrefix, "local", "", "put imports beginning with this string after 3rd-party packages; comma-separated list") + flag.BoolVar(&options.FormatOnly, "format-only", false, "if true, don't fix imports and only format. In this mode, goimports is effectively gofmt, with the addition that imports are grouped into sections.") +} + +func report(err error) { + scanner.PrintError(os.Stderr, err) + exitCode = 2 +} + +func usage() { + fmt.Fprintf(os.Stderr, "usage: goimports [flags] [path ...]\n") + flag.PrintDefaults() + os.Exit(2) +} + +func isGoFile(f os.FileInfo) bool { + // ignore non-Go files + name := f.Name() + return !f.IsDir() && !strings.HasPrefix(name, ".") && strings.HasSuffix(name, ".go") +} + +// argumentType is which mode goimports was invoked as. +type argumentType int + +const ( + // fromStdin means the user is piping their source into goimports. + fromStdin argumentType = iota + + // singleArg is the common case from editors, when goimports is run on + // a single file. + singleArg + + // multipleArg is when the user ran "goimports file1.go file2.go" + // or ran goimports on a directory tree. + multipleArg +) + +func processFile(filename string, in io.Reader, out io.Writer, argType argumentType) error { + opt := options + if argType == fromStdin { + nopt := *options + nopt.Fragment = true + opt = &nopt + } + + if in == nil { + f, err := os.Open(filename) + if err != nil { + return err + } + defer f.Close() + in = f + } + + src, err := ioutil.ReadAll(in) + if err != nil { + return err + } + + target := filename + if *srcdir != "" { + // Determine whether the provided -srcdirc is a directory or file + // and then use it to override the target. + // + // See https://github.com/dominikh/go-mode.el/issues/146 + if isFile(*srcdir) { + if argType == multipleArg { + return errors.New("-srcdir value can't be a file when passing multiple arguments or when walking directories") + } + target = *srcdir + } else if argType == singleArg && strings.HasSuffix(*srcdir, ".go") && !isDir(*srcdir) { + // For a file which doesn't exist on disk yet, but might shortly. + // e.g. user in editor opens $DIR/newfile.go and newfile.go doesn't yet exist on disk. + // The goimports on-save hook writes the buffer to a temp file + // first and runs goimports before the actual save to newfile.go. + // The editor's buffer is named "newfile.go" so that is passed to goimports as: + // goimports -srcdir=/gopath/src/pkg/newfile.go /tmp/gofmtXXXXXXXX.go + // and then the editor reloads the result from the tmp file and writes + // it to newfile.go. + target = *srcdir + } else { + // Pretend that file is from *srcdir in order to decide + // visible imports correctly. + target = filepath.Join(*srcdir, filepath.Base(filename)) + } + } + + res, err := imports.Process(target, src, opt) + if err != nil { + return err + } + + if !bytes.Equal(src, res) { + // formatting has changed + if *list { + fmt.Fprintln(out, filename) + } + if *write { + if argType == fromStdin { + // filename is "" + return errors.New("can't use -w on stdin") + } + // On Windows, we need to re-set the permissions from the file. See golang/go#38225. + var perms os.FileMode + if fi, err := os.Stat(filename); err == nil { + perms = fi.Mode() & os.ModePerm + } + err = ioutil.WriteFile(filename, res, perms) + if err != nil { + return err + } + } + if *doDiff { + if argType == fromStdin { + filename = "stdin.go" // because .orig looks silly + } + data, err := diff(src, res, filename) + if err != nil { + return fmt.Errorf("computing diff: %s", err) + } + fmt.Printf("diff -u %s %s\n", filepath.ToSlash(filename+".orig"), filepath.ToSlash(filename)) + out.Write(data) + } + } + + if !*list && !*write && !*doDiff { + _, err = out.Write(res) + } + + return err +} + +func visitFile(path string, f os.FileInfo, err error) error { + if err == nil && isGoFile(f) { + err = processFile(path, nil, os.Stdout, multipleArg) + } + if err != nil { + report(err) + } + return nil +} + +func walkDir(path string) { + filepath.Walk(path, visitFile) +} + +func main() { + runtime.GOMAXPROCS(runtime.NumCPU()) + + // call gofmtMain in a separate function + // so that it can use defer and have them + // run before the exit. + gofmtMain() + os.Exit(exitCode) +} + +// parseFlags parses command line flags and returns the paths to process. +// It's a var so that custom implementations can replace it in other files. +var parseFlags = func() []string { + flag.BoolVar(&verbose, "v", false, "verbose logging") + + flag.Parse() + return flag.Args() +} + +func bufferedFileWriter(dest string) (w io.Writer, close func()) { + f, err := os.Create(dest) + if err != nil { + log.Fatal(err) + } + bw := bufio.NewWriter(f) + return bw, func() { + if err := bw.Flush(); err != nil { + log.Fatalf("error flushing %v: %v", dest, err) + } + if err := f.Close(); err != nil { + log.Fatal(err) + } + } +} + +func gofmtMain() { + flag.Usage = usage + paths := parseFlags() + + if *cpuProfile != "" { + bw, flush := bufferedFileWriter(*cpuProfile) + pprof.StartCPUProfile(bw) + defer flush() + defer pprof.StopCPUProfile() + } + // doTrace is a conditionally compiled wrapper around runtime/trace. It is + // used to allow goimports to compile under gccgo, which does not support + // runtime/trace. See https://golang.org/issue/15544. + defer doTrace()() + if *memProfileRate > 0 { + runtime.MemProfileRate = *memProfileRate + bw, flush := bufferedFileWriter(*memProfile) + defer func() { + runtime.GC() // materialize all statistics + if err := pprof.WriteHeapProfile(bw); err != nil { + log.Fatal(err) + } + flush() + }() + } + + if verbose { + log.SetFlags(log.LstdFlags | log.Lmicroseconds) + options.Env.Logf = log.Printf + } + if options.TabWidth < 0 { + fmt.Fprintf(os.Stderr, "negative tabwidth %d\n", options.TabWidth) + exitCode = 2 + return + } + + if len(paths) == 0 { + if err := processFile("", os.Stdin, os.Stdout, fromStdin); err != nil { + report(err) + } + return + } + + argType := singleArg + if len(paths) > 1 { + argType = multipleArg + } + + for _, path := range paths { + switch dir, err := os.Stat(path); { + case err != nil: + report(err) + case dir.IsDir(): + walkDir(path) + default: + if err := processFile(path, nil, os.Stdout, argType); err != nil { + report(err) + } + } + } +} + +func writeTempFile(dir, prefix string, data []byte) (string, error) { + file, err := ioutil.TempFile(dir, prefix) + if err != nil { + return "", err + } + _, err = file.Write(data) + if err1 := file.Close(); err == nil { + err = err1 + } + if err != nil { + os.Remove(file.Name()) + return "", err + } + return file.Name(), nil +} + +func diff(b1, b2 []byte, filename string) (data []byte, err error) { + f1, err := writeTempFile("", "gofmt", b1) + if err != nil { + return + } + defer os.Remove(f1) + + f2, err := writeTempFile("", "gofmt", b2) + if err != nil { + return + } + defer os.Remove(f2) + + cmd := "diff" + if runtime.GOOS == "plan9" { + cmd = "/bin/ape/diff" + } + + data, err = exec.Command(cmd, "-u", f1, f2).CombinedOutput() + if len(data) > 0 { + // diff exits with a non-zero status when the files don't match. + // Ignore that failure as long as we get output. + return replaceTempFilename(data, filename) + } + return +} + +// replaceTempFilename replaces temporary filenames in diff with actual one. +// +// --- /tmp/gofmt316145376 2017-02-03 19:13:00.280468375 -0500 +// +++ /tmp/gofmt617882815 2017-02-03 19:13:00.280468375 -0500 +// ... +// -> +// --- path/to/file.go.orig 2017-02-03 19:13:00.280468375 -0500 +// +++ path/to/file.go 2017-02-03 19:13:00.280468375 -0500 +// ... +func replaceTempFilename(diff []byte, filename string) ([]byte, error) { + bs := bytes.SplitN(diff, []byte{'\n'}, 3) + if len(bs) < 3 { + return nil, fmt.Errorf("got unexpected diff for %s", filename) + } + // Preserve timestamps. + var t0, t1 []byte + if i := bytes.LastIndexByte(bs[0], '\t'); i != -1 { + t0 = bs[0][i:] + } + if i := bytes.LastIndexByte(bs[1], '\t'); i != -1 { + t1 = bs[1][i:] + } + // Always print filepath with slash separator. + f := filepath.ToSlash(filename) + bs[0] = []byte(fmt.Sprintf("--- %s%s", f+".orig", t0)) + bs[1] = []byte(fmt.Sprintf("+++ %s%s", f, t1)) + return bytes.Join(bs, []byte{'\n'}), nil +} + +// isFile reports whether name is a file. +func isFile(name string) bool { + fi, err := os.Stat(name) + return err == nil && fi.Mode().IsRegular() +} + +// isDir reports whether name is a directory. +func isDir(name string) bool { + fi, err := os.Stat(name) + return err == nil && fi.IsDir() +} diff --git a/vendor/golang.org/x/tools/cmd/goimports/goimports_gc.go b/vendor/golang.org/x/tools/cmd/goimports/goimports_gc.go new file mode 100644 index 000000000..21d867eaa --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/goimports/goimports_gc.go @@ -0,0 +1,26 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build gc + +package main + +import ( + "flag" + "runtime/trace" +) + +var traceProfile = flag.String("trace", "", "trace profile output") + +func doTrace() func() { + if *traceProfile != "" { + bw, flush := bufferedFileWriter(*traceProfile) + trace.Start(bw) + return func() { + flush() + trace.Stop() + } + } + return func() {} +} diff --git a/vendor/golang.org/x/tools/cmd/goimports/goimports_not_gc.go b/vendor/golang.org/x/tools/cmd/goimports/goimports_not_gc.go new file mode 100644 index 000000000..f5531ceb3 --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/goimports/goimports_not_gc.go @@ -0,0 +1,11 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !gc + +package main + +func doTrace() func() { + return func() {} +} diff --git a/vendor/golang.org/x/tools/cmd/stringer/stringer.go b/vendor/golang.org/x/tools/cmd/stringer/stringer.go new file mode 100644 index 000000000..558a234d6 --- /dev/null +++ b/vendor/golang.org/x/tools/cmd/stringer/stringer.go @@ -0,0 +1,655 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Stringer is a tool to automate the creation of methods that satisfy the fmt.Stringer +// interface. Given the name of a (signed or unsigned) integer type T that has constants +// defined, stringer will create a new self-contained Go source file implementing +// func (t T) String() string +// The file is created in the same package and directory as the package that defines T. +// It has helpful defaults designed for use with go generate. +// +// Stringer works best with constants that are consecutive values such as created using iota, +// but creates good code regardless. In the future it might also provide custom support for +// constant sets that are bit patterns. +// +// For example, given this snippet, +// +// package painkiller +// +// type Pill int +// +// const ( +// Placebo Pill = iota +// Aspirin +// Ibuprofen +// Paracetamol +// Acetaminophen = Paracetamol +// ) +// +// running this command +// +// stringer -type=Pill +// +// in the same directory will create the file pill_string.go, in package painkiller, +// containing a definition of +// +// func (Pill) String() string +// +// That method will translate the value of a Pill constant to the string representation +// of the respective constant name, so that the call fmt.Print(painkiller.Aspirin) will +// print the string "Aspirin". +// +// Typically this process would be run using go generate, like this: +// +// //go:generate stringer -type=Pill +// +// If multiple constants have the same value, the lexically first matching name will +// be used (in the example, Acetaminophen will print as "Paracetamol"). +// +// With no arguments, it processes the package in the current directory. +// Otherwise, the arguments must name a single directory holding a Go package +// or a set of Go source files that represent a single Go package. +// +// The -type flag accepts a comma-separated list of types so a single run can +// generate methods for multiple types. The default output file is t_string.go, +// where t is the lower-cased name of the first type listed. It can be overridden +// with the -output flag. +// +// The -linecomment flag tells stringer to generate the text of any line comment, trimmed +// of leading spaces, instead of the constant name. For instance, if the constants above had a +// Pill prefix, one could write +// +// PillAspirin // Aspirin +// +// to suppress it in the output. +package main // import "golang.org/x/tools/cmd/stringer" + +import ( + "bytes" + "flag" + "fmt" + "go/ast" + "go/constant" + "go/format" + "go/token" + "go/types" + "io/ioutil" + "log" + "os" + "path/filepath" + "sort" + "strings" + + "golang.org/x/tools/go/packages" +) + +var ( + typeNames = flag.String("type", "", "comma-separated list of type names; must be set") + output = flag.String("output", "", "output file name; default srcdir/_string.go") + trimprefix = flag.String("trimprefix", "", "trim the `prefix` from the generated constant names") + linecomment = flag.Bool("linecomment", false, "use line comment text as printed text when present") + buildTags = flag.String("tags", "", "comma-separated list of build tags to apply") +) + +// Usage is a replacement usage function for the flags package. +func Usage() { + fmt.Fprintf(os.Stderr, "Usage of stringer:\n") + fmt.Fprintf(os.Stderr, "\tstringer [flags] -type T [directory]\n") + fmt.Fprintf(os.Stderr, "\tstringer [flags] -type T files... # Must be a single package\n") + fmt.Fprintf(os.Stderr, "For more information, see:\n") + fmt.Fprintf(os.Stderr, "\thttps://pkg.go.dev/golang.org/x/tools/cmd/stringer\n") + fmt.Fprintf(os.Stderr, "Flags:\n") + flag.PrintDefaults() +} + +func main() { + log.SetFlags(0) + log.SetPrefix("stringer: ") + flag.Usage = Usage + flag.Parse() + if len(*typeNames) == 0 { + flag.Usage() + os.Exit(2) + } + types := strings.Split(*typeNames, ",") + var tags []string + if len(*buildTags) > 0 { + tags = strings.Split(*buildTags, ",") + } + + // We accept either one directory or a list of files. Which do we have? + args := flag.Args() + if len(args) == 0 { + // Default: process whole package in current directory. + args = []string{"."} + } + + // Parse the package once. + var dir string + g := Generator{ + trimPrefix: *trimprefix, + lineComment: *linecomment, + } + // TODO(suzmue): accept other patterns for packages (directories, list of files, import paths, etc). + if len(args) == 1 && isDirectory(args[0]) { + dir = args[0] + } else { + if len(tags) != 0 { + log.Fatal("-tags option applies only to directories, not when files are specified") + } + dir = filepath.Dir(args[0]) + } + + g.parsePackage(args, tags) + + // Print the header and package clause. + g.Printf("// Code generated by \"stringer %s\"; DO NOT EDIT.\n", strings.Join(os.Args[1:], " ")) + g.Printf("\n") + g.Printf("package %s", g.pkg.name) + g.Printf("\n") + g.Printf("import \"strconv\"\n") // Used by all methods. + + // Run generate for each type. + for _, typeName := range types { + g.generate(typeName) + } + + // Format the output. + src := g.format() + + // Write to file. + outputName := *output + if outputName == "" { + baseName := fmt.Sprintf("%s_string.go", types[0]) + outputName = filepath.Join(dir, strings.ToLower(baseName)) + } + err := ioutil.WriteFile(outputName, src, 0644) + if err != nil { + log.Fatalf("writing output: %s", err) + } +} + +// isDirectory reports whether the named file is a directory. +func isDirectory(name string) bool { + info, err := os.Stat(name) + if err != nil { + log.Fatal(err) + } + return info.IsDir() +} + +// Generator holds the state of the analysis. Primarily used to buffer +// the output for format.Source. +type Generator struct { + buf bytes.Buffer // Accumulated output. + pkg *Package // Package we are scanning. + + trimPrefix string + lineComment bool +} + +func (g *Generator) Printf(format string, args ...interface{}) { + fmt.Fprintf(&g.buf, format, args...) +} + +// File holds a single parsed file and associated data. +type File struct { + pkg *Package // Package to which this file belongs. + file *ast.File // Parsed AST. + // These fields are reset for each type being generated. + typeName string // Name of the constant type. + values []Value // Accumulator for constant values of that type. + + trimPrefix string + lineComment bool +} + +type Package struct { + name string + defs map[*ast.Ident]types.Object + files []*File +} + +// parsePackage analyzes the single package constructed from the patterns and tags. +// parsePackage exits if there is an error. +func (g *Generator) parsePackage(patterns []string, tags []string) { + cfg := &packages.Config{ + Mode: packages.LoadSyntax, + // TODO: Need to think about constants in test files. Maybe write type_string_test.go + // in a separate pass? For later. + Tests: false, + BuildFlags: []string{fmt.Sprintf("-tags=%s", strings.Join(tags, " "))}, + } + pkgs, err := packages.Load(cfg, patterns...) + if err != nil { + log.Fatal(err) + } + if len(pkgs) != 1 { + log.Fatalf("error: %d packages found", len(pkgs)) + } + g.addPackage(pkgs[0]) +} + +// addPackage adds a type checked Package and its syntax files to the generator. +func (g *Generator) addPackage(pkg *packages.Package) { + g.pkg = &Package{ + name: pkg.Name, + defs: pkg.TypesInfo.Defs, + files: make([]*File, len(pkg.Syntax)), + } + + for i, file := range pkg.Syntax { + g.pkg.files[i] = &File{ + file: file, + pkg: g.pkg, + trimPrefix: g.trimPrefix, + lineComment: g.lineComment, + } + } +} + +// generate produces the String method for the named type. +func (g *Generator) generate(typeName string) { + values := make([]Value, 0, 100) + for _, file := range g.pkg.files { + // Set the state for this run of the walker. + file.typeName = typeName + file.values = nil + if file.file != nil { + ast.Inspect(file.file, file.genDecl) + values = append(values, file.values...) + } + } + + if len(values) == 0 { + log.Fatalf("no values defined for type %s", typeName) + } + // Generate code that will fail if the constants change value. + g.Printf("func _() {\n") + g.Printf("\t// An \"invalid array index\" compiler error signifies that the constant values have changed.\n") + g.Printf("\t// Re-run the stringer command to generate them again.\n") + g.Printf("\tvar x [1]struct{}\n") + for _, v := range values { + g.Printf("\t_ = x[%s - %s]\n", v.originalName, v.str) + } + g.Printf("}\n") + runs := splitIntoRuns(values) + // The decision of which pattern to use depends on the number of + // runs in the numbers. If there's only one, it's easy. For more than + // one, there's a tradeoff between complexity and size of the data + // and code vs. the simplicity of a map. A map takes more space, + // but so does the code. The decision here (crossover at 10) is + // arbitrary, but considers that for large numbers of runs the cost + // of the linear scan in the switch might become important, and + // rather than use yet another algorithm such as binary search, + // we punt and use a map. In any case, the likelihood of a map + // being necessary for any realistic example other than bitmasks + // is very low. And bitmasks probably deserve their own analysis, + // to be done some other day. + switch { + case len(runs) == 1: + g.buildOneRun(runs, typeName) + case len(runs) <= 10: + g.buildMultipleRuns(runs, typeName) + default: + g.buildMap(runs, typeName) + } +} + +// splitIntoRuns breaks the values into runs of contiguous sequences. +// For example, given 1,2,3,5,6,7 it returns {1,2,3},{5,6,7}. +// The input slice is known to be non-empty. +func splitIntoRuns(values []Value) [][]Value { + // We use stable sort so the lexically first name is chosen for equal elements. + sort.Stable(byValue(values)) + // Remove duplicates. Stable sort has put the one we want to print first, + // so use that one. The String method won't care about which named constant + // was the argument, so the first name for the given value is the only one to keep. + // We need to do this because identical values would cause the switch or map + // to fail to compile. + j := 1 + for i := 1; i < len(values); i++ { + if values[i].value != values[i-1].value { + values[j] = values[i] + j++ + } + } + values = values[:j] + runs := make([][]Value, 0, 10) + for len(values) > 0 { + // One contiguous sequence per outer loop. + i := 1 + for i < len(values) && values[i].value == values[i-1].value+1 { + i++ + } + runs = append(runs, values[:i]) + values = values[i:] + } + return runs +} + +// format returns the gofmt-ed contents of the Generator's buffer. +func (g *Generator) format() []byte { + src, err := format.Source(g.buf.Bytes()) + if err != nil { + // Should never happen, but can arise when developing this code. + // The user can compile the output to see the error. + log.Printf("warning: internal error: invalid Go generated: %s", err) + log.Printf("warning: compile the package to analyze the error") + return g.buf.Bytes() + } + return src +} + +// Value represents a declared constant. +type Value struct { + originalName string // The name of the constant. + name string // The name with trimmed prefix. + // The value is stored as a bit pattern alone. The boolean tells us + // whether to interpret it as an int64 or a uint64; the only place + // this matters is when sorting. + // Much of the time the str field is all we need; it is printed + // by Value.String. + value uint64 // Will be converted to int64 when needed. + signed bool // Whether the constant is a signed type. + str string // The string representation given by the "go/constant" package. +} + +func (v *Value) String() string { + return v.str +} + +// byValue lets us sort the constants into increasing order. +// We take care in the Less method to sort in signed or unsigned order, +// as appropriate. +type byValue []Value + +func (b byValue) Len() int { return len(b) } +func (b byValue) Swap(i, j int) { b[i], b[j] = b[j], b[i] } +func (b byValue) Less(i, j int) bool { + if b[i].signed { + return int64(b[i].value) < int64(b[j].value) + } + return b[i].value < b[j].value +} + +// genDecl processes one declaration clause. +func (f *File) genDecl(node ast.Node) bool { + decl, ok := node.(*ast.GenDecl) + if !ok || decl.Tok != token.CONST { + // We only care about const declarations. + return true + } + // The name of the type of the constants we are declaring. + // Can change if this is a multi-element declaration. + typ := "" + // Loop over the elements of the declaration. Each element is a ValueSpec: + // a list of names possibly followed by a type, possibly followed by values. + // If the type and value are both missing, we carry down the type (and value, + // but the "go/types" package takes care of that). + for _, spec := range decl.Specs { + vspec := spec.(*ast.ValueSpec) // Guaranteed to succeed as this is CONST. + if vspec.Type == nil && len(vspec.Values) > 0 { + // "X = 1". With no type but a value. If the constant is untyped, + // skip this vspec and reset the remembered type. + typ = "" + + // If this is a simple type conversion, remember the type. + // We don't mind if this is actually a call; a qualified call won't + // be matched (that will be SelectorExpr, not Ident), and only unusual + // situations will result in a function call that appears to be + // a type conversion. + ce, ok := vspec.Values[0].(*ast.CallExpr) + if !ok { + continue + } + id, ok := ce.Fun.(*ast.Ident) + if !ok { + continue + } + typ = id.Name + } + if vspec.Type != nil { + // "X T". We have a type. Remember it. + ident, ok := vspec.Type.(*ast.Ident) + if !ok { + continue + } + typ = ident.Name + } + if typ != f.typeName { + // This is not the type we're looking for. + continue + } + // We now have a list of names (from one line of source code) all being + // declared with the desired type. + // Grab their names and actual values and store them in f.values. + for _, name := range vspec.Names { + if name.Name == "_" { + continue + } + // This dance lets the type checker find the values for us. It's a + // bit tricky: look up the object declared by the name, find its + // types.Const, and extract its value. + obj, ok := f.pkg.defs[name] + if !ok { + log.Fatalf("no value for constant %s", name) + } + info := obj.Type().Underlying().(*types.Basic).Info() + if info&types.IsInteger == 0 { + log.Fatalf("can't handle non-integer constant type %s", typ) + } + value := obj.(*types.Const).Val() // Guaranteed to succeed as this is CONST. + if value.Kind() != constant.Int { + log.Fatalf("can't happen: constant is not an integer %s", name) + } + i64, isInt := constant.Int64Val(value) + u64, isUint := constant.Uint64Val(value) + if !isInt && !isUint { + log.Fatalf("internal error: value of %s is not an integer: %s", name, value.String()) + } + if !isInt { + u64 = uint64(i64) + } + v := Value{ + originalName: name.Name, + value: u64, + signed: info&types.IsUnsigned == 0, + str: value.String(), + } + if c := vspec.Comment; f.lineComment && c != nil && len(c.List) == 1 { + v.name = strings.TrimSpace(c.Text()) + } else { + v.name = strings.TrimPrefix(v.originalName, f.trimPrefix) + } + f.values = append(f.values, v) + } + } + return false +} + +// Helpers + +// usize returns the number of bits of the smallest unsigned integer +// type that will hold n. Used to create the smallest possible slice of +// integers to use as indexes into the concatenated strings. +func usize(n int) int { + switch { + case n < 1<<8: + return 8 + case n < 1<<16: + return 16 + default: + // 2^32 is enough constants for anyone. + return 32 + } +} + +// declareIndexAndNameVars declares the index slices and concatenated names +// strings representing the runs of values. +func (g *Generator) declareIndexAndNameVars(runs [][]Value, typeName string) { + var indexes, names []string + for i, run := range runs { + index, name := g.createIndexAndNameDecl(run, typeName, fmt.Sprintf("_%d", i)) + if len(run) != 1 { + indexes = append(indexes, index) + } + names = append(names, name) + } + g.Printf("const (\n") + for _, name := range names { + g.Printf("\t%s\n", name) + } + g.Printf(")\n\n") + + if len(indexes) > 0 { + g.Printf("var (") + for _, index := range indexes { + g.Printf("\t%s\n", index) + } + g.Printf(")\n\n") + } +} + +// declareIndexAndNameVar is the single-run version of declareIndexAndNameVars +func (g *Generator) declareIndexAndNameVar(run []Value, typeName string) { + index, name := g.createIndexAndNameDecl(run, typeName, "") + g.Printf("const %s\n", name) + g.Printf("var %s\n", index) +} + +// createIndexAndNameDecl returns the pair of declarations for the run. The caller will add "const" and "var". +func (g *Generator) createIndexAndNameDecl(run []Value, typeName string, suffix string) (string, string) { + b := new(bytes.Buffer) + indexes := make([]int, len(run)) + for i := range run { + b.WriteString(run[i].name) + indexes[i] = b.Len() + } + nameConst := fmt.Sprintf("_%s_name%s = %q", typeName, suffix, b.String()) + nameLen := b.Len() + b.Reset() + fmt.Fprintf(b, "_%s_index%s = [...]uint%d{0, ", typeName, suffix, usize(nameLen)) + for i, v := range indexes { + if i > 0 { + fmt.Fprintf(b, ", ") + } + fmt.Fprintf(b, "%d", v) + } + fmt.Fprintf(b, "}") + return b.String(), nameConst +} + +// declareNameVars declares the concatenated names string representing all the values in the runs. +func (g *Generator) declareNameVars(runs [][]Value, typeName string, suffix string) { + g.Printf("const _%s_name%s = \"", typeName, suffix) + for _, run := range runs { + for i := range run { + g.Printf("%s", run[i].name) + } + } + g.Printf("\"\n") +} + +// buildOneRun generates the variables and String method for a single run of contiguous values. +func (g *Generator) buildOneRun(runs [][]Value, typeName string) { + values := runs[0] + g.Printf("\n") + g.declareIndexAndNameVar(values, typeName) + // The generated code is simple enough to write as a Printf format. + lessThanZero := "" + if values[0].signed { + lessThanZero = "i < 0 || " + } + if values[0].value == 0 { // Signed or unsigned, 0 is still 0. + g.Printf(stringOneRun, typeName, usize(len(values)), lessThanZero) + } else { + g.Printf(stringOneRunWithOffset, typeName, values[0].String(), usize(len(values)), lessThanZero) + } +} + +// Arguments to format are: +// [1]: type name +// [2]: size of index element (8 for uint8 etc.) +// [3]: less than zero check (for signed types) +const stringOneRun = `func (i %[1]s) String() string { + if %[3]si >= %[1]s(len(_%[1]s_index)-1) { + return "%[1]s(" + strconv.FormatInt(int64(i), 10) + ")" + } + return _%[1]s_name[_%[1]s_index[i]:_%[1]s_index[i+1]] +} +` + +// Arguments to format are: +// [1]: type name +// [2]: lowest defined value for type, as a string +// [3]: size of index element (8 for uint8 etc.) +// [4]: less than zero check (for signed types) +/* + */ +const stringOneRunWithOffset = `func (i %[1]s) String() string { + i -= %[2]s + if %[4]si >= %[1]s(len(_%[1]s_index)-1) { + return "%[1]s(" + strconv.FormatInt(int64(i + %[2]s), 10) + ")" + } + return _%[1]s_name[_%[1]s_index[i] : _%[1]s_index[i+1]] +} +` + +// buildMultipleRuns generates the variables and String method for multiple runs of contiguous values. +// For this pattern, a single Printf format won't do. +func (g *Generator) buildMultipleRuns(runs [][]Value, typeName string) { + g.Printf("\n") + g.declareIndexAndNameVars(runs, typeName) + g.Printf("func (i %s) String() string {\n", typeName) + g.Printf("\tswitch {\n") + for i, values := range runs { + if len(values) == 1 { + g.Printf("\tcase i == %s:\n", &values[0]) + g.Printf("\t\treturn _%s_name_%d\n", typeName, i) + continue + } + if values[0].value == 0 && !values[0].signed { + // For an unsigned lower bound of 0, "0 <= i" would be redundant. + g.Printf("\tcase i <= %s:\n", &values[len(values)-1]) + } else { + g.Printf("\tcase %s <= i && i <= %s:\n", &values[0], &values[len(values)-1]) + } + if values[0].value != 0 { + g.Printf("\t\ti -= %s\n", &values[0]) + } + g.Printf("\t\treturn _%s_name_%d[_%s_index_%d[i]:_%s_index_%d[i+1]]\n", + typeName, i, typeName, i, typeName, i) + } + g.Printf("\tdefault:\n") + g.Printf("\t\treturn \"%s(\" + strconv.FormatInt(int64(i), 10) + \")\"\n", typeName) + g.Printf("\t}\n") + g.Printf("}\n") +} + +// buildMap handles the case where the space is so sparse a map is a reasonable fallback. +// It's a rare situation but has simple code. +func (g *Generator) buildMap(runs [][]Value, typeName string) { + g.Printf("\n") + g.declareNameVars(runs, typeName, "") + g.Printf("\nvar _%s_map = map[%s]string{\n", typeName, typeName) + n := 0 + for _, values := range runs { + for _, value := range values { + g.Printf("\t%s: _%s_name[%d:%d],\n", &value, typeName, n, n+len(value.name)) + n += len(value.name) + } + } + g.Printf("}\n\n") + g.Printf(stringMap, typeName) +} + +// Argument to format is the type name. +const stringMap = `func (i %[1]s) String() string { + if str, ok := _%[1]s_map[i]; ok { + return str + } + return "%[1]s(" + strconv.FormatInt(int64(i), 10) + ")" +} +` diff --git a/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go b/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go new file mode 100644 index 000000000..f4d73b233 --- /dev/null +++ b/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go @@ -0,0 +1,49 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package packagesdriver fetches type sizes for go/packages and go/analysis. +package packagesdriver + +import ( + "context" + "fmt" + "go/types" + "strings" + + "golang.org/x/tools/internal/gocommand" +) + +var debug = false + +func GetSizesGolist(ctx context.Context, inv gocommand.Invocation, gocmdRunner *gocommand.Runner) (types.Sizes, error) { + inv.Verb = "list" + inv.Args = []string{"-f", "{{context.GOARCH}} {{context.Compiler}}", "--", "unsafe"} + stdout, stderr, friendlyErr, rawErr := gocmdRunner.RunRaw(ctx, inv) + var goarch, compiler string + if rawErr != nil { + if strings.Contains(rawErr.Error(), "cannot find main module") { + // User's running outside of a module. All bets are off. Get GOARCH and guess compiler is gc. + // TODO(matloob): Is this a problem in practice? + inv.Verb = "env" + inv.Args = []string{"GOARCH"} + envout, enverr := gocmdRunner.Run(ctx, inv) + if enverr != nil { + return nil, enverr + } + goarch = strings.TrimSpace(envout.String()) + compiler = "gc" + } else { + return nil, friendlyErr + } + } else { + fields := strings.Fields(stdout.String()) + if len(fields) < 2 { + return nil, fmt.Errorf("could not parse GOARCH and Go compiler in format \" \":\nstdout: <<%s>>\nstderr: <<%s>>", + stdout.String(), stderr.String()) + } + goarch = fields[0] + compiler = fields[1] + } + return types.SizesFor(compiler, goarch), nil +} diff --git a/vendor/golang.org/x/tools/go/packages/doc.go b/vendor/golang.org/x/tools/go/packages/doc.go new file mode 100644 index 000000000..4bfe28a51 --- /dev/null +++ b/vendor/golang.org/x/tools/go/packages/doc.go @@ -0,0 +1,221 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +/* +Package packages loads Go packages for inspection and analysis. + +The Load function takes as input a list of patterns and return a list of Package +structs describing individual packages matched by those patterns. +The LoadMode controls the amount of detail in the loaded packages. + +Load passes most patterns directly to the underlying build tool, +but all patterns with the prefix "query=", where query is a +non-empty string of letters from [a-z], are reserved and may be +interpreted as query operators. + +Two query operators are currently supported: "file" and "pattern". + +The query "file=path/to/file.go" matches the package or packages enclosing +the Go source file path/to/file.go. For example "file=~/go/src/fmt/print.go" +might return the packages "fmt" and "fmt [fmt.test]". + +The query "pattern=string" causes "string" to be passed directly to +the underlying build tool. In most cases this is unnecessary, +but an application can use Load("pattern=" + x) as an escaping mechanism +to ensure that x is not interpreted as a query operator if it contains '='. + +All other query operators are reserved for future use and currently +cause Load to report an error. + +The Package struct provides basic information about the package, including + + - ID, a unique identifier for the package in the returned set; + - GoFiles, the names of the package's Go source files; + - Imports, a map from source import strings to the Packages they name; + - Types, the type information for the package's exported symbols; + - Syntax, the parsed syntax trees for the package's source code; and + - TypeInfo, the result of a complete type-check of the package syntax trees. + +(See the documentation for type Package for the complete list of fields +and more detailed descriptions.) + +For example, + + Load(nil, "bytes", "unicode...") + +returns four Package structs describing the standard library packages +bytes, unicode, unicode/utf16, and unicode/utf8. Note that one pattern +can match multiple packages and that a package might be matched by +multiple patterns: in general it is not possible to determine which +packages correspond to which patterns. + +Note that the list returned by Load contains only the packages matched +by the patterns. Their dependencies can be found by walking the import +graph using the Imports fields. + +The Load function can be configured by passing a pointer to a Config as +the first argument. A nil Config is equivalent to the zero Config, which +causes Load to run in LoadFiles mode, collecting minimal information. +See the documentation for type Config for details. + +As noted earlier, the Config.Mode controls the amount of detail +reported about the loaded packages. See the documentation for type LoadMode +for details. + +Most tools should pass their command-line arguments (after any flags) +uninterpreted to the loader, so that the loader can interpret them +according to the conventions of the underlying build system. +See the Example function for typical usage. + +*/ +package packages // import "golang.org/x/tools/go/packages" + +/* + +Motivation and design considerations + +The new package's design solves problems addressed by two existing +packages: go/build, which locates and describes packages, and +golang.org/x/tools/go/loader, which loads, parses and type-checks them. +The go/build.Package structure encodes too much of the 'go build' way +of organizing projects, leaving us in need of a data type that describes a +package of Go source code independent of the underlying build system. +We wanted something that works equally well with go build and vgo, and +also other build systems such as Bazel and Blaze, making it possible to +construct analysis tools that work in all these environments. +Tools such as errcheck and staticcheck were essentially unavailable to +the Go community at Google, and some of Google's internal tools for Go +are unavailable externally. +This new package provides a uniform way to obtain package metadata by +querying each of these build systems, optionally supporting their +preferred command-line notations for packages, so that tools integrate +neatly with users' build environments. The Metadata query function +executes an external query tool appropriate to the current workspace. + +Loading packages always returns the complete import graph "all the way down", +even if all you want is information about a single package, because the query +mechanisms of all the build systems we currently support ({go,vgo} list, and +blaze/bazel aspect-based query) cannot provide detailed information +about one package without visiting all its dependencies too, so there is +no additional asymptotic cost to providing transitive information. +(This property might not be true of a hypothetical 5th build system.) + +In calls to TypeCheck, all initial packages, and any package that +transitively depends on one of them, must be loaded from source. +Consider A->B->C->D->E: if A,C are initial, A,B,C must be loaded from +source; D may be loaded from export data, and E may not be loaded at all +(though it's possible that D's export data mentions it, so a +types.Package may be created for it and exposed.) + +The old loader had a feature to suppress type-checking of function +bodies on a per-package basis, primarily intended to reduce the work of +obtaining type information for imported packages. Now that imports are +satisfied by export data, the optimization no longer seems necessary. + +Despite some early attempts, the old loader did not exploit export data, +instead always using the equivalent of WholeProgram mode. This was due +to the complexity of mixing source and export data packages (now +resolved by the upward traversal mentioned above), and because export data +files were nearly always missing or stale. Now that 'go build' supports +caching, all the underlying build systems can guarantee to produce +export data in a reasonable (amortized) time. + +Test "main" packages synthesized by the build system are now reported as +first-class packages, avoiding the need for clients (such as go/ssa) to +reinvent this generation logic. + +One way in which go/packages is simpler than the old loader is in its +treatment of in-package tests. In-package tests are packages that +consist of all the files of the library under test, plus the test files. +The old loader constructed in-package tests by a two-phase process of +mutation called "augmentation": first it would construct and type check +all the ordinary library packages and type-check the packages that +depend on them; then it would add more (test) files to the package and +type-check again. This two-phase approach had four major problems: +1) in processing the tests, the loader modified the library package, + leaving no way for a client application to see both the test + package and the library package; one would mutate into the other. +2) because test files can declare additional methods on types defined in + the library portion of the package, the dispatch of method calls in + the library portion was affected by the presence of the test files. + This should have been a clue that the packages were logically + different. +3) this model of "augmentation" assumed at most one in-package test + per library package, which is true of projects using 'go build', + but not other build systems. +4) because of the two-phase nature of test processing, all packages that + import the library package had to be processed before augmentation, + forcing a "one-shot" API and preventing the client from calling Load + in several times in sequence as is now possible in WholeProgram mode. + (TypeCheck mode has a similar one-shot restriction for a different reason.) + +Early drafts of this package supported "multi-shot" operation. +Although it allowed clients to make a sequence of calls (or concurrent +calls) to Load, building up the graph of Packages incrementally, +it was of marginal value: it complicated the API +(since it allowed some options to vary across calls but not others), +it complicated the implementation, +it cannot be made to work in Types mode, as explained above, +and it was less efficient than making one combined call (when this is possible). +Among the clients we have inspected, none made multiple calls to load +but could not be easily and satisfactorily modified to make only a single call. +However, applications changes may be required. +For example, the ssadump command loads the user-specified packages +and in addition the runtime package. It is tempting to simply append +"runtime" to the user-provided list, but that does not work if the user +specified an ad-hoc package such as [a.go b.go]. +Instead, ssadump no longer requests the runtime package, +but seeks it among the dependencies of the user-specified packages, +and emits an error if it is not found. + +Overlays: The Overlay field in the Config allows providing alternate contents +for Go source files, by providing a mapping from file path to contents. +go/packages will pull in new imports added in overlay files when go/packages +is run in LoadImports mode or greater. +Overlay support for the go list driver isn't complete yet: if the file doesn't +exist on disk, it will only be recognized in an overlay if it is a non-test file +and the package would be reported even without the overlay. + +Questions & Tasks + +- Add GOARCH/GOOS? + They are not portable concepts, but could be made portable. + Our goal has been to allow users to express themselves using the conventions + of the underlying build system: if the build system honors GOARCH + during a build and during a metadata query, then so should + applications built atop that query mechanism. + Conversely, if the target architecture of the build is determined by + command-line flags, the application can pass the relevant + flags through to the build system using a command such as: + myapp -query_flag="--cpu=amd64" -query_flag="--os=darwin" + However, this approach is low-level, unwieldy, and non-portable. + GOOS and GOARCH seem important enough to warrant a dedicated option. + +- How should we handle partial failures such as a mixture of good and + malformed patterns, existing and non-existent packages, successful and + failed builds, import failures, import cycles, and so on, in a call to + Load? + +- Support bazel, blaze, and go1.10 list, not just go1.11 list. + +- Handle (and test) various partial success cases, e.g. + a mixture of good packages and: + invalid patterns + nonexistent packages + empty packages + packages with malformed package or import declarations + unreadable files + import cycles + other parse errors + type errors + Make sure we record errors at the correct place in the graph. + +- Missing packages among initial arguments are not reported. + Return bogus packages for them, like golist does. + +- "undeclared name" errors (for example) are reported out of source file + order. I suspect this is due to the breadth-first resolution now used + by go/types. Is that a bug? Discuss with gri. + +*/ diff --git a/vendor/golang.org/x/tools/go/packages/external.go b/vendor/golang.org/x/tools/go/packages/external.go new file mode 100644 index 000000000..7db1d1293 --- /dev/null +++ b/vendor/golang.org/x/tools/go/packages/external.go @@ -0,0 +1,101 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// This file enables an external tool to intercept package requests. +// If the tool is present then its results are used in preference to +// the go list command. + +package packages + +import ( + "bytes" + "encoding/json" + "fmt" + "os" + "os/exec" + "strings" +) + +// The Driver Protocol +// +// The driver, given the inputs to a call to Load, returns metadata about the packages specified. +// This allows for different build systems to support go/packages by telling go/packages how the +// packages' source is organized. +// The driver is a binary, either specified by the GOPACKAGESDRIVER environment variable or in +// the path as gopackagesdriver. It's given the inputs to load in its argv. See the package +// documentation in doc.go for the full description of the patterns that need to be supported. +// A driver receives as a JSON-serialized driverRequest struct in standard input and will +// produce a JSON-serialized driverResponse (see definition in packages.go) in its standard output. + +// driverRequest is used to provide the portion of Load's Config that is needed by a driver. +type driverRequest struct { + Mode LoadMode `json:"mode"` + // Env specifies the environment the underlying build system should be run in. + Env []string `json:"env"` + // BuildFlags are flags that should be passed to the underlying build system. + BuildFlags []string `json:"build_flags"` + // Tests specifies whether the patterns should also return test packages. + Tests bool `json:"tests"` + // Overlay maps file paths (relative to the driver's working directory) to the byte contents + // of overlay files. + Overlay map[string][]byte `json:"overlay"` +} + +// findExternalDriver returns the file path of a tool that supplies +// the build system package structure, or "" if not found." +// If GOPACKAGESDRIVER is set in the environment findExternalTool returns its +// value, otherwise it searches for a binary named gopackagesdriver on the PATH. +func findExternalDriver(cfg *Config) driver { + const toolPrefix = "GOPACKAGESDRIVER=" + tool := "" + for _, env := range cfg.Env { + if val := strings.TrimPrefix(env, toolPrefix); val != env { + tool = val + } + } + if tool != "" && tool == "off" { + return nil + } + if tool == "" { + var err error + tool, err = exec.LookPath("gopackagesdriver") + if err != nil { + return nil + } + } + return func(cfg *Config, words ...string) (*driverResponse, error) { + req, err := json.Marshal(driverRequest{ + Mode: cfg.Mode, + Env: cfg.Env, + BuildFlags: cfg.BuildFlags, + Tests: cfg.Tests, + Overlay: cfg.Overlay, + }) + if err != nil { + return nil, fmt.Errorf("failed to encode message to driver tool: %v", err) + } + + buf := new(bytes.Buffer) + stderr := new(bytes.Buffer) + cmd := exec.CommandContext(cfg.Context, tool, words...) + cmd.Dir = cfg.Dir + cmd.Env = cfg.Env + cmd.Stdin = bytes.NewReader(req) + cmd.Stdout = buf + cmd.Stderr = stderr + + if err := cmd.Run(); err != nil { + return nil, fmt.Errorf("%v: %v: %s", tool, err, cmd.Stderr) + } + if len(stderr.Bytes()) != 0 && os.Getenv("GOPACKAGESPRINTDRIVERERRORS") != "" { + fmt.Fprintf(os.Stderr, "%s stderr: <<%s>>\n", cmdDebugStr(cmd), stderr) + } + + var response driverResponse + if err := json.Unmarshal(buf.Bytes(), &response); err != nil { + return nil, err + } + return &response, nil + } +} diff --git a/vendor/golang.org/x/tools/go/packages/golist.go b/vendor/golang.org/x/tools/go/packages/golist.go new file mode 100644 index 000000000..c83ca097a --- /dev/null +++ b/vendor/golang.org/x/tools/go/packages/golist.go @@ -0,0 +1,1096 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packages + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "go/types" + "io/ioutil" + "log" + "os" + "os/exec" + "path" + "path/filepath" + "reflect" + "sort" + "strconv" + "strings" + "sync" + "unicode" + + "golang.org/x/tools/go/internal/packagesdriver" + "golang.org/x/tools/internal/gocommand" + "golang.org/x/xerrors" +) + +// debug controls verbose logging. +var debug, _ = strconv.ParseBool(os.Getenv("GOPACKAGESDEBUG")) + +// A goTooOldError reports that the go command +// found by exec.LookPath is too old to use the new go list behavior. +type goTooOldError struct { + error +} + +// responseDeduper wraps a driverResponse, deduplicating its contents. +type responseDeduper struct { + seenRoots map[string]bool + seenPackages map[string]*Package + dr *driverResponse +} + +func newDeduper() *responseDeduper { + return &responseDeduper{ + dr: &driverResponse{}, + seenRoots: map[string]bool{}, + seenPackages: map[string]*Package{}, + } +} + +// addAll fills in r with a driverResponse. +func (r *responseDeduper) addAll(dr *driverResponse) { + for _, pkg := range dr.Packages { + r.addPackage(pkg) + } + for _, root := range dr.Roots { + r.addRoot(root) + } +} + +func (r *responseDeduper) addPackage(p *Package) { + if r.seenPackages[p.ID] != nil { + return + } + r.seenPackages[p.ID] = p + r.dr.Packages = append(r.dr.Packages, p) +} + +func (r *responseDeduper) addRoot(id string) { + if r.seenRoots[id] { + return + } + r.seenRoots[id] = true + r.dr.Roots = append(r.dr.Roots, id) +} + +type golistState struct { + cfg *Config + ctx context.Context + + envOnce sync.Once + goEnvError error + goEnv map[string]string + + rootsOnce sync.Once + rootDirsError error + rootDirs map[string]string + + goVersionOnce sync.Once + goVersionError error + goVersion int // The X in Go 1.X. + + // vendorDirs caches the (non)existence of vendor directories. + vendorDirs map[string]bool +} + +// getEnv returns Go environment variables. Only specific variables are +// populated -- computing all of them is slow. +func (state *golistState) getEnv() (map[string]string, error) { + state.envOnce.Do(func() { + var b *bytes.Buffer + b, state.goEnvError = state.invokeGo("env", "-json", "GOMOD", "GOPATH") + if state.goEnvError != nil { + return + } + + state.goEnv = make(map[string]string) + decoder := json.NewDecoder(b) + if state.goEnvError = decoder.Decode(&state.goEnv); state.goEnvError != nil { + return + } + }) + return state.goEnv, state.goEnvError +} + +// mustGetEnv is a convenience function that can be used if getEnv has already succeeded. +func (state *golistState) mustGetEnv() map[string]string { + env, err := state.getEnv() + if err != nil { + panic(fmt.Sprintf("mustGetEnv: %v", err)) + } + return env +} + +// goListDriver uses the go list command to interpret the patterns and produce +// the build system package structure. +// See driver for more details. +func goListDriver(cfg *Config, patterns ...string) (*driverResponse, error) { + // Make sure that any asynchronous go commands are killed when we return. + parentCtx := cfg.Context + if parentCtx == nil { + parentCtx = context.Background() + } + ctx, cancel := context.WithCancel(parentCtx) + defer cancel() + + response := newDeduper() + + state := &golistState{ + cfg: cfg, + ctx: ctx, + vendorDirs: map[string]bool{}, + } + + // Fill in response.Sizes asynchronously if necessary. + var sizeserr error + var sizeswg sync.WaitGroup + if cfg.Mode&NeedTypesSizes != 0 || cfg.Mode&NeedTypes != 0 { + sizeswg.Add(1) + go func() { + var sizes types.Sizes + sizes, sizeserr = packagesdriver.GetSizesGolist(ctx, state.cfgInvocation(), cfg.gocmdRunner) + // types.SizesFor always returns nil or a *types.StdSizes. + response.dr.Sizes, _ = sizes.(*types.StdSizes) + sizeswg.Done() + }() + } + + // Determine files requested in contains patterns + var containFiles []string + restPatterns := make([]string, 0, len(patterns)) + // Extract file= and other [querytype]= patterns. Report an error if querytype + // doesn't exist. +extractQueries: + for _, pattern := range patterns { + eqidx := strings.Index(pattern, "=") + if eqidx < 0 { + restPatterns = append(restPatterns, pattern) + } else { + query, value := pattern[:eqidx], pattern[eqidx+len("="):] + switch query { + case "file": + containFiles = append(containFiles, value) + case "pattern": + restPatterns = append(restPatterns, value) + case "": // not a reserved query + restPatterns = append(restPatterns, pattern) + default: + for _, rune := range query { + if rune < 'a' || rune > 'z' { // not a reserved query + restPatterns = append(restPatterns, pattern) + continue extractQueries + } + } + // Reject all other patterns containing "=" + return nil, fmt.Errorf("invalid query type %q in query pattern %q", query, pattern) + } + } + } + + // See if we have any patterns to pass through to go list. Zero initial + // patterns also requires a go list call, since it's the equivalent of + // ".". + if len(restPatterns) > 0 || len(patterns) == 0 { + dr, err := state.createDriverResponse(restPatterns...) + if err != nil { + return nil, err + } + response.addAll(dr) + } + + if len(containFiles) != 0 { + if err := state.runContainsQueries(response, containFiles); err != nil { + return nil, err + } + } + + // Only use go/packages' overlay processing if we're using a Go version + // below 1.16. Otherwise, go list handles it. + if goVersion, err := state.getGoVersion(); err == nil && goVersion < 16 { + modifiedPkgs, needPkgs, err := state.processGolistOverlay(response) + if err != nil { + return nil, err + } + + var containsCandidates []string + if len(containFiles) > 0 { + containsCandidates = append(containsCandidates, modifiedPkgs...) + containsCandidates = append(containsCandidates, needPkgs...) + } + if err := state.addNeededOverlayPackages(response, needPkgs); err != nil { + return nil, err + } + // Check candidate packages for containFiles. + if len(containFiles) > 0 { + for _, id := range containsCandidates { + pkg, ok := response.seenPackages[id] + if !ok { + response.addPackage(&Package{ + ID: id, + Errors: []Error{{ + Kind: ListError, + Msg: fmt.Sprintf("package %s expected but not seen", id), + }}, + }) + continue + } + for _, f := range containFiles { + for _, g := range pkg.GoFiles { + if sameFile(f, g) { + response.addRoot(id) + } + } + } + } + } + // Add root for any package that matches a pattern. This applies only to + // packages that are modified by overlays, since they are not added as + // roots automatically. + for _, pattern := range restPatterns { + match := matchPattern(pattern) + for _, pkgID := range modifiedPkgs { + pkg, ok := response.seenPackages[pkgID] + if !ok { + continue + } + if match(pkg.PkgPath) { + response.addRoot(pkg.ID) + } + } + } + } + + sizeswg.Wait() + if sizeserr != nil { + return nil, sizeserr + } + return response.dr, nil +} + +func (state *golistState) addNeededOverlayPackages(response *responseDeduper, pkgs []string) error { + if len(pkgs) == 0 { + return nil + } + dr, err := state.createDriverResponse(pkgs...) + if err != nil { + return err + } + for _, pkg := range dr.Packages { + response.addPackage(pkg) + } + _, needPkgs, err := state.processGolistOverlay(response) + if err != nil { + return err + } + return state.addNeededOverlayPackages(response, needPkgs) +} + +func (state *golistState) runContainsQueries(response *responseDeduper, queries []string) error { + for _, query := range queries { + // TODO(matloob): Do only one query per directory. + fdir := filepath.Dir(query) + // Pass absolute path of directory to go list so that it knows to treat it as a directory, + // not a package path. + pattern, err := filepath.Abs(fdir) + if err != nil { + return fmt.Errorf("could not determine absolute path of file= query path %q: %v", query, err) + } + dirResponse, err := state.createDriverResponse(pattern) + + // If there was an error loading the package, or the package is returned + // with errors, try to load the file as an ad-hoc package. + // Usually the error will appear in a returned package, but may not if we're + // in module mode and the ad-hoc is located outside a module. + if err != nil || len(dirResponse.Packages) == 1 && len(dirResponse.Packages[0].GoFiles) == 0 && + len(dirResponse.Packages[0].Errors) == 1 { + var queryErr error + if dirResponse, queryErr = state.adhocPackage(pattern, query); queryErr != nil { + return err // return the original error + } + } + isRoot := make(map[string]bool, len(dirResponse.Roots)) + for _, root := range dirResponse.Roots { + isRoot[root] = true + } + for _, pkg := range dirResponse.Packages { + // Add any new packages to the main set + // We don't bother to filter packages that will be dropped by the changes of roots, + // that will happen anyway during graph construction outside this function. + // Over-reporting packages is not a problem. + response.addPackage(pkg) + // if the package was not a root one, it cannot have the file + if !isRoot[pkg.ID] { + continue + } + for _, pkgFile := range pkg.GoFiles { + if filepath.Base(query) == filepath.Base(pkgFile) { + response.addRoot(pkg.ID) + break + } + } + } + } + return nil +} + +// adhocPackage attempts to load or construct an ad-hoc package for a given +// query, if the original call to the driver produced inadequate results. +func (state *golistState) adhocPackage(pattern, query string) (*driverResponse, error) { + response, err := state.createDriverResponse(query) + if err != nil { + return nil, err + } + // If we get nothing back from `go list`, + // try to make this file into its own ad-hoc package. + // TODO(rstambler): Should this check against the original response? + if len(response.Packages) == 0 { + response.Packages = append(response.Packages, &Package{ + ID: "command-line-arguments", + PkgPath: query, + GoFiles: []string{query}, + CompiledGoFiles: []string{query}, + Imports: make(map[string]*Package), + }) + response.Roots = append(response.Roots, "command-line-arguments") + } + // Handle special cases. + if len(response.Packages) == 1 { + // golang/go#33482: If this is a file= query for ad-hoc packages where + // the file only exists on an overlay, and exists outside of a module, + // add the file to the package and remove the errors. + if response.Packages[0].ID == "command-line-arguments" || + filepath.ToSlash(response.Packages[0].PkgPath) == filepath.ToSlash(query) { + if len(response.Packages[0].GoFiles) == 0 { + filename := filepath.Join(pattern, filepath.Base(query)) // avoid recomputing abspath + // TODO(matloob): check if the file is outside of a root dir? + for path := range state.cfg.Overlay { + if path == filename { + response.Packages[0].Errors = nil + response.Packages[0].GoFiles = []string{path} + response.Packages[0].CompiledGoFiles = []string{path} + } + } + } + } + } + return response, nil +} + +// Fields must match go list; +// see $GOROOT/src/cmd/go/internal/load/pkg.go. +type jsonPackage struct { + ImportPath string + Dir string + Name string + Export string + GoFiles []string + CompiledGoFiles []string + IgnoredGoFiles []string + IgnoredOtherFiles []string + CFiles []string + CgoFiles []string + CXXFiles []string + MFiles []string + HFiles []string + FFiles []string + SFiles []string + SwigFiles []string + SwigCXXFiles []string + SysoFiles []string + Imports []string + ImportMap map[string]string + Deps []string + Module *Module + TestGoFiles []string + TestImports []string + XTestGoFiles []string + XTestImports []string + ForTest string // q in a "p [q.test]" package, else "" + DepOnly bool + + Error *jsonPackageError +} + +type jsonPackageError struct { + ImportStack []string + Pos string + Err string +} + +func otherFiles(p *jsonPackage) [][]string { + return [][]string{p.CFiles, p.CXXFiles, p.MFiles, p.HFiles, p.FFiles, p.SFiles, p.SwigFiles, p.SwigCXXFiles, p.SysoFiles} +} + +// createDriverResponse uses the "go list" command to expand the pattern +// words and return a response for the specified packages. +func (state *golistState) createDriverResponse(words ...string) (*driverResponse, error) { + // go list uses the following identifiers in ImportPath and Imports: + // + // "p" -- importable package or main (command) + // "q.test" -- q's test executable + // "p [q.test]" -- variant of p as built for q's test executable + // "q_test [q.test]" -- q's external test package + // + // The packages p that are built differently for a test q.test + // are q itself, plus any helpers used by the external test q_test, + // typically including "testing" and all its dependencies. + + // Run "go list" for complete + // information on the specified packages. + buf, err := state.invokeGo("list", golistargs(state.cfg, words)...) + if err != nil { + return nil, err + } + seen := make(map[string]*jsonPackage) + pkgs := make(map[string]*Package) + additionalErrors := make(map[string][]Error) + // Decode the JSON and convert it to Package form. + var response driverResponse + for dec := json.NewDecoder(buf); dec.More(); { + p := new(jsonPackage) + if err := dec.Decode(p); err != nil { + return nil, fmt.Errorf("JSON decoding failed: %v", err) + } + + if p.ImportPath == "" { + // The documentation for go list says that “[e]rroneous packages will have + // a non-empty ImportPath”. If for some reason it comes back empty, we + // prefer to error out rather than silently discarding data or handing + // back a package without any way to refer to it. + if p.Error != nil { + return nil, Error{ + Pos: p.Error.Pos, + Msg: p.Error.Err, + } + } + return nil, fmt.Errorf("package missing import path: %+v", p) + } + + // Work around https://golang.org/issue/33157: + // go list -e, when given an absolute path, will find the package contained at + // that directory. But when no package exists there, it will return a fake package + // with an error and the ImportPath set to the absolute path provided to go list. + // Try to convert that absolute path to what its package path would be if it's + // contained in a known module or GOPATH entry. This will allow the package to be + // properly "reclaimed" when overlays are processed. + if filepath.IsAbs(p.ImportPath) && p.Error != nil { + pkgPath, ok, err := state.getPkgPath(p.ImportPath) + if err != nil { + return nil, err + } + if ok { + p.ImportPath = pkgPath + } + } + + if old, found := seen[p.ImportPath]; found { + // If one version of the package has an error, and the other doesn't, assume + // that this is a case where go list is reporting a fake dependency variant + // of the imported package: When a package tries to invalidly import another + // package, go list emits a variant of the imported package (with the same + // import path, but with an error on it, and the package will have a + // DepError set on it). An example of when this can happen is for imports of + // main packages: main packages can not be imported, but they may be + // separately matched and listed by another pattern. + // See golang.org/issue/36188 for more details. + + // The plan is that eventually, hopefully in Go 1.15, the error will be + // reported on the importing package rather than the duplicate "fake" + // version of the imported package. Once all supported versions of Go + // have the new behavior this logic can be deleted. + // TODO(matloob): delete the workaround logic once all supported versions of + // Go return the errors on the proper package. + + // There should be exactly one version of a package that doesn't have an + // error. + if old.Error == nil && p.Error == nil { + if !reflect.DeepEqual(p, old) { + return nil, fmt.Errorf("internal error: go list gives conflicting information for package %v", p.ImportPath) + } + continue + } + + // Determine if this package's error needs to be bubbled up. + // This is a hack, and we expect for go list to eventually set the error + // on the package. + if old.Error != nil { + var errkind string + if strings.Contains(old.Error.Err, "not an importable package") { + errkind = "not an importable package" + } else if strings.Contains(old.Error.Err, "use of internal package") && strings.Contains(old.Error.Err, "not allowed") { + errkind = "use of internal package not allowed" + } + if errkind != "" { + if len(old.Error.ImportStack) < 1 { + return nil, fmt.Errorf(`internal error: go list gave a %q error with empty import stack`, errkind) + } + importingPkg := old.Error.ImportStack[len(old.Error.ImportStack)-1] + if importingPkg == old.ImportPath { + // Using an older version of Go which put this package itself on top of import + // stack, instead of the importer. Look for importer in second from top + // position. + if len(old.Error.ImportStack) < 2 { + return nil, fmt.Errorf(`internal error: go list gave a %q error with an import stack without importing package`, errkind) + } + importingPkg = old.Error.ImportStack[len(old.Error.ImportStack)-2] + } + additionalErrors[importingPkg] = append(additionalErrors[importingPkg], Error{ + Pos: old.Error.Pos, + Msg: old.Error.Err, + Kind: ListError, + }) + } + } + + // Make sure that if there's a version of the package without an error, + // that's the one reported to the user. + if old.Error == nil { + continue + } + + // This package will replace the old one at the end of the loop. + } + seen[p.ImportPath] = p + + pkg := &Package{ + Name: p.Name, + ID: p.ImportPath, + GoFiles: absJoin(p.Dir, p.GoFiles, p.CgoFiles), + CompiledGoFiles: absJoin(p.Dir, p.CompiledGoFiles), + OtherFiles: absJoin(p.Dir, otherFiles(p)...), + IgnoredFiles: absJoin(p.Dir, p.IgnoredGoFiles, p.IgnoredOtherFiles), + forTest: p.ForTest, + Module: p.Module, + } + + if (state.cfg.Mode&typecheckCgo) != 0 && len(p.CgoFiles) != 0 { + if len(p.CompiledGoFiles) > len(p.GoFiles) { + // We need the cgo definitions, which are in the first + // CompiledGoFile after the non-cgo ones. This is a hack but there + // isn't currently a better way to find it. We also need the pure + // Go files and unprocessed cgo files, all of which are already + // in pkg.GoFiles. + cgoTypes := p.CompiledGoFiles[len(p.GoFiles)] + pkg.CompiledGoFiles = append([]string{cgoTypes}, pkg.GoFiles...) + } else { + // golang/go#38990: go list silently fails to do cgo processing + pkg.CompiledGoFiles = nil + pkg.Errors = append(pkg.Errors, Error{ + Msg: "go list failed to return CompiledGoFiles; https://golang.org/issue/38990?", + Kind: ListError, + }) + } + } + + // Work around https://golang.org/issue/28749: + // cmd/go puts assembly, C, and C++ files in CompiledGoFiles. + // Filter out any elements of CompiledGoFiles that are also in OtherFiles. + // We have to keep this workaround in place until go1.12 is a distant memory. + if len(pkg.OtherFiles) > 0 { + other := make(map[string]bool, len(pkg.OtherFiles)) + for _, f := range pkg.OtherFiles { + other[f] = true + } + + out := pkg.CompiledGoFiles[:0] + for _, f := range pkg.CompiledGoFiles { + if other[f] { + continue + } + out = append(out, f) + } + pkg.CompiledGoFiles = out + } + + // Extract the PkgPath from the package's ID. + if i := strings.IndexByte(pkg.ID, ' '); i >= 0 { + pkg.PkgPath = pkg.ID[:i] + } else { + pkg.PkgPath = pkg.ID + } + + if pkg.PkgPath == "unsafe" { + pkg.GoFiles = nil // ignore fake unsafe.go file + } + + // Assume go list emits only absolute paths for Dir. + if p.Dir != "" && !filepath.IsAbs(p.Dir) { + log.Fatalf("internal error: go list returned non-absolute Package.Dir: %s", p.Dir) + } + + if p.Export != "" && !filepath.IsAbs(p.Export) { + pkg.ExportFile = filepath.Join(p.Dir, p.Export) + } else { + pkg.ExportFile = p.Export + } + + // imports + // + // Imports contains the IDs of all imported packages. + // ImportsMap records (path, ID) only where they differ. + ids := make(map[string]bool) + for _, id := range p.Imports { + ids[id] = true + } + pkg.Imports = make(map[string]*Package) + for path, id := range p.ImportMap { + pkg.Imports[path] = &Package{ID: id} // non-identity import + delete(ids, id) + } + for id := range ids { + if id == "C" { + continue + } + + pkg.Imports[id] = &Package{ID: id} // identity import + } + if !p.DepOnly { + response.Roots = append(response.Roots, pkg.ID) + } + + // Work around for pre-go.1.11 versions of go list. + // TODO(matloob): they should be handled by the fallback. + // Can we delete this? + if len(pkg.CompiledGoFiles) == 0 { + pkg.CompiledGoFiles = pkg.GoFiles + } + + // Temporary work-around for golang/go#39986. Parse filenames out of + // error messages. This happens if there are unrecoverable syntax + // errors in the source, so we can't match on a specific error message. + if err := p.Error; err != nil && state.shouldAddFilenameFromError(p) { + addFilenameFromPos := func(pos string) bool { + split := strings.Split(pos, ":") + if len(split) < 1 { + return false + } + filename := strings.TrimSpace(split[0]) + if filename == "" { + return false + } + if !filepath.IsAbs(filename) { + filename = filepath.Join(state.cfg.Dir, filename) + } + info, _ := os.Stat(filename) + if info == nil { + return false + } + pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, filename) + pkg.GoFiles = append(pkg.GoFiles, filename) + return true + } + found := addFilenameFromPos(err.Pos) + // In some cases, go list only reports the error position in the + // error text, not the error position. One such case is when the + // file's package name is a keyword (see golang.org/issue/39763). + if !found { + addFilenameFromPos(err.Err) + } + } + + if p.Error != nil { + msg := strings.TrimSpace(p.Error.Err) // Trim to work around golang.org/issue/32363. + // Address golang.org/issue/35964 by appending import stack to error message. + if msg == "import cycle not allowed" && len(p.Error.ImportStack) != 0 { + msg += fmt.Sprintf(": import stack: %v", p.Error.ImportStack) + } + pkg.Errors = append(pkg.Errors, Error{ + Pos: p.Error.Pos, + Msg: msg, + Kind: ListError, + }) + } + + pkgs[pkg.ID] = pkg + } + + for id, errs := range additionalErrors { + if p, ok := pkgs[id]; ok { + p.Errors = append(p.Errors, errs...) + } + } + for _, pkg := range pkgs { + response.Packages = append(response.Packages, pkg) + } + sort.Slice(response.Packages, func(i, j int) bool { return response.Packages[i].ID < response.Packages[j].ID }) + + return &response, nil +} + +func (state *golistState) shouldAddFilenameFromError(p *jsonPackage) bool { + if len(p.GoFiles) > 0 || len(p.CompiledGoFiles) > 0 { + return false + } + + goV, err := state.getGoVersion() + if err != nil { + return false + } + + // On Go 1.14 and earlier, only add filenames from errors if the import stack is empty. + // The import stack behaves differently for these versions than newer Go versions. + if goV < 15 { + return len(p.Error.ImportStack) == 0 + } + + // On Go 1.15 and later, only parse filenames out of error if there's no import stack, + // or the current package is at the top of the import stack. This is not guaranteed + // to work perfectly, but should avoid some cases where files in errors don't belong to this + // package. + return len(p.Error.ImportStack) == 0 || p.Error.ImportStack[len(p.Error.ImportStack)-1] == p.ImportPath +} + +func (state *golistState) getGoVersion() (int, error) { + state.goVersionOnce.Do(func() { + state.goVersion, state.goVersionError = gocommand.GoVersion(state.ctx, state.cfgInvocation(), state.cfg.gocmdRunner) + }) + return state.goVersion, state.goVersionError +} + +// getPkgPath finds the package path of a directory if it's relative to a root +// directory. +func (state *golistState) getPkgPath(dir string) (string, bool, error) { + absDir, err := filepath.Abs(dir) + if err != nil { + return "", false, err + } + roots, err := state.determineRootDirs() + if err != nil { + return "", false, err + } + + for rdir, rpath := range roots { + // Make sure that the directory is in the module, + // to avoid creating a path relative to another module. + if !strings.HasPrefix(absDir, rdir) { + continue + } + // TODO(matloob): This doesn't properly handle symlinks. + r, err := filepath.Rel(rdir, dir) + if err != nil { + continue + } + if rpath != "" { + // We choose only one root even though the directory even it can belong in multiple modules + // or GOPATH entries. This is okay because we only need to work with absolute dirs when a + // file is missing from disk, for instance when gopls calls go/packages in an overlay. + // Once the file is saved, gopls, or the next invocation of the tool will get the correct + // result straight from golist. + // TODO(matloob): Implement module tiebreaking? + return path.Join(rpath, filepath.ToSlash(r)), true, nil + } + return filepath.ToSlash(r), true, nil + } + return "", false, nil +} + +// absJoin absolutizes and flattens the lists of files. +func absJoin(dir string, fileses ...[]string) (res []string) { + for _, files := range fileses { + for _, file := range files { + if !filepath.IsAbs(file) { + file = filepath.Join(dir, file) + } + res = append(res, file) + } + } + return res +} + +func golistargs(cfg *Config, words []string) []string { + const findFlags = NeedImports | NeedTypes | NeedSyntax | NeedTypesInfo + fullargs := []string{ + "-e", "-json", + fmt.Sprintf("-compiled=%t", cfg.Mode&(NeedCompiledGoFiles|NeedSyntax|NeedTypes|NeedTypesInfo|NeedTypesSizes) != 0), + fmt.Sprintf("-test=%t", cfg.Tests), + fmt.Sprintf("-export=%t", usesExportData(cfg)), + fmt.Sprintf("-deps=%t", cfg.Mode&NeedImports != 0), + // go list doesn't let you pass -test and -find together, + // probably because you'd just get the TestMain. + fmt.Sprintf("-find=%t", !cfg.Tests && cfg.Mode&findFlags == 0), + } + fullargs = append(fullargs, cfg.BuildFlags...) + fullargs = append(fullargs, "--") + fullargs = append(fullargs, words...) + return fullargs +} + +// cfgInvocation returns an Invocation that reflects cfg's settings. +func (state *golistState) cfgInvocation() gocommand.Invocation { + cfg := state.cfg + return gocommand.Invocation{ + BuildFlags: cfg.BuildFlags, + ModFile: cfg.modFile, + ModFlag: cfg.modFlag, + CleanEnv: cfg.Env != nil, + Env: cfg.Env, + Logf: cfg.Logf, + WorkingDir: cfg.Dir, + } +} + +// invokeGo returns the stdout of a go command invocation. +func (state *golistState) invokeGo(verb string, args ...string) (*bytes.Buffer, error) { + cfg := state.cfg + + inv := state.cfgInvocation() + + // For Go versions 1.16 and above, `go list` accepts overlays directly via + // the -overlay flag. Set it, if it's available. + // + // The check for "list" is not necessarily required, but we should avoid + // getting the go version if possible. + if verb == "list" { + goVersion, err := state.getGoVersion() + if err != nil { + return nil, err + } + if goVersion >= 16 { + filename, cleanup, err := state.writeOverlays() + if err != nil { + return nil, err + } + defer cleanup() + inv.Overlay = filename + } + } + inv.Verb = verb + inv.Args = args + gocmdRunner := cfg.gocmdRunner + if gocmdRunner == nil { + gocmdRunner = &gocommand.Runner{} + } + stdout, stderr, _, err := gocmdRunner.RunRaw(cfg.Context, inv) + if err != nil { + // Check for 'go' executable not being found. + if ee, ok := err.(*exec.Error); ok && ee.Err == exec.ErrNotFound { + return nil, fmt.Errorf("'go list' driver requires 'go', but %s", exec.ErrNotFound) + } + + exitErr, ok := err.(*exec.ExitError) + if !ok { + // Catastrophic error: + // - context cancellation + return nil, xerrors.Errorf("couldn't run 'go': %w", err) + } + + // Old go version? + if strings.Contains(stderr.String(), "flag provided but not defined") { + return nil, goTooOldError{fmt.Errorf("unsupported version of go: %s: %s", exitErr, stderr)} + } + + // Related to #24854 + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "unexpected directory layout") { + return nil, fmt.Errorf("%s", stderr.String()) + } + + // Is there an error running the C compiler in cgo? This will be reported in the "Error" field + // and should be suppressed by go list -e. + // + // This condition is not perfect yet because the error message can include other error messages than runtime/cgo. + isPkgPathRune := func(r rune) bool { + // From https://golang.org/ref/spec#Import_declarations: + // Implementation restriction: A compiler may restrict ImportPaths to non-empty strings + // using only characters belonging to Unicode's L, M, N, P, and S general categories + // (the Graphic characters without spaces) and may also exclude the + // characters !"#$%&'()*,:;<=>?[\]^`{|} and the Unicode replacement character U+FFFD. + return unicode.IsOneOf([]*unicode.RangeTable{unicode.L, unicode.M, unicode.N, unicode.P, unicode.S}, r) && + !strings.ContainsRune("!\"#$%&'()*,:;<=>?[\\]^`{|}\uFFFD", r) + } + // golang/go#36770: Handle case where cmd/go prints module download messages before the error. + msg := stderr.String() + for strings.HasPrefix(msg, "go: downloading") { + msg = msg[strings.IndexRune(msg, '\n')+1:] + } + if len(stderr.String()) > 0 && strings.HasPrefix(stderr.String(), "# ") { + msg := msg[len("# "):] + if strings.HasPrefix(strings.TrimLeftFunc(msg, isPkgPathRune), "\n") { + return stdout, nil + } + // Treat pkg-config errors as a special case (golang.org/issue/36770). + if strings.HasPrefix(msg, "pkg-config") { + return stdout, nil + } + } + + // This error only appears in stderr. See golang.org/cl/166398 for a fix in go list to show + // the error in the Err section of stdout in case -e option is provided. + // This fix is provided for backwards compatibility. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "named files must be .go files") { + output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Similar to the previous error, but currently lacks a fix in Go. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "named files must all be in one directory") { + output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Backwards compatibility for Go 1.11 because 1.12 and 1.13 put the directory in the ImportPath. + // If the package doesn't exist, put the absolute path of the directory into the error message, + // as Go 1.13 list does. + const noSuchDirectory = "no such directory" + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), noSuchDirectory) { + errstr := stderr.String() + abspath := strings.TrimSpace(errstr[strings.Index(errstr, noSuchDirectory)+len(noSuchDirectory):]) + output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + abspath, strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Workaround for #29280: go list -e has incorrect behavior when an ad-hoc package doesn't exist. + // Note that the error message we look for in this case is different that the one looked for above. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "no such file or directory") { + output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Workaround for #34273. go list -e with GO111MODULE=on has incorrect behavior when listing a + // directory outside any module. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "outside available modules") { + output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + // TODO(matloob): command-line-arguments isn't correct here. + "command-line-arguments", strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Another variation of the previous error + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "outside module root") { + output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + // TODO(matloob): command-line-arguments isn't correct here. + "command-line-arguments", strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Workaround for an instance of golang.org/issue/26755: go list -e will return a non-zero exit + // status if there's a dependency on a package that doesn't exist. But it should return + // a zero exit status and set an error on that package. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "no Go files in") { + // Don't clobber stdout if `go list` actually returned something. + if len(stdout.String()) > 0 { + return stdout, nil + } + // try to extract package name from string + stderrStr := stderr.String() + var importPath string + colon := strings.Index(stderrStr, ":") + if colon > 0 && strings.HasPrefix(stderrStr, "go build ") { + importPath = stderrStr[len("go build "):colon] + } + output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + importPath, strings.Trim(stderrStr, "\n")) + return bytes.NewBufferString(output), nil + } + + // Export mode entails a build. + // If that build fails, errors appear on stderr + // (despite the -e flag) and the Export field is blank. + // Do not fail in that case. + // The same is true if an ad-hoc package given to go list doesn't exist. + // TODO(matloob): Remove these once we can depend on go list to exit with a zero status with -e even when + // packages don't exist or a build fails. + if !usesExportData(cfg) && !containsGoFile(args) { + return nil, fmt.Errorf("go %v: %s: %s", args, exitErr, stderr) + } + } + return stdout, nil +} + +// OverlayJSON is the format overlay files are expected to be in. +// The Replace map maps from overlaid paths to replacement paths: +// the Go command will forward all reads trying to open +// each overlaid path to its replacement path, or consider the overlaid +// path not to exist if the replacement path is empty. +// +// From golang/go#39958. +type OverlayJSON struct { + Replace map[string]string `json:"replace,omitempty"` +} + +// writeOverlays writes out files for go list's -overlay flag, as described +// above. +func (state *golistState) writeOverlays() (filename string, cleanup func(), err error) { + // Do nothing if there are no overlays in the config. + if len(state.cfg.Overlay) == 0 { + return "", func() {}, nil + } + dir, err := ioutil.TempDir("", "gopackages-*") + if err != nil { + return "", nil, err + } + // The caller must clean up this directory, unless this function returns an + // error. + cleanup = func() { + os.RemoveAll(dir) + } + defer func() { + if err != nil { + cleanup() + } + }() + overlays := map[string]string{} + for k, v := range state.cfg.Overlay { + // Create a unique filename for the overlaid files, to avoid + // creating nested directories. + noSeparator := strings.Join(strings.Split(filepath.ToSlash(k), "/"), "") + f, err := ioutil.TempFile(dir, fmt.Sprintf("*-%s", noSeparator)) + if err != nil { + return "", func() {}, err + } + if _, err := f.Write(v); err != nil { + return "", func() {}, err + } + if err := f.Close(); err != nil { + return "", func() {}, err + } + overlays[k] = f.Name() + } + b, err := json.Marshal(OverlayJSON{Replace: overlays}) + if err != nil { + return "", func() {}, err + } + // Write out the overlay file that contains the filepath mappings. + filename = filepath.Join(dir, "overlay.json") + if err := ioutil.WriteFile(filename, b, 0665); err != nil { + return "", func() {}, err + } + return filename, cleanup, nil +} + +func containsGoFile(s []string) bool { + for _, f := range s { + if strings.HasSuffix(f, ".go") { + return true + } + } + return false +} + +func cmdDebugStr(cmd *exec.Cmd) string { + env := make(map[string]string) + for _, kv := range cmd.Env { + split := strings.SplitN(kv, "=", 2) + k, v := split[0], split[1] + env[k] = v + } + + var args []string + for _, arg := range cmd.Args { + quoted := strconv.Quote(arg) + if quoted[1:len(quoted)-1] != arg || strings.Contains(arg, " ") { + args = append(args, quoted) + } else { + args = append(args, arg) + } + } + return fmt.Sprintf("GOROOT=%v GOPATH=%v GO111MODULE=%v GOPROXY=%v PWD=%v %v", env["GOROOT"], env["GOPATH"], env["GO111MODULE"], env["GOPROXY"], env["PWD"], strings.Join(args, " ")) +} diff --git a/vendor/golang.org/x/tools/go/packages/golist_overlay.go b/vendor/golang.org/x/tools/go/packages/golist_overlay.go new file mode 100644 index 000000000..de2c1dc57 --- /dev/null +++ b/vendor/golang.org/x/tools/go/packages/golist_overlay.go @@ -0,0 +1,572 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packages + +import ( + "encoding/json" + "fmt" + "go/parser" + "go/token" + "log" + "os" + "path/filepath" + "regexp" + "sort" + "strconv" + "strings" + + "golang.org/x/tools/internal/gocommand" +) + +// processGolistOverlay provides rudimentary support for adding +// files that don't exist on disk to an overlay. The results can be +// sometimes incorrect. +// TODO(matloob): Handle unsupported cases, including the following: +// - determining the correct package to add given a new import path +func (state *golistState) processGolistOverlay(response *responseDeduper) (modifiedPkgs, needPkgs []string, err error) { + havePkgs := make(map[string]string) // importPath -> non-test package ID + needPkgsSet := make(map[string]bool) + modifiedPkgsSet := make(map[string]bool) + + pkgOfDir := make(map[string][]*Package) + for _, pkg := range response.dr.Packages { + // This is an approximation of import path to id. This can be + // wrong for tests, vendored packages, and a number of other cases. + havePkgs[pkg.PkgPath] = pkg.ID + x := commonDir(pkg.GoFiles) + if x != "" { + pkgOfDir[x] = append(pkgOfDir[x], pkg) + } + } + + // If no new imports are added, it is safe to avoid loading any needPkgs. + // Otherwise, it's hard to tell which package is actually being loaded + // (due to vendoring) and whether any modified package will show up + // in the transitive set of dependencies (because new imports are added, + // potentially modifying the transitive set of dependencies). + var overlayAddsImports bool + + // If both a package and its test package are created by the overlay, we + // need the real package first. Process all non-test files before test + // files, and make the whole process deterministic while we're at it. + var overlayFiles []string + for opath := range state.cfg.Overlay { + overlayFiles = append(overlayFiles, opath) + } + sort.Slice(overlayFiles, func(i, j int) bool { + iTest := strings.HasSuffix(overlayFiles[i], "_test.go") + jTest := strings.HasSuffix(overlayFiles[j], "_test.go") + if iTest != jTest { + return !iTest // non-tests are before tests. + } + return overlayFiles[i] < overlayFiles[j] + }) + for _, opath := range overlayFiles { + contents := state.cfg.Overlay[opath] + base := filepath.Base(opath) + dir := filepath.Dir(opath) + var pkg *Package // if opath belongs to both a package and its test variant, this will be the test variant + var testVariantOf *Package // if opath is a test file, this is the package it is testing + var fileExists bool + isTestFile := strings.HasSuffix(opath, "_test.go") + pkgName, ok := extractPackageName(opath, contents) + if !ok { + // Don't bother adding a file that doesn't even have a parsable package statement + // to the overlay. + continue + } + // If all the overlay files belong to a different package, change the + // package name to that package. + maybeFixPackageName(pkgName, isTestFile, pkgOfDir[dir]) + nextPackage: + for _, p := range response.dr.Packages { + if pkgName != p.Name && p.ID != "command-line-arguments" { + continue + } + for _, f := range p.GoFiles { + if !sameFile(filepath.Dir(f), dir) { + continue + } + // Make sure to capture information on the package's test variant, if needed. + if isTestFile && !hasTestFiles(p) { + // TODO(matloob): Are there packages other than the 'production' variant + // of a package that this can match? This shouldn't match the test main package + // because the file is generated in another directory. + testVariantOf = p + continue nextPackage + } else if !isTestFile && hasTestFiles(p) { + // We're examining a test variant, but the overlaid file is + // a non-test file. Because the overlay implementation + // (currently) only adds a file to one package, skip this + // package, so that we can add the file to the production + // variant of the package. (https://golang.org/issue/36857 + // tracks handling overlays on both the production and test + // variant of a package). + continue nextPackage + } + if pkg != nil && p != pkg && pkg.PkgPath == p.PkgPath { + // We have already seen the production version of the + // for which p is a test variant. + if hasTestFiles(p) { + testVariantOf = pkg + } + } + pkg = p + if filepath.Base(f) == base { + fileExists = true + } + } + } + // The overlay could have included an entirely new package or an + // ad-hoc package. An ad-hoc package is one that we have manually + // constructed from inadequate `go list` results for a file= query. + // It will have the ID command-line-arguments. + if pkg == nil || pkg.ID == "command-line-arguments" { + // Try to find the module or gopath dir the file is contained in. + // Then for modules, add the module opath to the beginning. + pkgPath, ok, err := state.getPkgPath(dir) + if err != nil { + return nil, nil, err + } + if !ok { + break + } + var forTest string // only set for x tests + isXTest := strings.HasSuffix(pkgName, "_test") + if isXTest { + forTest = pkgPath + pkgPath += "_test" + } + id := pkgPath + if isTestFile { + if isXTest { + id = fmt.Sprintf("%s [%s.test]", pkgPath, forTest) + } else { + id = fmt.Sprintf("%s [%s.test]", pkgPath, pkgPath) + } + } + if pkg != nil { + // TODO(rstambler): We should change the package's path and ID + // here. The only issue is that this messes with the roots. + } else { + // Try to reclaim a package with the same ID, if it exists in the response. + for _, p := range response.dr.Packages { + if reclaimPackage(p, id, opath, contents) { + pkg = p + break + } + } + // Otherwise, create a new package. + if pkg == nil { + pkg = &Package{ + PkgPath: pkgPath, + ID: id, + Name: pkgName, + Imports: make(map[string]*Package), + } + response.addPackage(pkg) + havePkgs[pkg.PkgPath] = id + // Add the production package's sources for a test variant. + if isTestFile && !isXTest && testVariantOf != nil { + pkg.GoFiles = append(pkg.GoFiles, testVariantOf.GoFiles...) + pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, testVariantOf.CompiledGoFiles...) + // Add the package under test and its imports to the test variant. + pkg.forTest = testVariantOf.PkgPath + for k, v := range testVariantOf.Imports { + pkg.Imports[k] = &Package{ID: v.ID} + } + } + if isXTest { + pkg.forTest = forTest + } + } + } + } + if !fileExists { + pkg.GoFiles = append(pkg.GoFiles, opath) + // TODO(matloob): Adding the file to CompiledGoFiles can exhibit the wrong behavior + // if the file will be ignored due to its build tags. + pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, opath) + modifiedPkgsSet[pkg.ID] = true + } + imports, err := extractImports(opath, contents) + if err != nil { + // Let the parser or type checker report errors later. + continue + } + for _, imp := range imports { + // TODO(rstambler): If the package is an x test and the import has + // a test variant, make sure to replace it. + if _, found := pkg.Imports[imp]; found { + continue + } + overlayAddsImports = true + id, ok := havePkgs[imp] + if !ok { + var err error + id, err = state.resolveImport(dir, imp) + if err != nil { + return nil, nil, err + } + } + pkg.Imports[imp] = &Package{ID: id} + // Add dependencies to the non-test variant version of this package as well. + if testVariantOf != nil { + testVariantOf.Imports[imp] = &Package{ID: id} + } + } + } + + // toPkgPath guesses the package path given the id. + toPkgPath := func(sourceDir, id string) (string, error) { + if i := strings.IndexByte(id, ' '); i >= 0 { + return state.resolveImport(sourceDir, id[:i]) + } + return state.resolveImport(sourceDir, id) + } + + // Now that new packages have been created, do another pass to determine + // the new set of missing packages. + for _, pkg := range response.dr.Packages { + for _, imp := range pkg.Imports { + if len(pkg.GoFiles) == 0 { + return nil, nil, fmt.Errorf("cannot resolve imports for package %q with no Go files", pkg.PkgPath) + } + pkgPath, err := toPkgPath(filepath.Dir(pkg.GoFiles[0]), imp.ID) + if err != nil { + return nil, nil, err + } + if _, ok := havePkgs[pkgPath]; !ok { + needPkgsSet[pkgPath] = true + } + } + } + + if overlayAddsImports { + needPkgs = make([]string, 0, len(needPkgsSet)) + for pkg := range needPkgsSet { + needPkgs = append(needPkgs, pkg) + } + } + modifiedPkgs = make([]string, 0, len(modifiedPkgsSet)) + for pkg := range modifiedPkgsSet { + modifiedPkgs = append(modifiedPkgs, pkg) + } + return modifiedPkgs, needPkgs, err +} + +// resolveImport finds the ID of a package given its import path. +// In particular, it will find the right vendored copy when in GOPATH mode. +func (state *golistState) resolveImport(sourceDir, importPath string) (string, error) { + env, err := state.getEnv() + if err != nil { + return "", err + } + if env["GOMOD"] != "" { + return importPath, nil + } + + searchDir := sourceDir + for { + vendorDir := filepath.Join(searchDir, "vendor") + exists, ok := state.vendorDirs[vendorDir] + if !ok { + info, err := os.Stat(vendorDir) + exists = err == nil && info.IsDir() + state.vendorDirs[vendorDir] = exists + } + + if exists { + vendoredPath := filepath.Join(vendorDir, importPath) + if info, err := os.Stat(vendoredPath); err == nil && info.IsDir() { + // We should probably check for .go files here, but shame on anyone who fools us. + path, ok, err := state.getPkgPath(vendoredPath) + if err != nil { + return "", err + } + if ok { + return path, nil + } + } + } + + // We know we've hit the top of the filesystem when we Dir / and get /, + // or C:\ and get C:\, etc. + next := filepath.Dir(searchDir) + if next == searchDir { + break + } + searchDir = next + } + return importPath, nil +} + +func hasTestFiles(p *Package) bool { + for _, f := range p.GoFiles { + if strings.HasSuffix(f, "_test.go") { + return true + } + } + return false +} + +// determineRootDirs returns a mapping from absolute directories that could +// contain code to their corresponding import path prefixes. +func (state *golistState) determineRootDirs() (map[string]string, error) { + env, err := state.getEnv() + if err != nil { + return nil, err + } + if env["GOMOD"] != "" { + state.rootsOnce.Do(func() { + state.rootDirs, state.rootDirsError = state.determineRootDirsModules() + }) + } else { + state.rootsOnce.Do(func() { + state.rootDirs, state.rootDirsError = state.determineRootDirsGOPATH() + }) + } + return state.rootDirs, state.rootDirsError +} + +func (state *golistState) determineRootDirsModules() (map[string]string, error) { + // List all of the modules--the first will be the directory for the main + // module. Any replaced modules will also need to be treated as roots. + // Editing files in the module cache isn't a great idea, so we don't + // plan to ever support that. + out, err := state.invokeGo("list", "-m", "-json", "all") + if err != nil { + // 'go list all' will fail if we're outside of a module and + // GO111MODULE=on. Try falling back without 'all'. + var innerErr error + out, innerErr = state.invokeGo("list", "-m", "-json") + if innerErr != nil { + return nil, err + } + } + roots := map[string]string{} + modules := map[string]string{} + var i int + for dec := json.NewDecoder(out); dec.More(); { + mod := new(gocommand.ModuleJSON) + if err := dec.Decode(mod); err != nil { + return nil, err + } + if mod.Dir != "" && mod.Path != "" { + // This is a valid module; add it to the map. + absDir, err := filepath.Abs(mod.Dir) + if err != nil { + return nil, err + } + modules[absDir] = mod.Path + // The first result is the main module. + if i == 0 || mod.Replace != nil && mod.Replace.Path != "" { + roots[absDir] = mod.Path + } + } + i++ + } + return roots, nil +} + +func (state *golistState) determineRootDirsGOPATH() (map[string]string, error) { + m := map[string]string{} + for _, dir := range filepath.SplitList(state.mustGetEnv()["GOPATH"]) { + absDir, err := filepath.Abs(dir) + if err != nil { + return nil, err + } + m[filepath.Join(absDir, "src")] = "" + } + return m, nil +} + +func extractImports(filename string, contents []byte) ([]string, error) { + f, err := parser.ParseFile(token.NewFileSet(), filename, contents, parser.ImportsOnly) // TODO(matloob): reuse fileset? + if err != nil { + return nil, err + } + var res []string + for _, imp := range f.Imports { + quotedPath := imp.Path.Value + path, err := strconv.Unquote(quotedPath) + if err != nil { + return nil, err + } + res = append(res, path) + } + return res, nil +} + +// reclaimPackage attempts to reuse a package that failed to load in an overlay. +// +// If the package has errors and has no Name, GoFiles, or Imports, +// then it's possible that it doesn't yet exist on disk. +func reclaimPackage(pkg *Package, id string, filename string, contents []byte) bool { + // TODO(rstambler): Check the message of the actual error? + // It differs between $GOPATH and module mode. + if pkg.ID != id { + return false + } + if len(pkg.Errors) != 1 { + return false + } + if pkg.Name != "" || pkg.ExportFile != "" { + return false + } + if len(pkg.GoFiles) > 0 || len(pkg.CompiledGoFiles) > 0 || len(pkg.OtherFiles) > 0 { + return false + } + if len(pkg.Imports) > 0 { + return false + } + pkgName, ok := extractPackageName(filename, contents) + if !ok { + return false + } + pkg.Name = pkgName + pkg.Errors = nil + return true +} + +func extractPackageName(filename string, contents []byte) (string, bool) { + // TODO(rstambler): Check the message of the actual error? + // It differs between $GOPATH and module mode. + f, err := parser.ParseFile(token.NewFileSet(), filename, contents, parser.PackageClauseOnly) // TODO(matloob): reuse fileset? + if err != nil { + return "", false + } + return f.Name.Name, true +} + +func commonDir(a []string) string { + seen := make(map[string]bool) + x := append([]string{}, a...) + for _, f := range x { + seen[filepath.Dir(f)] = true + } + if len(seen) > 1 { + log.Fatalf("commonDir saw %v for %v", seen, x) + } + for k := range seen { + // len(seen) == 1 + return k + } + return "" // no files +} + +// It is possible that the files in the disk directory dir have a different package +// name from newName, which is deduced from the overlays. If they all have a different +// package name, and they all have the same package name, then that name becomes +// the package name. +// It returns true if it changes the package name, false otherwise. +func maybeFixPackageName(newName string, isTestFile bool, pkgsOfDir []*Package) { + names := make(map[string]int) + for _, p := range pkgsOfDir { + names[p.Name]++ + } + if len(names) != 1 { + // some files are in different packages + return + } + var oldName string + for k := range names { + oldName = k + } + if newName == oldName { + return + } + // We might have a case where all of the package names in the directory are + // the same, but the overlay file is for an x test, which belongs to its + // own package. If the x test does not yet exist on disk, we may not yet + // have its package name on disk, but we should not rename the packages. + // + // We use a heuristic to determine if this file belongs to an x test: + // The test file should have a package name whose package name has a _test + // suffix or looks like "newName_test". + maybeXTest := strings.HasPrefix(oldName+"_test", newName) || strings.HasSuffix(newName, "_test") + if isTestFile && maybeXTest { + return + } + for _, p := range pkgsOfDir { + p.Name = newName + } +} + +// This function is copy-pasted from +// https://github.com/golang/go/blob/9706f510a5e2754595d716bd64be8375997311fb/src/cmd/go/internal/search/search.go#L360. +// It should be deleted when we remove support for overlays from go/packages. +// +// NOTE: This does not handle any ./... or ./ style queries, as this function +// doesn't know the working directory. +// +// matchPattern(pattern)(name) reports whether +// name matches pattern. Pattern is a limited glob +// pattern in which '...' means 'any string' and there +// is no other special syntax. +// Unfortunately, there are two special cases. Quoting "go help packages": +// +// First, /... at the end of the pattern can match an empty string, +// so that net/... matches both net and packages in its subdirectories, like net/http. +// Second, any slash-separated pattern element containing a wildcard never +// participates in a match of the "vendor" element in the path of a vendored +// package, so that ./... does not match packages in subdirectories of +// ./vendor or ./mycode/vendor, but ./vendor/... and ./mycode/vendor/... do. +// Note, however, that a directory named vendor that itself contains code +// is not a vendored package: cmd/vendor would be a command named vendor, +// and the pattern cmd/... matches it. +func matchPattern(pattern string) func(name string) bool { + // Convert pattern to regular expression. + // The strategy for the trailing /... is to nest it in an explicit ? expression. + // The strategy for the vendor exclusion is to change the unmatchable + // vendor strings to a disallowed code point (vendorChar) and to use + // "(anything but that codepoint)*" as the implementation of the ... wildcard. + // This is a bit complicated but the obvious alternative, + // namely a hand-written search like in most shell glob matchers, + // is too easy to make accidentally exponential. + // Using package regexp guarantees linear-time matching. + + const vendorChar = "\x00" + + if strings.Contains(pattern, vendorChar) { + return func(name string) bool { return false } + } + + re := regexp.QuoteMeta(pattern) + re = replaceVendor(re, vendorChar) + switch { + case strings.HasSuffix(re, `/`+vendorChar+`/\.\.\.`): + re = strings.TrimSuffix(re, `/`+vendorChar+`/\.\.\.`) + `(/vendor|/` + vendorChar + `/\.\.\.)` + case re == vendorChar+`/\.\.\.`: + re = `(/vendor|/` + vendorChar + `/\.\.\.)` + case strings.HasSuffix(re, `/\.\.\.`): + re = strings.TrimSuffix(re, `/\.\.\.`) + `(/\.\.\.)?` + } + re = strings.ReplaceAll(re, `\.\.\.`, `[^`+vendorChar+`]*`) + + reg := regexp.MustCompile(`^` + re + `$`) + + return func(name string) bool { + if strings.Contains(name, vendorChar) { + return false + } + return reg.MatchString(replaceVendor(name, vendorChar)) + } +} + +// replaceVendor returns the result of replacing +// non-trailing vendor path elements in x with repl. +func replaceVendor(x, repl string) string { + if !strings.Contains(x, "vendor") { + return x + } + elem := strings.Split(x, "/") + for i := 0; i < len(elem)-1; i++ { + if elem[i] == "vendor" { + elem[i] = repl + } + } + return strings.Join(elem, "/") +} diff --git a/vendor/golang.org/x/tools/go/packages/loadmode_string.go b/vendor/golang.org/x/tools/go/packages/loadmode_string.go new file mode 100644 index 000000000..7ea37e7ee --- /dev/null +++ b/vendor/golang.org/x/tools/go/packages/loadmode_string.go @@ -0,0 +1,57 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packages + +import ( + "fmt" + "strings" +) + +var allModes = []LoadMode{ + NeedName, + NeedFiles, + NeedCompiledGoFiles, + NeedImports, + NeedDeps, + NeedExportsFile, + NeedTypes, + NeedSyntax, + NeedTypesInfo, + NeedTypesSizes, +} + +var modeStrings = []string{ + "NeedName", + "NeedFiles", + "NeedCompiledGoFiles", + "NeedImports", + "NeedDeps", + "NeedExportsFile", + "NeedTypes", + "NeedSyntax", + "NeedTypesInfo", + "NeedTypesSizes", +} + +func (mod LoadMode) String() string { + m := mod + if m == 0 { + return "LoadMode(0)" + } + var out []string + for i, x := range allModes { + if x > m { + break + } + if (m & x) != 0 { + out = append(out, modeStrings[i]) + m = m ^ x + } + } + if m != 0 { + out = append(out, "Unknown") + } + return fmt.Sprintf("LoadMode(%s)", strings.Join(out, "|")) +} diff --git a/vendor/golang.org/x/tools/go/packages/packages.go b/vendor/golang.org/x/tools/go/packages/packages.go new file mode 100644 index 000000000..38475e871 --- /dev/null +++ b/vendor/golang.org/x/tools/go/packages/packages.go @@ -0,0 +1,1233 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packages + +// See doc.go for package documentation and implementation notes. + +import ( + "context" + "encoding/json" + "fmt" + "go/ast" + "go/parser" + "go/scanner" + "go/token" + "go/types" + "io/ioutil" + "log" + "os" + "path/filepath" + "strings" + "sync" + "time" + + "golang.org/x/tools/go/gcexportdata" + "golang.org/x/tools/internal/gocommand" + "golang.org/x/tools/internal/packagesinternal" + "golang.org/x/tools/internal/typesinternal" +) + +// A LoadMode controls the amount of detail to return when loading. +// The bits below can be combined to specify which fields should be +// filled in the result packages. +// The zero value is a special case, equivalent to combining +// the NeedName, NeedFiles, and NeedCompiledGoFiles bits. +// ID and Errors (if present) will always be filled. +// Load may return more information than requested. +type LoadMode int + +// TODO(matloob): When a V2 of go/packages is released, rename NeedExportsFile to +// NeedExportFile to make it consistent with the Package field it's adding. + +const ( + // NeedName adds Name and PkgPath. + NeedName LoadMode = 1 << iota + + // NeedFiles adds GoFiles and OtherFiles. + NeedFiles + + // NeedCompiledGoFiles adds CompiledGoFiles. + NeedCompiledGoFiles + + // NeedImports adds Imports. If NeedDeps is not set, the Imports field will contain + // "placeholder" Packages with only the ID set. + NeedImports + + // NeedDeps adds the fields requested by the LoadMode in the packages in Imports. + NeedDeps + + // NeedExportsFile adds ExportFile. + NeedExportsFile + + // NeedTypes adds Types, Fset, and IllTyped. + NeedTypes + + // NeedSyntax adds Syntax. + NeedSyntax + + // NeedTypesInfo adds TypesInfo. + NeedTypesInfo + + // NeedTypesSizes adds TypesSizes. + NeedTypesSizes + + // typecheckCgo enables full support for type checking cgo. Requires Go 1.15+. + // Modifies CompiledGoFiles and Types, and has no effect on its own. + typecheckCgo + + // NeedModule adds Module. + NeedModule +) + +const ( + // Deprecated: LoadFiles exists for historical compatibility + // and should not be used. Please directly specify the needed fields using the Need values. + LoadFiles = NeedName | NeedFiles | NeedCompiledGoFiles + + // Deprecated: LoadImports exists for historical compatibility + // and should not be used. Please directly specify the needed fields using the Need values. + LoadImports = LoadFiles | NeedImports + + // Deprecated: LoadTypes exists for historical compatibility + // and should not be used. Please directly specify the needed fields using the Need values. + LoadTypes = LoadImports | NeedTypes | NeedTypesSizes + + // Deprecated: LoadSyntax exists for historical compatibility + // and should not be used. Please directly specify the needed fields using the Need values. + LoadSyntax = LoadTypes | NeedSyntax | NeedTypesInfo + + // Deprecated: LoadAllSyntax exists for historical compatibility + // and should not be used. Please directly specify the needed fields using the Need values. + LoadAllSyntax = LoadSyntax | NeedDeps +) + +// A Config specifies details about how packages should be loaded. +// The zero value is a valid configuration. +// Calls to Load do not modify this struct. +type Config struct { + // Mode controls the level of information returned for each package. + Mode LoadMode + + // Context specifies the context for the load operation. + // If the context is cancelled, the loader may stop early + // and return an ErrCancelled error. + // If Context is nil, the load cannot be cancelled. + Context context.Context + + // Logf is the logger for the config. + // If the user provides a logger, debug logging is enabled. + // If the GOPACKAGESDEBUG environment variable is set to true, + // but the logger is nil, default to log.Printf. + Logf func(format string, args ...interface{}) + + // Dir is the directory in which to run the build system's query tool + // that provides information about the packages. + // If Dir is empty, the tool is run in the current directory. + Dir string + + // Env is the environment to use when invoking the build system's query tool. + // If Env is nil, the current environment is used. + // As in os/exec's Cmd, only the last value in the slice for + // each environment key is used. To specify the setting of only + // a few variables, append to the current environment, as in: + // + // opt.Env = append(os.Environ(), "GOOS=plan9", "GOARCH=386") + // + Env []string + + // gocmdRunner guards go command calls from concurrency errors. + gocmdRunner *gocommand.Runner + + // BuildFlags is a list of command-line flags to be passed through to + // the build system's query tool. + BuildFlags []string + + // modFile will be used for -modfile in go command invocations. + modFile string + + // modFlag will be used for -modfile in go command invocations. + modFlag string + + // Fset provides source position information for syntax trees and types. + // If Fset is nil, Load will use a new fileset, but preserve Fset's value. + Fset *token.FileSet + + // ParseFile is called to read and parse each file + // when preparing a package's type-checked syntax tree. + // It must be safe to call ParseFile simultaneously from multiple goroutines. + // If ParseFile is nil, the loader will uses parser.ParseFile. + // + // ParseFile should parse the source from src and use filename only for + // recording position information. + // + // An application may supply a custom implementation of ParseFile + // to change the effective file contents or the behavior of the parser, + // or to modify the syntax tree. For example, selectively eliminating + // unwanted function bodies can significantly accelerate type checking. + ParseFile func(fset *token.FileSet, filename string, src []byte) (*ast.File, error) + + // If Tests is set, the loader includes not just the packages + // matching a particular pattern but also any related test packages, + // including test-only variants of the package and the test executable. + // + // For example, when using the go command, loading "fmt" with Tests=true + // returns four packages, with IDs "fmt" (the standard package), + // "fmt [fmt.test]" (the package as compiled for the test), + // "fmt_test" (the test functions from source files in package fmt_test), + // and "fmt.test" (the test binary). + // + // In build systems with explicit names for tests, + // setting Tests may have no effect. + Tests bool + + // Overlay provides a mapping of absolute file paths to file contents. + // If the file with the given path already exists, the parser will use the + // alternative file contents provided by the map. + // + // Overlays provide incomplete support for when a given file doesn't + // already exist on disk. See the package doc above for more details. + Overlay map[string][]byte +} + +// driver is the type for functions that query the build system for the +// packages named by the patterns. +type driver func(cfg *Config, patterns ...string) (*driverResponse, error) + +// driverResponse contains the results for a driver query. +type driverResponse struct { + // NotHandled is returned if the request can't be handled by the current + // driver. If an external driver returns a response with NotHandled, the + // rest of the driverResponse is ignored, and go/packages will fallback + // to the next driver. If go/packages is extended in the future to support + // lists of multiple drivers, go/packages will fall back to the next driver. + NotHandled bool + + // Sizes, if not nil, is the types.Sizes to use when type checking. + Sizes *types.StdSizes + + // Roots is the set of package IDs that make up the root packages. + // We have to encode this separately because when we encode a single package + // we cannot know if it is one of the roots as that requires knowledge of the + // graph it is part of. + Roots []string `json:",omitempty"` + + // Packages is the full set of packages in the graph. + // The packages are not connected into a graph. + // The Imports if populated will be stubs that only have their ID set. + // Imports will be connected and then type and syntax information added in a + // later pass (see refine). + Packages []*Package +} + +// Load loads and returns the Go packages named by the given patterns. +// +// Config specifies loading options; +// nil behaves the same as an empty Config. +// +// Load returns an error if any of the patterns was invalid +// as defined by the underlying build system. +// It may return an empty list of packages without an error, +// for instance for an empty expansion of a valid wildcard. +// Errors associated with a particular package are recorded in the +// corresponding Package's Errors list, and do not cause Load to +// return an error. Clients may need to handle such errors before +// proceeding with further analysis. The PrintErrors function is +// provided for convenient display of all errors. +func Load(cfg *Config, patterns ...string) ([]*Package, error) { + l := newLoader(cfg) + response, err := defaultDriver(&l.Config, patterns...) + if err != nil { + return nil, err + } + l.sizes = response.Sizes + return l.refine(response.Roots, response.Packages...) +} + +// defaultDriver is a driver that implements go/packages' fallback behavior. +// It will try to request to an external driver, if one exists. If there's +// no external driver, or the driver returns a response with NotHandled set, +// defaultDriver will fall back to the go list driver. +func defaultDriver(cfg *Config, patterns ...string) (*driverResponse, error) { + driver := findExternalDriver(cfg) + if driver == nil { + driver = goListDriver + } + response, err := driver(cfg, patterns...) + if err != nil { + return response, err + } else if response.NotHandled { + return goListDriver(cfg, patterns...) + } + return response, nil +} + +// A Package describes a loaded Go package. +type Package struct { + // ID is a unique identifier for a package, + // in a syntax provided by the underlying build system. + // + // Because the syntax varies based on the build system, + // clients should treat IDs as opaque and not attempt to + // interpret them. + ID string + + // Name is the package name as it appears in the package source code. + Name string + + // PkgPath is the package path as used by the go/types package. + PkgPath string + + // Errors contains any errors encountered querying the metadata + // of the package, or while parsing or type-checking its files. + Errors []Error + + // GoFiles lists the absolute file paths of the package's Go source files. + GoFiles []string + + // CompiledGoFiles lists the absolute file paths of the package's source + // files that are suitable for type checking. + // This may differ from GoFiles if files are processed before compilation. + CompiledGoFiles []string + + // OtherFiles lists the absolute file paths of the package's non-Go source files, + // including assembly, C, C++, Fortran, Objective-C, SWIG, and so on. + OtherFiles []string + + // IgnoredFiles lists source files that are not part of the package + // using the current build configuration but that might be part of + // the package using other build configurations. + IgnoredFiles []string + + // ExportFile is the absolute path to a file containing type + // information for the package as provided by the build system. + ExportFile string + + // Imports maps import paths appearing in the package's Go source files + // to corresponding loaded Packages. + Imports map[string]*Package + + // Types provides type information for the package. + // The NeedTypes LoadMode bit sets this field for packages matching the + // patterns; type information for dependencies may be missing or incomplete, + // unless NeedDeps and NeedImports are also set. + Types *types.Package + + // Fset provides position information for Types, TypesInfo, and Syntax. + // It is set only when Types is set. + Fset *token.FileSet + + // IllTyped indicates whether the package or any dependency contains errors. + // It is set only when Types is set. + IllTyped bool + + // Syntax is the package's syntax trees, for the files listed in CompiledGoFiles. + // + // The NeedSyntax LoadMode bit populates this field for packages matching the patterns. + // If NeedDeps and NeedImports are also set, this field will also be populated + // for dependencies. + Syntax []*ast.File + + // TypesInfo provides type information about the package's syntax trees. + // It is set only when Syntax is set. + TypesInfo *types.Info + + // TypesSizes provides the effective size function for types in TypesInfo. + TypesSizes types.Sizes + + // forTest is the package under test, if any. + forTest string + + // module is the module information for the package if it exists. + Module *Module +} + +// Module provides module information for a package. +type Module struct { + Path string // module path + Version string // module version + Replace *Module // replaced by this module + Time *time.Time // time version was created + Main bool // is this the main module? + Indirect bool // is this module only an indirect dependency of main module? + Dir string // directory holding files for this module, if any + GoMod string // path to go.mod file used when loading this module, if any + GoVersion string // go version used in module + Error *ModuleError // error loading module +} + +// ModuleError holds errors loading a module. +type ModuleError struct { + Err string // the error itself +} + +func init() { + packagesinternal.GetForTest = func(p interface{}) string { + return p.(*Package).forTest + } + packagesinternal.GetGoCmdRunner = func(config interface{}) *gocommand.Runner { + return config.(*Config).gocmdRunner + } + packagesinternal.SetGoCmdRunner = func(config interface{}, runner *gocommand.Runner) { + config.(*Config).gocmdRunner = runner + } + packagesinternal.SetModFile = func(config interface{}, value string) { + config.(*Config).modFile = value + } + packagesinternal.SetModFlag = func(config interface{}, value string) { + config.(*Config).modFlag = value + } + packagesinternal.TypecheckCgo = int(typecheckCgo) +} + +// An Error describes a problem with a package's metadata, syntax, or types. +type Error struct { + Pos string // "file:line:col" or "file:line" or "" or "-" + Msg string + Kind ErrorKind +} + +// ErrorKind describes the source of the error, allowing the user to +// differentiate between errors generated by the driver, the parser, or the +// type-checker. +type ErrorKind int + +const ( + UnknownError ErrorKind = iota + ListError + ParseError + TypeError +) + +func (err Error) Error() string { + pos := err.Pos + if pos == "" { + pos = "-" // like token.Position{}.String() + } + return pos + ": " + err.Msg +} + +// flatPackage is the JSON form of Package +// It drops all the type and syntax fields, and transforms the Imports +// +// TODO(adonovan): identify this struct with Package, effectively +// publishing the JSON protocol. +type flatPackage struct { + ID string + Name string `json:",omitempty"` + PkgPath string `json:",omitempty"` + Errors []Error `json:",omitempty"` + GoFiles []string `json:",omitempty"` + CompiledGoFiles []string `json:",omitempty"` + OtherFiles []string `json:",omitempty"` + IgnoredFiles []string `json:",omitempty"` + ExportFile string `json:",omitempty"` + Imports map[string]string `json:",omitempty"` +} + +// MarshalJSON returns the Package in its JSON form. +// For the most part, the structure fields are written out unmodified, and +// the type and syntax fields are skipped. +// The imports are written out as just a map of path to package id. +// The errors are written using a custom type that tries to preserve the +// structure of error types we know about. +// +// This method exists to enable support for additional build systems. It is +// not intended for use by clients of the API and we may change the format. +func (p *Package) MarshalJSON() ([]byte, error) { + flat := &flatPackage{ + ID: p.ID, + Name: p.Name, + PkgPath: p.PkgPath, + Errors: p.Errors, + GoFiles: p.GoFiles, + CompiledGoFiles: p.CompiledGoFiles, + OtherFiles: p.OtherFiles, + IgnoredFiles: p.IgnoredFiles, + ExportFile: p.ExportFile, + } + if len(p.Imports) > 0 { + flat.Imports = make(map[string]string, len(p.Imports)) + for path, ipkg := range p.Imports { + flat.Imports[path] = ipkg.ID + } + } + return json.Marshal(flat) +} + +// UnmarshalJSON reads in a Package from its JSON format. +// See MarshalJSON for details about the format accepted. +func (p *Package) UnmarshalJSON(b []byte) error { + flat := &flatPackage{} + if err := json.Unmarshal(b, &flat); err != nil { + return err + } + *p = Package{ + ID: flat.ID, + Name: flat.Name, + PkgPath: flat.PkgPath, + Errors: flat.Errors, + GoFiles: flat.GoFiles, + CompiledGoFiles: flat.CompiledGoFiles, + OtherFiles: flat.OtherFiles, + ExportFile: flat.ExportFile, + } + if len(flat.Imports) > 0 { + p.Imports = make(map[string]*Package, len(flat.Imports)) + for path, id := range flat.Imports { + p.Imports[path] = &Package{ID: id} + } + } + return nil +} + +func (p *Package) String() string { return p.ID } + +// loaderPackage augments Package with state used during the loading phase +type loaderPackage struct { + *Package + importErrors map[string]error // maps each bad import to its error + loadOnce sync.Once + color uint8 // for cycle detection + needsrc bool // load from source (Mode >= LoadTypes) + needtypes bool // type information is either requested or depended on + initial bool // package was matched by a pattern +} + +// loader holds the working state of a single call to load. +type loader struct { + pkgs map[string]*loaderPackage + Config + sizes types.Sizes + parseCache map[string]*parseValue + parseCacheMu sync.Mutex + exportMu sync.Mutex // enforces mutual exclusion of exportdata operations + + // Config.Mode contains the implied mode (see impliedLoadMode). + // Implied mode contains all the fields we need the data for. + // In requestedMode there are the actually requested fields. + // We'll zero them out before returning packages to the user. + // This makes it easier for us to get the conditions where + // we need certain modes right. + requestedMode LoadMode +} + +type parseValue struct { + f *ast.File + err error + ready chan struct{} +} + +func newLoader(cfg *Config) *loader { + ld := &loader{ + parseCache: map[string]*parseValue{}, + } + if cfg != nil { + ld.Config = *cfg + // If the user has provided a logger, use it. + ld.Config.Logf = cfg.Logf + } + if ld.Config.Logf == nil { + // If the GOPACKAGESDEBUG environment variable is set to true, + // but the user has not provided a logger, default to log.Printf. + if debug { + ld.Config.Logf = log.Printf + } else { + ld.Config.Logf = func(format string, args ...interface{}) {} + } + } + if ld.Config.Mode == 0 { + ld.Config.Mode = NeedName | NeedFiles | NeedCompiledGoFiles // Preserve zero behavior of Mode for backwards compatibility. + } + if ld.Config.Env == nil { + ld.Config.Env = os.Environ() + } + if ld.Config.gocmdRunner == nil { + ld.Config.gocmdRunner = &gocommand.Runner{} + } + if ld.Context == nil { + ld.Context = context.Background() + } + if ld.Dir == "" { + if dir, err := os.Getwd(); err == nil { + ld.Dir = dir + } + } + + // Save the actually requested fields. We'll zero them out before returning packages to the user. + ld.requestedMode = ld.Mode + ld.Mode = impliedLoadMode(ld.Mode) + + if ld.Mode&NeedTypes != 0 || ld.Mode&NeedSyntax != 0 { + if ld.Fset == nil { + ld.Fset = token.NewFileSet() + } + + // ParseFile is required even in LoadTypes mode + // because we load source if export data is missing. + if ld.ParseFile == nil { + ld.ParseFile = func(fset *token.FileSet, filename string, src []byte) (*ast.File, error) { + const mode = parser.AllErrors | parser.ParseComments + return parser.ParseFile(fset, filename, src, mode) + } + } + } + + return ld +} + +// refine connects the supplied packages into a graph and then adds type and +// and syntax information as requested by the LoadMode. +func (ld *loader) refine(roots []string, list ...*Package) ([]*Package, error) { + rootMap := make(map[string]int, len(roots)) + for i, root := range roots { + rootMap[root] = i + } + ld.pkgs = make(map[string]*loaderPackage) + // first pass, fixup and build the map and roots + var initial = make([]*loaderPackage, len(roots)) + for _, pkg := range list { + rootIndex := -1 + if i, found := rootMap[pkg.ID]; found { + rootIndex = i + } + + // Overlays can invalidate export data. + // TODO(matloob): make this check fine-grained based on dependencies on overlaid files + exportDataInvalid := len(ld.Overlay) > 0 || pkg.ExportFile == "" && pkg.PkgPath != "unsafe" + // This package needs type information if the caller requested types and the package is + // either a root, or it's a non-root and the user requested dependencies ... + needtypes := (ld.Mode&NeedTypes|NeedTypesInfo != 0 && (rootIndex >= 0 || ld.Mode&NeedDeps != 0)) + // This package needs source if the call requested source (or types info, which implies source) + // and the package is either a root, or itas a non- root and the user requested dependencies... + needsrc := ((ld.Mode&(NeedSyntax|NeedTypesInfo) != 0 && (rootIndex >= 0 || ld.Mode&NeedDeps != 0)) || + // ... or if we need types and the exportData is invalid. We fall back to (incompletely) + // typechecking packages from source if they fail to compile. + (ld.Mode&NeedTypes|NeedTypesInfo != 0 && exportDataInvalid)) && pkg.PkgPath != "unsafe" + lpkg := &loaderPackage{ + Package: pkg, + needtypes: needtypes, + needsrc: needsrc, + } + ld.pkgs[lpkg.ID] = lpkg + if rootIndex >= 0 { + initial[rootIndex] = lpkg + lpkg.initial = true + } + } + for i, root := range roots { + if initial[i] == nil { + return nil, fmt.Errorf("root package %v is missing", root) + } + } + + // Materialize the import graph. + + const ( + white = 0 // new + grey = 1 // in progress + black = 2 // complete + ) + + // visit traverses the import graph, depth-first, + // and materializes the graph as Packages.Imports. + // + // Valid imports are saved in the Packages.Import map. + // Invalid imports (cycles and missing nodes) are saved in the importErrors map. + // Thus, even in the presence of both kinds of errors, the Import graph remains a DAG. + // + // visit returns whether the package needs src or has a transitive + // dependency on a package that does. These are the only packages + // for which we load source code. + var stack []*loaderPackage + var visit func(lpkg *loaderPackage) bool + var srcPkgs []*loaderPackage + visit = func(lpkg *loaderPackage) bool { + switch lpkg.color { + case black: + return lpkg.needsrc + case grey: + panic("internal error: grey node") + } + lpkg.color = grey + stack = append(stack, lpkg) // push + stubs := lpkg.Imports // the structure form has only stubs with the ID in the Imports + // If NeedImports isn't set, the imports fields will all be zeroed out. + if ld.Mode&NeedImports != 0 { + lpkg.Imports = make(map[string]*Package, len(stubs)) + for importPath, ipkg := range stubs { + var importErr error + imp := ld.pkgs[ipkg.ID] + if imp == nil { + // (includes package "C" when DisableCgo) + importErr = fmt.Errorf("missing package: %q", ipkg.ID) + } else if imp.color == grey { + importErr = fmt.Errorf("import cycle: %s", stack) + } + if importErr != nil { + if lpkg.importErrors == nil { + lpkg.importErrors = make(map[string]error) + } + lpkg.importErrors[importPath] = importErr + continue + } + + if visit(imp) { + lpkg.needsrc = true + } + lpkg.Imports[importPath] = imp.Package + } + } + if lpkg.needsrc { + srcPkgs = append(srcPkgs, lpkg) + } + if ld.Mode&NeedTypesSizes != 0 { + lpkg.TypesSizes = ld.sizes + } + stack = stack[:len(stack)-1] // pop + lpkg.color = black + + return lpkg.needsrc + } + + if ld.Mode&NeedImports == 0 { + // We do this to drop the stub import packages that we are not even going to try to resolve. + for _, lpkg := range initial { + lpkg.Imports = nil + } + } else { + // For each initial package, create its import DAG. + for _, lpkg := range initial { + visit(lpkg) + } + } + if ld.Mode&NeedImports != 0 && ld.Mode&NeedTypes != 0 { + for _, lpkg := range srcPkgs { + // Complete type information is required for the + // immediate dependencies of each source package. + for _, ipkg := range lpkg.Imports { + imp := ld.pkgs[ipkg.ID] + imp.needtypes = true + } + } + } + // Load type data and syntax if needed, starting at + // the initial packages (roots of the import DAG). + if ld.Mode&NeedTypes != 0 || ld.Mode&NeedSyntax != 0 { + var wg sync.WaitGroup + for _, lpkg := range initial { + wg.Add(1) + go func(lpkg *loaderPackage) { + ld.loadRecursive(lpkg) + wg.Done() + }(lpkg) + } + wg.Wait() + } + + result := make([]*Package, len(initial)) + for i, lpkg := range initial { + result[i] = lpkg.Package + } + for i := range ld.pkgs { + // Clear all unrequested fields, + // to catch programs that use more than they request. + if ld.requestedMode&NeedName == 0 { + ld.pkgs[i].Name = "" + ld.pkgs[i].PkgPath = "" + } + if ld.requestedMode&NeedFiles == 0 { + ld.pkgs[i].GoFiles = nil + ld.pkgs[i].OtherFiles = nil + ld.pkgs[i].IgnoredFiles = nil + } + if ld.requestedMode&NeedCompiledGoFiles == 0 { + ld.pkgs[i].CompiledGoFiles = nil + } + if ld.requestedMode&NeedImports == 0 { + ld.pkgs[i].Imports = nil + } + if ld.requestedMode&NeedExportsFile == 0 { + ld.pkgs[i].ExportFile = "" + } + if ld.requestedMode&NeedTypes == 0 { + ld.pkgs[i].Types = nil + ld.pkgs[i].Fset = nil + ld.pkgs[i].IllTyped = false + } + if ld.requestedMode&NeedSyntax == 0 { + ld.pkgs[i].Syntax = nil + } + if ld.requestedMode&NeedTypesInfo == 0 { + ld.pkgs[i].TypesInfo = nil + } + if ld.requestedMode&NeedTypesSizes == 0 { + ld.pkgs[i].TypesSizes = nil + } + if ld.requestedMode&NeedModule == 0 { + ld.pkgs[i].Module = nil + } + } + + return result, nil +} + +// loadRecursive loads the specified package and its dependencies, +// recursively, in parallel, in topological order. +// It is atomic and idempotent. +// Precondition: ld.Mode&NeedTypes. +func (ld *loader) loadRecursive(lpkg *loaderPackage) { + lpkg.loadOnce.Do(func() { + // Load the direct dependencies, in parallel. + var wg sync.WaitGroup + for _, ipkg := range lpkg.Imports { + imp := ld.pkgs[ipkg.ID] + wg.Add(1) + go func(imp *loaderPackage) { + ld.loadRecursive(imp) + wg.Done() + }(imp) + } + wg.Wait() + ld.loadPackage(lpkg) + }) +} + +// loadPackage loads the specified package. +// It must be called only once per Package, +// after immediate dependencies are loaded. +// Precondition: ld.Mode & NeedTypes. +func (ld *loader) loadPackage(lpkg *loaderPackage) { + if lpkg.PkgPath == "unsafe" { + // Fill in the blanks to avoid surprises. + lpkg.Types = types.Unsafe + lpkg.Fset = ld.Fset + lpkg.Syntax = []*ast.File{} + lpkg.TypesInfo = new(types.Info) + lpkg.TypesSizes = ld.sizes + return + } + + // Call NewPackage directly with explicit name. + // This avoids skew between golist and go/types when the files' + // package declarations are inconsistent. + lpkg.Types = types.NewPackage(lpkg.PkgPath, lpkg.Name) + lpkg.Fset = ld.Fset + + // Subtle: we populate all Types fields with an empty Package + // before loading export data so that export data processing + // never has to create a types.Package for an indirect dependency, + // which would then require that such created packages be explicitly + // inserted back into the Import graph as a final step after export data loading. + // The Diamond test exercises this case. + if !lpkg.needtypes && !lpkg.needsrc { + return + } + if !lpkg.needsrc { + ld.loadFromExportData(lpkg) + return // not a source package, don't get syntax trees + } + + appendError := func(err error) { + // Convert various error types into the one true Error. + var errs []Error + switch err := err.(type) { + case Error: + // from driver + errs = append(errs, err) + + case *os.PathError: + // from parser + errs = append(errs, Error{ + Pos: err.Path + ":1", + Msg: err.Err.Error(), + Kind: ParseError, + }) + + case scanner.ErrorList: + // from parser + for _, err := range err { + errs = append(errs, Error{ + Pos: err.Pos.String(), + Msg: err.Msg, + Kind: ParseError, + }) + } + + case types.Error: + // from type checker + errs = append(errs, Error{ + Pos: err.Fset.Position(err.Pos).String(), + Msg: err.Msg, + Kind: TypeError, + }) + + default: + // unexpected impoverished error from parser? + errs = append(errs, Error{ + Pos: "-", + Msg: err.Error(), + Kind: UnknownError, + }) + + // If you see this error message, please file a bug. + log.Printf("internal error: error %q (%T) without position", err, err) + } + + lpkg.Errors = append(lpkg.Errors, errs...) + } + + if ld.Config.Mode&NeedTypes != 0 && len(lpkg.CompiledGoFiles) == 0 && lpkg.ExportFile != "" { + // The config requested loading sources and types, but sources are missing. + // Add an error to the package and fall back to loading from export data. + appendError(Error{"-", fmt.Sprintf("sources missing for package %s", lpkg.ID), ParseError}) + ld.loadFromExportData(lpkg) + return // can't get syntax trees for this package + } + + files, errs := ld.parseFiles(lpkg.CompiledGoFiles) + for _, err := range errs { + appendError(err) + } + + lpkg.Syntax = files + if ld.Config.Mode&NeedTypes == 0 { + return + } + + lpkg.TypesInfo = &types.Info{ + Types: make(map[ast.Expr]types.TypeAndValue), + Defs: make(map[*ast.Ident]types.Object), + Uses: make(map[*ast.Ident]types.Object), + Implicits: make(map[ast.Node]types.Object), + Scopes: make(map[ast.Node]*types.Scope), + Selections: make(map[*ast.SelectorExpr]*types.Selection), + } + lpkg.TypesSizes = ld.sizes + + importer := importerFunc(func(path string) (*types.Package, error) { + if path == "unsafe" { + return types.Unsafe, nil + } + + // The imports map is keyed by import path. + ipkg := lpkg.Imports[path] + if ipkg == nil { + if err := lpkg.importErrors[path]; err != nil { + return nil, err + } + // There was skew between the metadata and the + // import declarations, likely due to an edit + // race, or because the ParseFile feature was + // used to supply alternative file contents. + return nil, fmt.Errorf("no metadata for %s", path) + } + + if ipkg.Types != nil && ipkg.Types.Complete() { + return ipkg.Types, nil + } + log.Fatalf("internal error: package %q without types was imported from %q", path, lpkg) + panic("unreachable") + }) + + // type-check + tc := &types.Config{ + Importer: importer, + + // Type-check bodies of functions only in non-initial packages. + // Example: for import graph A->B->C and initial packages {A,C}, + // we can ignore function bodies in B. + IgnoreFuncBodies: ld.Mode&NeedDeps == 0 && !lpkg.initial, + + Error: appendError, + Sizes: ld.sizes, + } + if (ld.Mode & typecheckCgo) != 0 { + if !typesinternal.SetUsesCgo(tc) { + appendError(Error{ + Msg: "typecheckCgo requires Go 1.15+", + Kind: ListError, + }) + return + } + } + types.NewChecker(tc, ld.Fset, lpkg.Types, lpkg.TypesInfo).Files(lpkg.Syntax) + + lpkg.importErrors = nil // no longer needed + + // If !Cgo, the type-checker uses FakeImportC mode, so + // it doesn't invoke the importer for import "C", + // nor report an error for the import, + // or for any undefined C.f reference. + // We must detect this explicitly and correctly + // mark the package as IllTyped (by reporting an error). + // TODO(adonovan): if these errors are annoying, + // we could just set IllTyped quietly. + if tc.FakeImportC { + outer: + for _, f := range lpkg.Syntax { + for _, imp := range f.Imports { + if imp.Path.Value == `"C"` { + err := types.Error{Fset: ld.Fset, Pos: imp.Pos(), Msg: `import "C" ignored`} + appendError(err) + break outer + } + } + } + } + + // Record accumulated errors. + illTyped := len(lpkg.Errors) > 0 + if !illTyped { + for _, imp := range lpkg.Imports { + if imp.IllTyped { + illTyped = true + break + } + } + } + lpkg.IllTyped = illTyped +} + +// An importFunc is an implementation of the single-method +// types.Importer interface based on a function value. +type importerFunc func(path string) (*types.Package, error) + +func (f importerFunc) Import(path string) (*types.Package, error) { return f(path) } + +// We use a counting semaphore to limit +// the number of parallel I/O calls per process. +var ioLimit = make(chan bool, 20) + +func (ld *loader) parseFile(filename string) (*ast.File, error) { + ld.parseCacheMu.Lock() + v, ok := ld.parseCache[filename] + if ok { + // cache hit + ld.parseCacheMu.Unlock() + <-v.ready + } else { + // cache miss + v = &parseValue{ready: make(chan struct{})} + ld.parseCache[filename] = v + ld.parseCacheMu.Unlock() + + var src []byte + for f, contents := range ld.Config.Overlay { + if sameFile(f, filename) { + src = contents + } + } + var err error + if src == nil { + ioLimit <- true // wait + src, err = ioutil.ReadFile(filename) + <-ioLimit // signal + } + if err != nil { + v.err = err + } else { + v.f, v.err = ld.ParseFile(ld.Fset, filename, src) + } + + close(v.ready) + } + return v.f, v.err +} + +// parseFiles reads and parses the Go source files and returns the ASTs +// of the ones that could be at least partially parsed, along with a +// list of I/O and parse errors encountered. +// +// Because files are scanned in parallel, the token.Pos +// positions of the resulting ast.Files are not ordered. +// +func (ld *loader) parseFiles(filenames []string) ([]*ast.File, []error) { + var wg sync.WaitGroup + n := len(filenames) + parsed := make([]*ast.File, n) + errors := make([]error, n) + for i, file := range filenames { + if ld.Config.Context.Err() != nil { + parsed[i] = nil + errors[i] = ld.Config.Context.Err() + continue + } + wg.Add(1) + go func(i int, filename string) { + parsed[i], errors[i] = ld.parseFile(filename) + wg.Done() + }(i, file) + } + wg.Wait() + + // Eliminate nils, preserving order. + var o int + for _, f := range parsed { + if f != nil { + parsed[o] = f + o++ + } + } + parsed = parsed[:o] + + o = 0 + for _, err := range errors { + if err != nil { + errors[o] = err + o++ + } + } + errors = errors[:o] + + return parsed, errors +} + +// sameFile returns true if x and y have the same basename and denote +// the same file. +// +func sameFile(x, y string) bool { + if x == y { + // It could be the case that y doesn't exist. + // For instance, it may be an overlay file that + // hasn't been written to disk. To handle that case + // let x == y through. (We added the exact absolute path + // string to the CompiledGoFiles list, so the unwritten + // overlay case implies x==y.) + return true + } + if strings.EqualFold(filepath.Base(x), filepath.Base(y)) { // (optimisation) + if xi, err := os.Stat(x); err == nil { + if yi, err := os.Stat(y); err == nil { + return os.SameFile(xi, yi) + } + } + } + return false +} + +// loadFromExportData returns type information for the specified +// package, loading it from an export data file on the first request. +func (ld *loader) loadFromExportData(lpkg *loaderPackage) (*types.Package, error) { + if lpkg.PkgPath == "" { + log.Fatalf("internal error: Package %s has no PkgPath", lpkg) + } + + // Because gcexportdata.Read has the potential to create or + // modify the types.Package for each node in the transitive + // closure of dependencies of lpkg, all exportdata operations + // must be sequential. (Finer-grained locking would require + // changes to the gcexportdata API.) + // + // The exportMu lock guards the Package.Pkg field and the + // types.Package it points to, for each Package in the graph. + // + // Not all accesses to Package.Pkg need to be protected by exportMu: + // graph ordering ensures that direct dependencies of source + // packages are fully loaded before the importer reads their Pkg field. + ld.exportMu.Lock() + defer ld.exportMu.Unlock() + + if tpkg := lpkg.Types; tpkg != nil && tpkg.Complete() { + return tpkg, nil // cache hit + } + + lpkg.IllTyped = true // fail safe + + if lpkg.ExportFile == "" { + // Errors while building export data will have been printed to stderr. + return nil, fmt.Errorf("no export data file") + } + f, err := os.Open(lpkg.ExportFile) + if err != nil { + return nil, err + } + defer f.Close() + + // Read gc export data. + // + // We don't currently support gccgo export data because all + // underlying workspaces use the gc toolchain. (Even build + // systems that support gccgo don't use it for workspace + // queries.) + r, err := gcexportdata.NewReader(f) + if err != nil { + return nil, fmt.Errorf("reading %s: %v", lpkg.ExportFile, err) + } + + // Build the view. + // + // The gcexportdata machinery has no concept of package ID. + // It identifies packages by their PkgPath, which although not + // globally unique is unique within the scope of one invocation + // of the linker, type-checker, or gcexportdata. + // + // So, we must build a PkgPath-keyed view of the global + // (conceptually ID-keyed) cache of packages and pass it to + // gcexportdata. The view must contain every existing + // package that might possibly be mentioned by the + // current package---its transitive closure. + // + // In loadPackage, we unconditionally create a types.Package for + // each dependency so that export data loading does not + // create new ones. + // + // TODO(adonovan): it would be simpler and more efficient + // if the export data machinery invoked a callback to + // get-or-create a package instead of a map. + // + view := make(map[string]*types.Package) // view seen by gcexportdata + seen := make(map[*loaderPackage]bool) // all visited packages + var visit func(pkgs map[string]*Package) + visit = func(pkgs map[string]*Package) { + for _, p := range pkgs { + lpkg := ld.pkgs[p.ID] + if !seen[lpkg] { + seen[lpkg] = true + view[lpkg.PkgPath] = lpkg.Types + visit(lpkg.Imports) + } + } + } + visit(lpkg.Imports) + + viewLen := len(view) + 1 // adding the self package + // Parse the export data. + // (May modify incomplete packages in view but not create new ones.) + tpkg, err := gcexportdata.Read(r, ld.Fset, view, lpkg.PkgPath) + if err != nil { + return nil, fmt.Errorf("reading %s: %v", lpkg.ExportFile, err) + } + if viewLen != len(view) { + log.Fatalf("Unexpected package creation during export data loading") + } + + lpkg.Types = tpkg + lpkg.IllTyped = false + + return tpkg, nil +} + +// impliedLoadMode returns loadMode with its dependencies. +func impliedLoadMode(loadMode LoadMode) LoadMode { + if loadMode&NeedTypesInfo != 0 && loadMode&NeedImports == 0 { + // If NeedTypesInfo, go/packages needs to do typechecking itself so it can + // associate type info with the AST. To do so, we need the export data + // for dependencies, which means we need to ask for the direct dependencies. + // NeedImports is used to ask for the direct dependencies. + loadMode |= NeedImports + } + + if loadMode&NeedDeps != 0 && loadMode&NeedImports == 0 { + // With NeedDeps we need to load at least direct dependencies. + // NeedImports is used to ask for the direct dependencies. + loadMode |= NeedImports + } + + return loadMode +} + +func usesExportData(cfg *Config) bool { + return cfg.Mode&NeedExportsFile != 0 || cfg.Mode&NeedTypes != 0 && cfg.Mode&NeedDeps == 0 +} diff --git a/vendor/golang.org/x/tools/go/packages/visit.go b/vendor/golang.org/x/tools/go/packages/visit.go new file mode 100644 index 000000000..a1dcc40b7 --- /dev/null +++ b/vendor/golang.org/x/tools/go/packages/visit.go @@ -0,0 +1,59 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packages + +import ( + "fmt" + "os" + "sort" +) + +// Visit visits all the packages in the import graph whose roots are +// pkgs, calling the optional pre function the first time each package +// is encountered (preorder), and the optional post function after a +// package's dependencies have been visited (postorder). +// The boolean result of pre(pkg) determines whether +// the imports of package pkg are visited. +func Visit(pkgs []*Package, pre func(*Package) bool, post func(*Package)) { + seen := make(map[*Package]bool) + var visit func(*Package) + visit = func(pkg *Package) { + if !seen[pkg] { + seen[pkg] = true + + if pre == nil || pre(pkg) { + paths := make([]string, 0, len(pkg.Imports)) + for path := range pkg.Imports { + paths = append(paths, path) + } + sort.Strings(paths) // Imports is a map, this makes visit stable + for _, path := range paths { + visit(pkg.Imports[path]) + } + } + + if post != nil { + post(pkg) + } + } + } + for _, pkg := range pkgs { + visit(pkg) + } +} + +// PrintErrors prints to os.Stderr the accumulated errors of all +// packages in the import graph rooted at pkgs, dependencies first. +// PrintErrors returns the number of errors printed. +func PrintErrors(pkgs []*Package) int { + var n int + Visit(pkgs, nil, func(pkg *Package) { + for _, err := range pkg.Errors { + fmt.Fprintln(os.Stderr, err) + n++ + } + }) + return n +} diff --git a/vendor/golang.org/x/tools/internal/event/core/event.go b/vendor/golang.org/x/tools/internal/event/core/event.go new file mode 100644 index 000000000..a6cf0e64a --- /dev/null +++ b/vendor/golang.org/x/tools/internal/event/core/event.go @@ -0,0 +1,85 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package core provides support for event based telemetry. +package core + +import ( + "fmt" + "time" + + "golang.org/x/tools/internal/event/label" +) + +// Event holds the information about an event of note that occurred. +type Event struct { + at time.Time + + // As events are often on the stack, storing the first few labels directly + // in the event can avoid an allocation at all for the very common cases of + // simple events. + // The length needs to be large enough to cope with the majority of events + // but no so large as to cause undue stack pressure. + // A log message with two values will use 3 labels (one for each value and + // one for the message itself). + + static [3]label.Label // inline storage for the first few labels + dynamic []label.Label // dynamically sized storage for remaining labels +} + +// eventLabelMap implements label.Map for a the labels of an Event. +type eventLabelMap struct { + event Event +} + +func (ev Event) At() time.Time { return ev.at } + +func (ev Event) Format(f fmt.State, r rune) { + if !ev.at.IsZero() { + fmt.Fprint(f, ev.at.Format("2006/01/02 15:04:05 ")) + } + for index := 0; ev.Valid(index); index++ { + if l := ev.Label(index); l.Valid() { + fmt.Fprintf(f, "\n\t%v", l) + } + } +} + +func (ev Event) Valid(index int) bool { + return index >= 0 && index < len(ev.static)+len(ev.dynamic) +} + +func (ev Event) Label(index int) label.Label { + if index < len(ev.static) { + return ev.static[index] + } + return ev.dynamic[index-len(ev.static)] +} + +func (ev Event) Find(key label.Key) label.Label { + for _, l := range ev.static { + if l.Key() == key { + return l + } + } + for _, l := range ev.dynamic { + if l.Key() == key { + return l + } + } + return label.Label{} +} + +func MakeEvent(static [3]label.Label, labels []label.Label) Event { + return Event{ + static: static, + dynamic: labels, + } +} + +// CloneEvent event returns a copy of the event with the time adjusted to at. +func CloneEvent(ev Event, at time.Time) Event { + ev.at = at + return ev +} diff --git a/vendor/golang.org/x/tools/internal/event/core/export.go b/vendor/golang.org/x/tools/internal/event/core/export.go new file mode 100644 index 000000000..05f3a9a57 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/event/core/export.go @@ -0,0 +1,70 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package core + +import ( + "context" + "sync/atomic" + "time" + "unsafe" + + "golang.org/x/tools/internal/event/label" +) + +// Exporter is a function that handles events. +// It may return a modified context and event. +type Exporter func(context.Context, Event, label.Map) context.Context + +var ( + exporter unsafe.Pointer +) + +// SetExporter sets the global exporter function that handles all events. +// The exporter is called synchronously from the event call site, so it should +// return quickly so as not to hold up user code. +func SetExporter(e Exporter) { + p := unsafe.Pointer(&e) + if e == nil { + // &e is always valid, and so p is always valid, but for the early abort + // of ProcessEvent to be efficient it needs to make the nil check on the + // pointer without having to dereference it, so we make the nil function + // also a nil pointer + p = nil + } + atomic.StorePointer(&exporter, p) +} + +// deliver is called to deliver an event to the supplied exporter. +// it will fill in the time. +func deliver(ctx context.Context, exporter Exporter, ev Event) context.Context { + // add the current time to the event + ev.at = time.Now() + // hand the event off to the current exporter + return exporter(ctx, ev, ev) +} + +// Export is called to deliver an event to the global exporter if set. +func Export(ctx context.Context, ev Event) context.Context { + // get the global exporter and abort early if there is not one + exporterPtr := (*Exporter)(atomic.LoadPointer(&exporter)) + if exporterPtr == nil { + return ctx + } + return deliver(ctx, *exporterPtr, ev) +} + +// ExportPair is called to deliver a start event to the supplied exporter. +// It also returns a function that will deliver the end event to the same +// exporter. +// It will fill in the time. +func ExportPair(ctx context.Context, begin, end Event) (context.Context, func()) { + // get the global exporter and abort early if there is not one + exporterPtr := (*Exporter)(atomic.LoadPointer(&exporter)) + if exporterPtr == nil { + return ctx, func() {} + } + ctx = deliver(ctx, *exporterPtr, begin) + return ctx, func() { deliver(ctx, *exporterPtr, end) } +} diff --git a/vendor/golang.org/x/tools/internal/event/core/fast.go b/vendor/golang.org/x/tools/internal/event/core/fast.go new file mode 100644 index 000000000..06c1d4615 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/event/core/fast.go @@ -0,0 +1,77 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package core + +import ( + "context" + + "golang.org/x/tools/internal/event/keys" + "golang.org/x/tools/internal/event/label" +) + +// Log1 takes a message and one label delivers a log event to the exporter. +// It is a customized version of Print that is faster and does no allocation. +func Log1(ctx context.Context, message string, t1 label.Label) { + Export(ctx, MakeEvent([3]label.Label{ + keys.Msg.Of(message), + t1, + }, nil)) +} + +// Log2 takes a message and two labels and delivers a log event to the exporter. +// It is a customized version of Print that is faster and does no allocation. +func Log2(ctx context.Context, message string, t1 label.Label, t2 label.Label) { + Export(ctx, MakeEvent([3]label.Label{ + keys.Msg.Of(message), + t1, + t2, + }, nil)) +} + +// Metric1 sends a label event to the exporter with the supplied labels. +func Metric1(ctx context.Context, t1 label.Label) context.Context { + return Export(ctx, MakeEvent([3]label.Label{ + keys.Metric.New(), + t1, + }, nil)) +} + +// Metric2 sends a label event to the exporter with the supplied labels. +func Metric2(ctx context.Context, t1, t2 label.Label) context.Context { + return Export(ctx, MakeEvent([3]label.Label{ + keys.Metric.New(), + t1, + t2, + }, nil)) +} + +// Start1 sends a span start event with the supplied label list to the exporter. +// It also returns a function that will end the span, which should normally be +// deferred. +func Start1(ctx context.Context, name string, t1 label.Label) (context.Context, func()) { + return ExportPair(ctx, + MakeEvent([3]label.Label{ + keys.Start.Of(name), + t1, + }, nil), + MakeEvent([3]label.Label{ + keys.End.New(), + }, nil)) +} + +// Start2 sends a span start event with the supplied label list to the exporter. +// It also returns a function that will end the span, which should normally be +// deferred. +func Start2(ctx context.Context, name string, t1, t2 label.Label) (context.Context, func()) { + return ExportPair(ctx, + MakeEvent([3]label.Label{ + keys.Start.Of(name), + t1, + t2, + }, nil), + MakeEvent([3]label.Label{ + keys.End.New(), + }, nil)) +} diff --git a/vendor/golang.org/x/tools/internal/event/doc.go b/vendor/golang.org/x/tools/internal/event/doc.go new file mode 100644 index 000000000..5dc6e6bab --- /dev/null +++ b/vendor/golang.org/x/tools/internal/event/doc.go @@ -0,0 +1,7 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package event provides a set of packages that cover the main +// concepts of telemetry in an implementation agnostic way. +package event diff --git a/vendor/golang.org/x/tools/internal/event/event.go b/vendor/golang.org/x/tools/internal/event/event.go new file mode 100644 index 000000000..4d55e577d --- /dev/null +++ b/vendor/golang.org/x/tools/internal/event/event.go @@ -0,0 +1,127 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package event + +import ( + "context" + + "golang.org/x/tools/internal/event/core" + "golang.org/x/tools/internal/event/keys" + "golang.org/x/tools/internal/event/label" +) + +// Exporter is a function that handles events. +// It may return a modified context and event. +type Exporter func(context.Context, core.Event, label.Map) context.Context + +// SetExporter sets the global exporter function that handles all events. +// The exporter is called synchronously from the event call site, so it should +// return quickly so as not to hold up user code. +func SetExporter(e Exporter) { + core.SetExporter(core.Exporter(e)) +} + +// Log takes a message and a label list and combines them into a single event +// before delivering them to the exporter. +func Log(ctx context.Context, message string, labels ...label.Label) { + core.Export(ctx, core.MakeEvent([3]label.Label{ + keys.Msg.Of(message), + }, labels)) +} + +// IsLog returns true if the event was built by the Log function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsLog(ev core.Event) bool { + return ev.Label(0).Key() == keys.Msg +} + +// Error takes a message and a label list and combines them into a single event +// before delivering them to the exporter. It captures the error in the +// delivered event. +func Error(ctx context.Context, message string, err error, labels ...label.Label) { + core.Export(ctx, core.MakeEvent([3]label.Label{ + keys.Msg.Of(message), + keys.Err.Of(err), + }, labels)) +} + +// IsError returns true if the event was built by the Error function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsError(ev core.Event) bool { + return ev.Label(0).Key() == keys.Msg && + ev.Label(1).Key() == keys.Err +} + +// Metric sends a label event to the exporter with the supplied labels. +func Metric(ctx context.Context, labels ...label.Label) { + core.Export(ctx, core.MakeEvent([3]label.Label{ + keys.Metric.New(), + }, labels)) +} + +// IsMetric returns true if the event was built by the Metric function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsMetric(ev core.Event) bool { + return ev.Label(0).Key() == keys.Metric +} + +// Label sends a label event to the exporter with the supplied labels. +func Label(ctx context.Context, labels ...label.Label) context.Context { + return core.Export(ctx, core.MakeEvent([3]label.Label{ + keys.Label.New(), + }, labels)) +} + +// IsLabel returns true if the event was built by the Label function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsLabel(ev core.Event) bool { + return ev.Label(0).Key() == keys.Label +} + +// Start sends a span start event with the supplied label list to the exporter. +// It also returns a function that will end the span, which should normally be +// deferred. +func Start(ctx context.Context, name string, labels ...label.Label) (context.Context, func()) { + return core.ExportPair(ctx, + core.MakeEvent([3]label.Label{ + keys.Start.Of(name), + }, labels), + core.MakeEvent([3]label.Label{ + keys.End.New(), + }, nil)) +} + +// IsStart returns true if the event was built by the Start function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsStart(ev core.Event) bool { + return ev.Label(0).Key() == keys.Start +} + +// IsEnd returns true if the event was built by the End function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsEnd(ev core.Event) bool { + return ev.Label(0).Key() == keys.End +} + +// Detach returns a context without an associated span. +// This allows the creation of spans that are not children of the current span. +func Detach(ctx context.Context) context.Context { + return core.Export(ctx, core.MakeEvent([3]label.Label{ + keys.Detach.New(), + }, nil)) +} + +// IsDetach returns true if the event was built by the Detach function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsDetach(ev core.Event) bool { + return ev.Label(0).Key() == keys.Detach +} diff --git a/vendor/golang.org/x/tools/internal/event/keys/keys.go b/vendor/golang.org/x/tools/internal/event/keys/keys.go new file mode 100644 index 000000000..a02206e30 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/event/keys/keys.go @@ -0,0 +1,564 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package keys + +import ( + "fmt" + "io" + "math" + "strconv" + + "golang.org/x/tools/internal/event/label" +) + +// Value represents a key for untyped values. +type Value struct { + name string + description string +} + +// New creates a new Key for untyped values. +func New(name, description string) *Value { + return &Value{name: name, description: description} +} + +func (k *Value) Name() string { return k.name } +func (k *Value) Description() string { return k.description } + +func (k *Value) Format(w io.Writer, buf []byte, l label.Label) { + fmt.Fprint(w, k.From(l)) +} + +// Get can be used to get a label for the key from a label.Map. +func (k *Value) Get(lm label.Map) interface{} { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return nil +} + +// From can be used to get a value from a Label. +func (k *Value) From(t label.Label) interface{} { return t.UnpackValue() } + +// Of creates a new Label with this key and the supplied value. +func (k *Value) Of(value interface{}) label.Label { return label.OfValue(k, value) } + +// Tag represents a key for tagging labels that have no value. +// These are used when the existence of the label is the entire information it +// carries, such as marking events to be of a specific kind, or from a specific +// package. +type Tag struct { + name string + description string +} + +// NewTag creates a new Key for tagging labels. +func NewTag(name, description string) *Tag { + return &Tag{name: name, description: description} +} + +func (k *Tag) Name() string { return k.name } +func (k *Tag) Description() string { return k.description } + +func (k *Tag) Format(w io.Writer, buf []byte, l label.Label) {} + +// New creates a new Label with this key. +func (k *Tag) New() label.Label { return label.OfValue(k, nil) } + +// Int represents a key +type Int struct { + name string + description string +} + +// NewInt creates a new Key for int values. +func NewInt(name, description string) *Int { + return &Int{name: name, description: description} +} + +func (k *Int) Name() string { return k.name } +func (k *Int) Description() string { return k.description } + +func (k *Int) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendInt(buf, int64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Int) Of(v int) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Int) Get(lm label.Map) int { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Int) From(t label.Label) int { return int(t.Unpack64()) } + +// Int8 represents a key +type Int8 struct { + name string + description string +} + +// NewInt8 creates a new Key for int8 values. +func NewInt8(name, description string) *Int8 { + return &Int8{name: name, description: description} +} + +func (k *Int8) Name() string { return k.name } +func (k *Int8) Description() string { return k.description } + +func (k *Int8) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendInt(buf, int64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Int8) Of(v int8) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Int8) Get(lm label.Map) int8 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Int8) From(t label.Label) int8 { return int8(t.Unpack64()) } + +// Int16 represents a key +type Int16 struct { + name string + description string +} + +// NewInt16 creates a new Key for int16 values. +func NewInt16(name, description string) *Int16 { + return &Int16{name: name, description: description} +} + +func (k *Int16) Name() string { return k.name } +func (k *Int16) Description() string { return k.description } + +func (k *Int16) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendInt(buf, int64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Int16) Of(v int16) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Int16) Get(lm label.Map) int16 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Int16) From(t label.Label) int16 { return int16(t.Unpack64()) } + +// Int32 represents a key +type Int32 struct { + name string + description string +} + +// NewInt32 creates a new Key for int32 values. +func NewInt32(name, description string) *Int32 { + return &Int32{name: name, description: description} +} + +func (k *Int32) Name() string { return k.name } +func (k *Int32) Description() string { return k.description } + +func (k *Int32) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendInt(buf, int64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Int32) Of(v int32) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Int32) Get(lm label.Map) int32 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Int32) From(t label.Label) int32 { return int32(t.Unpack64()) } + +// Int64 represents a key +type Int64 struct { + name string + description string +} + +// NewInt64 creates a new Key for int64 values. +func NewInt64(name, description string) *Int64 { + return &Int64{name: name, description: description} +} + +func (k *Int64) Name() string { return k.name } +func (k *Int64) Description() string { return k.description } + +func (k *Int64) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendInt(buf, k.From(l), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Int64) Of(v int64) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Int64) Get(lm label.Map) int64 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Int64) From(t label.Label) int64 { return int64(t.Unpack64()) } + +// UInt represents a key +type UInt struct { + name string + description string +} + +// NewUInt creates a new Key for uint values. +func NewUInt(name, description string) *UInt { + return &UInt{name: name, description: description} +} + +func (k *UInt) Name() string { return k.name } +func (k *UInt) Description() string { return k.description } + +func (k *UInt) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendUint(buf, uint64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *UInt) Of(v uint) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *UInt) Get(lm label.Map) uint { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *UInt) From(t label.Label) uint { return uint(t.Unpack64()) } + +// UInt8 represents a key +type UInt8 struct { + name string + description string +} + +// NewUInt8 creates a new Key for uint8 values. +func NewUInt8(name, description string) *UInt8 { + return &UInt8{name: name, description: description} +} + +func (k *UInt8) Name() string { return k.name } +func (k *UInt8) Description() string { return k.description } + +func (k *UInt8) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendUint(buf, uint64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *UInt8) Of(v uint8) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *UInt8) Get(lm label.Map) uint8 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *UInt8) From(t label.Label) uint8 { return uint8(t.Unpack64()) } + +// UInt16 represents a key +type UInt16 struct { + name string + description string +} + +// NewUInt16 creates a new Key for uint16 values. +func NewUInt16(name, description string) *UInt16 { + return &UInt16{name: name, description: description} +} + +func (k *UInt16) Name() string { return k.name } +func (k *UInt16) Description() string { return k.description } + +func (k *UInt16) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendUint(buf, uint64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *UInt16) Of(v uint16) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *UInt16) Get(lm label.Map) uint16 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *UInt16) From(t label.Label) uint16 { return uint16(t.Unpack64()) } + +// UInt32 represents a key +type UInt32 struct { + name string + description string +} + +// NewUInt32 creates a new Key for uint32 values. +func NewUInt32(name, description string) *UInt32 { + return &UInt32{name: name, description: description} +} + +func (k *UInt32) Name() string { return k.name } +func (k *UInt32) Description() string { return k.description } + +func (k *UInt32) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendUint(buf, uint64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *UInt32) Of(v uint32) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *UInt32) Get(lm label.Map) uint32 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *UInt32) From(t label.Label) uint32 { return uint32(t.Unpack64()) } + +// UInt64 represents a key +type UInt64 struct { + name string + description string +} + +// NewUInt64 creates a new Key for uint64 values. +func NewUInt64(name, description string) *UInt64 { + return &UInt64{name: name, description: description} +} + +func (k *UInt64) Name() string { return k.name } +func (k *UInt64) Description() string { return k.description } + +func (k *UInt64) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendUint(buf, k.From(l), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *UInt64) Of(v uint64) label.Label { return label.Of64(k, v) } + +// Get can be used to get a label for the key from a label.Map. +func (k *UInt64) Get(lm label.Map) uint64 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *UInt64) From(t label.Label) uint64 { return t.Unpack64() } + +// Float32 represents a key +type Float32 struct { + name string + description string +} + +// NewFloat32 creates a new Key for float32 values. +func NewFloat32(name, description string) *Float32 { + return &Float32{name: name, description: description} +} + +func (k *Float32) Name() string { return k.name } +func (k *Float32) Description() string { return k.description } + +func (k *Float32) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendFloat(buf, float64(k.From(l)), 'E', -1, 32)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Float32) Of(v float32) label.Label { + return label.Of64(k, uint64(math.Float32bits(v))) +} + +// Get can be used to get a label for the key from a label.Map. +func (k *Float32) Get(lm label.Map) float32 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Float32) From(t label.Label) float32 { + return math.Float32frombits(uint32(t.Unpack64())) +} + +// Float64 represents a key +type Float64 struct { + name string + description string +} + +// NewFloat64 creates a new Key for int64 values. +func NewFloat64(name, description string) *Float64 { + return &Float64{name: name, description: description} +} + +func (k *Float64) Name() string { return k.name } +func (k *Float64) Description() string { return k.description } + +func (k *Float64) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendFloat(buf, k.From(l), 'E', -1, 64)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Float64) Of(v float64) label.Label { + return label.Of64(k, math.Float64bits(v)) +} + +// Get can be used to get a label for the key from a label.Map. +func (k *Float64) Get(lm label.Map) float64 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Float64) From(t label.Label) float64 { + return math.Float64frombits(t.Unpack64()) +} + +// String represents a key +type String struct { + name string + description string +} + +// NewString creates a new Key for int64 values. +func NewString(name, description string) *String { + return &String{name: name, description: description} +} + +func (k *String) Name() string { return k.name } +func (k *String) Description() string { return k.description } + +func (k *String) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendQuote(buf, k.From(l))) +} + +// Of creates a new Label with this key and the supplied value. +func (k *String) Of(v string) label.Label { return label.OfString(k, v) } + +// Get can be used to get a label for the key from a label.Map. +func (k *String) Get(lm label.Map) string { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return "" +} + +// From can be used to get a value from a Label. +func (k *String) From(t label.Label) string { return t.UnpackString() } + +// Boolean represents a key +type Boolean struct { + name string + description string +} + +// NewBoolean creates a new Key for bool values. +func NewBoolean(name, description string) *Boolean { + return &Boolean{name: name, description: description} +} + +func (k *Boolean) Name() string { return k.name } +func (k *Boolean) Description() string { return k.description } + +func (k *Boolean) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendBool(buf, k.From(l))) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Boolean) Of(v bool) label.Label { + if v { + return label.Of64(k, 1) + } + return label.Of64(k, 0) +} + +// Get can be used to get a label for the key from a label.Map. +func (k *Boolean) Get(lm label.Map) bool { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return false +} + +// From can be used to get a value from a Label. +func (k *Boolean) From(t label.Label) bool { return t.Unpack64() > 0 } + +// Error represents a key +type Error struct { + name string + description string +} + +// NewError creates a new Key for int64 values. +func NewError(name, description string) *Error { + return &Error{name: name, description: description} +} + +func (k *Error) Name() string { return k.name } +func (k *Error) Description() string { return k.description } + +func (k *Error) Format(w io.Writer, buf []byte, l label.Label) { + io.WriteString(w, k.From(l).Error()) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Error) Of(v error) label.Label { return label.OfValue(k, v) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Error) Get(lm label.Map) error { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return nil +} + +// From can be used to get a value from a Label. +func (k *Error) From(t label.Label) error { + err, _ := t.UnpackValue().(error) + return err +} diff --git a/vendor/golang.org/x/tools/internal/event/keys/standard.go b/vendor/golang.org/x/tools/internal/event/keys/standard.go new file mode 100644 index 000000000..7e9586659 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/event/keys/standard.go @@ -0,0 +1,22 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package keys + +var ( + // Msg is a key used to add message strings to label lists. + Msg = NewString("message", "a readable message") + // Label is a key used to indicate an event adds labels to the context. + Label = NewTag("label", "a label context marker") + // Start is used for things like traces that have a name. + Start = NewString("start", "span start") + // Metric is a key used to indicate an event records metrics. + End = NewTag("end", "a span end marker") + // Metric is a key used to indicate an event records metrics. + Detach = NewTag("detach", "a span detach marker") + // Err is a key used to add error values to label lists. + Err = NewError("error", "an error that occurred") + // Metric is a key used to indicate an event records metrics. + Metric = NewTag("metric", "a metric event marker") +) diff --git a/vendor/golang.org/x/tools/internal/event/label/label.go b/vendor/golang.org/x/tools/internal/event/label/label.go new file mode 100644 index 000000000..b55c12eb2 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/event/label/label.go @@ -0,0 +1,213 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package label + +import ( + "fmt" + "io" + "reflect" + "unsafe" +) + +// Key is used as the identity of a Label. +// Keys are intended to be compared by pointer only, the name should be unique +// for communicating with external systems, but it is not required or enforced. +type Key interface { + // Name returns the key name. + Name() string + // Description returns a string that can be used to describe the value. + Description() string + + // Format is used in formatting to append the value of the label to the + // supplied buffer. + // The formatter may use the supplied buf as a scratch area to avoid + // allocations. + Format(w io.Writer, buf []byte, l Label) +} + +// Label holds a key and value pair. +// It is normally used when passing around lists of labels. +type Label struct { + key Key + packed uint64 + untyped interface{} +} + +// Map is the interface to a collection of Labels indexed by key. +type Map interface { + // Find returns the label that matches the supplied key. + Find(key Key) Label +} + +// List is the interface to something that provides an iterable +// list of labels. +// Iteration should start from 0 and continue until Valid returns false. +type List interface { + // Valid returns true if the index is within range for the list. + // It does not imply the label at that index will itself be valid. + Valid(index int) bool + // Label returns the label at the given index. + Label(index int) Label +} + +// list implements LabelList for a list of Labels. +type list struct { + labels []Label +} + +// filter wraps a LabelList filtering out specific labels. +type filter struct { + keys []Key + underlying List +} + +// listMap implements LabelMap for a simple list of labels. +type listMap struct { + labels []Label +} + +// mapChain implements LabelMap for a list of underlying LabelMap. +type mapChain struct { + maps []Map +} + +// OfValue creates a new label from the key and value. +// This method is for implementing new key types, label creation should +// normally be done with the Of method of the key. +func OfValue(k Key, value interface{}) Label { return Label{key: k, untyped: value} } + +// UnpackValue assumes the label was built using LabelOfValue and returns the value +// that was passed to that constructor. +// This method is for implementing new key types, for type safety normal +// access should be done with the From method of the key. +func (t Label) UnpackValue() interface{} { return t.untyped } + +// Of64 creates a new label from a key and a uint64. This is often +// used for non uint64 values that can be packed into a uint64. +// This method is for implementing new key types, label creation should +// normally be done with the Of method of the key. +func Of64(k Key, v uint64) Label { return Label{key: k, packed: v} } + +// Unpack64 assumes the label was built using LabelOf64 and returns the value that +// was passed to that constructor. +// This method is for implementing new key types, for type safety normal +// access should be done with the From method of the key. +func (t Label) Unpack64() uint64 { return t.packed } + +// OfString creates a new label from a key and a string. +// This method is for implementing new key types, label creation should +// normally be done with the Of method of the key. +func OfString(k Key, v string) Label { + hdr := (*reflect.StringHeader)(unsafe.Pointer(&v)) + return Label{ + key: k, + packed: uint64(hdr.Len), + untyped: unsafe.Pointer(hdr.Data), + } +} + +// UnpackString assumes the label was built using LabelOfString and returns the +// value that was passed to that constructor. +// This method is for implementing new key types, for type safety normal +// access should be done with the From method of the key. +func (t Label) UnpackString() string { + var v string + hdr := (*reflect.StringHeader)(unsafe.Pointer(&v)) + hdr.Data = uintptr(t.untyped.(unsafe.Pointer)) + hdr.Len = int(t.packed) + return *(*string)(unsafe.Pointer(hdr)) +} + +// Valid returns true if the Label is a valid one (it has a key). +func (t Label) Valid() bool { return t.key != nil } + +// Key returns the key of this Label. +func (t Label) Key() Key { return t.key } + +// Format is used for debug printing of labels. +func (t Label) Format(f fmt.State, r rune) { + if !t.Valid() { + io.WriteString(f, `nil`) + return + } + io.WriteString(f, t.Key().Name()) + io.WriteString(f, "=") + var buf [128]byte + t.Key().Format(f, buf[:0], t) +} + +func (l *list) Valid(index int) bool { + return index >= 0 && index < len(l.labels) +} + +func (l *list) Label(index int) Label { + return l.labels[index] +} + +func (f *filter) Valid(index int) bool { + return f.underlying.Valid(index) +} + +func (f *filter) Label(index int) Label { + l := f.underlying.Label(index) + for _, f := range f.keys { + if l.Key() == f { + return Label{} + } + } + return l +} + +func (lm listMap) Find(key Key) Label { + for _, l := range lm.labels { + if l.Key() == key { + return l + } + } + return Label{} +} + +func (c mapChain) Find(key Key) Label { + for _, src := range c.maps { + l := src.Find(key) + if l.Valid() { + return l + } + } + return Label{} +} + +var emptyList = &list{} + +func NewList(labels ...Label) List { + if len(labels) == 0 { + return emptyList + } + return &list{labels: labels} +} + +func Filter(l List, keys ...Key) List { + if len(keys) == 0 { + return l + } + return &filter{keys: keys, underlying: l} +} + +func NewMap(labels ...Label) Map { + return listMap{labels: labels} +} + +func MergeMaps(srcs ...Map) Map { + var nonNil []Map + for _, src := range srcs { + if src != nil { + nonNil = append(nonNil, src) + } + } + if len(nonNil) == 1 { + return nonNil[0] + } + return mapChain{maps: nonNil} +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk.go new file mode 100644 index 000000000..9887f7e7a --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk.go @@ -0,0 +1,196 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package fastwalk provides a faster version of filepath.Walk for file system +// scanning tools. +package fastwalk + +import ( + "errors" + "os" + "path/filepath" + "runtime" + "sync" +) + +// ErrTraverseLink is used as a return value from WalkFuncs to indicate that the +// symlink named in the call may be traversed. +var ErrTraverseLink = errors.New("fastwalk: traverse symlink, assuming target is a directory") + +// ErrSkipFiles is a used as a return value from WalkFuncs to indicate that the +// callback should not be called for any other files in the current directory. +// Child directories will still be traversed. +var ErrSkipFiles = errors.New("fastwalk: skip remaining files in directory") + +// Walk is a faster implementation of filepath.Walk. +// +// filepath.Walk's design necessarily calls os.Lstat on each file, +// even if the caller needs less info. +// Many tools need only the type of each file. +// On some platforms, this information is provided directly by the readdir +// system call, avoiding the need to stat each file individually. +// fastwalk_unix.go contains a fork of the syscall routines. +// +// See golang.org/issue/16399 +// +// Walk walks the file tree rooted at root, calling walkFn for +// each file or directory in the tree, including root. +// +// If fastWalk returns filepath.SkipDir, the directory is skipped. +// +// Unlike filepath.Walk: +// * file stat calls must be done by the user. +// The only provided metadata is the file type, which does not include +// any permission bits. +// * multiple goroutines stat the filesystem concurrently. The provided +// walkFn must be safe for concurrent use. +// * fastWalk can follow symlinks if walkFn returns the TraverseLink +// sentinel error. It is the walkFn's responsibility to prevent +// fastWalk from going into symlink cycles. +func Walk(root string, walkFn func(path string, typ os.FileMode) error) error { + // TODO(bradfitz): make numWorkers configurable? We used a + // minimum of 4 to give the kernel more info about multiple + // things we want, in hopes its I/O scheduling can take + // advantage of that. Hopefully most are in cache. Maybe 4 is + // even too low of a minimum. Profile more. + numWorkers := 4 + if n := runtime.NumCPU(); n > numWorkers { + numWorkers = n + } + + // Make sure to wait for all workers to finish, otherwise + // walkFn could still be called after returning. This Wait call + // runs after close(e.donec) below. + var wg sync.WaitGroup + defer wg.Wait() + + w := &walker{ + fn: walkFn, + enqueuec: make(chan walkItem, numWorkers), // buffered for performance + workc: make(chan walkItem, numWorkers), // buffered for performance + donec: make(chan struct{}), + + // buffered for correctness & not leaking goroutines: + resc: make(chan error, numWorkers), + } + defer close(w.donec) + + for i := 0; i < numWorkers; i++ { + wg.Add(1) + go w.doWork(&wg) + } + todo := []walkItem{{dir: root}} + out := 0 + for { + workc := w.workc + var workItem walkItem + if len(todo) == 0 { + workc = nil + } else { + workItem = todo[len(todo)-1] + } + select { + case workc <- workItem: + todo = todo[:len(todo)-1] + out++ + case it := <-w.enqueuec: + todo = append(todo, it) + case err := <-w.resc: + out-- + if err != nil { + return err + } + if out == 0 && len(todo) == 0 { + // It's safe to quit here, as long as the buffered + // enqueue channel isn't also readable, which might + // happen if the worker sends both another unit of + // work and its result before the other select was + // scheduled and both w.resc and w.enqueuec were + // readable. + select { + case it := <-w.enqueuec: + todo = append(todo, it) + default: + return nil + } + } + } + } +} + +// doWork reads directories as instructed (via workc) and runs the +// user's callback function. +func (w *walker) doWork(wg *sync.WaitGroup) { + defer wg.Done() + for { + select { + case <-w.donec: + return + case it := <-w.workc: + select { + case <-w.donec: + return + case w.resc <- w.walk(it.dir, !it.callbackDone): + } + } + } +} + +type walker struct { + fn func(path string, typ os.FileMode) error + + donec chan struct{} // closed on fastWalk's return + workc chan walkItem // to workers + enqueuec chan walkItem // from workers + resc chan error // from workers +} + +type walkItem struct { + dir string + callbackDone bool // callback already called; don't do it again +} + +func (w *walker) enqueue(it walkItem) { + select { + case w.enqueuec <- it: + case <-w.donec: + } +} + +func (w *walker) onDirEnt(dirName, baseName string, typ os.FileMode) error { + joined := dirName + string(os.PathSeparator) + baseName + if typ == os.ModeDir { + w.enqueue(walkItem{dir: joined}) + return nil + } + + err := w.fn(joined, typ) + if typ == os.ModeSymlink { + if err == ErrTraverseLink { + // Set callbackDone so we don't call it twice for both the + // symlink-as-symlink and the symlink-as-directory later: + w.enqueue(walkItem{dir: joined, callbackDone: true}) + return nil + } + if err == filepath.SkipDir { + // Permit SkipDir on symlinks too. + return nil + } + } + return err +} + +func (w *walker) walk(root string, runUserCallback bool) error { + if runUserCallback { + err := w.fn(root, os.ModeDir) + if err == filepath.SkipDir { + return nil + } + if err != nil { + return err + } + } + + return readDir(root, w.onDirEnt) +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_fileno.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_fileno.go new file mode 100644 index 000000000..ccffec5ad --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_fileno.go @@ -0,0 +1,13 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build freebsd openbsd netbsd + +package fastwalk + +import "syscall" + +func direntInode(dirent *syscall.Dirent) uint64 { + return uint64(dirent.Fileno) +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_ino.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_ino.go new file mode 100644 index 000000000..ab7fbc0a9 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_ino.go @@ -0,0 +1,14 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build linux darwin +// +build !appengine + +package fastwalk + +import "syscall" + +func direntInode(dirent *syscall.Dirent) uint64 { + return uint64(dirent.Ino) +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_bsd.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_bsd.go new file mode 100644 index 000000000..a3b26a7ba --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_bsd.go @@ -0,0 +1,13 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build darwin freebsd openbsd netbsd + +package fastwalk + +import "syscall" + +func direntNamlen(dirent *syscall.Dirent) uint64 { + return uint64(dirent.Namlen) +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_linux.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_linux.go new file mode 100644 index 000000000..e880d358b --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_linux.go @@ -0,0 +1,29 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build linux +// +build !appengine + +package fastwalk + +import ( + "bytes" + "syscall" + "unsafe" +) + +func direntNamlen(dirent *syscall.Dirent) uint64 { + const fixedHdr = uint16(unsafe.Offsetof(syscall.Dirent{}.Name)) + nameBuf := (*[unsafe.Sizeof(dirent.Name)]byte)(unsafe.Pointer(&dirent.Name[0])) + const nameBufLen = uint16(len(nameBuf)) + limit := dirent.Reclen - fixedHdr + if limit > nameBufLen { + limit = nameBufLen + } + nameLen := bytes.IndexByte(nameBuf[:limit], 0) + if nameLen < 0 { + panic("failed to find terminating 0 byte in dirent") + } + return uint64(nameLen) +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_portable.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_portable.go new file mode 100644 index 000000000..b0d6327a9 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_portable.go @@ -0,0 +1,37 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build appengine !linux,!darwin,!freebsd,!openbsd,!netbsd + +package fastwalk + +import ( + "io/ioutil" + "os" +) + +// readDir calls fn for each directory entry in dirName. +// It does not descend into directories or follow symlinks. +// If fn returns a non-nil error, readDir returns with that error +// immediately. +func readDir(dirName string, fn func(dirName, entName string, typ os.FileMode) error) error { + fis, err := ioutil.ReadDir(dirName) + if err != nil { + return err + } + skipFiles := false + for _, fi := range fis { + if fi.Mode().IsRegular() && skipFiles { + continue + } + if err := fn(dirName, fi.Name(), fi.Mode()&os.ModeType); err != nil { + if err == ErrSkipFiles { + skipFiles = true + continue + } + return err + } + } + return nil +} diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_unix.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_unix.go new file mode 100644 index 000000000..5901a8f61 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_unix.go @@ -0,0 +1,128 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build linux darwin freebsd openbsd netbsd +// +build !appengine + +package fastwalk + +import ( + "fmt" + "os" + "syscall" + "unsafe" +) + +const blockSize = 8 << 10 + +// unknownFileMode is a sentinel (and bogus) os.FileMode +// value used to represent a syscall.DT_UNKNOWN Dirent.Type. +const unknownFileMode os.FileMode = os.ModeNamedPipe | os.ModeSocket | os.ModeDevice + +func readDir(dirName string, fn func(dirName, entName string, typ os.FileMode) error) error { + fd, err := syscall.Open(dirName, 0, 0) + if err != nil { + return &os.PathError{Op: "open", Path: dirName, Err: err} + } + defer syscall.Close(fd) + + // The buffer must be at least a block long. + buf := make([]byte, blockSize) // stack-allocated; doesn't escape + bufp := 0 // starting read position in buf + nbuf := 0 // end valid data in buf + skipFiles := false + for { + if bufp >= nbuf { + bufp = 0 + nbuf, err = syscall.ReadDirent(fd, buf) + if err != nil { + return os.NewSyscallError("readdirent", err) + } + if nbuf <= 0 { + return nil + } + } + consumed, name, typ := parseDirEnt(buf[bufp:nbuf]) + bufp += consumed + if name == "" || name == "." || name == ".." { + continue + } + // Fallback for filesystems (like old XFS) that don't + // support Dirent.Type and have DT_UNKNOWN (0) there + // instead. + if typ == unknownFileMode { + fi, err := os.Lstat(dirName + "/" + name) + if err != nil { + // It got deleted in the meantime. + if os.IsNotExist(err) { + continue + } + return err + } + typ = fi.Mode() & os.ModeType + } + if skipFiles && typ.IsRegular() { + continue + } + if err := fn(dirName, name, typ); err != nil { + if err == ErrSkipFiles { + skipFiles = true + continue + } + return err + } + } +} + +func parseDirEnt(buf []byte) (consumed int, name string, typ os.FileMode) { + // golang.org/issue/37269 + dirent := &syscall.Dirent{} + copy((*[unsafe.Sizeof(syscall.Dirent{})]byte)(unsafe.Pointer(dirent))[:], buf) + if v := unsafe.Offsetof(dirent.Reclen) + unsafe.Sizeof(dirent.Reclen); uintptr(len(buf)) < v { + panic(fmt.Sprintf("buf size of %d smaller than dirent header size %d", len(buf), v)) + } + if len(buf) < int(dirent.Reclen) { + panic(fmt.Sprintf("buf size %d < record length %d", len(buf), dirent.Reclen)) + } + consumed = int(dirent.Reclen) + if direntInode(dirent) == 0 { // File absent in directory. + return + } + switch dirent.Type { + case syscall.DT_REG: + typ = 0 + case syscall.DT_DIR: + typ = os.ModeDir + case syscall.DT_LNK: + typ = os.ModeSymlink + case syscall.DT_BLK: + typ = os.ModeDevice + case syscall.DT_FIFO: + typ = os.ModeNamedPipe + case syscall.DT_SOCK: + typ = os.ModeSocket + case syscall.DT_UNKNOWN: + typ = unknownFileMode + default: + // Skip weird things. + // It's probably a DT_WHT (http://lwn.net/Articles/325369/) + // or something. Revisit if/when this package is moved outside + // of goimports. goimports only cares about regular files, + // symlinks, and directories. + return + } + + nameBuf := (*[unsafe.Sizeof(dirent.Name)]byte)(unsafe.Pointer(&dirent.Name[0])) + nameLen := direntNamlen(dirent) + + // Special cases for common things: + if nameLen == 1 && nameBuf[0] == '.' { + name = "." + } else if nameLen == 2 && nameBuf[0] == '.' && nameBuf[1] == '.' { + name = ".." + } else { + name = string(nameBuf[:nameLen]) + } + return +} diff --git a/vendor/golang.org/x/tools/internal/gocommand/invoke.go b/vendor/golang.org/x/tools/internal/gocommand/invoke.go new file mode 100644 index 000000000..f65aad4ec --- /dev/null +++ b/vendor/golang.org/x/tools/internal/gocommand/invoke.go @@ -0,0 +1,273 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package gocommand is a helper for calling the go command. +package gocommand + +import ( + "bytes" + "context" + "fmt" + "io" + "os" + "os/exec" + "regexp" + "strconv" + "strings" + "sync" + "time" + + "golang.org/x/tools/internal/event" +) + +// An Runner will run go command invocations and serialize +// them if it sees a concurrency error. +type Runner struct { + // once guards the runner initialization. + once sync.Once + + // inFlight tracks available workers. + inFlight chan struct{} + + // serialized guards the ability to run a go command serially, + // to avoid deadlocks when claiming workers. + serialized chan struct{} +} + +const maxInFlight = 10 + +func (runner *Runner) initialize() { + runner.once.Do(func() { + runner.inFlight = make(chan struct{}, maxInFlight) + runner.serialized = make(chan struct{}, 1) + }) +} + +// 1.13: go: updates to go.mod needed, but contents have changed +// 1.14: go: updating go.mod: existing contents have changed since last read +var modConcurrencyError = regexp.MustCompile(`go:.*go.mod.*contents have changed`) + +// Run is a convenience wrapper around RunRaw. +// It returns only stdout and a "friendly" error. +func (runner *Runner) Run(ctx context.Context, inv Invocation) (*bytes.Buffer, error) { + stdout, _, friendly, _ := runner.RunRaw(ctx, inv) + return stdout, friendly +} + +// RunPiped runs the invocation serially, always waiting for any concurrent +// invocations to complete first. +func (runner *Runner) RunPiped(ctx context.Context, inv Invocation, stdout, stderr io.Writer) error { + _, err := runner.runPiped(ctx, inv, stdout, stderr) + return err +} + +// RunRaw runs the invocation, serializing requests only if they fight over +// go.mod changes. +func (runner *Runner) RunRaw(ctx context.Context, inv Invocation) (*bytes.Buffer, *bytes.Buffer, error, error) { + // Make sure the runner is always initialized. + runner.initialize() + + // First, try to run the go command concurrently. + stdout, stderr, friendlyErr, err := runner.runConcurrent(ctx, inv) + + // If we encounter a load concurrency error, we need to retry serially. + if friendlyErr == nil || !modConcurrencyError.MatchString(friendlyErr.Error()) { + return stdout, stderr, friendlyErr, err + } + event.Error(ctx, "Load concurrency error, will retry serially", err) + + // Run serially by calling runPiped. + stdout.Reset() + stderr.Reset() + friendlyErr, err = runner.runPiped(ctx, inv, stdout, stderr) + return stdout, stderr, friendlyErr, err +} + +func (runner *Runner) runConcurrent(ctx context.Context, inv Invocation) (*bytes.Buffer, *bytes.Buffer, error, error) { + // Wait for 1 worker to become available. + select { + case <-ctx.Done(): + return nil, nil, nil, ctx.Err() + case runner.inFlight <- struct{}{}: + defer func() { <-runner.inFlight }() + } + + stdout, stderr := &bytes.Buffer{}, &bytes.Buffer{} + friendlyErr, err := inv.runWithFriendlyError(ctx, stdout, stderr) + return stdout, stderr, friendlyErr, err +} + +func (runner *Runner) runPiped(ctx context.Context, inv Invocation, stdout, stderr io.Writer) (error, error) { + // Make sure the runner is always initialized. + runner.initialize() + + // Acquire the serialization lock. This avoids deadlocks between two + // runPiped commands. + select { + case <-ctx.Done(): + return nil, ctx.Err() + case runner.serialized <- struct{}{}: + defer func() { <-runner.serialized }() + } + + // Wait for all in-progress go commands to return before proceeding, + // to avoid load concurrency errors. + for i := 0; i < maxInFlight; i++ { + select { + case <-ctx.Done(): + return nil, ctx.Err() + case runner.inFlight <- struct{}{}: + // Make sure we always "return" any workers we took. + defer func() { <-runner.inFlight }() + } + } + + return inv.runWithFriendlyError(ctx, stdout, stderr) +} + +// An Invocation represents a call to the go command. +type Invocation struct { + Verb string + Args []string + BuildFlags []string + ModFlag string + ModFile string + Overlay string + // If CleanEnv is set, the invocation will run only with the environment + // in Env, not starting with os.Environ. + CleanEnv bool + Env []string + WorkingDir string + Logf func(format string, args ...interface{}) +} + +func (i *Invocation) runWithFriendlyError(ctx context.Context, stdout, stderr io.Writer) (friendlyError error, rawError error) { + rawError = i.run(ctx, stdout, stderr) + if rawError != nil { + friendlyError = rawError + // Check for 'go' executable not being found. + if ee, ok := rawError.(*exec.Error); ok && ee.Err == exec.ErrNotFound { + friendlyError = fmt.Errorf("go command required, not found: %v", ee) + } + if ctx.Err() != nil { + friendlyError = ctx.Err() + } + friendlyError = fmt.Errorf("err: %v: stderr: %s", friendlyError, stderr) + } + return +} + +func (i *Invocation) run(ctx context.Context, stdout, stderr io.Writer) error { + log := i.Logf + if log == nil { + log = func(string, ...interface{}) {} + } + + goArgs := []string{i.Verb} + + appendModFile := func() { + if i.ModFile != "" { + goArgs = append(goArgs, "-modfile="+i.ModFile) + } + } + appendModFlag := func() { + if i.ModFlag != "" { + goArgs = append(goArgs, "-mod="+i.ModFlag) + } + } + appendOverlayFlag := func() { + if i.Overlay != "" { + goArgs = append(goArgs, "-overlay="+i.Overlay) + } + } + + switch i.Verb { + case "env", "version": + goArgs = append(goArgs, i.Args...) + case "mod": + // mod needs the sub-verb before flags. + goArgs = append(goArgs, i.Args[0]) + appendModFile() + goArgs = append(goArgs, i.Args[1:]...) + case "get": + goArgs = append(goArgs, i.BuildFlags...) + appendModFile() + goArgs = append(goArgs, i.Args...) + + default: // notably list and build. + goArgs = append(goArgs, i.BuildFlags...) + appendModFile() + appendModFlag() + appendOverlayFlag() + goArgs = append(goArgs, i.Args...) + } + cmd := exec.Command("go", goArgs...) + cmd.Stdout = stdout + cmd.Stderr = stderr + // On darwin the cwd gets resolved to the real path, which breaks anything that + // expects the working directory to keep the original path, including the + // go command when dealing with modules. + // The Go stdlib has a special feature where if the cwd and the PWD are the + // same node then it trusts the PWD, so by setting it in the env for the child + // process we fix up all the paths returned by the go command. + if !i.CleanEnv { + cmd.Env = os.Environ() + } + cmd.Env = append(cmd.Env, i.Env...) + if i.WorkingDir != "" { + cmd.Env = append(cmd.Env, "PWD="+i.WorkingDir) + cmd.Dir = i.WorkingDir + } + defer func(start time.Time) { log("%s for %v", time.Since(start), cmdDebugStr(cmd)) }(time.Now()) + + return runCmdContext(ctx, cmd) +} + +// runCmdContext is like exec.CommandContext except it sends os.Interrupt +// before os.Kill. +func runCmdContext(ctx context.Context, cmd *exec.Cmd) error { + if err := cmd.Start(); err != nil { + return err + } + resChan := make(chan error, 1) + go func() { + resChan <- cmd.Wait() + }() + + select { + case err := <-resChan: + return err + case <-ctx.Done(): + } + // Cancelled. Interrupt and see if it ends voluntarily. + cmd.Process.Signal(os.Interrupt) + select { + case err := <-resChan: + return err + case <-time.After(time.Second): + } + // Didn't shut down in response to interrupt. Kill it hard. + cmd.Process.Kill() + return <-resChan +} + +func cmdDebugStr(cmd *exec.Cmd) string { + env := make(map[string]string) + for _, kv := range cmd.Env { + split := strings.SplitN(kv, "=", 2) + k, v := split[0], split[1] + env[k] = v + } + + var args []string + for _, arg := range cmd.Args { + quoted := strconv.Quote(arg) + if quoted[1:len(quoted)-1] != arg || strings.Contains(arg, " ") { + args = append(args, quoted) + } else { + args = append(args, arg) + } + } + return fmt.Sprintf("GOROOT=%v GOPATH=%v GO111MODULE=%v GOPROXY=%v PWD=%v %v", env["GOROOT"], env["GOPATH"], env["GO111MODULE"], env["GOPROXY"], env["PWD"], strings.Join(args, " ")) +} diff --git a/vendor/golang.org/x/tools/internal/gocommand/vendor.go b/vendor/golang.org/x/tools/internal/gocommand/vendor.go new file mode 100644 index 000000000..1cd8d8473 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/gocommand/vendor.go @@ -0,0 +1,102 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package gocommand + +import ( + "bytes" + "context" + "fmt" + "os" + "path/filepath" + "regexp" + "strings" + + "golang.org/x/mod/semver" +) + +// ModuleJSON holds information about a module. +type ModuleJSON struct { + Path string // module path + Replace *ModuleJSON // replaced by this module + Main bool // is this the main module? + Indirect bool // is this module only an indirect dependency of main module? + Dir string // directory holding files for this module, if any + GoMod string // path to go.mod file for this module, if any + GoVersion string // go version used in module +} + +var modFlagRegexp = regexp.MustCompile(`-mod[ =](\w+)`) + +// VendorEnabled reports whether vendoring is enabled. It takes a *Runner to execute Go commands +// with the supplied context.Context and Invocation. The Invocation can contain pre-defined fields, +// of which only Verb and Args are modified to run the appropriate Go command. +// Inspired by setDefaultBuildMod in modload/init.go +func VendorEnabled(ctx context.Context, inv Invocation, r *Runner) (*ModuleJSON, bool, error) { + mainMod, go114, err := getMainModuleAnd114(ctx, inv, r) + if err != nil { + return nil, false, err + } + + // We check the GOFLAGS to see if there is anything overridden or not. + inv.Verb = "env" + inv.Args = []string{"GOFLAGS"} + stdout, err := r.Run(ctx, inv) + if err != nil { + return nil, false, err + } + goflags := string(bytes.TrimSpace(stdout.Bytes())) + matches := modFlagRegexp.FindStringSubmatch(goflags) + var modFlag string + if len(matches) != 0 { + modFlag = matches[1] + } + if modFlag != "" { + // Don't override an explicit '-mod=' argument. + return mainMod, modFlag == "vendor", nil + } + if mainMod == nil || !go114 { + return mainMod, false, nil + } + // Check 1.14's automatic vendor mode. + if fi, err := os.Stat(filepath.Join(mainMod.Dir, "vendor")); err == nil && fi.IsDir() { + if mainMod.GoVersion != "" && semver.Compare("v"+mainMod.GoVersion, "v1.14") >= 0 { + // The Go version is at least 1.14, and a vendor directory exists. + // Set -mod=vendor by default. + return mainMod, true, nil + } + } + return mainMod, false, nil +} + +// getMainModuleAnd114 gets the main module's information and whether the +// go command in use is 1.14+. This is the information needed to figure out +// if vendoring should be enabled. +func getMainModuleAnd114(ctx context.Context, inv Invocation, r *Runner) (*ModuleJSON, bool, error) { + const format = `{{.Path}} +{{.Dir}} +{{.GoMod}} +{{.GoVersion}} +{{range context.ReleaseTags}}{{if eq . "go1.14"}}{{.}}{{end}}{{end}} +` + inv.Verb = "list" + inv.Args = []string{"-m", "-f", format} + stdout, err := r.Run(ctx, inv) + if err != nil { + return nil, false, err + } + + lines := strings.Split(stdout.String(), "\n") + if len(lines) < 5 { + return nil, false, fmt.Errorf("unexpected stdout: %q", stdout.String()) + } + mod := &ModuleJSON{ + Path: lines[0], + Dir: lines[1], + GoMod: lines[2], + GoVersion: lines[3], + Main: true, + } + return mod, lines[4] == "go1.14", nil +} diff --git a/vendor/golang.org/x/tools/internal/gocommand/version.go b/vendor/golang.org/x/tools/internal/gocommand/version.go new file mode 100644 index 000000000..0cebac6e6 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/gocommand/version.go @@ -0,0 +1,51 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package gocommand + +import ( + "context" + "fmt" + "strings" +) + +// GoVersion checks the go version by running "go list" with modules off. +// It returns the X in Go 1.X. +func GoVersion(ctx context.Context, inv Invocation, r *Runner) (int, error) { + inv.Verb = "list" + inv.Args = []string{"-e", "-f", `{{context.ReleaseTags}}`} + inv.Env = append(append([]string{}, inv.Env...), "GO111MODULE=off") + // Unset any unneeded flags, and remove them from BuildFlags, if they're + // present. + inv.ModFile = "" + inv.ModFlag = "" + var buildFlags []string + for _, flag := range inv.BuildFlags { + // Flags can be prefixed by one or two dashes. + f := strings.TrimPrefix(strings.TrimPrefix(flag, "-"), "-") + if strings.HasPrefix(f, "mod=") || strings.HasPrefix(f, "modfile=") { + continue + } + buildFlags = append(buildFlags, flag) + } + inv.BuildFlags = buildFlags + stdoutBytes, err := r.Run(ctx, inv) + if err != nil { + return 0, err + } + stdout := stdoutBytes.String() + if len(stdout) < 3 { + return 0, fmt.Errorf("bad ReleaseTags output: %q", stdout) + } + // Split up "[go1.1 go1.15]" + tags := strings.Fields(stdout[1 : len(stdout)-2]) + for i := len(tags) - 1; i >= 0; i-- { + var version int + if _, err := fmt.Sscanf(tags[i], "go1.%d", &version); err != nil { + continue + } + return version, nil + } + return 0, fmt.Errorf("no parseable ReleaseTags in %v", tags) +} diff --git a/vendor/golang.org/x/tools/internal/gopathwalk/walk.go b/vendor/golang.org/x/tools/internal/gopathwalk/walk.go new file mode 100644 index 000000000..925ff5356 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/gopathwalk/walk.go @@ -0,0 +1,264 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package gopathwalk is like filepath.Walk but specialized for finding Go +// packages, particularly in $GOPATH and $GOROOT. +package gopathwalk + +import ( + "bufio" + "bytes" + "fmt" + "io/ioutil" + "log" + "os" + "path/filepath" + "strings" + "time" + + "golang.org/x/tools/internal/fastwalk" +) + +// Options controls the behavior of a Walk call. +type Options struct { + // If Logf is non-nil, debug logging is enabled through this function. + Logf func(format string, args ...interface{}) + // Search module caches. Also disables legacy goimports ignore rules. + ModulesEnabled bool +} + +// RootType indicates the type of a Root. +type RootType int + +const ( + RootUnknown RootType = iota + RootGOROOT + RootGOPATH + RootCurrentModule + RootModuleCache + RootOther +) + +// A Root is a starting point for a Walk. +type Root struct { + Path string + Type RootType +} + +// Walk walks Go source directories ($GOROOT, $GOPATH, etc) to find packages. +// For each package found, add will be called (concurrently) with the absolute +// paths of the containing source directory and the package directory. +// add will be called concurrently. +func Walk(roots []Root, add func(root Root, dir string), opts Options) { + WalkSkip(roots, add, func(Root, string) bool { return false }, opts) +} + +// WalkSkip walks Go source directories ($GOROOT, $GOPATH, etc) to find packages. +// For each package found, add will be called (concurrently) with the absolute +// paths of the containing source directory and the package directory. +// For each directory that will be scanned, skip will be called (concurrently) +// with the absolute paths of the containing source directory and the directory. +// If skip returns false on a directory it will be processed. +// add will be called concurrently. +// skip will be called concurrently. +func WalkSkip(roots []Root, add func(root Root, dir string), skip func(root Root, dir string) bool, opts Options) { + for _, root := range roots { + walkDir(root, add, skip, opts) + } +} + +// walkDir creates a walker and starts fastwalk with this walker. +func walkDir(root Root, add func(Root, string), skip func(root Root, dir string) bool, opts Options) { + if _, err := os.Stat(root.Path); os.IsNotExist(err) { + if opts.Logf != nil { + opts.Logf("skipping nonexistent directory: %v", root.Path) + } + return + } + start := time.Now() + if opts.Logf != nil { + opts.Logf("gopathwalk: scanning %s", root.Path) + } + w := &walker{ + root: root, + add: add, + skip: skip, + opts: opts, + } + w.init() + if err := fastwalk.Walk(root.Path, w.walk); err != nil { + log.Printf("gopathwalk: scanning directory %v: %v", root.Path, err) + } + + if opts.Logf != nil { + opts.Logf("gopathwalk: scanned %s in %v", root.Path, time.Since(start)) + } +} + +// walker is the callback for fastwalk.Walk. +type walker struct { + root Root // The source directory to scan. + add func(Root, string) // The callback that will be invoked for every possible Go package dir. + skip func(Root, string) bool // The callback that will be invoked for every dir. dir is skipped if it returns true. + opts Options // Options passed to Walk by the user. + + ignoredDirs []os.FileInfo // The ignored directories, loaded from .goimportsignore files. +} + +// init initializes the walker based on its Options +func (w *walker) init() { + var ignoredPaths []string + if w.root.Type == RootModuleCache { + ignoredPaths = []string{"cache"} + } + if !w.opts.ModulesEnabled && w.root.Type == RootGOPATH { + ignoredPaths = w.getIgnoredDirs(w.root.Path) + ignoredPaths = append(ignoredPaths, "v", "mod") + } + + for _, p := range ignoredPaths { + full := filepath.Join(w.root.Path, p) + if fi, err := os.Stat(full); err == nil { + w.ignoredDirs = append(w.ignoredDirs, fi) + if w.opts.Logf != nil { + w.opts.Logf("Directory added to ignore list: %s", full) + } + } else if w.opts.Logf != nil { + w.opts.Logf("Error statting ignored directory: %v", err) + } + } +} + +// getIgnoredDirs reads an optional config file at /.goimportsignore +// of relative directories to ignore when scanning for go files. +// The provided path is one of the $GOPATH entries with "src" appended. +func (w *walker) getIgnoredDirs(path string) []string { + file := filepath.Join(path, ".goimportsignore") + slurp, err := ioutil.ReadFile(file) + if w.opts.Logf != nil { + if err != nil { + w.opts.Logf("%v", err) + } else { + w.opts.Logf("Read %s", file) + } + } + if err != nil { + return nil + } + + var ignoredDirs []string + bs := bufio.NewScanner(bytes.NewReader(slurp)) + for bs.Scan() { + line := strings.TrimSpace(bs.Text()) + if line == "" || strings.HasPrefix(line, "#") { + continue + } + ignoredDirs = append(ignoredDirs, line) + } + return ignoredDirs +} + +// shouldSkipDir reports whether the file should be skipped or not. +func (w *walker) shouldSkipDir(fi os.FileInfo, dir string) bool { + for _, ignoredDir := range w.ignoredDirs { + if os.SameFile(fi, ignoredDir) { + return true + } + } + if w.skip != nil { + // Check with the user specified callback. + return w.skip(w.root, dir) + } + return false +} + +// walk walks through the given path. +func (w *walker) walk(path string, typ os.FileMode) error { + dir := filepath.Dir(path) + if typ.IsRegular() { + if dir == w.root.Path && (w.root.Type == RootGOROOT || w.root.Type == RootGOPATH) { + // Doesn't make sense to have regular files + // directly in your $GOPATH/src or $GOROOT/src. + return fastwalk.ErrSkipFiles + } + if !strings.HasSuffix(path, ".go") { + return nil + } + + w.add(w.root, dir) + return fastwalk.ErrSkipFiles + } + if typ == os.ModeDir { + base := filepath.Base(path) + if base == "" || base[0] == '.' || base[0] == '_' || + base == "testdata" || + (w.root.Type == RootGOROOT && w.opts.ModulesEnabled && base == "vendor") || + (!w.opts.ModulesEnabled && base == "node_modules") { + return filepath.SkipDir + } + fi, err := os.Lstat(path) + if err == nil && w.shouldSkipDir(fi, path) { + return filepath.SkipDir + } + return nil + } + if typ == os.ModeSymlink { + base := filepath.Base(path) + if strings.HasPrefix(base, ".#") { + // Emacs noise. + return nil + } + fi, err := os.Lstat(path) + if err != nil { + // Just ignore it. + return nil + } + if w.shouldTraverse(dir, fi) { + return fastwalk.ErrTraverseLink + } + } + return nil +} + +// shouldTraverse reports whether the symlink fi, found in dir, +// should be followed. It makes sure symlinks were never visited +// before to avoid symlink loops. +func (w *walker) shouldTraverse(dir string, fi os.FileInfo) bool { + path := filepath.Join(dir, fi.Name()) + target, err := filepath.EvalSymlinks(path) + if err != nil { + return false + } + ts, err := os.Stat(target) + if err != nil { + fmt.Fprintln(os.Stderr, err) + return false + } + if !ts.IsDir() { + return false + } + if w.shouldSkipDir(ts, dir) { + return false + } + // Check for symlink loops by statting each directory component + // and seeing if any are the same file as ts. + for { + parent := filepath.Dir(path) + if parent == path { + // Made it to the root without seeing a cycle. + // Use this symlink. + return true + } + parentInfo, err := os.Stat(parent) + if err != nil { + return false + } + if os.SameFile(ts, parentInfo) { + // Cycle. Don't traverse. + return false + } + path = parent + } + +} diff --git a/vendor/golang.org/x/tools/internal/imports/fix.go b/vendor/golang.org/x/tools/internal/imports/fix.go new file mode 100644 index 000000000..d859617b7 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/imports/fix.go @@ -0,0 +1,1730 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package imports + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "go/ast" + "go/build" + "go/parser" + "go/token" + "io/ioutil" + "os" + "path" + "path/filepath" + "reflect" + "sort" + "strconv" + "strings" + "sync" + "unicode" + "unicode/utf8" + + "golang.org/x/tools/go/ast/astutil" + "golang.org/x/tools/internal/gocommand" + "golang.org/x/tools/internal/gopathwalk" +) + +// importToGroup is a list of functions which map from an import path to +// a group number. +var importToGroup = []func(localPrefix, importPath string) (num int, ok bool){ + func(localPrefix, importPath string) (num int, ok bool) { + if localPrefix == "" { + return + } + for _, p := range strings.Split(localPrefix, ",") { + if strings.HasPrefix(importPath, p) || strings.TrimSuffix(p, "/") == importPath { + return 3, true + } + } + return + }, + func(_, importPath string) (num int, ok bool) { + if strings.HasPrefix(importPath, "appengine") { + return 2, true + } + return + }, + func(_, importPath string) (num int, ok bool) { + firstComponent := strings.Split(importPath, "/")[0] + if strings.Contains(firstComponent, ".") { + return 1, true + } + return + }, +} + +func importGroup(localPrefix, importPath string) int { + for _, fn := range importToGroup { + if n, ok := fn(localPrefix, importPath); ok { + return n + } + } + return 0 +} + +type ImportFixType int + +const ( + AddImport ImportFixType = iota + DeleteImport + SetImportName +) + +type ImportFix struct { + // StmtInfo represents the import statement this fix will add, remove, or change. + StmtInfo ImportInfo + // IdentName is the identifier that this fix will add or remove. + IdentName string + // FixType is the type of fix this is (AddImport, DeleteImport, SetImportName). + FixType ImportFixType + Relevance float64 // see pkg +} + +// An ImportInfo represents a single import statement. +type ImportInfo struct { + ImportPath string // import path, e.g. "crypto/rand". + Name string // import name, e.g. "crand", or "" if none. +} + +// A packageInfo represents what's known about a package. +type packageInfo struct { + name string // real package name, if known. + exports map[string]bool // known exports. +} + +// parseOtherFiles parses all the Go files in srcDir except filename, including +// test files if filename looks like a test. +func parseOtherFiles(fset *token.FileSet, srcDir, filename string) []*ast.File { + // This could use go/packages but it doesn't buy much, and it fails + // with https://golang.org/issue/26296 in LoadFiles mode in some cases. + considerTests := strings.HasSuffix(filename, "_test.go") + + fileBase := filepath.Base(filename) + packageFileInfos, err := ioutil.ReadDir(srcDir) + if err != nil { + return nil + } + + var files []*ast.File + for _, fi := range packageFileInfos { + if fi.Name() == fileBase || !strings.HasSuffix(fi.Name(), ".go") { + continue + } + if !considerTests && strings.HasSuffix(fi.Name(), "_test.go") { + continue + } + + f, err := parser.ParseFile(fset, filepath.Join(srcDir, fi.Name()), nil, 0) + if err != nil { + continue + } + + files = append(files, f) + } + + return files +} + +// addGlobals puts the names of package vars into the provided map. +func addGlobals(f *ast.File, globals map[string]bool) { + for _, decl := range f.Decls { + genDecl, ok := decl.(*ast.GenDecl) + if !ok { + continue + } + + for _, spec := range genDecl.Specs { + valueSpec, ok := spec.(*ast.ValueSpec) + if !ok { + continue + } + globals[valueSpec.Names[0].Name] = true + } + } +} + +// collectReferences builds a map of selector expressions, from +// left hand side (X) to a set of right hand sides (Sel). +func collectReferences(f *ast.File) references { + refs := references{} + + var visitor visitFn + visitor = func(node ast.Node) ast.Visitor { + if node == nil { + return visitor + } + switch v := node.(type) { + case *ast.SelectorExpr: + xident, ok := v.X.(*ast.Ident) + if !ok { + break + } + if xident.Obj != nil { + // If the parser can resolve it, it's not a package ref. + break + } + if !ast.IsExported(v.Sel.Name) { + // Whatever this is, it's not exported from a package. + break + } + pkgName := xident.Name + r := refs[pkgName] + if r == nil { + r = make(map[string]bool) + refs[pkgName] = r + } + r[v.Sel.Name] = true + } + return visitor + } + ast.Walk(visitor, f) + return refs +} + +// collectImports returns all the imports in f. +// Unnamed imports (., _) and "C" are ignored. +func collectImports(f *ast.File) []*ImportInfo { + var imports []*ImportInfo + for _, imp := range f.Imports { + var name string + if imp.Name != nil { + name = imp.Name.Name + } + if imp.Path.Value == `"C"` || name == "_" || name == "." { + continue + } + path := strings.Trim(imp.Path.Value, `"`) + imports = append(imports, &ImportInfo{ + Name: name, + ImportPath: path, + }) + } + return imports +} + +// findMissingImport searches pass's candidates for an import that provides +// pkg, containing all of syms. +func (p *pass) findMissingImport(pkg string, syms map[string]bool) *ImportInfo { + for _, candidate := range p.candidates { + pkgInfo, ok := p.knownPackages[candidate.ImportPath] + if !ok { + continue + } + if p.importIdentifier(candidate) != pkg { + continue + } + + allFound := true + for right := range syms { + if !pkgInfo.exports[right] { + allFound = false + break + } + } + + if allFound { + return candidate + } + } + return nil +} + +// references is set of references found in a Go file. The first map key is the +// left hand side of a selector expression, the second key is the right hand +// side, and the value should always be true. +type references map[string]map[string]bool + +// A pass contains all the inputs and state necessary to fix a file's imports. +// It can be modified in some ways during use; see comments below. +type pass struct { + // Inputs. These must be set before a call to load, and not modified after. + fset *token.FileSet // fset used to parse f and its siblings. + f *ast.File // the file being fixed. + srcDir string // the directory containing f. + env *ProcessEnv // the environment to use for go commands, etc. + loadRealPackageNames bool // if true, load package names from disk rather than guessing them. + otherFiles []*ast.File // sibling files. + + // Intermediate state, generated by load. + existingImports map[string]*ImportInfo + allRefs references + missingRefs references + + // Inputs to fix. These can be augmented between successive fix calls. + lastTry bool // indicates that this is the last call and fix should clean up as best it can. + candidates []*ImportInfo // candidate imports in priority order. + knownPackages map[string]*packageInfo // information about all known packages. +} + +// loadPackageNames saves the package names for everything referenced by imports. +func (p *pass) loadPackageNames(imports []*ImportInfo) error { + if p.env.Logf != nil { + p.env.Logf("loading package names for %v packages", len(imports)) + defer func() { + p.env.Logf("done loading package names for %v packages", len(imports)) + }() + } + var unknown []string + for _, imp := range imports { + if _, ok := p.knownPackages[imp.ImportPath]; ok { + continue + } + unknown = append(unknown, imp.ImportPath) + } + + resolver, err := p.env.GetResolver() + if err != nil { + return err + } + + names, err := resolver.loadPackageNames(unknown, p.srcDir) + if err != nil { + return err + } + + for path, name := range names { + p.knownPackages[path] = &packageInfo{ + name: name, + exports: map[string]bool{}, + } + } + return nil +} + +// importIdentifier returns the identifier that imp will introduce. It will +// guess if the package name has not been loaded, e.g. because the source +// is not available. +func (p *pass) importIdentifier(imp *ImportInfo) string { + if imp.Name != "" { + return imp.Name + } + known := p.knownPackages[imp.ImportPath] + if known != nil && known.name != "" { + return known.name + } + return ImportPathToAssumedName(imp.ImportPath) +} + +// load reads in everything necessary to run a pass, and reports whether the +// file already has all the imports it needs. It fills in p.missingRefs with the +// file's missing symbols, if any, or removes unused imports if not. +func (p *pass) load() ([]*ImportFix, bool) { + p.knownPackages = map[string]*packageInfo{} + p.missingRefs = references{} + p.existingImports = map[string]*ImportInfo{} + + // Load basic information about the file in question. + p.allRefs = collectReferences(p.f) + + // Load stuff from other files in the same package: + // global variables so we know they don't need resolving, and imports + // that we might want to mimic. + globals := map[string]bool{} + for _, otherFile := range p.otherFiles { + // Don't load globals from files that are in the same directory + // but a different package. Using them to suggest imports is OK. + if p.f.Name.Name == otherFile.Name.Name { + addGlobals(otherFile, globals) + } + p.candidates = append(p.candidates, collectImports(otherFile)...) + } + + // Resolve all the import paths we've seen to package names, and store + // f's imports by the identifier they introduce. + imports := collectImports(p.f) + if p.loadRealPackageNames { + err := p.loadPackageNames(append(imports, p.candidates...)) + if err != nil { + if p.env.Logf != nil { + p.env.Logf("loading package names: %v", err) + } + return nil, false + } + } + for _, imp := range imports { + p.existingImports[p.importIdentifier(imp)] = imp + } + + // Find missing references. + for left, rights := range p.allRefs { + if globals[left] { + continue + } + _, ok := p.existingImports[left] + if !ok { + p.missingRefs[left] = rights + continue + } + } + if len(p.missingRefs) != 0 { + return nil, false + } + + return p.fix() +} + +// fix attempts to satisfy missing imports using p.candidates. If it finds +// everything, or if p.lastTry is true, it updates fixes to add the imports it found, +// delete anything unused, and update import names, and returns true. +func (p *pass) fix() ([]*ImportFix, bool) { + // Find missing imports. + var selected []*ImportInfo + for left, rights := range p.missingRefs { + if imp := p.findMissingImport(left, rights); imp != nil { + selected = append(selected, imp) + } + } + + if !p.lastTry && len(selected) != len(p.missingRefs) { + return nil, false + } + + // Found everything, or giving up. Add the new imports and remove any unused. + var fixes []*ImportFix + for _, imp := range p.existingImports { + // We deliberately ignore globals here, because we can't be sure + // they're in the same package. People do things like put multiple + // main packages in the same directory, and we don't want to + // remove imports if they happen to have the same name as a var in + // a different package. + if _, ok := p.allRefs[p.importIdentifier(imp)]; !ok { + fixes = append(fixes, &ImportFix{ + StmtInfo: *imp, + IdentName: p.importIdentifier(imp), + FixType: DeleteImport, + }) + continue + } + + // An existing import may need to update its import name to be correct. + if name := p.importSpecName(imp); name != imp.Name { + fixes = append(fixes, &ImportFix{ + StmtInfo: ImportInfo{ + Name: name, + ImportPath: imp.ImportPath, + }, + IdentName: p.importIdentifier(imp), + FixType: SetImportName, + }) + } + } + + for _, imp := range selected { + fixes = append(fixes, &ImportFix{ + StmtInfo: ImportInfo{ + Name: p.importSpecName(imp), + ImportPath: imp.ImportPath, + }, + IdentName: p.importIdentifier(imp), + FixType: AddImport, + }) + } + + return fixes, true +} + +// importSpecName gets the import name of imp in the import spec. +// +// When the import identifier matches the assumed import name, the import name does +// not appear in the import spec. +func (p *pass) importSpecName(imp *ImportInfo) string { + // If we did not load the real package names, or the name is already set, + // we just return the existing name. + if !p.loadRealPackageNames || imp.Name != "" { + return imp.Name + } + + ident := p.importIdentifier(imp) + if ident == ImportPathToAssumedName(imp.ImportPath) { + return "" // ident not needed since the assumed and real names are the same. + } + return ident +} + +// apply will perform the fixes on f in order. +func apply(fset *token.FileSet, f *ast.File, fixes []*ImportFix) { + for _, fix := range fixes { + switch fix.FixType { + case DeleteImport: + astutil.DeleteNamedImport(fset, f, fix.StmtInfo.Name, fix.StmtInfo.ImportPath) + case AddImport: + astutil.AddNamedImport(fset, f, fix.StmtInfo.Name, fix.StmtInfo.ImportPath) + case SetImportName: + // Find the matching import path and change the name. + for _, spec := range f.Imports { + path := strings.Trim(spec.Path.Value, `"`) + if path == fix.StmtInfo.ImportPath { + spec.Name = &ast.Ident{ + Name: fix.StmtInfo.Name, + NamePos: spec.Pos(), + } + } + } + } + } +} + +// assumeSiblingImportsValid assumes that siblings' use of packages is valid, +// adding the exports they use. +func (p *pass) assumeSiblingImportsValid() { + for _, f := range p.otherFiles { + refs := collectReferences(f) + imports := collectImports(f) + importsByName := map[string]*ImportInfo{} + for _, imp := range imports { + importsByName[p.importIdentifier(imp)] = imp + } + for left, rights := range refs { + if imp, ok := importsByName[left]; ok { + if m, ok := stdlib[imp.ImportPath]; ok { + // We have the stdlib in memory; no need to guess. + rights = copyExports(m) + } + p.addCandidate(imp, &packageInfo{ + // no name; we already know it. + exports: rights, + }) + } + } + } +} + +// addCandidate adds a candidate import to p, and merges in the information +// in pkg. +func (p *pass) addCandidate(imp *ImportInfo, pkg *packageInfo) { + p.candidates = append(p.candidates, imp) + if existing, ok := p.knownPackages[imp.ImportPath]; ok { + if existing.name == "" { + existing.name = pkg.name + } + for export := range pkg.exports { + existing.exports[export] = true + } + } else { + p.knownPackages[imp.ImportPath] = pkg + } +} + +// fixImports adds and removes imports from f so that all its references are +// satisfied and there are no unused imports. +// +// This is declared as a variable rather than a function so goimports can +// easily be extended by adding a file with an init function. +var fixImports = fixImportsDefault + +func fixImportsDefault(fset *token.FileSet, f *ast.File, filename string, env *ProcessEnv) error { + fixes, err := getFixes(fset, f, filename, env) + if err != nil { + return err + } + apply(fset, f, fixes) + return err +} + +// getFixes gets the import fixes that need to be made to f in order to fix the imports. +// It does not modify the ast. +func getFixes(fset *token.FileSet, f *ast.File, filename string, env *ProcessEnv) ([]*ImportFix, error) { + abs, err := filepath.Abs(filename) + if err != nil { + return nil, err + } + srcDir := filepath.Dir(abs) + if env.Logf != nil { + env.Logf("fixImports(filename=%q), abs=%q, srcDir=%q ...", filename, abs, srcDir) + } + + // First pass: looking only at f, and using the naive algorithm to + // derive package names from import paths, see if the file is already + // complete. We can't add any imports yet, because we don't know + // if missing references are actually package vars. + p := &pass{fset: fset, f: f, srcDir: srcDir, env: env} + if fixes, done := p.load(); done { + return fixes, nil + } + + otherFiles := parseOtherFiles(fset, srcDir, filename) + + // Second pass: add information from other files in the same package, + // like their package vars and imports. + p.otherFiles = otherFiles + if fixes, done := p.load(); done { + return fixes, nil + } + + // Now we can try adding imports from the stdlib. + p.assumeSiblingImportsValid() + addStdlibCandidates(p, p.missingRefs) + if fixes, done := p.fix(); done { + return fixes, nil + } + + // Third pass: get real package names where we had previously used + // the naive algorithm. + p = &pass{fset: fset, f: f, srcDir: srcDir, env: env} + p.loadRealPackageNames = true + p.otherFiles = otherFiles + if fixes, done := p.load(); done { + return fixes, nil + } + + if err := addStdlibCandidates(p, p.missingRefs); err != nil { + return nil, err + } + p.assumeSiblingImportsValid() + if fixes, done := p.fix(); done { + return fixes, nil + } + + // Go look for candidates in $GOPATH, etc. We don't necessarily load + // the real exports of sibling imports, so keep assuming their contents. + if err := addExternalCandidates(p, p.missingRefs, filename); err != nil { + return nil, err + } + + p.lastTry = true + fixes, _ := p.fix() + return fixes, nil +} + +// MaxRelevance is the highest relevance, used for the standard library. +// Chosen arbitrarily to match pre-existing gopls code. +const MaxRelevance = 7.0 + +// getCandidatePkgs works with the passed callback to find all acceptable packages. +// It deduplicates by import path, and uses a cached stdlib rather than reading +// from disk. +func getCandidatePkgs(ctx context.Context, wrappedCallback *scanCallback, filename, filePkg string, env *ProcessEnv) error { + notSelf := func(p *pkg) bool { + return p.packageName != filePkg || p.dir != filepath.Dir(filename) + } + goenv, err := env.goEnv() + if err != nil { + return err + } + + var mu sync.Mutex // to guard asynchronous access to dupCheck + dupCheck := map[string]struct{}{} + + // Start off with the standard library. + for importPath, exports := range stdlib { + p := &pkg{ + dir: filepath.Join(goenv["GOROOT"], "src", importPath), + importPathShort: importPath, + packageName: path.Base(importPath), + relevance: MaxRelevance, + } + dupCheck[importPath] = struct{}{} + if notSelf(p) && wrappedCallback.dirFound(p) && wrappedCallback.packageNameLoaded(p) { + wrappedCallback.exportsLoaded(p, exports) + } + } + + scanFilter := &scanCallback{ + rootFound: func(root gopathwalk.Root) bool { + // Exclude goroot results -- getting them is relatively expensive, not cached, + // and generally redundant with the in-memory version. + return root.Type != gopathwalk.RootGOROOT && wrappedCallback.rootFound(root) + }, + dirFound: wrappedCallback.dirFound, + packageNameLoaded: func(pkg *pkg) bool { + mu.Lock() + defer mu.Unlock() + if _, ok := dupCheck[pkg.importPathShort]; ok { + return false + } + dupCheck[pkg.importPathShort] = struct{}{} + return notSelf(pkg) && wrappedCallback.packageNameLoaded(pkg) + }, + exportsLoaded: func(pkg *pkg, exports []string) { + // If we're an x_test, load the package under test's test variant. + if strings.HasSuffix(filePkg, "_test") && pkg.dir == filepath.Dir(filename) { + var err error + _, exports, err = loadExportsFromFiles(ctx, env, pkg.dir, true) + if err != nil { + return + } + } + wrappedCallback.exportsLoaded(pkg, exports) + }, + } + resolver, err := env.GetResolver() + if err != nil { + return err + } + return resolver.scan(ctx, scanFilter) +} + +func ScoreImportPaths(ctx context.Context, env *ProcessEnv, paths []string) (map[string]float64, error) { + result := make(map[string]float64) + resolver, err := env.GetResolver() + if err != nil { + return nil, err + } + for _, path := range paths { + result[path] = resolver.scoreImportPath(ctx, path) + } + return result, nil +} + +func PrimeCache(ctx context.Context, env *ProcessEnv) error { + // Fully scan the disk for directories, but don't actually read any Go files. + callback := &scanCallback{ + rootFound: func(gopathwalk.Root) bool { + return true + }, + dirFound: func(pkg *pkg) bool { + return false + }, + packageNameLoaded: func(pkg *pkg) bool { + return false + }, + } + return getCandidatePkgs(ctx, callback, "", "", env) +} + +func candidateImportName(pkg *pkg) string { + if ImportPathToAssumedName(pkg.importPathShort) != pkg.packageName { + return pkg.packageName + } + return "" +} + +// GetAllCandidates calls wrapped for each package whose name starts with +// searchPrefix, and can be imported from filename with the package name filePkg. +func GetAllCandidates(ctx context.Context, wrapped func(ImportFix), searchPrefix, filename, filePkg string, env *ProcessEnv) error { + callback := &scanCallback{ + rootFound: func(gopathwalk.Root) bool { + return true + }, + dirFound: func(pkg *pkg) bool { + if !canUse(filename, pkg.dir) { + return false + } + // Try the assumed package name first, then a simpler path match + // in case of packages named vN, which are not uncommon. + return strings.HasPrefix(ImportPathToAssumedName(pkg.importPathShort), searchPrefix) || + strings.HasPrefix(path.Base(pkg.importPathShort), searchPrefix) + }, + packageNameLoaded: func(pkg *pkg) bool { + if !strings.HasPrefix(pkg.packageName, searchPrefix) { + return false + } + wrapped(ImportFix{ + StmtInfo: ImportInfo{ + ImportPath: pkg.importPathShort, + Name: candidateImportName(pkg), + }, + IdentName: pkg.packageName, + FixType: AddImport, + Relevance: pkg.relevance, + }) + return false + }, + } + return getCandidatePkgs(ctx, callback, filename, filePkg, env) +} + +// GetImportPaths calls wrapped for each package whose import path starts with +// searchPrefix, and can be imported from filename with the package name filePkg. +func GetImportPaths(ctx context.Context, wrapped func(ImportFix), searchPrefix, filename, filePkg string, env *ProcessEnv) error { + callback := &scanCallback{ + rootFound: func(gopathwalk.Root) bool { + return true + }, + dirFound: func(pkg *pkg) bool { + if !canUse(filename, pkg.dir) { + return false + } + return strings.HasPrefix(pkg.importPathShort, searchPrefix) + }, + packageNameLoaded: func(pkg *pkg) bool { + wrapped(ImportFix{ + StmtInfo: ImportInfo{ + ImportPath: pkg.importPathShort, + Name: candidateImportName(pkg), + }, + IdentName: pkg.packageName, + FixType: AddImport, + Relevance: pkg.relevance, + }) + return false + }, + } + return getCandidatePkgs(ctx, callback, filename, filePkg, env) +} + +// A PackageExport is a package and its exports. +type PackageExport struct { + Fix *ImportFix + Exports []string +} + +// GetPackageExports returns all known packages with name pkg and their exports. +func GetPackageExports(ctx context.Context, wrapped func(PackageExport), searchPkg, filename, filePkg string, env *ProcessEnv) error { + callback := &scanCallback{ + rootFound: func(gopathwalk.Root) bool { + return true + }, + dirFound: func(pkg *pkg) bool { + return pkgIsCandidate(filename, references{searchPkg: nil}, pkg) + }, + packageNameLoaded: func(pkg *pkg) bool { + return pkg.packageName == searchPkg + }, + exportsLoaded: func(pkg *pkg, exports []string) { + sort.Strings(exports) + wrapped(PackageExport{ + Fix: &ImportFix{ + StmtInfo: ImportInfo{ + ImportPath: pkg.importPathShort, + Name: candidateImportName(pkg), + }, + IdentName: pkg.packageName, + FixType: AddImport, + Relevance: pkg.relevance, + }, + Exports: exports, + }) + }, + } + return getCandidatePkgs(ctx, callback, filename, filePkg, env) +} + +var RequiredGoEnvVars = []string{"GO111MODULE", "GOFLAGS", "GOINSECURE", "GOMOD", "GOMODCACHE", "GONOPROXY", "GONOSUMDB", "GOPATH", "GOPROXY", "GOROOT", "GOSUMDB"} + +// ProcessEnv contains environment variables and settings that affect the use of +// the go command, the go/build package, etc. +type ProcessEnv struct { + GocmdRunner *gocommand.Runner + + BuildFlags []string + ModFlag string + ModFile string + + // Env overrides the OS environment, and can be used to specify + // GOPROXY, GO111MODULE, etc. PATH cannot be set here, because + // exec.Command will not honor it. + // Specifying all of RequiredGoEnvVars avoids a call to `go env`. + Env map[string]string + + WorkingDir string + + // If Logf is non-nil, debug logging is enabled through this function. + Logf func(format string, args ...interface{}) + + initialized bool + + resolver Resolver +} + +func (e *ProcessEnv) goEnv() (map[string]string, error) { + if err := e.init(); err != nil { + return nil, err + } + return e.Env, nil +} + +func (e *ProcessEnv) matchFile(dir, name string) (bool, error) { + bctx, err := e.buildContext() + if err != nil { + return false, err + } + return bctx.MatchFile(dir, name) +} + +// CopyConfig copies the env's configuration into a new env. +func (e *ProcessEnv) CopyConfig() *ProcessEnv { + copy := &ProcessEnv{ + GocmdRunner: e.GocmdRunner, + initialized: e.initialized, + BuildFlags: e.BuildFlags, + Logf: e.Logf, + WorkingDir: e.WorkingDir, + resolver: nil, + Env: map[string]string{}, + } + for k, v := range e.Env { + copy.Env[k] = v + } + return copy +} + +func (e *ProcessEnv) init() error { + if e.initialized { + return nil + } + + foundAllRequired := true + for _, k := range RequiredGoEnvVars { + if _, ok := e.Env[k]; !ok { + foundAllRequired = false + break + } + } + if foundAllRequired { + e.initialized = true + return nil + } + + if e.Env == nil { + e.Env = map[string]string{} + } + + goEnv := map[string]string{} + stdout, err := e.invokeGo(context.TODO(), "env", append([]string{"-json"}, RequiredGoEnvVars...)...) + if err != nil { + return err + } + if err := json.Unmarshal(stdout.Bytes(), &goEnv); err != nil { + return err + } + for k, v := range goEnv { + e.Env[k] = v + } + e.initialized = true + return nil +} + +func (e *ProcessEnv) env() []string { + var env []string // the gocommand package will prepend os.Environ. + for k, v := range e.Env { + env = append(env, k+"="+v) + } + return env +} + +func (e *ProcessEnv) GetResolver() (Resolver, error) { + if e.resolver != nil { + return e.resolver, nil + } + if err := e.init(); err != nil { + return nil, err + } + if len(e.Env["GOMOD"]) == 0 { + e.resolver = newGopathResolver(e) + return e.resolver, nil + } + e.resolver = newModuleResolver(e) + return e.resolver, nil +} + +func (e *ProcessEnv) buildContext() (*build.Context, error) { + ctx := build.Default + goenv, err := e.goEnv() + if err != nil { + return nil, err + } + ctx.GOROOT = goenv["GOROOT"] + ctx.GOPATH = goenv["GOPATH"] + + // As of Go 1.14, build.Context has a Dir field + // (see golang.org/issue/34860). + // Populate it only if present. + rc := reflect.ValueOf(&ctx).Elem() + dir := rc.FieldByName("Dir") + if dir.IsValid() && dir.Kind() == reflect.String { + dir.SetString(e.WorkingDir) + } + + // Since Go 1.11, go/build.Context.Import may invoke 'go list' depending on + // the value in GO111MODULE in the process's environment. We always want to + // run in GOPATH mode when calling Import, so we need to prevent this from + // happening. In Go 1.16, GO111MODULE defaults to "on", so this problem comes + // up more frequently. + // + // HACK: setting any of the Context I/O hooks prevents Import from invoking + // 'go list', regardless of GO111MODULE. This is undocumented, but it's + // unlikely to change before GOPATH support is removed. + ctx.ReadDir = ioutil.ReadDir + + return &ctx, nil +} + +func (e *ProcessEnv) invokeGo(ctx context.Context, verb string, args ...string) (*bytes.Buffer, error) { + inv := gocommand.Invocation{ + Verb: verb, + Args: args, + BuildFlags: e.BuildFlags, + Env: e.env(), + Logf: e.Logf, + WorkingDir: e.WorkingDir, + } + return e.GocmdRunner.Run(ctx, inv) +} + +func addStdlibCandidates(pass *pass, refs references) error { + goenv, err := pass.env.goEnv() + if err != nil { + return err + } + add := func(pkg string) { + // Prevent self-imports. + if path.Base(pkg) == pass.f.Name.Name && filepath.Join(goenv["GOROOT"], "src", pkg) == pass.srcDir { + return + } + exports := copyExports(stdlib[pkg]) + pass.addCandidate( + &ImportInfo{ImportPath: pkg}, + &packageInfo{name: path.Base(pkg), exports: exports}) + } + for left := range refs { + if left == "rand" { + // Make sure we try crypto/rand before math/rand. + add("crypto/rand") + add("math/rand") + continue + } + for importPath := range stdlib { + if path.Base(importPath) == left { + add(importPath) + } + } + } + return nil +} + +// A Resolver does the build-system-specific parts of goimports. +type Resolver interface { + // loadPackageNames loads the package names in importPaths. + loadPackageNames(importPaths []string, srcDir string) (map[string]string, error) + // scan works with callback to search for packages. See scanCallback for details. + scan(ctx context.Context, callback *scanCallback) error + // loadExports returns the set of exported symbols in the package at dir. + // loadExports may be called concurrently. + loadExports(ctx context.Context, pkg *pkg, includeTest bool) (string, []string, error) + // scoreImportPath returns the relevance for an import path. + scoreImportPath(ctx context.Context, path string) float64 + + ClearForNewScan() +} + +// A scanCallback controls a call to scan and receives its results. +// In general, minor errors will be silently discarded; a user should not +// expect to receive a full series of calls for everything. +type scanCallback struct { + // rootFound is called before scanning a new root dir. If it returns true, + // the root will be scanned. Returning false will not necessarily prevent + // directories from that root making it to dirFound. + rootFound func(gopathwalk.Root) bool + // dirFound is called when a directory is found that is possibly a Go package. + // pkg will be populated with everything except packageName. + // If it returns true, the package's name will be loaded. + dirFound func(pkg *pkg) bool + // packageNameLoaded is called when a package is found and its name is loaded. + // If it returns true, the package's exports will be loaded. + packageNameLoaded func(pkg *pkg) bool + // exportsLoaded is called when a package's exports have been loaded. + exportsLoaded func(pkg *pkg, exports []string) +} + +func addExternalCandidates(pass *pass, refs references, filename string) error { + var mu sync.Mutex + found := make(map[string][]pkgDistance) + callback := &scanCallback{ + rootFound: func(gopathwalk.Root) bool { + return true // We want everything. + }, + dirFound: func(pkg *pkg) bool { + return pkgIsCandidate(filename, refs, pkg) + }, + packageNameLoaded: func(pkg *pkg) bool { + if _, want := refs[pkg.packageName]; !want { + return false + } + if pkg.dir == pass.srcDir && pass.f.Name.Name == pkg.packageName { + // The candidate is in the same directory and has the + // same package name. Don't try to import ourselves. + return false + } + if !canUse(filename, pkg.dir) { + return false + } + mu.Lock() + defer mu.Unlock() + found[pkg.packageName] = append(found[pkg.packageName], pkgDistance{pkg, distance(pass.srcDir, pkg.dir)}) + return false // We'll do our own loading after we sort. + }, + } + resolver, err := pass.env.GetResolver() + if err != nil { + return err + } + if err = resolver.scan(context.Background(), callback); err != nil { + return err + } + + // Search for imports matching potential package references. + type result struct { + imp *ImportInfo + pkg *packageInfo + } + results := make(chan result, len(refs)) + + ctx, cancel := context.WithCancel(context.TODO()) + var wg sync.WaitGroup + defer func() { + cancel() + wg.Wait() + }() + var ( + firstErr error + firstErrOnce sync.Once + ) + for pkgName, symbols := range refs { + wg.Add(1) + go func(pkgName string, symbols map[string]bool) { + defer wg.Done() + + found, err := findImport(ctx, pass, found[pkgName], pkgName, symbols, filename) + + if err != nil { + firstErrOnce.Do(func() { + firstErr = err + cancel() + }) + return + } + + if found == nil { + return // No matching package. + } + + imp := &ImportInfo{ + ImportPath: found.importPathShort, + } + + pkg := &packageInfo{ + name: pkgName, + exports: symbols, + } + results <- result{imp, pkg} + }(pkgName, symbols) + } + go func() { + wg.Wait() + close(results) + }() + + for result := range results { + pass.addCandidate(result.imp, result.pkg) + } + return firstErr +} + +// notIdentifier reports whether ch is an invalid identifier character. +func notIdentifier(ch rune) bool { + return !('a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || + '0' <= ch && ch <= '9' || + ch == '_' || + ch >= utf8.RuneSelf && (unicode.IsLetter(ch) || unicode.IsDigit(ch))) +} + +// ImportPathToAssumedName returns the assumed package name of an import path. +// It does this using only string parsing of the import path. +// It picks the last element of the path that does not look like a major +// version, and then picks the valid identifier off the start of that element. +// It is used to determine if a local rename should be added to an import for +// clarity. +// This function could be moved to a standard package and exported if we want +// for use in other tools. +func ImportPathToAssumedName(importPath string) string { + base := path.Base(importPath) + if strings.HasPrefix(base, "v") { + if _, err := strconv.Atoi(base[1:]); err == nil { + dir := path.Dir(importPath) + if dir != "." { + base = path.Base(dir) + } + } + } + base = strings.TrimPrefix(base, "go-") + if i := strings.IndexFunc(base, notIdentifier); i >= 0 { + base = base[:i] + } + return base +} + +// gopathResolver implements resolver for GOPATH workspaces. +type gopathResolver struct { + env *ProcessEnv + walked bool + cache *dirInfoCache + scanSema chan struct{} // scanSema prevents concurrent scans. +} + +func newGopathResolver(env *ProcessEnv) *gopathResolver { + r := &gopathResolver{ + env: env, + cache: &dirInfoCache{ + dirs: map[string]*directoryPackageInfo{}, + listeners: map[*int]cacheListener{}, + }, + scanSema: make(chan struct{}, 1), + } + r.scanSema <- struct{}{} + return r +} + +func (r *gopathResolver) ClearForNewScan() { + <-r.scanSema + r.cache = &dirInfoCache{ + dirs: map[string]*directoryPackageInfo{}, + listeners: map[*int]cacheListener{}, + } + r.walked = false + r.scanSema <- struct{}{} +} + +func (r *gopathResolver) loadPackageNames(importPaths []string, srcDir string) (map[string]string, error) { + names := map[string]string{} + bctx, err := r.env.buildContext() + if err != nil { + return nil, err + } + for _, path := range importPaths { + names[path] = importPathToName(bctx, path, srcDir) + } + return names, nil +} + +// importPathToName finds out the actual package name, as declared in its .go files. +func importPathToName(bctx *build.Context, importPath, srcDir string) string { + // Fast path for standard library without going to disk. + if _, ok := stdlib[importPath]; ok { + return path.Base(importPath) // stdlib packages always match their paths. + } + + buildPkg, err := bctx.Import(importPath, srcDir, build.FindOnly) + if err != nil { + return "" + } + pkgName, err := packageDirToName(buildPkg.Dir) + if err != nil { + return "" + } + return pkgName +} + +// packageDirToName is a faster version of build.Import if +// the only thing desired is the package name. Given a directory, +// packageDirToName then only parses one file in the package, +// trusting that the files in the directory are consistent. +func packageDirToName(dir string) (packageName string, err error) { + d, err := os.Open(dir) + if err != nil { + return "", err + } + names, err := d.Readdirnames(-1) + d.Close() + if err != nil { + return "", err + } + sort.Strings(names) // to have predictable behavior + var lastErr error + var nfile int + for _, name := range names { + if !strings.HasSuffix(name, ".go") { + continue + } + if strings.HasSuffix(name, "_test.go") { + continue + } + nfile++ + fullFile := filepath.Join(dir, name) + + fset := token.NewFileSet() + f, err := parser.ParseFile(fset, fullFile, nil, parser.PackageClauseOnly) + if err != nil { + lastErr = err + continue + } + pkgName := f.Name.Name + if pkgName == "documentation" { + // Special case from go/build.ImportDir, not + // handled by ctx.MatchFile. + continue + } + if pkgName == "main" { + // Also skip package main, assuming it's a +build ignore generator or example. + // Since you can't import a package main anyway, there's no harm here. + continue + } + return pkgName, nil + } + if lastErr != nil { + return "", lastErr + } + return "", fmt.Errorf("no importable package found in %d Go files", nfile) +} + +type pkg struct { + dir string // absolute file path to pkg directory ("/usr/lib/go/src/net/http") + importPathShort string // vendorless import path ("net/http", "a/b") + packageName string // package name loaded from source if requested + relevance float64 // a weakly-defined score of how relevant a package is. 0 is most relevant. +} + +type pkgDistance struct { + pkg *pkg + distance int // relative distance to target +} + +// byDistanceOrImportPathShortLength sorts by relative distance breaking ties +// on the short import path length and then the import string itself. +type byDistanceOrImportPathShortLength []pkgDistance + +func (s byDistanceOrImportPathShortLength) Len() int { return len(s) } +func (s byDistanceOrImportPathShortLength) Less(i, j int) bool { + di, dj := s[i].distance, s[j].distance + if di == -1 { + return false + } + if dj == -1 { + return true + } + if di != dj { + return di < dj + } + + vi, vj := s[i].pkg.importPathShort, s[j].pkg.importPathShort + if len(vi) != len(vj) { + return len(vi) < len(vj) + } + return vi < vj +} +func (s byDistanceOrImportPathShortLength) Swap(i, j int) { s[i], s[j] = s[j], s[i] } + +func distance(basepath, targetpath string) int { + p, err := filepath.Rel(basepath, targetpath) + if err != nil { + return -1 + } + if p == "." { + return 0 + } + return strings.Count(p, string(filepath.Separator)) + 1 +} + +func (r *gopathResolver) scan(ctx context.Context, callback *scanCallback) error { + add := func(root gopathwalk.Root, dir string) { + // We assume cached directories have not changed. We can skip them and their + // children. + if _, ok := r.cache.Load(dir); ok { + return + } + + importpath := filepath.ToSlash(dir[len(root.Path)+len("/"):]) + info := directoryPackageInfo{ + status: directoryScanned, + dir: dir, + rootType: root.Type, + nonCanonicalImportPath: VendorlessPath(importpath), + } + r.cache.Store(dir, info) + } + processDir := func(info directoryPackageInfo) { + // Skip this directory if we were not able to get the package information successfully. + if scanned, err := info.reachedStatus(directoryScanned); !scanned || err != nil { + return + } + + p := &pkg{ + importPathShort: info.nonCanonicalImportPath, + dir: info.dir, + relevance: MaxRelevance - 1, + } + if info.rootType == gopathwalk.RootGOROOT { + p.relevance = MaxRelevance + } + + if !callback.dirFound(p) { + return + } + var err error + p.packageName, err = r.cache.CachePackageName(info) + if err != nil { + return + } + + if !callback.packageNameLoaded(p) { + return + } + if _, exports, err := r.loadExports(ctx, p, false); err == nil { + callback.exportsLoaded(p, exports) + } + } + stop := r.cache.ScanAndListen(ctx, processDir) + defer stop() + + goenv, err := r.env.goEnv() + if err != nil { + return err + } + var roots []gopathwalk.Root + roots = append(roots, gopathwalk.Root{filepath.Join(goenv["GOROOT"], "src"), gopathwalk.RootGOROOT}) + for _, p := range filepath.SplitList(goenv["GOPATH"]) { + roots = append(roots, gopathwalk.Root{filepath.Join(p, "src"), gopathwalk.RootGOPATH}) + } + // The callback is not necessarily safe to use in the goroutine below. Process roots eagerly. + roots = filterRoots(roots, callback.rootFound) + // We can't cancel walks, because we need them to finish to have a usable + // cache. Instead, run them in a separate goroutine and detach. + scanDone := make(chan struct{}) + go func() { + select { + case <-ctx.Done(): + return + case <-r.scanSema: + } + defer func() { r.scanSema <- struct{}{} }() + gopathwalk.Walk(roots, add, gopathwalk.Options{Logf: r.env.Logf, ModulesEnabled: false}) + close(scanDone) + }() + select { + case <-ctx.Done(): + case <-scanDone: + } + return nil +} + +func (r *gopathResolver) scoreImportPath(ctx context.Context, path string) float64 { + if _, ok := stdlib[path]; ok { + return MaxRelevance + } + return MaxRelevance - 1 +} + +func filterRoots(roots []gopathwalk.Root, include func(gopathwalk.Root) bool) []gopathwalk.Root { + var result []gopathwalk.Root + for _, root := range roots { + if !include(root) { + continue + } + result = append(result, root) + } + return result +} + +func (r *gopathResolver) loadExports(ctx context.Context, pkg *pkg, includeTest bool) (string, []string, error) { + if info, ok := r.cache.Load(pkg.dir); ok && !includeTest { + return r.cache.CacheExports(ctx, r.env, info) + } + return loadExportsFromFiles(ctx, r.env, pkg.dir, includeTest) +} + +// VendorlessPath returns the devendorized version of the import path ipath. +// For example, VendorlessPath("foo/bar/vendor/a/b") returns "a/b". +func VendorlessPath(ipath string) string { + // Devendorize for use in import statement. + if i := strings.LastIndex(ipath, "/vendor/"); i >= 0 { + return ipath[i+len("/vendor/"):] + } + if strings.HasPrefix(ipath, "vendor/") { + return ipath[len("vendor/"):] + } + return ipath +} + +func loadExportsFromFiles(ctx context.Context, env *ProcessEnv, dir string, includeTest bool) (string, []string, error) { + // Look for non-test, buildable .go files which could provide exports. + all, err := ioutil.ReadDir(dir) + if err != nil { + return "", nil, err + } + var files []os.FileInfo + for _, fi := range all { + name := fi.Name() + if !strings.HasSuffix(name, ".go") || (!includeTest && strings.HasSuffix(name, "_test.go")) { + continue + } + match, err := env.matchFile(dir, fi.Name()) + if err != nil || !match { + continue + } + files = append(files, fi) + } + + if len(files) == 0 { + return "", nil, fmt.Errorf("dir %v contains no buildable, non-test .go files", dir) + } + + var pkgName string + var exports []string + fset := token.NewFileSet() + for _, fi := range files { + select { + case <-ctx.Done(): + return "", nil, ctx.Err() + default: + } + + fullFile := filepath.Join(dir, fi.Name()) + f, err := parser.ParseFile(fset, fullFile, nil, 0) + if err != nil { + if env.Logf != nil { + env.Logf("error parsing %v: %v", fullFile, err) + } + continue + } + if f.Name.Name == "documentation" { + // Special case from go/build.ImportDir, not + // handled by MatchFile above. + continue + } + if includeTest && strings.HasSuffix(f.Name.Name, "_test") { + // x_test package. We want internal test files only. + continue + } + pkgName = f.Name.Name + for name := range f.Scope.Objects { + if ast.IsExported(name) { + exports = append(exports, name) + } + } + } + + if env.Logf != nil { + sortedExports := append([]string(nil), exports...) + sort.Strings(sortedExports) + env.Logf("loaded exports in dir %v (package %v): %v", dir, pkgName, strings.Join(sortedExports, ", ")) + } + return pkgName, exports, nil +} + +// findImport searches for a package with the given symbols. +// If no package is found, findImport returns ("", false, nil) +func findImport(ctx context.Context, pass *pass, candidates []pkgDistance, pkgName string, symbols map[string]bool, filename string) (*pkg, error) { + // Sort the candidates by their import package length, + // assuming that shorter package names are better than long + // ones. Note that this sorts by the de-vendored name, so + // there's no "penalty" for vendoring. + sort.Sort(byDistanceOrImportPathShortLength(candidates)) + if pass.env.Logf != nil { + for i, c := range candidates { + pass.env.Logf("%s candidate %d/%d: %v in %v", pkgName, i+1, len(candidates), c.pkg.importPathShort, c.pkg.dir) + } + } + resolver, err := pass.env.GetResolver() + if err != nil { + return nil, err + } + + // Collect exports for packages with matching names. + rescv := make([]chan *pkg, len(candidates)) + for i := range candidates { + rescv[i] = make(chan *pkg, 1) + } + const maxConcurrentPackageImport = 4 + loadExportsSem := make(chan struct{}, maxConcurrentPackageImport) + + ctx, cancel := context.WithCancel(ctx) + var wg sync.WaitGroup + defer func() { + cancel() + wg.Wait() + }() + + wg.Add(1) + go func() { + defer wg.Done() + for i, c := range candidates { + select { + case loadExportsSem <- struct{}{}: + case <-ctx.Done(): + return + } + + wg.Add(1) + go func(c pkgDistance, resc chan<- *pkg) { + defer func() { + <-loadExportsSem + wg.Done() + }() + + if pass.env.Logf != nil { + pass.env.Logf("loading exports in dir %s (seeking package %s)", c.pkg.dir, pkgName) + } + // If we're an x_test, load the package under test's test variant. + includeTest := strings.HasSuffix(pass.f.Name.Name, "_test") && c.pkg.dir == pass.srcDir + _, exports, err := resolver.loadExports(ctx, c.pkg, includeTest) + if err != nil { + if pass.env.Logf != nil { + pass.env.Logf("loading exports in dir %s (seeking package %s): %v", c.pkg.dir, pkgName, err) + } + resc <- nil + return + } + + exportsMap := make(map[string]bool, len(exports)) + for _, sym := range exports { + exportsMap[sym] = true + } + + // If it doesn't have the right + // symbols, send nil to mean no match. + for symbol := range symbols { + if !exportsMap[symbol] { + resc <- nil + return + } + } + resc <- c.pkg + }(c, rescv[i]) + } + }() + + for _, resc := range rescv { + pkg := <-resc + if pkg == nil { + continue + } + return pkg, nil + } + return nil, nil +} + +// pkgIsCandidate reports whether pkg is a candidate for satisfying the +// finding which package pkgIdent in the file named by filename is trying +// to refer to. +// +// This check is purely lexical and is meant to be as fast as possible +// because it's run over all $GOPATH directories to filter out poor +// candidates in order to limit the CPU and I/O later parsing the +// exports in candidate packages. +// +// filename is the file being formatted. +// pkgIdent is the package being searched for, like "client" (if +// searching for "client.New") +func pkgIsCandidate(filename string, refs references, pkg *pkg) bool { + // Check "internal" and "vendor" visibility: + if !canUse(filename, pkg.dir) { + return false + } + + // Speed optimization to minimize disk I/O: + // the last two components on disk must contain the + // package name somewhere. + // + // This permits mismatch naming like directory + // "go-foo" being package "foo", or "pkg.v3" being "pkg", + // or directory "google.golang.org/api/cloudbilling/v1" + // being package "cloudbilling", but doesn't + // permit a directory "foo" to be package + // "bar", which is strongly discouraged + // anyway. There's no reason goimports needs + // to be slow just to accommodate that. + for pkgIdent := range refs { + lastTwo := lastTwoComponents(pkg.importPathShort) + if strings.Contains(lastTwo, pkgIdent) { + return true + } + if hasHyphenOrUpperASCII(lastTwo) && !hasHyphenOrUpperASCII(pkgIdent) { + lastTwo = lowerASCIIAndRemoveHyphen(lastTwo) + if strings.Contains(lastTwo, pkgIdent) { + return true + } + } + } + return false +} + +func hasHyphenOrUpperASCII(s string) bool { + for i := 0; i < len(s); i++ { + b := s[i] + if b == '-' || ('A' <= b && b <= 'Z') { + return true + } + } + return false +} + +func lowerASCIIAndRemoveHyphen(s string) (ret string) { + buf := make([]byte, 0, len(s)) + for i := 0; i < len(s); i++ { + b := s[i] + switch { + case b == '-': + continue + case 'A' <= b && b <= 'Z': + buf = append(buf, b+('a'-'A')) + default: + buf = append(buf, b) + } + } + return string(buf) +} + +// canUse reports whether the package in dir is usable from filename, +// respecting the Go "internal" and "vendor" visibility rules. +func canUse(filename, dir string) bool { + // Fast path check, before any allocations. If it doesn't contain vendor + // or internal, it's not tricky: + // Note that this can false-negative on directories like "notinternal", + // but we check it correctly below. This is just a fast path. + if !strings.Contains(dir, "vendor") && !strings.Contains(dir, "internal") { + return true + } + + dirSlash := filepath.ToSlash(dir) + if !strings.Contains(dirSlash, "/vendor/") && !strings.Contains(dirSlash, "/internal/") && !strings.HasSuffix(dirSlash, "/internal") { + return true + } + // Vendor or internal directory only visible from children of parent. + // That means the path from the current directory to the target directory + // can contain ../vendor or ../internal but not ../foo/vendor or ../foo/internal + // or bar/vendor or bar/internal. + // After stripping all the leading ../, the only okay place to see vendor or internal + // is at the very beginning of the path. + absfile, err := filepath.Abs(filename) + if err != nil { + return false + } + absdir, err := filepath.Abs(dir) + if err != nil { + return false + } + rel, err := filepath.Rel(absfile, absdir) + if err != nil { + return false + } + relSlash := filepath.ToSlash(rel) + if i := strings.LastIndex(relSlash, "../"); i >= 0 { + relSlash = relSlash[i+len("../"):] + } + return !strings.Contains(relSlash, "/vendor/") && !strings.Contains(relSlash, "/internal/") && !strings.HasSuffix(relSlash, "/internal") +} + +// lastTwoComponents returns at most the last two path components +// of v, using either / or \ as the path separator. +func lastTwoComponents(v string) string { + nslash := 0 + for i := len(v) - 1; i >= 0; i-- { + if v[i] == '/' || v[i] == '\\' { + nslash++ + if nslash == 2 { + return v[i:] + } + } + } + return v +} + +type visitFn func(node ast.Node) ast.Visitor + +func (fn visitFn) Visit(node ast.Node) ast.Visitor { + return fn(node) +} + +func copyExports(pkg []string) map[string]bool { + m := make(map[string]bool, len(pkg)) + for _, v := range pkg { + m[v] = true + } + return m +} diff --git a/vendor/golang.org/x/tools/internal/imports/imports.go b/vendor/golang.org/x/tools/internal/imports/imports.go new file mode 100644 index 000000000..2815edc33 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/imports/imports.go @@ -0,0 +1,346 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:generate go run mkstdlib.go + +// Package imports implements a Go pretty-printer (like package "go/format") +// that also adds or removes import statements as necessary. +package imports + +import ( + "bufio" + "bytes" + "fmt" + "go/ast" + "go/format" + "go/parser" + "go/printer" + "go/token" + "io" + "regexp" + "strconv" + "strings" + + "golang.org/x/tools/go/ast/astutil" +) + +// Options is golang.org/x/tools/imports.Options with extra internal-only options. +type Options struct { + Env *ProcessEnv // The environment to use. Note: this contains the cached module and filesystem state. + + // LocalPrefix is a comma-separated string of import path prefixes, which, if + // set, instructs Process to sort the import paths with the given prefixes + // into another group after 3rd-party packages. + LocalPrefix string + + Fragment bool // Accept fragment of a source file (no package statement) + AllErrors bool // Report all errors (not just the first 10 on different lines) + + Comments bool // Print comments (true if nil *Options provided) + TabIndent bool // Use tabs for indent (true if nil *Options provided) + TabWidth int // Tab width (8 if nil *Options provided) + + FormatOnly bool // Disable the insertion and deletion of imports +} + +// Process implements golang.org/x/tools/imports.Process with explicit context in opt.Env. +func Process(filename string, src []byte, opt *Options) (formatted []byte, err error) { + fileSet := token.NewFileSet() + file, adjust, err := parse(fileSet, filename, src, opt) + if err != nil { + return nil, err + } + + if !opt.FormatOnly { + if err := fixImports(fileSet, file, filename, opt.Env); err != nil { + return nil, err + } + } + return formatFile(fileSet, file, src, adjust, opt) +} + +// FixImports returns a list of fixes to the imports that, when applied, +// will leave the imports in the same state as Process. src and opt must +// be specified. +// +// Note that filename's directory influences which imports can be chosen, +// so it is important that filename be accurate. +func FixImports(filename string, src []byte, opt *Options) (fixes []*ImportFix, err error) { + fileSet := token.NewFileSet() + file, _, err := parse(fileSet, filename, src, opt) + if err != nil { + return nil, err + } + + return getFixes(fileSet, file, filename, opt.Env) +} + +// ApplyFixes applies all of the fixes to the file and formats it. extraMode +// is added in when parsing the file. src and opts must be specified, but no +// env is needed. +func ApplyFixes(fixes []*ImportFix, filename string, src []byte, opt *Options, extraMode parser.Mode) (formatted []byte, err error) { + // Don't use parse() -- we don't care about fragments or statement lists + // here, and we need to work with unparseable files. + fileSet := token.NewFileSet() + parserMode := parser.Mode(0) + if opt.Comments { + parserMode |= parser.ParseComments + } + if opt.AllErrors { + parserMode |= parser.AllErrors + } + parserMode |= extraMode + + file, err := parser.ParseFile(fileSet, filename, src, parserMode) + if file == nil { + return nil, err + } + + // Apply the fixes to the file. + apply(fileSet, file, fixes) + + return formatFile(fileSet, file, src, nil, opt) +} + +func formatFile(fileSet *token.FileSet, file *ast.File, src []byte, adjust func(orig []byte, src []byte) []byte, opt *Options) ([]byte, error) { + mergeImports(fileSet, file) + sortImports(opt.LocalPrefix, fileSet, file) + imps := astutil.Imports(fileSet, file) + var spacesBefore []string // import paths we need spaces before + for _, impSection := range imps { + // Within each block of contiguous imports, see if any + // import lines are in different group numbers. If so, + // we'll need to put a space between them so it's + // compatible with gofmt. + lastGroup := -1 + for _, importSpec := range impSection { + importPath, _ := strconv.Unquote(importSpec.Path.Value) + groupNum := importGroup(opt.LocalPrefix, importPath) + if groupNum != lastGroup && lastGroup != -1 { + spacesBefore = append(spacesBefore, importPath) + } + lastGroup = groupNum + } + + } + + printerMode := printer.UseSpaces + if opt.TabIndent { + printerMode |= printer.TabIndent + } + printConfig := &printer.Config{Mode: printerMode, Tabwidth: opt.TabWidth} + + var buf bytes.Buffer + err := printConfig.Fprint(&buf, fileSet, file) + if err != nil { + return nil, err + } + out := buf.Bytes() + if adjust != nil { + out = adjust(src, out) + } + if len(spacesBefore) > 0 { + out, err = addImportSpaces(bytes.NewReader(out), spacesBefore) + if err != nil { + return nil, err + } + } + + out, err = format.Source(out) + if err != nil { + return nil, err + } + return out, nil +} + +// parse parses src, which was read from filename, +// as a Go source file or statement list. +func parse(fset *token.FileSet, filename string, src []byte, opt *Options) (*ast.File, func(orig, src []byte) []byte, error) { + parserMode := parser.Mode(0) + if opt.Comments { + parserMode |= parser.ParseComments + } + if opt.AllErrors { + parserMode |= parser.AllErrors + } + + // Try as whole source file. + file, err := parser.ParseFile(fset, filename, src, parserMode) + if err == nil { + return file, nil, nil + } + // If the error is that the source file didn't begin with a + // package line and we accept fragmented input, fall through to + // try as a source fragment. Stop and return on any other error. + if !opt.Fragment || !strings.Contains(err.Error(), "expected 'package'") { + return nil, nil, err + } + + // If this is a declaration list, make it a source file + // by inserting a package clause. + // Insert using a ;, not a newline, so that parse errors are on + // the correct line. + const prefix = "package main;" + psrc := append([]byte(prefix), src...) + file, err = parser.ParseFile(fset, filename, psrc, parserMode) + if err == nil { + // Gofmt will turn the ; into a \n. + // Do that ourselves now and update the file contents, + // so that positions and line numbers are correct going forward. + psrc[len(prefix)-1] = '\n' + fset.File(file.Package).SetLinesForContent(psrc) + + // If a main function exists, we will assume this is a main + // package and leave the file. + if containsMainFunc(file) { + return file, nil, nil + } + + adjust := func(orig, src []byte) []byte { + // Remove the package clause. + src = src[len(prefix):] + return matchSpace(orig, src) + } + return file, adjust, nil + } + // If the error is that the source file didn't begin with a + // declaration, fall through to try as a statement list. + // Stop and return on any other error. + if !strings.Contains(err.Error(), "expected declaration") { + return nil, nil, err + } + + // If this is a statement list, make it a source file + // by inserting a package clause and turning the list + // into a function body. This handles expressions too. + // Insert using a ;, not a newline, so that the line numbers + // in fsrc match the ones in src. + fsrc := append(append([]byte("package p; func _() {"), src...), '}') + file, err = parser.ParseFile(fset, filename, fsrc, parserMode) + if err == nil { + adjust := func(orig, src []byte) []byte { + // Remove the wrapping. + // Gofmt has turned the ; into a \n\n. + src = src[len("package p\n\nfunc _() {"):] + src = src[:len(src)-len("}\n")] + // Gofmt has also indented the function body one level. + // Remove that indent. + src = bytes.Replace(src, []byte("\n\t"), []byte("\n"), -1) + return matchSpace(orig, src) + } + return file, adjust, nil + } + + // Failed, and out of options. + return nil, nil, err +} + +// containsMainFunc checks if a file contains a function declaration with the +// function signature 'func main()' +func containsMainFunc(file *ast.File) bool { + for _, decl := range file.Decls { + if f, ok := decl.(*ast.FuncDecl); ok { + if f.Name.Name != "main" { + continue + } + + if len(f.Type.Params.List) != 0 { + continue + } + + if f.Type.Results != nil && len(f.Type.Results.List) != 0 { + continue + } + + return true + } + } + + return false +} + +func cutSpace(b []byte) (before, middle, after []byte) { + i := 0 + for i < len(b) && (b[i] == ' ' || b[i] == '\t' || b[i] == '\n') { + i++ + } + j := len(b) + for j > 0 && (b[j-1] == ' ' || b[j-1] == '\t' || b[j-1] == '\n') { + j-- + } + if i <= j { + return b[:i], b[i:j], b[j:] + } + return nil, nil, b[j:] +} + +// matchSpace reformats src to use the same space context as orig. +// 1) If orig begins with blank lines, matchSpace inserts them at the beginning of src. +// 2) matchSpace copies the indentation of the first non-blank line in orig +// to every non-blank line in src. +// 3) matchSpace copies the trailing space from orig and uses it in place +// of src's trailing space. +func matchSpace(orig []byte, src []byte) []byte { + before, _, after := cutSpace(orig) + i := bytes.LastIndex(before, []byte{'\n'}) + before, indent := before[:i+1], before[i+1:] + + _, src, _ = cutSpace(src) + + var b bytes.Buffer + b.Write(before) + for len(src) > 0 { + line := src + if i := bytes.IndexByte(line, '\n'); i >= 0 { + line, src = line[:i+1], line[i+1:] + } else { + src = nil + } + if len(line) > 0 && line[0] != '\n' { // not blank + b.Write(indent) + } + b.Write(line) + } + b.Write(after) + return b.Bytes() +} + +var impLine = regexp.MustCompile(`^\s+(?:[\w\.]+\s+)?"(.+)"`) + +func addImportSpaces(r io.Reader, breaks []string) ([]byte, error) { + var out bytes.Buffer + in := bufio.NewReader(r) + inImports := false + done := false + for { + s, err := in.ReadString('\n') + if err == io.EOF { + break + } else if err != nil { + return nil, err + } + + if !inImports && !done && strings.HasPrefix(s, "import") { + inImports = true + } + if inImports && (strings.HasPrefix(s, "var") || + strings.HasPrefix(s, "func") || + strings.HasPrefix(s, "const") || + strings.HasPrefix(s, "type")) { + done = true + inImports = false + } + if inImports && len(breaks) > 0 { + if m := impLine.FindStringSubmatch(s); m != nil { + if m[1] == breaks[0] { + out.WriteByte('\n') + breaks = breaks[1:] + } + } + } + + fmt.Fprint(&out, s) + } + return out.Bytes(), nil +} diff --git a/vendor/golang.org/x/tools/internal/imports/mod.go b/vendor/golang.org/x/tools/internal/imports/mod.go new file mode 100644 index 000000000..901449a82 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/imports/mod.go @@ -0,0 +1,688 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package imports + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "io/ioutil" + "os" + "path" + "path/filepath" + "regexp" + "sort" + "strconv" + "strings" + + "golang.org/x/mod/module" + "golang.org/x/tools/internal/gocommand" + "golang.org/x/tools/internal/gopathwalk" +) + +// ModuleResolver implements resolver for modules using the go command as little +// as feasible. +type ModuleResolver struct { + env *ProcessEnv + moduleCacheDir string + dummyVendorMod *gocommand.ModuleJSON // If vendoring is enabled, the pseudo-module that represents the /vendor directory. + roots []gopathwalk.Root + scanSema chan struct{} // scanSema prevents concurrent scans and guards scannedRoots. + scannedRoots map[gopathwalk.Root]bool + + initialized bool + main *gocommand.ModuleJSON + modsByModPath []*gocommand.ModuleJSON // All modules, ordered by # of path components in module Path... + modsByDir []*gocommand.ModuleJSON // ...or Dir. + + // moduleCacheCache stores information about the module cache. + moduleCacheCache *dirInfoCache + otherCache *dirInfoCache +} + +func newModuleResolver(e *ProcessEnv) *ModuleResolver { + r := &ModuleResolver{ + env: e, + scanSema: make(chan struct{}, 1), + } + r.scanSema <- struct{}{} + return r +} + +func (r *ModuleResolver) init() error { + if r.initialized { + return nil + } + + goenv, err := r.env.goEnv() + if err != nil { + return err + } + inv := gocommand.Invocation{ + BuildFlags: r.env.BuildFlags, + ModFlag: r.env.ModFlag, + ModFile: r.env.ModFile, + Env: r.env.env(), + Logf: r.env.Logf, + WorkingDir: r.env.WorkingDir, + } + mainMod, vendorEnabled, err := gocommand.VendorEnabled(context.TODO(), inv, r.env.GocmdRunner) + if err != nil { + return err + } + + if mainMod != nil && vendorEnabled { + // Vendor mode is on, so all the non-Main modules are irrelevant, + // and we need to search /vendor for everything. + r.main = mainMod + r.dummyVendorMod = &gocommand.ModuleJSON{ + Path: "", + Dir: filepath.Join(mainMod.Dir, "vendor"), + } + r.modsByModPath = []*gocommand.ModuleJSON{mainMod, r.dummyVendorMod} + r.modsByDir = []*gocommand.ModuleJSON{mainMod, r.dummyVendorMod} + } else { + // Vendor mode is off, so run go list -m ... to find everything. + r.initAllMods() + } + + if gmc := r.env.Env["GOMODCACHE"]; gmc != "" { + r.moduleCacheDir = gmc + } else { + gopaths := filepath.SplitList(goenv["GOPATH"]) + if len(gopaths) == 0 { + return fmt.Errorf("empty GOPATH") + } + r.moduleCacheDir = filepath.Join(gopaths[0], "/pkg/mod") + } + + sort.Slice(r.modsByModPath, func(i, j int) bool { + count := func(x int) int { + return strings.Count(r.modsByModPath[x].Path, "/") + } + return count(j) < count(i) // descending order + }) + sort.Slice(r.modsByDir, func(i, j int) bool { + count := func(x int) int { + return strings.Count(r.modsByDir[x].Dir, "/") + } + return count(j) < count(i) // descending order + }) + + r.roots = []gopathwalk.Root{ + {filepath.Join(goenv["GOROOT"], "/src"), gopathwalk.RootGOROOT}, + } + if r.main != nil { + r.roots = append(r.roots, gopathwalk.Root{r.main.Dir, gopathwalk.RootCurrentModule}) + } + if vendorEnabled { + r.roots = append(r.roots, gopathwalk.Root{r.dummyVendorMod.Dir, gopathwalk.RootOther}) + } else { + addDep := func(mod *gocommand.ModuleJSON) { + if mod.Replace == nil { + // This is redundant with the cache, but we'll skip it cheaply enough. + r.roots = append(r.roots, gopathwalk.Root{mod.Dir, gopathwalk.RootModuleCache}) + } else { + r.roots = append(r.roots, gopathwalk.Root{mod.Dir, gopathwalk.RootOther}) + } + } + // Walk dependent modules before scanning the full mod cache, direct deps first. + for _, mod := range r.modsByModPath { + if !mod.Indirect && !mod.Main { + addDep(mod) + } + } + for _, mod := range r.modsByModPath { + if mod.Indirect && !mod.Main { + addDep(mod) + } + } + r.roots = append(r.roots, gopathwalk.Root{r.moduleCacheDir, gopathwalk.RootModuleCache}) + } + + r.scannedRoots = map[gopathwalk.Root]bool{} + if r.moduleCacheCache == nil { + r.moduleCacheCache = &dirInfoCache{ + dirs: map[string]*directoryPackageInfo{}, + listeners: map[*int]cacheListener{}, + } + } + if r.otherCache == nil { + r.otherCache = &dirInfoCache{ + dirs: map[string]*directoryPackageInfo{}, + listeners: map[*int]cacheListener{}, + } + } + r.initialized = true + return nil +} + +func (r *ModuleResolver) initAllMods() error { + stdout, err := r.env.invokeGo(context.TODO(), "list", "-m", "-json", "...") + if err != nil { + return err + } + for dec := json.NewDecoder(stdout); dec.More(); { + mod := &gocommand.ModuleJSON{} + if err := dec.Decode(mod); err != nil { + return err + } + if mod.Dir == "" { + if r.env.Logf != nil { + r.env.Logf("module %v has not been downloaded and will be ignored", mod.Path) + } + // Can't do anything with a module that's not downloaded. + continue + } + // golang/go#36193: the go command doesn't always clean paths. + mod.Dir = filepath.Clean(mod.Dir) + r.modsByModPath = append(r.modsByModPath, mod) + r.modsByDir = append(r.modsByDir, mod) + if mod.Main { + r.main = mod + } + } + return nil +} + +func (r *ModuleResolver) ClearForNewScan() { + <-r.scanSema + r.scannedRoots = map[gopathwalk.Root]bool{} + r.otherCache = &dirInfoCache{ + dirs: map[string]*directoryPackageInfo{}, + listeners: map[*int]cacheListener{}, + } + r.scanSema <- struct{}{} +} + +func (r *ModuleResolver) ClearForNewMod() { + <-r.scanSema + *r = ModuleResolver{ + env: r.env, + moduleCacheCache: r.moduleCacheCache, + otherCache: r.otherCache, + scanSema: r.scanSema, + } + r.init() + r.scanSema <- struct{}{} +} + +// findPackage returns the module and directory that contains the package at +// the given import path, or returns nil, "" if no module is in scope. +func (r *ModuleResolver) findPackage(importPath string) (*gocommand.ModuleJSON, string) { + // This can't find packages in the stdlib, but that's harmless for all + // the existing code paths. + for _, m := range r.modsByModPath { + if !strings.HasPrefix(importPath, m.Path) { + continue + } + pathInModule := importPath[len(m.Path):] + pkgDir := filepath.Join(m.Dir, pathInModule) + if r.dirIsNestedModule(pkgDir, m) { + continue + } + + if info, ok := r.cacheLoad(pkgDir); ok { + if loaded, err := info.reachedStatus(nameLoaded); loaded { + if err != nil { + continue // No package in this dir. + } + return m, pkgDir + } + if scanned, err := info.reachedStatus(directoryScanned); scanned && err != nil { + continue // Dir is unreadable, etc. + } + // This is slightly wrong: a directory doesn't have to have an + // importable package to count as a package for package-to-module + // resolution. package main or _test files should count but + // don't. + // TODO(heschi): fix this. + if _, err := r.cachePackageName(info); err == nil { + return m, pkgDir + } + } + + // Not cached. Read the filesystem. + pkgFiles, err := ioutil.ReadDir(pkgDir) + if err != nil { + continue + } + // A module only contains a package if it has buildable go + // files in that directory. If not, it could be provided by an + // outer module. See #29736. + for _, fi := range pkgFiles { + if ok, _ := r.env.matchFile(pkgDir, fi.Name()); ok { + return m, pkgDir + } + } + } + return nil, "" +} + +func (r *ModuleResolver) cacheLoad(dir string) (directoryPackageInfo, bool) { + if info, ok := r.moduleCacheCache.Load(dir); ok { + return info, ok + } + return r.otherCache.Load(dir) +} + +func (r *ModuleResolver) cacheStore(info directoryPackageInfo) { + if info.rootType == gopathwalk.RootModuleCache { + r.moduleCacheCache.Store(info.dir, info) + } else { + r.otherCache.Store(info.dir, info) + } +} + +func (r *ModuleResolver) cacheKeys() []string { + return append(r.moduleCacheCache.Keys(), r.otherCache.Keys()...) +} + +// cachePackageName caches the package name for a dir already in the cache. +func (r *ModuleResolver) cachePackageName(info directoryPackageInfo) (string, error) { + if info.rootType == gopathwalk.RootModuleCache { + return r.moduleCacheCache.CachePackageName(info) + } + return r.otherCache.CachePackageName(info) +} + +func (r *ModuleResolver) cacheExports(ctx context.Context, env *ProcessEnv, info directoryPackageInfo) (string, []string, error) { + if info.rootType == gopathwalk.RootModuleCache { + return r.moduleCacheCache.CacheExports(ctx, env, info) + } + return r.otherCache.CacheExports(ctx, env, info) +} + +// findModuleByDir returns the module that contains dir, or nil if no such +// module is in scope. +func (r *ModuleResolver) findModuleByDir(dir string) *gocommand.ModuleJSON { + // This is quite tricky and may not be correct. dir could be: + // - a package in the main module. + // - a replace target underneath the main module's directory. + // - a nested module in the above. + // - a replace target somewhere totally random. + // - a nested module in the above. + // - in the mod cache. + // - in /vendor/ in -mod=vendor mode. + // - nested module? Dunno. + // Rumor has it that replace targets cannot contain other replace targets. + for _, m := range r.modsByDir { + if !strings.HasPrefix(dir, m.Dir) { + continue + } + + if r.dirIsNestedModule(dir, m) { + continue + } + + return m + } + return nil +} + +// dirIsNestedModule reports if dir is contained in a nested module underneath +// mod, not actually in mod. +func (r *ModuleResolver) dirIsNestedModule(dir string, mod *gocommand.ModuleJSON) bool { + if !strings.HasPrefix(dir, mod.Dir) { + return false + } + if r.dirInModuleCache(dir) { + // Nested modules in the module cache are pruned, + // so it cannot be a nested module. + return false + } + if mod != nil && mod == r.dummyVendorMod { + // The /vendor pseudomodule is flattened and doesn't actually count. + return false + } + modDir, _ := r.modInfo(dir) + if modDir == "" { + return false + } + return modDir != mod.Dir +} + +func (r *ModuleResolver) modInfo(dir string) (modDir string, modName string) { + readModName := func(modFile string) string { + modBytes, err := ioutil.ReadFile(modFile) + if err != nil { + return "" + } + return modulePath(modBytes) + } + + if r.dirInModuleCache(dir) { + if matches := modCacheRegexp.FindStringSubmatch(dir); len(matches) == 3 { + index := strings.Index(dir, matches[1]+"@"+matches[2]) + modDir := filepath.Join(dir[:index], matches[1]+"@"+matches[2]) + return modDir, readModName(filepath.Join(modDir, "go.mod")) + } + } + for { + if info, ok := r.cacheLoad(dir); ok { + return info.moduleDir, info.moduleName + } + f := filepath.Join(dir, "go.mod") + info, err := os.Stat(f) + if err == nil && !info.IsDir() { + return dir, readModName(f) + } + + d := filepath.Dir(dir) + if len(d) >= len(dir) { + return "", "" // reached top of file system, no go.mod + } + dir = d + } +} + +func (r *ModuleResolver) dirInModuleCache(dir string) bool { + if r.moduleCacheDir == "" { + return false + } + return strings.HasPrefix(dir, r.moduleCacheDir) +} + +func (r *ModuleResolver) loadPackageNames(importPaths []string, srcDir string) (map[string]string, error) { + if err := r.init(); err != nil { + return nil, err + } + names := map[string]string{} + for _, path := range importPaths { + _, packageDir := r.findPackage(path) + if packageDir == "" { + continue + } + name, err := packageDirToName(packageDir) + if err != nil { + continue + } + names[path] = name + } + return names, nil +} + +func (r *ModuleResolver) scan(ctx context.Context, callback *scanCallback) error { + if err := r.init(); err != nil { + return err + } + + processDir := func(info directoryPackageInfo) { + // Skip this directory if we were not able to get the package information successfully. + if scanned, err := info.reachedStatus(directoryScanned); !scanned || err != nil { + return + } + pkg, err := r.canonicalize(info) + if err != nil { + return + } + + if !callback.dirFound(pkg) { + return + } + pkg.packageName, err = r.cachePackageName(info) + if err != nil { + return + } + + if !callback.packageNameLoaded(pkg) { + return + } + _, exports, err := r.loadExports(ctx, pkg, false) + if err != nil { + return + } + callback.exportsLoaded(pkg, exports) + } + + // Start processing everything in the cache, and listen for the new stuff + // we discover in the walk below. + stop1 := r.moduleCacheCache.ScanAndListen(ctx, processDir) + defer stop1() + stop2 := r.otherCache.ScanAndListen(ctx, processDir) + defer stop2() + + // We assume cached directories are fully cached, including all their + // children, and have not changed. We can skip them. + skip := func(root gopathwalk.Root, dir string) bool { + info, ok := r.cacheLoad(dir) + if !ok { + return false + } + // This directory can be skipped as long as we have already scanned it. + // Packages with errors will continue to have errors, so there is no need + // to rescan them. + packageScanned, _ := info.reachedStatus(directoryScanned) + return packageScanned + } + + // Add anything new to the cache, and process it if we're still listening. + add := func(root gopathwalk.Root, dir string) { + r.cacheStore(r.scanDirForPackage(root, dir)) + } + + // r.roots and the callback are not necessarily safe to use in the + // goroutine below. Process them eagerly. + roots := filterRoots(r.roots, callback.rootFound) + // We can't cancel walks, because we need them to finish to have a usable + // cache. Instead, run them in a separate goroutine and detach. + scanDone := make(chan struct{}) + go func() { + select { + case <-ctx.Done(): + return + case <-r.scanSema: + } + defer func() { r.scanSema <- struct{}{} }() + // We have the lock on r.scannedRoots, and no other scans can run. + for _, root := range roots { + if ctx.Err() != nil { + return + } + + if r.scannedRoots[root] { + continue + } + gopathwalk.WalkSkip([]gopathwalk.Root{root}, add, skip, gopathwalk.Options{Logf: r.env.Logf, ModulesEnabled: true}) + r.scannedRoots[root] = true + } + close(scanDone) + }() + select { + case <-ctx.Done(): + case <-scanDone: + } + return nil +} + +func (r *ModuleResolver) scoreImportPath(ctx context.Context, path string) float64 { + if _, ok := stdlib[path]; ok { + return MaxRelevance + } + mod, _ := r.findPackage(path) + return modRelevance(mod) +} + +func modRelevance(mod *gocommand.ModuleJSON) float64 { + var relevance float64 + switch { + case mod == nil: // out of scope + return MaxRelevance - 4 + case mod.Indirect: + relevance = MaxRelevance - 3 + case !mod.Main: + relevance = MaxRelevance - 2 + default: + relevance = MaxRelevance - 1 // main module ties with stdlib + } + + _, versionString, ok := module.SplitPathVersion(mod.Path) + if ok { + index := strings.Index(versionString, "v") + if index == -1 { + return relevance + } + if versionNumber, err := strconv.ParseFloat(versionString[index+1:], 64); err == nil { + relevance += versionNumber / 1000 + } + } + + return relevance +} + +// canonicalize gets the result of canonicalizing the packages using the results +// of initializing the resolver from 'go list -m'. +func (r *ModuleResolver) canonicalize(info directoryPackageInfo) (*pkg, error) { + // Packages in GOROOT are already canonical, regardless of the std/cmd modules. + if info.rootType == gopathwalk.RootGOROOT { + return &pkg{ + importPathShort: info.nonCanonicalImportPath, + dir: info.dir, + packageName: path.Base(info.nonCanonicalImportPath), + relevance: MaxRelevance, + }, nil + } + + importPath := info.nonCanonicalImportPath + mod := r.findModuleByDir(info.dir) + // Check if the directory is underneath a module that's in scope. + if mod != nil { + // It is. If dir is the target of a replace directive, + // our guessed import path is wrong. Use the real one. + if mod.Dir == info.dir { + importPath = mod.Path + } else { + dirInMod := info.dir[len(mod.Dir)+len("/"):] + importPath = path.Join(mod.Path, filepath.ToSlash(dirInMod)) + } + } else if !strings.HasPrefix(importPath, info.moduleName) { + // The module's name doesn't match the package's import path. It + // probably needs a replace directive we don't have. + return nil, fmt.Errorf("package in %q is not valid without a replace statement", info.dir) + } + + res := &pkg{ + importPathShort: importPath, + dir: info.dir, + relevance: modRelevance(mod), + } + // We may have discovered a package that has a different version + // in scope already. Canonicalize to that one if possible. + if _, canonicalDir := r.findPackage(importPath); canonicalDir != "" { + res.dir = canonicalDir + } + return res, nil +} + +func (r *ModuleResolver) loadExports(ctx context.Context, pkg *pkg, includeTest bool) (string, []string, error) { + if err := r.init(); err != nil { + return "", nil, err + } + if info, ok := r.cacheLoad(pkg.dir); ok && !includeTest { + return r.cacheExports(ctx, r.env, info) + } + return loadExportsFromFiles(ctx, r.env, pkg.dir, includeTest) +} + +func (r *ModuleResolver) scanDirForPackage(root gopathwalk.Root, dir string) directoryPackageInfo { + subdir := "" + if dir != root.Path { + subdir = dir[len(root.Path)+len("/"):] + } + importPath := filepath.ToSlash(subdir) + if strings.HasPrefix(importPath, "vendor/") { + // Only enter vendor directories if they're explicitly requested as a root. + return directoryPackageInfo{ + status: directoryScanned, + err: fmt.Errorf("unwanted vendor directory"), + } + } + switch root.Type { + case gopathwalk.RootCurrentModule: + importPath = path.Join(r.main.Path, filepath.ToSlash(subdir)) + case gopathwalk.RootModuleCache: + matches := modCacheRegexp.FindStringSubmatch(subdir) + if len(matches) == 0 { + return directoryPackageInfo{ + status: directoryScanned, + err: fmt.Errorf("invalid module cache path: %v", subdir), + } + } + modPath, err := module.UnescapePath(filepath.ToSlash(matches[1])) + if err != nil { + if r.env.Logf != nil { + r.env.Logf("decoding module cache path %q: %v", subdir, err) + } + return directoryPackageInfo{ + status: directoryScanned, + err: fmt.Errorf("decoding module cache path %q: %v", subdir, err), + } + } + importPath = path.Join(modPath, filepath.ToSlash(matches[3])) + } + + modDir, modName := r.modInfo(dir) + result := directoryPackageInfo{ + status: directoryScanned, + dir: dir, + rootType: root.Type, + nonCanonicalImportPath: importPath, + moduleDir: modDir, + moduleName: modName, + } + if root.Type == gopathwalk.RootGOROOT { + // stdlib packages are always in scope, despite the confusing go.mod + return result + } + return result +} + +// modCacheRegexp splits a path in a module cache into module, module version, and package. +var modCacheRegexp = regexp.MustCompile(`(.*)@([^/\\]*)(.*)`) + +var ( + slashSlash = []byte("//") + moduleStr = []byte("module") +) + +// modulePath returns the module path from the gomod file text. +// If it cannot find a module path, it returns an empty string. +// It is tolerant of unrelated problems in the go.mod file. +// +// Copied from cmd/go/internal/modfile. +func modulePath(mod []byte) string { + for len(mod) > 0 { + line := mod + mod = nil + if i := bytes.IndexByte(line, '\n'); i >= 0 { + line, mod = line[:i], line[i+1:] + } + if i := bytes.Index(line, slashSlash); i >= 0 { + line = line[:i] + } + line = bytes.TrimSpace(line) + if !bytes.HasPrefix(line, moduleStr) { + continue + } + line = line[len(moduleStr):] + n := len(line) + line = bytes.TrimSpace(line) + if len(line) == n || len(line) == 0 { + continue + } + + if line[0] == '"' || line[0] == '`' { + p, err := strconv.Unquote(string(line)) + if err != nil { + return "" // malformed quoted string or multiline module path + } + return p + } + + return string(line) + } + return "" // missing module path +} diff --git a/vendor/golang.org/x/tools/internal/imports/mod_cache.go b/vendor/golang.org/x/tools/internal/imports/mod_cache.go new file mode 100644 index 000000000..18dada495 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/imports/mod_cache.go @@ -0,0 +1,236 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package imports + +import ( + "context" + "fmt" + "sync" + + "golang.org/x/tools/internal/gopathwalk" +) + +// To find packages to import, the resolver needs to know about all of the +// the packages that could be imported. This includes packages that are +// already in modules that are in (1) the current module, (2) replace targets, +// and (3) packages in the module cache. Packages in (1) and (2) may change over +// time, as the client may edit the current module and locally replaced modules. +// The module cache (which includes all of the packages in (3)) can only +// ever be added to. +// +// The resolver can thus save state about packages in the module cache +// and guarantee that this will not change over time. To obtain information +// about new modules added to the module cache, the module cache should be +// rescanned. +// +// It is OK to serve information about modules that have been deleted, +// as they do still exist. +// TODO(suzmue): can we share information with the caller about +// what module needs to be downloaded to import this package? + +type directoryPackageStatus int + +const ( + _ directoryPackageStatus = iota + directoryScanned + nameLoaded + exportsLoaded +) + +type directoryPackageInfo struct { + // status indicates the extent to which this struct has been filled in. + status directoryPackageStatus + // err is non-nil when there was an error trying to reach status. + err error + + // Set when status >= directoryScanned. + + // dir is the absolute directory of this package. + dir string + rootType gopathwalk.RootType + // nonCanonicalImportPath is the package's expected import path. It may + // not actually be importable at that path. + nonCanonicalImportPath string + + // Module-related information. + moduleDir string // The directory that is the module root of this dir. + moduleName string // The module name that contains this dir. + + // Set when status >= nameLoaded. + + packageName string // the package name, as declared in the source. + + // Set when status >= exportsLoaded. + + exports []string +} + +// reachedStatus returns true when info has a status at least target and any error associated with +// an attempt to reach target. +func (info *directoryPackageInfo) reachedStatus(target directoryPackageStatus) (bool, error) { + if info.err == nil { + return info.status >= target, nil + } + if info.status == target { + return true, info.err + } + return true, nil +} + +// dirInfoCache is a concurrency safe map for storing information about +// directories that may contain packages. +// +// The information in this cache is built incrementally. Entries are initialized in scan. +// No new keys should be added in any other functions, as all directories containing +// packages are identified in scan. +// +// Other functions, including loadExports and findPackage, may update entries in this cache +// as they discover new things about the directory. +// +// The information in the cache is not expected to change for the cache's +// lifetime, so there is no protection against competing writes. Users should +// take care not to hold the cache across changes to the underlying files. +// +// TODO(suzmue): consider other concurrency strategies and data structures (RWLocks, sync.Map, etc) +type dirInfoCache struct { + mu sync.Mutex + // dirs stores information about packages in directories, keyed by absolute path. + dirs map[string]*directoryPackageInfo + listeners map[*int]cacheListener +} + +type cacheListener func(directoryPackageInfo) + +// ScanAndListen calls listener on all the items in the cache, and on anything +// newly added. The returned stop function waits for all in-flight callbacks to +// finish and blocks new ones. +func (d *dirInfoCache) ScanAndListen(ctx context.Context, listener cacheListener) func() { + ctx, cancel := context.WithCancel(ctx) + + // Flushing out all the callbacks is tricky without knowing how many there + // are going to be. Setting an arbitrary limit makes it much easier. + const maxInFlight = 10 + sema := make(chan struct{}, maxInFlight) + for i := 0; i < maxInFlight; i++ { + sema <- struct{}{} + } + + cookie := new(int) // A unique ID we can use for the listener. + + // We can't hold mu while calling the listener. + d.mu.Lock() + var keys []string + for key := range d.dirs { + keys = append(keys, key) + } + d.listeners[cookie] = func(info directoryPackageInfo) { + select { + case <-ctx.Done(): + return + case <-sema: + } + listener(info) + sema <- struct{}{} + } + d.mu.Unlock() + + stop := func() { + cancel() + d.mu.Lock() + delete(d.listeners, cookie) + d.mu.Unlock() + for i := 0; i < maxInFlight; i++ { + <-sema + } + } + + // Process the pre-existing keys. + for _, k := range keys { + select { + case <-ctx.Done(): + return stop + default: + } + if v, ok := d.Load(k); ok { + listener(v) + } + } + + return stop +} + +// Store stores the package info for dir. +func (d *dirInfoCache) Store(dir string, info directoryPackageInfo) { + d.mu.Lock() + _, old := d.dirs[dir] + d.dirs[dir] = &info + var listeners []cacheListener + for _, l := range d.listeners { + listeners = append(listeners, l) + } + d.mu.Unlock() + + if !old { + for _, l := range listeners { + l(info) + } + } +} + +// Load returns a copy of the directoryPackageInfo for absolute directory dir. +func (d *dirInfoCache) Load(dir string) (directoryPackageInfo, bool) { + d.mu.Lock() + defer d.mu.Unlock() + info, ok := d.dirs[dir] + if !ok { + return directoryPackageInfo{}, false + } + return *info, true +} + +// Keys returns the keys currently present in d. +func (d *dirInfoCache) Keys() (keys []string) { + d.mu.Lock() + defer d.mu.Unlock() + for key := range d.dirs { + keys = append(keys, key) + } + return keys +} + +func (d *dirInfoCache) CachePackageName(info directoryPackageInfo) (string, error) { + if loaded, err := info.reachedStatus(nameLoaded); loaded { + return info.packageName, err + } + if scanned, err := info.reachedStatus(directoryScanned); !scanned || err != nil { + return "", fmt.Errorf("cannot read package name, scan error: %v", err) + } + info.packageName, info.err = packageDirToName(info.dir) + info.status = nameLoaded + d.Store(info.dir, info) + return info.packageName, info.err +} + +func (d *dirInfoCache) CacheExports(ctx context.Context, env *ProcessEnv, info directoryPackageInfo) (string, []string, error) { + if reached, _ := info.reachedStatus(exportsLoaded); reached { + return info.packageName, info.exports, info.err + } + if reached, err := info.reachedStatus(nameLoaded); reached && err != nil { + return "", nil, err + } + info.packageName, info.exports, info.err = loadExportsFromFiles(ctx, env, info.dir, false) + if info.err == context.Canceled || info.err == context.DeadlineExceeded { + return info.packageName, info.exports, info.err + } + // The cache structure wants things to proceed linearly. We can skip a + // step here, but only if we succeed. + if info.status == nameLoaded || info.err == nil { + info.status = exportsLoaded + } else { + info.status = nameLoaded + } + d.Store(info.dir, info) + return info.packageName, info.exports, info.err +} diff --git a/vendor/golang.org/x/tools/internal/imports/sortimports.go b/vendor/golang.org/x/tools/internal/imports/sortimports.go new file mode 100644 index 000000000..be8ffa25f --- /dev/null +++ b/vendor/golang.org/x/tools/internal/imports/sortimports.go @@ -0,0 +1,280 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Hacked up copy of go/ast/import.go + +package imports + +import ( + "go/ast" + "go/token" + "sort" + "strconv" +) + +// sortImports sorts runs of consecutive import lines in import blocks in f. +// It also removes duplicate imports when it is possible to do so without data loss. +func sortImports(localPrefix string, fset *token.FileSet, f *ast.File) { + for i, d := range f.Decls { + d, ok := d.(*ast.GenDecl) + if !ok || d.Tok != token.IMPORT { + // Not an import declaration, so we're done. + // Imports are always first. + break + } + + if len(d.Specs) == 0 { + // Empty import block, remove it. + f.Decls = append(f.Decls[:i], f.Decls[i+1:]...) + } + + if !d.Lparen.IsValid() { + // Not a block: sorted by default. + continue + } + + // Identify and sort runs of specs on successive lines. + i := 0 + specs := d.Specs[:0] + for j, s := range d.Specs { + if j > i && fset.Position(s.Pos()).Line > 1+fset.Position(d.Specs[j-1].End()).Line { + // j begins a new run. End this one. + specs = append(specs, sortSpecs(localPrefix, fset, f, d.Specs[i:j])...) + i = j + } + } + specs = append(specs, sortSpecs(localPrefix, fset, f, d.Specs[i:])...) + d.Specs = specs + + // Deduping can leave a blank line before the rparen; clean that up. + if len(d.Specs) > 0 { + lastSpec := d.Specs[len(d.Specs)-1] + lastLine := fset.Position(lastSpec.Pos()).Line + if rParenLine := fset.Position(d.Rparen).Line; rParenLine > lastLine+1 { + fset.File(d.Rparen).MergeLine(rParenLine - 1) + } + } + } +} + +// mergeImports merges all the import declarations into the first one. +// Taken from golang.org/x/tools/ast/astutil. +func mergeImports(fset *token.FileSet, f *ast.File) { + if len(f.Decls) <= 1 { + return + } + + // Merge all the import declarations into the first one. + var first *ast.GenDecl + for i := 0; i < len(f.Decls); i++ { + decl := f.Decls[i] + gen, ok := decl.(*ast.GenDecl) + if !ok || gen.Tok != token.IMPORT || declImports(gen, "C") { + continue + } + if first == nil { + first = gen + continue // Don't touch the first one. + } + // We now know there is more than one package in this import + // declaration. Ensure that it ends up parenthesized. + first.Lparen = first.Pos() + // Move the imports of the other import declaration to the first one. + for _, spec := range gen.Specs { + spec.(*ast.ImportSpec).Path.ValuePos = first.Pos() + first.Specs = append(first.Specs, spec) + } + f.Decls = append(f.Decls[:i], f.Decls[i+1:]...) + i-- + } +} + +// declImports reports whether gen contains an import of path. +// Taken from golang.org/x/tools/ast/astutil. +func declImports(gen *ast.GenDecl, path string) bool { + if gen.Tok != token.IMPORT { + return false + } + for _, spec := range gen.Specs { + impspec := spec.(*ast.ImportSpec) + if importPath(impspec) == path { + return true + } + } + return false +} + +func importPath(s ast.Spec) string { + t, err := strconv.Unquote(s.(*ast.ImportSpec).Path.Value) + if err == nil { + return t + } + return "" +} + +func importName(s ast.Spec) string { + n := s.(*ast.ImportSpec).Name + if n == nil { + return "" + } + return n.Name +} + +func importComment(s ast.Spec) string { + c := s.(*ast.ImportSpec).Comment + if c == nil { + return "" + } + return c.Text() +} + +// collapse indicates whether prev may be removed, leaving only next. +func collapse(prev, next ast.Spec) bool { + if importPath(next) != importPath(prev) || importName(next) != importName(prev) { + return false + } + return prev.(*ast.ImportSpec).Comment == nil +} + +type posSpan struct { + Start token.Pos + End token.Pos +} + +func sortSpecs(localPrefix string, fset *token.FileSet, f *ast.File, specs []ast.Spec) []ast.Spec { + // Can't short-circuit here even if specs are already sorted, + // since they might yet need deduplication. + // A lone import, however, may be safely ignored. + if len(specs) <= 1 { + return specs + } + + // Record positions for specs. + pos := make([]posSpan, len(specs)) + for i, s := range specs { + pos[i] = posSpan{s.Pos(), s.End()} + } + + // Identify comments in this range. + // Any comment from pos[0].Start to the final line counts. + lastLine := fset.Position(pos[len(pos)-1].End).Line + cstart := len(f.Comments) + cend := len(f.Comments) + for i, g := range f.Comments { + if g.Pos() < pos[0].Start { + continue + } + if i < cstart { + cstart = i + } + if fset.Position(g.End()).Line > lastLine { + cend = i + break + } + } + comments := f.Comments[cstart:cend] + + // Assign each comment to the import spec preceding it. + importComment := map[*ast.ImportSpec][]*ast.CommentGroup{} + specIndex := 0 + for _, g := range comments { + for specIndex+1 < len(specs) && pos[specIndex+1].Start <= g.Pos() { + specIndex++ + } + s := specs[specIndex].(*ast.ImportSpec) + importComment[s] = append(importComment[s], g) + } + + // Sort the import specs by import path. + // Remove duplicates, when possible without data loss. + // Reassign the import paths to have the same position sequence. + // Reassign each comment to abut the end of its spec. + // Sort the comments by new position. + sort.Sort(byImportSpec{localPrefix, specs}) + + // Dedup. Thanks to our sorting, we can just consider + // adjacent pairs of imports. + deduped := specs[:0] + for i, s := range specs { + if i == len(specs)-1 || !collapse(s, specs[i+1]) { + deduped = append(deduped, s) + } else { + p := s.Pos() + fset.File(p).MergeLine(fset.Position(p).Line) + } + } + specs = deduped + + // Fix up comment positions + for i, s := range specs { + s := s.(*ast.ImportSpec) + if s.Name != nil { + s.Name.NamePos = pos[i].Start + } + s.Path.ValuePos = pos[i].Start + s.EndPos = pos[i].End + nextSpecPos := pos[i].End + + for _, g := range importComment[s] { + for _, c := range g.List { + c.Slash = pos[i].End + nextSpecPos = c.End() + } + } + if i < len(specs)-1 { + pos[i+1].Start = nextSpecPos + pos[i+1].End = nextSpecPos + } + } + + sort.Sort(byCommentPos(comments)) + + // Fixup comments can insert blank lines, because import specs are on different lines. + // We remove those blank lines here by merging import spec to the first import spec line. + firstSpecLine := fset.Position(specs[0].Pos()).Line + for _, s := range specs[1:] { + p := s.Pos() + line := fset.File(p).Line(p) + for previousLine := line - 1; previousLine >= firstSpecLine; { + fset.File(p).MergeLine(previousLine) + previousLine-- + } + } + return specs +} + +type byImportSpec struct { + localPrefix string + specs []ast.Spec // slice of *ast.ImportSpec +} + +func (x byImportSpec) Len() int { return len(x.specs) } +func (x byImportSpec) Swap(i, j int) { x.specs[i], x.specs[j] = x.specs[j], x.specs[i] } +func (x byImportSpec) Less(i, j int) bool { + ipath := importPath(x.specs[i]) + jpath := importPath(x.specs[j]) + + igroup := importGroup(x.localPrefix, ipath) + jgroup := importGroup(x.localPrefix, jpath) + if igroup != jgroup { + return igroup < jgroup + } + + if ipath != jpath { + return ipath < jpath + } + iname := importName(x.specs[i]) + jname := importName(x.specs[j]) + + if iname != jname { + return iname < jname + } + return importComment(x.specs[i]) < importComment(x.specs[j]) +} + +type byCommentPos []*ast.CommentGroup + +func (x byCommentPos) Len() int { return len(x) } +func (x byCommentPos) Swap(i, j int) { x[i], x[j] = x[j], x[i] } +func (x byCommentPos) Less(i, j int) bool { return x[i].Pos() < x[j].Pos() } diff --git a/vendor/golang.org/x/tools/internal/imports/zstdlib.go b/vendor/golang.org/x/tools/internal/imports/zstdlib.go new file mode 100644 index 000000000..7b573b983 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/imports/zstdlib.go @@ -0,0 +1,10516 @@ +// Code generated by mkstdlib.go. DO NOT EDIT. + +package imports + +var stdlib = map[string][]string{ + "archive/tar": []string{ + "ErrFieldTooLong", + "ErrHeader", + "ErrWriteAfterClose", + "ErrWriteTooLong", + "FileInfoHeader", + "Format", + "FormatGNU", + "FormatPAX", + "FormatUSTAR", + "FormatUnknown", + "Header", + "NewReader", + "NewWriter", + "Reader", + "TypeBlock", + "TypeChar", + "TypeCont", + "TypeDir", + "TypeFifo", + "TypeGNULongLink", + "TypeGNULongName", + "TypeGNUSparse", + "TypeLink", + "TypeReg", + "TypeRegA", + "TypeSymlink", + "TypeXGlobalHeader", + "TypeXHeader", + "Writer", + }, + "archive/zip": []string{ + "Compressor", + "Decompressor", + "Deflate", + "ErrAlgorithm", + "ErrChecksum", + "ErrFormat", + "File", + "FileHeader", + "FileInfoHeader", + "NewReader", + "NewWriter", + "OpenReader", + "ReadCloser", + "Reader", + "RegisterCompressor", + "RegisterDecompressor", + "Store", + "Writer", + }, + "bufio": []string{ + "ErrAdvanceTooFar", + "ErrBadReadCount", + "ErrBufferFull", + "ErrFinalToken", + "ErrInvalidUnreadByte", + "ErrInvalidUnreadRune", + "ErrNegativeAdvance", + "ErrNegativeCount", + "ErrTooLong", + "MaxScanTokenSize", + "NewReadWriter", + "NewReader", + "NewReaderSize", + "NewScanner", + "NewWriter", + "NewWriterSize", + "ReadWriter", + "Reader", + "ScanBytes", + "ScanLines", + "ScanRunes", + "ScanWords", + "Scanner", + "SplitFunc", + "Writer", + }, + "bytes": []string{ + "Buffer", + "Compare", + "Contains", + "ContainsAny", + "ContainsRune", + "Count", + "Equal", + "EqualFold", + "ErrTooLarge", + "Fields", + "FieldsFunc", + "HasPrefix", + "HasSuffix", + "Index", + "IndexAny", + "IndexByte", + "IndexFunc", + "IndexRune", + "Join", + "LastIndex", + "LastIndexAny", + "LastIndexByte", + "LastIndexFunc", + "Map", + "MinRead", + "NewBuffer", + "NewBufferString", + "NewReader", + "Reader", + "Repeat", + "Replace", + "ReplaceAll", + "Runes", + "Split", + "SplitAfter", + "SplitAfterN", + "SplitN", + "Title", + "ToLower", + "ToLowerSpecial", + "ToTitle", + "ToTitleSpecial", + "ToUpper", + "ToUpperSpecial", + "ToValidUTF8", + "Trim", + "TrimFunc", + "TrimLeft", + "TrimLeftFunc", + "TrimPrefix", + "TrimRight", + "TrimRightFunc", + "TrimSpace", + "TrimSuffix", + }, + "compress/bzip2": []string{ + "NewReader", + "StructuralError", + }, + "compress/flate": []string{ + "BestCompression", + "BestSpeed", + "CorruptInputError", + "DefaultCompression", + "HuffmanOnly", + "InternalError", + "NewReader", + "NewReaderDict", + "NewWriter", + "NewWriterDict", + "NoCompression", + "ReadError", + "Reader", + "Resetter", + "WriteError", + "Writer", + }, + "compress/gzip": []string{ + "BestCompression", + "BestSpeed", + "DefaultCompression", + "ErrChecksum", + "ErrHeader", + "Header", + "HuffmanOnly", + "NewReader", + "NewWriter", + "NewWriterLevel", + "NoCompression", + "Reader", + "Writer", + }, + "compress/lzw": []string{ + "LSB", + "MSB", + "NewReader", + "NewWriter", + "Order", + }, + "compress/zlib": []string{ + "BestCompression", + "BestSpeed", + "DefaultCompression", + "ErrChecksum", + "ErrDictionary", + "ErrHeader", + "HuffmanOnly", + "NewReader", + "NewReaderDict", + "NewWriter", + "NewWriterLevel", + "NewWriterLevelDict", + "NoCompression", + "Resetter", + "Writer", + }, + "container/heap": []string{ + "Fix", + "Init", + "Interface", + "Pop", + "Push", + "Remove", + }, + "container/list": []string{ + "Element", + "List", + "New", + }, + "container/ring": []string{ + "New", + "Ring", + }, + "context": []string{ + "Background", + "CancelFunc", + "Canceled", + "Context", + "DeadlineExceeded", + "TODO", + "WithCancel", + "WithDeadline", + "WithTimeout", + "WithValue", + }, + "crypto": []string{ + "BLAKE2b_256", + "BLAKE2b_384", + "BLAKE2b_512", + "BLAKE2s_256", + "Decrypter", + "DecrypterOpts", + "Hash", + "MD4", + "MD5", + "MD5SHA1", + "PrivateKey", + "PublicKey", + "RIPEMD160", + "RegisterHash", + "SHA1", + "SHA224", + "SHA256", + "SHA384", + "SHA3_224", + "SHA3_256", + "SHA3_384", + "SHA3_512", + "SHA512", + "SHA512_224", + "SHA512_256", + "Signer", + "SignerOpts", + }, + "crypto/aes": []string{ + "BlockSize", + "KeySizeError", + "NewCipher", + }, + "crypto/cipher": []string{ + "AEAD", + "Block", + "BlockMode", + "NewCBCDecrypter", + "NewCBCEncrypter", + "NewCFBDecrypter", + "NewCFBEncrypter", + "NewCTR", + "NewGCM", + "NewGCMWithNonceSize", + "NewGCMWithTagSize", + "NewOFB", + "Stream", + "StreamReader", + "StreamWriter", + }, + "crypto/des": []string{ + "BlockSize", + "KeySizeError", + "NewCipher", + "NewTripleDESCipher", + }, + "crypto/dsa": []string{ + "ErrInvalidPublicKey", + "GenerateKey", + "GenerateParameters", + "L1024N160", + "L2048N224", + "L2048N256", + "L3072N256", + "ParameterSizes", + "Parameters", + "PrivateKey", + "PublicKey", + "Sign", + "Verify", + }, + "crypto/ecdsa": []string{ + "GenerateKey", + "PrivateKey", + "PublicKey", + "Sign", + "SignASN1", + "Verify", + "VerifyASN1", + }, + "crypto/ed25519": []string{ + "GenerateKey", + "NewKeyFromSeed", + "PrivateKey", + "PrivateKeySize", + "PublicKey", + "PublicKeySize", + "SeedSize", + "Sign", + "SignatureSize", + "Verify", + }, + "crypto/elliptic": []string{ + "Curve", + "CurveParams", + "GenerateKey", + "Marshal", + "MarshalCompressed", + "P224", + "P256", + "P384", + "P521", + "Unmarshal", + "UnmarshalCompressed", + }, + "crypto/hmac": []string{ + "Equal", + "New", + }, + "crypto/md5": []string{ + "BlockSize", + "New", + "Size", + "Sum", + }, + "crypto/rand": []string{ + "Int", + "Prime", + "Read", + "Reader", + }, + "crypto/rc4": []string{ + "Cipher", + "KeySizeError", + "NewCipher", + }, + "crypto/rsa": []string{ + "CRTValue", + "DecryptOAEP", + "DecryptPKCS1v15", + "DecryptPKCS1v15SessionKey", + "EncryptOAEP", + "EncryptPKCS1v15", + "ErrDecryption", + "ErrMessageTooLong", + "ErrVerification", + "GenerateKey", + "GenerateMultiPrimeKey", + "OAEPOptions", + "PKCS1v15DecryptOptions", + "PSSOptions", + "PSSSaltLengthAuto", + "PSSSaltLengthEqualsHash", + "PrecomputedValues", + "PrivateKey", + "PublicKey", + "SignPKCS1v15", + "SignPSS", + "VerifyPKCS1v15", + "VerifyPSS", + }, + "crypto/sha1": []string{ + "BlockSize", + "New", + "Size", + "Sum", + }, + "crypto/sha256": []string{ + "BlockSize", + "New", + "New224", + "Size", + "Size224", + "Sum224", + "Sum256", + }, + "crypto/sha512": []string{ + "BlockSize", + "New", + "New384", + "New512_224", + "New512_256", + "Size", + "Size224", + "Size256", + "Size384", + "Sum384", + "Sum512", + "Sum512_224", + "Sum512_256", + }, + "crypto/subtle": []string{ + "ConstantTimeByteEq", + "ConstantTimeCompare", + "ConstantTimeCopy", + "ConstantTimeEq", + "ConstantTimeLessOrEq", + "ConstantTimeSelect", + }, + "crypto/tls": []string{ + "Certificate", + "CertificateRequestInfo", + "CipherSuite", + "CipherSuiteName", + "CipherSuites", + "Client", + "ClientAuthType", + "ClientHelloInfo", + "ClientSessionCache", + "ClientSessionState", + "Config", + "Conn", + "ConnectionState", + "CurveID", + "CurveP256", + "CurveP384", + "CurveP521", + "Dial", + "DialWithDialer", + "Dialer", + "ECDSAWithP256AndSHA256", + "ECDSAWithP384AndSHA384", + "ECDSAWithP521AndSHA512", + "ECDSAWithSHA1", + "Ed25519", + "InsecureCipherSuites", + "Listen", + "LoadX509KeyPair", + "NewLRUClientSessionCache", + "NewListener", + "NoClientCert", + "PKCS1WithSHA1", + "PKCS1WithSHA256", + "PKCS1WithSHA384", + "PKCS1WithSHA512", + "PSSWithSHA256", + "PSSWithSHA384", + "PSSWithSHA512", + "RecordHeaderError", + "RenegotiateFreelyAsClient", + "RenegotiateNever", + "RenegotiateOnceAsClient", + "RenegotiationSupport", + "RequestClientCert", + "RequireAndVerifyClientCert", + "RequireAnyClientCert", + "Server", + "SignatureScheme", + "TLS_AES_128_GCM_SHA256", + "TLS_AES_256_GCM_SHA384", + "TLS_CHACHA20_POLY1305_SHA256", + "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA", + "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256", + "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", + "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA", + "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", + "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", + "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", + "TLS_ECDHE_ECDSA_WITH_RC4_128_SHA", + "TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA", + "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", + "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", + "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", + "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", + "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", + "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305", + "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", + "TLS_ECDHE_RSA_WITH_RC4_128_SHA", + "TLS_FALLBACK_SCSV", + "TLS_RSA_WITH_3DES_EDE_CBC_SHA", + "TLS_RSA_WITH_AES_128_CBC_SHA", + "TLS_RSA_WITH_AES_128_CBC_SHA256", + "TLS_RSA_WITH_AES_128_GCM_SHA256", + "TLS_RSA_WITH_AES_256_CBC_SHA", + "TLS_RSA_WITH_AES_256_GCM_SHA384", + "TLS_RSA_WITH_RC4_128_SHA", + "VerifyClientCertIfGiven", + "VersionSSL30", + "VersionTLS10", + "VersionTLS11", + "VersionTLS12", + "VersionTLS13", + "X25519", + "X509KeyPair", + }, + "crypto/x509": []string{ + "CANotAuthorizedForExtKeyUsage", + "CANotAuthorizedForThisName", + "CertPool", + "Certificate", + "CertificateInvalidError", + "CertificateRequest", + "ConstraintViolationError", + "CreateCertificate", + "CreateCertificateRequest", + "CreateRevocationList", + "DSA", + "DSAWithSHA1", + "DSAWithSHA256", + "DecryptPEMBlock", + "ECDSA", + "ECDSAWithSHA1", + "ECDSAWithSHA256", + "ECDSAWithSHA384", + "ECDSAWithSHA512", + "Ed25519", + "EncryptPEMBlock", + "ErrUnsupportedAlgorithm", + "Expired", + "ExtKeyUsage", + "ExtKeyUsageAny", + "ExtKeyUsageClientAuth", + "ExtKeyUsageCodeSigning", + "ExtKeyUsageEmailProtection", + "ExtKeyUsageIPSECEndSystem", + "ExtKeyUsageIPSECTunnel", + "ExtKeyUsageIPSECUser", + "ExtKeyUsageMicrosoftCommercialCodeSigning", + "ExtKeyUsageMicrosoftKernelCodeSigning", + "ExtKeyUsageMicrosoftServerGatedCrypto", + "ExtKeyUsageNetscapeServerGatedCrypto", + "ExtKeyUsageOCSPSigning", + "ExtKeyUsageServerAuth", + "ExtKeyUsageTimeStamping", + "HostnameError", + "IncompatibleUsage", + "IncorrectPasswordError", + "InsecureAlgorithmError", + "InvalidReason", + "IsEncryptedPEMBlock", + "KeyUsage", + "KeyUsageCRLSign", + "KeyUsageCertSign", + "KeyUsageContentCommitment", + "KeyUsageDataEncipherment", + "KeyUsageDecipherOnly", + "KeyUsageDigitalSignature", + "KeyUsageEncipherOnly", + "KeyUsageKeyAgreement", + "KeyUsageKeyEncipherment", + "MD2WithRSA", + "MD5WithRSA", + "MarshalECPrivateKey", + "MarshalPKCS1PrivateKey", + "MarshalPKCS1PublicKey", + "MarshalPKCS8PrivateKey", + "MarshalPKIXPublicKey", + "NameConstraintsWithoutSANs", + "NameMismatch", + "NewCertPool", + "NotAuthorizedToSign", + "PEMCipher", + "PEMCipher3DES", + "PEMCipherAES128", + "PEMCipherAES192", + "PEMCipherAES256", + "PEMCipherDES", + "ParseCRL", + "ParseCertificate", + "ParseCertificateRequest", + "ParseCertificates", + "ParseDERCRL", + "ParseECPrivateKey", + "ParsePKCS1PrivateKey", + "ParsePKCS1PublicKey", + "ParsePKCS8PrivateKey", + "ParsePKIXPublicKey", + "PublicKeyAlgorithm", + "PureEd25519", + "RSA", + "RevocationList", + "SHA1WithRSA", + "SHA256WithRSA", + "SHA256WithRSAPSS", + "SHA384WithRSA", + "SHA384WithRSAPSS", + "SHA512WithRSA", + "SHA512WithRSAPSS", + "SignatureAlgorithm", + "SystemCertPool", + "SystemRootsError", + "TooManyConstraints", + "TooManyIntermediates", + "UnconstrainedName", + "UnhandledCriticalExtension", + "UnknownAuthorityError", + "UnknownPublicKeyAlgorithm", + "UnknownSignatureAlgorithm", + "VerifyOptions", + }, + "crypto/x509/pkix": []string{ + "AlgorithmIdentifier", + "AttributeTypeAndValue", + "AttributeTypeAndValueSET", + "CertificateList", + "Extension", + "Name", + "RDNSequence", + "RelativeDistinguishedNameSET", + "RevokedCertificate", + "TBSCertificateList", + }, + "database/sql": []string{ + "ColumnType", + "Conn", + "DB", + "DBStats", + "Drivers", + "ErrConnDone", + "ErrNoRows", + "ErrTxDone", + "IsolationLevel", + "LevelDefault", + "LevelLinearizable", + "LevelReadCommitted", + "LevelReadUncommitted", + "LevelRepeatableRead", + "LevelSerializable", + "LevelSnapshot", + "LevelWriteCommitted", + "Named", + "NamedArg", + "NullBool", + "NullFloat64", + "NullInt32", + "NullInt64", + "NullString", + "NullTime", + "Open", + "OpenDB", + "Out", + "RawBytes", + "Register", + "Result", + "Row", + "Rows", + "Scanner", + "Stmt", + "Tx", + "TxOptions", + }, + "database/sql/driver": []string{ + "Bool", + "ColumnConverter", + "Conn", + "ConnBeginTx", + "ConnPrepareContext", + "Connector", + "DefaultParameterConverter", + "Driver", + "DriverContext", + "ErrBadConn", + "ErrRemoveArgument", + "ErrSkip", + "Execer", + "ExecerContext", + "Int32", + "IsScanValue", + "IsValue", + "IsolationLevel", + "NamedValue", + "NamedValueChecker", + "NotNull", + "Null", + "Pinger", + "Queryer", + "QueryerContext", + "Result", + "ResultNoRows", + "Rows", + "RowsAffected", + "RowsColumnTypeDatabaseTypeName", + "RowsColumnTypeLength", + "RowsColumnTypeNullable", + "RowsColumnTypePrecisionScale", + "RowsColumnTypeScanType", + "RowsNextResultSet", + "SessionResetter", + "Stmt", + "StmtExecContext", + "StmtQueryContext", + "String", + "Tx", + "TxOptions", + "Validator", + "Value", + "ValueConverter", + "Valuer", + }, + "debug/dwarf": []string{ + "AddrType", + "ArrayType", + "Attr", + "AttrAbstractOrigin", + "AttrAccessibility", + "AttrAddrBase", + "AttrAddrClass", + "AttrAlignment", + "AttrAllocated", + "AttrArtificial", + "AttrAssociated", + "AttrBaseTypes", + "AttrBinaryScale", + "AttrBitOffset", + "AttrBitSize", + "AttrByteSize", + "AttrCallAllCalls", + "AttrCallAllSourceCalls", + "AttrCallAllTailCalls", + "AttrCallColumn", + "AttrCallDataLocation", + "AttrCallDataValue", + "AttrCallFile", + "AttrCallLine", + "AttrCallOrigin", + "AttrCallPC", + "AttrCallParameter", + "AttrCallReturnPC", + "AttrCallTailCall", + "AttrCallTarget", + "AttrCallTargetClobbered", + "AttrCallValue", + "AttrCalling", + "AttrCommonRef", + "AttrCompDir", + "AttrConstExpr", + "AttrConstValue", + "AttrContainingType", + "AttrCount", + "AttrDataBitOffset", + "AttrDataLocation", + "AttrDataMemberLoc", + "AttrDecimalScale", + "AttrDecimalSign", + "AttrDeclColumn", + "AttrDeclFile", + "AttrDeclLine", + "AttrDeclaration", + "AttrDefaultValue", + "AttrDefaulted", + "AttrDeleted", + "AttrDescription", + "AttrDigitCount", + "AttrDiscr", + "AttrDiscrList", + "AttrDiscrValue", + "AttrDwoName", + "AttrElemental", + "AttrEncoding", + "AttrEndianity", + "AttrEntrypc", + "AttrEnumClass", + "AttrExplicit", + "AttrExportSymbols", + "AttrExtension", + "AttrExternal", + "AttrFrameBase", + "AttrFriend", + "AttrHighpc", + "AttrIdentifierCase", + "AttrImport", + "AttrInline", + "AttrIsOptional", + "AttrLanguage", + "AttrLinkageName", + "AttrLocation", + "AttrLoclistsBase", + "AttrLowerBound", + "AttrLowpc", + "AttrMacroInfo", + "AttrMacros", + "AttrMainSubprogram", + "AttrMutable", + "AttrName", + "AttrNamelistItem", + "AttrNoreturn", + "AttrObjectPointer", + "AttrOrdering", + "AttrPictureString", + "AttrPriority", + "AttrProducer", + "AttrPrototyped", + "AttrPure", + "AttrRanges", + "AttrRank", + "AttrRecursive", + "AttrReference", + "AttrReturnAddr", + "AttrRnglistsBase", + "AttrRvalueReference", + "AttrSegment", + "AttrSibling", + "AttrSignature", + "AttrSmall", + "AttrSpecification", + "AttrStartScope", + "AttrStaticLink", + "AttrStmtList", + "AttrStrOffsetsBase", + "AttrStride", + "AttrStrideSize", + "AttrStringLength", + "AttrStringLengthBitSize", + "AttrStringLengthByteSize", + "AttrThreadsScaled", + "AttrTrampoline", + "AttrType", + "AttrUpperBound", + "AttrUseLocation", + "AttrUseUTF8", + "AttrVarParam", + "AttrVirtuality", + "AttrVisibility", + "AttrVtableElemLoc", + "BasicType", + "BoolType", + "CharType", + "Class", + "ClassAddrPtr", + "ClassAddress", + "ClassBlock", + "ClassConstant", + "ClassExprLoc", + "ClassFlag", + "ClassLinePtr", + "ClassLocList", + "ClassLocListPtr", + "ClassMacPtr", + "ClassRangeListPtr", + "ClassReference", + "ClassReferenceAlt", + "ClassReferenceSig", + "ClassRngList", + "ClassRngListsPtr", + "ClassStrOffsetsPtr", + "ClassString", + "ClassStringAlt", + "ClassUnknown", + "CommonType", + "ComplexType", + "Data", + "DecodeError", + "DotDotDotType", + "Entry", + "EnumType", + "EnumValue", + "ErrUnknownPC", + "Field", + "FloatType", + "FuncType", + "IntType", + "LineEntry", + "LineFile", + "LineReader", + "LineReaderPos", + "New", + "Offset", + "PtrType", + "QualType", + "Reader", + "StructField", + "StructType", + "Tag", + "TagAccessDeclaration", + "TagArrayType", + "TagAtomicType", + "TagBaseType", + "TagCallSite", + "TagCallSiteParameter", + "TagCatchDwarfBlock", + "TagClassType", + "TagCoarrayType", + "TagCommonDwarfBlock", + "TagCommonInclusion", + "TagCompileUnit", + "TagCondition", + "TagConstType", + "TagConstant", + "TagDwarfProcedure", + "TagDynamicType", + "TagEntryPoint", + "TagEnumerationType", + "TagEnumerator", + "TagFileType", + "TagFormalParameter", + "TagFriend", + "TagGenericSubrange", + "TagImmutableType", + "TagImportedDeclaration", + "TagImportedModule", + "TagImportedUnit", + "TagInheritance", + "TagInlinedSubroutine", + "TagInterfaceType", + "TagLabel", + "TagLexDwarfBlock", + "TagMember", + "TagModule", + "TagMutableType", + "TagNamelist", + "TagNamelistItem", + "TagNamespace", + "TagPackedType", + "TagPartialUnit", + "TagPointerType", + "TagPtrToMemberType", + "TagReferenceType", + "TagRestrictType", + "TagRvalueReferenceType", + "TagSetType", + "TagSharedType", + "TagSkeletonUnit", + "TagStringType", + "TagStructType", + "TagSubprogram", + "TagSubrangeType", + "TagSubroutineType", + "TagTemplateAlias", + "TagTemplateTypeParameter", + "TagTemplateValueParameter", + "TagThrownType", + "TagTryDwarfBlock", + "TagTypeUnit", + "TagTypedef", + "TagUnionType", + "TagUnspecifiedParameters", + "TagUnspecifiedType", + "TagVariable", + "TagVariant", + "TagVariantPart", + "TagVolatileType", + "TagWithStmt", + "Type", + "TypedefType", + "UcharType", + "UintType", + "UnspecifiedType", + "UnsupportedType", + "VoidType", + }, + "debug/elf": []string{ + "ARM_MAGIC_TRAMP_NUMBER", + "COMPRESS_HIOS", + "COMPRESS_HIPROC", + "COMPRESS_LOOS", + "COMPRESS_LOPROC", + "COMPRESS_ZLIB", + "Chdr32", + "Chdr64", + "Class", + "CompressionType", + "DF_BIND_NOW", + "DF_ORIGIN", + "DF_STATIC_TLS", + "DF_SYMBOLIC", + "DF_TEXTREL", + "DT_BIND_NOW", + "DT_DEBUG", + "DT_ENCODING", + "DT_FINI", + "DT_FINI_ARRAY", + "DT_FINI_ARRAYSZ", + "DT_FLAGS", + "DT_HASH", + "DT_HIOS", + "DT_HIPROC", + "DT_INIT", + "DT_INIT_ARRAY", + "DT_INIT_ARRAYSZ", + "DT_JMPREL", + "DT_LOOS", + "DT_LOPROC", + "DT_NEEDED", + "DT_NULL", + "DT_PLTGOT", + "DT_PLTREL", + "DT_PLTRELSZ", + "DT_PREINIT_ARRAY", + "DT_PREINIT_ARRAYSZ", + "DT_REL", + "DT_RELA", + "DT_RELAENT", + "DT_RELASZ", + "DT_RELENT", + "DT_RELSZ", + "DT_RPATH", + "DT_RUNPATH", + "DT_SONAME", + "DT_STRSZ", + "DT_STRTAB", + "DT_SYMBOLIC", + "DT_SYMENT", + "DT_SYMTAB", + "DT_TEXTREL", + "DT_VERNEED", + "DT_VERNEEDNUM", + "DT_VERSYM", + "Data", + "Dyn32", + "Dyn64", + "DynFlag", + "DynTag", + "EI_ABIVERSION", + "EI_CLASS", + "EI_DATA", + "EI_NIDENT", + "EI_OSABI", + "EI_PAD", + "EI_VERSION", + "ELFCLASS32", + "ELFCLASS64", + "ELFCLASSNONE", + "ELFDATA2LSB", + "ELFDATA2MSB", + "ELFDATANONE", + "ELFMAG", + "ELFOSABI_86OPEN", + "ELFOSABI_AIX", + "ELFOSABI_ARM", + "ELFOSABI_AROS", + "ELFOSABI_CLOUDABI", + "ELFOSABI_FENIXOS", + "ELFOSABI_FREEBSD", + "ELFOSABI_HPUX", + "ELFOSABI_HURD", + "ELFOSABI_IRIX", + "ELFOSABI_LINUX", + "ELFOSABI_MODESTO", + "ELFOSABI_NETBSD", + "ELFOSABI_NONE", + "ELFOSABI_NSK", + "ELFOSABI_OPENBSD", + "ELFOSABI_OPENVMS", + "ELFOSABI_SOLARIS", + "ELFOSABI_STANDALONE", + "ELFOSABI_TRU64", + "EM_386", + "EM_486", + "EM_56800EX", + "EM_68HC05", + "EM_68HC08", + "EM_68HC11", + "EM_68HC12", + "EM_68HC16", + "EM_68K", + "EM_78KOR", + "EM_8051", + "EM_860", + "EM_88K", + "EM_960", + "EM_AARCH64", + "EM_ALPHA", + "EM_ALPHA_STD", + "EM_ALTERA_NIOS2", + "EM_AMDGPU", + "EM_ARC", + "EM_ARCA", + "EM_ARC_COMPACT", + "EM_ARC_COMPACT2", + "EM_ARM", + "EM_AVR", + "EM_AVR32", + "EM_BA1", + "EM_BA2", + "EM_BLACKFIN", + "EM_BPF", + "EM_C166", + "EM_CDP", + "EM_CE", + "EM_CLOUDSHIELD", + "EM_COGE", + "EM_COLDFIRE", + "EM_COOL", + "EM_COREA_1ST", + "EM_COREA_2ND", + "EM_CR", + "EM_CR16", + "EM_CRAYNV2", + "EM_CRIS", + "EM_CRX", + "EM_CSR_KALIMBA", + "EM_CUDA", + "EM_CYPRESS_M8C", + "EM_D10V", + "EM_D30V", + "EM_DSP24", + "EM_DSPIC30F", + "EM_DXP", + "EM_ECOG1", + "EM_ECOG16", + "EM_ECOG1X", + "EM_ECOG2", + "EM_ETPU", + "EM_EXCESS", + "EM_F2MC16", + "EM_FIREPATH", + "EM_FR20", + "EM_FR30", + "EM_FT32", + "EM_FX66", + "EM_H8S", + "EM_H8_300", + "EM_H8_300H", + "EM_H8_500", + "EM_HUANY", + "EM_IA_64", + "EM_INTEL205", + "EM_INTEL206", + "EM_INTEL207", + "EM_INTEL208", + "EM_INTEL209", + "EM_IP2K", + "EM_JAVELIN", + "EM_K10M", + "EM_KM32", + "EM_KMX16", + "EM_KMX32", + "EM_KMX8", + "EM_KVARC", + "EM_L10M", + "EM_LANAI", + "EM_LATTICEMICO32", + "EM_M16C", + "EM_M32", + "EM_M32C", + "EM_M32R", + "EM_MANIK", + "EM_MAX", + "EM_MAXQ30", + "EM_MCHP_PIC", + "EM_MCST_ELBRUS", + "EM_ME16", + "EM_METAG", + "EM_MICROBLAZE", + "EM_MIPS", + "EM_MIPS_RS3_LE", + "EM_MIPS_RS4_BE", + "EM_MIPS_X", + "EM_MMA", + "EM_MMDSP_PLUS", + "EM_MMIX", + "EM_MN10200", + "EM_MN10300", + "EM_MOXIE", + "EM_MSP430", + "EM_NCPU", + "EM_NDR1", + "EM_NDS32", + "EM_NONE", + "EM_NORC", + "EM_NS32K", + "EM_OPEN8", + "EM_OPENRISC", + "EM_PARISC", + "EM_PCP", + "EM_PDP10", + "EM_PDP11", + "EM_PDSP", + "EM_PJ", + "EM_PPC", + "EM_PPC64", + "EM_PRISM", + "EM_QDSP6", + "EM_R32C", + "EM_RCE", + "EM_RH32", + "EM_RISCV", + "EM_RL78", + "EM_RS08", + "EM_RX", + "EM_S370", + "EM_S390", + "EM_SCORE7", + "EM_SEP", + "EM_SE_C17", + "EM_SE_C33", + "EM_SH", + "EM_SHARC", + "EM_SLE9X", + "EM_SNP1K", + "EM_SPARC", + "EM_SPARC32PLUS", + "EM_SPARCV9", + "EM_ST100", + "EM_ST19", + "EM_ST200", + "EM_ST7", + "EM_ST9PLUS", + "EM_STARCORE", + "EM_STM8", + "EM_STXP7X", + "EM_SVX", + "EM_TILE64", + "EM_TILEGX", + "EM_TILEPRO", + "EM_TINYJ", + "EM_TI_ARP32", + "EM_TI_C2000", + "EM_TI_C5500", + "EM_TI_C6000", + "EM_TI_PRU", + "EM_TMM_GPP", + "EM_TPC", + "EM_TRICORE", + "EM_TRIMEDIA", + "EM_TSK3000", + "EM_UNICORE", + "EM_V800", + "EM_V850", + "EM_VAX", + "EM_VIDEOCORE", + "EM_VIDEOCORE3", + "EM_VIDEOCORE5", + "EM_VISIUM", + "EM_VPP500", + "EM_X86_64", + "EM_XCORE", + "EM_XGATE", + "EM_XIMO16", + "EM_XTENSA", + "EM_Z80", + "EM_ZSP", + "ET_CORE", + "ET_DYN", + "ET_EXEC", + "ET_HIOS", + "ET_HIPROC", + "ET_LOOS", + "ET_LOPROC", + "ET_NONE", + "ET_REL", + "EV_CURRENT", + "EV_NONE", + "ErrNoSymbols", + "File", + "FileHeader", + "FormatError", + "Header32", + "Header64", + "ImportedSymbol", + "Machine", + "NT_FPREGSET", + "NT_PRPSINFO", + "NT_PRSTATUS", + "NType", + "NewFile", + "OSABI", + "Open", + "PF_MASKOS", + "PF_MASKPROC", + "PF_R", + "PF_W", + "PF_X", + "PT_DYNAMIC", + "PT_HIOS", + "PT_HIPROC", + "PT_INTERP", + "PT_LOAD", + "PT_LOOS", + "PT_LOPROC", + "PT_NOTE", + "PT_NULL", + "PT_PHDR", + "PT_SHLIB", + "PT_TLS", + "Prog", + "Prog32", + "Prog64", + "ProgFlag", + "ProgHeader", + "ProgType", + "R_386", + "R_386_16", + "R_386_32", + "R_386_32PLT", + "R_386_8", + "R_386_COPY", + "R_386_GLOB_DAT", + "R_386_GOT32", + "R_386_GOT32X", + "R_386_GOTOFF", + "R_386_GOTPC", + "R_386_IRELATIVE", + "R_386_JMP_SLOT", + "R_386_NONE", + "R_386_PC16", + "R_386_PC32", + "R_386_PC8", + "R_386_PLT32", + "R_386_RELATIVE", + "R_386_SIZE32", + "R_386_TLS_DESC", + "R_386_TLS_DESC_CALL", + "R_386_TLS_DTPMOD32", + "R_386_TLS_DTPOFF32", + "R_386_TLS_GD", + "R_386_TLS_GD_32", + "R_386_TLS_GD_CALL", + "R_386_TLS_GD_POP", + "R_386_TLS_GD_PUSH", + "R_386_TLS_GOTDESC", + "R_386_TLS_GOTIE", + "R_386_TLS_IE", + "R_386_TLS_IE_32", + "R_386_TLS_LDM", + "R_386_TLS_LDM_32", + "R_386_TLS_LDM_CALL", + "R_386_TLS_LDM_POP", + "R_386_TLS_LDM_PUSH", + "R_386_TLS_LDO_32", + "R_386_TLS_LE", + "R_386_TLS_LE_32", + "R_386_TLS_TPOFF", + "R_386_TLS_TPOFF32", + "R_390", + "R_390_12", + "R_390_16", + "R_390_20", + "R_390_32", + "R_390_64", + "R_390_8", + "R_390_COPY", + "R_390_GLOB_DAT", + "R_390_GOT12", + "R_390_GOT16", + "R_390_GOT20", + "R_390_GOT32", + "R_390_GOT64", + "R_390_GOTENT", + "R_390_GOTOFF", + "R_390_GOTOFF16", + "R_390_GOTOFF64", + "R_390_GOTPC", + "R_390_GOTPCDBL", + "R_390_GOTPLT12", + "R_390_GOTPLT16", + "R_390_GOTPLT20", + "R_390_GOTPLT32", + "R_390_GOTPLT64", + "R_390_GOTPLTENT", + "R_390_GOTPLTOFF16", + "R_390_GOTPLTOFF32", + "R_390_GOTPLTOFF64", + "R_390_JMP_SLOT", + "R_390_NONE", + "R_390_PC16", + "R_390_PC16DBL", + "R_390_PC32", + "R_390_PC32DBL", + "R_390_PC64", + "R_390_PLT16DBL", + "R_390_PLT32", + "R_390_PLT32DBL", + "R_390_PLT64", + "R_390_RELATIVE", + "R_390_TLS_DTPMOD", + "R_390_TLS_DTPOFF", + "R_390_TLS_GD32", + "R_390_TLS_GD64", + "R_390_TLS_GDCALL", + "R_390_TLS_GOTIE12", + "R_390_TLS_GOTIE20", + "R_390_TLS_GOTIE32", + "R_390_TLS_GOTIE64", + "R_390_TLS_IE32", + "R_390_TLS_IE64", + "R_390_TLS_IEENT", + "R_390_TLS_LDCALL", + "R_390_TLS_LDM32", + "R_390_TLS_LDM64", + "R_390_TLS_LDO32", + "R_390_TLS_LDO64", + "R_390_TLS_LE32", + "R_390_TLS_LE64", + "R_390_TLS_LOAD", + "R_390_TLS_TPOFF", + "R_AARCH64", + "R_AARCH64_ABS16", + "R_AARCH64_ABS32", + "R_AARCH64_ABS64", + "R_AARCH64_ADD_ABS_LO12_NC", + "R_AARCH64_ADR_GOT_PAGE", + "R_AARCH64_ADR_PREL_LO21", + "R_AARCH64_ADR_PREL_PG_HI21", + "R_AARCH64_ADR_PREL_PG_HI21_NC", + "R_AARCH64_CALL26", + "R_AARCH64_CONDBR19", + "R_AARCH64_COPY", + "R_AARCH64_GLOB_DAT", + "R_AARCH64_GOT_LD_PREL19", + "R_AARCH64_IRELATIVE", + "R_AARCH64_JUMP26", + "R_AARCH64_JUMP_SLOT", + "R_AARCH64_LD64_GOTOFF_LO15", + "R_AARCH64_LD64_GOTPAGE_LO15", + "R_AARCH64_LD64_GOT_LO12_NC", + "R_AARCH64_LDST128_ABS_LO12_NC", + "R_AARCH64_LDST16_ABS_LO12_NC", + "R_AARCH64_LDST32_ABS_LO12_NC", + "R_AARCH64_LDST64_ABS_LO12_NC", + "R_AARCH64_LDST8_ABS_LO12_NC", + "R_AARCH64_LD_PREL_LO19", + "R_AARCH64_MOVW_SABS_G0", + "R_AARCH64_MOVW_SABS_G1", + "R_AARCH64_MOVW_SABS_G2", + "R_AARCH64_MOVW_UABS_G0", + "R_AARCH64_MOVW_UABS_G0_NC", + "R_AARCH64_MOVW_UABS_G1", + "R_AARCH64_MOVW_UABS_G1_NC", + "R_AARCH64_MOVW_UABS_G2", + "R_AARCH64_MOVW_UABS_G2_NC", + "R_AARCH64_MOVW_UABS_G3", + "R_AARCH64_NONE", + "R_AARCH64_NULL", + "R_AARCH64_P32_ABS16", + "R_AARCH64_P32_ABS32", + "R_AARCH64_P32_ADD_ABS_LO12_NC", + "R_AARCH64_P32_ADR_GOT_PAGE", + "R_AARCH64_P32_ADR_PREL_LO21", + "R_AARCH64_P32_ADR_PREL_PG_HI21", + "R_AARCH64_P32_CALL26", + "R_AARCH64_P32_CONDBR19", + "R_AARCH64_P32_COPY", + "R_AARCH64_P32_GLOB_DAT", + "R_AARCH64_P32_GOT_LD_PREL19", + "R_AARCH64_P32_IRELATIVE", + "R_AARCH64_P32_JUMP26", + "R_AARCH64_P32_JUMP_SLOT", + "R_AARCH64_P32_LD32_GOT_LO12_NC", + "R_AARCH64_P32_LDST128_ABS_LO12_NC", + "R_AARCH64_P32_LDST16_ABS_LO12_NC", + "R_AARCH64_P32_LDST32_ABS_LO12_NC", + "R_AARCH64_P32_LDST64_ABS_LO12_NC", + "R_AARCH64_P32_LDST8_ABS_LO12_NC", + "R_AARCH64_P32_LD_PREL_LO19", + "R_AARCH64_P32_MOVW_SABS_G0", + "R_AARCH64_P32_MOVW_UABS_G0", + "R_AARCH64_P32_MOVW_UABS_G0_NC", + "R_AARCH64_P32_MOVW_UABS_G1", + "R_AARCH64_P32_PREL16", + "R_AARCH64_P32_PREL32", + "R_AARCH64_P32_RELATIVE", + "R_AARCH64_P32_TLSDESC", + "R_AARCH64_P32_TLSDESC_ADD_LO12_NC", + "R_AARCH64_P32_TLSDESC_ADR_PAGE21", + "R_AARCH64_P32_TLSDESC_ADR_PREL21", + "R_AARCH64_P32_TLSDESC_CALL", + "R_AARCH64_P32_TLSDESC_LD32_LO12_NC", + "R_AARCH64_P32_TLSDESC_LD_PREL19", + "R_AARCH64_P32_TLSGD_ADD_LO12_NC", + "R_AARCH64_P32_TLSGD_ADR_PAGE21", + "R_AARCH64_P32_TLSIE_ADR_GOTTPREL_PAGE21", + "R_AARCH64_P32_TLSIE_LD32_GOTTPREL_LO12_NC", + "R_AARCH64_P32_TLSIE_LD_GOTTPREL_PREL19", + "R_AARCH64_P32_TLSLE_ADD_TPREL_HI12", + "R_AARCH64_P32_TLSLE_ADD_TPREL_LO12", + "R_AARCH64_P32_TLSLE_ADD_TPREL_LO12_NC", + "R_AARCH64_P32_TLSLE_MOVW_TPREL_G0", + "R_AARCH64_P32_TLSLE_MOVW_TPREL_G0_NC", + "R_AARCH64_P32_TLSLE_MOVW_TPREL_G1", + "R_AARCH64_P32_TLS_DTPMOD", + "R_AARCH64_P32_TLS_DTPREL", + "R_AARCH64_P32_TLS_TPREL", + "R_AARCH64_P32_TSTBR14", + "R_AARCH64_PREL16", + "R_AARCH64_PREL32", + "R_AARCH64_PREL64", + "R_AARCH64_RELATIVE", + "R_AARCH64_TLSDESC", + "R_AARCH64_TLSDESC_ADD", + "R_AARCH64_TLSDESC_ADD_LO12_NC", + "R_AARCH64_TLSDESC_ADR_PAGE21", + "R_AARCH64_TLSDESC_ADR_PREL21", + "R_AARCH64_TLSDESC_CALL", + "R_AARCH64_TLSDESC_LD64_LO12_NC", + "R_AARCH64_TLSDESC_LDR", + "R_AARCH64_TLSDESC_LD_PREL19", + "R_AARCH64_TLSDESC_OFF_G0_NC", + "R_AARCH64_TLSDESC_OFF_G1", + "R_AARCH64_TLSGD_ADD_LO12_NC", + "R_AARCH64_TLSGD_ADR_PAGE21", + "R_AARCH64_TLSGD_ADR_PREL21", + "R_AARCH64_TLSGD_MOVW_G0_NC", + "R_AARCH64_TLSGD_MOVW_G1", + "R_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21", + "R_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC", + "R_AARCH64_TLSIE_LD_GOTTPREL_PREL19", + "R_AARCH64_TLSIE_MOVW_GOTTPREL_G0_NC", + "R_AARCH64_TLSIE_MOVW_GOTTPREL_G1", + "R_AARCH64_TLSLD_ADR_PAGE21", + "R_AARCH64_TLSLD_ADR_PREL21", + "R_AARCH64_TLSLD_LDST128_DTPREL_LO12", + "R_AARCH64_TLSLD_LDST128_DTPREL_LO12_NC", + "R_AARCH64_TLSLE_ADD_TPREL_HI12", + "R_AARCH64_TLSLE_ADD_TPREL_LO12", + "R_AARCH64_TLSLE_ADD_TPREL_LO12_NC", + "R_AARCH64_TLSLE_LDST128_TPREL_LO12", + "R_AARCH64_TLSLE_LDST128_TPREL_LO12_NC", + "R_AARCH64_TLSLE_MOVW_TPREL_G0", + "R_AARCH64_TLSLE_MOVW_TPREL_G0_NC", + "R_AARCH64_TLSLE_MOVW_TPREL_G1", + "R_AARCH64_TLSLE_MOVW_TPREL_G1_NC", + "R_AARCH64_TLSLE_MOVW_TPREL_G2", + "R_AARCH64_TLS_DTPMOD64", + "R_AARCH64_TLS_DTPREL64", + "R_AARCH64_TLS_TPREL64", + "R_AARCH64_TSTBR14", + "R_ALPHA", + "R_ALPHA_BRADDR", + "R_ALPHA_COPY", + "R_ALPHA_GLOB_DAT", + "R_ALPHA_GPDISP", + "R_ALPHA_GPREL32", + "R_ALPHA_GPRELHIGH", + "R_ALPHA_GPRELLOW", + "R_ALPHA_GPVALUE", + "R_ALPHA_HINT", + "R_ALPHA_IMMED_BR_HI32", + "R_ALPHA_IMMED_GP_16", + "R_ALPHA_IMMED_GP_HI32", + "R_ALPHA_IMMED_LO32", + "R_ALPHA_IMMED_SCN_HI32", + "R_ALPHA_JMP_SLOT", + "R_ALPHA_LITERAL", + "R_ALPHA_LITUSE", + "R_ALPHA_NONE", + "R_ALPHA_OP_PRSHIFT", + "R_ALPHA_OP_PSUB", + "R_ALPHA_OP_PUSH", + "R_ALPHA_OP_STORE", + "R_ALPHA_REFLONG", + "R_ALPHA_REFQUAD", + "R_ALPHA_RELATIVE", + "R_ALPHA_SREL16", + "R_ALPHA_SREL32", + "R_ALPHA_SREL64", + "R_ARM", + "R_ARM_ABS12", + "R_ARM_ABS16", + "R_ARM_ABS32", + "R_ARM_ABS32_NOI", + "R_ARM_ABS8", + "R_ARM_ALU_PCREL_15_8", + "R_ARM_ALU_PCREL_23_15", + "R_ARM_ALU_PCREL_7_0", + "R_ARM_ALU_PC_G0", + "R_ARM_ALU_PC_G0_NC", + "R_ARM_ALU_PC_G1", + "R_ARM_ALU_PC_G1_NC", + "R_ARM_ALU_PC_G2", + "R_ARM_ALU_SBREL_19_12_NC", + "R_ARM_ALU_SBREL_27_20_CK", + "R_ARM_ALU_SB_G0", + "R_ARM_ALU_SB_G0_NC", + "R_ARM_ALU_SB_G1", + "R_ARM_ALU_SB_G1_NC", + "R_ARM_ALU_SB_G2", + "R_ARM_AMP_VCALL9", + "R_ARM_BASE_ABS", + "R_ARM_CALL", + "R_ARM_COPY", + "R_ARM_GLOB_DAT", + "R_ARM_GNU_VTENTRY", + "R_ARM_GNU_VTINHERIT", + "R_ARM_GOT32", + "R_ARM_GOTOFF", + "R_ARM_GOTOFF12", + "R_ARM_GOTPC", + "R_ARM_GOTRELAX", + "R_ARM_GOT_ABS", + "R_ARM_GOT_BREL12", + "R_ARM_GOT_PREL", + "R_ARM_IRELATIVE", + "R_ARM_JUMP24", + "R_ARM_JUMP_SLOT", + "R_ARM_LDC_PC_G0", + "R_ARM_LDC_PC_G1", + "R_ARM_LDC_PC_G2", + "R_ARM_LDC_SB_G0", + "R_ARM_LDC_SB_G1", + "R_ARM_LDC_SB_G2", + "R_ARM_LDRS_PC_G0", + "R_ARM_LDRS_PC_G1", + "R_ARM_LDRS_PC_G2", + "R_ARM_LDRS_SB_G0", + "R_ARM_LDRS_SB_G1", + "R_ARM_LDRS_SB_G2", + "R_ARM_LDR_PC_G1", + "R_ARM_LDR_PC_G2", + "R_ARM_LDR_SBREL_11_10_NC", + "R_ARM_LDR_SB_G0", + "R_ARM_LDR_SB_G1", + "R_ARM_LDR_SB_G2", + "R_ARM_ME_TOO", + "R_ARM_MOVT_ABS", + "R_ARM_MOVT_BREL", + "R_ARM_MOVT_PREL", + "R_ARM_MOVW_ABS_NC", + "R_ARM_MOVW_BREL", + "R_ARM_MOVW_BREL_NC", + "R_ARM_MOVW_PREL_NC", + "R_ARM_NONE", + "R_ARM_PC13", + "R_ARM_PC24", + "R_ARM_PLT32", + "R_ARM_PLT32_ABS", + "R_ARM_PREL31", + "R_ARM_PRIVATE_0", + "R_ARM_PRIVATE_1", + "R_ARM_PRIVATE_10", + "R_ARM_PRIVATE_11", + "R_ARM_PRIVATE_12", + "R_ARM_PRIVATE_13", + "R_ARM_PRIVATE_14", + "R_ARM_PRIVATE_15", + "R_ARM_PRIVATE_2", + "R_ARM_PRIVATE_3", + "R_ARM_PRIVATE_4", + "R_ARM_PRIVATE_5", + "R_ARM_PRIVATE_6", + "R_ARM_PRIVATE_7", + "R_ARM_PRIVATE_8", + "R_ARM_PRIVATE_9", + "R_ARM_RABS32", + "R_ARM_RBASE", + "R_ARM_REL32", + "R_ARM_REL32_NOI", + "R_ARM_RELATIVE", + "R_ARM_RPC24", + "R_ARM_RREL32", + "R_ARM_RSBREL32", + "R_ARM_RXPC25", + "R_ARM_SBREL31", + "R_ARM_SBREL32", + "R_ARM_SWI24", + "R_ARM_TARGET1", + "R_ARM_TARGET2", + "R_ARM_THM_ABS5", + "R_ARM_THM_ALU_ABS_G0_NC", + "R_ARM_THM_ALU_ABS_G1_NC", + "R_ARM_THM_ALU_ABS_G2_NC", + "R_ARM_THM_ALU_ABS_G3", + "R_ARM_THM_ALU_PREL_11_0", + "R_ARM_THM_GOT_BREL12", + "R_ARM_THM_JUMP11", + "R_ARM_THM_JUMP19", + "R_ARM_THM_JUMP24", + "R_ARM_THM_JUMP6", + "R_ARM_THM_JUMP8", + "R_ARM_THM_MOVT_ABS", + "R_ARM_THM_MOVT_BREL", + "R_ARM_THM_MOVT_PREL", + "R_ARM_THM_MOVW_ABS_NC", + "R_ARM_THM_MOVW_BREL", + "R_ARM_THM_MOVW_BREL_NC", + "R_ARM_THM_MOVW_PREL_NC", + "R_ARM_THM_PC12", + "R_ARM_THM_PC22", + "R_ARM_THM_PC8", + "R_ARM_THM_RPC22", + "R_ARM_THM_SWI8", + "R_ARM_THM_TLS_CALL", + "R_ARM_THM_TLS_DESCSEQ16", + "R_ARM_THM_TLS_DESCSEQ32", + "R_ARM_THM_XPC22", + "R_ARM_TLS_CALL", + "R_ARM_TLS_DESCSEQ", + "R_ARM_TLS_DTPMOD32", + "R_ARM_TLS_DTPOFF32", + "R_ARM_TLS_GD32", + "R_ARM_TLS_GOTDESC", + "R_ARM_TLS_IE12GP", + "R_ARM_TLS_IE32", + "R_ARM_TLS_LDM32", + "R_ARM_TLS_LDO12", + "R_ARM_TLS_LDO32", + "R_ARM_TLS_LE12", + "R_ARM_TLS_LE32", + "R_ARM_TLS_TPOFF32", + "R_ARM_V4BX", + "R_ARM_XPC25", + "R_INFO", + "R_INFO32", + "R_MIPS", + "R_MIPS_16", + "R_MIPS_26", + "R_MIPS_32", + "R_MIPS_64", + "R_MIPS_ADD_IMMEDIATE", + "R_MIPS_CALL16", + "R_MIPS_CALL_HI16", + "R_MIPS_CALL_LO16", + "R_MIPS_DELETE", + "R_MIPS_GOT16", + "R_MIPS_GOT_DISP", + "R_MIPS_GOT_HI16", + "R_MIPS_GOT_LO16", + "R_MIPS_GOT_OFST", + "R_MIPS_GOT_PAGE", + "R_MIPS_GPREL16", + "R_MIPS_GPREL32", + "R_MIPS_HI16", + "R_MIPS_HIGHER", + "R_MIPS_HIGHEST", + "R_MIPS_INSERT_A", + "R_MIPS_INSERT_B", + "R_MIPS_JALR", + "R_MIPS_LITERAL", + "R_MIPS_LO16", + "R_MIPS_NONE", + "R_MIPS_PC16", + "R_MIPS_PJUMP", + "R_MIPS_REL16", + "R_MIPS_REL32", + "R_MIPS_RELGOT", + "R_MIPS_SCN_DISP", + "R_MIPS_SHIFT5", + "R_MIPS_SHIFT6", + "R_MIPS_SUB", + "R_MIPS_TLS_DTPMOD32", + "R_MIPS_TLS_DTPMOD64", + "R_MIPS_TLS_DTPREL32", + "R_MIPS_TLS_DTPREL64", + "R_MIPS_TLS_DTPREL_HI16", + "R_MIPS_TLS_DTPREL_LO16", + "R_MIPS_TLS_GD", + "R_MIPS_TLS_GOTTPREL", + "R_MIPS_TLS_LDM", + "R_MIPS_TLS_TPREL32", + "R_MIPS_TLS_TPREL64", + "R_MIPS_TLS_TPREL_HI16", + "R_MIPS_TLS_TPREL_LO16", + "R_PPC", + "R_PPC64", + "R_PPC64_ADDR14", + "R_PPC64_ADDR14_BRNTAKEN", + "R_PPC64_ADDR14_BRTAKEN", + "R_PPC64_ADDR16", + "R_PPC64_ADDR16_DS", + "R_PPC64_ADDR16_HA", + "R_PPC64_ADDR16_HI", + "R_PPC64_ADDR16_HIGH", + "R_PPC64_ADDR16_HIGHA", + "R_PPC64_ADDR16_HIGHER", + "R_PPC64_ADDR16_HIGHERA", + "R_PPC64_ADDR16_HIGHEST", + "R_PPC64_ADDR16_HIGHESTA", + "R_PPC64_ADDR16_LO", + "R_PPC64_ADDR16_LO_DS", + "R_PPC64_ADDR24", + "R_PPC64_ADDR32", + "R_PPC64_ADDR64", + "R_PPC64_ADDR64_LOCAL", + "R_PPC64_DTPMOD64", + "R_PPC64_DTPREL16", + "R_PPC64_DTPREL16_DS", + "R_PPC64_DTPREL16_HA", + "R_PPC64_DTPREL16_HI", + "R_PPC64_DTPREL16_HIGH", + "R_PPC64_DTPREL16_HIGHA", + "R_PPC64_DTPREL16_HIGHER", + "R_PPC64_DTPREL16_HIGHERA", + "R_PPC64_DTPREL16_HIGHEST", + "R_PPC64_DTPREL16_HIGHESTA", + "R_PPC64_DTPREL16_LO", + "R_PPC64_DTPREL16_LO_DS", + "R_PPC64_DTPREL64", + "R_PPC64_ENTRY", + "R_PPC64_GOT16", + "R_PPC64_GOT16_DS", + "R_PPC64_GOT16_HA", + "R_PPC64_GOT16_HI", + "R_PPC64_GOT16_LO", + "R_PPC64_GOT16_LO_DS", + "R_PPC64_GOT_DTPREL16_DS", + "R_PPC64_GOT_DTPREL16_HA", + "R_PPC64_GOT_DTPREL16_HI", + "R_PPC64_GOT_DTPREL16_LO_DS", + "R_PPC64_GOT_TLSGD16", + "R_PPC64_GOT_TLSGD16_HA", + "R_PPC64_GOT_TLSGD16_HI", + "R_PPC64_GOT_TLSGD16_LO", + "R_PPC64_GOT_TLSLD16", + "R_PPC64_GOT_TLSLD16_HA", + "R_PPC64_GOT_TLSLD16_HI", + "R_PPC64_GOT_TLSLD16_LO", + "R_PPC64_GOT_TPREL16_DS", + "R_PPC64_GOT_TPREL16_HA", + "R_PPC64_GOT_TPREL16_HI", + "R_PPC64_GOT_TPREL16_LO_DS", + "R_PPC64_IRELATIVE", + "R_PPC64_JMP_IREL", + "R_PPC64_JMP_SLOT", + "R_PPC64_NONE", + "R_PPC64_PLT16_LO_DS", + "R_PPC64_PLTGOT16", + "R_PPC64_PLTGOT16_DS", + "R_PPC64_PLTGOT16_HA", + "R_PPC64_PLTGOT16_HI", + "R_PPC64_PLTGOT16_LO", + "R_PPC64_PLTGOT_LO_DS", + "R_PPC64_REL14", + "R_PPC64_REL14_BRNTAKEN", + "R_PPC64_REL14_BRTAKEN", + "R_PPC64_REL16", + "R_PPC64_REL16DX_HA", + "R_PPC64_REL16_HA", + "R_PPC64_REL16_HI", + "R_PPC64_REL16_LO", + "R_PPC64_REL24", + "R_PPC64_REL24_NOTOC", + "R_PPC64_REL32", + "R_PPC64_REL64", + "R_PPC64_SECTOFF_DS", + "R_PPC64_SECTOFF_LO_DS", + "R_PPC64_TLS", + "R_PPC64_TLSGD", + "R_PPC64_TLSLD", + "R_PPC64_TOC", + "R_PPC64_TOC16", + "R_PPC64_TOC16_DS", + "R_PPC64_TOC16_HA", + "R_PPC64_TOC16_HI", + "R_PPC64_TOC16_LO", + "R_PPC64_TOC16_LO_DS", + "R_PPC64_TOCSAVE", + "R_PPC64_TPREL16", + "R_PPC64_TPREL16_DS", + "R_PPC64_TPREL16_HA", + "R_PPC64_TPREL16_HI", + "R_PPC64_TPREL16_HIGH", + "R_PPC64_TPREL16_HIGHA", + "R_PPC64_TPREL16_HIGHER", + "R_PPC64_TPREL16_HIGHERA", + "R_PPC64_TPREL16_HIGHEST", + "R_PPC64_TPREL16_HIGHESTA", + "R_PPC64_TPREL16_LO", + "R_PPC64_TPREL16_LO_DS", + "R_PPC64_TPREL64", + "R_PPC_ADDR14", + "R_PPC_ADDR14_BRNTAKEN", + "R_PPC_ADDR14_BRTAKEN", + "R_PPC_ADDR16", + "R_PPC_ADDR16_HA", + "R_PPC_ADDR16_HI", + "R_PPC_ADDR16_LO", + "R_PPC_ADDR24", + "R_PPC_ADDR32", + "R_PPC_COPY", + "R_PPC_DTPMOD32", + "R_PPC_DTPREL16", + "R_PPC_DTPREL16_HA", + "R_PPC_DTPREL16_HI", + "R_PPC_DTPREL16_LO", + "R_PPC_DTPREL32", + "R_PPC_EMB_BIT_FLD", + "R_PPC_EMB_MRKREF", + "R_PPC_EMB_NADDR16", + "R_PPC_EMB_NADDR16_HA", + "R_PPC_EMB_NADDR16_HI", + "R_PPC_EMB_NADDR16_LO", + "R_PPC_EMB_NADDR32", + "R_PPC_EMB_RELSDA", + "R_PPC_EMB_RELSEC16", + "R_PPC_EMB_RELST_HA", + "R_PPC_EMB_RELST_HI", + "R_PPC_EMB_RELST_LO", + "R_PPC_EMB_SDA21", + "R_PPC_EMB_SDA2I16", + "R_PPC_EMB_SDA2REL", + "R_PPC_EMB_SDAI16", + "R_PPC_GLOB_DAT", + "R_PPC_GOT16", + "R_PPC_GOT16_HA", + "R_PPC_GOT16_HI", + "R_PPC_GOT16_LO", + "R_PPC_GOT_TLSGD16", + "R_PPC_GOT_TLSGD16_HA", + "R_PPC_GOT_TLSGD16_HI", + "R_PPC_GOT_TLSGD16_LO", + "R_PPC_GOT_TLSLD16", + "R_PPC_GOT_TLSLD16_HA", + "R_PPC_GOT_TLSLD16_HI", + "R_PPC_GOT_TLSLD16_LO", + "R_PPC_GOT_TPREL16", + "R_PPC_GOT_TPREL16_HA", + "R_PPC_GOT_TPREL16_HI", + "R_PPC_GOT_TPREL16_LO", + "R_PPC_JMP_SLOT", + "R_PPC_LOCAL24PC", + "R_PPC_NONE", + "R_PPC_PLT16_HA", + "R_PPC_PLT16_HI", + "R_PPC_PLT16_LO", + "R_PPC_PLT32", + "R_PPC_PLTREL24", + "R_PPC_PLTREL32", + "R_PPC_REL14", + "R_PPC_REL14_BRNTAKEN", + "R_PPC_REL14_BRTAKEN", + "R_PPC_REL24", + "R_PPC_REL32", + "R_PPC_RELATIVE", + "R_PPC_SDAREL16", + "R_PPC_SECTOFF", + "R_PPC_SECTOFF_HA", + "R_PPC_SECTOFF_HI", + "R_PPC_SECTOFF_LO", + "R_PPC_TLS", + "R_PPC_TPREL16", + "R_PPC_TPREL16_HA", + "R_PPC_TPREL16_HI", + "R_PPC_TPREL16_LO", + "R_PPC_TPREL32", + "R_PPC_UADDR16", + "R_PPC_UADDR32", + "R_RISCV", + "R_RISCV_32", + "R_RISCV_32_PCREL", + "R_RISCV_64", + "R_RISCV_ADD16", + "R_RISCV_ADD32", + "R_RISCV_ADD64", + "R_RISCV_ADD8", + "R_RISCV_ALIGN", + "R_RISCV_BRANCH", + "R_RISCV_CALL", + "R_RISCV_CALL_PLT", + "R_RISCV_COPY", + "R_RISCV_GNU_VTENTRY", + "R_RISCV_GNU_VTINHERIT", + "R_RISCV_GOT_HI20", + "R_RISCV_GPREL_I", + "R_RISCV_GPREL_S", + "R_RISCV_HI20", + "R_RISCV_JAL", + "R_RISCV_JUMP_SLOT", + "R_RISCV_LO12_I", + "R_RISCV_LO12_S", + "R_RISCV_NONE", + "R_RISCV_PCREL_HI20", + "R_RISCV_PCREL_LO12_I", + "R_RISCV_PCREL_LO12_S", + "R_RISCV_RELATIVE", + "R_RISCV_RELAX", + "R_RISCV_RVC_BRANCH", + "R_RISCV_RVC_JUMP", + "R_RISCV_RVC_LUI", + "R_RISCV_SET16", + "R_RISCV_SET32", + "R_RISCV_SET6", + "R_RISCV_SET8", + "R_RISCV_SUB16", + "R_RISCV_SUB32", + "R_RISCV_SUB6", + "R_RISCV_SUB64", + "R_RISCV_SUB8", + "R_RISCV_TLS_DTPMOD32", + "R_RISCV_TLS_DTPMOD64", + "R_RISCV_TLS_DTPREL32", + "R_RISCV_TLS_DTPREL64", + "R_RISCV_TLS_GD_HI20", + "R_RISCV_TLS_GOT_HI20", + "R_RISCV_TLS_TPREL32", + "R_RISCV_TLS_TPREL64", + "R_RISCV_TPREL_ADD", + "R_RISCV_TPREL_HI20", + "R_RISCV_TPREL_I", + "R_RISCV_TPREL_LO12_I", + "R_RISCV_TPREL_LO12_S", + "R_RISCV_TPREL_S", + "R_SPARC", + "R_SPARC_10", + "R_SPARC_11", + "R_SPARC_13", + "R_SPARC_16", + "R_SPARC_22", + "R_SPARC_32", + "R_SPARC_5", + "R_SPARC_6", + "R_SPARC_64", + "R_SPARC_7", + "R_SPARC_8", + "R_SPARC_COPY", + "R_SPARC_DISP16", + "R_SPARC_DISP32", + "R_SPARC_DISP64", + "R_SPARC_DISP8", + "R_SPARC_GLOB_DAT", + "R_SPARC_GLOB_JMP", + "R_SPARC_GOT10", + "R_SPARC_GOT13", + "R_SPARC_GOT22", + "R_SPARC_H44", + "R_SPARC_HH22", + "R_SPARC_HI22", + "R_SPARC_HIPLT22", + "R_SPARC_HIX22", + "R_SPARC_HM10", + "R_SPARC_JMP_SLOT", + "R_SPARC_L44", + "R_SPARC_LM22", + "R_SPARC_LO10", + "R_SPARC_LOPLT10", + "R_SPARC_LOX10", + "R_SPARC_M44", + "R_SPARC_NONE", + "R_SPARC_OLO10", + "R_SPARC_PC10", + "R_SPARC_PC22", + "R_SPARC_PCPLT10", + "R_SPARC_PCPLT22", + "R_SPARC_PCPLT32", + "R_SPARC_PC_HH22", + "R_SPARC_PC_HM10", + "R_SPARC_PC_LM22", + "R_SPARC_PLT32", + "R_SPARC_PLT64", + "R_SPARC_REGISTER", + "R_SPARC_RELATIVE", + "R_SPARC_UA16", + "R_SPARC_UA32", + "R_SPARC_UA64", + "R_SPARC_WDISP16", + "R_SPARC_WDISP19", + "R_SPARC_WDISP22", + "R_SPARC_WDISP30", + "R_SPARC_WPLT30", + "R_SYM32", + "R_SYM64", + "R_TYPE32", + "R_TYPE64", + "R_X86_64", + "R_X86_64_16", + "R_X86_64_32", + "R_X86_64_32S", + "R_X86_64_64", + "R_X86_64_8", + "R_X86_64_COPY", + "R_X86_64_DTPMOD64", + "R_X86_64_DTPOFF32", + "R_X86_64_DTPOFF64", + "R_X86_64_GLOB_DAT", + "R_X86_64_GOT32", + "R_X86_64_GOT64", + "R_X86_64_GOTOFF64", + "R_X86_64_GOTPC32", + "R_X86_64_GOTPC32_TLSDESC", + "R_X86_64_GOTPC64", + "R_X86_64_GOTPCREL", + "R_X86_64_GOTPCREL64", + "R_X86_64_GOTPCRELX", + "R_X86_64_GOTPLT64", + "R_X86_64_GOTTPOFF", + "R_X86_64_IRELATIVE", + "R_X86_64_JMP_SLOT", + "R_X86_64_NONE", + "R_X86_64_PC16", + "R_X86_64_PC32", + "R_X86_64_PC32_BND", + "R_X86_64_PC64", + "R_X86_64_PC8", + "R_X86_64_PLT32", + "R_X86_64_PLT32_BND", + "R_X86_64_PLTOFF64", + "R_X86_64_RELATIVE", + "R_X86_64_RELATIVE64", + "R_X86_64_REX_GOTPCRELX", + "R_X86_64_SIZE32", + "R_X86_64_SIZE64", + "R_X86_64_TLSDESC", + "R_X86_64_TLSDESC_CALL", + "R_X86_64_TLSGD", + "R_X86_64_TLSLD", + "R_X86_64_TPOFF32", + "R_X86_64_TPOFF64", + "Rel32", + "Rel64", + "Rela32", + "Rela64", + "SHF_ALLOC", + "SHF_COMPRESSED", + "SHF_EXECINSTR", + "SHF_GROUP", + "SHF_INFO_LINK", + "SHF_LINK_ORDER", + "SHF_MASKOS", + "SHF_MASKPROC", + "SHF_MERGE", + "SHF_OS_NONCONFORMING", + "SHF_STRINGS", + "SHF_TLS", + "SHF_WRITE", + "SHN_ABS", + "SHN_COMMON", + "SHN_HIOS", + "SHN_HIPROC", + "SHN_HIRESERVE", + "SHN_LOOS", + "SHN_LOPROC", + "SHN_LORESERVE", + "SHN_UNDEF", + "SHN_XINDEX", + "SHT_DYNAMIC", + "SHT_DYNSYM", + "SHT_FINI_ARRAY", + "SHT_GNU_ATTRIBUTES", + "SHT_GNU_HASH", + "SHT_GNU_LIBLIST", + "SHT_GNU_VERDEF", + "SHT_GNU_VERNEED", + "SHT_GNU_VERSYM", + "SHT_GROUP", + "SHT_HASH", + "SHT_HIOS", + "SHT_HIPROC", + "SHT_HIUSER", + "SHT_INIT_ARRAY", + "SHT_LOOS", + "SHT_LOPROC", + "SHT_LOUSER", + "SHT_NOBITS", + "SHT_NOTE", + "SHT_NULL", + "SHT_PREINIT_ARRAY", + "SHT_PROGBITS", + "SHT_REL", + "SHT_RELA", + "SHT_SHLIB", + "SHT_STRTAB", + "SHT_SYMTAB", + "SHT_SYMTAB_SHNDX", + "STB_GLOBAL", + "STB_HIOS", + "STB_HIPROC", + "STB_LOCAL", + "STB_LOOS", + "STB_LOPROC", + "STB_WEAK", + "STT_COMMON", + "STT_FILE", + "STT_FUNC", + "STT_HIOS", + "STT_HIPROC", + "STT_LOOS", + "STT_LOPROC", + "STT_NOTYPE", + "STT_OBJECT", + "STT_SECTION", + "STT_TLS", + "STV_DEFAULT", + "STV_HIDDEN", + "STV_INTERNAL", + "STV_PROTECTED", + "ST_BIND", + "ST_INFO", + "ST_TYPE", + "ST_VISIBILITY", + "Section", + "Section32", + "Section64", + "SectionFlag", + "SectionHeader", + "SectionIndex", + "SectionType", + "Sym32", + "Sym32Size", + "Sym64", + "Sym64Size", + "SymBind", + "SymType", + "SymVis", + "Symbol", + "Type", + "Version", + }, + "debug/gosym": []string{ + "DecodingError", + "Func", + "LineTable", + "NewLineTable", + "NewTable", + "Obj", + "Sym", + "Table", + "UnknownFileError", + "UnknownLineError", + }, + "debug/macho": []string{ + "ARM64_RELOC_ADDEND", + "ARM64_RELOC_BRANCH26", + "ARM64_RELOC_GOT_LOAD_PAGE21", + "ARM64_RELOC_GOT_LOAD_PAGEOFF12", + "ARM64_RELOC_PAGE21", + "ARM64_RELOC_PAGEOFF12", + "ARM64_RELOC_POINTER_TO_GOT", + "ARM64_RELOC_SUBTRACTOR", + "ARM64_RELOC_TLVP_LOAD_PAGE21", + "ARM64_RELOC_TLVP_LOAD_PAGEOFF12", + "ARM64_RELOC_UNSIGNED", + "ARM_RELOC_BR24", + "ARM_RELOC_HALF", + "ARM_RELOC_HALF_SECTDIFF", + "ARM_RELOC_LOCAL_SECTDIFF", + "ARM_RELOC_PAIR", + "ARM_RELOC_PB_LA_PTR", + "ARM_RELOC_SECTDIFF", + "ARM_RELOC_VANILLA", + "ARM_THUMB_32BIT_BRANCH", + "ARM_THUMB_RELOC_BR22", + "Cpu", + "Cpu386", + "CpuAmd64", + "CpuArm", + "CpuArm64", + "CpuPpc", + "CpuPpc64", + "Dylib", + "DylibCmd", + "Dysymtab", + "DysymtabCmd", + "ErrNotFat", + "FatArch", + "FatArchHeader", + "FatFile", + "File", + "FileHeader", + "FlagAllModsBound", + "FlagAllowStackExecution", + "FlagAppExtensionSafe", + "FlagBindAtLoad", + "FlagBindsToWeak", + "FlagCanonical", + "FlagDeadStrippableDylib", + "FlagDyldLink", + "FlagForceFlat", + "FlagHasTLVDescriptors", + "FlagIncrLink", + "FlagLazyInit", + "FlagNoFixPrebinding", + "FlagNoHeapExecution", + "FlagNoMultiDefs", + "FlagNoReexportedDylibs", + "FlagNoUndefs", + "FlagPIE", + "FlagPrebindable", + "FlagPrebound", + "FlagRootSafe", + "FlagSetuidSafe", + "FlagSplitSegs", + "FlagSubsectionsViaSymbols", + "FlagTwoLevel", + "FlagWeakDefines", + "FormatError", + "GENERIC_RELOC_LOCAL_SECTDIFF", + "GENERIC_RELOC_PAIR", + "GENERIC_RELOC_PB_LA_PTR", + "GENERIC_RELOC_SECTDIFF", + "GENERIC_RELOC_TLV", + "GENERIC_RELOC_VANILLA", + "Load", + "LoadBytes", + "LoadCmd", + "LoadCmdDylib", + "LoadCmdDylinker", + "LoadCmdDysymtab", + "LoadCmdRpath", + "LoadCmdSegment", + "LoadCmdSegment64", + "LoadCmdSymtab", + "LoadCmdThread", + "LoadCmdUnixThread", + "Magic32", + "Magic64", + "MagicFat", + "NewFatFile", + "NewFile", + "Nlist32", + "Nlist64", + "Open", + "OpenFat", + "Regs386", + "RegsAMD64", + "Reloc", + "RelocTypeARM", + "RelocTypeARM64", + "RelocTypeGeneric", + "RelocTypeX86_64", + "Rpath", + "RpathCmd", + "Section", + "Section32", + "Section64", + "SectionHeader", + "Segment", + "Segment32", + "Segment64", + "SegmentHeader", + "Symbol", + "Symtab", + "SymtabCmd", + "Thread", + "Type", + "TypeBundle", + "TypeDylib", + "TypeExec", + "TypeObj", + "X86_64_RELOC_BRANCH", + "X86_64_RELOC_GOT", + "X86_64_RELOC_GOT_LOAD", + "X86_64_RELOC_SIGNED", + "X86_64_RELOC_SIGNED_1", + "X86_64_RELOC_SIGNED_2", + "X86_64_RELOC_SIGNED_4", + "X86_64_RELOC_SUBTRACTOR", + "X86_64_RELOC_TLV", + "X86_64_RELOC_UNSIGNED", + }, + "debug/pe": []string{ + "COFFSymbol", + "COFFSymbolSize", + "DataDirectory", + "File", + "FileHeader", + "FormatError", + "IMAGE_DIRECTORY_ENTRY_ARCHITECTURE", + "IMAGE_DIRECTORY_ENTRY_BASERELOC", + "IMAGE_DIRECTORY_ENTRY_BOUND_IMPORT", + "IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR", + "IMAGE_DIRECTORY_ENTRY_DEBUG", + "IMAGE_DIRECTORY_ENTRY_DELAY_IMPORT", + "IMAGE_DIRECTORY_ENTRY_EXCEPTION", + "IMAGE_DIRECTORY_ENTRY_EXPORT", + "IMAGE_DIRECTORY_ENTRY_GLOBALPTR", + "IMAGE_DIRECTORY_ENTRY_IAT", + "IMAGE_DIRECTORY_ENTRY_IMPORT", + "IMAGE_DIRECTORY_ENTRY_LOAD_CONFIG", + "IMAGE_DIRECTORY_ENTRY_RESOURCE", + "IMAGE_DIRECTORY_ENTRY_SECURITY", + "IMAGE_DIRECTORY_ENTRY_TLS", + "IMAGE_DLLCHARACTERISTICS_APPCONTAINER", + "IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE", + "IMAGE_DLLCHARACTERISTICS_FORCE_INTEGRITY", + "IMAGE_DLLCHARACTERISTICS_GUARD_CF", + "IMAGE_DLLCHARACTERISTICS_HIGH_ENTROPY_VA", + "IMAGE_DLLCHARACTERISTICS_NO_BIND", + "IMAGE_DLLCHARACTERISTICS_NO_ISOLATION", + "IMAGE_DLLCHARACTERISTICS_NO_SEH", + "IMAGE_DLLCHARACTERISTICS_NX_COMPAT", + "IMAGE_DLLCHARACTERISTICS_TERMINAL_SERVER_AWARE", + "IMAGE_DLLCHARACTERISTICS_WDM_DRIVER", + "IMAGE_FILE_32BIT_MACHINE", + "IMAGE_FILE_AGGRESIVE_WS_TRIM", + "IMAGE_FILE_BYTES_REVERSED_HI", + "IMAGE_FILE_BYTES_REVERSED_LO", + "IMAGE_FILE_DEBUG_STRIPPED", + "IMAGE_FILE_DLL", + "IMAGE_FILE_EXECUTABLE_IMAGE", + "IMAGE_FILE_LARGE_ADDRESS_AWARE", + "IMAGE_FILE_LINE_NUMS_STRIPPED", + "IMAGE_FILE_LOCAL_SYMS_STRIPPED", + "IMAGE_FILE_MACHINE_AM33", + "IMAGE_FILE_MACHINE_AMD64", + "IMAGE_FILE_MACHINE_ARM", + "IMAGE_FILE_MACHINE_ARM64", + "IMAGE_FILE_MACHINE_ARMNT", + "IMAGE_FILE_MACHINE_EBC", + "IMAGE_FILE_MACHINE_I386", + "IMAGE_FILE_MACHINE_IA64", + "IMAGE_FILE_MACHINE_M32R", + "IMAGE_FILE_MACHINE_MIPS16", + "IMAGE_FILE_MACHINE_MIPSFPU", + "IMAGE_FILE_MACHINE_MIPSFPU16", + "IMAGE_FILE_MACHINE_POWERPC", + "IMAGE_FILE_MACHINE_POWERPCFP", + "IMAGE_FILE_MACHINE_R4000", + "IMAGE_FILE_MACHINE_SH3", + "IMAGE_FILE_MACHINE_SH3DSP", + "IMAGE_FILE_MACHINE_SH4", + "IMAGE_FILE_MACHINE_SH5", + "IMAGE_FILE_MACHINE_THUMB", + "IMAGE_FILE_MACHINE_UNKNOWN", + "IMAGE_FILE_MACHINE_WCEMIPSV2", + "IMAGE_FILE_NET_RUN_FROM_SWAP", + "IMAGE_FILE_RELOCS_STRIPPED", + "IMAGE_FILE_REMOVABLE_RUN_FROM_SWAP", + "IMAGE_FILE_SYSTEM", + "IMAGE_FILE_UP_SYSTEM_ONLY", + "IMAGE_SUBSYSTEM_EFI_APPLICATION", + "IMAGE_SUBSYSTEM_EFI_BOOT_SERVICE_DRIVER", + "IMAGE_SUBSYSTEM_EFI_ROM", + "IMAGE_SUBSYSTEM_EFI_RUNTIME_DRIVER", + "IMAGE_SUBSYSTEM_NATIVE", + "IMAGE_SUBSYSTEM_NATIVE_WINDOWS", + "IMAGE_SUBSYSTEM_OS2_CUI", + "IMAGE_SUBSYSTEM_POSIX_CUI", + "IMAGE_SUBSYSTEM_UNKNOWN", + "IMAGE_SUBSYSTEM_WINDOWS_BOOT_APPLICATION", + "IMAGE_SUBSYSTEM_WINDOWS_CE_GUI", + "IMAGE_SUBSYSTEM_WINDOWS_CUI", + "IMAGE_SUBSYSTEM_WINDOWS_GUI", + "IMAGE_SUBSYSTEM_XBOX", + "ImportDirectory", + "NewFile", + "Open", + "OptionalHeader32", + "OptionalHeader64", + "Reloc", + "Section", + "SectionHeader", + "SectionHeader32", + "StringTable", + "Symbol", + }, + "debug/plan9obj": []string{ + "File", + "FileHeader", + "Magic386", + "Magic64", + "MagicAMD64", + "MagicARM", + "NewFile", + "Open", + "Section", + "SectionHeader", + "Sym", + }, + "encoding": []string{ + "BinaryMarshaler", + "BinaryUnmarshaler", + "TextMarshaler", + "TextUnmarshaler", + }, + "encoding/ascii85": []string{ + "CorruptInputError", + "Decode", + "Encode", + "MaxEncodedLen", + "NewDecoder", + "NewEncoder", + }, + "encoding/asn1": []string{ + "BitString", + "ClassApplication", + "ClassContextSpecific", + "ClassPrivate", + "ClassUniversal", + "Enumerated", + "Flag", + "Marshal", + "MarshalWithParams", + "NullBytes", + "NullRawValue", + "ObjectIdentifier", + "RawContent", + "RawValue", + "StructuralError", + "SyntaxError", + "TagBMPString", + "TagBitString", + "TagBoolean", + "TagEnum", + "TagGeneralString", + "TagGeneralizedTime", + "TagIA5String", + "TagInteger", + "TagNull", + "TagNumericString", + "TagOID", + "TagOctetString", + "TagPrintableString", + "TagSequence", + "TagSet", + "TagT61String", + "TagUTCTime", + "TagUTF8String", + "Unmarshal", + "UnmarshalWithParams", + }, + "encoding/base32": []string{ + "CorruptInputError", + "Encoding", + "HexEncoding", + "NewDecoder", + "NewEncoder", + "NewEncoding", + "NoPadding", + "StdEncoding", + "StdPadding", + }, + "encoding/base64": []string{ + "CorruptInputError", + "Encoding", + "NewDecoder", + "NewEncoder", + "NewEncoding", + "NoPadding", + "RawStdEncoding", + "RawURLEncoding", + "StdEncoding", + "StdPadding", + "URLEncoding", + }, + "encoding/binary": []string{ + "BigEndian", + "ByteOrder", + "LittleEndian", + "MaxVarintLen16", + "MaxVarintLen32", + "MaxVarintLen64", + "PutUvarint", + "PutVarint", + "Read", + "ReadUvarint", + "ReadVarint", + "Size", + "Uvarint", + "Varint", + "Write", + }, + "encoding/csv": []string{ + "ErrBareQuote", + "ErrFieldCount", + "ErrQuote", + "ErrTrailingComma", + "NewReader", + "NewWriter", + "ParseError", + "Reader", + "Writer", + }, + "encoding/gob": []string{ + "CommonType", + "Decoder", + "Encoder", + "GobDecoder", + "GobEncoder", + "NewDecoder", + "NewEncoder", + "Register", + "RegisterName", + }, + "encoding/hex": []string{ + "Decode", + "DecodeString", + "DecodedLen", + "Dump", + "Dumper", + "Encode", + "EncodeToString", + "EncodedLen", + "ErrLength", + "InvalidByteError", + "NewDecoder", + "NewEncoder", + }, + "encoding/json": []string{ + "Compact", + "Decoder", + "Delim", + "Encoder", + "HTMLEscape", + "Indent", + "InvalidUTF8Error", + "InvalidUnmarshalError", + "Marshal", + "MarshalIndent", + "Marshaler", + "MarshalerError", + "NewDecoder", + "NewEncoder", + "Number", + "RawMessage", + "SyntaxError", + "Token", + "Unmarshal", + "UnmarshalFieldError", + "UnmarshalTypeError", + "Unmarshaler", + "UnsupportedTypeError", + "UnsupportedValueError", + "Valid", + }, + "encoding/pem": []string{ + "Block", + "Decode", + "Encode", + "EncodeToMemory", + }, + "encoding/xml": []string{ + "Attr", + "CharData", + "Comment", + "CopyToken", + "Decoder", + "Directive", + "Encoder", + "EndElement", + "Escape", + "EscapeText", + "HTMLAutoClose", + "HTMLEntity", + "Header", + "Marshal", + "MarshalIndent", + "Marshaler", + "MarshalerAttr", + "Name", + "NewDecoder", + "NewEncoder", + "NewTokenDecoder", + "ProcInst", + "StartElement", + "SyntaxError", + "TagPathError", + "Token", + "TokenReader", + "Unmarshal", + "UnmarshalError", + "Unmarshaler", + "UnmarshalerAttr", + "UnsupportedTypeError", + }, + "errors": []string{ + "As", + "Is", + "New", + "Unwrap", + }, + "expvar": []string{ + "Do", + "Float", + "Func", + "Get", + "Handler", + "Int", + "KeyValue", + "Map", + "NewFloat", + "NewInt", + "NewMap", + "NewString", + "Publish", + "String", + "Var", + }, + "flag": []string{ + "Arg", + "Args", + "Bool", + "BoolVar", + "CommandLine", + "ContinueOnError", + "Duration", + "DurationVar", + "ErrHelp", + "ErrorHandling", + "ExitOnError", + "Flag", + "FlagSet", + "Float64", + "Float64Var", + "Getter", + "Int", + "Int64", + "Int64Var", + "IntVar", + "Lookup", + "NArg", + "NFlag", + "NewFlagSet", + "PanicOnError", + "Parse", + "Parsed", + "PrintDefaults", + "Set", + "String", + "StringVar", + "Uint", + "Uint64", + "Uint64Var", + "UintVar", + "UnquoteUsage", + "Usage", + "Value", + "Var", + "Visit", + "VisitAll", + }, + "fmt": []string{ + "Errorf", + "Formatter", + "Fprint", + "Fprintf", + "Fprintln", + "Fscan", + "Fscanf", + "Fscanln", + "GoStringer", + "Print", + "Printf", + "Println", + "Scan", + "ScanState", + "Scanf", + "Scanln", + "Scanner", + "Sprint", + "Sprintf", + "Sprintln", + "Sscan", + "Sscanf", + "Sscanln", + "State", + "Stringer", + }, + "go/ast": []string{ + "ArrayType", + "AssignStmt", + "Bad", + "BadDecl", + "BadExpr", + "BadStmt", + "BasicLit", + "BinaryExpr", + "BlockStmt", + "BranchStmt", + "CallExpr", + "CaseClause", + "ChanDir", + "ChanType", + "CommClause", + "Comment", + "CommentGroup", + "CommentMap", + "CompositeLit", + "Con", + "Decl", + "DeclStmt", + "DeferStmt", + "Ellipsis", + "EmptyStmt", + "Expr", + "ExprStmt", + "Field", + "FieldFilter", + "FieldList", + "File", + "FileExports", + "Filter", + "FilterDecl", + "FilterFile", + "FilterFuncDuplicates", + "FilterImportDuplicates", + "FilterPackage", + "FilterUnassociatedComments", + "ForStmt", + "Fprint", + "Fun", + "FuncDecl", + "FuncLit", + "FuncType", + "GenDecl", + "GoStmt", + "Ident", + "IfStmt", + "ImportSpec", + "Importer", + "IncDecStmt", + "IndexExpr", + "Inspect", + "InterfaceType", + "IsExported", + "KeyValueExpr", + "LabeledStmt", + "Lbl", + "MapType", + "MergeMode", + "MergePackageFiles", + "NewCommentMap", + "NewIdent", + "NewObj", + "NewPackage", + "NewScope", + "Node", + "NotNilFilter", + "ObjKind", + "Object", + "Package", + "PackageExports", + "ParenExpr", + "Pkg", + "Print", + "RECV", + "RangeStmt", + "ReturnStmt", + "SEND", + "Scope", + "SelectStmt", + "SelectorExpr", + "SendStmt", + "SliceExpr", + "SortImports", + "Spec", + "StarExpr", + "Stmt", + "StructType", + "SwitchStmt", + "Typ", + "TypeAssertExpr", + "TypeSpec", + "TypeSwitchStmt", + "UnaryExpr", + "ValueSpec", + "Var", + "Visitor", + "Walk", + }, + "go/build": []string{ + "AllowBinary", + "ArchChar", + "Context", + "Default", + "FindOnly", + "IgnoreVendor", + "Import", + "ImportComment", + "ImportDir", + "ImportMode", + "IsLocalImport", + "MultiplePackageError", + "NoGoError", + "Package", + "ToolDir", + }, + "go/constant": []string{ + "BinaryOp", + "BitLen", + "Bool", + "BoolVal", + "Bytes", + "Compare", + "Complex", + "Denom", + "Float", + "Float32Val", + "Float64Val", + "Imag", + "Int", + "Int64Val", + "Kind", + "Make", + "MakeBool", + "MakeFloat64", + "MakeFromBytes", + "MakeFromLiteral", + "MakeImag", + "MakeInt64", + "MakeString", + "MakeUint64", + "MakeUnknown", + "Num", + "Real", + "Shift", + "Sign", + "String", + "StringVal", + "ToComplex", + "ToFloat", + "ToInt", + "Uint64Val", + "UnaryOp", + "Unknown", + "Val", + "Value", + }, + "go/doc": []string{ + "AllDecls", + "AllMethods", + "Example", + "Examples", + "Filter", + "Func", + "IllegalPrefixes", + "IsPredeclared", + "Mode", + "New", + "NewFromFiles", + "Note", + "Package", + "PreserveAST", + "Synopsis", + "ToHTML", + "ToText", + "Type", + "Value", + }, + "go/format": []string{ + "Node", + "Source", + }, + "go/importer": []string{ + "Default", + "For", + "ForCompiler", + "Lookup", + }, + "go/parser": []string{ + "AllErrors", + "DeclarationErrors", + "ImportsOnly", + "Mode", + "PackageClauseOnly", + "ParseComments", + "ParseDir", + "ParseExpr", + "ParseExprFrom", + "ParseFile", + "SpuriousErrors", + "Trace", + }, + "go/printer": []string{ + "CommentedNode", + "Config", + "Fprint", + "Mode", + "RawFormat", + "SourcePos", + "TabIndent", + "UseSpaces", + }, + "go/scanner": []string{ + "Error", + "ErrorHandler", + "ErrorList", + "Mode", + "PrintError", + "ScanComments", + "Scanner", + }, + "go/token": []string{ + "ADD", + "ADD_ASSIGN", + "AND", + "AND_ASSIGN", + "AND_NOT", + "AND_NOT_ASSIGN", + "ARROW", + "ASSIGN", + "BREAK", + "CASE", + "CHAN", + "CHAR", + "COLON", + "COMMA", + "COMMENT", + "CONST", + "CONTINUE", + "DEC", + "DEFAULT", + "DEFER", + "DEFINE", + "ELLIPSIS", + "ELSE", + "EOF", + "EQL", + "FALLTHROUGH", + "FLOAT", + "FOR", + "FUNC", + "File", + "FileSet", + "GEQ", + "GO", + "GOTO", + "GTR", + "HighestPrec", + "IDENT", + "IF", + "ILLEGAL", + "IMAG", + "IMPORT", + "INC", + "INT", + "INTERFACE", + "IsExported", + "IsIdentifier", + "IsKeyword", + "LAND", + "LBRACE", + "LBRACK", + "LEQ", + "LOR", + "LPAREN", + "LSS", + "Lookup", + "LowestPrec", + "MAP", + "MUL", + "MUL_ASSIGN", + "NEQ", + "NOT", + "NewFileSet", + "NoPos", + "OR", + "OR_ASSIGN", + "PACKAGE", + "PERIOD", + "Pos", + "Position", + "QUO", + "QUO_ASSIGN", + "RANGE", + "RBRACE", + "RBRACK", + "REM", + "REM_ASSIGN", + "RETURN", + "RPAREN", + "SELECT", + "SEMICOLON", + "SHL", + "SHL_ASSIGN", + "SHR", + "SHR_ASSIGN", + "STRING", + "STRUCT", + "SUB", + "SUB_ASSIGN", + "SWITCH", + "TYPE", + "Token", + "UnaryPrec", + "VAR", + "XOR", + "XOR_ASSIGN", + }, + "go/types": []string{ + "Array", + "AssertableTo", + "AssignableTo", + "Basic", + "BasicInfo", + "BasicKind", + "Bool", + "Builtin", + "Byte", + "Chan", + "ChanDir", + "CheckExpr", + "Checker", + "Comparable", + "Complex128", + "Complex64", + "Config", + "Const", + "ConvertibleTo", + "DefPredeclaredTestFuncs", + "Default", + "Error", + "Eval", + "ExprString", + "FieldVal", + "Float32", + "Float64", + "Func", + "Id", + "Identical", + "IdenticalIgnoreTags", + "Implements", + "ImportMode", + "Importer", + "ImporterFrom", + "Info", + "Initializer", + "Int", + "Int16", + "Int32", + "Int64", + "Int8", + "Interface", + "Invalid", + "IsBoolean", + "IsComplex", + "IsConstType", + "IsFloat", + "IsInteger", + "IsInterface", + "IsNumeric", + "IsOrdered", + "IsString", + "IsUnsigned", + "IsUntyped", + "Label", + "LookupFieldOrMethod", + "Map", + "MethodExpr", + "MethodSet", + "MethodVal", + "MissingMethod", + "Named", + "NewArray", + "NewChan", + "NewChecker", + "NewConst", + "NewField", + "NewFunc", + "NewInterface", + "NewInterfaceType", + "NewLabel", + "NewMap", + "NewMethodSet", + "NewNamed", + "NewPackage", + "NewParam", + "NewPkgName", + "NewPointer", + "NewScope", + "NewSignature", + "NewSlice", + "NewStruct", + "NewTuple", + "NewTypeName", + "NewVar", + "Nil", + "Object", + "ObjectString", + "Package", + "PkgName", + "Pointer", + "Qualifier", + "RecvOnly", + "RelativeTo", + "Rune", + "Scope", + "Selection", + "SelectionKind", + "SelectionString", + "SendOnly", + "SendRecv", + "Signature", + "Sizes", + "SizesFor", + "Slice", + "StdSizes", + "String", + "Struct", + "Tuple", + "Typ", + "Type", + "TypeAndValue", + "TypeName", + "TypeString", + "Uint", + "Uint16", + "Uint32", + "Uint64", + "Uint8", + "Uintptr", + "Universe", + "Unsafe", + "UnsafePointer", + "UntypedBool", + "UntypedComplex", + "UntypedFloat", + "UntypedInt", + "UntypedNil", + "UntypedRune", + "UntypedString", + "Var", + "WriteExpr", + "WriteSignature", + "WriteType", + }, + "hash": []string{ + "Hash", + "Hash32", + "Hash64", + }, + "hash/adler32": []string{ + "Checksum", + "New", + "Size", + }, + "hash/crc32": []string{ + "Castagnoli", + "Checksum", + "ChecksumIEEE", + "IEEE", + "IEEETable", + "Koopman", + "MakeTable", + "New", + "NewIEEE", + "Size", + "Table", + "Update", + }, + "hash/crc64": []string{ + "Checksum", + "ECMA", + "ISO", + "MakeTable", + "New", + "Size", + "Table", + "Update", + }, + "hash/fnv": []string{ + "New128", + "New128a", + "New32", + "New32a", + "New64", + "New64a", + }, + "hash/maphash": []string{ + "Hash", + "MakeSeed", + "Seed", + }, + "html": []string{ + "EscapeString", + "UnescapeString", + }, + "html/template": []string{ + "CSS", + "ErrAmbigContext", + "ErrBadHTML", + "ErrBranchEnd", + "ErrEndContext", + "ErrNoSuchTemplate", + "ErrOutputContext", + "ErrPartialCharset", + "ErrPartialEscape", + "ErrPredefinedEscaper", + "ErrRangeLoopReentry", + "ErrSlashAmbig", + "Error", + "ErrorCode", + "FuncMap", + "HTML", + "HTMLAttr", + "HTMLEscape", + "HTMLEscapeString", + "HTMLEscaper", + "IsTrue", + "JS", + "JSEscape", + "JSEscapeString", + "JSEscaper", + "JSStr", + "Must", + "New", + "OK", + "ParseFiles", + "ParseGlob", + "Srcset", + "Template", + "URL", + "URLQueryEscaper", + }, + "image": []string{ + "Alpha", + "Alpha16", + "Black", + "CMYK", + "Config", + "Decode", + "DecodeConfig", + "ErrFormat", + "Gray", + "Gray16", + "Image", + "NRGBA", + "NRGBA64", + "NYCbCrA", + "NewAlpha", + "NewAlpha16", + "NewCMYK", + "NewGray", + "NewGray16", + "NewNRGBA", + "NewNRGBA64", + "NewNYCbCrA", + "NewPaletted", + "NewRGBA", + "NewRGBA64", + "NewUniform", + "NewYCbCr", + "Opaque", + "Paletted", + "PalettedImage", + "Point", + "Pt", + "RGBA", + "RGBA64", + "Rect", + "Rectangle", + "RegisterFormat", + "Transparent", + "Uniform", + "White", + "YCbCr", + "YCbCrSubsampleRatio", + "YCbCrSubsampleRatio410", + "YCbCrSubsampleRatio411", + "YCbCrSubsampleRatio420", + "YCbCrSubsampleRatio422", + "YCbCrSubsampleRatio440", + "YCbCrSubsampleRatio444", + "ZP", + "ZR", + }, + "image/color": []string{ + "Alpha", + "Alpha16", + "Alpha16Model", + "AlphaModel", + "Black", + "CMYK", + "CMYKModel", + "CMYKToRGB", + "Color", + "Gray", + "Gray16", + "Gray16Model", + "GrayModel", + "Model", + "ModelFunc", + "NRGBA", + "NRGBA64", + "NRGBA64Model", + "NRGBAModel", + "NYCbCrA", + "NYCbCrAModel", + "Opaque", + "Palette", + "RGBA", + "RGBA64", + "RGBA64Model", + "RGBAModel", + "RGBToCMYK", + "RGBToYCbCr", + "Transparent", + "White", + "YCbCr", + "YCbCrModel", + "YCbCrToRGB", + }, + "image/color/palette": []string{ + "Plan9", + "WebSafe", + }, + "image/draw": []string{ + "Draw", + "DrawMask", + "Drawer", + "FloydSteinberg", + "Image", + "Op", + "Over", + "Quantizer", + "Src", + }, + "image/gif": []string{ + "Decode", + "DecodeAll", + "DecodeConfig", + "DisposalBackground", + "DisposalNone", + "DisposalPrevious", + "Encode", + "EncodeAll", + "GIF", + "Options", + }, + "image/jpeg": []string{ + "Decode", + "DecodeConfig", + "DefaultQuality", + "Encode", + "FormatError", + "Options", + "Reader", + "UnsupportedError", + }, + "image/png": []string{ + "BestCompression", + "BestSpeed", + "CompressionLevel", + "Decode", + "DecodeConfig", + "DefaultCompression", + "Encode", + "Encoder", + "EncoderBuffer", + "EncoderBufferPool", + "FormatError", + "NoCompression", + "UnsupportedError", + }, + "index/suffixarray": []string{ + "Index", + "New", + }, + "io": []string{ + "ByteReader", + "ByteScanner", + "ByteWriter", + "Closer", + "Copy", + "CopyBuffer", + "CopyN", + "EOF", + "ErrClosedPipe", + "ErrNoProgress", + "ErrShortBuffer", + "ErrShortWrite", + "ErrUnexpectedEOF", + "LimitReader", + "LimitedReader", + "MultiReader", + "MultiWriter", + "NewSectionReader", + "Pipe", + "PipeReader", + "PipeWriter", + "ReadAtLeast", + "ReadCloser", + "ReadFull", + "ReadSeeker", + "ReadWriteCloser", + "ReadWriteSeeker", + "ReadWriter", + "Reader", + "ReaderAt", + "ReaderFrom", + "RuneReader", + "RuneScanner", + "SectionReader", + "SeekCurrent", + "SeekEnd", + "SeekStart", + "Seeker", + "StringWriter", + "TeeReader", + "WriteCloser", + "WriteSeeker", + "WriteString", + "Writer", + "WriterAt", + "WriterTo", + }, + "io/ioutil": []string{ + "Discard", + "NopCloser", + "ReadAll", + "ReadDir", + "ReadFile", + "TempDir", + "TempFile", + "WriteFile", + }, + "log": []string{ + "Fatal", + "Fatalf", + "Fatalln", + "Flags", + "LUTC", + "Ldate", + "Llongfile", + "Lmicroseconds", + "Lmsgprefix", + "Logger", + "Lshortfile", + "LstdFlags", + "Ltime", + "New", + "Output", + "Panic", + "Panicf", + "Panicln", + "Prefix", + "Print", + "Printf", + "Println", + "SetFlags", + "SetOutput", + "SetPrefix", + "Writer", + }, + "log/syslog": []string{ + "Dial", + "LOG_ALERT", + "LOG_AUTH", + "LOG_AUTHPRIV", + "LOG_CRIT", + "LOG_CRON", + "LOG_DAEMON", + "LOG_DEBUG", + "LOG_EMERG", + "LOG_ERR", + "LOG_FTP", + "LOG_INFO", + "LOG_KERN", + "LOG_LOCAL0", + "LOG_LOCAL1", + "LOG_LOCAL2", + "LOG_LOCAL3", + "LOG_LOCAL4", + "LOG_LOCAL5", + "LOG_LOCAL6", + "LOG_LOCAL7", + "LOG_LPR", + "LOG_MAIL", + "LOG_NEWS", + "LOG_NOTICE", + "LOG_SYSLOG", + "LOG_USER", + "LOG_UUCP", + "LOG_WARNING", + "New", + "NewLogger", + "Priority", + "Writer", + }, + "math": []string{ + "Abs", + "Acos", + "Acosh", + "Asin", + "Asinh", + "Atan", + "Atan2", + "Atanh", + "Cbrt", + "Ceil", + "Copysign", + "Cos", + "Cosh", + "Dim", + "E", + "Erf", + "Erfc", + "Erfcinv", + "Erfinv", + "Exp", + "Exp2", + "Expm1", + "FMA", + "Float32bits", + "Float32frombits", + "Float64bits", + "Float64frombits", + "Floor", + "Frexp", + "Gamma", + "Hypot", + "Ilogb", + "Inf", + "IsInf", + "IsNaN", + "J0", + "J1", + "Jn", + "Ldexp", + "Lgamma", + "Ln10", + "Ln2", + "Log", + "Log10", + "Log10E", + "Log1p", + "Log2", + "Log2E", + "Logb", + "Max", + "MaxFloat32", + "MaxFloat64", + "MaxInt16", + "MaxInt32", + "MaxInt64", + "MaxInt8", + "MaxUint16", + "MaxUint32", + "MaxUint64", + "MaxUint8", + "Min", + "MinInt16", + "MinInt32", + "MinInt64", + "MinInt8", + "Mod", + "Modf", + "NaN", + "Nextafter", + "Nextafter32", + "Phi", + "Pi", + "Pow", + "Pow10", + "Remainder", + "Round", + "RoundToEven", + "Signbit", + "Sin", + "Sincos", + "Sinh", + "SmallestNonzeroFloat32", + "SmallestNonzeroFloat64", + "Sqrt", + "Sqrt2", + "SqrtE", + "SqrtPhi", + "SqrtPi", + "Tan", + "Tanh", + "Trunc", + "Y0", + "Y1", + "Yn", + }, + "math/big": []string{ + "Above", + "Accuracy", + "AwayFromZero", + "Below", + "ErrNaN", + "Exact", + "Float", + "Int", + "Jacobi", + "MaxBase", + "MaxExp", + "MaxPrec", + "MinExp", + "NewFloat", + "NewInt", + "NewRat", + "ParseFloat", + "Rat", + "RoundingMode", + "ToNearestAway", + "ToNearestEven", + "ToNegativeInf", + "ToPositiveInf", + "ToZero", + "Word", + }, + "math/bits": []string{ + "Add", + "Add32", + "Add64", + "Div", + "Div32", + "Div64", + "LeadingZeros", + "LeadingZeros16", + "LeadingZeros32", + "LeadingZeros64", + "LeadingZeros8", + "Len", + "Len16", + "Len32", + "Len64", + "Len8", + "Mul", + "Mul32", + "Mul64", + "OnesCount", + "OnesCount16", + "OnesCount32", + "OnesCount64", + "OnesCount8", + "Rem", + "Rem32", + "Rem64", + "Reverse", + "Reverse16", + "Reverse32", + "Reverse64", + "Reverse8", + "ReverseBytes", + "ReverseBytes16", + "ReverseBytes32", + "ReverseBytes64", + "RotateLeft", + "RotateLeft16", + "RotateLeft32", + "RotateLeft64", + "RotateLeft8", + "Sub", + "Sub32", + "Sub64", + "TrailingZeros", + "TrailingZeros16", + "TrailingZeros32", + "TrailingZeros64", + "TrailingZeros8", + "UintSize", + }, + "math/cmplx": []string{ + "Abs", + "Acos", + "Acosh", + "Asin", + "Asinh", + "Atan", + "Atanh", + "Conj", + "Cos", + "Cosh", + "Cot", + "Exp", + "Inf", + "IsInf", + "IsNaN", + "Log", + "Log10", + "NaN", + "Phase", + "Polar", + "Pow", + "Rect", + "Sin", + "Sinh", + "Sqrt", + "Tan", + "Tanh", + }, + "math/rand": []string{ + "ExpFloat64", + "Float32", + "Float64", + "Int", + "Int31", + "Int31n", + "Int63", + "Int63n", + "Intn", + "New", + "NewSource", + "NewZipf", + "NormFloat64", + "Perm", + "Rand", + "Read", + "Seed", + "Shuffle", + "Source", + "Source64", + "Uint32", + "Uint64", + "Zipf", + }, + "mime": []string{ + "AddExtensionType", + "BEncoding", + "ErrInvalidMediaParameter", + "ExtensionsByType", + "FormatMediaType", + "ParseMediaType", + "QEncoding", + "TypeByExtension", + "WordDecoder", + "WordEncoder", + }, + "mime/multipart": []string{ + "ErrMessageTooLarge", + "File", + "FileHeader", + "Form", + "NewReader", + "NewWriter", + "Part", + "Reader", + "Writer", + }, + "mime/quotedprintable": []string{ + "NewReader", + "NewWriter", + "Reader", + "Writer", + }, + "net": []string{ + "Addr", + "AddrError", + "Buffers", + "CIDRMask", + "Conn", + "DNSConfigError", + "DNSError", + "DefaultResolver", + "Dial", + "DialIP", + "DialTCP", + "DialTimeout", + "DialUDP", + "DialUnix", + "Dialer", + "ErrWriteToConnected", + "Error", + "FileConn", + "FileListener", + "FilePacketConn", + "FlagBroadcast", + "FlagLoopback", + "FlagMulticast", + "FlagPointToPoint", + "FlagUp", + "Flags", + "HardwareAddr", + "IP", + "IPAddr", + "IPConn", + "IPMask", + "IPNet", + "IPv4", + "IPv4Mask", + "IPv4allrouter", + "IPv4allsys", + "IPv4bcast", + "IPv4len", + "IPv4zero", + "IPv6interfacelocalallnodes", + "IPv6len", + "IPv6linklocalallnodes", + "IPv6linklocalallrouters", + "IPv6loopback", + "IPv6unspecified", + "IPv6zero", + "Interface", + "InterfaceAddrs", + "InterfaceByIndex", + "InterfaceByName", + "Interfaces", + "InvalidAddrError", + "JoinHostPort", + "Listen", + "ListenConfig", + "ListenIP", + "ListenMulticastUDP", + "ListenPacket", + "ListenTCP", + "ListenUDP", + "ListenUnix", + "ListenUnixgram", + "Listener", + "LookupAddr", + "LookupCNAME", + "LookupHost", + "LookupIP", + "LookupMX", + "LookupNS", + "LookupPort", + "LookupSRV", + "LookupTXT", + "MX", + "NS", + "OpError", + "PacketConn", + "ParseCIDR", + "ParseError", + "ParseIP", + "ParseMAC", + "Pipe", + "ResolveIPAddr", + "ResolveTCPAddr", + "ResolveUDPAddr", + "ResolveUnixAddr", + "Resolver", + "SRV", + "SplitHostPort", + "TCPAddr", + "TCPConn", + "TCPListener", + "UDPAddr", + "UDPConn", + "UnixAddr", + "UnixConn", + "UnixListener", + "UnknownNetworkError", + }, + "net/http": []string{ + "CanonicalHeaderKey", + "Client", + "CloseNotifier", + "ConnState", + "Cookie", + "CookieJar", + "DefaultClient", + "DefaultMaxHeaderBytes", + "DefaultMaxIdleConnsPerHost", + "DefaultServeMux", + "DefaultTransport", + "DetectContentType", + "Dir", + "ErrAbortHandler", + "ErrBodyNotAllowed", + "ErrBodyReadAfterClose", + "ErrContentLength", + "ErrHandlerTimeout", + "ErrHeaderTooLong", + "ErrHijacked", + "ErrLineTooLong", + "ErrMissingBoundary", + "ErrMissingContentLength", + "ErrMissingFile", + "ErrNoCookie", + "ErrNoLocation", + "ErrNotMultipart", + "ErrNotSupported", + "ErrServerClosed", + "ErrShortBody", + "ErrSkipAltProtocol", + "ErrUnexpectedTrailer", + "ErrUseLastResponse", + "ErrWriteAfterFlush", + "Error", + "File", + "FileServer", + "FileSystem", + "Flusher", + "Get", + "Handle", + "HandleFunc", + "Handler", + "HandlerFunc", + "Head", + "Header", + "Hijacker", + "ListenAndServe", + "ListenAndServeTLS", + "LocalAddrContextKey", + "MaxBytesReader", + "MethodConnect", + "MethodDelete", + "MethodGet", + "MethodHead", + "MethodOptions", + "MethodPatch", + "MethodPost", + "MethodPut", + "MethodTrace", + "NewFileTransport", + "NewRequest", + "NewRequestWithContext", + "NewServeMux", + "NoBody", + "NotFound", + "NotFoundHandler", + "ParseHTTPVersion", + "ParseTime", + "Post", + "PostForm", + "ProtocolError", + "ProxyFromEnvironment", + "ProxyURL", + "PushOptions", + "Pusher", + "ReadRequest", + "ReadResponse", + "Redirect", + "RedirectHandler", + "Request", + "Response", + "ResponseWriter", + "RoundTripper", + "SameSite", + "SameSiteDefaultMode", + "SameSiteLaxMode", + "SameSiteNoneMode", + "SameSiteStrictMode", + "Serve", + "ServeContent", + "ServeFile", + "ServeMux", + "ServeTLS", + "Server", + "ServerContextKey", + "SetCookie", + "StateActive", + "StateClosed", + "StateHijacked", + "StateIdle", + "StateNew", + "StatusAccepted", + "StatusAlreadyReported", + "StatusBadGateway", + "StatusBadRequest", + "StatusConflict", + "StatusContinue", + "StatusCreated", + "StatusEarlyHints", + "StatusExpectationFailed", + "StatusFailedDependency", + "StatusForbidden", + "StatusFound", + "StatusGatewayTimeout", + "StatusGone", + "StatusHTTPVersionNotSupported", + "StatusIMUsed", + "StatusInsufficientStorage", + "StatusInternalServerError", + "StatusLengthRequired", + "StatusLocked", + "StatusLoopDetected", + "StatusMethodNotAllowed", + "StatusMisdirectedRequest", + "StatusMovedPermanently", + "StatusMultiStatus", + "StatusMultipleChoices", + "StatusNetworkAuthenticationRequired", + "StatusNoContent", + "StatusNonAuthoritativeInfo", + "StatusNotAcceptable", + "StatusNotExtended", + "StatusNotFound", + "StatusNotImplemented", + "StatusNotModified", + "StatusOK", + "StatusPartialContent", + "StatusPaymentRequired", + "StatusPermanentRedirect", + "StatusPreconditionFailed", + "StatusPreconditionRequired", + "StatusProcessing", + "StatusProxyAuthRequired", + "StatusRequestEntityTooLarge", + "StatusRequestHeaderFieldsTooLarge", + "StatusRequestTimeout", + "StatusRequestURITooLong", + "StatusRequestedRangeNotSatisfiable", + "StatusResetContent", + "StatusSeeOther", + "StatusServiceUnavailable", + "StatusSwitchingProtocols", + "StatusTeapot", + "StatusTemporaryRedirect", + "StatusText", + "StatusTooEarly", + "StatusTooManyRequests", + "StatusUnauthorized", + "StatusUnavailableForLegalReasons", + "StatusUnprocessableEntity", + "StatusUnsupportedMediaType", + "StatusUpgradeRequired", + "StatusUseProxy", + "StatusVariantAlsoNegotiates", + "StripPrefix", + "TimeFormat", + "TimeoutHandler", + "TrailerPrefix", + "Transport", + }, + "net/http/cgi": []string{ + "Handler", + "Request", + "RequestFromMap", + "Serve", + }, + "net/http/cookiejar": []string{ + "Jar", + "New", + "Options", + "PublicSuffixList", + }, + "net/http/fcgi": []string{ + "ErrConnClosed", + "ErrRequestAborted", + "ProcessEnv", + "Serve", + }, + "net/http/httptest": []string{ + "DefaultRemoteAddr", + "NewRecorder", + "NewRequest", + "NewServer", + "NewTLSServer", + "NewUnstartedServer", + "ResponseRecorder", + "Server", + }, + "net/http/httptrace": []string{ + "ClientTrace", + "ContextClientTrace", + "DNSDoneInfo", + "DNSStartInfo", + "GotConnInfo", + "WithClientTrace", + "WroteRequestInfo", + }, + "net/http/httputil": []string{ + "BufferPool", + "ClientConn", + "DumpRequest", + "DumpRequestOut", + "DumpResponse", + "ErrClosed", + "ErrLineTooLong", + "ErrPersistEOF", + "ErrPipeline", + "NewChunkedReader", + "NewChunkedWriter", + "NewClientConn", + "NewProxyClientConn", + "NewServerConn", + "NewSingleHostReverseProxy", + "ReverseProxy", + "ServerConn", + }, + "net/http/pprof": []string{ + "Cmdline", + "Handler", + "Index", + "Profile", + "Symbol", + "Trace", + }, + "net/mail": []string{ + "Address", + "AddressParser", + "ErrHeaderNotPresent", + "Header", + "Message", + "ParseAddress", + "ParseAddressList", + "ParseDate", + "ReadMessage", + }, + "net/rpc": []string{ + "Accept", + "Call", + "Client", + "ClientCodec", + "DefaultDebugPath", + "DefaultRPCPath", + "DefaultServer", + "Dial", + "DialHTTP", + "DialHTTPPath", + "ErrShutdown", + "HandleHTTP", + "NewClient", + "NewClientWithCodec", + "NewServer", + "Register", + "RegisterName", + "Request", + "Response", + "ServeCodec", + "ServeConn", + "ServeRequest", + "Server", + "ServerCodec", + "ServerError", + }, + "net/rpc/jsonrpc": []string{ + "Dial", + "NewClient", + "NewClientCodec", + "NewServerCodec", + "ServeConn", + }, + "net/smtp": []string{ + "Auth", + "CRAMMD5Auth", + "Client", + "Dial", + "NewClient", + "PlainAuth", + "SendMail", + "ServerInfo", + }, + "net/textproto": []string{ + "CanonicalMIMEHeaderKey", + "Conn", + "Dial", + "Error", + "MIMEHeader", + "NewConn", + "NewReader", + "NewWriter", + "Pipeline", + "ProtocolError", + "Reader", + "TrimBytes", + "TrimString", + "Writer", + }, + "net/url": []string{ + "Error", + "EscapeError", + "InvalidHostError", + "Parse", + "ParseQuery", + "ParseRequestURI", + "PathEscape", + "PathUnescape", + "QueryEscape", + "QueryUnescape", + "URL", + "User", + "UserPassword", + "Userinfo", + "Values", + }, + "os": []string{ + "Args", + "Chdir", + "Chmod", + "Chown", + "Chtimes", + "Clearenv", + "Create", + "DevNull", + "Environ", + "ErrClosed", + "ErrDeadlineExceeded", + "ErrExist", + "ErrInvalid", + "ErrNoDeadline", + "ErrNotExist", + "ErrPermission", + "Executable", + "Exit", + "Expand", + "ExpandEnv", + "File", + "FileInfo", + "FileMode", + "FindProcess", + "Getegid", + "Getenv", + "Geteuid", + "Getgid", + "Getgroups", + "Getpagesize", + "Getpid", + "Getppid", + "Getuid", + "Getwd", + "Hostname", + "Interrupt", + "IsExist", + "IsNotExist", + "IsPathSeparator", + "IsPermission", + "IsTimeout", + "Kill", + "Lchown", + "Link", + "LinkError", + "LookupEnv", + "Lstat", + "Mkdir", + "MkdirAll", + "ModeAppend", + "ModeCharDevice", + "ModeDevice", + "ModeDir", + "ModeExclusive", + "ModeIrregular", + "ModeNamedPipe", + "ModePerm", + "ModeSetgid", + "ModeSetuid", + "ModeSocket", + "ModeSticky", + "ModeSymlink", + "ModeTemporary", + "ModeType", + "NewFile", + "NewSyscallError", + "O_APPEND", + "O_CREATE", + "O_EXCL", + "O_RDONLY", + "O_RDWR", + "O_SYNC", + "O_TRUNC", + "O_WRONLY", + "Open", + "OpenFile", + "PathError", + "PathListSeparator", + "PathSeparator", + "Pipe", + "ProcAttr", + "Process", + "ProcessState", + "Readlink", + "Remove", + "RemoveAll", + "Rename", + "SEEK_CUR", + "SEEK_END", + "SEEK_SET", + "SameFile", + "Setenv", + "Signal", + "StartProcess", + "Stat", + "Stderr", + "Stdin", + "Stdout", + "Symlink", + "SyscallError", + "TempDir", + "Truncate", + "Unsetenv", + "UserCacheDir", + "UserConfigDir", + "UserHomeDir", + }, + "os/exec": []string{ + "Cmd", + "Command", + "CommandContext", + "ErrNotFound", + "Error", + "ExitError", + "LookPath", + }, + "os/signal": []string{ + "Ignore", + "Ignored", + "Notify", + "Reset", + "Stop", + }, + "os/user": []string{ + "Current", + "Group", + "Lookup", + "LookupGroup", + "LookupGroupId", + "LookupId", + "UnknownGroupError", + "UnknownGroupIdError", + "UnknownUserError", + "UnknownUserIdError", + "User", + }, + "path": []string{ + "Base", + "Clean", + "Dir", + "ErrBadPattern", + "Ext", + "IsAbs", + "Join", + "Match", + "Split", + }, + "path/filepath": []string{ + "Abs", + "Base", + "Clean", + "Dir", + "ErrBadPattern", + "EvalSymlinks", + "Ext", + "FromSlash", + "Glob", + "HasPrefix", + "IsAbs", + "Join", + "ListSeparator", + "Match", + "Rel", + "Separator", + "SkipDir", + "Split", + "SplitList", + "ToSlash", + "VolumeName", + "Walk", + "WalkFunc", + }, + "plugin": []string{ + "Open", + "Plugin", + "Symbol", + }, + "reflect": []string{ + "Append", + "AppendSlice", + "Array", + "ArrayOf", + "Bool", + "BothDir", + "Chan", + "ChanDir", + "ChanOf", + "Complex128", + "Complex64", + "Copy", + "DeepEqual", + "Float32", + "Float64", + "Func", + "FuncOf", + "Indirect", + "Int", + "Int16", + "Int32", + "Int64", + "Int8", + "Interface", + "Invalid", + "Kind", + "MakeChan", + "MakeFunc", + "MakeMap", + "MakeMapWithSize", + "MakeSlice", + "Map", + "MapIter", + "MapOf", + "Method", + "New", + "NewAt", + "Ptr", + "PtrTo", + "RecvDir", + "Select", + "SelectCase", + "SelectDefault", + "SelectDir", + "SelectRecv", + "SelectSend", + "SendDir", + "Slice", + "SliceHeader", + "SliceOf", + "String", + "StringHeader", + "Struct", + "StructField", + "StructOf", + "StructTag", + "Swapper", + "Type", + "TypeOf", + "Uint", + "Uint16", + "Uint32", + "Uint64", + "Uint8", + "Uintptr", + "UnsafePointer", + "Value", + "ValueError", + "ValueOf", + "Zero", + }, + "regexp": []string{ + "Compile", + "CompilePOSIX", + "Match", + "MatchReader", + "MatchString", + "MustCompile", + "MustCompilePOSIX", + "QuoteMeta", + "Regexp", + }, + "regexp/syntax": []string{ + "ClassNL", + "Compile", + "DotNL", + "EmptyBeginLine", + "EmptyBeginText", + "EmptyEndLine", + "EmptyEndText", + "EmptyNoWordBoundary", + "EmptyOp", + "EmptyOpContext", + "EmptyWordBoundary", + "ErrInternalError", + "ErrInvalidCharClass", + "ErrInvalidCharRange", + "ErrInvalidEscape", + "ErrInvalidNamedCapture", + "ErrInvalidPerlOp", + "ErrInvalidRepeatOp", + "ErrInvalidRepeatSize", + "ErrInvalidUTF8", + "ErrMissingBracket", + "ErrMissingParen", + "ErrMissingRepeatArgument", + "ErrTrailingBackslash", + "ErrUnexpectedParen", + "Error", + "ErrorCode", + "Flags", + "FoldCase", + "Inst", + "InstAlt", + "InstAltMatch", + "InstCapture", + "InstEmptyWidth", + "InstFail", + "InstMatch", + "InstNop", + "InstOp", + "InstRune", + "InstRune1", + "InstRuneAny", + "InstRuneAnyNotNL", + "IsWordChar", + "Literal", + "MatchNL", + "NonGreedy", + "OneLine", + "Op", + "OpAlternate", + "OpAnyChar", + "OpAnyCharNotNL", + "OpBeginLine", + "OpBeginText", + "OpCapture", + "OpCharClass", + "OpConcat", + "OpEmptyMatch", + "OpEndLine", + "OpEndText", + "OpLiteral", + "OpNoMatch", + "OpNoWordBoundary", + "OpPlus", + "OpQuest", + "OpRepeat", + "OpStar", + "OpWordBoundary", + "POSIX", + "Parse", + "Perl", + "PerlX", + "Prog", + "Regexp", + "Simple", + "UnicodeGroups", + "WasDollar", + }, + "runtime": []string{ + "BlockProfile", + "BlockProfileRecord", + "Breakpoint", + "CPUProfile", + "Caller", + "Callers", + "CallersFrames", + "Compiler", + "Error", + "Frame", + "Frames", + "Func", + "FuncForPC", + "GC", + "GOARCH", + "GOMAXPROCS", + "GOOS", + "GOROOT", + "Goexit", + "GoroutineProfile", + "Gosched", + "KeepAlive", + "LockOSThread", + "MemProfile", + "MemProfileRate", + "MemProfileRecord", + "MemStats", + "MutexProfile", + "NumCPU", + "NumCgoCall", + "NumGoroutine", + "ReadMemStats", + "ReadTrace", + "SetBlockProfileRate", + "SetCPUProfileRate", + "SetCgoTraceback", + "SetFinalizer", + "SetMutexProfileFraction", + "Stack", + "StackRecord", + "StartTrace", + "StopTrace", + "ThreadCreateProfile", + "TypeAssertionError", + "UnlockOSThread", + "Version", + }, + "runtime/debug": []string{ + "BuildInfo", + "FreeOSMemory", + "GCStats", + "Module", + "PrintStack", + "ReadBuildInfo", + "ReadGCStats", + "SetGCPercent", + "SetMaxStack", + "SetMaxThreads", + "SetPanicOnFault", + "SetTraceback", + "Stack", + "WriteHeapDump", + }, + "runtime/pprof": []string{ + "Do", + "ForLabels", + "Label", + "LabelSet", + "Labels", + "Lookup", + "NewProfile", + "Profile", + "Profiles", + "SetGoroutineLabels", + "StartCPUProfile", + "StopCPUProfile", + "WithLabels", + "WriteHeapProfile", + }, + "runtime/trace": []string{ + "IsEnabled", + "Log", + "Logf", + "NewTask", + "Region", + "Start", + "StartRegion", + "Stop", + "Task", + "WithRegion", + }, + "sort": []string{ + "Float64Slice", + "Float64s", + "Float64sAreSorted", + "IntSlice", + "Interface", + "Ints", + "IntsAreSorted", + "IsSorted", + "Reverse", + "Search", + "SearchFloat64s", + "SearchInts", + "SearchStrings", + "Slice", + "SliceIsSorted", + "SliceStable", + "Sort", + "Stable", + "StringSlice", + "Strings", + "StringsAreSorted", + }, + "strconv": []string{ + "AppendBool", + "AppendFloat", + "AppendInt", + "AppendQuote", + "AppendQuoteRune", + "AppendQuoteRuneToASCII", + "AppendQuoteRuneToGraphic", + "AppendQuoteToASCII", + "AppendQuoteToGraphic", + "AppendUint", + "Atoi", + "CanBackquote", + "ErrRange", + "ErrSyntax", + "FormatBool", + "FormatComplex", + "FormatFloat", + "FormatInt", + "FormatUint", + "IntSize", + "IsGraphic", + "IsPrint", + "Itoa", + "NumError", + "ParseBool", + "ParseComplex", + "ParseFloat", + "ParseInt", + "ParseUint", + "Quote", + "QuoteRune", + "QuoteRuneToASCII", + "QuoteRuneToGraphic", + "QuoteToASCII", + "QuoteToGraphic", + "Unquote", + "UnquoteChar", + }, + "strings": []string{ + "Builder", + "Compare", + "Contains", + "ContainsAny", + "ContainsRune", + "Count", + "EqualFold", + "Fields", + "FieldsFunc", + "HasPrefix", + "HasSuffix", + "Index", + "IndexAny", + "IndexByte", + "IndexFunc", + "IndexRune", + "Join", + "LastIndex", + "LastIndexAny", + "LastIndexByte", + "LastIndexFunc", + "Map", + "NewReader", + "NewReplacer", + "Reader", + "Repeat", + "Replace", + "ReplaceAll", + "Replacer", + "Split", + "SplitAfter", + "SplitAfterN", + "SplitN", + "Title", + "ToLower", + "ToLowerSpecial", + "ToTitle", + "ToTitleSpecial", + "ToUpper", + "ToUpperSpecial", + "ToValidUTF8", + "Trim", + "TrimFunc", + "TrimLeft", + "TrimLeftFunc", + "TrimPrefix", + "TrimRight", + "TrimRightFunc", + "TrimSpace", + "TrimSuffix", + }, + "sync": []string{ + "Cond", + "Locker", + "Map", + "Mutex", + "NewCond", + "Once", + "Pool", + "RWMutex", + "WaitGroup", + }, + "sync/atomic": []string{ + "AddInt32", + "AddInt64", + "AddUint32", + "AddUint64", + "AddUintptr", + "CompareAndSwapInt32", + "CompareAndSwapInt64", + "CompareAndSwapPointer", + "CompareAndSwapUint32", + "CompareAndSwapUint64", + "CompareAndSwapUintptr", + "LoadInt32", + "LoadInt64", + "LoadPointer", + "LoadUint32", + "LoadUint64", + "LoadUintptr", + "StoreInt32", + "StoreInt64", + "StorePointer", + "StoreUint32", + "StoreUint64", + "StoreUintptr", + "SwapInt32", + "SwapInt64", + "SwapPointer", + "SwapUint32", + "SwapUint64", + "SwapUintptr", + "Value", + }, + "syscall": []string{ + "AF_ALG", + "AF_APPLETALK", + "AF_ARP", + "AF_ASH", + "AF_ATM", + "AF_ATMPVC", + "AF_ATMSVC", + "AF_AX25", + "AF_BLUETOOTH", + "AF_BRIDGE", + "AF_CAIF", + "AF_CAN", + "AF_CCITT", + "AF_CHAOS", + "AF_CNT", + "AF_COIP", + "AF_DATAKIT", + "AF_DECnet", + "AF_DLI", + "AF_E164", + "AF_ECMA", + "AF_ECONET", + "AF_ENCAP", + "AF_FILE", + "AF_HYLINK", + "AF_IEEE80211", + "AF_IEEE802154", + "AF_IMPLINK", + "AF_INET", + "AF_INET6", + "AF_INET6_SDP", + "AF_INET_SDP", + "AF_IPX", + "AF_IRDA", + "AF_ISDN", + "AF_ISO", + "AF_IUCV", + "AF_KEY", + "AF_LAT", + "AF_LINK", + "AF_LLC", + "AF_LOCAL", + "AF_MAX", + "AF_MPLS", + "AF_NATM", + "AF_NDRV", + "AF_NETBEUI", + "AF_NETBIOS", + "AF_NETGRAPH", + "AF_NETLINK", + "AF_NETROM", + "AF_NS", + "AF_OROUTE", + "AF_OSI", + "AF_PACKET", + "AF_PHONET", + "AF_PPP", + "AF_PPPOX", + "AF_PUP", + "AF_RDS", + "AF_RESERVED_36", + "AF_ROSE", + "AF_ROUTE", + "AF_RXRPC", + "AF_SCLUSTER", + "AF_SECURITY", + "AF_SIP", + "AF_SLOW", + "AF_SNA", + "AF_SYSTEM", + "AF_TIPC", + "AF_UNIX", + "AF_UNSPEC", + "AF_VENDOR00", + "AF_VENDOR01", + "AF_VENDOR02", + "AF_VENDOR03", + "AF_VENDOR04", + "AF_VENDOR05", + "AF_VENDOR06", + "AF_VENDOR07", + "AF_VENDOR08", + "AF_VENDOR09", + "AF_VENDOR10", + "AF_VENDOR11", + "AF_VENDOR12", + "AF_VENDOR13", + "AF_VENDOR14", + "AF_VENDOR15", + "AF_VENDOR16", + "AF_VENDOR17", + "AF_VENDOR18", + "AF_VENDOR19", + "AF_VENDOR20", + "AF_VENDOR21", + "AF_VENDOR22", + "AF_VENDOR23", + "AF_VENDOR24", + "AF_VENDOR25", + "AF_VENDOR26", + "AF_VENDOR27", + "AF_VENDOR28", + "AF_VENDOR29", + "AF_VENDOR30", + "AF_VENDOR31", + "AF_VENDOR32", + "AF_VENDOR33", + "AF_VENDOR34", + "AF_VENDOR35", + "AF_VENDOR36", + "AF_VENDOR37", + "AF_VENDOR38", + "AF_VENDOR39", + "AF_VENDOR40", + "AF_VENDOR41", + "AF_VENDOR42", + "AF_VENDOR43", + "AF_VENDOR44", + "AF_VENDOR45", + "AF_VENDOR46", + "AF_VENDOR47", + "AF_WANPIPE", + "AF_X25", + "AI_CANONNAME", + "AI_NUMERICHOST", + "AI_PASSIVE", + "APPLICATION_ERROR", + "ARPHRD_ADAPT", + "ARPHRD_APPLETLK", + "ARPHRD_ARCNET", + "ARPHRD_ASH", + "ARPHRD_ATM", + "ARPHRD_AX25", + "ARPHRD_BIF", + "ARPHRD_CHAOS", + "ARPHRD_CISCO", + "ARPHRD_CSLIP", + "ARPHRD_CSLIP6", + "ARPHRD_DDCMP", + "ARPHRD_DLCI", + "ARPHRD_ECONET", + "ARPHRD_EETHER", + "ARPHRD_ETHER", + "ARPHRD_EUI64", + "ARPHRD_FCAL", + "ARPHRD_FCFABRIC", + "ARPHRD_FCPL", + "ARPHRD_FCPP", + "ARPHRD_FDDI", + "ARPHRD_FRAD", + "ARPHRD_FRELAY", + "ARPHRD_HDLC", + "ARPHRD_HIPPI", + "ARPHRD_HWX25", + "ARPHRD_IEEE1394", + "ARPHRD_IEEE802", + "ARPHRD_IEEE80211", + "ARPHRD_IEEE80211_PRISM", + "ARPHRD_IEEE80211_RADIOTAP", + "ARPHRD_IEEE802154", + "ARPHRD_IEEE802154_PHY", + "ARPHRD_IEEE802_TR", + "ARPHRD_INFINIBAND", + "ARPHRD_IPDDP", + "ARPHRD_IPGRE", + "ARPHRD_IRDA", + "ARPHRD_LAPB", + "ARPHRD_LOCALTLK", + "ARPHRD_LOOPBACK", + "ARPHRD_METRICOM", + "ARPHRD_NETROM", + "ARPHRD_NONE", + "ARPHRD_PIMREG", + "ARPHRD_PPP", + "ARPHRD_PRONET", + "ARPHRD_RAWHDLC", + "ARPHRD_ROSE", + "ARPHRD_RSRVD", + "ARPHRD_SIT", + "ARPHRD_SKIP", + "ARPHRD_SLIP", + "ARPHRD_SLIP6", + "ARPHRD_STRIP", + "ARPHRD_TUNNEL", + "ARPHRD_TUNNEL6", + "ARPHRD_VOID", + "ARPHRD_X25", + "AUTHTYPE_CLIENT", + "AUTHTYPE_SERVER", + "Accept", + "Accept4", + "AcceptEx", + "Access", + "Acct", + "AddrinfoW", + "Adjtime", + "Adjtimex", + "AttachLsf", + "B0", + "B1000000", + "B110", + "B115200", + "B1152000", + "B1200", + "B134", + "B14400", + "B150", + "B1500000", + "B1800", + "B19200", + "B200", + "B2000000", + "B230400", + "B2400", + "B2500000", + "B28800", + "B300", + "B3000000", + "B3500000", + "B38400", + "B4000000", + "B460800", + "B4800", + "B50", + "B500000", + "B57600", + "B576000", + "B600", + "B7200", + "B75", + "B76800", + "B921600", + "B9600", + "BASE_PROTOCOL", + "BIOCFEEDBACK", + "BIOCFLUSH", + "BIOCGBLEN", + "BIOCGDIRECTION", + "BIOCGDIRFILT", + "BIOCGDLT", + "BIOCGDLTLIST", + "BIOCGETBUFMODE", + "BIOCGETIF", + "BIOCGETZMAX", + "BIOCGFEEDBACK", + "BIOCGFILDROP", + "BIOCGHDRCMPLT", + "BIOCGRSIG", + "BIOCGRTIMEOUT", + "BIOCGSEESENT", + "BIOCGSTATS", + "BIOCGSTATSOLD", + "BIOCGTSTAMP", + "BIOCIMMEDIATE", + "BIOCLOCK", + "BIOCPROMISC", + "BIOCROTZBUF", + "BIOCSBLEN", + "BIOCSDIRECTION", + "BIOCSDIRFILT", + "BIOCSDLT", + "BIOCSETBUFMODE", + "BIOCSETF", + "BIOCSETFNR", + "BIOCSETIF", + "BIOCSETWF", + "BIOCSETZBUF", + "BIOCSFEEDBACK", + "BIOCSFILDROP", + "BIOCSHDRCMPLT", + "BIOCSRSIG", + "BIOCSRTIMEOUT", + "BIOCSSEESENT", + "BIOCSTCPF", + "BIOCSTSTAMP", + "BIOCSUDPF", + "BIOCVERSION", + "BPF_A", + "BPF_ABS", + "BPF_ADD", + "BPF_ALIGNMENT", + "BPF_ALIGNMENT32", + "BPF_ALU", + "BPF_AND", + "BPF_B", + "BPF_BUFMODE_BUFFER", + "BPF_BUFMODE_ZBUF", + "BPF_DFLTBUFSIZE", + "BPF_DIRECTION_IN", + "BPF_DIRECTION_OUT", + "BPF_DIV", + "BPF_H", + "BPF_IMM", + "BPF_IND", + "BPF_JA", + "BPF_JEQ", + "BPF_JGE", + "BPF_JGT", + "BPF_JMP", + "BPF_JSET", + "BPF_K", + "BPF_LD", + "BPF_LDX", + "BPF_LEN", + "BPF_LSH", + "BPF_MAJOR_VERSION", + "BPF_MAXBUFSIZE", + "BPF_MAXINSNS", + "BPF_MEM", + "BPF_MEMWORDS", + "BPF_MINBUFSIZE", + "BPF_MINOR_VERSION", + "BPF_MISC", + "BPF_MSH", + "BPF_MUL", + "BPF_NEG", + "BPF_OR", + "BPF_RELEASE", + "BPF_RET", + "BPF_RSH", + "BPF_ST", + "BPF_STX", + "BPF_SUB", + "BPF_TAX", + "BPF_TXA", + "BPF_T_BINTIME", + "BPF_T_BINTIME_FAST", + "BPF_T_BINTIME_MONOTONIC", + "BPF_T_BINTIME_MONOTONIC_FAST", + "BPF_T_FAST", + "BPF_T_FLAG_MASK", + "BPF_T_FORMAT_MASK", + "BPF_T_MICROTIME", + "BPF_T_MICROTIME_FAST", + "BPF_T_MICROTIME_MONOTONIC", + "BPF_T_MICROTIME_MONOTONIC_FAST", + "BPF_T_MONOTONIC", + "BPF_T_MONOTONIC_FAST", + "BPF_T_NANOTIME", + "BPF_T_NANOTIME_FAST", + "BPF_T_NANOTIME_MONOTONIC", + "BPF_T_NANOTIME_MONOTONIC_FAST", + "BPF_T_NONE", + "BPF_T_NORMAL", + "BPF_W", + "BPF_X", + "BRKINT", + "Bind", + "BindToDevice", + "BpfBuflen", + "BpfDatalink", + "BpfHdr", + "BpfHeadercmpl", + "BpfInsn", + "BpfInterface", + "BpfJump", + "BpfProgram", + "BpfStat", + "BpfStats", + "BpfStmt", + "BpfTimeout", + "BpfTimeval", + "BpfVersion", + "BpfZbuf", + "BpfZbufHeader", + "ByHandleFileInformation", + "BytePtrFromString", + "ByteSliceFromString", + "CCR0_FLUSH", + "CERT_CHAIN_POLICY_AUTHENTICODE", + "CERT_CHAIN_POLICY_AUTHENTICODE_TS", + "CERT_CHAIN_POLICY_BASE", + "CERT_CHAIN_POLICY_BASIC_CONSTRAINTS", + "CERT_CHAIN_POLICY_EV", + "CERT_CHAIN_POLICY_MICROSOFT_ROOT", + "CERT_CHAIN_POLICY_NT_AUTH", + "CERT_CHAIN_POLICY_SSL", + "CERT_E_CN_NO_MATCH", + "CERT_E_EXPIRED", + "CERT_E_PURPOSE", + "CERT_E_ROLE", + "CERT_E_UNTRUSTEDROOT", + "CERT_STORE_ADD_ALWAYS", + "CERT_STORE_DEFER_CLOSE_UNTIL_LAST_FREE_FLAG", + "CERT_STORE_PROV_MEMORY", + "CERT_TRUST_HAS_EXCLUDED_NAME_CONSTRAINT", + "CERT_TRUST_HAS_NOT_DEFINED_NAME_CONSTRAINT", + "CERT_TRUST_HAS_NOT_PERMITTED_NAME_CONSTRAINT", + "CERT_TRUST_HAS_NOT_SUPPORTED_CRITICAL_EXT", + "CERT_TRUST_HAS_NOT_SUPPORTED_NAME_CONSTRAINT", + "CERT_TRUST_INVALID_BASIC_CONSTRAINTS", + "CERT_TRUST_INVALID_EXTENSION", + "CERT_TRUST_INVALID_NAME_CONSTRAINTS", + "CERT_TRUST_INVALID_POLICY_CONSTRAINTS", + "CERT_TRUST_IS_CYCLIC", + "CERT_TRUST_IS_EXPLICIT_DISTRUST", + "CERT_TRUST_IS_NOT_SIGNATURE_VALID", + "CERT_TRUST_IS_NOT_TIME_VALID", + "CERT_TRUST_IS_NOT_VALID_FOR_USAGE", + "CERT_TRUST_IS_OFFLINE_REVOCATION", + "CERT_TRUST_IS_REVOKED", + "CERT_TRUST_IS_UNTRUSTED_ROOT", + "CERT_TRUST_NO_ERROR", + "CERT_TRUST_NO_ISSUANCE_CHAIN_POLICY", + "CERT_TRUST_REVOCATION_STATUS_UNKNOWN", + "CFLUSH", + "CLOCAL", + "CLONE_CHILD_CLEARTID", + "CLONE_CHILD_SETTID", + "CLONE_CSIGNAL", + "CLONE_DETACHED", + "CLONE_FILES", + "CLONE_FS", + "CLONE_IO", + "CLONE_NEWIPC", + "CLONE_NEWNET", + "CLONE_NEWNS", + "CLONE_NEWPID", + "CLONE_NEWUSER", + "CLONE_NEWUTS", + "CLONE_PARENT", + "CLONE_PARENT_SETTID", + "CLONE_PID", + "CLONE_PTRACE", + "CLONE_SETTLS", + "CLONE_SIGHAND", + "CLONE_SYSVSEM", + "CLONE_THREAD", + "CLONE_UNTRACED", + "CLONE_VFORK", + "CLONE_VM", + "CPUID_CFLUSH", + "CREAD", + "CREATE_ALWAYS", + "CREATE_NEW", + "CREATE_NEW_PROCESS_GROUP", + "CREATE_UNICODE_ENVIRONMENT", + "CRYPT_DEFAULT_CONTAINER_OPTIONAL", + "CRYPT_DELETEKEYSET", + "CRYPT_MACHINE_KEYSET", + "CRYPT_NEWKEYSET", + "CRYPT_SILENT", + "CRYPT_VERIFYCONTEXT", + "CS5", + "CS6", + "CS7", + "CS8", + "CSIZE", + "CSTART", + "CSTATUS", + "CSTOP", + "CSTOPB", + "CSUSP", + "CTL_MAXNAME", + "CTL_NET", + "CTL_QUERY", + "CTRL_BREAK_EVENT", + "CTRL_CLOSE_EVENT", + "CTRL_C_EVENT", + "CTRL_LOGOFF_EVENT", + "CTRL_SHUTDOWN_EVENT", + "CancelIo", + "CancelIoEx", + "CertAddCertificateContextToStore", + "CertChainContext", + "CertChainElement", + "CertChainPara", + "CertChainPolicyPara", + "CertChainPolicyStatus", + "CertCloseStore", + "CertContext", + "CertCreateCertificateContext", + "CertEnhKeyUsage", + "CertEnumCertificatesInStore", + "CertFreeCertificateChain", + "CertFreeCertificateContext", + "CertGetCertificateChain", + "CertInfo", + "CertOpenStore", + "CertOpenSystemStore", + "CertRevocationCrlInfo", + "CertRevocationInfo", + "CertSimpleChain", + "CertTrustListInfo", + "CertTrustStatus", + "CertUsageMatch", + "CertVerifyCertificateChainPolicy", + "Chdir", + "CheckBpfVersion", + "Chflags", + "Chmod", + "Chown", + "Chroot", + "Clearenv", + "Close", + "CloseHandle", + "CloseOnExec", + "Closesocket", + "CmsgLen", + "CmsgSpace", + "Cmsghdr", + "CommandLineToArgv", + "ComputerName", + "Conn", + "Connect", + "ConnectEx", + "ConvertSidToStringSid", + "ConvertStringSidToSid", + "CopySid", + "Creat", + "CreateDirectory", + "CreateFile", + "CreateFileMapping", + "CreateHardLink", + "CreateIoCompletionPort", + "CreatePipe", + "CreateProcess", + "CreateProcessAsUser", + "CreateSymbolicLink", + "CreateToolhelp32Snapshot", + "Credential", + "CryptAcquireContext", + "CryptGenRandom", + "CryptReleaseContext", + "DIOCBSFLUSH", + "DIOCOSFPFLUSH", + "DLL", + "DLLError", + "DLT_A429", + "DLT_A653_ICM", + "DLT_AIRONET_HEADER", + "DLT_AOS", + "DLT_APPLE_IP_OVER_IEEE1394", + "DLT_ARCNET", + "DLT_ARCNET_LINUX", + "DLT_ATM_CLIP", + "DLT_ATM_RFC1483", + "DLT_AURORA", + "DLT_AX25", + "DLT_AX25_KISS", + "DLT_BACNET_MS_TP", + "DLT_BLUETOOTH_HCI_H4", + "DLT_BLUETOOTH_HCI_H4_WITH_PHDR", + "DLT_CAN20B", + "DLT_CAN_SOCKETCAN", + "DLT_CHAOS", + "DLT_CHDLC", + "DLT_CISCO_IOS", + "DLT_C_HDLC", + "DLT_C_HDLC_WITH_DIR", + "DLT_DBUS", + "DLT_DECT", + "DLT_DOCSIS", + "DLT_DVB_CI", + "DLT_ECONET", + "DLT_EN10MB", + "DLT_EN3MB", + "DLT_ENC", + "DLT_ERF", + "DLT_ERF_ETH", + "DLT_ERF_POS", + "DLT_FC_2", + "DLT_FC_2_WITH_FRAME_DELIMS", + "DLT_FDDI", + "DLT_FLEXRAY", + "DLT_FRELAY", + "DLT_FRELAY_WITH_DIR", + "DLT_GCOM_SERIAL", + "DLT_GCOM_T1E1", + "DLT_GPF_F", + "DLT_GPF_T", + "DLT_GPRS_LLC", + "DLT_GSMTAP_ABIS", + "DLT_GSMTAP_UM", + "DLT_HDLC", + "DLT_HHDLC", + "DLT_HIPPI", + "DLT_IBM_SN", + "DLT_IBM_SP", + "DLT_IEEE802", + "DLT_IEEE802_11", + "DLT_IEEE802_11_RADIO", + "DLT_IEEE802_11_RADIO_AVS", + "DLT_IEEE802_15_4", + "DLT_IEEE802_15_4_LINUX", + "DLT_IEEE802_15_4_NOFCS", + "DLT_IEEE802_15_4_NONASK_PHY", + "DLT_IEEE802_16_MAC_CPS", + "DLT_IEEE802_16_MAC_CPS_RADIO", + "DLT_IPFILTER", + "DLT_IPMB", + "DLT_IPMB_LINUX", + "DLT_IPNET", + "DLT_IPOIB", + "DLT_IPV4", + "DLT_IPV6", + "DLT_IP_OVER_FC", + "DLT_JUNIPER_ATM1", + "DLT_JUNIPER_ATM2", + "DLT_JUNIPER_ATM_CEMIC", + "DLT_JUNIPER_CHDLC", + "DLT_JUNIPER_ES", + "DLT_JUNIPER_ETHER", + "DLT_JUNIPER_FIBRECHANNEL", + "DLT_JUNIPER_FRELAY", + "DLT_JUNIPER_GGSN", + "DLT_JUNIPER_ISM", + "DLT_JUNIPER_MFR", + "DLT_JUNIPER_MLFR", + "DLT_JUNIPER_MLPPP", + "DLT_JUNIPER_MONITOR", + "DLT_JUNIPER_PIC_PEER", + "DLT_JUNIPER_PPP", + "DLT_JUNIPER_PPPOE", + "DLT_JUNIPER_PPPOE_ATM", + "DLT_JUNIPER_SERVICES", + "DLT_JUNIPER_SRX_E2E", + "DLT_JUNIPER_ST", + "DLT_JUNIPER_VP", + "DLT_JUNIPER_VS", + "DLT_LAPB_WITH_DIR", + "DLT_LAPD", + "DLT_LIN", + "DLT_LINUX_EVDEV", + "DLT_LINUX_IRDA", + "DLT_LINUX_LAPD", + "DLT_LINUX_PPP_WITHDIRECTION", + "DLT_LINUX_SLL", + "DLT_LOOP", + "DLT_LTALK", + "DLT_MATCHING_MAX", + "DLT_MATCHING_MIN", + "DLT_MFR", + "DLT_MOST", + "DLT_MPEG_2_TS", + "DLT_MPLS", + "DLT_MTP2", + "DLT_MTP2_WITH_PHDR", + "DLT_MTP3", + "DLT_MUX27010", + "DLT_NETANALYZER", + "DLT_NETANALYZER_TRANSPARENT", + "DLT_NFC_LLCP", + "DLT_NFLOG", + "DLT_NG40", + "DLT_NULL", + "DLT_PCI_EXP", + "DLT_PFLOG", + "DLT_PFSYNC", + "DLT_PPI", + "DLT_PPP", + "DLT_PPP_BSDOS", + "DLT_PPP_ETHER", + "DLT_PPP_PPPD", + "DLT_PPP_SERIAL", + "DLT_PPP_WITH_DIR", + "DLT_PPP_WITH_DIRECTION", + "DLT_PRISM_HEADER", + "DLT_PRONET", + "DLT_RAIF1", + "DLT_RAW", + "DLT_RAWAF_MASK", + "DLT_RIO", + "DLT_SCCP", + "DLT_SITA", + "DLT_SLIP", + "DLT_SLIP_BSDOS", + "DLT_STANAG_5066_D_PDU", + "DLT_SUNATM", + "DLT_SYMANTEC_FIREWALL", + "DLT_TZSP", + "DLT_USB", + "DLT_USB_LINUX", + "DLT_USB_LINUX_MMAPPED", + "DLT_USER0", + "DLT_USER1", + "DLT_USER10", + "DLT_USER11", + "DLT_USER12", + "DLT_USER13", + "DLT_USER14", + "DLT_USER15", + "DLT_USER2", + "DLT_USER3", + "DLT_USER4", + "DLT_USER5", + "DLT_USER6", + "DLT_USER7", + "DLT_USER8", + "DLT_USER9", + "DLT_WIHART", + "DLT_X2E_SERIAL", + "DLT_X2E_XORAYA", + "DNSMXData", + "DNSPTRData", + "DNSRecord", + "DNSSRVData", + "DNSTXTData", + "DNS_INFO_NO_RECORDS", + "DNS_TYPE_A", + "DNS_TYPE_A6", + "DNS_TYPE_AAAA", + "DNS_TYPE_ADDRS", + "DNS_TYPE_AFSDB", + "DNS_TYPE_ALL", + "DNS_TYPE_ANY", + "DNS_TYPE_ATMA", + "DNS_TYPE_AXFR", + "DNS_TYPE_CERT", + "DNS_TYPE_CNAME", + "DNS_TYPE_DHCID", + "DNS_TYPE_DNAME", + "DNS_TYPE_DNSKEY", + "DNS_TYPE_DS", + "DNS_TYPE_EID", + "DNS_TYPE_GID", + "DNS_TYPE_GPOS", + "DNS_TYPE_HINFO", + "DNS_TYPE_ISDN", + "DNS_TYPE_IXFR", + "DNS_TYPE_KEY", + "DNS_TYPE_KX", + "DNS_TYPE_LOC", + "DNS_TYPE_MAILA", + "DNS_TYPE_MAILB", + "DNS_TYPE_MB", + "DNS_TYPE_MD", + "DNS_TYPE_MF", + "DNS_TYPE_MG", + "DNS_TYPE_MINFO", + "DNS_TYPE_MR", + "DNS_TYPE_MX", + "DNS_TYPE_NAPTR", + "DNS_TYPE_NBSTAT", + "DNS_TYPE_NIMLOC", + "DNS_TYPE_NS", + "DNS_TYPE_NSAP", + "DNS_TYPE_NSAPPTR", + "DNS_TYPE_NSEC", + "DNS_TYPE_NULL", + "DNS_TYPE_NXT", + "DNS_TYPE_OPT", + "DNS_TYPE_PTR", + "DNS_TYPE_PX", + "DNS_TYPE_RP", + "DNS_TYPE_RRSIG", + "DNS_TYPE_RT", + "DNS_TYPE_SIG", + "DNS_TYPE_SINK", + "DNS_TYPE_SOA", + "DNS_TYPE_SRV", + "DNS_TYPE_TEXT", + "DNS_TYPE_TKEY", + "DNS_TYPE_TSIG", + "DNS_TYPE_UID", + "DNS_TYPE_UINFO", + "DNS_TYPE_UNSPEC", + "DNS_TYPE_WINS", + "DNS_TYPE_WINSR", + "DNS_TYPE_WKS", + "DNS_TYPE_X25", + "DT_BLK", + "DT_CHR", + "DT_DIR", + "DT_FIFO", + "DT_LNK", + "DT_REG", + "DT_SOCK", + "DT_UNKNOWN", + "DT_WHT", + "DUPLICATE_CLOSE_SOURCE", + "DUPLICATE_SAME_ACCESS", + "DeleteFile", + "DetachLsf", + "DeviceIoControl", + "Dirent", + "DnsNameCompare", + "DnsQuery", + "DnsRecordListFree", + "DnsSectionAdditional", + "DnsSectionAnswer", + "DnsSectionAuthority", + "DnsSectionQuestion", + "Dup", + "Dup2", + "Dup3", + "DuplicateHandle", + "E2BIG", + "EACCES", + "EADDRINUSE", + "EADDRNOTAVAIL", + "EADV", + "EAFNOSUPPORT", + "EAGAIN", + "EALREADY", + "EAUTH", + "EBADARCH", + "EBADE", + "EBADEXEC", + "EBADF", + "EBADFD", + "EBADMACHO", + "EBADMSG", + "EBADR", + "EBADRPC", + "EBADRQC", + "EBADSLT", + "EBFONT", + "EBUSY", + "ECANCELED", + "ECAPMODE", + "ECHILD", + "ECHO", + "ECHOCTL", + "ECHOE", + "ECHOK", + "ECHOKE", + "ECHONL", + "ECHOPRT", + "ECHRNG", + "ECOMM", + "ECONNABORTED", + "ECONNREFUSED", + "ECONNRESET", + "EDEADLK", + "EDEADLOCK", + "EDESTADDRREQ", + "EDEVERR", + "EDOM", + "EDOOFUS", + "EDOTDOT", + "EDQUOT", + "EEXIST", + "EFAULT", + "EFBIG", + "EFER_LMA", + "EFER_LME", + "EFER_NXE", + "EFER_SCE", + "EFTYPE", + "EHOSTDOWN", + "EHOSTUNREACH", + "EHWPOISON", + "EIDRM", + "EILSEQ", + "EINPROGRESS", + "EINTR", + "EINVAL", + "EIO", + "EIPSEC", + "EISCONN", + "EISDIR", + "EISNAM", + "EKEYEXPIRED", + "EKEYREJECTED", + "EKEYREVOKED", + "EL2HLT", + "EL2NSYNC", + "EL3HLT", + "EL3RST", + "ELAST", + "ELF_NGREG", + "ELF_PRARGSZ", + "ELIBACC", + "ELIBBAD", + "ELIBEXEC", + "ELIBMAX", + "ELIBSCN", + "ELNRNG", + "ELOOP", + "EMEDIUMTYPE", + "EMFILE", + "EMLINK", + "EMSGSIZE", + "EMT_TAGOVF", + "EMULTIHOP", + "EMUL_ENABLED", + "EMUL_LINUX", + "EMUL_LINUX32", + "EMUL_MAXID", + "EMUL_NATIVE", + "ENAMETOOLONG", + "ENAVAIL", + "ENDRUNDISC", + "ENEEDAUTH", + "ENETDOWN", + "ENETRESET", + "ENETUNREACH", + "ENFILE", + "ENOANO", + "ENOATTR", + "ENOBUFS", + "ENOCSI", + "ENODATA", + "ENODEV", + "ENOENT", + "ENOEXEC", + "ENOKEY", + "ENOLCK", + "ENOLINK", + "ENOMEDIUM", + "ENOMEM", + "ENOMSG", + "ENONET", + "ENOPKG", + "ENOPOLICY", + "ENOPROTOOPT", + "ENOSPC", + "ENOSR", + "ENOSTR", + "ENOSYS", + "ENOTBLK", + "ENOTCAPABLE", + "ENOTCONN", + "ENOTDIR", + "ENOTEMPTY", + "ENOTNAM", + "ENOTRECOVERABLE", + "ENOTSOCK", + "ENOTSUP", + "ENOTTY", + "ENOTUNIQ", + "ENXIO", + "EN_SW_CTL_INF", + "EN_SW_CTL_PREC", + "EN_SW_CTL_ROUND", + "EN_SW_DATACHAIN", + "EN_SW_DENORM", + "EN_SW_INVOP", + "EN_SW_OVERFLOW", + "EN_SW_PRECLOSS", + "EN_SW_UNDERFLOW", + "EN_SW_ZERODIV", + "EOPNOTSUPP", + "EOVERFLOW", + "EOWNERDEAD", + "EPERM", + "EPFNOSUPPORT", + "EPIPE", + "EPOLLERR", + "EPOLLET", + "EPOLLHUP", + "EPOLLIN", + "EPOLLMSG", + "EPOLLONESHOT", + "EPOLLOUT", + "EPOLLPRI", + "EPOLLRDBAND", + "EPOLLRDHUP", + "EPOLLRDNORM", + "EPOLLWRBAND", + "EPOLLWRNORM", + "EPOLL_CLOEXEC", + "EPOLL_CTL_ADD", + "EPOLL_CTL_DEL", + "EPOLL_CTL_MOD", + "EPOLL_NONBLOCK", + "EPROCLIM", + "EPROCUNAVAIL", + "EPROGMISMATCH", + "EPROGUNAVAIL", + "EPROTO", + "EPROTONOSUPPORT", + "EPROTOTYPE", + "EPWROFF", + "ERANGE", + "EREMCHG", + "EREMOTE", + "EREMOTEIO", + "ERESTART", + "ERFKILL", + "EROFS", + "ERPCMISMATCH", + "ERROR_ACCESS_DENIED", + "ERROR_ALREADY_EXISTS", + "ERROR_BROKEN_PIPE", + "ERROR_BUFFER_OVERFLOW", + "ERROR_DIR_NOT_EMPTY", + "ERROR_ENVVAR_NOT_FOUND", + "ERROR_FILE_EXISTS", + "ERROR_FILE_NOT_FOUND", + "ERROR_HANDLE_EOF", + "ERROR_INSUFFICIENT_BUFFER", + "ERROR_IO_PENDING", + "ERROR_MOD_NOT_FOUND", + "ERROR_MORE_DATA", + "ERROR_NETNAME_DELETED", + "ERROR_NOT_FOUND", + "ERROR_NO_MORE_FILES", + "ERROR_OPERATION_ABORTED", + "ERROR_PATH_NOT_FOUND", + "ERROR_PRIVILEGE_NOT_HELD", + "ERROR_PROC_NOT_FOUND", + "ESHLIBVERS", + "ESHUTDOWN", + "ESOCKTNOSUPPORT", + "ESPIPE", + "ESRCH", + "ESRMNT", + "ESTALE", + "ESTRPIPE", + "ETHERCAP_JUMBO_MTU", + "ETHERCAP_VLAN_HWTAGGING", + "ETHERCAP_VLAN_MTU", + "ETHERMIN", + "ETHERMTU", + "ETHERMTU_JUMBO", + "ETHERTYPE_8023", + "ETHERTYPE_AARP", + "ETHERTYPE_ACCTON", + "ETHERTYPE_AEONIC", + "ETHERTYPE_ALPHA", + "ETHERTYPE_AMBER", + "ETHERTYPE_AMOEBA", + "ETHERTYPE_AOE", + "ETHERTYPE_APOLLO", + "ETHERTYPE_APOLLODOMAIN", + "ETHERTYPE_APPLETALK", + "ETHERTYPE_APPLITEK", + "ETHERTYPE_ARGONAUT", + "ETHERTYPE_ARP", + "ETHERTYPE_AT", + "ETHERTYPE_ATALK", + "ETHERTYPE_ATOMIC", + "ETHERTYPE_ATT", + "ETHERTYPE_ATTSTANFORD", + "ETHERTYPE_AUTOPHON", + "ETHERTYPE_AXIS", + "ETHERTYPE_BCLOOP", + "ETHERTYPE_BOFL", + "ETHERTYPE_CABLETRON", + "ETHERTYPE_CHAOS", + "ETHERTYPE_COMDESIGN", + "ETHERTYPE_COMPUGRAPHIC", + "ETHERTYPE_COUNTERPOINT", + "ETHERTYPE_CRONUS", + "ETHERTYPE_CRONUSVLN", + "ETHERTYPE_DCA", + "ETHERTYPE_DDE", + "ETHERTYPE_DEBNI", + "ETHERTYPE_DECAM", + "ETHERTYPE_DECCUST", + "ETHERTYPE_DECDIAG", + "ETHERTYPE_DECDNS", + "ETHERTYPE_DECDTS", + "ETHERTYPE_DECEXPER", + "ETHERTYPE_DECLAST", + "ETHERTYPE_DECLTM", + "ETHERTYPE_DECMUMPS", + "ETHERTYPE_DECNETBIOS", + "ETHERTYPE_DELTACON", + "ETHERTYPE_DIDDLE", + "ETHERTYPE_DLOG1", + "ETHERTYPE_DLOG2", + "ETHERTYPE_DN", + "ETHERTYPE_DOGFIGHT", + "ETHERTYPE_DSMD", + "ETHERTYPE_ECMA", + "ETHERTYPE_ENCRYPT", + "ETHERTYPE_ES", + "ETHERTYPE_EXCELAN", + "ETHERTYPE_EXPERDATA", + "ETHERTYPE_FLIP", + "ETHERTYPE_FLOWCONTROL", + "ETHERTYPE_FRARP", + "ETHERTYPE_GENDYN", + "ETHERTYPE_HAYES", + "ETHERTYPE_HIPPI_FP", + "ETHERTYPE_HITACHI", + "ETHERTYPE_HP", + "ETHERTYPE_IEEEPUP", + "ETHERTYPE_IEEEPUPAT", + "ETHERTYPE_IMLBL", + "ETHERTYPE_IMLBLDIAG", + "ETHERTYPE_IP", + "ETHERTYPE_IPAS", + "ETHERTYPE_IPV6", + "ETHERTYPE_IPX", + "ETHERTYPE_IPXNEW", + "ETHERTYPE_KALPANA", + "ETHERTYPE_LANBRIDGE", + "ETHERTYPE_LANPROBE", + "ETHERTYPE_LAT", + "ETHERTYPE_LBACK", + "ETHERTYPE_LITTLE", + "ETHERTYPE_LLDP", + "ETHERTYPE_LOGICRAFT", + "ETHERTYPE_LOOPBACK", + "ETHERTYPE_MATRA", + "ETHERTYPE_MAX", + "ETHERTYPE_MERIT", + "ETHERTYPE_MICP", + "ETHERTYPE_MOPDL", + "ETHERTYPE_MOPRC", + "ETHERTYPE_MOTOROLA", + "ETHERTYPE_MPLS", + "ETHERTYPE_MPLS_MCAST", + "ETHERTYPE_MUMPS", + "ETHERTYPE_NBPCC", + "ETHERTYPE_NBPCLAIM", + "ETHERTYPE_NBPCLREQ", + "ETHERTYPE_NBPCLRSP", + "ETHERTYPE_NBPCREQ", + "ETHERTYPE_NBPCRSP", + "ETHERTYPE_NBPDG", + "ETHERTYPE_NBPDGB", + "ETHERTYPE_NBPDLTE", + "ETHERTYPE_NBPRAR", + "ETHERTYPE_NBPRAS", + "ETHERTYPE_NBPRST", + "ETHERTYPE_NBPSCD", + "ETHERTYPE_NBPVCD", + "ETHERTYPE_NBS", + "ETHERTYPE_NCD", + "ETHERTYPE_NESTAR", + "ETHERTYPE_NETBEUI", + "ETHERTYPE_NOVELL", + "ETHERTYPE_NS", + "ETHERTYPE_NSAT", + "ETHERTYPE_NSCOMPAT", + "ETHERTYPE_NTRAILER", + "ETHERTYPE_OS9", + "ETHERTYPE_OS9NET", + "ETHERTYPE_PACER", + "ETHERTYPE_PAE", + "ETHERTYPE_PCS", + "ETHERTYPE_PLANNING", + "ETHERTYPE_PPP", + "ETHERTYPE_PPPOE", + "ETHERTYPE_PPPOEDISC", + "ETHERTYPE_PRIMENTS", + "ETHERTYPE_PUP", + "ETHERTYPE_PUPAT", + "ETHERTYPE_QINQ", + "ETHERTYPE_RACAL", + "ETHERTYPE_RATIONAL", + "ETHERTYPE_RAWFR", + "ETHERTYPE_RCL", + "ETHERTYPE_RDP", + "ETHERTYPE_RETIX", + "ETHERTYPE_REVARP", + "ETHERTYPE_SCA", + "ETHERTYPE_SECTRA", + "ETHERTYPE_SECUREDATA", + "ETHERTYPE_SGITW", + "ETHERTYPE_SG_BOUNCE", + "ETHERTYPE_SG_DIAG", + "ETHERTYPE_SG_NETGAMES", + "ETHERTYPE_SG_RESV", + "ETHERTYPE_SIMNET", + "ETHERTYPE_SLOW", + "ETHERTYPE_SLOWPROTOCOLS", + "ETHERTYPE_SNA", + "ETHERTYPE_SNMP", + "ETHERTYPE_SONIX", + "ETHERTYPE_SPIDER", + "ETHERTYPE_SPRITE", + "ETHERTYPE_STP", + "ETHERTYPE_TALARIS", + "ETHERTYPE_TALARISMC", + "ETHERTYPE_TCPCOMP", + "ETHERTYPE_TCPSM", + "ETHERTYPE_TEC", + "ETHERTYPE_TIGAN", + "ETHERTYPE_TRAIL", + "ETHERTYPE_TRANSETHER", + "ETHERTYPE_TYMSHARE", + "ETHERTYPE_UBBST", + "ETHERTYPE_UBDEBUG", + "ETHERTYPE_UBDIAGLOOP", + "ETHERTYPE_UBDL", + "ETHERTYPE_UBNIU", + "ETHERTYPE_UBNMC", + "ETHERTYPE_VALID", + "ETHERTYPE_VARIAN", + "ETHERTYPE_VAXELN", + "ETHERTYPE_VEECO", + "ETHERTYPE_VEXP", + "ETHERTYPE_VGLAB", + "ETHERTYPE_VINES", + "ETHERTYPE_VINESECHO", + "ETHERTYPE_VINESLOOP", + "ETHERTYPE_VITAL", + "ETHERTYPE_VLAN", + "ETHERTYPE_VLTLMAN", + "ETHERTYPE_VPROD", + "ETHERTYPE_VURESERVED", + "ETHERTYPE_WATERLOO", + "ETHERTYPE_WELLFLEET", + "ETHERTYPE_X25", + "ETHERTYPE_X75", + "ETHERTYPE_XNSSM", + "ETHERTYPE_XTP", + "ETHER_ADDR_LEN", + "ETHER_ALIGN", + "ETHER_CRC_LEN", + "ETHER_CRC_POLY_BE", + "ETHER_CRC_POLY_LE", + "ETHER_HDR_LEN", + "ETHER_MAX_DIX_LEN", + "ETHER_MAX_LEN", + "ETHER_MAX_LEN_JUMBO", + "ETHER_MIN_LEN", + "ETHER_PPPOE_ENCAP_LEN", + "ETHER_TYPE_LEN", + "ETHER_VLAN_ENCAP_LEN", + "ETH_P_1588", + "ETH_P_8021Q", + "ETH_P_802_2", + "ETH_P_802_3", + "ETH_P_AARP", + "ETH_P_ALL", + "ETH_P_AOE", + "ETH_P_ARCNET", + "ETH_P_ARP", + "ETH_P_ATALK", + "ETH_P_ATMFATE", + "ETH_P_ATMMPOA", + "ETH_P_AX25", + "ETH_P_BPQ", + "ETH_P_CAIF", + "ETH_P_CAN", + "ETH_P_CONTROL", + "ETH_P_CUST", + "ETH_P_DDCMP", + "ETH_P_DEC", + "ETH_P_DIAG", + "ETH_P_DNA_DL", + "ETH_P_DNA_RC", + "ETH_P_DNA_RT", + "ETH_P_DSA", + "ETH_P_ECONET", + "ETH_P_EDSA", + "ETH_P_FCOE", + "ETH_P_FIP", + "ETH_P_HDLC", + "ETH_P_IEEE802154", + "ETH_P_IEEEPUP", + "ETH_P_IEEEPUPAT", + "ETH_P_IP", + "ETH_P_IPV6", + "ETH_P_IPX", + "ETH_P_IRDA", + "ETH_P_LAT", + "ETH_P_LINK_CTL", + "ETH_P_LOCALTALK", + "ETH_P_LOOP", + "ETH_P_MOBITEX", + "ETH_P_MPLS_MC", + "ETH_P_MPLS_UC", + "ETH_P_PAE", + "ETH_P_PAUSE", + "ETH_P_PHONET", + "ETH_P_PPPTALK", + "ETH_P_PPP_DISC", + "ETH_P_PPP_MP", + "ETH_P_PPP_SES", + "ETH_P_PUP", + "ETH_P_PUPAT", + "ETH_P_RARP", + "ETH_P_SCA", + "ETH_P_SLOW", + "ETH_P_SNAP", + "ETH_P_TEB", + "ETH_P_TIPC", + "ETH_P_TRAILER", + "ETH_P_TR_802_2", + "ETH_P_WAN_PPP", + "ETH_P_WCCP", + "ETH_P_X25", + "ETIME", + "ETIMEDOUT", + "ETOOMANYREFS", + "ETXTBSY", + "EUCLEAN", + "EUNATCH", + "EUSERS", + "EVFILT_AIO", + "EVFILT_FS", + "EVFILT_LIO", + "EVFILT_MACHPORT", + "EVFILT_PROC", + "EVFILT_READ", + "EVFILT_SIGNAL", + "EVFILT_SYSCOUNT", + "EVFILT_THREADMARKER", + "EVFILT_TIMER", + "EVFILT_USER", + "EVFILT_VM", + "EVFILT_VNODE", + "EVFILT_WRITE", + "EV_ADD", + "EV_CLEAR", + "EV_DELETE", + "EV_DISABLE", + "EV_DISPATCH", + "EV_DROP", + "EV_ENABLE", + "EV_EOF", + "EV_ERROR", + "EV_FLAG0", + "EV_FLAG1", + "EV_ONESHOT", + "EV_OOBAND", + "EV_POLL", + "EV_RECEIPT", + "EV_SYSFLAGS", + "EWINDOWS", + "EWOULDBLOCK", + "EXDEV", + "EXFULL", + "EXTA", + "EXTB", + "EXTPROC", + "Environ", + "EpollCreate", + "EpollCreate1", + "EpollCtl", + "EpollEvent", + "EpollWait", + "Errno", + "EscapeArg", + "Exchangedata", + "Exec", + "Exit", + "ExitProcess", + "FD_CLOEXEC", + "FD_SETSIZE", + "FILE_ACTION_ADDED", + "FILE_ACTION_MODIFIED", + "FILE_ACTION_REMOVED", + "FILE_ACTION_RENAMED_NEW_NAME", + "FILE_ACTION_RENAMED_OLD_NAME", + "FILE_APPEND_DATA", + "FILE_ATTRIBUTE_ARCHIVE", + "FILE_ATTRIBUTE_DIRECTORY", + "FILE_ATTRIBUTE_HIDDEN", + "FILE_ATTRIBUTE_NORMAL", + "FILE_ATTRIBUTE_READONLY", + "FILE_ATTRIBUTE_REPARSE_POINT", + "FILE_ATTRIBUTE_SYSTEM", + "FILE_BEGIN", + "FILE_CURRENT", + "FILE_END", + "FILE_FLAG_BACKUP_SEMANTICS", + "FILE_FLAG_OPEN_REPARSE_POINT", + "FILE_FLAG_OVERLAPPED", + "FILE_LIST_DIRECTORY", + "FILE_MAP_COPY", + "FILE_MAP_EXECUTE", + "FILE_MAP_READ", + "FILE_MAP_WRITE", + "FILE_NOTIFY_CHANGE_ATTRIBUTES", + "FILE_NOTIFY_CHANGE_CREATION", + "FILE_NOTIFY_CHANGE_DIR_NAME", + "FILE_NOTIFY_CHANGE_FILE_NAME", + "FILE_NOTIFY_CHANGE_LAST_ACCESS", + "FILE_NOTIFY_CHANGE_LAST_WRITE", + "FILE_NOTIFY_CHANGE_SIZE", + "FILE_SHARE_DELETE", + "FILE_SHARE_READ", + "FILE_SHARE_WRITE", + "FILE_SKIP_COMPLETION_PORT_ON_SUCCESS", + "FILE_SKIP_SET_EVENT_ON_HANDLE", + "FILE_TYPE_CHAR", + "FILE_TYPE_DISK", + "FILE_TYPE_PIPE", + "FILE_TYPE_REMOTE", + "FILE_TYPE_UNKNOWN", + "FILE_WRITE_ATTRIBUTES", + "FLUSHO", + "FORMAT_MESSAGE_ALLOCATE_BUFFER", + "FORMAT_MESSAGE_ARGUMENT_ARRAY", + "FORMAT_MESSAGE_FROM_HMODULE", + "FORMAT_MESSAGE_FROM_STRING", + "FORMAT_MESSAGE_FROM_SYSTEM", + "FORMAT_MESSAGE_IGNORE_INSERTS", + "FORMAT_MESSAGE_MAX_WIDTH_MASK", + "FSCTL_GET_REPARSE_POINT", + "F_ADDFILESIGS", + "F_ADDSIGS", + "F_ALLOCATEALL", + "F_ALLOCATECONTIG", + "F_CANCEL", + "F_CHKCLEAN", + "F_CLOSEM", + "F_DUP2FD", + "F_DUP2FD_CLOEXEC", + "F_DUPFD", + "F_DUPFD_CLOEXEC", + "F_EXLCK", + "F_FLUSH_DATA", + "F_FREEZE_FS", + "F_FSCTL", + "F_FSDIRMASK", + "F_FSIN", + "F_FSINOUT", + "F_FSOUT", + "F_FSPRIV", + "F_FSVOID", + "F_FULLFSYNC", + "F_GETFD", + "F_GETFL", + "F_GETLEASE", + "F_GETLK", + "F_GETLK64", + "F_GETLKPID", + "F_GETNOSIGPIPE", + "F_GETOWN", + "F_GETOWN_EX", + "F_GETPATH", + "F_GETPATH_MTMINFO", + "F_GETPIPE_SZ", + "F_GETPROTECTIONCLASS", + "F_GETSIG", + "F_GLOBAL_NOCACHE", + "F_LOCK", + "F_LOG2PHYS", + "F_LOG2PHYS_EXT", + "F_MARKDEPENDENCY", + "F_MAXFD", + "F_NOCACHE", + "F_NODIRECT", + "F_NOTIFY", + "F_OGETLK", + "F_OK", + "F_OSETLK", + "F_OSETLKW", + "F_PARAM_MASK", + "F_PARAM_MAX", + "F_PATHPKG_CHECK", + "F_PEOFPOSMODE", + "F_PREALLOCATE", + "F_RDADVISE", + "F_RDAHEAD", + "F_RDLCK", + "F_READAHEAD", + "F_READBOOTSTRAP", + "F_SETBACKINGSTORE", + "F_SETFD", + "F_SETFL", + "F_SETLEASE", + "F_SETLK", + "F_SETLK64", + "F_SETLKW", + "F_SETLKW64", + "F_SETLK_REMOTE", + "F_SETNOSIGPIPE", + "F_SETOWN", + "F_SETOWN_EX", + "F_SETPIPE_SZ", + "F_SETPROTECTIONCLASS", + "F_SETSIG", + "F_SETSIZE", + "F_SHLCK", + "F_TEST", + "F_THAW_FS", + "F_TLOCK", + "F_ULOCK", + "F_UNLCK", + "F_UNLCKSYS", + "F_VOLPOSMODE", + "F_WRITEBOOTSTRAP", + "F_WRLCK", + "Faccessat", + "Fallocate", + "Fbootstraptransfer_t", + "Fchdir", + "Fchflags", + "Fchmod", + "Fchmodat", + "Fchown", + "Fchownat", + "FcntlFlock", + "FdSet", + "Fdatasync", + "FileNotifyInformation", + "Filetime", + "FindClose", + "FindFirstFile", + "FindNextFile", + "Flock", + "Flock_t", + "FlushBpf", + "FlushFileBuffers", + "FlushViewOfFile", + "ForkExec", + "ForkLock", + "FormatMessage", + "Fpathconf", + "FreeAddrInfoW", + "FreeEnvironmentStrings", + "FreeLibrary", + "Fsid", + "Fstat", + "Fstatat", + "Fstatfs", + "Fstore_t", + "Fsync", + "Ftruncate", + "FullPath", + "Futimes", + "Futimesat", + "GENERIC_ALL", + "GENERIC_EXECUTE", + "GENERIC_READ", + "GENERIC_WRITE", + "GUID", + "GetAcceptExSockaddrs", + "GetAdaptersInfo", + "GetAddrInfoW", + "GetCommandLine", + "GetComputerName", + "GetConsoleMode", + "GetCurrentDirectory", + "GetCurrentProcess", + "GetEnvironmentStrings", + "GetEnvironmentVariable", + "GetExitCodeProcess", + "GetFileAttributes", + "GetFileAttributesEx", + "GetFileExInfoStandard", + "GetFileExMaxInfoLevel", + "GetFileInformationByHandle", + "GetFileType", + "GetFullPathName", + "GetHostByName", + "GetIfEntry", + "GetLastError", + "GetLengthSid", + "GetLongPathName", + "GetProcAddress", + "GetProcessTimes", + "GetProtoByName", + "GetQueuedCompletionStatus", + "GetServByName", + "GetShortPathName", + "GetStartupInfo", + "GetStdHandle", + "GetSystemTimeAsFileTime", + "GetTempPath", + "GetTimeZoneInformation", + "GetTokenInformation", + "GetUserNameEx", + "GetUserProfileDirectory", + "GetVersion", + "Getcwd", + "Getdents", + "Getdirentries", + "Getdtablesize", + "Getegid", + "Getenv", + "Geteuid", + "Getfsstat", + "Getgid", + "Getgroups", + "Getpagesize", + "Getpeername", + "Getpgid", + "Getpgrp", + "Getpid", + "Getppid", + "Getpriority", + "Getrlimit", + "Getrusage", + "Getsid", + "Getsockname", + "Getsockopt", + "GetsockoptByte", + "GetsockoptICMPv6Filter", + "GetsockoptIPMreq", + "GetsockoptIPMreqn", + "GetsockoptIPv6MTUInfo", + "GetsockoptIPv6Mreq", + "GetsockoptInet4Addr", + "GetsockoptInt", + "GetsockoptUcred", + "Gettid", + "Gettimeofday", + "Getuid", + "Getwd", + "Getxattr", + "HANDLE_FLAG_INHERIT", + "HKEY_CLASSES_ROOT", + "HKEY_CURRENT_CONFIG", + "HKEY_CURRENT_USER", + "HKEY_DYN_DATA", + "HKEY_LOCAL_MACHINE", + "HKEY_PERFORMANCE_DATA", + "HKEY_USERS", + "HUPCL", + "Handle", + "Hostent", + "ICANON", + "ICMP6_FILTER", + "ICMPV6_FILTER", + "ICMPv6Filter", + "ICRNL", + "IEXTEN", + "IFAN_ARRIVAL", + "IFAN_DEPARTURE", + "IFA_ADDRESS", + "IFA_ANYCAST", + "IFA_BROADCAST", + "IFA_CACHEINFO", + "IFA_F_DADFAILED", + "IFA_F_DEPRECATED", + "IFA_F_HOMEADDRESS", + "IFA_F_NODAD", + "IFA_F_OPTIMISTIC", + "IFA_F_PERMANENT", + "IFA_F_SECONDARY", + "IFA_F_TEMPORARY", + "IFA_F_TENTATIVE", + "IFA_LABEL", + "IFA_LOCAL", + "IFA_MAX", + "IFA_MULTICAST", + "IFA_ROUTE", + "IFA_UNSPEC", + "IFF_ALLMULTI", + "IFF_ALTPHYS", + "IFF_AUTOMEDIA", + "IFF_BROADCAST", + "IFF_CANTCHANGE", + "IFF_CANTCONFIG", + "IFF_DEBUG", + "IFF_DRV_OACTIVE", + "IFF_DRV_RUNNING", + "IFF_DYING", + "IFF_DYNAMIC", + "IFF_LINK0", + "IFF_LINK1", + "IFF_LINK2", + "IFF_LOOPBACK", + "IFF_MASTER", + "IFF_MONITOR", + "IFF_MULTICAST", + "IFF_NOARP", + "IFF_NOTRAILERS", + "IFF_NO_PI", + "IFF_OACTIVE", + "IFF_ONE_QUEUE", + "IFF_POINTOPOINT", + "IFF_POINTTOPOINT", + "IFF_PORTSEL", + "IFF_PPROMISC", + "IFF_PROMISC", + "IFF_RENAMING", + "IFF_RUNNING", + "IFF_SIMPLEX", + "IFF_SLAVE", + "IFF_SMART", + "IFF_STATICARP", + "IFF_TAP", + "IFF_TUN", + "IFF_TUN_EXCL", + "IFF_UP", + "IFF_VNET_HDR", + "IFLA_ADDRESS", + "IFLA_BROADCAST", + "IFLA_COST", + "IFLA_IFALIAS", + "IFLA_IFNAME", + "IFLA_LINK", + "IFLA_LINKINFO", + "IFLA_LINKMODE", + "IFLA_MAP", + "IFLA_MASTER", + "IFLA_MAX", + "IFLA_MTU", + "IFLA_NET_NS_PID", + "IFLA_OPERSTATE", + "IFLA_PRIORITY", + "IFLA_PROTINFO", + "IFLA_QDISC", + "IFLA_STATS", + "IFLA_TXQLEN", + "IFLA_UNSPEC", + "IFLA_WEIGHT", + "IFLA_WIRELESS", + "IFNAMSIZ", + "IFT_1822", + "IFT_A12MPPSWITCH", + "IFT_AAL2", + "IFT_AAL5", + "IFT_ADSL", + "IFT_AFLANE8023", + "IFT_AFLANE8025", + "IFT_ARAP", + "IFT_ARCNET", + "IFT_ARCNETPLUS", + "IFT_ASYNC", + "IFT_ATM", + "IFT_ATMDXI", + "IFT_ATMFUNI", + "IFT_ATMIMA", + "IFT_ATMLOGICAL", + "IFT_ATMRADIO", + "IFT_ATMSUBINTERFACE", + "IFT_ATMVCIENDPT", + "IFT_ATMVIRTUAL", + "IFT_BGPPOLICYACCOUNTING", + "IFT_BLUETOOTH", + "IFT_BRIDGE", + "IFT_BSC", + "IFT_CARP", + "IFT_CCTEMUL", + "IFT_CELLULAR", + "IFT_CEPT", + "IFT_CES", + "IFT_CHANNEL", + "IFT_CNR", + "IFT_COFFEE", + "IFT_COMPOSITELINK", + "IFT_DCN", + "IFT_DIGITALPOWERLINE", + "IFT_DIGITALWRAPPEROVERHEADCHANNEL", + "IFT_DLSW", + "IFT_DOCSCABLEDOWNSTREAM", + "IFT_DOCSCABLEMACLAYER", + "IFT_DOCSCABLEUPSTREAM", + "IFT_DOCSCABLEUPSTREAMCHANNEL", + "IFT_DS0", + "IFT_DS0BUNDLE", + "IFT_DS1FDL", + "IFT_DS3", + "IFT_DTM", + "IFT_DUMMY", + "IFT_DVBASILN", + "IFT_DVBASIOUT", + "IFT_DVBRCCDOWNSTREAM", + "IFT_DVBRCCMACLAYER", + "IFT_DVBRCCUPSTREAM", + "IFT_ECONET", + "IFT_ENC", + "IFT_EON", + "IFT_EPLRS", + "IFT_ESCON", + "IFT_ETHER", + "IFT_FAITH", + "IFT_FAST", + "IFT_FASTETHER", + "IFT_FASTETHERFX", + "IFT_FDDI", + "IFT_FIBRECHANNEL", + "IFT_FRAMERELAYINTERCONNECT", + "IFT_FRAMERELAYMPI", + "IFT_FRDLCIENDPT", + "IFT_FRELAY", + "IFT_FRELAYDCE", + "IFT_FRF16MFRBUNDLE", + "IFT_FRFORWARD", + "IFT_G703AT2MB", + "IFT_G703AT64K", + "IFT_GIF", + "IFT_GIGABITETHERNET", + "IFT_GR303IDT", + "IFT_GR303RDT", + "IFT_H323GATEKEEPER", + "IFT_H323PROXY", + "IFT_HDH1822", + "IFT_HDLC", + "IFT_HDSL2", + "IFT_HIPERLAN2", + "IFT_HIPPI", + "IFT_HIPPIINTERFACE", + "IFT_HOSTPAD", + "IFT_HSSI", + "IFT_HY", + "IFT_IBM370PARCHAN", + "IFT_IDSL", + "IFT_IEEE1394", + "IFT_IEEE80211", + "IFT_IEEE80212", + "IFT_IEEE8023ADLAG", + "IFT_IFGSN", + "IFT_IMT", + "IFT_INFINIBAND", + "IFT_INTERLEAVE", + "IFT_IP", + "IFT_IPFORWARD", + "IFT_IPOVERATM", + "IFT_IPOVERCDLC", + "IFT_IPOVERCLAW", + "IFT_IPSWITCH", + "IFT_IPXIP", + "IFT_ISDN", + "IFT_ISDNBASIC", + "IFT_ISDNPRIMARY", + "IFT_ISDNS", + "IFT_ISDNU", + "IFT_ISO88022LLC", + "IFT_ISO88023", + "IFT_ISO88024", + "IFT_ISO88025", + "IFT_ISO88025CRFPINT", + "IFT_ISO88025DTR", + "IFT_ISO88025FIBER", + "IFT_ISO88026", + "IFT_ISUP", + "IFT_L2VLAN", + "IFT_L3IPVLAN", + "IFT_L3IPXVLAN", + "IFT_LAPB", + "IFT_LAPD", + "IFT_LAPF", + "IFT_LINEGROUP", + "IFT_LOCALTALK", + "IFT_LOOP", + "IFT_MEDIAMAILOVERIP", + "IFT_MFSIGLINK", + "IFT_MIOX25", + "IFT_MODEM", + "IFT_MPC", + "IFT_MPLS", + "IFT_MPLSTUNNEL", + "IFT_MSDSL", + "IFT_MVL", + "IFT_MYRINET", + "IFT_NFAS", + "IFT_NSIP", + "IFT_OPTICALCHANNEL", + "IFT_OPTICALTRANSPORT", + "IFT_OTHER", + "IFT_P10", + "IFT_P80", + "IFT_PARA", + "IFT_PDP", + "IFT_PFLOG", + "IFT_PFLOW", + "IFT_PFSYNC", + "IFT_PLC", + "IFT_PON155", + "IFT_PON622", + "IFT_POS", + "IFT_PPP", + "IFT_PPPMULTILINKBUNDLE", + "IFT_PROPATM", + "IFT_PROPBWAP2MP", + "IFT_PROPCNLS", + "IFT_PROPDOCSWIRELESSDOWNSTREAM", + "IFT_PROPDOCSWIRELESSMACLAYER", + "IFT_PROPDOCSWIRELESSUPSTREAM", + "IFT_PROPMUX", + "IFT_PROPVIRTUAL", + "IFT_PROPWIRELESSP2P", + "IFT_PTPSERIAL", + "IFT_PVC", + "IFT_Q2931", + "IFT_QLLC", + "IFT_RADIOMAC", + "IFT_RADSL", + "IFT_REACHDSL", + "IFT_RFC1483", + "IFT_RS232", + "IFT_RSRB", + "IFT_SDLC", + "IFT_SDSL", + "IFT_SHDSL", + "IFT_SIP", + "IFT_SIPSIG", + "IFT_SIPTG", + "IFT_SLIP", + "IFT_SMDSDXI", + "IFT_SMDSICIP", + "IFT_SONET", + "IFT_SONETOVERHEADCHANNEL", + "IFT_SONETPATH", + "IFT_SONETVT", + "IFT_SRP", + "IFT_SS7SIGLINK", + "IFT_STACKTOSTACK", + "IFT_STARLAN", + "IFT_STF", + "IFT_T1", + "IFT_TDLC", + "IFT_TELINK", + "IFT_TERMPAD", + "IFT_TR008", + "IFT_TRANSPHDLC", + "IFT_TUNNEL", + "IFT_ULTRA", + "IFT_USB", + "IFT_V11", + "IFT_V35", + "IFT_V36", + "IFT_V37", + "IFT_VDSL", + "IFT_VIRTUALIPADDRESS", + "IFT_VIRTUALTG", + "IFT_VOICEDID", + "IFT_VOICEEM", + "IFT_VOICEEMFGD", + "IFT_VOICEENCAP", + "IFT_VOICEFGDEANA", + "IFT_VOICEFXO", + "IFT_VOICEFXS", + "IFT_VOICEOVERATM", + "IFT_VOICEOVERCABLE", + "IFT_VOICEOVERFRAMERELAY", + "IFT_VOICEOVERIP", + "IFT_X213", + "IFT_X25", + "IFT_X25DDN", + "IFT_X25HUNTGROUP", + "IFT_X25MLP", + "IFT_X25PLE", + "IFT_XETHER", + "IGNBRK", + "IGNCR", + "IGNORE", + "IGNPAR", + "IMAXBEL", + "INFINITE", + "INLCR", + "INPCK", + "INVALID_FILE_ATTRIBUTES", + "IN_ACCESS", + "IN_ALL_EVENTS", + "IN_ATTRIB", + "IN_CLASSA_HOST", + "IN_CLASSA_MAX", + "IN_CLASSA_NET", + "IN_CLASSA_NSHIFT", + "IN_CLASSB_HOST", + "IN_CLASSB_MAX", + "IN_CLASSB_NET", + "IN_CLASSB_NSHIFT", + "IN_CLASSC_HOST", + "IN_CLASSC_NET", + "IN_CLASSC_NSHIFT", + "IN_CLASSD_HOST", + "IN_CLASSD_NET", + "IN_CLASSD_NSHIFT", + "IN_CLOEXEC", + "IN_CLOSE", + "IN_CLOSE_NOWRITE", + "IN_CLOSE_WRITE", + "IN_CREATE", + "IN_DELETE", + "IN_DELETE_SELF", + "IN_DONT_FOLLOW", + "IN_EXCL_UNLINK", + "IN_IGNORED", + "IN_ISDIR", + "IN_LINKLOCALNETNUM", + "IN_LOOPBACKNET", + "IN_MASK_ADD", + "IN_MODIFY", + "IN_MOVE", + "IN_MOVED_FROM", + "IN_MOVED_TO", + "IN_MOVE_SELF", + "IN_NONBLOCK", + "IN_ONESHOT", + "IN_ONLYDIR", + "IN_OPEN", + "IN_Q_OVERFLOW", + "IN_RFC3021_HOST", + "IN_RFC3021_MASK", + "IN_RFC3021_NET", + "IN_RFC3021_NSHIFT", + "IN_UNMOUNT", + "IOC_IN", + "IOC_INOUT", + "IOC_OUT", + "IOC_VENDOR", + "IOC_WS2", + "IO_REPARSE_TAG_SYMLINK", + "IPMreq", + "IPMreqn", + "IPPROTO_3PC", + "IPPROTO_ADFS", + "IPPROTO_AH", + "IPPROTO_AHIP", + "IPPROTO_APES", + "IPPROTO_ARGUS", + "IPPROTO_AX25", + "IPPROTO_BHA", + "IPPROTO_BLT", + "IPPROTO_BRSATMON", + "IPPROTO_CARP", + "IPPROTO_CFTP", + "IPPROTO_CHAOS", + "IPPROTO_CMTP", + "IPPROTO_COMP", + "IPPROTO_CPHB", + "IPPROTO_CPNX", + "IPPROTO_DCCP", + "IPPROTO_DDP", + "IPPROTO_DGP", + "IPPROTO_DIVERT", + "IPPROTO_DIVERT_INIT", + "IPPROTO_DIVERT_RESP", + "IPPROTO_DONE", + "IPPROTO_DSTOPTS", + "IPPROTO_EGP", + "IPPROTO_EMCON", + "IPPROTO_ENCAP", + "IPPROTO_EON", + "IPPROTO_ESP", + "IPPROTO_ETHERIP", + "IPPROTO_FRAGMENT", + "IPPROTO_GGP", + "IPPROTO_GMTP", + "IPPROTO_GRE", + "IPPROTO_HELLO", + "IPPROTO_HMP", + "IPPROTO_HOPOPTS", + "IPPROTO_ICMP", + "IPPROTO_ICMPV6", + "IPPROTO_IDP", + "IPPROTO_IDPR", + "IPPROTO_IDRP", + "IPPROTO_IGMP", + "IPPROTO_IGP", + "IPPROTO_IGRP", + "IPPROTO_IL", + "IPPROTO_INLSP", + "IPPROTO_INP", + "IPPROTO_IP", + "IPPROTO_IPCOMP", + "IPPROTO_IPCV", + "IPPROTO_IPEIP", + "IPPROTO_IPIP", + "IPPROTO_IPPC", + "IPPROTO_IPV4", + "IPPROTO_IPV6", + "IPPROTO_IPV6_ICMP", + "IPPROTO_IRTP", + "IPPROTO_KRYPTOLAN", + "IPPROTO_LARP", + "IPPROTO_LEAF1", + "IPPROTO_LEAF2", + "IPPROTO_MAX", + "IPPROTO_MAXID", + "IPPROTO_MEAS", + "IPPROTO_MH", + "IPPROTO_MHRP", + "IPPROTO_MICP", + "IPPROTO_MOBILE", + "IPPROTO_MPLS", + "IPPROTO_MTP", + "IPPROTO_MUX", + "IPPROTO_ND", + "IPPROTO_NHRP", + "IPPROTO_NONE", + "IPPROTO_NSP", + "IPPROTO_NVPII", + "IPPROTO_OLD_DIVERT", + "IPPROTO_OSPFIGP", + "IPPROTO_PFSYNC", + "IPPROTO_PGM", + "IPPROTO_PIGP", + "IPPROTO_PIM", + "IPPROTO_PRM", + "IPPROTO_PUP", + "IPPROTO_PVP", + "IPPROTO_RAW", + "IPPROTO_RCCMON", + "IPPROTO_RDP", + "IPPROTO_ROUTING", + "IPPROTO_RSVP", + "IPPROTO_RVD", + "IPPROTO_SATEXPAK", + "IPPROTO_SATMON", + "IPPROTO_SCCSP", + "IPPROTO_SCTP", + "IPPROTO_SDRP", + "IPPROTO_SEND", + "IPPROTO_SEP", + "IPPROTO_SKIP", + "IPPROTO_SPACER", + "IPPROTO_SRPC", + "IPPROTO_ST", + "IPPROTO_SVMTP", + "IPPROTO_SWIPE", + "IPPROTO_TCF", + "IPPROTO_TCP", + "IPPROTO_TLSP", + "IPPROTO_TP", + "IPPROTO_TPXX", + "IPPROTO_TRUNK1", + "IPPROTO_TRUNK2", + "IPPROTO_TTP", + "IPPROTO_UDP", + "IPPROTO_UDPLITE", + "IPPROTO_VINES", + "IPPROTO_VISA", + "IPPROTO_VMTP", + "IPPROTO_VRRP", + "IPPROTO_WBEXPAK", + "IPPROTO_WBMON", + "IPPROTO_WSN", + "IPPROTO_XNET", + "IPPROTO_XTP", + "IPV6_2292DSTOPTS", + "IPV6_2292HOPLIMIT", + "IPV6_2292HOPOPTS", + "IPV6_2292NEXTHOP", + "IPV6_2292PKTINFO", + "IPV6_2292PKTOPTIONS", + "IPV6_2292RTHDR", + "IPV6_ADDRFORM", + "IPV6_ADD_MEMBERSHIP", + "IPV6_AUTHHDR", + "IPV6_AUTH_LEVEL", + "IPV6_AUTOFLOWLABEL", + "IPV6_BINDANY", + "IPV6_BINDV6ONLY", + "IPV6_BOUND_IF", + "IPV6_CHECKSUM", + "IPV6_DEFAULT_MULTICAST_HOPS", + "IPV6_DEFAULT_MULTICAST_LOOP", + "IPV6_DEFHLIM", + "IPV6_DONTFRAG", + "IPV6_DROP_MEMBERSHIP", + "IPV6_DSTOPTS", + "IPV6_ESP_NETWORK_LEVEL", + "IPV6_ESP_TRANS_LEVEL", + "IPV6_FAITH", + "IPV6_FLOWINFO_MASK", + "IPV6_FLOWLABEL_MASK", + "IPV6_FRAGTTL", + "IPV6_FW_ADD", + "IPV6_FW_DEL", + "IPV6_FW_FLUSH", + "IPV6_FW_GET", + "IPV6_FW_ZERO", + "IPV6_HLIMDEC", + "IPV6_HOPLIMIT", + "IPV6_HOPOPTS", + "IPV6_IPCOMP_LEVEL", + "IPV6_IPSEC_POLICY", + "IPV6_JOIN_ANYCAST", + "IPV6_JOIN_GROUP", + "IPV6_LEAVE_ANYCAST", + "IPV6_LEAVE_GROUP", + "IPV6_MAXHLIM", + "IPV6_MAXOPTHDR", + "IPV6_MAXPACKET", + "IPV6_MAX_GROUP_SRC_FILTER", + "IPV6_MAX_MEMBERSHIPS", + "IPV6_MAX_SOCK_SRC_FILTER", + "IPV6_MIN_MEMBERSHIPS", + "IPV6_MMTU", + "IPV6_MSFILTER", + "IPV6_MTU", + "IPV6_MTU_DISCOVER", + "IPV6_MULTICAST_HOPS", + "IPV6_MULTICAST_IF", + "IPV6_MULTICAST_LOOP", + "IPV6_NEXTHOP", + "IPV6_OPTIONS", + "IPV6_PATHMTU", + "IPV6_PIPEX", + "IPV6_PKTINFO", + "IPV6_PMTUDISC_DO", + "IPV6_PMTUDISC_DONT", + "IPV6_PMTUDISC_PROBE", + "IPV6_PMTUDISC_WANT", + "IPV6_PORTRANGE", + "IPV6_PORTRANGE_DEFAULT", + "IPV6_PORTRANGE_HIGH", + "IPV6_PORTRANGE_LOW", + "IPV6_PREFER_TEMPADDR", + "IPV6_RECVDSTOPTS", + "IPV6_RECVDSTPORT", + "IPV6_RECVERR", + "IPV6_RECVHOPLIMIT", + "IPV6_RECVHOPOPTS", + "IPV6_RECVPATHMTU", + "IPV6_RECVPKTINFO", + "IPV6_RECVRTHDR", + "IPV6_RECVTCLASS", + "IPV6_ROUTER_ALERT", + "IPV6_RTABLE", + "IPV6_RTHDR", + "IPV6_RTHDRDSTOPTS", + "IPV6_RTHDR_LOOSE", + "IPV6_RTHDR_STRICT", + "IPV6_RTHDR_TYPE_0", + "IPV6_RXDSTOPTS", + "IPV6_RXHOPOPTS", + "IPV6_SOCKOPT_RESERVED1", + "IPV6_TCLASS", + "IPV6_UNICAST_HOPS", + "IPV6_USE_MIN_MTU", + "IPV6_V6ONLY", + "IPV6_VERSION", + "IPV6_VERSION_MASK", + "IPV6_XFRM_POLICY", + "IP_ADD_MEMBERSHIP", + "IP_ADD_SOURCE_MEMBERSHIP", + "IP_AUTH_LEVEL", + "IP_BINDANY", + "IP_BLOCK_SOURCE", + "IP_BOUND_IF", + "IP_DEFAULT_MULTICAST_LOOP", + "IP_DEFAULT_MULTICAST_TTL", + "IP_DF", + "IP_DIVERTFL", + "IP_DONTFRAG", + "IP_DROP_MEMBERSHIP", + "IP_DROP_SOURCE_MEMBERSHIP", + "IP_DUMMYNET3", + "IP_DUMMYNET_CONFIGURE", + "IP_DUMMYNET_DEL", + "IP_DUMMYNET_FLUSH", + "IP_DUMMYNET_GET", + "IP_EF", + "IP_ERRORMTU", + "IP_ESP_NETWORK_LEVEL", + "IP_ESP_TRANS_LEVEL", + "IP_FAITH", + "IP_FREEBIND", + "IP_FW3", + "IP_FW_ADD", + "IP_FW_DEL", + "IP_FW_FLUSH", + "IP_FW_GET", + "IP_FW_NAT_CFG", + "IP_FW_NAT_DEL", + "IP_FW_NAT_GET_CONFIG", + "IP_FW_NAT_GET_LOG", + "IP_FW_RESETLOG", + "IP_FW_TABLE_ADD", + "IP_FW_TABLE_DEL", + "IP_FW_TABLE_FLUSH", + "IP_FW_TABLE_GETSIZE", + "IP_FW_TABLE_LIST", + "IP_FW_ZERO", + "IP_HDRINCL", + "IP_IPCOMP_LEVEL", + "IP_IPSECFLOWINFO", + "IP_IPSEC_LOCAL_AUTH", + "IP_IPSEC_LOCAL_CRED", + "IP_IPSEC_LOCAL_ID", + "IP_IPSEC_POLICY", + "IP_IPSEC_REMOTE_AUTH", + "IP_IPSEC_REMOTE_CRED", + "IP_IPSEC_REMOTE_ID", + "IP_MAXPACKET", + "IP_MAX_GROUP_SRC_FILTER", + "IP_MAX_MEMBERSHIPS", + "IP_MAX_SOCK_MUTE_FILTER", + "IP_MAX_SOCK_SRC_FILTER", + "IP_MAX_SOURCE_FILTER", + "IP_MF", + "IP_MINFRAGSIZE", + "IP_MINTTL", + "IP_MIN_MEMBERSHIPS", + "IP_MSFILTER", + "IP_MSS", + "IP_MTU", + "IP_MTU_DISCOVER", + "IP_MULTICAST_IF", + "IP_MULTICAST_IFINDEX", + "IP_MULTICAST_LOOP", + "IP_MULTICAST_TTL", + "IP_MULTICAST_VIF", + "IP_NAT__XXX", + "IP_OFFMASK", + "IP_OLD_FW_ADD", + "IP_OLD_FW_DEL", + "IP_OLD_FW_FLUSH", + "IP_OLD_FW_GET", + "IP_OLD_FW_RESETLOG", + "IP_OLD_FW_ZERO", + "IP_ONESBCAST", + "IP_OPTIONS", + "IP_ORIGDSTADDR", + "IP_PASSSEC", + "IP_PIPEX", + "IP_PKTINFO", + "IP_PKTOPTIONS", + "IP_PMTUDISC", + "IP_PMTUDISC_DO", + "IP_PMTUDISC_DONT", + "IP_PMTUDISC_PROBE", + "IP_PMTUDISC_WANT", + "IP_PORTRANGE", + "IP_PORTRANGE_DEFAULT", + "IP_PORTRANGE_HIGH", + "IP_PORTRANGE_LOW", + "IP_RECVDSTADDR", + "IP_RECVDSTPORT", + "IP_RECVERR", + "IP_RECVIF", + "IP_RECVOPTS", + "IP_RECVORIGDSTADDR", + "IP_RECVPKTINFO", + "IP_RECVRETOPTS", + "IP_RECVRTABLE", + "IP_RECVTOS", + "IP_RECVTTL", + "IP_RETOPTS", + "IP_RF", + "IP_ROUTER_ALERT", + "IP_RSVP_OFF", + "IP_RSVP_ON", + "IP_RSVP_VIF_OFF", + "IP_RSVP_VIF_ON", + "IP_RTABLE", + "IP_SENDSRCADDR", + "IP_STRIPHDR", + "IP_TOS", + "IP_TRAFFIC_MGT_BACKGROUND", + "IP_TRANSPARENT", + "IP_TTL", + "IP_UNBLOCK_SOURCE", + "IP_XFRM_POLICY", + "IPv6MTUInfo", + "IPv6Mreq", + "ISIG", + "ISTRIP", + "IUCLC", + "IUTF8", + "IXANY", + "IXOFF", + "IXON", + "IfAddrmsg", + "IfAnnounceMsghdr", + "IfData", + "IfInfomsg", + "IfMsghdr", + "IfaMsghdr", + "IfmaMsghdr", + "IfmaMsghdr2", + "ImplementsGetwd", + "Inet4Pktinfo", + "Inet6Pktinfo", + "InotifyAddWatch", + "InotifyEvent", + "InotifyInit", + "InotifyInit1", + "InotifyRmWatch", + "InterfaceAddrMessage", + "InterfaceAnnounceMessage", + "InterfaceInfo", + "InterfaceMessage", + "InterfaceMulticastAddrMessage", + "InvalidHandle", + "Ioperm", + "Iopl", + "Iovec", + "IpAdapterInfo", + "IpAddrString", + "IpAddressString", + "IpMaskString", + "Issetugid", + "KEY_ALL_ACCESS", + "KEY_CREATE_LINK", + "KEY_CREATE_SUB_KEY", + "KEY_ENUMERATE_SUB_KEYS", + "KEY_EXECUTE", + "KEY_NOTIFY", + "KEY_QUERY_VALUE", + "KEY_READ", + "KEY_SET_VALUE", + "KEY_WOW64_32KEY", + "KEY_WOW64_64KEY", + "KEY_WRITE", + "Kevent", + "Kevent_t", + "Kill", + "Klogctl", + "Kqueue", + "LANG_ENGLISH", + "LAYERED_PROTOCOL", + "LCNT_OVERLOAD_FLUSH", + "LINUX_REBOOT_CMD_CAD_OFF", + "LINUX_REBOOT_CMD_CAD_ON", + "LINUX_REBOOT_CMD_HALT", + "LINUX_REBOOT_CMD_KEXEC", + "LINUX_REBOOT_CMD_POWER_OFF", + "LINUX_REBOOT_CMD_RESTART", + "LINUX_REBOOT_CMD_RESTART2", + "LINUX_REBOOT_CMD_SW_SUSPEND", + "LINUX_REBOOT_MAGIC1", + "LINUX_REBOOT_MAGIC2", + "LOCK_EX", + "LOCK_NB", + "LOCK_SH", + "LOCK_UN", + "LazyDLL", + "LazyProc", + "Lchown", + "Linger", + "Link", + "Listen", + "Listxattr", + "LoadCancelIoEx", + "LoadConnectEx", + "LoadCreateSymbolicLink", + "LoadDLL", + "LoadGetAddrInfo", + "LoadLibrary", + "LoadSetFileCompletionNotificationModes", + "LocalFree", + "Log2phys_t", + "LookupAccountName", + "LookupAccountSid", + "LookupSID", + "LsfJump", + "LsfSocket", + "LsfStmt", + "Lstat", + "MADV_AUTOSYNC", + "MADV_CAN_REUSE", + "MADV_CORE", + "MADV_DOFORK", + "MADV_DONTFORK", + "MADV_DONTNEED", + "MADV_FREE", + "MADV_FREE_REUSABLE", + "MADV_FREE_REUSE", + "MADV_HUGEPAGE", + "MADV_HWPOISON", + "MADV_MERGEABLE", + "MADV_NOCORE", + "MADV_NOHUGEPAGE", + "MADV_NORMAL", + "MADV_NOSYNC", + "MADV_PROTECT", + "MADV_RANDOM", + "MADV_REMOVE", + "MADV_SEQUENTIAL", + "MADV_SPACEAVAIL", + "MADV_UNMERGEABLE", + "MADV_WILLNEED", + "MADV_ZERO_WIRED_PAGES", + "MAP_32BIT", + "MAP_ALIGNED_SUPER", + "MAP_ALIGNMENT_16MB", + "MAP_ALIGNMENT_1TB", + "MAP_ALIGNMENT_256TB", + "MAP_ALIGNMENT_4GB", + "MAP_ALIGNMENT_64KB", + "MAP_ALIGNMENT_64PB", + "MAP_ALIGNMENT_MASK", + "MAP_ALIGNMENT_SHIFT", + "MAP_ANON", + "MAP_ANONYMOUS", + "MAP_COPY", + "MAP_DENYWRITE", + "MAP_EXECUTABLE", + "MAP_FILE", + "MAP_FIXED", + "MAP_FLAGMASK", + "MAP_GROWSDOWN", + "MAP_HASSEMAPHORE", + "MAP_HUGETLB", + "MAP_INHERIT", + "MAP_INHERIT_COPY", + "MAP_INHERIT_DEFAULT", + "MAP_INHERIT_DONATE_COPY", + "MAP_INHERIT_NONE", + "MAP_INHERIT_SHARE", + "MAP_JIT", + "MAP_LOCKED", + "MAP_NOCACHE", + "MAP_NOCORE", + "MAP_NOEXTEND", + "MAP_NONBLOCK", + "MAP_NORESERVE", + "MAP_NOSYNC", + "MAP_POPULATE", + "MAP_PREFAULT_READ", + "MAP_PRIVATE", + "MAP_RENAME", + "MAP_RESERVED0080", + "MAP_RESERVED0100", + "MAP_SHARED", + "MAP_STACK", + "MAP_TRYFIXED", + "MAP_TYPE", + "MAP_WIRED", + "MAXIMUM_REPARSE_DATA_BUFFER_SIZE", + "MAXLEN_IFDESCR", + "MAXLEN_PHYSADDR", + "MAX_ADAPTER_ADDRESS_LENGTH", + "MAX_ADAPTER_DESCRIPTION_LENGTH", + "MAX_ADAPTER_NAME_LENGTH", + "MAX_COMPUTERNAME_LENGTH", + "MAX_INTERFACE_NAME_LEN", + "MAX_LONG_PATH", + "MAX_PATH", + "MAX_PROTOCOL_CHAIN", + "MCL_CURRENT", + "MCL_FUTURE", + "MNT_DETACH", + "MNT_EXPIRE", + "MNT_FORCE", + "MSG_BCAST", + "MSG_CMSG_CLOEXEC", + "MSG_COMPAT", + "MSG_CONFIRM", + "MSG_CONTROLMBUF", + "MSG_CTRUNC", + "MSG_DONTROUTE", + "MSG_DONTWAIT", + "MSG_EOF", + "MSG_EOR", + "MSG_ERRQUEUE", + "MSG_FASTOPEN", + "MSG_FIN", + "MSG_FLUSH", + "MSG_HAVEMORE", + "MSG_HOLD", + "MSG_IOVUSRSPACE", + "MSG_LENUSRSPACE", + "MSG_MCAST", + "MSG_MORE", + "MSG_NAMEMBUF", + "MSG_NBIO", + "MSG_NEEDSA", + "MSG_NOSIGNAL", + "MSG_NOTIFICATION", + "MSG_OOB", + "MSG_PEEK", + "MSG_PROXY", + "MSG_RCVMORE", + "MSG_RST", + "MSG_SEND", + "MSG_SYN", + "MSG_TRUNC", + "MSG_TRYHARD", + "MSG_USERFLAGS", + "MSG_WAITALL", + "MSG_WAITFORONE", + "MSG_WAITSTREAM", + "MS_ACTIVE", + "MS_ASYNC", + "MS_BIND", + "MS_DEACTIVATE", + "MS_DIRSYNC", + "MS_INVALIDATE", + "MS_I_VERSION", + "MS_KERNMOUNT", + "MS_KILLPAGES", + "MS_MANDLOCK", + "MS_MGC_MSK", + "MS_MGC_VAL", + "MS_MOVE", + "MS_NOATIME", + "MS_NODEV", + "MS_NODIRATIME", + "MS_NOEXEC", + "MS_NOSUID", + "MS_NOUSER", + "MS_POSIXACL", + "MS_PRIVATE", + "MS_RDONLY", + "MS_REC", + "MS_RELATIME", + "MS_REMOUNT", + "MS_RMT_MASK", + "MS_SHARED", + "MS_SILENT", + "MS_SLAVE", + "MS_STRICTATIME", + "MS_SYNC", + "MS_SYNCHRONOUS", + "MS_UNBINDABLE", + "Madvise", + "MapViewOfFile", + "MaxTokenInfoClass", + "Mclpool", + "MibIfRow", + "Mkdir", + "Mkdirat", + "Mkfifo", + "Mknod", + "Mknodat", + "Mlock", + "Mlockall", + "Mmap", + "Mount", + "MoveFile", + "Mprotect", + "Msghdr", + "Munlock", + "Munlockall", + "Munmap", + "MustLoadDLL", + "NAME_MAX", + "NETLINK_ADD_MEMBERSHIP", + "NETLINK_AUDIT", + "NETLINK_BROADCAST_ERROR", + "NETLINK_CONNECTOR", + "NETLINK_DNRTMSG", + "NETLINK_DROP_MEMBERSHIP", + "NETLINK_ECRYPTFS", + "NETLINK_FIB_LOOKUP", + "NETLINK_FIREWALL", + "NETLINK_GENERIC", + "NETLINK_INET_DIAG", + "NETLINK_IP6_FW", + "NETLINK_ISCSI", + "NETLINK_KOBJECT_UEVENT", + "NETLINK_NETFILTER", + "NETLINK_NFLOG", + "NETLINK_NO_ENOBUFS", + "NETLINK_PKTINFO", + "NETLINK_RDMA", + "NETLINK_ROUTE", + "NETLINK_SCSITRANSPORT", + "NETLINK_SELINUX", + "NETLINK_UNUSED", + "NETLINK_USERSOCK", + "NETLINK_XFRM", + "NET_RT_DUMP", + "NET_RT_DUMP2", + "NET_RT_FLAGS", + "NET_RT_IFLIST", + "NET_RT_IFLIST2", + "NET_RT_IFLISTL", + "NET_RT_IFMALIST", + "NET_RT_MAXID", + "NET_RT_OIFLIST", + "NET_RT_OOIFLIST", + "NET_RT_STAT", + "NET_RT_STATS", + "NET_RT_TABLE", + "NET_RT_TRASH", + "NLA_ALIGNTO", + "NLA_F_NESTED", + "NLA_F_NET_BYTEORDER", + "NLA_HDRLEN", + "NLMSG_ALIGNTO", + "NLMSG_DONE", + "NLMSG_ERROR", + "NLMSG_HDRLEN", + "NLMSG_MIN_TYPE", + "NLMSG_NOOP", + "NLMSG_OVERRUN", + "NLM_F_ACK", + "NLM_F_APPEND", + "NLM_F_ATOMIC", + "NLM_F_CREATE", + "NLM_F_DUMP", + "NLM_F_ECHO", + "NLM_F_EXCL", + "NLM_F_MATCH", + "NLM_F_MULTI", + "NLM_F_REPLACE", + "NLM_F_REQUEST", + "NLM_F_ROOT", + "NOFLSH", + "NOTE_ABSOLUTE", + "NOTE_ATTRIB", + "NOTE_CHILD", + "NOTE_DELETE", + "NOTE_EOF", + "NOTE_EXEC", + "NOTE_EXIT", + "NOTE_EXITSTATUS", + "NOTE_EXTEND", + "NOTE_FFAND", + "NOTE_FFCOPY", + "NOTE_FFCTRLMASK", + "NOTE_FFLAGSMASK", + "NOTE_FFNOP", + "NOTE_FFOR", + "NOTE_FORK", + "NOTE_LINK", + "NOTE_LOWAT", + "NOTE_NONE", + "NOTE_NSECONDS", + "NOTE_PCTRLMASK", + "NOTE_PDATAMASK", + "NOTE_REAP", + "NOTE_RENAME", + "NOTE_RESOURCEEND", + "NOTE_REVOKE", + "NOTE_SECONDS", + "NOTE_SIGNAL", + "NOTE_TRACK", + "NOTE_TRACKERR", + "NOTE_TRIGGER", + "NOTE_TRUNCATE", + "NOTE_USECONDS", + "NOTE_VM_ERROR", + "NOTE_VM_PRESSURE", + "NOTE_VM_PRESSURE_SUDDEN_TERMINATE", + "NOTE_VM_PRESSURE_TERMINATE", + "NOTE_WRITE", + "NameCanonical", + "NameCanonicalEx", + "NameDisplay", + "NameDnsDomain", + "NameFullyQualifiedDN", + "NameSamCompatible", + "NameServicePrincipal", + "NameUniqueId", + "NameUnknown", + "NameUserPrincipal", + "Nanosleep", + "NetApiBufferFree", + "NetGetJoinInformation", + "NetSetupDomainName", + "NetSetupUnjoined", + "NetSetupUnknownStatus", + "NetSetupWorkgroupName", + "NetUserGetInfo", + "NetlinkMessage", + "NetlinkRIB", + "NetlinkRouteAttr", + "NetlinkRouteRequest", + "NewCallback", + "NewCallbackCDecl", + "NewLazyDLL", + "NlAttr", + "NlMsgerr", + "NlMsghdr", + "NsecToFiletime", + "NsecToTimespec", + "NsecToTimeval", + "Ntohs", + "OCRNL", + "OFDEL", + "OFILL", + "OFIOGETBMAP", + "OID_PKIX_KP_SERVER_AUTH", + "OID_SERVER_GATED_CRYPTO", + "OID_SGC_NETSCAPE", + "OLCUC", + "ONLCR", + "ONLRET", + "ONOCR", + "ONOEOT", + "OPEN_ALWAYS", + "OPEN_EXISTING", + "OPOST", + "O_ACCMODE", + "O_ALERT", + "O_ALT_IO", + "O_APPEND", + "O_ASYNC", + "O_CLOEXEC", + "O_CREAT", + "O_DIRECT", + "O_DIRECTORY", + "O_DSYNC", + "O_EVTONLY", + "O_EXCL", + "O_EXEC", + "O_EXLOCK", + "O_FSYNC", + "O_LARGEFILE", + "O_NDELAY", + "O_NOATIME", + "O_NOCTTY", + "O_NOFOLLOW", + "O_NONBLOCK", + "O_NOSIGPIPE", + "O_POPUP", + "O_RDONLY", + "O_RDWR", + "O_RSYNC", + "O_SHLOCK", + "O_SYMLINK", + "O_SYNC", + "O_TRUNC", + "O_TTY_INIT", + "O_WRONLY", + "Open", + "OpenCurrentProcessToken", + "OpenProcess", + "OpenProcessToken", + "Openat", + "Overlapped", + "PACKET_ADD_MEMBERSHIP", + "PACKET_BROADCAST", + "PACKET_DROP_MEMBERSHIP", + "PACKET_FASTROUTE", + "PACKET_HOST", + "PACKET_LOOPBACK", + "PACKET_MR_ALLMULTI", + "PACKET_MR_MULTICAST", + "PACKET_MR_PROMISC", + "PACKET_MULTICAST", + "PACKET_OTHERHOST", + "PACKET_OUTGOING", + "PACKET_RECV_OUTPUT", + "PACKET_RX_RING", + "PACKET_STATISTICS", + "PAGE_EXECUTE_READ", + "PAGE_EXECUTE_READWRITE", + "PAGE_EXECUTE_WRITECOPY", + "PAGE_READONLY", + "PAGE_READWRITE", + "PAGE_WRITECOPY", + "PARENB", + "PARMRK", + "PARODD", + "PENDIN", + "PFL_HIDDEN", + "PFL_MATCHES_PROTOCOL_ZERO", + "PFL_MULTIPLE_PROTO_ENTRIES", + "PFL_NETWORKDIRECT_PROVIDER", + "PFL_RECOMMENDED_PROTO_ENTRY", + "PF_FLUSH", + "PKCS_7_ASN_ENCODING", + "PMC5_PIPELINE_FLUSH", + "PRIO_PGRP", + "PRIO_PROCESS", + "PRIO_USER", + "PRI_IOFLUSH", + "PROCESS_QUERY_INFORMATION", + "PROCESS_TERMINATE", + "PROT_EXEC", + "PROT_GROWSDOWN", + "PROT_GROWSUP", + "PROT_NONE", + "PROT_READ", + "PROT_WRITE", + "PROV_DH_SCHANNEL", + "PROV_DSS", + "PROV_DSS_DH", + "PROV_EC_ECDSA_FULL", + "PROV_EC_ECDSA_SIG", + "PROV_EC_ECNRA_FULL", + "PROV_EC_ECNRA_SIG", + "PROV_FORTEZZA", + "PROV_INTEL_SEC", + "PROV_MS_EXCHANGE", + "PROV_REPLACE_OWF", + "PROV_RNG", + "PROV_RSA_AES", + "PROV_RSA_FULL", + "PROV_RSA_SCHANNEL", + "PROV_RSA_SIG", + "PROV_SPYRUS_LYNKS", + "PROV_SSL", + "PR_CAPBSET_DROP", + "PR_CAPBSET_READ", + "PR_CLEAR_SECCOMP_FILTER", + "PR_ENDIAN_BIG", + "PR_ENDIAN_LITTLE", + "PR_ENDIAN_PPC_LITTLE", + "PR_FPEMU_NOPRINT", + "PR_FPEMU_SIGFPE", + "PR_FP_EXC_ASYNC", + "PR_FP_EXC_DISABLED", + "PR_FP_EXC_DIV", + "PR_FP_EXC_INV", + "PR_FP_EXC_NONRECOV", + "PR_FP_EXC_OVF", + "PR_FP_EXC_PRECISE", + "PR_FP_EXC_RES", + "PR_FP_EXC_SW_ENABLE", + "PR_FP_EXC_UND", + "PR_GET_DUMPABLE", + "PR_GET_ENDIAN", + "PR_GET_FPEMU", + "PR_GET_FPEXC", + "PR_GET_KEEPCAPS", + "PR_GET_NAME", + "PR_GET_PDEATHSIG", + "PR_GET_SECCOMP", + "PR_GET_SECCOMP_FILTER", + "PR_GET_SECUREBITS", + "PR_GET_TIMERSLACK", + "PR_GET_TIMING", + "PR_GET_TSC", + "PR_GET_UNALIGN", + "PR_MCE_KILL", + "PR_MCE_KILL_CLEAR", + "PR_MCE_KILL_DEFAULT", + "PR_MCE_KILL_EARLY", + "PR_MCE_KILL_GET", + "PR_MCE_KILL_LATE", + "PR_MCE_KILL_SET", + "PR_SECCOMP_FILTER_EVENT", + "PR_SECCOMP_FILTER_SYSCALL", + "PR_SET_DUMPABLE", + "PR_SET_ENDIAN", + "PR_SET_FPEMU", + "PR_SET_FPEXC", + "PR_SET_KEEPCAPS", + "PR_SET_NAME", + "PR_SET_PDEATHSIG", + "PR_SET_PTRACER", + "PR_SET_SECCOMP", + "PR_SET_SECCOMP_FILTER", + "PR_SET_SECUREBITS", + "PR_SET_TIMERSLACK", + "PR_SET_TIMING", + "PR_SET_TSC", + "PR_SET_UNALIGN", + "PR_TASK_PERF_EVENTS_DISABLE", + "PR_TASK_PERF_EVENTS_ENABLE", + "PR_TIMING_STATISTICAL", + "PR_TIMING_TIMESTAMP", + "PR_TSC_ENABLE", + "PR_TSC_SIGSEGV", + "PR_UNALIGN_NOPRINT", + "PR_UNALIGN_SIGBUS", + "PTRACE_ARCH_PRCTL", + "PTRACE_ATTACH", + "PTRACE_CONT", + "PTRACE_DETACH", + "PTRACE_EVENT_CLONE", + "PTRACE_EVENT_EXEC", + "PTRACE_EVENT_EXIT", + "PTRACE_EVENT_FORK", + "PTRACE_EVENT_VFORK", + "PTRACE_EVENT_VFORK_DONE", + "PTRACE_GETCRUNCHREGS", + "PTRACE_GETEVENTMSG", + "PTRACE_GETFPREGS", + "PTRACE_GETFPXREGS", + "PTRACE_GETHBPREGS", + "PTRACE_GETREGS", + "PTRACE_GETREGSET", + "PTRACE_GETSIGINFO", + "PTRACE_GETVFPREGS", + "PTRACE_GETWMMXREGS", + "PTRACE_GET_THREAD_AREA", + "PTRACE_KILL", + "PTRACE_OLDSETOPTIONS", + "PTRACE_O_MASK", + "PTRACE_O_TRACECLONE", + "PTRACE_O_TRACEEXEC", + "PTRACE_O_TRACEEXIT", + "PTRACE_O_TRACEFORK", + "PTRACE_O_TRACESYSGOOD", + "PTRACE_O_TRACEVFORK", + "PTRACE_O_TRACEVFORKDONE", + "PTRACE_PEEKDATA", + "PTRACE_PEEKTEXT", + "PTRACE_PEEKUSR", + "PTRACE_POKEDATA", + "PTRACE_POKETEXT", + "PTRACE_POKEUSR", + "PTRACE_SETCRUNCHREGS", + "PTRACE_SETFPREGS", + "PTRACE_SETFPXREGS", + "PTRACE_SETHBPREGS", + "PTRACE_SETOPTIONS", + "PTRACE_SETREGS", + "PTRACE_SETREGSET", + "PTRACE_SETSIGINFO", + "PTRACE_SETVFPREGS", + "PTRACE_SETWMMXREGS", + "PTRACE_SET_SYSCALL", + "PTRACE_SET_THREAD_AREA", + "PTRACE_SINGLEBLOCK", + "PTRACE_SINGLESTEP", + "PTRACE_SYSCALL", + "PTRACE_SYSEMU", + "PTRACE_SYSEMU_SINGLESTEP", + "PTRACE_TRACEME", + "PT_ATTACH", + "PT_ATTACHEXC", + "PT_CONTINUE", + "PT_DATA_ADDR", + "PT_DENY_ATTACH", + "PT_DETACH", + "PT_FIRSTMACH", + "PT_FORCEQUOTA", + "PT_KILL", + "PT_MASK", + "PT_READ_D", + "PT_READ_I", + "PT_READ_U", + "PT_SIGEXC", + "PT_STEP", + "PT_TEXT_ADDR", + "PT_TEXT_END_ADDR", + "PT_THUPDATE", + "PT_TRACE_ME", + "PT_WRITE_D", + "PT_WRITE_I", + "PT_WRITE_U", + "ParseDirent", + "ParseNetlinkMessage", + "ParseNetlinkRouteAttr", + "ParseRoutingMessage", + "ParseRoutingSockaddr", + "ParseSocketControlMessage", + "ParseUnixCredentials", + "ParseUnixRights", + "PathMax", + "Pathconf", + "Pause", + "Pipe", + "Pipe2", + "PivotRoot", + "Pointer", + "PostQueuedCompletionStatus", + "Pread", + "Proc", + "ProcAttr", + "Process32First", + "Process32Next", + "ProcessEntry32", + "ProcessInformation", + "Protoent", + "PtraceAttach", + "PtraceCont", + "PtraceDetach", + "PtraceGetEventMsg", + "PtraceGetRegs", + "PtracePeekData", + "PtracePeekText", + "PtracePokeData", + "PtracePokeText", + "PtraceRegs", + "PtraceSetOptions", + "PtraceSetRegs", + "PtraceSingleStep", + "PtraceSyscall", + "Pwrite", + "REG_BINARY", + "REG_DWORD", + "REG_DWORD_BIG_ENDIAN", + "REG_DWORD_LITTLE_ENDIAN", + "REG_EXPAND_SZ", + "REG_FULL_RESOURCE_DESCRIPTOR", + "REG_LINK", + "REG_MULTI_SZ", + "REG_NONE", + "REG_QWORD", + "REG_QWORD_LITTLE_ENDIAN", + "REG_RESOURCE_LIST", + "REG_RESOURCE_REQUIREMENTS_LIST", + "REG_SZ", + "RLIMIT_AS", + "RLIMIT_CORE", + "RLIMIT_CPU", + "RLIMIT_DATA", + "RLIMIT_FSIZE", + "RLIMIT_NOFILE", + "RLIMIT_STACK", + "RLIM_INFINITY", + "RTAX_ADVMSS", + "RTAX_AUTHOR", + "RTAX_BRD", + "RTAX_CWND", + "RTAX_DST", + "RTAX_FEATURES", + "RTAX_FEATURE_ALLFRAG", + "RTAX_FEATURE_ECN", + "RTAX_FEATURE_SACK", + "RTAX_FEATURE_TIMESTAMP", + "RTAX_GATEWAY", + "RTAX_GENMASK", + "RTAX_HOPLIMIT", + "RTAX_IFA", + "RTAX_IFP", + "RTAX_INITCWND", + "RTAX_INITRWND", + "RTAX_LABEL", + "RTAX_LOCK", + "RTAX_MAX", + "RTAX_MTU", + "RTAX_NETMASK", + "RTAX_REORDERING", + "RTAX_RTO_MIN", + "RTAX_RTT", + "RTAX_RTTVAR", + "RTAX_SRC", + "RTAX_SRCMASK", + "RTAX_SSTHRESH", + "RTAX_TAG", + "RTAX_UNSPEC", + "RTAX_WINDOW", + "RTA_ALIGNTO", + "RTA_AUTHOR", + "RTA_BRD", + "RTA_CACHEINFO", + "RTA_DST", + "RTA_FLOW", + "RTA_GATEWAY", + "RTA_GENMASK", + "RTA_IFA", + "RTA_IFP", + "RTA_IIF", + "RTA_LABEL", + "RTA_MAX", + "RTA_METRICS", + "RTA_MULTIPATH", + "RTA_NETMASK", + "RTA_OIF", + "RTA_PREFSRC", + "RTA_PRIORITY", + "RTA_SRC", + "RTA_SRCMASK", + "RTA_TABLE", + "RTA_TAG", + "RTA_UNSPEC", + "RTCF_DIRECTSRC", + "RTCF_DOREDIRECT", + "RTCF_LOG", + "RTCF_MASQ", + "RTCF_NAT", + "RTCF_VALVE", + "RTF_ADDRCLASSMASK", + "RTF_ADDRCONF", + "RTF_ALLONLINK", + "RTF_ANNOUNCE", + "RTF_BLACKHOLE", + "RTF_BROADCAST", + "RTF_CACHE", + "RTF_CLONED", + "RTF_CLONING", + "RTF_CONDEMNED", + "RTF_DEFAULT", + "RTF_DELCLONE", + "RTF_DONE", + "RTF_DYNAMIC", + "RTF_FLOW", + "RTF_FMASK", + "RTF_GATEWAY", + "RTF_GWFLAG_COMPAT", + "RTF_HOST", + "RTF_IFREF", + "RTF_IFSCOPE", + "RTF_INTERFACE", + "RTF_IRTT", + "RTF_LINKRT", + "RTF_LLDATA", + "RTF_LLINFO", + "RTF_LOCAL", + "RTF_MASK", + "RTF_MODIFIED", + "RTF_MPATH", + "RTF_MPLS", + "RTF_MSS", + "RTF_MTU", + "RTF_MULTICAST", + "RTF_NAT", + "RTF_NOFORWARD", + "RTF_NONEXTHOP", + "RTF_NOPMTUDISC", + "RTF_PERMANENT_ARP", + "RTF_PINNED", + "RTF_POLICY", + "RTF_PRCLONING", + "RTF_PROTO1", + "RTF_PROTO2", + "RTF_PROTO3", + "RTF_REINSTATE", + "RTF_REJECT", + "RTF_RNH_LOCKED", + "RTF_SOURCE", + "RTF_SRC", + "RTF_STATIC", + "RTF_STICKY", + "RTF_THROW", + "RTF_TUNNEL", + "RTF_UP", + "RTF_USETRAILERS", + "RTF_WASCLONED", + "RTF_WINDOW", + "RTF_XRESOLVE", + "RTM_ADD", + "RTM_BASE", + "RTM_CHANGE", + "RTM_CHGADDR", + "RTM_DELACTION", + "RTM_DELADDR", + "RTM_DELADDRLABEL", + "RTM_DELETE", + "RTM_DELLINK", + "RTM_DELMADDR", + "RTM_DELNEIGH", + "RTM_DELQDISC", + "RTM_DELROUTE", + "RTM_DELRULE", + "RTM_DELTCLASS", + "RTM_DELTFILTER", + "RTM_DESYNC", + "RTM_F_CLONED", + "RTM_F_EQUALIZE", + "RTM_F_NOTIFY", + "RTM_F_PREFIX", + "RTM_GET", + "RTM_GET2", + "RTM_GETACTION", + "RTM_GETADDR", + "RTM_GETADDRLABEL", + "RTM_GETANYCAST", + "RTM_GETDCB", + "RTM_GETLINK", + "RTM_GETMULTICAST", + "RTM_GETNEIGH", + "RTM_GETNEIGHTBL", + "RTM_GETQDISC", + "RTM_GETROUTE", + "RTM_GETRULE", + "RTM_GETTCLASS", + "RTM_GETTFILTER", + "RTM_IEEE80211", + "RTM_IFANNOUNCE", + "RTM_IFINFO", + "RTM_IFINFO2", + "RTM_LLINFO_UPD", + "RTM_LOCK", + "RTM_LOSING", + "RTM_MAX", + "RTM_MAXSIZE", + "RTM_MISS", + "RTM_NEWACTION", + "RTM_NEWADDR", + "RTM_NEWADDRLABEL", + "RTM_NEWLINK", + "RTM_NEWMADDR", + "RTM_NEWMADDR2", + "RTM_NEWNDUSEROPT", + "RTM_NEWNEIGH", + "RTM_NEWNEIGHTBL", + "RTM_NEWPREFIX", + "RTM_NEWQDISC", + "RTM_NEWROUTE", + "RTM_NEWRULE", + "RTM_NEWTCLASS", + "RTM_NEWTFILTER", + "RTM_NR_FAMILIES", + "RTM_NR_MSGTYPES", + "RTM_OIFINFO", + "RTM_OLDADD", + "RTM_OLDDEL", + "RTM_OOIFINFO", + "RTM_REDIRECT", + "RTM_RESOLVE", + "RTM_RTTUNIT", + "RTM_SETDCB", + "RTM_SETGATE", + "RTM_SETLINK", + "RTM_SETNEIGHTBL", + "RTM_VERSION", + "RTNH_ALIGNTO", + "RTNH_F_DEAD", + "RTNH_F_ONLINK", + "RTNH_F_PERVASIVE", + "RTNLGRP_IPV4_IFADDR", + "RTNLGRP_IPV4_MROUTE", + "RTNLGRP_IPV4_ROUTE", + "RTNLGRP_IPV4_RULE", + "RTNLGRP_IPV6_IFADDR", + "RTNLGRP_IPV6_IFINFO", + "RTNLGRP_IPV6_MROUTE", + "RTNLGRP_IPV6_PREFIX", + "RTNLGRP_IPV6_ROUTE", + "RTNLGRP_IPV6_RULE", + "RTNLGRP_LINK", + "RTNLGRP_ND_USEROPT", + "RTNLGRP_NEIGH", + "RTNLGRP_NONE", + "RTNLGRP_NOTIFY", + "RTNLGRP_TC", + "RTN_ANYCAST", + "RTN_BLACKHOLE", + "RTN_BROADCAST", + "RTN_LOCAL", + "RTN_MAX", + "RTN_MULTICAST", + "RTN_NAT", + "RTN_PROHIBIT", + "RTN_THROW", + "RTN_UNICAST", + "RTN_UNREACHABLE", + "RTN_UNSPEC", + "RTN_XRESOLVE", + "RTPROT_BIRD", + "RTPROT_BOOT", + "RTPROT_DHCP", + "RTPROT_DNROUTED", + "RTPROT_GATED", + "RTPROT_KERNEL", + "RTPROT_MRT", + "RTPROT_NTK", + "RTPROT_RA", + "RTPROT_REDIRECT", + "RTPROT_STATIC", + "RTPROT_UNSPEC", + "RTPROT_XORP", + "RTPROT_ZEBRA", + "RTV_EXPIRE", + "RTV_HOPCOUNT", + "RTV_MTU", + "RTV_RPIPE", + "RTV_RTT", + "RTV_RTTVAR", + "RTV_SPIPE", + "RTV_SSTHRESH", + "RTV_WEIGHT", + "RT_CACHING_CONTEXT", + "RT_CLASS_DEFAULT", + "RT_CLASS_LOCAL", + "RT_CLASS_MAIN", + "RT_CLASS_MAX", + "RT_CLASS_UNSPEC", + "RT_DEFAULT_FIB", + "RT_NORTREF", + "RT_SCOPE_HOST", + "RT_SCOPE_LINK", + "RT_SCOPE_NOWHERE", + "RT_SCOPE_SITE", + "RT_SCOPE_UNIVERSE", + "RT_TABLEID_MAX", + "RT_TABLE_COMPAT", + "RT_TABLE_DEFAULT", + "RT_TABLE_LOCAL", + "RT_TABLE_MAIN", + "RT_TABLE_MAX", + "RT_TABLE_UNSPEC", + "RUSAGE_CHILDREN", + "RUSAGE_SELF", + "RUSAGE_THREAD", + "Radvisory_t", + "RawConn", + "RawSockaddr", + "RawSockaddrAny", + "RawSockaddrDatalink", + "RawSockaddrInet4", + "RawSockaddrInet6", + "RawSockaddrLinklayer", + "RawSockaddrNetlink", + "RawSockaddrUnix", + "RawSyscall", + "RawSyscall6", + "Read", + "ReadConsole", + "ReadDirectoryChanges", + "ReadDirent", + "ReadFile", + "Readlink", + "Reboot", + "Recvfrom", + "Recvmsg", + "RegCloseKey", + "RegEnumKeyEx", + "RegOpenKeyEx", + "RegQueryInfoKey", + "RegQueryValueEx", + "RemoveDirectory", + "Removexattr", + "Rename", + "Renameat", + "Revoke", + "Rlimit", + "Rmdir", + "RouteMessage", + "RouteRIB", + "RoutingMessage", + "RtAttr", + "RtGenmsg", + "RtMetrics", + "RtMsg", + "RtMsghdr", + "RtNexthop", + "Rusage", + "SCM_BINTIME", + "SCM_CREDENTIALS", + "SCM_CREDS", + "SCM_RIGHTS", + "SCM_TIMESTAMP", + "SCM_TIMESTAMPING", + "SCM_TIMESTAMPNS", + "SCM_TIMESTAMP_MONOTONIC", + "SHUT_RD", + "SHUT_RDWR", + "SHUT_WR", + "SID", + "SIDAndAttributes", + "SIGABRT", + "SIGALRM", + "SIGBUS", + "SIGCHLD", + "SIGCLD", + "SIGCONT", + "SIGEMT", + "SIGFPE", + "SIGHUP", + "SIGILL", + "SIGINFO", + "SIGINT", + "SIGIO", + "SIGIOT", + "SIGKILL", + "SIGLIBRT", + "SIGLWP", + "SIGPIPE", + "SIGPOLL", + "SIGPROF", + "SIGPWR", + "SIGQUIT", + "SIGSEGV", + "SIGSTKFLT", + "SIGSTOP", + "SIGSYS", + "SIGTERM", + "SIGTHR", + "SIGTRAP", + "SIGTSTP", + "SIGTTIN", + "SIGTTOU", + "SIGUNUSED", + "SIGURG", + "SIGUSR1", + "SIGUSR2", + "SIGVTALRM", + "SIGWINCH", + "SIGXCPU", + "SIGXFSZ", + "SIOCADDDLCI", + "SIOCADDMULTI", + "SIOCADDRT", + "SIOCAIFADDR", + "SIOCAIFGROUP", + "SIOCALIFADDR", + "SIOCARPIPLL", + "SIOCATMARK", + "SIOCAUTOADDR", + "SIOCAUTONETMASK", + "SIOCBRDGADD", + "SIOCBRDGADDS", + "SIOCBRDGARL", + "SIOCBRDGDADDR", + "SIOCBRDGDEL", + "SIOCBRDGDELS", + "SIOCBRDGFLUSH", + "SIOCBRDGFRL", + "SIOCBRDGGCACHE", + "SIOCBRDGGFD", + "SIOCBRDGGHT", + "SIOCBRDGGIFFLGS", + "SIOCBRDGGMA", + "SIOCBRDGGPARAM", + "SIOCBRDGGPRI", + "SIOCBRDGGRL", + "SIOCBRDGGSIFS", + "SIOCBRDGGTO", + "SIOCBRDGIFS", + "SIOCBRDGRTS", + "SIOCBRDGSADDR", + "SIOCBRDGSCACHE", + "SIOCBRDGSFD", + "SIOCBRDGSHT", + "SIOCBRDGSIFCOST", + "SIOCBRDGSIFFLGS", + "SIOCBRDGSIFPRIO", + "SIOCBRDGSMA", + "SIOCBRDGSPRI", + "SIOCBRDGSPROTO", + "SIOCBRDGSTO", + "SIOCBRDGSTXHC", + "SIOCDARP", + "SIOCDELDLCI", + "SIOCDELMULTI", + "SIOCDELRT", + "SIOCDEVPRIVATE", + "SIOCDIFADDR", + "SIOCDIFGROUP", + "SIOCDIFPHYADDR", + "SIOCDLIFADDR", + "SIOCDRARP", + "SIOCGARP", + "SIOCGDRVSPEC", + "SIOCGETKALIVE", + "SIOCGETLABEL", + "SIOCGETPFLOW", + "SIOCGETPFSYNC", + "SIOCGETSGCNT", + "SIOCGETVIFCNT", + "SIOCGETVLAN", + "SIOCGHIWAT", + "SIOCGIFADDR", + "SIOCGIFADDRPREF", + "SIOCGIFALIAS", + "SIOCGIFALTMTU", + "SIOCGIFASYNCMAP", + "SIOCGIFBOND", + "SIOCGIFBR", + "SIOCGIFBRDADDR", + "SIOCGIFCAP", + "SIOCGIFCONF", + "SIOCGIFCOUNT", + "SIOCGIFDATA", + "SIOCGIFDESCR", + "SIOCGIFDEVMTU", + "SIOCGIFDLT", + "SIOCGIFDSTADDR", + "SIOCGIFENCAP", + "SIOCGIFFIB", + "SIOCGIFFLAGS", + "SIOCGIFGATTR", + "SIOCGIFGENERIC", + "SIOCGIFGMEMB", + "SIOCGIFGROUP", + "SIOCGIFHARDMTU", + "SIOCGIFHWADDR", + "SIOCGIFINDEX", + "SIOCGIFKPI", + "SIOCGIFMAC", + "SIOCGIFMAP", + "SIOCGIFMEDIA", + "SIOCGIFMEM", + "SIOCGIFMETRIC", + "SIOCGIFMTU", + "SIOCGIFNAME", + "SIOCGIFNETMASK", + "SIOCGIFPDSTADDR", + "SIOCGIFPFLAGS", + "SIOCGIFPHYS", + "SIOCGIFPRIORITY", + "SIOCGIFPSRCADDR", + "SIOCGIFRDOMAIN", + "SIOCGIFRTLABEL", + "SIOCGIFSLAVE", + "SIOCGIFSTATUS", + "SIOCGIFTIMESLOT", + "SIOCGIFTXQLEN", + "SIOCGIFVLAN", + "SIOCGIFWAKEFLAGS", + "SIOCGIFXFLAGS", + "SIOCGLIFADDR", + "SIOCGLIFPHYADDR", + "SIOCGLIFPHYRTABLE", + "SIOCGLIFPHYTTL", + "SIOCGLINKSTR", + "SIOCGLOWAT", + "SIOCGPGRP", + "SIOCGPRIVATE_0", + "SIOCGPRIVATE_1", + "SIOCGRARP", + "SIOCGSPPPPARAMS", + "SIOCGSTAMP", + "SIOCGSTAMPNS", + "SIOCGVH", + "SIOCGVNETID", + "SIOCIFCREATE", + "SIOCIFCREATE2", + "SIOCIFDESTROY", + "SIOCIFGCLONERS", + "SIOCINITIFADDR", + "SIOCPROTOPRIVATE", + "SIOCRSLVMULTI", + "SIOCRTMSG", + "SIOCSARP", + "SIOCSDRVSPEC", + "SIOCSETKALIVE", + "SIOCSETLABEL", + "SIOCSETPFLOW", + "SIOCSETPFSYNC", + "SIOCSETVLAN", + "SIOCSHIWAT", + "SIOCSIFADDR", + "SIOCSIFADDRPREF", + "SIOCSIFALTMTU", + "SIOCSIFASYNCMAP", + "SIOCSIFBOND", + "SIOCSIFBR", + "SIOCSIFBRDADDR", + "SIOCSIFCAP", + "SIOCSIFDESCR", + "SIOCSIFDSTADDR", + "SIOCSIFENCAP", + "SIOCSIFFIB", + "SIOCSIFFLAGS", + "SIOCSIFGATTR", + "SIOCSIFGENERIC", + "SIOCSIFHWADDR", + "SIOCSIFHWBROADCAST", + "SIOCSIFKPI", + "SIOCSIFLINK", + "SIOCSIFLLADDR", + "SIOCSIFMAC", + "SIOCSIFMAP", + "SIOCSIFMEDIA", + "SIOCSIFMEM", + "SIOCSIFMETRIC", + "SIOCSIFMTU", + "SIOCSIFNAME", + "SIOCSIFNETMASK", + "SIOCSIFPFLAGS", + "SIOCSIFPHYADDR", + "SIOCSIFPHYS", + "SIOCSIFPRIORITY", + "SIOCSIFRDOMAIN", + "SIOCSIFRTLABEL", + "SIOCSIFRVNET", + "SIOCSIFSLAVE", + "SIOCSIFTIMESLOT", + "SIOCSIFTXQLEN", + "SIOCSIFVLAN", + "SIOCSIFVNET", + "SIOCSIFXFLAGS", + "SIOCSLIFPHYADDR", + "SIOCSLIFPHYRTABLE", + "SIOCSLIFPHYTTL", + "SIOCSLINKSTR", + "SIOCSLOWAT", + "SIOCSPGRP", + "SIOCSRARP", + "SIOCSSPPPPARAMS", + "SIOCSVH", + "SIOCSVNETID", + "SIOCZIFDATA", + "SIO_GET_EXTENSION_FUNCTION_POINTER", + "SIO_GET_INTERFACE_LIST", + "SIO_KEEPALIVE_VALS", + "SIO_UDP_CONNRESET", + "SOCK_CLOEXEC", + "SOCK_DCCP", + "SOCK_DGRAM", + "SOCK_FLAGS_MASK", + "SOCK_MAXADDRLEN", + "SOCK_NONBLOCK", + "SOCK_NOSIGPIPE", + "SOCK_PACKET", + "SOCK_RAW", + "SOCK_RDM", + "SOCK_SEQPACKET", + "SOCK_STREAM", + "SOL_AAL", + "SOL_ATM", + "SOL_DECNET", + "SOL_ICMPV6", + "SOL_IP", + "SOL_IPV6", + "SOL_IRDA", + "SOL_PACKET", + "SOL_RAW", + "SOL_SOCKET", + "SOL_TCP", + "SOL_X25", + "SOMAXCONN", + "SO_ACCEPTCONN", + "SO_ACCEPTFILTER", + "SO_ATTACH_FILTER", + "SO_BINDANY", + "SO_BINDTODEVICE", + "SO_BINTIME", + "SO_BROADCAST", + "SO_BSDCOMPAT", + "SO_DEBUG", + "SO_DETACH_FILTER", + "SO_DOMAIN", + "SO_DONTROUTE", + "SO_DONTTRUNC", + "SO_ERROR", + "SO_KEEPALIVE", + "SO_LABEL", + "SO_LINGER", + "SO_LINGER_SEC", + "SO_LISTENINCQLEN", + "SO_LISTENQLEN", + "SO_LISTENQLIMIT", + "SO_MARK", + "SO_NETPROC", + "SO_NKE", + "SO_NOADDRERR", + "SO_NOHEADER", + "SO_NOSIGPIPE", + "SO_NOTIFYCONFLICT", + "SO_NO_CHECK", + "SO_NO_DDP", + "SO_NO_OFFLOAD", + "SO_NP_EXTENSIONS", + "SO_NREAD", + "SO_NWRITE", + "SO_OOBINLINE", + "SO_OVERFLOWED", + "SO_PASSCRED", + "SO_PASSSEC", + "SO_PEERCRED", + "SO_PEERLABEL", + "SO_PEERNAME", + "SO_PEERSEC", + "SO_PRIORITY", + "SO_PROTOCOL", + "SO_PROTOTYPE", + "SO_RANDOMPORT", + "SO_RCVBUF", + "SO_RCVBUFFORCE", + "SO_RCVLOWAT", + "SO_RCVTIMEO", + "SO_RESTRICTIONS", + "SO_RESTRICT_DENYIN", + "SO_RESTRICT_DENYOUT", + "SO_RESTRICT_DENYSET", + "SO_REUSEADDR", + "SO_REUSEPORT", + "SO_REUSESHAREUID", + "SO_RTABLE", + "SO_RXQ_OVFL", + "SO_SECURITY_AUTHENTICATION", + "SO_SECURITY_ENCRYPTION_NETWORK", + "SO_SECURITY_ENCRYPTION_TRANSPORT", + "SO_SETFIB", + "SO_SNDBUF", + "SO_SNDBUFFORCE", + "SO_SNDLOWAT", + "SO_SNDTIMEO", + "SO_SPLICE", + "SO_TIMESTAMP", + "SO_TIMESTAMPING", + "SO_TIMESTAMPNS", + "SO_TIMESTAMP_MONOTONIC", + "SO_TYPE", + "SO_UPCALLCLOSEWAIT", + "SO_UPDATE_ACCEPT_CONTEXT", + "SO_UPDATE_CONNECT_CONTEXT", + "SO_USELOOPBACK", + "SO_USER_COOKIE", + "SO_VENDOR", + "SO_WANTMORE", + "SO_WANTOOBFLAG", + "SSLExtraCertChainPolicyPara", + "STANDARD_RIGHTS_ALL", + "STANDARD_RIGHTS_EXECUTE", + "STANDARD_RIGHTS_READ", + "STANDARD_RIGHTS_REQUIRED", + "STANDARD_RIGHTS_WRITE", + "STARTF_USESHOWWINDOW", + "STARTF_USESTDHANDLES", + "STD_ERROR_HANDLE", + "STD_INPUT_HANDLE", + "STD_OUTPUT_HANDLE", + "SUBLANG_ENGLISH_US", + "SW_FORCEMINIMIZE", + "SW_HIDE", + "SW_MAXIMIZE", + "SW_MINIMIZE", + "SW_NORMAL", + "SW_RESTORE", + "SW_SHOW", + "SW_SHOWDEFAULT", + "SW_SHOWMAXIMIZED", + "SW_SHOWMINIMIZED", + "SW_SHOWMINNOACTIVE", + "SW_SHOWNA", + "SW_SHOWNOACTIVATE", + "SW_SHOWNORMAL", + "SYMBOLIC_LINK_FLAG_DIRECTORY", + "SYNCHRONIZE", + "SYSCTL_VERSION", + "SYSCTL_VERS_0", + "SYSCTL_VERS_1", + "SYSCTL_VERS_MASK", + "SYS_ABORT2", + "SYS_ACCEPT", + "SYS_ACCEPT4", + "SYS_ACCEPT_NOCANCEL", + "SYS_ACCESS", + "SYS_ACCESS_EXTENDED", + "SYS_ACCT", + "SYS_ADD_KEY", + "SYS_ADD_PROFIL", + "SYS_ADJFREQ", + "SYS_ADJTIME", + "SYS_ADJTIMEX", + "SYS_AFS_SYSCALL", + "SYS_AIO_CANCEL", + "SYS_AIO_ERROR", + "SYS_AIO_FSYNC", + "SYS_AIO_READ", + "SYS_AIO_RETURN", + "SYS_AIO_SUSPEND", + "SYS_AIO_SUSPEND_NOCANCEL", + "SYS_AIO_WRITE", + "SYS_ALARM", + "SYS_ARCH_PRCTL", + "SYS_ARM_FADVISE64_64", + "SYS_ARM_SYNC_FILE_RANGE", + "SYS_ATGETMSG", + "SYS_ATPGETREQ", + "SYS_ATPGETRSP", + "SYS_ATPSNDREQ", + "SYS_ATPSNDRSP", + "SYS_ATPUTMSG", + "SYS_ATSOCKET", + "SYS_AUDIT", + "SYS_AUDITCTL", + "SYS_AUDITON", + "SYS_AUDIT_SESSION_JOIN", + "SYS_AUDIT_SESSION_PORT", + "SYS_AUDIT_SESSION_SELF", + "SYS_BDFLUSH", + "SYS_BIND", + "SYS_BINDAT", + "SYS_BREAK", + "SYS_BRK", + "SYS_BSDTHREAD_CREATE", + "SYS_BSDTHREAD_REGISTER", + "SYS_BSDTHREAD_TERMINATE", + "SYS_CAPGET", + "SYS_CAPSET", + "SYS_CAP_ENTER", + "SYS_CAP_FCNTLS_GET", + "SYS_CAP_FCNTLS_LIMIT", + "SYS_CAP_GETMODE", + "SYS_CAP_GETRIGHTS", + "SYS_CAP_IOCTLS_GET", + "SYS_CAP_IOCTLS_LIMIT", + "SYS_CAP_NEW", + "SYS_CAP_RIGHTS_GET", + "SYS_CAP_RIGHTS_LIMIT", + "SYS_CHDIR", + "SYS_CHFLAGS", + "SYS_CHFLAGSAT", + "SYS_CHMOD", + "SYS_CHMOD_EXTENDED", + "SYS_CHOWN", + "SYS_CHOWN32", + "SYS_CHROOT", + "SYS_CHUD", + "SYS_CLOCK_ADJTIME", + "SYS_CLOCK_GETCPUCLOCKID2", + "SYS_CLOCK_GETRES", + "SYS_CLOCK_GETTIME", + "SYS_CLOCK_NANOSLEEP", + "SYS_CLOCK_SETTIME", + "SYS_CLONE", + "SYS_CLOSE", + "SYS_CLOSEFROM", + "SYS_CLOSE_NOCANCEL", + "SYS_CONNECT", + "SYS_CONNECTAT", + "SYS_CONNECT_NOCANCEL", + "SYS_COPYFILE", + "SYS_CPUSET", + "SYS_CPUSET_GETAFFINITY", + "SYS_CPUSET_GETID", + "SYS_CPUSET_SETAFFINITY", + "SYS_CPUSET_SETID", + "SYS_CREAT", + "SYS_CREATE_MODULE", + "SYS_CSOPS", + "SYS_DELETE", + "SYS_DELETE_MODULE", + "SYS_DUP", + "SYS_DUP2", + "SYS_DUP3", + "SYS_EACCESS", + "SYS_EPOLL_CREATE", + "SYS_EPOLL_CREATE1", + "SYS_EPOLL_CTL", + "SYS_EPOLL_CTL_OLD", + "SYS_EPOLL_PWAIT", + "SYS_EPOLL_WAIT", + "SYS_EPOLL_WAIT_OLD", + "SYS_EVENTFD", + "SYS_EVENTFD2", + "SYS_EXCHANGEDATA", + "SYS_EXECVE", + "SYS_EXIT", + "SYS_EXIT_GROUP", + "SYS_EXTATTRCTL", + "SYS_EXTATTR_DELETE_FD", + "SYS_EXTATTR_DELETE_FILE", + "SYS_EXTATTR_DELETE_LINK", + "SYS_EXTATTR_GET_FD", + "SYS_EXTATTR_GET_FILE", + "SYS_EXTATTR_GET_LINK", + "SYS_EXTATTR_LIST_FD", + "SYS_EXTATTR_LIST_FILE", + "SYS_EXTATTR_LIST_LINK", + "SYS_EXTATTR_SET_FD", + "SYS_EXTATTR_SET_FILE", + "SYS_EXTATTR_SET_LINK", + "SYS_FACCESSAT", + "SYS_FADVISE64", + "SYS_FADVISE64_64", + "SYS_FALLOCATE", + "SYS_FANOTIFY_INIT", + "SYS_FANOTIFY_MARK", + "SYS_FCHDIR", + "SYS_FCHFLAGS", + "SYS_FCHMOD", + "SYS_FCHMODAT", + "SYS_FCHMOD_EXTENDED", + "SYS_FCHOWN", + "SYS_FCHOWN32", + "SYS_FCHOWNAT", + "SYS_FCHROOT", + "SYS_FCNTL", + "SYS_FCNTL64", + "SYS_FCNTL_NOCANCEL", + "SYS_FDATASYNC", + "SYS_FEXECVE", + "SYS_FFCLOCK_GETCOUNTER", + "SYS_FFCLOCK_GETESTIMATE", + "SYS_FFCLOCK_SETESTIMATE", + "SYS_FFSCTL", + "SYS_FGETATTRLIST", + "SYS_FGETXATTR", + "SYS_FHOPEN", + "SYS_FHSTAT", + "SYS_FHSTATFS", + "SYS_FILEPORT_MAKEFD", + "SYS_FILEPORT_MAKEPORT", + "SYS_FKTRACE", + "SYS_FLISTXATTR", + "SYS_FLOCK", + "SYS_FORK", + "SYS_FPATHCONF", + "SYS_FREEBSD6_FTRUNCATE", + "SYS_FREEBSD6_LSEEK", + "SYS_FREEBSD6_MMAP", + "SYS_FREEBSD6_PREAD", + "SYS_FREEBSD6_PWRITE", + "SYS_FREEBSD6_TRUNCATE", + "SYS_FREMOVEXATTR", + "SYS_FSCTL", + "SYS_FSETATTRLIST", + "SYS_FSETXATTR", + "SYS_FSGETPATH", + "SYS_FSTAT", + "SYS_FSTAT64", + "SYS_FSTAT64_EXTENDED", + "SYS_FSTATAT", + "SYS_FSTATAT64", + "SYS_FSTATFS", + "SYS_FSTATFS64", + "SYS_FSTATV", + "SYS_FSTATVFS1", + "SYS_FSTAT_EXTENDED", + "SYS_FSYNC", + "SYS_FSYNC_NOCANCEL", + "SYS_FSYNC_RANGE", + "SYS_FTIME", + "SYS_FTRUNCATE", + "SYS_FTRUNCATE64", + "SYS_FUTEX", + "SYS_FUTIMENS", + "SYS_FUTIMES", + "SYS_FUTIMESAT", + "SYS_GETATTRLIST", + "SYS_GETAUDIT", + "SYS_GETAUDIT_ADDR", + "SYS_GETAUID", + "SYS_GETCONTEXT", + "SYS_GETCPU", + "SYS_GETCWD", + "SYS_GETDENTS", + "SYS_GETDENTS64", + "SYS_GETDIRENTRIES", + "SYS_GETDIRENTRIES64", + "SYS_GETDIRENTRIESATTR", + "SYS_GETDTABLECOUNT", + "SYS_GETDTABLESIZE", + "SYS_GETEGID", + "SYS_GETEGID32", + "SYS_GETEUID", + "SYS_GETEUID32", + "SYS_GETFH", + "SYS_GETFSSTAT", + "SYS_GETFSSTAT64", + "SYS_GETGID", + "SYS_GETGID32", + "SYS_GETGROUPS", + "SYS_GETGROUPS32", + "SYS_GETHOSTUUID", + "SYS_GETITIMER", + "SYS_GETLCID", + "SYS_GETLOGIN", + "SYS_GETLOGINCLASS", + "SYS_GETPEERNAME", + "SYS_GETPGID", + "SYS_GETPGRP", + "SYS_GETPID", + "SYS_GETPMSG", + "SYS_GETPPID", + "SYS_GETPRIORITY", + "SYS_GETRESGID", + "SYS_GETRESGID32", + "SYS_GETRESUID", + "SYS_GETRESUID32", + "SYS_GETRLIMIT", + "SYS_GETRTABLE", + "SYS_GETRUSAGE", + "SYS_GETSGROUPS", + "SYS_GETSID", + "SYS_GETSOCKNAME", + "SYS_GETSOCKOPT", + "SYS_GETTHRID", + "SYS_GETTID", + "SYS_GETTIMEOFDAY", + "SYS_GETUID", + "SYS_GETUID32", + "SYS_GETVFSSTAT", + "SYS_GETWGROUPS", + "SYS_GETXATTR", + "SYS_GET_KERNEL_SYMS", + "SYS_GET_MEMPOLICY", + "SYS_GET_ROBUST_LIST", + "SYS_GET_THREAD_AREA", + "SYS_GTTY", + "SYS_IDENTITYSVC", + "SYS_IDLE", + "SYS_INITGROUPS", + "SYS_INIT_MODULE", + "SYS_INOTIFY_ADD_WATCH", + "SYS_INOTIFY_INIT", + "SYS_INOTIFY_INIT1", + "SYS_INOTIFY_RM_WATCH", + "SYS_IOCTL", + "SYS_IOPERM", + "SYS_IOPL", + "SYS_IOPOLICYSYS", + "SYS_IOPRIO_GET", + "SYS_IOPRIO_SET", + "SYS_IO_CANCEL", + "SYS_IO_DESTROY", + "SYS_IO_GETEVENTS", + "SYS_IO_SETUP", + "SYS_IO_SUBMIT", + "SYS_IPC", + "SYS_ISSETUGID", + "SYS_JAIL", + "SYS_JAIL_ATTACH", + "SYS_JAIL_GET", + "SYS_JAIL_REMOVE", + "SYS_JAIL_SET", + "SYS_KDEBUG_TRACE", + "SYS_KENV", + "SYS_KEVENT", + "SYS_KEVENT64", + "SYS_KEXEC_LOAD", + "SYS_KEYCTL", + "SYS_KILL", + "SYS_KLDFIND", + "SYS_KLDFIRSTMOD", + "SYS_KLDLOAD", + "SYS_KLDNEXT", + "SYS_KLDSTAT", + "SYS_KLDSYM", + "SYS_KLDUNLOAD", + "SYS_KLDUNLOADF", + "SYS_KQUEUE", + "SYS_KQUEUE1", + "SYS_KTIMER_CREATE", + "SYS_KTIMER_DELETE", + "SYS_KTIMER_GETOVERRUN", + "SYS_KTIMER_GETTIME", + "SYS_KTIMER_SETTIME", + "SYS_KTRACE", + "SYS_LCHFLAGS", + "SYS_LCHMOD", + "SYS_LCHOWN", + "SYS_LCHOWN32", + "SYS_LGETFH", + "SYS_LGETXATTR", + "SYS_LINK", + "SYS_LINKAT", + "SYS_LIO_LISTIO", + "SYS_LISTEN", + "SYS_LISTXATTR", + "SYS_LLISTXATTR", + "SYS_LOCK", + "SYS_LOOKUP_DCOOKIE", + "SYS_LPATHCONF", + "SYS_LREMOVEXATTR", + "SYS_LSEEK", + "SYS_LSETXATTR", + "SYS_LSTAT", + "SYS_LSTAT64", + "SYS_LSTAT64_EXTENDED", + "SYS_LSTATV", + "SYS_LSTAT_EXTENDED", + "SYS_LUTIMES", + "SYS_MAC_SYSCALL", + "SYS_MADVISE", + "SYS_MADVISE1", + "SYS_MAXSYSCALL", + "SYS_MBIND", + "SYS_MIGRATE_PAGES", + "SYS_MINCORE", + "SYS_MINHERIT", + "SYS_MKCOMPLEX", + "SYS_MKDIR", + "SYS_MKDIRAT", + "SYS_MKDIR_EXTENDED", + "SYS_MKFIFO", + "SYS_MKFIFOAT", + "SYS_MKFIFO_EXTENDED", + "SYS_MKNOD", + "SYS_MKNODAT", + "SYS_MLOCK", + "SYS_MLOCKALL", + "SYS_MMAP", + "SYS_MMAP2", + "SYS_MODCTL", + "SYS_MODFIND", + "SYS_MODFNEXT", + "SYS_MODIFY_LDT", + "SYS_MODNEXT", + "SYS_MODSTAT", + "SYS_MODWATCH", + "SYS_MOUNT", + "SYS_MOVE_PAGES", + "SYS_MPROTECT", + "SYS_MPX", + "SYS_MQUERY", + "SYS_MQ_GETSETATTR", + "SYS_MQ_NOTIFY", + "SYS_MQ_OPEN", + "SYS_MQ_TIMEDRECEIVE", + "SYS_MQ_TIMEDSEND", + "SYS_MQ_UNLINK", + "SYS_MREMAP", + "SYS_MSGCTL", + "SYS_MSGGET", + "SYS_MSGRCV", + "SYS_MSGRCV_NOCANCEL", + "SYS_MSGSND", + "SYS_MSGSND_NOCANCEL", + "SYS_MSGSYS", + "SYS_MSYNC", + "SYS_MSYNC_NOCANCEL", + "SYS_MUNLOCK", + "SYS_MUNLOCKALL", + "SYS_MUNMAP", + "SYS_NAME_TO_HANDLE_AT", + "SYS_NANOSLEEP", + "SYS_NEWFSTATAT", + "SYS_NFSCLNT", + "SYS_NFSSERVCTL", + "SYS_NFSSVC", + "SYS_NFSTAT", + "SYS_NICE", + "SYS_NLSTAT", + "SYS_NMOUNT", + "SYS_NSTAT", + "SYS_NTP_ADJTIME", + "SYS_NTP_GETTIME", + "SYS_OABI_SYSCALL_BASE", + "SYS_OBREAK", + "SYS_OLDFSTAT", + "SYS_OLDLSTAT", + "SYS_OLDOLDUNAME", + "SYS_OLDSTAT", + "SYS_OLDUNAME", + "SYS_OPEN", + "SYS_OPENAT", + "SYS_OPENBSD_POLL", + "SYS_OPEN_BY_HANDLE_AT", + "SYS_OPEN_EXTENDED", + "SYS_OPEN_NOCANCEL", + "SYS_OVADVISE", + "SYS_PACCEPT", + "SYS_PATHCONF", + "SYS_PAUSE", + "SYS_PCICONFIG_IOBASE", + "SYS_PCICONFIG_READ", + "SYS_PCICONFIG_WRITE", + "SYS_PDFORK", + "SYS_PDGETPID", + "SYS_PDKILL", + "SYS_PERF_EVENT_OPEN", + "SYS_PERSONALITY", + "SYS_PID_HIBERNATE", + "SYS_PID_RESUME", + "SYS_PID_SHUTDOWN_SOCKETS", + "SYS_PID_SUSPEND", + "SYS_PIPE", + "SYS_PIPE2", + "SYS_PIVOT_ROOT", + "SYS_PMC_CONTROL", + "SYS_PMC_GET_INFO", + "SYS_POLL", + "SYS_POLLTS", + "SYS_POLL_NOCANCEL", + "SYS_POSIX_FADVISE", + "SYS_POSIX_FALLOCATE", + "SYS_POSIX_OPENPT", + "SYS_POSIX_SPAWN", + "SYS_PPOLL", + "SYS_PRCTL", + "SYS_PREAD", + "SYS_PREAD64", + "SYS_PREADV", + "SYS_PREAD_NOCANCEL", + "SYS_PRLIMIT64", + "SYS_PROCCTL", + "SYS_PROCESS_POLICY", + "SYS_PROCESS_VM_READV", + "SYS_PROCESS_VM_WRITEV", + "SYS_PROC_INFO", + "SYS_PROF", + "SYS_PROFIL", + "SYS_PSELECT", + "SYS_PSELECT6", + "SYS_PSET_ASSIGN", + "SYS_PSET_CREATE", + "SYS_PSET_DESTROY", + "SYS_PSYNCH_CVBROAD", + "SYS_PSYNCH_CVCLRPREPOST", + "SYS_PSYNCH_CVSIGNAL", + "SYS_PSYNCH_CVWAIT", + "SYS_PSYNCH_MUTEXDROP", + "SYS_PSYNCH_MUTEXWAIT", + "SYS_PSYNCH_RW_DOWNGRADE", + "SYS_PSYNCH_RW_LONGRDLOCK", + "SYS_PSYNCH_RW_RDLOCK", + "SYS_PSYNCH_RW_UNLOCK", + "SYS_PSYNCH_RW_UNLOCK2", + "SYS_PSYNCH_RW_UPGRADE", + "SYS_PSYNCH_RW_WRLOCK", + "SYS_PSYNCH_RW_YIELDWRLOCK", + "SYS_PTRACE", + "SYS_PUTPMSG", + "SYS_PWRITE", + "SYS_PWRITE64", + "SYS_PWRITEV", + "SYS_PWRITE_NOCANCEL", + "SYS_QUERY_MODULE", + "SYS_QUOTACTL", + "SYS_RASCTL", + "SYS_RCTL_ADD_RULE", + "SYS_RCTL_GET_LIMITS", + "SYS_RCTL_GET_RACCT", + "SYS_RCTL_GET_RULES", + "SYS_RCTL_REMOVE_RULE", + "SYS_READ", + "SYS_READAHEAD", + "SYS_READDIR", + "SYS_READLINK", + "SYS_READLINKAT", + "SYS_READV", + "SYS_READV_NOCANCEL", + "SYS_READ_NOCANCEL", + "SYS_REBOOT", + "SYS_RECV", + "SYS_RECVFROM", + "SYS_RECVFROM_NOCANCEL", + "SYS_RECVMMSG", + "SYS_RECVMSG", + "SYS_RECVMSG_NOCANCEL", + "SYS_REMAP_FILE_PAGES", + "SYS_REMOVEXATTR", + "SYS_RENAME", + "SYS_RENAMEAT", + "SYS_REQUEST_KEY", + "SYS_RESTART_SYSCALL", + "SYS_REVOKE", + "SYS_RFORK", + "SYS_RMDIR", + "SYS_RTPRIO", + "SYS_RTPRIO_THREAD", + "SYS_RT_SIGACTION", + "SYS_RT_SIGPENDING", + "SYS_RT_SIGPROCMASK", + "SYS_RT_SIGQUEUEINFO", + "SYS_RT_SIGRETURN", + "SYS_RT_SIGSUSPEND", + "SYS_RT_SIGTIMEDWAIT", + "SYS_RT_TGSIGQUEUEINFO", + "SYS_SBRK", + "SYS_SCHED_GETAFFINITY", + "SYS_SCHED_GETPARAM", + "SYS_SCHED_GETSCHEDULER", + "SYS_SCHED_GET_PRIORITY_MAX", + "SYS_SCHED_GET_PRIORITY_MIN", + "SYS_SCHED_RR_GET_INTERVAL", + "SYS_SCHED_SETAFFINITY", + "SYS_SCHED_SETPARAM", + "SYS_SCHED_SETSCHEDULER", + "SYS_SCHED_YIELD", + "SYS_SCTP_GENERIC_RECVMSG", + "SYS_SCTP_GENERIC_SENDMSG", + "SYS_SCTP_GENERIC_SENDMSG_IOV", + "SYS_SCTP_PEELOFF", + "SYS_SEARCHFS", + "SYS_SECURITY", + "SYS_SELECT", + "SYS_SELECT_NOCANCEL", + "SYS_SEMCONFIG", + "SYS_SEMCTL", + "SYS_SEMGET", + "SYS_SEMOP", + "SYS_SEMSYS", + "SYS_SEMTIMEDOP", + "SYS_SEM_CLOSE", + "SYS_SEM_DESTROY", + "SYS_SEM_GETVALUE", + "SYS_SEM_INIT", + "SYS_SEM_OPEN", + "SYS_SEM_POST", + "SYS_SEM_TRYWAIT", + "SYS_SEM_UNLINK", + "SYS_SEM_WAIT", + "SYS_SEM_WAIT_NOCANCEL", + "SYS_SEND", + "SYS_SENDFILE", + "SYS_SENDFILE64", + "SYS_SENDMMSG", + "SYS_SENDMSG", + "SYS_SENDMSG_NOCANCEL", + "SYS_SENDTO", + "SYS_SENDTO_NOCANCEL", + "SYS_SETATTRLIST", + "SYS_SETAUDIT", + "SYS_SETAUDIT_ADDR", + "SYS_SETAUID", + "SYS_SETCONTEXT", + "SYS_SETDOMAINNAME", + "SYS_SETEGID", + "SYS_SETEUID", + "SYS_SETFIB", + "SYS_SETFSGID", + "SYS_SETFSGID32", + "SYS_SETFSUID", + "SYS_SETFSUID32", + "SYS_SETGID", + "SYS_SETGID32", + "SYS_SETGROUPS", + "SYS_SETGROUPS32", + "SYS_SETHOSTNAME", + "SYS_SETITIMER", + "SYS_SETLCID", + "SYS_SETLOGIN", + "SYS_SETLOGINCLASS", + "SYS_SETNS", + "SYS_SETPGID", + "SYS_SETPRIORITY", + "SYS_SETPRIVEXEC", + "SYS_SETREGID", + "SYS_SETREGID32", + "SYS_SETRESGID", + "SYS_SETRESGID32", + "SYS_SETRESUID", + "SYS_SETRESUID32", + "SYS_SETREUID", + "SYS_SETREUID32", + "SYS_SETRLIMIT", + "SYS_SETRTABLE", + "SYS_SETSGROUPS", + "SYS_SETSID", + "SYS_SETSOCKOPT", + "SYS_SETTID", + "SYS_SETTID_WITH_PID", + "SYS_SETTIMEOFDAY", + "SYS_SETUID", + "SYS_SETUID32", + "SYS_SETWGROUPS", + "SYS_SETXATTR", + "SYS_SET_MEMPOLICY", + "SYS_SET_ROBUST_LIST", + "SYS_SET_THREAD_AREA", + "SYS_SET_TID_ADDRESS", + "SYS_SGETMASK", + "SYS_SHARED_REGION_CHECK_NP", + "SYS_SHARED_REGION_MAP_AND_SLIDE_NP", + "SYS_SHMAT", + "SYS_SHMCTL", + "SYS_SHMDT", + "SYS_SHMGET", + "SYS_SHMSYS", + "SYS_SHM_OPEN", + "SYS_SHM_UNLINK", + "SYS_SHUTDOWN", + "SYS_SIGACTION", + "SYS_SIGALTSTACK", + "SYS_SIGNAL", + "SYS_SIGNALFD", + "SYS_SIGNALFD4", + "SYS_SIGPENDING", + "SYS_SIGPROCMASK", + "SYS_SIGQUEUE", + "SYS_SIGQUEUEINFO", + "SYS_SIGRETURN", + "SYS_SIGSUSPEND", + "SYS_SIGSUSPEND_NOCANCEL", + "SYS_SIGTIMEDWAIT", + "SYS_SIGWAIT", + "SYS_SIGWAITINFO", + "SYS_SOCKET", + "SYS_SOCKETCALL", + "SYS_SOCKETPAIR", + "SYS_SPLICE", + "SYS_SSETMASK", + "SYS_SSTK", + "SYS_STACK_SNAPSHOT", + "SYS_STAT", + "SYS_STAT64", + "SYS_STAT64_EXTENDED", + "SYS_STATFS", + "SYS_STATFS64", + "SYS_STATV", + "SYS_STATVFS1", + "SYS_STAT_EXTENDED", + "SYS_STIME", + "SYS_STTY", + "SYS_SWAPCONTEXT", + "SYS_SWAPCTL", + "SYS_SWAPOFF", + "SYS_SWAPON", + "SYS_SYMLINK", + "SYS_SYMLINKAT", + "SYS_SYNC", + "SYS_SYNCFS", + "SYS_SYNC_FILE_RANGE", + "SYS_SYSARCH", + "SYS_SYSCALL", + "SYS_SYSCALL_BASE", + "SYS_SYSFS", + "SYS_SYSINFO", + "SYS_SYSLOG", + "SYS_TEE", + "SYS_TGKILL", + "SYS_THREAD_SELFID", + "SYS_THR_CREATE", + "SYS_THR_EXIT", + "SYS_THR_KILL", + "SYS_THR_KILL2", + "SYS_THR_NEW", + "SYS_THR_SELF", + "SYS_THR_SET_NAME", + "SYS_THR_SUSPEND", + "SYS_THR_WAKE", + "SYS_TIME", + "SYS_TIMERFD_CREATE", + "SYS_TIMERFD_GETTIME", + "SYS_TIMERFD_SETTIME", + "SYS_TIMER_CREATE", + "SYS_TIMER_DELETE", + "SYS_TIMER_GETOVERRUN", + "SYS_TIMER_GETTIME", + "SYS_TIMER_SETTIME", + "SYS_TIMES", + "SYS_TKILL", + "SYS_TRUNCATE", + "SYS_TRUNCATE64", + "SYS_TUXCALL", + "SYS_UGETRLIMIT", + "SYS_ULIMIT", + "SYS_UMASK", + "SYS_UMASK_EXTENDED", + "SYS_UMOUNT", + "SYS_UMOUNT2", + "SYS_UNAME", + "SYS_UNDELETE", + "SYS_UNLINK", + "SYS_UNLINKAT", + "SYS_UNMOUNT", + "SYS_UNSHARE", + "SYS_USELIB", + "SYS_USTAT", + "SYS_UTIME", + "SYS_UTIMENSAT", + "SYS_UTIMES", + "SYS_UTRACE", + "SYS_UUIDGEN", + "SYS_VADVISE", + "SYS_VFORK", + "SYS_VHANGUP", + "SYS_VM86", + "SYS_VM86OLD", + "SYS_VMSPLICE", + "SYS_VM_PRESSURE_MONITOR", + "SYS_VSERVER", + "SYS_WAIT4", + "SYS_WAIT4_NOCANCEL", + "SYS_WAIT6", + "SYS_WAITEVENT", + "SYS_WAITID", + "SYS_WAITID_NOCANCEL", + "SYS_WAITPID", + "SYS_WATCHEVENT", + "SYS_WORKQ_KERNRETURN", + "SYS_WORKQ_OPEN", + "SYS_WRITE", + "SYS_WRITEV", + "SYS_WRITEV_NOCANCEL", + "SYS_WRITE_NOCANCEL", + "SYS_YIELD", + "SYS__LLSEEK", + "SYS__LWP_CONTINUE", + "SYS__LWP_CREATE", + "SYS__LWP_CTL", + "SYS__LWP_DETACH", + "SYS__LWP_EXIT", + "SYS__LWP_GETNAME", + "SYS__LWP_GETPRIVATE", + "SYS__LWP_KILL", + "SYS__LWP_PARK", + "SYS__LWP_SELF", + "SYS__LWP_SETNAME", + "SYS__LWP_SETPRIVATE", + "SYS__LWP_SUSPEND", + "SYS__LWP_UNPARK", + "SYS__LWP_UNPARK_ALL", + "SYS__LWP_WAIT", + "SYS__LWP_WAKEUP", + "SYS__NEWSELECT", + "SYS__PSET_BIND", + "SYS__SCHED_GETAFFINITY", + "SYS__SCHED_GETPARAM", + "SYS__SCHED_SETAFFINITY", + "SYS__SCHED_SETPARAM", + "SYS__SYSCTL", + "SYS__UMTX_LOCK", + "SYS__UMTX_OP", + "SYS__UMTX_UNLOCK", + "SYS___ACL_ACLCHECK_FD", + "SYS___ACL_ACLCHECK_FILE", + "SYS___ACL_ACLCHECK_LINK", + "SYS___ACL_DELETE_FD", + "SYS___ACL_DELETE_FILE", + "SYS___ACL_DELETE_LINK", + "SYS___ACL_GET_FD", + "SYS___ACL_GET_FILE", + "SYS___ACL_GET_LINK", + "SYS___ACL_SET_FD", + "SYS___ACL_SET_FILE", + "SYS___ACL_SET_LINK", + "SYS___CLONE", + "SYS___DISABLE_THREADSIGNAL", + "SYS___GETCWD", + "SYS___GETLOGIN", + "SYS___GET_TCB", + "SYS___MAC_EXECVE", + "SYS___MAC_GETFSSTAT", + "SYS___MAC_GET_FD", + "SYS___MAC_GET_FILE", + "SYS___MAC_GET_LCID", + "SYS___MAC_GET_LCTX", + "SYS___MAC_GET_LINK", + "SYS___MAC_GET_MOUNT", + "SYS___MAC_GET_PID", + "SYS___MAC_GET_PROC", + "SYS___MAC_MOUNT", + "SYS___MAC_SET_FD", + "SYS___MAC_SET_FILE", + "SYS___MAC_SET_LCTX", + "SYS___MAC_SET_LINK", + "SYS___MAC_SET_PROC", + "SYS___MAC_SYSCALL", + "SYS___OLD_SEMWAIT_SIGNAL", + "SYS___OLD_SEMWAIT_SIGNAL_NOCANCEL", + "SYS___POSIX_CHOWN", + "SYS___POSIX_FCHOWN", + "SYS___POSIX_LCHOWN", + "SYS___POSIX_RENAME", + "SYS___PTHREAD_CANCELED", + "SYS___PTHREAD_CHDIR", + "SYS___PTHREAD_FCHDIR", + "SYS___PTHREAD_KILL", + "SYS___PTHREAD_MARKCANCEL", + "SYS___PTHREAD_SIGMASK", + "SYS___QUOTACTL", + "SYS___SEMCTL", + "SYS___SEMWAIT_SIGNAL", + "SYS___SEMWAIT_SIGNAL_NOCANCEL", + "SYS___SETLOGIN", + "SYS___SETUGID", + "SYS___SET_TCB", + "SYS___SIGACTION_SIGTRAMP", + "SYS___SIGTIMEDWAIT", + "SYS___SIGWAIT", + "SYS___SIGWAIT_NOCANCEL", + "SYS___SYSCTL", + "SYS___TFORK", + "SYS___THREXIT", + "SYS___THRSIGDIVERT", + "SYS___THRSLEEP", + "SYS___THRWAKEUP", + "S_ARCH1", + "S_ARCH2", + "S_BLKSIZE", + "S_IEXEC", + "S_IFBLK", + "S_IFCHR", + "S_IFDIR", + "S_IFIFO", + "S_IFLNK", + "S_IFMT", + "S_IFREG", + "S_IFSOCK", + "S_IFWHT", + "S_IREAD", + "S_IRGRP", + "S_IROTH", + "S_IRUSR", + "S_IRWXG", + "S_IRWXO", + "S_IRWXU", + "S_ISGID", + "S_ISTXT", + "S_ISUID", + "S_ISVTX", + "S_IWGRP", + "S_IWOTH", + "S_IWRITE", + "S_IWUSR", + "S_IXGRP", + "S_IXOTH", + "S_IXUSR", + "S_LOGIN_SET", + "SecurityAttributes", + "Seek", + "Select", + "Sendfile", + "Sendmsg", + "SendmsgN", + "Sendto", + "Servent", + "SetBpf", + "SetBpfBuflen", + "SetBpfDatalink", + "SetBpfHeadercmpl", + "SetBpfImmediate", + "SetBpfInterface", + "SetBpfPromisc", + "SetBpfTimeout", + "SetCurrentDirectory", + "SetEndOfFile", + "SetEnvironmentVariable", + "SetFileAttributes", + "SetFileCompletionNotificationModes", + "SetFilePointer", + "SetFileTime", + "SetHandleInformation", + "SetKevent", + "SetLsfPromisc", + "SetNonblock", + "Setdomainname", + "Setegid", + "Setenv", + "Seteuid", + "Setfsgid", + "Setfsuid", + "Setgid", + "Setgroups", + "Sethostname", + "Setlogin", + "Setpgid", + "Setpriority", + "Setprivexec", + "Setregid", + "Setresgid", + "Setresuid", + "Setreuid", + "Setrlimit", + "Setsid", + "Setsockopt", + "SetsockoptByte", + "SetsockoptICMPv6Filter", + "SetsockoptIPMreq", + "SetsockoptIPMreqn", + "SetsockoptIPv6Mreq", + "SetsockoptInet4Addr", + "SetsockoptInt", + "SetsockoptLinger", + "SetsockoptString", + "SetsockoptTimeval", + "Settimeofday", + "Setuid", + "Setxattr", + "Shutdown", + "SidTypeAlias", + "SidTypeComputer", + "SidTypeDeletedAccount", + "SidTypeDomain", + "SidTypeGroup", + "SidTypeInvalid", + "SidTypeLabel", + "SidTypeUnknown", + "SidTypeUser", + "SidTypeWellKnownGroup", + "Signal", + "SizeofBpfHdr", + "SizeofBpfInsn", + "SizeofBpfProgram", + "SizeofBpfStat", + "SizeofBpfVersion", + "SizeofBpfZbuf", + "SizeofBpfZbufHeader", + "SizeofCmsghdr", + "SizeofICMPv6Filter", + "SizeofIPMreq", + "SizeofIPMreqn", + "SizeofIPv6MTUInfo", + "SizeofIPv6Mreq", + "SizeofIfAddrmsg", + "SizeofIfAnnounceMsghdr", + "SizeofIfData", + "SizeofIfInfomsg", + "SizeofIfMsghdr", + "SizeofIfaMsghdr", + "SizeofIfmaMsghdr", + "SizeofIfmaMsghdr2", + "SizeofInet4Pktinfo", + "SizeofInet6Pktinfo", + "SizeofInotifyEvent", + "SizeofLinger", + "SizeofMsghdr", + "SizeofNlAttr", + "SizeofNlMsgerr", + "SizeofNlMsghdr", + "SizeofRtAttr", + "SizeofRtGenmsg", + "SizeofRtMetrics", + "SizeofRtMsg", + "SizeofRtMsghdr", + "SizeofRtNexthop", + "SizeofSockFilter", + "SizeofSockFprog", + "SizeofSockaddrAny", + "SizeofSockaddrDatalink", + "SizeofSockaddrInet4", + "SizeofSockaddrInet6", + "SizeofSockaddrLinklayer", + "SizeofSockaddrNetlink", + "SizeofSockaddrUnix", + "SizeofTCPInfo", + "SizeofUcred", + "SlicePtrFromStrings", + "SockFilter", + "SockFprog", + "Sockaddr", + "SockaddrDatalink", + "SockaddrGen", + "SockaddrInet4", + "SockaddrInet6", + "SockaddrLinklayer", + "SockaddrNetlink", + "SockaddrUnix", + "Socket", + "SocketControlMessage", + "SocketDisableIPv6", + "Socketpair", + "Splice", + "StartProcess", + "StartupInfo", + "Stat", + "Stat_t", + "Statfs", + "Statfs_t", + "Stderr", + "Stdin", + "Stdout", + "StringBytePtr", + "StringByteSlice", + "StringSlicePtr", + "StringToSid", + "StringToUTF16", + "StringToUTF16Ptr", + "Symlink", + "Sync", + "SyncFileRange", + "SysProcAttr", + "SysProcIDMap", + "Syscall", + "Syscall12", + "Syscall15", + "Syscall18", + "Syscall6", + "Syscall9", + "Sysctl", + "SysctlUint32", + "Sysctlnode", + "Sysinfo", + "Sysinfo_t", + "Systemtime", + "TCGETS", + "TCIFLUSH", + "TCIOFLUSH", + "TCOFLUSH", + "TCPInfo", + "TCPKeepalive", + "TCP_CA_NAME_MAX", + "TCP_CONGCTL", + "TCP_CONGESTION", + "TCP_CONNECTIONTIMEOUT", + "TCP_CORK", + "TCP_DEFER_ACCEPT", + "TCP_INFO", + "TCP_KEEPALIVE", + "TCP_KEEPCNT", + "TCP_KEEPIDLE", + "TCP_KEEPINIT", + "TCP_KEEPINTVL", + "TCP_LINGER2", + "TCP_MAXBURST", + "TCP_MAXHLEN", + "TCP_MAXOLEN", + "TCP_MAXSEG", + "TCP_MAXWIN", + "TCP_MAX_SACK", + "TCP_MAX_WINSHIFT", + "TCP_MD5SIG", + "TCP_MD5SIG_MAXKEYLEN", + "TCP_MINMSS", + "TCP_MINMSSOVERLOAD", + "TCP_MSS", + "TCP_NODELAY", + "TCP_NOOPT", + "TCP_NOPUSH", + "TCP_NSTATES", + "TCP_QUICKACK", + "TCP_RXT_CONNDROPTIME", + "TCP_RXT_FINDROP", + "TCP_SACK_ENABLE", + "TCP_SYNCNT", + "TCP_VENDOR", + "TCP_WINDOW_CLAMP", + "TCSAFLUSH", + "TCSETS", + "TF_DISCONNECT", + "TF_REUSE_SOCKET", + "TF_USE_DEFAULT_WORKER", + "TF_USE_KERNEL_APC", + "TF_USE_SYSTEM_THREAD", + "TF_WRITE_BEHIND", + "TH32CS_INHERIT", + "TH32CS_SNAPALL", + "TH32CS_SNAPHEAPLIST", + "TH32CS_SNAPMODULE", + "TH32CS_SNAPMODULE32", + "TH32CS_SNAPPROCESS", + "TH32CS_SNAPTHREAD", + "TIME_ZONE_ID_DAYLIGHT", + "TIME_ZONE_ID_STANDARD", + "TIME_ZONE_ID_UNKNOWN", + "TIOCCBRK", + "TIOCCDTR", + "TIOCCONS", + "TIOCDCDTIMESTAMP", + "TIOCDRAIN", + "TIOCDSIMICROCODE", + "TIOCEXCL", + "TIOCEXT", + "TIOCFLAG_CDTRCTS", + "TIOCFLAG_CLOCAL", + "TIOCFLAG_CRTSCTS", + "TIOCFLAG_MDMBUF", + "TIOCFLAG_PPS", + "TIOCFLAG_SOFTCAR", + "TIOCFLUSH", + "TIOCGDEV", + "TIOCGDRAINWAIT", + "TIOCGETA", + "TIOCGETD", + "TIOCGFLAGS", + "TIOCGICOUNT", + "TIOCGLCKTRMIOS", + "TIOCGLINED", + "TIOCGPGRP", + "TIOCGPTN", + "TIOCGQSIZE", + "TIOCGRANTPT", + "TIOCGRS485", + "TIOCGSERIAL", + "TIOCGSID", + "TIOCGSIZE", + "TIOCGSOFTCAR", + "TIOCGTSTAMP", + "TIOCGWINSZ", + "TIOCINQ", + "TIOCIXOFF", + "TIOCIXON", + "TIOCLINUX", + "TIOCMBIC", + "TIOCMBIS", + "TIOCMGDTRWAIT", + "TIOCMGET", + "TIOCMIWAIT", + "TIOCMODG", + "TIOCMODS", + "TIOCMSDTRWAIT", + "TIOCMSET", + "TIOCM_CAR", + "TIOCM_CD", + "TIOCM_CTS", + "TIOCM_DCD", + "TIOCM_DSR", + "TIOCM_DTR", + "TIOCM_LE", + "TIOCM_RI", + "TIOCM_RNG", + "TIOCM_RTS", + "TIOCM_SR", + "TIOCM_ST", + "TIOCNOTTY", + "TIOCNXCL", + "TIOCOUTQ", + "TIOCPKT", + "TIOCPKT_DATA", + "TIOCPKT_DOSTOP", + "TIOCPKT_FLUSHREAD", + "TIOCPKT_FLUSHWRITE", + "TIOCPKT_IOCTL", + "TIOCPKT_NOSTOP", + "TIOCPKT_START", + "TIOCPKT_STOP", + "TIOCPTMASTER", + "TIOCPTMGET", + "TIOCPTSNAME", + "TIOCPTYGNAME", + "TIOCPTYGRANT", + "TIOCPTYUNLK", + "TIOCRCVFRAME", + "TIOCREMOTE", + "TIOCSBRK", + "TIOCSCONS", + "TIOCSCTTY", + "TIOCSDRAINWAIT", + "TIOCSDTR", + "TIOCSERCONFIG", + "TIOCSERGETLSR", + "TIOCSERGETMULTI", + "TIOCSERGSTRUCT", + "TIOCSERGWILD", + "TIOCSERSETMULTI", + "TIOCSERSWILD", + "TIOCSER_TEMT", + "TIOCSETA", + "TIOCSETAF", + "TIOCSETAW", + "TIOCSETD", + "TIOCSFLAGS", + "TIOCSIG", + "TIOCSLCKTRMIOS", + "TIOCSLINED", + "TIOCSPGRP", + "TIOCSPTLCK", + "TIOCSQSIZE", + "TIOCSRS485", + "TIOCSSERIAL", + "TIOCSSIZE", + "TIOCSSOFTCAR", + "TIOCSTART", + "TIOCSTAT", + "TIOCSTI", + "TIOCSTOP", + "TIOCSTSTAMP", + "TIOCSWINSZ", + "TIOCTIMESTAMP", + "TIOCUCNTL", + "TIOCVHANGUP", + "TIOCXMTFRAME", + "TOKEN_ADJUST_DEFAULT", + "TOKEN_ADJUST_GROUPS", + "TOKEN_ADJUST_PRIVILEGES", + "TOKEN_ADJUST_SESSIONID", + "TOKEN_ALL_ACCESS", + "TOKEN_ASSIGN_PRIMARY", + "TOKEN_DUPLICATE", + "TOKEN_EXECUTE", + "TOKEN_IMPERSONATE", + "TOKEN_QUERY", + "TOKEN_QUERY_SOURCE", + "TOKEN_READ", + "TOKEN_WRITE", + "TOSTOP", + "TRUNCATE_EXISTING", + "TUNATTACHFILTER", + "TUNDETACHFILTER", + "TUNGETFEATURES", + "TUNGETIFF", + "TUNGETSNDBUF", + "TUNGETVNETHDRSZ", + "TUNSETDEBUG", + "TUNSETGROUP", + "TUNSETIFF", + "TUNSETLINK", + "TUNSETNOCSUM", + "TUNSETOFFLOAD", + "TUNSETOWNER", + "TUNSETPERSIST", + "TUNSETSNDBUF", + "TUNSETTXFILTER", + "TUNSETVNETHDRSZ", + "Tee", + "TerminateProcess", + "Termios", + "Tgkill", + "Time", + "Time_t", + "Times", + "Timespec", + "TimespecToNsec", + "Timeval", + "Timeval32", + "TimevalToNsec", + "Timex", + "Timezoneinformation", + "Tms", + "Token", + "TokenAccessInformation", + "TokenAuditPolicy", + "TokenDefaultDacl", + "TokenElevation", + "TokenElevationType", + "TokenGroups", + "TokenGroupsAndPrivileges", + "TokenHasRestrictions", + "TokenImpersonationLevel", + "TokenIntegrityLevel", + "TokenLinkedToken", + "TokenLogonSid", + "TokenMandatoryPolicy", + "TokenOrigin", + "TokenOwner", + "TokenPrimaryGroup", + "TokenPrivileges", + "TokenRestrictedSids", + "TokenSandBoxInert", + "TokenSessionId", + "TokenSessionReference", + "TokenSource", + "TokenStatistics", + "TokenType", + "TokenUIAccess", + "TokenUser", + "TokenVirtualizationAllowed", + "TokenVirtualizationEnabled", + "Tokenprimarygroup", + "Tokenuser", + "TranslateAccountName", + "TranslateName", + "TransmitFile", + "TransmitFileBuffers", + "Truncate", + "UNIX_PATH_MAX", + "USAGE_MATCH_TYPE_AND", + "USAGE_MATCH_TYPE_OR", + "UTF16FromString", + "UTF16PtrFromString", + "UTF16ToString", + "Ucred", + "Umask", + "Uname", + "Undelete", + "UnixCredentials", + "UnixRights", + "Unlink", + "Unlinkat", + "UnmapViewOfFile", + "Unmount", + "Unsetenv", + "Unshare", + "UserInfo10", + "Ustat", + "Ustat_t", + "Utimbuf", + "Utime", + "Utimes", + "UtimesNano", + "Utsname", + "VDISCARD", + "VDSUSP", + "VEOF", + "VEOL", + "VEOL2", + "VERASE", + "VERASE2", + "VINTR", + "VKILL", + "VLNEXT", + "VMIN", + "VQUIT", + "VREPRINT", + "VSTART", + "VSTATUS", + "VSTOP", + "VSUSP", + "VSWTC", + "VT0", + "VT1", + "VTDLY", + "VTIME", + "VWERASE", + "VirtualLock", + "VirtualUnlock", + "WAIT_ABANDONED", + "WAIT_FAILED", + "WAIT_OBJECT_0", + "WAIT_TIMEOUT", + "WALL", + "WALLSIG", + "WALTSIG", + "WCLONE", + "WCONTINUED", + "WCOREFLAG", + "WEXITED", + "WLINUXCLONE", + "WNOHANG", + "WNOTHREAD", + "WNOWAIT", + "WNOZOMBIE", + "WOPTSCHECKED", + "WORDSIZE", + "WSABuf", + "WSACleanup", + "WSADESCRIPTION_LEN", + "WSAData", + "WSAEACCES", + "WSAECONNABORTED", + "WSAECONNRESET", + "WSAEnumProtocols", + "WSAID_CONNECTEX", + "WSAIoctl", + "WSAPROTOCOL_LEN", + "WSAProtocolChain", + "WSAProtocolInfo", + "WSARecv", + "WSARecvFrom", + "WSASYS_STATUS_LEN", + "WSASend", + "WSASendTo", + "WSASendto", + "WSAStartup", + "WSTOPPED", + "WTRAPPED", + "WUNTRACED", + "Wait4", + "WaitForSingleObject", + "WaitStatus", + "Win32FileAttributeData", + "Win32finddata", + "Write", + "WriteConsole", + "WriteFile", + "X509_ASN_ENCODING", + "XCASE", + "XP1_CONNECTIONLESS", + "XP1_CONNECT_DATA", + "XP1_DISCONNECT_DATA", + "XP1_EXPEDITED_DATA", + "XP1_GRACEFUL_CLOSE", + "XP1_GUARANTEED_DELIVERY", + "XP1_GUARANTEED_ORDER", + "XP1_IFS_HANDLES", + "XP1_MESSAGE_ORIENTED", + "XP1_MULTIPOINT_CONTROL_PLANE", + "XP1_MULTIPOINT_DATA_PLANE", + "XP1_PARTIAL_MESSAGE", + "XP1_PSEUDO_STREAM", + "XP1_QOS_SUPPORTED", + "XP1_SAN_SUPPORT_SDP", + "XP1_SUPPORT_BROADCAST", + "XP1_SUPPORT_MULTIPOINT", + "XP1_UNI_RECV", + "XP1_UNI_SEND", + }, + "syscall/js": []string{ + "CopyBytesToGo", + "CopyBytesToJS", + "Error", + "Func", + "FuncOf", + "Global", + "Null", + "Type", + "TypeBoolean", + "TypeFunction", + "TypeNull", + "TypeNumber", + "TypeObject", + "TypeString", + "TypeSymbol", + "TypeUndefined", + "Undefined", + "Value", + "ValueError", + "ValueOf", + "Wrapper", + }, + "testing": []string{ + "AllocsPerRun", + "B", + "Benchmark", + "BenchmarkResult", + "Cover", + "CoverBlock", + "CoverMode", + "Coverage", + "Init", + "InternalBenchmark", + "InternalExample", + "InternalTest", + "M", + "Main", + "MainStart", + "PB", + "RegisterCover", + "RunBenchmarks", + "RunExamples", + "RunTests", + "Short", + "T", + "TB", + "Verbose", + }, + "testing/iotest": []string{ + "DataErrReader", + "ErrTimeout", + "HalfReader", + "NewReadLogger", + "NewWriteLogger", + "OneByteReader", + "TimeoutReader", + "TruncateWriter", + }, + "testing/quick": []string{ + "Check", + "CheckEqual", + "CheckEqualError", + "CheckError", + "Config", + "Generator", + "SetupError", + "Value", + }, + "text/scanner": []string{ + "Char", + "Comment", + "EOF", + "Float", + "GoTokens", + "GoWhitespace", + "Ident", + "Int", + "Position", + "RawString", + "ScanChars", + "ScanComments", + "ScanFloats", + "ScanIdents", + "ScanInts", + "ScanRawStrings", + "ScanStrings", + "Scanner", + "SkipComments", + "String", + "TokenString", + }, + "text/tabwriter": []string{ + "AlignRight", + "Debug", + "DiscardEmptyColumns", + "Escape", + "FilterHTML", + "NewWriter", + "StripEscape", + "TabIndent", + "Writer", + }, + "text/template": []string{ + "ExecError", + "FuncMap", + "HTMLEscape", + "HTMLEscapeString", + "HTMLEscaper", + "IsTrue", + "JSEscape", + "JSEscapeString", + "JSEscaper", + "Must", + "New", + "ParseFiles", + "ParseGlob", + "Template", + "URLQueryEscaper", + }, + "text/template/parse": []string{ + "ActionNode", + "BoolNode", + "BranchNode", + "ChainNode", + "CommandNode", + "DotNode", + "FieldNode", + "IdentifierNode", + "IfNode", + "IsEmptyTree", + "ListNode", + "New", + "NewIdentifier", + "NilNode", + "Node", + "NodeAction", + "NodeBool", + "NodeChain", + "NodeCommand", + "NodeDot", + "NodeField", + "NodeIdentifier", + "NodeIf", + "NodeList", + "NodeNil", + "NodeNumber", + "NodePipe", + "NodeRange", + "NodeString", + "NodeTemplate", + "NodeText", + "NodeType", + "NodeVariable", + "NodeWith", + "NumberNode", + "Parse", + "PipeNode", + "Pos", + "RangeNode", + "StringNode", + "TemplateNode", + "TextNode", + "Tree", + "VariableNode", + "WithNode", + }, + "time": []string{ + "ANSIC", + "After", + "AfterFunc", + "April", + "August", + "Date", + "December", + "Duration", + "February", + "FixedZone", + "Friday", + "Hour", + "January", + "July", + "June", + "Kitchen", + "LoadLocation", + "LoadLocationFromTZData", + "Local", + "Location", + "March", + "May", + "Microsecond", + "Millisecond", + "Minute", + "Monday", + "Month", + "Nanosecond", + "NewTicker", + "NewTimer", + "November", + "Now", + "October", + "Parse", + "ParseDuration", + "ParseError", + "ParseInLocation", + "RFC1123", + "RFC1123Z", + "RFC3339", + "RFC3339Nano", + "RFC822", + "RFC822Z", + "RFC850", + "RubyDate", + "Saturday", + "Second", + "September", + "Since", + "Sleep", + "Stamp", + "StampMicro", + "StampMilli", + "StampNano", + "Sunday", + "Thursday", + "Tick", + "Ticker", + "Time", + "Timer", + "Tuesday", + "UTC", + "Unix", + "UnixDate", + "Until", + "Wednesday", + "Weekday", + }, + "unicode": []string{ + "ASCII_Hex_Digit", + "Adlam", + "Ahom", + "Anatolian_Hieroglyphs", + "Arabic", + "Armenian", + "Avestan", + "AzeriCase", + "Balinese", + "Bamum", + "Bassa_Vah", + "Batak", + "Bengali", + "Bhaiksuki", + "Bidi_Control", + "Bopomofo", + "Brahmi", + "Braille", + "Buginese", + "Buhid", + "C", + "Canadian_Aboriginal", + "Carian", + "CaseRange", + "CaseRanges", + "Categories", + "Caucasian_Albanian", + "Cc", + "Cf", + "Chakma", + "Cham", + "Cherokee", + "Co", + "Common", + "Coptic", + "Cs", + "Cuneiform", + "Cypriot", + "Cyrillic", + "Dash", + "Deprecated", + "Deseret", + "Devanagari", + "Diacritic", + "Digit", + "Dogra", + "Duployan", + "Egyptian_Hieroglyphs", + "Elbasan", + "Elymaic", + "Ethiopic", + "Extender", + "FoldCategory", + "FoldScript", + "Georgian", + "Glagolitic", + "Gothic", + "Grantha", + "GraphicRanges", + "Greek", + "Gujarati", + "Gunjala_Gondi", + "Gurmukhi", + "Han", + "Hangul", + "Hanifi_Rohingya", + "Hanunoo", + "Hatran", + "Hebrew", + "Hex_Digit", + "Hiragana", + "Hyphen", + "IDS_Binary_Operator", + "IDS_Trinary_Operator", + "Ideographic", + "Imperial_Aramaic", + "In", + "Inherited", + "Inscriptional_Pahlavi", + "Inscriptional_Parthian", + "Is", + "IsControl", + "IsDigit", + "IsGraphic", + "IsLetter", + "IsLower", + "IsMark", + "IsNumber", + "IsOneOf", + "IsPrint", + "IsPunct", + "IsSpace", + "IsSymbol", + "IsTitle", + "IsUpper", + "Javanese", + "Join_Control", + "Kaithi", + "Kannada", + "Katakana", + "Kayah_Li", + "Kharoshthi", + "Khmer", + "Khojki", + "Khudawadi", + "L", + "Lao", + "Latin", + "Lepcha", + "Letter", + "Limbu", + "Linear_A", + "Linear_B", + "Lisu", + "Ll", + "Lm", + "Lo", + "Logical_Order_Exception", + "Lower", + "LowerCase", + "Lt", + "Lu", + "Lycian", + "Lydian", + "M", + "Mahajani", + "Makasar", + "Malayalam", + "Mandaic", + "Manichaean", + "Marchen", + "Mark", + "Masaram_Gondi", + "MaxASCII", + "MaxCase", + "MaxLatin1", + "MaxRune", + "Mc", + "Me", + "Medefaidrin", + "Meetei_Mayek", + "Mende_Kikakui", + "Meroitic_Cursive", + "Meroitic_Hieroglyphs", + "Miao", + "Mn", + "Modi", + "Mongolian", + "Mro", + "Multani", + "Myanmar", + "N", + "Nabataean", + "Nandinagari", + "Nd", + "New_Tai_Lue", + "Newa", + "Nko", + "Nl", + "No", + "Noncharacter_Code_Point", + "Number", + "Nushu", + "Nyiakeng_Puachue_Hmong", + "Ogham", + "Ol_Chiki", + "Old_Hungarian", + "Old_Italic", + "Old_North_Arabian", + "Old_Permic", + "Old_Persian", + "Old_Sogdian", + "Old_South_Arabian", + "Old_Turkic", + "Oriya", + "Osage", + "Osmanya", + "Other", + "Other_Alphabetic", + "Other_Default_Ignorable_Code_Point", + "Other_Grapheme_Extend", + "Other_ID_Continue", + "Other_ID_Start", + "Other_Lowercase", + "Other_Math", + "Other_Uppercase", + "P", + "Pahawh_Hmong", + "Palmyrene", + "Pattern_Syntax", + "Pattern_White_Space", + "Pau_Cin_Hau", + "Pc", + "Pd", + "Pe", + "Pf", + "Phags_Pa", + "Phoenician", + "Pi", + "Po", + "Prepended_Concatenation_Mark", + "PrintRanges", + "Properties", + "Ps", + "Psalter_Pahlavi", + "Punct", + "Quotation_Mark", + "Radical", + "Range16", + "Range32", + "RangeTable", + "Regional_Indicator", + "Rejang", + "ReplacementChar", + "Runic", + "S", + "STerm", + "Samaritan", + "Saurashtra", + "Sc", + "Scripts", + "Sentence_Terminal", + "Sharada", + "Shavian", + "Siddham", + "SignWriting", + "SimpleFold", + "Sinhala", + "Sk", + "Sm", + "So", + "Soft_Dotted", + "Sogdian", + "Sora_Sompeng", + "Soyombo", + "Space", + "SpecialCase", + "Sundanese", + "Syloti_Nagri", + "Symbol", + "Syriac", + "Tagalog", + "Tagbanwa", + "Tai_Le", + "Tai_Tham", + "Tai_Viet", + "Takri", + "Tamil", + "Tangut", + "Telugu", + "Terminal_Punctuation", + "Thaana", + "Thai", + "Tibetan", + "Tifinagh", + "Tirhuta", + "Title", + "TitleCase", + "To", + "ToLower", + "ToTitle", + "ToUpper", + "TurkishCase", + "Ugaritic", + "Unified_Ideograph", + "Upper", + "UpperCase", + "UpperLower", + "Vai", + "Variation_Selector", + "Version", + "Wancho", + "Warang_Citi", + "White_Space", + "Yi", + "Z", + "Zanabazar_Square", + "Zl", + "Zp", + "Zs", + }, + "unicode/utf16": []string{ + "Decode", + "DecodeRune", + "Encode", + "EncodeRune", + "IsSurrogate", + }, + "unicode/utf8": []string{ + "DecodeLastRune", + "DecodeLastRuneInString", + "DecodeRune", + "DecodeRuneInString", + "EncodeRune", + "FullRune", + "FullRuneInString", + "MaxRune", + "RuneCount", + "RuneCountInString", + "RuneError", + "RuneLen", + "RuneSelf", + "RuneStart", + "UTFMax", + "Valid", + "ValidRune", + "ValidString", + }, + "unsafe": []string{ + "Alignof", + "ArbitraryType", + "Offsetof", + "Pointer", + "Sizeof", + }, +} diff --git a/vendor/golang.org/x/tools/internal/packagesinternal/packages.go b/vendor/golang.org/x/tools/internal/packagesinternal/packages.go new file mode 100644 index 000000000..d4ec6f971 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/packagesinternal/packages.go @@ -0,0 +1,21 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package packagesinternal exposes internal-only fields from go/packages. +package packagesinternal + +import ( + "golang.org/x/tools/internal/gocommand" +) + +var GetForTest = func(p interface{}) string { return "" } + +var GetGoCmdRunner = func(config interface{}) *gocommand.Runner { return nil } + +var SetGoCmdRunner = func(config interface{}, runner *gocommand.Runner) {} + +var TypecheckCgo int + +var SetModFlag = func(config interface{}, value string) {} +var SetModFile = func(config interface{}, value string) {} diff --git a/vendor/golang.org/x/tools/internal/typesinternal/errorcode.go b/vendor/golang.org/x/tools/internal/typesinternal/errorcode.go new file mode 100644 index 000000000..65473eb22 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/typesinternal/errorcode.go @@ -0,0 +1,1358 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package typesinternal + +//go:generate stringer -type=ErrorCode + +type ErrorCode int + +// This file defines the error codes that can be produced during type-checking. +// Collectively, these codes provide an identifier that may be used to +// implement special handling for certain types of errors. +// +// Error codes should be fine-grained enough that the exact nature of the error +// can be easily determined, but coarse enough that they are not an +// implementation detail of the type checking algorithm. As a rule-of-thumb, +// errors should be considered equivalent if there is a theoretical refactoring +// of the type checker in which they are emitted in exactly one place. For +// example, the type checker emits different error messages for "too many +// arguments" and "too few arguments", but one can imagine an alternative type +// checker where this check instead just emits a single "wrong number of +// arguments", so these errors should have the same code. +// +// Error code names should be as brief as possible while retaining accuracy and +// distinctiveness. In most cases names should start with an adjective +// describing the nature of the error (e.g. "invalid", "unused", "misplaced"), +// and end with a noun identifying the relevant language object. For example, +// "DuplicateDecl" or "InvalidSliceExpr". For brevity, naming follows the +// convention that "bad" implies a problem with syntax, and "invalid" implies a +// problem with types. + +const ( + _ ErrorCode = iota + + // Test is reserved for errors that only apply while in self-test mode. + Test + + /* package names */ + + // BlankPkgName occurs when a package name is the blank identifier "_". + // + // Per the spec: + // "The PackageName must not be the blank identifier." + BlankPkgName + + // MismatchedPkgName occurs when a file's package name doesn't match the + // package name already established by other files. + MismatchedPkgName + + // InvalidPkgUse occurs when a package identifier is used outside of a + // selector expression. + // + // Example: + // import "fmt" + // + // var _ = fmt + InvalidPkgUse + + /* imports */ + + // BadImportPath occurs when an import path is not valid. + BadImportPath + + // BrokenImport occurs when importing a package fails. + // + // Example: + // import "amissingpackage" + BrokenImport + + // ImportCRenamed occurs when the special import "C" is renamed. "C" is a + // pseudo-package, and must not be renamed. + // + // Example: + // import _ "C" + ImportCRenamed + + // UnusedImport occurs when an import is unused. + // + // Example: + // import "fmt" + // + // func main() {} + UnusedImport + + /* initialization */ + + // InvalidInitCycle occurs when an invalid cycle is detected within the + // initialization graph. + // + // Example: + // var x int = f() + // + // func f() int { return x } + InvalidInitCycle + + /* decls */ + + // DuplicateDecl occurs when an identifier is declared multiple times. + // + // Example: + // var x = 1 + // var x = 2 + DuplicateDecl + + // InvalidDeclCycle occurs when a declaration cycle is not valid. + // + // Example: + // import "unsafe" + // + // type T struct { + // a [n]int + // } + // + // var n = unsafe.Sizeof(T{}) + InvalidDeclCycle + + // InvalidTypeCycle occurs when a cycle in type definitions results in a + // type that is not well-defined. + // + // Example: + // import "unsafe" + // + // type T [unsafe.Sizeof(T{})]int + InvalidTypeCycle + + /* decls > const */ + + // InvalidConstInit occurs when a const declaration has a non-constant + // initializer. + // + // Example: + // var x int + // const _ = x + InvalidConstInit + + // InvalidConstVal occurs when a const value cannot be converted to its + // target type. + // + // TODO(findleyr): this error code and example are not very clear. Consider + // removing it. + // + // Example: + // const _ = 1 << "hello" + InvalidConstVal + + // InvalidConstType occurs when the underlying type in a const declaration + // is not a valid constant type. + // + // Example: + // const c *int = 4 + InvalidConstType + + /* decls > var (+ other variable assignment codes) */ + + // UntypedNil occurs when the predeclared (untyped) value nil is used to + // initialize a variable declared without an explicit type. + // + // Example: + // var x = nil + UntypedNil + + // WrongAssignCount occurs when the number of values on the right-hand side + // of an assignment or or initialization expression does not match the number + // of variables on the left-hand side. + // + // Example: + // var x = 1, 2 + WrongAssignCount + + // UnassignableOperand occurs when the left-hand side of an assignment is + // not assignable. + // + // Example: + // func f() { + // const c = 1 + // c = 2 + // } + UnassignableOperand + + // NoNewVar occurs when a short variable declaration (':=') does not declare + // new variables. + // + // Example: + // func f() { + // x := 1 + // x := 2 + // } + NoNewVar + + // MultiValAssignOp occurs when an assignment operation (+=, *=, etc) does + // not have single-valued left-hand or right-hand side. + // + // Per the spec: + // "In assignment operations, both the left- and right-hand expression lists + // must contain exactly one single-valued expression" + // + // Example: + // func f() int { + // x, y := 1, 2 + // x, y += 1 + // return x + y + // } + MultiValAssignOp + + // InvalidIfaceAssign occurs when a value of type T is used as an + // interface, but T does not implement a method of the expected interface. + // + // Example: + // type I interface { + // f() + // } + // + // type T int + // + // var x I = T(1) + InvalidIfaceAssign + + // InvalidChanAssign occurs when a chan assignment is invalid. + // + // Per the spec, a value x is assignable to a channel type T if: + // "x is a bidirectional channel value, T is a channel type, x's type V and + // T have identical element types, and at least one of V or T is not a + // defined type." + // + // Example: + // type T1 chan int + // type T2 chan int + // + // var x T1 + // // Invalid assignment because both types are named + // var _ T2 = x + InvalidChanAssign + + // IncompatibleAssign occurs when the type of the right-hand side expression + // in an assignment cannot be assigned to the type of the variable being + // assigned. + // + // Example: + // var x []int + // var _ int = x + IncompatibleAssign + + // UnaddressableFieldAssign occurs when trying to assign to a struct field + // in a map value. + // + // Example: + // func f() { + // m := make(map[string]struct{i int}) + // m["foo"].i = 42 + // } + UnaddressableFieldAssign + + /* decls > type (+ other type expression codes) */ + + // NotAType occurs when the identifier used as the underlying type in a type + // declaration or the right-hand side of a type alias does not denote a type. + // + // Example: + // var S = 2 + // + // type T S + NotAType + + // InvalidArrayLen occurs when an array length is not a constant value. + // + // Example: + // var n = 3 + // var _ = [n]int{} + InvalidArrayLen + + // BlankIfaceMethod occurs when a method name is '_'. + // + // Per the spec: + // "The name of each explicitly specified method must be unique and not + // blank." + // + // Example: + // type T interface { + // _(int) + // } + BlankIfaceMethod + + // IncomparableMapKey occurs when a map key type does not support the == and + // != operators. + // + // Per the spec: + // "The comparison operators == and != must be fully defined for operands of + // the key type; thus the key type must not be a function, map, or slice." + // + // Example: + // var x map[T]int + // + // type T []int + IncomparableMapKey + + // InvalidIfaceEmbed occurs when a non-interface type is embedded in an + // interface. + // + // Example: + // type T struct {} + // + // func (T) m() + // + // type I interface { + // T + // } + InvalidIfaceEmbed + + // InvalidPtrEmbed occurs when an embedded field is of the pointer form *T, + // and T itself is itself a pointer, an unsafe.Pointer, or an interface. + // + // Per the spec: + // "An embedded field must be specified as a type name T or as a pointer to + // a non-interface type name *T, and T itself may not be a pointer type." + // + // Example: + // type T *int + // + // type S struct { + // *T + // } + InvalidPtrEmbed + + /* decls > func and method */ + + // BadRecv occurs when a method declaration does not have exactly one + // receiver parameter. + // + // Example: + // func () _() {} + BadRecv + + // InvalidRecv occurs when a receiver type expression is not of the form T + // or *T, or T is a pointer type. + // + // Example: + // type T struct {} + // + // func (**T) m() {} + InvalidRecv + + // DuplicateFieldAndMethod occurs when an identifier appears as both a field + // and method name. + // + // Example: + // type T struct { + // m int + // } + // + // func (T) m() {} + DuplicateFieldAndMethod + + // DuplicateMethod occurs when two methods on the same receiver type have + // the same name. + // + // Example: + // type T struct {} + // func (T) m() {} + // func (T) m(i int) int { return i } + DuplicateMethod + + /* decls > special */ + + // InvalidBlank occurs when a blank identifier is used as a value or type. + // + // Per the spec: + // "The blank identifier may appear as an operand only on the left-hand side + // of an assignment." + // + // Example: + // var x = _ + InvalidBlank + + // InvalidIota occurs when the predeclared identifier iota is used outside + // of a constant declaration. + // + // Example: + // var x = iota + InvalidIota + + // MissingInitBody occurs when an init function is missing its body. + // + // Example: + // func init() + MissingInitBody + + // InvalidInitSig occurs when an init function declares parameters or + // results. + // + // Example: + // func init() int { return 1 } + InvalidInitSig + + // InvalidInitDecl occurs when init is declared as anything other than a + // function. + // + // Example: + // var init = 1 + InvalidInitDecl + + // InvalidMainDecl occurs when main is declared as anything other than a + // function, in a main package. + InvalidMainDecl + + /* exprs */ + + // TooManyValues occurs when a function returns too many values for the + // expression context in which it is used. + // + // Example: + // func ReturnTwo() (int, int) { + // return 1, 2 + // } + // + // var x = ReturnTwo() + TooManyValues + + // NotAnExpr occurs when a type expression is used where a value expression + // is expected. + // + // Example: + // type T struct {} + // + // func f() { + // T + // } + NotAnExpr + + /* exprs > const */ + + // TruncatedFloat occurs when a float constant is truncated to an integer + // value. + // + // Example: + // var _ int = 98.6 + TruncatedFloat + + // NumericOverflow occurs when a numeric constant overflows its target type. + // + // Example: + // var x int8 = 1000 + NumericOverflow + + /* exprs > operation */ + + // UndefinedOp occurs when an operator is not defined for the type(s) used + // in an operation. + // + // Example: + // var c = "a" - "b" + UndefinedOp + + // MismatchedTypes occurs when operand types are incompatible in a binary + // operation. + // + // Example: + // var a = "hello" + // var b = 1 + // var c = a - b + MismatchedTypes + + // DivByZero occurs when a division operation is provable at compile + // time to be a division by zero. + // + // Example: + // const divisor = 0 + // var x int = 1/divisor + DivByZero + + // NonNumericIncDec occurs when an increment or decrement operator is + // applied to a non-numeric value. + // + // Example: + // func f() { + // var c = "c" + // c++ + // } + NonNumericIncDec + + /* exprs > ptr */ + + // UnaddressableOperand occurs when the & operator is applied to an + // unaddressable expression. + // + // Example: + // var x = &1 + UnaddressableOperand + + // InvalidIndirection occurs when a non-pointer value is indirected via the + // '*' operator. + // + // Example: + // var x int + // var y = *x + InvalidIndirection + + /* exprs > [] */ + + // NonIndexableOperand occurs when an index operation is applied to a value + // that cannot be indexed. + // + // Example: + // var x = 1 + // var y = x[1] + NonIndexableOperand + + // InvalidIndex occurs when an index argument is not of integer type, + // negative, or out-of-bounds. + // + // Example: + // var s = [...]int{1,2,3} + // var x = s[5] + // + // Example: + // var s = []int{1,2,3} + // var _ = s[-1] + // + // Example: + // var s = []int{1,2,3} + // var i string + // var _ = s[i] + InvalidIndex + + // SwappedSliceIndices occurs when constant indices in a slice expression + // are decreasing in value. + // + // Example: + // var _ = []int{1,2,3}[2:1] + SwappedSliceIndices + + /* operators > slice */ + + // NonSliceableOperand occurs when a slice operation is applied to a value + // whose type is not sliceable, or is unaddressable. + // + // Example: + // var x = [...]int{1, 2, 3}[:1] + // + // Example: + // var x = 1 + // var y = 1[:1] + NonSliceableOperand + + // InvalidSliceExpr occurs when a three-index slice expression (a[x:y:z]) is + // applied to a string. + // + // Example: + // var s = "hello" + // var x = s[1:2:3] + InvalidSliceExpr + + /* exprs > shift */ + + // InvalidShiftCount occurs when the right-hand side of a shift operation is + // either non-integer, negative, or too large. + // + // Example: + // var ( + // x string + // y int = 1 << x + // ) + InvalidShiftCount + + // InvalidShiftOperand occurs when the shifted operand is not an integer. + // + // Example: + // var s = "hello" + // var x = s << 2 + InvalidShiftOperand + + /* exprs > chan */ + + // InvalidReceive occurs when there is a channel receive from a value that + // is either not a channel, or is a send-only channel. + // + // Example: + // func f() { + // var x = 1 + // <-x + // } + InvalidReceive + + // InvalidSend occurs when there is a channel send to a value that is not a + // channel, or is a receive-only channel. + // + // Example: + // func f() { + // var x = 1 + // x <- "hello!" + // } + InvalidSend + + /* exprs > literal */ + + // DuplicateLitKey occurs when an index is duplicated in a slice, array, or + // map literal. + // + // Example: + // var _ = []int{0:1, 0:2} + // + // Example: + // var _ = map[string]int{"a": 1, "a": 2} + DuplicateLitKey + + // MissingLitKey occurs when a map literal is missing a key expression. + // + // Example: + // var _ = map[string]int{1} + MissingLitKey + + // InvalidLitIndex occurs when the key in a key-value element of a slice or + // array literal is not an integer constant. + // + // Example: + // var i = 0 + // var x = []string{i: "world"} + InvalidLitIndex + + // OversizeArrayLit occurs when an array literal exceeds its length. + // + // Example: + // var _ = [2]int{1,2,3} + OversizeArrayLit + + // MixedStructLit occurs when a struct literal contains a mix of positional + // and named elements. + // + // Example: + // var _ = struct{i, j int}{i: 1, 2} + MixedStructLit + + // InvalidStructLit occurs when a positional struct literal has an incorrect + // number of values. + // + // Example: + // var _ = struct{i, j int}{1,2,3} + InvalidStructLit + + // MissingLitField occurs when a struct literal refers to a field that does + // not exist on the struct type. + // + // Example: + // var _ = struct{i int}{j: 2} + MissingLitField + + // DuplicateLitField occurs when a struct literal contains duplicated + // fields. + // + // Example: + // var _ = struct{i int}{i: 1, i: 2} + DuplicateLitField + + // UnexportedLitField occurs when a positional struct literal implicitly + // assigns an unexported field of an imported type. + UnexportedLitField + + // InvalidLitField occurs when a field name is not a valid identifier. + // + // Example: + // var _ = struct{i int}{1: 1} + InvalidLitField + + // UntypedLit occurs when a composite literal omits a required type + // identifier. + // + // Example: + // type outer struct{ + // inner struct { i int } + // } + // + // var _ = outer{inner: {1}} + UntypedLit + + // InvalidLit occurs when a composite literal expression does not match its + // type. + // + // Example: + // type P *struct{ + // x int + // } + // var _ = P {} + InvalidLit + + /* exprs > selector */ + + // AmbiguousSelector occurs when a selector is ambiguous. + // + // Example: + // type E1 struct { i int } + // type E2 struct { i int } + // type T struct { E1; E2 } + // + // var x T + // var _ = x.i + AmbiguousSelector + + // UndeclaredImportedName occurs when a package-qualified identifier is + // undeclared by the imported package. + // + // Example: + // import "go/types" + // + // var _ = types.NotAnActualIdentifier + UndeclaredImportedName + + // UnexportedName occurs when a selector refers to an unexported identifier + // of an imported package. + // + // Example: + // import "reflect" + // + // type _ reflect.flag + UnexportedName + + // UndeclaredName occurs when an identifier is not declared in the current + // scope. + // + // Example: + // var x T + UndeclaredName + + // MissingFieldOrMethod occurs when a selector references a field or method + // that does not exist. + // + // Example: + // type T struct {} + // + // var x = T{}.f + MissingFieldOrMethod + + /* exprs > ... */ + + // BadDotDotDotSyntax occurs when a "..." occurs in a context where it is + // not valid. + // + // Example: + // var _ = map[int][...]int{0: {}} + BadDotDotDotSyntax + + // NonVariadicDotDotDot occurs when a "..." is used on the final argument to + // a non-variadic function. + // + // Example: + // func printArgs(s []string) { + // for _, a := range s { + // println(a) + // } + // } + // + // func f() { + // s := []string{"a", "b", "c"} + // printArgs(s...) + // } + NonVariadicDotDotDot + + // MisplacedDotDotDot occurs when a "..." is used somewhere other than the + // final argument to a function call. + // + // Example: + // func printArgs(args ...int) { + // for _, a := range args { + // println(a) + // } + // } + // + // func f() { + // a := []int{1,2,3} + // printArgs(0, a...) + // } + MisplacedDotDotDot + + // InvalidDotDotDotOperand occurs when a "..." operator is applied to a + // single-valued operand. + // + // Example: + // func printArgs(args ...int) { + // for _, a := range args { + // println(a) + // } + // } + // + // func f() { + // a := 1 + // printArgs(a...) + // } + // + // Example: + // func args() (int, int) { + // return 1, 2 + // } + // + // func printArgs(args ...int) { + // for _, a := range args { + // println(a) + // } + // } + // + // func g() { + // printArgs(args()...) + // } + InvalidDotDotDotOperand + + // InvalidDotDotDot occurs when a "..." is used in a non-variadic built-in + // function. + // + // Example: + // var s = []int{1, 2, 3} + // var l = len(s...) + InvalidDotDotDot + + /* exprs > built-in */ + + // UncalledBuiltin occurs when a built-in function is used as a + // function-valued expression, instead of being called. + // + // Per the spec: + // "The built-in functions do not have standard Go types, so they can only + // appear in call expressions; they cannot be used as function values." + // + // Example: + // var _ = copy + UncalledBuiltin + + // InvalidAppend occurs when append is called with a first argument that is + // not a slice. + // + // Example: + // var _ = append(1, 2) + InvalidAppend + + // InvalidCap occurs when an argument to the cap built-in function is not of + // supported type. + // + // See https://golang.org/ref/spec#Lengthand_capacity for information on + // which underlying types are supported as arguments to cap and len. + // + // Example: + // var s = 2 + // var x = cap(s) + InvalidCap + + // InvalidClose occurs when close(...) is called with an argument that is + // not of channel type, or that is a receive-only channel. + // + // Example: + // func f() { + // var x int + // close(x) + // } + InvalidClose + + // InvalidCopy occurs when the arguments are not of slice type or do not + // have compatible type. + // + // See https://golang.org/ref/spec#Appendingand_copying_slices for more + // information on the type requirements for the copy built-in. + // + // Example: + // func f() { + // var x []int + // y := []int64{1,2,3} + // copy(x, y) + // } + InvalidCopy + + // InvalidComplex occurs when the complex built-in function is called with + // arguments with incompatible types. + // + // Example: + // var _ = complex(float32(1), float64(2)) + InvalidComplex + + // InvalidDelete occurs when the delete built-in function is called with a + // first argument that is not a map. + // + // Example: + // func f() { + // m := "hello" + // delete(m, "e") + // } + InvalidDelete + + // InvalidImag occurs when the imag built-in function is called with an + // argument that does not have complex type. + // + // Example: + // var _ = imag(int(1)) + InvalidImag + + // InvalidLen occurs when an argument to the len built-in function is not of + // supported type. + // + // See https://golang.org/ref/spec#Lengthand_capacity for information on + // which underlying types are supported as arguments to cap and len. + // + // Example: + // var s = 2 + // var x = len(s) + InvalidLen + + // SwappedMakeArgs occurs when make is called with three arguments, and its + // length argument is larger than its capacity argument. + // + // Example: + // var x = make([]int, 3, 2) + SwappedMakeArgs + + // InvalidMake occurs when make is called with an unsupported type argument. + // + // See https://golang.org/ref/spec#Makingslices_maps_and_channels for + // information on the types that may be created using make. + // + // Example: + // var x = make(int) + InvalidMake + + // InvalidReal occurs when the real built-in function is called with an + // argument that does not have complex type. + // + // Example: + // var _ = real(int(1)) + InvalidReal + + /* exprs > assertion */ + + // InvalidAssert occurs when a type assertion is applied to a + // value that is not of interface type. + // + // Example: + // var x = 1 + // var _ = x.(float64) + InvalidAssert + + // ImpossibleAssert occurs for a type assertion x.(T) when the value x of + // interface cannot have dynamic type T, due to a missing or mismatching + // method on T. + // + // Example: + // type T int + // + // func (t *T) m() int { return int(*t) } + // + // type I interface { m() int } + // + // var x I + // var _ = x.(T) + ImpossibleAssert + + /* exprs > conversion */ + + // InvalidConversion occurs when the argument type cannot be converted to the + // target. + // + // See https://golang.org/ref/spec#Conversions for the rules of + // convertibility. + // + // Example: + // var x float64 + // var _ = string(x) + InvalidConversion + + // InvalidUntypedConversion occurs when an there is no valid implicit + // conversion from an untyped value satisfying the type constraints of the + // context in which it is used. + // + // Example: + // var _ = 1 + "" + InvalidUntypedConversion + + /* offsetof */ + + // BadOffsetofSyntax occurs when unsafe.Offsetof is called with an argument + // that is not a selector expression. + // + // Example: + // import "unsafe" + // + // var x int + // var _ = unsafe.Offsetof(x) + BadOffsetofSyntax + + // InvalidOffsetof occurs when unsafe.Offsetof is called with a method + // selector, rather than a field selector, or when the field is embedded via + // a pointer. + // + // Per the spec: + // + // "If f is an embedded field, it must be reachable without pointer + // indirections through fields of the struct. " + // + // Example: + // import "unsafe" + // + // type T struct { f int } + // type S struct { *T } + // var s S + // var _ = unsafe.Offsetof(s.f) + // + // Example: + // import "unsafe" + // + // type S struct{} + // + // func (S) m() {} + // + // var s S + // var _ = unsafe.Offsetof(s.m) + InvalidOffsetof + + /* control flow > scope */ + + // UnusedExpr occurs when a side-effect free expression is used as a + // statement. Such a statement has no effect. + // + // Example: + // func f(i int) { + // i*i + // } + UnusedExpr + + // UnusedVar occurs when a variable is declared but unused. + // + // Example: + // func f() { + // x := 1 + // } + UnusedVar + + // MissingReturn occurs when a function with results is missing a return + // statement. + // + // Example: + // func f() int {} + MissingReturn + + // WrongResultCount occurs when a return statement returns an incorrect + // number of values. + // + // Example: + // func ReturnOne() int { + // return 1, 2 + // } + WrongResultCount + + // OutOfScopeResult occurs when the name of a value implicitly returned by + // an empty return statement is shadowed in a nested scope. + // + // Example: + // func factor(n int) (i int) { + // for i := 2; i < n; i++ { + // if n%i == 0 { + // return + // } + // } + // return 0 + // } + OutOfScopeResult + + /* control flow > if */ + + // InvalidCond occurs when an if condition is not a boolean expression. + // + // Example: + // func checkReturn(i int) { + // if i { + // panic("non-zero return") + // } + // } + InvalidCond + + /* control flow > for */ + + // InvalidPostDecl occurs when there is a declaration in a for-loop post + // statement. + // + // Example: + // func f() { + // for i := 0; i < 10; j := 0 {} + // } + InvalidPostDecl + + // InvalidChanRange occurs when a send-only channel used in a range + // expression. + // + // Example: + // func sum(c chan<- int) { + // s := 0 + // for i := range c { + // s += i + // } + // } + InvalidChanRange + + // InvalidIterVar occurs when two iteration variables are used while ranging + // over a channel. + // + // Example: + // func f(c chan int) { + // for k, v := range c { + // println(k, v) + // } + // } + InvalidIterVar + + // InvalidRangeExpr occurs when the type of a range expression is not array, + // slice, string, map, or channel. + // + // Example: + // func f(i int) { + // for j := range i { + // println(j) + // } + // } + InvalidRangeExpr + + /* control flow > switch */ + + // MisplacedBreak occurs when a break statement is not within a for, switch, + // or select statement of the innermost function definition. + // + // Example: + // func f() { + // break + // } + MisplacedBreak + + // MisplacedContinue occurs when a continue statement is not within a for + // loop of the innermost function definition. + // + // Example: + // func sumeven(n int) int { + // proceed := func() { + // continue + // } + // sum := 0 + // for i := 1; i <= n; i++ { + // if i % 2 != 0 { + // proceed() + // } + // sum += i + // } + // return sum + // } + MisplacedContinue + + // MisplacedFallthrough occurs when a fallthrough statement is not within an + // expression switch. + // + // Example: + // func typename(i interface{}) string { + // switch i.(type) { + // case int64: + // fallthrough + // case int: + // return "int" + // } + // return "unsupported" + // } + MisplacedFallthrough + + // DuplicateCase occurs when a type or expression switch has duplicate + // cases. + // + // Example: + // func printInt(i int) { + // switch i { + // case 1: + // println("one") + // case 1: + // println("One") + // } + // } + DuplicateCase + + // DuplicateDefault occurs when a type or expression switch has multiple + // default clauses. + // + // Example: + // func printInt(i int) { + // switch i { + // case 1: + // println("one") + // default: + // println("One") + // default: + // println("1") + // } + // } + DuplicateDefault + + // BadTypeKeyword occurs when a .(type) expression is used anywhere other + // than a type switch. + // + // Example: + // type I interface { + // m() + // } + // var t I + // var _ = t.(type) + BadTypeKeyword + + // InvalidTypeSwitch occurs when .(type) is used on an expression that is + // not of interface type. + // + // Example: + // func f(i int) { + // switch x := i.(type) {} + // } + InvalidTypeSwitch + + /* control flow > select */ + + // InvalidSelectCase occurs when a select case is not a channel send or + // receive. + // + // Example: + // func checkChan(c <-chan int) bool { + // select { + // case c: + // return true + // default: + // return false + // } + // } + InvalidSelectCase + + /* control flow > labels and jumps */ + + // UndeclaredLabel occurs when an undeclared label is jumped to. + // + // Example: + // func f() { + // goto L + // } + UndeclaredLabel + + // DuplicateLabel occurs when a label is declared more than once. + // + // Example: + // func f() int { + // L: + // L: + // return 1 + // } + DuplicateLabel + + // MisplacedLabel occurs when a break or continue label is not on a for, + // switch, or select statement. + // + // Example: + // func f() { + // L: + // a := []int{1,2,3} + // for _, e := range a { + // if e > 10 { + // break L + // } + // println(a) + // } + // } + MisplacedLabel + + // UnusedLabel occurs when a label is declared but not used. + // + // Example: + // func f() { + // L: + // } + UnusedLabel + + // JumpOverDecl occurs when a label jumps over a variable declaration. + // + // Example: + // func f() int { + // goto L + // x := 2 + // L: + // x++ + // return x + // } + JumpOverDecl + + // JumpIntoBlock occurs when a forward jump goes to a label inside a nested + // block. + // + // Example: + // func f(x int) { + // goto L + // if x > 0 { + // L: + // print("inside block") + // } + // } + JumpIntoBlock + + /* control flow > calls */ + + // InvalidMethodExpr occurs when a pointer method is called but the argument + // is not addressable. + // + // Example: + // type T struct {} + // + // func (*T) m() int { return 1 } + // + // var _ = T.m(T{}) + InvalidMethodExpr + + // WrongArgCount occurs when too few or too many arguments are passed by a + // function call. + // + // Example: + // func f(i int) {} + // var x = f() + WrongArgCount + + // InvalidCall occurs when an expression is called that is not of function + // type. + // + // Example: + // var x = "x" + // var y = x() + InvalidCall + + /* control flow > suspended */ + + // UnusedResults occurs when a restricted expression-only built-in function + // is suspended via go or defer. Such a suspension discards the results of + // these side-effect free built-in functions, and therefore is ineffectual. + // + // Example: + // func f(a []int) int { + // defer len(a) + // return i + // } + UnusedResults + + // InvalidDefer occurs when a deferred expression is not a function call, + // for example if the expression is a type conversion. + // + // Example: + // func f(i int) int { + // defer int32(i) + // return i + // } + InvalidDefer + + // InvalidGo occurs when a go expression is not a function call, for example + // if the expression is a type conversion. + // + // Example: + // func f(i int) int { + // go int32(i) + // return i + // } + InvalidGo +) diff --git a/vendor/golang.org/x/tools/internal/typesinternal/errorcode_string.go b/vendor/golang.org/x/tools/internal/typesinternal/errorcode_string.go new file mode 100644 index 000000000..97f3ec891 --- /dev/null +++ b/vendor/golang.org/x/tools/internal/typesinternal/errorcode_string.go @@ -0,0 +1,152 @@ +// Code generated by "stringer -type=ErrorCode"; DO NOT EDIT. + +package typesinternal + +import "strconv" + +func _() { + // An "invalid array index" compiler error signifies that the constant values have changed. + // Re-run the stringer command to generate them again. + var x [1]struct{} + _ = x[Test-1] + _ = x[BlankPkgName-2] + _ = x[MismatchedPkgName-3] + _ = x[InvalidPkgUse-4] + _ = x[BadImportPath-5] + _ = x[BrokenImport-6] + _ = x[ImportCRenamed-7] + _ = x[UnusedImport-8] + _ = x[InvalidInitCycle-9] + _ = x[DuplicateDecl-10] + _ = x[InvalidDeclCycle-11] + _ = x[InvalidTypeCycle-12] + _ = x[InvalidConstInit-13] + _ = x[InvalidConstVal-14] + _ = x[InvalidConstType-15] + _ = x[UntypedNil-16] + _ = x[WrongAssignCount-17] + _ = x[UnassignableOperand-18] + _ = x[NoNewVar-19] + _ = x[MultiValAssignOp-20] + _ = x[InvalidIfaceAssign-21] + _ = x[InvalidChanAssign-22] + _ = x[IncompatibleAssign-23] + _ = x[UnaddressableFieldAssign-24] + _ = x[NotAType-25] + _ = x[InvalidArrayLen-26] + _ = x[BlankIfaceMethod-27] + _ = x[IncomparableMapKey-28] + _ = x[InvalidIfaceEmbed-29] + _ = x[InvalidPtrEmbed-30] + _ = x[BadRecv-31] + _ = x[InvalidRecv-32] + _ = x[DuplicateFieldAndMethod-33] + _ = x[DuplicateMethod-34] + _ = x[InvalidBlank-35] + _ = x[InvalidIota-36] + _ = x[MissingInitBody-37] + _ = x[InvalidInitSig-38] + _ = x[InvalidInitDecl-39] + _ = x[InvalidMainDecl-40] + _ = x[TooManyValues-41] + _ = x[NotAnExpr-42] + _ = x[TruncatedFloat-43] + _ = x[NumericOverflow-44] + _ = x[UndefinedOp-45] + _ = x[MismatchedTypes-46] + _ = x[DivByZero-47] + _ = x[NonNumericIncDec-48] + _ = x[UnaddressableOperand-49] + _ = x[InvalidIndirection-50] + _ = x[NonIndexableOperand-51] + _ = x[InvalidIndex-52] + _ = x[SwappedSliceIndices-53] + _ = x[NonSliceableOperand-54] + _ = x[InvalidSliceExpr-55] + _ = x[InvalidShiftCount-56] + _ = x[InvalidShiftOperand-57] + _ = x[InvalidReceive-58] + _ = x[InvalidSend-59] + _ = x[DuplicateLitKey-60] + _ = x[MissingLitKey-61] + _ = x[InvalidLitIndex-62] + _ = x[OversizeArrayLit-63] + _ = x[MixedStructLit-64] + _ = x[InvalidStructLit-65] + _ = x[MissingLitField-66] + _ = x[DuplicateLitField-67] + _ = x[UnexportedLitField-68] + _ = x[InvalidLitField-69] + _ = x[UntypedLit-70] + _ = x[InvalidLit-71] + _ = x[AmbiguousSelector-72] + _ = x[UndeclaredImportedName-73] + _ = x[UnexportedName-74] + _ = x[UndeclaredName-75] + _ = x[MissingFieldOrMethod-76] + _ = x[BadDotDotDotSyntax-77] + _ = x[NonVariadicDotDotDot-78] + _ = x[MisplacedDotDotDot-79] + _ = x[InvalidDotDotDotOperand-80] + _ = x[InvalidDotDotDot-81] + _ = x[UncalledBuiltin-82] + _ = x[InvalidAppend-83] + _ = x[InvalidCap-84] + _ = x[InvalidClose-85] + _ = x[InvalidCopy-86] + _ = x[InvalidComplex-87] + _ = x[InvalidDelete-88] + _ = x[InvalidImag-89] + _ = x[InvalidLen-90] + _ = x[SwappedMakeArgs-91] + _ = x[InvalidMake-92] + _ = x[InvalidReal-93] + _ = x[InvalidAssert-94] + _ = x[ImpossibleAssert-95] + _ = x[InvalidConversion-96] + _ = x[InvalidUntypedConversion-97] + _ = x[BadOffsetofSyntax-98] + _ = x[InvalidOffsetof-99] + _ = x[UnusedExpr-100] + _ = x[UnusedVar-101] + _ = x[MissingReturn-102] + _ = x[WrongResultCount-103] + _ = x[OutOfScopeResult-104] + _ = x[InvalidCond-105] + _ = x[InvalidPostDecl-106] + _ = x[InvalidChanRange-107] + _ = x[InvalidIterVar-108] + _ = x[InvalidRangeExpr-109] + _ = x[MisplacedBreak-110] + _ = x[MisplacedContinue-111] + _ = x[MisplacedFallthrough-112] + _ = x[DuplicateCase-113] + _ = x[DuplicateDefault-114] + _ = x[BadTypeKeyword-115] + _ = x[InvalidTypeSwitch-116] + _ = x[InvalidSelectCase-117] + _ = x[UndeclaredLabel-118] + _ = x[DuplicateLabel-119] + _ = x[MisplacedLabel-120] + _ = x[UnusedLabel-121] + _ = x[JumpOverDecl-122] + _ = x[JumpIntoBlock-123] + _ = x[InvalidMethodExpr-124] + _ = x[WrongArgCount-125] + _ = x[InvalidCall-126] + _ = x[UnusedResults-127] + _ = x[InvalidDefer-128] + _ = x[InvalidGo-129] +} + +const _ErrorCode_name = "TestBlankPkgNameMismatchedPkgNameInvalidPkgUseBadImportPathBrokenImportImportCRenamedUnusedImportInvalidInitCycleDuplicateDeclInvalidDeclCycleInvalidTypeCycleInvalidConstInitInvalidConstValInvalidConstTypeUntypedNilWrongAssignCountUnassignableOperandNoNewVarMultiValAssignOpInvalidIfaceAssignInvalidChanAssignIncompatibleAssignUnaddressableFieldAssignNotATypeInvalidArrayLenBlankIfaceMethodIncomparableMapKeyInvalidIfaceEmbedInvalidPtrEmbedBadRecvInvalidRecvDuplicateFieldAndMethodDuplicateMethodInvalidBlankInvalidIotaMissingInitBodyInvalidInitSigInvalidInitDeclInvalidMainDeclTooManyValuesNotAnExprTruncatedFloatNumericOverflowUndefinedOpMismatchedTypesDivByZeroNonNumericIncDecUnaddressableOperandInvalidIndirectionNonIndexableOperandInvalidIndexSwappedSliceIndicesNonSliceableOperandInvalidSliceExprInvalidShiftCountInvalidShiftOperandInvalidReceiveInvalidSendDuplicateLitKeyMissingLitKeyInvalidLitIndexOversizeArrayLitMixedStructLitInvalidStructLitMissingLitFieldDuplicateLitFieldUnexportedLitFieldInvalidLitFieldUntypedLitInvalidLitAmbiguousSelectorUndeclaredImportedNameUnexportedNameUndeclaredNameMissingFieldOrMethodBadDotDotDotSyntaxNonVariadicDotDotDotMisplacedDotDotDotInvalidDotDotDotOperandInvalidDotDotDotUncalledBuiltinInvalidAppendInvalidCapInvalidCloseInvalidCopyInvalidComplexInvalidDeleteInvalidImagInvalidLenSwappedMakeArgsInvalidMakeInvalidRealInvalidAssertImpossibleAssertInvalidConversionInvalidUntypedConversionBadOffsetofSyntaxInvalidOffsetofUnusedExprUnusedVarMissingReturnWrongResultCountOutOfScopeResultInvalidCondInvalidPostDeclInvalidChanRangeInvalidIterVarInvalidRangeExprMisplacedBreakMisplacedContinueMisplacedFallthroughDuplicateCaseDuplicateDefaultBadTypeKeywordInvalidTypeSwitchInvalidSelectCaseUndeclaredLabelDuplicateLabelMisplacedLabelUnusedLabelJumpOverDeclJumpIntoBlockInvalidMethodExprWrongArgCountInvalidCallUnusedResultsInvalidDeferInvalidGo" + +var _ErrorCode_index = [...]uint16{0, 4, 16, 33, 46, 59, 71, 85, 97, 113, 126, 142, 158, 174, 189, 205, 215, 231, 250, 258, 274, 292, 309, 327, 351, 359, 374, 390, 408, 425, 440, 447, 458, 481, 496, 508, 519, 534, 548, 563, 578, 591, 600, 614, 629, 640, 655, 664, 680, 700, 718, 737, 749, 768, 787, 803, 820, 839, 853, 864, 879, 892, 907, 923, 937, 953, 968, 985, 1003, 1018, 1028, 1038, 1055, 1077, 1091, 1105, 1125, 1143, 1163, 1181, 1204, 1220, 1235, 1248, 1258, 1270, 1281, 1295, 1308, 1319, 1329, 1344, 1355, 1366, 1379, 1395, 1412, 1436, 1453, 1468, 1478, 1487, 1500, 1516, 1532, 1543, 1558, 1574, 1588, 1604, 1618, 1635, 1655, 1668, 1684, 1698, 1715, 1732, 1747, 1761, 1775, 1786, 1798, 1811, 1828, 1841, 1852, 1865, 1877, 1886} + +func (i ErrorCode) String() string { + i -= 1 + if i < 0 || i >= ErrorCode(len(_ErrorCode_index)-1) { + return "ErrorCode(" + strconv.FormatInt(int64(i+1), 10) + ")" + } + return _ErrorCode_name[_ErrorCode_index[i]:_ErrorCode_index[i+1]] +} diff --git a/vendor/golang.org/x/tools/internal/typesinternal/types.go b/vendor/golang.org/x/tools/internal/typesinternal/types.go new file mode 100644 index 000000000..c3e1a397d --- /dev/null +++ b/vendor/golang.org/x/tools/internal/typesinternal/types.go @@ -0,0 +1,45 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package typesinternal provides access to internal go/types APIs that are not +// yet exported. +package typesinternal + +import ( + "go/token" + "go/types" + "reflect" + "unsafe" +) + +func SetUsesCgo(conf *types.Config) bool { + v := reflect.ValueOf(conf).Elem() + + f := v.FieldByName("go115UsesCgo") + if !f.IsValid() { + f = v.FieldByName("UsesCgo") + if !f.IsValid() { + return false + } + } + + addr := unsafe.Pointer(f.UnsafeAddr()) + *(*bool)(addr) = true + + return true +} + +func ReadGo116ErrorData(terr types.Error) (ErrorCode, token.Pos, token.Pos, bool) { + var data [3]int + // By coincidence all of these fields are ints, which simplifies things. + v := reflect.ValueOf(terr) + for i, name := range []string{"go116code", "go116start", "go116end"} { + f := v.FieldByName(name) + if !f.IsValid() { + return 0, 0, 0, false + } + data[i] = int(f.Int()) + } + return ErrorCode(data[0]), token.Pos(data[1]), token.Pos(data[2]), true +} diff --git a/vendor/golang.org/x/xerrors/LICENSE b/vendor/golang.org/x/xerrors/LICENSE new file mode 100644 index 000000000..e4a47e17f --- /dev/null +++ b/vendor/golang.org/x/xerrors/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2019 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/golang.org/x/xerrors/PATENTS b/vendor/golang.org/x/xerrors/PATENTS new file mode 100644 index 000000000..733099041 --- /dev/null +++ b/vendor/golang.org/x/xerrors/PATENTS @@ -0,0 +1,22 @@ +Additional IP Rights Grant (Patents) + +"This implementation" means the copyrightable works distributed by +Google as part of the Go project. + +Google hereby grants to You a perpetual, worldwide, non-exclusive, +no-charge, royalty-free, irrevocable (except as stated in this section) +patent license to make, have made, use, offer to sell, sell, import, +transfer and otherwise run, modify and propagate the contents of this +implementation of Go, where such license applies only to those patent +claims, both currently owned or controlled by Google and acquired in +the future, licensable by Google that are necessarily infringed by this +implementation of Go. This grant does not include claims that would be +infringed only as a consequence of further modification of this +implementation. If you or your agent or exclusive licensee institute or +order or agree to the institution of patent litigation against any +entity (including a cross-claim or counterclaim in a lawsuit) alleging +that this implementation of Go or any code incorporated within this +implementation of Go constitutes direct or contributory patent +infringement, or inducement of patent infringement, then any patent +rights granted to you under this License for this implementation of Go +shall terminate as of the date such litigation is filed. diff --git a/vendor/golang.org/x/xerrors/README b/vendor/golang.org/x/xerrors/README new file mode 100644 index 000000000..aac7867a5 --- /dev/null +++ b/vendor/golang.org/x/xerrors/README @@ -0,0 +1,2 @@ +This repository holds the transition packages for the new Go 1.13 error values. +See golang.org/design/29934-error-values. diff --git a/vendor/golang.org/x/xerrors/adaptor.go b/vendor/golang.org/x/xerrors/adaptor.go new file mode 100644 index 000000000..4317f2483 --- /dev/null +++ b/vendor/golang.org/x/xerrors/adaptor.go @@ -0,0 +1,193 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +import ( + "bytes" + "fmt" + "io" + "reflect" + "strconv" +) + +// FormatError calls the FormatError method of f with an errors.Printer +// configured according to s and verb, and writes the result to s. +func FormatError(f Formatter, s fmt.State, verb rune) { + // Assuming this function is only called from the Format method, and given + // that FormatError takes precedence over Format, it cannot be called from + // any package that supports errors.Formatter. It is therefore safe to + // disregard that State may be a specific printer implementation and use one + // of our choice instead. + + // limitations: does not support printing error as Go struct. + + var ( + sep = " " // separator before next error + p = &state{State: s} + direct = true + ) + + var err error = f + + switch verb { + // Note that this switch must match the preference order + // for ordinary string printing (%#v before %+v, and so on). + + case 'v': + if s.Flag('#') { + if stringer, ok := err.(fmt.GoStringer); ok { + io.WriteString(&p.buf, stringer.GoString()) + goto exit + } + // proceed as if it were %v + } else if s.Flag('+') { + p.printDetail = true + sep = "\n - " + } + case 's': + case 'q', 'x', 'X': + // Use an intermediate buffer in the rare cases that precision, + // truncation, or one of the alternative verbs (q, x, and X) are + // specified. + direct = false + + default: + p.buf.WriteString("%!") + p.buf.WriteRune(verb) + p.buf.WriteByte('(') + switch { + case err != nil: + p.buf.WriteString(reflect.TypeOf(f).String()) + default: + p.buf.WriteString("") + } + p.buf.WriteByte(')') + io.Copy(s, &p.buf) + return + } + +loop: + for { + switch v := err.(type) { + case Formatter: + err = v.FormatError((*printer)(p)) + case fmt.Formatter: + v.Format(p, 'v') + break loop + default: + io.WriteString(&p.buf, v.Error()) + break loop + } + if err == nil { + break + } + if p.needColon || !p.printDetail { + p.buf.WriteByte(':') + p.needColon = false + } + p.buf.WriteString(sep) + p.inDetail = false + p.needNewline = false + } + +exit: + width, okW := s.Width() + prec, okP := s.Precision() + + if !direct || (okW && width > 0) || okP { + // Construct format string from State s. + format := []byte{'%'} + if s.Flag('-') { + format = append(format, '-') + } + if s.Flag('+') { + format = append(format, '+') + } + if s.Flag(' ') { + format = append(format, ' ') + } + if okW { + format = strconv.AppendInt(format, int64(width), 10) + } + if okP { + format = append(format, '.') + format = strconv.AppendInt(format, int64(prec), 10) + } + format = append(format, string(verb)...) + fmt.Fprintf(s, string(format), p.buf.String()) + } else { + io.Copy(s, &p.buf) + } +} + +var detailSep = []byte("\n ") + +// state tracks error printing state. It implements fmt.State. +type state struct { + fmt.State + buf bytes.Buffer + + printDetail bool + inDetail bool + needColon bool + needNewline bool +} + +func (s *state) Write(b []byte) (n int, err error) { + if s.printDetail { + if len(b) == 0 { + return 0, nil + } + if s.inDetail && s.needColon { + s.needNewline = true + if b[0] == '\n' { + b = b[1:] + } + } + k := 0 + for i, c := range b { + if s.needNewline { + if s.inDetail && s.needColon { + s.buf.WriteByte(':') + s.needColon = false + } + s.buf.Write(detailSep) + s.needNewline = false + } + if c == '\n' { + s.buf.Write(b[k:i]) + k = i + 1 + s.needNewline = true + } + } + s.buf.Write(b[k:]) + if !s.inDetail { + s.needColon = true + } + } else if !s.inDetail { + s.buf.Write(b) + } + return len(b), nil +} + +// printer wraps a state to implement an xerrors.Printer. +type printer state + +func (s *printer) Print(args ...interface{}) { + if !s.inDetail || s.printDetail { + fmt.Fprint((*state)(s), args...) + } +} + +func (s *printer) Printf(format string, args ...interface{}) { + if !s.inDetail || s.printDetail { + fmt.Fprintf((*state)(s), format, args...) + } +} + +func (s *printer) Detail() bool { + s.inDetail = true + return s.printDetail +} diff --git a/vendor/golang.org/x/xerrors/codereview.cfg b/vendor/golang.org/x/xerrors/codereview.cfg new file mode 100644 index 000000000..3f8b14b64 --- /dev/null +++ b/vendor/golang.org/x/xerrors/codereview.cfg @@ -0,0 +1 @@ +issuerepo: golang/go diff --git a/vendor/golang.org/x/xerrors/doc.go b/vendor/golang.org/x/xerrors/doc.go new file mode 100644 index 000000000..eef99d9d5 --- /dev/null +++ b/vendor/golang.org/x/xerrors/doc.go @@ -0,0 +1,22 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package xerrors implements functions to manipulate errors. +// +// This package is based on the Go 2 proposal for error values: +// https://golang.org/design/29934-error-values +// +// These functions were incorporated into the standard library's errors package +// in Go 1.13: +// - Is +// - As +// - Unwrap +// +// Also, Errorf's %w verb was incorporated into fmt.Errorf. +// +// Use this package to get equivalent behavior in all supported Go versions. +// +// No other features of this package were included in Go 1.13, and at present +// there are no plans to include any of them. +package xerrors // import "golang.org/x/xerrors" diff --git a/vendor/golang.org/x/xerrors/errors.go b/vendor/golang.org/x/xerrors/errors.go new file mode 100644 index 000000000..e88d3772d --- /dev/null +++ b/vendor/golang.org/x/xerrors/errors.go @@ -0,0 +1,33 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +import "fmt" + +// errorString is a trivial implementation of error. +type errorString struct { + s string + frame Frame +} + +// New returns an error that formats as the given text. +// +// The returned error contains a Frame set to the caller's location and +// implements Formatter to show this information when printed with details. +func New(text string) error { + return &errorString{text, Caller(1)} +} + +func (e *errorString) Error() string { + return e.s +} + +func (e *errorString) Format(s fmt.State, v rune) { FormatError(e, s, v) } + +func (e *errorString) FormatError(p Printer) (next error) { + p.Print(e.s) + e.frame.Format(p) + return nil +} diff --git a/vendor/golang.org/x/xerrors/fmt.go b/vendor/golang.org/x/xerrors/fmt.go new file mode 100644 index 000000000..829862ddf --- /dev/null +++ b/vendor/golang.org/x/xerrors/fmt.go @@ -0,0 +1,187 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +import ( + "fmt" + "strings" + "unicode" + "unicode/utf8" + + "golang.org/x/xerrors/internal" +) + +const percentBangString = "%!" + +// Errorf formats according to a format specifier and returns the string as a +// value that satisfies error. +// +// The returned error includes the file and line number of the caller when +// formatted with additional detail enabled. If the last argument is an error +// the returned error's Format method will return it if the format string ends +// with ": %s", ": %v", or ": %w". If the last argument is an error and the +// format string ends with ": %w", the returned error implements an Unwrap +// method returning it. +// +// If the format specifier includes a %w verb with an error operand in a +// position other than at the end, the returned error will still implement an +// Unwrap method returning the operand, but the error's Format method will not +// return the wrapped error. +// +// It is invalid to include more than one %w verb or to supply it with an +// operand that does not implement the error interface. The %w verb is otherwise +// a synonym for %v. +func Errorf(format string, a ...interface{}) error { + format = formatPlusW(format) + // Support a ": %[wsv]" suffix, which works well with xerrors.Formatter. + wrap := strings.HasSuffix(format, ": %w") + idx, format2, ok := parsePercentW(format) + percentWElsewhere := !wrap && idx >= 0 + if !percentWElsewhere && (wrap || strings.HasSuffix(format, ": %s") || strings.HasSuffix(format, ": %v")) { + err := errorAt(a, len(a)-1) + if err == nil { + return &noWrapError{fmt.Sprintf(format, a...), nil, Caller(1)} + } + // TODO: this is not entirely correct. The error value could be + // printed elsewhere in format if it mixes numbered with unnumbered + // substitutions. With relatively small changes to doPrintf we can + // have it optionally ignore extra arguments and pass the argument + // list in its entirety. + msg := fmt.Sprintf(format[:len(format)-len(": %s")], a[:len(a)-1]...) + frame := Frame{} + if internal.EnableTrace { + frame = Caller(1) + } + if wrap { + return &wrapError{msg, err, frame} + } + return &noWrapError{msg, err, frame} + } + // Support %w anywhere. + // TODO: don't repeat the wrapped error's message when %w occurs in the middle. + msg := fmt.Sprintf(format2, a...) + if idx < 0 { + return &noWrapError{msg, nil, Caller(1)} + } + err := errorAt(a, idx) + if !ok || err == nil { + // Too many %ws or argument of %w is not an error. Approximate the Go + // 1.13 fmt.Errorf message. + return &noWrapError{fmt.Sprintf("%sw(%s)", percentBangString, msg), nil, Caller(1)} + } + frame := Frame{} + if internal.EnableTrace { + frame = Caller(1) + } + return &wrapError{msg, err, frame} +} + +func errorAt(args []interface{}, i int) error { + if i < 0 || i >= len(args) { + return nil + } + err, ok := args[i].(error) + if !ok { + return nil + } + return err +} + +// formatPlusW is used to avoid the vet check that will barf at %w. +func formatPlusW(s string) string { + return s +} + +// Return the index of the only %w in format, or -1 if none. +// Also return a rewritten format string with %w replaced by %v, and +// false if there is more than one %w. +// TODO: handle "%[N]w". +func parsePercentW(format string) (idx int, newFormat string, ok bool) { + // Loosely copied from golang.org/x/tools/go/analysis/passes/printf/printf.go. + idx = -1 + ok = true + n := 0 + sz := 0 + var isW bool + for i := 0; i < len(format); i += sz { + if format[i] != '%' { + sz = 1 + continue + } + // "%%" is not a format directive. + if i+1 < len(format) && format[i+1] == '%' { + sz = 2 + continue + } + sz, isW = parsePrintfVerb(format[i:]) + if isW { + if idx >= 0 { + ok = false + } else { + idx = n + } + // "Replace" the last character, the 'w', with a 'v'. + p := i + sz - 1 + format = format[:p] + "v" + format[p+1:] + } + n++ + } + return idx, format, ok +} + +// Parse the printf verb starting with a % at s[0]. +// Return how many bytes it occupies and whether the verb is 'w'. +func parsePrintfVerb(s string) (int, bool) { + // Assume only that the directive is a sequence of non-letters followed by a single letter. + sz := 0 + var r rune + for i := 1; i < len(s); i += sz { + r, sz = utf8.DecodeRuneInString(s[i:]) + if unicode.IsLetter(r) { + return i + sz, r == 'w' + } + } + return len(s), false +} + +type noWrapError struct { + msg string + err error + frame Frame +} + +func (e *noWrapError) Error() string { + return fmt.Sprint(e) +} + +func (e *noWrapError) Format(s fmt.State, v rune) { FormatError(e, s, v) } + +func (e *noWrapError) FormatError(p Printer) (next error) { + p.Print(e.msg) + e.frame.Format(p) + return e.err +} + +type wrapError struct { + msg string + err error + frame Frame +} + +func (e *wrapError) Error() string { + return fmt.Sprint(e) +} + +func (e *wrapError) Format(s fmt.State, v rune) { FormatError(e, s, v) } + +func (e *wrapError) FormatError(p Printer) (next error) { + p.Print(e.msg) + e.frame.Format(p) + return e.err +} + +func (e *wrapError) Unwrap() error { + return e.err +} diff --git a/vendor/golang.org/x/xerrors/format.go b/vendor/golang.org/x/xerrors/format.go new file mode 100644 index 000000000..1bc9c26b9 --- /dev/null +++ b/vendor/golang.org/x/xerrors/format.go @@ -0,0 +1,34 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +// A Formatter formats error messages. +type Formatter interface { + error + + // FormatError prints the receiver's first error and returns the next error in + // the error chain, if any. + FormatError(p Printer) (next error) +} + +// A Printer formats error messages. +// +// The most common implementation of Printer is the one provided by package fmt +// during Printf (as of Go 1.13). Localization packages such as golang.org/x/text/message +// typically provide their own implementations. +type Printer interface { + // Print appends args to the message output. + Print(args ...interface{}) + + // Printf writes a formatted string. + Printf(format string, args ...interface{}) + + // Detail reports whether error detail is requested. + // After the first call to Detail, all text written to the Printer + // is formatted as additional detail, or ignored when + // detail has not been requested. + // If Detail returns false, the caller can avoid printing the detail at all. + Detail() bool +} diff --git a/vendor/golang.org/x/xerrors/frame.go b/vendor/golang.org/x/xerrors/frame.go new file mode 100644 index 000000000..0de628ec5 --- /dev/null +++ b/vendor/golang.org/x/xerrors/frame.go @@ -0,0 +1,56 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +import ( + "runtime" +) + +// A Frame contains part of a call stack. +type Frame struct { + // Make room for three PCs: the one we were asked for, what it called, + // and possibly a PC for skipPleaseUseCallersFrames. See: + // https://go.googlesource.com/go/+/032678e0fb/src/runtime/extern.go#169 + frames [3]uintptr +} + +// Caller returns a Frame that describes a frame on the caller's stack. +// The argument skip is the number of frames to skip over. +// Caller(0) returns the frame for the caller of Caller. +func Caller(skip int) Frame { + var s Frame + runtime.Callers(skip+1, s.frames[:]) + return s +} + +// location reports the file, line, and function of a frame. +// +// The returned function may be "" even if file and line are not. +func (f Frame) location() (function, file string, line int) { + frames := runtime.CallersFrames(f.frames[:]) + if _, ok := frames.Next(); !ok { + return "", "", 0 + } + fr, ok := frames.Next() + if !ok { + return "", "", 0 + } + return fr.Function, fr.File, fr.Line +} + +// Format prints the stack as error detail. +// It should be called from an error's Format implementation +// after printing any other error detail. +func (f Frame) Format(p Printer) { + if p.Detail() { + function, file, line := f.location() + if function != "" { + p.Printf("%s\n ", function) + } + if file != "" { + p.Printf("%s:%d\n", file, line) + } + } +} diff --git a/vendor/golang.org/x/xerrors/internal/internal.go b/vendor/golang.org/x/xerrors/internal/internal.go new file mode 100644 index 000000000..89f4eca5d --- /dev/null +++ b/vendor/golang.org/x/xerrors/internal/internal.go @@ -0,0 +1,8 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package internal + +// EnableTrace indicates whether stack information should be recorded in errors. +var EnableTrace = true diff --git a/vendor/golang.org/x/xerrors/wrap.go b/vendor/golang.org/x/xerrors/wrap.go new file mode 100644 index 000000000..9a3b51037 --- /dev/null +++ b/vendor/golang.org/x/xerrors/wrap.go @@ -0,0 +1,106 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +import ( + "reflect" +) + +// A Wrapper provides context around another error. +type Wrapper interface { + // Unwrap returns the next error in the error chain. + // If there is no next error, Unwrap returns nil. + Unwrap() error +} + +// Opaque returns an error with the same error formatting as err +// but that does not match err and cannot be unwrapped. +func Opaque(err error) error { + return noWrapper{err} +} + +type noWrapper struct { + error +} + +func (e noWrapper) FormatError(p Printer) (next error) { + if f, ok := e.error.(Formatter); ok { + return f.FormatError(p) + } + p.Print(e.error) + return nil +} + +// Unwrap returns the result of calling the Unwrap method on err, if err implements +// Unwrap. Otherwise, Unwrap returns nil. +func Unwrap(err error) error { + u, ok := err.(Wrapper) + if !ok { + return nil + } + return u.Unwrap() +} + +// Is reports whether any error in err's chain matches target. +// +// An error is considered to match a target if it is equal to that target or if +// it implements a method Is(error) bool such that Is(target) returns true. +func Is(err, target error) bool { + if target == nil { + return err == target + } + + isComparable := reflect.TypeOf(target).Comparable() + for { + if isComparable && err == target { + return true + } + if x, ok := err.(interface{ Is(error) bool }); ok && x.Is(target) { + return true + } + // TODO: consider supporing target.Is(err). This would allow + // user-definable predicates, but also may allow for coping with sloppy + // APIs, thereby making it easier to get away with them. + if err = Unwrap(err); err == nil { + return false + } + } +} + +// As finds the first error in err's chain that matches the type to which target +// points, and if so, sets the target to its value and returns true. An error +// matches a type if it is assignable to the target type, or if it has a method +// As(interface{}) bool such that As(target) returns true. As will panic if target +// is not a non-nil pointer to a type which implements error or is of interface type. +// +// The As method should set the target to its value and return true if err +// matches the type to which target points. +func As(err error, target interface{}) bool { + if target == nil { + panic("errors: target cannot be nil") + } + val := reflect.ValueOf(target) + typ := val.Type() + if typ.Kind() != reflect.Ptr || val.IsNil() { + panic("errors: target must be a non-nil pointer") + } + if e := typ.Elem(); e.Kind() != reflect.Interface && !e.Implements(errorType) { + panic("errors: *target must be interface or implement error") + } + targetType := typ.Elem() + for err != nil { + if reflect.TypeOf(err).AssignableTo(targetType) { + val.Elem().Set(reflect.ValueOf(err)) + return true + } + if x, ok := err.(interface{ As(interface{}) bool }); ok && x.As(target) { + return true + } + err = Unwrap(err) + } + return false +} + +var errorType = reflect.TypeOf((*error)(nil)).Elem() diff --git a/vendor/google.golang.org/grpc/go.mod b/vendor/google.golang.org/grpc/go.mod deleted file mode 100644 index b177cfa66..000000000 --- a/vendor/google.golang.org/grpc/go.mod +++ /dev/null @@ -1,17 +0,0 @@ -module google.golang.org/grpc - -go 1.11 - -require ( - github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403 - github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d - github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b - github.com/golang/protobuf v1.4.2 - github.com/google/go-cmp v0.5.0 - github.com/google/uuid v1.1.2 - golang.org/x/net v0.0.0-20190311183353-d8887717615a - golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be - golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a - google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 - google.golang.org/protobuf v1.25.0 -) diff --git a/vendor/google.golang.org/grpc/go.sum b/vendor/google.golang.org/grpc/go.sum deleted file mode 100644 index 24d2976ab..000000000 --- a/vendor/google.golang.org/grpc/go.sum +++ /dev/null @@ -1,96 +0,0 @@ -cloud.google.com/go v0.26.0 h1:e0WKqKTd5BnrG8aKH3J3h+QvEIQtSUcf2n5UZ5ZgLtQ= -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/census-instrumentation/opencensus-proto v0.2.1 h1:glEXhBS5PSLLv4IXzLA5yPRVX4bilULVyxxbrfOtDAk= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403 h1:cqQfy1jclcSy/FwLjemeg3SR1yaINm74aQyupQ0Bl8M= -github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d h1:QyzYnTnPE15SQyUeqU6qLbWxMkwyAyu+vGksa0b7j00= -github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk= -github.com/envoyproxy/protoc-gen-validate v0.1.0 h1:EQciDnbrYxy13PgWoY8AqoxGiPrpgBZ1R8UNe3ddc+A= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= -github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0= -github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.0 h1:/QaMHBdZ26BB3SSst0Iwl10Epc+xhTquomWX0oZEB6w= -github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y= -github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= -github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a h1:oWX7TPOiFAMXLq8o0ikBYfCJVlRHBcsciT5bXOrH628= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be h1:vEDujvNQGv4jgYKudGeI/+DAX4Jffq6hpD55MmoEvKs= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 h1:+kGHl1aib/qcwaRi1CbqBZ1rk19r85MNUf8HaBghugY= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= -google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c= -google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/vendor/gopkg.in/yaml.v2/go.mod b/vendor/gopkg.in/yaml.v2/go.mod deleted file mode 100644 index 1934e8769..000000000 --- a/vendor/gopkg.in/yaml.v2/go.mod +++ /dev/null @@ -1,5 +0,0 @@ -module "gopkg.in/yaml.v2" - -require ( - "gopkg.in/check.v1" v0.0.0-20161208181325-20d25e280405 -) diff --git a/vendor/gopkg.in/yaml.v3/go.mod b/vendor/gopkg.in/yaml.v3/go.mod deleted file mode 100644 index f407ea321..000000000 --- a/vendor/gopkg.in/yaml.v3/go.mod +++ /dev/null @@ -1,5 +0,0 @@ -module "gopkg.in/yaml.v3" - -require ( - "gopkg.in/check.v1" v0.0.0-20161208181325-20d25e280405 -) diff --git a/vendor/k8s.io/klog/v2/go.mod b/vendor/k8s.io/klog/v2/go.mod deleted file mode 100644 index e396e31c0..000000000 --- a/vendor/k8s.io/klog/v2/go.mod +++ /dev/null @@ -1,5 +0,0 @@ -module k8s.io/klog/v2 - -go 1.13 - -require github.com/go-logr/logr v0.2.0 diff --git a/vendor/k8s.io/klog/v2/go.sum b/vendor/k8s.io/klog/v2/go.sum deleted file mode 100644 index 8dfa78542..000000000 --- a/vendor/k8s.io/klog/v2/go.sum +++ /dev/null @@ -1,2 +0,0 @@ -github.com/go-logr/logr v0.2.0 h1:QvGt2nLcHH0WK9orKa+ppBPAxREcH364nPUedEpK0TY= -github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU= diff --git a/vendor/modules.txt b/vendor/modules.txt index 79560dda6..c16c599ea 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -1,15 +1,20 @@ # github.com/DataDog/zstd v1.4.5 +## explicit github.com/DataDog/zstd # github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d ## explicit github.com/StackExchange/wmi # github.com/beorn7/perks v1.0.0 +## explicit; go 1.12 github.com/beorn7/perks/quantile # github.com/cenkalti/backoff/v4 v4.1.1 +## explicit; go 1.13 github.com/cenkalti/backoff/v4 # github.com/cespare/xxhash/v2 v2.1.1 +## explicit; go 1.11 github.com/cespare/xxhash/v2 # github.com/cockroachdb/errors v1.8.1 +## explicit; go 1.13 github.com/cockroachdb/errors github.com/cockroachdb/errors/assert github.com/cockroachdb/errors/barriers @@ -29,9 +34,10 @@ github.com/cockroachdb/errors/stdstrings github.com/cockroachdb/errors/telemetrykeys github.com/cockroachdb/errors/withstack # github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f -github.com/cockroachdb/logtags -# github.com/cockroachdb/pebble v0.0.0-20210622171231-4fcf40933159 ## explicit +github.com/cockroachdb/logtags +# github.com/cockroachdb/pebble v0.0.0-20210817201821-5e4468e97817 +## explicit; go 1.13 github.com/cockroachdb/pebble github.com/cockroachdb/pebble/internal/arenaskl github.com/cockroachdb/pebble/internal/base @@ -49,40 +55,50 @@ github.com/cockroachdb/pebble/internal/private github.com/cockroachdb/pebble/internal/rangedel github.com/cockroachdb/pebble/internal/rate github.com/cockroachdb/pebble/internal/rawalloc -github.com/cockroachdb/pebble/internal/record +github.com/cockroachdb/pebble/record github.com/cockroachdb/pebble/sstable github.com/cockroachdb/pebble/vfs # github.com/cockroachdb/redact v1.0.8 +## explicit; go 1.14 github.com/cockroachdb/redact github.com/cockroachdb/redact/internal github.com/cockroachdb/redact/internal/fmtsort # github.com/cockroachdb/sentry-go v0.6.1-cockroachdb.2 +## explicit; go 1.12 github.com/cockroachdb/sentry-go # github.com/coreos/go-semver v0.2.0 +## explicit github.com/coreos/go-semver/semver # github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7 +## explicit github.com/coreos/go-systemd/journal # github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf +## explicit github.com/coreos/pkg/capnslog # github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d +## explicit; go 1.12 github.com/cpuguy83/go-md2man/v2/md2man # github.com/davecgh/go-spew v1.1.1 +## explicit github.com/davecgh/go-spew/spew # github.com/docker/go-units v0.4.0 ## explicit github.com/docker/go-units # github.com/dustin/go-humanize v1.0.0 +## explicit github.com/dustin/go-humanize # github.com/go-logr/logr v0.2.0 +## explicit; go 1.14 github.com/go-logr/logr # github.com/go-ole/go-ole v1.2.4 -## explicit +## explicit; go 1.12 github.com/go-ole/go-ole github.com/go-ole/go-ole/oleutil # github.com/gogo/googleapis v0.0.0-20180223154316-0cd9801be74a +## explicit github.com/gogo/googleapis/google/rpc # github.com/gogo/protobuf v1.3.2 -## explicit +## explicit; go 1.15 github.com/gogo/protobuf/gogoproto github.com/gogo/protobuf/jsonpb github.com/gogo/protobuf/proto @@ -90,13 +106,14 @@ github.com/gogo/protobuf/protoc-gen-gogo/descriptor github.com/gogo/protobuf/sortkeys github.com/gogo/protobuf/types # github.com/gogo/status v1.1.0 -## explicit +## explicit; go 1.12 github.com/gogo/status # github.com/golang/mock v1.5.0 -## explicit +## explicit; go 1.11 github.com/golang/mock/gomock github.com/golang/mock/mockgen/model # github.com/golang/protobuf v1.5.2 +## explicit; go 1.9 github.com/golang/protobuf/descriptor github.com/golang/protobuf/jsonpb github.com/golang/protobuf/proto @@ -107,65 +124,84 @@ github.com/golang/protobuf/ptypes/duration github.com/golang/protobuf/ptypes/timestamp github.com/golang/protobuf/ptypes/wrappers # github.com/golang/snappy v0.0.3 +## explicit github.com/golang/snappy # github.com/google/gofuzz v1.2.0 -## explicit +## explicit; go 1.12 github.com/google/gofuzz github.com/google/gofuzz/bytesource # github.com/googleapis/gnostic v0.4.1 +## explicit; go 1.12 github.com/googleapis/gnostic/compiler github.com/googleapis/gnostic/extensions github.com/googleapis/gnostic/openapiv2 # github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 +## explicit github.com/gopherjs/gopherjs/js # github.com/grpc-ecosystem/grpc-gateway v1.16.0 +## explicit; go 1.14 github.com/grpc-ecosystem/grpc-gateway/internal github.com/grpc-ecosystem/grpc-gateway/runtime github.com/grpc-ecosystem/grpc-gateway/utilities # github.com/imdario/mergo v0.3.5 +## explicit github.com/imdario/mergo # github.com/json-iterator/go v1.1.10 +## explicit; go 1.12 github.com/json-iterator/go # github.com/jstemmer/go-junit-report v0.9.1 -## explicit +## explicit; go 1.2 github.com/jstemmer/go-junit-report/formatter github.com/jstemmer/go-junit-report/parser # github.com/jtolds/gls v4.20.0+incompatible +## explicit github.com/jtolds/gls # github.com/klauspost/compress v1.11.7 +## explicit; go 1.13 github.com/klauspost/compress/fse github.com/klauspost/compress/huff0 github.com/klauspost/compress/snappy github.com/klauspost/compress/zstd github.com/klauspost/compress/zstd/internal/xxhash # github.com/kr/pretty v0.2.0 +## explicit; go 1.12 github.com/kr/pretty # github.com/kr/text v0.1.0 +## explicit github.com/kr/text # github.com/matttproud/golang_protobuf_extensions v1.0.1 +## explicit github.com/matttproud/golang_protobuf_extensions/pbutil # github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd +## explicit github.com/modern-go/concurrent # github.com/modern-go/reflect2 v1.0.1 +## explicit github.com/modern-go/reflect2 # github.com/pkg/errors v0.9.1 ## explicit github.com/pkg/errors # github.com/pmezard/go-difflib v1.0.0 +## explicit github.com/pmezard/go-difflib/difflib # github.com/prometheus/client_golang v1.0.0 +## explicit github.com/prometheus/client_golang/prometheus github.com/prometheus/client_golang/prometheus/internal # github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 +## explicit; go 1.9 github.com/prometheus/client_model/go # github.com/prometheus/common v0.4.1 +## explicit github.com/prometheus/common/expfmt github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg github.com/prometheus/common/model # github.com/prometheus/procfs v0.0.2 +## explicit github.com/prometheus/procfs github.com/prometheus/procfs/internal/fs # github.com/russross/blackfriday/v2 v2.0.1 +## explicit github.com/russross/blackfriday/v2 # github.com/shirou/gopsutil v2.20.9+incompatible ## explicit @@ -175,9 +211,10 @@ github.com/shirou/gopsutil/mem github.com/shirou/gopsutil/net github.com/shirou/gopsutil/process # github.com/shurcooL/sanitized_anchor_name v1.0.0 +## explicit github.com/shurcooL/sanitized_anchor_name # github.com/smartystreets/assertions v1.2.0 -## explicit +## explicit; go 1.13 github.com/smartystreets/assertions github.com/smartystreets/assertions/internal/go-diff/diffmatchpatch github.com/smartystreets/assertions/internal/go-render/render @@ -191,18 +228,20 @@ github.com/smartystreets/goconvey/convey/reporting ## explicit github.com/soheilhy/cmux # github.com/spf13/pflag v1.0.5 +## explicit; go 1.12 github.com/spf13/pflag # github.com/stretchr/testify v1.7.0 -## explicit +## explicit; go 1.13 github.com/stretchr/testify/assert github.com/stretchr/testify/require # github.com/urfave/cli/v2 v2.3.0 -## explicit +## explicit; go 1.11 github.com/urfave/cli/v2 # github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 +## explicit github.com/xiang90/probing # go.etcd.io/etcd v0.0.0-20201125193152-8a03d2e9614b -## explicit +## explicit; go 1.14 go.etcd.io/etcd/etcdserver/api/rafthttp go.etcd.io/etcd/etcdserver/api/snap go.etcd.io/etcd/etcdserver/api/snap/snappb @@ -226,10 +265,10 @@ go.etcd.io/etcd/version go.etcd.io/etcd/wal go.etcd.io/etcd/wal/walpb # go.opentelemetry.io/contrib v0.21.0 -## explicit +## explicit; go 1.15 go.opentelemetry.io/contrib # go.opentelemetry.io/otel v1.0.0-RC1 -## explicit +## explicit; go 1.15 go.opentelemetry.io/otel go.opentelemetry.io/otel/attribute go.opentelemetry.io/otel/baggage @@ -240,50 +279,53 @@ go.opentelemetry.io/otel/internal/global go.opentelemetry.io/otel/propagation go.opentelemetry.io/otel/semconv/v1.4.0 # go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.21.0 +## explicit; go 1.15 go.opentelemetry.io/otel/exporters/otlp/otlpmetric go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal/connection go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal/metrictransform go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal/otlpconfig # go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.21.0 -## explicit +## explicit; go 1.15 go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc # go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.0.0-RC1 +## explicit; go 1.15 go.opentelemetry.io/otel/exporters/otlp/otlptrace go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/connection go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/otlpconfig go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform # go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.0.0-RC1 -## explicit +## explicit; go 1.15 go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc # go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v0.21.0 -## explicit +## explicit; go 1.15 go.opentelemetry.io/otel/exporters/stdout/stdoutmetric # go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.0.0-RC1 -## explicit +## explicit; go 1.15 go.opentelemetry.io/otel/exporters/stdout/stdouttrace # go.opentelemetry.io/otel/internal/metric v0.21.0 +## explicit; go 1.15 go.opentelemetry.io/otel/internal/metric go.opentelemetry.io/otel/internal/metric/global # go.opentelemetry.io/otel/metric v0.21.0 -## explicit +## explicit; go 1.15 go.opentelemetry.io/otel/metric go.opentelemetry.io/otel/metric/global go.opentelemetry.io/otel/metric/number go.opentelemetry.io/otel/metric/registry go.opentelemetry.io/otel/metric/unit # go.opentelemetry.io/otel/sdk v1.0.0-RC1 -## explicit +## explicit; go 1.15 go.opentelemetry.io/otel/sdk/instrumentation go.opentelemetry.io/otel/sdk/internal go.opentelemetry.io/otel/sdk/resource go.opentelemetry.io/otel/sdk/trace go.opentelemetry.io/otel/sdk/trace/tracetest # go.opentelemetry.io/otel/sdk/export/metric v0.21.0 -## explicit +## explicit; go 1.15 go.opentelemetry.io/otel/sdk/export/metric go.opentelemetry.io/otel/sdk/export/metric/aggregation # go.opentelemetry.io/otel/sdk/metric v0.21.0 -## explicit +## explicit; go 1.15 go.opentelemetry.io/otel/sdk/metric go.opentelemetry.io/otel/sdk/metric/aggregator go.opentelemetry.io/otel/sdk/metric/aggregator/exact @@ -296,9 +338,10 @@ go.opentelemetry.io/otel/sdk/metric/controller/time go.opentelemetry.io/otel/sdk/metric/processor/basic go.opentelemetry.io/otel/sdk/metric/selector/simple # go.opentelemetry.io/otel/trace v1.0.0-RC1 -## explicit +## explicit; go 1.15 go.opentelemetry.io/otel/trace # go.opentelemetry.io/proto/otlp v0.9.0 +## explicit; go 1.14 go.opentelemetry.io/proto/otlp/collector/metrics/v1 go.opentelemetry.io/proto/otlp/collector/trace/v1 go.opentelemetry.io/proto/otlp/common/v1 @@ -306,22 +349,23 @@ go.opentelemetry.io/proto/otlp/metrics/v1 go.opentelemetry.io/proto/otlp/resource/v1 go.opentelemetry.io/proto/otlp/trace/v1 # go.uber.org/atomic v1.7.0 +## explicit; go 1.13 go.uber.org/atomic # go.uber.org/automaxprocs v1.4.0 -## explicit +## explicit; go 1.13 go.uber.org/automaxprocs go.uber.org/automaxprocs/internal/cgroups go.uber.org/automaxprocs/internal/runtime go.uber.org/automaxprocs/maxprocs # go.uber.org/goleak v1.1.10 -## explicit +## explicit; go 1.13 go.uber.org/goleak go.uber.org/goleak/internal/stack # go.uber.org/multierr v1.7.0 -## explicit +## explicit; go 1.14 go.uber.org/multierr # go.uber.org/zap v1.16.0 -## explicit +## explicit; go 1.13 go.uber.org/zap go.uber.org/zap/buffer go.uber.org/zap/internal/bufferpool @@ -329,13 +373,21 @@ go.uber.org/zap/internal/color go.uber.org/zap/internal/exit go.uber.org/zap/zapcore # golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 +## explicit; go 1.11 golang.org/x/crypto/ssh/terminal # golang.org/x/exp v0.0.0-20200513190911-00229845015e +## explicit; go 1.12 golang.org/x/exp/rand # golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f +## explicit; go 1.11 golang.org/x/lint golang.org/x/lint/golint +# golang.org/x/mod v0.3.0 +## explicit; go 1.12 +golang.org/x/mod/module +golang.org/x/mod/semver # golang.org/x/net v0.0.0-20201021035429-f5854403a974 +## explicit; go 1.11 golang.org/x/net/context golang.org/x/net/context/ctxhttp golang.org/x/net/http/httpguts @@ -345,6 +397,7 @@ golang.org/x/net/idna golang.org/x/net/internal/timeseries golang.org/x/net/trace # golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d +## explicit; go 1.11 golang.org/x/oauth2 golang.org/x/oauth2/internal # golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 @@ -352,23 +405,44 @@ golang.org/x/oauth2/internal golang.org/x/sync/errgroup golang.org/x/sync/singleflight # golang.org/x/sys v0.0.0-20210304124612-50617c2ba197 -## explicit +## explicit; go 1.12 golang.org/x/sys/internal/unsafeheader golang.org/x/sys/unix golang.org/x/sys/windows # golang.org/x/text v0.3.5 -## explicit +## explicit; go 1.11 golang.org/x/text/secure/bidirule golang.org/x/text/transform golang.org/x/text/unicode/bidi golang.org/x/text/unicode/norm # golang.org/x/time v0.0.0-20191024005414-555d28b269f0 +## explicit golang.org/x/time/rate # golang.org/x/tools v0.0.0-20210106214847-113979e3529a +## explicit; go 1.12 +golang.org/x/tools/cmd/goimports +golang.org/x/tools/cmd/stringer golang.org/x/tools/go/ast/astutil golang.org/x/tools/go/gcexportdata golang.org/x/tools/go/internal/gcimporter +golang.org/x/tools/go/internal/packagesdriver +golang.org/x/tools/go/packages +golang.org/x/tools/internal/event +golang.org/x/tools/internal/event/core +golang.org/x/tools/internal/event/keys +golang.org/x/tools/internal/event/label +golang.org/x/tools/internal/fastwalk +golang.org/x/tools/internal/gocommand +golang.org/x/tools/internal/gopathwalk +golang.org/x/tools/internal/imports +golang.org/x/tools/internal/packagesinternal +golang.org/x/tools/internal/typesinternal +# golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 +## explicit; go 1.11 +golang.org/x/xerrors +golang.org/x/xerrors/internal # google.golang.org/appengine v1.6.5 +## explicit; go 1.11 google.golang.org/appengine/internal google.golang.org/appengine/internal/base google.golang.org/appengine/internal/datastore @@ -377,12 +451,13 @@ google.golang.org/appengine/internal/remote_api google.golang.org/appengine/internal/urlfetch google.golang.org/appengine/urlfetch # google.golang.org/genproto v0.0.0-20200806141610-86f49bd18e98 +## explicit; go 1.11 google.golang.org/genproto/googleapis/api/httpbody google.golang.org/genproto/googleapis/rpc/errdetails google.golang.org/genproto/googleapis/rpc/status google.golang.org/genproto/protobuf/field_mask # google.golang.org/grpc v1.38.0 -## explicit +## explicit; go 1.11 google.golang.org/grpc google.golang.org/grpc/attributes google.golang.org/grpc/backoff @@ -431,9 +506,9 @@ google.golang.org/grpc/stats google.golang.org/grpc/status google.golang.org/grpc/tap # google.golang.org/grpc/examples v0.0.0-20210521225445-359fdbb7b310 -## explicit +## explicit; go 1.11 # google.golang.org/protobuf v1.26.0 -## explicit +## explicit; go 1.9 google.golang.org/protobuf/encoding/protojson google.golang.org/protobuf/encoding/prototext google.golang.org/protobuf/encoding/protowire @@ -469,16 +544,19 @@ google.golang.org/protobuf/types/known/fieldmaskpb google.golang.org/protobuf/types/known/timestamppb google.golang.org/protobuf/types/known/wrapperspb # gopkg.in/inf.v0 v0.9.1 +## explicit gopkg.in/inf.v0 # gopkg.in/natefinch/lumberjack.v2 v2.0.0 ## explicit gopkg.in/natefinch/lumberjack.v2 # gopkg.in/yaml.v2 v2.3.0 +## explicit gopkg.in/yaml.v2 # gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b +## explicit gopkg.in/yaml.v3 # k8s.io/api v0.19.0 -## explicit +## explicit; go 1.15 k8s.io/api/admissionregistration/v1 k8s.io/api/admissionregistration/v1beta1 k8s.io/api/apps/v1 @@ -521,7 +599,7 @@ k8s.io/api/storage/v1 k8s.io/api/storage/v1alpha1 k8s.io/api/storage/v1beta1 # k8s.io/apimachinery v0.19.0 -## explicit +## explicit; go 1.15 k8s.io/apimachinery/pkg/api/errors k8s.io/apimachinery/pkg/api/meta k8s.io/apimachinery/pkg/api/resource @@ -558,7 +636,7 @@ k8s.io/apimachinery/pkg/version k8s.io/apimachinery/pkg/watch k8s.io/apimachinery/third_party/forked/golang/reflect # k8s.io/client-go v0.19.0 -## explicit +## explicit; go 1.15 k8s.io/client-go/discovery k8s.io/client-go/kubernetes k8s.io/client-go/kubernetes/scheme @@ -625,10 +703,14 @@ k8s.io/client-go/util/homedir k8s.io/client-go/util/keyutil k8s.io/client-go/util/workqueue # k8s.io/klog/v2 v2.2.0 +## explicit; go 1.13 k8s.io/klog/v2 # k8s.io/utils v0.0.0-20200729134348-d5654de09c73 +## explicit; go 1.12 k8s.io/utils/integer # sigs.k8s.io/structured-merge-diff/v4 v4.0.1 +## explicit; go 1.13 sigs.k8s.io/structured-merge-diff/v4/value # sigs.k8s.io/yaml v1.2.0 +## explicit; go 1.12 sigs.k8s.io/yaml diff --git a/vendor/sigs.k8s.io/yaml/go.mod b/vendor/sigs.k8s.io/yaml/go.mod deleted file mode 100644 index 7224f3497..000000000 --- a/vendor/sigs.k8s.io/yaml/go.mod +++ /dev/null @@ -1,8 +0,0 @@ -module sigs.k8s.io/yaml - -go 1.12 - -require ( - github.com/davecgh/go-spew v1.1.1 - gopkg.in/yaml.v2 v2.2.8 -) diff --git a/vendor/sigs.k8s.io/yaml/go.sum b/vendor/sigs.k8s.io/yaml/go.sum deleted file mode 100644 index 76e49483a..000000000 --- a/vendor/sigs.k8s.io/yaml/go.sum +++ /dev/null @@ -1,9 +0,0 @@ -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo= -gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= -gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=