feat(all): auto-regenerate discovery clients #2862

Merged 1 commit on Nov 7, 2024
83 changes: 43 additions & 40 deletions chat/v1/chat-api.json

Large diffs are not rendered by default.

116 changes: 62 additions & 54 deletions chat/v1/chat-gen.go

Large diffs are not rendered by default.

103 changes: 98 additions & 5 deletions container/v1/container-api.json
@@ -2540,7 +2540,7 @@
}
}
},
"revision": "20241017",
"revision": "20241024",
"rootUrl": "https://container.googleapis.com/",
"schemas": {
"AcceleratorConfig": {
@@ -3747,6 +3747,10 @@
"$ref": "NodeKubeletConfig",
"description": "The desired node kubelet config for all auto-provisioned node pools in autopilot clusters and node auto-provisioning enabled clusters."
},
"desiredNodePoolAutoConfigLinuxNodeConfig": {
"$ref": "LinuxNodeConfig",
"description": "The desired Linux node config for all auto-provisioned node pools in autopilot clusters and node auto-provisioning enabled clusters. Currently only `cgroup_mode` can be set here."
},
"desiredNodePoolAutoConfigNetworkTags": {
"$ref": "NetworkTags",
"description": "The desired network tags that apply to all auto-provisioned node pools in autopilot clusters and node auto-provisioning enabled clusters."
@@ -5631,6 +5635,20 @@
"format": "int32",
"type": "integer"
},
"localSsdEncryptionMode": {
"description": "Specifies which method should be used for encrypting the Local SSDs attahced to the node.",
"enum": [
"LOCAL_SSD_ENCRYPTION_MODE_UNSPECIFIED",
"STANDARD_ENCRYPTION",
"EPHEMERAL_KEY_ENCRYPTION"
],
"enumDescriptions": [
"The given node will be encrypted using keys managed by Google infrastructure and the keys will be deleted when the node is deleted.",
"The given node will be encrypted using keys managed by Google infrastructure and the keys will be deleted when the node is deleted.",
"The given node will opt-in for using ephemeral key for encryption of Local SSDs. The Local SSDs will not be able to recover data in case of node crash."
],
"type": "string"
},
"loggingConfig": {
"$ref": "NodePoolLoggingConfig",
"description": "Logging configuration."
@@ -6009,6 +6027,11 @@
"description": "Node pool configs that apply to all auto-provisioned node pools in autopilot clusters and node auto-provisioning enabled clusters.",
"id": "NodePoolAutoConfig",
"properties": {
"linuxNodeConfig": {
"$ref": "LinuxNodeConfig",
"description": "Output only. Configuration options for Linux nodes.",
"readOnly": true
},
"networkTags": {
"$ref": "NetworkTags",
"description": "The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls and are specified by the client during cluster creation. Each tag within the list must comply with RFC1035."
@@ -6051,22 +6074,22 @@
"type": "string"
},
"maxNodeCount": {
"description": "Maximum number of nodes for one location in the NodePool. Must be \u003e= min_node_count. There has to be enough quota to scale up the cluster.",
"description": "Maximum number of nodes for one location in the node pool. Must be \u003e= min_node_count. There has to be enough quota to scale up the cluster.",
"format": "int32",
"type": "integer"
},
"minNodeCount": {
"description": "Minimum number of nodes for one location in the NodePool. Must be \u003e= 1 and \u003c= max_node_count.",
"description": "Minimum number of nodes for one location in the node pool. Must be greater than or equal to 0 and less than or equal to max_node_count.",
"format": "int32",
"type": "integer"
},
"totalMaxNodeCount": {
"description": "Maximum number of nodes in the node pool. Must be greater than total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.",
"description": "Maximum number of nodes in the node pool. Must be greater than or equal to total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.",
"format": "int32",
"type": "integer"
},
"totalMinNodeCount": {
"description": "Minimum number of nodes in the node pool. Must be greater than 1 less than total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.",
"description": "Minimum number of nodes in the node pool. Must be greater than or equal to 0 and less than or equal to total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.",
"format": "int32",
"type": "integer"
}
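
The reworded autoscaling descriptions separate the per-location limits (min_node_count / max_node_count) from the pool-wide limits (total_min_node_count / total_max_node_count); the two pairs are mutually exclusive. Sketches of both variants using the existing Go struct, with illustrative values:

func exampleAutoscaling() (perLocation, poolWide *container.NodePoolAutoscaling) {
    // Per-location limits: 1 to 5 nodes in each of the node pool's locations.
    perLocation = &container.NodePoolAutoscaling{
        Enabled:      true,
        MinNodeCount: 1,
        MaxNodeCount: 5,
    }
    // Pool-wide limits: 0 to 12 nodes in total, however they are spread across
    // locations. Do not combine these with the per-location fields above.
    poolWide = &container.NodePoolAutoscaling{
        Enabled:           true,
        TotalMaxNodeCount: 12,
        LocationPolicy:    "BALANCED",
        // TotalMinNodeCount of 0 is Go's zero value, so force it onto the wire.
        ForceSendFields: []string{"TotalMinNodeCount"},
    }
    return perLocation, poolWide
}
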
@@ -7974,6 +7997,76 @@
},
"type": "object"
},
"UpgradeInfoEvent": {
"description": "UpgradeInfoEvent is a notification sent to customers about the upgrade information of a resource.",
"id": "UpgradeInfoEvent",
"properties": {
"currentVersion": {
"description": "The current version before the upgrade.",
"type": "string"
},
"description": {
"description": "A brief description of the event.",
"type": "string"
},
"endTime": {
"description": "The time when the operation ended.",
"format": "google-datetime",
"type": "string"
},
"operation": {
"description": "The operation associated with this upgrade.",
"type": "string"
},
"resource": {
"description": "Optional relative path to the resource. For example in node pool upgrades, the relative path of the node pool.",
"type": "string"
},
"resourceType": {
"description": "The resource type associated with the upgrade.",
"enum": [
"UPGRADE_RESOURCE_TYPE_UNSPECIFIED",
"MASTER",
"NODE_POOL"
],
"enumDescriptions": [
"Default value. This shouldn't be used.",
"Master / control plane",
"Node pool"
],
"type": "string"
},
"startTime": {
"description": "The time when the operation was started.",
"format": "google-datetime",
"type": "string"
},
"state": {
"description": "Output only. The state of the upgrade.",
"enum": [
"STATE_UNSPECIFIED",
"STARTED",
"SUCCEEDED",
"FAILED",
"CANCELED"
],
"enumDescriptions": [
"STATE_UNSPECIFIED indicates the state is unspecified.",
"STARTED indicates the upgrade has started.",
"SUCCEEDED indicates the upgrade has completed successfully.",
"FAILED indicates the upgrade has failed.",
"CANCELED indicates the upgrade has canceled."
],
"readOnly": true,
"type": "string"
},
"targetVersion": {
"description": "The target version for the upgrade.",
"type": "string"
}
},
"type": "object"
},
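
The regenerated Go package should carry a matching UpgradeInfoEvent struct whose JSON tags mirror these property names, so a notification payload that follows this schema can be decoded directly. A sketch, assuming the payload arrives as plain JSON in this shape and with encoding/json added to the imports of the first sketch:

func decodeUpgradeInfoEvent(payload []byte) (*container.UpgradeInfoEvent, error) {
    var ev container.UpgradeInfoEvent
    if err := json.Unmarshal(payload, &ev); err != nil {
        return nil, err
    }
    // e.g. "NODE_POOL my-pool: 1.30.3-gke.100 -> 1.30.5-gke.200, state STARTED"
    log.Printf("%s %s: %s -> %s, state %s",
        ev.ResourceType, ev.Resource, ev.CurrentVersion, ev.TargetVersion, ev.State)
    return &ev, nil
}
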
"UpgradeSettings": {
"description": "These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. These upgrade settings configure the upgrade strategy for the node pool. Use strategy to switch between the strategies applied to the node pool. If the strategy is ROLLING, use max_surge and max_unavailable to control the level of parallelism and the level of disruption caused by upgrade. 1. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. 2. maxUnavailable controls the number of nodes that can be simultaneously unavailable. 3. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). If the strategy is BLUE_GREEN, use blue_green_settings to configure the blue-green upgrade related settings. 1. standard_rollout_policy is the default policy. The policy is used to control the way blue pool gets drained. The draining is executed in the batch mode. The batch size could be specified as either percentage of the node pool size or the number of nodes. batch_soak_duration is the soak time after each batch gets drained. 2. node_pool_soak_duration is the soak time after all blue nodes are drained. After this period, the blue pool nodes will be deleted.",
"id": "UpgradeSettings",
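
The UpgradeSettings description above walks through a rolling upgrade of a 5-node pool with maxSurge=2 and maxUnavailable=1 (3 nodes upgraded in parallel, at least 4 always available). The same numbers expressed with the Go struct, plus a blue-green variant for contrast; the durations and percentage are illustrative:

func exampleUpgradeSettings() (rolling, blueGreen *container.UpgradeSettings) {
    // Rolling (surge) upgrade from the worked example: 2 surge nodes plus 1
    // unavailable node means 3 nodes are upgraded at the same time.
    rolling = &container.UpgradeSettings{
        MaxSurge:       2,
        MaxUnavailable: 1,
    }
    // Blue-green upgrade: drain the blue pool in 25% batches, soaking after each
    // batch and again after the full drain.
    blueGreen = &container.UpgradeSettings{
        Strategy: "BLUE_GREEN",
        BlueGreenSettings: &container.BlueGreenSettings{
            NodePoolSoakDuration: "600s",
            StandardRolloutPolicy: &container.StandardRolloutPolicy{
                BatchPercentage:   0.25,
                BatchSoakDuration: "120s",
            },
        },
    }
    return rolling, blueGreen
}
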
103 changes: 90 additions & 13 deletions container/v1/container-gen.go

Some generated files are not rendered by default.
