diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index 5bc7fe3a54a..ad315dbcfb7 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -44,53 +44,15 @@ The following example shows how to create a NebulaGraph cluster by creating a cl - `DOCKER_REGISTRY_SERVER`: Specify the server address of the private repository from which the image will be pulled, such as `reg.example-inc.com`. - `DOCKER_USER`: The username for the image repository. - `DOCKER_PASSWORD`: The password for the image repository. + {{ent.ent_end}} 3. Create a file named `apps_v1alpha1_nebulacluster.yaml`. - - For a NebulaGraph Community cluster - - For the file content, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). - - ??? Info "Expand to show sample parameter descriptions" - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - | `metadata.name` | - | The name of the created NebulaGraph cluster. | - |`spec.console`|-| Configuration of the Console service. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).| - | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | - | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. | - | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. | - | `spec.graphd.service` | - | The Service configurations for the Graphd service. | - | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | - | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | - | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. | - | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. | - | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | - | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| - | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | - | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. | - | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. | - | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.| - | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. | - | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| - | `spec.storaged.enableAutoBalance` | `true` |Whether to balance data automatically. | - |`spec.agent`|`{}`| Configuration of the Agent service. 
This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration will be used.| - | `spec.reference.name` | - | The name of the dependent controller. | - | `spec.schedulerName` | - | The scheduler name. | - | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | - |`spec.logRotate`| - |Log rotation configuration. For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md).| - |`spec.enablePVReclaim`|`false`|Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md).| {{ ent.ent_begin }} - - For a NebulaGraph Enterprise cluster - - Contact our sales team to get a complete NebulaGraph Enterprise Edition cluster YAML example. - - !!! enterpriseonly - - Make sure that you have access to NebulaGraph Enterprise Edition images before pulling the image. + - To create a NebulaGraph Enterprise cluster === "Cluster without Zones" @@ -99,7 +61,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl | Parameter | Default value | Description | | :---- | :--- | :--- | - | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | + | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.xxx:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| @@ -107,74 +69,8 @@ The following example shows how to create a NebulaGraph cluster by creating a cl === "Cluster with Zones" NebulaGraph Operator supports creating a cluster with [Zones](../../4.deployment-and-installation/5.zone.md). - - You must set the following parameters for creating a cluster with Zones. Other parameters can be changed as needed. For more information on other parameters, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. 
**You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | - |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| - |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| - |`spec.alpineImage`|`reg.vesoft-inc.com/nebula-alpine:latest`|The Alpine Linux image, used to obtain the Zone information where nodes are located.| - |`spec.metad.config.zone_list`|-|A list of zone names, split by comma. For example: zone1,zone2,zone3.
**Zone names CANNOT be modified once be set.**| - |`spec.graphd.config.prioritize_intra_zone_reading`|`false`|Specifies whether to prioritize sending queries to the storage nodes in the same zone.
When set to `true`, the query is sent to the storage nodes in the same zone. If reading fails in that Zone, it will decide based on `stick_to_intra_zone_on_failure` whether to read the leader partition replica data from other Zones. | - |`spec.graphd.config.stick_to_intra_zone_on_failure`|`false`|Specifies whether to stick to intra-zone routing if unable to find the requested partitions in the same zone. When set to `true`, if unable to find the partition replica in that Zone, it does not read data from other Zones.| - - ???+ note "Learn more about Zones in NebulaGraph Operator" - - **Understanding NebulaGraph's Zone Feature** - - NebulaGraph utilizes a feature called Zones to efficiently manage its distributed architecture. Each Zone represents a logical grouping of Storage pods and Graph pods, responsible for storing the complete graph space data. The data within NebulaGraph's spaces is partitioned, and replicas of these partitions are evenly distributed across all available Zones. The utilization of Zones can significantly reduce inter-Zone network traffic costs and boost data transfer speeds. Moreover, intra-zone-reading allows for increased availability, because replicas of a partition spread out among different zones. - - **Configuring NebulaGraph Zones** - - To make the most of the Zone feature, you first need to determine the actual Zone where your cluster nodes are located. Typically, nodes deployed on cloud platforms are labeled with their respective Zones. Once you have this information, you can configure it in your cluster's configuration file by setting the `spec.metad.config.zone_list` parameter. This parameter should be a list of Zone names, separated by commas, and should match the actual Zone names where your nodes are located. For example, if your nodes are in Zones `az1`, `az2`, and `az3`, your configuration would look like this: - - ```yaml - spec: - metad: - config: - zone_list: az1,az2,az3 - ``` - - **Operator's Use of Zone Information** - - NebulaGraph Operator leverages Kubernetes' [TopoloySpread](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) feature to manage the scheduling of Storage and Graph pods. Once the `zone_list` is configured, Storage services are automatically assigned to their respective Zones based on the `topology.kubernetes.io/zone` label. - - For intra-zone data access, the Graph service dynamically assigns itself to a Zone using the `--assigned_zone=$NODE_ZONE` parameter. It identifies the Zone name of the node where the Graph service resides by utilizing an init-container to fetch this information. The Alpine Linux image specified in `spec.alpineImage` (default: `reg.vesoft-inc.com/nebula-alpine:latest`) plays a role in obtaining Zone information. - - **Prioritizing Intra-Zone Data Access** - - By setting `spec.graphd.config.prioritize_intra_zone_reading` to `true` in the cluster configuration file, you enable the Graph service to prioritize sending queries to Storage services within the same Zone. In the event of a read failure within that Zone, the behavior depends on the value of `spec.graphd.config.stick_to_intra_zone_on_failure`. If set to `true`, the Graph service avoids reading data from other Zones and returns an error. Otherwise, it reads data from leader partition replicas in other Zones. 
- - ```yaml - spec: - alpineImage: reg.vesoft-inc.com/cloud-dev/nebula-alpine:latest - graphd: - config: - prioritize_intra_zone_reading: "true" - stick_to_intra_zone_on_failure: "false" - ``` - - **Zone Mapping for Resilience** - - Once Storage and Graph services are assigned to Zones, the mapping between the pod and its corresponding Zone is stored in a configmap named `-graphd|storaged-zone`. This mapping facilitates pod scheduling during rolling updates and pod restarts, ensuring that services return to their original Zones as needed. - - !!! caution - - DO NOT manually modify the configmaps created by NebulaGraph Operator. Doing so may cause unexpected behavior. - - - Other optional parameters for the enterprise edition are as follows: - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - |`spec.storaged.enableAutoBalance`| `false`| Specifies whether to enable automatic data balancing. For more information, see [Balance storage data after scaling out](../8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md).| - |`spec.enableBR`|`false`|Specifies whether to enable the BR tool. For more information, see [Backup and restore](../10.backup-restore-using-operator.md).| - |`spec.graphd.enable_graph_ssl`|`false`| Specifies whether to enable SSL for the Graph service. For more details, see [Enable mTLS](../8.custom-cluster-configurations/8.5.enable-ssl.md). | - - ??? info "Expand to view sample cluster configurations" + ??? info "Expand to view sample configurations of a cluster with Zones" ```yaml apiVersion: apps.nebula-graph.io/v1alpha1 @@ -183,90 +79,34 @@ The following example shows how to create a NebulaGraph cluster by creating a cl name: nebula namespace: default spec: - alpineImage: "reg.vesoft-inc.com/cloud-dev/nebula-alpine:latest" + # Used to obtain the Zone information where nodes are located. + alpineImage: "reg.vesoft-inc.com/xxx/xxx:latest" + # Used for backup and recovery as well as log cleanup functions. + # If you do not customize this configuration, + # the default configuration will be used. agent: - image: reg.vesoft-inc.com/cloud-dev/nebula-agent + image: reg.vesoft-inc.com/xxx/xxx version: v3.6.0-sc exporter: image: vesoft/nebula-stats-exporter replicas: 1 maxRequests: 20 + # Used to create a console container, + # which is used to connect to the NebulaGraph cluster. console: version: "nightly" graphd: config: + # The following parameters are required for creating a cluster with Zones. accept_partial_success: "true" - ca_client_path: certs/root.crt - ca_path: certs/root.crt - cert_path: certs/server.crt - key_path: certs/server.key - enable_graph_ssl: "true" prioritize_intra_zone_reading: "true" - stick_to_intra_zone_on_failure: "true" + sync_meta_when_use_space: "true" + stick_to_intra_zone_on_failure: "false" + session_reclaim_interval_secs: "300" + # The following parameters are required for collecting logs. 
logtostderr: "1" redirect_stdout: "false" stderrthreshold: "0" - initContainers: - - name: init-auth-sidecar - imagePullPolicy: IfNotPresent - image: 496756745489.dkr.ecr.us-east-1.amazonaws.com/auth-sidecar:v1.60.0 - env: - - name: AUTH_SIDECAR_CONFIG_FILENAME - value: sidecar-init - volumeMounts: - - name: credentials - mountPath: /credentials - - name: auth-sidecar-config - mountPath: /etc/config - sidecarContainers: - - name: auth-sidecar - image: 496756745489.dkr.ecr.us-east-1.amazonaws.com/auth-sidecar:v1.60.0 - imagePullPolicy: IfNotPresent - resources: - requests: - cpu: 100m - memory: 500Mi - env: - - name: LOCAL_POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - - name: LOCAL_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: LOCAL_POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - readinessProbe: - httpGet: - path: /ready - port: 8086 - initialDelaySeconds: 5 - periodSeconds: 10 - successThreshold: 1 - failureThreshold: 3 - livenessProbe: - httpGet: - path: /live - port: 8086 - initialDelaySeconds: 5 - periodSeconds: 10 - successThreshold: 1 - failureThreshold: 3 - volumeMounts: - - name: credentials - mountPath: /credentials - - name: auth-sidecar-config - mountPath: /etc/config - volumes: - - name: credentials - emptyDir: - medium: Memory - volumeMounts: - - name: credentials - mountPath: /usr/local/nebula/certs resources: requests: cpu: "2" @@ -275,7 +115,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl cpu: "2" memory: "2Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-graphd-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc metad: config: @@ -285,6 +125,8 @@ The following example shows how to create a NebulaGraph cluster by creating a cl # Zone names CANNOT be modified once set. # It's suggested to set an odd number of Zones. zone_list: az1,az2,az3 + validate_session_timestamp: "false" + # LM access address and port number. licenseManagerURL: "192.168.8.xxx:9119" resources: requests: @@ -294,7 +136,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl cpu: "1" memory: "1Gi" replicas: 3 - image: reg.vesoft-inc.com/rc/nebula-metad-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaim: resources: @@ -314,13 +156,14 @@ The following example shows how to create a NebulaGraph cluster by creating a cl cpu: "2" memory: "2Gi" replicas: 3 - image: reg.vesoft-inc.com/rc/nebula-storaged-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaims: - resources: requests: storage: 2Gi storageClassName: local-path + # Automatically balance storage data after scaling out. enableAutoBalance: true reference: name: statefulsets.apps @@ -331,14 +174,123 @@ The following example shows how to create a NebulaGraph cluster by creating a cl imagePullPolicy: Always imagePullSecrets: - name: nebula-image + # Evenly distribute storage Pods across Zones. + # Must be set when using Zones. topologySpreadConstraints: - topologyKey: "topology.kubernetes.io/zone" whenUnsatisfiable: "DoNotSchedule" + ``` + + !!! caution + + Make sure storage Pods are evenly distributed across zones before ingesting data by running `SHOW ZONES` in nebula-console. For zone-related commands, see [Zones](../../4.deployment-and-installation/5.zone.md). + + You must set the following parameters for creating a cluster with Zones. Other parameters can be changed as needed. 
+ + | Parameter | Default value | Description | + | :---- | :--- | :--- | + | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.xxx:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | + |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| + |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| + |`spec.alpineImage`|-|The Alpine Linux image, used to obtain the Zone information where nodes are located.| + |`spec.metad.config.zone_list`|-|A list of zone names, split by comma. For example: zone1,zone2,zone3.
**Zone names CANNOT be modified once set.**|
+      |`spec.graphd.config.prioritize_intra_zone_reading`|`false`|Specifies whether to prioritize sending queries to the storage pods in the same zone.
When set to `true`, queries are sent to the storage pods in the same Zone. If reading fails in that Zone, `stick_to_intra_zone_on_failure` determines whether to read the leader partition replica data from other Zones. |
+      |`spec.graphd.config.stick_to_intra_zone_on_failure`|`false`|Specifies whether to stick to intra-Zone routing when the requested partitions cannot be found in the same Zone. When set to `true`, if the partition replica cannot be found in that Zone, data is not read from other Zones.|
+      |`spec.schedulerName`|`kube-scheduler`|The scheduler used to schedule restarted Graph and Storage Pods back to their original Zone. The value must be set to `nebula-scheduler`.|
+      |`spec.topologySpreadConstraints`|-| A Kubernetes field that controls the distribution of storage Pods to ensure that they are evenly spread across Zones.
**To use the Zone feature, you must set the value of `topologySpreadConstraints[0].topologyKey` to `topology.kubernetes.io/zone` and the value of `topologySpreadConstraints[0].whenUnsatisfiable` to `DoNotSchedule`**. Run `kubectl get node --show-labels` to check the key. For more information, see [TopologySpread](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#example-multiple-topologyspreadconstraints).|
+
+      ???+ note "Learn more about Zones in NebulaGraph Operator"
+
+          **Understanding NebulaGraph's Zone Feature**
+
+          NebulaGraph utilizes a feature called Zones to efficiently manage its distributed architecture. Each Zone represents a logical grouping of Storage pods and Graph pods, responsible for storing the complete graph space data. The data within NebulaGraph's spaces is partitioned, and replicas of these partitions are evenly distributed across all available Zones. The utilization of Zones can significantly reduce inter-Zone network traffic costs and boost data transfer speeds. Moreover, intra-zone reading allows for increased availability, because replicas of a partition are spread out among different Zones.
+
+          **Configuring NebulaGraph Zones**
+
+          To make the most of the Zone feature, you first need to determine the actual Zone where your cluster nodes are located. Typically, nodes deployed on cloud platforms are labeled with their respective Zones. Once you have this information, you can configure it in your cluster's configuration file by setting the `spec.metad.config.zone_list` parameter. This parameter should be a list of Zone names, separated by commas, and should match the actual Zone names where your nodes are located. For example, if your nodes are in Zones `az1`, `az2`, and `az3`, your configuration would look like this:
+
+          ```yaml
+          spec:
+            metad:
+              config:
+                zone_list: az1,az2,az3
          ```
 
+          **Operator's Use of Zone Information**
+
+          NebulaGraph Operator leverages Kubernetes' [TopologySpread](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) feature to manage the scheduling of Storage and Graph pods. Once the `zone_list` is configured, Storage services are automatically assigned to their respective Zones based on the `topology.kubernetes.io/zone` label.
+
+          For intra-zone data access, the Graph service dynamically assigns itself to a Zone using the `--assigned_zone=$NODE_ZONE` parameter. It identifies the Zone name of the node where the Graph service resides by utilizing an init-container to fetch this information. The Alpine Linux image specified in `spec.alpineImage` (default: `reg.vesoft-inc.com/nebula-alpine:latest`) plays a role in obtaining Zone information.
+
+          **Prioritizing Intra-Zone Data Access**
+
+          By setting `spec.graphd.config.prioritize_intra_zone_reading` to `true` in the cluster configuration file, you enable the Graph service to prioritize sending queries to Storage services within the same Zone. In the event of a read failure within that Zone, the behavior depends on the value of `spec.graphd.config.stick_to_intra_zone_on_failure`. If set to `true`, the Graph service avoids reading data from other Zones and returns an error. Otherwise, it reads data from leader partition replicas in other Zones.
+ + ```yaml + spec: + alpineImage: reg.vesoft-inc.com/xxx/xxx:latest + graphd: + config: + prioritize_intra_zone_reading: "true" + stick_to_intra_zone_on_failure: "false" + ``` + + **Zone Mapping for Resilience** + + Once Storage and Graph services are assigned to Zones, the mapping between the pod and its corresponding Zone is stored in a configmap named `-graphd|storaged-zone`. This mapping facilitates pod scheduling during rolling updates and pod restarts, ensuring that services return to their original Zones as needed. + + !!! caution + + DO NOT manually modify the configmaps created by NebulaGraph Operator. Doing so may cause unexpected behavior. + + + Other optional parameters for the enterprise edition are as follows: + + | Parameter | Default value | Description | + | :---- | :--- | :--- | + |`spec.storaged.enableAutoBalance`| `false`| Specifies whether to enable automatic data balancing. For more information, see [Balance storage data after scaling out](../8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md).| + |`spec.enableBR`|`false`|Specifies whether to enable the BR tool. For more information, see [Backup and restore](../10.backup-restore-using-operator.md).| + |`spec.graphd.enable_graph_ssl`|`false`| Specifies whether to enable SSL for the Graph service. For more details, see [Enable mTLS](../8.custom-cluster-configurations/8.5.enable-ssl.md). | + {{ ent.ent_end }} -1. Create a NebulaGraph cluster. + - To create a NebulaGraph Community cluster + + See [community cluster configurations](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). + + ??? Info "Expand to show parameter descriptions of community clusters" + + | Parameter | Default value | Description | + | :---- | :--- | :--- | + | `metadata.name` | - | The name of the created NebulaGraph cluster. | + |`spec.console`|-| Configuration of the Console service. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).| + | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | + | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. | + | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. | + | `spec.graphd.service` | - | The Service configurations for the Graphd service. | + | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | + | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | + | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. | + | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. | + | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | + | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| + | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | + | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. | + | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. | + | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. 
When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.| + | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. | + | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| + | `spec.storaged.enableAutoBalance` | `true` |Whether to balance data automatically. | + |`spec..securityContext`|`{}`|Defines privilege and access control settings for NebulaGraph service containers. For details, see [SecurityContext](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/doc/user/security_context.md). | + |`spec.agent`|`{}`| Configuration of the Agent service. This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration will be used.| + | `spec.reference.name` | - | The name of the dependent controller. | + | `spec.schedulerName` | - | The scheduler name. | + | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | + |`spec.logRotate`| - |Log rotation configuration. For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md).| + |`spec.enablePVReclaim`|`false`|Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md).| + + +4. Create a NebulaGraph cluster. ```bash kubectl create -f apps_v1alpha1_nebulacluster.yaml @@ -446,10 +398,9 @@ In the process of downsizing the cluster, if the operation is not complete succe !!! caution - - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scale Meta services. - {{ent.ent_begin}} + - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scaling Meta services. - If you scale in a cluster with Zones, make sure that the number of remaining storage pods is not less than the number of Zones specified in the `spec.metad.config.zone_list` field. Otherwise, the cluster will fail to start. - {{ent.ent_end}} + {{ ent.ent_end }} diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md index 03b46388b3b..c076404970e 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md @@ -129,9 +129,18 @@ --set nebula.metad.config.zone_list= \ --set nebula.graphd.config.prioritize_intra_zone_reading=true \ --set nebula.graphd.config.stick_to_intra_zone_on_failure=false \ + # Evenly distribute the Pods of the Storage service across Zones. + --set nebula.topologySpreadConstraints[0].topologyKey=topology.kubernetes.io/zone \ + --set nebula.topologySpreadConstraints[0].whenUnsatisfiable=DoNotSchedule \ + # Used to schedule restarted Graph or Storage Pods to the specified Zone. + --set nebula.schedulerName=nebula-scheduler \ --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ ``` + !!! 
caution + + Make sure storage Pods are evenly distributed across zones before ingesting data by running `SHOW ZONES` in nebula-console. For zone-related commands, see [Zones](../../4.deployment-and-installation/5.zone.md). + {{ent.ent_end}} To view all configuration parameters of the NebulaGraph cluster, run the `helm show values nebula-operator/nebula-cluster` command or click [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/charts/nebula-cluster/values.yaml). @@ -183,14 +192,13 @@ helm upgrade "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ Similarly, you can scale in a NebulaGraph cluster by setting the value of the `replicas` corresponding to the different services in the cluster smaller than the original value. -In the process of downsizing the cluster, if the operation is not complete successfully and seems to be stuck, you may need to check the status of the job using the `nebula-console` client specified in the `spec.console` field. Analyzing the logs and manually intervening can help ensure that the Job runs successfully. For information on how to check jobs, see [Job statements](../../3.ngql-guide/4.job-statements.md). +In the process of downsizing the cluster, if the operation is not complete successfully and seems to be stuck, you may need to check the status of the job using the `nebula-console` client specified in the `nebula.console` field. Analyzing the logs and manually intervening can help ensure that the Job runs successfully. For information on how to check jobs, see [Job statements](../../3.ngql-guide/4.job-statements.md). !!! caution - - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scale Meta services. - {{ent.ent_begin}} - - If you scale in a cluster with Zones, make sure that the number of remaining storage pods is not less than the number of Zones specified in the `spec.metad.config.zone_list` field. Otherwise, the cluster will fail to start. - {{ent.ent_end}} + - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scaling Meta services. + - If you scale in a cluster with Zones, make sure that the number of remaining storage pods is not less than the number of Zones specified in the `nebula.metad.config.zone_list` field. Otherwise, the cluster will fail to start. + You can click on [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.tag}}/charts/nebula-cluster/values.yaml) to see more configurable parameters of the nebula-cluster chart. For more information about the descriptions of configurable parameters, see **Configuration parameters of the nebula-cluster Helm chart** below. {{ ent.ent_end }} diff --git a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md index f6dfbd97f2f..8c2696fcb39 100644 --- a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md +++ b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md @@ -70,6 +70,7 @@ You can also create a `ClusterIP` type Service to provide an access point to the ```bash kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft + ``` - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. - ``: The custom Pod name. 
@@ -86,13 +87,27 @@ You can also create a `ClusterIP` type Service to provide an access point to the (root@nebula) [(none)]> ``` -You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. + You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. -```bash -kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p -``` + ```bash + kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p + ``` + + `service_port` is the port to connect to Graphd services, the default port of which is `9669`. -`service_port` is the port to connect to Graphd services, the default port of which is `9669`. + !!! note + + If the `spec.console` field is set in the cluster configuration file, you can also connect to NebulaGraph databases with the following command: + + ```bash + # Enter the nebula-console Pod. + kubectl exec -it nebula-console -- /bin/sh + + # Connect to NebulaGraph databases. + nebula-console -addr nebula-graphd-svc.default.svc.cluster.local -port 9669 -u -p + ``` + + For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). ## Connect to NebulaGraph databases from outside a NebulaGraph cluster via `NodePort` @@ -197,109 +212,6 @@ Steps: For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). -## Connect to NebulaGraph databases from within a NebulaGraph cluster - -You can also create a `ClusterIP` type Service to provide an access point to the NebulaGraph database for other Pods within the cluster. By using the Service's IP and the Graph service's port number (9669), you can connect to the NebulaGraph database. For more information, see [ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/). - -1. Create a file named `graphd-clusterip-service.yaml`. The file contents are as follows: - - ```yaml - apiVersion: v1 - kind: Service - metadata: - labels: - app.kubernetes.io/cluster: nebula - app.kubernetes.io/component: graphd - app.kubernetes.io/managed-by: nebula-operator - app.kubernetes.io/name: nebula-graph - name: nebula-graphd-svc - namespace: default - spec: - externalTrafficPolicy: Local - ports: - - name: thrift - port: 9669 - protocol: TCP - targetPort: 9669 - - name: http - port: 19669 - protocol: TCP - targetPort: 19669 - selector: - app.kubernetes.io/cluster: nebula - app.kubernetes.io/component: graphd - app.kubernetes.io/managed-by: nebula-operator - app.kubernetes.io/name: nebula-graph - type: ClusterIP # Set the type to ClusterIP. - ``` - - - NebulaGraph uses port `9669` by default. `19669` is the HTTP port of the Graph service in a NebulaGraph cluster. - - `targetPort` is the port mapped to the database Pods, which can be customized. - -2. Create a ClusterIP Service. - - ```bash - kubectl create -f graphd-clusterip-service.yaml - ``` - -3. Check the IP of the Service: - - ```bash - $ kubectl get service -l app.kubernetes.io/cluster= # is the name of your NebulaGraph cluster. 
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - nebula-graphd-svc ClusterIP 10.98.213.34 9669/TCP,19669/TCP,19670/TCP 23h - ... - ``` - -4. Run the following command to connect to the NebulaGraph database using the IP of the `-graphd-svc` Service above: - - ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p - ``` - - For example: - - ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft - ``` - - - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. - - ``: The custom Pod name. - - `-addr`: The IP of the `ClusterIP` Service, used to connect to Graphd services. - - `-port`: The port to connect to Graphd services, the default port of which is `9669`. - - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. - - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. - - A successful connection to the database is indicated if the following is returned: - - ```bash - If you don't see a command prompt, try pressing enter. - - (root@nebula) [(none)]> - ``` - - You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. - - ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p - ``` - - `service_port` is the port to connect to Graphd services, the default port of which is `9669`. - - !!! note - - If the `spec.console` field is set in the cluster configuration file, you can also connect to NebulaGraph databases with the following command: - - ```bash - # Enter the nebula-console Pod. - kubectl exec -it nebula-console -- /bin/sh - - # Connect to NebulaGraph databases. - nebula-console -addr nebula-graphd-svc.default.svc.cluster.local -port 9669 -u -p - ``` - - For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). -s ## Connect to NebulaGraph databases from outside a NebulaGraph cluster via Ingress When dealing with multiple pods in a cluster, managing services for each pod separately is not a good practice. Ingress is a Kubernetes resource that provides a unified entry point for accessing multiple services. Ingress can be used to expose multiple services under a single IP address. @@ -401,7 +313,7 @@ Steps are as follows. kubectl exec -it nebula-console -- /bin/sh # Connect to NebulaGraph databases. - nebula-console -addr -port -u -p + nebula-console -addr -port -u -p ``` For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). 
diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md index 88d79153796..8ca11e3f7e8 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md @@ -70,7 +70,5 @@ It should be noted that only when all configuration items in `config` are the pa For information about the parameters that can be dynamically modified for each service, see the parameter table column of **Whether supports runtime dynamic modifications** in [Meta service configuration parameters](../../5.configurations-and-logs/1.configurations/2.meta-config.md), [Storage service configuration parameters](../../5.configurations-and-logs/1.configurations/4.storage-config.md), and [Graph service configuration parameters](../../5.configurations-and-logs/1.configurations/3.graph-config.md), respectively. -## Learn more -For more information about the configuration parameters of Meta, Storage, and Graph services, see [Meta service configuration parameters](../../5.configurations-and-logs/1.configurations/2.meta-config.md), [Storage service configuration parameters](../../5.configurations-and-logs/1.configurations/4.storage-config.md), and [Graph service configuration parameters](../../5.configurations-and-logs/1.configurations/3.graph-config.md). diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md index 11fbfc9a8ee..b8cda0a2aad 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md @@ -4,13 +4,12 @@ NebulaGraph Operator uses PVs (Persistent Volumes) and PVCs (Persistent Volume C You can also define the automatic deletion of PVCs to release data by setting the parameter `spec.enablePVReclaim` to `true` in the configuration file of the cluster instance. As for whether PV will be deleted automatically after PVC is deleted, you need to customize the PV reclaim policy. See [reclaimPolicy in StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/#reclaim-policy) and [PV Reclaiming](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming) for details. -## Notes - -The NebulaGraph Operator currently does not support the resizing of Persistent Volume Claims (PVCs), but this feature is expected to be supported in version 1.6.1. Additionally, the Operator does not support dynamically adding or mounting storage volumes to a running storaged instance. - ## Prerequisites You have created a cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). +## Notes + +NebulaGraph Operator does not support dynamically adding or mounting storage volumes to a running storaged instance. 
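+
+For reference, the PVC auto-deletion switch described above sits at the top level of the cluster spec. The following is a minimal, hypothetical fragment: every other field is omitted and the cluster name is only an example.
+
+```yaml
+apiVersion: apps.nebula-graph.io/v1alpha1
+kind: NebulaCluster
+metadata:
+  name: nebula
+spec:
+  # Automatically delete PVCs and release data when the cluster is deleted.
+  enablePVReclaim: true
+```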
## Steps diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md index b5f265c9845..9a97f56fac9 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md @@ -14,31 +14,28 @@ In the NebulaGraph environment running in Kubernetes, mutual TLS (mTLS) is used In the cluster created using Operator, the client and server use the same CA root certificate by default. -## Encryption policies +## Encryption scenarios -NebulaGraph provides three encryption policies for mTLS: +The following two scenarios are commonly used for encryption: -- Encryption of data transmission between the client and the Graph service. - - This policy only involves encryption between the client and the Graph service and does not encrypt data transmission between other services in the cluster. +- Encrypting communication between the client and the Graph service. -- Encrypt the data transmission between clients, the Graph service, the Meta service, and the Storage service. - - This policy encrypts data transmission between the client, Graph service, Meta service, and Storage service in the cluster. +- Encrypting communication between services, such as communication between the Graph service and the Meta service, communication between the Graph service and the Storage service, and communication between the Meta service and the Storage service. -- Encryption of data transmission related to the Meta service within the cluster. - - This policy only involves encrypting data transmission related to the Meta service within the cluster and does not encrypt data transmission between other services or the client. + !!! note + + - The Graph service in NebulaGraph is the entry point for all client requests. The Graph service communicates with the Meta service and the Storage service to complete the client requests. Therefore, the Graph service needs to be able to communicate with the Meta service and the Storage service. + - The Storage and Meta services in NebulaGraph communicate with each other through heartbeat messages to ensure their availability and health. Therefore, the Storage service needs to be able to communicate with the Meta service and vice versa. -For different encryption policies, you need to configure different fields in the cluster configuration file. For more information, see [Authentication policies](../../7.data-security/4.ssl.md#authentication_policies). +For all encryption scenarios, see [Authentication policies](../../7.data-security/4.ssl.md#authentication_policies). ## mTLS with certificate hot-reloading -NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The following provides an example of the configuration file to enable mTLS between the client and the Graph service. +NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. -### Sample configurations +The following provides examples of the configuration file to enable mTLS between the client and the Graph service, and between services. -??? info "Expand to view the sample configurations of mTLS" +??? info "View sample configurations of mTLS between the client and the Graph service" ```yaml apiVersion: apps.nebula-graph.io/v1alpha1 @@ -52,18 +49,152 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. 
The maxRequests: 20 graphd: config: - accept_partial_success: "true" + # The following parameters are used to enable mTLS between the client and the Graph service. ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt + key_path: certs/server.key enable_graph_ssl: "true" - enable_intra_zone_routing: "true" + # The following parameters are required for creating a cluster with Zones. + accept_partial_success: "true" + prioritize_intra_zone_reading: "true" + sync_meta_when_use_space: "true" + stick_to_intra_zone_on_failure: "false" + session_reclaim_interval_secs: "300" + initContainers: + - name: init-auth-sidecar + command: + - /bin/sh + - -c + args: + - cp /certs/* /credentials/ + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + sidecarContainers: + - name: auth-sidecar + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + volumes: + - name: credentials + emptyDir: + medium: Memory + volumeMounts: + - name: credentials + mountPath: /usr/local/nebula/certs + logVolumeClaim: + resources: + requests: + storage: 1Gi + storageClassName: local-path + resources: + requests: + cpu: "200m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: reg.vesoft-inc.com/xxx/xxx + version: v3.5.0-sc + metad: + # Zone names CANNOT be modified once set. + # It's suggested to set an odd number of Zones. + zone_list: az1,az2,az3 + validate_session_timestamp: "false" + licenseManagerURL: "192.168.8.xxx:9119" + resources: + requests: + cpu: "300m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: reg.vesoft-inc.com/xxx/xxx + version: v3.5.0-sc + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: local-path + logVolumeClaim: + resources: + requests: + storage: 1Gi + storageClassName: local-path + storaged: + resources: + requests: + cpu: "300m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: reg.vesoft-inc.com/xxx/xxx + version: v3.5.0-sc + dataVolumeClaims: + - resources: + requests: + storage: 2Gi + storageClassName: local-path + logVolumeClaim: + resources: + requests: + storage: 1Gi + storageClassName: local-path + enableAutoBalance: true + reference: + name: statefulsets.apps + version: v1 + schedulerName: nebula-scheduler + imagePullPolicy: Always + imagePullSecrets: + - name: nebula-image + enablePVReclaim: true + topologySpreadConstraints: + - topologyKey: "kubernetes.io/zone" + whenUnsatisfiable: "DoNotSchedule" + ``` + +??? info "View sample configurations of mTLS between services" + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + spec: + exporter: + image: vesoft/nebula-stats-exporter + replicas: 1 + maxRequests: 20 + # The certificate files for NebulaGraph Operator to access Storage and Meta services. + sslCerts: + clientSecret: "client-cert" + caSecret: "ca-cert" + caCert: "root.crt" + graphd: + config: + # The following parameters are used to enable mTLS between services. 
+ ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt key_path: certs/server.key - logtostderr: "1" - redirect_stdout: "false" - stderrthreshold: "0" - stick_to_intra_zone_on_failure: "true" - timestamp_in_logfile_name: "false" + enable_meta_ssl: "true" + enable_storage_ssl: "true" + # The following parameters are required for creating a cluster with Zones. + accept_partial_success: "true" + prioritize_intra_zone_reading: "true" + sync_meta_when_use_space: "true" + stick_to_intra_zone_on_failure: "false" + session_reclaim_interval_secs: "300" initContainers: - name: init-auth-sidecar command: @@ -72,14 +203,14 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The args: - cp /certs/* /credentials/ imagePullPolicy: Always - image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + image: reg.vesoft-inc.com/xxx/xxx:latest volumeMounts: - name: credentials mountPath: /credentials sidecarContainers: - name: auth-sidecar imagePullPolicy: Always - image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + image: reg.vesoft-inc.com/xxx/xxx:latest volumeMounts: - name: credentials mountPath: /credentials @@ -103,10 +234,48 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-graphd-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc metad: - licenseManagerURL: "192.168.8.53:9119" + config: + # Zone names CANNOT be modified once set. + # It's suggested to set an odd number of Zones. + zone_list: az1,az2,az3 + validate_session_timestamp: "false" + # The following parameters are used to enable mTLS between services. + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + key_path: certs/server.key + enable_meta_ssl: "true" + enable_storage_ssl: "true" + initContainers: + - name: init-auth-sidecar + command: + - /bin/sh + - -c + args: + - cp /certs/* /credentials/ + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + sidecarContainers: + - name: auth-sidecar + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + volumes: + - name: credentials + emptyDir: + medium: Memory + volumeMounts: + - name: credentials + mountPath: /usr/local/nebula/certs + licenseManagerURL: "192.168.8.xx:9119" resources: requests: cpu: "300m" @@ -115,7 +284,7 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-metad-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaim: resources: @@ -128,6 +297,40 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The storage: 1Gi storageClassName: local-path storaged: + config: + # The following parameters are used to enable mTLS between services. 
+ ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + key_path: certs/server.key + enable_meta_ssl: "true" + enable_storage_ssl: "true" + initContainers: + - name: init-auth-sidecar + command: + - /bin/sh + - -c + args: + - cp /certs/* /credentials/ + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + sidecarContainers: + - name: auth-sidecar + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + volumes: + - name: credentials + emptyDir: + medium: Memory + volumeMounts: + - name: credentials + mountPath: /usr/local/nebula/certs resources: requests: cpu: "300m" @@ -136,7 +339,7 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-storaged-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaims: - resources: @@ -148,23 +351,26 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The requests: storage: 1Gi storageClassName: local-path + # Automatically balance storage data after scaling out. enableAutoBalance: true reference: name: statefulsets.apps version: v1 - schedulerName: default-scheduler + schedulerName: nebula-scheduler imagePullPolicy: Always imagePullSecrets: - name: nebula-image + # Whether to automatically delete PVCs when deleting a cluster. enablePVReclaim: true + # Used to evenly distribute Pods across Zones. topologySpreadConstraints: - - topologyKey: "kubernetes.io/hostname" - whenUnsatisfiable: "ScheduleAnyway" + - topologyKey: "kubernetes.io/zone" + whenUnsatisfiable: "DoNotSchedule" ``` ### Configure `spec..config` -To enable mTLS between the client and the Graph service, configure the `spec.graphd.config` field in the cluster configuration file. The paths specified in fields with `*_path` correspond to file paths relative to `/user/local/nebula`. **It's important to avoid using absolute paths to prevent path recognition errors.** +To enable mTLS between the client and the Graph service, add the following fields under the `spec.graphd.config` in the cluster configuration file. The paths specified in fields with `*_path` correspond to file paths relative to `/user/local/nebula`. **It's important to avoid using absolute paths to prevent path recognition errors.** ```yaml spec: @@ -173,15 +379,11 @@ spec: ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt - enable_graph_ssl: "true" key_path: certs/server.key + enable_graph_ssl: "true" ``` -For the configurations of the other two authentication policies: - -- To enable mTLS between the client, the Graph service, the Meta service, and the Storage service: - - Configure the `spec.metad.config`, `spec.graphd.config`, and `spec.storaged.config` fields in the cluster configuration file. +To enable mTLS between services (Graph, Meta, and Storage), add the following fields under the `spec.metad.config`, `spec.graphd.config`, and `spec.storaged.config` respectively in the cluster configuration file. 
```yaml spec: @@ -190,60 +392,37 @@ For the configurations of the other two authentication policies: ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt - enable_ssl: "true" key_path: certs/server.key - metad: - config: - ca_client_path: certs/root.crt - ca_path: certs/root.crt - cert_path: certs/server.crt - enable_ssl: "true" - key_path: certs/server.key - storaged: - config: - ca_client_path: certs/root.crt - ca_path: certs/root.crt - cert_path: certs/server.crt - enable_ssl: "true" - key_path: certs/server.key - ``` - -- To enable mTLS related to the Meta service: - - Configure the `spec.metad.config`, `spec.graphd.config`, and `spec.storaged.config` fields in the cluster configuration file. - - ```yaml - spec: - graph: - config: - ca_client_path: certs/root.crt - ca_path: certs/root.crt - cert_path: certs/server.crt enable_meta_ssl: "true" - key_path: certs/server.key + enable_storage_ssl: "true" metad: config: ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt - enable_meta_ssl: "true" key_path: certs/server.key + enable_meta_ssl: "true" + enable_storage_ssl: "true" storaged: config: ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt - enable_meta_ssl: "true" key_path: certs/server.key - ``` + enable_meta_ssl: "true" + enable_storage_ssl: "true" + ``` ### Configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` -`initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` fields are essential for implementing mTLS certificate online hot-reloading. For the encryption scenario where only the Graph service needs to be encrypted, you need to configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` under `spec.graph.config`. +`initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` fields are essential for implementing mTLS certificate online hot-reloading. + +- For the encryption scenario where only the Graph service needs to be encrypted, configure these fields under `spec.graph.config`. +- For the encryption scenario where the Graph service, Meta service, and Storage service need to be encrypted, configure these fields under `spec.graph.config`, `spec.storage.config`, and `spec.meta.config` respectively. #### `initContainers` -The `initContainers` field is utilized to configure an init-container responsible for generating certificate files. Note that the `volumeMounts` field specifies how the `credentials` volume, shared with the NebulaGraph container, is mounted, providing read and write access. +The `initContainers` field is utilized to configure an init-container responsible for generating certificate files. Note that the `volumeMounts` field specifies how a volume specified by `volumes`, shared with the NebulaGraph container, is mounted, providing read and write access. In the following example, `init-auth-sidecar` performs the task of copying files from the `certs` directory within the image to `/credentials`. After this task is completed, the init-container exits. 
@@ -258,7 +437,7 @@ initContainers: command: - /bin/sh - -c args: - cp /certs/* /credentials/ imagePullPolicy: Always - image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + image: reg.vesoft-inc.com/xxx/xxx:latest volumeMounts: - name: credentials mountPath: /credentials @@ -266,7 +445,7 @@ #### `sidecarContainers` -The `sidecarContainers` field is dedicated to periodically monitoring the expiration time of certificates and, when they are near expiration, generating new certificates to replace the existing ones. This process ensures seamless online certificate hot-reloading without any service interruptions. The `volumeMounts` field specifies how the `credentials` volume is mounted, and this volume is shared with the NebulaGraph container. +The `sidecarContainers` field is dedicated to periodically monitoring the expiration time of certificates and, when they are near expiration, generating new certificates to replace the existing ones. This process ensures seamless online certificate hot-reloading without any service interruptions. The `volumeMounts` field specifies how a volume is mounted, and this volume is shared with the NebulaGraph container. In the example provided, the `auth-sidecar` container employs the `crond` process, which runs a crontab script every minute. This script checks the certificate's expiration status using the `openssl x509 -noout -enddate` command. @@ -276,7 +455,7 @@ Example: sidecarContainers: - name: auth-sidecar imagePullPolicy: Always - image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + image: reg.vesoft-inc.com/xxx/xxx:latest volumeMounts: - name: credentials mountPath: /credentials @@ -309,9 +488,9 @@ volumeMounts: ### Configure `sslCerts` -The `spec.sslCerts` field specifies the encrypted certificates for NebulaGraph Operator and the [nebula-agent](https://github.com/vesoft-inc/nebula-agent) client (if you do not use the default nebula-agent image in Operator). +When you enable mTLS between services, you still need to set `spec.sslCerts`, because NebulaGraph Operator communicates with the Meta service and Storage service. -For the other two scenarios where the Graph service, Meta service, and Storage service need to be encrypted, and where only the Meta service needs to be encrypted, you not only need to configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` under `spec.graph.config`, `spec.storage.config`, and `spec.meta.config`, but also configure `spec.sslCerts`. +The `spec.sslCerts` field specifies the encrypted certificates for NebulaGraph Operator and the [nebula-agent](https://github.com/vesoft-inc/nebula-agent) client (if you do not use the default nebula-agent image in Operator). ```yaml spec: @@ -357,255 +536,255 @@ nebula-console -addr nebula-graphd-svc.default.svc.cluster.local -port 9669 -u r ## mTLS without hot-reloading -??? info "If you don't need to perform TLS certificate hot-reloading and prefer to use TLS certificates stored in a Secret when deploying Kubernetes applications, expand to follow these steps" +If you don't need to perform TLS certificate hot-reloading and prefer to use TLS certificates stored in a Secret when deploying Kubernetes applications, you can follow the steps below to enable mTLS in NebulaGraph. - ### Create a TLS-type Secret +### Create a TLS-type Secret - In a K8s cluster, you can create Secrets to store sensitive information, such as passwords, OAuth tokens, and TLS certificates. In NebulaGraph, you can create a Secret to store TLS certificates and private keys. 
When creating a Secret, the type `tls` should be specified. A `tls` Secret is used to store TLS certificates. +In a K8s cluster, you can create Secrets to store sensitive information, such as passwords, OAuth tokens, and TLS certificates. In NebulaGraph, you can create a Secret to store TLS certificates and private keys. When creating a Secret, the type `tls` should be specified. A `tls` Secret is used to store TLS certificates. - For example, to create a Secret for storing server certificates and private keys: +For example, to create a Secret for storing server certificates and private keys: - ```bash - kubectl create secret tls --key= --cert= --namespace= - ``` +```bash +kubectl create secret tls --key= --cert= --namespace= +``` - - ``: The name of the Secret storing the server certificate and private key. - - ``: The path to the server private key file. - - ``: The path to the server certificate file. - - ``: The namespace where the Secret is located. If `--namespace` is not specified, it defaults to the `default` namespace. +- ``: The name of the Secret storing the server certificate and private key. +- ``: The path to the server private key file. +- ``: The path to the server certificate file. +- ``: The namespace where the Secret is located. If `--namespace` is not specified, it defaults to the `default` namespace. - You can follow the above steps to create Secrets for the client certificate and private key, and the CA certificate. +You can follow the above steps to create Secrets for the client certificate and private key, and the CA certificate. - To view the created Secrets: +To view the created Secrets: - ```bash - kubectl get secret --namespace= - ``` +```bash +kubectl get secret --namespace= +``` - ### Configure certifications +### Configure certifications - Operator provides the `sslCerts` field to specify the encrypted certificates. The `sslCerts` field contains four subfields. These three fields `serverSecret`, `clientSecret`, and `caSecret` are used to specify the Secret names of the NebulaGraph server certificate, client certificate, and CA certificate, respectively. - When you specify these three fields, Operator reads the certificate content from the corresponding Secret and mounts it into the cluster's Pod. The `autoMountServerCerts` must be set to `true` if you want to automatically mount the server certificate and private key into the Pod. The default value is `false`. +Operator provides the `sslCerts` field to specify the encrypted certificates. The `sslCerts` field contains four subfields. These three fields `serverSecret`, `clientSecret`, and `caSecret` are used to specify the Secret names of the NebulaGraph server certificate, client certificate, and CA certificate, respectively. +When you specify these three fields, Operator reads the certificate content from the corresponding Secret and mounts it into the cluster's Pod. The `autoMountServerCerts` must be set to `true` if you want to automatically mount the server certificate and private key into the Pod. The default value is `false`. - ```yaml +```yaml +sslCerts: + autoMountServerCerts: "true" # Automatically mount the server certificate and private key into the Pod. + serverSecret: "server-cert" # The name of the server certificate Secret. + serverCert: "" # The key name of the certificate in the server certificate Secret, default is tls.crt. + serverKey: "" # The key name of the private key in the server certificate Secret, default is tls.key. 
+ clientSecret: "client-cert" # The name of the client certificate Secret. + clientCert: "" # The key name of the certificate in the client certificate Secret, default is tls.crt. + clientKey: "" # The key name of the private key in the client certificate Secret, default is tls.key. + caSecret: "ca-cert" # The name of the CA certificate Secret. + caCert: "" # The key name of the certificate in the CA certificate Secret, default is ca.crt. +``` + +The `serverCert` and `serverKey`, `clientCert` and `clientKey`, and `caCert` are used to specify the key names of the certificate and private key of the server Secret, the key names of the certificate and private key of the client Secret, and the key name of the CA Secret certificate. If you do not customize these field values, Operator defaults `serverCert` and `clientCert` to `tls.crt`, `serverKey` and `clientKey` to `tls.key`, and `caCert` to `ca.crt`. However, in the K8s cluster, the TLS type Secret uses `tls.crt` and `tls.key` as the default key names for the certificate and private key. Therefore, after creating the NebulaGraph cluster, you need to manually change the `caCert` field from `ca.crt` to `tls.crt` in the cluster configuration, so that the Operator can correctly read the content of the CA certificate. Before you customize these field values, you need to specify the key names of the certificate and private key in the Secret when creating it. For how to create a Secret with the key name specified, run the `kubectl create secret generic -h` command for help. + +You can use the `insecureSkipVerify` field to decide whether the client will verify the server's certificate chain and hostname. In production environments, it is recommended to set this to `false` to ensure the security of communication. If set to `true`, the client will not verify the server's certificate chain and hostname. + +```yaml +sslCerts: + # Determines whether the client needs to verify the server's certificate chain and hostname when establishing an SSL connection. + insecureSkipVerify: false +``` + +!!! caution + + Make sure that you have added the hostname or IP of the server to the server's certificate's `subjectAltName` field before the `insecureSkipVerify` is set to `false`. If the hostname or IP of the server is not added, an error will occur when the client verifies the server's certificate chain and hostname. For details, see [openssl](https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl). + +When the certificates are approaching expiration, they can be automatically updated by installing [cert-manager](https://cert-manager.io/docs/installation/supported-releases/). NebulaGraph will monitor changes to the certificate directory files, and once a change is detected, it will load the new certificate content into memory. + +### Encryption strategies + +NebulaGraph offers three encryption strategies that you can choose and configure according to your needs. + +- Encryption of client-graph and all inter-service communications + + If you want to encrypt all data transmission between the client, Graph service, Meta service, and Storage service, you need to add the `enable_ssl = true` field to each service. + + Here is an example configuration: + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + namespace: default + spec: sslCerts: - autoMountServerCerts: "true" # Automatically mount the server certificate and private key into the Pod. 
- serverSecret: "server-cert" # The name of the server certificate Secret. - serverCert: "" # The key name of the certificate in the server certificate Secret, default is tls.crt. - serverKey: "" # The key name of the private key in the server certificate Secret, default is tls.key. - clientSecret: "client-cert" # The name of the client certificate Secret. - clientCert: "" # The key name of the certificate in the client certificate Secret, default is tls.crt. - clientKey: "" # The key name of the private key in the client certificate Secret, default is tls.key. - caSecret: "ca-cert" # The name of the CA certificate Secret. - caCert: "" # The key name of the certificate in the CA certificate Secret, default is ca.crt. - ``` + autoMountServerCerts: "true" # Automatically mount the server certificate and private key into the Pod. + serverSecret: "server-cert" # The Secret name of the server certificate and private key. + clientSecret: "client-cert" # The Secret name of the client certificate and private key. + caSecret: "ca-cert" # The Secret name of the CA certificate. + graphd: + config: + enable_ssl: "true" + metad: + config: + enable_ssl: "true" + storaged: + config: + enable_ssl: "true" + ``` - The `serverCert` and `serverKey`, `clientCert` and `clientKey`, and `caCert` are used to specify the key names of the certificate and private key of the server Secret, the key names of the certificate and private key of the client Secret, and the key name of the CA Secret certificate. If you do not customize these field values, Operator defaults `serverCert` and `clientCert` to `tls.crt`, `serverKey` and `clientKey` to `tls.key`, and `caCert` to `ca.crt`. However, in the K8s cluster, the TLS type Secret uses `tls.crt` and `tls.key` as the default key names for the certificate and private key. Therefore, after creating the NebulaGraph cluster, you need to manually change the `caCert` field from `ca.crt` to `tls.crt` in the cluster configuration, so that the Operator can correctly read the content of the CA certificate. Before you customize these field values, you need to specify the key names of the certificate and private key in the Secret when creating it. For how to create a Secret with the key name specified, run the `kubectl create secret generic -h` command for help. - You can use the `insecureSkipVerify` field to decide whether the client will verify the server's certificate chain and hostname. In production environments, it is recommended to set this to `false` to ensure the security of communication. If set to `true`, the client will not verify the server's certificate chain and hostname. +- Encryption of only Graph service communication - ```yaml + If the K8s cluster is deployed in the same data center and only the port of the Graph service is exposed externally, you can choose to encrypt only data transmission between the client and the Graph service. In this case, other services can communicate internally without encryption. Just add the `enable_graph_ssl = true` field to the Graph service. + + Here is an example configuration: + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + namespace: default + spec: sslCerts: - # Determines whether the client needs to verify the server's certificate chain and hostname when establishing an SSL connection. - insecureSkipVerify: false - ``` + autoMountServerCerts: "true" + serverSecret: "server-cert" + caSecret: "ca-cert" + graphd: + config: + enable_graph_ssl: "true" + ``` + + !!! 
note + + Because Operator doesn't need to call the Graph service through an interface, it's not necessary to set `clientSecret` in `sslCerts`. + +- Encryption of only Meta service communication + + If you need to transmit confidential information to the Meta service, you can choose to encrypt data transmission related to the Meta service. In this case, you need to add the `enable_meta_ssl = true` configuration to each component. + + Here is an example configuration: + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + namespace: default + spec: + sslCerts: + autoMountServerCerts: "true" + serverSecret: "server-cert" + clientSecret: "client-cert" + caSecret: "ca-cert" + graphd: + config: + enable_meta_ssl: "true" + metad: + config: + enable_meta_ssl: "true" + storaged: + config: + enable_meta_ssl: "true" + ``` + + After setting up the encryption policy, when an external [client](../../14.client/1.nebula-client.md) needs to connect to the Graph service with mutual TLS, you still need to set the relevant TLS fields according to the different clients. See the Use NebulaGraph Console to connect to Graph service section below for examples. + +### Example of enabling mTLS without hot-reloading + +1. Use the pre-generated server and client certificates and private keys, and the CA certificate to create a Secret for each. + + ```bash + kubectl create secret tls --key= --cert= + ``` + + - `tls`: Indicates that the type of secret being created is TLS, which is used to store TLS certificates. + - ``: Specifies the name of the new secret being created, which can be customized. + - `--key=`: Specifies the path to the private key file of the TLS certificate to be stored in the secret. + - `--cert=`: Specifies the path to the public key certificate file of the TLS certificate to be stored in the secret. + + +2. Add the server certificate, client certificate, and CA certificate configurations, as well as the encryption policy configuration, to the corresponding cluster instance YAML file. For details, see [Encryption strategies](#encryption_strategies). + + For example, add the encryption configuration for data transmitted between the client, Graph service, Meta service, and Storage service. + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + namespace: default + spec: + sslCerts: + autoMountServerCerts: "true" + serverSecret: "server-cert" # The name of the server certificate Secret. + clientSecret: "client-cert" # The name of the client certificate Secret. + caSecret: "ca-cert" # The name of the CA certificate Secret. + graphd: + config: + enable_ssl: "true" + metad: + config: + enable_ssl: "true" + storaged: + config: + enable_ssl: "true" + ``` + +3. Use `kubectl apply -f` to apply the file to the Kubernetes cluster. + +4. Verify that the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration match the key names of the certificates and private keys stored in the created Secret. + + ```bash + # Check the key names of the certificate and private key stored in the Secret. For example, check the key name of the CA certificate stored in the Secret. + kubectl get secret ca-cert -o yaml + ``` - !!! caution - - Make sure that you have added the hostname or IP of the server to the server's certificate's `subjectAltName` field before the `insecureSkipVerify` is set to `false`. 
If the hostname or IP of the server is not added, an error will occur when the client verifies the server's certificate chain and hostname. For details, see [openssl](https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl). - - When the certificates are approaching expiration, they can be automatically updated by installing [cert-manager](https://cert-manager.io/docs/installation/supported-releases/). NebulaGraph will monitor changes to the certificate directory files, and once a change is detected, it will load the new certificate content into memory. - - ### Encryption strategies - - NebulaGraph offers three encryption strategies that you can choose and configure according to your needs. - - - Encryption of client-graph and all inter-service communications - - If you want to encrypt all data transmission between the client, Graph service, Meta service, and Storage service, you need to add the `enable_ssl = true` field to each service. - - Here is an example configuration: - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - autoMountServerCerts: "true" # Automatically mount the server certificate and private key into the Pod. - serverSecret: "server-cert" # The Secret name of the server certificate and private key. - clientSecret: "client-cert" # The Secret name of the client certificate and private key. - caSecret: "ca-cert" # The Secret name of the CA certificate. - graphd: - config: - enable_ssl: "true" - metad: - config: - enable_ssl: "true" - storaged: - config: - enable_ssl: "true" - ``` - - - - Encryption of only Graph service communication - - If the K8s cluster is deployed in the same data center and only the port of the Graph service is exposed externally, you can choose to encrypt only data transmission between the client and the Graph service. In this case, other services can communicate internally without encryption. Just add the `enable_graph_ssl = true` field to the Graph service. - - Here is an example configuration: - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - autoMountServerCerts: "true" - serverSecret: "server-cert" - caSecret: "ca-cert" - graphd: - config: - enable_graph_ssl: "true" - ``` - - !!! note - - Because Operator doesn't need to call the Graph service through an interface, it's not necessary to set `clientSecret` in `sslCerts`. - - - Encryption of only Meta service communication - - If you need to transmit confidential information to the Meta service, you can choose to encrypt data transmission related to the Meta service. In this case, you need to add the `enable_meta_ssl = true` configuration to each component. - - Here is an example configuration: - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - autoMountServerCerts: "true" - serverSecret: "server-cert" - clientSecret: "client-cert" - caSecret: "ca-cert" - graphd: - config: - enable_meta_ssl: "true" - metad: - config: - enable_meta_ssl: "true" - storaged: - config: - enable_meta_ssl: "true" - ``` - - After setting up the encryption policy, when an external [client](../../14.client/1.nebula-client.md) needs to connect to the Graph service with mutual TLS, you still need to set the relevant TLS fields according to the different clients. 
See the Use NebulaGraph Console to connect to Graph service section below for examples. - - ### Example of enabling mTLS without hot-reloading - - 1. Use the pre-generated server and client certificates and private keys, and the CA certificate to create a Secret for each. - - ```yaml - kubectl create secret tls --key= --cert= - ``` - - - `tls`: Indicates that the type of secret being created is TLS, which is used to store TLS certificates. - - ``: Specifies the name of the new secret being created, which can be customized. - - `--key=`: Specifies the path to the private key file of the TLS certificate to be stored in the secret. - - `--cert=`: Specifies the path to the public key certificate file of the TLS certificate to be stored in the secret. - - - 2. Add server certificate, client certificate, CA certificate configuration, and encryption policy configuration in the corresponding cluster instance YAML file. For details, see [Encryption strategies](#encryption_strategies). - - For example, add encryption configuration for transmission data between client, Graph service, Meta service, and Storage service. - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - autoMountServerCerts: "true" - serverSecret: "server-cert" // The name of the server Certificate Secret. - clientSecret: "client-cert" // The name of the client Certificate Secret. - caSecret: "ca-cert" // The name of the CA Certificate Secret. - graphd: - config: - enable_ssl: "true" - metad: - config: - enable_ssl: "true" - storaged: - config: - enable_ssl: "true" - ``` - - 3. Use `kubectl apply -f` to apply the file to the Kubernetes cluster. - - 4. Verify that the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration match the key names of the certificates and private keys stored in the created Secret. - - ```bash - # Check the key names of the certificate and private key stored in the Secret. For example, check the key name of the CA certificate stored in the Secret. - kubectl get secret ca-cert -o yaml - ``` - - ```bash - # Check the cluster configuration file. - kubectl get nebulacluster nebula -o yaml - ``` - - Example output: - - ``` - ... - spec: - sslCerts: - autoMountServerCerts: "true" - serverSecret: server-cert - serverCert: tls.crt - serverKey: tls.key - clientSecret: client-cert - clientCert: tls.crt - clientKey: tls.key - caSecret: ca-cert - caCert: ca.crt - ... - ``` - - If the key names of the certificates and private keys stored in the Secret are different from the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration, you need to execute `kubectl edit nebulacluster ` to manually modify the cluster configuration file. - - In the example output, the key name of the CA certificate in the TLS-type Secret is `tls.crt`, so you need to change the value of caCert from `ca.crt` to `tls.crt`. - - 5. Use NebulaGraph Console to connect to the Graph service and establish a secure TLS connection. - - Example: - - ``` - kubectl run -ti --image vesoft/nebula-console:v{{console.release}} --restart=Never -- nebula-console -addr 10.98.xxx.xx -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /path/to/cert/root.crt -ssl_cert_path /path/to/cert/client.crt -ssl_private_key_path /path/to/cert/client.key - ``` - - - `-enable_ssl`: Use mTLS when connecting to NebulaGraph. 
- - `-ssl_root_ca_path`: Specify the storage path of the CA root certificate. - - `-ssl_cert_path`: Specify the storage path of the TLS public key certificate. - - `-ssl_private_key_path`: Specify the storage path of the TLS private key. - - For details on using NebulaGraph Console to connect to the Graph service, see [Connect to NebulaGraph](../4.connect-to-nebula-graph-service.md). - - !!! note - - If you set `spec.console` to start a NebulaGraph Console container in the cluster, you can enter the console container and run the following command to connect to the Graph service. - - ```bash - nebula-console -addr 10.98.xxx.xx -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /path/to/cert/root.crt -ssl_cert_path /path/to/cert/client.crt -ssl_private_key_path /path/to/cert/client.key - ``` - - At this point, you can enable mTLS in NebulaGraph. + ```bash + # Check the cluster configuration file. + kubectl get nebulacluster nebula -o yaml + ``` + + Example output: + + ``` + ... + spec: + sslCerts: + autoMountServerCerts: "true" + serverSecret: server-cert + serverCert: tls.crt + serverKey: tls.key + clientSecret: client-cert + clientCert: tls.crt + clientKey: tls.key + caSecret: ca-cert + caCert: ca.crt + ... + ``` + + If the key names of the certificates and private keys stored in the Secret are different from the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration, you need to execute `kubectl edit nebulacluster ` to manually modify the cluster configuration file. + + In the example output, the key name of the CA certificate in the TLS-type Secret is `tls.crt`, so you need to change the value of caCert from `ca.crt` to `tls.crt`. + +5. Use NebulaGraph Console to connect to the Graph service and establish a secure TLS connection. + + Example: + + ``` + kubectl run -ti --image vesoft/nebula-console:v{{console.release}} --restart=Never -- nebula-console -addr 10.98.xxx.xx -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /path/to/cert/root.crt -ssl_cert_path /path/to/cert/client.crt -ssl_private_key_path /path/to/cert/client.key + ``` + + - `-enable_ssl`: Use mTLS when connecting to NebulaGraph. + - `-ssl_root_ca_path`: Specify the storage path of the CA root certificate. + - `-ssl_cert_path`: Specify the storage path of the TLS public key certificate. + - `-ssl_private_key_path`: Specify the storage path of the TLS private key. + - For details on using NebulaGraph Console to connect to the Graph service, see [Connect to NebulaGraph](../4.connect-to-nebula-graph-service.md). + + !!! note + + If you set `spec.console` to start a NebulaGraph Console container in the cluster, you can enter the console container and run the following command to connect to the Graph service. + + ```bash + nebula-console -addr 10.98.xxx.xx -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /path/to/cert/root.crt -ssl_cert_path /path/to/cert/client.crt -ssl_private_key_path /path/to/cert/client.key + ``` + +At this point, you can enable mTLS in NebulaGraph. 
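
After applying either approach, it can be useful to confirm from inside the Kubernetes cluster that the Graph service really presents its certificate and accepts the client certificate. The sketch below is a minimal, optional check and is not part of NebulaGraph Operator itself; it assumes `openssl` is available in the pod you run it from (for example, the console pod), and it reuses the certificate paths and the `nebula-graphd-svc.default.svc.cluster.local:9669` address from the examples above.

```bash
# Hypothetical verification sketch: perform an mTLS handshake against the Graph service.
# A working setup typically ends with "Verify return code: 0 (ok)" in the output.
openssl s_client \
  -connect nebula-graphd-svc.default.svc.cluster.local:9669 \
  -cert /path/to/cert/client.crt \
  -key /path/to/cert/client.key \
  -CAfile /path/to/cert/root.crt </dev/null

# Print the expiration date of the server certificate currently being served.
# This is the same kind of check the auth-sidecar example performs with `openssl x509 -noout -enddate`.
openssl s_client \
  -connect nebula-graphd-svc.default.svc.cluster.local:9669 \
  -cert /path/to/cert/client.crt \
  -key /path/to/cert/client.key \
  -CAfile /path/to/cert/root.crt </dev/null 2>/dev/null | openssl x509 -noout -enddate
```

If the handshake fails with an unknown CA or a missing client certificate error, re-check the Secret contents and the `sslCerts` key names described in the previous steps.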
diff --git a/mkdocs.yml b/mkdocs.yml index b917db41be1..03dc9cf1bbf 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -735,12 +735,12 @@ nav: - Reclaim PVs: nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md #ent - Balance storage data after scaling out: nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md - Manage cluster logs: nebula-operator/8.custom-cluster-configurations/8.4.manage-running-logs.md -#ent + #ent - Enable mTLS: nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md - Upgrade NebulaGraph clusters: nebula-operator/9.upgrade-nebula-cluster.md - Specify a rolling update strategy: nebula-operator/11.rolling-update-strategy.md -#ent - - Backup and restore: nebula-operator/10.backup-restore-using-operator.md + +#ent - Backup and restore: nebula-operator/10.backup-restore-using-operator.md - Self-healing: nebula-operator/5.operator-failover.md - FAQ: nebula-operator/7.operator-faq.md