Supervisor Services Catalog
Discover the current Supervisor Services offered to support modern applications through vSphere Services. New services will be added over time with the goal of continuing to empower your DevOps communities.
Prior to vSphere 8 Update 1, Supervisor Services were only available with Supervisor Clusters enabled using VMware NSX-T. With vSphere 8 Update 1, Supervisor Services are also supported when using the vSphere Distributed Switch networking stack.
| Supervisor Service | vSphere 7 | vSphere 8 |
|---|---|---|
| TKG Service | ❌ * | ✅ requires vSphere 8.0 Update 3 or later |
| Consumption Interface | ❌ | ✅ requires vSphere 8.0 Update 3 or later |
| vSAN Data Persistence Platform Services - MinIO, Cloudian and Dell ObjectScale | ✅ | ✅ |
| Backup & Recovery Service - Velero | ✅ | ✅ |
| Certificate Management Service - cert-manager | ❌ | ✅ |
| Cloud Native Registry Service - Harbor | ❌ * | ✅ |
| Kubernetes Ingress Controller Service - Contour | ❌ | ✅ |
| External DNS Service - ExternalDNS | ❌ | ✅ |
| Data Services Manager Consumption Operator | ❌ | ✅ requires vSphere 8.0 Update 3 or later with additional configuration; please contact Global Support Services (GSS) for the additional configuration |

\* The embedded Harbor Registry and TKG Service features are still available and supported on vSphere 7 and onwards.
VMware Tanzu Kubernetes Grid Service (TKG Service) lets you deploy Kubernetes workload clusters on the vSphere IaaS control plane. Starting with vSphere 8.0 Update 3, Tanzu Kubernetes Grid is installed as a Supervisor Service. This architectural change decouples TKG from vSphere IaaS control plane releases and lets you upgrade the TKG Service independent of vCenter Server and Supervisor.
- Service install documentation
- Download latest version TKG Service v3.2.0
- Release Notes
- OSS Information
- Interoperability Matrix showing compatible Kubernetes releases
- Download TKG Service v3.1.0
- Release Notes
- OSS Information
- Interoperability Matrix showing compatible Kubernetes releases
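For orientation, the hedged sketch below shows a minimal TanzuKubernetesCluster manifest of the kind a DevOps user applies to a vSphere Namespace once the TKG Service is installed. All names, the VM class, the storage class, and the Tanzu Kubernetes release are placeholders and must be replaced with values available in your environment.

```yaml
# Hedged sketch of a minimal TanzuKubernetesCluster (v1alpha3 API).
# All names, classes, and the TKR below are placeholders for your environment.
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkc-demo                          # placeholder cluster name
  namespace: demo-namespace               # an existing vSphere Namespace
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: best-effort-small          # a VM class assigned to the namespace
      storageClass: demo-storage-policy   # a storage policy assigned to the namespace
      tkr:
        reference:
          name: v1.26.5---vmware.2-fips.1-tkg.1   # placeholder; list available releases with kubectl get tkr
    nodePools:
      - name: workers
        replicas: 2
        vmClass: best-effort-small
        storageClass: demo-storage-policy
```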
Provides the Local Consumption Interface (LCI) for Namespaces within the vSphere Client. This also includes the Single Sign-On (SSO) component required by the Cloud Consumption Interface (CCI) in Aria Automation within VMware Cloud Foundation.
The minimum required version for using this interface is vSphere 8 Update 3.
Installation instructions can be found here in VMware documentation.
IMPORTANT NOTICE: Occasionally, the plug-in may fail to load on the initial attempt. To check whether the plug-in has loaded correctly, click the vSphere Client menu icon, then go to Administration -> Client -> Plug-ins. Check the Status column of the Namespace UI plug-in; if you see a "Plug-in configuration with Reverse Proxy failed." message, reinstall the plug-in.
Download latest version:
SSO OSS: refer to the Open Source tab.
vSphere with Tanzu offers the vSAN Data Persistence Platform. The platform provides a framework that enables third parties to integrate their cloud native service applications with the underlying vSphere infrastructure, so that third-party software can run optimally on vSphere with Tanzu.
- Using vSAN Data Persistence Platform (vDPP) with vSphere with Tanzu documentation
- Enable Stateful Services in vSphere with Tanzu documentation
Available vDPP Services
- MinIO partner documentation
- Download version: Minio 2.0.10
- Download version: Minio 2.0.0
- Cloudian partner documentation
- Download version: Cloudian 1.3.1
- Download version: Cloudian 1.2.1
- Download version: Cloudian 1.2.0
Velero vSphere Operator helps users install Velero and its vSphere plugin on a vSphere with Kubernetes Supervisor cluster. Velero is an open source tool to safely back up and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
- Service install documentation
This is a prerequisite for a cluster admin install.
- Download latest version: Velero vSphere Operator CLI - v1.6.1
- Download: Velero vSphere Operator CLI - v1.6.0
- Download: Velero vSphere Operator CLI - v1.5.0
- Download: Velero vSphere Operator CLI - v1.4.0
- Download: Velero vSphere Operator CLI - v1.3.0
- Download: Velero vSphere Operator CLI - v1.2.0
- Download: Velero vSphere Operator CLI - v1.1.0
- Download latest version: Velero vSphere Operator v1.6.1
- Download: Velero vSphere Operator v1.6.0
- Download: Velero vSphere Operator v1.5.0
- Download: Velero vSphere Operator v1.4.0
- Download: Velero vSphere Operator v1.3.0
- Download: Velero vSphere Operator v1.2.0
- Download: Velero vSphere Operator v1.1.0
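Once the Velero vSphere Operator and Velero itself are installed, backups are requested with Velero's standard Backup custom resource. The following is a hedged sketch only; the namespace in which Velero runs and the namespaces you back up are placeholders that depend on your installation.

```yaml
# Hedged sketch of a Velero Backup request (standard upstream velero.io API).
# Namespace names below are placeholders for your environment.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: demo-backup
  namespace: velero                 # the namespace where Velero was installed (may differ)
spec:
  includedNamespaces:
    - demo-app-namespace            # workload namespace(s) to back up
  ttl: 72h0m0s                      # retain the backup for 72 hours
```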
ClusterIssuers are Kubernetes resources that represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request.
- Service install - Follow steps 1 - 5 in the documentation then continue to the bullet point below.
- Read Service Configuration to understand how to install your root CA into the ca-clusterissuer.
- Download latest version: ca-clusterissuer v0.0.2
- Download version: ca-clusterissuer v0.0.1
CA Cluster Issuer Sample values.yaml
- We do not provide any default values for this package. Instead, we encourage you to generate your own certificates. Please read How-To Deploy a self-signed CA Issuer and Request a Certificate for information on how to create a self-signed certificate.
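For orientation, a CA-backed ClusterIssuer typically looks like the hedged sketch below. The issuer name and the Secret holding your root CA certificate and key are placeholders; the exact values expected by the ca-clusterissuer package are described in the Service Configuration documentation referenced above.

```yaml
# Hedged sketch of a CA-backed ClusterIssuer (standard cert-manager v1 API).
# The issuer name and the Secret name are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-clusterissuer
spec:
  ca:
    secretName: ca-key-pair         # Secret containing the tls.crt/tls.key of your root CA
```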
Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users such as security, identity and management. Having a registry closer to the build and run environment can improve the image transfer efficiency. Harbor supports replication of images between registries, and also offers advanced security features such as user management, access control and activity auditing.
- The Contour package is a prerequisite for Harbor, so it must be installed first.
- Follow the instructions under Installing and Configuring Harbor on a Supervisor.
- Download latest version: Harbor v2.9.1
- Download version: Harbor v2.8.2
- Download version: Harbor v2.5.3
Harbor Sample values.yaml
- Download latest version: values for v2.9.1. For details about each of the required properties, see the configuration details page.
- Download version: values for v2.8.2. For details about each of the required properties, see the configuration details page.
- Download version: values for v2.5.3. For details about each of the required properties, see the configuration details page.
Contour is an Ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. Contour supports dynamic configuration updates out of the box while maintaining a lightweight profile.
- Service install - Follow steps 1 - 5 in the documentation.
- Download latest version: Contour v1.28.2
- Download version: Contour v1.24.4
- Download version: Contour v1.18.2
Contour Sample values.yaml
- Download values for all versions. These values can be used as-is and require no configuration changes.
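As a quick illustration of what Contour enables once installed, the hedged sketch below routes traffic for a hostname to a Kubernetes Service using Contour's HTTPProxy resource. The hostname, namespace, and Service name are placeholders.

```yaml
# Hedged sketch of a Contour HTTPProxy (standard projectcontour.io v1 API).
# Hostname, namespace, and Service name are placeholders.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo-proxy
  namespace: demo-namespace
spec:
  virtualhost:
    fqdn: demo.example.com          # DNS name clients will use
  routes:
    - services:
        - name: demo-service        # an existing ClusterIP Service
          port: 80
```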
ExternalDNS publishes DNS records for applications to DNS servers, using a declarative, Kubernetes-native interface. This operator connects to your DNS server (not included here). For a list of supported DNS providers and their corresponding configuration settings, see the upstream external-dns project.
- On Supervisors where Harbor is deployed with Contour, ExternalDNS may be used to publish a DNS hostname for the Harbor service.
- Download latest version: ExternalDNS v0.13.4
- Download version: ExternalDNS v0.11.0
ExternalDNS data values.yaml
- Because of the large list of supported DNS providers, we do not supply complete sample configuration values here. If you're deploying ExternalDNS with Harbor and Contour, make sure to include source=contour-httpproxy in the configuration values. An incomplete example of the service configuration is included below. Make sure to set up API access to your DNS server and include authentication details with the service configuration.
```yaml
deployment:
  args:
    - --source=contour-httpproxy
    - --source=service
    - --log-level=debug
```
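As a slightly fuller, still hedged sketch, the arguments below illustrate how a provider and zone filter might be added. The --provider, --domain-filter, and --txt-owner-id flags are standard upstream external-dns options, but how provider credentials are passed to this Supervisor Service package is not shown here and must follow the upstream provider documentation.

```yaml
# Hedged sketch only: standard upstream external-dns flags added to the args above.
# Provider credentials are not shown and must be configured per the upstream docs.
deployment:
  args:
    - --source=contour-httpproxy
    - --source=service
    - --provider=aws                  # any provider supported by upstream external-dns
    - --domain-filter=example.com     # placeholder; restrict to your DNS zone
    - --txt-owner-id=supervisor-demo  # identifies this ExternalDNS instance in TXT records
```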
NSX Management Proxy allows the Antrea-NSX adapter in TKG workload clusters to reach the NSX Manager. We recommend using NSX Management Proxy when the management network and workload network are isolated and the workloads running in TKG workload clusters cannot reach the NSX Manager.
- Download latest version: nsx-management-proxy v0.2.1
- Download version: nsx-management-proxy v0.2.0
- Download version: nsx-management-proxy v0.1.1
NSX Management Proxy Sample values.yaml
- Download values for all versions. Make sure to fill the property nsxManagers with your NSX Manager IP(s).
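A hedged sketch of the relevant fragment is shown below. Whether nsxManagers expects a single string, a comma-separated string, or a list, and whether other properties are required, should be verified against the downloaded values file.

```yaml
# Hedged sketch only; verify the exact format in the downloaded values file.
nsxManagers: "192.168.110.201"      # placeholder NSX Manager IP(s)
```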
The Data Services Manager (DSM) Consumption Operator facilitates native, self-service access to DSM within a Kubernetes environment. It exposes a selection of resources supported by the DSM provider, allowing customers to connect to the DSM provider from Kubernetes. Although the DSM provider does not currently support tenancy natively, the DSM Consumption Operator enables customers to seamlessly integrate their existing tenancy model, effectively introducing tenancy into the DSM provider.
- The DSM provider is a prerequisite for the DSM Consumption Operator, so it must be installed first.
- Installation instructions can be found here in VMware documentation.
- Configuration instructions can be found here in VMware documentation.
- Download latest version: DSM Consumption Operator v1.2.0
Data Services Manager Consumption Operator Sample values.yaml
- Download latest version: values for v1.2.0. For details about each of the required properties, see the configuration details page.
Installation Note: DSM Consumption Operator v1.2.0
When installing DSM Consumption Operator v1.2.0 as a Supervisor Service, if you encounter any issues related to the Service-id, please contact Global Support Services (GSS) for immediate assistance.
Upgrade Note: DSM Consumption Operator v1.2.0
Earlier versions of the DSM Consumption Operator, including v1.1.0, v1.1.1, and v1.1.2, are deprecated and should not be used for new Supervisor Service installations.
If you are upgrading from these older versions to v1.2.0, do not uninstall the existing version. Instead, we highly recommend contacting GSS for guidance and support. This will ensure a smooth upgrade process and prevent potential disruptions.
For additional help, please refer to the support documentation or reach out to our technical support team.
The following Supervisor Services Labs catalog is only provided for testing and educational purposes. Please do not use these services in a production environment. These services are intended to demonstrate Supervisor Services' capabilities and usability. VMware will strive to provide regular updates to these services. The Labs services have been tested starting from vSphere 8.0. Over time, depending on usage and customer needs, some of these services may be included in the core product.
WARNING - By downloading and using these solutions from the Supervisor Services Labs catalog, you explicitly agree to the conditional use license agreement.
The Argo CD Operator manages the entire lifecycle of Argo CD and its components. The operator aims to automate the tasks required to operate an Argo CD deployment. Beyond installation, the operator helps automate the process of upgrading, backing up, and restoring as needed and removes the human toil as much as possible. For a detailed description of how to consume the ArgoCD Operator, see the ArgoCD Operator project.
- Download the latest version: ArgoCD Operator v0.12.0
- Download previous v0.8.0: ArgoCD Operator v0.8.0
ArgoCD Operator Sample values.yaml for v0.12.0 - values.yaml
ArgoCD Operator Sample values.yaml for v0.8.0 - None
- The sample values.yaml for the latest version has been provided above. This operator requires minimal configuration, and the necessary pods get deployed in the svc-argocd-operator-domain-xxx namespace.
- Check out this example on deploying an ArgoCD instance with the Argo CD Operator here.
- For advanced configurations, check the detailed reference and sample usage.
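Once the operator is running, an Argo CD instance is requested by creating an ArgoCD custom resource in a namespace. The hedged sketch below uses the argoproj.io/v1beta1 group/version used by recent operator releases (older releases used v1alpha1) with an empty spec so the operator applies its defaults; the names are placeholders.

```yaml
# Hedged sketch of a minimal ArgoCD instance managed by the ArgoCD Operator.
# An empty spec lets the operator apply its defaults; names are placeholders.
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: demo-argocd
  namespace: demo-namespace
spec: {}
```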
External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, CyberArk Conjur, and others. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret. For a detailed description of how to consume the External Secrets Operator, visit the External Secrets Operator project.
- Download latest version: External Secrets Operator v0.9.14
External Secrets Operator Sample values.yaml - None
- We do not provide a default values.yaml for this package. This operator requires minimal configuration, and the necessary pods get deployed in the svc-external-secrets-operator-domain-xxx namespace.
- Check out this example of how to access a secret from GCP Secret Manager using the External Secrets Operator here.
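For orientation, the hedged sketch below shows the upstream ExternalSecret shape: it pulls one key from a provider-specific SecretStore (which you must create separately with your provider credentials) into a regular Kubernetes Secret. All names are placeholders.

```yaml
# Hedged sketch of an ExternalSecret (upstream external-secrets.io v1beta1 API).
# The referenced SecretStore must be created separately with provider credentials.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: demo-external-secret
  namespace: demo-namespace
spec:
  refreshInterval: 1h               # how often to re-sync from the external manager
  secretStoreRef:
    name: demo-secret-store         # placeholder SecretStore name
    kind: SecretStore
  target:
    name: demo-k8s-secret           # the Kubernetes Secret that will be created
  data:
    - secretKey: password           # key in the resulting Kubernetes Secret
      remoteRef:
        key: demo-remote-secret     # name/path of the secret in the external manager
```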
The RabbitMQ Cluster Kubernetes Operator provides a consistent and easy way to deploy RabbitMQ clusters to Kubernetes and run them, including "day two" (continuous) operations. RabbitMQ clusters deployed using the Operator can be used by applications running on or outside Kubernetes. For a detailed description of how to consume the RabbitMQ Cluster Kubernetes Operator, see the RabbitMQ Cluster Kubernetes Operator project.
- Download latest version: RabbitMQ Cluster Kubernetes Operator v2.8.0
RabbitMQ Cluster Kubernetes Operator Sample values.yaml
- Modify the latest values.yaml by providing a new location for the RabbitMQ Cluster Kubernetes Operator image. This may be required to overcome DockerHub's rate-limiting issues. The RabbitMQ Cluster Kubernetes Operator pods and related artifacts get deployed in the svc-rabbitmq-operator-domain-xx namespace.
- Check out this example of how to deploy a RabbitMQ cluster using the RabbitMQ Cluster Kubernetes Operator here.
- For advanced configurations, check the detailed reference.
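For orientation, the hedged sketch below requests a single-node RabbitMQ cluster using the operator's RabbitmqCluster resource; the name, namespace, and sizing are placeholders.

```yaml
# Hedged sketch of a RabbitmqCluster (upstream rabbitmq.com v1beta1 API).
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: demo-rabbitmq
  namespace: demo-namespace         # placeholder namespace
spec:
  replicas: 1                       # single-node cluster for testing
```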
A Golang-based Redis operator that oversees Redis standalone/cluster/replication/sentinel mode setup on top of Kubernetes. It can create a Redis cluster setup using best practices. It also provides an in-built monitoring capability using Redis-exporter. For a detailed description of how to consume the Redis Operator, see the Redis Operator project.
- Download latest version: Redis Operator v0.16.0
Redis Operator Sample values.yaml
- We do not provide a default values.yaml for this package. This operator requires minimal configuration, and the necessary pods get deployed in the svc-redis-operator-domain-xxx namespace.
- View an example of how to use the Redis Operator to deploy a Redis standalone instance here.
- For advanced configurations, check the detailed reference.
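For orientation, the hedged sketch below requests a standalone Redis instance. The API group, kind, and field names follow the upstream Redis Operator CRDs and should be verified against the operator version you deploy; the image reference is a placeholder.

```yaml
# Hedged sketch of a standalone Redis instance (upstream Redis Operator CRD).
# Verify the API version and fields against the operator release you install.
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: Redis
metadata:
  name: demo-redis
  namespace: demo-namespace                # placeholder namespace
spec:
  kubernetesConfig:
    image: quay.io/opstree/redis:v7.0.12   # placeholder image/tag
```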
KEDA is a single-purpose and lightweight component that can be added to any Kubernetes cluster. KEDA works alongside standard Kubernetes components like the Horizontal Pod Autoscaler and can extend functionality without overwriting or duplication. With KEDA, you can explicitly choose the apps you want to scale in an event-driven way, while other apps continue to function unchanged. This makes KEDA a flexible and safe option to run alongside any number of other Kubernetes applications or frameworks. For a detailed description of how to use KEDA, see the KEDA project.
- Download latest version: KEDA v2.13.1 Note: This version supports Kubernetes v1.27 - v1.29.
KEDA Sample values.yaml
- We do not provide a default values.yaml for this package. This operator requires minimal configuration, and the necessary pods get deployed in the svc-kedaxxx namespace.
- View an example of how to use a KEDA ScaledObject to scale an NGINX deployment here.
- For additional examples, check the detailed reference.
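For orientation, the hedged sketch below scales a Deployment between 1 and 5 replicas based on CPU utilization using a ScaledObject; the Deployment name and namespace are placeholders, and event-source triggers (queues, topics, and so on) follow the same pattern.

```yaml
# Hedged sketch of a KEDA ScaledObject (upstream keda.sh v1alpha1 API).
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: demo-scaledobject
  namespace: demo-namespace
spec:
  scaleTargetRef:
    name: nginx-deployment          # placeholder Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"                 # target average CPU utilization (%)
```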
Grafana Operator is a Kubernetes operator built to help you manage your Grafana instances and their resources from within Kubernetes. The operator can install and manage local Grafana instances, Dashboards, and Datasources through Kubernetes custom resources. The Grafana Operator automatically syncs the Kubernetes custom resources with the actual resources in the Grafana instance. For a detailed description of how to use Grafana Operator, see the Grafana Project.
- Download latest version: Grafana Operator v5.15.0.
Grafana Operator Sample values.yaml for v5.15.0 - values.yaml
- The sample values.yaml for the latest version has been provided above. This operator requires minimal configuration, and the necessary pods get deployed in the svc-grafana-operator-xxx namespace.
- View an example of how to use Grafana Operator to create a Grafana instance here.
- For additional examples, check the detailed reference.
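For orientation, the hedged sketch below requests a Grafana instance using the operator's v1beta1 Grafana resource; the names and credentials are placeholders and real credentials should be supplied via a Secret.

```yaml
# Hedged sketch of a Grafana instance (grafana.integreatly.org v1beta1 API, operator v5).
# Names and credentials are placeholders; use a Secret for real credentials.
apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  name: demo-grafana
  namespace: demo-namespace
  labels:
    dashboards: demo-grafana        # label that GrafanaDashboard resources can select
spec:
  config:
    security:
      admin_user: admin
      admin_password: changeme      # placeholder only
```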