
Feat: support Kubernetes multi-cluster configuration #1192

Closed
liu-hm19 opened this issue Jun 27, 2024 · 6 comments · Fixed by #1211
Labels: help wanted, kind/feature, proposal

liu-hm19 commented Jun 27, 2024

What would you like to be added?

Please support declaring Kubernetes runtime configurations in Workspace, so that multiple clusters can be managed with different workspace configurations.

The configuration may include:

  • the local file path of KubeConfig
  • the URL for obtaining KubeConfig content, e.g. S3 endpoint
  • the KubeConfig content itself

Why is this needed?

Background

Kusion needs to deploy the application resources to a specific Kubernetes cluster, thus it should allow users to specify the information of the targeted cluster. Currently, users can specify the cluster by configuring the KUBECONFIG environment variable, which stays consistent with kubectl. The related code can be found here

Target

In order to better integrate Kusion into CI/CD pipelines and support multi-cluster scenarios more flexibly, we hope to add Kubernetes runtime configurations to the Kusion Workspace. For example, it could include the following configs:

  • the local file path of KubeConfig
  • the URL for obtaining KubeConfig content, e.g. S3 endpoint
  • the KubeConfig content itself

This way, each workspace will correspond to a separate Kubernetes cluster, and users won't need to re-set the KUBECONFIG environment variable every time before deploying to a different Kubernetes cluster.

Main Concern

Actually, we once supported declaring Kubernetes and Terraform runtime configurations in Workspace in previous versions, but later we removed it for the following reasons:

  • Workspace is accessible to many team members, which may easily lead to the leakage of sensitive information, such as K8s cluster certificates and TF provider AK/SK.
  • KubeConfig can usually be considered a workspace-level configuration, but the config of a Terraform Provider is very likely to be at the resource level, which means that the resources of different modules in the same workspace may still differ.

Currently, we also support specifying the runtime information in the Extensions field of Resource in Spec. Now, we need to consider the actual requirements and situations comprehensively to implement this feature.

liu-hm19 added the help wanted, kind/feature, and proposal labels on Jun 27, 2024
liu-hm19 (Contributor, Author) commented:

@hoangndst This is the issue tracking the support for KubeConfig configuration in Workspace. Welcome to discuss and exchange ideas : ) 🤝 🎉

hoangndst (Contributor) commented:

  1. We should have a resource manager for each workspace. For example, a workspace could have multiple K8s clusters, databases, and S3 buckets that are created once and can then be used by other projects in the same workspace.
  2. Resources shouldn't need to be created together with an application; they could be created separately, used later, and shared by one or more projects.

SparkYuan added this to the v0.12.1 milestone on Jun 28, 2024

liu-hm19 commented Jun 28, 2024

@hoangndst Thanks for your comment! Could you please provide a specific example to further illustrate your requirements? From my understanding, you may want the workspace to support multiple Kubernetes clusters, and to let users declare some shared infrastructure resources in the workspace for multiple Projects or Stacks, such as a global database, an S3 bucket, or even EC2 instances, which may need to be created when setting up the workspace environment.

However, we currently consider the workspace as a Landing Zone, which typically corresponds to a single Kubernetes cluster. For different K8s clusters, we suggest users create different workspaces to manage them.

And since Kusion is application-centric, we have primarily focused on managing resources at the application level. Thus we haven't supported global workspace-level resources in the workspace.

But we have reserved a Context field in the workspace, which is intended to store some workspace-level configurations, such as deployment topology, server endpoint, and some other metadata. Users can declare workspace-level configurations in the Context field, and Kusion can pass it to module generators, so that the module generators can use these configs to generate complete resources.

Meanwhile, we are currently working on supporting the import of existing Terraform resources. Applications can reuse existing resources by adding kusionstack.io/import-id to the Extensions field in Resource. Related issue and pull request are listed below.

The issue you mentioned about managing global infrastructure resources shared between multiple applications in workspace is still under discussion. If possible, we hope you can provide a specific scenario so that we can work together to design a solution : )

cc @SparkYuan

SparkYuan removed their assignment on Jul 1, 2024

liu-hm19 commented Jul 3, 2024

After discussing with @SparkYuan and @ffforest we have tentatively formulated the following design to support Kubernetes and Terraform runtime configurations in workspace.

We will use the Context field in workspace, which is a map of string to any, and the configurations related to K8s and TF runtimes can be flattened in this field.

Considering the design concept of Kusion, a Workspace corresponds to a specific landing zone, thus we still want one workspace to relate to only one kubeconfig file. The configuration items for kubeconfig in workspace (the keys in the map of Context) include:

  • kubeconfig_path: corresponding to the local kubeconfig file path.
  • kubeconfig_content: corresponding to the content of the kubeconfig file, which can be plaintext, or can be filled with a reference URL of an external Secrets Management system, leveraging the capabilities of Kusion External Secrets.
```yaml
# Example K8s runtime configs in workspace.
context:
  kubeconfig_path: /Users/kusion-test/.kube/config
  kubeconfig_content: ref://kubeconfig/kubeconfig-content
```

If both kubeconfig_path and kubeconfig_content are configured at the same time, kubeconfig_content takes priority.
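The priority between the two keys could be sketched as follows. This is a minimal illustration, not Kusion's actual implementation; the function name is hypothetical:

```python
# Sketch of the kubeconfig resolution order described above:
# kubeconfig_content wins over kubeconfig_path when both are set.
# Function name and error handling are illustrative only.

def resolve_kubeconfig(context: dict) -> str:
    """Return kubeconfig content from a workspace context mapping."""
    content = context.get("kubeconfig_content")
    if content is not None:
        if content.startswith("ref://"):
            # A ref:// value points at an external secrets manager;
            # this sketch returns the reference unresolved.
            return content
        return content
    path = context.get("kubeconfig_path")
    if path is not None:
        with open(path) as f:
            return f.read()
    raise ValueError("no kubeconfig configured in workspace context")
```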

In addition to Kubernetes runtime configurations, Context also supports Terraform runtime configurations. Similarly, Context can store various types of providers' AK/SK and region, supporting both plaintext and secret ref formats.

```yaml
# Example TF provider runtime configs in workspace.
context:
  AWS_ACCESS_KEY_ID: AK**********
  AWS_SECRET_ACCESS_KEY: ref://aws-sk/secret-access-key
  AWS_REGION: us-east-1
```

The priority order for runtime configuration across the workspace Context, environment variables, and the Extensions field of Resource in Spec will be: Context > environment variables > Extensions.
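That lookup order can be sketched with a small helper; the function name is hypothetical and this is not Kusion's actual code:

```python
import os

# Illustrative precedence for a single runtime config key:
# workspace Context > environment variables > resource Extensions.

def resolve_runtime_config(key: str, context: dict, extensions: dict):
    """Return the effective value for `key`, or None if unset anywhere."""
    if key in context:
        return context[key]
    if key in os.environ:
        return os.environ[key]
    return extensions.get(key)
```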

For remote kubeconfig content that can be directly obtained via curl, it can be configured like the following (still under consideration):

```yaml
# Use `curl` to get `kubeconfig` content.
context:
  kubeconfig_content: https://location-of-the-kubeconfig
```

As for obtaining kubeconfig from S3, further discussion is needed because this may involve some access permission issues.
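The curl-style fetch under consideration could be sketched like this. It is a minimal illustration with a hypothetical function name; real code would need authentication, retries, and content validation:

```python
import urllib.request

# Sketch of fetching remote kubeconfig content over HTTPS,
# as in the curl-style config above. The URL is a placeholder.

def fetch_kubeconfig(url: str, timeout: float = 10.0) -> str:
    """Download kubeconfig content from an HTTPS endpoint."""
    if not url.startswith("https://"):
        # Refuse plaintext transports: kubeconfig holds cluster credentials.
        raise ValueError("only https:// kubeconfig URLs are allowed")
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```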

@hoangndst Can this meet your requirements? Welcome to review and provide your ideas and suggestions. Looking forward to hearing your thoughts : )

hoangndst (Contributor) commented:

With this method, the problem of K8s resources will be solved. But what about resource managers? In order for Kusion to become a real Platform Orchestrator, as it aims to be, I would like to recommend resource management features for Kusion. Humanitec also allows Shared Resource dependencies.

New Design

Storage:

  • workspace folder.
  • release folder.
  • resource folder: saves every resource that has been created.

Feature:

  • Resource will be responsible for provisioning infrastructure resources (K8s cluster, database, server, LB, S3, ...); existing resources can also be added if available. The workflow would still be to write the main.k file and import the necessary modules to provision the resources, like release.
    • Create resources: kusion resource apply
    • Add external resource: kusion resource add -f resource.yaml (the YAML format when adding existing resources would be the same as the YAML format Kusion is working on)
  • Release will be responsible for deploying the application to K8s and provisioning the infrastructure.
    • A new release can use resources created previously by passing in the resource ID.

@liu-hm19 @SparkYuan Looking forward to hearing your comments :D

SparkYuan (Member) commented:

The definition of Release can be found here. It consists of Spec, State, and other metadata.

The Spec represents the operational intentions that you aim to deliver using Kusion. It contains all resources that would be operated in one kusion command such as 'kusion apply'.

The State is a record of the result of an operation. It is a mapping between resources managed by Kusion and the actual infra resources. It is used as a data source in the 3-way merge/diff during operations like 'apply' and 'preview'.
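As a rough illustration of the 3-way idea (desired Spec vs. recorded State vs. live resources), here is a simplified sketch; the dict shapes and action names are invented for illustration and are not Kusion's actual data model:

```python
# Simplified 3-way diff in the spirit described above: compare the
# desired spec, the last recorded state, and the live resource values.

def three_way_diff(spec: dict, state: dict, live: dict) -> dict:
    """Return a per-key action: create, update, delete, or unchanged.

    Keys that are live but neither desired nor recorded are treated
    as unmanaged and left alone.
    """
    actions = {}
    for key in set(spec) | set(state) | set(live):
        if key in spec and key not in live:
            actions[key] = "create"
        elif key not in spec and key in state:
            # Previously managed, no longer desired.
            actions[key] = "delete"
        elif key in spec and key in live and spec[key] != live[key]:
            actions[key] = "update"
        elif key in spec:
            actions[key] = "unchanged"
    return actions
```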

We will add a document to our website next week to explain the concept of Release. Here is the issue to track it.

Regarding the question you mentioned above, Kusion is an app-centric system designed to improve the efficiency of overall app development. However, certain resources may not belong to a specific application or may need to exist before any application, such as a k8s cluster. We have created a new Project and defined an internal schema to initialize all these shared resources. Applications in other Projects can reuse existing resources by adding the IaaS resource ID in the Spec. This feature will be released in the next version.

The internal schema is closely tied to our internal business, so it hasn't been open-sourced yet. The requirement you proposed does exist and we plan to solve it. We have also had some discussions within our team. If you have any ideas, we are open to discussion and any input is appreciated.

Besides, Kusion also supports resource dependencies, as you mentioned above. Details can be found here

SparkYuan changed the title from "Feat: support Kubernetes runtime configurations in workspace" to "Feat: support Kubernetes multi-cluster configuration" on Jul 8, 2024