Kubermatic machine-controller

Important Note: User data plugins have been removed from machine-controller. Operating System Manager is their successor; it is responsible for creating and managing the required configurations for worker nodes in a Kubernetes cluster, with better modularity and extensibility. Please refer to Operating System Manager for more details.

Table of Contents

  • Features
  • Supported Kubernetes Versions
  • Community Providers
  • Quickstart
  • Advanced Usage
  • Development
  • Troubleshooting
  • Contributing
  • Changelog

Features

What Works

  • Creation of worker nodes on AWS, DigitalOcean, OpenStack, Azure, Google Cloud Platform, Nutanix, VMware Cloud Director, VMware vSphere, Hetzner Cloud and KubeVirt
  • Using Ubuntu, Flatcar, or Rocky Linux 8 distributions (not all distributions work on all providers)

Supported Kubernetes Versions

machine-controller tries to follow the Kubernetes version support policy as closely as possible.

The currently supported Kubernetes versions are:

  • 1.31
  • 1.30
  • 1.29

Community Providers

Some cloud providers implemented in machine-controller have been graciously contributed by community members. Those cloud providers are not part of the automated end-to-end tests run by the machine-controller developers and thus, their status cannot be guaranteed. The machine-controller developers assume that they are functional, but can only offer limited support for new features or bugfixes in those providers.

The current list of community providers is:

  • Linode
  • Vultr
  • OpenNebula

What Doesn't Work

  • Creation of control plane (master) nodes (not planned at the moment)

Quickstart

Deploy machine-controller

  • Install cert-manager to generate the certificates used by the webhooks, since they serve over HTTPS:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.2/cert-manager.yaml
  • Run kubectl apply -f examples/operating-system-manager.yaml to deploy operating-system-manager, which is responsible for managing user data for worker machines.
  • Run kubectl apply -f examples/machine-controller.yaml to deploy the machine-controller.
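
Once both manifests are applied, a quick sanity check is to list the controller pods. This is a minimal sketch, assuming the example manifests deploy into the kube-system namespace; adjust the namespace and name filter if you customised the manifests:

# List the controller pods; namespace and pod name prefixes are assumptions based on the example manifests
kubectl get pods -n kube-system | grep -E 'machine-controller|operating-system-manager'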

Creating a MachineDeployment

# Edit examples/$cloudprovider-machinedeployment.yaml, then create the MachineDeployment
kubectl create -f examples/$cloudprovider-machinedeployment.yaml
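
For orientation, the snippet below sketches the general shape of such a manifest, loosely following the shipped examples. All values (name, replica count, Hetzner provider settings, kubelet version) are illustrative placeholders; the fields under cloudProviderSpec differ per provider, so always start from the matching file in examples/:

apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: my-workers                  # placeholder name
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      name: my-workers
  template:
    metadata:
      labels:
        name: my-workers
    spec:
      providerSpec:
        value:
          cloudProvider: hetzner    # one of the supported providers
          cloudProviderSpec:        # provider-specific; see the examples/ manifests
            token: "<< YOUR_API_TOKEN >>"
            serverType: cx21
            location: fsn1
          operatingSystem: ubuntu
          operatingSystemSpec:
            distUpgradeOnBoot: false
      versions:
        kubelet: 1.30.5             # pick one of the supported Kubernetes versions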

Advanced Usage

Specifying the Apiserver Endpoint

By default, the controller looks for a cluster-info ConfigMap in the kube-public namespace. If one is found that contains a minimal kubeconfig (kubeadm clusters have one by default), this kubeconfig will be used for node bootstrapping. The kubeconfig only needs to contain two things:

  • CA-Data
  • The public endpoint for the Apiserver

If no such ConfigMap can be found, the following fallbacks apply:

CA Data

The Certificate Authority (CA) will be loaded from the passed kubeconfig when running outside the cluster or from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt when running inside the cluster.

Apiserver Endpoint

The first endpoint of the kubernetes Endpoints object will be used: kubectl get endpoints kubernetes -o yaml
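
For illustration, that Endpoints object has roughly the following shape; the controller takes the first address/port pair it finds (the IP and port below are placeholders):

apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
- addresses:
  - ip: 10.0.0.1      # the first listed address is used
  ports:
  - name: https
    port: 6443
    protocol: TCP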

Example cluster-info ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: kube-public
data:
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHRENDQWdDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREE5TVRzd09RWURWUVFERXpKeWIyOTAKTFdOaExtaG1kblEwWkd0bllpNWxkWEp2Y0dVdGQyVnpkRE10WXk1a1pYWXVhM1ZpWlhKdFlYUnBZeTVwYnpBZQpGdzB4TnpFeU1qSXdPVFUyTkROYUZ3MHlOekV5TWpBd09UVTJORE5hTUQweE96QTVCZ05WQkFNVE1uSnZiM1F0ClkyRXVhR1oyZERSa2EyZGlMbVYxY205d1pTMTNaWE4wTXkxakxtUmxkaTVyZFdKbGNtMWhkR2xqTG1sdk1JSUIKSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTNPMFZBZm1wcHM4NU5KMFJ6ckhFODBQTQo0cldvRk9iRXpFWVQ1Unc2TjJ0V3lqazRvMk5KY1R1YmQ4bUlONjRqUjFTQmNQWTB0ZVRlM2tUbEx0OWMrbTVZCmRVZVpXRXZMcHJoMFF5YjVMK0RjWDdFZG94aysvbzVIL0txQW1VT0I5TnR1L2VSM0EzZ0xxNHIvdnFpRm1yTUgKUUxHbllHNVVPN25WSmc2RmJYbGxtcmhPWlUvNXA3c0xwQUpFbCtta3RJbzkybVA5VGFySXFZWTZTblZTSmpDVgpPYk4zTEtxU0gxNnFzR2ZhclluZUl6OWJGKzVjQTlFMzQ1cFdQVVhveXFRUURSNU1MRW9NY0tzYVF1V2g3Z2xBClY3SUdYUzRvaU5HNjhDOXd5REtDd3B2NENkbGJxdVRPMVhDb2puS1o0OEpMaGhFVHRxR2hIa2xMSkEwVXpRSUQKQVFBQm95TXdJVEFPQmdOVkhROEJBZjhFQkFNQ0FxUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRwo5dzBCQVFzRkFBT0NBUUVBamlNU0kxTS9VcUR5ZkcyTDF5dGltVlpuclBrbFVIOVQySVZDZXp2OUhCUG9NRnFDCmpENk5JWVdUQWxVZXgwUXFQSjc1bnNWcXB0S0loaTRhYkgyRnlSRWhxTG9DOWcrMU1PZy95L1FsM3pReUlaWjIKTysyZGduSDNveXU0RjRldFBXamE3ZlNCNjF4dS95blhyZG5JNmlSUjFaL2FzcmJxUXd5ZUgwRjY4TXd1WUVBeQphMUNJNXk5Q1RmdHhxY2ZpNldOTERGWURLRXZwREt6aXJ1K2xDeFJWNzNJOGljWi9Tbk83c3VWa0xUNnoxcFBRCnlOby9zNXc3Ynp4ekFPdmFiWTVsa2VkVFNLKzAxSnZHby9LY3hsaTVoZ1NiMWVyOUR0VERXRjdHZjA5ZmdpWlcKcUd1NUZOOUFoamZodTZFcFVkMTRmdXVtQ2ttRHZIaDJ2dzhvL1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
        server: https://hfvt4dkgb.europe-west3-c.dev.kubermatic.io:30002
      name: ""
    contexts: []
    current-context: ""
    kind: Config
    preferences: {}
    users: []

Development

Testing

Unit Tests

Simply run make test-unit

End-to-End Locally

[WIP]

Troubleshooting

If you encounter issues, file an issue or talk to us in the #kubermatic channel on the Kubermatic Slack.

Contributing

Thanks for taking the time to join our community and start contributing!

Before You Start

  • Please familiarize yourself with the Code of Conduct before contributing.
  • See CONTRIBUTING.md for instructions on the developer certificate of origin that we require.

Pull Requests

  • We welcome pull requests. Feel free to dig through the issues and jump in.

Changelog

See the list of releases to find out about feature changes.