
Onpremises provisioning for all cluster resources was implemented #624

Merged
merged 2 commits into main from onpremises-provisioning
Dec 11, 2023

Conversation

testisnullus
Collaborator

No description provided.

@testisnullus testisnullus self-assigned this Nov 15, 2023
@testisnullus testisnullus added the enhancement label Nov 15, 2023
@testisnullus testisnullus force-pushed the onpremises-provisioning branch 3 times, most recently from e446339 to 73e6f7a on November 16, 2023 15:46
Comment on lines 86 to 98
	osDiskSizeMatched, err := regexp.Match(models.StorageRegExp, []byte(k.Spec.OnPremisesSpec.OSDiskSize))
	if !osDiskSizeMatched || err != nil {
		return fmt.Errorf("disk size field for node OS must fit pattern: %s",
			models.StorageRegExp)
	}

	dataDiskSizeMatched, err := regexp.Match(models.StorageRegExp, []byte(k.Spec.OnPremisesSpec.DataDiskSize))
	if !dataDiskSizeMatched || err != nil {
		return fmt.Errorf("disk size field for storing cluster data must fit pattern: %s",
			models.StorageRegExp)
	}

	nodeMemoryMatched, err := regexp.Match(models.MemoryRegExp, []byte(k.Spec.OnPremisesSpec.NodeMemory))
	if !nodeMemoryMatched || err != nil {
		return fmt.Errorf("node memory field must fit pattern: %s",
			models.MemoryRegExp)
	}

	if k.Spec.PrivateNetworkCluster {
		if k.Spec.OnPremisesSpec.SSHGatewayCPU == 0 || k.Spec.OnPremisesSpec.SSHGatewayMemory == "" {
			return fmt.Errorf("fields SSHGatewayCPU and SSHGatewayMemory must not be empty")
		}
		sshGatewayMemoryMatched, err := regexp.Match(models.MemoryRegExp, []byte(k.Spec.OnPremisesSpec.SSHGatewayMemory))
		if !sshGatewayMemoryMatched || err != nil {
			return fmt.Errorf("ssh gateway memory field must fit pattern: %s",
				models.MemoryRegExp)
		}
	}
Contributor

Can you move this to a separate function so we don't duplicate this code so many times?

Collaborator Author

Yep, will fix
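
For illustration, a helper along these lines could fold the repeated regexp checks into one place; the name and signature below are assumptions for the sketch, not necessarily the code that was merged:

```go
// validateWithRegexp is a sketch of a shared validation helper (assumed name).
func validateWithRegexp(value, pattern, fieldDescription string) error {
	matched, err := regexp.MatchString(pattern, value)
	if err != nil {
		return err
	}
	if !matched {
		return fmt.Errorf("%s field must fit pattern: %s", fieldDescription, pattern)
	}
	return nil
}
```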

Comment on lines 466 to 558
func (k *Kafka) NewExposePorts() []k8scorev1.ServicePort {
	var ports []k8scorev1.ServicePort
	ports = []k8scorev1.ServicePort{{
		Name: models.SSH,
		Port: models.Port22,
		TargetPort: intstr.IntOrString{
			Type:   intstr.Int,
			IntVal: models.Port22,
		},
	},
	}

	if !k.Spec.PrivateNetworkCluster {
		additionalPorts := []k8scorev1.ServicePort{
			{
				Name: models.KafkaClient,
				Port: models.Port9092,
				TargetPort: intstr.IntOrString{
					Type:   intstr.Int,
					IntVal: models.Port9092,
				},
			},
			{
				Name: models.KafkaControlPlane,
				Port: models.Port9093,
				TargetPort: intstr.IntOrString{
					Type:   intstr.Int,
					IntVal: models.Port9093,
				},
			},
		}
		if k.Spec.ClientToClusterEncryption {
			sslPort := k8scorev1.ServicePort{
				Name: models.KafkaBroker,
				Port: models.Port9094,
				TargetPort: intstr.IntOrString{
					Type:   intstr.Int,
					IntVal: models.Port9094,
				},
			}
			additionalPorts = append(additionalPorts, sslPort)
		}
		ports = append(ports, additionalPorts...)
	}

	return ports
}

func (k *Kafka) NewHeadlessPorts() []k8scorev1.ServicePort {
	ports := []k8scorev1.ServicePort{
		{
			Name: models.KafkaClient,
			Port: models.Port9092,
			TargetPort: intstr.IntOrString{
				Type:   intstr.Int,
				IntVal: models.Port9092,
			},
		},
		{
			Name: models.KafkaControlPlane,
			Port: models.Port9093,
			TargetPort: intstr.IntOrString{
				Type:   intstr.Int,
				IntVal: models.Port9093,
			},
		},
	}
	if k.Spec.ClientToClusterEncryption {
		kafkaBrokerPort := k8scorev1.ServicePort{
			Name: models.KafkaBroker,
			Port: models.Port9094,
			TargetPort: intstr.IntOrString{
				Type:   intstr.Int,
				IntVal: models.Port9094,
			},
		}
		ports = append(ports, kafkaBrokerPort)
	}

	return ports
}
Contributor

Can you move this to a separate function and just pass the values you need, so we don't duplicate this code so many times?

Collaborator Author

@testisnullus testisnullus Nov 17, 2023

Fixed
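
For illustration, a small constructor could remove the repeated ServicePort/intstr boilerplate; the helper name below is an assumption for the sketch, not necessarily the code that was merged:

```go
// newServicePort builds a ServicePort whose target port equals the service port
// (assumed helper name, shown only as a sketch).
func newServicePort(name string, port int32) k8scorev1.ServicePort {
	return k8scorev1.ServicePort{
		Name: name,
		Port: port,
		TargetPort: intstr.IntOrString{
			Type:   intstr.Int,
			IntVal: port,
		},
	}
}

// Each entry would then shrink to a single line, e.g.:
// ports = append(ports, newServicePort(models.KafkaClient, models.Port9092))
```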

@testisnullus testisnullus force-pushed the onpremises-provisioning branch 3 times, most recently from 201c44e to 5ae02d4 on November 19, 2023 12:13
NodeCPU            int64      `json:"nodeCPU"`
NodeMemory         string     `json:"nodeMemory"`
OSImageURL         string     `json:"osImageURL"`
CloudInitScriptRef *Reference `json:"cloudInitScriptNamespacedName"`
Contributor

How about changing the JSON tag to `json:"cloudInitScriptRef"`?

Collaborator Author

Fixed
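
With the suggestion applied, the field declaration would look like this:

```go
CloudInitScriptRef *Reference `json:"cloudInitScriptRef"`
```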

@ribaraka ribaraka changed the title from "[WIP] Onpremises provisioning for all cluster resources was implemented" to "Onpremises provisioning for all cluster resources was implemented" on Nov 20, 2023
Comment on lines +848 to +864
	sslPort := k8scorev1.ServicePort{
		Name: models.CadenceGRPC,
		Port: models.Port7833,
		TargetPort: intstr.IntOrString{
			Type:   intstr.Int,
			IntVal: models.Port7833,
		},
	}
	headlessPorts = append(headlessPorts, sslPort)
Contributor

Unneeded variable. I suggest:
headlessPorts = append(headlessPorts, k8scorev1.ServicePort{...})
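
Expanded, the suggestion would read roughly as follows (fields taken from the snippet above):

```go
headlessPorts = append(headlessPorts, k8scorev1.ServicePort{
	Name: models.CadenceGRPC,
	Port: models.Port7833,
	TargetPort: intstr.IntOrString{
		Type:   intstr.Int,
		IntVal: models.Port7833,
	},
})
```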

Contributor

The same applies to the getExposePorts method.

Collaborator Author

It's more readable I think

Comment on lines +186 to +191
if len(c.Spec.DataCentres) > 1 && c.Spec.OnPremisesSpec != nil {
return fmt.Errorf("on-premises cluster can be provisioned with only one data centre")
Contributor

Let's check `len != 1 && onprem` instead.

Collaborator Author

We have this kind of check above:

if len(c.Spec.DataCentres) == 0 {
	return fmt.Errorf("data centres field is empty")
}
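
If the reviewer's suggestion were folded in alongside that check, it could look like this sketch (not necessarily the final merged code):

```go
// Sketch: require exactly one data centre for on-premises clusters.
if c.Spec.OnPremisesSpec != nil && len(c.Spec.DataCentres) != 1 {
	return fmt.Errorf("on-premises cluster can be provisioned with only one data centre")
}
```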

return reconcile.Result{}, err
}

bootstrap := newOnPremiseBootstrap(
Contributor

newOnPremiseBootstrap -> newOnPremisesBootstrap

}
}

func handleCreateOnPremisesClusterResources(ctx context.Context, b *onPremiseBootstrap) error {
Contributor

onPremiseBootstrap -> onPremisesBootstrap

}

func handleCreateOnPremisesClusterResources(ctx context.Context, b *onPremiseBootstrap) error {
if len(b.ClusterStatus.DataCentres) < 1 {
Contributor

Let's check `len == 0` instead.

Comment on lines +616 to +689
	vmis := &virtcorev1.VirtualMachineInstanceList{}
	err = K8sClient.List(ctx, vmis, &client.ListOptions{
		LabelSelector: labels.SelectorFromSet(map[string]string{
			models.ClusterIDLabel: clusterID,
		}),
		Namespace: ns,
	})
	if err != nil {
		return err
	}

	for _, vmi := range vmis.Items {
		err = K8sClient.Delete(ctx, &vmi)
		if err != nil {
			return err
		}

		patch := client.MergeFrom(vmi.DeepCopy())
		controllerutil.RemoveFinalizer(&vmi, models.DeletionFinalizer)
		err = K8sClient.Patch(ctx, &vmi, patch)
		if err != nil {
			return err
		}
	}
Contributor

VMIs are deleted when their VM resource is deleted, aren't they?
If so, we don't need to explicitly delete the VMIs.

Collaborator Author

Let's be totally sure that all resources are deleted

@testisnullus testisnullus force-pushed the onpremises-provisioning branch 2 times, most recently from 9c28f6b to a29cde3 on November 21, 2023 14:41
@testisnullus testisnullus force-pushed the onpremises-provisioning branch 2 times, most recently from 8b59b35 to 350d2f0 on November 28, 2023 10:30
Comment on lines 106 to 137
### CloudInitScript

cloud-init.sh:
```shell
#!/bin/bash

export NEW_PASS="qwerty12345"
export SSH_PUB_KEY=""
export BOOTSTRAP_SSH_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQDgeaO3zkY5v1dww3fFONPzUnEgIqJ4kUK0Usu8iFdp+TWIulw9dDeQHa+PdWXP97l5Vv1mG9ipqShEIu7/2bp13KxSblWX4iV1MYZbtimhY3UDOsPn1G3E1Ipis6y+/tosDAm8LoWaGEMcLuE5UjP6gs6K57rCEjkVGjg7vjhIypAMC0J2N2+CxK9o/Y1+LZAec+FL5cmSJoajoo9y/VYJjz/a52jJ93wRafD2uu6ObAl5gkN/+gqY4IJFSMx20sPsIRXdbiBNDqiap56ibHHPKTeZhRbdXvZfjYkHtutXnSM2xn7BjnV8CguISxS3rXlzlzRVYwSUjnKUf5SKBbeyZbCokx271vCeUd5EXfHphvW6FIOna2AI5lpCSYkw5Kho3HaPi2NjXJ9i2dCr1zpcZpCiettDuaEjxR0Cl4Jd6PrAiAEZ0Ns0u2ysVhudshVzQrq6qdd7W9/MLjbDIHdTToNjFLZA6cbE0MQf18LXwJAl+G/QrXgcVaiopUmld+68JL89Xym55LzkMhI0NiDtWufawd/NiZ6jm13Z3+atvnOimdyuqBYeFWgbtcxjs0yN9v7h7PfPh6TbiQYFx37DCfQiIucqx1GWmMijmd7HMY6Nv3UvnoTUTSn4yz1NxhEpC61N+iAInZDpeJEtULSzlEMWlbzL4t5cF+Rm1dFdq3WpZt1mi8F4DgrsgZEuLGAw22RNW3++EWYFUNnJXaYyctPrMpWQktr4DB5nQGIHF92WR8uncxTNUXfWuT29O9e+bFYh1etmq8rsCoLcxN0zFHWvcECK55aE+47lfNAR+HEjuwYW10mGU/pFmO0F9FFmcQRSw4D4tnVUgl3XjKe3bBiTa4lUrzrKkLZ6n9/buW2e7c3jbjmXdPh2R+2Msr/vfuWs9glxQf+CYEbBW6Ye4pekIyI77SaB/bVhaHtXutKxm+QWdNle8aeqiA8Ji1Ml+s75vIg+n5v6viCnl5aV33xHRFpGQJzj2ktsXl9P9d5kgal9eXJYTywC2SnVbZVLb6FGN4kPZTVwX1f+u7v7JCm4YWlbQZtwwiXKjs99AVtQnBWqQvUH5sFUkVXlHA1Y9W6wlup0r+F6URL+7Yw+d0dHByfevrJg3pvmpLb3sEpjIAZodW3dIUReE7Ku3s/q/O9foFnfRBnCcZ2QsnxI5pqNrbrundD1ApOnNXEvICvPXHBBQ44cW0hzAO+WxY5VxyG8y/kXnb48G9efkIQFkNaITJrU9SiOk6bFP4QANdS/pmaSLjJIsHixa+7vmYjRy1SVoQ/39vDUnyCbqKtO56QMH32hQLRO3Vk7NVG6o4dYjFkiaMSaqVlHKMkJQHVzlK2PW9/fjVXfkAHmmhoD debian"
echo "debian:$NEW_PASS" | chpasswd
echo "root:$NEW_PASS" | sudo chpasswd root
sudo echo "$SSH_PUB_KEY" > /home/debian/.ssh/authorized_keys
sudo echo "$BOOTSTRAP_SSH_KEY" >> /home/debian/.ssh/authorized_keys
sudo chown -R debian: /home/debian/.ssh
sudo cp /usr/share/doc/apt/examples/sources.list /etc/apt/sources.list
data_device=$(lsblk -dfn -o NAME,SERIAL | awk '$2 == "DATADISK" {print $1}')
sudo mkfs -t ext4 /dev/"${data_device}"
```

Create a base64-encoded string from the script using the `cat cloud-init.sh | base64 -w0` command, then create a secret from the following YAML manifest:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloud-init-secret
data:
  userdata: <base64 encoded string>
```

Use the `kubectl apply -f cloud-init-secret.yaml` command to create the secret.
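
Alternatively, the same secret can be created in one step, since kubectl base64-encodes secret data itself (shown here as an option, not part of the documented flow):

```shell
kubectl create secret generic cloud-init-secret --from-file=userdata=cloud-init.sh
```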

Contributor

I suggest moving it below the on-premises cluster creation flow, specifically under the section:

Or if you want to create an on-premises cluster:

Comment on lines +98 to +115
### ReferenceObject

| Field | Type | Description |
|-----------|--------|---------------------------------------------------|
| name | string | Name of the cloud-init secret. |
| namespace | string | Namespace in which the cloud-init secret is located. |
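
For illustration, the referenced object described by this table maps to a struct roughly like the following (field names assumed from the table, not taken from the PR code):

```go
// Reference is an illustrative shape of the cloud-init secret reference.
type Reference struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace"`
}
```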

Contributor

If we don't plan to add anything else to the reference, I think we should use namespacedName instead.

@taaraora taaraora merged commit 94adf58 into main Dec 11, 2023
1 check passed