
Add mass deployer pkg #604

Closed
Tracked by #1489
rawdaGastan opened this issue Jan 4, 2024 · 19 comments
Assignees
Labels
tfrobot type_feature New feature or request

Comments

@rawdaGastan
Collaborator

@rawdaGastan rawdaGastan added the type_feature New feature or request label Jan 4, 2024
@rawdaGastan rawdaGastan added this to the 1.0.0 milestone Jan 4, 2024
@rawdaGastan rawdaGastan moved this to Accepted in 3.13.x Jan 4, 2024
@Eslam-Nawara Eslam-Nawara moved this from Accepted to In Progress in 3.13.x Jan 8, 2024
@Eslam-Nawara Eslam-Nawara mentioned this issue Jan 8, 2024
4 tasks
@Eslam-Nawara
Contributor

Eslam-Nawara commented Jan 8, 2024

WIP:

  • Implementation is done, with unit tests for the parser
  • Added a Makefile and a README
  • Tested the deployer against devnet, deploying 100 VMs on 10 nodes
mass-deployer git:(development_run_mass_deployment) ./bin/mass-deployer -c config.json
2:56PM INF starting peer session=tf-108784 twin=4653
2:57PM ERR failed to read message error="read tcp 192.168.1.10:38032->185.206.122.7:443: use of closed network connection"
deployment took  1m16.141256594s
ok:
        group_a
        group_b
error:
  • Config file:
{
  "node_groups": [
    {
      "name": "group_a",
      "nodes_count": 3,
      "free_cpu": 2,
      "free_mru": 16384,
      "free_ssd": 524288000,
      "free_hdd": 4096,
      "dedicated": false,
      "pubip4": false,
      "pubip6": false,
      "certification_type": "Diy",
      "region": "",
      "min_bandwidth_ms": 100
    },
    {
      "name": "group_b",
      "free_mru": 16384,
      "nodes_count": 10
    }
  ],
  "vms": [
    {
      "name": "examplevm",
      "vms_count": 5,
      "node_group": "group_a",
      "cpu": 1,
      "mem": 256,
      "ssd": {
        "capacity": 5,
        "mount_point": "/mnt/ssd"
      },
      "hdd": true,
      "pubip4": false,
      "pubip6": false,
      "flist": "https://hub.grid.tf/tf-official-apps/base:latest.flist",
      "root_size": 0,
      "entry_point": "/sbin/zinit init"
    },
    {
      "name": "example2",
      "vms_count": 100,
      "cpu": 1,
      "mem": 1024,
      "node_group": "group_b",
      "flist": "https://hub.grid.tf/tf-official-apps/base:latest.flist",
      "entry_point": "/sbin/zinit init"
    }
  ],
  "sshkey": "example-ssh-key",
  "mnemonic": "<Enter your mnemonics>",
  "network": "dev"
}

@Eslam-Nawara Eslam-Nawara moved this from In Progress to Pending review in 3.13.x Jan 8, 2024
@despiegk

despiegk commented Jan 9, 2024

  "free_mru": 16384,
  "free_ssd": 524288000,
  "free_hdd": 4096,
  
  I suggest using GB as the base unit.
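
If GB became the base unit, the parser would convert config values to bytes before filtering nodes. A minimal sketch of that conversion, with illustrative names (not the actual parser API):

```go
package main

import "fmt"

const (
	megabyte uint64 = 1024 * 1024     // bytes per MB
	gigabyte uint64 = 1024 * megabyte // bytes per GB
)

// gbToBytes converts a GB value from the config into bytes,
// which is what the node capacity filters ultimately need.
func gbToBytes(gb uint64) uint64 { return gb * gigabyte }

func main() {
	// e.g. free_ssd: 50 (GB) instead of a raw number like 524288000
	fmt.Println(gbToBytes(50)) // 53687091200
}
```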

@despiegk

despiegk commented Jan 9, 2024

We need documentation, e.g. for "certification_type": "Diy": which types are allowed?
Don't mix lower and upper case; use all lowercase or all uppercase.

@despiegk

despiegk commented Jan 9, 2024

What are the regions? Please document them as well.

We also need build instructions and more documentation in general, e.g. how long we retry, how many times, ...

If a deployment fails on one node, do we pick another suitable node and deploy again? How many times?

@despiegk

despiegk commented Jan 9, 2024

  certified: true
  #min nr of nodes in farm
  min_nodes_farm: 5
  #nr of nodes in same farm which apply the rules above
  min_nodes_apply_rules: 3

This doesn't seem to be implemented.

Also, don't use complicated enumerators that people don't understand; e.g. certified: true is better, since leaving it unset means not certified. Please look back at the original spec and let me know all the things which are not done yet or done differently.

@despiegk

despiegk commented Jan 9, 2024

It doesn't look like the returned JSON with the info required for further processing is there.

@despiegk

despiegk commented Jan 9, 2024

  "ssd": {
    "capacity": 5,
    "mount_point": "/mnt/ssd"
  }, 
  
  Can there be more than one SSD mount?

@despiegk

despiegk commented Jan 9, 2024

Which unit is root_size in? I suggest GB there as well.
We need to document this.

@xmonader
Contributor

xmonader commented Jan 10, 2024

Region doesn't seem to be straightforward to implement; we will open another issue to support region filters, and will support a country filter at this point.

@Eslam-Nawara
Contributor

Region doesn't seem to be straightforward to implement; we will open another issue to support region filters, and will support a country filter at this point.

Sounds like we do have the region filter, but it actually refers to subregions in GraphQL (continent names), so we can support it for now.

@Eslam-Nawara
Contributor

WIP:

  • Added docs for all fields in the example file
  • Allowed SSD storage to be a list
  • Allowed SSH keys to be a list
  • Set the default unit for rootfs size, SSD, and HDD to GB, and for memory to MB
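
With those changes, a VM entry in the config would look roughly like this (field names follow the example configs later in this thread; the second SSD entry and the key name are illustrative):

```yaml
vms:
  - name: examplevm
    vms_count: 5
    cpu: 1
    mem: 256              # MB
    ssd:                  # now a list, so multiple mounts are possible
      - size: 15          # GB
        mount_point: /mnt/ssd1/
      - size: 20          # GB
        mount_point: /mnt/ssd2/
    node_group: group_a
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: key1         # references an entry in ssh_keys

ssh_keys:
  key1: "ssh-rsa AAAA... user@host"
```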

@Eslam-Nawara Eslam-Nawara moved this from Pending review to In Progress in 3.13.x Jan 11, 2024
@Eslam-Nawara
Contributor

WIP:

  • Added info about the deployed VMs after the process completes
  • Fixed the other mass deployment example

@Mik-TF
Contributor

Mik-TF commented Jan 17, 2024

From the git.ourworld.tf issue here, Kristof asked for the mass-deployer to be renamed tfrobot, for simplicity.

"Ahmeds team made a super nice tool to do large scale deployments on the grid lets call this tfrobot"

Is it possible to rename mass-deployer to tfrobot in the ongoing PR #527? @Eslam-Nawara

Thanks!

@xmonader
Contributor

From the git.ourworld.tf issue here, Kristof asked for the mass-deployer to be renamed tfrobot, for simplicity.

"Ahmeds team made a super nice tool to do large scale deployments on the grid lets call this tfrobot"

Is it possible to rename mass-deployer to tfrobot in the ongoing PR #527? @Eslam-Nawara

Thanks!

Will check the names after everything is done; it will probably be renamed in the generated binary.

@Eslam-Nawara Eslam-Nawara moved this from In Progress to Pending review in 3.13.x Jan 18, 2024
@Eslam-Nawara Eslam-Nawara moved this from Pending review to In Progress in 3.13.x Jan 28, 2024
@Eslam-Nawara
Contributor

WIP:

  • Added more constraints on the values of the config struct
  • Refactored the parser to use https://pkg.go.dev/gopkg.in/validator.v2 for validation
  • Improved the parser tests, adding more scenarios
  • Changed the grid-client deployer to not skip batch deployment errors
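
The tag-based validation works roughly like this; below is a minimal stdlib-only sketch of the idea (validator.v2 itself supports more rules, and the struct and tags here are illustrative, not the real config types):

```go
package main

import (
	"fmt"
	"reflect"
	"strconv"
	"strings"
)

// NodeGroup mirrors a few config fields; the validate tags follow the
// style used by gopkg.in/validator.v2 ("nonzero", "min=N").
type NodeGroup struct {
	Name       string `validate:"nonzero"`
	NodesCount uint64 `validate:"min=1"`
	FreeCPU    uint64 `validate:"min=1"`
}

// validate is a tiny stand-in for validator.Validate: it walks the
// struct fields and enforces the rules found in the validate tags.
func validate(v interface{}) error {
	val := reflect.ValueOf(v)
	t := val.Type()
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("validate")
		f := val.Field(i)
		for _, rule := range strings.Split(tag, ",") {
			switch {
			case rule == "nonzero":
				if f.IsZero() {
					return fmt.Errorf("%s must not be zero", t.Field(i).Name)
				}
			case strings.HasPrefix(rule, "min="):
				min, _ := strconv.ParseUint(strings.TrimPrefix(rule, "min="), 10, 64)
				if f.Uint() < min {
					return fmt.Errorf("%s must be at least %d", t.Field(i).Name, min)
				}
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(validate(NodeGroup{Name: "group_a", NodesCount: 3, FreeCPU: 2})) // <nil>
	fmt.Println(validate(NodeGroup{Name: "group_b"}))                            // NodesCount must be at least 1
}
```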

@Eslam-Nawara
Contributor

WIP:

  • Refactored the deployer to retry failed deployments; at most 2 retries are allowed
  • Cancel any remaining contracts of failed groups
  • Added a minimum limit for disk storage

@rawdaGastan
Collaborator Author

rawdaGastan commented Jan 29, 2024

Tried the mass deployer (3 node groups, each with 5 nodes; 5 VM groups, each with 5 VMs).
Total: 25 VMs, 15 nodes

config

node_groups:
  - name: group_a
    free_cpu: 2
    free_mru: 16384
    free_ssd: 50
    nodes_count: 5
  - name: group_b
    free_cpu: 2
    free_mru: 16384
    free_ssd: 50
    nodes_count: 5
  - name: group_c
    free_cpu: 2
    free_mru: 16384
    free_ssd: 50
    nodes_count: 5

vms:
  - name: vms1
    vms_count: 5
    cpu: 1
    mem: 256
    ssd:
      - size: 15
        mount_point: /mnt/ssd/
    node_group: group_a
    planetary: true 
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: rawda

  - name: vms2
    vms_count: 5
    cpu: 1
    mem: 256
    ssd:
      - size: 15
        mount_point: /mnt/ssd/
    node_group: group_a
    planetary: true 
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: rawda

  - name: vms3
    vms_count: 5
    cpu: 1
    mem: 256
    ssd:
      - size: 15
        mount_point: /mnt/ssd/
    node_group: group_b
    planetary: true 
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: rawda

  - name: vms4
    vms_count: 5
    cpu: 1
    mem: 256
    ssd:
      - size: 15
        mount_point: /mnt/ssd/
    node_group: group_b
    planetary: true 
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: rawda

  - name: vms5
    vms_count: 5
    cpu: 1
    mem: 256
    ssd:
      - size: 15
        mount_point: /mnt/ssd/
    node_group: group_c
    planetary: true 
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: rawda

ssh_keys: 
  rawda: "my ssh key"
mnemonic: my mnemonic
network: dev

Result

5:53PM DBG network: dev
5:53PM DBG mnemonic: ***
5:53PM INF starting peer session=tf-297990 twin=81
5:53PM INF running deployer for node group group_a
5:53PM INF done deploying node group group_a
5:53PM INF running deployer for node group group_b
5:54PM INF done deploying node group group_b
5:54PM INF running deployer for node group group_c
5:54PM INF done deploying node group group_c
5:54PM INF deployment took 1m47.254483185s
ok:
group_a:
name: vms14
publicip4: ""
publicip6: ""
yggip: 300:d205:aee5:1d1e:a6bc:ecb8:44f8:9b78
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms12
publicip4: ""
publicip6: ""
yggip: 302:b94d:f7ea:d753:994e:569e:1939:ff57
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms13
publicip4: ""
publicip6: ""
yggip: 307:4932:1986:2948:2684:5430:1075:6c40
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms11
publicip4: ""
publicip6: ""
yggip: 300:eec4:a447:e215:98ba:d7b1:4702:36e3
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms10
publicip4: ""
publicip6: ""
yggip: 301:2485:48dc:9401:8447:b141:9e60:3ca9
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms24
publicip4: ""
publicip6: ""
yggip: 300:d205:aee5:1d1e:1a71:4abb:79cf:8452
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms21
publicip4: ""
publicip6: ""
yggip: 300:eec4:a447:e215:a5e8:870:6fab:fb99
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms20
publicip4: ""
publicip6: ""
yggip: 301:2485:48dc:9401:785e:89ce:7a95:1dfd
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms22
publicip4: ""
publicip6: ""
yggip: 302:b94d:f7ea:d753:f644:95f3:cd7c:c2d0
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms23
publicip4: ""
publicip6: ""
yggip: 307:4932:1986:2948:714d:b446:ff8d:f69c
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

group_b:
name: vms34
publicip4: ""
publicip6: ""
yggip: 301:b989:8977:ff23:b11e:d3d0:5148:1fb4
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms31
publicip4: ""
publicip6: ""
yggip: 301:ee1c:672e:9572:76be:af53:4be7:720a
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms30
publicip4: ""
publicip6: ""
yggip: 300:e9c4:9048:57cf:a91f:7221:34cc:62d7
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms33
publicip4: ""
publicip6: ""
yggip: 302:9e63:7d43:b742:7cce:90d9:47b1:7fee
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms32
publicip4: ""
publicip6: ""
yggip: 301:ad3a:9c52:98d1:8b76:9349:289f:7c4f
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms44
publicip4: ""
publicip6: ""
yggip: 301:b989:8977:ff23:1142:d96c:e75c:2731
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms43
publicip4: ""
publicip6: ""
yggip: 302:9e63:7d43:b742:f88d:4e7f:cd6d:c8b
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms40
publicip4: ""
publicip6: ""
yggip: 300:e9c4:9048:57cf:28e8:5f7b:fece:1fa4
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms41
publicip4: ""
publicip6: ""
yggip: 301:ee1c:672e:9572:95cc:d29f:3b8e:d192
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms42
publicip4: ""
publicip6: ""
yggip: 301:ad3a:9c52:98d1:ee61:1114:c2e6:ddee
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

group_c:
name: vms53
publicip4: ""
publicip6: ""
yggip: 307:4932:1986:2948:dd53:c1c1:c123:a428
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms51
publicip4: ""
publicip6: ""
yggip: 302:b94d:f7ea:d753:da1d:eac7:f428:7b40
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms54
publicip4: ""
publicip6: ""
yggip: 300:d205:aee5:1d1e:75d1:3427:8346:5211
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms50
publicip4: ""
publicip6: ""
yggip: 301:2485:48dc:9401:f769:bb36:2399:45fe
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms52
publicip4: ""
publicip6: ""
yggip: 300:eec4:a447:e215:38e2:25bf:6cfc:fc1b
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

@rawdaGastan
Collaborator Author

Tried the mass deployer (3 node groups, each with 5 nodes; 5 VM groups, each with 10 VMs).
Total: 50 VMs, 15 nodes

Config

node_groups:
  - name: group_a
    free_cpu: 2
    free_mru: 16384
    free_ssd: 50
    nodes_count: 5
  - name: group_b
    free_cpu: 2
    free_mru: 16384
    free_ssd: 50
    nodes_count: 5
  - name: group_c
    free_cpu: 2
    free_mru: 16384
    free_ssd: 50
    nodes_count: 5

vms:
  - name: vms1
    vms_count: 10
    cpu: 1
    mem: 256
    ssd:
      - size: 15
        mount_point: /mnt/ssd/
    node_group: group_a
    planetary: true 
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: rawda

  - name: vms2
    vms_count: 10
    cpu: 1
    mem: 256
    ssd:
      - size: 15
        mount_point: /mnt/ssd/
    node_group: group_a
    planetary: true 
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: rawda

  - name: vms3
    vms_count: 10
    cpu: 1
    mem: 256
    ssd:
      - size: 15
        mount_point: /mnt/ssd/
    node_group: group_b
    planetary: true 
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: rawda

  - name: vms4
    vms_count: 10
    cpu: 1
    mem: 256
    ssd:
      - size: 15
        mount_point: /mnt/ssd/
    node_group: group_b
    planetary: true 
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: rawda

  - name: vms5
    vms_count: 10
    cpu: 1
    mem: 256
    ssd:
      - size: 15
        mount_point: /mnt/ssd/
    node_group: group_c
    planetary: true 
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    ssh_key: rawda

ssh_keys: 
  rawda: "my ssh key"
mnemonic: my mnemonic
network: dev

Result

6:07PM INF validating configuration file
6:07PM INF done validating configuration file
6:07PM DBG network: dev
6:07PM DBG mnemonic: ***
6:08PM INF starting peer session=tf-301433 twin=81
6:08PM INF running deployer for node group group_a
6:09PM INF done deploying node group group_a
6:09PM INF running deployer for node group group_b
6:12PM INF done deploying node group group_b
6:12PM INF running deployer for node group group_c
6:12PM INF done deploying node group group_c
6:12PM INF deployment took 4m53.066435133s
ok:
group_a:
name: vms14
publicip4: ""
publicip6: ""
yggip: 300:3a73:6990:65c1:9d48:c1b3:58d:f76d
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms11
publicip4: ""
publicip6: ""
yggip: 300:487a:7ea1:5b4e:a65f:76a9:3f84:11fb
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms12
publicip4: ""
publicip6: ""
yggip: 301:b738:424:2598:ac59:da6a:f5f6:706a
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms13
publicip4: ""
publicip6: ""
yggip: 300:68f9:dc9f:d10e:87f1:c09f:1d4c:fdad
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms19
publicip4: ""
publicip6: ""
yggip: 300:3a73:6990:65c1:6155:e604:354:38ac
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms10
publicip4: ""
publicip6: ""
yggip: 301:1037:16ee:23fb:5cdf:811:f8ec:adf9
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms16
publicip4: ""
publicip6: ""
yggip: 300:487a:7ea1:5b4e:bf57:3688:9b3b:afc2
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms17
publicip4: ""
publicip6: ""
yggip: 301:b738:424:2598:a040:2f04:8885:2a08
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms15
publicip4: ""
publicip6: ""
yggip: 301:1037:16ee:23fb:6f4c:bda6:e078:92f5
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms18
publicip4: ""
publicip6: ""
yggip: 300:68f9:dc9f:d10e:4e72:d8c9:c44d:71d9
ip: 10.20.2.2
mounts:
    - diskname: vms1disk
      mountpoint: /mnt/ssd/

name: vms24
publicip4: ""
publicip6: ""
yggip: 300:3a73:6990:65c1:dfbe:48b5:ffcb:3f5e
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms21
publicip4: ""
publicip6: ""
yggip: 300:487a:7ea1:5b4e:aa97:b0d1:7418:9a20
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms22
publicip4: ""
publicip6: ""
yggip: 301:b738:424:2598:e07:7879:821d:ee45
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms23
publicip4: ""
publicip6: ""
yggip: 300:68f9:dc9f:d10e:bca5:dc91:ef0c:331d
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms20
publicip4: ""
publicip6: ""
yggip: 301:1037:16ee:23fb:f937:3c3d:3b1d:e2c3
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms29
publicip4: ""
publicip6: ""
yggip: 300:3a73:6990:65c1:cf7b:6e87:fcb7:518e
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms26
publicip4: ""
publicip6: ""
yggip: 300:487a:7ea1:5b4e:5cd0:f8b4:bd27:586f
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms25
publicip4: ""
publicip6: ""
yggip: 301:1037:16ee:23fb:1368:7e8c:d2ed:e614
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms28
publicip4: ""
publicip6: ""
yggip: 300:68f9:dc9f:d10e:6daa:1fda:110b:b113
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

name: vms27
publicip4: ""
publicip6: ""
yggip: 301:b738:424:2598:ec4b:801b:af9:c8c3
ip: 10.20.2.2
mounts:
    - diskname: vms2disk
      mountpoint: /mnt/ssd/

group_b:
name: vms32
publicip4: ""
publicip6: ""
yggip: 302:b94d:f7ea:d753:2954:bda1:490e:d9c4
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms31
publicip4: ""
publicip6: ""
yggip: 307:4932:1986:2948:ff28:76d1:78fb:bf1c
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms34
publicip4: ""
publicip6: ""
yggip: 300:d205:aee5:1d1e:a323:9d1c:8f1f:c82b
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms33
publicip4: ""
publicip6: ""
yggip: 301:2485:48dc:9401:af57:3db7:c397:e9fe
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms30
publicip4: ""
publicip6: ""
yggip: 300:3303:8f7f:d117:4f5e:c9ef:e9a9:c62
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms39
publicip4: ""
publicip6: ""
yggip: 300:d205:aee5:1d1e:493c:9d54:9c93:e77a
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms37
publicip4: ""
publicip6: ""
yggip: 302:b94d:f7ea:d753:f76b:d3bd:fca9:c3
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms36
publicip4: ""
publicip6: ""
yggip: 307:4932:1986:2948:818:f1a:a16e:992d
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms38
publicip4: ""
publicip6: ""
yggip: 301:2485:48dc:9401:4f10:c6e3:55ba:2b96
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms35
publicip4: ""
publicip6: ""
yggip: 300:3303:8f7f:d117:1fdf:c53a:887a:858
ip: 10.20.2.2
mounts:
    - diskname: vms3disk
      mountpoint: /mnt/ssd/

name: vms41
publicip4: ""
publicip6: ""
yggip: 307:4932:1986:2948:3abf:a4f:209c:6bda
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms42
publicip4: ""
publicip6: ""
yggip: 302:b94d:f7ea:d753:5dc8:f09d:b401:6e6c
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms44
publicip4: ""
publicip6: ""
yggip: 300:d205:aee5:1d1e:5c4d:5016:10c2:d73c
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms40
publicip4: ""
publicip6: ""
yggip: 300:3303:8f7f:d117:c98c:e757:cdaa:e00
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms43
publicip4: ""
publicip6: ""
yggip: 301:2485:48dc:9401:c183:5f0f:bfca:c670
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms49
publicip4: ""
publicip6: ""
yggip: 300:d205:aee5:1d1e:5eb8:4908:6960:ea7a
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms45
publicip4: ""
publicip6: ""
yggip: 300:3303:8f7f:d117:cab:78b:a4dd:7e26
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms46
publicip4: ""
publicip6: ""
yggip: 307:4932:1986:2948:9d5d:2d5b:2e77:3c9a
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms47
publicip4: ""
publicip6: ""
yggip: 302:b94d:f7ea:d753:6999:f058:29d0:1905
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

name: vms48
publicip4: ""
publicip6: ""
yggip: 301:2485:48dc:9401:4598:9425:f815:60b6
ip: 10.20.2.2
mounts:
    - diskname: vms4disk
      mountpoint: /mnt/ssd/

group_c:
name: vms51
publicip4: ""
publicip6: ""
yggip: 301:3a0b:8af:8f42:dda4:7b37:2a8e:a44e
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms54
publicip4: ""
publicip6: ""
yggip: 300:8161:13c0:130a:27af:6fe3:9930:f25c
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms52
publicip4: ""
publicip6: ""
yggip: 300:515a:46c8:b649:443c:8254:dda7:1ad0
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms53
publicip4: ""
publicip6: ""
yggip: 302:2e79:2383:5add:96de:eb4e:ca91:49de
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms59
publicip4: ""
publicip6: ""
yggip: 300:8161:13c0:130a:a024:da7f:3f07:3303
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms56
publicip4: ""
publicip6: ""
yggip: 301:3a0b:8af:8f42:1ca0:cc98:164a:ae2b
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms57
publicip4: ""
publicip6: ""
yggip: 300:515a:46c8:b649:2e5e:acfa:3a5c:864
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms58
publicip4: ""
publicip6: ""
yggip: 302:2e79:2383:5add:d01f:fe97:773:80bf
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms50
publicip4: ""
publicip6: ""
yggip: 300:3303:8f7f:d117:9dbf:2a64:812e:9f99
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

name: vms55
publicip4: ""
publicip6: ""
yggip: 300:3303:8f7f:d117:ca7d:c207:ff41:2d3c
ip: 10.20.2.2
mounts:
    - diskname: vms5disk
      mountpoint: /mnt/ssd/

@rawdaGastan rawdaGastan moved this from In Progress to In Verification in 3.13.x Jan 29, 2024
@rawdaGastan
Collaborator Author

tested here:

@github-project-automation github-project-automation bot moved this from In Verification to Done in 3.13.x Mar 3, 2024
@rawdaGastan rawdaGastan modified the milestones: 1.0.0, v0.13.x - v0.14.x Sep 25, 2024