
Fix inventory group etcd_cluster #649

Open
dtrdnk wants to merge 8 commits into master

Conversation

@dtrdnk commented May 5, 2024

Add the ability to use a specific IP. Fix inventory group etcd_cluster: use hostnames instead of IP addresses (#612)

Add small pg_cluster_default role. Add ability to define custom running IP for instances with several interfaces. Improve pg_hba.conf.j2 and patroni.yml.j2. Reuse values from pg_cluster_default in the etcd role.
@dtrdnk dtrdnk changed the title Add small pg_cluster_default role. Add ability to define custom runni… Fix inventory group etcd_cluster May 5, 2024

@vitabaks (Owner) commented May 5, 2024

Thanks for this PR!

Could you help me understand a bit more about why you've classified this as a fix? This was not a mistake; the requirement to specify the IP address in the inventory has been present in all releases of this product.

It would be great if you could describe the details of what is offered in this PR.

Also, I'm curious about the decision to create a separate role. Could you share your thoughts on why this approach was chosen over defining variables in vars/main.yml?

@dtrdnk (Author) commented May 5, 2024

> Could you help me understand a bit more about why you've classified this as a fix?

Because the etcd_cluster group in the inventory file uses IP addresses as hosts. Normally, entries in a group are written as short hostnames or FQDNs. You are right that it is not a mistake, since the etcd config requires an IP address rather than a hostname.
My PR allows using hostnames in the etcd_cluster group and resolves their IPs via the pg_cluster_default role.
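
For illustration, a minimal sketch of the kind of inventory this allows (the hostnames here are hypothetical); the role then resolves each host's address:

```yaml
# Hypothetical YAML inventory: only hostnames in the group, no hard-coded IP addresses.
etcd_cluster:
  hosts:
    etcd01.example.internal:
    etcd02.example.internal:
    etcd03.example.internal:
```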

> It would be great if you could describe the details of what is offered in this PR.

I wrote some logic that automatically generates variables and config properties. My solution in this PR is inspired by the Kubespray roles.
The main goal of this approach is to avoid writing a few long vars/* files. Instead, you can write a few small roles and move away from the monolithic approach. I think Kubespray and your role are similar, so there is a good opportunity to reuse their practices.
The VMs in my environment have several Ethernet interfaces (for the local subnet and for management), and currently there is no way to use hostnames in etcd_cluster while binding to a specific interface. My PR adds this ability as well.
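
Roughly, the idea looks like the following sketch (the names are hypothetical, not the PR's actual code): a small task picks the address of a chosen interface from the gathered facts and exposes it so other roles can reuse it.

```yaml
# Hypothetical sketch, not the PR's actual code: derive the cluster address
# from a specific interface instead of relying on inventory_hostname.
- name: Set cluster bind address from the chosen interface
  ansible.builtin.set_fact:
    cluster_bind_address: "{{ ansible_facts[cluster_iface]['ipv4']['address'] }}"
  vars:
    cluster_iface: eth1  # e.g. the private/management interface
```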

> Could you share your thoughts on why this approach was chosen over defining variables in vars/main.yml?

Yes. I think it is better to use a simple role that prepares the final variables than to write that logic in vars/ files.

@vitabaks (Owner) commented May 5, 2024

@dtrdnk Thanks for the answer!

I'll review the use of a role to define variables and assess whether it brings any advantages or conveniences. I'll get back to you with my thoughts soon. At first glance, it doesn't appear to be a conventional method for defining variables.

Could you consider this change in a broader context? Perhaps you'd be interested in helping to rethink the logic for all host groups, not just for etcd. As it stands, it seems like this might be only a partial solution.

In the Consul role, we use a different approach to define the listening IP address, not using the 'inventory_hostname' variable. Here are the relevant sections for reference:

Here, the consul_bind_address variable defines the IP address for the interface specified in the consul_iface variable.
This is useful when the server has several interfaces, one private and one public, because it gives us the opportunity to explicitly specify the private network interface and avoid security risks.
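
The pattern is roughly the following (a sketch of the idea, not the role's verbatim defaults):

```yaml
# Sketch of the pattern, not the consul role's exact defaults:
# bind to the address of the interface named in consul_iface.
consul_iface: "{{ ansible_default_ipv4.interface }}"
consul_bind_address: "{{ hostvars[inventory_hostname]['ansible_' + consul_iface]['ipv4']['address'] }}"
```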

I was thinking of implementing a similar approach for all host groups. What are your thoughts on this?

@vitabaks (Owner) commented May 5, 2024

@ThomasSanson I would be grateful if you could take a look at this PR and take part in our discussion.

@dtrdnk (Author) commented May 6, 2024

> Could you consider this change in a broader context?

Yes. I propose two variants for using automatically generated variables:

  • globally, when the variable is used in more than one role, as is the case with etcd:
    the address is used both in the etcd role itself and in Patroni. The global variant uses the pg_cluster_default role.
  • locally in defaults/main.yml, when the automatically generated variables are used only within that role.

> I was thinking of implementing a similar approach for all host groups. What are your thoughts on this?

With Consul, you took approximately the same approach as I did with etcd, but did not include the connection address in the Patroni config. You can globally define the ip variable and thereby automatically determine the interface on which the service will listen. This is exactly the approach taken in Kubespray.
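
The pattern is roughly the following sketch (not a verbatim Kubespray excerpt; the variable name listen_address is hypothetical): a per-host ip variable takes precedence, with a fact-based fallback.

```yaml
# Sketch only (not a verbatim Kubespray excerpt): the per-host `ip` variable
# wins when it is set; otherwise fall back to the default interface's address.
listen_address: "{{ ip | default(ansible_default_ipv4.address) }}"
```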

P.S. Please try this Vagrantfile, which I use for my work tasks.

After Vagrant has run successfully, check .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory.
You will see the ip variable in the inventory, which is then reused by the postgresql_cluster role.

@vitabaks (Owner) commented May 7, 2024

@dtrdnk Which method do you think would be more convenient?

@dtrdnk (Author) commented May 8, 2024

Both. As I said above, in roles that do not share vars with other roles, just keep them in <role_name>/defaults/main.yml.
In roles that do share vars, use a dedicated defaults role.

P.S. Thanks for your response.

@vitabaks (Owner) commented

@dtrdnk Could you close this PR and prepare a new one that would implement this functionality for all cluster components and not just etcd?

Important! Backward compatibility must be preserved: if the ip variable is not defined, the inventory_hostname value should be used, as it works now.
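
In other words, something like this sketch (the variable name bind_address is hypothetical):

```yaml
# Hypothetical sketch of the required fallback: use `ip` when it is defined,
# otherwise keep the current behaviour and use inventory_hostname.
bind_address: "{{ ip | default(inventory_hostname) }}"
```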
