PVE firewall #62

Open

trickert76 opened this issue Aug 5, 2019 · 8 comments

@trickert76
Collaborator

As far as I know, there is no real Ansible role to manage the PVE firewall. Would it be a good idea to integrate that here? I would start on it today.

@lae
Owner

lae commented Aug 5, 2019

That sounds like a good idea. I'm not 100% sure whether the firewall rules files can be templated or whether you need to use pvesh or something to modify them. (If they can't be templated, then I'd prefer to have a module written for it.)

@trickert76
Collaborator Author

I'll have a look and give you a branch once I've started.

@trickert76 trickert76 self-assigned this Aug 5, 2019
@lae lae added the enhancement label Aug 5, 2019
@lae lae added this to the 1.7.0 milestone Aug 13, 2019
@lae lae removed this from the 1.7.0 milestone Jan 18, 2020
@trickert76
Collaborator Author

After a lot of discussion with myself ;-) I think it would be better if this were part of a user-specific playbook. The reason is simple: there is a file at /etc/pve/firewall/cluster.fw that contains the definitions of all IP sets, aliases, and groups, together with all the rules. That file is managed better via a template than via module arguments.

There is also one file per VMID in that directory, defining the rules for that VM. So you would need to define an array of IDs that a generic role cannot know in advance.

Maybe you can also configure the firewall per Proxmox host (I didn't do that, because then "all hosts" may differ from "one host", and cluster.fw is a better place for this).

After changing the file(s), you only need to reload the pve-firewall service. That's it.

So this is very specific to the environment. The advantage would be to have an example of how the template could be written, but shipping it enabled by default in this role could end in a small disaster.

A possible cluster.fw template:

[OPTIONS]

enable: 1

[ALIASES]

{% for host in groups['all']|sort %}
{% if hostvars[host].network.ipv4 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v4 {{ hostvars[host].network.ipv4.address }}/32 # {{ host }} 
{% endif %}
{% if hostvars[host].network.ipv6 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v6 {{ hostvars[host].network.ipv6.address }}/{{ hostvars[host].network.ipv6.prefix }} # {{ host }}
{% endif %}
{% endfor %}

[IPSET my-hosts] # all proxmox hosts

{% for host in groups['proxmox']|sort %}
{% if hostvars[host].network.ipv4 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v4
{% endif %}
{% if hostvars[host].network.ipv6 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v6
{% endif %}
{% endfor %}

[IPSET my-gfs] # Gluster Cluster

{% for host in groups['gluster']|sort %}
{% if hostvars[host].network.ipv4 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v4
{% endif %}
{% if hostvars[host].network.ipv6 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v6
{% endif %}
{% endfor %}

[IPSET my-network] # complete network

{% for host in groups['all']|sort %}
{% if hostvars[host].network.ipv4 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v4
{% if hostvars[host].network.ipv4.subnet is defined %}
{% for subnet in hostvars[host].network.ipv4.subnet %}
{{ subnet }} # {{ hostvars[host].inventory_hostname_short }}-ipv4-subnet 
{% endfor %}
{% endif %}
{% endif %}
{% if hostvars[host].network.ipv6 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v6
{% if hostvars[host].network.ipv6.subnet is defined %}
{% for subnet in hostvars[host].network.ipv6.subnet %}
{{ subnet }} # {{ hostvars[host].inventory_hostname_short }}-ipv6-subnet 
{% endfor %}
{% endif %}
{% endif %}
{% endfor %}

[IPSET my-vms] # all vms

{% for host in groups['proxmox_vms']|sort %}
{% if hostvars[host].network.ipv4 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v4
{% endif %}
{% if hostvars[host].network.ipv6 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v6
{% endif %}
{% endfor %}

[IPSET my-vpns] # all vpn entry points

{% for host in groups['vpn']|sort %}
{% if hostvars[host].network.ipv4 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v4
{% endif %}
{% if hostvars[host].network.ipv6 is defined %}
{{ hostvars[host].inventory_hostname_short }}-v6
{% endif %}
{% endfor %}

[RULES]

GROUP my-proxmox

[group my-proxmox] # HTTP 8006, SSH 22, DNS64

IN ACCEPT -source +my-vpns -log nolog
|IN Ceph(ACCEPT) -source +my-hosts -dest +my-hosts -log nolog # Ceph
IN DNS(ACCEPT) -source +my-network -log nolog # DNS
IN ACCEPT -source +my-hosts -dest +my-hosts -p tcp -dport 22 -log nolog # SSH
IN ACCEPT -source +my-vms -dest +my-network -p tcp -dport 22 -log nolog # SSH
IN ACCEPT -source +my-hosts -dest +my-hosts -log nolog
IN HTTP(ACCEPT) -dest +my-hosts -log nolog # Letsencrypt
|IN ACCEPT -dest +my-hosts -p tcp -dport 8006 -log nolog # Proxmox Management Port
IN ACCEPT -source +my-vpns -dest +my-hosts -p tcp -dport 8006 -log nolog # Proxmox Management Port
IN ACCEPT -source +my-hosts -dest +my-hosts -p tcp -dport 8006 -log nolog # Proxmox Management Port
IN ACCEPT -source +my-vms -dest +my-gfs -p tcp -dport 24007 -log nolog # GlusterFS
IN ACCEPT -source +my-vms -dest +my-gfs -p tcp -dport 49152 -log nolog # GlusterFS gv0
IN ACCEPT -source +my-gfs -dest +my-gfs -p tcp -dport 24007 -log nolog # Gluster Communication
IN ACCEPT -source +my-gfs -dest +my-gfs -p tcp -dport 49152 -log nolog # Gluster br0
IN ACCEPT -source +my-network -dest +my-hosts -p tcp -dport 5665 -log nolog # monitoring

[group my-ssh] # SSH Port 22

IN ACCEPT -source +my-network -dest +my-network -p tcp -dport 22 -log nolog # SSH

[group my-vms] # Service communication

IN ACCEPT -source +my-vpns -dest +my-vms -log nolog
IN ACCEPT -source +my-hosts -dest +my-vms -log nolog
IN ACCEPT -source +my-vms -dest +my-vms -log nolog

A default vm.fw template could be:

[OPTIONS]

policy_in: DROP
enable: 1

[RULES]

GROUP my-ssh
GROUP my-vms

And some tasks:

- name: Configure /etc/pve/firewall/cluster.fw
  template:
    src: "cluster.fw.j2"
    dest: "/etc/pve/firewall/cluster.fw"
  notify:
    - reload pve-firewall

# infrastructure.vmid is a custom inventory variable holding the PVE VMID
- name: Configure /etc/pve/firewall/xxx.fw
  template:
    src: "vm.fw.j2"
    dest: "/etc/pve/firewall/{{ hostvars[host].infrastructure.vmid }}.fw"
  with_items:
    - "{{ groups['proxmox_vms'] }}"
  notify:
    - reload pve-firewall
  loop_control:
    loop_var: host

And a handler:

- name: "reload pve-firewall"
  service:
    name: pve-firewall
    state: reloaded
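
One more thought (an untested assumption on my side): /etc/pve is the cluster-wide pmxcfs, so the template tasks above should only need to run on one node per cluster, e.g. with run_once:

- name: Configure /etc/pve/firewall/cluster.fw
  template:
    src: "cluster.fw.j2"
    dest: "/etc/pve/firewall/cluster.fw"
  run_once: true # pmxcfs replicates /etc/pve to every node, so one write suffices
  # the handler then also fires only on this node; if I read the docs right,
  # the pve-firewall daemons on the other nodes pick up the change on their own
  notify:
    - reload pve-firewall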

Of course, there could be a "nicer" template. In my environment I can't use "ansible_default_ipv4" etc. because of dynamic interfaces (like docker0), and some systems have IPv6 while others don't, so these values are defined statically in my inventory. So, this is just an example. It could be a good idea to document that and to add some settings from this role (like Ceph knowledge).
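
For completeness, the templates above expect inventory variables roughly like the following. The network and infrastructure keys are my own host_vars convention (hypothetical example values), not something this role defines:

# host_vars/myvm.yml (example values only)
network:
  ipv4:
    address: 192.0.2.10
    subnet:
      - 192.0.2.0/24
  ipv6:
    address: 2001:db8::10
    prefix: 64
    subnet:
      - 2001:db8::/64
infrastructure:
  vmid: 100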

@lae
Owner

lae commented Aug 4, 2020

So, this role doesn't have it (yet), but in another of my roles I have an examples/ directory where I drop example configurations, playbooks, templates, etc. that people can refer to. If you want to drop a template into the repository as an example of how someone could manage the firewall with their own user playbook, instead of integrating it directly into the role, that's fine too.

@trickert76
Collaborator Author

Can you give me a hint which role you mean?

@trickert76
Collaborator Author

Hmmm, I found that it would be possible to write a module like pve_firewall that uses pvesh to configure aliases etc. That way it would be possible to define role vars (like acl, group, and user). I'll start creating that module in a separate branch.
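
A rough sketch of what such a module (or a plain task, for a start) could call under the hood. The /cluster/firewall/aliases path comes from the PVE API; the loop values are made-up examples, and a real module would need proper idempotency:

- name: Manage firewall aliases via pvesh (illustrative sketch only)
  command: >
    pvesh create /cluster/firewall/aliases
    --name {{ item.name }} --cidr {{ item.cidr }}
  loop:
    - { name: "somehost-v4", cidr: "192.0.2.10/32" }
  # 'pvesh create' fails if the alias already exists; a real module would
  # check 'pvesh get /cluster/firewall/aliases' first and switch to
  # 'pvesh set /cluster/firewall/aliases/<name>' for updates.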

trickert76 added a commit that referenced this issue Aug 4, 2021
@tuxillo

tuxillo commented Oct 26, 2022

What happened to this in the end?
