Multiple SVI inside same VRF #988

Closed
3 tasks done
inetman28 opened this issue May 29, 2021 · 6 comments
Labels
role: eos_designs issue related to eos_designs role state: stale type: question Further information is requested

Comments

inetman28 commented May 29, 2021

Issue Type

  • Template enhancement
  • Role enhancement
  • Documentation enhancement (avd.sh)

Is your feature request related to a problem? Please describe.

Arista AVD doesn't allow creating the same SVI number more than once inside one VRF.

Describe the solution you'd like

I would like a feature that can create the same SVI interface number (same VLAN) on different switches. This is powerful in an L3 fabric (without L2 VXLAN stretching): each switch (or MLAG domain) can then use the full 1-4094 VLAN range. It also makes hypervisor provisioning easier (likewise for load balancers or k8s services), because in different racks we have the same SVI number (but, of course, with different IP networks).

Describe alternatives you've considered

I didn't find any alternatives. I am going to write one more role for this ;( If you have any suggestions, please share them.

Additional context

Right now I cannot create the same SVI number inside the same VRF on different switches. An example of what I would like to have:

leaf1: svi 101, net: 192.168.1.0/24, name: leaf1_k8s_prod_pod
leaf2: svi 101, net: 192.168.2.0/24, name: leaf2_k8s_prod_pod
leaf3: svi 101, net: 192.168.3.0/24, name: leaf3_k8s_prod_pod

        svis:
          101:
            name: leaf1_k8s_prod_pod
            tags: [leaf1]
            enabled: true
            ip_address_virtual: 192.168.1.1/24

          101:
            name: leaf2_k8s_prod_pod
            tags: [leaf2]
            enabled: true
            ip_address_virtual: 192.168.2.1/24

          101:
            name: leaf3_k8s_prod_pod
            tags: [leaf3]
            enabled: true
            ip_address_virtual: 192.168.3.1/24
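As an aside, the desired structure above repeats the `101:` key three times in one mapping, which is not valid YAML: mapping keys must be unique, and most parsers silently keep only one of the duplicates. A minimal Python sketch (using the stdlib `json` parser purely as an illustration of duplicate-key collapsing; most YAML loaders behave the same way) shows the problem:

```python
import json

# The desired svis structure repeats the key "101" three times.
# Mapping keys must be unique, so the parser keeps only the last
# duplicate -- the other two entries are silently lost.
doc = '''
{
  "svis": {
    "101": {"name": "leaf1_k8s_prod_pod"},
    "101": {"name": "leaf2_k8s_prod_pod"},
    "101": {"name": "leaf3_k8s_prod_pod"}
  }
}
'''
data = json.loads(doc)
print(len(data["svis"]))            # 1 -- the duplicates collapsed
print(data["svis"]["101"]["name"])  # leaf3_k8s_prod_pod (last one wins)
```

This is why per-switch variants of the same SVI need either separate variable files (host_vars) or a per-node override inside a single `101:` entry, rather than repeated keys.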

@inetman28 (Author)

I've got a workaround with host_vars variables. Example below:

host_vars/switch1.yml

---
custom_structured_configuration_vlan_interfaces:
  Vlan4092:
    shutdown: false
    vrf: INFRA
    ip_address: 10.9.1.146/28
    ip_virtual_router_address: 10.9.1.145


custom_structured_configuration_vlans:
  4092:
    tenant: Tenant_A
    name: TEST
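For context, the `custom_structured_configuration_` prefix works because such host_vars are deep-merged over the structured config AVD generates, with the custom values winning. A minimal, hypothetical sketch of that kind of merge (illustrative only, not AVD's actual implementation):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Generated structured config (simplified, made up for the example) ...
generated = {"vlan_interfaces": {"Vlan10": {"shutdown": True}}}

# ... merged with custom_structured_configuration_vlan_interfaces
custom = {"Vlan4092": {
    "shutdown": False,
    "vrf": "INFRA",
    "ip_address": "10.9.1.146/28",
    "ip_virtual_router_address": "10.9.1.145",
}}

result = deep_merge(generated, {"vlan_interfaces": custom})
print(sorted(result["vlan_interfaces"]))  # ['Vlan10', 'Vlan4092']
```

Because each switch has its own host_vars file, the same VLAN ID can carry a different IP network per switch, which is exactly the workaround above.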

@ClausHolbechArista (Contributor)

In PR #900 we will enable you to override the structured_config generated per SVI. So you could set:

        svis:
          101:
            name: k8s_prod_pod
            enabled: true
            nodes:
              leaf1a:
                structured_config:
                  ip_address_virtual: 192.168.1.1/24
              leaf1b:
                structured_config:
                  ip_address_virtual: 192.168.1.1/24
              leaf2a:
                structured_config:
                  ip_address_virtual: 192.168.2.1/24
              leaf2b:
                structured_config:
                  ip_address_virtual: 192.168.2.1/24
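To illustrate the intent of this layout, a hypothetical resolution step (the function name and merge order are assumptions for illustration, not PR #900's actual code) would pick the `structured_config` for the node being rendered and lay it over the shared SVI settings:

```python
def resolve_svi(svi: dict, hostname: str) -> dict:
    """Apply a node-specific structured_config override, if present."""
    # Shared SVI settings, minus the per-node section.
    resolved = {k: v for k, v in svi.items() if k != "nodes"}
    # Node-specific structured_config wins over shared keys.
    node_cfg = svi.get("nodes", {}).get(hostname, {})
    resolved.update(node_cfg.get("structured_config", {}))
    return resolved

svi_101 = {
    "name": "k8s_prod_pod",
    "enabled": True,
    "nodes": {
        "leaf1a": {"structured_config": {"ip_address_virtual": "192.168.1.1/24"}},
        "leaf2a": {"structured_config": {"ip_address_virtual": "192.168.2.1/24"}},
    },
}

print(resolve_svi(svi_101, "leaf1a")["ip_address_virtual"])  # 192.168.1.1/24
print(resolve_svi(svi_101, "leaf2a")["ip_address_virtual"])  # 192.168.2.1/24
```

The key point is that SVI 101 is defined once, avoiding the duplicate-key problem, while each node still gets its own IP network.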

@inetman28 (Author)

@ClausHolbechArista
It seems great!

But you should also consider the following issue:
#1022

@ClausHolbechArista (Contributor)

@inetman28 Note that ip address virtual will not work correctly with MLAG without assigning a VNI to the SVI. So you will have to use ip_virtual_router when we remove the VNI.

@ClausHolbechArista ClausHolbechArista added role: eos_designs issue related to eos_designs role type: question Further information is requested labels Jun 16, 2021
@inetman28 (Author)

Yes, you are right.
I have already run into this problem ;))))

@github-actions

This issue is stale because it has been open 90 days with no activity. Remove the stale label or comment, or this will be closed in 15 days.
