
Support for EVPN single-active in eos_designs role #1771

Closed
1 task done
dgonzalez85 opened this issue May 10, 2022 · 16 comments · Fixed by #1864
Labels
type: enhancement New feature or request

Comments

@dgonzalez85

Enhancement summary

EVPN single-active is already supported in eos_cli_config_gen (#1330); this issue is to extend that support to the eos_designs role.

Today we can configure active-active multihoming connected endpoints as follows:
https://avd.sh/en/latest/roles/eos_designs/doc/connected-endpoints.html#evpn-aa-esi-dual-attached-endpoint-scenario

As discussed with @gmuloc, ideally we would extend the model to support EVPN A/S, first at the port-channel level with a new redundancy var and a preference:

servers:
  server01:
    rack: RackB
    adapters:
      - endpoint_ports: [ E0, E1 ]
        switch_ports: [ Ethernet10, Ethernet10 ]
        switches: [ DC1-SVC3A, DC1-SVC4A ]
        profile: VM_Servers
        port_channel:
          description: PortChannel1
          redundancy: single-active
          designated_forwarder_preference: [ 100, 0 ]
          short_esi: 0303:0202:0101

This can also be configured on Ethernet interfaces, for example to connect to separate CE devices:

servers:
  server01:
    rack: RackB
    adapters:
      - endpoint_ports: [ E0, E1 ]
        switch_ports: [ Ethernet10, Ethernet10 ]
        switches: [ DC1-SVC3A, DC1-SVC4A ]
        redundancy: single-active
        designated_forwarder_preference: [ 100, 0 ]
        short_esi: 0303:0202:0101

An example of the resulting EOS CLI configuration would be the following:

interface Ethernet1/1
   no shutdown
   speed 100g
   switchport
   switchport trunk allowed vlan 700
   switchport mode trunk
   switchport vlan translation 11 700
   spanning-tree portfast
   link tracking group LT_GROUP1 downstream
   evpn ethernet-segment
      identifier 0000:0000:0000:0102:0001
      redundancy single-active
      designated-forwarder election algorithm preference 100
      route-target import 00:00:01:02:00:01

@gmuloc proposed something along the lines of the following data model. Please add anything that I might be missing:

< endpoint_2 >:
    rack: RackC
    adapters:
      - speed: < interface_speed | forced interface_speed | auto interface_speed >
        endpoint_ports: [ < interface_name > ]
        switch_ports: [ < switchport_interface > ]
        switches: [ < device > ]
        profile: < port_profile_name >
        ethernet_segment:
          short_esi: < 0000:0000:0000 | auto >
          redundancy: < all-active | single-active > 
          designated_forwarder_preference: [42]
      - endpoint_ports: [ < interface_name_1 > , < interface_name_2 > ]
        switch_ports: [ < switchport_interface_1 >, < switchport_interface_2 > ]
        switches: [ < device_1 >, < device_2 > ]
        profile: < port_profile_name >
        ethernet_segment:
          short_esi: < 0000:0000:0000 | auto >
          redundancy: < all-active | single-active >
          designated_forwarder_preference: [100, 0]
        port_channel:
          description: < port_channel_description >
          mode: '< active | passive | on >'
          short_esi: < 0000:0000:0000 | auto >

With an auto-generated short_esi and default values for the preference, this configuration could even be abstracted to the port_profile level.
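For illustration, an "auto" short_esi could be derived deterministically from stable inputs, so that every switch in the Ethernet segment computes the same value without coordination. The following Python sketch is only an assumption about how that might work; the hash and inputs used by the real eos_designs implementation may differ:

```python
import hashlib

def auto_short_esi(endpoint: str, interfaces: list[str]) -> str:
    """Derive a deterministic three-segment short ESI from the endpoint
    and interface names. Purely illustrative: the inputs and hash used
    by eos_designs may differ."""
    digest = hashlib.sha256("_".join([endpoint, *interfaces]).encode()).hexdigest()
    # Take the first 12 hex characters and group them as xxxx:xxxx:xxxx
    return ":".join(digest[i:i + 4] for i in range(0, 12, 4))

esi = auto_short_esi("server01", ["Ethernet10", "Ethernet10"])
print(esi)  # same inputs always yield the same ESI
```

The key property is determinism: any switch given the same endpoint and interface list derives the same ESI, which is what makes an "auto" option safe to abstract into a port_profile.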

Which component of AVD is impacted

eos_designs

Use case example

Configure EVPN single-active connections in the SERVER files of the eos_designs role.

Describe the solution you would like

Being able to define endpoint connections as described above.

Describe alternatives you have considered

Currently I need to implement this as part of the host_vars configuration for each device.

Additional context

No response

Contributing Guide

  • I agree to follow this project's Code of Conduct
@dgonzalez85 dgonzalez85 added the type: enhancement New feature or request label May 10, 2022
@dgonzalez85
Author

Adding @jonxstill to this since @gmuloc mentioned he implemented the short_esi. Not sure if having this type of configuration with the "auto" ESI at the port_profile level is worth considering?

@jonxstill
Contributor

This certainly looks like it should be possible with only a small refactor of port-channel-interfaces.j2. I'll take a look and see if I can submit a PR to fix this.

@dgonzalez85
Author

> This certainly looks like it should be possible with only a small refactor of port-channel-interfaces.j2. I'll take a look and see if I can submit a PR to fix this.

perfect thanks Jon!

@gmuloc
Contributor

gmuloc commented May 10, 2022

Hi @dgonzalez85 , @jonxstill - thanks for taking a look!

So I have seen #1772, and that will indeed cover adding the short_esi inside the profile. Thanks a lot!

There is some additional functionality that @dgonzalez85 is looking for: the capability to also add the ESI directly under an Ethernet interface in the single-homing case.

The model I proposed (and which can of course be modified at will) was to move all ESI-related config (including the preference and redundancy) into an ethernet_segment section under the adapter, since it seems it can be used without a port-channel. (I am speaking under @dgonzalez85's supervision here, so feel free to correct me. :) )

This would not, of course, prevent generating the lacp_id as the current code does when rendering a port-channel.

What would be your thoughts on that?

@jonxstill
Contributor

OK, so we'd end up with short_esi under the ethernet_segment key, along with the redundancy and DF preference keys, regardless of whether we're configuring an ethernet (single-active) or port-channel (all active) interface. I would presume we'd leave subinterfaces of port-channels as-is as we don't appear to have support for subinterfaces on Ethernet interfaces in the connected endpoints model.

We'd also retain short_esi under the port_channel key for the moment and eventually deprecate it I presume?

Looking at the eos_cli_config_gen port-channel-interfaces.j2, the data model used here is different to that of ethernet-interfaces.j2. Should we try to homogenise these? I see there was some discussion in #1396. We could always accept both of these models to avoid making this a breaking change (we'd just have to be clear on behaviour when there is a conflict).

@ClausHolbechArista
Contributor

ClausHolbechArista commented May 10, 2022

> OK, so we'd end up with short_esi under the ethernet_segment key, along with the redundancy and DF preference keys, regardless of whether we're configuring an ethernet (single-active) or port-channel (all active) interface. I would presume we'd leave subinterfaces of port-channels as-is as we don't appear to have support for subinterfaces on Ethernet interfaces in the connected endpoints model.

Agree

> We'd also retain short_esi under the port_channel key for the moment and eventually deprecate it I presume?

No need to deprecate. IMO we should read the value from port_channel context when rendering port-channels (current behavior), and from adapter for single-active (new).

> Looking at the eos_cli_config_gen port-channel-interfaces.j2, the data model used here is different to that of ethernet-interfaces.j2. Should we try to homogenise these? I see there was some discussion in #1396. We could always accept both of these models to avoid making this a breaking change (we'd just have to be clear on behaviour when there is a conflict).

This would be nice to get in before this new change, so we could modify eos_designs to use the new knobs. To be clear, we should keep the old knobs, but deprecate them in the documentation, and let the new ones take precedence.
@emilarista would look into this, so maybe do a joined effort for all of this.

@jonxstill
Contributor

> No need to deprecate. IMO we should read the value from port_channel context when rendering port-channels (current behavior), and from adapter for single-active (new).

OK, we can leave as-is for the moment. I was thinking from a consistency point of view it might be nice to have them both using the same data model eventually. Happy to work with @emilarista to share the workload.

@gmuloc
Contributor

gmuloc commented May 10, 2022

> We'd also retain short_esi under the port_channel key for the moment and eventually deprecate it I presume?

> No need to deprecate. IMO we should read the value from port_channel context when rendering port-channels (current behavior), and from adapter for single-active (new).

My proposal was indeed to deprecate the one under port_channel, so there is only one place to set up the ESI for a given adapter; depending on whether a port-channel is present, the correct value would be picked up by the templates. Otherwise it would mean replicating the additional fields (preference, redundancy, and maybe more to come?) both outside and inside port_channel. As per @dgonzalez85, there are two ways to configure A/S, with and without a port-channel (depending on whether the remote CE is the same device or not). Though it probably makes for harder logic inside the templates to detect whether or not it is a port-channel.

Otherwise would it make sense to change the proposed model to:

< endpoint_2 >:
    rack: RackC
    adapters:
      - speed: < interface_speed | forced interface_speed | auto interface_speed >
        endpoint_ports: [ < interface_name > ]
        switch_ports: [ < switchport_interface > ]
        switches: [ < device > ]
        profile: < port_profile_name >
        # New for Ethernet interfaces
        ethernet_segment:
          short_esi: < 0000:0000:0000 | auto >
          redundancy: < all-active | single-active > 
          designated_forwarder_preference: [42]
      - endpoint_ports: [ < interface_name_1 > , < interface_name_2 > ]
        switch_ports: [ < switchport_interface_1 >, < switchport_interface_2 > ]
        switches: [ < device_1 >, < device_2 > ]
        profile: < port_profile_name >
        port_channel:
          description: < port_channel_description >
          mode: '< active | passive | on >'
          # Existing to be deprecated
          short_esi: < 0000:0000:0000 | auto >
          # New
          ethernet_segment:
            short_esi: < 0000:0000:0000 | auto >
            redundancy: < all-active | single-active >
            designated_forwarder_preference: [100, 0]

But then there is indeed a potential conflict in the final rendered config between the values directly under adapters and those under port_channel (e.g. a different redundancy mode under the Ethernet interface and the port-channel interface it belongs to).

Regarding the difference in models, this is indeed tracked in #1702, which @emilarista is planning to look at.

@jonxstill
Contributor

I'm happy to pick this up once @emilarista has finished #1702, which he has said he is going to look at this week. I'll give the data model some consideration.

@jonxstill jonxstill self-assigned this May 10, 2022
@ClausHolbechArista
Contributor

In the case of lacp fallback individual, we might need different config on the port-channel vs. the Ethernet interface. In the port_channel case there is no need for single-active or designated_forwarder_preference; it would always be all-active, so the current key should suffice, right? Similarly, on an Ethernet interface I don't think all-active would work, so maybe just a preference and a short_esi.
Please correct any misunderstandings.

@dgonzalez85
Author

Hi Claus,

In our customer's case there are scenarios where we need EVPN single-active support both with plain Ethernet ports and with port-channels. As you can see in the TOI, this is supported:

https://www.arista.com/en/support/toi/eos-4-26-0f/14728-evpn-single-active-multihoming-preference-based-df-election

The examples there actually show the port-channel configuration with evpn single-active.

@jonxstill
Contributor

I think the model described by @gmuloc above (with ethernet_segment under the port_channel key) should meet all the requirements. This would allow separate settings for the interface and the port-channel for lacp fallback.

We can retain the existing short_esi key under port_channel, making it clear that the esi specified under ethernet_segment takes precedence and that we would aim to deprecate the current short_esi key in version 4.0.

Now that #1702 has been completed I will also ensure eos_designs uses the new keys for the structured_config. Will try and work on this over the coming week (alongside a couple of other things).

@jonxstill
Contributor

#1834 replaces #1772 - stupid rebasing/merging issues on my side - and is now ready for review. I'm now working on the single-active functionality today and next week.

@jonxstill
Contributor

So #1834 has now been merged providing the port-profile functionality. #1864 has now been submitted for review for the single-active functionality.

@dgonzalez85
Author

dgonzalez85 commented Jun 9, 2022

Hi Jon, thanks for all the work on this. While testing this feature, the customer is asking to set the "dont-preempt" option, which was not originally requested. It's a simple addition to this command:

      designated-forwarder election algorithm preference <pref_value> dont-preempt

Just in case this was not considered and it can be added as part of this change or in a separate one.

thanks,
David

@jonxstill
Contributor

I can add it to this one. I'll take it back to being a draft and see if I can add that today.
