Cisco ACI L4-7 Service Insertion

These are my study notes on the Cisco ACI L4-7 service insertion topic.


ACI L4-7 (read “Layer four to seven”) Service Insertion is the process of introducing L4-7 services into the data path of a packet in the ACI fabric, independently of the physical location of the L4-7 device itself.

The L4-7 integration (or insertion) is achievable either manually or with a Service Graph.

The L4-7 services can be:

  • packet filtering services
  • packet inspection services
  • NAT services
  • intrusion detection/ intrusion prevention services
  • load balancing services

The L4-7 services are performed by the following devices:

  • Firewalls: Cisco ASA, Palo Alto, Fortinet, etc.
  • Load Balancers: f5, Citrix, etc.
  • IPS: Cisco Firepower, etc.

We call interchangeably the following terms:

  • L4-7 device
  • service device
  • function device

Inserting a service device does not mean physically cabling the device, but rather inserting the function performed by the device into the data path between two EPGs.

Inserting a service device can be:

  • manual or
  • automated through a service graph

We distinguish also the following concepts:

  • Service Node / Service device
  • Function Node, which is the logical function fulfilled by the service node. A function node has connectors.
  • Service Resource Pool.
  • Terminal Node, which is the consumer/provider EPG in a service graph.

A Service Device is added with drag and drop onto the Service Graph.

We distinguish the concept of Pool of Service Devices or Pool of Service Appliances.

At least two bridge domains are needed in a normal service insertion.

A Policy-Based Redirect Service Graph requires endpoint learning to be deactivated on the service bridge domain.

We must differentiate VRF (Virtual Redirect and Forwarding) from the traditional VRF (Virtual Routing and Forwarding).

ASA Cluster

  • A cluster is composed of a Master unit and Slave units. The Master unit provides the configuration for the Slaves.
  • We must configure both the primary and secondary management IP addresses so that the APIC is able to reach the ASA slave units.

L4-7 Management Modes

We manage the connected L4-7 devices in either one of the following modes:

Managed Mode

  • a complete horizontal integration is performed.
  • ACI pushes policies to the device and redirects traffic to it.
  • advantage of providing company-wide consistent policies and avoiding potential human-introduced errors
  • advantage of deploying the L4-7 services anywhere in the fabric without caring about the physical location of the device.
  • if the device provides contexts, then access to the admin context must be configured on ACI.
  • ACI dynamically manages VLAN assignment
  • ACI collects statistical data such as health scores

Unmanaged Mode

Policies are not managed by ACI. They are created by the L4-7 device administrator.

ACI L4-7 Service Insertion Modes

When we decide to integrate a L4-7 device with a Service Graph, then we have either of the following modes:

Service Policy Mode

  • this mode requires a device package to perform a full integration of the L4-7 device with the ACI fabric. The integrated L4-7 device exposes its components to the APIC through an OpFlex agent, and the APIC leverages OpFlex to interact with the device. The L4-7 device and its VLANs are configured by the APIC.
  • The device package is developed by the third-party vendor and includes two files:
    • an XML file describing the capabilities of the device, and
    • a python file describing the integration with ACI.

The service device administrator prepares the configuration in the form of function profiles, but does not configure the service device. The APIC administrator on the other side configures the fabric, the service device (using function profiles), the service device interfaces and the necessary VLANs.

The way to insert L4-7 services in the service policy mode is to:

  1. create the device based on the L4-7 device package
  2. create the service graph template (which can also be created during the contract configuration).
  3. create a contract at the level of the EPG that will be the provider
  4. invoke the service graph in the contract
  5. configure the device parameters -which are modifiable later- like ACL, interfaces, IP addresses, etc.

Step 3 above can be replaced by right-clicking the L4-7 Service Graph Template menu and selecting Apply Service Graph Template, which automatically creates a Logical Device Context (aka Device Selection Policy).

The function profile is a collection of settings for quick L4-7 device deployment in the service graph. It can be configured during setup of the service graph template or earlier by the service device administrator.
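As a rough sketch of step 4 above (invoking the service graph in the contract), the REST payload that attaches a service graph template to a contract subject could look as follows. The class and attribute names (vzSubj, vzRsSubjGraphAtt, tnVnsAbsGraphName) follow the APIC object model, but treat the exact payload shape as an assumption to verify against your APIC version:

```python
import json

def subject_with_graph(subject_name: str, graph_name: str) -> dict:
    """Build a vzSubj payload that invokes a service graph template.

    Class and attribute names are assumptions based on the APIC
    object model; verify them on your APIC version before posting.
    """
    return {
        "vzSubj": {
            "attributes": {"name": subject_name},
            "children": [
                {"vzRsSubjGraphAtt": {
                    "attributes": {"tnVnsAbsGraphName": graph_name}
                }}
            ],
        }
    }

# The payload would then be POSTed to the APIC under the contract's
# MO URL (illustrative path, not verified):
# /api/mo/uni/tn-<tenant>/brc-<contract>.json
print(json.dumps(subject_with_graph("http-subject", "FW-Graph"), indent=2))
```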

Function Profile Parameters

The function profile has parameters, each of which can carry a combination of the following boolean flags:

  • mandatory
  • locked
  • shared

The Parameters are defined during the configuration of a functional profile, or at one of these levels: tenant, Application Profile, BD, EPG.

The default order of lookup for the parameters is:

  1. Function profile
  2. abstract device
  3. EPG
  4. Application Profile
  5. tenant.

The Infrastructure admin is the only role which can import and install device packages. Tenant admins can only use them.

To insert the L4-7 services:

  • select the template
  • select the device interfaces
  • add the contract as a provided contract on the provider EPG.
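The last step above (adding the contract as a provided contract on the provider EPG) can be sketched as a REST payload; fvAEPg, fvRsProv and tnVzBrCPName are APIC object-model names, and the payload shape is an assumption to verify:

```python
def epg_provided_contract(epg_name: str, contract_name: str) -> dict:
    """Build an fvAEPg payload that adds a provided contract to an EPG.

    fvRsProv/tnVzBrCPName are assumptions based on the APIC object
    model; verify against your APIC version before posting.
    """
    return {
        "fvAEPg": {
            "attributes": {"name": epg_name},
            "children": [
                {"fvRsProv": {"attributes": {"tnVzBrCPName": contract_name}}}
            ],
        }
    }

payload = epg_provided_contract("app-epg", "web-to-app")
```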

Service Manager Mode

The service device administrator configures the service device using management tools and configures usable policies.

The APIC administrator, in the service graph, points to those policies. This means that the APIC will communicate with the service device management tool or controller to orchestrate those policies.

Both Service Policy and Service Manager modes are managed modes.

Network Policy Mode

The Network Policy Mode is also called:

  • the “no device package or service manager” mode, or
  • the Network-only mode, or
  • the Network Stitching mode.

In this mode, the L4-7 device is completely managed by its own administrator, and the ACI administrator only creates the necessary VLANs.

This mode is helpful in “political” environments, where each department still wants to completely manage its equipment.

Service Graphs

  • is mapped to concrete devices when it is rendered (through a Device Selection Policy)
  • is a collection of Abstract Nodes
  • to invoke it between two EPGs, attach it to a subject in the contract provided/consumed by the EPG. To do that, mark the checkbox Service Graph in the contract configuration page and untick the checkbox Use Filters.
  • We create one or more service graph templates and reuse them in other data paths.
  • When the Service Graph is rendered, the adequate VLANs/VXLANs are programmed onto the Concrete Device Interfaces.

We distinguish:

  • normal service graphs
  • Policy-Based Redirect (PBR) service graphs. Only one Redirect is supported by a service graph.
  • Symmetric Policy-Based Redirect (symmetric PBR) service graphs

PBR Service Graphs are supported on either one or two VRFs.

Symmetric PBR is only available on the Nexus 9300 EX product family and allows provisioning a pool of service appliances. The appliances in this pool are selected based on an ECMP-style hash of the redirected traffic's source and destination IP addresses.
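The symmetric part of that idea (both directions of a flow land on the same appliance because the hash treats source and destination symmetrically) can be sketched in a few lines; the hash here is illustrative only, not the actual leaf hardware hash:

```python
import ipaddress

def pick_appliance(src_ip: str, dst_ip: str, pool: list) -> str:
    """Pick a service appliance using a direction-independent hash of
    (src, dst), so both directions of a flow hit the same node.
    Illustrative only -- not the real leaf ASIC hashing algorithm."""
    a = int(ipaddress.ip_address(src_ip))
    b = int(ipaddress.ip_address(dst_ip))
    # XOR is commutative: (src, dst) and (dst, src) hash identically.
    return pool[(a ^ b) % len(pool)]

pool = ["fw-1", "fw-2", "fw-3"]
# Forward and return traffic of the same flow pick the same appliance.
assert pick_appliance("10.0.0.1", "192.168.1.5", pool) == \
       pick_appliance("192.168.1.5", "10.0.0.1", pool)
```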

A Service Graph can be single node or multinode.

A Service Graph that is configured is only rendered/instantiated when associated to a Device Selection Policy.

Meta device

  • is a symbolic representation of the L4-7 device that connects to the fabric
  • its function, whether it is a firewall, or a load balancer or else, is defined in the Abstract Node

Logical Device (LDev)

  • is a cluster of two or more Concrete Devices
  • is also called a Device Cluster.

The interfaces of a Device Cluster are simply called Device Cluster Interfaces or Logical Interfaces.

Concrete Device (CDev)

  • is a one-to-one representation of the physical L4-7 device in the ACI object model.
  • Interfaces of the Concrete Device (vnsCIf) represent one-to-one the interfaces of the real device but in the format {slot_port}, like interface 1_2 or interface 1_4
  • Each Interface on the Concrete Device maps to an interface on the Logical Device (LIf)
  • Concrete Device Interfaces are mapped to the Logical Interfaces when the Concrete Device is manually added by the APIC administrator to the Device Cluster.
  • If Concrete Devices are in the Active-Standby mode, then there can be at maximum 2 concrete devices in a device cluster.

Connecting the L4-7 Service Device to the ACI Fabric

The L4-7 device connects to the fabric:

  • through a direct L3 peering or
  • through a L3 Out connection.

Creating a L4-7 Device

We decide whether the service device will be managed or unmanaged, by checking/unchecking the Managed box:

We define the type of device in the field named Service Type:

We instruct the APIC whether the service device is a physical appliance or a virtual appliance, under the field Device Type:

  • If the service device is a physical appliance, we must specify a physical domain:
  • If the service device is a virtual appliance, we must indicate a VMM domain. This action supposes that we have already integrated a VMM domain with APIC:
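The choices above (managed vs. unmanaged, physical vs. virtual) map onto attributes of the Logical Device object. A sketch of that payload follows; vnsLDevVip and the managed/devtype attribute values are assumptions based on the APIC object model, to verify on your APIC version:

```python
def logical_device(name: str, managed: bool, virtual: bool) -> dict:
    """Build a vnsLDevVip (Logical Device / Device Cluster) payload.

    The devtype values PHYSICAL/VIRTUAL and the managed yes/no flag
    are assumptions based on the APIC object model; verify before use.
    """
    return {
        "vnsLDevVip": {
            "attributes": {
                "name": name,
                "managed": "yes" if managed else "no",
                "devtype": "VIRTUAL" if virtual else "PHYSICAL",
            }
        }
    }

# An unmanaged physical ASA cluster, for example:
asa = logical_device("ASA-cluster", managed=False, virtual=False)
```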

ASA Firewall Service Insertion

Before inserting an ASA firewall function, we must create a virtual context on the box. The service device will later refer to this virtual context and not to the whole ASA physical device.

If ASA is configured in a cluster, then:

  • we insert the firewall service in ACI as a single node, because ACI sees the ASA cluster as a single logical device.
  • the device management IP address that ACI sees is in reality the Master management IP address of the ASA virtual context, and
  • the Cluster management IP address that ACI sees is in reality the Master management IP address of the ASA admin context.

Although attaching a virtual ASA (ASAv) cluster to the fabric may seem an obvious working choice, it is not: only clustering of physical ASA firewalls is supported on ACI.

If ASAv is used, it is recommended to configure VMM integration first, which enables detecting the ASAv VM in the ACI object model during the creation of the Service Graph Template. The Service Graph Template, in the case of a normal Service Graph in Network Policy Mode, contains:

  • the location of the service device
  • outside and inside interfaces
  • consumer and provider EPGs

In ACI, ASA clustering must occur over the same port channel or the same vPC. Clustering across multiple pods is not supported. ASA clustering uses Spanned EtherChannel.

A manual insertion of ASA (that is without Service Graphs):

  • works only in go-to mode.
  • needs 2 VRFs, 2 Bridge Domains, mapping the external and the internal firewall interfaces to an L3 Out each.

If one of the Bridge Domains is set up as L2 only, then the other Bridge Domain learns the IP and MAC addresses of the L2-Bridge Domain too.

We distinguish the firewall inside interface and the firewall outside interface. For each firewall interface we associate a bridge domain. This takes place while configuring the service graph for a contract.

When NAT is configured on the outside interface of the ASA, the APIC does not need visibility into the internal subnets when a client packet arrives ingress on a leaf. Without NAT, however, the ASA needs to establish a layer 3 peering on its outside interface with the fabric leaf in the form of an L3 Out, and we define the subnets that sit behind the internal firewall interface as external networks in the L3 Routed Network object.
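For the no-NAT case, declaring the subnets behind the firewall as external networks in the L3 Out can be sketched as a REST payload; l3extOut, l3extInstP and l3extSubnet are APIC object-model class names, and the payload shape is an assumption to verify:

```python
def l3out_external_networks(l3out_name: str, subnets: list) -> dict:
    """Build an l3extOut payload declaring the subnets behind the
    firewall as external networks of an external EPG. Class names
    are assumptions based on the APIC object model; verify before use.
    """
    return {
        "l3extOut": {
            "attributes": {"name": l3out_name},
            "children": [
                {"l3extInstP": {
                    "attributes": {"name": l3out_name + "-extEpg"},
                    "children": [
                        {"l3extSubnet": {"attributes": {"ip": s}}}
                        for s in subnets
                    ],
                }}
            ],
        }
    }

out = l3out_external_networks("fw-outside", ["10.10.0.0/16", "10.20.0.0/16"])
```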

Firewall in Go-To Mode or in Go-Through Mode

ASA can be inserted in:

  • go-through mode: in this case we call the ASA a L2 firewall or a transparent firewall.
  • go-to mode: in this case we say that we are deploying the ASA in a L3 mode.

ASA Firewall Deployment in Go-To Mode

In the go-to mode we distinguish:

  • go-to-mode with the firewall being the gateway for internal and external endpoints:
    • the internal firewall interface is mapped to a bridge domain 1 which is in Layer 2 only and has Unicast Routing turned off
    • the external firewall interface is mapped to a bridge domain 2 which is also in Layer 2 only and has Unicast Routing turned off
    • there is no L3 Out between the firewall and the ACI fabric.
  • go-to mode with L3 peering:
    • the internal firewall interface is mapped to a bridge domain 1 which is in Layer 2 only and has Unicast Routing turned off
    • the external firewall interface is mapped to a bridge domain 2 which must be a layer 3 bridge domain, and a L3 Out is configured between the firewall and the ACI fabric.
    • the L3 peering occurs with either static or dynamic routing.
    • the firewall peers with bridge domain 2 gateway.
    • this mode is required if we want to have a layer 3 firewall without NAT.


If:

  • the ASA is in L3 mode and
  • both bridge domains that are associated to the firewall internal and external interfaces are layer 3 bridge domains, and
  • the ASA uses NAT,

then we need to activate IP aging in the fabric.

ASA Firewall Deployment in Go-Through Mode

We can also deploy ASA in a go-through mode. And in this case we have two possible settings:

  • Setting 1: both bridge domains that are mapped to the internal and external firewall interfaces are L2. And an external device performs routing.
  • Setting 2: the bridge domain that is mapped to the firewall external interface is layer 3, and is configured to limit IP learning to the subnet.

Insert an ASAv Firewall Using a Service Graph in Network Policy Mode (aka Unmanaged Mode)

Insert an ASAv Firewall Using a Service Graph in Service Policy Mode, Managed Mode

A couple notes I add:

  • the firewall is integrated as a go-to-mode firewall
  • we have Consumer Connector and Provider Connector
  • “predefined” function profiles
  • we map the firewall interfaces to the firewall virtual machine network adapters and we specify which interface is outside and which is inside.

ASA Failover in ACI

Originally, ASA supports the failover mechanism over a configured physical interface and VLAN, which is considered an out-of-band failover network.

However, it is possible to set up an in-band failover network for a physical ASA cluster in ACI by configuring a failover EPG with static binding, just like in a bare metal host integration.

If we have however an ASA virtual cluster and want to configure either in- or out-of-band failover, then:

  • we define a failover EPG manually, and
  • we define a port group, manually or through VMM domain integration.
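The static binding of the failover EPG to a leaf port can be sketched as a REST payload; fvRsPathAtt and the tDn/encap formats are assumptions based on the APIC object model, to verify on your APIC version:

```python
def static_path_binding(pod: int, node: int, port: str, vlan: int) -> dict:
    """Build an fvRsPathAtt payload statically binding an EPG (e.g. the
    failover EPG) to a leaf port. The tDn path format and the
    'vlan-<id>' encap format are assumptions to verify before posting."""
    tdn = "topology/pod-{}/paths-{}/pathep-[eth{}]".format(pod, node, port)
    return {
        "fvRsPathAtt": {
            "attributes": {"tDn": tdn, "encap": "vlan-" + str(vlan)}
        }
    }

binding = static_path_binding(pod=1, node=101, port="1/10", vlan=100)
```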

IPS Service Insertion

  • Cisco ACI supports inserting the Firepower IPS as a L4-7 service. By the way, both Cisco Firepower and the Cisco Next-Generation Firewall are part of the Firepower Threat Defense (FTD) device package.
  • the IPS must be registered to the Firepower Management Center FMC
  • Cisco Firepower IPS intervenes before, during and after an attack to quarantine the malicious workload, put it in a uSeg EPG and log an entry into the FMC.
  • The IPS device / device cluster can be integrated into ACI with either of the following methods:
    • in a Service Graph in managed mode (Service Manager Mode):
      • the security administrator defines a security policy with allowed protocols, which can be edited later
      • the security administrator defines security profiles but does not define security zones.
      • the network administrator defines interfaces, matching VLANs, security zones and invokes security profiles.
    • in a Service Graph in unmanaged mode
    • manual mode
  • The IPS operates in either of the following modes
    • Layer 1:
      • the IPS requires its node legs to be connected to two Bridge Domains, each leg to a bridge domain, both bridge domains having the same VLAN encapsulation.
      • 2 VLAN Pools with the same block range are used
      • is not supported by the Service Graph
      • here we must disable loop detection protocols (in ACI case: MCP, LLDP) under the leaf interface policies.
    • Layer 2
    • Layer 3
  • The topology of the IPS within the ACI fabric can be:
    • one-arm with SNAT:
      • The IPS service device is connected to the fabric with only one leg
      • The IPS service device holds the vIP of the server(s), i.e. the external client (that wants to communicate with the server sitting behind the IPS) sends packets to the vIP, which resides on the IPS. The IPS performs SNAT on the source IP address of the packet, which hides the real IP address of the client from the server.
    • two-arm with PBR:
      • The IPS service device connects with two legs to the fabric.
      • The client communicates with the vIP on the IPS, thinking it is the server. Then the IPS forwards the packet without SNAT to the server. The server sends packets back to the client IP address, but the packets are redirected to the IPS.
      • This setting is required when the server has a need to “see” the real IP address of the client.
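The PBR leg of the two-arm design relies on a redirect policy listing the service node's IP and MAC. A sketch of that payload follows; vnsSvcRedirectPol and vnsRedirectDest are APIC object-model class names, and the shape is an assumption to verify:

```python
def redirect_policy(name: str, dest_ip: str, dest_mac: str) -> dict:
    """Build a vnsSvcRedirectPol payload with one redirect destination
    (the IPS leg's IP/MAC). Class and attribute names are assumptions
    based on the APIC object model; verify on your APIC version."""
    return {
        "vnsSvcRedirectPol": {
            "attributes": {"name": name},
            "children": [
                {"vnsRedirectDest": {
                    "attributes": {"ip": dest_ip, "mac": dest_mac}
                }}
            ],
        }
    }

pol = redirect_policy("ips-redirect", "10.1.1.10", "00:50:56:AA:BB:CC")
```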


Copy Service Feature

The Copy Service is a L4-7 service we can integrate in ACI. It requires a Service Graph. It is supported on Nexus 9300 EX platforms.

We can benefit from the Copy Service when we integrate the Firepower IPS within ACI fabric. The Copy Service is different from SPAN in the following ways:

  • the traffic is copied in the data path, and the copy process does not impact performance because it is handled internally by the leaf.
  • no additional headers are added to the copied packets
  • The traffic to be copied is specified as part of a contract. So not all traffic is by default copied. Therefore, unless specified in the contract, the BUM traffic is for example not included.
  • like a SPAN session, the Copy Service has a source and a destination. The source of the copy service is limited to one interface, either logical or physical.
  • Copy Cluster is the destination device that receives the copied traffic.
  • CoS and DSCP values cannot be copied

ADC (Load Balancer) Service Insertion

  • An ADC (Application Delivery Controller) device performs content load balancing among other functions.
  • We distinguish the ADC server-side interface and the ADC client-side interface.
  • The ADC contains a VIP (virtual IP) which is the known interface to external clients. The Self IP address however is the real IP address of the ADC external leg (or leg, if we have a one-arm design).
  • Clients send packets to the VIP on the ADC, and the ADC performs NAT and SNAT.
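The VIP/Self IP behaviour described above can be sketched as a toy packet rewrite (the server-selection hash here is illustrative, not any real ADC algorithm):

```python
def adc_forward(pkt: dict, vip: str, self_ip: str, servers: list) -> dict:
    """Toy one-arm ADC: destination-NAT the VIP to a real server and
    source-NAT the client to the ADC self IP, so return traffic flows
    back through the ADC. The selection hash is illustrative only."""
    assert pkt["dst"] == vip, "clients must target the VIP"
    real = servers[sum(ord(c) for c in pkt["src"]) % len(servers)]
    return {"src": self_ip, "dst": real}

pkt = {"src": "198.51.100.7", "dst": "203.0.113.10"}
fwd = adc_forward(pkt, vip="203.0.113.10", self_ip="10.0.0.2",
                  servers=["10.0.0.11", "10.0.0.12"])
# The server sees the ADC self IP as the source, never the client IP.
```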

f5 Concepts

  • f5 employs iWorkflow, which generates a dynamic f5 device package to be integrated with ACI.
  • f5 uses a smart templating technology called iApps
  • We must configure the application template with iApps, then choose a template from iWorkflow catalog. After that, the f5 device package can be instantiated and uploaded to APIC.
  • the popular f5 ADC is called BIG-IP.

Citrix ADC (aka Netscaler)

  • Citrix leverages the concept of Playbooks. A Playbook is a configuration template that we set up to suit a specific application.
  • Citrix NMAS or MAS: Network Management and Analysis System.

