Integrating OpenStack with ACI

Below are my study notes on integrating OpenStack with Cisco ACI. I begin by defining some concepts, then describe the source NAT feature within OpenStack, and finish by discussing the networking plugins.

  • OpenStack is a set of open-source technologies that together provide enterprise cloud computing capabilities.
  • An OpenStack deployment has, at minimum, the following node types:
    • a compute node, which runs the nova-compute service (Nova), and
    • a controller node, which hosts many services. These services can be distributed over dedicated servers, such as:
      • storage nodes (running the object storage service, Swift) and
      • network nodes (running the networking service, originally called Quantum and later renamed Neutron).
  • A compute node hosts one or more instances (the equivalent of virtual machines). Each instance is identified by its instance ID.
  • Each OpenStack compute node has an Open vSwitch integration bridge named br-int. Each instance connects to br-int through a tap virtual interface (typically via an intermediate Linux bridge, which is used to apply security groups).
  • Neutron supports VLAN, VXLAN, and GRE encapsulations, and it also provides software-based layer 3 forwarding.
  • Neutron runs on the OpenStack controller node (or on a dedicated network node).
  • When we say that we are integrating Cisco ACI with OpenStack, we mean allowing OpenStack to manage ACI programmatically. This is only possible when OpenStack uses KVM as its hypervisor. On the OpenStack side, it is done through ML2 or GBP, which define the networking objects. The results are visible in the ACI GUI under VM Networking -> VMM Domain -> OpenStack, where you can see the integrated hypervisors, the OVS, the compute nodes, the instances, etc.
  • Once ACI is integrated with OpenStack and a compute node exists (see the sketch after this list):
    • creating a project in OpenStack results in creating a tenant in ACI
    • creating a network in OpenStack results in creating a bridge domain in ACI
    • creating a subnet for a network and creating a router between networks result in:
      • creating a subnet under the bridge domain
      • creating an EPG associated with it
      • creating a VRF
      • creating an “any-any” contract between EPGs
  • Defining a Neutron subnet won’t by itself create the subnet under the associated bridge domain. The subnet appears only after you create the Neutron router and attach the subnet to one of its interfaces.
  • The Cisco Unified Plugin was developed by Cisco to integrate ACI with OpenStack. It has two modes: ML2 and GBP.
  • When we install the Cisco ACI plugin for OpenStack, we can optionally replace the native Neutron OVS and L3 agents with the OpFlex OVS agent (which interacts with OVS using the OpenFlow protocol) and the OpFlex-Neutron agent (which communicates with the Neutron server). If we do, we need to extend the infrastructure VLAN out of ACI into the compute node, because OpFlex runs only on the infrastructure VLAN.
  • The Cisco ACI plugin for OpenStack has two modes:
    • OpFlex mode and
    • non-OpFlex mode, also called Physdom mode.
  • Like ACI, OpenStack has the concept of an L3Out. The L3Out can be dedicated per OpenStack project or shared. In the latter case, configure the shared L3Out in the ACI common tenant.
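
The ACI-side effects listed above can be triggered from OpenStack with a few API calls. Below is a minimal openstacksdk sketch, assuming a cloud entry named "mycloud" in clouds.yaml and an ACI-integrated Neutron; the resource names and CIDR are illustrative. Project creation (which maps to an ACI tenant) is omitted, since it requires admin scope.

    # Minimal openstacksdk sketch of the OpenStack-to-ACI workflow above.
    # Assumes a cloud named "mycloud" in clouds.yaml; names are illustrative.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # Creating a network -> a bridge domain appears in the ACI tenant.
    net = conn.network.create_network(name="web-net")

    # Defining the Neutron subnet alone does NOT yet create the subnet
    # under the ACI bridge domain (see the note above).
    subnet = conn.network.create_subnet(
        network_id=net.id,
        ip_version=4,
        cidr="10.10.1.0/24",
        name="web-subnet",
    )

    # Creating the router and attaching the subnet to one of its
    # interfaces is what triggers the subnet, the EPG, the VRF, and the
    # any-any contract on the ACI side.
    router = conn.network.create_router(name="web-router")
    conn.network.add_interface_to_router(router, subnet_id=subnet.id)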

OpenStack and Source NAT

Egress Communication

A SNAT range of public IP addresses is automatically defined in a bridge domain in the common tenant. An EPG associated with this bridge domain is created automatically as well.

Each compute node has its own reserved range of public IP addresses (let’s call it the SNAT range), which is configured in the ACI ML2 plugin configuration file.

When an instance initiates traffic out of the node, its source IP address is translated (SNAT) to an address taken from the public range reserved for that node; the corresponding subnet is defined in the bridge domain.
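
As a conceptual illustration only (the real translation is done by the plugin and OVS, not by code like this), the per-node SNAT behavior can be pictured as follows; the node names and address ranges are made up:

    # Conceptual sketch of per-compute-node SNAT (illustrative only).
    import ipaddress

    # Hypothetical per-node SNAT ranges, as they would be set in the
    # ACI ML2 plugin configuration file.
    snat_ranges = {
        "compute-1": ipaddress.ip_network("192.0.2.0/28"),
        "compute-2": ipaddress.ip_network("192.0.2.16/28"),
    }

    def snat_source(node: str, index: int) -> str:
        """Pick a public source address from the node's reserved range."""
        return str(list(snat_ranges[node].hosts())[index])

    # Outbound traffic from an instance on compute-1 leaves the fabric
    # with a source address drawn from that node's range.
    print(snat_source("compute-1", 0))  # e.g. 192.0.2.1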

If an L3Out needs to be shared between projects, then:

  • the L3Out must be manually defined in the common tenant,
  • an external EPG must be defined, and
  • a “shadow” L3Out is created in each consuming tenant.

Ingress Communication

Ingress packets from the outside do not reach the real IP address of the instance directly; they target a floating IP address instead.

When an ingress packet arrives on the floating IP address, the reply from the compute instance has its source IP address translated to the floating IP value, even if a SNAT range is already defined; the floating IP takes precedence over SNAT.
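
For example, a floating IP can be allocated from the external network and bound to an instance’s Neutron port. A minimal openstacksdk sketch, where the network and port names are illustrative:

    # Sketch: allocate a floating IP from an external network and bind
    # it to an instance's Neutron port (names are illustrative).
    import openstack

    conn = openstack.connect(cloud="mycloud")

    ext_net = conn.network.find_network("external-net")
    port = conn.network.find_port("web-instance-port")

    # Replies from the instance will be NATed to this floating IP,
    # taking precedence over the node's SNAT range.
    fip = conn.network.create_ip(
        floating_network_id=ext_net.id,
        port_id=port.id,
    )
    print(fip.floating_ip_address)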

ML2

  • ML2 (Modular Layer 2) is a framework developed to interact with the networking node. It is part of Neutron.
  • OpenStack ML2 has no concept of contracts. The only native security feature between instances is security groups (see the sketch after this list).
  • Cisco has developed its own variation of it, called Cisco ML2, in order to integrate ACI with OpenStack.
  • To integrate ACI with an OpenStack environment through a VMM domain, you configure this not on ACI but on the OpenStack network node itself, using the Cisco ML2 plugin. Then, when you create an OpenStack project, an ACI tenant is created too, along with a bridge domain and a VRF.
  • ML2 allows only one EPG per bridge domain.
  • The Cisco ML2 plugin for OpenStack replaces the Open vSwitch agent and the layer 3 agent that are native to the OpenStack networking node with an OpFlex-based Open vSwitch agent. In addition, it supports only the VLAN and VXLAN layer 2 transport protocols.
  • When integrating ACI with OpenStack, we can choose between the ML2 object model and the GBP object model.
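
Since security groups are ML2’s only native security mechanism, here is a small openstacksdk sketch of an “allow SSH” rule; the group name is illustrative:

    # Sketch: security groups are ML2's only native instance-to-instance
    # security mechanism (no contracts). Group name is illustrative.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    sg = conn.network.create_security_group(name="web-sg")

    # Allow inbound SSH; roughly comparable in intent to an ACI contract
    # filter, but enforced per instance port rather than between EPGs.
    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction="ingress",
        ethertype="IPv4",
        protocol="tcp",
        port_range_min=22,
        port_range_max=22,
    )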

GBP

GBP (Group-Based Policy) is a network policy framework that is simpler than ML2 for interacting with Neutron. It does not intend to replace ACI, but its constructs are very similar to those of ACI. It was developed, as an alternative to ML2, to make network integration concepts accessible to people who have little to do with network engineering.

Also, GBP does not offer additional functionality compared with native Neutron ML2.
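
As a rough reference, the usual correspondence between GBP constructs and ACI constructs can be summarized as follows. This is a simplification; the exact mapping depends on the plugin version:

    # Rough GBP-to-ACI construct mapping (simplified; exact behavior
    # depends on the Cisco plugin version).
    gbp_to_aci = {
        "policy group": "endpoint group (EPG)",
        "policy rule set": "contract",
        "L2 policy": "bridge domain",
        "L3 policy": "VRF",
        "external segment": "L3Out",
    }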
