Cisco ACI Automation Deep Dive - Part 1

SDN as the Foundation for Infrastructure Automation

One of the key SDN promises is to provide better agility when it comes to responding to demands for network changes. In other words, the network should reflect those changes as soon as possible, independently of the topology. Whether a new security rule is deployed across the data center or a new VM is provisioned and needs basic networking constructs, the deployment and rendering of these changes shouldn't depend on the location of the device. Therefore, SDN solutions need a centralised programmable interface that can interact with any device where changes must be applied.
Nowadays this interface is commonly implemented in the form of a RESTful API that can deal with various markup formats, such as JSON and XML. Although modern programming languages provide comprehensive tool sets to work with flat files and send them to the API URI, I personally find it much more practical to work with an object model wrapper or programming language bindings. So in addition to a programmable interface, I would add another important requirement for an efficient SDN solution: a flexible object model along with the corresponding bindings (whatever the language, though I'd definitely prefer JS or Python, but that's my call).
Essentially, having an object model that is centrally programmable via policies, and rendering those policies into the hardware where required in a declarative fashion (i.e. high-level intent), are the foundations of ACI. In this post, I'm going to walk you through the first steps for making ACI the best SDN automation tool.

Fabric programmability with ACI

ACI provides different levels of programmability. For example, the API allows you to define connectivity policies within the fabric. This includes configuring vPC on specific ports between two leaf nodes, or extending external L2 or L3 domains to the fabric (VLAN, VXLAN or VRF extension to the outside). The policy model defines endpoint connectivity requirements and how border leaf nodes peer with the external network at Layer 3. This physical configuration management can be considered "fabric infrastructure programmability", as opposed to "virtual fabric programmability", which is more tied to defining connectivity and network services requirements for applications, at a higher level.
Let's take a look at different methods for consuming APIC's API to achieve a particular topology configuration. The target design is the following:

We want to connect a 4-node ESXi cluster via vPCs created between Leaf nodes 101 and 102. Let's also assume that we want to provide external routed connectivity for Virtual Machines that will be hosted within this cluster. The external router will leverage VRF-lite to maintain multi-tenancy as traffic is leaving the fabric.

Sending JSON code to APIC

The simplest way to program the fabric is to send JSON formatted instructions to the controller cluster (all 3 controllers will accept the request) with HTTP REST calls. The object model is similar to a folder structure, where the top container is at the root level and all tree intersections or nodes represent an object and contain properties. The subsequent nodes in the tree are called children. So, from the root, an ACI JSON object would have the following representation:

{
  "Policiy_universe_children_class": {
    "attributes": {
    "name": "instance_name",
    "attribute_1": "attribute_1_value",
    "attribute_2": "attribute_2_value",
    ....
    },
    "children": [{
      "children_class": {
        "attributes": {
          "name": "instance_name",
          "attribute_1": "attribute_1_value",
          "attribute_2": "attribute_2_value",
          ...
        },
        "children": [{
        ...
        }]
      } 
    }]
  }
}

Then JSON REST requests can be sent to multiple resources with different JSON constructs:

  • To the root MO, using the distinguished name (DN) of the object we want to create/modify.
  • To the root MO, building the full JSON representation from the root to the desired object.
  • To the URI of the resource directly containing the object (parent). In this case you just have to specify the object name; the DN doesn't have to be included in the code.

Here is a quick example that creates a policy named "CDP_Enabled" with these various methodologies:

POST to the URL http(s)://<apic>/api/mo/uni.json (uni is the root resource) using the DN (if you want to know how to log in to APIC first, check Hello World Automation with Cisco ACI):

{  
   "cdpIfPol":{  
      "attributes":{  
         "adminSt":"enabled",
         "dn":"uni/infra/cdpIfP-CDP_on",
         "name":"CDP_Enabled"
      }
   }
}

The following is equivalent to the previous one, this time not specifying the DN but building the absolute JSON description of the tree down to the CDP policy object (uni -> infraInfra -> cdpIfPol):

{  
   "infraInfra":{  
      "attributes":{  

      },
      "children":[  
         {  
            "cdpIfPol":{  
               "attributes":{  
                  "name":"CDP_Enabled",
                  "adminSt":"enabled"
               }
            }
         }
      ]
   }
}

Or you can POST to the URI http(s)://<apic>/api/mo/uni/infra.json (infra is the resource containing the CDP instance), omitting the "infraInfra" wrapper:

{  
   "cdpIfPol":{  
      "attributes":{  
         "adminSt":"enabled",
         "name":"CDP_Enabled"
      }
   }
}
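
If you'd rather script these calls than use a REST client, here's a minimal Python sketch using the requests library. The APIC address and credentials are placeholders to adapt, and it posts the last payload (the one without the DN) to the parent resource:

import json
import requests

APIC = "https://apic.example.com"      # placeholder APIC address
USER, PASSWORD = "admin", "password"   # placeholder credentials

# Authenticate: the token returned by aaaLogin is kept by the session as an HTTP cookie
session = requests.Session()
session.verify = False  # lab only: skip certificate validation
login = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
session.post(f"{APIC}/api/aaaLogin.json", data=json.dumps(login)).raise_for_status()

# Post the CDP policy to the resource containing it (infra), so no DN is needed in the payload
cdp_policy = {"cdpIfPol": {"attributes": {"adminSt": "enabled", "name": "CDP_Enabled"}}}
resp = session.post(f"{APIC}/api/mo/uni/infra.json", data=json.dumps(cdp_policy))
resp.raise_for_status()
print(resp.status_code)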

Back to our scenario. We need to create multiple objects in order to keep the object model happy with our vPC configuration.

Referring to the above picture, to get our vPC configuration up and running for vSphere hosts and VM network integration we need:

  • One dynamic VLAN pool and an associated VLAN block.
  • One VMM (Virtual Machine Manager) domain linked to the vCenter containing the vSphere hosts we want to connect, optionally with a vSwitch policy. This will provide vCenter network integration with ACI. We also need the associated Access Entity Profile.
  • vPC interface policies grouped into vPC policy groups, one per ESXi host. That's 4 in total, as we need to connect 4 hosts.
  • One interface profile per vPC policy group, with the corresponding interface selector. Each host is connected to a single port per peer switch.
  • One node (or switch) profile representing leaf 101 and leaf 102 connectivity (they will use the same port numbers for dual-attached hosts, e.g. port 1 on leaf 101 and port 1 on leaf 102 for the ESX1 vPC).

Similarly, to connect the physical router we need the following:

  • One VLAN pool and an associated VLAN block.
  • One external routed domain.
  • vPC interface policies and policy group to dual home the router to the fabric.
  • The corresponding interface profile.
  • The node profile for leaf 103 and 104.

Where can I find information about objects and classes?

We've just defined the objects that need to be created. Now the question is: how can I get information about the actual JSON code I need to build?
There are a couple of ways to learn about the object model very quickly. The first one is by using the object explorer, which you can find at http://<apic>/visore.html. The following picture shows the output for the interface policy previously defined. We can query the entire class, in this case "cdpIfPol". The result will display all existing objects of this class.

The flip side of this methodology is that objects have to be created prior to being consulted. So if it's the very first time you're creating an object, you won't be able to get the properties dump beforehand, unless this object already has a "default" instance generated. The value of Visore is that it gives you the ability to access children and parent constructs very easily by clicking the "<" and ">" arrows next to the object name. This will help you find all the attributes and children objects you need to build the request.
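
If you prefer the command line to the GUI, the same class query Visore performs can also be sent directly to the REST API. Here's a rough sketch, reusing the authenticated requests session from the earlier Python example:

def dump_class(session, apic, class_name):
    # Return all existing objects of a class, like a Visore class query
    resp = session.get(f"{apic}/api/class/{class_name}.json")
    resp.raise_for_status()
    return resp.json()["imdata"]

# Example: list every CDP interface policy currently defined on the fabric
for obj in dump_class(session, APIC, "cdpIfPol"):
    print(obj)
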
The second way to find which attributes and classes are required to instantiate an object is to create it via the GUI and then save it in JSON format. You can then modify the file and re-push the configuration to the APIC via a new REST call.
Alternatively, one could also leverage the API inspector. As you create objects in the GUI, the corresponding JSON code is logged in a separate window, as depicted below:

I've highlighted the event describing the creation of a new tenant.

There's also a very nice ACI API logger on GitHub. It creates a small web service that listens to APIC responses.

Finally, you can use the object explorer from the embedded documentation. This will give you the full description of object classes as well as a good overview of the required properties, DN format and available methods, plus things like property constraints, such as which characters are allowed for the object name, min and max field values, and relations to other model trees (concrete, logical and resolved models). This one is not for the faint of heart!

To make things simpler, here are a few hints about object URIs:

  • Items defined under "Fabric > Access Policies" in the ACI GUI will be located under Uni > infraInfra instance.
  • Items defined under " Fabric > Fabric Policies" in the ACI GUI will be located under Uni > fabricInst instance.
  • Items related to VMM domains will be located under Uni > vmmProvP > VmmDomP instance.
  • Items related to physical domains will be located under Uni > physDomP instance.
  • Items related to external routed domains will be located under Uni > l3extDomP instance.
  • Items related to Tenants and applications policies (fabric virtualization and SDN components) will be located under Uni > fvTenant instance.

Creating VLAN pools

Using the method of your choice, you will end up with the following path to reach the VLAN pool object: Uni > infraInfra, and the class name is fvnsVlanInstP.
The following example uses VLANs 1000-1500 and the pool name "VMware_pool". You can also notice that the allocation mode is dynamic, which is required for VMM domain creation.

 {  
   "infraInfra":{  
      "attributes":{  

      },
      "children":[  
         {  
            "fvnsVlanInstP":{  
               "attributes":{  
                  "allocMode":"dynamic",
                  "name":"VMware_pool"
               },
               "children":[  
                  {  
                     "fvnsEncapBlk":{  
                        "attributes":{  
                           "allocMode":"inherit",
                           "from":"vlan-1000",
                           "to":"vlan-1500"
                        }
                     }
                  }
               ]
            }
         }
      ]
   }
}

You can then post the JSON request to the APIC at http://<apic>/api/mo/uni.json. Jump to "Creating Access Entity Profiles" to learn how to use POSTMAN to do it.

In a similar way, a VLAN pool must be created to connect our border router. This routing device will be connected to the ACI fabric as a member of an external routed domain (as opposed to ESXi hosts, connected to a VMM or Virtual Machine Manager domain). VLAN allocation mode must be static in this case. The remaining part of the code is quite similar:

 {  
   "infraInfra":{  
      "attributes":{  

      },
      "children":[  
         {  
            "fvnsVlanInstP":{  
               "attributes":{  
                  "allocMode":"static",
                  "name":"outside_vlans"
               },
               "children":[  
                  {  
                     "fvnsEncapBlk":{  
                        "attributes":{  
                           "allocMode":"inherit",
                           "from":"vlan-100",
                           "to":"vlan-110"
                        }
                     }
                  }
               ]
            }
         }
      ]
   }
}

As a result, we now have two VLAN pools: the first is named VMware_pool and contains VLANs 1000-1500; it will be used for VMware port group provisioning. The second is called outside_vlans and contains VLANs 100-110.
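
As a quick sanity check, you can read both pool objects back by their DN (note that the allocation mode is part of the DN, as we'll see again when referencing the pools later). A small sketch, again reusing the session from the earlier Python example:

# The "[pool_name]-allocMode" suffix is part of each VLAN pool DN
for dn in ("uni/infra/vlanns-[VMware_pool]-dynamic",
           "uni/infra/vlanns-[outside_vlans]-static"):
    resp = session.get(f"{APIC}/api/mo/{dn}.json?query-target=children")
    resp.raise_for_status()
    print(dn, "->", resp.json()["totalCount"], "child object(s)")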

Creating the VMM Domain

The VMM domain (vmmDomP) defines how ACI integrates with hypervisor managers. We currently support VMware, OpenStack (KVM) and Hyper-V. In our scenario, we need to provide integration with VMware, which is achieved via vCenter integration. Once the VMM domain is created, the APIC pushes a VMware VDS to vCenter (we can also specify vmmDomP mode "n1kv", which will leverage Cisco AVS instead). Then vSphere hosts that are connected to the ACI fabric can be added to the VDS from the vCenter console, allowing them to take advantage of ACI policies that will be defined later.
When creating the VMM domain, we need to provide several pieces of information, including:

  • vCenter IP address or FQDN (the hostOrIp attribute of the vmmCtrlrP object below).
  • vCenter credentials, defined by the vmmUsrAccP object type and referenced by the vmmRsAcc relation object.
  • Datacenter object name (rootContName).
  • Version of the VDS to be used (optional).

If you want to examine all required components, I'd suggest creating one test VMM domain with the GUI, then checking the result by saving the JSON file, or using any of the other methods I mentioned above. To save you some time, here's the minimum JSON code you'll need to create a VMM domain with a "VMware provider":

{  
   "vmmProvP":{  
      "attributes":{  
         "vendor":"VMware"
      },
      "children":[  
         {  
            "vmmDomP":{  
               "attributes":{  
                  "name":"VDS-01"
               },
               "children":[  
                  {  
                     "vmmCtrlrP":{  
                        "attributes":{  
                           "name":"vcenter",
                           "hostOrIp":"10.0.0.1",
                           "rootContName":"DC-01"
                        },
                        "children":[  
                           {  
                              "vmmRsAcc":{  
                                 "attributes":{  
                                    "tDn":"uni/vmmp-VMware/dom-VDS-01/usracc-root"
                                 }
                              }
                           }
                        ]
                     }
                  },
                  {  
                     "infraRsVlanNs":{  
                        "attributes":{  
                           "tDn":"uni/infra/vlanns-[VMware_pool]-dynamic"
                        }
                     }
                  },
                  {  
                     "vmmUsrAccP":{  
                        "attributes":{  
                           "name":"root",
                           "usr":"root",
                           "pwd":"Cisco123"
                        }
                     }
                  }
               ]
            }
         }
      ]
   }
}

The VLAN pool we created earlier is referenced by the target DN
"uni/infra/vlanns-[VMware_pool]-dynamic". This URI is built upon a specific pattern that is detailed in the API documentation located on the APIC (I describe how to verify a tDn naming convention a few paragraphs below).

If you're more familiar with XML, here's the corresponding XML code (replace uni.json with uni.xml when posting the configuration to the APIC):

<vmmProvP vendor="VMware">  
<!-- VMM Domain -->  
<vmmDomP name="VDS-01">  
<!-- Association to VLAN Namespace -->  
<infraRsVlanNs tDn="uni/infra/vlanns-[VMware_pool]-dynamic"/>  
<!-- Credentials for vCenter -->  
<vmmUsrAccP name="root" usr="root" pwd="Cisco123" />  
<!-- vCenter IP address -->  
<vmmCtrlrP name="vcenter" hostOrIp="10.0.0.1" rootContName="DC-01">  
<vmmRsAcc tDn="uni/vmmp-VMware/dom-VDS-01/usracc-root"/>  
</vmmCtrlrP>  
</vmmDomP>  
</vmmProvP>  

Creating the External Routed Domain to connect the border router

In ACI, a domain is a construct ultimately binding VLAN pools and security domains to physical ports in the fabric. Security domains can further be associated with tenants and RBAC users. Also, users are granted particular permissions to access both tenants and domains. This means you can, for example, restrict a particular tenant to a single domain or to multiple domains, whether VMM or physical. When VMM access is granted, the tenant can push port groups in vCenter or SCVMM. When physical domain access is granted, tenant users can then program a VLAN defined in the pool on specific physical ports. With this logic, one could give a tenant access to a limited number of ports in the fabric, or to a specific set of leaf nodes for example.

A physical domain called "phys" is available out of the box. You can see the JSON code required to create the object by navigating to Fabric > Physical and External Domains > Physical Domains > phys, then right-clicking and selecting Save as. For Content, select "Only Configuration"; for Scope, select "Subtree"; and choose JSON for Export Format. The minimum JSON code you'd need to create a physical domain is the following:

{  
   "physDomP":{  
      "attributes":{  
         "name":"outside"
      },
      "children":[  
         {  
            "infraRsVlanNs":{  
               "attributes":{  
                  "tDn":"uni/infra/vlanns-[outside_vlans]-static"
               }
            }
         }
      ]
   }
}

You can notice that the object class is now "physDomP", which represents a physical domain. This is why you will sometimes hear ACI specialists talk about a "physdom". It just means that the ACI domain covers physical hosts or a particular design that doesn't integrate hypervisor virtual machine networking (e.g. Oracle VM).

In our scenario, however, we don't need to use the physical domain at all, because we only have hypervisors, which leverage the VMM domain, and the border router, which leverages an external routed domain. Essentially, an external domain is not that different from a physical domain; it's just used somewhere else in the object model. More specifically, when we make the ACI fabric peer with the border router by creating what we call an L3 out object, we'll need to reference an external routed domain, as opposed to a physical domain when directly connecting bare-metal servers to the fabric.

In our scenario, the JSON configuration representing the external routed domain in the object model would be as follows:

{  
   "l3extDomP":{  
      "attributes":{  
         "name":"outside_L3"
      },
      "children":[  
         {  
            "infraRsVlanNs":{  
               "attributes":{  
                  "tDn":"uni/infra/vlanns-[outside_vlans]-static"
               }
            }
         }
      ]
   }
}

We've called our domain "outside_L3" and we also need to specify the target DN (tDn) pointing to the VLAN pool object, which we previously named "outside_vlans"; for the moment I'm just giving you the right format. Notice that the only difference compared to a "standard" physical domain is the domain object class, which is now l3extDomP.

Creating Access Entity Profiles

The AEP can be seen as a VLAN scope for fabric ports, because the AEP ultimately establishes the link between leaf interfaces and domains. On top of that, a domain itself can be tied to specific tenants. So the AEP is also sometimes referred to as the VLAN "allowed list", which is also valid (although it omits the tenant visibility aspect). It's worth noting that this only defines which VLANs can be used, but doesn't program them on the fabric. How policies are actually brought down to the leaf nodes is defined by the resolution and deployment immediacy parameters (a quick example of where these parameters live follows the list below):

  • Resolution immediacy is only relevant for hypervisors. If it is set to immediate, the leaf downloads the policy as soon as the hypervisor joins the ACI-managed virtual switch. If it is set to lazy, the policy is downloaded when a VM is connected to the virtual port.
  • Deployment immediacy defines how the policy is effectively deployed in the hardware. If set to immediate, the policy is programmed in the TCAM as soon as it gets downloaded on the leaf. If set to lazy, it is deployed when the first packet hits the leaf node, which is better for TCAM optimization but may introduce some delay when endpoints first talk on the network.
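
These two knobs are not part of the access policies we're building in this post; they are set when an EPG is associated with a domain, an object we'll only create in part 2. Just to give you an idea of where they live, here's a rough sketch of that association object, with both parameters set to lazy and the target DN pointing to our VMM domain (treat it as illustrative rather than something to push right now):

{  
   "fvRsDomAtt":{  
      "attributes":{  
         "tDn":"uni/vmmp-VMware/dom-VDS-01",
         "resImedcy":"lazy",
         "instrImedcy":"lazy"
      }
   }
}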

In our scenario, we need to create two AEPs: one profile to define the VLAN allowed list for vSphere hosts and one profile for the border router. We must then link those profiles to the corresponding domains.

Let's use the manual approach to create the AEP and then take a look at the JSON information:

First, log on to the APIC GUI and, in the top-level menu, go to Fabric > Access Policies

access_policies

Then go to Global Policies > Attachable Access Entity Profiles in the right pane, right-click and select "Create Attachable Access Entity Profile"

create_aep

Complete the form as below and click Next (I've used a physical domain called "inside", but you can use the one provided by default):

link_domain

On the following window, click Finish.

You should now see the new object created in the inventory. Right-click the object and select "Save as"

saveas

Then select the options depicted below:

You will see a file downloading in your browser. Open it and copy the content. You can paste it into Sublime Text or another text editor to get a more readable layout. After removing non-required default attributes, you end up with the following configuration:

{  
   "infraAttEntityP":{  
      "attributes":{  
         "name":"aep_test"
      },
      "children":[  
         {  
            "infraRsDomP":{  
               "attributes":{  
                  "tDn":"uni/phys-inside"
               }
            }
         }
      ]
   }
}

The content can be directly posted to the APIC URI containing the object, which is http(s)://<apic>/api/mo/uni/infra.json. You could also have posted it to "uni.json" as before, but it would then require the "infraInfra" wrapper as in the previous request.
To check the resource containing the object, you can open the embedded API documentation: in the top right corner, click your user name, then Documentation > API Documentation. Look for infraAttEntityP in the "Classes" section and you will find the required DN format, as shown below. This tells you how the object URI is built. In this example, the AEP resource name is "attentp-{name}", where name is the name you gave to the AEP object (aep_test).

check_container

Now let's verify that the JSON snippet is actually working. To do that, first delete the AEP object you've just created by right-clicking it and selecting Delete. Then open your preferred REST tool; I personally use POSTMAN.
You first need to authenticate against the APIC by using the following URI and JSON code (adapt with your credentials and APIC DNS/IP):
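
POST to http(s)://<apic>/api/aaaLogin.json with a body along these lines (the user name and password below are placeholders):

{  
   "aaaUser":{  
      "attributes":{  
         "name":"admin",
         "pwd":"password"
      }
   }
}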

The result shows the token that will be stored in an HTTP cookie for subsequent REST calls. You can now execute the REST call with the JSON code previously created.

You can now create as many AEP's as you need by just modifying the JSON input with the appropriate name and domain.

As mentioned earlier, in our specific use case we need 2 AEPs. The first one will be linked to the vSphere hosts' policy groups, defining a common VLAN allowed list. Therefore, we need to reference the VMM domain previously created. The code below will create the AEP "AEP_ESXi", allowing the VLANs present under the VMM domain "VDS-01":

{  
   "infraInfra":{  
      "attributes":{  

      },
      "children":[  
         {  
            "infraAttEntityP":{  
               "attributes":{  
                  "name":"AEP_ESXi"
               },
               "children":[  
                  {  
                     "infraRsDomP":{  
                        "attributes":{  
                           "tDn":"uni/vmmp-VMware/dom-VDS-01"
                        },
                        "children":[  

                        ]
                     }
                  }
               ]
            }
         }
      ]
   }
}

The second AEP will be linked to the border router's external routed domain and will define its VLAN allowed list. We just need a transit VLAN to provide the peering between the fabric and the router, as we'll be using SVIs to establish the routing adjacency. The code below will create the AEP "AEP_BR", allowing the VLANs present under the external routed domain "outside_L3":

{  
   "infraInfra":{  
      "attributes":{  

      },
      "children":[  
         {  
            "infraAttEntityP":{  
               "attributes":{  
                  "name":"AEP_BR"
               },
               "children":[  
                  {  
                     "infraRsDomP":{  
                        "attributes":{  
                           "tDn":"uni/l3dom-Outside_L3"
                        },
                        "children":[  

                        ]
                     }
                  }
               ]
            }
         }
      ]
   }
}

Creating Interface Policies

These policies define physical port configuration options, things like CDP, LACP, link speed, etc. Most of the time you'll find a "default" policy already existing for common options, but it doesn't really tell you what's behind it. For example, "default" for CDP doesn't make a lot of sense; you want to have a policy called "CDP_on" and another one called "CDP_off". You can then use them as required, in a more explicit fashion. The same is true for speed: I'd recommend creating at least a "10G" and a "1G" policy, and so on.

For our scenario, we're going to assume that we need the following:

  • 10G connectivity for hosts and the border router
  • Link auto-negotiation
  • CDP disabled
  • LLDP enabled
  • LACP active

Once again, you can easily check how these objects must be created by using any of the tools I've already mentioned. The JSON code would look like this:

{  
   "infraInfra":{  
      "attributes":{  

      },
      "children":[  
         {  
            "cdpIfPol":{  
               "attributes":{  
                  "name":"CDP_on",
                  "adminSt":"enabled"
               }
            }
         },
         {  
            "cdpIfPol":{  
               "attributes":{  
                  "name":"CDP_off",
                  "adminSt":"disabled"
               }
            }
         },
         {  
            "lldpIfPol":{  
               "attributes":{  
                  "name":"LLDP_on",
                  "adminRxSt":"enabled",
                  "adminTxSt":"enabled"
               }
            }
         },
         {  
            "lldpIfPol":{  
               "attributes":{  
                  "name":"LLDP_off",
                  "adminRxSt":"disabled",
                  "adminTxSt":"disabled"
               }
            }
         },
         {  
            "fabricHIfPol":{  
               "attributes":{  
                  "autoNeg":"on",
                  "name":"10G",
                  "speed":"10G"
               }
            }
         },
         {  
            "lacpLagPol":{  
               "attributes":{  
                  "name":"LACP_active",
                  "mode":"active"
               }
            }
         }
      ]
   }
}

A quick additional note here: although we're adding more explicit interface policies, don't delete the "default" policies, or you may end up with persistent error messages thrown by the APIC.

Creating Interface Policy Groups

Policy groups simply group interface policies together. In addition, this construct allows you to specify how you want to bundle the links facing the fabric (PC, vPC or no bundle, i.e. access port). Interface policy groups are further linked to an AEP, so we can associate the required domain(s) (the VLAN "allowed list").
As mentioned at the beginning, we need 4 vPC policy groups for the vSphere hosts and 1 vPC policy group for the border router.

The 4 policy groups for the hosts will have the same configuration; only the name will differ. We can use the template below:

{  
   "infraInfra":{  
      "attributes":{  

      },
      "children":[  
         {  
            "infraFuncP":{  
               "attributes":{  

               },
               "children":[  
                  {  
                     "infraAccBndlGrp":{  
                        "attributes":{  
                           "name":"{{ VPC_name }}",
                           "lagT":"node"
                        },
                        "children":[  
                           {  
                              "infraRsHIfPol":{  
                                 "attributes":{  
                                    "tnFabricHIfPolName":"10G"
                                 }
                              }
                           },
                           {  
                              "infraRsAttEntP":{  
                                 "attributes":{  
                                    "tDn":"uni/infra/attentp-{{ AEP_name }}"
                                 }
                              }
                           },
                           {  
                              "infraRsLacpPol":{  
                                 "attributes":{  
                                    "tnLacpLagPolName":"LACP_active"
                                 }
                              }
                           },
                           {  
                              "infraRsLldpIfPol":{  
                                 "attributes":{  
                                    "tnLldpIfPolName":"LLDP_on"
                                 }
                              }
                           },
                           {  
                              "infraRsCdpIfPol":{  
                                 "attributes":{  
                                    "tnCdpIfPolName":"CDP_off"
                                 }
                              }
                           }
                        ]
                     }
                  }
               ]
            }
         }
      ]
   }
}

{{ AEP_name }} and {{ VPC_name }} must be replaced with the appropriate values (a scripted version follows the list below):

  • ESX1: replace {{ AEP_name }} by "AEP_ESXi", and {{ VPC_name }} by "vpc_ESX1_polg".
  • ESX2: replace {{ AEP_name }} by "AEP_ESXi", and {{ VPC_name }} by "vpc_ESX2_polg".
  • ESX3: replace {{ AEP_name }} by "AEP_ESXi", and {{ VPC_name }} by "vpc_ESX3_polg".
  • ESX4: replace {{ AEP_name }} by "AEP_ESXi", and {{ VPC_name }} by "vpc_ESX4_polg".
  • Border router: replace {{ AEP_name }} by "AEP_BR", and {{ VPC_name }} by "vpc_BR_polg".
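
Rather than copy-pasting the template five times, a few lines of Python can render and push all five policy groups in one go. This is a sketch that reuses the authenticated session from the first Python example and posts each policy group to the parent functional profile container (its DN, uni/infra/funcprof, can be deduced from the policy group tDn format shown later):

policy_groups = {
    "vpc_ESX1_polg": "AEP_ESXi",
    "vpc_ESX2_polg": "AEP_ESXi",
    "vpc_ESX3_polg": "AEP_ESXi",
    "vpc_ESX4_polg": "AEP_ESXi",
    "vpc_BR_polg": "AEP_BR",
}

for vpc_name, aep_name in policy_groups.items():
    payload = {
        "infraAccBndlGrp": {
            "attributes": {"name": vpc_name, "lagT": "node"},  # lagT "node" = vPC
            "children": [
                {"infraRsHIfPol": {"attributes": {"tnFabricHIfPolName": "10G"}}},
                {"infraRsAttEntP": {"attributes": {"tDn": f"uni/infra/attentp-{aep_name}"}}},
                {"infraRsLacpPol": {"attributes": {"tnLacpLagPolName": "LACP_active"}}},
                {"infraRsLldpIfPol": {"attributes": {"tnLldpIfPolName": "LLDP_on"}}},
                {"infraRsCdpIfPol": {"attributes": {"tnCdpIfPolName": "CDP_off"}}},
            ],
        }
    }
    # Post to the parent resource, so no DN is needed inside the payload
    resp = session.post(f"{APIC}/api/mo/uni/infra/funcprof.json", data=json.dumps(payload))
    resp.raise_for_status()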

There are other types of interface policy groups. This example leverages VPC, but you can also create access port policies (single port) or change the bundling method from VPC to Port Channel, i.e. create a local bundle on a single leaf. Different objects or properties would be instantiated in the object model:

For access ports, the object class would be infraAccPortGrp, as opposed to infraAccBndlGrp for VPC or Port Channel. The difference between a local Port Channel and a VPC lies in the lagT property: for VPC, lagT = "node", whereas for Port Channel, lagT = "link". As usual, you can get this information by using Visore, the object model documentation, or by creating the object in the GUI and saving it as a JSON configuration file.
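
For illustration only, here's a rough sketch of what an access-port policy group could look like (the name "single_port_polg" is made up, and I'm only showing a couple of the policy references we've already created); like its bundled counterpart, it lives under uni/infra/funcprof:

{  
   "infraAccPortGrp":{  
      "attributes":{  
         "name":"single_port_polg"
      },
      "children":[  
         {  
            "infraRsAttEntP":{  
               "attributes":{  
                  "tDn":"uni/infra/attentp-AEP_ESXi"
               }
            }
         },
         {  
            "infraRsCdpIfPol":{  
               "attributes":{  
                  "tnCdpIfPolName":"CDP_off"
               }
            }
         }
      ]
   }
}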

Creating Interface Profiles

Interface profiles associate policy groups with a specific interface "block" (also called an interface selector), thus representing profiles that can further be associated with leaf nodes. If you plan your physical switch layout carefully, you can reduce the number of profiles required. For example, if all VPC pairs have the same layout, e.g. port 1 is VPC_ESX1, port 2 is VPC_ESX2, etc., you just need one profile per VPC per switch pair. Our scenario demonstrates exactly this use case: every host is dual-connected with the same port ID on both leaf nodes.

The following JSON code can be used as a template for each host as well as the border router:

{  
   "infraInfra":{  
      "attributes":{  

      },
      "children":[  
         {  
            "infraAccPortP":{  
               "attributes":{  
                  "name":"{{ prof_name }}"
               },
               "children":[  
                  {  
                     "infraHPortS":{  
                        "attributes":{  
                           "type":"range",
                           "name":"Interfaces"
                        },
                        "children":[  
                           {  
                              "infraRsAccBaseGrp":{  
                                 "attributes":{  
                                    "tDn":"uni/infra/funcprof/{{ polg_name }}"
                                 }
                              }
                           },
                           {  
                              "infraPortBlk":{  
                                 "attributes":{  
                                    "name":"block1",
                                    "fromCard":"1",
                                    "fromPort":"{{ port_id }}",
                                    "toCard":"1",
                                    "toPort":"{{ port_id }}"
                                 }
                              }
                           }
                        ]
                     }
                  }
               ]
            }
         }
      ]
   }
}

{{ prof_name }}, {{ polg_name }} and {{ port_id }} must be replaced with the appropriate values. "prof_name" is the name that we're going to give to our profiles, and "polg_name" is the last part of the policy group tDn, composed of the policy-group name prefixed with the link type, i.e. bundle or access. {{ port_id }} is the port where the host is actually connected, as described in the initial diagram, that is:

  • ESX1: {{ prof_name }} = "ESX1_intprof", {{ polg_name }} = "accbundle-vpc_ESX1_polg" and {{ port_id }} = 5
  • ESX2: {{ prof_name }} = "ESX2_intprof", {{ polg_name }} = "accbundle-vpc_ESX2_polg" and {{ port_id }} = 6
  • ESX3: {{ prof_name }} = "ESX3_intprof", {{ polg_name }} = "accbundle-vpc_ESX3_polg" and {{ port_id }} = 7
  • ESX4: {{ prof_name }} = "ESX4_intprof", {{ polg_name }} = "accbundle-vpc_ESX4_polg" and {{ port_id }} = 8
  • Border router: {{ prof_name }} = "BR_intprof", {{ polg_name }} = "accbundle-vpc_BR_polg" and {{ port_id }} = 5 (port_id is the same as for ESX1, but the interface profile will be linked to a different node profile later on).

As a quick exercise, let's verify the exact syntax of the tDn in the object model documentation. The class we're looking at is infra:RsAccBaseGrp, as it's the class containing the tDn reference we want information about. By looking for this class in the left tab ("Classes" section), we can see the following:

We can then double-click on infra:AccBaseGrp, which is the target class referenced by the tDN:

The diagram shows that several classes inherit from infra:AccBaseGrp. This is also described in the inheritance section:

We can see that two classes specify interface policy groups, for bundle and single-port connectivity: infra:AccBndlGrp and infra:AccPortGrp. We can double-click on these and then have access to the DN naming convention:

For the bundle policy group:

and the single port policy group:

So in our case, we can see that the tDn referenced by the source relation class infra:RsAccBaseGrp must be specified in the form "uni/infra/funcprof/accbundle-{name}", where name is the name of the policy group we want to reference, e.g. "uni/infra/funcprof/accbundle-vpc_ESX1_polg" for ESX1. QED!

Creating Node Profiles

Node profiles combine leaf selectors with interface profiles. A node profile describes the configuration of a subset of ports for a particular leaf, or for a group of leaves if they have a similar layout. In our scenario we need a common node profile for leaf 101 and 102 and a second common profile for leaf 103 and 104, since the ESXi hosts and the border router are connected to the same port ID on both VPC peers.

The following JSON code can be used for leaf 101 and 102, where ESXi hosts are connected. We'll have a single node profile defining both VPC peers.

{  
   "infraInfra":{  
      "attributes":{  

      },
      "children":[  
         {  
            "infraNodeP":{  
               "attributes":{  
                  "name":"leaf_101_102"
               },
               "children":[  
                  {  
                     "infraLeafS":{  
                        "attributes":{  
                           "name":"vpc_101_102",
                           "type":"range"
                        },
                        "children":[  
                           {  
                              "infraNodeBlk":{  
                                 "attributes":{  
                                    "descr":"",
                                    "from_":"101",
                                    "to_":"102",
                                    "name":"101-102"
                                 }
                              }
                           }
                        ]
                     }
                  },
                  {  
                     "infraRsAccPortP":{  
                        "attributes":{  
                           "tDn":"uni/infra/accportprof-ESX1_intprof"
                        }
                     }
                  },
                  {  
                     "infraRsAccPortP":{  
                        "attributes":{  
                           "tDn":"uni/infra/accportprof-ESX2_intprof"
                        }
                     }
                  },
                  {  
                     "infraRsAccPortP":{  
                        "attributes":{  
                           "tDn":"uni/infra/accportprof-ESX3_intprof"
                        }
                     }
                  },
                  {  
                     "infraRsAccPortP":{  
                        "attributes":{  
                           "tDn":"uni/infra/accportprof-ESX4_intprof"
                        }
                     }
                  }
               ]
            }
         }
      ]
   }
}

Once again, by browsing the object model documentation, we can find that interface profile DN has the following naming convention:

Notice that the interface profile name must be prefixed with "accportprof-". So in our scenario, the tDn attribute of infraRsAccPortP must be "uni/infra/accportprof-{name}". For the ESXi hosts and the border router, we'll have:

  • ESX1: tDN = uni/infra/accportprof-ESX1_intprof.
  • ESX2: tDN = uni/infra/accportprof-ESX2_intprof.
  • ESX3: tDN = uni/infra/accportprof-ESX3_intprof.
  • ESX4: tDN = uni/infra/accportprof-ESX4_intprof.
  • Border router: tDN = uni/infra/accportprof-BR_intprof.

The following JSON code can be used to create the common node profile for leaf 103 and 104:

{  
   "infraInfra":{  
      "attributes":{  

      },
      "children":[  
         {  
            "infraNodeP":{  
               "attributes":{  
                  "name":"leaf_103_104"
               },
               "children":[  
                  {  
                     "infraLeafS":{  
                        "attributes":{  
                           "name":"vpc_103_104",
                           "type":"range"
                        },
                        "children":[  
                           {  
                              "infraNodeBlk":{  
                                 "attributes":{  
                                    "descr":"",
                                    "from_":"103",
                                    "to_":"104",
                                    "name":"103-104"
                                 }
                              }
                           }
                        ]
                     }
                  },
                  {  
                     "infraRsAccPortP":{  
                        "attributes":{  
                           "tDn":"uni/infra/accportprof-BR_intprof"
                        }
                     }
                  }
               ]
            }
         }
      ]
   }
}

You should now have a fully configured fabric as well as VMware vCenter integration. All host-facing ports should be up and running, and a new VMware VDS should have been added to vCenter and configured by the APIC, including teaming policies (IP hash in our case, since we're using VPC).

However, because ACI implements a whitelist, zero-trust model, no endpoints can talk to each other yet. We first need to define Virtual Machine security (and optionally micro-segmentation) and how VMs can be reached from the outside. This will be the purpose of part 2, where I'll cover how to leverage ACI SDN capabilities to define application profiles as well as configure routing protocols to peer with the border router.

Happy automation!
