ACI Power Deployment Tool (acipdt)

I’ve been spending a lot of my free time working on a little side project in Python. Rather than waxing on about it, I’ll just post a very quick and dirty video and the write-up that will accompany my tool.

You can find the video here. Note that the issue in the video where there were a bunch of 400 errors (which worked out as a decent demo) has been fixed; you can blame silly Python sets not being ordered!

You can find the acitool library here.

## Synopsis

ACIPDT, or ACI Power Deployment Tool, is a Python library intended for network engineers deploying an ACI fabric. ACIPDT is very much early alpha; later releases should add features and functionality as well as optimization and improved error handling.

## Overview

The “SDN” (hate the term, but it applies) movement has brought a great deal of discussion to how a network engineer deploys networking equipment. Historically, text files, or perhaps macros in Excel, were used as templates for new deployments, and good old-fashioned copy/paste was the actual deployment vehicle. Among other things, SDN is attempting to change this. With SDN we (networking folk) have been given APIs! However, most network engineers, myself included, have no idea what to do with said APIs.

Cisco ACI, like the other networky “SDN products” in the market, provides some nifty tools to begin the journey into this API-driven, next-generation network world, but the bar to entry in any meaningful way is still rather high. For example, ACI provides an API Inspector, which displays the XML or JSON payloads that configure the ACI fabric; the payload on its own, of course, doesn’t do much for a network guy. Where am I supposed to paste it? What became clear to me is that Postman was the obvious answer. Postman is a great tool for getting started with an API, and I have used it extensively with ACI, even to the point of deploying an entire fabric in 2 minutes with a handful of Postman collections. However…

Postman left much to be desired. I’m fairly certain that the way in which I was using it was never really the intended use case. In order to keep the collections to a reasonable size (which in turn kept the spreadsheets feeding variables into the payloads relatively organized), I ended up with nine collections to run, which meant nine spreadsheets. On top of all of that, there was very little feedback in terms of even simple success/fail per POST, and even if you captured that output, some things would fail no matter what due to the way the spreadsheet was piping in variables (perhaps more on that later; maybe it’s just how I was using it).

The result of this frustration is ACIPDT. My intention is to re-create in Python the functionality I had been using Postman for. In doing so, the goal is to have a single spreadsheet (the source of data; it could be anything, but a spreadsheet is easy), to run a single script, and to get valuable feedback about the success or failure of each and every POST. ACIPDT itself is not a script that will configure your ACI fabric; it is instead a library containing the ReST calls that configure the most common deployment scenarios. In addition to the library itself, I have created a very simple script that ingests a spreadsheet containing the actual configuration data and passes the data to the appropriate method in the library.
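The spreadsheet-to-method pattern described above can be sketched roughly like this. To be clear, `FakePodPol`, the `func` column, and the row layout are hypothetical stand-ins for illustration, not the library's actual API:

```python
class FakePodPol:
    """Hypothetical stand-in for a library class like FabPodPol, just
    to illustrate the dispatch pattern without touching an APIC."""
    def ntp(self, ntp_server, status):
        # A real method would POST to the APIC and return the HTTP status.
        return 200

def dispatch_rows(target, rows):
    """Treat each spreadsheet row as a dict: 'func' names the method to
    call, and the remaining keys are passed through as kwargs."""
    results = []
    for row in rows:
        row = dict(row)  # copy so the caller's rows are not mutated
        method = getattr(target, row.pop('func'))
        results.append(method(**row))
    return results

rows = [{'func': 'ntp', 'ntp_server': '10.0.0.1', 'status': 'created,modified'}]
print(dispatch_rows(FakePodPol(), rows))  # [200]
```

The nice part of this shape is that adding a new configuration item is just a new tab of rows plus a matching method; the run script never changes.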

Key features:

- Have a library that is de-coupled from any deployment-type script.
  - This was a goal after early attempts became very intertwined and I was unable to cleanly separate the simple ReST call/payload from the rest of the script.
- Run one script that references one source of data.
  - This is more relevant to the run script than to the library, but it was taken into consideration when creating the library. A single spreadsheet (with multiple tabs) houses all of the data; the run script parses it and passes the data as kwargs to the appropriate methods for deployment.
- Have discrete configuration items.
  - For example, an Interface Profile on an L3 Out can be modified without deleting/re-creating the parent object. While this library/script is intended for deployments where this is likely not a big deal, it was a design goal all the same.
- Capture the status of every single call.
  - Each method returns a status code. The run script simply enters this data into the Excel spreadsheet at the appropriate line so you know which POSTs failed and which succeeded. This is a simplistic status check, but it is leaps better than what I was getting with Postman.
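As a rough illustration of that last point, the per-row bookkeeping might look like the sketch below. The row layout and function name are made up for the example; the real run script writes into the Excel file rather than returning dicts:

```python
def annotate_statuses(rows, statuses):
    """Pair each configuration row with the HTTP status its POST
    returned, so a spreadsheet column can show pass/fail per line."""
    annotated = []
    for row, status in zip(rows, statuses):
        verdict = 'OK' if 200 <= status < 300 else 'FAILED'
        annotated.append({**row, 'status': status, 'result': verdict})
    return annotated

rows = [{'name': 'ntp'}, {'name': 'dns'}]
print(annotate_statuses(rows, [200, 400]))
# [{'name': 'ntp', 'status': 200, 'result': 'OK'},
#  {'name': 'dns', 'status': 400, 'result': 'FAILED'}]
```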

## Resources

I believe the code to be relatively straightforward (and I am not very good at Python), and I have provided comments ahead of every method in the library to document what is acceptable to pass to each method. As this is really just a weekend pet project, that’s probably about it from a resources perspective. Feel free to tweet at me (@carl_niger) if you run into any issues or have a burning desire for a feature add.

## Code Example

```python
# Import acipdt from the acitool library
from acitool import acipdt
# Initialize the FabLogin class with the APIC IP, username, and password
fablogin = acipdt.FabLogin(apic, user, pword)
# Log in to the fabric and capture the challenge cookie
cookies = fablogin.login()
# Initialize the Fabric Pod Policy class with the APIC IP and the cookies returned by the login class
podpol = acipdt.FabPodPol(apic, cookies)
# Configure an NTP server
status = podpol.ntp('', 'created,modified')
```

## Getting Started

To get started, unzip the zip file and place the entire contents in a directory in your Python path. For me (on OS X), that meant placing the folder in /usr/local/lib/python2.7/. You can then import it as outlined in the code example above.
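If you would rather not copy the folder into your Python installation, another common option is to append the directory containing `acitool` to `sys.path` at runtime before importing. The path below is illustrative only:

```python
import sys

# Point Python at wherever you unzipped the library (hypothetical path)
sys.path.append('/path/to/dir/containing/acitool')

# from acitool import acipdt  # now importable as in the example above
```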

## Disclaimer

This is NOT fancy code. It is probably not even “good” code. It does work (for me at least!) though.

ACI – Network vs Application Centric Deployments

If you’ve talked to folks about deploying ACI, chances are the conversation about a ‘network centric’ vs ‘application centric’ deployment has come up. What the hell is the difference? Isn’t it ALL application centric? It is in the name, after all! I don’t think there is any official definition, just some widely agreed-upon basics. So let’s start by trying to define what a ‘network centric’ deployment looks like.

I think in the simplest terms a network-centric deployment basically just means that we take ACI and treat it like a traditional Nexus switch. We do all the same things we’ve been doing for years on, say, a Nexus 7K or a Nexus 5K; we just happen to do them on ACI instead. But what does that even mean? That still doesn’t really answer any questions… so here is my definition of a net-centric deployment:

  • Every VLAN (that you would be building if this was indeed a traditional Nexus deployment) = 1 Bridge Domain = 1 EPG
    • Oftentimes the Bridge Domains are configured with flooding enabled
  • VRFs are unenforced, and/or VzAny (permit any/any at the VRF level instead of EPG level) is configured
  • ACI may or may not be the default gateway — i.e. Bridge Domains may or may not have a subnet
    • This often depends on the migration strategy — sometimes an existing 7k houses all default gateways
    • If the fabric isn’t the default gateway, or if VLANs (EPGs) need to be extended to other traditional devices, the Bridge Domain requires flooding (hence the point above)
  • ACI may integrate with L4-7 services, however this is done in the ‘traditional’ (see previous post) method — no service graphs
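To make the "1 VLAN = 1 Bridge Domain = 1 EPG" idea concrete, here is a hedged sketch of what that object tree looks like as an APIC REST payload. The class names (`fvTenant`, `fvBD`, `fvAp`, `fvAEPg`, `fvRsBd`) come from the ACI object model, but the tenant, app profile, and VLAN names are made up, and nothing here is actually POSTed:

```python
import json

def net_centric_payload(tenant, vlan_name):
    """Build one Bridge Domain per legacy VLAN, flooding enabled, plus
    a matching EPG bound back to that BD (illustrative names only)."""
    bd_name = vlan_name + '_BD'
    return {
        'fvTenant': {
            'attributes': {'name': tenant},
            'children': [
                # Flood unknown unicast and ARP, mirroring classic VLAN behavior
                {'fvBD': {'attributes': {'name': bd_name,
                                         'unkMacUcastAct': 'flood',
                                         'arpFlood': 'yes'}}},
                # EPGs live under an application profile, even in a
                # net-centric design where the profile is just a container
                {'fvAp': {'attributes': {'name': 'net-centric'},
                          'children': [
                              {'fvAEPg': {'attributes': {'name': vlan_name},
                                          'children': [
                                              {'fvRsBd': {'attributes': {'tnFvBDName': bd_name}}}]}}]}}]}}

print(json.dumps(net_centric_payload('Prod', 'VLAN10'), indent=2))
```

Repeat that per VLAN and you have essentially rebuilt your old switch in policy form, which is exactly what makes this model easy to migrate into.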

Obviously none of these are hard-and-fast rules, but generally speaking this is what I would consider a network-centric model. So if that’s the case, great, but what does application-centric mean? Do we just build some contracts and call it app-centric? Maybe? I think this is where the definitions start to get a bit fuzzy, but I’ll try to outline what I’ve seen and what I think comprises an app-centric deployment:

  • A single (or very few) Bridge Domains are configured
    • In an app-centric model, we don’t really need to care about flooding domains and extending layer 2 outside of the fabric. The idea here is that we can set our Bridge Domain(s) to be optimized and just lump everything into a single BD for simplicity’s sake.
  • Following point one — all EPGs map back to the one or few Bridge Domains
  • VRFs are set to enforced mode; this means the fabric will not permit any traffic between EPGs, so contracts must be created and applied appropriately
  • L4-7 services may be integrated with managed or unmanaged service graphs
    • (ADC specifically) OR, and I’ve seen this recently, in the ‘traditional’ method, but with a single interface into each VRF in the fabric. Normally the ADC would have a leg into each EPG (a vPC to the F5, for example, with a bunch of dot1q tags representing each EPG); in this method you stick the ADC into a single EPG (call it ADC or Load-Balancer or whatever), and each EPG has contracts to the ADC EPG, which does all of its functionality off of this single interface (front end for VIPs, inside for SNAT/Subnet IP, or use it as the default gateway for servers)
  • This one makes people’s heads hurt… Everything lives in a single subnet!
    • We could have multiple subnets roll up to our one BD, but why bother? We’re optimizing flooding, we’re isolating EPGs with contracts, so why not just have a single subnet?

Again, none of this is hard and fast. If I had to be really picky, I would say that doing 1:1 EPG:BD in place of where you would traditionally build VLANs is net-centric. I would like to say that not mapping 1:1 (instead having a single BD with multiple EPGs and allowing ACI to optimize flooding) is app-centric, but I think that’s just not fair to the people doing app-centric; app-centric really is a lot more than that. App-centric means understanding your application flows and building contracts as appropriate. It means not relying on extending L2 outside of the fabric, because you’re not only building a network that eschews L2 in favor of L3, but more critically building apps that don’t require L2 (including the ability to migrate without needing to retain IP addresses; this obviously only applies across multiple data centers, since we have anycast gateways within ACI).

So in summary, I suppose it’s easy (relatively speaking; obviously building a data center is not a trivial thing) to deploy a net-centric fabric, but it is an entirely different beast to really build an app-centric fabric. That being said, IF you can get everything you need to build contracts appropriately, to understand the flows, and to eliminate the need to stretch L2 around, an app-centric fabric is seriously the coolest thing ever. All the complexity, once you get over the initial hurdle at least, starts to go away: one BD, one big fat subnet, no L2 extension to worry about, and security in theory gets simpler, since the fabric makes policies re-usable and semi-self-documenting.

To be fair, most people are deploying ACI in a network-centric model, as it is definitely the path of least resistance, but I really hope to keep seeing more application-centric deployments. I think that is the best way to take advantage of ACI and to begin integrating it with other tools like CliQr and ServiceNow to get the most out of it.