Pre-TFD Segment Routing Roundtable Thoughts

Next week I will be attending the Tech Field Day Segment Routing Roundtable (that was a mouthful) in San Jose. As is clearly evident from the title of the event, I can only imagine we will be discussing Segment Routing. At this point my exposure to Segment Routing is limited to a few blog posts and a few YouTube videos just to get the lay of the land, so I’m very excited to go and hear a lot more about this beast from some super smart people.

I figured I would just take this opportunity to jot down some thoughts/questions/comments about SR at this point, however unintelligible/ignorant:

  • My first thought is WTF ever happened to Network Service Headers?? My maybe-not-great summary of SR is that it’s, in a nutshell, source routing + way, way, way easier MPLS-TE — the end goal of doing source routing and traffic flow manipulation could be to route certain traffic over certain paths of course, or it could be to route traffic through transparent devices like an IPS or something, or even to an active IPS/firewall/ADC/etc. Wasn’t NSH going to solve all of this?
    • Follow up thought/comment: I can totally see NSH being near impossible to implement since I would imagine we would be relying on applications/hosts to insert appropriate information into the header. I suppose SR is easier to implement/more realistic as we are handling control of this at the network layer.
  • Oh man, this is going to be a config nightmare. It seems like this could easily spiral out of control into a massive unmanageable config (obviously depending on how granular you want to be I guess). If we are going to do some of the same stuff NSH is/was intending to do (L4-7 redirection more or less) then I can imagine SR configs are going to get nutty… that leads to the next question:
  • How granular can we be? If we are going to do some of the flow redirection stuff, how do we classify flows? What I’ve seen so far is source prefix X.X.X.X gets a label that means it goes to point A, then from point A to point B, etc., which is cool, but that’s a whole prefix. What if we wanted to redirect only HTTP/HTTPS traffic? Possible?
    • Part of my question/concern here is that one of the biggest issues I personally see right now is how (and what) traffic you shove down to your firewall/load balancer… those devices (because of the complicated stuff they are doing) will never be able to handle the same traffic load as a 100G Clos network… just not happening, so it would be really, really nice to be able to redirect only the things I’m interested in seeing over to my firewall. This is IMO the biggest (only, if I’m being snarky) reason NSX is powerful — the PAN integration (and I think now Check Point?) is really powerful — distributed, in-software, selective firewalling is freaking hard. (Note that you can do that (stateful in-kernel firewall) w/ AVS as well, just not as advanced as PAN+NSX.)
  • If I already have MPLS/TE, why bother? I say that for a few reasons:
    • Isn’t MPLS dead? I feel @etherealmind screaming at us about how nobody uses MPLS anymore 🙂
    • I feel like this is maybe most useful in the data center (where that redirection would be needed quite heavily), but will it be supported there? Would I even want it there? I feel like we already have a lot of flexibility/tools to do this in the DC.
    • If you already have MPLS/TE investment, this certainly seems WAY easier, but can it fully replace TE (bandwidth reservation, class-based tunnel selection, etc.), and if so, how can you/will you migrate toward it?
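To keep myself honest about the source-routing idea in that first bullet, here is a toy sketch of the concept (definitely not real SR — node names and the forwarding loop are made up purely for illustration): the ingress node pushes an ordered segment list onto the packet, and each hop just pops the active segment and forwards toward it, with no per-flow state held in the core.

```python
# Toy illustration of the segment-routing idea: the ingress encodes the
# path as an ordered list of segments; transit nodes keep no per-flow state.
# Node names and the forwarding loop are invented for illustration only.

def ingress(payload, segment_list):
    """Push the segment list onto the packet at the network edge."""
    return {"segments": list(segment_list), "payload": payload}

def forward(packet):
    """Each hop pops the active (top) segment and forwards toward it."""
    hops = []
    while packet["segments"]:
        next_seg = packet["segments"].pop(0)  # active segment
        hops.append(next_seg)                 # e.g. reached via IGP shortest path
    return hops

# Steer a flow through a firewall before the exit router:
pkt = ingress("GET /", ["core1", "firewall", "exit-rtr"])
print(forward(pkt))  # ['core1', 'firewall', 'exit-rtr']
```

The point of the sketch is just that the path lives in the packet, not in the transit routers — which is also why the config-sprawl question above worries me mostly at the ingress/classification edge.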

That may have come off a bit grumpy about SR, but that’s certainly not the intent (yet! Wait till after the Roundtable hah!), I just want to make sure I’m fully understanding this beast, as I have for sure heard a lot of cool things about it. A friend in the Seattle area actually messaged me after I tweeted about going to this event to tell me how much he loves SR and how his customers are jumping all over it! I’m very much looking forward to learning more about SR next week, so tune into the live stream and poke us (the delegates) on Twitter so we can harass presenters accordingly 🙂 See you next week!

You can find out more about Tech Field Day here.

Disclaimer: Tech Field Day is being super cool and flying me down to San Jose for this event, probably even buying me some beer if I’m well behaved… just a heads up.

ACI Power Deployment Tool (acipdt)

I’ve been spending a lot of my free time working on a little side project with Python. Rather than waxing on about it, I’ll just post a very quick and dirty video, and the README that will accompany my tool.

You can find the video here. Note that the issue (which worked out as a decent demo) in the video where there were a bunch of 400 errors has been fixed — you can blame silly Python sets not being ordered!

You can find the acitool library here.

## Synopsis

ACIPDT – or ACI Power Deployment Tool – is a Python library intended to be used by network engineers deploying an ACI fabric. ACIPDT is very much early alpha; later releases should add additional features and functionality, as well as optimization and improved error handling.

## Overview

The “SDN” (hate the term, but it applies) movement has brought a great deal of discussion to the idea of how a network engineer deploys networking equipment. Historically, text files, or perhaps macros in Excel, have been used as templates for new deployments, and good old-fashioned copy/paste has been the actual deployment vehicle. Among other things, SDN is attempting to change this. With SDN we (networking folk) have been given APIs! However, most network engineers, myself included, have no idea what to do with said APIs.

Cisco ACI, as with the other networky “SDN products” in the market, provides some nifty tools to begin the journey into this API-driven next-generation network world, but the bar to entry in any meaningful way is still rather high. For example, ACI provides an API inspector, which displays the XML or JSON payloads that are configuring the ACI fabric; however, the payload on its own of course doesn’t do much for a network guy – where am I supposed to paste it? What became clear to me is that Postman was the obvious answer. Postman is a great tool for getting started with an API, and I have used it extensively with ACI, even to the point of deploying an entire fabric in 2 minutes with a handful of Postman collections. However…
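For the curious, here is roughly what replaying an API-inspector payload looks like without Postman. This is a minimal sketch, not part of acipdt: the APIC address and tenant name are placeholders, while `/api/aaaLogin.json` and `/api/mo/uni.json` are the standard ACI ReST endpoints.

```python
# Minimal sketch of replaying an API-inspector payload against an APIC.
# APIC address and tenant name below are placeholders.
import json

APIC = "https://apic.example.com"  # placeholder APIC address

def login_payload(user, pword):
    # Body the APIC expects at /api/aaaLogin.json
    return {"aaaUser": {"attributes": {"name": user, "pwd": pword}}}

def tenant_payload(name):
    # A tiny configuration payload, like one copied from the API inspector
    return {"fvTenant": {"attributes": {"name": name, "status": "created,modified"}}}

login_url = APIC + "/api/aaaLogin.json"
config_url = APIC + "/api/mo/uni.json"

# With the requests library you would POST these, carrying the login
# cookie forward (verify=False because of the APIC's self-signed cert):
#   s = requests.Session()
#   s.post(login_url, data=json.dumps(login_payload("admin", "password")), verify=False)
#   s.post(config_url, data=json.dumps(tenant_payload("demo")), verify=False)
print(config_url)
print(json.dumps(tenant_payload("demo")))
```

That’s all Postman is really doing under the hood, which is exactly why it started to feel like the wrong tool once the collections grew.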

Postman left much to be desired. I’m fairly certain that the way in which I was using it was never really the intended use case. To keep the collections to a reasonable size (which in turn kept the spreadsheets relatively organized; the spreadsheets contained the data to insert as variables in the payloads), I ended up with nine collections to run, which meant nine spreadsheets. On top of all of that, there was very little feedback in terms of even simple success/fail per POST — and even if you captured that output, there would be things that would fail no matter what due to the way the spreadsheet was piping in variables (perhaps more on that later; maybe it’s just how I was using it).

The result of this frustration is the ACIPDT. My intention is to re-create the functionality that I have used Postman for in Python. In doing so, the goal is to have a single spreadsheet (source of data, could be anything but a spreadsheet is easy), to run a single script, and to have valuable feedback about the success or failure of each and every POST. ACIPDT itself is not a script that will configure your ACI fabric, but is instead a library that contains ReST calls that will configure the most common deployment scenarios. In addition to the library itself, I have created a very simple script to ingest a spreadsheet that contains the actual configuration data and pass the data to the appropriate method in the library.

Key features:
– Have a library that is de-coupled from any deployment-type script.
  – This was a goal after early attempts became very intertwined and I was unable to cleanly separate the simple ReST call/payload from the rest of the script.
– Run one script that references one source of data.
  – This is more relevant to the run script than to the library, but it was taken into consideration when creating the library. A single spreadsheet (with multiple tabs) houses all the data; it is parsed in the run script, and the data is then passed as kwargs to the appropriate methods for deployment.
– Have discrete configuration items.
  – Ensure that, for example, an Interface Profile on an L3 Out can be modified without deleting/re-creating the parent object. While this library/script is intended for deployments where this is likely not a big deal, it was a design goal at any rate.
– Capture the status of every single call.
  – Each method returns a status code. The run script enters this data into the Excel spreadsheet at the appropriate line, so you know which POSTs failed and which succeeded. This is a simplistic status check, but it is leaps better than what I was getting with Postman.
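The spreadsheet-driven dispatch described above (parse a tab, hand each row to the matching library method as kwargs, record the returned status) can be sketched roughly like this. Note this is a stand-in sketch, not the actual acipdt code: the rows are faked as dicts and `FakePodPol` stands in for the real library class, since the real methods and spreadsheet layout live in the repo.

```python
# Rough sketch of the spreadsheet-driven dispatch pattern described above.
# Rows are faked as dicts; FakePodPol stands in for the real acipdt class.

class FakePodPol:
    def ntp(self, ntp_server, status):
        # The real methods POST to the APIC and return the HTTP status code
        return 200 if ntp_server else 400

# Each "row" would come from one line of a spreadsheet tab
rows = [
    {"method": "ntp", "ntp_server": "10.0.0.1", "status": "created,modified"},
    {"method": "ntp", "ntp_server": "", "status": "created,modified"},
]

podpol = FakePodPol()
results = []
for row in rows:
    kwargs = {k: v for k, v in row.items() if k != "method"}
    method = getattr(podpol, row["method"])  # pick the library method by name
    results.append(method(**kwargs))         # record success/fail per POST

print(results)  # [200, 400]
```

The run script then writes each status back to the row it came from, which is where the per-POST feedback comes from.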

## Resources

I believe the code to be relatively straightforward (and I am very not good at Python), and I have provided comments ahead of every method in the library to document what is acceptable to pass to the method. As this is really just a pet project on the weekends, that’s probably about it from a resources perspective. Feel free to tweet at me (@carl_niger) if you run into any issues or have a burning desire for a feature add.

## Code Example

```python
# Import acipdt from the acitool library
from acitool import acipdt
# Initialize the FabLogin class with the APIC IP, username, and password
fablogin = acipdt.FabLogin(apic, user, pword)
# Log in to the fabric and capture the challenge cookie
cookies = fablogin.login()
# Initialize the Fabric Pod Policy class with the APIC IP and the returned cookies
podpol = acipdt.FabPodPol(apic, cookies)
# Configure an NTP server
status = podpol.ntp('', 'created,modified')
```

## Getting Started

To get started, unzip the zip file and place the entire contents in a directory in your Python path. For me (using OS X), I placed the folder in /usr/local/lib/python2.7/. You can then import as outlined in the code example above.

## Disclaimer

This is NOT fancy code. It is probably not even “good” code. It does work (for me at least!) though.