Cisco ACI Bootcamp: Notes and Thoughts Pt.1

This week I got to spend some time at the Cisco office in Seattle getting all sorts of learned up on ACI and the APIC.  This post is a recap of the notes I took (slightly cleaned up, but don’t expect awesome notes, I’m no project manager), some answers to questions I had going in, and other thoughts I had during the course. It’s a LOT of notes… the first section is the high-level TL;DR if you don’t feel like reading it all.

Overall Takeaways (TL;DR):

  • ACI is far more polished than I was anticipating at this stage. I love me some Cisco, but let’s be real… I wasn’t expecting ACI to be polished at this point. The interface felt smooth and intuitive, it was snappy, it made sense, overall it just seemed pretty awesome
  • The Nexus 9k hardware is super freakin’ sexy (ASR9ks are arguably even sexier, but in this post I’m talking strictly Nexus 9k). I felt this way before, but it REALLY is good. If you don’t want ACI, then don’t get ACI, but don’t overlook the 9k line. Bang for your buck is crazy good here, plus you get Cisco support which is first class, you can do VxLAN, you can route, you get tons of 40g… it’s hard to beat for what it is.
  • Cisco wants to be ‘open’ — and they don’t just say it (as far as I can tell); ACI is meant to be controlled by other things. While it won’t all be there at FCS, Azure will be able to do tons of stuff out of the box, everything is open to use Python against (SDK already out there; there’s a quick REST sketch right after this list), and Puppet and Chef libraries are coming. They even specifically said that Github will be a very important place for ACI/APIC stuff going forward.
  • ACI sucks at multicast. This was a bit weird to me since VxLAN requires multicast (per the IETF draft at least), but the ACI fabric doesn’t support PIM. It does support IGMP, and you can attach a router running PIM to the fabric to handle things, but it just seemed weird to me.
  • Cisco proprietary ASICs for the ACI magic, plus Broadcom T2 for all the other stuff; NX-OS only mode uses just the Broadcom chips.
  • ACI does NOT support multi-tier Clos. Not a big deal for almost any customer I’ve been at, but worth knowing.
  • Uses IETF VxLAN, just uses the reserved bits for some extra magic. It sounds like, in theory, you can use a remote VTEP on something like a 1000v/9k/Arista(?)/Brocade(?)/etc. to extend a VxLAN outside of the ACI fabric. Not sure why you would want to do this, but it’s interesting at the least!
  • 1000v is NOT dead. AVS (Application Virtual Switch) is going to basically be an ACI-enabled version of the 1000v. The 1000v was never really on the ACI team’s radar; since Insieme wasn’t part of Cisco while it was developing things, the 1000v’s lack of huge market share wasn’t appealing enough for them to integrate it. For the same reason there is no integration with the VSG. Both these products will keep on keepin’ on though. As of now there is no inter-VM (on a single hypervisor) security offered by the APIC. There is work on integrating APIC/ACI with vShield… which seems silly when the Cisco VSG would serve that purpose.
  • May as well ignore FC/FCoE for now. FCoE is supported only locally to a single leaf.
  • In the modular platform (9500) you are either a leaf or a spine — and the line cards don’t mix — kind of a bummer but makes sense.
  • I got the impression that Cisco Inter-cloud will be very …. important… going forward, and that the ‘cloud’ track of certifications could be pretty interesting (whatever they end up calling it officially)
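
Since the ‘open’/Python point above was one of the more interesting ones to me, here is a rough sketch of what poking at the APIC REST API with plain Python looks like. The aaaLogin endpoint, the fabricNode class, and the imdata response wrapper are straight from the published REST API, but the address and credentials are obviously placeholders and I haven’t run this exact snippet against a real APIC yet, so treat it as a sketch rather than gospel.

    # Rough sketch: authenticate to the APIC, then list the fabric nodes it knows about.
    # The address and credentials are placeholders; verify=False is only here because
    # lab APICs tend to have self-signed certs.
    import requests

    APIC = "https://10.0.0.1"  # hypothetical APIC address
    AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

    session = requests.Session()

    # Log in -- the APIC hands back a token that the Session keeps as the APIC-cookie.
    resp = session.post(APIC + "/api/aaaLogin.json", json=AUTH, verify=False)
    resp.raise_for_status()

    # Query every fabric node (APICs, spines, leaves) via the fabricNode class.
    nodes = session.get(APIC + "/api/node/class/fabricNode.json", verify=False)
    for obj in nodes.json()["imdata"]:
        attrs = obj["fabricNode"]["attributes"]
        print(attrs["name"], attrs["role"], attrs["model"])

The Python SDK mentioned above presumably wraps this same REST/object model, so even the ‘raw’ approach gets you surprisingly far.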

Speeds/Feeds/Hardware:

  • Three basic chip types in use:
    • The ‘NFE’ or Network Forwarding Engine is the Trident T2 chip
    • The ‘ASE’ – Application Spine Engine is a custom Cisco ASIC for Spine switches (duh)
    • The ‘ALE’ – Application Leaf Engine is a custom Cisco ASIC for…. guess what? the Leaf switches
  • All the backplane stuff uses the Trident chips
  • NX-OS mode can only use the Tridents
  • The modular 9500 switches can be either a leaf OR a spine, not both (that part is relatively obvious)
    • You can NOT mix and match line card types in the 9500 — what I mean by that is some line cards are meant for leaf nodes and some for spine; you cannot mix these flavors
  • Blades for the 9500:
    • 9400 blades are oversubscribed blades for NX-OS mode ONLY
    • 9500 blades are for NX-OS OR Leaf nodes
    • 9600 blades are NON-oversubscribed blades for NX-OS ONLY — they are NOT supported in the 9516 as there is not enough backplane speed to keep it non-oversubscribed
    • 9700 blades are Spine ONLY blades
  • Nexus 9336 is the ‘baby spine’ switch. Incredible value for the port speed/density. It can NOT be run in ‘normal’ NX-OS mode, which is sad (no Trident chips, just the ASE).
  • In ACI mode (soft) reboot time is ~20 seconds! This can happen because there’s essentially a separate set of ‘supervisors’ for low-level things like power, fans, POST etc. You can reboot the ACI part of the switches without rebooting the low-level sup. Not sure this will be needed most times, but it is cool.
  • APIC comes in two flavors: small or large. Small = <1000 edge ports, Large = >1000 edge ports
  • Some of the fixed chassis 9ks (93128TX comes to mind, but there may be others) can’t use all the ‘uplink’ ports and/or utilize the 40g->4x10g breakout cable options due to bandwidth limitations
  • ACI mode scales pretty crazy big
    • 1 million v4 and/or v6 routes — this is pretty cool actually; turns out there is some magic that makes v6 routes not take up two TCAM entries as in ‘normal’ boxes (e.g. a 6509)
    • 8000 mcast groups PER leaf
    • 64000 tenants… that seems like enough… at least I think so
    • NX-OS mode does not scale like this — it’s limited by local hardware like ‘normal’ — ACI can scale since there is controller magic going on, I think

ACI Overview:

  • Network industry is cyclical — VxLAN/NVGRE/other L2 tunnels allow us to have a big ‘routed’ fabric while maintaining L2 adjacency
  • Cisco seems to think (as do I) that VxLAN has won the ‘overlay wars’ (vs NVGRE)
  • Strong focus on the fact that overlays are fantastic, but if you can’t understand where the magical tunnel is taking traffic, and don’t have visibility into the underlay, then you are missing out on a lot of information that could be valuable, especially in troubleshooting. This is no doubt a response to NSX… I can’t argue with Cisco on this front. Overlays are cool, but you gotta know what’s going on in the hardware too.
  • Cisco and Microsoft are partnering a lot it seems — not just in integration of Hyper V and Azure, but also in creating application profiles that the ACI fabric can act on.
    • Applications are pretty obviously the trend (it is called Application Centric Infrastructure… ) — application profiles can help the fabric understand what apps are running on devices automagically — i.e. an Exchange server comes on-line; the fabric can understand that due to the application profile and automatically treat that server with pre-defined security policies/load balancing/service chaining/etc.
  • I’m not a UCS guy, but it sounds like there are a lot of parallels between ACI and UCS — profiling things and GUI and just the overall way things work
  • At FCS there are not a ton of canned application profiles, so we as engineers will have to create them.
    • The idea here is that we create them once, then export and re-use them for the next customer/data center (there’s a rough sketch of what defining one via the API might look like after this list)
  • A big theme was not tying applications to subnets or VLANs. I think this is a pretty common theme in the big vendors’ ‘SDN’ strategies — ACI is saying we don’t care what subnet/VLAN a thing is in, we just care about the application.
    • In a lot of ways I think we’ve been able to do this already for quite some time with vCNS/vShield/1000v/VSG, but obviously as the industry progresses it’s becoming more and more of a theme
  • The ACI fabric has built-in extended ACL like functionality. It’s not a ‘real’ firewall, but it can do some cool stuff.
    • There are already some service-insertion type capabilities baked into the way ACI handles traffic flows (contracts/end point groups/etc.), but it sounds like there is going to be further integration with other vendors’ firewalls.
  • ACI is vendor agnostic — it doesn’t care what hypervisor you use, or if you want to use physical servers. Obviously there are different levels of integration, but you could do totally physical gear and still have some pretty powerful capabilities with ACI.
    • At FCS there seems to be only VMware support — it has hooks into vCenter day 1 and integrates with the distributed virtual switch right away.
    • Hyper V support should be available very soon (if not already — wasn’t clear on timing)
    • Since Hyper V is more about NVGRE, the leaf nodes will ‘translate’ between VxLAN and NVGRE — allowing L2 extension between VMware and Hyper V environments — pretty slick!
  • ACI is all ISIS and MP-BGP under the covers — but much like OTV/FabricPath you don’t ever touch/see this. All internal L2 adjacency is performed via VxLAN (a quick sketch of the VxLAN header follows this list) — then, if required, leaf nodes can translate between VxLAN/VLAN/NVGRE/etc.
  • EVERYTHING you can do in the CLI you can do in the GUI (and the GUI is NOT Java! Hooray!)
  • Both the APIC controller and the 9ks in ACI mode have a CLI. The 9k CLI is only for troubleshooting.
  • ACI/APIC is NOT a DCI technology. This should be apparent, but must be called out just in case!
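
To make the application profile / EPG idea a bit more concrete, here is a rough sketch of what pushing a simple tenant with one application profile and a couple of EPGs at the APIC REST API might look like, re-using the session from the earlier sketch. The fvTenant/fvAp/fvAEPg class names and the nested attributes/children payload shape come from the APIC object model as I understand it; the names themselves are made up, I’ve left out the bridge domain and contract relationships for brevity, and I haven’t pushed this exact payload at a live fabric.

    # Sketch: create a tenant containing one application profile with two EPGs.
    # Assumes the "session" and "APIC" variables from the login sketch earlier in the post.
    tenant = {
        "fvTenant": {
            "attributes": {"name": "DemoTenant"},
            "children": [
                {
                    "fvAp": {
                        "attributes": {"name": "Exchange"},
                        "children": [
                            {"fvAEPg": {"attributes": {"name": "Web"}}},
                            {"fvAEPg": {"attributes": {"name": "Mailbox"}}},
                        ],
                    }
                }
            ],
        }
    }

    # POST against the policy universe (uni); the APIC merges this into its object tree.
    resp = session.post(APIC + "/api/mo/uni.json", json=tenant, verify=False)
    resp.raise_for_status()

The contracts (who can talk to whom, and on what ports) hang off the EPGs in the same nested fashion, so I suspect the ‘create once, export, and re-use’ story above is really just saving and re-applying chunks of this JSON/XML.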
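
Also, since the fabric leans on VxLAN so heavily, here is a quick sketch of the standard (IETF draft) VxLAN header, just to show where the reserved bits I mentioned in the TL;DR actually sit. Exactly what ACI stuffs into those reserved fields wasn’t spelled out in the class, so this only shows the vanilla layout.

    # Standard VxLAN header per the IETF draft: 8 bytes in front of the inner Ethernet frame.
    #   byte 0: flags (only the I bit defined)   bytes 1-3: reserved
    #   bytes 4-6: VNI (24 bits)                 byte 7: reserved
    # ACI reportedly repurposes the reserved fields for its own policy metadata.
    import struct

    def build_vxlan_header(vni):
        flags = 0x08                     # I bit set -> the VNI field is valid
        word1 = flags << 24              # flags in the top 8 bits, reserved bits zeroed
        word2 = (vni & 0xFFFFFF) << 8    # 24-bit VNI, low 8 reserved bits zeroed
        return struct.pack("!II", word1, word2)

    print(build_vxlan_header(5000).hex())  # -> '0800000000138800'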

More to come very soon. I have another several pages of notes!! All in all I’m pretty impressed. Now I just need customers to buy it so I can play with it more!

10 thoughts on “Cisco ACI Bootcamp: Notes and Thoughts Pt.1”

  1. How well – static? – can ACI work without controllers?
    I do not have a dynamic environment and manual configs work fine today, but I do want some of the other ACI OS benefits.

    • ACI mode is unavailable without the APIC controllers. You can, however, use the Nexus 9000 line in ‘standalone’ NX-OS mode. I think that’s a big play for Cisco right now — get the 9ks out there with customers, with the ‘future proofing’/investment protection option of moving toward a full ACI architecture in the longer term. Just be careful with the different platforms — i.e. if you go with a 9500 chassis and NX-OS blades, you may not be able to use that chassis as a spine switch in an ACI fabric.

      Carl
