Cisco ACI Bootcamp: Notes and Thoughts Pt.2

Here we go with part 2 of my ridiculously long notes taken from the ACI boot camp with Cisco this week. You can find part 1 here. There is a TL;DR in the first post, so I’ll just jump into the details again here.

APIC Controller:

  • APICs must be deployed in an N+2 flavor. This has to do with the ‘shard’ (not shart… thankfully) data structure and how data is distributed across the controllers. It sounds like if you deploy three APICs and lose one, nothing will happen. If you lose a second controller simultaneously you may lose data. However, if the second failure doesn’t happen right away, it sounds like the controllers will have time to synchronize again, and in theory you could then lose a second controller without impact.
    • You can deploy up to 32 APICs… that’s crazy town. Even the instructor said there is almost never a requirement for more than three controllers. I suppose you can gain even more fault tolerance with more controllers though, if you are super paranoid like that!
  • APICs connect directly to ACI Leaf nodes, and only to ACI leaf nodes.
  • On boot up the APIC verifies the ACI topology to make sure it’s a leaf/spine topology and that everything is looking good to go — it does this with LLDP ACI TLVs
    • The APIC will disable links that are out of policy — i.e. links that would not be allowed in a leaf/spine topology
  • APIC can be managed in-band or out-of-band; I’m assuming that initial management must be out-of-band or via the CLI, but the lab had a pre-configured jump box to get to the APIC so I don’t know for sure.
  • Very importantly, the APIC is NOT in the data path and it is NOT the control plane.
    • The APIC is very much like a VSM in a 1000v deployment. It’s only there to define policy and then push that policy out to the Leafs/Spines (or VEMs in the 1000v example)
    • If the APIC(s) go offline everything keeps going as defined; you can’t define policy with the APICs offline though. Again, very VSM-esque
  • Management access to the APIC does normal RBAC-type stuff — AD/ACS/RADIUS
    • Perhaps importantly you can also do full-blown RBAC on a per tenant basis
  • Observer:
    • This is a process that runs all the time to monitor the ‘health’ of the fabric
    • There is essentially a ‘health score’ that looks at things like link state, drops, latency, health score of dependent objects, remaining bandwidth, etc.; this is all monitored by ‘Observer’
    • In the future it sounds like there are plans to do this with weighted metrics per application; i.e. track jitter for voice calls and weight that higher than some other metric
  • API Magic Northbound
    • JSON and XML duh
    • Sounds like there is already a Python SDK to poke the REST API on the APICs
      • It was stressed that literally EVERYTHING you can do in the GUI you can do in the CLI or via the API
    • DevOps Libraries; Chef and Puppet libraries are coming soon
  • API Magic Southbound
    • Not entirely sure how this part works, but APIC can obviously control ACI enabled devices
      • Right now this seems to mostly be Cisco stuff of course
      • Open vSwitch is coming (if not available now, don’t know for sure)
      • Hyper-V support is coming; there are also some serious hooks with Azure (the private cloud pack, not the public cloud service)
    • Vendors will be able to help develop more ‘L4-L7 Scripting APIs’ for APIC to control things like F5, Citrix, Sourcefire gear
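
To make the northbound API bullets above a bit more concrete, here is a minimal sketch of talking to the APIC REST API with nothing but the Python standard library. The hostname, credentials, and the `healthInst` class name are assumptions drawn from the published APIC object model, not something covered in the bootcamp, so treat this as illustrative only.

```python
import json

# Assumed APIC address -- replace with your controller's.
APIC = "https://apic.example.com"

def login_payload(user, pwd):
    """Body for POST {APIC}/api/aaaLogin.json -- the APIC's token-based login.
    JSON and XML are interchangeable on this API; this builds the JSON form."""
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": pwd}}})

def class_query_url(cls):
    """URL that reads every object of a given class -- e.g. the per-object
    health scores that Observer computes ('healthInst' is an assumed class
    name from the APIC object model)."""
    return f"{APIC}/api/class/{cls}.json"

# Against a live APIC you would POST login_payload(...) to
# f"{APIC}/api/aaaLogin.json", keep the returned token cookie, and then
# GET class_query_url("healthInst") to pull fabric health scores.
print(class_query_url("healthInst"))
print(login_payload("admin", "password"))
```

The same two calls are all you need to script reads of basically anything in the fabric, which is the point of the “EVERYTHING in the GUI is in the API” claim above.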

Integrating with Hypervisors/Servers:

  • vCenter/ESX integration is shipping
  • Hyper-V is in production testing, and will be coming soon (6-9 months it sounds like)
  • KVM with Openstack integration is coming as well (perhaps similar timeline to Hyper-V?)
  • APIC ties into the hypervisor manager — again very much like VSM — vCenter, SCVMM, Redhat, others?
  • APIC can create port-groups in vCenter (VSM!!)
    • APIC does NOT need any particular hypervisor switch to do this; it will work with DvS, default Hyper-V switch, Open vSwitch, etc.
    • AVS (Application Virtual Switch) will enable more functionality, but is NOT required
    • It does this with API calls to vCenter — it creates its own “ACI” DvS
    • For existing VMs you can just assign port-groups to them as normal
  • APIC works with physical servers too; you still configure server-facing ports in basically the same way (access/trunk/VLANs/etc.). The configuration can be done through the GUI quite easily — there is basically no magic here, it’s just a port
  • Different flavors of hypervisors (i.e. vCenter, SCVMM, etc.) are grouped into VMM Domains (virtual machine manager domains)
  • EPGs can stretch across VMM domains — we haven’t talked about EPGs yet, but basically, even though the hypervisors are logically separated into VMM domains, we can still do any and all policy stuff between them and have a VM in vCenter and a VM in Hyper-V grouped into a single policy element.
  • As stated before, right now we do NOT have a way to prevent hosts in the same subnet on the same hypervisor switch from communicating — this will come with vShield integration and/or the AVS (seriously Cisco… VSG does this today…)
  • APIC is intended to be managed by other automation tools like vCAC in the future (I guess it could be done now since the APIs are open)
    • Goal being to be very service provider-y where there is a catalog and customers just ‘click click next finish’ to deploy services/applications
  • Hyper-V
    • Azure can be used as this automation tool described above — it’s limited at the moment, but can deploy services and the like already
    • Initial Hyper-V support will be only for VLANs — no NVGRE just yet, but it will come
    • APIC integration REQUIRES the Azure pack
    • You can actually manage some ACI things in Azure
      • This is mostly basic stuff at the moment but it looks cool
      • You do NOT have to use Azure to do things though — particularly important if you want single pane of glass but have vCenter and Hyper-V — you can still do all the ACI stuff via the APIC
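
Since the APIC drives all of this hypervisor integration through its API, here is a hedged sketch of what a policy push looks like: building the JSON object tree for a tenant, an application profile, and one EPG, which the APIC accepts in a single POST. The class names (`fvTenant`/`fvAp`/`fvAEPg`) come from the published APIC object model; the object names are made up for illustration.

```python
import json

def tenant_with_epg(tenant, app, epg):
    """JSON body that creates a tenant, an application profile, and one EPG
    in a single POST to the APIC. Class names are from the APIC object
    model (fvTenant/fvAp/fvAEPg); the name values here are hypothetical."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [{
                "fvAp": {
                    "attributes": {"name": app},
                    "children": [
                        {"fvAEPg": {"attributes": {"name": epg}}}
                    ],
                }
            }],
        }
    }

# POSTing this body to {APIC}/api/mo/uni.json (with a login token) would
# create the whole subtree at once; the EPG then shows up as a port-group
# in vCenter, per the integration described above.
print(json.dumps(tenant_with_epg("Customer1", "WebApp", "web"), indent=2))
```

Whether you drive this from Azure, vCAC, or a bare script, it is the same object tree underneath, which is why the single-pane-of-glass point above holds.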


That’s all I’ve got time to clean up this morning. Still to come:

  • Policy Framework in ACI
  • Traffic Flow and Fabric Load Balancing
  • Service Insertion
  • ACI Connectivity to the Outside World
  • Misc. Notes

