Cisco ACI Bootcamp: Notes and Thoughts Pt.4

This is the fourth and final(!) post of my notes from the ACI partner bootcamp. You can find parts one here, two here, and three here.

ACI Connectivity to External Networks:

  • Firstly — ACI is NOT a DCI tech. It is NOT a substitute for OTV. I’m not sure how you could even use it as a pseudo DCI, but just don’t, since it’s not designed for that.
  • The ACI fabric is NOT a transit network — it will NOT re-advertise networks learned from one external connection out any other external connection.
  • ACI can connect to external networks via L2 and/or L3.
    • You can connect to external networks via both and apply different policies to each ‘uplink’ (I did not get to lab this, so I’m not 100% sure how this would pan out).
  • L2 ‘Uplink’
    • Leaf ports can be manually assigned a VLAN as an access port just like any ‘normal’ switch — this external connectivity would happen exactly as you would think.
    • Leaf ports can be configured as a standard 802.1q trunk to extend multiple VLANs out of the fabric.
      • There is NO spanning-tree in the ACI fabric; as mentioned before, the fabric looks like one bridge to the rest of the network.
      • The fabric will flood BPDUs received on ‘uplink’ ports to ensure a loop-free topology. If the fabric is connected to two ‘upstream’ switches, the flooded BPDUs help spanning-tree do its job normally — note that ACI does not generate BPDUs itself though (as far as I understand).
      • It is recommended to connect the fabric to other devices via vPC/multi-chassis EtherChannel/etc. so you don’t have to deal with spanning-tree at all.
      • Note that internal to the fabric there is no concern about loops: if no endpoint initiates a flow, there is no traffic for ACI to forward, so nothing will happen. It would also seem that the ‘auto’ topology checker would likely eliminate loops anyway.
    • Lastly (as a pseudo-L2 uplink method) ACI will allow for remote VTEPs to terminate VxLAN tunnels coming from the fabric.
      • Uses IETF (draft) VxLAN to extend a VxLAN segment to a remote VTEP — presumably all the reserved bits in the header will just be ignored (see the header sketch after this list).
      • Note that there is no PIM support in ACI at this point, so if multicast is necessary to extend the VxLAN (i.e., the remote VTEP is not on the same subnet as the ACI-sourced VTEP), an external router may be required to run PIM for the ACI environment.
  • L3 ‘Uplink’
    • Three options: OSPFv2, iBGP, static routing
    • For any of these options, L3 interfaces are of course required; you can create L3 ports, sub-interfaces, or SVIs.
    • Static Routing
      • Obviously the simplest option — create static routes as per normal in the APIC, and point a static route from the external network back into the fabric (see the REST sketch after this list).
    • OSPF
      • Just IPv4 for now.
      • The fabric is an NSSA area — remember, the fabric is NOT a transit area.
      • Uses vrf-lite for tenants.
      • The ACI fabric must be a non-backbone OSPF area.
      • All peering must be done on leaf nodes as with ‘normal’ leaf/spine architecture.
      • Leaf nodes basically redistribute OSPF learned routes into BGP for use within the fabric — this is totally transparent to administrators/users.
      • Normal OSPF functionality is retained — importantly, you can equal-cost multipath across multiple leaf ports.
    • iBGP
      • Internally all spines are route reflectors — this is also transparent, but good to know.
      • Must use iBGP! (although there is nothing stopping you from doing some ‘local-as’ configs on your upstream device :p)
      • Also uses vrf-lite for tenants — I would assume that, since the fabric uses MP-BGP, this could grow to use ‘real’ MPLS VPNs, but there was no mention of that at this point.
      • If BGP and OSPF are both used to connect to an upstream network, ONLY BGP is used to advertise ACI routes out to the external network.
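
As an aside on the VxLAN note above: below is a minimal Python sketch of the (draft, later standardized as RFC 7348) VxLAN header a remote VTEP would parse. The function names are mine and purely illustrative; the point is simply that a receiver validates one flag bit, extracts the 24-bit VNID, and ignores every reserved field.

    import struct

    # VxLAN header layout (8 bytes): 8 flag bits (only 0x08, the "I" bit,
    # is defined), 24 reserved bits, a 24-bit VNID, then 8 more reserved
    # bits. A sender zeroes the reserved fields; a receiver ignores them.
    VXLAN_FLAG_I = 0x08

    def build_vxlan_header(vnid):
        """Pack an 8-byte VxLAN header carrying the given 24-bit VNID."""
        return struct.pack("!II", VXLAN_FLAG_I << 24, vnid << 8)

    def parse_vnid(header):
        """Pull the VNID out of a header, ignoring all reserved bits."""
        word1, word2 = struct.unpack("!II", header[:8])
        if not (word1 >> 24) & VXLAN_FLAG_I:
            raise ValueError("I flag not set; not a valid VxLAN header")
        return word2 >> 8

    assert parse_vnid(build_vxlan_header(10010)) == 10010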
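
And since all fabric configuration lands in the APIC, the static-route option above boils down to a REST call. The sketch below is hedged: aaaLogin.json is the standard APIC authentication endpoint, but the address, credentials, and the object payload (the ipRouteP/ipNexthopP class names and DN format) reflect my reading of the ACI object model and should be checked against your APIC version.

    import requests

    APIC = "https://apic.example.com"   # hypothetical APIC address
    session = requests.Session()

    # Authenticate; the returned token rides along automatically as the
    # APIC-cookie session cookie on subsequent requests.
    session.post(APIC + "/api/aaaLogin.json", verify=False,
                 json={"aaaUser": {"attributes":
                                   {"name": "admin", "pwd": "secret"}}})

    # Illustrative payload: a default static route with a next hop,
    # attached to border leaf node-101 under a tenant's L3Out. Class
    # names and DN layout are assumptions, not verified against a lab.
    static_route = {
        "ipRouteP": {
            "attributes": {
                "dn": ("uni/tn-Tenant1/out-L3Out1/lnodep-NodeProf1/"
                       "rsnodeL3OutAtt-[topology/pod-1/node-101]/"
                       "rt-[0.0.0.0/0]"),
                "ip": "0.0.0.0/0",
            },
            "children": [
                {"ipNexthopP": {"attributes": {"nhAddr": "192.0.2.1"}}}
            ],
        }
    }
    resp = session.post(APIC + "/api/mod/uni.json", verify=False,
                        json=static_route)
    print(resp.status_code, resp.text)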

Misc.:

  • Can do SPAN/ERSPAN with the fabric which may be quite helpful for understanding how the fabric works while people get used to it.
  • There was a lot of emphasis on using GitHub to share Application Profiles, Device Packages, scripts to make calls against the APIs, and any other shareable configuration/script type things.
  • The entire APIC topology is kept as objects in a filesystem. You can FTP to the APIC and browse the topology — you probably won’t do this often, but it’s nice that at its core things are human-readable.
  • All API functionality can be fully tested on the APIC simulator — this will be great for devs.
  • There is a built-in API inspector — as you configure things in the GUI, it essentially records a macro of what you are doing; you can then take the output of the API inspector and hand it to a dev (or use it yourself) to script everyday tasks. The inspector records and displays every config change you make in the GUI as an API call (see the example after this list). This seems really powerful for non-programmer network folks.
  • UCS-D and OpenStack seem to be the compelling managers that could run over the APIC: UCS-D since it’s already pretty well vetted, and OpenStack because of the momentum in the industry.
  • AVS will bring the ‘translation’ functionality that leaf nodes use to bridge VxLAN/VLAN/NVGRE to the hypervisor, as well as allow for segmentation between VMs in the same subnet. Until we have at least the latter of these two features, I think we have a big gap in the overall platform.
  • It sounds like ACI/APIC will be added to the CCIE Data Center track (as opposed to the ‘cloud’ track — whatever that ends up being named).
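
As a concrete example of two of the notes above (the object filesystem and the API inspector), here is a minimal Python sketch against the APIC REST API. aaaLogin.json and class queries are standard APIC endpoints; the address, credentials, and the fvTenant query are illustrative only.

    import requests

    APIC = "https://apic.example.com"   # hypothetical APIC address
    session = requests.Session()

    # Standard APIC authentication endpoint; the token it returns is
    # carried automatically as the APIC-cookie session cookie.
    session.post(APIC + "/api/aaaLogin.json", verify=False,
                 json={"aaaUser": {"attributes":
                                   {"name": "admin", "pwd": "secret"}}})

    # Browse the object tree much like the FTP view exposes it: a class
    # query returns every instance of a class, here all tenants.
    resp = session.get(APIC + "/api/node/class/fvTenant.json", verify=False)
    for obj in resp.json()["imdata"]:
        print(obj["fvTenant"]["attributes"]["dn"])

    # Anything the API inspector records while you click around the GUI
    # can be replayed the same way: POST the captured JSON body back to
    # the captured URL on this authenticated session.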

Final Thoughts:

As stated previously, I’m pleasantly surprised with the overall product. I was worried that, with the delays to the ship date, a rushed product would eventually be delivered, but I don’t think that’s the case. I’ve obviously not implemented ACI in production though, so there’s still room for debate on that front. Overall I really believe in the holistic approach that ACI takes; the fact that we are doing fantastic things in software doesn’t mean we can ignore the hardware. In fact I think it means quite the opposite: because we can do such cool things with software, we should take advantage of it, understand what the hardware is doing, and react (like the atomic counters and load-balancing magic happening in ACI).

Integration of ACI into existing data centers, however, is likely not going to be awesome. ACI won’t just slide into a network and magically work — the transition to leaf/spine alone may be difficult for some customers, let alone defining, creating, and testing all of the new application constructs that exist in ACI. It seems to me that the only viable options for ACI integration are as a ‘pod’ in a data center or as a greenfield deployment. Alternatively, with some work I think it would be possible to essentially run ACI in ‘open mode’ (implicit permit vs. the default implicit deny behavior — not sure what it’s technically called in ACI) and simply re-create VLANs as necessary. This would allow you to slowly migrate into the Application Network Profile model while maintaining existing functionality. In any case, the high hurdles to adopting ACI will likely deter customers, and perhaps worse for Cisco, be a boon to NSX. Regardless, Cisco will likely sell the pants off the 9k line, as it’s a great product. Hopefully 9k adoption, and its reasonable entry price, will be enough to help customers build ACI ‘pods’ to get their hands dirty before jumping in head first.

It will be, at the very least, interesting to see how ACI develops and how it is received by the industry as a whole. I’m looking forward to, and am cautiously optimistic about, some real-world ACI deployments.

 
