APIs and the Network – Far Behind?

As I’ve been, I guess, “evolving” as a network engineer/packet pusher/router jockey/packet herder/whatever you’d like, I’ve been working more and more with APIs (in my case mostly ACI, but also a bit of vCenter, and other random bits). This has obviously been really cool, getting exposed to a new, arguably better, way to interact with devices. It has, however, opened up lots of new questions for me. These questions pretty much all revolve around APIs and standardization, and I think most pointedly around the question: “is networking really that far behind everyone else?”

That last question is pretty near and dear to me as I’ve so far made my career out of just being that “network” guy. I certainly don’t want to feel like networking is the last kid on the block to get it together, but that is for sure the messaging that has been coming out of the Twitters and podcasts and the like.

I guess I should preface the rest of everything I’m going to write with this one tidbit: “I don’t know what I don’t know.” What I mean by that is that this post really is an open question because I only know what I’ve seen or been exposed to, and I’m genuinely interested and curious about this topic.

So my position (question?) on this is that it doesn’t seem, to me at least, that networking is really *that* far behind in terms of programmability or standardization. Now I guess I should clarify that position a bit because I do feel there are some relevant caveats to that statement… Firstly, I completely agree with a lot of the rhetoric out there that networking hasn’t really changed in the last two decades. We still manage things box by box, we still have spanning-tree, we still have basic routing protocols, we still are doing all of the things that were invented when the Internet came to be (more or less).

That being said, I think there are some valid reasons why we do things the way we do them. The biggest point for networking not changing is that it (networking) is arguably the most critical component (from a technology perspective) of any organization. If the network is down people are not happy. No email, no VoIP, no applications, no eCommerce, no XYZ — without the network these things just don’t work. That is a serious burden to bear for the network. It makes changing things hard, because if you screw up… it could be a bad day for you and your organization. This is definitely not intended to diminish the importance of other disciplines in the IT world — storage is super critical (perhaps the next most critical in my view), voice is super critical, security is super critical, but none of those pieces by themselves necessarily will cause a complete failure for an organization.

All in all, I don’t see that there are standardized APIs across any hypervisor or storage array or any other box in general, so I don’t understand why we should expect the network to have universally accepted standard APIs. Moreover, I think that would even be a bad thing — think about SNMP! SNMP was supposed to be a standard universal way to query devices, and indeed everyone supports SNMP, but look what a shit show it is — do you really want that again!?

To put a bow on this I guess I’ll just lay out what I think about the current state of things. APIs are good — and networking folk should learn to love them. We’re getting there — Arista, Cisco, Big Switch, and tons of other vendors have heard loud and clear and are implementing them. I can tell you from my personal experience that working with the ACI API is smooth and awesome. We’ll get there in other networking domains (WAN/Campus/Security), but it will take time. Taking time is probably even a good thing, as network folk like myself need to get up to speed on all of this newfangled API stuff. So, in the meantime, learn some basics of how to interact with an API — I would strongly recommend checking out Postman and the collection runner — it’s a tool I use regularly and is a super simple way to get started. What are you waiting for?
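To make “interacting with an API” a little more concrete, here’s a minimal sketch (standard library only) of building the login request for the ACI APIC REST API — the same call you’d fire from Postman to get a session token. The APIC hostname and credentials here are hypothetical placeholders, and the actual HTTP POST is left out so the sketch stays self-contained:

```python
# Sketch: build the URL and JSON body for an APIC aaaLogin call.
# "apic.example.com", "admin", and "secret" are placeholder values,
# not a real device or real credentials.
import json

APIC = "https://apic.example.com"  # hypothetical APIC address

def build_login_request(username, password):
    """Return the (url, json_body) pair for an APIC login POST."""
    url = f"{APIC}/api/aaaLogin.json"
    body = {"aaaUser": {"attributes": {"name": username, "pwd": password}}}
    return url, json.dumps(body)

url, body = build_login_request("admin", "secret")
print(url)   # where the POST goes
print(body)  # what the POST carries
# POSTing this returns a token you attach to every subsequent API call.
```

In Postman you’d paste that same URL and body into a request, hit send, and inspect the JSON that comes back — which is exactly why it’s such an easy on-ramp.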


Elevator Operators Wanted

Recently I had the great joy to be invited to a special Tech Field Day (link: http://techfieldday.com, disclaimer below) event — the Data Field Day Roundtable 1! My first Tech Field Day event, and the very first Data Field Day, so basically very cool stuff! If you have some time, check out the videos on YouTube or Vimeo, it was really fascinating stuff! This event was focused on telemetry, configuration management, and the embrace of open source tools. Really though, the main theme wasn’t about how Netflix does their telemetry (although very cool), or how Google wants to help move along OpenConfig (again, also seems pretty cool), it all really boiled down to building a better mousetrap.

So, given that I’ve got a very network-centric viewpoint of the world, what did this event really mean to me? How does it play into the future of networking/networkers? Well, I think it’s safe to say that network folks will have to learn new skills — else be doomed to being the elevator operator (my friend Mark Snow’s favorite analogy when discussing this!). There is of course no scenario at all, assuming a desire to stay relevant, where network engineers can just stop learning — period. Technology is of course not standing still, so even if you’re just learning new syntax or a new routing protocol, you’re not off the hook.

[Image: elevator operator]

Okay, great, we have to learn new stuff — why does how Netflix and Google interact with and/or configure their networks matter to me? Glad you asked!

I think the bottom line is it’s not about programming, it’s not about being an open-source tool wizard — it’s just about configuring and managing devices in a way that sucks less than the way we do it now. If you listen to the Packet Pushers podcast, you’ve undoubtedly heard Greg Ferro and his fantastic hatred of SNMP. He is so on the money with that hate it’s not even funny. SNMP is of course not the only, or even primary, method for configuring devices — that has been and still is the CLI. Well, the jury is in and the CLI has got to go at some point — it will be a long, slow death because it has evolved out of necessity into what we rely on daily, but it just can’t remain. But why the hell are we still using these tools, and what else could we replace them with? Well, like I said earlier, I don’t know that the outcome matters, but we certainly can’t continue dealing with SNMP and MIBs and blah — that’s painful… and the next time you want to make a simple config change across 100s, or even just 10s, of devices, you’ll recall why the CLI isn’t that great.
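That “one change across 10s or 100s of devices” point is where an API really earns its keep: you describe the change once as structured data and loop it across devices, instead of pasting CLI into 100 terminal sessions. Here’s a hedged sketch of that idea — the device names and the `/api/config` endpoint are hypothetical, not any particular vendor’s API, and the actual HTTP calls are left out:

```python
# Sketch: fan one config change out across many devices via a REST-style
# API. The endpoint path and hostnames are illustrative placeholders.
import json

def build_change_requests(devices, change):
    """Return one (url, body) pair per device for the same config change."""
    reqs = []
    for device in devices:
        url = f"https://{device}/api/config"  # hypothetical endpoint
        reqs.append((url, json.dumps(change)))
    return reqs

devices = [f"switch{n:02d}.example.com" for n in range(1, 4)]
change = {"syslog": {"server": "10.0.0.5"}}  # the change, defined once
for url, body in build_change_requests(devices, change):
    print(url, body)  # in real life: POST body to url, check the result
```

The point isn’t this particular code — it’s that the change is data, so applying it to 10 devices or 500 is the same loop, and you can check each response instead of eyeballing terminal output.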

What does this have to do with DFDR1 (Data Field Day Roundtable 1 if you were wondering, a bit long don’t you think)? Even though I feel like DFDR1 was way over my head in a lot of ways – Linux/Program-y/Dev stuff — the central theme was pretty apparent: we need better ways to mine data from devices, we need better ways to configure devices, and we need it now. Not when we can finally figure out what the hell to replace the CLI with. So some really smart folks have said screw it and just forged ahead. Like I’ve said several times now — I don’t care what choices they’re making to do this; the platforms and tools they select may stick around forever or they may be gone next week… still doesn’t matter. If five wizard-level people at Netflix can handle all of their monitoring and logging on millions and millions of points of data, we can do better than SNMP and platform/device/vendor-specific CLIs. Let’s not be the elevator operator.


Disclaimer: Not that this post was about any one vendor or anything in particular anyway, but wanted to point out that the lovely folks at Tech Field Day did indeed fly me down to and put me up in San Jose for the DFDR1 event. There was also drinking involved. It was great. Vendors I’m sure paid for my drinks and flights in some way or another, they did not ask me to write about stuff, and this is not that anyway.