Well, since I got on my soapbox and ranted about how you really need to buy your own hardware for certification study, I guess it's fitting that I outline my current lab setup.
Right now I’m working toward my CCIE Service Provider. I’ve passed the written, and am shooting for the “recon” attempt end of February/early March. I call it that because, having done the CCIE R/S, I know that I will need to circle back and re-study at least a few sections (going out on a limb and guessing multicast…. again….).
So, how am I preparing? Well, I've decided to roll with the INE service provider study guide. It seems to me that when it comes to prepping for service provider, there are not exactly a whole lot of options. INE and IPExpert are the two that I think of, but IPExpert doesn't have any service provider material… so the choice was easy (that's definitely not a knock on INE, FYI). Having a study guide makes it REALLY easy to know what kind of hardware you need, which helps give a little focus to the initial setup. To run the INE lab you need:
2x CRS 12k
The CRS 12ks each need:
1x POS/OC blade (I’m using OC48X/POS-SR-SC)
So, low-hanging fruit — ALL of the routers in the INE topology can be virtualized (yay GNS3)!! That's a HUGE help. So in GNS3, we have 6x 7204VXRs and two 2611XMs (but I'm using 3725s because I had that image handy and don't have to worry about memory — more on that in a bit).
Next, switches. We need one ME3400 and one 3550. I have 3550s coming out of my ears, so that was no big deal, but the 3400 is a bit of a strange animal for somebody who lives mostly in enterprise-land (like myself). Thankfully, eBay is our friend, and the 3400 is a bit long in the tooth — I was able to snag an ME3400 for $399. I suspect that I could have put in bids or waited around to pick one up for a better price, but I was impatient and greedy and wanted it ASAP.
All that's left is the big boy… the 12k… This took a bit more work than a quick search on eBay and a simple PayPal transaction. The first thing that is critical to know is that although you need "two" CRSs, you can actually get away with a single chassis. We can do that by configuring an SDR — Secure Domain Router — instance for each "router." If you've lived with Nexus 7Ks, this is basically the same thing as a VDC — with the notable difference that entire blades must be allocated to each SDR; you cannot allocate individual ports or port-groups. What you can do is get a single chassis, buy all the required cards, then split the cards between the two SDRs. (I'll post configs for how to do this next week when I'm back home and can fire up the chassis.)
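In the meantime, here's a rough sketch of what carving off an SDR looks like from admin config mode in IOS XR — the SDR name and slot numbers below are hypothetical, so adjust them to whichever slots your blades actually land in:

```
! From admin configuration mode (hypothetical SDR name and slot numbers)
admin
configure
 sdr XR2
  location 0/1/*    ! hand the ENTIRE slot-1 blade to the new SDR
  location 0/3/*    ! same deal for slot 3
 commit
```

Remember the VDC comparison above: allocation is per-blade, so each `location` line hands a whole slot to the SDR — there's no per-port carving.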
Acquiring a 12k is a bit more of an adventure. I totally lucked out and found somebody on eBay who had just finished their SP lab and was selling the chassis with power and fabric cards, 2x PRP-2, and 2x 4GE-SFP-LC. This tidy package was listed at $3500 or best offer. I led with $2700 (thinking there couldn't be too many crazy people like me out there looking for a second-hand CRS), and was countered at $3100, splitting the difference. I figured that was fair, so that was that!
Next up, I needed the OC blades. This was as easy as poking around on eBay for a bit. $175 per blade later, those were in the mail — not too bad.
Putting all of this together turned out to be a pretty fun little adventure too. The first step was spinning up the GNS3 routers. I have a desktop I built relatively recently with a Core i7 and 32GB of memory that I'm running ESXi on, so I just went ahead and used that. I spun up two separate Ubuntu VMs to split the GNS3 load out a bit (having lived the GNS3 dream while studying for my R/S). Each Ubuntu host runs 3x 7204VXRs and a single 3725. One of those fancy GNS3 switches connects to all the necessary copper ports of the routers, dumping each one into a separate VLAN. The last piece is connecting that switch to a "physical interface" in the host. This connection is a trunk in GNS3 and in ESXi; I left them trunking all VLANs just to make it easy.
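For reference, the ESXi side of that "trunk all VLANs" setup can be sketched with esxcli — the vSwitch and port group names below are hypothetical, and VLAN ID 4095 is what tells a standard vSwitch to pass tags through to the guest untouched:

```
# Hypothetical vSwitch/port group names -- adjust to your host
# Create a port group for the GNS3 VMs on the vSwitch facing the breakout switch
esxcli network vswitch standard portgroup add -v vSwitch1 -p GNS3-Trunk

# VLAN 4095 = pass all VLAN tags through to the guest (VGT mode)
esxcli network vswitch standard portgroup set -p GNS3-Trunk --vlan-id 4095

# The GNS3 bridge inside the VM sends/receives frames for MACs that
# aren't the VM's own, so loosen the vSwitch security policy
esxcli network vswitch standard policy security set -v vSwitch1 \
  --allow-promiscuous true --allow-forged-transmits true
```

Without the promiscuous/forged-transmits settings, the vSwitch silently drops the virtual routers' traffic, which makes for a very confusing troubleshooting session.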
The special sauce here is that my ESXi host connects to a 3750… this is actually super important, because the 3750 apparently has some magical powers with regard to this quasi-QinQ stuff happening between GNS3 and the "real" lab. (For details on the "breakout" switch setup, check out this great IPExpert post: http://blog.ipexpert.com/2011/02/28/gns3-and-physical-switches-breakout-switch/) The 3750 basically just "pops" the tags that the layer 2 GNS3 switch was stuffing the virtual router traffic into, and then dumps that traffic out to the rest of the lab devices.
The rest of the lab setup is nothing fancy — just cabled up to support the INE topology, then logically configured per the lab guide. This lab will certainly allow for some helpful real-life testing too — being able to lab two IOS XR routers connected to the rest of the virtual routers, plus the ME switch, will be pretty powerful. In fact, I'm hoping this lab will come in handy for a customer project early next year!
One final and interesting note about the CRS/blades for those considering building their own lab…. memory… it's a super important piece in CRS land. Each blade runs its own copy of IOS XR and is subject to that version's memory requirements. The OC blades that I ordered came with 512MB of "main memory" each. This would be great… if 3.9.1 didn't require 1GB. Well, memory is cheap and easy to come by, right??? NOPE!! Holy crap… we all know that Cisco memory is ridiculously, hilariously expensive, but for basically every router ever, I've been able to order the right "type" of memory (pins, speed, etc.) and just rock and roll. Not so for this bad boy. Boot up the chassis (takes FOREVER), and no dice — the memory is not "recognized." Well, shit… back to the internets. I was able to find memory on our good friend eBay for $150 per pair of 512MB sticks (one set of 2x512MB per blade). Not too bad, but it certainly added to the total.
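If you go down this road, it's worth confirming what the blade actually sees before and after the upgrade — something along these lines from the IOS XR CLI (the node IDs here are hypothetical; substitute your own rack/slot numbers):

```
! Hypothetical node IDs -- substitute your own rack/slot
show memory summary location 0/1/CPU0   ! total/free DRAM as seen by the line card CPU
admin show diag 0/1                     ! hardware inventory detail for the slot
```

If the new sticks don't show up in the memory summary after a (long) reboot, you're likely in the same "not recognized" boat I was in.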
For those adding this up, or wondering how doable this is, here's where my totals stand (I think):
CRS chassis w/ PRP and GE blades = $3100
OC48X blades = $350
Memory for OC48X blades = $300
ME3400 = $400
3550 = "free" (probably as low as $50 on eBay)
3750 = "free" (can be had for as low as $300 on eBay, I think)
Desktop = "free" (because I built it months ago, preemptively, for this adventure; I spent around $1100 on it)
INE SP Guide = $200 (normally $300 — waited it out for the holiday sale!!)
Including estimated costs for the "free" things: $5800 (or tack on an extra $100 for non-holiday-sale INE material)
Without the "free" things: $4350
I’ll try and remember to post up the important 3750 (“breakout switch”) configs and SDR configs when I get back home to Bellevue.
Here is the relevant 3750 config:
vlan 100
 name R1_F0/0
!
vlan 101
 name R1_F0/1
!
<snip>
!
vlan 800
 name R8_F0/0
!
vlan 801
 name R8_F0/1
!
interface FastEthernet1/0/1
 description From R1 F0/0
 switchport access vlan 100
 switchport mode dot1q-tunnel
 l2protocol-tunnel cdp
 l2protocol-tunnel stp
 l2protocol-tunnel vtp
 no cdp enable
 no cdp tlv server-location
 no cdp tlv app
 spanning-tree portfast
!
!
interface FastEthernet1/0/23
 description Trunk to ESXi Host
 switchport trunk encapsulation dot1q
 switchport mode trunk
 no keepalive
 l2protocol-tunnel cdp
 l2protocol-tunnel stp
 l2protocol-tunnel vtp
 no cdp enable
 no cdp tlv server-location
 no cdp tlv app