Virtualisation of common workloads is the norm, and virtualised networks will soon be too. Back in 2009 I took my CCIE R&S, and at the time I had the full resources of a Cisco employee behind me. Now, however, I find myself more of a lone wolf, and in order to keep up with the new trends of virtualised infrastructure and software-defined/developed-everything, I’ve decided to invest in a small virtual lab box.
Now – given that I live in a flat with no dedicated study/office space and a partner who (quite rightly) enjoys having an aesthetically-pleasing home, I’ve come up with a box that I can hide in a cupboard: it’s relatively low-powered and almost silent.
- Supermicro X10SDV-TLN4F
- onboard – Intel Xeon D-1540/1541 SoC (System on Chip) – 8c + HT
- onboard – Dual 1GE and dual 10GE Base-T LAN
- onboard – Dedicated IPMI interface
- 64GB DDR4 RDIMM (16GB x 4)
- 256GB M.2 PCI-e 3.0 x4 SSD (Samsung SM951)
- Two 1TB Seagate Barracuda (ST1000DM003) slow disks
- 16GB USB stick (to boot from)
- All wrapped up in a Cooler Master Elite 120 Advanced
All the kit is on order and hopefully I’ll have some updates as to the build and performance as the weeks go by.
In an article titled “Places the CCIE can’t take me”, Ethan Banks recently wrote that network engineers need more and more to be aware of ‘the complete stack’; in my eyes this means the compute, the storage, the virtualisation, the applications and the management.
I’ve been lucky in that I was introduced to VMware in 2002 – you know, before ESX and vSphere, when you still had to compile Workstation from source. So when Cisco dropped the UCS bomb in 2009, setting up vSphere wasn’t alien to me – I was one of a few network engineers who could understand the interaction between all the components. It was a good time to be an engineer!
This holistic knowledge I’ve carried forward to today; I am a network engineer at heart and will always start there, but I talk to other engineers and customers about all the other components too: what are you running on the network? How is it hosted? What hypervisor or bare-metal OS are you using? What type of storage is it and how is it accessed? And a hundred other questions that lead me to some idea of what they are trying to achieve.
In the last few years I’ve also started to ask questions around managing infrastructure: what do you monitor, and how? How do you control and back up configuration? These questions were spawned by exposure to financial customers, where availability, integrity and latency are high on the agenda. Infrastructure engineers have been scripting configuration tools for years, but now application developers are trying to do it as well, and they get called DevOps.
In the future, I think there’ll still be a need for the specialist engineers we have today – network engineers, storage engineers, compute guys, etc. – but they’re all going to need to understand more about the wider picture than they do now. The scariest thing for me recently: talking to a DC network guy who doesn’t know the first thing about vSwitches, and in the same five minutes, a Nutanix engineer who didn’t know whether he needed a port-channel for his vSwitch uplinks or not.
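For what it’s worth, the answer to the Nutanix engineer’s question is usually “no”: a standard vSwitch with the default port-ID load balancing needs no port-channel on the physical switch; only the IP-hash policy requires a static EtherChannel upstream. A rough sketch with esxcli (the vSwitch and vmnic names here are examples, not anyone’s actual config):

```shell
# Add a second physical uplink to the standard vSwitch
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Default "portid" load balancing pins each VM to one uplink –
# no port-channel needed on the upstream switch ports
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid

# Only if you switch to "iphash" must the physical switch ports be
# configured as a static EtherChannel (not LACP, on a standard vSwitch)
```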