I had absolutely no idea that AWS in its contemporary form came into existence in 2006, right around the time I started at Cisco and made my first venture into the network industry. I was aware of AWS as a platform a few years later but, like many (and with my lack of experience and insight), didn’t realise its impact or potential. Having worked in networking for 10 years now (Oct 2006–2016) and seen the dramatic change from old C6500 switching to modern SoC-based/merchant-silicon platforms, as well as the more recent influx of ‘SDx’ technologies, I can see clearly now that platforms like AWS and Azure are quickly becoming the de facto choice for future IT strategies, for both infrastructure and services.
With this in mind, it’s time to upshift the skill set and move on. I had originally planned to complete the VCDX-NV qualification, and I may still do so over the long term, but in the short term I’m going to focus on retaining my CCIE R&S until it becomes Emeritus and put significant effort into training for AWS, Azure and some more general architecture specialisations such as TOGAF.
2017 will be the year of Architecture for me.
During my Docker trials and tribulations, I found two great tools for storing measurements and then displaying them.
It’s not a complex database like MySQL – it’s a simple way of storing time-series measurements. I’ll later use it for storing temperature and humidity readings, but for now we’ll get it set up and drop in some resource stats from the Pi.
Thankfully, someone’s already compiled InfluxDB for the Raspberry Pi and Docker:
HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 -v /var/docker_data/influxdb:/data --name influxsrv sbiermann/rpi-influxdb
InfluxDB exposes two web ports:
- 8083 – a web-based UI for basic DB administration and querying
- 8086 – an HTTP API for posting and querying data
The default username and password for influx is root/root.
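As a quick sanity check, you can poke the HTTP API with curl. This is a hypothetical example assuming the image ships an InfluxDB version with the 0.9+/1.x-style /query endpoint – substitute your Pi's address for localhost:

```shell
# List existing databases via the HTTP API (default credentials root/root)
curl -G 'http://localhost:8086/query' -u root:root \
  --data-urlencode "q=SHOW DATABASES"

# Create a database to write measurements into
curl -G 'http://localhost:8086/query' -u root:root \
  --data-urlencode "q=CREATE DATABASE telegraf"
```

If both commands return JSON rather than an error page, the API port is up and reachable.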
Getting System Stats
It’s useful to know what your Pi is up to and how the resource utilisation looks, especially if you start pushing some heavy scripts or apps to it. Telegraf has been compiled for the Pi architecture here. Don’t follow the instructions about creating a dedicated data folder – let Docker do this for you.
Now, the default rpi-telegraf configuration tries to send data to InfluxDB using localhost:8086 – this will fail as we’re not running InfluxDB inside the same container. To fix this we need to do two things:
Firstly – add the ‘--link’ option to the docker run CLI to link the InfluxDB container to the telegraf container.
- --link influxsrv:influxsrv – Docker will create a DNS entry internally and map the influxsrv hostname to the dynamic IP of the InfluxDB container
Secondly – modify the telegraf configuration to point at the right InfluxDB hostname. To do this, you’ll need to run telegraf once, then use docker inspect to find the data directory and edit the telegraf.conf file.
Run telegraf with the link:
HypriotOS/armv7: pirate@black-pearl in /var/docker_data
$ docker run -ti -v /var/docker_data/telegraf:/data --link influxsrv:influxsrv --name telegraf apicht/rpi-telegraf
And then kill the process
Find the config:
As we’ve been creating a dedicated store for our container’s data, you should find the telegraf data in /var/docker_data/telegraf
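If you're not sure where the data ended up, docker inspect can print the host path behind each of a container's volumes. A hypothetical example, assuming the container was named telegraf as above:

```shell
# Show host-path -> container-path for each of the container's volumes
docker inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' telegraf
```

Each line maps a directory on the Pi to the mount point inside the container, so you can find telegraf.conf without guessing.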
Edit the telegraf.conf file and the influxdb section:
[[outputs.influxdb]]
  ## The full HTTP or UDP endpoint URL for your InfluxDB instance.
  ## Multiple urls can be specified as part of the same cluster,
  ## this means that only ONE of the urls will be written to each interval.
  # urls = ["udp://localhost:8089"] # UDP endpoint example
  urls = ["http://influxsrv:8086"] # required

  ## The target database for metrics (telegraf will create it if not exists).
  database = "telegraf" # required
Now telegraf can be run as a daemon container:
HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -v /var/docker_data/telegraf:/data --link influxsrv:influxsrv --name telegraf apicht/rpi-telegraf
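To check telegraf is actually shipping metrics, you can query InfluxDB's HTTP API again. A hypothetical check, assuming the 0.9+/1.x-style /query endpoint and the defaults above:

```shell
# List the measurements telegraf has created in its database
curl -G 'http://localhost:8086/query' -u root:root \
  --data-urlencode "db=telegraf" \
  --data-urlencode "q=SHOW MEASUREMENTS"
```

After a minute or so you should see entries like cpu, mem and disk appear; if the list is empty, check the urls line in telegraf.conf.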
Setup Raspberry Pi 2 with HypriotOS
Assign static addressing
auto eth0
iface eth0 inet static
    address 192.168.1.111/24
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    dns-search jimleach.co.uk
Docker instances are non-persistent, but for most of the things I want to use them for, I need some consistent storage I can present to them. Don’t do this if you want your containers to be portable! A better way would be to present some storage via NFS and map that instead – something a bit less host-centric.
HypriotOS/armv7: pirate@black-pearl in /var/docker_data
Create directories for each container’s data:
We’ll need these later as we build up our stack of containers..
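The original list of directories isn't spelled out here, but going by the volume mappings used later in this post (influxdb, telegraf, dockerui), something like this would do it:

```shell
# Create per-container data directories under /var/docker_data
# (prefix with sudo if your user can't write to /var)
mkdir -p /var/docker_data/influxdb /var/docker_data/telegraf /var/docker_data/dockerui
ls /var/docker_data
```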
HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v /var/docker_data/dockerui:/data --name dockerui hypriot/rpi-dockerui
Unable to find image 'hypriot/rpi-dockerui:latest' locally
latest: Pulling from hypriot/rpi-dockerui
f550508d4a51: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:6e245629d222e15e648bfc054b9eb24ac253b1f607d3dd513491dd9d5d272cfb
Status: Downloaded newer image for hypriot/rpi-dockerui:latest
34d0b3f00a25e847743fd04b59952d7870f2bebbd3b7524e009afd6d5fd0404c
By trying to run the image without first downloading it, you prompt Docker into pulling it automatically from Docker Hub and then starting it.
- -d – puts the instance into daemon mode
- -p 9000:9000 – maps port 9000 on the localhost (the RPi) to port 9000 in the container
- -v – maps our local storage to a volume/directory in the container (local:container)
- --name – gives us a recognisable name to reference the container with
Now if you browse to the Pi’s address on port 9000 – you should get the Docker UI:
In the latter half of 2015 I was lucky enough to be invited to the NSX Ninja partner course at VMware in Staines. This is a course specifically designed to build the knowledge base of partner consultants and architect types, enabling them to seek out and position NSX opportunities. With two weeks of training on the agenda and the assumption that you’ve already spent some time on a training course (ICM or Fast Track) and earned the VCP-NV, this course focuses first on low-level troubleshooting of components and packet flows, then on the design side, with the intention of preparing students for the VCIX-NV.
Attending an event today at Cisco, I was blindsided by the lack of foresight shown by some ‘fellow’ VAR representatives.
Virtualisation of common workloads is the norm, and virtualised networks soon will be too. Back in 2009 I took my CCIE R&S, and at the time I had the full resources of being a Cisco employee supporting me. Now, however, I find myself more of a lone wolf, and in order to keep up with the new trends of virtualised infrastructure and software-defined/developed-everything, I’ve decided to invest in a small virtual lab box.
Now – given that I live in a flat with no dedicated study/office space and a partner who (quite rightly) enjoys having an aesthetically-pleasing home, I’ve come up with a box which I can hide in a cupboard and which is relatively low-powered and almost silent:
- Supermicro X10SDV-TLN4F
- onboard – Intel Xeon D-1540/1541 SoC (System on Chip) – 8c + HT
- onboard – Dual 1GE, and Dual 10GE Base-T LAN
- onboard – Dedicated IPMI interface
- 64GB DDR4 RDIMM (16GB x 4)
- 256GB M.2 PCI-e 3.0 x4 SSD (Samsung SM951)
- Two 1TB Seagate Barracuda (ST1000DM00) slow disks
- 16GB USB stick (to boot from)
- All wrapped up in a Cooler Master Elite 120 Advanced case
All the kit is on order and hopefully I’ll have some updates as to the build and performance as the weeks go by.
In an article titled “Places the CCIE can’t take me”, Ethan Banks recently wrote that network engineers increasingly need to be aware of ‘the complete stack’; in my eyes this means the compute, the storage, the virtualisation, the applications and the management.
I’ve been lucky in that I was introduced to VMware in 2002 – you know, before ESX and vSphere, when you still had to compile Workstation from source. So when Cisco dropped the UCS bomb in 2009, setting up vSphere wasn’t alien to me – I was one of a few network engineers who could understand the interaction between all the components. It was a good time to be an engineer!
This holistic knowledge I’ve carried forward to today; I am a network engineer at heart and will always start there, but I talk to other engineers and customers about all the other components too: what are you running on the network, how is it hosted, what hypervisor or bare-metal OS are you using, what type of storage is it and how is it accessed, and a hundred other questions that lead me to some idea of what is trying to be achieved.
In the last few years I’ve also started to ask the questions around managing infrastructure; what do you monitor and how? How do you control and backup configuration? These questions have been spawned from exposure to financial customers, where availability, integrity and latency are high on the agenda. Infrastructure engineers have been scripting configuration tools for years, but now application developers are trying to do it as well and they get called DevOps.
In the future, I think there’ll still be a need for the specialist engineers we have today – network engineers, storage engineers, compute guys etc. – but they’re all going to need to understand more about the wider picture than they do now. The scariest thing for me recently: talking to a DC network guy who doesn’t know the damnedest thing about vSwitches, and in the same five minutes, a Nutanix engineer who didn’t know whether he needed a port-channel for his vSwitch uplinks or not.