re:Invent 2019


  • AWS Outposts is Generally Available for ordering – your customers can now get AWS physical infrastructure on-premises
  • End-of-Support Migration Programme for Windows – lets customers move applications off older versions of Windows without breaking them
  • Network Manager – enables a holistic view of all connectivity to AWS, including on-premises VPNs, Direct Connect and SD-WAN deployments based on Cisco, Aruba, Silver Peak, and Aviatrix
  • AWS Wavelength – puts AWS resources directly into mobile carrier networks to improve the application experience for mobile users
  • IAM Access Analyzer – helps audit and secure the access policies assigned to resources such as S3

About re:Invent

AWS held its first re:Invent conference in Las Vegas in 2012, and here in 2019 it’s back again with attendance in excess of 65,000, taking over the conference centres of six of Las Vegas’ biggest hotels. Starting on Monday and running through to Friday, there are over 3,000 scheduled sessions, with more added as the week progresses. This is a technical conference through and through – while there’s a smattering of marketing, the majority of content is aimed at those building services and applications with AWS, along with leadership sessions on adapting enterprises and their IT operations to public cloud.

AWS in the Public Cloud Market

AWS are very proud (and boastful) of their public-cloud market share – holding 47% of the market, with Azure following at 15%, Alibaba at 7% and GCP at 4%. The remaining 27% is split across Oracle, IBM and a few other niche platforms. AWS’ first two quarters of revenue in 2019 have already exceeded 2018’s entire reported revenue – they’re forecasting 39% growth for this year!

New Product Launches

AWS Outposts
First announced last year, Outposts has finally gone “GA”, or Generally Available – this means that customers can order pre-built AWS compute infrastructure to be installed in their own on-premises DCs. Designed to solve some of the issues around application latency and data locality, the infrastructure is managed and controlled right from the AWS console and is an extension of a customer’s AWS environment.

End-of-Support Migration Programme for Windows
One of the biggest launches this week is a migration service that essentially wraps applications dependent on older generations of Windows (2003, 2008 etc.) in an envelope and allows you to port them onto newer Windows editions. This means you can take advantage of the critical security and performance updates available in newer versions of Windows.

Other Notable New Services

Network Manager – introduces a new way to manage and monitor your use of AWS’ global network, as well as how it interfaces with your on-premises networks via Site-to-Site VPN and SD-WAN. Cisco, Aruba, Silver Peak, and Aviatrix have all announced integrations of their SD-WAN products with Network Manager.
Amazon VPC Ingress Routing – you can now segment your Amazon Virtual Private Cloud traffic so that it is routed via virtual appliances, both inbound and outbound.
Access Analyzer for S3 and IAM Access Analyzer – these new features monitor access policies and enable proactive remediation of potentially unwanted access.
AWS License Manager additional functionality – dedicated hosts can be difficult to manage under certain licensing models (for example BYOL); AWS License Manager now simplifies this.
Wavelength – brings AWS services and capabilities as close to mobile users as possible by putting AWS resources directly in 5G carrier network hubs.
AWS Data Exchange – AWS already have Marketplace for ISVs and pre-built solutions; Data Exchange allows companies to share or sell data which might be useful to others, for example anonymised healthcare insights or historic news items.
Amazon Braket – probably not relevant to 99.99% of our customers, but AWS have brought quantum computing to the cloud.


Networking is key to how AWS provides its Virtual Private Cloud (VPC) – enabling it not only to host virtual machines (AWS calls them Instances) in its infrastructure but to connect those VMs with the Internet and on-premises networks. Here are a couple of updates from the VPC world:

  • AWS Transit Gateway Multicast
    • Multicast, in the cloud… used most often by media broadcasters and financial/energy trading customers – the lack of this would once have been a show-stopper for cloud adoption.
  • Accelerated Site-to-Site VPN
    • Brings the VPN gateway to an edge location closest to your on-premise VPN connection
    • Used in conjunction with transit gateways
    • Uses AWS backbone network and is essentially driven using anycast connectivity
  • AWS Transit Gateway Inter-Region Peering
    • Now you can connect VPCs in different regions via transit gateways – previously you would have had to do this with site-to-site/IPsec tunnels.

Under The Hood

re:Invent isn’t just about learning how to use AWS’ technologies and services, it’s also about learning about what goes on under the hood (or behind the silver lining):

  • Graviton2 ARM Chips – the next generation of ARM-powered Instances (as opposed to the typical Intel/AMD x86 instance types)
  • Nitro 2 Controller – AWS use a specialised, custom-built virtualisation controller called “Nitro”. This week saw confirmation of its second generation being deployed, providing low-latency 100Gbps network connectivity (up from 25Gbps) to convince the high-performance compute crowd that you can do HPC in the cloud.

AWS Exam Review

Last week I took and passed the AWS Solutions Architect Associate exam. Other than the rather weird PSI test centre/environment/process (more on that later), I actually rather enjoyed this examination.

Don’t worry – there’s no NDA-breaching here.

The exam itself presents 65 questions with just over two hours to complete. Most of the questions are scenario-based rather than quick-win factoid answers. This is why I liked it. I’m terrible at remembering factoids (i.e., this thing costs $0.065/MB on a Wednesday when the sun is shining and the wind is from the north), but I am very good at putting things into practical use and designing around capabilities or intended functionality. What’s the best way to ensure maximum availability for static content used on customer X’s website? What’s the best way to provide maximum fault tolerance across availability zones when an application needs Y instances? These are all great questions that make you think about the capabilities of AWS’s services, how they are distributed and how you can exploit them.

I used a number of study aids in preparing for the exam.

Firstly: A Cloud Guru – has to be the best technical training platform I’ve ever encountered. The courses are delivered in chunks of 10–20 minutes, so you’re never overloaded, and the majority have a practical element you can follow along with. The quizzes at the end of each module keep you on track, and the exam simulations are realistic enough to give you a view of your progress.

Secondly: read the whitepapers. I can’t stress this enough – the baseline should be a thorough read of the Well-Architected Framework and the Cloud Adoption Framework. You should also read AWS blogs and study solutions that have already been architected.

Finally: the AWS Free Tier. Exploit it! Tinker with everything. The more practical a view you get of AWS, the more you understand how things hook together and where the limitations are (or aren’t).

The last point I have is around the PSI test experience. I’m very used to doing Pearson Vue exams – the way the test centres and the exams work, be they Cisco or VMware or whatever, is very structured. The PSI experience was very lightweight. I was given a username and one-time password to use at the exam machine, and that was it. When the test is finished, you don’t get a score report (displayed or printed), you just get a “Congratulations” or an “Unsuccessful”. The screen then kind of leaves you hanging – I recommend calling for a proctor at this point and getting them to log the exam out.

Next up.. AWS SysOps Associate!

Simple AWS Diagrams

So my most recent post included some Visio-style diagrams, but not done in Visio.. try it out, it’s pretty good. Basic, but good.

Moving Markets and upskilling

I had absolutely no idea that AWS in its contemporary form came into existence in 2006, right around the time I started at Cisco and made my first venture into the network industry. I was aware of AWS as a platform a few years later but, like many (and with my lack of experience and insight), didn’t realise its impact or potential. Having worked in networking for 10 years now (Oct 2006–2016) and seen the dramatic change from old C6500 switching to modern SoC-based/merchant-silicon platforms, as well as the more recent influx of ‘SDx’ technologies, I can see clearly now that platforms like AWS and Azure are quickly becoming the de facto choice for future IT strategies, for both infrastructure and services.

With this in mind, it’s time to upshift the skill set and move on. I had originally planned to complete the VCDX-NV qualification, and I may well still do this over the long term, but in the short term I’m going to focus on retaining my CCIE R&S until it becomes Emeritus and put significant effort into training for AWS, Azure and some more general architecture specialisations such as TOGAF.

2017 will be the year of Architecture for me.

Docker-Pi – DB and Visualisation

During my docker trials and tribulations, I found two great tools for storing measurements and then displaying them..


InfluxDB isn’t a complex database like MySQL – it’s a simple way of storing time-series measurements. I’ll later use it for storing temperature and humidity measurements, but for now we’ll get it set up and drop in some resource stats from the Pi.

Thankfully, someone’s already compiled Influx for the Raspberry Pi and Docker..

HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 -v /var/docker_data/influxdb:/data --name influxsrv sbiermann/rpi-influxdb

InfluxDB exposes two web ports:

  • 8083 – a web-based UI for basic DB administration and querying
  • 8086 – an HTTP API for posting and getting data
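As a quick sanity check of the 8086 API, you can write a point using InfluxDB’s line protocol. A sketch, assuming an InfluxDB 0.9+ API and using placeholder host, database and measurement names – the block just prints the curl command you would run against the Pi:

```shell
# Placeholder values - substitute your Pi's address and your own database.
PI_HOST=black-pearl.local
DB=telegraf
# Line protocol: <measurement>[,tag=value] <field>=<value>
POINT="cpu_temp,host=black-pearl value=48.2"
# Print the write request (remove the echo to actually send it):
echo curl -i -XPOST "http://${PI_HOST}:8086/write?db=${DB}" --data-binary "$POINT"
```

A successful write returns HTTP 204 with no body, so silence from curl is good news here.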

The default username and password for influx is root/root.

Getting System Stats

It’s useful to know what your Pi is up to and how the resource utilisation looks, especially if you start pushing some heavy scripts or apps to it.  Telegraf has been compiled for the Pi architecture here.  Don’t follow the instructions about creating a dedicated data folder.. let Docker do this for you.

Now – the default rpi-telegraf configuration tries to send data to influx using localhost:8086 – this will fail as we’re not running influx inside the same container. To fix this we need to do two things..

Firstly – add the `--link` option to the docker run CLI to link the influxdb container to the telegraf container.

  • --link influxsrv:influxsrv – docker will create a DNS entry internally and map the influxsrv hostname to the dynamic IP of the influx container

Secondly – modify the telegraf configuration to point to the right influx hostname. To do this, you’ll need to run telegraf once and then use docker inspect to find the data directory and edit the telegraf.conf file.
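If you’d rather not hunt through the full docker inspect dump, a Go-template filter narrows it to just the mount sources. A sketch – the container name matches the one used in these posts, and the block only prints the command rather than assuming a running Docker engine:

```shell
# Go-template filter that prints each mount's host-side path for the
# container named "telegraf" (one path per line).
INSPECT_CMD="docker inspect -f '{{range .Mounts}}{{.Source}}{{println}}{{end}}' telegraf"
echo "$INSPECT_CMD"
```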

Run telegraf with the link:

HypriotOS/armv7: pirate@black-pearl in /var/docker_data
$ docker run -ti -v /data --link influxsrv:influxsrv --name telegraf apicht/rpi-telegraf

And then kill the process

Find the config file:

As we’ve been creating a dedicated store for our container’s data, you should find the telegraf data in /var/docker_data/telegraf

Edit the telegraf.conf file and the influxdb section:

## The full HTTP or UDP endpoint URL for your InfluxDB instance.
## Multiple urls can be specified as part of the same cluster,
## this means that only ONE of the urls will be written to each interval.
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://influxsrv:8086"] # required
## The target database for metrics (telegraf will create it if not exists).
database = "telegraf" # required

Now telegraf can be run as a daemon container:

HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -v /data --link influxsrv:influxsrv --name telegraf apicht/rpi-telegraf

Docker-Pi – Getting Started

Setup Raspberry Pi2 with HypriotOS

This gives us the basic docker platform to start from.. saves the aggro of trying to work it all out for ourselves. The guys over at Hypriot have put together a baseline OS for the ARM/Pi architecture that boots, DHCPs for network and then automatically starts the Docker components.
Download from here, and if you run a mac, follow these instructions.
The default SSH credentials for HypriotOS are pirate (username) / hypriot (password).

Assign static addressing

If you’re like me, you can’t remember half of the distribution variations for setting a static address.. HypriotOS is based on Debian, so the following works fine: edit /etc/network/interfaces/eth0 and set something like this (the addresses are examples – substitute your own):
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1

Getting Persistence

Docker instances are non-persistent, but for most of the things I want to use them for, I need some consistent storage I can present to them. Don’t do this if you want your containers to be portable! A better way would be to present some storage via NFS and map that instead.. something a bit less host-centric.

HypriotOS/armv7: pirate@black-pearl in /var/docker_data
$ pwd

Create directories for:

  • dockerui
  • influxdb
  • telegraf
  • grafana

We’ll need these later as we build up our stack of containers..


Docker itself doesn’t have a web front-end, it’s all CLI driven – but Docker-UI is a containerised app that lets you see all the images and containers in your docker engine and view the connectivity between them. Hypriot have pre-compiled the UI for their OS; you can grab it directly from the Docker Hub and run it manually or, by the power of docker, just run it (without downloading first) and let docker do the hard work:
HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v /var/docker_data/dockerui:/data --name dockerui hypriot/rpi-dockerui
Unable to find image 'hypriot/rpi-dockerui:latest' locally
latest: Pulling from hypriot/rpi-dockerui
f550508d4a51: Pull complete
a3ed95caeb02: Pull complete
Status: Downloaded newer image for hypriot/rpi-dockerui:latest

By trying to run the image without first downloading, you prompt docker into pulling it automatically from the Docker Hub and then starting it.

  • -d – puts the instance into daemon mode
  • -p 9000:9000 – maps port 9000 on the localhost (the RPi) to port 9000 in the container
  • -v – maps our local storage to a volume/directory in the container (local:container)
  • --name – gives us a recognisable name to reference the container with

Now if you browse to the Pi’s address on port 9000 – you should get the Docker UI:


Pi-Docker Failure

Arrrgh! Total disappointment this weekend. I’d spent a few days learning the basics of docker and installed HypriotOS on my RPi2. I added a temperature/humidity sensor and wrote some scripts that bound together a set of containers running pigpiod, influxdb, grafana and a python script to grab the sensor data and push it to influx. I had it all working perfectly and was ready to start playing with Git and trying out some rapid script development when a hasty and ungraceful shutdown lost the lot!
Even though I don’t have any backup (it was early days) – I still have plenty of notes, so I’ll rebuild it this week and post up as I go.

Cisco Live 365

Cisco Live 365 is an unbelievable resource that’s totally free! For those who haven’t come across it before – it’s a library of all the presentations from the Cisco Live conferences of the last four years (starting London 2012). For most of the sessions the presentation material is available for download in PDF form, and for many the actual session has been recorded and hosted as well.
The library of resources isn’t just there for those who couldn’t make the Cisco Live conferences in person; it provides excellent reference and training material. I just can’t recommend it enough for those of both Sales and Technical backgrounds.
For those who’re technically focused.. just search ‘Deep Dive’ and you’ll find a mass of material that’s not on Cisco’s documentation/support pages, most of it written by Technical Marketing Engineers or those who just live and breathe specific technology stacks.

The Problem with NSX and ACI

Let’s face it, if there’s even the slightest whiff of someone in a business somewhere mentioning or even thinking about ‘SDN’, Cisco and VMware will be knocking on that door… with a sledge hammer!
The problem is, neither vendor’s product is perfect and as yet, they don’t talk to each other.
NSX doesn’t manage infrastructure. Period. It has not a care in the world for what is going on with the underlay. And you might say “Well, that’s how it’s designed – to be underlay agnostic”. My problem with this is: if you’re doing a greenfield DC or a refresh, you still have to consider the physical infrastructure. How are you going to manage that infrastructure, monitor it and maintain it? NSX won’t make it go away. What NSX is good at is the logical stuff – it’s easy to understand the concepts of an edge firewall, distributed firewall, dLR and logical networks. And it’s easy to create the tenant spaces within those constructs.
ACI is infrastructure; it is not virtualisation. The super-cool thing about ACI is just how easy it is to deploy, configure and manage large-scale network infrastructure. It’s unbelievable how easy it is! Where it fails – not abysmally, just badly – is delivery of the infrastructure constructs into the hypervisor space. Cisco need to create a hypervisor component capable of everything a physical leaf does – sending traffic up to a physical leaf for processing and then returning it to the hypervisor is just clumsy. Even worse, assigning VLANs (whose limits we’re trying to get away from) to port-groups on the VDS and using that for [micro-]EPG separation is clunky.
Are these two competitors? Cisco and VMware believe so, but in reality they are solving different problems, expensively.
What is the answer? Working together. Which is tricky – NSX has come a long way in terms of VXLAN/Logical Switching/dLR development, and of course ACI is doing the same at the physical layer in the leaves. I like NSX’s ability to provide a limited set of basic network functions (Edge, SSL/VPN/SLB) in an easy-to-consume way; what I don’t like is its total ignorance of physical infrastructure and physical workloads.

NSX Ninja

In the latter half of 2015 I was lucky enough to be invited to the NSX Ninja partner course at VMware in Staines. This is a course specifically designed to build the knowledge base of partner consultants and architect types, enabling them to seek out and position NSX opportunities. With two weeks of training on the agenda and the assumption that you’ve already spent some time on a training course (ICM or Fast Track) and earned the VCP-NV, this course focuses first on low-level troubleshooting of components and packet flows, then on the design side, with the intention of preparing students for the VCIX-NV.
