I mentioned in Part 1 that I wasn’t sure of the costs of running a lab environment with Control Tower (plus all the resources it turns on by default). To keep on top of things, I’ve enabled billing notifications with a monthly $20 limit – this needs to be done in the original root account (so logging into AWS with your root credentials and NOT the SSO credentials).
I’ll come back to this in a week and see how the bill is working out.
Since this is only a lab and I want to keep the costs under control, I’ve opted to set some lower retention/lifecycle policies on the buckets related to logs (the default is 365 days).
Since Control Tower deploys everything using StackSets, you need to modify the StackSet rather than editing the S3 Lifecycle policy directly.
Log in as your AWS Administrator from your SSO console, go to Shared Accounts -> Log Archive and click on View CloudFormation StackSet:
Now you can modify the Retention policy in the Stack parameters by clicking through:
- Manage StackSet
- Edit StackSet
- Current template: Update AWSControlTowerLoggingResources
- and then changing the retention policy to 5 days.
After this, just Next-Next-Next until you can update the StackSet… this will roll out to the account automatically.
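For reference, what that retention parameter boils down to is an S3 lifecycle rule on the logging bucket, roughly equivalent to this (the rule ID is made up for illustration – the actual rule Control Tower generates may be named and scoped differently):

```json
{
  "Rules": [
    {
      "ID": "ExpireLabLogs",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 5 }
    }
  ]
}
```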
So, I’m deep in the midst of my AWS SA Pro studies this week but need to put some things into practice to help cement the ideas.. to do this I’ve decided to build a new AWS environment using all the best/recommended/guided methods I could find!
First steps today – getting a basic account structure up and running using AWS Control Tower. Control Tower builds a basic account structure called a Landing Zone – consisting of the root account, a logging account and an auditing account – and then wraps this in a set of permissions, with SSO and an AWS Organizations setup.
Side note (warning): this is probably going to cost a fortune to keep running, but it’s only for a short period.. I don’t think the Free Tier will cover much. I’ll try and do a costs-related post later on once they start racking up.
Steps to do first thing:
- Created a basic AWS account – given it a crazy lab-related name and set up a dedicated email alias in Gmail for the first tranche of emails
- Created a WorkMail configuration with two email addresses: one for notifications (logs) and one for the auditor (audit logs)
- Kicked off the Control Tower build process
WorkMail is quite useful here as I can completely contain all the email etc associated with the Organisation/LandingZone within the original account. I wouldn’t recommend this as best-practice, but it makes it easy to dismantle the lab environment afterwards.
Costs… (verbatim) $4.00 per user per month and includes 50 GB of mailbox storage for each user. You can get started with a 30-day free trial for up to 25 users.
Control Tower delivers a basic level of security and governance around AWS accounts.. getting the customer off to the right start on the Shared Responsibility Model.
As Control Tower sets up SSO, you’ll get an email to set up your SSO account – once done you should be able to log in and get a view similar to this:
It takes about an hour for Control Tower to do its thing and the dashboard will keep you updated with its progress:
And once it’s done, something like this:
As part of my AWS-SA-Pro studies, I’m finding myself doing a lot of reading / watching on the subject of NoSQL databases and in doing so have come across a video from re:Invent 2018 presented by Rick Houlihan (Principal Technologist, NoSQL). This one slide pretty much sums up NoSQL for me:
Tables in NoSQL (specifically DynamoDB) have just a few things that you can base your query on; within the Primary Key there is a partition key and sort key. You can create relationships by using the partition key as a grouping mechanism and the sort key as the most common attribute from which you want to query those relationships.
You can create further sort keys from other data using Global and Local Secondary Indexes (GSIs, LSIs) – but these effectively create copies of the table data organised with alternative fields forming the Primary Key, so you want to minimise the number of GSIs and LSIs you use.
The whole point of NoSQL is to create a scalable database which reduces the CPU load inherent in complex relational databases.. taking something like a delivery service, with 6 different tables in SQL, and reducing it down to one table and three GSIs in NoSQL. The table format and the attributes you use for your partition and sort keys all depend on the access patterns – ie, what sort of queries you run on the data (what data are you looking for and what are your input variables/fields for retrieving that data).
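To make the partition-key/sort-key idea concrete, here’s a minimal in-memory sketch – plain Python, no DynamoDB involved, and the item layout and key values are invented for illustration. The partition key groups related items together, and a sort-key prefix pulls out a subset of them, which is exactly what a DynamoDB Query with `begins_with` does:

```python
# One "table": every item carries a partition key (pk) and sort key (sk).
# A single customer partition holds the customer profile AND their orders,
# so related items come back in one query.
table = [
    {"pk": "CUST#42", "sk": "PROFILE",          "name": "Alice"},
    {"pk": "CUST#42", "sk": "ORDER#2019-01-03", "total": 12.5},
    {"pk": "CUST#42", "sk": "ORDER#2019-02-14", "total": 30.0},
    {"pk": "CUST#99", "sk": "PROFILE",          "name": "Bob"},
]

def query(pk, sk_prefix=""):
    """Mimic a DynamoDB Query: exact match on pk, begins_with on sk."""
    return [i for i in table if i["pk"] == pk and i["sk"].startswith(sk_prefix)]

# Everything for customer 42 (profile + orders) in one round trip:
print(len(query("CUST#42")))  # 3
# Just the orders, selected via the sort-key prefix:
print([i["total"] for i in query("CUST#42", "ORDER#")])  # [12.5, 30.0]
```

The access pattern drives the key design: because "get a customer and their orders" is the common query, both item types share a partition key and the sort key encodes the item type.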
I started my networking career when I joined Cisco as an intern back in 2006 – and my first few projects rotated around CUCM. Later in life I did a project for Vodafone for a hosted Contact Centre service. At the time, the amount of effort, cost and complexity that went into building these solutions seemed appropriate – seemed normal. A few years later and I now think it’s horrifying!
My first attempt at building some form of basic infrastructure constructs in AWS.. Keep in mind that this is the learning curve, so in no way represents best-practice deployments!
The Building Blocks
- Internet Gateway
- Single VPC in London Region (eu-west-2)
- Two subnets, one in each availability zone (eu-west-2a and eu-west-2b) for Web Servers
- Two subnets, one in each availability zone for Bastion hosts
- Two Launch Configurations – one for bastion hosts, one for webservers
- One Auto Scaling Group for Web Servers – min instances 2, linked to webserver Launch Configuration
- One Auto Scaling Group for Bastion host – min instances 1, linked to Bastion Launch Configuration
- Elastic Load Balancer – inbound HTTP/HTTPS, connected to the Web Server auto-scaling group
- One Security Group for Web Servers – enables inbound HTTP/HTTPS from anywhere, and SSH from the Bastion subnets
- One Security Group for Bastion hosts – enables inbound SSH from anywhere
- One IAM Role – to enable Read-Only access to S3
Web Server Launch Configuration
Each web server is built using a Launch configuration which has a bootstrap script to do the following:
- Update standard AMI packages
- Install Apache and PHP
- Start Apache
- Set Apache to start on bootup
- Copy custom index.php from S3 (this is why it needs an IAM role to access S3!)
- Copy health-check HTML from S3
- Make index.php executable
The index.php is a basic “Hello World” which also shows the internal IP of the host serving it.. this way when tweaking with load-balancers I can tell which instance has served the request. The two pages are stored in an S3 bucket and the IAM role applied to the Launch Configuration allows the instances to copy the files down to the web server.
#!/bin/bash
# Update base AMI packages
yum update -y
# Install Apache and PHP, start Apache and enable it on boot
yum install httpd php -y
service httpd start
chkconfig httpd on
# Copy the site content from S3 (this is what the IAM role is for)
aws s3 --region eu-west-2 cp s3://e02-lab-scripts/index.php /var/www/html/
aws s3 --region eu-west-2 cp s3://e02-lab-scripts/healthcheck.html /var/www/html/
chmod +x /var/www/html/index.php
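For completeness, the IAM role attached to the Launch Configuration needs little more than read access to that bucket – something along these lines (a sketch; the exact statements you attach may differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::e02-lab-scripts",
        "arn:aws:s3:::e02-lab-scripts/*"
      ]
    }
  ]
}
```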
This stuff is bloody complicated – but – certainly not impossible. Once you know what all the components are, how they work and interact with each other, it’s easy to start building services and constructs based on them.
Well – my first foray into AWS was going so well.. I’ve been using A Cloud Guru for my AWS Architect Associate training and following each of the labs and doing a bit of playing in the background myself. Since getting through the initial S3 and EC2 videos, I brought up a Bitnami WordPress instance to host this site and did the relevant Route53 transfer and what have you. I also moved our wedding website to an S3-Static Site bucket as post-wedding it didn’t need PHP or anything fancy to run.
All seemed well and good.
Then I decided that since I was now hosting ‘live’ sites from my AWS account, I didn’t really want to be meddling with labs and accidentally destroy something.. So I created a new account and went about building a little lab environment: New VPC, subnets for web services, subnets for bastion hosts, scaling groups.. the works! I thought I was doing so well until I tried connecting to the first bastion instance I’d created and got a dreaded SSH time-out!
Cut to the chase: If you don’t want to read the rest, just remember this: Don’t forget a default gateway in the routing table – it doesn’t add itself!
The idiot network engineer
“User error” – I thought. Obviously. So I walked through my configuration:
- ElasticIP associated.. check.
- Instance attached to Subnet.. duh, check.
- Subnet associated in routing table.. check.
- VPC associated with Internet Gateway.. check.
- Security group has sensible ruleset (SSH and HTTP/s inbound from 0.0.0.0/0).. check.
So I still couldn’t see what the issue was. Now, at this point, I hadn’t done any training on CloudWatch, but I noticed in the VPC configuration I could do something with Flow Logs. So after a bit of playing around and following the on-screen “You’re trying to do something you’re not ready for yet”-type instructions, I got flow information for the VPC going into a log and could drill down on a per-instance basis.
Now I could see what was going on.. Nothing wrong with the rules; my pings and SSH attempts clearly had ‘ACCEPT’ next to them, so the inbound path was fine.
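Each flow-log record is just a space-separated line in the documented version-2 field order, so it’s easy to pick apart. A quick sketch of pulling out the fields I cared about (the sample record below is fabricated for illustration):

```python
# Default VPC Flow Log (v2) field order, per the AWS documentation.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

def parse_record(line):
    """Turn one flow-log line into a dict keyed by field name."""
    return dict(zip(FIELDS, line.split()))

# A fabricated inbound SSH attempt that the security group allowed:
rec = parse_record(
    "2 123456789012 eni-0a1b2c3d 203.0.113.10 10.0.1.25 "
    "54321 22 6 10 840 1470000000 1470000060 ACCEPT OK")
print(rec["action"], rec["dstport"])  # ACCEPT 22
```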
Then it dawned on me… stuff is getting in, but how is it getting out – or more precisely – does it know HOW to get out? So back to the routing table I went, with the sudden realisation that there was no default route! Doh! And I’m supposed to be a network engineer!
So what’s the lesson here: an AWS Internet Gateway isn’t given the default route by default.. which makes sense – you might want to use a vASA or something instead.
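The failure in one picture: route lookup is longest-prefix match, and a fresh route table only gives you the local route for the VPC CIDR. A toy lookup (the CIDRs and gateway ID are made up) shows why return traffic had nowhere to go until 0.0.0.0/0 pointed at the Internet Gateway:

```python
import ipaddress

def lookup(routes, dest):
    """Longest-prefix match over a {cidr: target} dict, like a VPC route table."""
    ip = ipaddress.ip_address(dest)
    matches = [(net, target) for net, target in routes.items()
               if ip in ipaddress.ip_network(net)]
    if not matches:
        return None  # no route: the packet is dropped
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)[1]

routes = {"10.0.0.0/16": "local"}    # what a new route table starts with
print(lookup(routes, "8.8.8.8"))     # None – replies to the internet go nowhere
routes["0.0.0.0/0"] = "igw-0abc123"  # the route I forgot to add
print(lookup(routes, "8.8.8.8"))     # igw-0abc123
```

Traffic inside the VPC always matched the more specific local route, which is why instance-to-instance traffic worked while anything destined for the internet silently died.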
What good came out of this? I learnt how to use CloudWatch.
That’s right.. in bowing to the mentality that you should do what you preach, the 23,333 blog has moved home to AWS!
More in the next post on how this was accomplished…
I had absolutely no idea that AWS in its contemporary form came into existence in 2006, right around the time I started at Cisco and my first venture into the network industry. I was aware of AWS as a platform a few years later but like many (and with my lack of experience and insight) didn’t realise the impact or potential of it. Having worked in networking for 10 years now (Oct 2006-2016) and seen the dramatic change from old C6500 switching to modern SoC-based/merchant-silicon, as well as the more recent influx of ‘SDx’ technologies, I can see clearly now that platforms like AWS and Azure are quickly becoming the de facto choice for future IT strategies for both infrastructure and services.
With this in mind, it’s time to upshift the skill set and move on. I had originally planned to complete the VCDX-NV qualification and I may well still do this over the long term but, in the short-term I’m going to focus on retaining my CCIE R&S until it becomes Emeritus and put significant efforts into training for AWS, Azure and some more general architecture specialisations such as TOGAF.
2017 will be the year of Architecture for me.