An AWSome day of learning
Our developer Craig recently took part in the AWSome Day where AWS tell you all about their cloud hosting offerings. (AWSome... do you see what they did there?!)
We've got a few clients on AWS as we've been working with Amazon's hosting for a few years now, so this webinar was a great way for Craig to start to get up to speed with everything they offer. Here's his beginner's guide to AWS. If you need a hand moving to a more scalable, cloud-based infrastructure, do get in touch.
Unless you’ve been living under a rock for the last 20 something years, you’ve probably heard of Amazon.com. And if you haven’t (why are you even reading this?), they’re the largest online retailer in the world.
AWS is Amazon’s cloud computing subsidiary which offers a portfolio of more than 100 web services, from basic cloud computing used for web applications, through to machine learning algorithms used for artificial intelligence.
AWS's cloud infrastructure is probably one of the most comprehensive to date, offering services in 18 different regions globally, with as many as 54 different availability zones. It is secure, cost-effective and easily scalable; perfect for both simple and enterprise-grade applications.
Well Architected Framework
AWS define 5 "pillars" that together make up, and will help you achieve, what they call a "Well Architected Framework." These pillars are as follows:
- Security - Utilising AWS services to create granular control over users, VPCs, subnets, etc.
- Reliability - Select the AWS services that best suit your application.
- Performance - Ensure that you are utilising correctly scaled services that meet your application's performance requirements, adopting Auto Scaling Groups (ASGs) if necessary.
- Cost Optimisation - Only utilise the services, and the scale of services, that you actually require. Over-provisioning resources increases costs unnecessarily.
The 4 pillars above combine to support a strong fifth pillar, "Operational Excellence". Operational Excellence is seen as the collective result of the previous four: get it right, and you have a Well Architected Framework.
To help point you in the right direction for a Well Architected Framework, AWS offer a service called Trusted Advisor (TA). This audits your current infrastructure to see if you are overspending, estimates how much you could save, and highlights the offending services. TA also audits the security of the services you use, and highlights what needs to be done to keep things as secure as possible.
Rather than continually monitoring TA to see if your infrastructure is running as efficiently as possible, you can set up alerts based on a range of criteria to let you know when something can be optimised.
Virtual Private Clouds
VPCs contain your app's infrastructure. A VPC contains an array of subnets, and within those subnets sit your services, such as EC2 instances or RDS clusters.
The VPC helps you achieve a Well Architected Framework by giving you granular control over what each subnet is allowed to communicate with. This is where Routing Tables come in. A Routing Table is attached to each subnet within the VPC and tells the subnet what it is allowed to communicate with.
For example, we may have an EC2 instance for our app which sits within a subnet that has been set up to be accessible via the internet. We then have a different subnet which is not publicly accessible, but is only accessible directly from the first subnet. This private subnet would be a good home for RDS clusters.
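As a rough sketch of the idea (the CIDR ranges and gateway ID below are made up for illustration), you can think of each subnet's route table as a list of destination-to-target rules, where internet reachability hinges on a default route pointing at an internet gateway:

```python
# A minimal model of how routing tables control subnet reachability.
# Names and CIDR ranges are hypothetical.

def is_public(route_table):
    """A subnet is internet-facing if its route table sends
    0.0.0.0/0 (all non-local traffic) to an internet gateway."""
    return any(
        route["destination"] == "0.0.0.0/0" and route["target"].startswith("igw-")
        for route in route_table
    )

# Public subnet for the EC2 app servers: local VPC traffic plus a
# default route out through an internet gateway.
public_routes = [
    {"destination": "10.0.0.0/16", "target": "local"},
    {"destination": "0.0.0.0/0", "target": "igw-0abc123"},
]

# Private subnet for the RDS cluster: only local VPC traffic, so it
# is reachable from the app subnet but not from the internet.
private_routes = [
    {"destination": "10.0.0.0/16", "target": "local"},
]

print(is_public(public_routes))   # True
print(is_public(private_routes))  # False
```

The key point is that the private subnet simply has no route to the outside world; nothing needs to be "blocked", the path just doesn't exist.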
Glacier

Glacier is AWS's lowest-cost data storage solution. It is primarily used for long-term data storage: perfect for long-term backups, archiving data and disaster recovery.
Glacier is a highly secure archive storage option which can use AES-256 encryption, or even a BYOK (Bring Your Own Key) security solution.
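One common way to get data into Glacier is via an S3 lifecycle rule that transitions objects after a set age. The sketch below shows the shape of such a rule as boto3's `put_bucket_lifecycle_configuration` expects it; the rule ID, prefix and timings are illustrative assumptions:

```python
# A hypothetical lifecycle configuration: objects under "backups/"
# move to Glacier after 90 days and are deleted after roughly seven
# years. All names and numbers here are made up for illustration.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 2555},  # ~7 years
        }
    ]
}

rule = lifecycle_configuration["Rules"][0]
print(rule["Transitions"][0]["StorageClass"])  # GLACIER
```

This lets backups land in ordinary S3 for fast recent access, then age out to cheaper storage automatically.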
Simple Storage Service (S3)
We know that one of S3's main uses is asset storage. But something I learnt on the webinar is that it can also be set up to serve static websites; meaning, if a web server is overkill for a particular website, you can host it on S3 as a far more cost-effective option. A good use case for this would be maintenance or "under construction" pages, to help keep costs down for the client whilst another project is being worked on.
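Enabling this is mostly a matter of attaching a website configuration to a public bucket. A minimal sketch of the structure boto3's `put_bucket_website` accepts (the document names here are assumptions):

```python
# The website configuration an S3 bucket needs in order to serve a
# static site. Index and error document names are assumptions.
website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

print(website_configuration["IndexDocument"]["Suffix"])  # index.html
```

Once applied (and the objects made publicly readable), S3 serves the site from a region-specific website endpoint, with no server to patch or pay for.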
CloudWatch

CloudWatch (CW) can be seen like the dashboard in a car. Where the dashboard tells you how your car is running, CW provides metrics which allow you to monitor the performance and health of services such as EC2 instances or load balancers.
CW can be used to set up alerts that notify selected users when certain criteria and thresholds are met, e.g. CPU utilisation is >70%.
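As a sketch of that example, these are the parameters such an alarm would take, in the shape boto3's `put_metric_alarm` expects (the instance ID and SNS topic ARN are made up):

```python
# Hypothetical CloudWatch alarm parameters: alert when average CPU
# utilisation exceeds 70%. Instance ID and topic ARN are made up.
alarm = {
    "AlarmName": "high-cpu",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0abc123"}],
    "Statistic": "Average",
    "Period": 300,           # evaluate over 5-minute windows
    "EvaluationPeriods": 2,  # must breach two windows in a row
    "Threshold": 70.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],
}

print(alarm["Threshold"])  # 70.0
```

Requiring two consecutive breaches (`EvaluationPeriods`) is a common way to avoid paging people over a momentary spike.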
CW is also one of a trio of services that make up AWS's Elastic Services, used for Auto Scaling Groups.
Elastic Services/Elastic Load Balancers
As mentioned above, AWS's Elastic Services usually utilise three main components: Elastic Load Balancer, CloudWatch and Auto Scaling. If the load balancers aren't set up to use round-robin, they point traffic to the most underutilised server based on data pulled from CloudWatch.
CloudWatch may then be set up to trigger auto scaling groups to fire up more instances if required. Different parameters can be set to specify how many new instances should be launched. For example, it could be set up to increase the number of instances in the auto scaling group by 10% of the current desired amount.
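A simplified sketch of that percentage-based adjustment (real ASG scaling policies have more options, such as cooldowns and minimum adjustment sizes, so treat this as the core arithmetic only):

```python
import math

def scale_out(desired, percent_change, maximum):
    """Grow the desired instance count by a percentage, adding at
    least one instance and never exceeding the group's maximum."""
    increase = max(1, math.ceil(desired * percent_change / 100))
    return min(desired + increase, maximum)

print(scale_out(desired=10, percent_change=10, maximum=20))  # 11
print(scale_out(desired=4, percent_change=10, maximum=20))   # 5  (rounds up to at least 1)
print(scale_out(desired=19, percent_change=10, maximum=20))  # 20 (capped at the maximum)
```

The cap matters: the maximum group size is what keeps a runaway alarm from scaling your bill instead of your app.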
Auto Scaling Groups
I don't think there's too much more to say about ASGs, but one thing that stood out to me was the two main types of auto scaling setups: reactive and predictive. At the moment we only utilise reactive auto scaling policies, which spin up new EC2 instances if certain thresholds are exceeded. However, ASGs can also be set up to be predictively triggered based on upcoming events. For example, if we have an ecommerce shop and are predicting a lot of traffic because of a "Black Friday" event, we can tell the ASG to spin up more servers during this time period.
Identity and Access Management
Best practice for setting up IAM is to set up your key infrastructure using the root account, then create another IAM user with restricted privileges for general-purpose usage and monitoring.
However, users can be set up to assume different roles that elevate their permissions when required. For example, if a user needs to perform a specific action requiring a certain level of access, they can temporarily assume a different role to perform it. This adds a small layer of security and reduces the impact of human error.
To add another layer of security, these elevated "assumed" roles can be set to trust only specific users that meet certain criteria. For example, the user "Tom" has read access to AWS services; but if Tom is logged in to the dashboard from a specific IP address, he can assume an admin role and go on to perform his desired action. Logged in from any other IP address, Tom can still use the account as normal but cannot assume the admin role.
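In IAM terms, that IP restriction lives in the role's trust policy. A hypothetical sketch for the "Tom" example above (the account ID and IP range are made up):

```python
# A hypothetical trust policy for the admin role: the user "Tom" may
# assume it, but only from a specific office IP range. The account ID
# and CIDR block are illustrative.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/Tom"},
            "Action": "sts:AssumeRole",
            "Condition": {
                # aws:SourceIp restricts where the AssumeRole call
                # itself may come from.
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            },
        }
    ],
}
```

From any other address the `sts:AssumeRole` call is simply denied, while Tom's ordinary read permissions are unaffected.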
Machine Learning/Deep Learning
Using AWS's machine learning tools, developers can build algorithm models for deep learning applications. One of the use cases for their machine learning service in particular stood out to me.
The particular use case was for missing children. In a nutshell, the deep learning application takes note of when the child went missing, runs an algorithm to age an image of the child accordingly, then cross-references the result against any images publicly available on the internet.
So there you have it...
An overview of AWS's key services. If you think you're outgrowing your existing hosting, or planning to start something BIG, please get in touch about how we could help you get set up on AWS.