While speaking with participants at local meetups I constantly hear that AWS is intimidating. That is completely understandable. Amazon offers nearly 100 services ranging from deployment, storage, and computing to DNS, mobile, and IoT. They hold a massive majority stake in the cloud-infrastructure industry. Where does someone new to AWS even start learning about its services?
For starters, AWS lets new users open a free tier account, the caveat being that you must share your credit card information. Don’t see this as a trap: once your free tier account is set up, the AWS console is very clear about when you are, or are not, provisioning resources within the free tier limits. However, if you mistakenly leave certain resources up for too long or provision resources outside the free tier limits, AWS will charge you. For extra protection you can set up billing alarms that warn you once your account has accrued costs beyond a certain limit. Online there is a vast array of tutorials and courses with curriculums that teach you the fundamentals of AWS while remaining within the free tier limits, my favorite being A Cloud Guru. Here’s a link to an overview of AWS and tutorials provided by AWS.
AWS/Cloud Computing Benefits
1. Pay As You Go Pricing
* Unlike the olden days, when you had to contract with data centers for a fixed period of time and a fixed set of resources based on the hardware you required, the pay-as-you-go pricing model only charges you for the resources you have provisioned, at a time/transaction rate.
2. Instant Delivery of Compute Power, Database Storage, Applications, and other IT Resources
* After you made those contracts back in the day, data centers then had to build and engineer the necessary hardware for the resources you required. Now, through the cloud, you can provision those resources virtually in a matter of minutes!
3. Access to Servers, Storage, Databases, and other IT resources
* Say your application’s servers need updates or upgrades: you can apply them quickly and easily when using cloud computing
Terminology
The purpose of this tutorial is to introduce you to some fundamental AWS services that are used ubiquitously. By the end, you will be able to provision an AWS EC2 instance (a server) that connects to the internet and hosts a simple frontend website. And yes, this can all be done while staying within AWS’s free tier limits. To start, let’s break down some AWS terminology that we’ll be using constantly.
1. Region and Availability Zone (AZ)
* AWS hosts data centers regionally across the globe. A Region contains the physical hardware AWS uses to provision your virtual resources, and each Region is divided into Availability Zones. Depending on the resource, what you provision must be designated to a specific Region and AZ. Which Region and AZ to choose depends on the market you’re trying to reach and how you want to improve fault tolerance.
2. Elastic Compute Cloud (EC2)
* AWS’s name for the virtual servers users provision. The term “instance” is synonymous with EC2. AWS offers an array of instance types that vary depending on the operational purpose of your server.
3. Amazon Machine Image (AMI)
* AWS’s name for the virtual machine images instances are launched from. AMIs include the machine’s OS, storage defaults, and the tools/services used for applications
4. Elastic Block Store (EBS)
* The volume, or virtual hard disk, attached to an EC2 instance for server storage. AWS offers an array of EBS types that vary depending on the operational purpose of your disk drive.
5. Security Group
* A set of firewall rules controlling the inbound and outbound traffic allowed to reach and leave your resources
Terraform
Currently Terraform is still in beta, at version 0.9.11. It was created by the folks at HashiCorp and can be used in conjunction with AWS, Google Cloud, Microsoft Azure, Oracle Public Cloud, DigitalOcean, Cloudflare, Docker, Chef, Bitbucket, et cetera.
To start you’ll need Terraform installed and a handy text editor to write your Terraform code.
1. Create a directory for this project and in that directory create a file named “main.tf”. As a reference, the complete project structure will look like:
.
|_main.tf
|_deploy-website/
| |_ansible.cfg
| |_inventory.ini
| |_playbook-deploy-website.yml
|
|_tutorial-site/
| |_files
| |_in
| |_your
| |_website
2. In main.tf set your provider first
provider "aws" {
region = "us-east-1"
access_key = "your_aws_access_key"
secret_key = "your_aws_secret_key"
}
* AWS access and secret keys are generated within the AWS console of your account
* The region you declare is where your virtual resources will be provisioned.
Region Name | Region |
---|---|
US East (N. Virginia) | us-east-1 |
US East (Ohio) | us-east-2 |
US West (N. California) | us-west-1 |
US West (Oregon) | us-west-2 |
Canada (Central) | ca-central-1 |
EU (Frankfurt) | eu-central-1 |
EU (Ireland) | eu-west-1 |
EU (London) | eu-west-2 |
Asia Pacific (Mumbai) | ap-south-1 |
Asia Pacific (Singapore) | ap-southeast-1 |
Asia Pacific (Sydney) | ap-southeast-2 |
Asia Pacific (Tokyo) | ap-northeast-1 |
Asia Pacific (Seoul) | ap-northeast-2 |
South America (Sao Paulo) | sa-east-1 |
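As a side note, hard-coding keys in main.tf works for a quick test, but the AWS provider can also read credentials from standard environment variables, which keeps secrets out of version control. A minimal sketch (the values are placeholders — use your real keys):

```shell
# Export placeholder credentials; Terraform's AWS provider picks these up
# automatically when access_key/secret_key are omitted from the provider block.
export AWS_ACCESS_KEY_ID="your_aws_access_key"
export AWS_SECRET_ACCESS_KEY="your_aws_secret_key"
export AWS_DEFAULT_REGION="us-east-1"
```

With these set, the provider block can shrink to just the region declaration, or even be empty.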
3. Next create your EC2 resource
resource "aws_instance" "example-instance" {
count = 1
ami = "ami-4836a428"
instance_type = "t2.micro"
security_groups = ["${aws_security_group.instance.id}"]
key_name = "your_personal_key"
tags {
Name = "example-instance"
}
}
* count references the number of instances you want to provision.
* ami declares the AMI the EC2 instance will use. The value is an AMI id found within the AWS console. Note that different regions offer different AMIs, so make sure you choose an AMI compatible with the region you’re using. Also, not all AMIs fall within the free tier limits, so make sure you have that correct too.
* instance_type references the instance type you want to provision; t2.micro falls within the free tier limits.
* security_groups attaches security groups to the instance. The “${}” notation is Terraform interpolation. In this case we are passing the id value of an aws_security_group we will be creating in the next step.
* key_name is the RSA key pair connected to your AWS account. This will allow you to ssh into your EC2 instance for admin management. If you have not generated a key pair within your AWS console, here are instructions on how to do that.
* tags are simply key-value pairs you use to better organize your AWS resources. For example, here we are naming our instance example-instance.
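Since AMI ids differ per region, you can optionally have Terraform look the id up for you instead of hard-coding one. Here’s a sketch using the aws_ami data source (the name filter below is my assumption of the Amazon Linux naming convention; adjust it for the image you actually want):

```hcl
# Look up the most recent Amazon Linux AMI in the provider's region.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn-ami-hvm-*-x86_64-gp2"]
  }
}
```

You could then set ami = "${data.aws_ami.amazon_linux.id}" in the instance resource and the configuration would work in any region.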
4. Create a security group for the instance to allow ssh communication and internet traffic
* What we’re doing here is saying, from the instance’s perspective: allow any IP to ssh into me (port 22), allow any IP to connect to me over the internet through HTTP (port 80), and allow me (the instance) to talk to anything on all ports through all protocols (“-1”).
resource "aws_security_group" "instance" {
name = "example-instance"
ingress {
from_port = 22
to_port = 22
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group_rule" "http" {
security_group_id = "${aws_security_group.instance.id}"
type = "ingress"
from_port = 80
to_port = 80
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group_rule" "all_outbound" {
security_group_id = "${aws_security_group.instance.id}"
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
* ingress refers to the inbound traffic allowed to reach the instance
* egress refers to the outbound traffic the instance is allowed to send
* from_port and to_port set the port range the rule applies to
* protocol is the communication protocol allowed for that rule
* cidr_blocks denotes the IP ranges allowed to communicate with the instance under that rule
* security_group_id attaches the security group rules to the instance’s security group
* Remember that in the instance resource we attached this security group, which now has all the security group rules attached to it, through the security_groups declaration
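One optional addition before launching: an output block, so Terraform prints the instance’s public IP after provisioning (the output name here is my own choice):

```hcl
# Print the instance's public IP after `terraform apply` completes.
output "instance_public_ip" {
  value = "${aws_instance.example-instance.public_ip}"
}
```

After terraform apply finishes, terraform output instance_public_ip prints the address, which is exactly the piece of information the Ansible inventory will need later.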
That’s it. You’re ready to launch an AWS server that can connect to the internet! But what are you actually sharing with the world? Nothing yet. So let’s now get into Ansible.
Ansible
Here’s an overview of Ansible. Essentially it’s an IT automation engine. It can do everything from provisioning AWS resources to installing software to setting OS users. The framework was designed for multi-tier deployments, and its playbooks are written in YAML.
To run Ansible you have to create playbooks. Playbooks use modules to run the tasks you’ve defined. Modules are programs copied to and executed on your target nodes, then removed when the playbook finishes running. By default Ansible connects to remote hosts via ssh. You could write about Ansible forever, like you can about AWS, but instead let’s get started writing a playbook.
1. In your project dir create another dir, deploy-website/
2. In deploy-website/ create an inventory.ini file
3. Set up your inventory variables
[tutorial-ec2]
ex.amp.le.ip
[tutorial-ec2:vars]
ansible_connection = ssh
ansible_ssh_user = ec2-user
ansible_ssh_private_key_file = /path/to/your/private_key
* Under the [tutorial-ec2] declaration we tell Ansible the nodes we want to run our playbook on by listing the relevant IPs. For now, don’t worry about the IPs to place there, because we will use Terraform to stream edit (sed) that information into this file for us
* Under [tutorial-ec2:vars] we set variables for Ansible to use while running our playbook.
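To see what that stream edit amounts to, here’s a self-contained sketch of the substitution, run on a scratch copy of the inventory (54.210.167.204 is a made-up address standing in for your instance’s real public IP):

```shell
# Build a scratch inventory containing the placeholder host line...
printf '[tutorial-ec2]\nex.amp.le.ip\n' > inventory.example.ini
# ...then swap the placeholder for the instance's public IP, as Terraform
# (via a local-exec provisioner, for instance) would do against inventory.ini.
sed -i.bak 's/ex\.amp\.le\.ip/54.210.167.204/' inventory.example.ini
cat inventory.example.ini
```

After the edit, the [tutorial-ec2] group lists the real IP, so Ansible knows which host to ssh into.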
4. In deploy-website/ create an ansible.cfg file
[defaults]
host_key_checking = False
display_args_to_stdout = true
force_color = 1
gathering = smart
inventory = inventory.ini
[ssh_connection]
retries = 7
* In this file we are setting default variables for Ansible to consider when running our playbook. The [ssh_connection] option tells Ansible that if a connection to our instance fails, it should retry 7 times and then give up.
5. In deploy-website create a playbook-deploy-website.yml file. This is where the magic will happen!
- hosts: tutorial-ec2
become: true
tasks:
- name: create www dir
file: path=/var/www state=directory
- name: create tutorial-site dir
file: path=/var/www/tutorial-site state=directory
- name: deploy tutorial-site
synchronize:
src: ~/tutorial-site/html/
dest: /var/www/tutorial-site
- name: install NGINX
raw: yum install nginx -y
- name: remove default NGINX.conf on server
raw: rm /etc/nginx/nginx.conf
- name: copy local NGINX.conf to server
synchronize:
src: ~/tutorial-site/nginx.conf
dest: /etc/nginx/nginx.conf
- name: starting NGINX
service:
name: nginx
state: started
Let’s break down what’s going on here. To start, hosts is equal to everything presented in the inventory.ini file. Next, the become key tells Ansible to become sudo (sudo su) once on the remote machine. Then we get into our playbook tasks.
1. create www dir – This uses the file module and creates a directory on the remote host called www
2. create tutorial-site dir – Does exactly what the last task did, but for the tutorial-site directory. This is where you’ll deploy your site files
3. deploy tutorial-site – We use the synchronize module in this task to quickly deploy our site files. The module wraps the rsync utility on both the local and remote machines to execute a quick deployment
4. install nginx – The raw module is used here. It simply runs a command in the remote host’s shell. For this particular task we are installing NGINX.
5. remove NGINX.conf – When NGINX is installed, a predefined conf file is provided. See an example with notation here. By default NGINX is configured to serve files from the /var/www/html directory. I simply could have had you copy our html directory into that directory, but for the purpose of learning, I wanted to share more about NGINX configuration. When using NGINX we have the freedom to customize its features for our server’s purposes, for example proxying: oftentimes folks use an NGINX web server to proxy requests to another server.
6. copy local NGINX.conf file – Our NGINX.conf file has port 80 serving content from /var/www/tutorial-site. NGINX needs to be told where to serve files from. As I mentioned before, by default NGINX serves content from the /var/www/html directory; instead we want our content served from /var/www/tutorial-site. Our local NGINX.conf file implements that rule, so we need to make sure the remote NGINX service complies by copying the file over to it.
7. starting NGINX – Now that NGINX is configured properly and our website files are all in the right place, the last thing to do is start NGINX. This task uses the service module and simply starts NGINX.
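For reference, here is a minimal sketch of what the local nginx.conf described in tasks 5 and 6 might look like. This is my own assumption of a minimal file, not the tutorial’s exact config; it serves /var/www/tutorial-site over HTTP on port 80:

```nginx
# Minimal config: serve /var/www/tutorial-site over HTTP on port 80.
user nginx;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    server {
        listen 80;
        root   /var/www/tutorial-site;
        index  index.html;
    }
}
```

With the instance running and inventory.ini pointing at its public IP, you would then run the playbook from deploy-website/ with ansible-playbook playbook-deploy-website.yml (the inventory path is picked up from ansible.cfg).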
Conclusion
That’s it! You’ve deployed a simple site onto an AWS EC2 instance in a matter of minutes! There are so many different ways to go about deploying a website, provisioning AWS resources, and writing Ansible playbooks. I hope this tutorial at least gives you a little insight into tools that are commonly used today. Explore, tinker, learn, and master it all!