Website Hosting Using EFS as Storage with Terraform

Rupali Gurjar
7 min read · Sep 3, 2020

What is Amazon EFS?

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Task Overview

Perform Task 1 again, using the EFS service instead of EBS on AWS:

Create/launch the application using Terraform.

Here is my Task 1:

https://www.linkedin.com/pulse/task1-aws-infrastructure-terraform-rupali-gurjar

1. Create a key and a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the existing/provided key and the security group created in step 1.

4. Launch one volume using the EFS service and attach it to your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code to a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Task Description

For this task I am going to use the Visual Studio Code text editor. So, as prerequisites:

1. Download a Terraform extension for the editor.

2. Install the AWS CLI on our local system.

3. Create an IAM user on AWS.

4. Create a named profile from that IAM user's credentials (the provider block below refers to it as "Rupali").

Let's begin this task!

Step 1: First of all, create a main file containing the profile name and region.

Here I am going to create separate files for each service or sub-service so that they are easy to manage. I will put all those files inside a folder "aws_resources" and load them through a module; one possible layout is sketched after the code below.

provider "aws" {
region = "ap-south-1"
profile = "Rupali"
}
module "resources" {
source = "./aws_resources"
}
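As a rough sketch of how the files could be organized (the file names inside aws_resources are my own choice for illustration, not fixed by Terraform):

aws_task/
  main.tf              -> provider and module block shown above
  aws_resources/
    security.tf        -> security group
    key.tf             -> key pair
    instance.tf        -> EC2 instance
    efs.tf             -> EFS file system and mount target
    s3.tf              -> S3 bucket, bucket object and locals
    cloudfront.tf      -> CloudFront distribution and null resources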

Step 2: Create a security group which allows port 80

resource "aws_security_group" "allow" {
name = "security_grp1"
description = "Allow TLS inbound traffic"
vpc_id = "vpc-da213db2"
ingress {
description = "ssh"
from_port = 0
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "http"
from_port = 0
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "security_grp1"
}
}

Here we can see that it is created.

Note: Here we also have to allow one more protocol, NFS (port 2049), so that the instance can reach the EFS mount target; a sketch of that rule is shown below.
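A minimal sketch of that extra rule, added inside the same aws_security_group block (2049 is the standard NFS port that EFS uses):

ingress {
  description = "nfs"
  from_port   = 2049
  to_port     = 2049
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}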

Step 3: Create the key

To create the key, I used the RSA algorithm.

provider "tls" {}

resource "tls_private_key" "this" {
  algorithm = "RSA"
}

module "key_pair" {
  source     = "terraform-aws-modules/key-pair/aws"
  key_name   = "rups-deployer-key"
  public_key = tls_private_key.this.public_key_openssh
}
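If you also want to SSH into the instance manually later, one optional sketch (assuming the hashicorp/local provider is available) is to write the generated private key to a .pem file:

resource "local_file" "private_key" {
  # save the generated private key locally for manual SSH access
  content         = tls_private_key.this.private_key_pem
  filename        = "rups-deployer-key.pem"
  file_permission = "0400"
}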

Step 4: Launch an EC2 instance

Here I used the key and security group that I created above.

resource "aws_instance" "web" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = "rups-deployer-key"
vpc_security_group_ids = [ aws_security_group.allow.id ]
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.this.private_key_pem
host = aws_instance.web.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd php git -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
]
}
tags = {
Name = "ec2-instance"
}
}

Here I used the remote-exec provisioner to install the httpd service, git, and php (plus amazon-efs-utils, which provides the EFS mount helper used later) in the instance.

Step 5: Launch one volume using the EFS service and attach it to your VPC, then mount that volume onto /var/www/html

Here we have to mention the subnet in which we want to create the mount target for this volume.

resource "aws_efs_file_system" "efs_file" {
creation_token = "task2"
tags = {
Name = "EfsVolume"
}
}
resource "aws_efs_mount_target" "mount_target" {
file_system_id = aws_efs_file_system.efs_file.id
subnet_id = aws_instance.web.subnet_id
}
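If the mount target should use the security group from Step 2 (the one with the extra NFS rule) instead of the subnet's default security group, it can reference that group explicitly. A hedged variant of the same resource:

resource "aws_efs_mount_target" "mount_target" {
  file_system_id = aws_efs_file_system.efs_file.id
  subnet_id      = aws_instance.web.subnet_id
  # attach the security group that allows NFS (port 2049)
  security_groups = [aws_security_group.allow.id]
}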

Step 6: Now we have to mount this volume onto the /var/www/html folder and download the code from GitHub.

resource "null_resource" "nullremote1"  {
depends_on = [
aws_efs_mount_target.mount_target,
]
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.this.private_key_pem
host = aws_instance.web.public_ip
}
provisioner "remote-exec" {inline = [
"efs_id= ${aws_efs_file_system.efs_file.id} " ,
"sudo mount -t efs $efs_id:/ /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/rups04/Hybrid_cloud_task2.git /var/www/html/"
]
}
}
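The mount above lasts only until the next reboot. If you want it to persist, one optional sketch is to also append an fstab entry (this assumes amazon-efs-utils is installed as in Step 4; the resource name persist_mount is just illustrative):

resource "null_resource" "persist_mount" {
  depends_on = [
    null_resource.nullremote1,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.this.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # record the EFS volume in /etc/fstab so it is remounted on reboot
      "echo '${aws_efs_file_system.efs_file.id}:/ /var/www/html efs defaults,_netdev 0 0' | sudo tee -a /etc/fstab",
    ]
  }
}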

Step 7: Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

resource "aws_s3_bucket" "b" {
bucket = "rupsweb2"
acl = "public-read"
force_destroy = true
}
resource "aws_s3_bucket_object" "object" {
bucket = aws_s3_bucket.b.bucket
key = "image.jpg"
source = "C:/Users/LENOVO/Desktop/HybridTask2/image.jpg"
etag = filemd5("C:/Users/LENOVO/Desktop/HybridTask2/image.jpg")
acl = "public-read"
}
locals {
s3_origin_id = "myS3Origin"
}
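The resource above uploads a single image. If the repo contains several images in one local folder, a hedged sketch using fileset() and for_each (Terraform 0.12.6+) can upload them all at once; the folder path is simply the same one used above:

resource "aws_s3_bucket_object" "images" {
  # one object per .jpg file found in the local folder
  for_each = fileset("C:/Users/LENOVO/Desktop/HybridTask2", "*.jpg")

  bucket = aws_s3_bucket.b.bucket
  key    = each.value
  source = "C:/Users/LENOVO/Desktop/HybridTask2/${each.value}"
  etag   = filemd5("C:/Users/LENOVO/Desktop/HybridTask2/${each.value}")
  acl    = "public-read"
}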

The bucket is created.

S3 bucket object:

Step 8: Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

resource "aws_cloudfront_distribution" "cloud_front" {
origin {
domain_name = aws_s3_bucket.b.bucket_domain_name
origin_id = local.s3_origin_id
custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "match-viewer"
origin_ssl_protocols = [ "TLSv1","TLSv1.1","TLSv1.2" ]
}
}
enabled = trueis_ipv6_enabled = true
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "redirect-to-https"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
viewer_certificate {
cloudfront_default_certificate = true
}
restrictions {
geo_restriction {
restriction_type = "whitelist"
locations = ["US", "IN", "GB", "DE"]
}
}
}

Now, here is the most important part: how to copy the distribution's domain_name into the code we have stored in the /var/www/html/ folder.

Here I used some basic Linux concepts. When we launch an EC2 instance, the default user is ec2-user. This user has limited privileges, so we switch to the root user, which has full privileges. For this I fed the command to a root shell through a here-document:

sudo su <<END ... END

and copied the domain name into a file. Now, to bring all of this up, we only need to run two commands and this whole infrastructure is ready.

resource "null_resource" "nullremote2"  {
depends_on = [
aws_cloudfront_distribution.cloud_front,
]
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.this.private_key_pem
host = aws_instance.web.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo su <<END",
"sudo su - root",
" echo -n ${aws_cloudfront_distribution.cloud_front.domain_name} > /var/www/html/domain_name.txt ",
"END"
]
}
}
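To also see the distribution's domain name on the terminal after terraform apply, an optional output block can be added (a small sketch):

output "cloudfront_domain_name" {
  # prints the CloudFront domain name at the end of terraform apply
  value = aws_cloudfront_distribution.cloud_front.domain_name
}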

To download the required plugins, run this command:

terraform init

It has been successfully initialized; now apply it:

terraform apply -auto-approve

Everything is created successfully, and now we can see the final output. :)

Now, if we want to destroy this whole infrastructure, we need to run a single command:

terraform destroy

Here is my GitHub repository.

Thanks for Reading :)
