Developing an Infrastructure using Terraform with the help of EFS

Deepanshu Chajgotra
6 min read · Aug 4, 2020

Amazon Elastic File System

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.
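
Under the hood, every instance simply mounts the same file system over NFS, so each web server sees an identical /var/www/html. As a minimal manual sketch (the file system ID fs-12345678 and the region are placeholders; this is exactly the step the Terraform provisioner later in this article automates):

# placeholders: fs-12345678 is the EFS file system ID, ap-south-1 the region
sudo yum install -y nfs-utils
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.ap-south-1.amazonaws.com:/ /var/www/html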

Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.
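
As a minimal sketch (the resource and file names here are illustrative, not part of the task below), a configuration file may contain nothing more than a provider and a single resource:

# main.tf -- illustrative only
provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
}

Running terraform init downloads the provider plugins, terraform plan shows the execution plan, and terraform apply builds the described infrastructure.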

Description of Task:

1. Create a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the existing or newly created key pair and the security group created in step 1.

4. Launch one volume using the EFS service, attach it to your VPC, and mount that volume on /var/www/html.

5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Solution:

  • Configure the AWS provider with the profile
//user//
provider "aws" {
  region  = "ap-south-1"
  profile = "deepanshu"
}
  • Keypair:
//creation of private key//
resource "tls_private_key" "task2key" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "aws_key_pair" "mytask2key" {
  depends_on = [tls_private_key.task2key]
  key_name   = "mytask2key"
  public_key = tls_private_key.task2key.public_key_openssh
}
  • Create a security group that allows SSH (22), HTTP (80), and NFS (2049).
//creation of security group//
resource "aws_security_group" "mytask2_sg" {
  depends_on  = [aws_key_pair.mytask2key]
  name        = "mytask2_sg"
  description = "Allow SSH AND HTTP and NFS inbound traffic"
  vpc_id      = "vpc-77f2ef1f"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "mytask2_sg"
  }
}
  • Launching an EC2 instance:
//launching aws instance//
resource "aws_instance" "mytask2_os" {
  depends_on      = [aws_key_pair.mytask2key, aws_security_group.mytask2_sg]
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.mytask2key.key_name
  security_groups = [aws_security_group.mytask2_sg.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.task2key.private_key_pem
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum update -y",
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "mytask2_os"
  }
}
  • Launch one volume using the EFS service.
//creation of EFS file system//
resource "aws_efs_file_system" "allow_nfs" {
  depends_on     = [aws_security_group.mytask2_sg, aws_instance.mytask2_os]
  creation_token = "allow_nfs"

  tags = {
    Name = "allow_nfs"
  }
}
  • Mount the EFS file system on the subnet in the VPC where the instance is launched.
//mounting EFS file system//
resource "aws_efs_mount_target" "alpha" {
  depends_on      = [aws_efs_file_system.allow_nfs]
  file_system_id  = aws_efs_file_system.allow_nfs.id
  subnet_id       = aws_instance.mytask2_os.subnet_id
  security_groups = [aws_security_group.mytask2_sg.id]
}
  • Mount that volume on /var/www/html and pull the code from GitHub.
resource "null_resource" "null-remote-1"  {
depends_on = [
aws_efs_mount_target.alpha,
]
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.mytask2pkey.private_key_pem
host = aws_instance.mytask2_os.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo echo ${aws_efs_file_system.allow_nfs.dns_name}:/var/www/html efs defaults,_netdev 0 0 >> sudo /etc/fstab",
"sudo mount ${aws_efs_file_system.allow_nfs.dns_name}:/ /var/www/html",
"sudo curl https://github.com/xyz/project.git > index.html", "sudo cp index.html /var/www/html/",
]
}
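
To sanity-check this step, you can SSH into the instance and confirm both the NFS mount and the served page (the IP address and key file name below are placeholders):

# placeholders: 13.233.x.x is the instance public IP, mytask2key.pem the saved private key
ssh -i mytask2key.pem ec2-user@13.233.x.x
df -hT /var/www/html     # should show an nfs4 mount backed by the EFS DNS name
curl http://localhost/   # should return the index.html pulled from the repo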
  • Create an S3 bucket and create an object in the bucket.
//creation of S3 bucket//
resource "aws_s3_bucket" "mytask2-s3bucket" {
  depends_on = [
    null_resource.null-remote-1,
  ]
  bucket        = "mytask2-s3bucket"
  force_destroy = true
  acl           = "public-read"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Id": "MYBUCKETPOLICY",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mytask2-s3bucket/*"
    }
  ]
}
POLICY
}

//creation of an object in S3 bucket//
resource "aws_s3_bucket_object" "mytask2-object" {
depends_on = [ aws_s3_bucket.mytask2-s3bucket,
null_resource.null-remote-1,
]
bucket = aws_s3_bucket.mytask2-s3bucket.id
key = "one"
source = "C:/Users/user/Downloads/deepu.jpg"
etag = "C:/Users/user/Downloads/deepu.jpg"
acl = "public-read"
content_type = "image/jpg"
}
locals {
  s3_origin_id = aws_s3_bucket.mytask2-s3bucket.id
}
  • Create a CloudFront distribution using the S3 bucket and integrate the CloudFront URL into the code.
resource "aws_cloudfront_origin_access_identity" "o" {
comment = "this is done"
}
resource "aws_cloudfront_distribution" "mytask2-s3_distribution" {
origin {
domain_name = aws_s3_bucket.mytask2-s3bucket.bucket_regional_domain_name
origin_id = local.s3_origin_id
s3_origin_config {
origin_access_identity = aws_cloudfront_origin_access_identity.o.cloudfront_access_identity_path
}
}
enabled = true
is_ipv6_enabled = true
comment = "Some comment"
default_root_object = "terr.png"
logging_config {
include_cookies = false
bucket = aws_s3_bucket.mytask2-s3bucket.bucket_domain_name
}
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
# Cache behavior with precedence 0
ordered_cache_behavior {
path_pattern = "/content/*"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD", "OPTIONS"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
min_ttl = 0
default_ttl = 86400
max_ttl = 31536000
compress = true
viewer_protocol_policy = "redirect-to-https"
}
price_class = "PriceClass_200"
restrictions {
geo_restriction {
restriction_type = "whitelist"
locations = ["US", "IN","CA", "GB", "DE"]
}
}
tags = {
Environment = "production"
}
viewer_certificate {
cloudfront_default_certificate = true
}
}
output "out3" {
value = aws_cloudfront_distribution.mytask2-s3_distribution.domain_name
resource "null_resource" "null-remote2" {
depends_on = [ aws_cloudfront_distribution.mytask2-s3_distribution, ]
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.task2key.private_key_pem
host = aws_instance.mytask2_os.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo su << EOF",
"echo \"<img src='https://${aws_cloudfront_distribution.mytask2-s3_distribution.domain_name}/${aws_s3_bucket_object.mytask2-object.key }'>\" >> /var/www/html/index.html",
"EOF"
]
}
  • Running the code

Now we use the following commands to initialize and run the code.

terraform init
terraform apply

Now we use the following command to destroy the entire infrastructure.

terraform destroy
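
Put together, a typical run from the project directory looks like the sketch below (terraform plan and the -auto-approve flag are optional; without the flag Terraform asks for confirmation):

terraform init                   # download the AWS provider plugins
terraform plan                   # preview the execution plan
terraform apply -auto-approve    # create the key pair, SG, EC2, EFS, S3 and CloudFront resources
terraform destroy -auto-approve  # tear everything down when finished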

Finally, the task is completed.

Thanks for reading.
