Monday, March 13, 2023

AWS- Capstone Project

 

Description

The Blog Page Application project deploys a blog application written in the Django Framework as a web application on AWS Cloud Infrastructure. The infrastructure consists of an Application Load Balancer with an Auto Scaling Group of Elastic Compute Cloud (EC2) Instances and a Relational Database Service (RDS) instance inside a defined VPC. CloudFront and Route 53 sit in front of the architecture and manage the traffic securely. Users can upload pictures and videos to their own blog pages, and these files are kept in an S3 Bucket.

Project Details



  • Your company has recently taken on a project that aims to serve a blog web application in an isolated VPC environment. You and your colleagues have started to work on the project. The developer team has developed the application, and you, as the DevOps team, are going to deploy the app in the production environment.

  • The application was coded by the full-stack development team and handed to you as the DevOps team. The app allows users to write their own blog pages; the user registration data should be kept in a separate MySQL database in the AWS RDS service, and pictures or videos should be kept in an S3 bucket. The object list of the S3 bucket containing the pictures and videos is recorded in a DynamoDB table.

  • The web application will be deployed using Django framework.

  • The web application should be securely accessible via web browser from anywhere.

  • You are requested to push your program to the project repository on GitHub. You are going to pull it onto the web servers in the production environment on AWS Cloud.

In the architecture, you can configure your infrastructure using the following:

  • The application stack should be created with new AWS resources.

  • Specifications of VPC:

    • The VPC spans two AZs, and each AZ has one public and one private subnet.

    • The VPC has an Internet Gateway.

    • One of the public subnets has a NAT Instance.

    • You may create a new instance as a Bastion host in a public subnet, or you can use the NAT Instance as the Bastion host.

    • There should be managed private and public route tables.

    • Route tables should be arranged with routing policies and subnet associations appropriate to the public and private subnets.

  • You should create an Application Load Balancer with an Auto Scaling Group of Ubuntu 18.04 EC2 Instances within the created VPC.

  • You should create an RDS instance within one of the private subnets of the created VPC and configure the application to use it.

  • The Auto Scaling Group should use a Launch Template in order to launch the instances needed, and should be configured to;

    • use all Availability Zones on created VPC.

    • set desired capacity of instances to  2

    • set minimum size of instances to  2

    • set maximum size of instances to  4

    • set health check grace period to  90 seconds

    • set health check type to  ELB

    • Scaling Policy --> Target Tracking Policy

      • Average CPU utilization (set Target Value to 70%)

      • warm-up period before including in metric ---> 200 seconds

      • Set notifications to your email address for instance launch, terminate, fail-to-launch, and fail-to-terminate situations

  • ALB configuration;

    • Application Load Balancer should be placed within a security group which allows HTTP (80) and HTTPS (443) connections from anywhere.

    • A certificate should be created for secure connections (HTTPS)

      • To create the certificate, AWS Certificate Manager can be utilized.
    • The ALB redirects traffic from HTTP to HTTPS

    • Target Group

      • Health Check Protocol is going to be HTTP
  • The Launch Template should be configured to;

    • Prepare the Django environment on the EC2 instance based on the Developer Notes (a rough user-data sketch is given after this requirements list),

    • Deploy the Django application on port 80.

    • The Launch Template's security group should only allow HTTP (80) and HTTPS (443) traffic coming from the ALB Security Group, and SSH (22) connections from anywhere.

    • EC2 Instances type can be configured as t2.micro.

    • Instances launched should be tagged AWS Capstone Project.

    • Since the Django app needs to talk to S3, an S3 full access role must be attached to the EC2 instances.

  • For RDS Database Instance;

    • Instance type can be configured as db.t2.micro

    • Database engine can be MySQL, version 8.0.20.

    • The RDS endpoint should be referenced in the settings file of the blog application, as explained in the developer notes.

    • Please read carefully "Developer notes" to manage RDS sub settings.

  • CloudFront should be set up as a cache server which points to the Application Load Balancer, with the following configurations;

    • The CloudFront distribution should communicate with ALB securely.

    • Origin Protocol policy can be selected as HTTPS only.

    • Viewer Protocol Policy can be selected as Redirect HTTP to HTTPS

  • As cache behavior;

    • GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE methods should be allowed.

    • Forward Cookies must be set to All.

    • The newly created ACM certificate should be used for securing connections. (You can use the same certificate as the ALB.)

  • Route 53

    • Connection must be secure (HTTPS).

    • Your own hostname can be used to publish the website.

    • A failover routing policy should be set while publishing the application

      • Primary record is going to be CloudFront

      • Secondary record is going to be a static website placed in another S3 bucket. This S3 bucket hosts just a basic static website with a picture saying "the page is under construction", using the files within the S3_static_Website folder

      • The health check should check whether CloudFront is healthy or not.

  • As for the S3 Buckets;

    • First S3 Bucket

      • It should be created within the Region in which you created the VPC

      • Since the development team doesn't want to expose traffic between S3 and the EC2 instances to the internet, an S3 Endpoint should be set up in the created VPC.

      • The S3 bucket name should be referenced in the configuration file of the blog application, as explained in the developer notes.

    • Second S3 Bucket

      • This bucket is going to be used for the failover scenario. It hosts just a basic static website with a picture saying "the page is under construction"
  • To write the S3 object records to a DynamoDB table

    • Lambda Function

      • The Lambda function is going to use Python 3.8

      • The Python function can be found in the GitHub repo

      • An S3 event is set as the trigger

      • Since Lambda needs to talk to S3 and DynamoDB and to run in the created VPC, the S3 and DynamoDB full access policies and the NetworkAdministrator policy must be attached to it

      • The S3 event must be created on the first S3 bucket to trigger the Lambda function

    • DynamoDB Table

      • Create a DynamoDB table whose primary key is id

      • The created DynamoDB table's name should be referenced in the Lambda function.
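
As a rough illustration of the Launch Template user data described above, here is a minimal sketch. The repository URL, project layout, and requirements file are assumptions/placeholders; the real values come from the Developer Notes.

    #!/bin/bash
    # hypothetical user data sketch for the Launch Template (Ubuntu 18.04)
    apt-get update -y
    apt-get install -y git python3-pip python3-dev default-libmysqlclient-dev build-essential
    # placeholder repository URL - replace with your own GitHub project repo
    git clone https://github.com/<your-account>/<your-blog-repo>.git /home/ubuntu/blog
    cd /home/ubuntu/blog
    pip3 install -r requirements.txt        # assumes the developers provide a requirements.txt
    # the RDS endpoint and S3 bucket name are expected to be filled into the Django settings file
    python3 manage.py makemigrations
    python3 manage.py migrate
    python3 manage.py runserver 0.0.0.0:80  # serves the blog app on port 80 behind the ALB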

Expected Outcome

Blog Page Application

The following topics are covered in this project:

  • Bash scripting

  • AWS EC2 Launch Template Configuration

  • AWS VPC Configuration

    • VPC
    • Private and Public Subnets
    • Private and Public Route Tables
    • Managing routes
    • Subnet Associations
    • Internet Gateway
    • NAT Gateway
    • Bastion Host
    • Endpoint
  • AWS EC2 Application Load Balancer Configuration

  • AWS EC2 ALB Target Group Configuration

  • AWS EC2 ALB Listener Configuration

  • AWS EC2 Auto Scaling Group Configuration

  • AWS Relational Database Service Configuration

  • AWS EC2, RDS, ALB Security Groups Configuration

  • IAM Roles configuration

  • S3 configuration

  • Static website configuration on S3

  • DynamoDB Table configuration

  • Lambda Function configuration

  • Getting a certificate with AWS Certificate Manager

  • AWS CloudFront Configuration

  • Route 53 Configuration

  • Git & Github for Version Control System

At the end of the project, you will be able to;

  • Construct a VPC environment with all of its components, like public and private subnets, route tables and their routes, Internet Gateway, and NAT Instance.

  • Apply web programming skills, importing packages within Python Django Framework

  • Configure connection to the MySQL database.

  • Demonstrate bash scripting skills by using the user data section within the launch template to install and set up the Blog web application on EC2 Instances.

  • Create a Lambda function that is triggered by S3 and writes to a DynamoDB table.

  • Demonstrate configuration skills for AWS VPC, EC2 Launch Templates, Application Load Balancer, ALB Target Group, ALB Listener, Auto Scaling Group, S3, RDS, CloudFront, Route 53.

  • Apply git commands (push, pull, commit, add etc.) and Github as Version Control System.

Solution Steps

  • Step 1: Create dedicated VPC and whole components

  • Step 2: Create Security Groups (ALB ---> EC2 ---> RDS)

  • Step 3: Create RDS

  • Step 4: Create two S3 Buckets and set one of them up as a static website

  • Step 5: Download or clone project definition

  • Step 6: Prepare your Github repository

  • Step 7: Prepare a userdata to be utilized in Launch Template

  • Step 8: Write the RDS endpoint and S3 bucket name into the settings file given by the full-stack developer team

  • Step 9: Create NAT Instance in Public Subnet

  • Step 10: Create Launch Template and IAM role for it

  • Step 11: Create a certificate for secure connections

  • Step 12: Create ALB and Target Group

  • Step 13: Create Auto Scaling Group with the Launch Template

  • Step 14: Create CloudFront in front of ALB

  • Step 15: Create Route 53 with Failover settings

  • Step 16: Create DynamoDB Table

  • Step 17: Create Lambda function

  • Step 18: Create S3 Event and set it as a trigger for the Lambda Function (a CLI sketch for Steps 16-18 follows this list)
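
For Steps 16-18, a hedged AWS CLI sketch may help; the table name, bucket, region, account ID, and function name below are placeholders, and the console can of course be used instead.

    # Step 16: create the DynamoDB table with "id" as the primary key (table name is illustrative)
    aws dynamodb create-table \
      --table-name blog-s3-objects \
      --attribute-definitions AttributeName=id,AttributeType=S \
      --key-schema AttributeName=id,KeyType=HASH \
      --billing-mode PAY_PER_REQUEST

    # Steps 17-18: allow S3 to invoke the Lambda function, then register the bucket notification
    aws lambda add-permission \
      --function-name <lambda-name> \
      --statement-id s3-trigger \
      --action lambda:InvokeFunction \
      --principal s3.amazonaws.com \
      --source-arn arn:aws:s3:::<first-blog-bucket>

    aws s3api put-bucket-notification-configuration \
      --bucket <first-blog-bucket> \
      --notification-configuration '{"LambdaFunctionConfigurations":[{"LambdaFunctionArn":"arn:aws:lambda:<region>:<account-id>:function:<lambda-name>","Events":["s3:ObjectCreated:*"]}]}'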

Notes

  • The RDS database should be located in a private subnet; only the EC2 instances behind the ALB can talk to RDS.

  • RDS is located in the private subnets and only the EC2 instances can talk to it, on port 3306.

  • The ALB is located in the public subnets and redirects traffic from HTTP to HTTPS.

  • The EC2 instances are located in the private subnets and only the ALB can talk to them (a CLI sketch of this security-group chain follows these notes).
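
A hedged AWS CLI sketch of that ALB ---> EC2 ---> RDS security-group chain (all IDs are placeholders and the group names are illustrative):

    # ALB security group: HTTP/HTTPS from anywhere
    aws ec2 create-security-group --group-name capstone-alb-sg --description "ALB SG" --vpc-id <vpc-id>
    aws ec2 authorize-security-group-ingress --group-id <alb-sg-id> --protocol tcp --port 80  --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id <alb-sg-id> --protocol tcp --port 443 --cidr 0.0.0.0/0

    # EC2 security group: HTTP only from the ALB security group (plus SSH per the launch template rules)
    aws ec2 create-security-group --group-name capstone-ec2-sg --description "EC2 SG" --vpc-id <vpc-id>
    aws ec2 authorize-security-group-ingress --group-id <ec2-sg-id> --protocol tcp --port 80 --source-group <alb-sg-id>
    aws ec2 authorize-security-group-ingress --group-id <ec2-sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0

    # RDS security group: MySQL (3306) only from the EC2 security group
    aws ec2 create-security-group --group-name capstone-rds-sg --description "RDS SG" --vpc-id <vpc-id>
    aws ec2 authorize-security-group-ingress --group-id <rds-sg-id> --protocol tcp --port 3306 --source-group <ec2-sg-id>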

Resources

Saturday, December 17, 2022

Ansible - Inventory File

Ansible Inventory file

An Ansible controller needs a list of the managed hosts, and groups of those hosts, upon which commands, modules, and tasks are performed; this list is known as the inventory. It may contain information such as host IPs, DNS names, SSH user and password, and SSH port (in case it is other than port 22). The most common formats are INI and YAML. An inventory file is also sometimes called a host file. We will be using the INI format in this guide.

Common syntax

[webservers]

10.0.0.9

10.0.0.10

[dbservers]

10.0.0.11

10.0.0.12

Alias Name

    webserver01 ansible_host=10.0.0.9

    [webservers]

    webserver01


Creating custom inventory file

Although Ansible uses a default inventory file, we can create our own and customize it as required.

Step 1 — Disabling host key checking 

Firstly, make a change in the ansible.cfg file, which is located in the /etc/ansible directory.

Uncomment the line host_key_checking = False. This disables SSH host key checking:
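
For example (a sketch that assumes the stock /etc/ansible/ansible.cfg shipped with the distribution package):

    # uncomment host_key_checking in the default config, then confirm the change
    sudo sed -i 's/^#\s*host_key_checking = False/host_key_checking = False/' /etc/ansible/ansible.cfg
    grep host_key_checking /etc/ansible/ansible.cfg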

Step 2 — Create an inventory file

In the /etc/ansible/ directory, create an inv.txt file and add the details below to it:

            webserver01 ansible_host=10.0.0.9

            [webservers]
            webserver01

Group inventory file

[webservers]
10.0.0.9

[dbservers]
10.0.0.10
# group inventory

[production:children]
dbservers
webservers
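
To confirm the inventory is parsed the way you expect, you can list it and ping a group (a sketch; it assumes SSH access to the hosts is already configured):

    # show how Ansible groups the hosts in the custom inventory
    ansible-inventory -i /etc/ansible/inv.txt --graph

    # run the ping module against the webservers group
    ansible -i /etc/ansible/inv.txt webservers -m ping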

Ansible - Yaml - Syntax

YAML is a very commonly used language in DevOps. Below is some basic YAML syntax.

1. Key Value pair

name: Ansible

Version: 2.3.4


2. Array or collection

ConfigurationManagement:

- Ansible

- Puppet

- Chef

- SaltStack

- Terraform


3. Dictionary

Ansible:

   commands: Adhoc

   Script: Playbooks


Puppet:

   commands: PuppetCommands

   script: Manifest


4. Dictionary In the dictionary

Ansible:

   commands: 

      type: Adhoc

      SingleLine: True


5. List of Dictionary

ConfigurationManagement:

  - name: Ansible
    model: Push

  - name: Puppet
    model: Pull
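
A quick way to check that snippets like the ones above parse as intended (a sketch; assumes python3 with the PyYAML package installed):

    # prints the parsed data structure, or an error if the YAML is invalid
    python3 -c 'import sys, yaml; print(yaml.safe_load(sys.stdin))' < sample.yml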


Saturday, October 29, 2022

Azure DevOps - Azure Pipeline with Docker (dotnet)

Build a Docker Image

1. Create an Azure Container registry (say myreg101).

2. Create a .NET app (I am using an ASP.NET Core 6.0 web app).

3. Publish the code on your local system.

4. Create an Azure Ubuntu VM and copy the published code to this machine.

5. Install Docker on ubuntu VM (apt install docker.io -y)

6. Create a Dockerfile in the publish folder with the following code

    


FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY . .
EXPOSE 80
ENTRYPOINT ["dotnet", "DotNetApp.dll" ]

7. Build a docker image(webimg) with the following command

    docker build . -t webimg

8. Once the image is built successfully, create a container from this image with port forwarding of port 80.

    docker container run -it --name web -p 80:80 -d webimg

9. In the browser, using the public IP address of the Ubuntu VM, you can see the output of the application.

10. Push the docker image to the container registry with the following commands (myreg101.azurecr.io is the azure container registry server in my case)

        docker login myreg101.azurecr.io

        docker image tag webimg myreg101.azurecr.io/mynginx

        docker push myreg101.azurecr.io/mynginx
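
To confirm the push landed in the registry, you can list its repositories and tags (a sketch; assumes the Azure CLI is installed and you are logged in):

        az acr repository list --name myreg101 --output table
        az acr repository show-tags --name myreg101 --repository mynginx --output table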

11. In the VS project, add a Dockerfile.

12. Create an Azure Pipeline with the above project and add the pipeline definition below.


# Docker
# Build and push an image to Azure Container Registry
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
- master

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'dd082128-b9c9-4f1e-9f55-6035ec616d54'
  imageRepository: 'testproj'
  containerRegistry: 'myreg101.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/DotNetApp/Dockerfile'
  tag: '$(Build.BuildId)'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: UseDotNet@2
      inputs:
        packageType: sdk
        version: '6.x'
    - task: DotNetCoreCLI@2
      inputs:
        command: 'build'
        projects: '**/*.csproj'
        arguments: '--configuration $(buildConfiguration)'

    - task: DotNetCoreCLI@2
      inputs:
        command: 'publish'
        publishWebProjects: true
        arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'    
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        buildContext: $(Build.Repository.LocalPath)
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
          latest

13. Run the pipeline and a docker image should be created in the Azure container registry.

14. Create a release pipeline for docker container.

     In the Agent job, use the Azure CLI task and write an inline script:

     az container create -g MyRG --name appinstance200130 --cpu 1 --memory 1 --port 80 --ip-address Public --image myreg101.azurecr.io/mynginx --registry-username myreg101 --registry-password waJ3dncLQE4jDLolaa9hVvUmeoSMwIS/

15. The container instance should be created.
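
To verify the container instance and grab its public IP after the release runs (a sketch using the same resource group and instance name as above):

     az container show -g MyRG --name appinstance200130 --query "{state:instanceView.state, ip:ipAddress.ip}" -o table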


Friday, October 21, 2022

Azure Release Pipeline

 Below are the steps to create the first Release pipeline in Azure DevOps

1. Develop the code(I am using DotNetCore Web App) which you want to make a part of the release pipeline.

2. Create Azure Repository.

3. Add Azure Repository into your code's Git Remote Settings.

4. Create Azure Pipeline with the below code. 

# ASP.NET Core (.NET Framework)
# Build and test ASP.NET Core projects targeting the full .NET Framework.
# Add steps that publish symbols, save build artifacts, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core

trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@1

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    projects: '**/*.csproj'
    arguments: '--configuration $(buildConfiguration)'

- task: DotNetCoreCLI@2
  inputs:
    command: 'publish'
    publishWebProjects: true
    arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'

- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'myapp-artifact'

5. Save and Run the pipeline.

6. Create a Web app service 
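
If you prefer the CLI for this step, a rough sketch (the resource group, plan, and app names are placeholders):

    az group create --name MyRG --location eastus
    az appservice plan create --name myplan101 --resource-group MyRG --sku B1
    az webapp create --name mydotnetapp101 --resource-group MyRG --plan myplan101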

7. Create a Release pipeline and select the Empty Job template.



8. Create a task in the agent job for the Web App service and assign the web app which you created.

9. In the artifacts section, add an artifact of type Build and point it to the Azure pipeline.

10. Create the release; after successful execution, you can access the web application at the web app URL.


Tuesday, September 20, 2022

Terraform - User Data


provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "ec2_example" {

    ami = "ami-05fa00d4c63e32376"
    instance_type = "t2.micro"
    key_name= "newkey"
    vpc_security_group_ids = [aws_security_group.main.id]
    tags = {
      "Name" = "UserData command "
    }

  user_data = <<-EOF
    #!/bin/bash
    # user data already runs as root, so "sudo su" is not needed
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "<h1>Hello World </h1>" > /var/www/html/index.html
  EOF
 
}

resource "aws_security_group" "main" {
  egress = [
    {
      cidr_blocks      = [ "0.0.0.0/0", ]
      description      = ""
      from_port        = 0
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "-1"
      security_groups  = []
      self             = false
      to_port          = 0
    }
  ]
 ingress                = [
   {
     cidr_blocks      = [ "0.0.0.0/0", ]
     description      = ""
     from_port        = 22
     ipv6_cidr_blocks = []
     prefix_list_ids  = []
     protocol         = "tcp"
     security_groups  = []
     self             = false
     to_port          = 22
  }
  ]
}
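
To try this out (a sketch; it assumes Terraform is installed and AWS credentials are available, e.g. via environment variables or a shared credentials file):

terraform init                  # download the AWS provider
terraform plan                  # review the EC2 instance and security group to be created
terraform apply -auto-approve   # create the resources
# once the instance is up, browse to its public IP to see the "Hello World" page
terraform destroy -auto-approve # clean up when finished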


 

Sunday, September 18, 2022

Terraform State lock file -S3 bucket

 

Before we implement the DynamoDB locking feature, we first need to store the Terraform state file (terraform.tfstate) remotely in an AWS S3 bucket.

I am gonna take a very simple example in which we are going to provision an AWS EC2 machine and store the terraform state file remotely.

Let's start by creating main.tf. We will add the following blocks to it and then run and verify the configuration:

  1. Provider block
  2. AWS instance resource block (aws_instance) for EC2
  3. Backend S3 block
  4. Execute the terraform script
  5. Verify the remote state file

As we are working in the AWS environment, we will be using the AWS provider. So add the following block to your main.tf -

provider "aws" {
   region     = "eu-central-1"
   access_key = var.access_key
   secret_key = var.secret_key
}
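
The access_key and secret_key variables referenced here are not declared in this snippet; assuming matching variable blocks exist (e.g. in a variables.tf), one hedged way to supply their values without hard-coding them is via environment variables:

# Terraform picks up TF_VAR_<name> for any declared input variable
export TF_VAR_access_key="<your-access-key-id>"
export TF_VAR_secret_key="<your-secret-access-key>"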


After adding the provider block, let's add the aws_instance resource block, in which we are going to set up an EC2 machine of type t2.micro -

provider "aws" {
   region     = "eu-central-1"
   access_key = var.access_key
   secret_key = var.secret_key
}

resource "aws_instance" "ec2_example" {
    ami = "ami-0767046d1677be5a0"
    instance_type = "t2.micro"
    tags = {
      Name = "EC2 Instance with remote state"
    }
}

(*Note - I have already created an S3 bucket with the name jhooq-terraform-s3-bucket, so make sure to create one for yourself as well.)
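
If you prefer to create the bucket from the CLI instead of the console, a sketch (bucket names are globally unique, so adjust the name for your own account):

aws s3api create-bucket \
  --bucket jhooq-terraform-s3-bucket \
  --region eu-central-1 \
  --create-bucket-configuration LocationConstraint=eu-central-1

# versioning is recommended for buckets that hold Terraform state
aws s3api put-bucket-versioning \
  --bucket jhooq-terraform-s3-bucket \
  --versioning-configuration Status=Enabled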

Now, after adding the provider and aws_instance blocks, let's add the backend S3 block - along with the aws_dynamodb_table resource we will use for locking - to my main.tf -

provider "aws" {
   region     = "eu-central-1"
   access_key = var.access_key
   secret_key = var.secret_key
}

resource "aws_dynamodb_table" "state_locking" {
  hash_key = "LockID"
  name     = "dynamodb-state-locking"
  attribute {
    name = "LockID"
    type = "S"
  }
  billing_mode = "PAY_PER_REQUEST"
}

resource "aws_instance" "ec2_example" {
    ami = "ami-0767046d1677be5a0"
    instance_type = "t2.micro"
    tags = {
      Name = "EC2 Instance with remote state"
    }
}

terraform {
    backend "s3" {
        bucket = "test-terraform-s3-bucket"
        key    = "test/terraform/remote/s3/terraform.tfstate"
        region     = "eu-central-1"
    }
}


Now for implementing the state locking we need to create a DynamoDB table.

  1. Go to your AWS management console and search for DynamoDB in the search bar.

  2. Click on DynamoDB.

  3. From the left navigation panel, click on Tables.

  4. Click on Create Table.

  5. Enter the Table name - "dynamodb-state-locking" - and the Partition Key - "LockID".

  6. Click on Create Table; you can verify the table after the creation.
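
The same table can also be created from the CLI if you prefer it to the console (note the main.tf above also declares an aws_dynamodb_table resource; pick one approach to avoid a name clash):

aws dynamodb create-table \
  --table-name dynamodb-state-locking \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region eu-central-1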



After creating the DynamoDB table in the previous step, let's add the reference to the DynamoDB table name (dynamodb-state-locking) to the backend S3 state configuration.

terraform {
    backend "s3" {
        bucket = "test-terraform-s3-bucket"
        key    = "test/terraform/remote/s3/terraform.tfstate"
        region     = "eu-central-1"
   dynamodb_table  = "dynamodb-state-locking"
    }
}

Your final Terraform main.tf should look like this -

provider "aws" {
   region     = "eu-central-1"
  
}

resource "aws_dynamodb_table" "state_locking" {
  hash_key = "LockID"
  name     = "dynamodb-state-locking"
  attribute {
    name = "LockID"
    type = "S"
  }
  billing_mode = "PAY_PER_REQUEST"
}

resource "aws_instance" "ec2_example" {
    ami = "ami-0767046d1677be5a0"
    instance_type = "t2.micro"
    tags = {
      Name = "EC2 Instance with remote state"
    }
}

terraform {
    backend "s3" {
        bucket = "jhooq-terraform-s3-bucket"
        key    = "jhooq/terraform/remote/s3/terraform.tfstate"
        region     = "eu-central-1"
       dynamodb_table  = "dynamodb-state-locking"
    }
} 


  1. The first command we are gonna run is terraform init.

  2. Then run the terraform plan command.

  3. Finally, run the terraform apply command.

  4. Verify the DynamoDB LockID by going into the AWS management console.
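
If you prefer the CLI to the console for this check, you can scan the lock table; run it while an apply is in progress to catch the lock entry itself (afterwards you will typically only see the state digest item):

aws dynamodb scan --table-name dynamodb-state-locking --region eu-central-1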

(*Note - To simulate the locking scenario I am creating another main.tf with the same configuration. I would encourage you to create a second main.tf and save the file in some other directory.)

To test terraform state locking I will provision one more EC2 machine using the same Terraform state file (jhooq/terraform/remote/s3/terraform.tfstate) stored in my S3 bucket along with the same DynamoDB table (dynamodb-state-locking).

Keep in mind we are still using the following components from the previous main.tf:

  1. S3 Bucket - jhooq-terraform-s3-bucket
  2. DynamoDB Table - dynamodb-state-locking
  3. Terraform state file - jhooq/terraform/remote/s3/terraform.tfstate

Here is my second main.tf file -

provider "aws" {
   region     = "eu-central-1"
   access_key = var.access_key
   secret_key = var.secret_key
}

resource "aws_instance" "ec2_example" {
    ami = "ami-0767046d1677be5a0"
    instance_type = "t2.micro"
    tags = {
      Name = "EC2 Instance with remote state"
    }
}

terraform {
  backend "s3" {
    bucket = "test-terraform-s3-bucket"
    key    = "test/terraform/remote/s3/terraform.tfstate"
    encrypt        = true
    region         = "eu-central-1"
    dynamodb_table = "dynamodb-state-locking"
  }
}

On the left side of the screen, you would see the first terraform file (main.tf) which I created in Step 1, and on the right-hand side, the terraform file (main.tf) from Step 4.

**How did I simulate the remote state locking scenario?**

  1. I executed terraform apply on the terraform file present on the right-hand side but did not let it finish. While the terraform apply command was waiting, I did not type yes when it asked Do you want to perform these actions?, so the terraform apply command was still running and holding a lock on the remote state file.

  2. At the same time, I executed terraform apply on the main.tf from Step 4. Since the second main.tf file is also referring to the same remote state as well as the same DynamoDB table, it throws an error - Error: Error acquiring the state lock Error message: ConditionalCheckFailedException: The conditional request failed Lock Info ID: 8f014160-8894-868e-529d-0f16e42af405
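
In shell terms, the simulation looks roughly like this (two terminals, two working directories, same S3 backend and DynamoDB table):

# terminal 1 - first working copy: start an apply and leave the approval prompt unanswered
terraform apply          # holds the state lock while it waits for "yes"

# terminal 2 - second working copy pointing at the same state key and lock table
terraform init
terraform apply          # fails with "Error acquiring the state lock"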



Terraform state file locking is one of the most valuable features Terraform offers for managing the state file. If you are using AWS S3 and DynamoDB, then Terraform state locking can improve your state management and save you from unforeseen issues.