Saturday, June 3, 2023

AWS- Solutions Architect Associate Sample Questions

What are the key components of AWS?


EC2 (Elastic Compute Cloud)

S3 (Simple Storage Service)

VPC (Virtual Private Cloud)

RDS (Relational Database Service)

IAM (Identity and Access Management)

AWS Lambda

Amazon Route 53

CloudFront

Auto Scaling

Elastic Load Balancer (ELB)


What is the difference between S3, EBS, and EFS?


S3 is object storage for files, EBS is block storage that attaches to a single EC2 instance, and EFS is a shared network file system that can be mounted by many EC2 instances at once.
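
As a rough CLI illustration (the bucket name, volume ID, instance ID, and file system ID below are placeholders), the three are consumed quite differently:

# S3: copy an object into a bucket over the API
aws s3 cp report.csv s3://my-example-bucket/reports/report.csv

# EBS: attach a block volume to a single EC2 instance as a raw device
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdf

# EFS: mount a shared file system over NFS, potentially from many instances at once
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs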


What is the difference between a region and an availability zone?


A region is a physical location in the world that contains multiple availability zones. Availability zones are isolated data centers within a region that have their own power, networking, and cooling infrastructure.
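
For example, the Availability Zones that belong to a region can be listed with the AWS CLI (the region name is just an example):

aws ec2 describe-availability-zones --region us-east-1 --query 'AvailabilityZones[].ZoneName' --output text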


What is the difference between EC2 and Lambda?


EC2 is a virtual server that allows you to run applications on the cloud. Lambda is a serverless computing service that allows you to run code without provisioning or managing servers.
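
A minimal CLI sketch of the difference (the AMI ID, key pair, and function name are placeholders): with EC2 you launch and manage a server, while with Lambda you only invoke code that AWS runs for you.

# EC2: you provision the server yourself and deploy onto it
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro --key-name my-key

# Lambda: no server to manage; just invoke the function
aws lambda invoke --function-name my-function response.json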


Explain the concept of Elastic Load Balancing and Auto Scaling.


Elastic Load Balancing distributes incoming traffic across multiple EC2 instances to ensure high availability and fault tolerance. Auto Scaling automatically adjusts the number of EC2 instances in a group based on the load or specified conditions.
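
For instance, a target-tracking policy that keeps average CPU around 70% (the Auto Scaling group name is an assumption) can be attached like this:

aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-asg \
    --policy-name cpu-target-70 \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"}, "TargetValue": 70.0}'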


What is the purpose of Amazon VPC?


Amazon VPC enables you to create a virtual network in the AWS cloud, providing complete control over network configuration, including IP addressing, subnets, routing, and security.
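
A minimal sketch of those building blocks with the CLI (CIDR ranges and resource IDs are arbitrary examples):

# Create the VPC and a subnet inside it
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24

# Attach an internet gateway so public subnets can reach the internet
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0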


What is the difference between RDS and DynamoDB?


RDS is a managed relational database service, while DynamoDB is a managed NoSQL database service. RDS supports multiple database engines like MySQL, PostgreSQL, Oracle, and SQL Server, whereas DynamoDB is a key-value and document database.


What is IAM, and what are IAM roles?


IAM (Identity and Access Management) is a service that helps you control access to AWS resources. IAM roles are a secure way to grant permissions to entities that you trust. Instead of using access keys, you can assign roles to AWS resources.
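
As an example (the role name is hypothetical), a role for EC2 is created from a trust policy and then granted permissions by attaching a managed policy, so no access keys need to be stored on the instance:

# Trust policy: allow EC2 to assume the role
aws iam create-role --role-name ec2-s3-read-role --assume-role-policy-document '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Principal": {"Service": "ec2.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'

# Grant permissions by attaching an AWS managed policy
aws iam attach-role-policy --role-name ec2-s3-read-role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess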


Explain the different storage classes in Amazon S3.


Amazon S3 offers multiple storage classes, including Standard, Intelligent-Tiering, Standard-IA (Infrequent Access), One Zone-IA, Glacier, and Glacier Deep Archive. Each storage class has different availability, durability, performance, and pricing characteristics.
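
The storage class is chosen per object at upload time; for example (the bucket name is a placeholder):

# Infrequently accessed data
aws s3 cp backup.tar.gz s3://my-example-bucket/ --storage-class STANDARD_IA

# Long-term archive
aws s3 cp old-logs.tar.gz s3://my-example-bucket/ --storage-class DEEP_ARCHIVE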


What is CloudFront, and how does it work?


CloudFront is a content delivery network (CDN) service that speeds up the distribution of your static and dynamic web content, such as images, videos, and HTML files. CloudFront caches content at edge locations worldwide, reducing latency and improving user experience.
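
Because objects are cached at the edge, you occasionally need to invalidate stale content after a deployment; for example (the distribution ID is a placeholder):

aws cloudfront create-invalidation --distribution-id E1A2B3C4EXAMPLE --paths "/*"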


Docker Overview


 Docker is an open-source platform that allows you to automate the deployment, scaling, and management of applications using containerization. Containers are lightweight, isolated environments that package an application and its dependencies, enabling it to run consistently across different computing environments.


Here's an overview of Docker's key components and concepts:


Docker Engine: Docker Engine is the core component of Docker. It is responsible for building and running containers. It consists of a server that manages container operations and a command-line interface (CLI) tool that allows you to interact with Docker.


Containers: Containers are self-contained, lightweight environments that encapsulate an application along with its dependencies, libraries, and configuration files. They provide isolation and consistency, making it easy to deploy and run applications on different systems without worrying about compatibility issues.


Images: An image is a read-only template that contains everything needed to run an application, including the code, runtime, libraries, and dependencies. It serves as the basis for creating containers. Images are built using a declarative text file called a Dockerfile or by pulling pre-built images from Docker registries.


Dockerfile: A Dockerfile is a text file that contains a set of instructions to build a Docker image. It specifies the base image, copies files, installs dependencies, sets environment variables, and defines the commands to run when the container starts.


Registries: Docker registries are repositories for storing and distributing Docker images. The default public registry is Docker Hub, which hosts a vast number of pre-built images. You can also set up private registries to store your custom images.


Docker Compose: Docker Compose is a tool that allows you to define and run multi-container applications. It uses a YAML file to specify the services, networks, and volumes required for the application. Compose simplifies the management of interconnected containers and their configurations.


Swarm Mode: Docker Swarm is Docker's native orchestration and clustering tool. It enables you to create and manage a swarm of Docker nodes, turning them into a single virtual Docker host. Swarm Mode provides high availability, load balancing, and scaling capabilities for containerized applications.


Docker Networking: Docker provides networking capabilities to allow communication between containers and between containers and the host system. Docker creates virtual networks and assigns IP addresses to containers, making it easy to connect and expose services.
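
As a small illustration (network and container names are arbitrary), containers attached to the same user-defined network can reach each other by container name:

# Create a user-defined bridge network
docker network create appnet

# Start a database and a web container on that network
docker run -d --name db --network appnet -e MYSQL_ROOT_PASSWORD=example mysql:8.0
docker run -d --name web --network appnet -p 80:80 nginx

# Inside "web", the database is reachable at the hostname "db" on port 3306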


Docker Volumes: Docker volumes are persistent storage areas that can be attached to containers. They allow data to be shared and preserved even if the container is removed or replaced. Volumes are useful for storing databases, logs, or any data that needs to persist beyond the lifespan of a container.
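
For example (volume and container names are arbitrary), a named volume keeps MySQL's data directory alive across container replacements:

# Create a named volume and mount it at MySQL's data directory
docker volume create dbdata
docker run -d --name db -e MYSQL_ROOT_PASSWORD=example -v dbdata:/var/lib/mysql mysql:8.0

# Removing the container does not remove the volume; a new container can reuse the data
docker rm -f db
docker run -d --name db2 -e MYSQL_ROOT_PASSWORD=example -v dbdata:/var/lib/mysql mysql:8.0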


Docker revolutionized software development and deployment by providing a standardized and portable way to package and distribute applications. It enables developers to build once and run anywhere, simplifying the process of deploying applications across different environments and improving productivity and scalability.

Monday, March 13, 2023

AWS- Capstone Project

 

Description

The Blog Page Application aims to deploy a blog application written with the Django framework as a web application on AWS cloud infrastructure. The infrastructure has an Application Load Balancer with an Auto Scaling Group of Elastic Compute Cloud (EC2) instances and a Relational Database Service (RDS) instance on a defined VPC. In addition, CloudFront and Route 53 are located in front of the architecture and manage the traffic securely. Users are able to upload pictures and videos on their own blog pages, and these are kept in an S3 bucket.

Project Details



  • Your company has recently taken on a project that aims to serve a blog web application in an isolated VPC environment. You and your colleagues have started to work on the project. The developer team has developed the application, and you are going to deploy the app in the production environment.

  • The application was coded by the fullstack development team and handed over to you as the DevOps team. The app allows users to write their own blog pages; user registration data should be kept in a separate MySQL database in the AWS RDS service, and pictures or videos should be kept in an S3 bucket. The object list of the S3 bucket containing pictures and videos is recorded in a DynamoDB table.

  • The web application will be deployed using the Django framework.

  • The web application should be securely accessible via web browser from anywhere.

  • You are requested to push your program to the project repository on GitHub and pull it onto the web servers in the production environment on AWS Cloud.

In the architecture, you can configure your infrastructure using the following:

  • The application stack should be created with new AWS resources.

  • Specifications of VPC:

    • The VPC has two AZs, and every AZ has 1 public and 1 private subnet.

    • The VPC has an Internet Gateway.

    • One of the public subnets has a NAT instance.

    • You might create a new instance as a Bastion host in a public subnet, or you can use the NAT instance as the Bastion host.

    • There should be managed private and public route tables.

    • Route tables should be arranged with appropriate routing policies and subnet associations for the public and private subnets.

  • You should create an Application Load Balancer with an Auto Scaling Group of Ubuntu 18.04 EC2 instances within the created VPC.

  • You should create an RDS instance within one of the private subnets of the created VPC and configure it in the application.

  • The Auto Scaling Group should use a Launch Template in order to launch instances needed and should be configured to;

    • use all Availability Zones on created VPC.

    • set desired capacity of instances to  2

    • set minimum size of instances to  2

    • set maximum size of instances to  4

    • set health check grace period to  90 seconds

    • set health check type to  ELB

    • Scaling Policy --> Target Tracking Policy

      • Average CPU utilization (set Target Value to 70%)

      • warm-up time before including in metric ---> 200 seconds

      • Set notification to your email address for launch, terminate, fail to launch, fail to terminate instance situations

  • ALB configuration;

    • Application Load Balancer should be placed within a security group which allows HTTP (80) and HTTPS (443) connections from anywhere.

    • A certificate should be created for the secure connection (HTTPS)

      • To create the certificate, AWS Certificate Manager can be utilized.
    • The ALB redirects traffic from HTTP to HTTPS

    • Target Group

      • Health Check Protocol is going to be HTTP
  • The Launch Template should be configured to;

    • Prepare Django environment on EC2 instance based on Developer Notes,

    • Deploy the Django application on port 80.

    • The Launch Template's security group should allow only HTTP (80) and HTTPS (443) traffic coming from the ALB Security Group and SSH (22) connections from anywhere.

    • EC2 Instances type can be configured as t2.micro.

    • Instances launched should be tagged AWS Capstone Project

    • Since the Django app needs to talk to S3, a role with S3 full access must be attached to the EC2 instances.

  • For RDS Database Instance;

    • Instance type can be configured as db.t2.micro

    • Database engine can be MySQL with version of 8.0.20.

    • The RDS endpoint should be addressed within the settings file of the blog application, as explained in the developer notes.

    • Please read the "Developer notes" carefully to manage the RDS sub-settings.

  • CloudFront should be set as a cache server which points to the Application Load Balancer with the following configurations;

    • The CloudFront distribution should communicate with ALB securely.

    • Origin Protocol policy can be selected as HTTPS only.

    • Viewer Protocol Policy can be selected as Redirect HTTP to HTTPS

  • As cache behavior;

    • GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE methods should be allowed.

    • Forward Cookies must be set to All.

    • Newly created ACM Certificate should be used for securing connections. (You can use same certificate with ALB)

  • Route 53

    • Connection must be secure (HTTPS).

    • Your hostname can be used to publish website.

    • A failover routing policy should be set while publishing the application

      • The primary record is going to be CloudFront

      • The secondary record is going to be a static website placed in another S3 bucket. This S3 bucket has just a basic static website showing a "the page is under construction" picture, built from the files within the S3_static_Website folder

      • The health check should verify whether CloudFront is healthy or not.

  • As S3 Bucket

    • First S3 Bucket

      • It should be created within the Region where you created the VPC.

      • Since the development team doesn't want to expose traffic between S3 and the EC2 instances to the internet, an S3 endpoint should be set on the created VPC.

      • The S3 bucket name should be addressed within the configuration file of the blog application, as explained in the developer notes.

    • Second S3 Bucket

      • This bucket is going to be used for the failover scenario. It has just a basic static website showing a "the page is under construction" picture.
  • To write the objects of the S3 bucket to a DynamoDB table

    • Lambda Function

      • Lambda function is going to be Python 3.8

      • Python Function can be found in github repo

      • S3 event is set as trigger

      • Since the Lambda function needs to talk to S3 and DynamoDB and to run in the created VPC, the S3 and DynamoDB full access policies and the NetworkAdministrator policy must be attached to it

      • The S3 event must be created on the first S3 bucket to trigger the Lambda function

    • DynamoDB Table

      • Create a DynamoDB table which has a primary key named id (a CLI sketch for creating such a table follows this list)

      • The name of the created DynamoDB table should be referenced in the Lambda function.
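
A minimal CLI sketch for such a table (the table name is an assumption; the Lambda function would reference whatever name you choose):

aws dynamodb create-table \
    --table-name capstone-s3-objects \
    --attribute-definitions AttributeName=id,AttributeType=S \
    --key-schema AttributeName=id,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST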

Expected Outcome

Blog Page Application

The following topics will be used at the end of the project:

  • Bash scripting

  • AWS EC2 Launch Template Configuration

  • AWS VPC Configuration

    • VPC
    • Private and Public Subnets
    • Private and Public Route Tables
    • Managing routes
    • Subnet Associations
    • Internet Gateway
    • NAT Gateway
    • Bastion Host
    • Endpoint
  • AWS EC2 Application Load Balancer Configuration

  • AWS EC2 ALB Target Group Configuration

  • AWS EC2 ALB Listener Configuration

  • AWS EC2 Auto Scaling Group Configuration

  • AWS Relational Database Service Configuration

  • AWS EC2, RDS, ALB Security Groups Configuration

  • IAM Roles configuration

  • S3 configuration

  • Static website configuration on S3

  • DynamoDB Table configuration

  • Lambda Function configuration

  • Get Certificate with AWS Certification Manager Configuration

  • AWS CloudFront Configuration

  • Route 53 Configuration

  • Git & Github for Version Control System

At the end of the project, you will be able to;

  • Construct a VPC environment with all components such as public and private subnets, route tables and their routes, an Internet Gateway, and a NAT instance.

  • Apply web programming skills, importing packages within Python Django Framework

  • Configure connection to the MySQL database.

  • Demonstrate bash scripting skills using the user data section within the launch template to install and set up the Blog web application on an EC2 instance.

  • Create a Lambda function using S3, Lambda and DynamoDB table.

  • Demonstrate configuration skills for AWS VPC, EC2 Launch Templates, Application Load Balancer, ALB Target Group, ALB Listener, Auto Scaling Group, S3, RDS, CloudFront, and Route 53.

  • Apply git commands (push, pull, commit, add etc.) and Github as Version Control System.

Solution Steps

  • Step 1: Create dedicated VPC and whole components

  • Step 2: Create Security Groups (ALB ---> EC2 ---> RDS)

  • Step 3: Create RDS

  • Step 4: Create two S3 Buckets and set one of these as static website

  • Step 5: Download or clone project definition

  • Step 6: Prepare your Github repository

  • Step 7: Prepare a userdata script to be utilized in the Launch Template (see the sketch after this list)

  • Step 8: Write RDS, S3 in settings file given by Fullstack Developer team

  • Step 9: Create NAT Instance in Public Subnet

  • Step 10: Create Launch Template and IAM role for it

  • Step 11: Create certification for secure connection

  • Step 12: Create ALB and Target Group

  • Step 13: Create Autoscaling Group with Launch Template

  • Step 14: Create CloudFront in front of ALB

  • Step 15: Create Route 53 with Failover settings

  • Step 16: Create DynamoDB Table

  • Step 17: Create Lambda function

  • Step 18: Create S3 Event and set it as trigger for the Lambda Function
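
For Step 7, a rough userdata sketch for the Launch Template could look like the following (the repository URL, paths, and exact packages are assumptions; the authoritative steps are in the Developer Notes):

#!/bin/bash
# Install OS packages needed for the Django app (Ubuntu 18.04)
apt-get update -y
apt-get install -y git python3-pip

# Pull the application from the project repository (URL is a placeholder)
git clone https://github.com/<your-account>/<capstone-repo>.git /home/ubuntu/blog
cd /home/ubuntu/blog

# Install Python dependencies and prepare the database schema (RDS endpoint is read from the settings file)
pip3 install -r requirements.txt
python3 manage.py makemigrations
python3 manage.py migrate

# Serve the Django app on port 80, as expected by the ALB target group
python3 manage.py runserver 0.0.0.0:80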

Notes

  • The RDS database should be located in a private subnet; only the EC2 instances behind the ALB (the instances in the application security group) can talk with RDS.

  • RDS is located in the private subnets and only the EC2 instances can talk with it, on port 3306.

  • The ALB is located in the public subnets and redirects traffic from HTTP to HTTPS.

  • The EC2 instances are located in the private subnets and only the ALB can talk with them.

Resources

Saturday, December 17, 2022

Ansible - Inventory File

Ansible Inventory file

An Ansible controller needs a list of hosts and groups of hosts upon which commands, modules, and tasks are performed on the managed nodes; this list is known as the inventory. It may contain information such as host IPs, DNS names, SSH user and password, and the SSH port (in case it is other than port 22). The most common formats are INI and YAML. An inventory file is also sometimes called a host file. We will be using the INI format in this guide.

Common syntax

[webservers]

10.0.0.9

10.0.0.10

[dbservers]

10.0.0.11

10.0.0.12

Alias Name

    webserver01 ansible_host=10.0.0.9

    [webservers]

    webserver01


Creating custom inventory file

Although Ansible uses a default inventory file, we can create our own and customize it as per the requirement.

Step 1 — Disabling host key checking 

First, make a change in the ansible.cfg file, which is located in the /etc/ansible directory.

Uncomment the line host_key_checking = False. This disables SSH host key checking.

Step 2 — Create an inventory file

In /etc/ansible/ directory, create an inv.txt file, and add the below details to it:

            webserver01 ansible_host=10.0.0.9

            [webservers]
            webserver01
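
You can then point Ansible at this custom inventory and test connectivity with the ping module:

            ansible -i /etc/ansible/inv.txt webservers -m ping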

Group Inventory File

[webservers]
10.0.0.9

[dbservers]
10.0.0.10
# group inventory

[production:children]
dbservers
webservers
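
The parent group can be targeted like any other group. Assuming the file above is saved as inventory, you can check the group tree and ping every host under production:

# Show how hosts and groups are nested
ansible-inventory -i inventory --graph

# Run the ping module against the whole production group (dbservers + webservers)
ansible production -i inventory -m ping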

Ansible - Yaml - Syntax

YAML is a very commonly used language in DevOps. Below is some basic syntax for YAML.

1. Key Value pair

name: Ansible

Version: 2.3.4


2. Array or collection

ConfigurationManagement:

- Ansible

- Puppet

- Chef

- SaltStack

- Terraform


3. Dictionary

Ansible:

   commands: Adhoc

   Script: Playbooks


Puppet:

   commands: PuppetCommands

   script: Manifest


4. Dictionary within a dictionary

Ansible:

   commands: 

      type: Adhoc

      SingleLine: True


5. List of Dictionaries

ConfigurationManagement:

  - name: Ansible

    model: Push

  - name: Puppet

    model: Pull


Saturday, October 29, 2022

Azure DevOps - Azure Pipeline with Docker (dotnet)

Build a Docker Image

1. Create an Azure Container registry (say myreg101).

2. Create a .NET app (I am using an ASP.NET Core 6.0 web app).

3. Publish the code on your local system.

4. Create an Azure Ubuntu VM and copy the published code to this machine.

5. Install Docker on the Ubuntu VM (apt install docker.io -y).

6. Create a Dockerfile in the publish folder with the following code

    


FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY . .
EXPOSE 80
ENTRYPOINT ["dotnet", "DotNetApp.dll" ]

7. Build a docker image(webimg) with the following command

    docker build . -t webimg

8. Once the image is built successfully, create a container from this image with port 80 forwarded.

    docker container run -it --name web -p 80:80 -d webimg

9. In a browser, using the public IP address of the Ubuntu VM, you can see the output of the application.

10. Push the docker image to the container registry with the following commands (myreg101.azurecr.io is the azure container registry server in my case)

        docker login myreg101.azurecr.io

        docker image tag webimg myreg101.azurecr.io/mynginx

        docker push myreg101.azurecr.io/mynginx

11. In the VS project, add a Dockerfile.

12. Create an Azure Pipeline for the above project and add the pipeline below.


# Docker
# Build and push an image to Azure Container Registry
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
- master

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'dd082128-b9c9-4f1e-9f55-6035ec616d54'
  imageRepository: 'testproj'
  containerRegistry: 'myreg101.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/DotNetApp/Dockerfile'
  tag: '$(Build.BuildId)'
  buildConfiguration: 'Release'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: UseDotNet@2
      inputs:
        packageType: sdk
        version: '6.x'
    - task: DotNetCoreCLI@2
      inputs:
        command: 'build'
        projects: '**/*.csproj'
        arguments: '--configuration $(buildConfiguration)'

    - task: DotNetCoreCLI@2
      inputs:
        command: 'publish'
        publishWebProjects: true
        arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'    
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        buildContext: $(Build.Repository.LocalPath)
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
          latest

13. Run the pipeline; a Docker image should be created in the Azure Container Registry.

14. Create a release pipeline for the Docker container.

     In the Agent job, use the Azure CLI task and write an inline script:

     az container create -g MyRG --name appinstance200130 --cpu 1 --memory 1 --port 80 --ip-address Public --image myreg101.azurecr.io/mynginx --registry-username myreg101 --registry-password waJ3dncLQE4jDLolaa9hVvUmeoSMwIS/

15. Container instance should be created


Friday, October 21, 2022

Azure Release Pipeline

 Below are the steps to create the first Release pipeline in Azure DevOps

1. Develop the code (I am using a .NET Core web app) that you want to make part of the release pipeline.

2. Create Azure Repository.

3. Add Azure Repository into your code's Git Remote Settings.

4. Create Azure Pipeline with the below code. 

# ASP.NET Core (.NET Framework)
# Build and test ASP.NET Core projects targeting the full .NET Framework.
# Add steps that publish symbols, save build artifacts, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core

trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@1

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    projects: '**/*.csproj'
    arguments: '--configuration $(buildConfiguration)'

- task: DotNetCoreCLI@2
  inputs:
    command: 'publish'
    publishWebProjects: true
    arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'

- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'myapp-artifact'

5. Save and Run the pipeline.

6. Create a Web app service 

7. Create a Release pipeline and select the Empty Job template.



8. Create an agent job with a task for the Web App service and assign the web app which you created.

9. In the Artifacts section, add a Build artifact and point it to the Azure pipeline.

10. Create the release; after successful execution you can access the web application at the web app URL.