Creating Infrastructure on AWS

Create new environment from terraform scripts

These deployments are done by default in the "us-east-1" (N. Virginia) AWS region. If you want to deploy to a different region, replace all the region names in the backend and frontend terraform scripts.

Backend

Overview:

To get the UpGrade server up and running on your infrastructure, we have provided Terraform scripts. These scripts start and configure the required resources in your cloud account. Please follow the steps below:

  1. Fork/clone the source code of the UpGrade service, located here, into your GitHub account:

    https://github.com/CarnegieLearningWeb/UpGrade

  2. Inside the backend > terraform directory in the repository, we have provided all of the infrastructure configuration scripts. Run these scripts to start the UpGrade service.

These terraform scripts create the infrastructure for UpGrade as well as a CI/CD pipeline using AWS CodePipeline.

Pre-requisites

  • Download and install Terraform on your system.

  • Make sure you know the basic terraform commands such as init, plan, and apply, and how to pass a variable file using -var-file.

  • Install & Configure aws-cli on your system.

  • Set up an AWS provider profile using aws configure

  • We recommend the following profile name for the aws cli: upgrade-terraform (see the example at the end of this section)

  • Create an S3 bucket to store tfstate files remotely for the backend and frontend. We recommend enabling versioning on that bucket. We also recommend the following name for YOUR_BACKEND_TF-STATE_BUCKET: cli-terraform-artifacts-bucket

$ aws s3api create-bucket --acl private --bucket YOUR_BACKEND_TF-STATE_BUCKET
$ aws s3api put-bucket-versioning --bucket YOUR_BACKEND_TF-STATE_BUCKET --versioning-configuration Status=Enabled

Note: If you want to use a different bucket name, make sure to replace this bucket name with your existing bucket name inside environments/**/backend.tf & core/backend.tf after cloning the repo, as shown below.

  • Generate an ssh key using ssh-keygen (if you generate it with a different name, make sure to update the corresponding variables inside main.tf of the respective environment)

    # Generate a key pair with no passphrase
    ssh-keygen -f id_rsa -N ""
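As a quick reference, here is a minimal sketch of the CLI profile setup described above, assuming the recommended profile name upgrade-terraform:

    # Create a named AWS CLI profile for Terraform to use (profile name is the recommended default)
    aws configure --profile upgrade-terraform

    # Confirm the profile authenticates correctly
    aws sts get-caller-identity --profile upgrade-terraform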

Steps to follow for backend deployment:

  1. Clone the UpGrade repository: $ git clone https://github.com/CarnegieLearningWeb/UpGrade.git

  2. Create a dev branch: $ git checkout -b dev

  3. Now, we need to run the terraform scripts. Before that, it helps to get an overview of the terraform directory layout.

Terraform top-level directory layout:

.
├── aws-ebs-with-rds                 # terraform module to create an EBS environment with Postgres (RDS) installed
├── aws-lambda                       # terraform module to create the scheduler Lambda function
├── aws-step-fn                      # terraform module to create the scheduler Step Function
├── core
│   ├── core.tf                      # Config file to create core resources
│   ├── backend.tf                   # Details of where to store tfstate files
│   ├── variables.tf                 # Info about the required variables
│   └── tfvars.sample                # Sample variables file
└── environments
    ├── dev
    │   ├── main.tf                  # Config file for the dev environment
    │   ├── backend.tf               # Details of where to store tfstate files
    │   ├── variables.tf             # Info about the required variables
    │   └── tfvars.sample            # Sample variables file
    └── staging
        ├── main.tf                  # Config file for the staging environment
        ├── backend.tf               # Details of where to store tfstate files
        ├── variables.tf             # Info about the required variables
        └── tfvars.sample            # Sample variables file

Deployment Overview

  • Phase 1 - Create core resources shared by all environments: the AWS CodeCommit repository and the Elastic Beanstalk application name.

  • Phase 2 - Create resources for multiple Elastic Beanstalk environments under the core EBS application.

Phase 1 - Core Resources

  • Change Directory - $ cd ~backend/terraform/core

  • Edit - backend.tf - replace the tfstate bucket, path, and aws profile name if required.

  • Edit - core.tf - replace aws profile name if required.

  • Copy - $ cp tfvars.sample core.auto.tfvars - change variables if necessary. All *.auto.tfvars files are loaded automatically by terraform.

  • Run - $ terraform init to initialize the project.

  • Run - $ terraform apply to create the core resources.

  • Confirm - Terraform will show the list of resources it plans to create. Review them and enter yes.
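For reference, here is a sketch of the Phase 1 steps above as a single shell session (assuming the repository root as the working directory; adjust paths if your checkout differs):

    # Phase 1: create the shared core resources
    cd backend/terraform/core
    cp tfvars.sample core.auto.tfvars   # then edit backend.tf, core.tf and the variables as described above
    terraform init
    terraform apply                     # review the plan and enter "yes"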

(Note: You can use $ terraform destroy to terminate and delete all the resources created above. You can also delete one or more specific resources by taking resource names from $ terraform state list and passing them with the -target flag: $ terraform destroy -target RESOURCE_TYPE.NAME1 -target RESOURCE_TYPE.NAME2.)

Phase 2 - Environment-specific Resources

  • Copy and paste any environment folder in the backend/terraform/environments directory: e.g. copy the bsnl folder and rename it to your desired <envname>

  • Change Directory - $ cd ~backend/terraform/environments/<envname>

  • Update the below fields in the mentioned files for your particular environment:

    backend.tf

    key

    tfvars.sample

    current_directory, environment identifier, branch_name (if using a branch other than dev), NEW_RELIC_APP_NAME

  • Copy the tfvars.sample file as terraform.tfvars: $ cp tfvars.sample terraform.tfvars - change variables if necessary. terraform.tfvars is loaded automatically by terraform.

  • Update the terraform.tfvars file with the secrets:

terraform.tfvars

GOOGLE_CLIENT_ID, TOKEN_SECRET_KEY, ADMIN_USERS, EMAIL_FROM, CLIENT_API_KEY, CLIENT_API_SECRET, NEW_RELIC_LICENSE_KEY, ssl_certificate_id

  • Run - $ terraform init to initialize the project.

  • When asked "Do you want to copy existing state to the new backend?", enter: no

  • Run - $ terraform plan to check the plan of resources to be created.

    You should see: Plan: 40 to add, 0 to change, 0 to destroy. (Nothing should be destroyed; if the plan shows resources to destroy, your tfstate is shared with another environment and applying would destroy that environment's resources.)

  • Run - $ terraform apply to create the environment resources.

  • Confirm - Terraform will show the list of resources it plans to create. Review them and enter yes.
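For reference, here is a sketch of the Phase 2 steps above as a single shell session (the environment name newenv and the secret values are placeholders):

    # Phase 2: create the environment-specific resources (run from the repository root)
    cd backend/terraform/environments
    cp -r bsnl newenv && cd newenv       # copy an existing environment folder and rename it
    # edit backend.tf (key) and tfvars.sample as described above, then:
    cp tfvars.sample terraform.tfvars
    # fill in the secrets in terraform.tfvars (GOOGLE_CLIENT_ID, TOKEN_SECRET_KEY, etc.)
    terraform init                       # answer "no" when asked to copy existing state
    terraform plan                       # expect: Plan: 40 to add, 0 to change, 0 to destroy
    terraform apply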

Notes:

  • You can use $ terraform destroy to terminate and delete all the resources created above. You can also delete one or more specific resources by taking resource names from $ terraform state list and passing them with the -target flag: $ terraform destroy -target RESOURCE_TYPE.NAME1 -target RESOURCE_TYPE.NAME2

  • If you change the output_path, make sure the path exists. The build script generates a zip of the serverless function and stores it at output_path.

  • The ebs_app_name & repository_name variables used in Phase 2 are created in Phase 1. Make sure their values are the same in both phases.

AWS resources that will be created by these scripts:

  • Elastic beanstalk environment

  • RDS (Postgres)

  • Step function

  • Lambda function

  • Elastic Load Balancer

  • Auto scaling group

  • CICD pipeline to build a Docker image from source code in AWS CodeCommit and then deploy it to the created EBS app

Once the AWS backend infrastructure is created successfully, you need to deploy the backend Docker-image-based package to Elastic Beanstalk using the GitHub Actions workflow. Before that, confirm by opening the Elastic Beanstalk endpoint; you should see the environment's default page:

Eg: http://prod-upgrade-experiment-app.eba-xkparwve.us-east-1.elasticbeanstalk.com/
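For example, you can check from the command line (using the example endpoint above; substitute your own):

    # Confirm the Elastic Beanstalk environment responds before deploying the application package
    curl -I http://prod-upgrade-experiment-app.eba-xkparwve.us-east-1.elasticbeanstalk.com/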

4. To deploy the backend package, follow the steps below:

  • Update the following fields in the files below:

.github/variables/vars.env

EB_DEV_ENV_NAME, DEV_S3_BUCKET, DEV_LAMBDA_FUNCTION_NAME

.github/workflows/**

branches: your branch name

repository_owner: change to your GitHub username

repository: change to your GitHub username with the repo name

  • Push the changes, and the GitHub Actions workflow should deploy the Docker image package to your EB application (a sample of the edited vars.env follows below).
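As a hypothetical illustration, an edited .github/variables/vars.env might look like the following, assuming the usual KEY=value .env format; all values below are placeholders, so use the names produced by your own Phase 2 apply:

    # .github/variables/vars.env (placeholder values for illustration only)
    EB_DEV_ENV_NAME=dev-upgrade-experiment-app
    DEV_S3_BUCKET=my-org-upgrade-dev-artifacts
    DEV_LAMBDA_FUNCTION_NAME=dev-upgrade-schedule-lambda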

Once deployed, open: http://prod-upgrade-experiment-app.eba-xkparwve.us-east-1.elasticbeanstalk.com/api

You should see below response:

{ "name": "A/B Testing Backend", "version": "1.0.0", "description": "Backend for A/B Testing Project" }

It means the backend is deployed successfully!!!
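Equivalently, you can check from the command line:

    # Query the API root; the JSON response above should come back (substitute your own endpoint)
    curl http://prod-upgrade-experiment-app.eba-xkparwve.us-east-1.elasticbeanstalk.com/api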

CI/CD

Note: We're using GitHub and Jenkins rather than the CodeCommit pipeline.

CICD pipeline info: AWS CodeCommit -> ECR (Docker image) -> Elastic Beanstalk.

The module gets the code from an AWS CodeCommit repository, builds a Docker image from it by executing the buildspec.yml and Dockerfile from the repository, pushes the Docker image to an ECR repository, and deploys the Docker image to an Elastic Beanstalk environment running the Docker stack. - http://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html

Variables

Note: The variables marked as bold must be changed to create new environments.

Note: The prefix variable is used to prefix all resource names, including the S3 buckets used in the deploy phase. We recommend using a combination of your org name and upgrade.

Name | Description | Type
current_directory | name of the folder holding main.tf | varchar
aws_region | AWS region | varchar
environment | deployment environment name | varchar
prefix | prefix to be attached to all resources | varchar
app_version | application version | varchar
aws_profile | AWS profile name | varchar
allocated_storage | storage for the RDS instance | number (GB)
engine_version | RDS engine version | number
identifier | RDS DB identifier | varchar
instance_class | RDS instance class | varchar
storage_type | RDS storage type | varchar
multi_az | RDS instance multi_az value for high availability | boolean
app_instance_type | EC2 instance type that will be created in the EBS environment | varchar
ebs_app_name | EBS application name created in core resources | varchar
autoscaling_min_size | minimum number of instances that can be in running state | number
autoscaling_max_size | maximum number of instances that can be in running state | number
GOOGLE_CLIENT_ID | Google project ID for the UpGrade client app | varchar
MONITOR_PASSWORD | monitor password for the UpGrade service | varchar
SWAGGER_PASSWORD | Swagger password for the UpGrade service | varchar
TYPEORM_SYNCHRONIZE | sync models on every application start? | boolean
TOKEN_SECRET_KEY | bearer token for auth | varchar
AUTH_CHECK | auth check | boolean
repository_name | AWS CodeCommit repository name created in core resources for the CICD pipeline | varchar
branch_name | AWS CodeCommit branch name for the CICD pipeline | varchar
build_image | build image for AWS CodeBuild | varchar
build_compute_type | AWS CodeBuild compute type | varchar
privileged_mode | CodeBuild privileged mode | number

Outputs

Name | Description
ebs_cname | Public URL of the EBS app
step_function | Step function ARN

CL Implementation Details

  • CLI Upgrade Account - Terraform user credentials: https://vault.carnegielearning.com:8200/ui/vault/secrets/secret/show/providers/aws/cli-upgrade/terraform

  • EB URLs:
    http://development-cli-upgrade-experiment-app.eba-3bk2y9gi.us-east-1.elasticbeanstalk.com
    http://staging-cli-upgrade-experiment-app.eba-3bk2y9gi.us-east-1.elasticbeanstalk.com

  • Secrets:
    https://vault.carnegielearning.com:8200/ui/vault/internal/upgrade-experiment-service/environments/dev
    https://vault.carnegielearning.com:8200/ui/vault/internal/upgrade-experiment-service/environments/staging

  • Cloudwatch Log Groups:
    /aws/elasticbeanstalk/development-cli-upgrade-experiment-app
    /aws/elasticbeanstalk/staging-cli-upgrade-experiment-app

Frontend

Overview:

To get the UpGrade frontend up and running on your AWS infrastructure, we have provided Terraform scripts. These scripts create the frontend infrastructure for UpGrade as well as a CI/CD pipeline using AWS CodePipeline.

To host the static website on S3, follow the steps given below.

  1. Fork/clone the source code of the UpGrade frontend, located here, into your GitHub account, if not done yet:

    https://github.com/CarnegieLearningWeb/UpGrade

  2. Inside the frontend/terraform directory in the repository, we have provided all of the infrastructure configuration scripts. Run the scripts and you will get the link to the newly hosted website, which you can use to access your UpGrade portal.

Steps to follow for frontend deployment:

  1. Clone the UpGrade repository (if not already done for the backend): $ git clone https://github.com/CarnegieLearningWeb/UpGrade.git

  2. Create a dev branch (if not done yet for the backend): $ git checkout -b dev

  3. Now, we need to run the terraform scripts. Before that, it helps to get an overview of the terraform directory layout.

Deployment Overview

  • Phase 1 - Create core resources shared by all environments

  • Phase 2 - Create resources for frontend for specific environment

Phase 1 - Core Resources

  • Change Directory - cd ~backend/terraform/core

  • Edit - backend.tf - replace the tfstate bucket, path, and aws profile name if required.

  • Edit - core.tf - replace aws profile name if required.

  • Copy - cp tfvars.sample core.auto.tfvars - change variables if necessary. All *.auto.tfvars files are loaded automatically by terraform.

  • Run - terraform init to initialize the project.

  • Run - terraform apply to create the core resources.

  • Confirm - Terraform will show the list of resources it plans to create. Review them and enter yes.

(Note: You can use terraform destroy to terminate and delete all the resources created above. You can also delete one or more specific resources by taking resource names from terraform state list and passing them with the -target flag: terraform destroy -target RESOURCE_TYPE.NAME1 -target RESOURCE_TYPE.NAME2.)

Phase 2 - Environment-specific Resources

  • Copy and paste any environment folder in the frontend/terraform/environments directory: e.g. copy the bsnl folder and rename it to your desired <envname>

  • Change Directory - $ cd ~frontend/terraform/environments/<envname>

  • Update the below fields in the mentioned files for your particular environment:

    backend.tf

    key

    tfvars.sample

    environment, repository_branch

  • Copy the tfvars.sample file as terraform.tfvars: $ cp tfvars.sample terraform.tfvars - change variables if necessary. terraform.tfvars is loaded automatically by terraform.

  • Run - $ terraform init to initialize the project.

  • When asked "Do you want to copy existing state to the new backend?", enter: no

  • Run - $ terraform plan to check the plan of resources to be created.

    You should see: Plan: 4 to add, 0 to change, 0 to destroy. (Nothing should be destroyed; if the plan shows resources to destroy, your tfstate is shared with another environment and applying would destroy that environment's resources.)

  • Run - $ terraform apply to create the environment resources.

  • Confirm - Terraform will show the list of resources it plans to create. Review them and enter yes.
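For reference, here is a sketch of the frontend Phase 2 steps above as a single shell session (the environment name newenv is a placeholder):

    # Frontend Phase 2: create the environment-specific resources (run from the repository root)
    cd frontend/terraform/environments
    cp -r bsnl newenv && cd newenv      # copy an existing environment folder and rename it
    # edit backend.tf (key) and tfvars.sample (environment, repository_branch) as described above, then:
    cp tfvars.sample terraform.tfvars
    terraform init                      # answer "no" when asked to copy existing state
    terraform plan                      # expect: Plan: 4 to add, 0 to change, 0 to destroy
    terraform apply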

AWS resources that will be created by this script:

  • aws_cloudfront_distribution

  • aws_s3_bucket

  • aws_s3_bucket_policy

Outputs

Name | Description / Sample Value
website-cdn-endpoint | CloudFront distribution URL (d183j5bt5anjx.cloudfront.net)
website-s3-endpoint | S3 static website endpoint

4. To deploy the frontend package, follow the steps below:

  • Push the changes for the frontend terraform environment scripts; that will trigger a GitHub Action to deploy the frontend build distribution files to the S3 bucket.

  • Next, you need to register a Google OAuth 2.0 Client ID for the endpoint in your GCP project. This gives you the Client ID required for the environment variables. To do this, follow the steps below:

a. For the first time, we need to create a GCP project for storing our URIs.

Open: https://console.cloud.google.com/

Click: Select Project and Create New Project

Select the project name and click on APIs & Services

b. Now, click Credentials and create a new credential ees-client-1. You will see the Client ID here. Keep it safe for the environment.json file.

c. Now, add the website-cdn-endpoint and website-s3-endpoint URIs on the credentials page of the GCP project you created and selected from the top, as shown below:

d. It can take from a minute to a few hours for the Google login access to be reflected in the UpGrade frontend.

e. Lastly, after the GitHub Actions run, you need to upload an environment.json file with the endpointApi and gapiClientId to the frontend S3 bucket for the prod deployment, in the format below:

{
   "endpointApi": "<Your Deployed AWS Beanstalk endpoint url>",
   "gapiClientId": "<Your Google OAuth 2.0 Client ID>"
}
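For example, using the AWS CLI (the bucket name below is a placeholder for your frontend S3 bucket):

    # Upload environment.json to the root of the frontend S3 bucket (bucket name is illustrative)
    aws s3 cp environment.json s3://YOUR_FRONTEND_BUCKET/environment.json --profile upgrade-terraform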

It should be active after about 30 seconds; just open: http://prod-upgrade-frontend-prod.s3-website-us-east-1.amazonaws.com

You should see the below screen:

Finally, your UpGrade frontend is deployed successfully!

Log in using a Gmail ID with the domain specified during the deployment, and you are ready to create experiments.

URL mapping per domain (optional step)

For example, adding CNAME records in your domain hosting account for the backend and frontend:

Record | Type | Name | Value
Backend | CNAME | upgrade-prod-backend.edoptimize.com | prod-upgrade-experiment-app.eba-xkparwve.us-east-1.elasticbeanstalk.com
Frontend | CNAME | upgrade-prod.edoptimize.com | d183j5bt5anjx.cloudfront.net
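You can verify the records have propagated with something like the following (using the example hostnames above):

    # Check that the CNAME records resolve
    dig +short CNAME upgrade-prod-backend.edoptimize.com
    dig +short CNAME upgrade-prod.edoptimize.com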
