Provisioning a monolithic stack on AWS and redesigning it into a highly available and scalable microservices architecture

Shayan Azim Syed
10 min read · Feb 16, 2021

I recently passed the AWS SAA-C02 exam after studying Adrian Cantrill’s course, which can be found at https://learn.cantrill.io/. I highly recommend it to anyone who is planning to sit the exam. The visuals and the depth of the material are incredible. As part of the hands-on lessons, students are tasked with creating a monolithic architecture and restructuring it into independent components that can scale without depending on each other. Working through the demo helps solidify an understanding of the core services provided by AWS. I wanted to share my experience and what I achieved through this exercise.

Again, this post is meant as an “exercise” for introductory exposure to AWS — it does not represent best practices and some code has been intentionally left out. If you would like a more comprehensive demonstration, please consider subscribing to Adrian’s course.

Setting up the environment:

1. Create a VPC with a meaningful name and an IPv4 CIDR block.

2. Provision an Internet gateway (IGW), since resources in the VPC need to connect to the Internet

3. Attach the IGW to the VPC we created

4. Create a route table (RT) so the subnets know how to route traffic

5. Set the default IPv4 route to ensure any Internet-bound IPv4 traffic traverses the IGW

6. Set the default IPv6 route to ensure Internet-bound IPv6 traffic traverses the IGW

7. Create subnets: develop a three-tiered architecture that has a web tier, an app tier, and a database tier. To ensure high availability, we will spread it across three availability zones (AZ)

8. Associate the subnets with the RT

a. Assign the web-tier subnets to the RT so that their Internet-bound traffic is routed through the IGW

9. Modify IPv6 auto-assign settings: so that deployed resources in the subnets get an IPv6 address automatically
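
For reference, the same networking setup can also be scripted with the AWS CLI. This is a minimal sketch, assuming placeholder resource IDs and CIDR ranges rather than the exact values used in the demo:

# Create the VPC with an IPv4 CIDR block and an Amazon-provided IPv6 block
aws ec2 create-vpc --cidr-block 10.16.0.0/16 --amazon-provided-ipv6-cidr-block

# Create an Internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx

# Create a route table and add default IPv4 and IPv6 routes that point at the IGW
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-ipv6-cidr-block ::/0 --gateway-id igw-xxxxxxxx

# Create one web-tier subnet in one AZ and associate it with the route table
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.16.48.0/20 --availability-zone us-east-1a
aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx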

10. Create Security Groups (SG) to control traffic at the instance level

11. Create an EC2 instance for WordPress (WP)

12. Connect to the EC2 instance using EC2 Instance Connect

13. Create variables for the DB we will be using, since this makes them easier to reference when setting up WordPress:

DBName=manutdwordpress
DBUser=manutdwordpressuser
DBEndpoint=localhost
DBPassword=*enter a strong password*
DBRootPassword=*enter a strong password*

14. Install updates on the EC2 instance

15. Install the MariaDB database server, the Apache web server (httpd), and wget (a utility used to download further WP components)

a. Install PHP and the components that allow PHP to connect to MariaDB

b. Enable the EPEL repository so the instance can install packages that are not available in the standard repositories

c. Install the stress utility for the load testing that comes later
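
On Amazon Linux 2, the installs in step 15 look roughly like this (a sketch; the exact package selections in the course may differ):

# MariaDB server, the Apache web server, and wget
sudo yum -y install mariadb-server httpd wget

# PHP plus the MySQL driver so PHP can talk to MariaDB
sudo amazon-linux-extras install -y php7.2
sudo yum -y install php-mysqlnd

# EPEL repository, then the stress utility for load testing later on
sudo amazon-linux-extras install -y epel
sudo yum -y install stress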

16. Enable and start the DB and web server so that they start automatically every time the instance boots and we do not have to do it manually
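
These are standard systemctl calls, for example:

# Start both services now and enable them so they come up on every boot
sudo systemctl enable httpd mariadb
sudo systemctl start httpd mariadb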

17. Check that both are running using

sudo systemctl status httpd
sudo systemctl status mariadb

a. Check that both say “active (running)”

18. Set the MariaDB root password, since this is the username and password that will be used initially to configure the DB
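
One way to do this is with mysqladmin, reusing the DBRootPassword variable defined earlier (a sketch):

# Set the MariaDB root password using the variable created in step 13
sudo mysqladmin -u root password "$DBRootPassword"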

19. Download WordPress to /var/www/html, since this is the directory the web server serves when we browse to our WordPress site

20. Navigate to /var/www/html and extract the archive using the tar utility (the download is a .tar.gz rather than a zip). I used the man pages to look up certain options.

21. We want to copy all the files from inside the WordPress folder into the /var/www/html directory

22. Delete the wordpress folder and the archive we downloaded, because we will not need them again
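
Steps 19 through 22 translate roughly to the following (a sketch; since the archive is a .tar.gz, tar handles both decompression and extraction):

# Download the latest WordPress archive into the web root
sudo wget https://wordpress.org/latest.tar.gz -P /var/www/html

# Extract it, copy the contents up one level, then clean up
cd /var/www/html
sudo tar -xzf latest.tar.gz
sudo cp -rpf wordpress/* /var/www/html/
sudo rm -rf wordpress latest.tar.gz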

23. Rename wp-config-sample.php to wp-config.php, because this is what WordPress expects to find when it initially loads

24. We will change the config file and replace the placeholders using the Linux sed utility. Again, for certain commands that I did not understand, I used the man pages.

25. Ensure that the web server has ownership of, and access to, all files in this directory
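
A sketch of steps 23 through 25, using sed to swap the placeholders in wp-config.php for our variables and chown to hand ownership to the web server:

# Rename the sample config and replace the placeholders with our variables
cd /var/www/html
sudo cp wp-config-sample.php wp-config.php
sudo sed -i "s/database_name_here/$DBName/" wp-config.php
sudo sed -i "s/username_here/$DBUser/" wp-config.php
sudo sed -i "s/password_here/$DBPassword/" wp-config.php
sudo sed -i "s/localhost/$DBEndpoint/" wp-config.php

# Give the web server ownership of everything under the web root
sudo chown -R apache:apache /var/www/html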

26. Create the DB that WordPress will use, via a setup script. The commands reference the variables created earlier and append the SQL statements to a DB setup file in the temp directory

27. Run the script against MariaDB so that the database and user are created
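
The setup script is essentially a handful of SQL statements written to a temporary file using the variables from step 13, then fed to MariaDB as root (a sketch):

# Write the SQL statements to a temp setup file, substituting our variables
echo "CREATE DATABASE $DBName;" >> /tmp/db.setup
echo "CREATE USER '$DBUser'@'localhost' IDENTIFIED BY '$DBPassword';" >> /tmp/db.setup
echo "GRANT ALL ON $DBName.* TO '$DBUser'@'localhost';" >> /tmp/db.setup
echo "FLUSH PRIVILEGES;" >> /tmp/db.setup

# Run the script against MariaDB as root, then remove the temp file
mysql -u root --password="$DBRootPassword" < /tmp/db.setup
sudo rm /tmp/db.setup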

28. Now WordPress is set up

WordPress landing page

Configuring the SSM Parameter store:

1. Before proceeding, we also want to create SSM parameters to store the variables securely

2. To do this, we must create an IAM role and attach it to the EC2 instance

3. We attach the following AWS managed policies: CloudWatchAgentServerPolicy, AmazonSSMFullAccess, and AmazonElasticFileSystemClientFullAccess

4. Now, when we deploy the EC2 instance, we select this role in the IAM instance profile field.

5. I used a CloudFormation template (provided through the course) to automate the previous deployments since I deleted the resources and had to start again (one of the many benefits of taking Adrian’s course)

6. The extra resources deployed by the CFN template are security groups for the database, the load balancer, and EFS, which we will use later.

7. Now create the SSM parameters in Parameter Store, which can be found under Systems Manager in the AWS console
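
Parameters can be created in the console or with the CLI. A sketch using made-up parameter names (sensitive values such as passwords should use the SecureString type so they are encrypted):

# A plain string parameter for the DB user
aws ssm put-parameter --name /wordpress/DBUser --type String --value manutdwordpressuser

# An encrypted parameter for the DB password
aws ssm put-parameter --name /wordpress/DBPassword --type SecureString --value 'enter-a-strong-password'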

8. To import the parameters into the instance I used the command below:

{VariableName}=$(aws ssm get-parameters --region {RegionName} --names {SSMParameterStoreName} --query Parameters[0].Value)
{VariableName}=`echo ${VariableName} | sed -e 's/^"//' -e 's/"$//'`

Creating a launch template to automate the deployment of the WordPress Ec2 instance and database:

1. Create the launch template using a free-tier eligible instance type; the AMI used was Amazon Linux 2 AMI (HVM), SSD Volume Type

2. Select the proper VPC and security groups to be used

3. Assign the instance role created earlier

4. Fill in the user data

5. The user data ensures that instances deployed from the launch template come with WordPress and the DB pre-baked

6. The script in the user data follows the commands set out in the “Setting up the environment” section of this blog (a trimmed skeleton is shown after this list)

7. Test the WordPress blog by creating a post
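
A trimmed skeleton of what the user data could look like; it replays the commands from the “Setting up the environment” section, pulling the variables from Parameter Store instead of hard-coding them (parameter names and the region are illustrative):

#!/bin/bash -xe
# Pull configuration from Parameter Store (user data runs as root, so no sudo needed)
DBUser=$(aws ssm get-parameters --region us-east-1 --names /wordpress/DBUser --query Parameters[0].Value --output text)
DBPassword=$(aws ssm get-parameters --region us-east-1 --names /wordpress/DBPassword --with-decryption --query Parameters[0].Value --output text)

# Install and start the stack, then download WordPress
yum -y install mariadb-server httpd wget
systemctl enable httpd mariadb && systemctl start httpd mariadb
wget https://wordpress.org/latest.tar.gz -P /var/www/html
# ...the remaining WordPress and DB configuration follows the steps above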

SSM Parameter Store
IAM Role
WordPress Landing Page after deploying from launch template

Separating the DB instance and migrating it into RDS:

1. Create a subnet group that allows RDS to choose from the different subnets

a. Choose the address ranges for the DB subnet group that you set up

2. Now deploy the RDS instance and use the subnet group to control its placement inside the VPC

a. Use a single AZ to keep costs low; in a production setting, however, always use Multi-AZ to make the architecture highly available

b. Select MySQL 5.6.46, since snapshots of that version can later be migrated to Aurora, Amazon’s proprietary database engine

c. Enter the values stored in the SSM parameter store into DB Instance name, user, master password, and initial DB name under Additional Configuration

d. Set public accessibility to “no”

e. Set VPC and subnet group

3. Connect to the EC2 instance that was created earlier using Session Manager

4. Open a bash shell and import the values from the Parameter Store

5. Now to migrate to RDS, we must first take a backup of the existing database by running a command:
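
A sketch of the backup command, using the variables imported from Parameter Store:

# Dump the local WordPress database to a file
mysqldump -h localhost -u "$DBUser" -p"$DBPassword" "$DBName" > wordpress.sql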

6. Then, we must restore the backup to RDS

a. To do this, copy the endpoint of the RDS instance that was created earlier and replace the DB endpoint value in the SSM Parameter Store

b. Import the parameter store variable

c. Restore the DB backup that we stored into RDS
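
The restore is essentially the same command in reverse, pointed at the RDS endpoint that was just imported from the updated parameter (a sketch):

# Restore the dump into the RDS instance using its endpoint
mysql -h "$DBEndpoint" -u "$DBUser" -p"$DBPassword" "$DBName" < wordpress.sql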

7. Change the WP config file to use the RDS instance

8. Stop the MariaDB database
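
Steps 7 and 8 can be done with sed and systemctl, roughly like this (a sketch; disabling the service is optional but keeps it from starting again on reboot):

# Point WordPress at the RDS endpoint instead of the local database
sudo sed -i "s/localhost/$DBEndpoint/" /var/www/html/wp-config.php

# Stop the local MariaDB service (and optionally disable it)
sudo systemctl stop mariadb
sudo systemctl disable mariadb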

9. Check that WP is still loading successfully. If it does, it means that although the MariaDB database inside the monolithic EC2 instance is stopped, the blog is now being served from the separate RDS instance

10. Update the launch template so that MariaDB does not install automatically

11. Set the new launch template version as the default

WordPress landing page loaded from separate RDS instance
WordPress second page loaded from separate RDS instance

Deploying an EFS file system for shared storage, so that instances do not depend on locally stored media but instead reference the EFS file system for it:

1. Create the file system by specifying a name, lifecycle management policies, automatic backups (very important!), and the performance and throughput modes. We keep the default modes, because the others are meant for special scenarios that need custom performance or throughput

2. Configure the EFS mount targets in the VPC, which the instances will connect to

a. Select the created VPC

b. Select the AZs and subnets, and choose the EFS SG that was created earlier as the security group

c. Proceed to create the file system

d. Note down the fs-xxxxxxxx ID displayed at the top, as this will be used to create an SSM parameter

3. Navigate to parameter store and create a parameter

4. Connect the instance to EFS

a. We need the EFS utilities. Install them using the command

sudo yum -y install amazon-efs-utils

b. Since all the content is stored locally on the instance inside wp-content, we must copy it into EFS and mount the EFS file system

i. Copy the contents to a temporary location and create an empty folder that will act as our mount point

ii. Get the EFS file system ID from the Parameter Store

iii. Configure the file system to mount at /var/www/html/wp-content/ (which is where WordPress stores its content)

iv. Copy the original content back and change permissions
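
A sketch of sub-steps i through iv: move wp-content aside, mount EFS in its place via /etc/fstab, and copy the content back. EFSFSID is assumed to be a variable already imported from Parameter Store:

# Move the existing content aside and recreate an empty mount point
cd /var/www/html
sudo mv wp-content /tmp
sudo mkdir wp-content

# Add an fstab entry so the EFS file system mounts on every boot, then mount it
echo "$EFSFSID:/ /var/www/html/wp-content efs _netdev,tls 0 0" | sudo tee -a /etc/fstab
sudo mount -a -t efs

# Copy the original content back and fix ownership
sudo cp -rpf /tmp/wp-content/* /var/www/html/wp-content/
sudo chown -R apache:apache /var/www/html/wp-content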

c. Use the reboot command to restart the instance and then test whether the blog is working. If it is, it means that the media is being loaded from EFS

d. We can further use the df -k command to check that the EFS file system is mounted at the directory where WordPress looks for its content (see the highlighted portion of the image below)

5. Now the media is being loaded from EFS and the instance storage is less crucial

6. We must update the launch template to reflect this so that EFS is mounted every time the launch template is used

a. We will modify the previous launch template and provide a meaningful description

b. We will modify the launch template with the commands we used earlier in this section

EFS file system mounted as /var/www/html/wp-content

Setting up the auto-scaling group (ASG) and a load balancer for scalability and high availability:

1. Create an internet-facing load balancer and set it to listen on port 80

2. Select appropriate availability zones based on the VPC that you created

3. Move on to the next step. It will show a warning that we are not using HTTPS, but that’s okay since this is just a demo

4. Next select the SG that we created earlier for the load balancer and set up the health check. The path should be root, i.e. “/”

5. Copy the DNS name inside the ALB that was created

6. Create SSM parameter of the ALB DNS name

7. Modify the launch template
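
One plausible use of the ALB DNS parameter is for instances launched from the template to rewrite the WordPress home and siteurl options, so the site is addressed through the load balancer rather than an individual instance. A hedged sketch, assuming ALBDNSNAME has been imported from Parameter Store and that wp_options is the standard WordPress options table:

# Look up the current siteurl, then swap it for the ALB DNS name across wp_options
OLDURL=$(mysql -h "$DBEndpoint" -u "$DBUser" -p"$DBPassword" -N -e "SELECT option_value FROM wp_options WHERE option_name = 'siteurl';" "$DBName")
mysql -h "$DBEndpoint" -u "$DBUser" -p"$DBPassword" -e "UPDATE wp_options SET option_value = REPLACE(option_value, '$OLDURL', 'http://$ALBDNSNAME');" "$DBName"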

8. Create an auto-scaling group

a. Set up the target group

b. Integrate ASG and load balancer

c. Add two policies and create CloudWatch alarms to scale out when CPU utilization is above 40% and scale in when it drops below 40%

d. Adjust the ASG values so that the minimum is 1, the desired capacity is 1, and the maximum is 3

e. Use the command stress -c 2 -v -t 3000 to simulate load on the EC2 instance and check the Activity section of the ASG

f. You should find that the alarm has been triggered and the ASG is provisioning a new instance

Policy triggered and ASG is provisioning instances to achieve the desired state

To summarize what I have learned:

  1. Setting up the AWS environment by creating VPCs, subnets, route tables, internet gateways, and security groups.
  2. Provision an EC2 instance and manually configure WordPress to host a blog.
  3. Automate the configuration using a launch template.
  4. Use Systems Manager’s parameter store to securely store and retrieve various parameters that are used in scripts in the instances as opposed to typing them out every time.
  5. Set up an IAM role to allow EC2 instances access to AWS resources.
  6. Create a separate RDS instance and start moving towards a more microservice-style architecture, so that components can scale without relying on each other.
  7. Set up an Elastic File System and mount it onto the provisioned instances each time by modifying the launch template. This allows the media to be stored separately and makes the instance store unimportant.
  8. Configure an application load balancer (ALB) and an auto-scaling group to undertake health checks and detect anomalies. Add policies to scale in or out based on certain criteria.

I find cloud technology fascinating and will be posting more in-depth exercises as I continue to grow my knowledge.

Thank you for reading! Be sure to take care of yourself and especially each other.
