Angular on Amazon Web Services
At interfacewerk we deploy Angular applications to a variety of environments, from HMI displays such as cashier machines to more common web development infrastructure. With the increasing importance of cloud computing, we want to take a look at how to deploy an Angular application on Amazon Web Services. We will describe a basic architecture that includes a backend application and an HTTPS-enabled domain for our Angular application. We will not examine details of the actual Angular application or of the backend; instead, we give a few commands and some tips and tricks rather than a complete step-by-step procedure. Feel free to use this to deploy your first Angular project on AWS. The estimated cost of this setup without further usage is around 10 USD/month.
We will use the following services for our setup: EC2, S3, CloudFront, Route 53 and AWS Certificate Manager.
EC2 is the foundation of AWS services and provides scalable compute capacity. To get started, create a new instance from the Amazon Linux 2 AMI. For testing purposes, t2.micro is recommended, since the free tier includes a monthly allowance of hours for it. Continue with the default settings and select the default VPC security group. Note that this is not recommended for production, since it opens all protocols and ports.
SSH into your instance and install Docker:
sudo yum update -y
sudo yum install -y docker
sudo usermod -aG docker ec2-user
sudo service docker start
Note that the group change only takes effect after you log out and back in; until then, prefix your docker commands with sudo.
Pull the Docker image that contains your web server from your registry. When running it, map the container's port to port 80 of the host.
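As a sketch, assuming the image is called myregistry/backend and the web server inside listens on port 3000 (both are placeholders for your own setup), the pull-and-run step might look like this:

```shell
# Pull the backend image from your registry (image name is a placeholder)
docker pull myregistry/backend:latest

# Run it detached and map the container's port 3000 to port 80 on the host;
# --restart keeps it running across Docker daemon restarts
docker run -d --restart unless-stopped -p 80:3000 myregistry/backend:latest
```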
S3 is an object storage that comes with lots of features such as versioning and high availability. Build your Angular application as you would usually do. Create a bucket in S3 with the following command:
aws s3api create-bucket --bucket YOURBUCKETNAME --region YOURAWSREGION
Afterwards, add a bucket policy that allows public read access for your bucket:
aws s3api put-bucket-policy --bucket YOURBUCKETNAME --policy "$policy_json"
The $policy_json grants public read access to all objects in the bucket and looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOURBUCKETNAME/*"
    }
  ]
}
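To get the build artifacts into the bucket, you can use aws s3 sync. This is a sketch assuming Angular's default dist/ output folder; adjust the path to your project:

```shell
# Build the production bundle
ng build --prod

# Sync the build output into the bucket (dist/ path is an assumption;
# adjust it to your project's output folder)
aws s3 sync dist/ s3://YOURBUCKETNAME/
```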
In our setup we use Amazon’s CDN called CloudFront to distribute our content to different edge locations. This decreases latency all over the globe. Create a distribution with the following command:
aws cloudfront create-distribution \
  --origin-domain-name YOURBUCKETNAME.s3.amazonaws.com \
  --default-root-object index.html
Setting index.html as the default root object makes CloudFront serve your Angular entry point when the bare domain is requested.
Tip: Create an invalidation each time you upload new content to S3. This removes your old content from CloudFront's caches, so the edge locations fetch the new content on the next request.
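Such an invalidation can be created with the CLI; the distribution ID below is a placeholder, and "/*" invalidates every cached path:

```shell
# Invalidate all cached paths of the distribution (ID is a placeholder)
aws cloudfront create-invalidation \
  --distribution-id YOURDISTRIBUTIONID \
  --paths "/*"
```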
Next, register certificates with AWS Certificate Manager. This can also be done externally, but it is often more convenient to use AWS services. Besides, AWS Certificate Manager can be used for free.
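A certificate can also be requested via the CLI. The domain below is a placeholder; note that a certificate used by CloudFront must live in the us-east-1 region:

```shell
# Request a certificate with DNS validation (domain is a placeholder);
# us-east-1 is required for certificates attached to CloudFront
aws acm request-certificate \
  --domain-name YOURDOMAIN.com \
  --validation-method DNS \
  --region us-east-1
```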
Next, we use an Elastic Load Balancer, which is part of the EC2 service. It is usually suitable when you want to distribute traffic across multiple EC2 instances. In this setup, Amazon Web Services forces you to use an Elastic Load Balancer because it allows you to map the HTTPS requests to port 80 (HTTP) of your instance. Create an Application Load Balancer that listens on the HTTPS protocol. Select the certificate for your domain (e.g. api.YOURDOMAIN.com) and point the target group to your instance on port 80. When you register your targets, you can select the instance we created above. If you have multiple EC2 instances running the same Docker container, select as many as you like for better load distribution.
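The same console steps can be sketched with the elbv2 CLI. The VPC ID, subnet IDs, security group, instance ID and ARNs below are all placeholders for values from your account:

```shell
# Target group forwarding to port 80 of the instance(s); VPC ID is a placeholder
aws elbv2 create-target-group --name backend-tg \
  --protocol HTTP --port 80 --vpc-id YOURVPCID

# Register the EC2 instance created above (IDs/ARNs are placeholders)
aws elbv2 register-targets --target-group-arn YOURTARGETGROUPARN \
  --targets Id=YOURINSTANCEID

# Application Load Balancer spanning at least two subnets (placeholders)
aws elbv2 create-load-balancer --name backend-alb \
  --subnets YOURSUBNET1 YOURSUBNET2 --security-groups YOURSECURITYGROUP

# HTTPS listener using the ACM certificate, forwarding to the target group
aws elbv2 create-listener --load-balancer-arn YOURLOADBALANCERARN \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=YOURCERTIFICATEARN \
  --default-actions Type=forward,TargetGroupArn=YOURTARGETGROUPARN
```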
Finally, we use Route53 to route our domains. We want two domains:
- one for the user to call (e.g. YOURDOMAIN.com) and
- one for the Angular application to call our backend (e.g. api.YOURDOMAIN.com).
Tip: This approach also works across AWS accounts if the domain is registered in a different AWS account.
- for YOURDOMAIN.com: create an A record with Alias and set the Alias Target to the 'Domain Name' of your CloudFront distribution.
- for api.YOURDOMAIN.com: create an A record with Alias and copy the DNS name of your Elastic Load Balancer into the Alias Target. Notice that the prefix dualstack will be added in front of the Alias Target to support IPv6.
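The api record can also be created from the CLI. The hosted zone IDs and the load balancer DNS name below are placeholders; note that the HostedZoneId inside AliasTarget is the load balancer's zone, not your own hosted zone:

```shell
# Create an alias A record pointing api.YOURDOMAIN.com at the load balancer.
# YOURHOSTEDZONEID, ELBHOSTEDZONEID and the DNS name are placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id YOURHOSTEDZONEID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.YOURDOMAIN.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ELBHOSTEDZONEID",
          "DNSName": "dualstack.YOURELBNAME.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```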
This setup can be seen as an introduction to AWS services. Compared to more advanced setups, such as using AWS ECS or Kubernetes for the composition of backend containers, it is very easy to set up. However, most of the Angular-related part would stay the same in any other scenario. In this setup, on each update of your backend you would have to SSH into your instance(s) to pull and restart your Docker containers. This would be simplified by using one of the mentioned container orchestration tools.