How to host a full-stack JavaScript app with AWS CloudFront and Elastic Beanstalk

Let's imagine that you have finished building your app. You have a Single Page Application (SPA) with a NestJS back-end. You are ready to launch, but what if your app is a hit, and you need to be prepared to serve thousands of users? You might need to scale your API horizontally, which means that to serve traffic, you need to have more instances running behind a load balancer. Serving your front-end using a CDN will also be helpful.

Architecture

In this article, I am going to walk you through setting up a scalable distribution in AWS, using S3, CloudFront and Elastic Beanstalk. The NestJS API and the simple front-end are both inside an Nx monorepo.

The sample application

For the sake of this tutorial, we have put together a very simple HTML page that tries to reach an API endpoint and a very basic API written in NestJS.

The UI

The UI code is very simple. There is a "HELLO" button on the page which, when clicked, sends a request to the /api/hello endpoint. If the response has a 2xx status code, the script puts an h1 tag with the response contents into the div with the id result. If the request fails, it puts an error message into the same div.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Frontend</title>
    <base href="/" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="icon" type="image/x-icon" href="favicon.ico" />
  </head>
  <body>
    <button id="hello">HELLO</button>

    <div id="result"></div>
    <script>
      const helloButton = document.getElementById('hello');
      const resultDiv = document.getElementById('result');
      helloButton.addEventListener('click', async () => {
        const response = await fetch('/api/hello');
        if (response.ok) {
          const text = await response.text();
          console.log(text);
          resultDiv.innerHTML = `<h1>${text}</h1>`;
        } else {
          resultDiv.innerHTML = `<h1>An error occurred.</h1>`;
        }
      });
    </script>
  </body>
</html>

The API

We bootstrap the NestJS app to have the api prefix before every endpoint call.

// main.ts
import { Logger } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';

import { AppModule } from './app/app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  const globalPrefix = 'api';
  app.setGlobalPrefix(globalPrefix);
  const port = process.env.PORT || 3000;
  await app.listen(port);
  Logger.log(`🚀 Application is running on: http://localhost:${port}/${globalPrefix}`);
}

bootstrap();

We bootstrap it with the AppModule, which only has the AppController in it.

// app.module.ts

import { Module } from '@nestjs/common';

import { AppController } from './app.controller';

@Module({
  imports: [],
  controllers: [AppController],
})
export class AppModule {}

And the AppController sets up two very basic endpoints. We set up a health check on the /api route and our hello endpoint on the /api/hello route.

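// app.controller.ts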
import { Controller, Get } from '@nestjs/common';

@Controller()
export class AppController {
  @Get()
  health() {
    return 'OK';
  }

  @Get('hello')
  hello() {
    return 'Hello';
  }
}

Hosting the front-end with S3 and CloudFront

To serve the front-end through a CDN, we should first create an S3 bucket. Go to S3 in your AWS account and create a new bucket. Give your new bucket a meaningful name. For example, if this is going to be your production deployment, I recommend having -prod in the name so you can see at a glance that this bucket contains your production front-end and that nothing in it should get deleted accidentally.

We go with the defaults for this bucket, setting it to the us-east-1 region. Let's set up the bucket to block all public access, because we are going to allow GET requests to these files only through CloudFront. We don't need bucket versioning enabled, because these files will be deleted every time a new front-end version is uploaded to this bucket. If we were to enable bucket versioning, old front-end files would be marked as deleted and kept, increasing storage costs in the long run. Let's use server-side encryption with Amazon S3-managed keys and create the bucket.
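If you prefer the command line, a rough equivalent of these bucket settings with the AWS CLI could look like this (the bucket name my-app-frontend-prod is only an example):

aws s3api create-bucket --bucket my-app-frontend-prod --region us-east-1
aws s3api put-public-access-block --bucket my-app-frontend-prod --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true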

Once the bucket is created, upload the front-end files to it, then let's go to the CloudFront service and create a distribution.
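If you'd rather upload from a terminal than through the console, a minimal sketch with the AWS CLI, assuming the built front-end files are in dist/apps/frontend and using the example bucket name from above:

aws s3 sync dist/apps/frontend s3://my-app-frontend-prod --delete

The --delete flag removes files from the bucket that are no longer present in the local build output.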

As the origin domain, choose your S3 bucket. Feel free to change the name of the origin. For Origin access, choose Origin access control settings (recommended) and create a new control setting with the defaults. I recommend adding a description to this control setting.

At the Web Application Firewall (WAF) settings, we would recommend enabling security protections, although they have cost implications. For this tutorial, we chose not to enable WAF for this CloudFront distribution.

In the Settings section, choose the Price class that best fits you. If you have a domain and an SSL certificate, you can set those up for this distribution now, but you can also do that later. As the Default root object, provide index.html and create the distribution.


When you have created the distribution, you should see a warning at the top of the page. Copy the policy and go to your S3 bucket's Permissions tab. Edit the Bucket policy and paste the policy you just copied, then save it.
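For reference, the policy CloudFront generates looks roughly like the following. The bucket name, account ID and distribution ID below are placeholders, so always paste the exact policy from the warning banner:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudfront.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-app-frontend-prod/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EXAMPLEDISTID"
        }
      }
    }
  ]
}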

If you have set up a domain with your CloudFront distribution, you can open that domain and you should see the front-end deployed. If you didn't set up a domain, the Details section of your CloudFront distribution contains your distribution domain name.

Distribution domain name is in the upper left corner of the Details section

If you click the "HELLO" button on your deployed front-end, it will not be able to reach the /api/hello endpoint yet, and an error message will be displayed on the page.

Hosting the API in Elastic Beanstalk

Elastic Beanstalk prerequisites

For our NestJS API to run in Elastic Beanstalk, we need some additional setup. Inside the apps/api/src folder, let's create a Procfile with the following contents:

web: node main.js

Then open apps/api/project.json and, under the build configuration, extend the production build setup with the following (only the relevant parts are shown):

{
  "targets": {
    "build": {
      "configurations": {
        "development": {},
        "production": {
          "generatePackageJson": true,
          "assets": [
            "apps/api/src/assets",
            "apps/api/src/Procfile"
          ]
        }
      }
    }
  }
}

The above settings make sure that when we build the API with the production configuration, a package.json and a package-lock.json are generated next to the output file main.js.

To have a production-ready API, we set up a script in the package.json file of the repository. Running this will create a dist/apps/api and a dist/apps/frontend folder with the necessary files.

{
  "scripts": {
    "build:prod": "nx run-many --target=build --projects api,frontend --configuration=production"
  }
}

After running the script, zip the contents of the production-ready api folder so we can upload it to Elastic Beanstalk later. The -j flag stores the files without their folder structure, so main.js, the Procfile and the package.json files end up at the root of the archive.

zip -r -j dist/apps/api.zip dist/apps/api

Creating the Elastic Beanstalk Environment

Let's open the Elastic Beanstalk service in the AWS console and create an application. An application is a logical grouping of several environments. We usually put our development, staging and production environments under the same application name. The first time you create an application, you will need to create an environment as well.

We are creating a Web server environment. Provide your application's name in the Application information section. You can also add some tags for your convenience. In the Environment information section, provide information about your environment. Leave the Domain field blank for an autogenerated value.

Application and environment information

When setting up the platform, we are going to use the managed Node.js platform, with Node.js version 18 and the latest platform version.

Platform and engines

Let's upload our application code and name the version to indicate that it was built locally. This version label will be displayed on the running environment, and when we set up automatic deployments, we can validate whether the build was successful. As a Preset, let's choose Single instance (free tier eligible).

App upload and presets

On the next screen, configure your service access. For this tutorial, we only create a new service role. You must select the aws-elasticbeanstalk-ec2-role for the EC2 instance profile.

Create new service role

If you can't select this role, you should create it in AWS IAM with the AWSElasticBeanstalkWebTier, AWSElasticBeanstalkMulticontainerDocker and AWSElasticBeanstalkWorkerTier managed policies attached.

IAM ec2 role
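If you want to create the role from the command line instead, here is a minimal sketch with the AWS CLI (the role and instance profile names mirror the defaults mentioned above):

aws iam create-role --role-name aws-elasticbeanstalk-ec2-role --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier
aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker
aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier
aws iam create-instance-profile --instance-profile-name aws-elasticbeanstalk-ec2-role
aws iam add-role-to-instance-profile --instance-profile-name aws-elasticbeanstalk-ec2-role --role-name aws-elasticbeanstalk-ec2-role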

The next step is to set up the VPC. For this tutorial, I chose the default VPC that comes with my AWS account, but you can create and customise your own VPC. In the Instance settings section, we want our API to have a public IP address so it can be reached from the internet, and so we can route to it from CloudFront. Select all the instance subnets and availability zones you want your API instances to use.

VPC and instance settings

For now, we are not going to set up a database. We can set one up later in AWS RDS, but in this tutorial, we would like to focus on setting up the distribution. Let's move forward.

Let's configure the instance traffic and scaling. This is where we set up the load balancer. In this tutorial, we are keeping the defaults, so we add the EC2 instances to the default security group.

Set the environment type to Load balanced

In the Capacity section, we set the Environment type to Load balanced. This will bring up a load balancer for this environment. Let's configure it so that if traffic is high, AWS can spin up two additional instances for us. Select your preferred tier in the Instance types section. We set this to t3.micro for this tutorial, but you might need larger tiers.

Configure the Scaling triggers to your needs; we are going to leave them at the defaults. Set the load balancer's visibility to public and use the same subnets that you selected before.

Load Balancer network settings

In the Load Balancer Type section, choose Application load balancer and select Dedicated for exactly this environment. Let's set up the listeners to support HTTPS.

Set up a listener for port 443

Add a new listener for port 443 and connect the SSL certificate that you have set up in CloudFront as well. For the SSL policy, choose one that requires TLS 1.2 or higher, and connect this listener to the default process.

443 port setup

Now let's update the default process and set up the health check endpoint. We set up our API to have the health check endpoint at the /api route. Let's modify the default process accordingly and set its port to 8080.

Default process on port 8080 and /api health check endpoint

For this tutorial, we decided not to enable log file access, but if you need it, please set it up with a separate S3 bucket.

At the last step of configuring your Elastic Beanstalk environment, set up Monitoring, CloudWatch logs and Managed platform updates to your needs. For the sake of this tutorial, we have turned most of these options off. Set up e-mail notifications to your dedicated alert e-mail address and select how you would like your application deployments to be performed.

At the end, let's configure the Environment properties. We have set the default process to listen on port 8080, therefore we need to set the PORT environment variable to 8080.

Set PORT to 8080
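If you ever need to change this variable later without clicking through the console, a sketch with the AWS CLI (the environment name is a placeholder):

aws elasticbeanstalk update-environment --environment-name my-app-prod --option-settings Namespace=aws:elasticbeanstalk:application:environment,OptionName=PORT,Value=8080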

Review your configuration and then create your environment. It might take a few minutes to set everything up. After the environment's health transitions to OK, you can go to AWS EC2 / Load balancers in your web console. If you select the freshly created load balancer, you can copy its DNS name and test the API by appending /api/hello to it.

Load balancer DNS record
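For example, with curl (the DNS name below is a placeholder for your load balancer's DNS name):

curl http://my-app-prod-alb-1234567890.us-east-1.elb.amazonaws.com/api/hello

If everything is wired up correctly, the response body should be Hello, and hitting /api should return OK from the health check endpoint.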

Connect CloudFront to the API endpoint

Let's go back to our CloudFront distribution and select the Origins tab, then create a new origin. Copy your load balancer's DNS name into the Origin domain field and select HTTPS only as the protocol if you have set up your SSL certificate previously. If you don't have an SSL certificate set up, you can use HTTP only, but be aware that it is not secure and it is especially not recommended in production. We also renamed this origin to API. Leave everything else as default and create the new origin.

New CloudFront Origin

Under the Behaviors tab, create a new behavior. Set the path pattern to /api/* and select your newly created API origin. For the Viewer protocol policy, select Redirect HTTP to HTTPS and allow all HTTP methods (GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE). For this tutorial, we have left everything else as default, but please select the Cache policy and Origin request policy that suit you best.

API behaviour
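You can verify the new behavior from a terminal as well; the distribution domain below is a placeholder for your own:

curl https://d1234example.cloudfront.net/api/hello

This request should now be routed through CloudFront to the load balancer and return Hello.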

Now if you visit your deployment and click the HELLO button, it should no longer attach an error message to the DOM; instead, you should see the API's "Hello" response.


Now we have a distribution that serves the front-end static files through CloudFront, leveraging caching and the CDN, and we have our API behind a load balancer that can scale. But how do we deploy our front-end and back-end automatically when a release is merged to our main branch? For that, we are going to leverage AWS CodeBuild and CodePipeline, but that is the topic of the next blog post. Stay tuned.

This Dot Labs is a development consultancy that is trusted by top industry companies, including Stripe, Xero, Wikimedia, Docusign, and Twilio. This Dot takes a hands-on approach by providing tailored development strategies to help you approach your most pressing challenges with clarity and confidence. Whether it's bridging the gap between business and technology or modernizing legacy systems, you’ll find a breadth of experience and knowledge you need. Check out how This Dot Labs can empower your tech journey.
