Introducing @this-dot/rxidb

When we are working on PWAs (Progressive Web Applications), we sometimes need to implement features that require us to store data on our user's machine. One way to do that is to use IndexedDB. Using the IndexedDB browser API directly has its challenges, so our team at This Dot has developed an RxJS wrapper library around it. With @this-dot/rxidb, you can set up reactive database connections and manipulate their contents the RxJS way. The library also provides the ability to subscribe to changes in the database and update your UI accordingly.

In this blog post, I'd like to show you some small examples of the library in action. If you'd like to see the finished examples, you can find them, along with more, in our OSS repository.

Storing data in chronological order

Imagine that you are working on a special text editor app, or on something else that needs to keep data between page reloads. These kinds of apps usually need to track larger amounts of data, and using localStorage for that would be bad practice. In the following example, we will focus on how to store and delete rows in an Object Store that has autoIncrement enabled. For the sake of simplicity, every time the user presses the Add Item button, a timestamp of the event will be stored in the database.

We would also like to be able to remove items from the beginning and the end of this store. We will add two buttons to our UI to deal with that, and we would like them to be disabled if there are no entries in the store. We have a starter HTML that looks like the following:

<h1>@this-dot/rxidb autoincrement example</h1>
<br>
<button id="add-item-btn"> Add item </button>
<button id="remove-first-item-btn"> Remove first item </button>
<button id="remove-last-item-btn"> Remove last item </button>

<hr>
<div id="container"></div>

Initializing the database

For us to be able to store data in IndexedDB, we need a database connection and an Object Store set up. We want this Object Store to increment its keys automatically, so we don't need to keep track of the last key manually. We do want to listen to every update that happens in the database, so let's set up our listeners for the keys and the key-value pairs using the entries() and keys() operators.

import {
  addItem,
  connectIndexedDb,
  deleteItem,
  entries,
  getObjectStore,
  keys,
} from '@this-dot/rxidb';
import { Observable } from 'rxjs';

// ...

const DATABASE_NAME = 'AUTO_INCREMENT';

// Connect to the database and open an auto-incrementing Object Store
const store$ = connectIndexedDb(DATABASE_NAME).pipe(
  getObjectStore('store', { autoIncrement: true })
);

// Emit the key-value pairs and the keys whenever the store changes
const keyValues$: Observable<{ key: IDBValidKey; value: unknown }[]> =
  store$.pipe(entries());
const keys$: Observable<IDBValidKey[]> = store$.pipe(keys());

We want to display the contents of the database in the #container div whenever they get updated. For that, we need to subscribe to our keyValues$ observable. Whenever it emits, we update our div.

const containerDiv = document.getElementById('container') as HTMLElement;

// ...

keyValues$.subscribe((entries) => {
  const content = entries
    .map(({ key, value }) => `<div>${key} | ${value} </div>`)
    .join('\n<br>\n');
  containerDiv.innerHTML = content;
});

Manipulating the data

We have three buttons on our UI: one for adding data to the Object Store, and two for removing data from it. Let's set up our click event streams using the fromEvent observable creator function from RxJS.

const removeFirstBtn = document.getElementById(
  'remove-first-item-btn'
) as HTMLElement;
const removeLastBtn = document.getElementById(
  'remove-last-item-btn'
) as HTMLElement;
const addItemBtn = document.getElementById('add-item-btn') as HTMLElement;

const addItemBtnClick$ = fromEvent(addItemBtn, 'click');
const removeFirstItemBtnClick$ = fromEvent(removeFirstBtn, 'click');
const removeLastItemBtnClick$ = fromEvent(removeLastBtn, 'click');

We can use the addItem operator to add rows to an automatically incrementing Object Store. When the Add Item button gets clicked, we want to save a timestamp into our database.

addItemBtnClick$
  .pipe(
    map(() => new Date().getTime()),
    switchMap((timestamp) => store$.pipe(addItem(timestamp)))
  )
  .subscribe();

Removing elements from the database happens on the other two button clicks. We combine each click with the latest emission of the keys$ observable using withLatestFrom, so we can delete the first or the last item in the store.

removeFirstItemBtnClick$
  .pipe(
    withLatestFrom(keys$),
    switchMap(([, keys]) =>
      store$.pipe(
        // Only delete if the store has at least one entry
        filter(() => !!keys.length),
        deleteItem(keys[0])
      )
    )
  )
  .subscribe();

removeLastItemBtnClick$
  .pipe(
    withLatestFrom(keys$),
    switchMap(([, keys]) =>
      store$.pipe(
        // Only delete if the store has at least one entry
        filter(() => !!keys.length),
        deleteItem(keys[keys.length - 1])
      )
    )
  )
  .subscribe();

Toggling button states

The last feature we want to implement is toggling the remove buttons' disabled state. If there are no entries in the database, we disable the buttons; if there are entries, we enable them. We can easily hook into the keyValues$ stream with the tap operator.

const keyValues$: Observable<{ key: IDBValidKey; value: unknown }[]> =
  store$.pipe(
    entries(),
    tap(toggleRemoveButtons)
  );

// ...

function toggleRemoveButtons(
  storedEntries: { key: IDBValidKey; value: unknown }[]
): void {
  if (storedEntries.length) {
    removeFirstBtn.removeAttribute('disabled');
    removeLastBtn.removeAttribute('disabled');
  } else {
    removeFirstBtn.setAttribute('disabled', 'true');
    removeLastBtn.setAttribute('disabled', 'true');
  }
}

Real-world use cases for autoIncrement Object Stores

An automatically incrementing Object Store can be useful when your app needs to support offline mode, but you also need to log certain UI events to an API endpoint. Such audit logs must be stored locally and sent the next time the device comes online. While the device is offline, every outgoing request to our logging endpoint can instead put its payload into this Object Store, and when the device comes back online, we read the stored events and send them along with their timestamps.
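To make this more concrete, here is a minimal sketch of such a flush mechanism. It is an illustration built on assumptions: the sendLogs() helper, the AUDIT_LOG database name, and the exact flushing strategy are all hypothetical; only connectIndexedDb, getObjectStore, entries, and deleteItem come from @this-dot/rxidb, used the same way as above.

import { from, fromEvent } from 'rxjs';
import { concatMap, switchMap, withLatestFrom } from 'rxjs/operators';
import {
  connectIndexedDb,
  deleteItem,
  entries,
  getObjectStore,
} from '@this-dot/rxidb';

// Hypothetical helper that POSTs the buffered events to your logging endpoint
declare function sendLogs(
  events: { key: IDBValidKey; value: unknown }[]
): Promise<void>;

const store$ = connectIndexedDb('AUDIT_LOG').pipe(
  getObjectStore('store', { autoIncrement: true })
);

// Whenever the device comes back online, flush the buffered audit log
fromEvent(window, 'online')
  .pipe(
    withLatestFrom(store$.pipe(entries())),
    switchMap(([, bufferedEvents]) =>
      // Send every buffered event, then remove the sent entries one by one
      from(sendLogs(bufferedEvents)).pipe(
        switchMap(() => from(bufferedEvents)),
        concatMap(({ key }) => store$.pipe(deleteItem(key)))
      )
    )
  )
  .subscribe();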

Storing objects

Have you ever needed to fill out an extremely long form online? Maybe that form was even part of a wizard? It is a very bad user experience when you accidentally refresh or close the tab and need to start over. Of course, the app could be implemented to store the unfinished form in a server-side database somehow, but that would mean storing people's sensitive Personally Identifiable Information (PII). IndexedDB can help here as well, because it stores that data on the user's machine.

In the following example, we are going to focus on how to store data under specific keys. For the sake of simplicity, we set up some listeners and automatically save the information entered into the form. We will also have two buttons: one for clearing the form, and the other for submitting it. Our HTML template looks like the following:

<div id="app">
  <h1>@this-dot/rxidb key-value pair store example</h1>

  <hr />

  <h2>Address form</h2>
  <form id="example-form" method="POST">
    <label for="first-name">First Name:</label>
    <br />
    <input id="first-name" required placeholder="John" />
    <br />
    <br />
    <label for="last-name">Last Name:</label>
    <br />
    <input id="last-name" required placeholder="Doe" />
    <br />
    <br />
    <label for="city">City:</label>
    <br />
    <input id="city" required placeholder="Metropolis" />
    <br />
    <br />
    <label for="address-first">Address line 1:</label>
    <br />
    <input id="address-first" required placeholder="Example street 1" />
    <br />
    <br />
    <label for="address-second">Address line 2 (optional):</label>
    <br />
    <input id="address-second" placeholder="4th floor; 13th door" />
    <br />
    <br />
    <div style="display: flex; justify-content: space-between; width: 153px">
      <button id="clear-button" type="button">Clear form</button>
      <button id="submit-button" type="submit" disabled>Submit</button>
    </div>
  </form>
</div>

Based on the above template, we know the shape of the object we would like to store. Let's set up a type for that, along with some default constants.

type UserFormValue = {
  firstName: string;
  lastName: string;
  city: string;
  addressFirst: string;
  addressSecond: string;
};

const EMPTY_FORM_VALUE: UserFormValue = {
  firstName: '',
  lastName: '',
  city: '',
  addressFirst: '',
  addressSecond: '',
};

Set up the Object Store and the event listeners

Setting up the database works similarly to the previous example. We open a connection to IndexedDB and then create a store. This time, however, we create a default store without autoIncrement, which gives us full control over the keys. With this form, we want to write the value of the USER_INFO key in this Object Store. We also want to get notified when this value changes, so we set up the userInfo$ observable using the read() operator.

import {
  connectIndexedDb,
  deleteItem,
  setItem,
  read,
  getObjectStore,
} from '@this-dot/rxidb';
import { Observable } from 'rxjs';
// ...

const DATABASE_NAME = 'KEY_VALUE_PAIRS';
const FORM_DATA_KEY = 'USER_INFO';

const store$ = connectIndexedDb(DATABASE_NAME).pipe(getObjectStore('store'));
const userInfo$: Observable<UserFormValue | null> = store$.pipe(
  read(FORM_DATA_KEY)
);

To be able to write values into our Object Store and update the data on our UI, we need references to some HTML elements. We set up constants that point to our form, the two buttons, and all of the inputs inside the form.

const exampleForm = document.getElementById('example-form') as HTMLFormElement;
const submitButton = document.getElementById(
  'submit-button'
) as HTMLButtonElement;
const clearButton = document.getElementById(
  'clear-button'
) as HTMLButtonElement;

const firstNameInput = document.getElementById(
  'first-name'
) as HTMLInputElement;
const lastNameInput = document.getElementById('last-name') as HTMLInputElement;
const cityInput = document.getElementById('city') as HTMLInputElement;
const addressFirstInput = document.getElementById(
  'address-first'
) as HTMLInputElement;
const addressSecondInput = document.getElementById(
  'address-second'
) as HTMLInputElement;

And finally, we set up some event listener Observables so we can react when an event occurs. Again, we use the fromEvent creator function from RxJS.

const formInputChange$ = fromEvent(exampleForm, 'input');
const formSubmit$ = fromEvent(exampleForm, 'submit');
const clearForm$ = fromEvent(clearButton, 'click');

Set up some helper methods

Before we set up our subscriptions, let's think through what behaviour we want with this form and the buttons.

We certainly need a way to get the current value of the form as an object matching the UserFormValue type. We also want to be able to set the input fields of the form, especially when we reload the page and there is data saved in our Object Store. If no value is provided to this setter method, it should fall back to our predefined EMPTY_FORM_VALUE constant.

function getUserFormValue(): UserFormValue {
  return {
    firstName: firstNameInput.value,
    lastName: lastNameInput.value,
    city: cityInput.value,
    addressFirst: addressFirstInput.value,
    addressSecond: addressSecondInput.value,
  };
}

function setInputFieldValues(value: UserFormValue = EMPTY_FORM_VALUE): void {
  firstNameInput.value = value.firstName || '';
  lastNameInput.value = value.lastName || '';
  cityInput.value = value.city || '';
  addressFirstInput.value = value.addressFirst || '';
  addressSecondInput.value = value.addressSecond || '';
}

The UI should block the user from certain interactions. The submit button should be disabled while the form is invalid, and while a database write operation is still in progress. For handling the submit button state, we need two helper methods.

function disableSubmitButton(): void {
  submitButton.setAttribute('disabled', 'true');
}

function removeSubmitButtonDisabledIfFormIsValid(): void {
  const isFormValid = exampleForm.checkValidity();
  if (isFormValid) {
    submitButton.removeAttribute('disabled');
  }
}

Now we have every tool that we need to implement the logic.

Setting up our subscriptions

We would like to write the form data into the Object Store as soon as the form changes, but we don't want to start a write operation on every keystroke. To mitigate this, we are going to use the debounceTime(1000) operator, so the stream waits for 1 second of inactivity before starting the write. We use our getUserFormValue() helper method to get the actual data from the input fields, and we use the setItem() operator on the store$ observable, inside a switchMap, to write the values. We also want to disable the Submit button when the form changes, and re-enable it once the form is valid and the write operation has finished.

formInputChange$
  .pipe(
    tap(() => disableSubmitButton()),
    debounceTime(1000),
    map<unknown, UserFormValue>(getUserFormValue),
    switchMap((userFormValue) =>
      store$.pipe(setItem(FORM_DATA_KEY, userFormValue))
    ),
    tap(() => removeSubmitButtonDisabledIfFormIsValid())
  )
  .subscribe();

We also want to set the values of the input fields, for example when we refresh the page. Here too, we handle the submit button state, and we only set the values if there is actually stored data to set. We use our setInputFieldValues() method to update the UI.

userInfo$
  .pipe(
    tap(() => disableSubmitButton()),
    filter((v: UserFormValue | null): v is UserFormValue => !!v),
    tap((storedValue: UserFormValue) => {
      setInputFieldValues(storedValue);
      removeSubmitButtonDisabledIfFormIsValid();
    })
  )
  .subscribe();

When we submit the form, we will probably want to do something asynchronously. When that succeeds, we want to clear our Object Store, so we don't keep the submitted data on our user's machine. We also want to update the UI and clear the input fields. In this example, the form would send a POST request when we press the submit button, so we call event.preventDefault() on the submit event to stay on the page.

formSubmit$
  .pipe(
    // We prevent the native HTML submit event from running, so it won't send a POST request for the sake of the example.
    tap((event: SubmitEvent) => {
      event.preventDefault();
      disableSubmitButton();
    }),
    map(getUserFormValue),
    // this is the point where we could do anything with the current form values, for example, send them to the server, etc.
    switchMap(() => store$.pipe(deleteItem(FORM_DATA_KEY))),
    tap(() => setInputFieldValues())
  )
  .subscribe();

And when we want to clear the form, we need to do the same with the data stored in our Object Store.

clearForm$
  .pipe(
    switchMap(() => store$.pipe(deleteItem(FORM_DATA_KEY))),
    tap(() => setInputFieldValues())
  )
  .subscribe();

Real-world use cases for Key-Value pair Object Stores

Having forms that persist between page refreshes is just one useful feature you can build with IndexedDB. Our example above is very simple, but the same approach works for a multi-page form: you could store the user's progress and allow them to continue the form later. Keeping the constraints of IndexedDB in mind, another very cool feature is storing data for offline use.
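As a rough sketch of that multi-page idea, each wizard step could be persisted under its own key. The WIZARD database name, the step keys, and the WizardStep type below are made up for illustration; setItem and read are the same operators we used above.

import { Observable } from 'rxjs';
import {
  connectIndexedDb,
  getObjectStore,
  read,
  setItem,
} from '@this-dot/rxidb';

// Hypothetical shape of a single wizard step's form data
type WizardStep = Record<string, string>;

const wizardStore$ = connectIndexedDb('WIZARD').pipe(getObjectStore('store'));

// Persist the values of one step under its own key, e.g. 'STEP_1'
function saveStep(stepKey: string, value: WizardStep): Observable<unknown> {
  return wizardStore$.pipe(setItem(stepKey, value));
}

// Restore a previously saved step, e.g. when the user returns to the wizard
function restoreStep(stepKey: string): Observable<WizardStep | null> {
  return wizardStore$.pipe(read(stepKey));
}

saveStep('STEP_1', { firstName: 'John' }).subscribe();
restoreStep('STEP_1').subscribe((value) => {
  // pre-fill the step's form fields here
});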

This Dot Labs is a development consultancy that is trusted by top industry companies, including Stripe, Xero, Wikimedia, Docusign, and Twilio. This Dot takes a hands-on approach by providing tailored development strategies to help you approach your most pressing challenges with clarity and confidence. Whether it's bridging the gap between business and technology or modernizing legacy systems, you’ll find a breadth of experience and knowledge you need. Check out how This Dot Labs can empower your tech journey.
