
Migrating an Amplify Backend to Serverless Framework - Part 3

This is Part Three of a three-part series on Migrating an Amplify Backend to Serverless Framework. You can find Part One here and Part Two here.

This is the third and final part of our series where we're showing the steps needed to migrate an Amplify backend to Serverless Framework. After scaffolding the project in the first part and setting up the GraphQL API in the second part, all that remains is adding the final touches: DynamoDB triggers and S3 buckets. Let's get to it.

DynamoDB Triggers

A DynamoDB trigger allows you to invoke a lambda every time a DynamoDB table is modified; the lambda receives the affected row in its input event. In our application, we will use this to add a new notification to the NotificationQueue table every time an Item row is created with remindAt set. For this purpose, let's create that lambda. It will be just a placeholder, since we're focusing mainly on the Serverless configuration.

Copy the contents of handlers/process-queue/index.js to handlers/add-to-queue/index.js. This lambda has the following content:

'use strict';

module.exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v3.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};

Now we need to make a slight modification to our Item table resource to add a stream configuration. Having a stream configured on the DynamoDB table is a prerequisite for a trigger to be invoked on row modification. The stream is configured by adding a StreamSpecification property, with a StreamViewType, to the table configuration.

The table configuration for the Item resource now becomes:

Resources:
    # ...other resources
    ItemTableResource:
      Type: AWS::DynamoDB::Table
      Properties:
        StreamSpecification:
          StreamViewType: NEW_AND_OLD_IMAGES
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
          - AttributeName: listId
            AttributeType: S
        GlobalSecondaryIndexes:
          - IndexName: byList
            KeySchema:
              - AttributeName: listId
                KeyType: HASH
            Projection:
              ProjectionType: ALL
        BillingMode: PAY_PER_REQUEST
        TableName: ${self:provider.environment.ITEM_TABLE_NAME}

The only remaining part is to connect the lambda and stream configuration together. This is done in the functions property of the Serverless configuration:

functions:
  # ... other functions
  addToQueue:
    handler: handlers/add-to-queue/index.handler
    events:
      - stream:
          type: dynamodb
          arn: !GetAtt ItemTableResource.StreamArn

We have the standard lambda function definition, as well as an events property that hooks the lambda up to the stream of the Item table. Again, we use an intrinsic function, in this case !GetAtt, to fetch the ARN (Amazon Resource Name) of the stream. With this in place, the lambda is hooked up to the Item table's stream and will start receiving modification events.

One such event might look like this:

{
  "Records": [
    {
      "awsRegion": "us-east-1",
      "dynamodb": {
        "ApproximateCreationDateTime": 1632502576,
        "Keys": {
          "id": {
            "S": "..."
          }
        },
        "NewImage": {
          "__typename": {
            "S": "Item"
          },
          "id": {
            "S": "..."
          },
          "remindAt": {
            "S": "2021-09-24T16:56:15.182Z"
          },
          "cognitoUserId": {
            "S": "..."
          },
          "listId": {
            "S": "..."
          },
          "title": {
            "S": "Some title"
          },
          "notes": {
            "S": "Item notes"
          }
        },
        "SequenceNumber": "853010500000000020163575159",
        "SizeBytes": 356,
        "StreamViewType": "NEW_AND_OLD_IMAGES"
      },
      "eventID": "...",
      "eventName": "INSERT",
      "eventSource": "aws:dynamodb",
      "eventSourceARN": "arn:aws:dynamodb:us-east-1:...",
      "eventVersion": "1.1"
    }
  ]
}
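
Once the trigger is wired up, the add-to-queue handler will receive batches of such records. As a rough, hypothetical sketch of how it could consume them (assuming the aws-sdk v2 package, which ships with the older Node.js Lambda runtimes, and a NOTIFICATION_QUEUE_TABLE_NAME environment variable pointing at the NotificationQueue table), the handler might look something like this:

'use strict';

// Hypothetical sketch: adapt table and field names to your own configuration.
const AWS = require('aws-sdk');

const docClient = new AWS.DynamoDB.DocumentClient();

module.exports.handler = async (event) => {
  for (const record of event.Records) {
    // Only react to newly inserted rows
    if (record.eventName !== 'INSERT') continue;

    // Convert the DynamoDB-typed image into a plain JavaScript object
    const item = AWS.DynamoDB.Converter.unmarshall(record.dynamodb.NewImage);

    // Ignore items that have no reminder set
    if (!item.remindAt) continue;

    // Queue a notification for this item
    await docClient
      .put({
        TableName: process.env.NOTIFICATION_QUEUE_TABLE_NAME,
        Item: {
          itemId: item.id,
          cognitoUserId: item.cognitoUserId,
          remindAt: item.remindAt,
        },
      })
      .promise();
  }
};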

Setting up S3 Buckets

If a user of our app wants to attach an image to a todo item's note, we could upload that image to an S3 bucket and serve it from there when displaying the note in the UI. For this to work, we need to provision an S3 bucket through our Serverless configuration.

An S3 bucket is defined as a resource, just like the DynamoDB tables in our configuration. We need to give it a name, so let's configure that in the environment first:

provider:
  # ...
  environment:
    # ... other environment variables...
    S3_BUCKET_NAME: ${self:service}-${opt:stage, self:provider.stage}-images

The S3 bucket name is composed of the service name and the stage, suffixed with the string "-images". In our case, for the dev environment, the bucket would be named amplified-todo-api-dev-images.

Now we need to configure the resources for this S3 bucket. We can append the following configuration to the end of the Resources section:

Resources:
  # ...other resources
  ImageBucketResource:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:provider.environment.S3_BUCKET_NAME}
  ImageBucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: PublicRead
            Effect: Allow
            Principal: '*'
            Action:
              - 's3:GetObject'
            Resource: !Join ['', ['arn:aws:s3:::', !Ref ImageBucketResource, /*]]
      Bucket:
        Ref: ImageBucketResource

In the above configuration, we create a resource for the bucket, and a policy granting public read access to the objects in it. Note how ImageBucketPolicy references ImageBucketResource. We're using intrinsic functions again to avoid hardcoding the bucket name; for the dev stage, the policy's Resource resolves to arn:aws:s3:::amplified-todo-api-dev-images/*.

If we wanted a lambda to upload to this S3 bucket, we would need to grant it the appropriate permissions:

provider:
  # ...
  environment:
    # ...other environment variables
    S3_BUCKET_NAME: ${self:service}-${opt:stage, self:provider.stage}-images
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - s3:PutObject
            - s3:GetObject
          Resource: 'arn:aws:s3:::${self:provider.environment.S3_BUCKET_NAME}/*'

Our S3 bucket is now set up.
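
To illustrate how such an upload lambda could use the bucket, here is a minimal, hypothetical sketch. It assumes the aws-sdk v2 package and a base64-encoded image in the request body, and it relies only on the s3:PutObject permission and the S3_BUCKET_NAME environment variable configured above; the payload shape and object key scheme are placeholders.

'use strict';

// Hypothetical sketch: the request payload and key scheme are placeholders.
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

module.exports.handler = async (event) => {
  const { itemId, imageBase64 } = JSON.parse(event.body);
  const key = `items/${itemId}.jpg`;

  // Store the decoded image in the bucket provisioned above
  await s3
    .putObject({
      Bucket: process.env.S3_BUCKET_NAME,
      Key: key,
      Body: Buffer.from(imageBase64, 'base64'),
      ContentType: 'image/jpeg',
    })
    .promise();

  // Thanks to the public-read bucket policy, the object can be served directly
  return {
    statusCode: 200,
    body: JSON.stringify({
      url: `https://${process.env.S3_BUCKET_NAME}.s3.amazonaws.com/${key}`,
    }),
  };
};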

Bonus: Lambda Bundling

The project in its current state is relatively simple and should take only a couple of minutes to deploy. With time, however, it will probably grow larger, and the lambdas may start to require external dependencies. Deployments will become slower, and the lambda packages will contain more files than are really necessary. At that point, it's a good idea to introduce lambda bundling.

serverless-esbuild is a plugin that uses esbuild to bundle and minify your lambda code. It's almost zero-config and works out of the box, without the need to install any additional plugins. It supports both TypeScript and JavaScript code.

To start using it, install it first:

npm install --save-dev serverless-esbuild

Now add it to the plugins array:

plugins:
  - serverless-esbuild
  - serverless-appsync-plugin

Finally, configure it to both bundle and minify your lambdas:

custom:
  esbuild:
    bundle: true
    minify: true
  appSync:
    # ... appSync config

That's it. Your lambdas will now be bundled and minified on every deploy.

Conclusion

This is the end of our three-part series on migrating an Amplify backend to Serverless Framework. We hope you enjoyed the journey! Even if you're not migrating from Amplify, these guides should help you configure various services such as AppSync and DynamoDB in Serverless Framework. Don't forget that the entire source code for this project is up on GitHub. Should you need any help, though, with either Amplify or Serverless Framework, please do not hesitate to drop us a line!

