GitHub Actions for Serverless Framework Deployments

Background

Our team was building a Serverless Framework API for a client that wanted to use the Serverless Dashboard for deployment and monitoring. Based on some challenges from last year, we agreed with the client that a monorepo tool like Nx would be beneficial moving forward, as we were potentially shipping multiple Serverless APIs and frontend applications. Unfortunately, we ran into several challenges integrating with the Serverless Dashboard and eventually opted for custom CI/CD with GitHub Actions. In this article, we'll cover the challenges we faced and the solution we arrived at.

Serverless Configuration Restrictions

By default, the Serverless Framework reads its configuration from a serverless.yml file. However, the framework officially supports alternative formats, including .json, .js, and .ts.
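
For illustration, here is a minimal sketch of the TypeScript format (the service and handler names are placeholders, and it assumes the same serverless type definitions used later in this article); a misspelled property or a wrong value type fails type checking before a deploy is ever attempted:

serverless.ts (minimal sketch)

import type { Serverless } from 'serverless/aws';

const serverlessConfiguration: Serverless = {
  service: 'example-service', // placeholder service name
  frameworkVersion: '3',
  provider: {
    name: 'aws',
    runtime: 'nodejs14.x',
  },
  functions: {
    hello: {
      // a typo such as 'handlr' here would be flagged by the compiler
      handler: 'src/handlers/hello.main',
      events: [{ httpApi: { method: 'get', path: '/hello' } }],
    },
  },
};

module.exports = serverlessConfiguration;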

Our team opted for the TypeScript format because the type checks gave our engineers who were newer to the framework some built-in validation. When we eventually went to configure our CI/CD via the Serverless Dashboard UI, the dashboard restricted the configuration to the YAML format only. This was unfortunate, but because our configuration was relatively simple, we were able to quickly convert back to YAML and move past this hurdle.

(Screenshot: Serverless Dashboard CI/CD setup)

Prohibitive Project Structures

With our configuration now working, we were able to select the project and launch our first attempt at deploying the app through the dashboard. We immediately ran into a build issue:

Build step: serverless deploy --stage dev --region us-west-1 --force --org my-org --app my-app --verbose 
Environment: linux, node 14.19.2, framework 3.21.0 (local), plugin 6.2.2, SDK 4.3.2
Docs:        docs.serverless.com
Support:     forum.serverless.com
Bugs:        github.com/serverless/serverless/issues
Error:
Serverless plugin "serverless-esbuild" not found. Make sure it's installed and listed in the "plugins" section of your serverless config file. Run "serverless plugin install -n serverless-esbuild" to install it.
Build step failed: serverless deploy --stage dev --region us-west-1 --force --org my-org --app my-app --verbose

What we found was that having our package.json in a parent directory of our serverless app prevented the dashboard CI/CD from detecting and resolving dependencies prior to deployment. We had been deploying with an Nx command, npx nx run api:deploy --stage=dev, which could resolve our dependency tree, which looked like this:

(Screenshot: Nx monorepo file tree for the Serverless API)
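
For reference, commands like api:deploy and api:teardown were Nx targets rather than raw Serverless CLI calls. The sketch below shows roughly how such targets could be wired up with Nx's run-commands executor; the file path, executor, and option names are assumptions for illustration, not our exact setup. The key detail is that package.json and node_modules live at the workspace root, while the Serverless config lives under the app directory (for example, apps/api).

apps/api/project.json (hypothetical)

{
  "name": "api",
  "targets": {
    "deploy": {
      "executor": "nx:run-commands",
      "options": {
        "cwd": "apps/api",
        "command": "serverless deploy --stage={args.stage}"
      }
    },
    "teardown": {
      "executor": "nx:run-commands",
      "options": {
        "cwd": "apps/api",
        "command": "serverless remove --stage={args.stage}"
      }
    }
  }
}

With a setup along these lines, npx nx run api:deploy --stage=dev resolves dependencies from the root node_modules before running serverless deploy inside the app directory, and api:teardown (used by the cleanup workflow later) maps to serverless remove.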

To resolve this, we thought we might be able to customize the build commands the dashboard uses. Unfortunately, the only way to customize these commands is via the package.json of our project. Nx does allow a package.json per app in its structure, but that would have defeated the purpose of opting into Nx and made the tool nearly pointless for us.

Moving to GitHub Actions with the Serverless Dashboard

We decided to move all of our CI/CD to GitHub Actions while still going through the dashboard for deployment credentials and monitoring. In the dashboard docs, we found that you could set a SERVERLESS_ACCESS_KEY and still deploy through the dashboard. It took us a few attempts to understand exactly how to supply this key in our action code, but we eventually discovered that, because we deploy through the Nx build system, it had to be set explicitly in a .env file. Thus, the following actions were born:

api-ci.yml

name: API Linting, Testing, and Deployment

on:
  push:
    branches: [main]
  pull_request:
  workflow_dispatch:

jobs:
  api-deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v3

      - name: Configure Node
        uses: actions/setup-node@v3
        with:
          node-version: '14.x'
          cache: 'yarn'

      - name: Install packages
        run: yarn install --frozen-lockfile

      - name: Create .env file for testing
        run: |
          cp .env.example .env.test

      - name: Run tests
        run: npx nx run api:test

      - name: Create .env file for deployment
        run: |
          touch .env
          echo SERVERLESS_ACCESS_KEY=${{ secrets.SERVERLESS_ACCESS_KEY }} >> .env
          cat .env

      - name: Deploy SLS Preview
        if: github.ref != 'refs/heads/main'
        run: npx nx run api:deploy --stage=${GITHUB_HEAD_REF}
      - name: Deploy SLS Dev
        if: github.ref == 'refs/heads/main'
        run: npx nx run api:deploy --stage=dev

api-clean.yml

name: API Clean Up

# When pull requests are closed the associated
# branch will be removed from SLS deploy
on:
  pull_request:
    types: [closed]

jobs:
  tear-down:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v3

      - name: Configure Node
        uses: actions/setup-node@v3
        with:
          node-version: '14.x'
          cache: 'yarn'

      - name: Install packages
        run: yarn install --frozen-lockfile

      - name: Create .env file for teardown
        run: |
          touch .env
          echo SERVERLESS_ACCESS_KEY=${{ secrets.SERVERLESS_ACCESS_KEY }} >> .env
          cat .env
      - name: Teardown SLS
        run: npx nx run api:teardown --stage=${GITHUB_HEAD_REF}

These actions ran smoothly and let us keep using the dashboard as intended. All in all, this seemed like a success.

Local Development Problems

The above is a great solution if your team is willing to pay for everyone to have a seat on the dashboard. Unfortunately, our client found the per-seat pricing too high and wanted to avoid paying for additional seats. Why is this a problem? Our configuration looked similar to this (I've highlighted the important lines with a comment):

serverless.ts

import type { Serverless } from 'serverless/aws';

const serverlessConfiguration: Serverless = {
  service: 'our-service',
  app: 'our-app', // NOTE THIS
  org: 'our-org', // NOTE THIS
  frameworkVersion: '3',
  useDotenv: true,
  plugins: [...],
  custom: { ... },
  package: { ... },
  provider: {
    name: 'aws',
    runtime: 'nodejs14.x',
    stage: "${opt:stage, 'dev'}",
    region: "${opt:region, 'us-east-1'}",
    memorySize: 512,
    timeout: 29,
    httpApi: { cors: true },
    environment: {
      REGION: '${aws:region}',
      SLS_STAGE: '${sls:stage}',
      SOME_SECRET: '${param:SOME_SECRET, env:SOME_SECRET}', // NOTE THIS
    },
  },
  functions: {
    graphql: {
      handler: 'src/handlers/graphql.server',
      events: [
        {
          httpApi: {
            method: 'post',
            path: '/graphql',
          },
        },
        {
          httpApi: {
            method: 'get',
            path: '/graphql',
          },
        },
      ],
    },
  },
};

module.exports = serverlessConfiguration;

The app and org fields require a valid dashboard login. This meant the developers working on the API couldn't develop locally, because the client was not paying for dashboard logins for them. They would get the following error:

(Screenshot: serverless deploy output warning about missing dashboard credentials)

Resulting Configuration

At this point, we opted to bypass the dashboard entirely in our CI/CD. We made the following changes to our actions and configuration to get everything fully working:

serverless.ts

  • Remove app and org fields
  • Stop accessing environment secrets via the param option and read them from env only
import type { Serverless } from 'serverless/aws';

const serverlessConfiguration: Serverless = {
  service: 'our-service',
  frameworkVersion: '3',
  useDotenv: true,
  plugins: [...],
  custom: { ... },
  package: { ... },
  provider: {
    name: 'aws',
    runtime: 'nodejs14.x',
    stage: "${opt:stage, 'dev'}",
    region: "${opt:region, 'us-east-1'}",
    memorySize: 512, // default: 1024MB
    timeout: 29,
    httpApi: { cors: true },
    environment: {
      REGION: '${aws:region}',
      SLS_STAGE: '${sls:stage}',
      SOME_SECRET: '${env:SOME_SECRET}',
    },
  },
  functions: {
    graphql: {
      handler: 'src/handlers/graphql.server',
      events: [
        {
          httpApi: {
            method: 'post',
            path: '/graphql',
          },
        },
        {
          httpApi: {
            method: 'get',
            path: '/graphql',
          },
        },
      ],
    },
  },
};

module.exports = serverlessConfiguration;

api-ci.yml

  • Add all our secrets to GitHub and include them in the scripts
  • Add a serverless config credentials step
name: API Linting, Testing, and Deployment

on:
  push:
    branches: [main]
  pull_request:
  workflow_dispatch:

jobs:
  api-deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v3

      - name: Configure Node
        uses: actions/setup-node@v3
        with:
          node-version: '14.x'
          cache: 'yarn'

      - name: Install packages
        run: yarn install --frozen-lockfile

      - name: Create .env file for testing
        run: |
          cp .env.example .env.test

      - name: Run tests
        run: npx nx run api:test

      - name: Configure credentials
        run: |
          npx serverless config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Create .env file for deployment
        run: |
          touch .env
          echo SOME_SECRET=${{ secrets.SOME_SECRET }} >> .env
          # other keys here
          cat .env

      - name: Deploy SLS Preview
        if: github.ref != 'refs/heads/main'
        run: npx nx run api:deploy --stage=${GITHUB_HEAD_REF}
      - name: Deploy SLS Dev
        if: github.ref == 'refs/heads/main'
        run: npx nx run api:deploy --stage=dev

api-cleanup.yml

  • Add a serverless config credentials step
  • Remove the Serverless Dashboard access key and populate .env with the config's secrets instead
name: API Clean Up

# When pull requests are closed, the associated
# branch will be removed from SLS deploy
on:
  pull_request:
    types: [closed]

jobs:
  tear-down:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v3

      - name: Configure Node
        uses: actions/setup-node@v3
        with:
          node-version: '14.x'
          cache: 'yarn'

      - name: Install packages
        run: yarn install --frozen-lockfile

      - name: Configure credentials
        run: |
          npx serverless config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Create .env file for teardown
        run: |
          touch .env
          echo SOME_SECRET=${{ secrets.SOME_SECRET }} >> .env
          # other keys here
          cat .env
      - name: Teardown SLS
        run: npx nx run api:teardown --stage=${GITHUB_HEAD_REF}

Conclusions

The Serverless Dashboard is a great product for monitoring and seamless deployment in simple applications, but it still has a way to go in supporting different architectures and project setups, and in scaling affordably for teams. I hope to see the following changes:

  • Add support for different configuration file types
  • Add better support for custom deployment commands
  • Update the framework so it does not fail on login, allowing local development to work regardless of dashboard credentials

The Nx + GitHub Actions setup was also a bit unnatural, given its reliance on a .env file existing, so we hope the action code above helps someone in the future. That said, the team has been working with this setup for a while now, and it has been a seamless, positive change: our developers can quickly reference their deploys and already know how to interact with Lambda directly when debugging issues.

