Migrating a classic Express.js app to the Serverless Framework

This article was written over 18 months ago and may contain information that is out of date. Some content may still be relevant, but please refer to the official documentation or other available resources for the latest information.

Problem

Classic Express.js applications are great for building backends. However, their deployment can be a bit tricky. There are several solutions on the market for making deployment "easier" like Heroku, AWS Elastic Beanstalk, Qovery, and Vercel. However, "easier" often means special configuration or higher service costs.

In our case, we were trying to deploy an Angular frontend served through CloudFront, and needed a separately deployed backend to manage an OAuth flow. We needed an easy-to-deploy solution that supported HTTPS and could be automated via CI.

Serverless Framework

The Serverless Framework is a toolkit for building and deploying applications to AWS Lambda, and it allowed us to easily migrate and deploy our Express.js server at a low cost with long-term maintainability. It was so simple that it took us only about an hour to migrate our existing API and get it deployed, so we could start using it in our production environment.

Serverless Init Script

To start this process, we used the Serverless CLI to initialize a new Serverless Express.js project. This is an example of the settings we chose for our application:

$ serverless
What do you want to make? AWS - Node.js - Express API
What do you want to call this project? example-serverless-express

Downloading "aws-node-express-api" template...
Installing dependencies with "npm" in "example-serverless-express" folder
Project successfully created in example-serverless-express folder

What org do you want to add this service to? [Skip]
Do you want to deploy your project? No

Your project is ready for deployment and available in ./example-serverless-express
Run **serverless deploy** in the project directory
    Deploy your newly created service
Run **serverless info** in the project directory after deployment
    View your endpoints and services
Run **serverless invoke** and **serverless logs** in the project directory after deployment
    Invoke your functions directly and view the logs
Run **serverless** in the project directory
    Add metrics, alerts, and a log explorer, by enabling the dashboard functionality

Here's a quick explanation of our choices:

What do you want to make? This prompt offers several possible scaffolding options. In our case, the Express API was the perfect solution since that's what we were migrating.

What do you want to call this project? You should put whatever you want here. It'll name the directory and define the naming schema for the resources you deploy to AWS.

What org do you want to add this service to? This question assumes you are using the serverless.com dashboard to manage your deployments. We're choosing to use GitHub Actions and AWS tooling directly, so we opted out of this option.

Do you want to deploy your project? This will attempt to deploy your application immediately after scaffolding, using your default AWS profile unless you have configured otherwise. We needed a custom profile configuration since we have several projects on different AWS accounts, so we opted out of the default deploy.
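For reference, a named AWS profile like the one we use lives in your local AWS credentials file. The profile name below matches the one referenced later in serverless.yml, and the key values are placeholders you'd replace with your own:

```ini
# ~/.aws/credentials
[exampleprofile]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```

The Serverless Framework picks this profile up via the `profile` key in serverless.yml, or via the `AWS_PROFILE` environment variable.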

Serverless Init Output

The init script from above outputs the following:

  • .gitignore
  • handler.js
  • package.json
  • README.md
  • serverless.yml

The key files here are serverless.yml and handler.js.

serverless.yml

service: example-serverless-express
frameworkVersion: '2 || 3'

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: '20201221'

functions:
  api:
    handler: handler.handler
    events:
      - httpApi: '*'

handler.js

const serverless = require("serverless-http");
const express = require("express");
const app = express();

app.get("/", (req, res, next) => {
  return res.status(200).json({
    message: "Hello from root!",
  });
});

app.get("/hello", (req, res, next) => {
  return res.status(200).json({
    message: "Hello from path!",
  });
});

app.use((req, res, next) => {
  return res.status(404).json({
    error: "Not Found",
  });
});

module.exports.handler = serverless(app);

As you can see, this gives us a standard Express server that works out of the box. However, we needed to make some quality-of-life changes to help us migrate with confidence and to allow us to use our API locally for development.

Quality of Life Improvements

There are several things that Serverless Framework doesn't provide out of the box that we needed to help our development process. Fortunately, there are great plugins we were able to install and configure quickly.

Environment Variables

We needed per-environment variables because our OAuth providers are configured per host domain. The Serverless Framework supports .env files, but it requires you to install the dotenv package and enable the useDotenv flag in serverless.yml.
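Once useDotenv is enabled, the variables from your .env file are available on process.env inside the handler. As a sketch of how we consume them (the variable names and fallback URL here are hypothetical, not from the scaffolded project):

```javascript
// Hypothetical config helper: reads per-environment OAuth settings
// from process.env, with a local-development fallback for the redirect URL.
function getOAuthConfig(env = process.env) {
  const clientId = env.OAUTH_CLIENT_ID;
  const redirectUrl = env.OAUTH_REDIRECT_URL || "http://localhost:4000/auth/callback";
  if (!clientId) {
    // Fail fast so a missing .env entry surfaces immediately on cold start
    throw new Error("Missing OAUTH_CLIENT_ID - check your .env file");
  }
  return { clientId, redirectUrl };
}

module.exports = { getOAuthConfig };
```

Failing fast on a missing variable makes misconfigured deployments obvious in the Lambda logs rather than deep in the OAuth flow.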

Babel/TypeScript Support

As you can see in the handler.js file above, we're getting CommonJS instead of modern JavaScript or TypeScript. To get those, you need webpack or another bundler. serverless-webpack exists if you want full control over your ecosystem, but there is also serverless-bundle, which gives you a set of reasonable defaults on webpack 4 out of the box. We opted for the latter to get started quickly.
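serverless-bundle works with no configuration at all, but it can be tuned under the custom block of serverless.yml. As a sketch (option names per the plugin's documentation; whether you want each one is a per-project choice):

```yaml
custom:
  bundle:
    sourcemaps: true # keep source maps for easier debugging of deployed Lambdas
    linting: false   # skip ESLint during bundling if you already lint in CI
```

Leaving the block out entirely is also fine; the defaults are sensible for most projects.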

Offline Mode

With a classic Express server, you can use a simple Node script to get the server up and running for local testing. Serverless Framework applications, however, are designed to run in the AWS ecosystem, which makes local testing harder. Lucky for us, David Hérault has built, and continues to maintain, serverless-offline, allowing us to emulate our functions locally before we deploy.
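To make local development feel like the classic Express workflow again, we can wire serverless-offline into npm scripts (a sketch; the script names are our own choice, not part of the scaffold):

```json
{
  "scripts": {
    "dev": "serverless offline",
    "deploy": "serverless deploy"
  }
}
```

With this in place, `npm run dev` starts the emulated API on the port configured under custom.serverless-offline.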

Final Configuration

Given these changes, our serverless.yml file now looks as follows:

service: starter-dev-backend
frameworkVersion: '2 || 3'
useDotenv: true # enable .env file support

plugins: # install serverless plugins
  - serverless-bundle
  - serverless-offline

custom: # configure serverless-offline
  serverless-offline:
    httpPort: 4000

provider:
  name: aws
  profile: exampleprofile # use your own profile
  stage: production # use your specified stage
  runtime: nodejs14.x
  lambdaHashingVersion: '20201221'

functions:
  api:
    handler: handler.handler
    events:
      - httpApi: '*'

Some important things to note:

  • The order of serverless-bundle and serverless-offline in the plugins is critically important.
  • The custom port for serverless-offline can be any unused port. Keep in mind what port your frontend server is using when setting this value for local development.
  • We set the profile and stage in our provider configuration. This allowed us to specify the environment settings and the AWS profile credentials to use for our deployment.

With all this set, we're now ready to deploy the basic API.

Deploying the new API

Serverless deployment is very simple. We can run the following command in the project directory:

$ serverless deploy

This command will deploy the API to AWS and create the necessary resources, including the API Gateway and related Lambdas. The first deploy will take roughly 5 minutes, and each subsequent deploy will only take a minute or two! In its output, you'll receive a bunch of information about the deployment, including the deployed URL, which will look like:

https://<hash-string>.execute-api.<region>.amazonaws.com

You can now point your app at this API and start using it.

Next Steps

A few issues we still have to resolve but are easily fixed:

  • New Lambdas are not deploying with their Environment Variables, and have to be set via the AWS console. We're just missing some minor configuration in our serverless.yml.
  • Our API doesn't automatically deploy on merges to main. For this, we can just use the official Serverless GitHub Action. Alternatively, we could purchase a license for the Serverless Dashboard, but this option is a bit more expensive, and we're not using all of its features on this project. However, we've used it on other client projects, and it really helped us manage and monitor our deployments.
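For the first issue, the likely fix is to map the variables explicitly in serverless.yml so they are injected into the Lambda environment at deploy time. The Framework's `${env:...}` syntax reads from the environment (including a .env file when useDotenv is on). A sketch, with hypothetical variable names (yours will match your own .env file):

```yaml
provider:
  environment:
    OAUTH_CLIENT_ID: ${env:OAUTH_CLIENT_ID}
    OAUTH_CLIENT_SECRET: ${env:OAUTH_CLIENT_SECRET}
```

Variables can also be scoped to a single function under `functions.<name>.environment` if they aren't needed globally.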

Conclusion

Given all the above steps, we were able to get our API up and running in a few minutes. And due to it being a 1-to-1 replacement for an existing Express server, we were able to port our existing implementations into this new Serverless implementation, deploy it to AWS, and start using it in just a couple of hours.

This particular architecture is a great means of bootstrapping a new project, but it does come with some scaling issues for larger projects. As such, we do not recommend a pure Serverless-and-Express setup for monolithic projects, and instead suggest utilizing some of the amazing capabilities of the Serverless Framework with AWS to horizontally scale your application into smaller Lambda functions.


You might also like

Vercel vs. VPS - What's the drama, and which one should you choose? cover image

Vercel vs. VPS - What's the drama, and which one should you choose?

Vercel vs. VPS - What's the drama, and which one should you choose? Vercel recently announced an update to their pricing. For some users, that meant a significant increase in costs, which has led to some discussions about the costs of using cloud providers like Vercel, with a few suggesting that a VPS is a far superior and cost-effective solution. Does it make sense to move your popular app to a VPS and save hundreds of dollars? As you might have guessed, the issue is far more nuanced than some tweets suggest. Let's break it down. Integrations Let's talk about the elephant in the room first. Vercel is tailored explicitly for Next.js, providing tooling for conformance checks, continuous deployment, branch previews, and more. All of that works out of the box, and if you were to figure all of that out on your own, you'd be spending a lot of time and money on it, exposing yourself to potential bugs, problems, and disturbances in your workflow. If you're a one-person startup, you may get away with setting up a simple CI/CD pipeline on a VPS, but as your team grows, you'll need to invest more time and money into maintaining it. And that's not even considering the time you'll spend onboarding new team members and teaching them how to use your custom setup. That being said, some tools can make things easier for you and standardize the process. For example, coolify does a great job of providing a simple and easy-to-use interface for deploying Next.js apps to a VPS. You could also deploy a Docker container with all the necessary tools and configurations, but that's another can of worms. There’s also OpenNext adapter that lets you deploy across different environments, such as AWS. While that takes care of the deployment part, it doesn't solve the potential need for other integrations like conformance checks, branch previews, etc. Scaling and Performance Scaling is sometimes overrated in software engineering. 
Especially at the beginning of a greenfield project, you're better off focusing on building a great product and getting users than worrying about scaling. That said, it's essential to have a plan for scaling when the time comes, not to mention projects that are already popular and are considering moving to a VPS. I'm not saying you can't successfully run a popular high-traffic project built in Laravel from a Debian server in your basement. But I'm also not suggesting that you should optimize your hosting costs at the expense of investing precious time and energy into building and configuring infrastructure that could be better spent on building your product, especially given the fact that most cloud providers offer a free tier that can handle a decent amount of traffic. And there is one more thing to consider. Vercel has a global network of edge nodes that can cache your content and serve it to users from the nearest location, which can significantly improve your app's performance. If your app is targeted at a global audience, serving it from one server located in a single region may provide a suboptimal experience for users in other areas. Edge Nodes Speaking of edge nodes and Vercel, there has been a pretty significant announcement made by Vercel recently. They have moved away from edge rendering back to Node.js. The reason is that while they initially thought it may be worth the trade-off, having your compute resources far away from your database makes the app slower. Furthermore, the cost benefits of edge rendering are less significant than they initially thought. That doesn't mean they moved away from edge nodes completely. Static assets and the essential HTML and CSS are still served by the edge nodes. With networks getting fast and Node.js more performant, however, the argument for having a VPS with a CDN instead of a cloud provider like Vercel becomes more compelling. Skill Requirements Running a VPS requires a certain level of technical expertise. 
You must know how to set up and configure a server, install and configure software, manage security, monitor performance, and troubleshoot problems. If you're uncomfortable with these tasks, you may spend a lot of time learning and troubleshooting. While learning new things is great, it's not always the best use of your time, especially if you're trying to build a product. Part of the cost of using a cloud provider like Vercel is that they take care of all the technical details so you can focus on building your product. Suppose you're not comfortable with managing servers. In that case, it is worth paying extra for the convenience of not having to worry about it while benefiting from a global network of edge nodes and integrations that can save you time and money in the long run. If you're a developer who enjoys tinkering with servers and learning new things, running a VPS can be a fun and rewarding experience. Similarly, running a VPS may be a good option for you if you're a small team with a dedicated DevOps person who can handle all the technical details. But suppose you're a solo founder or a big company with an abundant budget. In that case, you may be better off using a cloud provider like Vercel that can save you time and money in the long run, allowing you to validate PRs in real-time and comment on a build preview in the process. Security A considerable concern when running a VPS is security. You must keep your server updated with security patches, configure firewalls, monitor logs for suspicious activity, and take other steps to protect your server from attacks. There are a lot of bots out there scanning the internet for vulnerable servers, and if you're not careful, you could end up with a compromised server that's being used to send spam, mine cryptocurrency, or launch DDoS attacks. Most cloud providers take care of these things for you, so you don't have to worry about them. 
Speaking of DDoS attacks, cloud providers usually have built-in DDoS protection that can help mitigate the impact of an attack. If you're running a VPS, you'll need to set up your own DDoS protection, which can be a complex and expensive process, or set up a CloudFlare in front of your server, effectively giving into a cloud provider anyway. That being said, I have read a few horror stories about landing an enormous bill from a cloud provider after a DDoS attack, so it's not all sunshine and rainbows. However, if you opt for a cloud hosting solution such as Vercel and unfortunately get attacked, remember they can be chill about it and refund some of the costs. Be aware of the possibility of a DDoS attack, though, and set up limits and alerts to prevent it from happening in the first place. Tech Stack Flexibility While you don't have to worry about keeping your server up to date when using a cloud provider, it is unfortunate that most cloud providers are usually late in supporting the newest Node.js versions. Despite Node.js 20 being the latest LTS version, some cloud providers' features still don't support it. If you plan to use bleeding-edge technologies or have specific requirements that cloud providers don't support, you may be better off running a VPS. Cost Finally, let's talk about cost. Running a VPS can be cheaper than using a cloud provider like Vercel, especially if you have a high-traffic app generating a lot of revenue. However, you need to consider the hidden costs of running a VPS, such as the time and money you'll spend on maintaining it, the cost of setting up and configuring a server, the cost of monitoring and troubleshooting problems, and the cost of scaling and performance optimization. If we take a model app that transfers around 1.5 TB of data per month, does a bit of computation, and we apply the new Vercel pricing, it could look something like this: 1. Base Plan Cost: $20 per month 2. 
Additional Fast Data Transfer: 500 GB x $0.15/GB = $75 (1 TB included in the base plan) 3. Additional Fast Origin Transfer: 50 GB x $0.06/GB = $3 (100 GB included in the base plan) 4. Additional Edge Requests: 2 million x $2/million = $4 (10 million included in the base plan) 5. Additional Function Execution Time: 200 GB-hours x $0.18/GB-hour = $36 (1,000 GB-hours included in the base plan) That would bring our total cost to $138 per month, assuming you pay for only one seat. If we decided to run a VPS instead, we would need something with at least 4 GB RAM, 2 vCPUs, and 80 GB SSD storage. A VPS with those specs would cost around $20 per month. You would also need to add the cost of a CDN, which would amount to $20 per month if you chose the CloudFlare Pro plan. That would bring our total cost to $40 per month. In this case, running a VPS would be significantly cheaper than using Vercel. However, this assumes that you know how much your app will scale in advance or that you're willing to go through the hassle of migrating your application to a larger server when the time comes. The thing is, if you're starting a new project, you could probably start with the free tier of a cloud provider like Vercel, later transfer to the Pro plan, and scale up as needed. In many cases, the appeal of the pay-as-you-go model is worth paying a bit extra in the long run. Conclusion Should you move your popular app to a VPS and save some dollars? The answer is it depends. It all boils down to assessing your project's specific needs, technical expertise, and growth expectations. The recent uproar regarding Vercel's pricing changes has illuminated this dilemma, presenting a classic case of convenience versus cost. Vercel stands out for its seamless integration with Next.js, providing out-of-the-box solutions that significantly reduce the time and effort required for deployment and ongoing maintenance. 
This convenience is especially valuable for developers leveraging modern development practices with minimal setup. Additionally, Vercel's edge network capabilities offer significant performance benefits for global applications, ensuring faster load times and improved user experience. However, these benefits come at a cost, which can be significantly higher than running a VPS, especially as your application scales. For those with the requisite skills, a VPS can offer a more cost-effective solution without the constraints of a platform-centric approach. This route grants more control over the hosting environment and potentially lowers costs at the expense of increased complexity in setup, scaling, and maintenance. It's also important to consider the non-financial costs associated with each option. Vercel provides a robust, developer-friendly environment that can significantly accelerate development cycles and reduce the burden on your team. On the other hand, managing a VPS requires a steep learning curve and ongoing attention to security, performance, and scalability. TLDR: If you're a solo founder or a small team looking to deploy and scale your application with minimal overhead quickly, Vercel is likely the best choice. If you are a Linux enthusiast or have a dedicated DevOps team, you can create a decent setup with a VPS and a CDN, but be prepared to invest time and effort into maintaining it. If you are a bigger company or team, the decision will likely depend on your specific needs, growth expectations, and budget constraints....

How to set up local cloud environment with LocalStack cover image

How to set up local cloud environment with LocalStack

How to set up local cloud environment with LocalStack Developers enjoy building applications with AWS due to the richness of their solutions to problems. However, testing an AWS application during development without a dedicated AWS account can be challenging. This can slow the development process and potentially lead to unnecessary costs if the AWS account isn't properly managed. This article will examine LocalStack, a development framework for developing and testing AWS applications, how it works, and how to set it up. Assumptions This article assumes you have a basic understanding of: - AWS: Familiarity with S3, CloudFormation, and SQS. - Command Line Interface (CLI): Comfortable running commands in a terminal or command prompt. - JavaScript and Node.js: Basic knowledge of JavaScript and Node.js, as we will write some code to interact with AWS services. - Docker Concepts: Understanding of Docker basics, such as images and containers, since LocalStack runs within a Docker container. What is LocalStack? LocalStack is a cloud service emulator that runs in a single container on your laptop or in your CI environment. With LocalStack, you can run your AWS applications or Lambdas entirely on your local machine without connecting to a remote cloud provider! Whether you are testing complex CDK applications or Terraform configurations or just beginning to learn about AWS, LocalStack simplifies your testing and development workflow, relieving you from the complexity of testing AWS applications. Prerequisite Before setting up LocalStack, ensure you have the following: 1. Docker Installed: LocalStack runs in a Docker container, so you need Docker installed on your machine. You can download and install Docker from here. 2. Node.js and npm: Ensure you have Node.js and npm installed, as we will use a simple Node.js application to test AWS services. You can download Node.js from here. 3. Python: Python is required for installing certain CLI tools that interact with LocalStack. 
Ensure you have Python 3 installed on your machine. You can download Python from here. Installation In this article, we will use the LocalStack CLI, which is the quickest way to get started with LocalStack. It allows you to start LocalStack from your command line. Localstack spins up a Docker instance, Alternative methods of managing the LocalStack container exist, and you can find them here. To install LocalStack CLI, you can use homebrew by running the following command: ` If you do not use a macOS or you don’t have Brew installed, you can install the CLI using Python: ` To confirm your installation was successful, run the following: ` That should output the installed version of LocalStack. Now you can start LocalStack by running the following command: ` This command will start LocalStack in docker mode, and since it is your first installation, try to pull the LocalStack image. You should see the below on your terminal After the image is downloaded successfully, the docker instance is spun up, and LocalStack is running on your machine on port 4566. Testing AWS Services with LocalStack LocalStack lets you easily test AWS services during development. It supports many AWS products but has some limitations, and not all features are free. Community Version: Free access to core AWS products like S3, SQS, DynamoDB, and Lambda. Pro Version: Access to more AWS products and enhanced features. Check the supported community and pro version resources for more details. We're using the community edition, and the screenshot below shows its supported products. To see the current products supported in the community edition, visit http://localhost:4566/_localstack/health. This article will test AWS CloudFormation and SQS. Before we can start testing, we need to create a simple Node.js app. On your terminal, navigate to the desired folder and run the following command: ` This command will create a package.json file at the root of the directory. Now we need to install aws-sdk. 
Run the following command: ` With that installed, we can now start testing various services. AWS CloudFormation AWS CloudFormation is a service that allows users to create, update, and delete resources in an AWS account. This service can also automate the process of provisioning and configuring resources. We are going to be using LocalStack to test creating a CloudFormation template. In the root of the folder, create a file called cloud-formation.js. This file will be used to create a CloudFormation stack that will be used to create an S3 bucket. Add the following code to the file: ` In the above code, we import the aws-sdk package, which provides the necessary tools to interact with AWS services. Then, an instance of the AWS.CloudFormation class is created. This instance is configured with: region: The AWS region where the requests are sent. In this case, us-east-1 is the default region for LocalStack. endpoint: The URI to send requests to is set to http://localhost:4566 for LocalStack. You should configure this with environment variables to switch between LocalStack for development and the actual AWS endpoint for production, ensuring the same code can be used in both environments. credentials: The AWS credentials to sign requests with. We are passing new AWS.Credentials("test", "test") The params object defines the parameters needed to create the CloudFormation stack: StackName: The name of the stack. Here, we are using 'test-local-stack'. TemplateBody: A JSON string representing the CloudFormation template. In this example, it defines a single resource, an S3 bucket named TestBucket. The createStack method is called on the CloudFormation client with the params object. This method attempts to create the stack. If there is an error, we log it to the console else, we log the successful data to the console. Now, let’s test the code by running the following command: ` If we run the above command, we should see the JSON response on the terminal. 
` AWS SQS The process for testing SQS with LocalStack using the aws-sdk follows the same pattern as above, except that we will introduce another CLI package, awslocal. awslocal is a thin wrapper and a substitute for the standard aws command, enabling you to run AWS CLI commands within the LocalStack environment without specifying the --endpoint-url parameter or a profile. To install awslocal, run the following command on your terminal: ` Next, let’s create an SQS queue using the following command: ` This will create a queue name test-queue and return queueUrl like below: ` Now, in our directory, let’s create a sqs.js file, and inside of it, let’s paste the following code: ` In the above code, an instance of the AWS.SQS class is created. The instance is configured with the same parameters as when creating the CloudFormation. We also created a params object which had the required properties needed to send a SQS message: QueueUrl: The URL of the Amazon SQS queue to which a message is sent. In our case, it will be the URL we got when we created a local SQS. Make sure to manage this in environment variables to switch between LocalStack for development and the actual AWS queue URL for production, ensuring the same code can be used in both environments. MessageBody: The message to send. We call the sendMessage method, passing the params object and a callback that handles error and data, respectively. Let’s run the code using the following command: ` We should get a JSON object in the terminal like the following: ` To test if we can receive the SQS message sent, let’s create a sqs-receive.js. 
Inside the file, we can copy over the AWS.SQS instance that was created earlier into the file and add the following code: ` Run the code using the following command: ` We should receive a JSON object and should be able to see the previous message we sent ` When you are done with testing, you can shut down LocalStack by running the following command: ` Conclusion In this article, we looked at how to set up a local cloud environment using LocalStack, a powerful tool for developing and testing AWS applications locally. We walked through the installation process of LocalStack and demonstrated how to test AWS services using the AWS SDK, including CloudFormation and SQS. Setting up LocalStack allows you to simulate various AWS services on your local machine, which helps streamline development workflows and improve productivity. Whether you are testing simple configurations or complex deployments, LocalStack provides the environment to ensure your applications work as expected before moving to a production environment. Using LocalStack, you can confidently develop and test your AWS applications without an active AWS account, making it an invaluable tool for developers looking to optimize their development process....

How to Leverage Apollo Client Fetch Policies Like the Pros cover image

How to Leverage Apollo Client Fetch Policies Like the Pros

Apollo Client provides a rich ecosystem and cache for interfacing with your GraphQL APIs. You write your query and leverage the useQuery hook to fetch your data. It provides you with some state context and eventually resolves your query. That data is stored in a local, normalized, in-memory cache, which allows Apollo Client to respond to most previously run requests near instantaneously. This has huge benefits for client performance and the feel of your apps. However, sometimes Apollo's default doesn't match the user experience you want to provide. They provide fetch policies to allow you to control this behavior on each query you execute. In this article, we'll explore the different fetch policies and how you should leverage them in your application. cache-first This is the default for Apollo Client. Apollo will execute your query against the cache. If the cache can fully fulfill the request, then that's it, and we return to the client. If it can only partially match your request or cannot find any of the related data, the query will be run against your GraphQL server. The response is cached for the next query and returned to the handler. This method prioritizes minimizing the number of requests sent to your server. However, it has an adverse effect on data that changes regularly. Think of your social media feeds - they typically contain constantly changing information as new posts are generated. Or a real-time dashboard app tracking data as it moves through a system. cache-first is probably not the best policy for these use cases as you won't fetch the latest data from the upstream source. You can lower the cache time of items for the dashboard to avoid the staleness issue and still minimize the requests being made, but this problem will persist for social media feeds. The cache-first policy should be considered for data that does not change often in your system or data that the current user fully controls. 
Data that doesn't change often is easily cached, and that's a recommended pattern. For data that the user controls, we need to consider how that data changes. If only the current user can change it, we have 2 options: Return the updated data in the response of any mutation which is used to update the cache Use cache invalidation methods like refetchQueries or onQueryUpdated These methods will ensure that our cache stays in sync with our server allowing the policy to work optimally. However, if other users in the system can make changes that impact the current user's view, then we can not invalidate the cache properly using these strategies which makes this policy unideal. network-only This policy skips the cache lookup and goes to the server to fetch the results. The results are stored in the cache for other operations to leverage. Going back to the example I gave in my explanation cache-first of a social media feed, the network-only policy would be a great way to implement the feed itself as it's ever-changing, and we'll likely even want to poll for changes every 10s or so. The following is an example of what this component could look like: ` Whenever this SocialFeed component is rendered, we always fetch the latest results from the GraphQL server ensuring we're looking at the current data. The results are put in the cache which we can leverage in some children components. cache-only cache-only only checks the cache for the requested data and never hits the server. It throws an error if the specified cache items cannot be found. At first glance, this cache policy may seem unhelpful because it's unclear if our cache is seeded with our data. However, in combination with the network-only policy above, this policy becomes helpful. This policy is meant for components down tree from network-only level query. This method is for you if you're a fan of React components' compatibility. 
We can modify the return of our previous example to be as follows:

`

Notice we're not passing the full post object as a prop. This simplifies our Post component types and makes later refactors easier. The Post would look like the following:

`

In this query, we're grabbing the data directly from our cache every time, because our top-level query should have fetched it. There's a subtle maintainability hazard here, though: our top-level GetFeed query doesn't guarantee fetching the same fields the Post needs. Notice how our Post component exports a fragment. Fragments are a feature Apollo supports to share query elements across operations. In our SocialFeed component, we can change our query to be:

`

Now, as we change our Post to use new fields and display different data, the refactoring is restricted to just that component, and the upstream components will detect the changes and handle them for us, making our codebase more maintainable. Because the upstream component is always fetching from the network, we can trust that the cache will have our data, making this component safe to render. With these examples, though, our users will likely see a loading spinner or state on every render unless we add some server rendering.

cache-and-network

This is where cache-and-network comes into play. With this policy, Apollo Client will run your query against both your cache and your GraphQL server. This further simplifies our example above if we want to show the user the last fetched results, but then update the feed immediately once the latest data arrives. This is similar to what X/Twitter does when you reload the app: you'll see the last value that was in the cache, and then it renders the network values when they're ready. This can cause a jarring user experience, though, if the data changes a lot over time, so I recommend using this methodology sparingly.
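To make the cache-only plus fragment pattern concrete, here is a hypothetical sketch of the Post component (the fragment, field names, and GetPost operation are illustrative; the parent's network-only query is assumed to spread PostFields so the cache is seeded):

```typescript
import { gql, useQuery } from "@apollo/client";

// Exported fragment: the parent feed query spreads this so it always
// fetches exactly the fields this component reads from the cache.
export const POST_FRAGMENT = gql`
  fragment PostFields on Post {
    id
    author
    body
  }
`;

const GET_POST = gql`
  query GetPost($id: ID!) {
    post(id: $id) {
      ...PostFields
    }
  }
  ${POST_FRAGMENT}
`;

export function Post({ id }: { id: string }) {
  // cache-only: never hits the server; relies on the parent's
  // network-only query having already written this post to the cache.
  const { data } = useQuery(GET_POST, {
    variables: { id },
    fetchPolicy: "cache-only",
  });

  if (!data?.post) return null;
  return (
    <p>
      {data.post.author}: {data.post.body}
    </p>
  );
}
```

The parent query would then include `...PostFields` in its selection set, so refactors to Post's fields propagate automatically.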
However, if you wanted to update our existing example, we'd just change our SocialFeed component to use this policy, and that'll keep our client and server better in sync while still enabling 10s polling.

no-cache

This policy is very similar to the network-only policy, except it bypasses the local cache entirely. In our previous example, we wrote engagement as a sub-selector on a Post and stored fields there. These metrics can change pretty drastically in real time. Chat features, reactions, viewership numbers, etc., are all types of data that may change in real time. The no-cache policy is good while this type of data is active, such as during a live stream or within the first few hours of a post going out. You may want to use the cache-and-network policy eventually, but during that active period, you'll probably want to use no-cache so your consumers can trust your data. I'd recommend changing your server to split these queries and run different policies for the operations for performance reasons.

I haven't mentioned this yet, but you can make the fetch policy on a query dynamic, meaning you can switch between these policies depending on the component's state. This could look like the following:

`

We pass whether the event is live to the component, which then leverages that info to determine whether we should cache when fetching the chat. That being said, we should consider using subscription operations for this type of feature as well, but that's an exercise for another blog post.

standby

This is the most uncommon fetch policy, but it has plenty of uses. This option runs like a cache-first query when it executes. However, by default, this query does not run and is treated like a "skip" until it is manually triggered by a refetch or updateQueries caller. You can achieve similar results by leveraging the useLazyQuery operator, but standby maintains the same behavior as other useQuery operators, so you'll have more consistency among your components.
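The dynamic fetch policy described in the no-cache section above can be isolated into a small pure helper; chatFetchPolicy and GET_CHAT are hypothetical names for this sketch:

```typescript
type ChatFetchPolicy = "no-cache" | "cache-and-network";

// While an event is live, bypass the cache entirely so viewers always
// see fresh chat and reactions; afterwards, fall back to the cheaper
// cache-and-network policy.
function chatFetchPolicy(isLive: boolean): ChatFetchPolicy {
  return isLive ? "no-cache" : "cache-and-network";
}

// Hypothetical usage inside a chat component:
// const { data } = useQuery(GET_CHAT, {
//   variables: { eventId },
//   fetchPolicy: chatFetchPolicy(isLive),
// });
```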
This method is primarily used for operations pending other queries to finish, or when you want to trigger the caller on a mutation. Think about a dashboard with many filters that need to be applied before your query executes. With the standby fetch policy, you can wait until the user hits the Apply or Submit button, then call await client.refetchQueries({ include: ["DashboardQuery"] }), which will allow your component to pull in the parameters for your operation and execute it. Again, you could achieve this with useLazyQuery, so it's really up to you and your team how you want to approach this problem. To avoid learning two ways, though, I recommend picking just one path.

Conclusion

Apollo Client's fetch policies are a versatile and helpful tool for managing your application data and keeping it in sync with your GraphQL server. In general, you should use the defaults provided by the library, but think about the user experience you want to provide; this will help you determine which policy best meets your needs. Leveraging tools like fragments will enable you to manage your application and use composable patterns more effectively. With the rise of React Server Components and other similar patterns, you'll need to be wary of how they impact your Apollo Client strategy. However, if you're on a legacy application that leverages traditional SSR patterns, Apollo allows you to pre-render queries on the server along with their related cache. When you combine these technologies, you'll find that your apps perform great, and your users will be delighted.

Vercel BotID: The Invisible Bot Protection You Needed

Nowadays, bots do not act like “bots”. They can execute JavaScript, solve CAPTCHAs, and navigate like real users. Traditional defenses often fail to meet expectations or frustrate genuine users. That’s why Vercel created BotID, an invisible CAPTCHA that provides real-time protection against sophisticated bots and helps you protect your critical endpoints. In this blog post, we will explore why you should care about this new tool, how to set it up, its use cases, and some key considerations. We will be using Next.js for our examples, but please note that this tool is not tied to this framework alone; the only requirement is that your app is deployed and running on Vercel.

Why Should You Care?

Think about these scenarios:

- Checkout flows overwhelmed by scalpers
- Signup forms inundated with fake registrations
- API endpoints draining resources with malicious requests

They all impact you and your users negatively. For example, when bots flood your checkout page, real customers are unable to complete their purchases, costing your business money and damaging customer trust. Fake signups clutter the app, slowing things down and making user data unreliable. When someone deliberately overloads your app’s API, it can crash or become unusable, making users angry and creating a significant issue for you, the owner. BotID automatically detects and filters bots attempting any of the above without interfering with real users.

How does it work?

A lightweight first-party script gathers a rich set of browser and environment signals (this takes ~30ms, so there is no need to worry about performance), packages them into an opaque token, and sends that token with protected requests via the rewritten challenge/proxy path and header. Vercel’s edge scores the token and attaches a verdict, and the checkBotId() function simply reads that verdict so your code can allow or block the request. We will see how this is implemented in a second!
But first, let’s get started.

Getting Started in Minutes

1. Install the SDK:

`

2. Configure rewrites

Wrap your next.config.ts with BotID’s helper. This sets up the right rewrites so BotID can do its job (and not get blocked by ad blockers, extensions, etc.):

`

3. Integrate the client on public-facing pages (where BotID runs checks)

Declare which routes are protected so BotID can attach special headers when a real user triggers those routes. Create instrumentation-client.ts (place it in the root of your application or inside a src folder) and initialize BotID once:

`

instrumentation-client.ts runs before the app hydrates, so it’s a perfect place for global setup! If you are on a Next.js version older than 15.3, you need a different approach: render the React component inside the pages or layouts you want to protect, specifying the protected routes:

`

4. Verify requests on your server or API:

`

- NOTE: checkBotId() will fail if the route wasn’t listed on the client, because the client is what attaches the special headers that let the edge classify the request!

You’re all set - your routes are now protected! In development, checkBotId() will always return isBot = false so you can build without friction. To disable this, you can override the options for development:

`

What happens on a failed check?

In our example above, we return a 403 when the check fails, but what to do here is mostly up to you; the most common approaches are:

- Hard block with a 403 for obviously automated traffic (just what we did in the example above)
- Soft fail (generic error/“try again”) when you want to be cautious
- Step-up (require login, email verification, or other business logic)

Remember, although rare, false positives can occur, so it’s up to you to balance your fail strategy between security, UX, telemetry, and attacker behavior.
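As an illustrative sketch of steps 2–4 in one place: the import paths and option shapes below reflect our reading of the botid package, and the /api/checkout route is a hypothetical example; check the official docs for your installed version:

```typescript
// next.config.ts - wrap your config so BotID's rewrites are applied
import { withBotId } from "botid/next/config";

const nextConfig = {
  /* your existing Next.js config */
};

export default withBotId(nextConfig);

// instrumentation-client.ts - declare the protected routes once
import { initBotId } from "botid/client/core";

initBotId({
  protect: [{ path: "/api/checkout", method: "POST" }],
});

// app/api/checkout/route.ts - verify the verdict on the server
import { checkBotId } from "botid/server";

export async function POST(request: Request) {
  const verification = await checkBotId();

  if (verification.isBot) {
    // Hard block: obviously automated traffic.
    return new Response("Access denied", { status: 403 });
  }

  // Real user: proceed with the checkout logic.
  return Response.json({ ok: true });
}
```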
checkBotId()

So far, we have seen how to use the isBot property from checkBotId(), but there are a few more properties you can leverage. These are:

- isHuman (boolean): true when BotID classifies the request as a real human session (i.e., a clear “pass”). BotID is designed to return an unambiguous yes/no, so you can gate actions easily.
- isBot (boolean): We already saw this one. It will be true when the request is classified as automated traffic.
- isVerifiedBot (boolean): Here comes a less obvious property. Vercel maintains and continuously updates a comprehensive directory of known legitimate bots from across the internet, regularly adding new legitimate services as they emerge. This can be helpful for allowlists or custom logic per bot. We will see an example in a sec.
- verifiedBotName? (string): The name of the specific verified bot (e.g., “claude-user”).
- verifiedBotCategory? (string): The type of the verified bot (e.g., “webhook”, “advertising”, “ai_assistant”).
- bypassed (boolean): true if the request skipped the BotID check due to a configured Firewall bypass (custom or system). You could use this flag to avoid taking bot-based actions when you’ve explicitly bypassed protection.

Handling Verified Bots

- NOTE: Handling verified bots is available in botid@1.5.0 and above.

It might be the case that you don’t want to block some verified bots because they are not causing damage to you or your users, as can sometimes be the case for AI-related bots that fetch your site to give information to a user. We can use the verified-bot properties from checkBotId() to handle these scenarios:

`

Choosing your BotID mode

When leveraging BotID, you can choose between 2 modes:

- Basic Mode: Instant session-based protection, available on all Vercel plans.
- Deep Analysis Mode: Enhanced Kasada-powered detection, only available for Pro and Enterprise plan users.
With Deep Analysis Mode, you get more advanced detection that can block even the hardest-to-catch bots. To specify the mode you want, you must do so in both the client and the server. This is important because if the two do not match, verification will fail!

`

Conclusion

Stop chasing bots - let BotID handle them for you! Bots are smart and will only get smarter and more sophisticated. BotID gives you a simple way to push back without slowing your customers down. It is simple to install, customize, and use. Stronger protection equals fewer headaches. Add BotID, ship with confidence, and let the bots run into a wall without ever knowing what’s going on.
