Linting, Formatting, and Type Checking Commits in an Nx Monorepo with Husky and lint-staged

One way to keep your codebase clean is to enforce linting, formatting, and type checking on every commit. This is made very easy with pre-commit hooks. Using Husky, you can run arbitrary commands before a commit is made. This can be combined with lint-staged, which allows you to run commands on only the files that have been staged for commit. This is useful because you don't want to run linting, formatting, and type checking on every file in your project, but only on the ones that have been changed.

But if you're using an Nx monorepo for your project, things can get a little more complicated. Rather than having you use ESLint or Prettier directly, Nx provides its own scripts for linting and formatting. And type checking is complicated by the use of specific tsconfig.json files for each app or library. Setting up pre-commit hooks with Nx isn't as straightforward as in a simpler repository.

This guide will show you how to set up pre-commit hooks to run linting, formatting, and type checking in an Nx monorepo.

Configure Formatting

Nx comes with a command, nx format:write, for applying formatting to affected files, which we can give directly to lint-staged. This command uses Prettier under the hood, so it will abide by whatever rules you have in your root-level .prettierrc file. Just install Prettier, and add your preferred configuration.

npm install --save-dev prettier

Then add a .prettierrc file to the root of your project with your preferred configuration. For example, if you want to use single quotes and trailing commas, you can add the following:

{
    "singleQuote": true,
    "trailingComma": "all"
}
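
You can also try the formatter on its own before wiring it into any hooks. As a quick sanity check, the --all flag tells Nx to process every file rather than only affected ones:

npx nx format:check --all
npx nx format:write --all

Later on, lint-staged will invoke nx format:write with an explicit --files list instead.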

Configure Linting

Nx has its own plugin that uses ESLint to lint projects in your monorepo. It also has a plugin with sensible ESLint defaults for your linter commands to use, including ones specific to Nx. To install them, run the following command:

npm install --save-dev @nrwl/linter @nrwl/eslint-plugin-nx

Then, we can create a default .eslintrc.json file in the root of our project:

{
  "root": true,
  "ignorePatterns": ["**/*"],
  "plugins": ["@nrwl/nx"],
  "overrides": [
    {
      "files": ["*.ts", "*.tsx", "*.js", "*.jsx"],
      "rules": {
        "@nrwl/nx/enforce-module-boundaries": [
          "error",
          {
            "enforceBuildableLibDependency": true,
            "allow": [],
            "depConstraints": [
              {
                "sourceTag": "*",
                "onlyDependOnLibsWithTags": ["*"]
              }
            ]
          }
        ]
      }
    },
    {
      "files": ["*.ts", "*.tsx"],
      "extends": ["plugin:@nrwl/nx/typescript"],
      "rules": {}
    },
    {
      "files": ["*.js", "*.jsx"],
      "extends": ["plugin:@nrwl/nx/javascript"],
      "rules": {}
    }
  ]
}

The above ESLint configuration will, by default, apply Nx's module boundary rules to any TypeScript or JavaScript files in your project. It also applies its recommended rules for JavaScript and TypeScript respectively, and gives you room to add your own.
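
The depConstraints entry above is deliberately permissive: any project may depend on any other. As an illustration of how you might tighten it, suppose you assign tags to your projects in their project.json files. The scope:shared and scope:admin tags below are hypothetical examples, not part of the setup above; you would swap this array into the configuration we just wrote:

"depConstraints": [
  {
    "sourceTag": "scope:shared",
    "onlyDependOnLibsWithTags": ["scope:shared"]
  },
  {
    "sourceTag": "scope:admin",
    "onlyDependOnLibsWithTags": ["scope:shared", "scope:admin"]
  }
]

With constraints like these, an import from an admin-scoped library inside a shared library would fail linting.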

You can also have ESLint configurations specific to your apps and libraries. For example, if you have a React app, you can add a .eslintrc.json file to the root of your app directory with the following contents:

{
    "extends": ["plugin:@nrwl/nx/react", "../../.eslintrc.json"],
    "rules": {
        "no-console": ["error", { "allow": ["warn", "error"] }]
    }
}

Set Up Type Checking

Type checking with tsc is normally a very straightforward process. You can just run tsc --noEmit to check your code for type errors. But things are more complicated in Nx with lint-staged.

There are two tricky things about type checking with lint-staged in an Nx monorepo. First, different apps and libraries can have their own tsconfig.json files. When type checking each app or library, we need to make sure we're using that specific configuration. The second wrinkle comes from the fact that lint-staged, by default, passes a list of staged files to the commands it runs. And tsc will only accept either a specific tsconfig file or a list of files to check, not both.

We do want to use the specific tsconfig.json files, and we also only want to run type checking against apps and libraries with changes. To do this, we're going to create some Nx run commands within our apps and libraries and run those instead of calling tsc directly.

Within each app or library you want type checked, open the project.json file, and add a new run command like this one:

{
  // ...
  "targets": {
    // ...
    "typecheck": {
      "executor": "nx:run-commands",
      "options": {
        "commands": ["tsc -p tsconfig.app.json --noEmit"],
        "cwd": "apps/directory-of-your-app-goes-here",
        "forwardAllArgs": false
      }
    }
  }
}

Inside commands is our type-checking command, using the local tsconfig.app.json file for that specific Nx app. The cwd option tells Nx where to run the command from. The forwardAllArgs option tells Nx to ignore any arguments passed to the command. This is important because tsc will fail if you pass it both a tsconfig file and a list of files from lint-staged.

Now if we ran nx affected --target=typecheck from the command line, we would be able to type check all affected apps and libraries that have a typecheck target in their project.json. Next we'll have lint-staged handle this for us.
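
Before handing this off to lint-staged, you can sanity-check the new target from the command line. A quick example, assuming an app named my-app (a hypothetical project name, substitute your own):

npx nx run my-app:typecheck

npx nx affected --target=typecheck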

Installing Husky and lint-staged

Finally, we'll install and configure Husky and lint-staged. These are the two packages that will allow us to run commands on staged files before a commit is made.

npm install --save-dev husky lint-staged

In your package.json file, add the prepare script to run Husky's install command:

{
    "scripts": {
        "prepare": "husky install"
    }
}

Then, run your prepare script to set up git hooks in your repository. This will create a .husky directory in your project root with the necessary file system permissions.

npm run prepare

The next step is to create our pre-commit hook. We can do this from the command line:

npx husky add .husky/pre-commit "npx lint-staged --concurrent false --relative"

It's important to use Husky's CLI to create our hooks, because it handles file system permissions for us. Creating files manually could cause problems when we actually want to use the git hooks. After running the command, we will now have a file at .husky/pre-commit that looks like this:

#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npx lint-staged --concurrent false --relative

Now whenever we try to commit, Husky will run the lint-staged command. We've given it some extra options. First is --concurrent false, which makes sure that tasks writing fixes for formatting and linting don't conflict with simultaneous attempts at type checking. Second is --relative, because our Nx commands for formatting and linting expect a list of file paths relative to the repo root, but lint-staged would otherwise pass absolute paths by default.

We've got our pre-commit command ready, but we haven't actually configured lint-staged yet. Let's do that next.

Configuring lint-staged

In a simpler repository, it would be easy to add some lint-staged configuration to our package.json file. But because we're trying to check a complex monorepo in Nx, we need to add a separate configuration file. We'll call it lint-staged.config.js and put it in the root of our project.

Here is what our configuration file will look like:

module.exports = {
  '{apps,libs,tools}/**/*.{ts,tsx}': files => {
    return `nx affected --target=typecheck --files=${files.join(',')}`;
  },
  '{apps,libs,tools}/**/*.{js,ts,jsx,tsx,json}': [
    files => `nx affected:lint --files=${files.join(',')}`,
    files => `nx format:write --files=${files.join(',')}`,
  ],
};

Within our module.exports object, we've defined two globs: one that will match any TypeScript files in our apps, libraries, and tools directories, and another that also matches JavaScript and JSON files in those directories. We only need to run type checking for the TypeScript files, which is why that one is broken out and narrowed down to only those files.

Each of these globs can be assigned a single command or an array of commands. It's common with lint-staged to just pass a string like tsc --noEmit or eslint --fix. But we're going to pass functions instead, so we can combine the list of files provided by lint-staged with the desired Nx commands.

The nx affected and nx format:write commands both accept a --files option. And remember that lint-staged always passes in a list of staged files. That array of file paths becomes the argument to our functions, and we concatenate the list of files from lint-staged into a comma-delimited string and interpolate it into the desired Nx command's --files option. This overrides Nx's normal behavior, explicitly telling it to run the commands only on the files that have changed and any other files affected by those changes.
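
For example, if two matching files were staged, the generated commands would look something like this (the file paths here are hypothetical):

nx affected --target=typecheck --files=apps/my-app/src/main.ts,libs/ui/src/button.tsx
nx affected:lint --files=apps/my-app/src/main.ts,libs/ui/src/button.tsx
nx format:write --files=apps/my-app/src/main.ts,libs/ui/src/button.tsx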

Testing It Out

Now that we've got everything set up, let's try it out. Make a change to a TypeScript file in one of your apps or libraries. Then try to commit that change. You should see the following in your terminal as lint-staged runs:

Preparing lint-staged...
Running tasks for staged files...
  lint-staged.config.js
        {apps,libs,tools}/**/*.{ts,tsx}
            nx affected --target=typecheck --files=apps/your-app/file-you-changed.ts
        {apps,libs,tools}/**/*.{js,ts,jsx,tsx,json}
            nx affected:lint --files=apps/your-app/file-you-changed.ts
            nx format:write --files=apps/your-app/file-you-changed.ts
Applying modifications from tasks...
Cleaning up your temporary files...

Now, whenever you try to commit changes to files that match the globs defined in lint-staged.config.js, the defined commands will run first, and verify that the files contain no type errors, linting errors, or formatting errors. If any of those commands fail, the commit will be aborted, and you'll have to fix the errors before you can commit.
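
If you ever need to bypass these checks intentionally, say, to save a work-in-progress commit on a branch, Git's standard --no-verify flag skips the pre-commit hook:

git commit --no-verify -m "wip: still fixing type errors"

Use it sparingly, since it sidesteps everything we just set up.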

Conclusion

We've now set up a monorepo with Nx and configured it to run type checking, linting, and formatting on staged files before a commit is made. This will help us catch errors before they make it into our codebase, and it will also help us keep our codebase consistent and readable. To see an example Nx monorepo with these configurations, check out this repo.
