Utilizing API Environment Variables on Next.js Apps Deployed to AWS Amplify


Although Next.js is a Vercel product, you may choose not to deploy to Vercel due to its pricing model or concerns about vendor lock-in. Fortunately, several other platforms fully support deploying Next.js, including AWS Amplify. Whether or not you're using the Next.js app directory, your API routes get deployed as serverless functions to whatever cloud provider you choose, and AWS Amplify is no different. However, Amplify may require an extra step before those serverless functions can use environment variables. Let's explore how AWS Amplify deploys your API routes, and how you can properly use environment variables in this context.

How AWS Amplify manages Next.js API Routes

When you deploy a Next.js app via Amplify, it takes the standard build outputs, stores them in S3, and serves them from behind a CloudFront distribution. However, once you introduce server-side rendering, Amplify uses Lambda@Edge functions, which execute the logic required to render each server-rendered page. The same flow applies to API routes in a Next.js app: each route is deployed as its own Lambda function.

Next.js apps have two types of environment variables. Variables prefixed with NEXT_PUBLIC_ signal to Next.js that they are available on the frontend of your application and can be exposed to the general public. At build time, Amplify injects these variables, with the values stored in the Amplify Console UI, into your frontend bundle. The other variables represent secrets that should not be exposed to users, and they are excluded from the client build. However, neither set of variables is injected into your API routes.
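To make the distinction concrete, here is a minimal .env file sketch. Both variable names and values are illustrative, not taken from any real project:

```shell
# Inlined into the browser bundle at build time; safe to expose publicly.
NEXT_PUBLIC_API_URL=https://api.example.com

# Server-only secret; never shipped to the client bundle.
DATABASE_PASSWORD=s3cret
```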

If you need an environment variable in your API routes, you will need to explicitly inject its value into your application at build time so Next.js can pick it up and store it alongside your Lambda functions.

Injecting Environment Variables into the Amplify Build

By default, Amplify generates an amplify.yml file that controls your application's continuous delivery (CD) pipeline. For Next.js applications, the default file looks like this:

version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
      - .next/cache/**/*

To inject variables into our build, we need to write them to a .env.production file before the application build runs in the build phase. We can do that using the following bash command:

env | grep -e <VARIABLE NAME> >> .env.production

env prints every environment variable available in the build environment. The pipe operator (|) passes that output to grep -e, which filters it down to lines matching the given pattern; here, that is the line containing our variable name and its value. The >> operator then appends that line to .env.production, creating the file if it does not exist. Be careful not to use a single > operator, as that would overwrite the file's entire contents.
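As a quick sanity check, here is a runnable sketch of what that command does, using a throwaway variable. MY_API_KEY is an illustrative name, not one from your Amplify console:

```shell
#!/bin/sh
# Start from a clean slate so the demo is repeatable.
rm -f .env.production

# Simulate a variable set in the build environment.
export MY_API_KEY="abc123"

# Filter the environment down to our variable and append it to the file.
env | grep -e MY_API_KEY >> .env.production

cat .env.production
# -> MY_API_KEY=abc123
```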

Our amplify.yml should now look like this:

version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - env | grep -e <VARIABLE NAME> >> .env.production
        - npm run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
      - .next/cache/**/*

It is important to note that you must do this for every environment variable you want to use in an API route, whether or not it has the NEXT_PUBLIC_ prefix.
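Since grep accepts multiple -e patterns, a single build command can append several variables at once. A sketch with illustrative variable names:

```shell
#!/bin/sh
# Clean slate for a repeatable demo.
rm -f .env.production

# Simulate two variables set in the build environment (illustrative names).
export MY_API_KEY="abc123"
export NEXT_PUBLIC_ANALYTICS_ID="ua-42"

# One grep invocation with two -e patterns appends both lines.
env | grep -e MY_API_KEY -e NEXT_PUBLIC_ANALYTICS_ID >> .env.production
```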

Now you can use process.env.[VARIABLE NAME] in your API routes to access those values without any problems. If you want to learn more about environment variables in Next.js, check out their docs.
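As a sketch, a pages-router API route reading one of the injected values might look like this. The route path and the API_SECRET variable name are hypothetical:

```javascript
// pages/api/health.js -- hypothetical route; API_SECRET is an illustrative name.
export default function handler(req, res) {
  // Resolved at runtime from the .env.production file bundled with the Lambda.
  const secret = process.env.API_SECRET;
  if (!secret) {
    // The variable was never injected during the build.
    return res.status(500).json({ error: 'API_SECRET is not configured' });
  }
  return res.status(200).json({ configured: true });
}
```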

Conclusion

In short, AWS Amplify deploys your Next.js API routes as Lambda@Edge functions that can't access your console-set environment variables by default. As a result, you'll need to use the method described above to make environment variables available to your functions. If you want to get started with Next.js on Amplify today, check out our starter.dev kit and deploy it to your AWS Amplify account. It'll auto-connect to your git repository, auto-deploy on push, and collaborating with others won't cost you extra per seat.

