Introduction to Vercel’s Flags SDK

In this blog post, we'll dig into Vercel’s Flags SDK: how it works, its key capabilities, and the best practices that help you get the most out of it.

You'll also see why you might prefer this tool over other feature flag solutions. And despite its strong integration with Next.js, the SDK isn't limited to a single framework: it's fully compatible with React and SvelteKit. We'll use Next.js for the examples, but feel free to follow along with the framework of your choice.

Why should I use it?

You might wonder, "Why should I care about yet another feature flag library?" Unlike some other solutions, Vercel's Flags SDK offers genuinely practical features: simplicity, flexibility, and smart patterns that help you manage feature flags quickly and efficiently.

It’s simple

Let's start with a basic example:

app
 ↳flags.js

import { flag } from 'flags/next';

export const exampleFlag = flag({
    key: 'example-flag',
    identify() {
        return { user: { id: '123' } };
    },
    decide({ entities }) {
        return entities.user.id === '123';
    },
});

// page.js
import { exampleFlag } from './flags';

const exampleValue = await exampleFlag();

This might look simple — and it is! — but it showcases some important features. Notice how easily we can define and call our flag without repeatedly passing context or configuration.

Many other SDKs require passing the flag's name and context every single time you check a flag, like this:

const exampleValue = await client.getBooleanValue('exampleFlag', context);

This can become tedious and error-prone, as you might accidentally use different contexts throughout your app. With the Flags SDK, you define everything once upfront, keeping things consistent across your entire application.

By "context", I mean the data needed to evaluate the flag, like user details or environment settings. We'll get into more detail shortly.

It’s flexible

Vercel’s Flags SDK is also flexible. You can integrate it with other popular feature flag providers like LaunchDarkly or Statsig using built-in adapters. And if the provider you want to use isn’t supported yet, you can easily create your own custom adapter.
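To give a feel for the adapter API, here's a minimal sketch of a custom adapter: an object with a decide function that the flag delegates to. The provider endpoint and response shape below are made up purely for illustration:

import { flag } from 'flags/next';

// Hypothetical adapter for a provider without built-in support.
// The endpoint and response shape are illustrative only.
function myProviderAdapter() {
    return {
        async decide({ key }) {
            // Ask the provider for this flag's current value
            const res = await fetch(`https://flags.my-provider.example/v1/${key}`);
            const data = await res.json();
            return data.enabled === true;
        },
    };
}

export const checkoutFlag = flag({
    key: 'new-checkout',
    adapter: myProviderAdapter(),
});

The built-in adapters follow the same shape, so swapping providers doesn't change how you call your flags.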

While we'll use Next.js for demonstration, remember that the SDK works just as well with React or SvelteKit.

Latency solutions

Feature flags require definitions and context evaluations to determine their values — imagine checking conditions like, "Is the user ID equal to 12?" Typically, these evaluations involve fetching necessary information from a server, which can introduce latency.

These evaluations happen through two primary functions: identify and decide. The identify function gathers the context needed for evaluation, and this context is then passed as an argument named entities to the decide function. Let's revisit our earlier example to see this clearly:

app
 ↳flags.js

import { flag } from 'flags/next';

export const exampleFlag = flag({
    key: 'example-flag',
    identify() {
        // Identify our evaluation context   
        return { user: { id: '123' } };
    },
    decide({ entities }) {
        // Evaluate or decide our value based on our condition
        return entities.user.id === '123';
    },
});

You can also pass a custom evaluation context when reading a feature flag, but this is generally discouraged: it reintroduces the inconsistency the SDK is designed to avoid.
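For completeness, here's roughly what that looks like. This sketch assumes the flag function's run method, which the SDK exposes for supplying a custom evaluation context (double-check the current docs before relying on it):

import { exampleFlag } from './flags';

// Supply a custom evaluation context instead of the flag's own identify().
// This is exactly why it's discouraged: different call sites can drift
// out of sync with each other.
const value = await exampleFlag.run({
    identify: { user: { id: '456' } },
});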

Using Edge Config

Normally, when our flags load, their definitions and evaluation contexts get bootstrapped by making a network request and then opening a WebSocket to listen for changes on the server. The problem is that in Serverless Functions with a short lifespan, this bootstrapping happens not just once but over and over again, which can cause latency issues.

To handle latency efficiently, especially in short-lived Serverless Functions, you can use Edge Config. Edge Config stores flag definitions at the Edge, allowing super-fast retrieval via Edge Middleware or Serverless Functions, significantly reducing latency.
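As a rough sketch, a flag can read its value straight from Edge Config inside decide using the @vercel/edge-config client. This assumes a connected Edge Config store containing an example-flag item; in practice, an adapter can encapsulate this pattern for you:

import { flag } from 'flags/next';
import { get } from '@vercel/edge-config';

export const exampleFlag = flag({
    key: 'example-flag',
    async decide() {
        // Edge Config is replicated close to your functions, so this read
        // avoids bootstrapping definitions over the network on every cold
        // start. Assumes an 'example-flag' item exists in the store.
        const enabled = await get('example-flag');
        return enabled === true;
    },
});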

Cookies

For more complex contexts that require network requests, avoid making those requests directly in Edge Middleware or CDNs, as this can drastically increase latency. Edge Middleware and CDNs are fast precisely because they avoid round trips to the origin server, and depending on the end user's location, reaching a distant origin can be costly. For example, a user in Tokyo might need to connect to a server in the US before the page can load.

Instead, a good pattern the Flags SDK offers to avoid this is cookies. You can store context data in cookies; the browser automatically sends them with each request in a standard format, giving you consistent, low-latency access to evaluation context data whether you're in Edge Middleware, the App Router, or the Pages Router:

import { flag } from 'flags/next';

export const exampleFlag = flag({
    // Definition
    key: 'example-flag',
    // Context
    identify({ cookies }) {
        // We get the cookie that we need for our context
        const userId = cookies.get('user-id')?.value;
        return { user: userId ? { id: userId } : undefined };
    },
    // Evaluation
    decide({ entities }) {
        // Cookie values are strings, so compare against a string
        return entities?.user?.id === '12';
    },
});

You can also encrypt or sign cookies to protect against client-side tampering.
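The signing itself isn't something the SDK does for you. One common approach is appending an HMAC signature to the cookie value; here's a minimal sketch using Node's built-in crypto module (the environment variable name is made up):

import { createHmac, timingSafeEqual } from 'node:crypto';

const SECRET = process.env.COOKIE_SIGNING_SECRET; // hypothetical env var

// Append an HMAC so we can detect client-side tampering.
export function sign(value) {
    const mac = createHmac('sha256', SECRET).update(value).digest('hex');
    return `${value}.${mac}`;
}

// Returns the original value, or undefined if the signature is invalid.
export function verify(signed) {
    const index = signed.lastIndexOf('.');
    if (index === -1) return undefined;
    const value = signed.slice(0, index);
    const expected = createHmac('sha256', SECRET).update(value).digest('hex');
    const a = Buffer.from(signed.slice(index + 1));
    const b = Buffer.from(expected);
    return a.length === b.length && timingSafeEqual(a, b) ? value : undefined;
}

Your identify function would then call verify on the raw cookie value before trusting it as evaluation context.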

Dedupe

Dedupe helps you cache function results to prevent redundant evaluations. If multiple flags rely on a common context method, like checking a user's region, Dedupe ensures the method executes only once per request, regardless of how many times it's invoked. Additionally, similar to cookies, the Flags SDK standardizes headers, allowing easy access to them. Let's illustrate this with the following example:

app
 ↳flags.js

import { dedupe, flag } from 'flags/next';

// A fake fetch function to simulate a network request
async function fakeFetch(url, options) {
    return new Response(JSON.stringify({ region: 'EU' }), { status: 200 });
}

// Simulated function to get the user's region from the request headers.
async function getUserRegion(headers) {
    // In a real-world scenario, this might involve calling an external geolocation API.
    // So we'll use a fake API to simulate the response.
    const response = await fakeFetch('https://api.example.com/get-region', {
        method: 'GET',
        headers: { 'x-country': headers.get('x-country') || '' }
    });
    const data = await response.json();
    return data;
}

// Wrap the region retrieval function using dedupe so that it runs only once per request.
const identifyRegion = dedupe(
    async ({ headers }) => {
        return await getUserRegion(headers);
    },
);

// Define the feature flag that decides the promotional discount eligibility based on the user's region.
export const promoDiscountFlag = flag({
    key: 'promo-discount-flag',
    // Use the deduped identify function for evaluation context.
    identify: identifyRegion,
    decide({ entities }) {
        // If the region isn’t determined, disable the flag.
        if (!entities?.region) return false;
        // Only enable the promotion for users in either 'EU' or 'NA'.
        return ['EU', 'NA'].includes(entities.region);
    },
});

app
 ↳plans
   ↳page.jsx

import { promoDiscountFlag } from '../flags';

export default async function PlansPage() {
    const isPromoAvailable = await promoDiscountFlag();

    return (
        <div className="p-4">
            <h1 className="text-2xl font-bold mb-4">Store</h1>

            <div className="grid grid-cols-1 md:grid-cols-2 gap-4">
                <div className="border rounded-lg p-4">
                    <h2 className="text-xl font-semibold mb-2">Basic Plan</h2>
                    <p className="text-gray-600 mb-2">Essential features for everyday use</p>
                    <p className="text-2xl font-bold">$9.99/month</p>
                </div>

                <div className="border rounded-lg p-4">
                    <h2 className="text-xl font-semibold mb-2">Premium Plan</h2>
                    <p className="text-gray-600 mb-2">Advanced features for power users</p>
                    { isPromoAvailable ? (
                        <div>
                            <p className="text-sm text-gray-500 line-through">$19.99/month</p>
                            <p className="text-2xl font-bold text-green-600">$14.99/month</p>
                            <p className="text-sm text-green-600">Special regional promotion!</p>
                        </div>
                    ) : (
                        <p className="text-2xl font-bold">$19.99/month</p>
                    )}
                </div>
            </div>
        </div>
    );
}

Server-side patterns for static pages

You can evaluate feature flags on the client side, and that has benefits: the page itself can stay statically rendered. But it also leads to unnecessary loaders, skeletons, or layout shifts while the flags resolve, which are never great.

To maintain static rendering benefits while using server-side flags, the SDK provides a method called precompute.

Precompute

Precompute lets you decide which version of a page to display based on feature flags, and then cache each version so it can be served statically. You can precompute flag combinations in Middleware or Route Handlers:

app
 ↳flags.js

import { flag } from "flags/next";

export const showNewLayout = flag({
    // Definition
    key: 'new-layout',
    // Context
    identify({ cookies }) {
        const userId = cookies.get('user-id')?.value;
        return { user: userId ? { id: userId } : undefined };
    },
    // Evaluation
    decide({ entities }) {
        return entities?.user?.id === '12';
    },
});

export const showSilksongBanner = flag({
    key: 'silksong-banner',
    identify({ cookies }) {
        return { user: cookies.get('vessel')?.value ? { id: cookies.get('vessel')?.value } : undefined };
    },
    decide({ entities }) {
        return entities?.user?.id === 'hornet';
    },
});

// Export our flags in an array (it can be just one or multiple flags)
export const homePageFlags = [showNewLayout, showSilksongBanner];

Next, inside a middleware (or route handler), we will precompute these flags and create static pages per each combination of them.

// middleware.ts
import { type NextRequest, NextResponse } from 'next/server';
import { precompute } from 'flags/next';
import { homePageFlags } from './flags';

// Note that we're running this middleware for / only, but
// you could extend it to further pages you're experimenting on
export const config = { matcher: ['/'] };

export async function middleware(request: NextRequest) {
  // precompute returns a string encoding each flag's returned value
  const code = await precompute(homePageFlags);

  // rewrites the request to include the precomputed code for this flag combination
  const nextUrl = new URL(
    `/${code}${request.nextUrl.pathname}${request.nextUrl.search}`,
    request.url,
  );

  return NextResponse.rewrite(nextUrl, { request });
}

The user will never notice this: because we use a rewrite, they only ever see the original URL.

Now, on our page, we "invoke" our flags, passing the code from the route params:

app
 ↳[code]
   ↳page.jsx

import { showSilksongBanner, homePageFlags, showNewLayout } from "../flags";

export default async function Page({ params }) {
    const { code } = params;
    const shouldShowSilksongBanner = await showSilksongBanner(code, homePageFlags);
    const shouldShowNewLayout = await showNewLayout(code, homePageFlags);

    return (
        <div className="p-4">
            {shouldShowSilksongBanner && (
                <div className="bg-blue-100 p-3 mb-4 rounded">
                    🎮 Silksong Available
                </div>
            )}

            <div className="bg-white p-4 rounded shadow">
                <h1 className="text-xl font-bold mb-2">Welcome to Hallownest</h1>

                {shouldShowNewLayout ? (
                    <div className="mt-4">
                        <h2 className="font-semibold mb-2">Your Progress</h2>
                        <div className="space-y-2">
                            <div>✅ 3 areas completed</div>
                            <div>🔄 2 areas in progress</div>
                            <div>🔒 5 areas locked</div>
                        </div>
                    </div>
                ) : (
                    <p className="text-gray-600">Start your journey in the vast underground kingdom.</p>
                )}
            </div>
        </div>
    );
}

By passing the code, we are not really evaluating the flag again; we read its precomputed value right away. Our middleware decides which variation of the page to display to the user.

Finally, after rendering our page, we can enable Incremental Static Regeneration (ISR). ISR allows us to cache the page and serve it statically for subsequent user requests:

import { Params } from "next/dist/server/request/params";
import { showSilksongBanner, homePageFlags, showNewLayout } from "../flags";

interface HomeParams extends Params {
    code: string;
}

export async function generateStaticParams() {
    // returning an empty array is enough to enable ISR
    return [];
}

export default async function Page({ params }: { params: HomeParams }) {
...
}

Using precompute is particularly beneficial when enabling ISR for pages that depend on flags whose values cannot be determined at build time. Inputs like headers or geolocation aren't known until a request arrives, so we use precompute() and let the Edge evaluate them on the fly. In these cases, we rely on Middleware to dynamically determine the flag values, generate the HTML content once, and then cache it. At build time, we simply create an initial HTML shell.

Generate Permutations

If we prefer to generate static pages at build-time instead of runtime, we can use the generatePermutations function from the Flags SDK. This method enables us to pre-generate static pages with different combinations of flags at build time. It's especially useful when the flag values are known beforehand. For example, scenarios involving A/B testing and a marketing site with a single on/off banner flag are ideal use cases.

app
 ↳flags.js

import { flag } from 'flags/next'; 

export const showSilksongBanner = flag({
    key: 'showSilksongBanner',
    decide() {
        return true;
    },
});

export const showNewLayout = flag({
    key: 'showNewLayout',
    decide() {
        return true;
    },
});

export const greetingStyle = flag({
    key: 'greetingStyle',
    options: ['classic', 'modern', 'steampunk'],
    decide() {
        return 'classic';
    },
});

export const homePageFlags = [
  showSilksongBanner,
  showNewLayout,
  greetingStyle,
];

app
 ↳[code]
   ↳page.jsx

import { generatePermutations } from 'flags/next';
import {
  showSilksongBanner,
  showNewLayout,
  greetingStyle,
  homePageFlags,
} from '../flags';

// 1) at build time, Next will run this and prerender each combo
export async function generateStaticParams() {
  const codes = await generatePermutations(homePageFlags);
  return codes.map((code) => ({ code }));
}

export default async function Page({ params }) {
  const { code } = params;

  // 2) at request time, Next simply reads the prerendered HTML for this code
  const showBanner = await showSilksongBanner(code, homePageFlags);
  const useNewLayout = await showNewLayout(code, homePageFlags);
  const style = await greetingStyle(code, homePageFlags);

  return (
    <div className="p-4">
      {showBanner && (
        <div className="bg-blue-100 p-3 mb-4 rounded">
          🎮 Silksong Available
        </div>
      )}

      <div className="bg-white p-4 rounded shadow">
        <h1 className="text-xl font-bold mb-2">
          {style === 'steampunk'
            ? 'Welcome, Cog-and-Gear Explorer!'
            : style === 'modern'
            ? 'Welcome Back to Hallownest'
            : 'Welcome to Hallownest'}
        </h1>

        {useNewLayout ? (
          <div className="mt-4">
            <h2 className="font-semibold mb-2">Your Progress</h2>
            <div className="space-y-2">
              <div>✅ 3 areas completed</div>
              <div>🔄 2 areas in progress</div>
              <div>🔒 5 areas locked</div>
            </div>
          </div>
        ) : (
          <p className="text-gray-600">
            Start your journey in the vast underground kingdom.
          </p>
        )}
      </div>
    </div>
  );
}

Conclusion

Vercel’s Flags SDK stands out as a powerful yet straightforward solution for managing feature flags efficiently. With its ease of use, remarkable flexibility, and effective patterns for reducing latency, this SDK streamlines the development process and enhances your app’s performance. Whether you're building a Next.js, React, or SvelteKit application, the Flags SDK provides intuitive tools that keep your application consistent, responsive, and maintainable. Give it a try, and see firsthand how it can simplify your feature management workflow!
