
Content Projection in Angular


Imagine yourself at a movie theater. No matter what theater you go to, or what movie you see, you'll notice some very similar things - most notably, the large screen the movie plays on. This screen is an excellent example of a reusable object - it can show any movie projected onto it! The screen doesn't need to know the details of the movie it's playing - all it does is show whatever content is projected onto it.

This is a similar idea to how content projection works in web development! Today, we'll go over what content projection looks like and how you can use it to make components that are reusable and less tied to the data they display.

We'll be using Angular for these examples, but any framework you'd like to use likely has a way to do this as well!

If you'd like to view the source code for these examples, you can check them out in this Stackblitz editor.

Single-slot projection

The simplest way to do content projection is to use a single "slot", or area, where we pass content in. Note that you're not limited in what you pass - you can provide as much content as you want. The component handling the projection simply renders all of it in that single slot.

Using the movie example above, let's create a simple "screen" that will display a random cat GIF, and some "now playing" text with it.

For the screen component, the HTML is very simple - just a section container and a special Angular element called ng-content. This element acts as a placeholder for the content that will be passed in, and does not create a DOM element itself. It tells the component to render whatever content it receives, whatever shape that content takes, in its place.

// single-screen.component.html
<section>
  <ng-content></ng-content>
</section>
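
For context, the class behind this template is just an ordinary Angular component - nothing about content projection needs to be declared there. A minimal sketch might look like the following (file and class names are assumed; only the selector matters for the examples below):

// single-screen.component.ts (minimal sketch - names assumed)
import { Component } from '@angular/core';

@Component({
  selector: 'app-single-screen',
  templateUrl: './single-screen.component.html',
})
export class SingleScreenComponent {
  // No inputs needed - the projected content arrives through <ng-content>.
}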

Then, in our main app component, we can send this screen the data we want it to show!

// app.component.html
<app-single-screen>
  <div class="movie">
    <img src="https://cataas.com/cat/gif" />
  </div>
  <div class="now-playing">
    <h2>Now Playing: Random Cat GIF!</h2>
  </div>
</app-single-screen>

And that's it - our single-screen component will display the GIF and title!

Single slot content projection Angular example

Multi-slot projection

Sometimes it's useful to direct parts of the projected content into specific sections of our component. In the previous example, the "Now Playing" text could end up almost anywhere - at the top of the screen, at the bottom, or over the image itself. Even though we're projecting content into our component, we can give certain sections a name, so that when we write the content that gets sent to the component, we can tell each piece where to go.

If we want our screen to always show the title first and then the image, we specify a select attribute on the ng-content tag. One ng-content tag can be left without a select attribute to act as the default slot - anything that doesn't match another slot is projected there.

// multi-screen.component.html
<section>
  <ng-content></ng-content>
  <ng-content select="[movie]"></ng-content>
</section>
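
As a side note, select accepts a standard CSS selector, so an attribute isn't the only option - a class or element selector works just as well. Here's a hedged alternative version of the same template that targets the class already present on the projected markup instead of a custom attribute:

// multi-screen.component.html (alternative sketch - selecting by class instead of attribute)
<section>
  <ng-content></ng-content>
  <ng-content select=".movie"></ng-content>
</section>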

Then, when we use this component in our main app, the content appears in the order the component's template defines, no matter what order we write it in.

// app.component.html
<app-multi-screen>
  <div class="movie" movie>
    <img src="https://cataas.com/cat/gif?tags=silly" />
  </div>
  <div class="now-playing">
    <h2>Now Playing: Random Cat GIF!</h2>
  </div>
</app-multi-screen>
Multi slot content projection Angular example
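
One more trick worth knowing: if the element you want to project can't carry the attribute directly, you can wrap it in an ng-container and use Angular's ngProjectAs attribute to tell the projection which slot it belongs to. A small sketch using the same components as above:

// app.component.html (sketch using ngProjectAs)
<app-multi-screen>
  <ng-container ngProjectAs="[movie]">
    <img src="https://cataas.com/cat/gif" />
  </ng-container>
  <div class="now-playing">
    <h2>Now Playing: Random Cat GIF!</h2>
  </div>
</app-multi-screen>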

In conclusion

This topic can be as simple or as complex as you need it to be. There are tons of great use cases for content projection. You could use it to create reusable layouts, to build an expansion toggle that wraps different lists of data (a quick sketch of that idea follows below), or to create reusable, styled pieces that can take in whatever content you'd like. If you're interested in some more complex examples, I highly recommend checking out the Angular documentation on this subject. Can you think of other great use cases? Let us know!
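
As a quick illustration of the expansion toggle idea, here's a minimal sketch (all names here are hypothetical) of a wrapper that projects whatever list it's given and simply shows or hides it:

// expansion-panel.component.ts (hypothetical sketch)
import { Component } from '@angular/core';

@Component({
  selector: 'app-expansion-panel',
  template: `
    <button (click)="open = !open">{{ open ? 'Hide' : 'Show' }}</button>
    <div [hidden]="!open">
      <!-- Whatever list (or anything else) the parent provides gets projected here -->
      <ng-content></ng-content>
    </div>
  `,
})
export class ExpansionPanelComponent {
  open = false;
}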

Demo

You can find the full working demo of both examples in the Stackblitz editor linked above.

