
Adding React to your ASP.NET MVC web app


This article was written over 18 months ago and may contain information that is out of date. Some content may be relevant but please refer to the relevant official documentation or available resources for the latest information.


In this article, we'll take a look at how and why you might choose React to handle your front-end concerns while letting ASP.NET manage the back end.

Architecture

First, let's consider the range of responsibilities that each solution offers:

  • Templating - scaffolding out HTML markup based on page data
  • Routing - converting a request into a response
  • Middleware Pipeline - composing the request/response handling logic on the server
  • Model Binding - building usable model objects from HTML form data
  • API Services - handling data requests on a server
  • Client Side Interaction - updating a web page based on user interaction

For the most part, both of these frameworks offer ways to handle most of these concerns:

| Feature                 | ASP.NET | React |
|-------------------------|---------|-------|
| Templating              | ✔       | ✔     |
| Routing                 | ✔       | ✔     |
| Middleware Pipeline     | ✔       |       |
| Model Binding           | ✔       | ✔     |
| API Services            | ✔       |       |
| Client Side Interaction |         | ✔     |

The challenge in combining these two technologies is deciding which pieces of each work best together.

Historically, ASP.NET has struggled to offer any sort of rich client experience out of the box. Because ASP.NET performs templating on the server, it's difficult to immediately respond to changes in state on the client without making a round trip, or writing the client side logic entirely separate from the rest of your application.

Also, React doesn't have anything to say about server-side considerations or API calls, so part of building any React app will involve picking how you want to handle requests on the server.

Here's roughly how we'll structure our architecture, with React taking care of all the client side concerns, and .NET handling the server side API.


Aside: Blazor is adding rich client-side interactivity to .NET. If you want to give your existing .cshtml pages superpowers, that's another possible path to pursue, but there's already a large and vibrant ecosystem for using React to take care of front-end architecture.

Prerequisites - If you don't have these, you'll need 'em:

  • .NET Core SDK - for the dotnet CLI and the server-side build
  • Node.js (with npm) - for the React dev server and the client-side build

Getting Started

.NET comes with its own React starter template, documented in the official ASP.NET Core docs. We'll use it for this example. You can spin up a new project with the dotnet new command like this:

dotnet new react -o my-new-app

Note: If you have Visual Studio, you'll also see the available template starters in the new project dialog.

You should be able to cd into your application directory, open it in VS Code, and launch with F5, or by running the following:

dotnet run

The app will be available at https://localhost:5001/


.NET React Starter

Let's review what this template adds, and how all the pieces fit together.

File Structure

Here's a very high-level view of some of the files involved:

my-app
├── .vscode/          # vs code configs
├── bin/              # generated dotnet output
├── ClientApp         # React App - seeded with CRA
│   ├── build/        # generated react output
│   ├── public/       # static assets
│   ├── src/          # react app source
│   └── package.json  # npm configuration
├── Controllers/      # dotnet controllers
├── Models/           # dotnet models
├── Pages/            # razor pages
├── Program.cs        # dotnet entry point
├── Startup.cs        # dotnet app configuration
└── my-app.csproj     # project config and build steps

The Startup.cs file wires up the React application as ASP.NET middleware with the following services and configuration:

// ConfigureServices()
services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "ClientApp/build";
});

// Configure()
app.UseStaticFiles();
app.UseSpaStaticFiles();
app.UseSpa(spa =>
{
    spa.Options.SourcePath = "ClientApp";
    if (env.IsDevelopment())
    {
        spa.UseReactDevelopmentServer(npmScript: "start");
    }
});

This takes advantage of the Microsoft.AspNetCore.SpaServices.Extensions package on NuGet to serve the client-side build output and, during development, kick off the React dev server.

Finally, to wire React into the .NET build, the .csproj file contains a couple of targets which run the client build via npm and copy the build output into the published .NET project.

<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />
  <!-- ... -->
</Target>

Adding Functionality - More Cats Please

Finally, let's review how to leverage all of these existing components to start adding functionality to our application. Making API calls on the server side allows us to protect API keys and tokens, while still benefiting from immediate client updates without a full page refresh.

As an example, we'll use TheCatApi, which has a quick, easy, and free sign up process to get your own API key.

For this demo project, we'll add a random image viewer, a listing of all breeds, and a navigation component using react-router to navigate between them.

We can scaffold C# classes from the sample API output by running it through JSON2CSharp, and we'll add them to the Models folder.
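For reference, here's a trimmed, illustrative sample of the JSON that the breeds endpoint returns (the field names match TheCatApi's breed objects; the values are just examples). This is the kind of payload you'd paste into JSON2CSharp to generate the matching C# class:

```javascript
// A trimmed, illustrative sample of one item from /v1/breeds.
// Field names follow TheCatApi; treat the values as examples only.
const sampleBreed = {
  id: "abys",
  name: "Abyssinian",
  temperament: "Active, Energetic",
  origin: "Egypt",
};

// JSON2CSharp works from a serialized payload like this
const json = JSON.stringify([sampleBreed]);
const parsed = JSON.parse(json);
console.log(parsed[0].name);
```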

MVC Controller - That's the C in MVC!

The controller will orchestrate all of the backend services. Here's an example that sets up a wrapper around the cat API so we can simplify our client-side logic and forward the request from the backend, but we could also hit a database, the filesystem, or any other backend interface. We'll deserialize the API response into a strongly typed class using the new System.Text.Json.JsonSerializer. This is baked into .NET Core, and saves us the historical dependency on Newtonsoft's Json.NET. The API controller can then just return the data (wrapped up in a Task because we made our method async).

[ApiController]
[Route("api/[controller]")]
public class BreedsController : ControllerBase
{
    private readonly IConfiguration _configuration;

    public BreedsController(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    [HttpGet]
    public async Task<IEnumerable<Breed>> Get()
    {
        var BASE_URL = "https://api.thecatapi.com";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("x-api-key", _configuration["CatApiKey"]);

        var resp = await client.GetStringAsync(BASE_URL + "/v1/breeds");
        var breeds = JsonSerializer.Deserialize<Breed[]>(resp);

        return breeds;
    }
}

If we run the project, we should now be able to fetch data by navigating to /api/breeds in the browser.

You can check that it works on its own by going to https://localhost:5001/api/breeds

Now let's load that up into a React component.

React Component - It's Reactive :D

Here's the basic structure of a React component:

import React, { useState } from 'react';

export function Breeds() {
  const [loading, setLoading] = useState(true);
  const [breeds, setBreeds] = useState([]);

  const contents = loading
    ? <p><em>Loading...</em></p>
    : renderBreedsTable(breeds);

  return (
    <div>
      <h1 id="tableLabel">Get Breeds</h1>
      <p>This component demonstrates fetching data from the server.</p>
      {contents}
    </div>
  );
}

If this JavaScript looks unfamiliar, it's because it uses JSX, as well as ES6 syntax that's not supported by all browsers, which is why React code gets preprocessed by Babel into something more universally understood by browsers. But let's take a look at what's going on piece by piece.

In order for JSX to be processed, we need to import React into our file (until React 17 that is).

The component function's primary responsibility is to return the HTML markup that we want to use on the page. Everything else is there to help manage state, or inform how to template and transform the data into HTML. So if we want a conditional, we can use the ternary operator (bool ? a : b), or if we want to transform items in an array, we can use Array.map to produce the output.
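Since there's no JSX involved in either tool, a plain-JavaScript sketch can show both at work, building the same kind of markup as strings:

```javascript
// Plain-JS illustration of the two templating tools mentioned above:
// a ternary for conditionals, and Array.map for transforming data.
const loading = false;
const breeds = [{ name: "Abyssinian" }, { name: "Bengal" }];

// Array.map turns each data item into a row of markup
const rows = breeds.map(b => `<tr><td>${b.name}</td></tr>`);

// The ternary picks between the loading placeholder and the table
const contents = loading
  ? "<p><em>Loading...</em></p>"
  : `<table>${rows.join("")}</table>`;

console.log(contents);
```

JSX compiles down to plain function calls, so the same ternary and map expressions work unchanged inside a component's return value.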

In order to manage state inside of a component, we use the React hook useState, which returns two items in an array that we can destructure like this:

const [loading, setLoading] = useState(true);

This allows us to update the state of this property and informs React to dynamically re-render the output when we do so. For example, while loading, we can display some boilerplate text until the data has returned, and then update the view appropriately.
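As a toy sketch (not React's actual implementation), here's how a single state cell plus a render function produces that [value, setter] pair, and why re-rendering picks up the new value:

```javascript
// Toy sketch of the useState contract. Real React stores hook state per
// component and schedules a re-render inside the setter; this only shows
// the [value, setter] shape and the re-render-sees-new-value behavior.
let cell; // stand-in for React's internal hook storage

function useStateSketch(initial) {
  if (cell === undefined) cell = initial;  // first render seeds the value
  const set = (next) => { cell = next; };  // React would also re-render here
  return [cell, set];
}

function render() {
  const [loading, setLoading] = useStateSketch(true);
  return { loading, setLoading };
}

const first = render();   // first.loading is true
first.setLoading(false);  // update the cell
const second = render();  // second render reads the updated value
```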

We'll want to fetch data on load, but firing async calls during the initial render can have some weird side effects, so we can use another hook called useEffect. We pass an empty dependency array so the effect runs once, right after the component first renders, and we put our initial call for data inside it:

useEffect(() => {
  // define an inner async function, since the effect callback
  // itself shouldn't return a promise
  const fetchData = async () => {
    const response = await fetch('/api/breeds');
    const data = await response.json();

    setBreeds(data);
    setLoading(false);
  };

  fetchData();
}, []);

This hits our backend API, gets the data, and updates our component state not only with the data, but also by toggling the loading indicator. Any updates to state that are used in the render function will cause it to kick off again and update the DOM automatically.
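Stripped of the fetch plumbing, the effect performs a single state transition once the data arrives. Isolating it as a pure function (the names here are ours, not a React API) makes the before/after easy to see:

```javascript
// The state transition the effect performs once data arrives:
// flip the loading flag and store the breeds, without mutating
// the previous state object (the same rule React's setters follow).
function applyBreedsResponse(state, data) {
  return { ...state, loading: false, breeds: data };
}

const before = { loading: true, breeds: [] };
const after = applyBreedsResponse(before, [{ name: "Bengal" }]);

console.log(before.loading, after.loading); // true false
```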

Let's imagine we create a second component, roughly the same as the first, to go get a random cat image. How do we handle navigation between the two components? Many of the advantages of choosing React come from preventing full page reloads on navigation. After all, your header, footer, and loaded JS and CSS libraries haven't changed, so why should they be shipped to and parsed by the client from scratch? So we'll handle all navigational concerns within React as well.

Here's a pared down version of how that occurs:

  1. Our index.js file wraps the entire application in BrowserRouter from react-router-dom.
  2. It then loads our application component which passes Route objects with a path and component into the overall Layout.
  3. The layout then renders the "live" route by injecting props.children into the rest of the site layout.
  4. React Router handles the rest, only loading components when their path is navigated to in the browser or from other areas in the app.

// index.js
ReactDOM.render(
  <BrowserRouter basename={baseUrl}>
    <App />
  </BrowserRouter>,
  rootElement);

// app.js
export default function App() {
  return (
    <Layout>
      <Route exact path='/' component={RandomCat} />
      <Route path='/breeds' component={Breeds} />
    </Layout>
  );
}

// layout.js
export function Layout(props) {
  return (
    <div>
      <NavMenu />
      <Container>
        {props.children}
      </Container>
    </div>
  );
}
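Conceptually, those Route elements declare a match-first lookup from URL to component. Here's a toy sketch of that idea in plain JavaScript (not react-router's actual algorithm; the route table and names are just stand-ins for the app above):

```javascript
// Toy sketch of route matching: pick the first route whose path matches
// the current URL. The `exact` flag stops "/" from matching everything,
// mirroring the `exact` prop on the Route elements above.
const routes = [
  { path: "/", exact: true, component: "RandomCat" },
  { path: "/breeds", component: "Breeds" },
];

function matchRoute(routes, url) {
  return routes.find(r => (r.exact ? url === r.path : url.startsWith(r.path)));
}

console.log(matchRoute(routes, "/breeds").component); // "Breeds"
console.log(matchRoute(routes, "/").component);       // "RandomCat"
```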

Wrap Up & Source Code

That's a quick rundown of how you can combine these two technologies and leverage the good parts of each.

There are many ways to explore each of these technologies on their own, and to try out use cases for using them together.

For more context, you can browse the full source code for the article on my GitHub at KyleMit/aspnet-react-cats.

What would you like to see added to this example next? Tweet me at KyleMitBTV.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.


You might also like

Communication Between Client Components in Next.js cover image

Communication Between Client Components in Next.js

Communication Between Client Components in Next.js In recent years, Next.js has become one of the most popular React frameworks for building server-rendered applications. With the introduction of the App Router in Next.js 13, the framework has taken a major leap forward by embracing a new approach to building web applications: the concept of server components and client components. This separation of concerns allows developers to strategically decide which parts of their application should be rendered on the server and which are then hydrated in the browser for interactivity. The Challenge: Communicating Between Client Components While server components offer numerous benefits, they also introduce a new challenge: how can client components within different boundaries communicate with each other? For instance, let's consider a scenario where you have a button (a client component) and a separate client component that displays the number of times the button has been clicked. In a regular React application, this would typically be accomplished by lifting the state to a common ancestor component, allowing the button to update the state, which is then passed down to the counter display component. However, the traditional approach may not be as straightforward in a Next.js application that heavily relies on server components, with client components scattered across the page. This blog post will explore three ways to facilitate communication between client components in Next.js, depending on where you want to store the state. Lifting State to a Common Client Component One approach to communicating between client components is to lift the state to a common client component. This could be a React context provider, a state management system like Zustand, or any other solution that allows you to share state across components. The key aspect is that this wrapper component should be higher up in the tree (perhaps even in the layout) and accept server components as children. 
Next.js allows you to interleave client and server components as much as you want, as long as server components are passed to client components as props or as component children. Here's how this approach might look in practice. First of all, we'd create a wrapper client component that holds the state: ` This wrapper component can be included in the layout: ` Any server components can be rendered within the layout, potentially nesting them several levels deep. Finally, we'd create two client components, one for the button and one for the counter display: ` ` The entire client and server component tree is rendered on the server, and the client components are then hydrated in the browser and initialized. From then on, the communication between the client components works just like in any regular React application. Check out the page using this pattern in the embedded Stackblitz window below: Using Query Params for State Management Another approach is to use query params instead of a wrapper client component and store the state in the URL. In this scenario, you have two client components: the button and the counter display. The counter value (the state) is stored in a query param, such as counterValue. The client components can read the current counter value using the useSearchParams hook. Once read, the useRouter hook can update the query param, effectively updating the counter value. However, there's one gotcha to this approach. If a route is statically rendered, calling useSearchParams will cause the client component tree up to the closest Suspense boundary to be client-side rendered. Next.js recommends wrapping the client component that uses useSearchParams in a boundary. Here's an example of how this approach might look. 
The button reads the current counter value and updates it on click by using the router's replace function: ` The counter display component is relatively simple, only reading the counter value: ` And here is the page that is a server component, hosting both of the above client components: ` Feel free to check out the above page in the embedded Stackblitz below: Storing State on the Server The third approach is to store the state on the server. In this case, the counter display component accepts the counter value as a prop, where the counter value is passed by a parent server component that reads the counter value from the database. The button component, when clicked, calls a server action that updates the counter value and calls revalidatePath() so that the counter value is refreshed, and consequently, the counter display component is re-rendered. It's worth noting that in this approach, unless you need some interactivity in the counter display component, it doesn't need to be a client component – it can be purely server-rendered. However, if both components need to be client components, here's an example of how this approach might look. First, we'll implement a server action that updates the counter value. We won't get into the mechanics of updating it, which in a real app would require a call to the database or an external API - so we're only commenting that part. After that, we revalidate the path so that the Next.js caches are purged, and the counter value is retrieved again in server components that read it. ` The button is a simple client component that calls the above server action when clicked. ` The counter display component reads the counter value from the parent: ` While the parent is a server component that reads the counter value from the database or an external API. 
` This pattern can be seen in the embedded Stackblitz below: Conclusion Next.js is a powerful framework that offers various choices for implementing communication patterns between client components. Whether you lift state to a common client component, use query params for state management, or store the state on the server, Next.js provides all the tools you need to add a communication path between two separate client component boundaries. We hope this blog post has been useful in demonstrating how to facilitate communication between client components in Next.js. Check out other Next.js blog posts we've written for more insights and best practices. You can also view the entire codebase for the above snippets on StackBlitz....

React Conf 2024 Review cover image

React Conf 2024 Review

A lot has happened since the last React Conf, so 2024 was action-packed. The conference lasted two days in Las Vegas and was focused on all the new stuff coming with React v19. Day one was centered around React 19/Server Components, and day two was React Native. There were some exciting reveals, fantastic talks, and demos. This post will cover the conference's big news, specifically the main keynotes from days 1 and 2. Day 1 Keynote The main keynote included several speakers from the React team, and early in the talk they mentioned a goal of “Make it easy for anyone to build great user experiences.” This sounds great but I was curious to see exactly what they meant by this since it seems like React has arguably gotten more complex with the introduction of Server Components and related API’s. The keynote was split into 3 sections: State of React - Lauren Tan Lauren shared the staggering adoption React has received over the years and thanked all the contributors and the community. She also highlighted the amazing ecosystem of packages, tools, and frameworks. It’s hard to argue against these points, but React seems to be in a weird in-between state. The framework is headed in this new direction but doesn’t feel quite there yet. React 19 A few team members collaborated on a walkthrough of React 19, which included all the new features and a bunch of quality-of-life improvements. The main highlights were: * Server Rendering & Suspense - One of the coolest bits was showing how to co-locate and load stylesheets and other assets in components. “HTML Composition for Bundlers, Libraries, & Apps” * Hydration - This was a fan favorite, and the TL;DR is fewer hydration errors with better messages and debuggability. * Actions - This is one of my favorite new features, as it feels like something that was missing from the React toolkit for a long time. The new Actions API is a “First-class pattern for asynchronous data updates in response to user input”. 
It includes Remix inspired form behaviors and API’s that replace a lot of boilerplate for handling async operations like loading and error states. * JSX Improvements One of the most applauded announcements was the changes to refs, which are now automatically passed to every component 🙌 React Compiler !! Yes, we’ve all heard the news by now. They are finally open sourcing the compiler that they teased over 2 years ago. React compiler understands React code/rules and is able to automatically optimize/memoize our application code. One of the big complaints of React has long been that there’s a lot of mental overhead and thought that needs to be put into optimizing and building fast components. So, the goal is to remove that burden from the developers so that they can focus on writing their business logic. One of the slides mentioned that the compiler has the following behaviors: Dead Code Elimination, Constant Propagation, Type Inference, Alias Analysis, Mutable Lifetime Inference, HIR, and Diagnostics. I’m not going to pretend to know all those things, but I’ve already heard a lot of positive feedback about how it works and how advanced it is. The team mentioned that they are already running the compiler in production on Instagram.com and some of their other websites. Day 2 Keynote Day 2 opened with sharing the incredible growth in downloads and adoptions that React Native has had (78M downloads in 2023). They shared some meta info about the recent releases and talked about all the partners helping to push the framework forward. They mentioned the Meta investment in React Native and shared about some of the ways that they are using it internally in places like Facebook Marketplace. Next they shared about all the work that Microsoft is doing with React Native for things like supporting building Windows apps with React Native. I was surprised to learn that the Windows Start Bar is a React Native app 🤯. 
New Architecture "Rewrite React Native to be synchronous so it’s possible to be concurrent" I was surprised when I read this since these two things seem to conflict with one another. The new architecture allows for the concurrent mode in React. This section opened by describing the original architecture of React Native (JavaScript code with a bridge for passing information back and forth from the native layer) The new architecture replaces “the bridge” with a concept called “JSI (JavaScript Interface)”. Instead of message passing, it uses memory sharing between the JS and Native layers. The big announcement was that the new architecture has been moved into Beta and they have released a bunch of things to help with compatibility between the old and new architecture to minimize breakages when upgrading. Production Ready Apps The section opened by talking about all the complexities and pieces involved with shipping a high-quality app to Android and iOS. If you’re a mobile developer, you know it’s quite a lot. They highlighted a new section of the documentation that provides details about “React Native Frameworks”. They are recommending using a “Framework” with React Native, just like they are recommending for React on the web. I had never even considered the idea of a “React Native Framework” before, so I thought that was pretty surprising. I guess it makes sense that Expo is a Framework but I’m not aware of any others like it. So the recommendation from the React Native documentation is: “Use Expo”. Which leads to the next part of the keynote…. Expo This part of the talk went more in-depth on a lot of the complexities with mobile development that Expo helps you solve. They highlighted an amazing suite of modules that they have dubbed the “Expo SDK”. The SDK consists of well-defined modules that handle some type of integration with the underlying Native platforms. They also have a Modules API to help build your own Native modules. 
Another new feature that I wasn’t aware of is API routes with Expo Router. How does that even work??? The next thing they mentioned was a feature called CNG (Continuous Native Generation). I’ve played with this a bit and it’s pretty nice. Expo will generate the iOS and Android projects/code that is needed to build the final application. The sales pitch is that this will help with upgrades if you aren’t managing and changing your native code by yourself. If you’ve upgraded a React Native app before you know how painful this can be. I am interested to see if CNG is practical in the real world. In my experience most projects have ended up ejected from expo and require managing your own native code. This section wrapped up with talking about builds and deployments with Expo Application Services (EAS). I don’t have experience with EAS yet personally but builds and deployments have always been the biggest pain point in my experience. I would definitely give it a try on my next project. As someone who has worked with React Native and Expo quite a bit over the years, it truly has come a really long way. Expo is amazing. Summary There were a ton of really exciting announcements and content just between the two keynotes. My impression from the conference is that a new era of React is kicking off and it’s really exciting. I’d hate to be one of the people betting against React as they’ve shown time and time again that they are willing to take risks and innovate in this space. It’s going to be a busy year in the React ecosystem as more frameworks adopt Server Components and more applications give it a try. There were a bunch of other really incredible talks in the conference and I’m really excited to write about a couple of them in particular. I wish I could have attended in person but the live stream didn’t disappoint!...

Build your Backend with Netlify Functions in 20 Minutes cover image

Build your Backend with Netlify Functions in 20 Minutes

Build your Backend with Netlify Functions in 20 minutes Netlify makes deploying your front end quick and easy, and Netlify functions makes running a serverless backend just as easy. In this guide, we'll get setup on how to use Netlify functions. As an indie developer, you should embrace serverless offerings because of their low barrier to entry and generous free tiers. And as an enterprise shop, you should seriously consider them for an extremely cheap, fast, and scalable way to build out your backend infrastructure. Use Cases - What can you build? Modern JavaScript frameworks allow us to build large and complex applications on the client, but they can occasionally run into limitations. For everything else, there's the "backend" which excels at handling some of these use cases: * Protecting Secrets & Credentials * Server Side Rendering * Sending Emails * Handling File IO * Running centralized logic * Executing tasks off the main thread * Bypassing CORS issues for locked down APIs * Providing Progressive Enhancement / NoScript Fallback Composition of a Function Netlify Functions provides a wrapper around AWS Lambdas. While the Netlify documentation should be sufficient, it's good to know that there's an escape hatch if you ever want to run on your own AWS subscription. However, Netlify handles some of the deployment magic for you, so let's start there. Here's the bare bones of a Netlify function in JavaScript: ` If you're familiar with running JavaScript on Node, this should look somewhat familiar. Each function should live in its own file, and will execute whatever is assigned to exports.handler. We have access to event and context. We can run whatever code we need on Node, and return whatever response type we'd like. To set this up, lets create an empty repository on GitHub. We need to add functions to a folder. While we can use any name, a common pattern is to create a folder name functions. 
Let's add a file in there called hello.js ` In our function, we can grab information from the query string parameters passed in. We'll destructure those (with a default value) and look for a name param. To actually wire up our functions folder, we'll need to add a netlify.toml config file at the root of our project. ` Walk Before You Run (Locally) Our "repo" should look like this at this point: ` The best way to run your Netlify site locally, with all the bells and whistles attached, is to use Netlify Dev which you can install via npm: ` And then kick off your dev server like this: ` Your "site" should now be live at http://localhost:8888. By default, Netlify hosts functions under the subpath /.netlify/functions/ so you can invoke your function here: http://localhost:8888/.netlify/functions/hello?name=Beth Now, let's make our function's address a little cleaner by also taking advantage of another free Netlify feature using redirects. This allows us to expose the same functions at a terser url by replacing /.netlify/functions with /api. FROM: /.netlify/functions/hello TO: /api/hello To do so, append the following info to your netlify.toml config, and restart Netlify dev: ` This will route all traffic at /api/* internally to the appropriate functions directory, and the wildcard will capture all additional path info, and move to :splat. By setting the HTTP Status Code = 200, Netlify will preform a "rewrite" (as opposed to a "redirect") which will change the server response without changing the URL in the browser address bar. So let's try again with our new url: http://localhost:8888/api/hello?name=Beth 👏 Awesome, you just created a function! (you're following along live, right?) Getting the CRUD Out & Submitting Data Now that we can build functions, let's create our own API with some basic CRUD functions (Create, Read, Update, & Delete) for a simple todos app. One of the central tenants of serverless computing is that it's also stateless. 
If you need to store any state across function invocations, it should be persisted to another, layer like a database. For this article, let's use the free tier of DynamoDb, but feel free to BYODB (Bring Your Own DB), especially if it has a Node SDK. In the next steps, we'll: 1. Setup a table on DynamoDB in AWS 2. Install npm packages into our project 3. Setup secret keys in AWS, and add to our environment variables 4. Initialize the aws-sdk package for NodeJs 5. And then finally add a Netlify function route to create a record on our database AWS - Amazon Web Services This guide will assume some degree of familiarity with AWS & DynamoDB, but if you're new to DynamoDB, you can start with this guide on Getting Started with Node.js and DynamoDB. On AWS, create a table with the name NetlifyTodos, and string partition key called key. NPM - Node Package Manager Now, let's setup npm and install aws-sdk, nanoid, & dotenv. In a terminal at the root of your project, run the following commands: ` ENV - Environment Variables You'll need to provision an access key / secret for an IAM user that we'll use to authenticate our API calls. One of the benefits of running these calls on the server is you're able to protect your application secret through environment variables, instead of having to ship them to the client, which is not recommended. There are quite a few ways to log into AWS on your local machine, but just to keep everything inside of our project, let's create a .env file at the root of our project, and fill in the following keys with your own values: ` NOTE: One little gotcha here is that the more common AWS_ACCESS_KEY_ID is a reserved environment keyword used by the Netlify process. So if we want to pass around env variables, we'll have to use our own key, in this case prefixed with MY_. Once they're added to the process, we can destructure them and use in setting up our AWS SDK. 
We'll need to setup AWS for every CRUD function, so let's assemble all this logic in a separate file called dyno-client.js. ` The following is required. SDK - Software Developer Kit Using the aws-sdk makes our life a lot easier for connecting to DynamoDB from our codebase. We can create an instance of the Dynamo client that we'll use for the remaining examples: ` To make this available to all our functions, add the DynamoDB instance to your exports, and we'll grab it when we need it: ` Create Todo (Due by EOD 😂) ⚡ We're finally ready to create our API function! In the following example, we'll post back form data containing the text for our todo item. We can parse the form data into JSON, and transform it into an item to insert into our table. If it succeeds, we'll return the result with a status code of 200, and if it fails, we'll return the the error message along with the status code from the error itself. ` This should give you the gist of how to expose your API routes and logic to perform various operations. I'll hold off on more examples because most of the code here is actually just specific to DynamoDB, and we'll save that for a separate article. But the takeaway is that we're able to return something meaningful with very minimal plumbing. And that's the whole point! > With Functions, you *only* have to write your own business logic! Debugging - For Frictionless Feedback Loops There are two critical debugging tools in Visual Studio Code I like to use when working with node and API routes. 1. Script Debugger & 2. Rest Client Plugin ✨ Did you know, instead of configuring a custom launch.json file, you can run and attach debuggers directly onto npm scripts in the package.json file: And while tools like Postman are a valuable part of comprehensive test suite, you can add the REST Client Extension to invoke API commands directly within VS Code. 
We can easily use the browser to mock GET endpoints, but the REST Client makes it really easy to invoke other HTTP verbs and post back form data.

Just add a file like test.http to your project. *REST Client* supports environment variable expansion and custom variables. If you stub out multiple calls, you can separate them by delimiting with ###. Add the following to your sample file:

`

We can now run the above by clicking "Send Request". This should hit our Netlify dev server, and allow us to step through our function logic locally!

Publishing

Publishing to Netlify is easy as well. Make sure your project is committed and pushed up to a git repository on GitHub, GitLab, or BitBucket. Log in to Netlify, click the option to create a "New Site From Git", and select your repo.

Netlify will prompt for a Build command and a Publish directory. Believe it or not, we don't actually have either of those things yet, and it's probably a project for another day to set up our front end. Those commands refer to the static site build part of the deployment. Everything we need to build serverless functions is inside our functions directory and our netlify.toml config.

Once we deploy the site, the last thing we'll need to do is add our environment variables to Netlify under Build > Environment.

Next Steps - This is only the beginning

Hopefully some ideas are spinning as to how you can use these technologies on your own sites and projects. The focus of this article is on building and debugging Netlify functions, but an important exercise left to the reader is to take advantage of them on your front end.

TIP: If you want to add Create React App to your current directory (without creating a new folder), add a . when scaffolding out a new app, like this:

`

Try it out - build a front end, and let me know how it goes at KyleMitBTV!

For more context, you can browse the full source code for the article on GitHub at KyleMit/netlify-functions-demo.
For even more practical examples with actual code, check out the following resources as well!

* David Wells - Netlify Serverless Functions Workshop
* netlify/functions - Community Functions Examples

Good luck, and go build things!


Next.js Rendering Strategies and how they affect core web vitals

When it comes to building fast and scalable web apps with Next.js, it’s important to understand how rendering works, especially with the App Router. Next.js organizes rendering around two main environments: the server and the client. On the server side, you’ll encounter three key strategies: Static Rendering, Dynamic Rendering, and Streaming. Each one comes with its own set of trade-offs and performance benefits, so knowing when to use which is crucial for delivering a great user experience.

In this post, we'll break down each strategy, what it's good for, and how it impacts your site's performance, especially Core Web Vitals. We'll also explore hybrid approaches and provide practical guidance on choosing the right strategy for your use case.

What Are Core Web Vitals?

Core Web Vitals are a set of metrics defined by Google that measure real-world user experience on websites. These metrics play a major role in search engine rankings and directly affect how users perceive the speed and smoothness of your site.

* Largest Contentful Paint (LCP): This measures loading performance. It calculates the time taken for the largest visible content element to render. A good LCP is 2.5 seconds or less.
* Interaction to Next Paint (INP): This measures responsiveness to user input. A good INP is 200 milliseconds or less.
* Cumulative Layout Shift (CLS): This measures the visual stability of the page. It quantifies layout instability during load. A good CLS is 0.1 or less.

If you want to dive deeper into Core Web Vitals and understand more about their impact on your website's performance, I recommend reading this detailed guide on New Core Web Vitals and How They Work.

Next.js Rendering Strategies and Core Web Vitals

Let's explore each rendering strategy in detail.

1. Static Rendering (Server Rendering Strategy)

Static Rendering is the default for Server Components in Next.js.
With this approach, components are rendered at build time (or during revalidation), and the resulting HTML is reused for each request. This pre-rendering happens on the server, not in the user's browser.

Static rendering is ideal for routes where the data is not personalized to the user, which makes it suitable for:

* Content-focused websites: Blogs, documentation, marketing pages
* E-commerce product listings: When product details don't change frequently
* SEO-critical pages: When search engine visibility is a priority
* High-traffic pages: When you want to minimize server load

How Static Rendering Affects Core Web Vitals

* Largest Contentful Paint (LCP): Static rendering typically leads to excellent LCP scores (typically < 1s). The pre-rendered HTML can be cached and delivered instantly from CDNs, resulting in very fast delivery of the initial content, including the largest element. Also, there is no waiting for data fetching or rendering on the client.
* Interaction to Next Paint (INP): Static rendering provides a good foundation for INP, but doesn't guarantee optimal performance (typically 50-150 ms depending on implementation). While Server Components don't require hydration, any Client Components within the page still need JavaScript to become interactive. To achieve a very good INP score, you will need to make sure the Client Components within the page are minimal.
* Cumulative Layout Shift (CLS): While static rendering delivers the complete page structure upfront, which can be very beneficial for CLS, achieving excellent CLS requires additional optimization strategies:
  * Static HTML alone doesn't prevent layout shifts if resources load asynchronously
  * Image dimensions must be properly specified to reserve space before the image loads
  * Web fonts can cause text to reflow if not handled properly with font display strategies
  * Dynamically injected content (ads, embeds, lazy-loaded elements) can disrupt layout stability
  * CSS implementation significantly impacts CLS; immediate availability of styling information helps maintain visual stability

Code Examples:

1. Basic static rendering:

`

2. Static rendering with revalidation (ISR):

`

3. Static path generation:

`

2. Dynamic Rendering (Server Rendering Strategy)

Dynamic Rendering generates HTML on the server for each request at request time. Unlike static rendering, the content is not pre-rendered or cached, but freshly generated for each user.

This kind of rendering works best for:

* Personalized content: User dashboards, account pages
* Real-time data: Stock prices, live sports scores
* Request-specific information: Pages that use cookies, headers, or search parameters
* Frequently changing data: Content that needs to be up-to-date on every request

How Dynamic Rendering Affects Core Web Vitals

* Largest Contentful Paint (LCP): With dynamic rendering, the server needs to generate HTML for each request, so the response can't be fully cached at the CDN level. It is still faster than client-side rendering, since the HTML is generated on the server.
* Interaction to Next Paint (INP): Performance is similar to static rendering once the page is loaded. However, it can become slower if the dynamic content includes many Client Components.
* Cumulative Layout Shift (CLS): Dynamic rendering can potentially introduce CLS if the data fetched at request time significantly alters the layout of the page compared to a static structure. However, if the layout is stable and the dynamic content fits within predefined areas, CLS can be managed effectively.

Code Examples:

1. Explicit dynamic rendering:

`

2. Implicit dynamic rendering with cookies:

`

3. Dynamic routes:

`

3. Streaming (Server Rendering Strategy)

Streaming allows you to progressively render UI from the server. Instead of waiting for all the data to be ready before sending any HTML, the server sends chunks of HTML as they become available. This is implemented using React's Suspense boundaries.

React Suspense works by creating boundaries in your component tree that can "suspend" rendering while waiting for asynchronous operations. When a component inside a Suspense boundary throws a promise (which happens automatically with data fetching in React Server Components), React pauses rendering of that component and its children, renders the fallback UI specified in the Suspense component, continues rendering other parts of the page outside this boundary, and eventually resumes and replaces the fallback with the actual component once the promise resolves.

When streaming, this mechanism allows the server to send the initial HTML with fallbacks for suspended components while continuing to process suspended components in the background. The server then streams additional HTML chunks as each suspended component resolves, including instructions for the browser to seamlessly replace fallbacks with final content.
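To build intuition for that flush-fallback-then-replace flow, here's a toy plain-JavaScript model. This is not React's actual implementation — all names (streamPage, flush, the template tag convention) are made up for illustration — but it shows the shape of the mechanism: the shell and fallback go out immediately, and a replacement chunk streams later once the slow data resolves.

```javascript
// Toy model of streaming with a Suspense-style boundary.
// Not React internals — just an illustration of the mechanism described above.

async function streamPage(flush, slowData) {
  // 1. Send the shell right away, with a fallback where the slow part goes
  flush('<header>My Dashboard</header>')
  flush('<div id="widget">Loading…</div>') // fallback UI

  // 2. Keep working in the background; when the data resolves,
  //    stream a chunk that swaps the fallback for the real content
  const data = await slowData
  flush(`<template data-replaces="widget">${data}</template>`)
}

// Usage: collect the chunks the "server" would stream to the browser
async function demo() {
  const chunks = []
  const slowData = new Promise((resolve) =>
    setTimeout(() => resolve('42 new messages'), 10)
  )
  await streamPage((html) => chunks.push(html), slowData)
  return chunks
}

demo().then((chunks) => console.log(chunks.length)) // prints 3: shell, fallback, replacement
```

In real Next.js streaming, the wiring of fallbacks and replacement chunks is handled for you by React and the framework; you only declare the boundaries with &lt;Suspense /&gt;.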
It works well for:

* Pages with mixed data requirements: Some fast, some slow data sources
* Improving perceived performance: Show users something quickly while slower parts load
* Complex dashboards: Different widgets have different loading times
* Handling slow APIs: Prevent slow third-party services from blocking the entire page

How Streaming Affects Core Web Vitals

* Largest Contentful Paint (LCP): Streaming can improve the perceived LCP. By sending the initial HTML content quickly, including potentially the largest element, the browser can render it sooner. Even if other parts of the page are still loading, the user sees the main content faster.
* Interaction to Next Paint (INP): Streaming can contribute to a better INP. When used with React's <Suspense />, interactive elements in the faster-loading parts of the page can become interactive earlier, even while other components are still being streamed in. This allows users to engage with the page sooner.
* Cumulative Layout Shift (CLS): Streaming can cause layout shifts as new content streams in. However, when implemented carefully, streaming should not negatively impact CLS. The initially streamed content should establish the main layout, and subsequent streamed chunks should ideally fit within this structure without causing significant reflows or layout shifts. Using placeholders and ensuring dimensions are known can help prevent CLS.

Code Examples:

1. Basic streaming with Suspense:

`

2. Nested Suspense boundaries for more granular control:

`

3. Using the Next.js loading.js convention:

`

4. Client Components and Client-Side Rendering

Client Components are defined using the React 'use client' directive. They are pre-rendered on the server but then hydrated on the client, enabling interactivity. This is different from pure client-side rendering (CSR), where rendering happens entirely in the browser.
Next.js has moved away from CSR in the traditional sense (where the initial HTML is minimal, and all rendering happens in the browser) as a default approach, but it is still achievable by using dynamic imports and setting ssr: false:

`

Despite the shift toward server rendering, there are valid use cases for CSR:

1. Private dashboards: Where SEO doesn't matter, and you want to reduce server load
2. Heavy interactive applications: Like data visualization tools or complex editors
3. Browser-only APIs: When you need access to browser-specific features like localStorage or WebGL
4. Third-party integrations: Some third-party widgets or libraries only work in the browser

While these are valid use cases, using Client Components is generally preferable to pure CSR in Next.js. Client Components give you the best of both worlds: server-rendered HTML for the initial load (improving SEO and LCP), with client-side interactivity after hydration. Pure CSR should be reserved for specific scenarios where server rendering is impossible or counterproductive.

Client Components are good for:

* Interactive UI elements: Forms, dropdowns, modals, tabs
* State-dependent UI: Components that change based on client state
* Browser API access: Components that need localStorage, geolocation, etc.
* Event-driven interactions: Click handlers, form submissions, animations
* Real-time updates: Chat interfaces, live notifications

How Client Components Affect Core Web Vitals

* Largest Contentful Paint (LCP): The initial HTML includes the server-rendered version of Client Components, so LCP is reasonably fast. Hydration can delay interactivity, but doesn't necessarily affect LCP.
* Interaction to Next Paint (INP): For Client Components, hydration can cause input delay during page load, and once the page is hydrated, performance depends on the efficiency of event handlers. Also, complex state management can impact responsiveness.
* Cumulative Layout Shift (CLS): Client-side data fetching can cause layout shifts as new data arrives. Also, state changes might alter the layout unexpectedly. Using Client Components requires careful implementation to prevent shifts.

Code Examples:

1. Basic Client Component:

`

2. Client Component with server data:

`

Hybrid Approaches and Composition Patterns

In real-world applications, you'll often use a combination of rendering strategies to achieve the best performance. Next.js makes it easy to compose Server and Client Components together.

Server Components with Islands of Interactivity

One of the most effective patterns is to use Server Components for the majority of your UI and add Client Components only where interactivity is needed. This approach:

1. Minimizes JavaScript sent to the client
2. Provides excellent initial load performance
3. Maintains good interactivity where needed

`

Partial Prerendering (Next.js 15)

Next.js 15 introduced Partial Prerendering, a new hybrid rendering strategy that combines static and dynamic content in a single route. This allows you to:

1. Statically generate a shell of the page
2. Stream in dynamic, personalized content
3. Get the best of both static and dynamic rendering

Note: At the time of this writing, Partial Prerendering is experimental and is not ready for production use. Read more

`

Measuring Core Web Vitals in Next.js

Understanding the impact of your rendering strategy choices requires measuring Core Web Vitals in real-world conditions. Here are some approaches:

1. Vercel Analytics

If you deploy on Vercel, you can use Vercel Analytics to automatically track Core Web Vitals for your production site:

`

2. Web Vitals API

You can manually track Core Web Vitals using the web-vitals library:

`

3. Lighthouse and PageSpeed Insights

For development and testing, use:

* Chrome DevTools Lighthouse tab
* PageSpeed Insights
* Chrome User Experience Report

Making Practical Decisions: Which Rendering Strategy to Choose?
Choosing the right rendering strategy depends on your specific requirements. Here's a decision framework:

Choose Static Rendering when:

* Content is the same for all users
* Data can be determined at build time
* The page doesn't need frequent updates
* SEO is critical
* You want the best possible performance

Choose Dynamic Rendering when:

* Content is personalized for each user
* Data must be fresh on every request
* You need access to request-time information
* Content changes frequently

Choose Streaming when:

* The page has a mix of fast and slow data requirements
* You want to improve perceived performance
* Some parts of the page depend on slow APIs
* You want to prioritize showing critical UI first

Choose Client Components when:

* The UI needs to be interactive
* A component relies on browser APIs
* The UI changes frequently based on user input
* You need real-time updates

Conclusion

Next.js provides a powerful set of rendering strategies that allow you to optimize for both performance and user experience. By understanding how each strategy affects Core Web Vitals, you can make informed decisions about how to build your application.

Remember that the best approach is often a hybrid one, combining different rendering strategies based on the specific requirements of each part of your application. Start with Server Components as your default, use Static Rendering where possible, and add Client Components only where interactivity is needed.

By following these principles and measuring your Core Web Vitals, you can create Next.js applications that are fast, responsive, and provide an excellent user experience.
