
Adding React to your ASP.NET MVC web app



In this article, we'll take a look at how and why you might choose React to handle your front-end concerns while letting ASP.NET manage the backend.

Architecture

First, let's consider the range of responsibilities that each solution offers:

  • Templating - scaffolding out HTML markup based on page data
  • Routing - converting a request into a response
  • Middleware Pipeline - composing the chain of handlers that each request passes through
  • Model Binding - building usable model objects from HTML form data
  • API Services - handling data requests on a server
  • Client Side Interaction - updating a web page based on user interaction

For the most part, both of these frameworks offer ways to handle most of these concerns:

Feature                    ASP.NET    React
Templating                    ✓          ✓
Routing                       ✓          ✓
Middleware Pipeline           ✓
Model Binding                 ✓
API Services                  ✓
Client Side Interaction                  ✓

The main challenge in combining the two technologies is deciding which framework should own which of these concerns.

Historically, ASP.NET has struggled to offer any sort of rich client experience out of the box. Because ASP.NET performs templating on the server, it's difficult to immediately respond to changes in state on the client without making a round trip, or writing the client side logic entirely separate from the rest of your application.

Also, React doesn't have anything to say about server-side considerations or API calls, so part of building any React app will involve picking how you want to handle requests on the server.

Here's roughly how we'll structure our architecture, with React taking care of all the client side concerns, and .NET handling the server side API.

[architecture diagram]

Aside: Blazor is another path to adding rich interactivity to .NET. If you want to give your existing .cshtml pages superpowers, that's a possible route to pursue, but there's already a large and vibrant ecosystem for using React to take care of front-end architecture.

Prerequisites - If you don't have these, you'll need 'em:

  • .NET Core SDK - for the dotnet CLI and the server build
  • Node.js and npm - for the React build

Getting Started

.NET comes with its own React starter template, which is documented here. We'll use it for this example. You can spin up a new project with the dotnet new command like this:

dotnet new react -o my-new-app

Note: If you have Visual Studio, you'll also see the available template starters in the new project dialog.

You should be able to cd into your application directory, open with VS Code, and launch with F5, or by running the following:

dotnet run

The app will then be available at https://localhost:5001/

[screenshot: hello world starter app]

.NET React Starter

Let's review what this template adds, and how all the pieces fit together.

File Structure

Here's a very high-level view of some of the files involved:

my-app
├── .vscode/          # vs code configs
├── bin/              # generated dotnet output
├── ClientApp         # React App - seeded with CRA
│   ├── build/        # generated react output
│   ├── public/       # static assets
│   ├── src/          # react app source
│   └── package.json  # npm configuration
├── Controllers/      # dotnet controllers
├── Models/           # dotnet models
├── Pages/            # razor pages
├── Program.cs        # dotnet entry point
├── Startup.cs        # dotnet app configuration
└── my-app.csproj     # project config and build steps

The Startup.cs file wires up the React application as ASP.NET middleware with the following services / configuration:

// ConfigureServices()
services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "ClientApp/build";
});

// Configure()
app.UseStaticFiles();
app.UseSpaStaticFiles();
app.UseSpa(spa =>
{
    spa.Options.SourcePath = "ClientApp";
    if (env.IsDevelopment())
    {
        spa.UseReactDevelopmentServer(npmScript: "start");
    }
});

This takes advantage of the Microsoft.AspNetCore.SpaServices.Extensions library on NuGet to wire up the client side dependencies and kick off the development server.

Finally, to wire React into our .NET build step, there are a couple of targets in the .csproj file which run the local build process via npm and copy the build output into the .NET publish output.

<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />
  <!-- ... -->
</Target>

Adding Functionality - More Cats Please

And finally, let's review how to leverage all of these existing components to start adding functionality to our application. We can take advantage of the fact that API calls can happen on the server-side, allowing us to protect API Keys and tokens, while also benefiting from immediate client updates without a full page refresh.

As an example, we'll use TheCatApi, which has a quick, easy, and free sign up process to get your own API key.

For this demo project, we'll add a random image viewer, a listing of all breeds, and a navigation component using react-router to navigate between them.

We can scaffold C# classes from the sample API output by processing it through JSON2CSharp, and we'll add them to the Models folder.
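For reference, here's roughly the shape of a single item in the /v1/breeds response. The fields shown are an abridged, illustrative sample, so paste your actual response into JSON2CSharp rather than relying on this sketch:

```javascript
// Abridged, illustrative sample of one breed object from /v1/breeds.
// The real response contains many more fields.
const sampleBreed = {
  id: 'beng',
  name: 'Bengal',
  temperament: 'Alert, Agile, Energetic',
  origin: 'United States',
};
```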

MVC Controller - That's the C in MVC!

The controller will orchestrate all of the backend services. Here's an example that wraps the cat API so we can simplify our client side logic and forward the request from the backend, but we could just as easily hit a database, filesystem, or any other backend interface. We'll deserialize the API response into a strongly typed class using the new System.Text.Json.JsonSerializer. This is baked into .NET Core, and saves us the historical dependency on Newtonsoft's Json.NET. The API controller can then just return the data (wrapped in a Task, because we made our method async).

[ApiController]
[Route("api/[controller]")]
public class BreedsController : ControllerBase
{
    private readonly IConfiguration _configuration;

    // IConfiguration gives us access to appsettings / user secrets (e.g. the API key)
    public BreedsController(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    [HttpGet]
    public async Task<IEnumerable<Breed>> Get()
    {
        var BASE_URL = "https://api.thecatapi.com";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("x-api-key", _configuration["CatApiKey"]);

        var resp = await client.GetStringAsync(BASE_URL + "/v1/breeds");
        var breeds = JsonSerializer.Deserialize<Breed[]>(resp);

        return breeds;
    }
}

If we run the project, we should now be able to fetch data by navigating to /api/breeds in the browser.

You can check that it works on its own by going to https://localhost:5001/api/breeds
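Before moving into a full component, here's a minimal sketch of the client side of that contract. Since the browser only ever calls our own /api route, the cat API key never ships to the client (the getBreeds name is just illustrative):

```javascript
// The browser calls our own backend wrapper; the third-party API key
// stays server-side and never appears in the bundle shipped to the client.
async function getBreeds() {
  const resp = await fetch('/api/breeds');
  if (!resp.ok) {
    throw new Error(`Request failed: ${resp.status}`);
  }
  return resp.json();
}
```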

Now let's load that up into a React component.

React Component - It's Reactive :D

Here's the basic structure of a React component:

import React, { useState } from 'react';

export function Breeds() {
  const [loading, setLoading] = useState(true);
  const [breeds, setBreeds] = useState([]);

  const contents = loading
    ? <p><em>Loading...</em></p>
    : renderBreedsTable(breeds);

  return (
    <div>
      <h1 id="tableLabel" >Get Breeds</h1>
      <p>This component demonstrates fetching data from the server.</p>
      {contents}
    </div>
  );
}

If this JavaScript is looking unfamiliar, it's because it's using JSX, as well as ES6 syntax that's not supported by all browsers, which is why React code gets preprocessed by Babel into something more universally understood by browsers. But let's take a look at what's going on piece by piece.

In order for JSX to be processed, we need to import React into our file (at least until React 17's new JSX transform, that is).

The component function's primary responsibility is to return the HTML markup that we want to use on the page. Everything else is there to help manage state, or inform how to template and transform the data into HTML. So if we want a conditional, we can use the ternary operator (bool ? a : b), or if we want to transform items in an array, we can use Array.map to produce the output.
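For instance, a renderBreedsTable helper like the one referenced above could use Array.map to turn each breed into a row. Markup is shown here as plain strings to keep the sketch framework-free; in the real component these would be JSX elements:

```javascript
// Transform an array of breed objects into table markup with Array.map.
// In the actual component this would return JSX rather than strings.
function renderBreedsTable(breeds) {
  const rows = breeds.map((breed) => `<tr><td>${breed.name}</td></tr>`);
  return `<table><tbody>${rows.join('')}</tbody></table>`;
}
```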

In order to manage state inside of a component, we use the React hook useState, which returns a two-item array that we can destructure like this:

const [loading, setLoading] = useState(true);

This allows us to update the state of this property and informs React to dynamically re-render the output when we do so. For example, while loading, we can display some boilerplate text until the data has returned, and then update the view appropriately.
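To build intuition for that update-and-re-render loop, here's a deliberately tiny toy model. This is not how React is actually implemented, just a sketch of the contract: calling the setter stores the new value and runs the render function again:

```javascript
// Toy model of the useState contract (NOT React's real implementation):
// the setter records the new value and re-runs the render function.
function createComponent(render) {
  let state;
  let initialized = false;
  let lastOutput;

  function useStateToy(initialValue) {
    if (!initialized) {
      state = initialValue; // only the first render uses the initial value
      initialized = true;
    }
    const setState = (next) => {
      state = next;
      rerender(); // state changed, so render again
    };
    return [state, setState];
  }

  function rerender() {
    lastOutput = render(useStateToy);
  }

  rerender();
  return { get output() { return lastOutput; } };
}
```

With this model, a render function that returns "Loading..." while the flag is true will produce fresh output as soon as the setter flips it, which is the same observable behavior the real hook gives a component.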

We'll want to fetch data on load, but kicking off async fetch calls directly during render can have some weird side effects, so we can use another hook called useEffect. With an empty dependency array, the effect runs once as soon as the component mounts, so we'll put the initial call for data in there:

useEffect(() => {
  // the effect callback can't be async itself, so define and invoke an async helper
  async function loadBreeds() {
    const response = await fetch('/api/breeds');
    const data = await response.json();

    setBreeds(data);
    setLoading(false);
  }
  loadBreeds();
}, []);

This hits our backend API, gets the data, and updates our component state not only with the data, but also by toggling the loading indicator. Any updates to state that are used in the render function will cause it to kick off again and update the DOM automatically.

Let's imagine we create a second component, roughly the same as the first, that goes and gets a random cat image. How do we handle navigation between the two components? Many of the advantages of choosing React come from preventing full page reloads on navigation. After all, your header, footer, loaded JS, and CSS libraries haven't changed - why should they be shipped to and parsed by the client from scratch? So we'll handle all navigational concerns within React as well.

Here's a pared down version of how that occurs:

  1. Our index.js file wraps the entire application in BrowserRouter from react-router-dom.
  2. It then loads our application component which passes Route objects with a path and component into the overall Layout.
  3. The layout then renders the "live" route by injecting props.children into the rest of the site layout.
  4. React router will handle the rest by only loading components when the path is navigated to in the browser or from other areas in the app.

// index.js
ReactDOM.render(
  <BrowserRouter basename={baseUrl}>
    <App />
  </BrowserRouter>,
  rootElement);

// app.js
export default function App() {
  return (
    <Layout>
      <Route exact path='/' component={RandomCat} />
      <Route path='/breeds' component={Breeds} />
    </Layout>
  );
}

// layout.js
export function Layout(props) {
  return (
    <div>
      <NavMenu />
      <Container>
        {props.children}
      </Container>
    </div>
  );
}

Wrap Up & Source Code

That's a quick rundown of how you can combine these two technologies and leverage the good parts of each.

There are many ways to explore each of these technologies on its own, and to try out use cases for using them together.

For more context, you can browse the full source code for the article on GitHub at KyleMit/aspnet-react-cats.

What would you like to see added to this example next? Tweet me at KyleMitBTV.

You might also like

How to test React custom hooks and components with Vitest cover image

How to test React custom hooks and components with Vitest

Introduction In this guide, we'll navigate through the process of testing React hooks and components using Vitest—a powerful JavaScript unit testing framework. Discover how Vitest simplifies testing setups, providing an optimal solution for both Vite-powered projects and beyond. Vitest is a javascript unit testing framework that aims to position itself as the Test Runner of choice for Vite projects and as a solid alternative even for projects not using Vite. Vitest was built primarily for Vite-powered projects, to help reduce the complexity of setting up testing with other testing frameworks like Jest. Vitest uses the same configuration of your App (through vite.config.js), sharing a common transformation pipeline during dev, build, and test time. Prerequisites This article assumes a solid understanding of React and frontend unit testing. Familiarity with tools like React Testing Library and JSDOM will enhance your grasp of the testing process with Vitest. Installation and configuration Let’s see how we can use Vitest for testing React custom hooks and components. But first, we will need to create a new project with Vite! If you already have an existing project, you can skip this step. ` Follow the prompts to create a new React project successfully. For testing, we need the following dependencies installed: Vitest as the unit testing framework JSDOM as the DOM environment for running our tests React Testing Library as the React testing utilities. To do so, we run the following command: ` Once we have those packages installed, we need to configure the vite.config.js file to run tests. By default, some of the extra configs we need to set up Vitest are not available in the Vite config types, so we will need the vite.config.ts file to reference Vitest types by adding /// reference types=”vitest” /> at the top of the file. Add the following code to the vite.config.ts ` We set globals to true because, by default, Vitest does not provide global APIs for explicitness. 
So with this set to true, we can use keywords like describe, test and it without needing to import them. To get TypeScript working with the global APIs, add vitest/globals to the types field in your tsconfig.json. ` The environment property tells Vitest which environment to run the test. We are using jsdom as the environment. The root property tells Vitest the root folder from where it should start looking for test files. We should add a script for running the test in package.json ` With all that configured, we can now start writing unit tests for customs hooks and React components. Writing test for custom hooks Let’s write a test for a simple useCounter hook that takes an initial value and returns the value, an increment function and a decrement function. ` We can write a test to check the default return values of the hook for value as below: ` To test if the hook works when we increment the value, we can use the act() method from @testing-library/react to simulate the increment function, as shown in the below test case: ` Kindly Note that you can't destructure the reactive properties of the result.current instance, or they will lose their reactivity. Testing hooks with asynchronous logic Now let’s test a more complex logic that contains asynchronous logic. Let’s write a useProducts hook that fetches data from an external api and return that value ` Now, let’s see what the test looks like: ` In the above example, we had to spy on the global fetch API, so that we can mock its return value. We wrapped that inside a beforeAll so that this runs before any test in this file. Then we added an afterAll method and called the mockRestore() to run after all test cases have been completed and return all mock implementations to their original function. We can also use the mockClear() method to clear all the mock's information, such as the number of calls and the mock's results. 
This method is handy when mocking the same function with different return values for different tests. We usually use mockClear() in beforeEach() or afterEach() methods to ensure our test is isolated completely. Then in our test case, we used a waitFor(), to wait for the return value to be resolved. Writing test for components Like Jest, Vitest provides assertion methods (matchers) to use with the expect methods for asserting values, but to test DOM elements easily, we will need to make use of custom matchers such as toBeInTheDocument() or toHaveTextContent(). Luckily the Vitest API is mostly compatible with the Jest API, making it possible to reuse many tools originally built for Jest. For such methods, we can install the @testing-library/jest-dom package and extend the expect method from Vitest to include the assertion methods in matchers from this package. ` After installing the jest-dom testing library package, create a file named vitest-setup.ts on the root of the project and import the following into the project to extend js-dom custom matchers: ` Since we are using typescript, we also need to include our setup file in our tsconfig.json: ` In vite.config.ts, we need to add the vitest-setup.ts file to the test.setupFiles field: ` Now let’s test the Products.tsx component: ` We start by spying and mocking the useProducts hook with vi.spyOn() method from Vitest: ` Now, we render the Products component using the render method from @testing-library/react and assert that the component renders the list of products as expected and also the product has the title as follows: ` In the above code, we use the render method from @testing-library/react to render the component and this returns some useful methods we can use to extract information from the component like getByTestId and getByText. 
The getByTestId method will retrieve the element whose data-testid attribute value equals product-list, and we can then assert its children to equal the length of our mocked items array. Using data-testid attribute values is a good practice for identifying a DOM element for testing purposes and avoiding affecting the component's implementation in production and tests. We also used the getByText method to find a text in the rendered component. We were able to call the toBeInTheDocument() because we extended the matchers to work with Vitest earlier. Here is what the full test looks like: ` Conclusion In this article, we delved into the world of testing React hooks and components using Vitest, a versatile JavaScript unit testing framework. We walked through the installation and configuration process, ensuring compatibility with React, JSDOM, and React Testing Library. The comprehensive guide covered writing tests for custom hooks, including handling asynchronous logic, and testing React components, leveraging custom matchers for DOM assertions. By adopting Vitest, developers can streamline the testing process for their React applications, whether powered by Vite or not. The framework's seamless integration with Vite projects simplifies the setup, reducing the complexities associated with other testing tools like Jest....

Why is My React Reducer Called Twice and What the Heck is a Pure Function? cover image

Why is My React Reducer Called Twice and What the Heck is a Pure Function?

Why is My React Reducer Called Twice and What the Heck is a Pure Function? In a recent project, we encountered an interesting issue: our React reducer was dispatching twice, producing incorrect values, such as incrementing a number in increments of two. We hopped on a pairing session and started debugging. Eventually, we got to the root of the problem and learned the importance of pure functions in functional programming. This article will explain why our reducer was being dispatched twice, what pure functions are, and how React's strict mode helped us identify a bug in our code. The Issue We noticed that our useReducer hook was causing the reducer function to be called twice for every action dispatched. Initially, we were confused about this behavior and thought it might be a bug in React. Additionally, we had one of the dispatches inside a useEffect, which caused it to be called twice due to React strict mode, effectively firing the reducer four times and further complicating our debugging process. However, we knew that React's strict mode caused useEffect to be called twice, so it didn't take very long to realize that the issue was not with React but with how we had implemented our reducer function. React Strict Mode React's strict mode is a tool for highlighting potential problems in an application. It intentionally double-invokes specific lifecycle methods and hooks (like useReducer and useEffect) to help developers identify side effects. This behavior exposed our issue, as we had reducers that were not pure functions. What is a Pure Function? A pure function is a function that: - Is deterministic: Given the same input, always returns the same output. - Does Not Have Side Effects: Does not alter any external state or have observable interactions with the outside world. 
In the context of a reducer, this means the function should not: - Modify its arguments - Perform any I/O operations (like network requests or logging) - Generate random numbers - Depend on any external state Pure functions are predictable and testable. They help prevent bugs and make code easier to reason about. In the context of React, pure functions are essential for reducers because they ensure that the state transitions are predictable and consistent. The Root Cause: Impure Reducers Our reducers were not pure functions. They were altering external state and had side effects, which caused inconsistent behavior when React's strict mode double-invoked them. This led to unexpected results and made debugging more difficult. The Solution: Make Reducers Pure To resolve this issue, we refactored our reducers to ensure they were pure functions. Here's an extended example of how we transformed an impure reducer into a pure one in a more complex scenario involving a task management application. Let's start with the initial state and action types: ` And here's the impure reducer similar to what we had initially: ` This reducer is impure because it directly modifies the state object, which is a side effect. To make it pure, we must create a new state object for every action and return it without modifying the original state. Here's the refactored pure reducer: ` Key Changes: - Direct State Modification: In the impure reducer, the state is directly modified (e.g., state.tasks.push(action.payload)). This causes side effects and violates the principles of pure functions. - Side Effects: The impure reducer included side effects such as logging and direct state changes. The pure reducer eliminates these side effects, ensuring consistent and predictable behavior. I've created an interactive example to demonstrate the difference between impure and pure reducers in a React application. 
Despite the RESET_TASKS action being implemented similarly in both reducers, you'll notice that the impure reducer does not reset the tasks correctly. This problem happens because the impure reducer directly modifies the state, leading to unexpected behavior. Check out the embedded StackBlitz example below: Conclusion Our experience with the reducer dispatching twice was a valuable lesson in the importance of pure functions in React. Thanks to React's strict mode, we identified and fixed impure reducers, leading to more predictable and maintainable code. If you encounter similar issues, ensure your reducers are pure functions and leverage React strict mode to catch potential problems early in development. By embracing functional programming principles, you can write cleaner, more reliable code that is easier to debug and maintain....

Build your Backend with Netlify Functions in 20 Minutes cover image

Build your Backend with Netlify Functions in 20 Minutes

Build your Backend with Netlify Functions in 20 minutes Netlify makes deploying your front end quick and easy, and Netlify functions makes running a serverless backend just as easy. In this guide, we'll get setup on how to use Netlify functions. As an indie developer, you should embrace serverless offerings because of their low barrier to entry and generous free tiers. And as an enterprise shop, you should seriously consider them for an extremely cheap, fast, and scalable way to build out your backend infrastructure. Use Cases - What can you build? Modern JavaScript frameworks allow us to build large and complex applications on the client, but they can occasionally run into limitations. For everything else, there's the "backend" which excels at handling some of these use cases: * Protecting Secrets & Credentials * Server Side Rendering * Sending Emails * Handling File IO * Running centralized logic * Executing tasks off the main thread * Bypassing CORS issues for locked down APIs * Providing Progressive Enhancement / NoScript Fallback Composition of a Function Netlify Functions provides a wrapper around AWS Lambdas. While the Netlify documentation should be sufficient, it's good to know that there's an escape hatch if you ever want to run on your own AWS subscription. However, Netlify handles some of the deployment magic for you, so let's start there. Here's the bare bones of a Netlify function in JavaScript: ` If you're familiar with running JavaScript on Node, this should look somewhat familiar. Each function should live in its own file, and will execute whatever is assigned to exports.handler. We have access to event and context. We can run whatever code we need on Node, and return whatever response type we'd like. To set this up, lets create an empty repository on GitHub. We need to add functions to a folder. While we can use any name, a common pattern is to create a folder name functions. 
Let's add a file in there called hello.js ` In our function, we can grab information from the query string parameters passed in. We'll destructure those (with a default value) and look for a name param. To actually wire up our functions folder, we'll need to add a netlify.toml config file at the root of our project. ` Walk Before You Run (Locally) Our "repo" should look like this at this point: ` The best way to run your Netlify site locally, with all the bells and whistles attached, is to use Netlify Dev which you can install via npm: ` And then kick off your dev server like this: ` Your "site" should now be live at http://localhost:8888. By default, Netlify hosts functions under the subpath /.netlify/functions/ so you can invoke your function here: http://localhost:8888/.netlify/functions/hello?name=Beth Now, let's make our function's address a little cleaner by also taking advantage of another free Netlify feature using redirects. This allows us to expose the same functions at a terser url by replacing /.netlify/functions with /api. FROM: /.netlify/functions/hello TO: /api/hello To do so, append the following info to your netlify.toml config, and restart Netlify dev: ` This will route all traffic at /api/* internally to the appropriate functions directory, and the wildcard will capture all additional path info, and move to :splat. By setting the HTTP Status Code = 200, Netlify will preform a "rewrite" (as opposed to a "redirect") which will change the server response without changing the URL in the browser address bar. So let's try again with our new url: http://localhost:8888/api/hello?name=Beth 👏 Awesome, you just created a function! (you're following along live, right?) Getting the CRUD Out & Submitting Data Now that we can build functions, let's create our own API with some basic CRUD functions (Create, Read, Update, & Delete) for a simple todos app. One of the central tenants of serverless computing is that it's also stateless. 
If you need to store any state across function invocations, it should be persisted to another, layer like a database. For this article, let's use the free tier of DynamoDb, but feel free to BYODB (Bring Your Own DB), especially if it has a Node SDK. In the next steps, we'll: 1. Setup a table on DynamoDB in AWS 2. Install npm packages into our project 3. Setup secret keys in AWS, and add to our environment variables 4. Initialize the aws-sdk package for NodeJs 5. And then finally add a Netlify function route to create a record on our database AWS - Amazon Web Services This guide will assume some degree of familiarity with AWS & DynamoDB, but if you're new to DynamoDB, you can start with this guide on Getting Started with Node.js and DynamoDB. On AWS, create a table with the name NetlifyTodos, and string partition key called key. NPM - Node Package Manager Now, let's setup npm and install aws-sdk, nanoid, & dotenv. In a terminal at the root of your project, run the following commands: ` ENV - Environment Variables You'll need to provision an access key / secret for an IAM user that we'll use to authenticate our API calls. One of the benefits of running these calls on the server is you're able to protect your application secret through environment variables, instead of having to ship them to the client, which is not recommended. There are quite a few ways to log into AWS on your local machine, but just to keep everything inside of our project, let's create a .env file at the root of our project, and fill in the following keys with your own values: ` NOTE: One little gotcha here is that the more common AWS_ACCESS_KEY_ID is a reserved environment keyword used by the Netlify process. So if we want to pass around env variables, we'll have to use our own key, in this case prefixed with MY_. Once they're added to the process, we can destructure them and use in setting up our AWS SDK. 
We'll need to setup AWS for every CRUD function, so let's assemble all this logic in a separate file called dyno-client.js. ` The following is required. SDK - Software Developer Kit Using the aws-sdk makes our life a lot easier for connecting to DynamoDB from our codebase. We can create an instance of the Dynamo client that we'll use for the remaining examples: ` To make this available to all our functions, add the DynamoDB instance to your exports, and we'll grab it when we need it: ` Create Todo (Due by EOD 😂) ⚡ We're finally ready to create our API function! In the following example, we'll post back form data containing the text for our todo item. We can parse the form data into JSON, and transform it into an item to insert into our table. If it succeeds, we'll return the result with a status code of 200, and if it fails, we'll return the the error message along with the status code from the error itself. ` This should give you the gist of how to expose your API routes and logic to perform various operations. I'll hold off on more examples because most of the code here is actually just specific to DynamoDB, and we'll save that for a separate article. But the takeaway is that we're able to return something meaningful with very minimal plumbing. And that's the whole point! > With Functions, you *only* have to write your own business logic! Debugging - For Frictionless Feedback Loops There are two critical debugging tools in Visual Studio Code I like to use when working with node and API routes. 1. Script Debugger & 2. Rest Client Plugin ✨ Did you know, instead of configuring a custom launch.json file, you can run and attach debuggers directly onto npm scripts in the package.json file: And while tools like Postman are a valuable part of comprehensive test suite, you can add the REST Client Extension to invoke API commands directly within VS Code. 
We can easily use the browser to mock GET endpoints, but this makes it really easy to invoke other HTTP verbs, and post back form data. Just add a file like test.http to your project. *REST Client* supports expansion of variable environment, and custom variables. If you stub out multiple calls, you can separate multiple different calls by delimiting with ###. Add the following to your sample file: ` We can now run the above by clicking "Send Request". This should hit our Netlify dev server, and allow us to step through our function logic locally! Publishing Publishing to Netlify is easy as well. Make sure your project is committed, and pushed up to a git repository on GitHub, GitLab or BitBucket. Login to Netlify, and click the option to Create "New Site From Git" and select your repo. Netlify will prompt for a Build command, and a Publish directory. Believe it or not, we don't actually have either of those things yet, and it's probably a project for another day to set up our front end. Those commands refer to the static site build part of the deployment. Everything we need to build serverless functions is inside our functions directory and our netlify.toml config. Once we deploy the site, the last thing we'll need to do is add our environment variables to Netlify under Build > Environment Next Steps - This is only the beginning Hopefully some ideas are spinning as to how you can use these technologies on your own sites and projects. The focus of this article is on building and debugging Netlify functions, but an important exercise left to the reader is to take advantage of that on your front end. TIP: If you want to add Create React App to your current directory (without creating a new folder), add a . when scaffolding out a new app like this: ` Try it out - build a front end, and let me know how it goes at KyleMitBTV! For more context, you can browse the full source code for the article on GitHub at KyleMit/netlify-functions-demo. 
For even more practical examples with actual code, check out the following resources as well!

* David Wells - Netlify Serverless Functions Workshop
* netlify/functions - Community Functions Examples

Good luck, and go build things!


Making AI Deliver: From Pilots to Measurable Business Impact

A lot of organizations have experimented with AI, but far fewer are seeing real business results. At the Leadership Exchange, this panel focused on what it actually takes to move beyond experimentation and turn AI into measurable ROI.

Over the past few years, many organizations have experimented with AI, but the challenge today is translating experimentation into measurable business value. Moderated by Tracy Lee, CEO at This Dot Labs, the panel included Dorren Schmitt, Vice President of IT Strategy & Innovation at Allen Media Group, Greg Geodakyan, CTO at Client Command, and Elliott Fouts, CAIO & CTO at This Dot Labs. Panelists discussed how companies are moving from early AI experiments to initiatives that deliver real results.

They began by examining how experimentation has evolved over the past year. While many organizations did not fully utilize their AI experimentation budgets in 2025, 2026 is showing a shift toward more intentional investment. Structured budgets and clearly defined frameworks are enabling companies to explore AI strategically and identify initiatives with high potential impact.

The conversation then turned to alignment and ROI. Panelists highlighted the importance of connecting AI projects to corporate strategy and leadership priorities. Ensuring that AI initiatives translate into operational efficiency, productivity gains, and measurable business impact is essential. Companies that successfully align AI efforts with organizational goals are better equipped to demonstrate tangible outcomes from their investments.

Moving from pilots and proofs of concept to production was another major focus. Governance, prioritization, and workflow integration were cited as essential for scaling AI initiatives. One panelist shared that out of nine proofs of concept, eight successfully launched, resulting in improvements in quality and operational efficiency.
Panelists also explored the future of AI within organizations, including the potential for agentic workflows and reduced human-in-the-loop processes. New capabilities are emerging that extend beyond coding tasks, reshaping how teams collaborate and how work is structured across departments.

Key Takeaways

- Structured experimentation and defined budgets allow organizations to explore AI strategically and safely.
- Alignment with business priorities is essential for translating AI capabilities into measurable outcomes.
- Governance and workflow integration are critical to moving AI initiatives from pilot stages to production deployment.

Successfully leveraging AI requires a balance between experimentation, strategic alignment, and operational discipline. Organizations that approach AI as a structured, measurable initiative can capture meaningful results and unlock new opportunities for innovation.

Curious how your organization can move from AI experimentation to real impact? Let's talk. Reach out to continue the conversation or join us at an upcoming Leadership Exchange. Tracy can be reached at tlee@thisdot.co.
