Using XState Actors to Model Async Workflows Safely

In my previous post I discussed the challenges of writing async workflows in React that correctly handle all possible edge cases. Even for the simple case of a client with two dependencies and no error handling, we ended up with this code:

const useClient = (user) => {
  const [client, setClient] = useState(null);

  useEffect(() => {
    let cancelled = false;
    (async () => {
      const clientAuthToken = await fetchClientToken(user);
      if (cancelled) return;

      const connection = await createWebsocketConnection();
      if (cancelled) {
        connection.close();
        return;
      }

      const client = await createClient(connection, clientAuthToken);
      if (cancelled) {
        client.disconnect();
        return;
      }

      setClient(client);
    })();

    return () => {
      cancelled = true;
    };
  }, [user]);

  useEffect(() => {
    return () => {
      client?.disconnect();
    };
  }, [client]);

  return client;
};

This tangle of highly imperative code works, but it will prove hard to read and change in the future. What we need is a way to express the stateful nature of the various pieces of this workflow, and how they interact with each other, in a form that makes it easy to see whether we've missed something and to make changes later. This is where state machines and the actor model come in handy.

State machines? Actors?

These are programming patterns that you may or may not have heard of before. I will explain them briefly and in a simplified way, but you should know that there is a great deal of theoretical and practical background in this area that we will be leveraging, even though we won't go over it explicitly.

  1. A state machine is an entity consisting of state and a series of rules that determine the next state from a combination of its previous state and the external events it receives. Even though you might rarely think about them, state machines are everywhere. For example, a Promise is a state machine that goes from a pending to a resolved (or rejected) state depending on the outcome of the asynchronous computation it is wrapping.
  2. The actor model is a computing architecture that models asynchronous workflows as the interplay of self-contained units called actors. These units communicate by sending and receiving events, encapsulate their own state, and exist in a hierarchical relationship in which parent actors spawn child actors, linking their lifecycles.
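To make the first idea concrete, here is a dependency-free sketch of a Promise-like state machine as a pure transition function. The names here are illustrative, not from any library:

```javascript
// The next state is a pure function of the current state and an event.
const promiseTransition = (state, event) => {
  switch (state) {
    case "pending":
      if (event.type === "resolve") return "resolved";
      if (event.type === "reject") return "rejected";
      return state;
    default:
      // "resolved" and "rejected" are terminal: a settled promise never changes state
      return state;
  }
};

let state = "pending";
state = promiseTransition(state, { type: "resolve" }); // "resolved"
state = promiseTransition(state, { type: "reject" });  // still "resolved"
```

Everything XState does is an elaboration of this core: states, events, and a deterministic rule for moving between them.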

It's common to combine both patterns so that a single entity is both an actor and a state machine: child actors are spawned and messages are sent based on which state the entity is in. I'll be using XState, a JavaScript library which allows us to create actors and state machines in an easy, declarative style. This won't be a complete introductory tutorial to XState, though, so if you're unfamiliar with the tool and need context for the syntax I'll be using, head to their website to read through the docs.

Setting the stage

The first step is to break down our workflow into the distinct states it can be in. Not every step in a process is a state. Rather, states represent moments in the process where the workflow is waiting for something to happen, whether that is user input or the completion of some external process. In our case we can break our workflow down coarsely into three states:

  1. When the workflow is first created, we can immediately start creating the connection and fetching the auth token, but we have to wait until both are finished before creating the client. We'll call this state "preparing".
  2. Then, we've started the process of creating the client, but we can't use it until the client creation returns it to us. We'll call this state "creatingClient".
  3. Finally, everything is ready, and the client can be used. The machine is waiting only for the exit signal so it can release its resources and destroy itself. We'll call this state "clientReady".

This can be represented visually like so (all visualizations produced with Stately):

Basic state machine

And in code like so:

export const clientFactory = createMachine({
  id: "clientFactory",
  initial: "preparing",
  states: {
    preparing: {
      on: {
        "preparations.complete": {
          target: "creatingClient",
        },
      },
    },
    creatingClient: {
      on: {
        "client.ready": {
          target: "clientReady",
        },
      },
    },
    clientReady: {},
  },
});

However, this is a bit overly simplistic. When we're in our "preparing" state there are actually two separate and independent processes happening, and both of them must complete before we can start creating the client. Fortunately, this is easily represented with parallel child state nodes. Think of parallel state nodes like Promise.all: they advance independently but the parent that invoked them gets notified when they all finish. In XState, "finishing" is defined as reaching a state marked "final", like so:

export const clientFactory = createMachine({
  id: "clientFactory",
  initial: "preparing",
  states: {
    preparing: {
      // Parallel state nodes are a good way to model two independent
      // workflows that should happen at the same time
      type: "parallel",
      states: {
        // The token child node
        token: {
          initial: "fetching",
          states: {
            fetching: {
              on: {
                "token.ready": {
                  target: "done",
                },
              },
            },
            done: {
              type: "final",
            },
          },
        },
        // The connection child node
        connection: {
          initial: "connecting",
          states: {
            connecting: {
              on: {
                "connection.ready": {
                  target: "done",
                },
              },
            },
            done: {
              type: "final",
            },
          },
        },
      },
      // The "onDone" transition on parallel state nodes gets called when all child
      // nodes have entered their "final" state. It's a great way to wait until
      // various workflows have completed before moving to the next step!
      onDone: {
        target: "creatingClient",
      },
    },
    creatingClient: {
      on: {
        "client.ready": {
          target: "clientReady",
        },
      },
    },
    clientReady: {},
  },
});

Leaving us with the final shape of our state chart:

Complete state machine

Casting call

So far, we only have a single actor: the root actor implicitly created by declaring our state machine. To unlock the real advantages of using actors we need to model all of our disposable resources as actors. We could write them as full state machines using XState but instead let's take advantage of a short and sweet way of defining actors that interact with non-XState code: functions with callbacks. Here is what our connection actor might look like, creating and disposing of a WebSocket:

// Demonstrated here is the simplest and most versatile form of actor: a function that
// takes a callback that sends events to the parent actor, and returns a function that
// will be called when it is stopped.
const createConnectionActor = () => (send) => {
  const connection = new WebSocket("wss://example.com");

  connection.onopen = () =>
    // We send an event to the parent that contains the ready connection
    send({ type: "connection.ready", data: connection });

  // Actors are automatically stopped when their parent stops so simple actors are a great
  // way to manage resources that need to be disposed of. The function returned by an
  // actor will be called when it receives the stop signal.
  return () => {
    connection.close();
  };
};

And here is one for the client, which demonstrates the use of promises inside a callback actor. You can spawn promises as actors directly but they provide no mechanism for responding to events, cleaning up after themselves, or sending any events other than "done" and "error", so they are a poor choice in most cases. It's better to invoke your promise-creating function inside a callback actor, and use the Promise methods like .then() to control async responses.

// We can have the actor creation function take arguments, which we will populate
// when we spawn it
const createClientActor = (token, connection) => (send) => {
  const clientPromise = createClient(token, connection);
  clientPromise.then((client) =>
    send({ type: "client.ready", data: client })
  );

  return () => {
    // A good way to make sure the result of an async function is
    // always cleaned up is by invoking cleanup through .then()
    // If this executes before the promise is resolved, it will clean up
    // on resolution, whereas if it executes after it's resolved, it will
    // clean up immediately
    clientPromise.then((client) => {
      client.disconnect();
    });
  };
};
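This cleanup-through-.then() pattern works the same way outside of XState. Here's a small, self-contained sketch (the resource and function names are made up for illustration) that simulates stopping the actor before the resource even exists:

```javascript
// Simulates the race between resource creation and actor stop. Because
// cleanup is scheduled through .then(), it can never run against a
// half-made resource: it always runs after creation, exactly once.
const demoCleanupRace = () => {
  const log = [];
  const clientPromise = new Promise((resolve) =>
    setTimeout(() => {
      log.push("created");
      resolve({ disconnect: () => log.push("disconnected") });
    }, 10)
  );

  // This is what the actor's stop function does
  const stop = () => clientPromise.then((client) => client.disconnect());

  stop(); // stop "early", before the promise has resolved
  return clientPromise.then(() => log);
};

demoCleanupRace().then((log) => console.log(log)); // ["created", "disconnected"]
```

Even though `stop()` runs first, the disconnect is deferred until the client exists, so nothing leaks and nothing throws.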

Actors are spawned with the spawn function from XState, but we also need to save a reference to the running actor somewhere, so spawn is usually called inside an assign action, which creates the actor and saves it into the parent's context.

// We put this as the machine options. Machine options can be customised when the
// machine is interpreted, which gives us a way to use values from e.g. React context
// to define our actions, although this is not demonstrated here
const clientFactoryOptions = {
  actions: {
    spawnConnection: assign({
      connectionRef: () => spawn(createConnectionActor()),
    }),
    spawnClient: assign({
      // The assign action creator lets us use the machine context when defining
      // the state to be assigned; this way actors can inherit parent state
      clientRef: (context) =>
        spawn(createClientActor(context.token, context.connection)),
    }),
  },
};

And then it becomes an easy task to trigger these actions when certain states are entered:

export const clientFactory = createMachine({
  id: "clientFactory",
  initial: "preparing",
  states: {
    preparing: {
      type: "parallel",
      states: {
        token: {
          initial: "fetching",
          states: {
            fetching: {
              // Because there's no resource to manage once it's done, we
              // can simply invoke a promise here. Invoked services are like
              // actors, but they're automatically spawned when the state node
              // is entered, and destroyed when it is exited.
              invoke: {
                src: "fetchToken",
                // Invoking a promise provides us with a handy "onDone" transition
                // that triggers when the promise resolves. To handle rejections,
                // we would similarly implement "onError"
                onDone: {
                  // These "save" actions will save the result to the machine
                  // context. They're simple assigners, but you can see them in
                  // the full code example linked at the end.
                  actions: "saveToken",
                  target: "done",
                },
              },
            },
            done: {
              type: "final",
            },
          },
        },
        // The connection child node
        connection: {
          initial: "connecting",
          states: {
            connecting: {
              // We want our connection actor to stick around, because by design,
              // the actor destroys the connection when it exits, so we store
              // it in state by using a "spawn" action
              entry: "spawnConnection",
              on: {
                // Since we're dealing with a persistent actor, we don't get an
                // automatic "onDone" transition. Instead, we rely on the actor
                // to send us an event.
                "connection.ready": {
                  actions: "saveConnection",
                  target: "done",
                },
              },
            },
            done: {
              type: "final",
            },
          },
        },
      },
      onDone: {
        target: "creatingClient",
      },
    },
    creatingClient: {
      // The same pattern as the connection actor. We spawn a persistent actor
      // that takes care of creating and destroying the client.
      entry: "spawnClient",
      on: {
        "client.ready": {
          actions: "saveClient",
          target: "clientReady",
        },
      },
    },
    // Even though this node can't be exited, it is not "final". A final node would
    // cause the machine to stop operating, which would stop the child actors!
    clientReady: {},
  },
});

Putting on the performance

XState provides hooks that simplify the process of using state machines in React, making this the equivalent of our async hook from the start:

const useClient = (user) => {
  const [state] = useMachine(clientFactory, clientFactoryOptions);

  if (state.matches("clientReady")) return state.context.client;
  return null;
};

Of course, the machine definition combined with the action definitions and the actor code is hardly less code, or even simpler code. The advantages of breaking a workflow down like this include:

  1. Each part can be tested independently. You can verify that the machine follows the logic set out without invoking the actors, and you can verify that the actors clean up after themselves without running the whole machine.
  2. The parts can be shuffled around and added to without having to rewrite them. We could easily add an extra step between connecting and creating the client, or introduce error handling and error states.
  3. We can read and visualize every state and transition of the workflow to make sure we've accounted for all of them. This is a particular improvement over long async/await chains where every await implicitly creates a new state and two transitions — success and error — and the precise placement of catch blocks can drastically change the shape of the state chart.
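To illustrate the first point: a callback actor is just a function, so it can be exercised in isolation with a fake send callback and a mocked resource, no machine required. This sketch assumes a hypothetical makeClientActor, which is our createClientActor with the client-creating function injected as a parameter so a fake can be passed in:

```javascript
// "makeClientActor" is assumed to be createClientActor with the
// client factory injected, purely so we can test it with a fake.
const makeClientActor = (createClient) => (send) => {
  const clientPromise = createClient();
  clientPromise.then((client) => send({ type: "client.ready", data: client }));
  return () => clientPromise.then((client) => client.disconnect());
};

const events = [];
let disconnected = false;
const fakeClient = { disconnect: () => { disconnected = true; } };

// Run the actor with a fake send callback, no machine involved
const stop = makeClientActor(() => Promise.resolve(fakeClient))((event) =>
  events.push(event.type)
);

// Once the promise has resolved, simulate the machine stopping the actor
// and verify that the resource gets released
Promise.resolve().then(() => stop());
```

After the microtask queue flushes, we can assert that a "client.ready" event was sent and that the fake client was disconnected, all without touching XState or the network.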

You won't need to break out these patterns very often in an application. Maybe once or twice, or maybe never. After all, many applications never have to worry about complex workflows and disposable resources. However, having these ideas in your back pocket can get you out of some jams, particularly if you're already using state machines to model UI behaviour — something you should definitely consider doing if you're not already.

A complete code example of everything discussed above, written in TypeScript and with mock actors and services that actually run in the visualizer, can be found here.

This Dot Labs is a development consultancy that is trusted by top industry companies, including Stripe, Xero, Wikimedia, Docusign, and Twilio. This Dot takes a hands-on approach by providing tailored development strategies to help you approach your most pressing challenges with clarity and confidence. Whether it's bridging the gap between business and technology or modernizing legacy systems, you’ll find a breadth of experience and knowledge you need. Check out how This Dot Labs can empower your tech journey.

You might also like

Building a Multi-Response Streaming API with Node.js, Express, and React cover image

Building a Multi-Response Streaming API with Node.js, Express, and React

Introduction As web applications become increasingly complex and data-driven, efficient and effective data transfer methods become critically important. A streaming API that can send multiple responses to a single request can be a powerful tool for handling large amounts of data or for delivering real-time updates. In this article, we will guide you through the process of creating such an API. We will use video streaming as an illustrative example. With their large file sizes and the need for flexible, on-demand delivery, videos present a fitting scenario for showcasing the power of multi-response streaming APIs. The backend will be built with Node.js and Express, utilizing HTTP range requests to facilitate efficient data delivery in chunks. Next, we'll build a React front-end to interact with our streaming API. This front-end will handle both the display of the streamed video content and its download, offering users real-time progress updates. By the end of this walkthrough, you will have a working example of a multi-response streaming API, and you will be able to apply the principles learned to a wide array of use cases beyond video streaming. Let's jump right into it! Hands-On Implementing the Streaming API in Express In this section, we will dive into the server-side implementation, specifically our Node.js and Express application. We'll be implementing an API endpoint to deliver video content in a streaming fashion. Assuming you have already set up your Express server with TypeScript, we first need to define our video-serving route. We'll create a GET endpoint that, when hit, will stream a video file back to the client. Please make sure to install cors for handling cross-origin requests, dotenv for loading environment variables, and throttle for controlling the rate of data transfer. 
You can install these with the following command: ` yarn add cors dotenv throttle @types/cors @types/dotenv @types/throttle ` `typescript import cors from 'cors'; import 'dotenv/config'; import express, { Request, Response } from 'express'; import fs from 'fs'; import Throttle from 'throttle'; const app = express(); const port = 8000; app.use(cors()); app.get('/video', (req: Request, res: Response) => { // Video by Zlatin Georgiev from Pexels: https://www.pexels.com/video/15708449/ // For testing purposes - add the video in you static` folder const path = 'src/static/pexels-zlatin-georgiev-15708449 (2160p).mp4'; const stat = fs.statSync(path); const fileSize = stat.size; const range = req.headers.range; if (range) { const parts = range.replace(/bytes=/, '').split('-'); const start = parseInt(parts[0], 10); const end = parts[1] ? parseInt(parts[1], 10) : fileSize - 1; const chunksize = end - start + 1; const file = fs.createReadStream(path, { start, end }); const head = { 'Content-Range': bytes ${start}-${end}/${fileSize}`, 'Accept-Ranges': 'bytes', 'Content-Length': chunksize, 'Content-Type': 'video/mp4', }; res.writeHead(206, head); file.pipe(res); } else { const head = { 'Content-Length': fileSize, 'Content-Type': 'video/mp4', }; res.writeHead(200, head); fs.createReadStream(path).pipe(res); } }); app.listen(port, () => { console.log(Server listening at ${process.env.SERVER_URL}:${port}`); }); ` In the code snippet above, we are implementing a basic video streaming server that responds to HTTP range requests. Here's a brief overview of the key parts: 1. File and Range Setup__: We start by determining the path to the video file and getting the file size. We also grab the range header from the request, which contains the range of bytes the client is requesting. 2. Range Requests Handling__: If a range is provided, we extract the start and end bytes from the range header, then create a read stream for that specific range. 
This allows us to stream a portion of the file rather than the entire thing. 3. Response Headers__: We then set up our response headers. In the case of a range request, we send back a '206 Partial Content' status along with information about the byte range and total file size. For non-range requests, we simply send back the total file size and the file type. 4. Data Streaming__: Finally, we pipe the read stream directly to the response. This step is where the video data actually gets sent back to the client. The use of pipe() here automatically handles backpressure, ensuring that data isn't read faster than it can be sent to the client. With this setup in place, our streaming server is capable of efficiently delivering large video files to the client in small chunks, providing a smoother user experience. Implementing the Download API in Express Now, let's add another endpoint to our Express application, which will provide more granular control over the data transfer process. We'll set up a GET endpoint for '/download', and within this endpoint, we'll handle streaming the video file to the client for download. 
`typescript app.get('/download', (req: Request, res: Response) => { // Again, for testing purposes - add the video in you static` folder const path = 'src/static/pexels-zlatin-georgiev-15708449 (2160p).mp4'; const stat = fs.statSync(path); const fileSize = stat.size; res.writeHead(200, { 'Content-Type': 'video/mp4', 'Content-Disposition': 'attachment; filename=video.mp4', 'Content-Length': fileSize, }); const readStream = fs.createReadStream(path); const throttle = new Throttle(1024 1024 * 5); // throttle to 5MB/sec - simulate lower speed readStream.pipe(throttle); throttle.on('data', (chunk) => { Console.log(Sent ${chunk.length} bytes to client.`); res.write(chunk); }); throttle.on('end', () => { console.log('File fully sent to client.'); res.end(); }); }); ` This endpoint has a similar setup to the video streaming endpoint, but it comes with a few key differences: 1. Response Headers__: Here, we include a 'Content-Disposition' header with an 'attachment' directive. This header tells the browser to present the file as a downloadable file named 'video.mp4'. 2. Throttling__: We use the 'throttle' package to limit the data transfer rate. Throttling can be useful for simulating lower-speed connections during testing, or for preventing your server from getting overwhelmed by data transfer operations. 3. Data Writing__: Instead of directly piping the read stream to the response, we attach 'data' and 'end' event listeners to the throttled stream. On the 'data' event, we manually write each chunk of data to the response, and on the 'end' event, we close the response. This implementation provides a more hands-on way to control the data transfer process. It allows for the addition of custom logic to handle events like pausing and resuming the data transfer, adding custom transformations to the data stream, or handling errors during transfer. 
Utilizing the APIs: A React Application Now that we have a server-side setup for video streaming and downloading, let's put these APIs into action within a client-side React application. Note that we'll be using Tailwind CSS for quick, utility-based styling in our components. Our React application will consist of a video player that uses the video streaming API, a download button to trigger the download API, and a progress bar to show the real-time download progress. First, let's define the Video Player component that will play the streamed video: `tsx import React from 'react'; const VideoPlayer: React.FC = () => { return ( Your browser does not support the video tag. ); }; export default VideoPlayer; ` In the above VideoPlayer component, we're using an HTML5 video tag to handle video playback. The src attribute of the source tag is set to the video endpoint of our Express server. When this component is rendered, it sends a request to our video API and starts streaming the video in response to the range requests that the browser automatically makes. 
Next, let's create the DownloadButton component that will handle the video download and display the download progress: `tsx import React, { useState } from 'react'; const DownloadButton: React.FC = () => { const [downloadProgress, setDownloadProgress] = useState(0); const handleDownload = async () => { try { const response = await fetch('http://localhost:8000/download'); const reader = response.body?.getReader(); if (!reader) { return; } const contentLength = +(response.headers?.get('Content-Length') || 0); let receivedLength = 0; let chunks = []; while (true) { const { done, value } = await reader.read(); if (done) { console.log('Download complete.'); const blob = new Blob(chunks, { type: 'video/mp4' }); const url = window.URL.createObjectURL(blob); const a = document.createElement('a'); a.style.display = 'none'; a.href = url; a.download = 'video.mp4'; document.body.appendChild(a); a.click(); window.URL.revokeObjectURL(url); setDownloadProgress(100); break; } chunks.push(value); receivedLength += value.length; const progress = (receivedLength / contentLength) 100; setDownloadProgress(progress); } } catch (err) { console.error(err); } }; return ( Download Video {downloadProgress > 0 && downloadProgress Download progress: )} {downloadProgress === 100 && Download complete!} ); }; export default DownloadButton; ` In this DownloadButton component, when the download button is clicked, it sends a fetch request to our download API. It then uses a while loop to continually read chunks of data from the response as they arrive, updating the download progress until the download is complete. This is an example of more controlled handling of multi-response APIs where we are not just directly piping the data, but instead, processing it and manually sending it as a downloadable file. Bringing It All Together Let's now integrate these components into our main application component. 
`tsx import React from 'react'; import VideoPlayer from './components/VideoPlayer'; import DownloadButton from './components/DownloadButton'; function App() { return ( My Video Player ); } export default App; ` In this simple App component, we've included our VideoPlayer and DownloadButton components. It places the video player and download button on the screen in a neat, centered layout thanks to Tailwind CSS. Here is a summary of how our system operates: - The video player makes a request to our Express server as soon as it is rendered in the React application. Our server handles this request, reading the video file and sending back the appropriate chunks as per the range requested by the browser. This results in the video being streamed in our player. - When the download button is clicked, a fetch request is sent to our server's download API. This time, the server reads the file, but instead of just piping the data to the response, it controls the data sending process. It sends chunks of data and also logs the sent chunks for monitoring purposes. The React application collects these chunks and concatenates them, displaying the download progress in real-time. When all chunks are received, it compiles them into a Blob and triggers a download in the browser. This setup allows us to build a full-featured video streaming and downloading application with fine control over the data transmission process. To see this system in action, you can check out this video demo. Conclusion While the focus of this article was on video streaming and downloading, the principles we discussed here extend beyond just media files. The pattern of responding to HTTP range requests is common in various data-heavy applications, and understanding it can be a useful tool in your web development arsenal. Finally, remember that the code shown in this article is just a simple example to demonstrate the concepts. 
In a real-world application, you would want to add proper error handling, validation, and possibly some form of access control depending on your use case. I hope this article helps you in your journey as a developer. Building something yourself is the best way to learn, so don't hesitate to get your hands dirty and start coding!...

Setting Up React Navigation in Expo Web: A Practical Guide cover image

Setting Up React Navigation in Expo Web: A Practical Guide

Introduction We have come a long way from the days where we had to write different code for different platforms. Today, we can write code once and run it anywhere using technologies like React Native. React Native is a framework that allows us to write native apps for Android, iOS, and the web, using JavaScript and React. This makes it more interesting for teams that want to cut time/development cost to look into it. We recently launched our Expo-Zustand-Styled Components showcase app here at This Dot Labs. We also showcased how we can use expo`, `zustand`, and `styled-components` to build a React Native app that can run on Android, iOS, and the web all in one codebase. In this article, we will be looking at React Navigation, how to configure Deep Linking to navigate to different screens, and handling dynamic routes paths in our app. We'll do this by looking at the challenges we had to deal with while working on our showcase app, especially when running the app on the web. Getting started Before we dive in, we need to install all the needed dependencies. Let's make sure we have the expo cli installed. We can do this by running the following command: `bash npm i expo-cli ` We can now Initialize a new Expo app`. We can do this by running the following command: `bash npx create-expo-app rn-web-routing ` Next, we need to install the navigation package, and since we are also building this app for the web, we need to install the needed dependencies. `bash npx expo install @react-navigation/native @react-navigation/native-stack react-dom react-native-web @expo/webpack-config ` You can also leverage our expo-zustand-styled-component starter kit, which offers all configurations to build for IOS, Android, and the web. Setting up the navigation Let's set up our navigation routes. We will use native-stack` for our navigation. We can do this by creating a `navigation` folder in our `src` folder, and creating an `index.ts` file in it. This will contain our root navigator. 
`js import { createNativeStackNavigator } from "@react-navigation/native-stack"; import React from "react"; // import screens const Stack = createNativeStackNavigator(); const Routes = () => { return {/ screens stack */}; }; export default Routes; ` We can now import our Routes` component in our `App.tsx` file and render it. `js import React from 'react'; import Routes from './navigation'; export default function App() { return ( {/ ... other config eg provider */} {/ ... other config eg provider */} ); } ` Creating our screens We can now create our screens. We will create a screens` folder in our `src` folder and a `Home.tsx` file in it which will contain our home screen. `js import React from "react"; import { View, Text } from "react-native"; const Home = () => { return ( This is Home ); }; export default Home; ` We can now import our Home` component in our `Routes` component, and add it to our stack. `js import { createNativeStackNavigator } from "@react-navigation/native-stack"; import React from "react"; import Home from "../screens/Home"; const Stack = createNativeStackNavigator(); const Routes = () => { return ( ); }; export default Routes; ` We can now do the same for the other screens we want to add to our stack. Deep linking During the development of the expo showcase kit, we faced an issue on the web. When we navigate to a page, the URL in the browser still remains in the index. This is because we have not configured deep linking. Deep Linking is a way to navigate to different screens in our app using a custom URL link. This is very useful when we want to have individual URLs for each of our pages. This is also useful when we want to share a link to a specific screen in our app, saving the user time and energy in locating a particular page themselves. We can do this by creating a config file in our navigation folder with the following code. 
`js import { NavigationContainer } from "@react-navigation/native"; import as Linking from "expo-linking"; const linking = { prefixes: [Linking.createURL("/")], // this is the prefix for our app. Could be anything eg https://myapp.com config: { screens: { Home: "", // ... other screens }, }, }; export default linking; ` And then we can import our linking` config in our `App.tsx` file and pass it to the `NavigationContainer`. `js export default function App() { return ( {/ ... other config eg provider */} {/ ... other config eg provider */} ); } ` We will come back to this config file later as we are going to deep dive into more complex routing, and how to handle it. Let's run our app now, and see how it works. Running the app Now, we can run our app. We can do this by running the following command: `bash expo start ` Press w` to open it on a web browser, you should see something like this: We can see we are in our index page which is the home page, we can now navigate to the eg Profile` page by clicking the button on the home page. To have your preferred page url path, we need to update the linking config with the appropriate page URL. Otherwise, deep linking will use the screen name as the page URL. Handling dynamic routes Say you want to have a page that can be accessed by different users. You can do this by passing the user id as a parameter in the URL. This is called dynamic routing. We can do this by updating our linking` config file with the following code. ` import { NavigationContainer } from "@react-navigation/native"; import as Linking from "expo-linking"; const linking = { prefixes: [Linking.createURL("/")], config: { screens: { Home: "", Profile: "profile/:id", // ... other screens }, }, }; export default linking; ` This will allow us to access the Profile` page by passing the user id as a parameter in the URL. We can now update our `Profile` page to get the user id from the URL and display it. 
```js
import React from "react";
import { View, Text } from "react-native";
import { useRoute } from "@react-navigation/native";

const Profile = () => {
  const route = useRoute();
  const { id } = route.params;
  return (
    <View>
      <Text>This is Profile</Text>
      <Text>User id: {id}</Text>
    </View>
  );
};

export default Profile;
```

During the development of our showcase app, we faced another issue. We needed to replicate a route like `/tree/main/a/b/c` with a pattern such as `/tree/:branch/:path*`, where the path is `a/b/c`, just like we have in other showcases.

```js
import * as Linking from "expo-linking";

const linking = {
  prefixes: [Linking.createURL("/")],
  config: {
    screens: {
      Home: "",
      Profile: "profile/:id",
      Tree: "tree/:branch/:path",
      // ... other screens
    },
  },
};

export default linking;
```

This didn't work as expected. On further investigation, we found that we have to handle this route manually in the `getStateFromPath` function of the linking configuration, by updating the state and returning it.

```js
import { getStateFromPath } from "@react-navigation/native";
import * as Linking from "expo-linking";

const linking = {
  prefixes: [Linking.createURL("/")],
  config: {
    screens: {
      Home: "",
      Profile: "profile/:id",
      Tree: "tree/:branch/:path",
      // ... other screens
    },
  },
  getStateFromPath(path, options) {
    let state = getStateFromPath(path, options);

    // If the state is undefined, the path doesn't match any route in our
    // config, and we want to handle the path ourselves and route it to the
    // right screen by updating the state and returning it.
    if (!state) {
      // Check if the route contains the main identifier of the screen we want to show.
      const isTree = path.includes("tree");
      if (isTree) {
        const [, , branch, ...rest] = path.split("/");
        state = {
          routes: [
            {
              path,
              name: "Tree",
              params: {
                branch,
                // Here, our path param is the rest of the path after the branch.
                path: rest.join("/"),
              },
            },
          ],
        };
      }
    }
    return state;
  },
};

export default linking;
```

With these configurations, we can now navigate throughout our app using the `Link` component from `@react-navigation/native`.

```js
import React from "react";
import { View, Text } from "react-native";
import { Link } from "@react-navigation/native";

const Home = () => {
  return (
    <View>
      <Text>This is Home</Text>
      <Link to="/tree/main/a/b/c">Go to Tree</Link>
    </View>
  );
};

export default Home;
```

To make use of the `path` parameter in our `Tree` page:

```js
import React from "react";
import { View, Text } from "react-native";

const Tree = ({ route }) => {
  const { branch, path } = route.params;
  return (
    <View>
      <Text>This is Tree</Text>
      <Text>Branch: {branch}</Text>
      <Text>Path: {path}</Text>
    </View>
  );
};

export default Tree;
```

Conclusion

In this article, we have seen how to set up navigation in our Expo app using `@react-navigation/native`. We have also seen how to configure deep linking, which lets us navigate through our app using the URL path. Finally, we learned how to handle dynamic routes, which lets us pass parameters in the URL path and handle special cases like the one we had with the `Tree` page. If you have any questions or run into any trouble, feel free to join the discussions going on at starter.dev or on our Discord....


Introducing Framework.dev

Have you ever started to learn a technology, felt overwhelmed by the amount of information out there, and become paralyzed, not knowing where to start? Do you sometimes struggle to keep up with the latest developments in the OSS community surrounding a stack, and wish you could call up a list of, say, every major React state management solution? We at This Dot Labs can certainly relate, which is why we are creating framework.dev, a series of websites dedicated to cataloging resources for learning and developing in a given frontend framework. We are starting with React, but our solution is deliberately generic, designed to be themed and filled in with different content for different frameworks, with Angular and Vue versions already in the pipeline.

Browse resources

Want to learn React but don't know where to start? Check out the list of courses and books. From podcasts to blogs, we aim to provide a catalog of content relevant to the community, so you can scroll through when you're looking for the next thing to check out in your learning journey. We have also curated lists of all major React libraries that aim to solve state management, styling, internationalization, and more. Don't remember which simple state management library had that cute bear as a mascot? Have a look through the options to see if any ring a bell. (It's zustand.)

Find what you're looking for

If you're looking for something more specific, all resources are tagged and filterable by a number of different attributes, and all titles and descriptions are searchable. Want to find a video course aimed at beginners that will introduce you to Redux? We've got you.

Compare libraries

When selecting which open-source libraries to use, you rely on more than just their descriptions and listed features to make a decision. You'll look at things like the number of downloads, stars on GitHub, or test coverage to try to get an idea of how popular and well-supported each one is.
To help you with these decisions, we've made it so you can take any set of libraries and arrange them into a sortable table with a number of useful statistics sourced from npms.

Help build and curate the content

Is there anything you think is missing? Did we make a typo in your favorite podcast's name? Did you just publish a new React course? Do you have an idea for an extra statistic that should be added to the comparison tables? Framework.dev is hosted and maintained by This Dot Labs, but it is a fully open-source project, so go look at our contribution guidelines and open a PR today! This resource is made for the community, and we hope it will be built by the community too. No single person can keep up with how the React ecosystem constantly evolves and changes, but together we might just stand a chance....


Testing a Fastify app with the NodeJS test runner

Introduction

Node.js has shipped with a built-in test runner for a couple of major versions now. Since its release, I haven't heard much about it, so I decided to try it out on a simple Fastify API server application that I was working on. It turns out, it's pretty good! It's also really nice to start testing a node application without dealing with the hassle of installing additional dependencies and managing more configuration. Since it's got my stamp of approval, why not write a post about it? In this post, we will hit the highlights of the testing API and write some basic but real-life tests for an API server. This server will be built with Fastify, a plugin-centric API framework. They have some good documentation on testing that should make this pretty easy. We'll also add a SQL driver for the plugin we will test.

Setup

Let's set up our simple API server by creating a new project, adding our dependencies, and creating some files. Ensure you're running node v20 or greater (the test runner became a stable API as of the v20 major release).

Overview

- `index.js` - node entry point that initializes our Fastify app and listens for incoming HTTP requests on port 3001
- `app.js` - exports a function that creates and returns our Fastify application instance
- `sql-plugin.js` - a Fastify plugin that sets up and connects to a SQL driver and makes it available on our app instance

Application Code

A simple first test

For our first test, we will just test our server's index route. If you recall from the `app.js` code above, our index route returns a 501 response for "not implemented". In this test, we're using the `createApp` function to create a new instance of our Fastify app, and then using the `inject` method from the Fastify API to make a request to the `/` route. We import our test utilities directly from node. Notice that we can pass async functions to our tests to use async/await. Node's `assert` API has been around for a long time; this is what we are using to make our test assertions.
To run this test, we can use node's built-in test runner CLI, `node --test`. By default, the Node.js test runner uses the TAP reporter. You can configure it to use other reporters, or even create your own custom reporters for it to use.

Testing our SQL plugin

Next, let's take a look at how to test our Fastify Postgres plugin. This one is a bit more involved and gives us an opportunity to use more of the test runner's features. In this example, we are using a feature called subtests. This simply means nesting tests inside of a top-level test. In the top-level test call, we get a test parameter `t` that we call methods on in our nested test structure. Here, we use `t.beforeEach` to create a new Fastify app instance for each test, and call the `test` method to register our nested tests. Along with `beforeEach`, the other methods you might expect are also available: `afterEach`, `before`, `after`. Since we don't want to connect to our Postgres database in our tests, we are using the available mocking API to mock out the client. This was the API that I was most excited to see included in the Node test runner; once you get past the basics, you almost always need to mock some functions, methods, or libraries in your tests. After trying this feature and finding that it works easily and as expected, I was confident that I could get pretty far testing with the new Node.js core APIs. Since my plugin only uses the `end` method of the Postgres driver, it's the only method I provide a mock function for. Our second test confirms that it gets called when our Fastify server is shutting down.

Additional features

A lot of other features that are common in other popular testing frameworks are also available.

Test styles and methods

Along with the basic `test`-based tests we used for our Fastify plugins, `test` also includes `skip`, `todo`, and `only` methods. They do what you would expect based on their names: skipping or only running certain tests, and marking work-in-progress tests.
If you prefer, you also have the option of using the `describe` → `it` test syntax. Both come with the same methods as `test`, and I think it really comes down to a matter of personal preference.

Test coverage

This might be the deal breaker for some, since this feature is still experimental. As popular as test coverage reporting is, I expect this API to be finalized and become stable in an upcoming version. Since this isn't something that's being shipped to the end user, though, I say go for it. What's the worst that could happen, really?

Other CLI flags

- `--watch` - https://nodejs.org/dist/latest-v20.x/docs/api/cli.html#--watch
- `--test-name-pattern` - https://nodejs.org/dist/latest-v20.x/docs/api/cli.html#--test-name-pattern

TypeScript support

You can use a loader like you would for a regular node application to execute TypeScript files. Some popular examples are `tsx` and `ts-node`. In practice, I found that this currently doesn't work well, since the test runner only looks for JS file types. After digging in, I found that support for locating test files via a glob string has been added, but it won't be available until the next major version release.

Conclusion

The built-in test runner is a lot more comprehensive than I expected it to be. I was able to easily write some real-world tests for my application. If you don't mind some features like coverage reporting being experimental, you can get pretty far without installing any additional dependencies. The biggest deal breaker on many projects at this point, in my opinion, is the lack of straightforward TypeScript support. This is the test command that I ended up with in my application: I'll be honest, I stole this from a GitHub issue thread and I don't know exactly how it works (but it does). If TypeScript is a requirement, maybe stick with Jest or Vitest for now 🙂...
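As a hedged, plain-JavaScript illustration of the runner's CLI (the author's actual TypeScript command isn't reproduced above), here is a self-contained session that writes a tiny test file and runs it with a name filter. The `example.test.js` filename is invented for this demo.

```shell
# Create a throwaway test file (invented for this demo).
cat > example.test.js <<'EOF'
const test = require('node:test');
const assert = require('node:assert');

test('adds numbers', () => {
  assert.strictEqual(1 + 1, 2);
});
EOF

# Run it with the built-in runner, filtering tests by name.
node --test --test-name-pattern "adds" example.test.js
```

The `--test-name-pattern` flag runs only the tests whose names match, which is handy while iterating on a single failing test.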