
GraphQL Contributor Days | December 2021

This is a recap of the December GraphQL Contributor Days, an event where maintainers and contributors came together to talk about all things current and upcoming in the GraphQL realm! Read on, or rewatch the event here, to hear what these experts had to share and how they're using GraphQL at companies such as Facebook, PayPal, Hasura, and Netflix.

Our hosts for this event were Dustin Goodman and Tanmai Gopal.

Panelists

Community Showcase & Updates

GraphQL Yoga (Uri Goldshtein)

GraphQL Yoga 2.0 has been in the works for the past few months, and its alpha has finally launched! This version ships with Envelop and includes support for @defer and @stream, live queries, and the latest GraphiQL IDE.
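If you want to try the alpha, a minimal server might look something like the sketch below. This is based on the 2.0 alpha's @graphql-yoga/node package and createServer API, which may still change before the stable release:

import { createServer } from '@graphql-yoga/node'

const server = createServer({
  schema: {
    typeDefs: /* GraphQL */ `
      type Query {
        hello: String!
      }
    `,
    resolvers: {
      Query: {
        hello: () => 'Hello from Yoga 2!',
      },
    },
  },
})

// Starts an HTTP server (port 4000 by default) with GraphiQL enabled
server.start()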

RescriptRelay CLI (Gabriel Nordeborn)

A new feature has been added to the RescriptRelay CLI! The new command goes through your code and removes all unused fields and fragments from your GraphQL queries, which in turn helps prevent over-fetching in your application.

yarn rescript-relay-cli remove-unused-fields
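As an illustration (the fragment and field names here are hypothetical), a field that no component ever reads would simply be dropped from the query:

Before:
fragment UserCard_user on User {
  name
  avatarUrl
  email # never read by the component
}

After running remove-unused-fields:
fragment UserCard_user on User {
  name
  avatarUrl
}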

GraphQL Learning

What can we do to support and easily onboard team members who may be new to GraphQL? What challenges do we face? Our panelists shared some of their favorite resources and methods that have worked for their teams, and brought up some great points about resources the community is still lacking.

The general consensus was that folks coming from REST can have trouble learning the GraphQL way of doing things. Concepts like introspection, and the fact that a GraphQL server typically returns a blanket HTTP 200 even when an operation fails, can cause errors and introduce security risks that may be overlooked by those learning GraphQL for the first time.
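For example, a request for an invalid field still comes back as HTTP 200, with the failure tucked into the errors array of the response body, so REST-style status-code checks alone won't catch it (illustrative response shape):

{
  "errors": [
    {
      "message": "Cannot query field \"emial\" on type \"User\". Did you mean \"email\"?",
      "locations": [{ "line": 3, "column": 5 }]
    }
  ]
}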

Our panelists also agreed that the community needs concise resources outlining some of these common pitfalls, as well as a simple, language-agnostic GraphQL starter package (think Create React App, but for GraphQL). Unfortunately, those resources don’t seem to exist just yet. However, our panelists still had some great tips and resources to share for those trying to learn GraphQL.

  • Kyle Schrade had a couple of suggestions that worked for his team:
    • Create internal documentation where you can keep track of answers to common questions.
    • Create a codegen setup so that it’s easier for new team members to set up subgraphs (see the sketch after this list).
  • Adhithi Ravichandran shared a great Pluralsight course she co-authored on GraphQL APIs with Apollo. This is a great language-agnostic intro to GraphQL that covers the differences between GraphQL and REST.
  • Jamie Barton created graphql.wtf to provide the GraphQL community with short, digestible videos on GraphQL concepts.
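For the codegen suggestion above, one common setup is GraphQL Code Generator, where a small shared config generates TypeScript types from the schema and operations. The schema URL and file paths below are placeholders, and this assumes GraphQL Code Generator is the tool in question:

# codegen.yml
schema: http://localhost:4000/graphql
documents: "src/**/*.graphql"
generates:
  src/generated/graphql.ts:
    plugins:
      - typescript
      - typescript-operations

New team members then only need to run the generator (for example, yarn graphql-codegen) to get typed operations for their subgraph.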

Authorization & Security

How can we handle authorization and security for our GraphQL APIs? What kinds of considerations need to be made? There is no “one size fits all” when it comes to handling authorization in GraphQL APIs.

As for where to place authorization logic, some argued for putting it at the resolver level (keeping auth logic close to business logic), while others preferred to lift it into a top layer to keep responsibilities separate. If the latter is your preference, Uri Goldshtein recommended GraphQL Authz for easily adding an authorization layer to your GraphQL API. The library is also compatible with all modern GraphQL architectures.
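To make the trade-off concrete, here is a minimal sketch of the resolver-level approach, where the check lives right beside the business logic it protects (the field name, context shape, and data source are hypothetical):

const resolvers = {
  Query: {
    payrollReport: (_parent, _args, context) => {
      // Authorization check sits next to the business logic it guards
      if (!context.user || !context.user.roles.includes('admin')) {
        throw new Error('Not authorized');
      }
      return context.dataSources.payroll.getReport();
    },
  },
};

The top-layer alternative pulls rules like this out of individual resolvers and applies them globally, via middleware, schema directives, or a dedicated library such as GraphQL Authz, so resolvers stay focused on fetching data.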

As mentioned earlier, there were some differing ideas around whether or not to allow schema introspection. Though some saw it as a potential security risk, Joey Nenni argued that the benefits for developers can be enormous and make for a better overall experience. If you've properly secured your API, allowing schema introspection shouldn't carry any significant risk.
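For teams that do want to disable introspection outside of development, most servers expose a switch for it. With Apollo Server, for example, it might look like the following (typeDefs and resolvers are assumed to be defined elsewhere):

import { ApolloServer } from 'apollo-server';

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Allow introspection everywhere except production
  introspection: process.env.NODE_ENV !== 'production',
});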

Federation

What are the benefits and challenges of federating? A pain point for Roy Derks is that underlying services need to be ready for federation. "It'd be nice if every schema could be federated with any schema," he said. To tackle that problem, Tanmai Gopal suggested doing the configuration at the GraphQL entry point rather than at the service level.

What companies use federation, and how are they implementing it? Many of our panelists' companies are either actively using or planning to use Apollo Federation.

Tanmai Gopal, founder of Hasura, said that Hasura does federation in two steps: service-to-service federation, and data-source-to-data-source federation. At PayPal, Staff Engineer Joey Nenni shared that they're currently working on implementing the Apollo Federation spec.

Marc-André Giroux also shared that Netflix uses the Apollo Federation spec, though they have their own gateway built in Kotlin. The observability that federation brings to their API, and the way it fits within Netflix's microservice architecture, work "beautifully" for his team. However, federating a monolith is perhaps not the best idea.

Lee Byron, co-creator of GraphQL, agreed and shared that Facebook has a different approach. "I think federation brings a lot of value, but at a lot of cost," he explained. Schema registries, query plans, and all of the other "stuff" that comes with federation can be a lot to manage.

"Facebook doesn’t use any of that ... instead they use a model ... where you have a single definition of your schema that exists at your gateway." - Lee Byron

Federation can be a great fit if your organization isn't a monolith and values team autonomy. "[At Facebook] team autonomy is an anti-pattern," Lee joked. Teams that value autonomy or that may not share a unified language, on the other hand, may find a lot of value in federating.

Other panelists agreed that company culture and team structure play a big part in determining whether to use a monolith or a microservice structure, which in turn can determine whether federation is a good fit for your project. Culture can be very difficult, and sometimes impossible, to change.

"Regardless of the decision the net result is very similar. … However it is that we arrive at that conclusion, it doesn’t matter that much, and so we should choose the one that is easier for your organization." - Lee Byron

What's new at GraphQL Foundation?

  • The Working Group, which contributes changes to the GraphQL spec, is about to run a vote for the leaders of its Technical Steering Committee. This steering committee will be responsible for overseeing the Working Group.

  • The GraphQL Foundation has a bunch of new members and sponsors, which has allowed them to launch a brand new community grant program in order to redistribute some of those funds and encourage community-driven development. You can find the grant program here!

  • There's a newly released cut of the GraphQL spec!

That's a Wrap!

GraphQL Contributor Days is always a great way to keep your ear to the ground on what's coming in the GraphQL ecosystem. This month's event provided some fascinating insight into how some of the top companies in the world are using GraphQL. If you'd like to watch the full event and hear our panelists dive deeper into some of these topics (and more), you can find it here!

This Dot Labs is a development consultancy that is trusted by top industry companies, including Stripe, Xero, Wikimedia, Docusign, and Twilio. This Dot takes a hands-on approach by providing tailored development strategies to help you approach your most pressing challenges with clarity and confidence. Whether it's bridging the gap between business and technology or modernizing legacy systems, you’ll find the breadth of experience and knowledge you need. Check out how This Dot Labs can empower your tech journey.
