GraphQL is the new REST — Part 1


GraphQL is an API query language and specification that offers a fundamentally different approach to building data-driven applications. Facebook first created the project while shifting its mobile app from HTML5 to a native implementation.

GraphQL was only released to the public, as an open-source project, in 2015.

The major features and benefits of GraphQL will make you rethink the way you build client apps and the way they communicate with backend servers to query or mutate data. For instance, GraphQL is:

  • A powerful query language used to communicate data between a client browser and a server.

  • An application-level query language rather than a database query language.

  • Platform-agnostic: whether on the server side or the client side, GraphQL integrates seamlessly with many programming languages (C#, Go, Java, Python, Node.js, etc.), provided an implementation is available.

  • Database-agnostic, enabling you to connect GraphQL to any database of your choice by implementing the hooks GraphQL requires.

  • Dedicated to a declarative data fetching approach. Within a GraphQL query, you define exactly what data or fields you are querying for and what input filters you are sending. You compose a query from objects and sub-objects as per your needs.

  • An alternative to REST, and a more flexible approach to managing data in your apps.

RESTful approach vs GraphQL approach

Let’s say that, in our app, we are tracking data about Books and Authors. With REST, you would define an API and expose multiple endpoints that a client app could communicate with.

For instance, you would define the following API endpoints:

  • /my-domain/Books — GET all books

  • /my-domain/Books/1 — GET book with ID = 1

  • /my-domain/Authors — GET all authors

  • /my-domain/Authors/1 — GET author with ID = 1

The cycle starts by requesting all books (/Books) in the library. With a book, you receive the author ID. To get details about the author, you issue a new request to the REST API on a different endpoint (/Authors/1). With an author, you receive a list of IDs for the books written by that author. To get the details of a specific book, you issue yet another request on a different endpoint (/Books/1).

With REST, the app is in continuous communication with the server just to query and traverse the data.
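To make that chattiness concrete, here is a rough client-side sketch of those round trips; the base URL, endpoint shapes, and field names such as authorId and bookIds are assumptions for illustration only:

// Each piece of related data costs the client another HTTP request.
async function loadBookWithAuthor(baseUrl) {
  // 1st request: all books
  const books = await fetch(`${baseUrl}/Books`).then(res => res.json());
  const firstBook = books[0];

  // 2nd request: the author of that book
  const author = await fetch(`${baseUrl}/Authors/${firstBook.authorId}`).then(res => res.json());

  // 3rd request: another book written by that author
  const otherBook = await fetch(`${baseUrl}/Books/${author.bookIds[0]}`).then(res => res.json());

  return { firstBook, author, otherBook };
}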

GraphQL is your saving grace. The above communication between the client and the server can be summarized with a single GraphQL query.

query {
	books {
		id
		name
		genre
		author {
			id
			name
			age
			books {
				id
				name
			}
		}
	}
}

The GraphQL standard allows you to build a graph of related data in the most efficient and concise way.

With a single query, you retrieve information about the books, the author of each book and all the books authored by that author.

In this series of articles (Part 1 and Part 2), we will build a GraphQL Server API and a GraphQL client app and have them communicate the GraphQL way.

Learning by example

The best way to learn GraphQL syntax and concepts is to build your own GraphQL Server API and client app. I will take you through the process of building a GraphQL Server API on top of Node.js and Express. At the same time, I will be using Angular 7 to build a client app that consumes the GraphQL Server API and performs CRUD operations for managing Books and Authors.

In Part 1 of this article, we will build the GraphQL Server API. In Part 2, we will build the GraphQL Angular Client app.

You can find the source code for this article on GitHub in the GraphQL CRUD repo.

Build the GraphQL Server API

The GraphQL API Server will be a Node.js application. Follow these steps to get your GraphQL server up and running.

Create a wrapping folder for the application

Start by creating a new folder named graphql-crud. This folder will hold the source code for both the client and the server.

Add the main JavaScript file for the Node.js app

Add a new folder named server inside the graphql-crud folder. Inside this new folder, create a new JS file named app.js. This file is the main Node.js file that will start the whole server-side application.

Let’s now add the NPM packages that we will need when building the server. Issue the following command:

npm install express express-graphql graphql mongoose cors --save

express: Enables the Node.js app to listen for requests and serve responses.

express-graphql: Enables the Node.js app to understand and process GraphQL requests/responses.

graphql: The JavaScript reference implementation for GraphQL.

mongoose: An object modeling library (ODM) for using MongoDB inside a Node.js app. We will be storing our data inside a MongoDB instance hosted on mLab's free online service.

cors: A Node.js package for providing a Connect/Express middleware that can be used to enable CORS with various options.

Require the needed libraries in the app.js file

Require the following libraries at the beginning of the app.js file:

const express = require('express');

// Import GraphQL middleware
const expressGraphQL = require('express-graphql');

// Import Mongo Db Client API
const mongoose = require('mongoose');

// Import CORs middleware to allow connections from another URL:PORT
const cors = require('cors');

Create the App instance and connect to MongoDB instance

To create a Node.js app instance:

const app = express();

To enable CORS on the Node.js app instance:

app.use(cors());

To connect to a MongoDB instance:

mongoose.connect('{CONNECTION_STRING}');

Wondering how to obtain a connection string to a MongoDB instance? If you are following along with mLab, create an account on mLab and then create a new MongoDB database instance. Don't forget to create a database user and password for the newly created database. Once the username and password are created, mLab provides you with a connection string that you can grab and drop into the line of code above.
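For orientation, a connection call with a hypothetical mLab connection string might look like the sketch below; the user, password, host, port, and database name are placeholders you replace with the values mLab gives you:

// Hypothetical connection string: replace user, password, host, port, and database name
mongoose.connect('mongodb://db_user:db_password@ds123456.mlab.com:23456/graphql-crud');

// Optional: log a message once the connection is open so you know the database is reachable
mongoose.connection.once('open', () => {
  console.log('Connected to MongoDB');
});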

Finally, configure the Node.js app to listen on Port 4000 for incoming requests.

app.listen(4000, () => {
    console.log('Listening for requests on port 4000');
});

Add GraphQL middleware

Express allows external APIs or packages to hook into the request/response pipeline by means of middleware. To redirect all requests coming in on /graphql to the GraphQL middleware, configure the Node.js app with the following (add this code just above the app.listen() call):

// The schema is defined in schema/schema.js (see the GraphQL Schema section below)
const schema = require('./schema/schema');

app.use('/graphql', expressGraphQL({
    schema,
    graphiql: true
}));

Any request to /graphql is now handled by the expressGraphQL middleware. This middleware requires the schema used by the GraphQL Server API, passed as a parameter; we will define this schema shortly. The graphiql: true option enables an in-browser IDE for running your queries and mutations against the GraphQL Server, which is very helpful for testing the API before an actual client app is connected.

GraphQL Schema

GraphQL's flexibility is governed by the flexibility your API provides: which object types can be queried or mutated, and which of their fields the API can return for clients to consume. This all depends on the schema and object types that you build and configure on the GraphQL Server. At the end of the day, GraphQL validates a query or mutation against the rules that the API developer has encoded in the schema. This is the real power of GraphQL.

A GraphQL Schema is more or less analogous to a RESTful API route tree. Defining a schema means:

  • You tell the GraphQL Server about the object types to expect, whether in the body of the queries/mutations or in the response generated by those queries/mutations.

  • You build the valid “Endpoints” that the GraphQL Server exposes to client applications.

Build a GraphQL Schema

Create a new JavaScript file schema.js inside a new folder named schema. Import the GraphQL library at the top of the file as follows:

const graphQL = require('graphql');

Next, import some of the GraphQL types to be used later throughout the code:

const {
  GraphQLObjectType,
  GraphQLString,
  GraphQLSchema,
  GraphQLID,
  GraphQLInt,
  GraphQLList,
  GraphQLNonNull
} = graphQL;

We are using the ES6 destructuring feature to extract the types from the graphql package. We will be using those types to define our schema below.

A schema is defined by a query and a mutation.

A query is a root endpoint exposing all available query API endpoints to the client apps.

A mutation is a root endpoint exposing all available mutation (update, create or delete) API endpoints to the client apps.

module.exports = new GraphQLSchema({
  query: RootQuery,
  mutation: Mutation
});

We define the RootQuery as follows:

const RootQuery = new GraphQLObjectType({
  name: 'RootQueryType',
  fields: {
    book: {
      // book() {} endpoint
      type: BookType,
      args: {
        id: {
          type: GraphQLID
        }
      },
      resolve(parent, args) {
        // mongoose api call to return a book by ID
      }
    },
    author: {
      // author() {} endpoint
      type: AuthorType,
      args: {
        id: {
          type: GraphQLID
        }
      },
      resolve(parent, args) {
        // mongoose api call to return the book author
      }
    },
    books: {
      // books {} endpoint
      type: new GraphQLList(BookType),
      resolve(parent, args) {
        // mongoose api call to return all books
      }
    },
    authors: {
      // authors {} endpoint
      type: new GraphQLList(AuthorType),
      resolve(parent, args) {
        // mongoose api call to return authors
      }
    }
  }
});

The RootQuery object is of type GraphQLObjectType. It has a name of RootQueryType and a fields object. Each field is effectively an endpoint that client apps can use to query the GraphQL Server API.

Let’s explore one of the endpoints. The rest will be similar.

book: {
      // book() {} endpoint
      type: BookType,
      args: {
        id: {
          type: GraphQLID
        }
      },
      resolve(parent, args) {
        // mongoose api call to return a book by ID
      }
    }

To define an endpoint, you provide the following fields:

type: The data that this endpoint returns. In this case, it is the BookType. We will get back to this type shortly.

args: The arguments that this endpoint accepts to filter the data accordingly and return a response. In this case, the code defines a single argument named id and is of type GraphQLID. This is a special type provided by GraphQL to indicate that this field is an ID of the object rather than a normal field.

resolve: A function that gets called by GraphQL whenever it is executing the book query endpoint. This function should be implemented by the developer to return a Book type based on the id argument passed to this query. We will fill in the gaps later on when we connect to a MongoDB instance to retrieve a book based on its id. The resolve() function receives the parent object (if any) and the list of arguments packaged under the args input parameter. Therefore, to access the book id passed to this query, you use args.id.

That's the power of GraphQL: it lets you, the API author, build a graph of objects, with resolve() functions that tell GraphQL how to fetch each object and sub-object as it walks the graph.
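For example, once the resolvers are implemented, a client could ask for a book and its author in one request; GraphQL runs the book resolver first and then the nested author resolver (the id value here is just a placeholder):

query {
	book(id: "1") {
		name
		genre
		author {
			name
			age
		}
	}
}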

The BookType is defined as follows:

const BookType = new GraphQLObjectType({
  name: 'Book',
  fields: () => ({
    // Fields exposed via query
    id: {
      type: GraphQLID
    },
    name: {
      type: GraphQLString
    },
    genre: {
      type: GraphQLString
    },
    author: {
      // How to retrieve Author on a Book object
      type: AuthorType,
      /**
       * parent is the book object retrieved from the query below
       */
      resolve(parent, args) {
        // mongoose api call to return the book author by authorId
      }
    }
  })
});

You define an object type by using GraphQLObjectType. A type is given a name (Book in our case), and the fields() function returns an object describing all the fields available on that type. The fields defined on the object type are the only fields a query response can include.

The Book type in our case defines the following fields:

  • An id field of type GraphQLID

  • A name field of type GraphQLString

  • A genre field of type GraphQLString

  • An author field of type AuthorType. AuthorType is another custom object type that we define ourselves (a sketch of it follows this list). This field is special because it defines a resolve() function, which receives two input parameters: parent and args. The parent parameter is the actual Book object queried by GraphQL. When GraphQL needs to return a field whose type is a custom object type, it executes that field's resolve() function to produce the object. Later on, we will add the mongoose API call that retrieves an author from the database by means of the Book.authorId field. This is the beauty of GraphQL: it gives the API developer full control over the pathways GraphQL can use to traverse the object model and return a single response per query, no matter how complicated the query is.
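The full AuthorType lives in the GitHub repo. As a minimal sketch, assuming an author has name and age fields (matching the query we ran at the start of the article), it might look like this:

const AuthorType = new GraphQLObjectType({
  name: 'Author',
  fields: () => ({
    id: {
      type: GraphQLID
    },
    name: {
      type: GraphQLString
    },
    age: {
      type: GraphQLInt
    },
    books: {
      // How to retrieve all Books written by an Author
      type: new GraphQLList(BookType),
      resolve(parent, args) {
        // mongoose api call to return all books whose authorId equals parent.id
      }
    }
  })
});

Note that fields is a function returning an object rather than a plain object; this lazy evaluation is what lets BookType and AuthorType reference each other without running into ordering problems.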

Let’s define the Mutation object used by the schema.

const Mutation = new GraphQLObjectType({
  name: 'Mutation',
  fields: {
    addBook: {
      // addBook() {} endpoint
      type: BookType,
      args: {
        name: {
          type: new GraphQLNonNull(GraphQLString)
        },
        genre: {
          type: new GraphQLNonNull(GraphQLString)
        },
        authorId: {
          type: new GraphQLNonNull(GraphQLID)
        }
      },
      resolve(parent, args) {
        // mongoose api call to insert a new book object and return it as a response
      }
    },
  }
});

Similar to the RootQuery object, the Mutation object defines the API endpoints that client apps can use to mutate or change data. We will go through one of the mutations here; the rest are available on the GitHub repo.

The code fragment defines the addBook endpoint. This endpoint returns data of type BookType and receives name, genre, and authorId as input parameters. Notice how we can make input parameters mandatory by using the GraphQLNonNull object. In addition, this mutation defines the resolve() function, which GraphQL executes when the endpoint is called. The resolve() function should contain the code that inserts a new Book object into MongoDB.
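We will wire this up in the next section but, as a preview, here is a minimal sketch of what the resolve() body could look like once the mongoose Book model (introduced below) is imported into schema.js:

resolve(parent, args) {
  // Build a new Book document from the mutation arguments and persist it.
  // save() returns a promise that resolves to the inserted book, which
  // GraphQL then uses to build the mutation response.
  const book = new Book({
    name: args.name,
    genre: args.genre,
    authorId: args.authorId
  });
  return book.save();
}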

Now that the GraphQL schema is defined, let's explore an example of how to connect to MongoDB using the mongoose client API.

Connect to MongoDB via mongoose API

Let's start by defining the schema of the database that mongoose will connect to. To do so, create a new folder named models. Then, for each collection (a table, in relational database terms), add a JavaScript file and define the corresponding mongoose model.

For now, add the book.js file with the following code:

const mongoose = require('mongoose');
const Schema = mongoose.Schema;
const bookSchema = new Schema({
   /**
   * No need to add "id" column
   * It is being created by mLab as "_id"
   */
   name: String,
   genre: String,
   authorId: String
});
module.exports = mongoose.model('Book', bookSchema);

This is the mongoose way of defining a collection inside MongoDB. The collection is created in the database the first time the app connects to the database instance and no such collection exists yet.

The Book collection in our case defines the name, genre, and authorId fields.
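The author model is available on the GitHub repo and follows the same pattern. A possible models/author.js, assuming name and age fields to match the AuthorType sketched earlier, would be:

const mongoose = require('mongoose');
const Schema = mongoose.Schema;
const authorSchema = new Schema({
   name: String,
   age: Number
});
module.exports = mongoose.model('Author', authorSchema);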

Let's import this model into our schema.js file and use it to talk to MongoDB. The rest of the models are available on the GitHub repo.

Start by requiring or importing the Book collection as follows:

const Book = require('../models/book');

Now let’s revisit the book query endpoint and implement its resolve() function to return an actual book from the database:

book: {
      // book() {} endpoint
      type: BookType,
      args: {
        id: {
          type: GraphQLID
        }
      },
      resolve(parent, args) {
        return Book.findById(args.id);
      }
    }

Simply by calling the findById() function on the Book model, you are retrieving a Book object based on its id field.

The rest of the mongoose API calls are available on the GitHub repo.
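They all follow the same pattern. As a rough sketch, assuming the Author model shown earlier is also required into schema.js, the remaining resolvers could look like this:

// books {} endpoint: return every book
resolve(parent, args) {
  return Book.find({});
}

// author field on BookType: parent is the Book being resolved
resolve(parent, args) {
  return Author.findById(parent.authorId);
}

// books field on AuthorType: all books written by this author
resolve(parent, args) {
  return Book.find({ authorId: parent.id });
}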

Now that the GraphQL schema is ready and we’ve already configured the Express-GraphQL middleware with this schema, it’s showtime!

Demonstrate the GraphiQL IDE

Run the server app by issuing the following command:

node app.js

The server is now up and running on port 4000. Let's open a browser and visit this URL: http://localhost:4000/graphql (the express-graphql middleware serves the GraphiQL IDE at the same /graphql endpoint when graphiql is set to true).

On the left side you can type in your queries or mutations.

In the middle is the results panel, where the results of a query or mutation are displayed.

On the right side you find the Documentation Explorer. You can use it to browse the documentation of the GraphQL Server API. Here you can see the available queries and mutations, together with their details, how to construct them, and which fields are available to you.

Let's add a single book by typing the following GraphQL mutation. Check the GraphQL website for a complete reference on how to construct queries and mutations.

mutation addBook($name: String!, $genre: String!, $authorId: ID!) {
	addBook(name: $name, genre: $genre, authorId: $authorId) {
		id
		name
	}
}

In the Query Variables section of the panel, on the left side, add the following variables:

{
  "name":"Learning GraphQL",
  "genre": "APIs", 
  "authorId": "cjktxh5i50w53yul6lp4n"
}

The mutation above creates a new Book record with a certain name, genre, and authorId and returns the id and name of the newly created Book.

The response generated by the above mutation is as follows:

{
  "data": {
    "addBook": {
      "id": "ecjkv56ab219vireudxw",
      "name": "Learning GraphQL"
    }
  }
}

To query this book, you issue the following:

query {
	book(id: "ecjkv56ab219vireudxw") {
		name
		genre
	}
}

Now, the GraphQL Server API is up and running. Let’s switch gears and build the GraphQL Angular Client app in Part 2 of this series.

Conclusion

GraphQL is still a relatively young technology. Given its rapid momentum, it looks poised to replace REST as the way we access and manipulate data in our applications.

In this first part of the series on GraphQL, we've built a GraphQL Server API on top of Node.js and Express. In Part 2 of this series, we will build a GraphQL client app with Angular.

Stay tuned!

