Level up your REST APIs with JSON Schema


JSON Schema isn’t a hot topic that gets a lot of attention compared to GraphQL or other similar tools. I discovered the power of JSON Schema while I was building a REST API with Fastify. What is it exactly? The website describes it as “the vocabulary that enables JSON data consistency, validity, and interoperability at scale”. Or more simply, it’s a schema specification for JSON data. This article is going to highlight some of the benefits gained by defining a JSON Schema for a REST API.

JSON Schema Basics

Here’s an example of a simple schema representing a user:

{
  "$id": "<https://example.com/schemas/user>",
  "$schema": "<http://json-schema.org/draft-07/schema#>",
  "type": "object",
  "properties": {
    "firstName": {
      "type": "string"
    },
    "lastName": {
      "type": "string"
    },
    "email": {
      "type": "string",
      "format": "email"
    },
    "age": {
      "type": "integer"
    },
    "newsletterSubscriber": {
      "type": "boolean"
    },
    "favoriteGenres": {
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  },
  "required": ["email"],
  "additionalProperties": false
}

If you’re familiar with JSON already, you can probably understand most of this at a glance. This schema represents a JSON object whose properties define a user in our system. Along with the object’s properties, we can define additional metadata about the object: which fields are required, and whether the schema accepts additional properties that aren’t defined in the properties list.
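To make that concrete, here’s a hypothetical payload that would pass validation against this schema, and one that would fail (the values are made up for illustration):

// Valid: "email" is present and every property matches its declared type.
{
  "email": "dane@example.com",
  "firstName": "Dane",
  "favoriteGenres": ["jazz", "metal"]
}

// Invalid: "email" is missing, "age" is not an integer,
// and "nickname" is rejected because additionalProperties is false.
{
  "firstName": "Dane",
  "age": "thirty",
  "nickname": "D"
}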

Types

We covered a lot of types in our example schema. The root type of our JSON schema is an object with various properties defined on it. The base types available in JSON Schema map to JSON's own data types: object, array, string, number, integer (a number with no fractional part), boolean, and null. Check the type reference page to learn more.

Formats

The email property in our example has an additional field named format next to its type. The format keyword lets us attach a semantic identifier to string values, so the schema definition can be more specific about which values are allowed for a given field. “hello” is a perfectly valid string, but it is not a valid value for a field declared with the email format.

Another common example is for date or timestamp values that get serialized. Validation implementations can use the format definition to make sure a value matches the expected type and format defined. There’s a section on the website that lists the various formats available for the string type.
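For example, a hypothetical createdAt property could be constrained to an ISO 8601 timestamp like this:

"createdAt": {
  "type": "string",
  "format": "date-time"
}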

Schema Structuring

JSON Schema supports referencing schemas from within a schema. This is a very important feature that helps us keep our schemas DRY. Looking back at our initial example, we might want to define a schema for a list of users. Since we gave our user schema an $id of "https://example.com/schemas/user", we can use that identifier to reference it from another schema.

{
  "type": "array",
  "items": {
    "$ref": "https://example.com/schemas/user"
  }
}

In this example, we have a simple schema that is just an array whose items definition references our user schema. It behaves exactly as if we had defined the entire user schema inline inside "items". The JSON Schema website has a page dedicated to structuring schemas.
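In Fastify, shared schemas are registered once with addSchema and can then be referenced by their $id from any route schema. Here’s a minimal sketch under those assumptions (the /users route and its handler are made up for illustration):

const fastify = require('fastify')()

// Register the shared user schema once; routes can reference it by its $id.
fastify.addSchema({
  $id: 'https://example.com/schemas/user',
  type: 'object',
  properties: {
    firstName: { type: 'string' },
    email: { type: 'string', format: 'email' }
  },
  required: ['email'],
  additionalProperties: false
})

// The request body for this route must match the referenced user schema.
fastify.post('/users', {
  schema: {
    body: { $ref: 'https://example.com/schemas/user#' }
  }
}, async (request) => {
  return request.body
})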

JSON Schema Benefits

Validation

One of the main benefits of defining a schema for your API is being able to validate inputs and outputs. Inputs include things like the request body, URL parameters, and query string parameters. Outputs include your response JSON data and headers. There are several libraries available to handle schema validation. A popular choice, and the one used by Fastify, is Ajv.
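Here’s a rough sketch of standalone validation with Ajv (note that the separate ajv-formats package is needed for format keywords like email):

const Ajv = require('ajv')
const addFormats = require('ajv-formats')

const ajv = new Ajv()
addFormats(ajv) // enables "format" keywords such as "email"

// Compile the schema once; the result is a reusable validation function.
const validate = ajv.compile({
  type: 'object',
  properties: {
    email: { type: 'string', format: 'email' },
    age: { type: 'integer' }
  },
  required: ['email'],
  additionalProperties: false
})

console.log(validate({ email: 'dane@example.com', age: 30 })) // true
console.log(validate({ email: 'hello' })) // false
console.log(validate.errors) // details about the failed "format" check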

Security

Validating inputs has some security advantages. It can prevent bad or malicious data from being accepted by your API. For instance, you can specify that a certain field must be an integer, or that a string must match a certain regex pattern. This can help prevent attacks like SQL injection and cross-site scripting (XSS).

Defining a schema for your response types can help prevent leaking sensitive data from your database. Your web server can be configured to strip any data not defined in the response schema before it is sent to the client.
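In Fastify, for example, the response schema effectively acts as a whitelist during serialization. Continuing the hypothetical app from the earlier sketch (the passwordHash field is made up):

fastify.get('/users/:userId', {
  schema: {
    response: {
      200: {
        type: 'object',
        properties: {
          firstName: { type: 'string' },
          email: { type: 'string', format: 'email' }
        }
      }
    }
  }
}, async () => {
  // passwordHash is not declared in the response schema,
  // so it is never serialized into the response body.
  return { firstName: 'Dane', email: 'dane@example.com', passwordHash: 'secret' }
})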

Performance

By validating data at the schema level, you can reject invalid requests early, before they reach more resource-intensive parts of your application. This can help protect against Denial of Service (DoS) attacks.

fast-json-stringify is a library that compiles optimized stringify functions from JSON schemas, which can help improve response times and throughput for JSON APIs.
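A minimal sketch of how it works: you compile a stringify function from a schema up front, and only the declared properties are serialized.

const fastJson = require('fast-json-stringify')

// Compile the stringify function once from the schema.
const stringifyUser = fastJson({
  type: 'object',
  properties: {
    firstName: { type: 'string' },
    email: { type: 'string' }
  }
})

console.log(stringifyUser({ firstName: 'Dane', email: 'dane@example.com', extra: true }))
// -> {"firstName":"Dane","email":"dane@example.com"}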

Documentation

JSON Schema also greatly aids in API documentation. Tools like OpenAPI and Swagger use JSON Schema to automatically generate human-readable API documentation. This documentation provides developers with clear, precise information about your API’s endpoints, request parameters, and response formats. This not only helps to maintain consistent and clear communication within your development team, but also makes your API more accessible to outside developers.

Type-safety

I plan to cover this in more detail in an upcoming post, but there are tools available that can help achieve type-safety on both the server and client side by pairing JSON Schema with TypeScript. In Fastify, for example, you can infer types in your request handlers based on your JSON Schema specifications.

Schema Examples

I’ve taken some example schemas from the Fastify website to walk through how they would work in practice.

### queryStringJsonSchema

const queryStringJsonSchema = {
  type: 'object',
  properties: {
    name: { type: 'string' },
    excitement: { type: 'integer' }
  },
  additionalProperties: false
}

We would use this schema to define, validate, and parse the query string of an incoming request in our API.

Given a query string like: ?name=Dane&excitement=10&other=additional - we can expect to receive an object that looks like this:

{
  name: "Dane",
  excitement: 10
}

Since additional properties are not allowed, the other property, which isn’t defined in our schema, gets stripped out during parsing.
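In Fastify, attaching this schema to a route is done through the querystring key. A sketch under those assumptions (the route path and handler are made up):

fastify.get('/greet', {
  schema: { querystring: queryStringJsonSchema }
}, async (request) => {
  // request.query has already been validated, coerced, and stripped of unknown keys.
  const { name, excitement } = request.query
  return `Hello ${name}${'!'.repeat(excitement)}`
})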

### paramsJsonSchema

Imagine we have a route in our API defined like /users/:userId/posts/:slug

const paramsJsonSchema = {
  type: 'object',
  properties: {
    userId: { type: 'number' },
    slug: { type: 'string' }
  },
  additionalProperties: false,
  required: ["userId", "slug"]
}

Given this url: /users/1/posts/hello-world - we can expect to get an object in our handler that looks like this:

{
  userId: 1,
  slug: "hello-world"
}

We can be sure about this since the schema doesn’t allow additional properties and both properties are required. If either field were missing or didn’t match its type, our API could automatically return an appropriate error response code.
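Wiring this up in Fastify looks much the same, this time under the params key (again, just a sketch):

fastify.get('/users/:userId/posts/:slug', {
  schema: { params: paramsJsonSchema }
}, async (request) => {
  // userId has been coerced to a number, and both params are guaranteed to be present.
  const { userId, slug } = request.params
  return { userId, slug }
})

// A request like /users/abc/posts/hello-world fails validation,
// and Fastify automatically responds with a 400 error.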

To recap what we gain here: we can provide fine-grained schema definitions for all the inputs and outputs of our API. Aside from serving as documentation and a specification, the schema powers validation, parsing, and sanitization of values. I’ve found this to be a simple and powerful tool in my toolbox.

Summary

In this post, we've explored the power and functionality of JSON Schema, a tool that often doesn't get the spotlight it deserves. We've seen how it provides a robust structure for JSON data, ensuring consistency, validity, and interoperability on a large scale. Through our user schema example, we've delved into key features like types, formats, and the ability to structure schemas using references, keeping our code DRY. We've also discussed the substantial benefits of using JSON Schema, such as validation, enhanced security, improved performance, and the potential for type-safety. We've touched on useful libraries like Ajv for validation and fast-json-stringify for performance optimization.

In a future post we will explore how we can utilize JSON Schema to achieve end-to-end type-safety in our applications.
