
Intro to EdgeDB - The 10x ORM

I’ve written a couple of posts recently covering different TypeScript ORMs: one about Prisma, and another about Drizzle. ORMs are a controversial topic in their own right - some people think they are evil, and others think they are great. I enjoy them quite a bit; they make it easy to interact with your databases, and what is more important and magical for an application than data? SQL without an ORM is amazing as well, but there are some pain points with that approach. Today I’m excited to write about EdgeDB, which from my perspective isn’t exactly an ORM or a database (although they call themselves one). It is, however, an incredibly impressive piece of technology that solves these common pain points in a pretty novel way.

So if it’s not an ORM or a database, what exactly is it?

I don’t think I can answer that in one or two sentences, so we will explore the various pieces that make up EdgeDB in this article. From a high-level standpoint, though, it’s an interface and query language that sits in front of PostgreSQL. This may sound like an unimportant implementation detail, but in my eyes, it’s a feature and one of the most compelling selling points.

Data Modeling

EdgeDB advertises itself as a “graph-relational database”. The core components of EdgeDB data modeling are the schema, the type system, and relationship definitions. A schema consists of objects that contain typed properties and links that connect the objects. In SQL terms, a table is analogous to an object, and a foreign key is analogous to a link.

Here’s what a simple schema in EdgeDB looks like:

type User {
  required email: str {
    constraint exclusive;
  };
}

type Post {
  required content: str;
  required author: User;
}

There are a few things to highlight here:

  • We defined two different objects (tables): User and Post
  • Each object contains properties with their types defined
  • str is one of several scalar types (bool, int, float, json, datetime, etc.)
  • The author property is a required link to the User object

Defining relations / associations

In our example above, we defined a one-to-many relationship between a user and posts. All the relation types that you can define in traditional SQL are available. One interesting feature, though, is backward links (backlinks). These can be defined in your schema and allow you to access related data from both sides of a relationship.

type User {
  name: str;
  multi likes: Tweet;
}

type Tweet {
  text: str;
  multi likers := .<likes[is User];
}

The likes link defines a many-to-many relationship between Tweet and User. With a backlink defined - multi likers := .<likes[is User]; - we can access likes from a User and likers from a Tweet.

select User {
    name,
    likes: {
        text
    }
};

select Tweet {
    text,
    likers: {
        name
    }
};

That's how we can access these relations in our queries. You might be looking at these queries and thinking they look a lot like GraphQL. This is why they call it a ‘graph-relational’ database.

We’ve only scratched the surface of EdgeDB schemas. Hopefully, I’ve at least managed to pique your interest.

Computed properties

Computed properties are a super powerful feature that can be added to your schemas or queries. This example user schema creates computed discriminator and username properties. The discriminator uses the EdgeDB standard library to generate a random four-digit number, and the username property is a combination of the name and discriminator properties.

type User {
  required name: str;
  discriminator := <int64>math::floor(std::random() * 9000 + 1000);
  username := .name ++ "#" ++ <str>.discriminator;
}

Globals and Access Policies

EdgeDB allows you to define global variables as part of your schema. The most common use case I’ve seen for this is to power the access policy feature.

You can define a global variable as part of your schema: global current_user: uuid;

With a global variable defined, you can supply the value as a sort of context from your application by passing it into your EdgeDB driver/client.

const client = createClient().withGlobals({
  current_user: '2141a5b4-5634-4ccc-b835-437863534c51',
});

You can then add access policies directly to your schema, for example, to provide fine-grained access control to blog posts in your blogging application.

type BlogPost {
  required title: str;
  required author: User;

  access policy author_has_full_access
    allow all
    using (global current_user    ?= .author.id
      and  global current_country ?= Country.Full) {
      errmessage := "User does not have full access";
    };

  access policy author_has_read_access
    allow select
    using (global current_user    ?= .author.id
      and  global current_country ?= Country.ReadOnly);
}

Aside from access policies, you can use your global variables in your queries as well. For example, here’s a query to select the current user:

select User filter .id = global current_user;

Types and Sets

EdgeDB is very type-centric. All of your data is strongly typed. We’ve touched on some of the types already.

  • Scalars - There are a lot of scalar types available out of the box
  • Custom scalars - Custom scalar types are user-defined extensions of existing types (see the schema sketch after this list)
  • Enums - Supported out of the box - enum<Admin, Moderator, Member>
  • Arrays - Defined by passing the singular value type - array<str>
  • Tuples - In EdgeDB, tuples can contain more than 2 elements and come in unnamed and named varieties - tuple<str, int64>; tuple<name: str, jersey_number: float64, active: bool>
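
Here’s a small sketch of how these types might come together in a schema - the Username, Role, and Account names are hypothetical, not part of the example app:

scalar type Username extending str {
  # a user-defined scalar extending str with a constraint
  constraint min_len_value(3);
}

scalar type Role extending enum<Admin, Moderator, Member>;

type Account {
  required handle: Username;
  required role: Role;
  tags: array<str>;
  stats: tuple<followers: int64, following: int64>;
}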

All queries return a set, which is a collection of values of a given type. In the query language, every value is a set, written as a comma-separated list of values inside {curly braces}. A query with no results returns an empty set - if we have no User values stored yet, select User returns {}.
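
A few simple set expressions you can try in the REPL illustrate this:

select {'a', 'b', 'c'};  # a set of three str values
select <int64>{};        # an empty set of int64
select count(User);      # functions like count operate on whole sets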

Paired with the query language, types and sets provide an incredibly powerful and expressive system for interacting with your data. If you thought TypeScript was cool, wait until you start writing EdgeQL! 🙂

EdgeQL

Now to the fun stuff: the query language. We’ll use a schema from the docs, start by looking at some simple queries, and build on those.

The example schema has an abstract type Person with two sub-types extending it: Hero and Villain. This is known as a polymorphic type in EdgeDB. The Movie type includes a multi link (a one-to-many association) to Person.

module default {
  abstract type Person {
    required name: str { constraint exclusive };
  }

  type Hero extending Person {
    secret_identity: str;
    multi villains := .<nemesis[is Villain];
  }

  type Villain extending Person {
    nemesis: Hero;
  }

  type Movie {
    required title: str { constraint exclusive };
    required release_year: int64;
    multi characters: Person;
  }
}

Selecting properties / data

Before we dig into some real queries, we should touch on how we select actual data from a query. It’s pretty obvious and GraphQL-like, but worth mentioning. To specify which properties to select, you attach a shape. This works for getting nested link/association data as well.

Based on our schema, here’s how we could select fields from Movie, including data from the collection of related characters.

select Movie {
  title,
  release_year,
  characters: {
    name
  }
};

There is also a feature called splats that allows you to select all properties and/or all linked properties without specifying them individually.

# select all properties
select Movie {*}; 

# select all properties including linked properties
select Movie {**};

If you don’t specify any properties or splats, only ids get returned: select Movie;

Adding some objects with insert

To get started, we can use insert to add objects to our database.

We’ll start big by looking at the nested insert example. This example is interesting because it shows the creation of two objects in a single query. You’ll notice the simplicity of the syntax. Even though this is the first EdgeQL query we’re looking at, in my experience, it’s like this across the board. I’ve found EdgeQL queries to be simple and intuitive to the point where I’ve been able to intuit how to accomplish things in my head without having to reference the docs or ask the AI.

This example adds a new Villain and a new Hero, which gets assigned as a link on the nemesis field of our Villain. To accomplish this, we can nest queries by wrapping them in ().

insert Villain {
  name := "The Mandarin",
  nemesis := (insert Hero {
    name := "Shang-Chi",
    secret_identity := "Shaun"
  })
};

The next example is pretty similar, but instead of creating the linked objects, we are selecting existing ones and adding them to the characters list of Movie, since it is a multi link. This is a pretty complex query that is doing a lot of different things, and it’s deceptively succinct - accomplishing the same thing in SQL would probably take about three separate queries. This query finds the objects to add to the characters multi link by filtering on a set of strings to match against the name property.

insert Movie {
  title := "Spider-Man: No Way Home",
  release_year := 2021,
  characters := (
    select Person
    filter .name in {
      'Spider-Man',
      'Doctor Strange',
      'Doc Ock',
      'Green Goblin'
    }
  )
};

The last thing we’ll cover for insert is bulk inserts. This is particularly useful for things like seed scripts.

In this example, you can just imagine that you have a JSON array of objects with hero names that gets passed in as an argument to your query:

with
  raw_data := <json>$data,
for item in json_array_unpack(raw_data) union (
  insert Hero { name := <str>item['name'] }
);
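
Here’s a sketch of what invoking this from the TypeScript client might look like - the heroes array is hypothetical seed data, and JSON parameters are passed as serialized strings:

import * as edgedb from "edgedb";

const client = edgedb.createClient();

// hypothetical seed data
const heroes = [{ name: "Ms. Marvel" }, { name: "Ironheart" }];

await client.query(`
  with raw_data := <json>$data,
  for item in json_array_unpack(raw_data) union (
    insert Hero { name := <str>item['name'] }
  )
`, { data: JSON.stringify(heroes) });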

Querying data with select

We’ve already seen subqueries and a select in the last section where we found a collection of Person records with a filter. We’ll build on that and see what tools are available to us when it comes to querying data.

This one covers a lot of ground. Very similar to SQL, we have order by, offset, and limit clauses to support sorting and pagination. There is also a whole standard library of functions and operators, like count, that can be used in our queries. This example returns a collection of villain names, excluding the first and last result.

select Villain {name}
order by .name
offset 1
limit count(Villain) - 2;

Most commonly, you will want to filter by an id:

select Villain {*} filter .id = <uuid>"6c22c502-5c03-11ee-99ff-cbacc3918129";

Here’s another common example: filtering by datetime (this one assumes a release_date property on Movie). Since we’re using a string value here, we need to cast it to an EdgeDB datetime type.

select Movie {*}
filter
    Movie.release_date > <cal::local_datetime>'2020-01-01T00:00:00';

You get a pretty similar toolbox to SQL when it comes to filtering, with all the common operators. Combined with all the tools in the standard library, you can get pretty creative with it.
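
For instance, here’s a small sketch against the example schema combining a case-insensitive pattern match with the exists operator:

select Hero { name, secret_identity }
filter .name ilike 'the %' and exists .villains;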

Updating data with update

The update..filter..set statement is how we can update existing data with EdgeQL. set is followed by a shape with assignments of the properties to be updated.

update Hero
filter .name = "Hawkeye"
set { name := "Ronin" };

You can replace the links for an object:

update Movie
filter .title = "Black Widow"
set {
  characters := (
    select Person
    filter .name in { "Black Widow", "Yelena", "Dreykov" }
  )
};

or add additional ones

update Movie
filter .title = "Black Widow"
set {
  characters += (insert Villain {name := "Taskmaster"})
};

An even more interesting example is removing links matched on a type. Since Villain is a sub-type of Person, this query will remove all linked characters of the Villain type.

update Movie
filter .title = "Black Widow"
set {
  characters -= Villain # remove all villains
};

Deleting objects with delete

Deleting is pretty straightforward. Using the delete command, you can just filter for the objects that you would like to remove.

delete Hero
filter .name = 'Iron Man';
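
Like insert and update, delete is an expression that returns the deleted objects, so you can wrap it in a select and apply a shape to see what was removed - a small sketch:

select (delete Hero filter .name = 'Iron Man') { name, secret_identity };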

When the EdgeQL pieces fall into place

As you become more familiar with the EdgeQL query language, chances are you’ll start writing very complex queries fluently, because everything just makes sense once you’ve learned the building blocks.
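
For a taste, here’s a sketch over the example schema that combines a with block, a type intersection, and a computed count in a single query:

with recent := (select Movie filter .release_year >= 2020)
select recent {
  title,
  villain_count := count(.characters[is Villain])
}
order by .release_year desc;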

Domain and business concerns

I don’t think they explicitly mention this as a goal anywhere, but it’s something that I picked up on pretty quickly: EdgeDB nudges you to move more of what might traditionally have been application logic into your database layer. This is a topic that can bring a lot of division, since even things like foreign keys and constraints in SQL are frowned upon in some circles. EdgeDB goes as far as providing constraints, global variables, contexts, and authorization support built into the database. I think the ability to bake some of these concerns into your EdgeDB schema is great. The way you model your schema and database in EdgeDB maps to your domain in a much more intuitive way, so domain concerns don’t really feel out of place there.

Database Clients, Query Builders, and Generators

We’ve covered a lot so far to highlight what EdgeDB is and how to handle common use cases with the query language. To use it in your project, though, you will need a client/driver library. There are clients available in several different languages. The one they have clearly put the most investment into is the TypeScript query builder. We’ll briefly look at both options: the simple driver/client and the query builder. Whichever you end up choosing, you will need to instantiate a client and make sure you have a connection to your database instance configured.

Basic client

Although the TS query builder is very popular and pretty amazing, I couldn’t get away from just writing EdgeQL queries. In my application, I composed queries using template strings, and it worked great. The clients all have a set of methods for passing in EdgeQL queries and parameters.

querySingle is a method for queries where you are only expecting a single result. If your query will have multiple results, you would use query instead. There is also queryRequiredSingle, which will throw an error if no results are found. There are some other methods available as well, including one for running queries in a transaction.

import * as edgedb from "edgedb";

const client = edgedb.createClient();

async function main() {
  const result = await client.querySingle(`
    select Movie {
      title,
      actors: {
        name,
      }
    } filter .title = <str>$title
  `, { title: "Iron Man 2" });

  console.log(JSON.stringify(result, null, 2));
}

The first argument is the query, and the second is a map of parameters. In this example we include the title parameter and it is accessed in our query via $title.
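
Speaking of transactions, here’s a minimal sketch reusing names from earlier examples - the callback is retried automatically on transient errors:

import * as edgedb from "edgedb";

const client = edgedb.createClient();

async function main() {
  await client.transaction(async (tx) => {
    // rename a hero and remove a villain atomically
    await tx.execute(`update Hero filter .name = "Hawkeye" set { name := "Ronin" };`);
    await tx.execute(`delete Villain filter .name = "Taskmaster";`);
  });
}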

TypeScript query builder

If you have a TypeScript app and type-safety is important, you might prefer using the query builder. It is a pretty incredible feat of TypeScript magic initially developed by the same developer behind the popular library Zod. We can’t cover it in very much depth here but we’ll look at an example just to have an idea of what the query builder looks like in an application.

import * as edgedb from "edgedb";
import e from "./dbschema/edgeql-js";

const client = edgedb.createClient();

async function main() {
  // result will be inferred based on the query
  const result = await e
    .select(e.Movie, () => ({
      title: true,
      actors: () => ({ name: true }),
      filter_single: { title: "Iron Man 2" },
    }))
    .run(client);

  console.log(JSON.stringify(result, null, 2));
}

The query builder is able to infer the result type automatically: it knows which fields you’ve selected, and it knows that the result will be a single item because of filter_single.
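
Roughly speaking, the inferred type for the query above would look something like this (a hand-written sketch, not actual generator output):

type MovieResult = {
  title: string;
  actors: { name: string }[];
} | null;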

Query generator

There are generators for queries and types, so even if you opt out of using the query builder, you can still have queries that are strongly typed. It’s a nice option if you want to just write your queries as EdgeQL in .edgeql files.

└── queries
    └── getUser.edgeql
    └── getUser.query.ts    <-- generated file
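
The .edgeql file contains a plain EdgeQL query. A hypothetical getUser.edgeql for this example might look like this:

# getUser.edgeql
select User {
  name,
  email
} filter .email = <str>$email;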

We end up with an exported function named getUser that is strongly typed.

import { getUser } from "./queries/getUser.query";

const user = await getUser(client, { email: "user@example.com" }); // GetUserReturns

Tools and Utilities

The team at EdgeDB puts a big emphasis on developer experience. It shows up all over the place. We’ve already seen some utilities with the generators that are available. There are some other tools available as well that help complete the entire experience.

EdgeDB CLI

The first and most important tool to mention is the CLI. If you’ve started using EdgeDB, then you’ve most likely already installed and used it. The CLI is pretty extensive. It includes commands for things like migrations, managing EdgeDB versions and installations, managing projects and local/cloud database instances, dumps and restores, a REPL, and more. The CLI makes managing EdgeDB a breeze.
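
A few representative commands give a feel for the scope (exact subcommands and flags may vary by version):

edgedb project init      # scaffold a project and link a local instance
edgedb migration create  # generate a migration from schema changes
edgedb migrate           # apply pending migrations
edgedb ui                # open the admin UI in your browser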

Admin UI

The CLI includes a command to launch an admin UI for any project or database. The Admin UI includes an awesome interactive diagram of your database schema, a REPL for running queries, and a data view to inspect and make changes to the data stored in your database.

Summary

Adopting newer database technology is a tough sales pitch. Replacing your application’s database technology at any point in its lifecycle is not a problem that anyone wants to have. This is one of the reasons why EdgeDB being built on top of PostgreSQL is a huge feature in my opinion. The underlying database technology is tried and true, and EdgeDB is open-source. Based on this, I would feel confident using EdgeDB if it aligned well from a technical and business perspective.

We’ve covered a lot of ground in this post. EdgeDB is feature-packed and powerful. Databases are a tough nut to crack, and I commend the team for all their hard work to help continue pushing forward one of the most important components of almost any application. I’m typically pretty conservative when it comes to databases, but EdgeDB took a great approach, in my opinion. I recommend at least giving it a try. You might catch the EdgeDB bug like I did!

