
Testing with Vitest


Vitest is a new testing framework powered by Vite. It's still in development, and some features may not be ready yet, but it's a promising alternative worth exploring.

Setup

Let's create a new Vite project!

Note: Vitest requires Vite >=v2.7.10 and Node >=v14 to work.

npm init vite@latest

✔ Project name: · try-vitest
✔ Select a framework: · svelte
✔ Select a variant: · svelte-ts

cd try-vitest
npm install # use the package manager you prefer
npm run dev

With our project created, we now need to install all the dependencies required for Vitest to work.

npm i -D vitest jsdom

I added jsdom so we can mock the DOM APIs. By default, Vitest uses the configuration from vite.config.ts. I'll make one Svelte-specific modification to it: disabling Svelte's hot module replacement when running tests.

It should look like the following:

import { defineConfig } from 'vite'
import { svelte } from '@sveltejs/vite-plugin-svelte'

export default defineConfig({
  plugins: [
    svelte({ hot: !process.env.VITEST }),
  ],
})

I'm using the VITEST environment variable to detect when tests are running, but if your test configuration diverges too much from your Vite config, you can use a separate configuration file for tests. There are a couple of ways to do this:

  • Create a configuration file named vitest.config.ts: it will take precedence over vite.config.ts when running tests (see the sketch after this list)
  • Use the --config flag: for example, npx vitest --config <path_to_file>
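
For example, a dedicated vitest.config.ts could look something like this minimal sketch. Keep in mind that when this file exists, Vitest uses it instead of vite.config.ts, so the Svelte plugin has to be repeated here (and hot can simply be set to false, since this config only runs for tests):

import { defineConfig } from 'vitest/config'
import { svelte } from '@sveltejs/vite-plugin-svelte'

export default defineConfig({
  plugins: [svelte({ hot: false })], // no need to check process.env.VITEST here
  test: {
    environment: 'jsdom',
  },
})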

Writing tests

Let's write some tests for the Counter component created by default with our project.

<script lang="ts">
  let count: number = 0
  const increment = () => {
    count += 1
  }
</script>

<button on:click={increment}>
  Clicks: {count}
</button>

<style>
  button {
    font-family: inherit;
    font-size: inherit;
    padding: 1em 2em;
    color: #ff3e00;
    background-color: rgba(255, 62, 0, 0.1);
    border-radius: 2em;
    border: 2px solid rgba(255, 62, 0, 0);
    outline: none;
    width: 200px;
    font-variant-numeric: tabular-nums;
    cursor: pointer;
  }

  button:focus {
    border: 2px solid #ff3e00;
  }

  button:active {
    background-color: rgba(255, 62, 0, 0.2);
  }
</style>

To write our first set of tests, let's create a file named Counter.spec.ts next to our component.

// @vitest-environment jsdom
import { tick } from 'svelte';
import { describe, expect, it } from 'vitest';
import Counter from './Counter.svelte';

describe('Counter component', function () {
  it('creates an instance', function () {
    const host = document.createElement('div');
    document.body.appendChild(host);
    const instance = new Counter({ target: host });
    expect(instance).toBeTruthy();
  });

  it('renders', function () {
    const host = document.createElement('div');
    document.body.appendChild(host);
    new Counter({ target: host });
    expect(host.innerHTML).toContain('Clicks: 0');
  });

  it('updates count when clicking a button', async function () {
    const host = document.createElement('div');
    document.body.appendChild(host);
    new Counter({ target: host });
    expect(host.innerHTML).toContain('Clicks: 0');
    const btn = host.getElementsByTagName('button')[0];
    btn.click();
    await tick();
    expect(host.innerHTML).toContain('Clicks: 1');
  });
});

Adding the comment @vitest-environment jsdom at the top of the file lets us use the DOM APIs in every test in that file. Repeating it in every file can be avoided by setting the environment in the config file. We can also expose describe, it, and expect as globals there, so they don't need to be imported. To make TypeScript aware of those globals, add vitest/globals to the types in your tsconfig.json (you can skip this if you're not using TypeScript).

// vite.config.ts
// The triple-slash reference below lets TypeScript recognize the `test` option
/// <reference types="vitest" />
import { defineConfig } from 'vite'
import { svelte } from '@sveltejs/vite-plugin-svelte'

export default defineConfig({
  plugins: [svelte({ hot: !process.env.VITEST })],
  test: {
    globals: true,
    environment: 'jsdom',
  },
});

And in tsconfig.json:

{
  "extends": "@tsconfig/svelte/tsconfig.json",
  "compilerOptions": {
    "target": "esnext",
    "useDefineForClassFields": true,
    "module": "esnext",
    "resolveJsonModule": true,
    "baseUrl": ".",
    "allowJs": true,
    "checkJs": true,
    /* Add the next line if you're using globals */
    "types": ["vitest/globals"]
  },
  "include": ["src/**/*.d.ts", "src/**/*.ts", "src/**/*.js", "src/**/*.svelte"]
}

Our test files don't need to import globals now, and we can remove the jsdom environment setup.

import { tick } from 'svelte';
import Counter from './Counter.svelte';

describe('Counter component', function () {
  // tests are the same
});

Commands

There are four commands you can run from the CLI:

  • dev: run Vitest in development mode
  • related: run the tests covering a list of source files
  • run: run the tests once, without watching
  • watch: the default mode, same as running vitest; it watches for changes and then reruns the tests
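
For example, assuming the default project layout from the template (with the Counter component at src/lib/Counter.svelte), these can be invoked with npx:

npx vitest                                  # watch mode (the default)
npx vitest run                              # run the whole suite once and exit
npx vitest related src/lib/Counter.svelte   # run only tests related to this source file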

Test/suite modifiers

There are modifiers for tests and suites that will change how your tests run.

  • .only will focus on one or more tests, skipping the rest. On a suite, it focuses on all the tests inside it.
  • .skip will skip the specified test/suite.
  • .todo will mark a test or suite to be implemented later.
  • .concurrent will run consecutive tests marked as concurrent in parallel. On a suite, it runs all of its tests in parallel. This modifier can be combined with the previous ones, for example: it.concurrent.todo("do something async")
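
Here is a small sketch of the modifier syntax (the tests are made up just to illustrate it; .only is left out because it would skip everything else in the file). The import line can be dropped if you enabled globals as shown earlier.

import { describe, expect, it } from 'vitest';

it.skip('is skipped for now', () => {
  expect(true).toBeTruthy();
});

it.todo('cover the edge cases later');

describe.concurrent('async suite', () => {
  // both tests run in parallel because the suite is marked as concurrent
  it('resolves a value', async () => {
    expect(await Promise.resolve(1)).toBe(1);
  });

  it('resolves another value', async () => {
    expect(await Promise.resolve(2)).toBe(2);
  });
});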

Assertions

Vitest ships with Chai-based, Jest-compatible assertions:

expect(true).toBeTruthy() // passes
expect(1).toBe(Math.sqrt(4)) // fails: Math.sqrt(4) is 2, not 1

For a list of available assertions, check the API docs.
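
A couple more examples as a quick sketch; Chai's assert is also exposed by vitest, which can be handy if you're coming from a Chai/Mocha setup:

import { assert, expect, it } from 'vitest';

it('accepts jest-style and chai-style assertions', () => {
  expect({ clicks: 1 }).toEqual({ clicks: 1 }); // deep equality
  expect([1, 2, 3]).toContain(2);
  assert.equal(Math.sqrt(4), 2);
});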

Coverage

For coverage reports, we need to install c8 and run the tests with the --coverage flag:

npm i -D c8

npx vitest --coverage

This will give us a nice coverage report.


An output folder named coverage will be created at the root of the project. You can specify the desired report types in the configuration file:

import { defineConfig } from 'vite';
import { svelte } from '@sveltejs/vite-plugin-svelte';

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [svelte({ hot: !process.env.VITEST })],
  test: {
    globals: true,
    environment: 'jsdom',
    coverage: {
      reporter: ['text', 'json', 'html'], // change this property to the desired output
    },
  },
});

UI

You can also run Vitest with a UI, which can help you visualize which tests are running and their results. Let's install the required package and run Vitest with the --ui flag:

npm i -D @vitest/ui

npx vitest --ui

I like this interface. It even allows you to read the test code and open it in your editor.


More features

Vitest comes with many more features you may know from other testing libraries, such as snapshot testing, mocking, and fake timers.
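
As a taste, here is a hedged sketch of function mocks and fake timers using the vi helper (the greet function and the timings are made up for the example):

import { expect, it, vi } from 'vitest';

it('records calls made to a mock function', () => {
  const greet = vi.fn((name: string) => `Hello ${name}`); // hypothetical function
  greet('Vitest');
  expect(greet).toHaveBeenCalledWith('Vitest');
  expect(greet).toHaveReturnedWith('Hello Vitest');
});

it('advances fake timers instead of waiting', () => {
  vi.useFakeTimers();
  const callback = vi.fn();
  setTimeout(callback, 1000);
  vi.advanceTimersByTime(1000);
  expect(callback).toHaveBeenCalledTimes(1);
  vi.useRealTimers();
});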

Migrating to Vitest (from a Vite project using Jest)

If you are working on a small project or just starting one, you may only need to adapt your config file, and that would be it. If you are using mock functions, note that Vitest uses Tinyspy, and for fake timers it uses @sinonjs/fake-timers, so check those for compatibility. Also, remember to import { vi } from 'vitest' if you will be using it. Another thing you may need to configure is a setup file. For example, to use jest-dom matchers, we can create a setup file:

import '@testing-library/jest-dom'

and declare it in our config file:

export default defineConfig(({ mode }) => ({
  // ...
  test: {
    globals: true,
    environment: 'jsdom',
    setupFiles: ['<PATH_TO_SETUP_FILE>'],
  },
}))
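
And if your existing tests call the jest object directly, most of those calls map one-to-one onto vi (jest.fn() becomes vi.fn(), jest.spyOn() becomes vi.spyOn(), and so on). A hedged sketch, with fetchData being a made-up mock:

import { expect, it, vi } from 'vitest';

it('replaces jest.fn with vi.fn', async () => {
  const fetchData = vi.fn().mockResolvedValue({ ok: true }); // hypothetical mock
  await expect(fetchData()).resolves.toEqual({ ok: true });
  expect(fetchData).toHaveBeenCalledTimes(1);
});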

Here is an example of migrating VitePress to Vitest. (There are some tsconfig changes, but you can see where Vitest is added, along with the vitest.config.ts file.)

Final Thoughts

Even though Vitest is still in development, it looks very promising, and the fact that the API is so similar to Jest makes migration very smooth. It also ships with TypeScript support (no external types package). Using the same config file as Vite (by default) lets you focus on writing tests very quickly. I look forward to v1.0.0.
