
Composing React Components with TypeScript


TypeScript is a language that supercharges your JavaScript by adding type-checking to your application's source code. Combining the compiler with the IDE plug-ins gives a beautiful development experience when building JavaScript applications.

What I love most about TypeScript is that when I use it, I know exactly what structure of data to give to components, and when I pass a different structure, IntelliSense immediately notifies me.

Also, as a friend said:

If you use TypeScript in your application (without doing "illegal" kinds of stuff like passing any everywhere), you'll never have an uncaught "x is undefined" error.

This view is opinionated, but I quite agree with it.

Using TypeScript with React makes building React components faster with little to no uncaught errors. It allows you to specify the exact structure of expected props for any component.

In this article, we'll learn how to use TypeScript to compose React components. To continue with this article, a fair knowledge of TypeScript is required. This is a great starting guide to learn TypeScript.

At the end, we'll also look at the difference between prop-types and TypeScript.

Let's start building components

In this article, we'll build four components: an Input, a Button, a Header, and a BlogCard component. These components will show how TypeScript can be used with React.

Setting up TypeScript for React

Some React frameworks (like NextJS and GatsbyJS) already support TypeScript out of the box, but for Create React App, there are a few things you'll need to do.

If it's a new project, you can create the project like so:

create-react-app project-name --template typescript

The --template typescript installs dependencies that add support for TypeScript in your React application.

If it's an existing project, you'll need to install the TypeScript dependencies:

npm install --save typescript @types/node @types/react @types/react-dom

With these installed, you can rename .js files to .tsx to allow TypeScript code.

Now, let's build our components.

An Input component

For our Input component, we need the following props: onChange, placeholder, name, and value. Each of them is a string value except onChange, which is a function.

Using TypeScript, here's how we define the component:

// Input.tsx
import React from "react";

type Props = {
  onChange: (str: string) => void;
  placeholder: string;
  name: string;
  value?: string;
};
function Input({ onChange, name, placeholder, value = "" }: Props) {
  return (
    <input
      onChange={event => onChange(event.target.value)}
      name={name}
      placeholder={placeholder}
      value={value}
    />
  );
}

export default Input;

This way, our component is well defined. The onChange prop must be a function that accepts exactly one argument, which must be a string. placeholder, name, and value (if provided) must be strings. If a different data type is passed, IntelliSense immediately complains, or the compile command fails in the terminal.

And here's how this component is used:

// Form.tsx
import React, { useState } from "react";

import Input from "./Input";

function Form() {
  const [nameInput, setNameInput] = useState("");
  const onChange = (str: string) => {
    setNameInput(str);
  };

  return (
    <form>
      <Input
        onChange={onChange}
        name="name"
        placeholder="Enter your name"
        value={nameInput}
      />
    </form>
  );
}

export default Form;

Let's change the data type of the name prop to see the warning we get:

...
<form>
    <Input
        ...
        name={10}
    />
</form>
...

Here's the warning:

Error gotten when an unexpected data type is passed to Input
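As an aside, a common variation (shown here only as a sketch, not part of the original example) is to let the component accept every attribute a native input does by extending React's built-in prop types, while keeping the string-based change handler:

// NativeInput.tsx (hypothetical file name)
import React from "react";

// Extend the native <input> attributes and add our own handler.
// onValueChange is a made-up name to avoid clashing with the native onChange.
type Props = React.InputHTMLAttributes<HTMLInputElement> & {
  onValueChange: (str: string) => void;
};

function NativeInput({ onValueChange, ...rest }: Props) {
  return (
    <input {...rest} onChange={event => onValueChange(event.target.value)} />
  );
}

export default NativeInput;

With this, a caller can pass any native attribute (type="email", disabled, and so on) without us listing each one in Props.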

A Button component

Our Button component will have the following props: value and processing like so:

// Button.tsx
import React from "react";

type Props = {
  value: "Submit" | "Continue" | "Update";
  processing: boolean;
};
function Button({ value, processing }: Props) {
  return <button>{processing ? "Processing" : value}</button>;
}

export default Button;

For the value prop, we're expecting one of three strings: "Submit", "Continue", or "Update", and the processing prop expects a boolean.

Let's see the component in use:

// Form.tsx
import React, { useState } from "react";

import Input from "./Input";
import Button from "./Button";

function Form() {
  const [nameInput, setNameInput] = useState("");
  const onChange = (str: string) => {
    setNameInput(str);
  };

  return (
    <form>
      <Input
        onChange={onChange}
        name="name"
        placeholder="Enter your name"
        value={nameInput}
      />
      <Button value="Submit" processing={false} />
      <Button value="Next" processing={true} />
    </form>
  );
}

export default Form;

As you'll notice, "Next" is not included in the expected strings for value, so we get an error from IntelliSense. Here are two things you'll notice in your IDE:

Showing the expected values of the Button value prop

As seen above, as soon as you enter the quotes, the IDE already suggests the acceptable values. But if you pass "Next", you'll get this:

Error gotten after passing an unexpected value to Button value prop

A Header component

Our Header component is a bit more complex. For an authenticated user, the header shows the user's name; otherwise, it shows a "Sign in" link. Here's how we'll define it:

// Header.tsx
import React from "react";

type User = {
  name: string;
};
type Props =
  | {
      authenticated: false;
      profile: null;
    }
  | {
      authenticated: true;
      profile: User;
    };
function Header(props: Props) {
  return (
    <header>
      <a href="/">Home</a>
      <a href="/about">About</a>
      {props.authenticated ? props.profile.name : <a href="/signin">Sign in</a>}
    </header>
  );
}

export default Header;

The Header component accepts two props: authenticated and profile. The props are conditional such that when props.authenticated is false, props.profile is null and when props.authenticated is true, props.profile is the User type.

This means that if a user is authenticated, a profile object must also be provided.
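Inside the component, TypeScript narrows this union on the authenticated check: in the branch where props.authenticated is true, props.profile is known to be a User, which is why props.profile.name type-checks. Here's a small standalone sketch of the same narrowing (not from the original code, reusing the Props type defined above):

// Sketch: narrowing the Header props union outside of JSX.
function greetingText(props: Props): string {
  // Reading props.profile.name here, before the check, would be an error
  // because profile is possibly null.
  if (props.authenticated) {
    // In this branch, props.profile is narrowed to User.
    return props.profile.name;
  }
  return "Sign in";
}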

Here's how the component is used:

import Header from "./Header";

function Layout() {
    return (
        <div>
            <Header authenticated={true} profile={null} />
        </div>
    );
}

In the above, we do something unacceptable: authenticated is true, but profile is given the wrong data type. Here's what IntelliSense gives:

Error gotten after passing unaccepted value to Header component
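For contrast, a valid call provides an actual profile object whenever authenticated is true (the name below is just for illustration):

<Header authenticated={true} profile={{ name: "Dillion" }} />
<Header authenticated={false} profile={null} />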

A BlogCard component

In this component, we expect a post prop which is an object with the following properties: title, author, date and timeToRead. Here's how we define it with TypeScript:

// BlogCard.tsx
import React from "react";

type Props = {
  post: {
    title: string;
    author: {
      name: string;
    };
    date: Date;
    timeToRead: number;
  };
};
function BlogCard({ post }: Props) {
  return (
    <div className="blog-card">
      <span className="title">{post.title}</span>
      <span className="date">
        on {new Intl.DateTimeFormat().format(post.date)}
      </span>
      <span className="time-to-read">{post.timeToRead}mins</span>
      <span className="author-name">By {post.author.name}</span>
    </div>
  );
}

export default BlogCard;

And here's how it's used:

// BlogPosts.tsx
import React from "react";
import BlogCard from "./BlogCard";

type Post = {
  title: string;
  author: {
    name: string;
  };
  date: Date;
  timeToRead: number;
};
function BlogPosts() {
  const posts: Post[] = [
    {
      title: "What is JavaScript",
      date: new Date(),
      timeToRead: 3,
      author: {
        name: "Dillion Megida"
      }
    }
  ];
  return (
    <div>
      {posts.map((p, i) => (
        <BlogCard key={`post-${i}`} post={p} />
      ))}
    </div>
  );
}

export default BlogPosts;

Note that the Post type does not have to be written multiple times in different files. It can be a shared type exported from its own file and used anywhere.
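A minimal sketch of that (the types.ts file name is just an assumption):

// types.ts
export type Post = {
  title: string;
  author: {
    name: string;
  };
  date: Date;
  timeToRead: number;
};

Both BlogCard.tsx and BlogPosts.tsx can then import { Post } from "./types" and stay in sync automatically.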

Back in BlogPosts.tsx, we do not get an error because every data type is as expected. Now let's say we add an extra property to the Post type, like so:

type Post = {
    title: string;
    author: {
        name: string;
    };
    date: Date;
    timeToRead: number;
    excerpt: string; // new property
}
...

We get errors in the IDE like so:

Error gotten when excerpt property in Post type is not provided in post object

In the component examples above, we've seen how to add typings to a component's props so that any parent component using it knows exactly what it expects to receive. We've also seen how IntelliSense provides error messages when the types are not valid.

IntelliSense makes development faster, as you can see the warnings and errors directly in your IDE. Even without IntelliSense, the data types are verified when you build (npm run build) your React application.

For example, using the Header component like so:

...
<Header
    authenticated={true}
    profile={null}
/>
...

Running npm run build for the above code gives the following error in the terminal:

Terminal error when you run npm run build with an invalid type in a component
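If you only want to check types without producing a build, you can also run the TypeScript compiler directly (assuming typescript is installed locally, as in the setup step earlier):

npx tsc --noEmit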

The examples above are in this StackBlitz project. You can play with it and violate the expected types to see the warnings.

Prop Types

TypeScript is not the only way to ensure expected data types in a React application; there is also prop-types. The two are similar in purpose but work differently. prop-types checks props at runtime, so it can catch invalid data (for example, a response from an API) that TypeScript's compile-time checks cannot see. It also keeps working in libraries compiled to JavaScript and consumed by other applications, which means even vanilla JavaScript consumers still get type warnings. However, prop-types is more limited than TypeScript in how you can describe data: it cannot express interfaces, nor the conditional props we saw in the Header component.
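For comparison, here's roughly how the Input component's props from earlier would be declared with prop-types (only a sketch; note that prop-types cannot express that onChange receives exactly one string argument):

// Input.js (prop-types version, sketch)
import PropTypes from "prop-types";

Input.propTypes = {
  onChange: PropTypes.func.isRequired,
  placeholder: PropTypes.string.isRequired,
  name: PropTypes.string.isRequired,
  value: PropTypes.string
};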

This Stack Overflow answer shows a detailed comparison between them.

Conclusion

While TypeScript involves extra work (adding typings to almost everything), which can be strenuous, it makes developing React applications faster and with little fear of errors. You're not limited to simple types as with prop-types; you can specify nested objects, unions, or virtually any shape as an expected type.

There's more you can do with TypeScript and React. You can read the TypeScript documentation to learn more.
