Progressive Web Apps and Mobile Apps

This article was written over 18 months ago and may contain information that is out of date. Some content may be relevant but please refer to the relevant official documentation or available resources for the latest information.

Introduction

In his talk "The Mobile Web: MIA", Alex Russell discussed a number of issues facing the expansion of the web platform on mobile devices. He noted that "people don't use the web the way they lean on it, and rely on it, and come to depend on it on desktop." He also observed that users spend only about 4% of their phone time on the web, and that share is dropping. The rest of the time, users are typically interacting with mobile apps rather than the browser.

Much of this is driven by the fact that the companies that own the platforms (Google and Apple, in particular) are primarily focused on native app development. This focus pushes the market to accept the importance of a mobile app, and drives users away from the web toward a native experience. Mobile app development locks experiences to devices and operating systems, which increases the cost of development. Users, in turn, grow to expect mobile applications, even for simple tasks such as viewing bus routes or filling out a form.

As web developers, we know there is another option for app development. Progressive Web Apps (PWAs) are built with standard web technologies - HTML, CSS, JavaScript, and modern browser APIs - to provide an enhanced experience on supported platforms. By utilizing the latest browser features, web developers can construct the same experiences users expect of native applications. These features include camera access, notifications, Bluetooth, and network information, and even augmented/virtual reality and payments. Browsers have been working to support these features for years, and some companies (such as Twitter) have built PWAs to provide an improved experience on their platforms.
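As an illustration of one of these capabilities, here is a hedged sketch of requesting camera access from the web. `getUserMedia` is the standard API; returning `null` on failure is our own illustrative convention, not part of any framework:

```javascript
// Hedged sketch of requesting camera access. getUserMedia is the standard
// API; returning null on failure is an illustrative convention.
async function openCamera(nav = typeof navigator !== 'undefined' ? navigator : undefined) {
  if (!nav || !nav.mediaDevices || typeof nav.mediaDevices.getUserMedia !== 'function') {
    return null; // camera access not available on this platform
  }
  try {
    return await nav.mediaDevices.getUserMedia({ video: true });
  } catch {
    return null; // permission denied or no camera present
  }
}
```

Because the capability check happens at call time, the same code runs safely on platforms that lack camera support entirely.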

Let's say that we have a company, BetterX, which is looking to build a new app for their users. The primary goal is to provide an excellent experience for mobile users, including offline support and hardware features such as notifications and payments. We will explore and compare the benefits of native mobile applications and PWAs, and discuss why each platform may be the better choice.

Progressive Web Apps - The Open Web

One of the key benefits when considering a progressive web app is that we are utilizing modern web development tools to build our application. As web developers, we are already familiar with a number of complex tasks, such as state management, caching, and performance optimization. To build a PWA, we need to take these concepts to their natural conclusions. By utilizing a service worker to cache assets and IndexedDB or other methods to store local data, we can build a system that is capable of working fully offline. By using network detection, our application can determine whether an internet connection is available, and provide that information to the user.
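A minimal, cache-first service worker for offline support might look like the sketch below. The cache name, asset list, and origin are hypothetical, and the strategy shown (precache a few static assets, serve them from the cache, fall back to the network) is just one of several common approaches:

```javascript
// A minimal cache-first service worker. The cache name and asset list
// are hypothetical; adjust them to your own build output.
const CACHE_NAME = 'betterx-static-v1';
const PRECACHE_URLS = ['/', '/index.html', '/app.css', '/app.js'];

// Pure helper: should this request be served cache-first?
// Kept separate so it can be unit tested outside the worker.
function isCacheFirst(requestUrl) {
  const { pathname } = new URL(requestUrl, 'https://example.com');
  return PRECACHE_URLS.includes(pathname);
}

// Only wire up worker events when actually running in a service worker scope.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
    );
  });

  self.addEventListener('fetch', (event) => {
    if (!isCacheFirst(event.request.url)) return; // let the network handle it
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```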

Another benefit of building with web technologies is that we have a better chance of achieving the goal of "write once, run anywhere". By utilizing standard architecture patterns in our application, and relying on progressive enhancement where the browser/platform allows, our PWA can run both on mobile devices (as an installed app) and in the browser. Most developers are already familiar with responsive design, which allows a website to change its appearance depending on the viewport or device. The same concept applies to a PWA: we incrementally enable functionality as the device allows, and provide a fallback when certain features are not available.
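This progressive-enhancement idea can be sketched as a simple feature-detection layer. The enhancement names and exact checks below are illustrative; the point is that each capability is tested at runtime, so the app can enable features incrementally and fall back when a check fails:

```javascript
// Illustrative feature-detection map; the enhancement names are our own.
// Each check runs at call time so results reflect the current platform.
const globalScope = typeof window !== 'undefined' ? window : {};

const enhancements = {
  offline: () =>
    typeof navigator !== 'undefined' && 'serviceWorker' in navigator,
  notifications: () => 'Notification' in globalScope,
  share: () => typeof navigator !== 'undefined' && 'share' in navigator,
  payments: () => 'PaymentRequest' in globalScope,
};

// Returns the names of the enhancements the current platform supports,
// letting the app enable features incrementally and fall back otherwise.
function availableEnhancements(checks = enhancements) {
  return Object.keys(checks).filter((name) => checks[name]());
}
```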

Web development also has the benefit of traditionally being cheaper than mobile app development. Smaller companies don't always have the time or money to invest in a mobile development team. Most of them, however, do have a website. By utilizing some of the APIs available to progressive web applications, these companies can provide a mobile experience. Also, if a website/web app is built with mobile devices in mind, the time it takes to build a fully functional PWA could be weeks, compared to months for a brand-new mobile application.

PWAs can also be significantly smaller than their native alternatives. In a report by Google, the Twitter PWA "requires less than 3% of the device storage space compared to Twitter for Android". As fewer mobile devices have ports for expanded storage space, the size of applications becomes increasingly more important.

Drawbacks

However, there are some drawbacks to choosing a progressive web app. Users expect to find mobile applications in the app store, not on a website. In his talk, Alex Russell shows a screenshot of an Android device with a Google search bar at the top, and a row of icons at the bottom, including the Google Play Store. He explains that people tap the search bar when they are looking for "answers", and tap the store when they are looking for "experiences". To install a PWA, a user must visit its URL and accept the install prompt. This is not how users have been trained to find apps for their smartphones.
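On Chromium-based browsers, this install flow hinges on the `beforeinstallprompt` event, which lets an app defer the browser's prompt and surface its own install button instead. The sketch below assumes that event is available; other browsers simply never fire it, and the function names are our own:

```javascript
// Capture the install prompt so the app can show its own install button.
// `beforeinstallprompt` is currently fired by Chromium-based browsers only.
let deferredPrompt = null;

function onBeforeInstallPrompt(event) {
  event.preventDefault();   // suppress the browser's mini-infobar
  deferredPrompt = event;   // keep the event to re-trigger it later
  return true;              // signal the UI that an install button can be shown
}

// Called from our own "Install" button. Resolves to 'accepted',
// 'dismissed', or 'unavailable' when no prompt was captured.
async function promptInstall() {
  if (!deferredPrompt) return 'unavailable';
  deferredPrompt.prompt();
  const { outcome } = await deferredPrompt.userChoice;
  deferredPrompt = null;
  return outcome;
}

if (typeof window !== 'undefined') {
  window.addEventListener('beforeinstallprompt', onBeforeInstallPrompt);
}
```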

It's also not clear to a user what installing a PWA achieves. On an Android device where a user installs a PWA, an icon for that app appears on their home screen like any other app. However, depending on the app, this could be a complete experience, including offline support, or it could simply be a wrapper that loads a website. In many cases, a PWA is little more than an enhanced bookmark on a mobile phone.

Mobile Applications - Platform Builders

Mobile apps are the standard established by Google and Apple for delivering user experiences on phones and tablets. Apps are an expected feature of any new platform - it's rare to see a new service thrive without a presence in the Google Play Store, or Apple App Store. Keystone applications, like Facebook or Twitter, are regularly highlighted by these platforms as a way to bring new users into their walled gardens.

Users are trained to search for, and install, mobile applications. Often, websites will guide users directly to the respective app store. On iPhones and iPads, the app store is the only way for apps to be installed, making the store even more crucial to a product's success on the platform. Since Apple's support for PWAs in Safari is limited, mobile development is effectively a requirement to reach customers within its ecosystem.

Mobile development has first-class support from both Apple and Google, providing access to new APIs and features as new hardware is released. Apps developed for newer devices can do more, and utilize more resources, than ever before. Resource-intensive apps like Adobe Photoshop or Procreate leverage these capabilities to achieve results previously possible only on desktops and laptops.

Modern mobile development frameworks, such as Flutter and React Native, allow developers to target these devices in a cross-platform way. They provide access to the APIs and features of the hardware, and a streamlined way to write a majority of your app once, while targeting multiple platforms. Other frameworks such as Cordova or Capacitor even allow for using modern web technologies, and having a fully bundled app that can be released on the app store.

Downsides

Mobile development provides amazing functionality and allows for powerful applications to be built. However, it comes at a cost. Cutting-edge applications often run well only on the latest hardware and OS versions. Most mobile users do not have access to the hardware we, as developers, use to build our applications. What takes a few seconds to load over 5G on an iPhone 12 Pro Max could take nearly a minute on the phones common in most of the world. Final application size also determines how many users can download our app in the first place.

In many cases, a mobile application in the app store can become more of a burden than a benefit. Imagine that you're visiting a foreign country. Because you are roaming, your internet connection is significantly slower than you are used to. While traveling in a city, you want to check bus routes and schedules. You go to the website for the municipal bus system and are directed to download an app to view them. The app is not especially large (my local bus system's app is 8.6 MB), but on your slower connection it still takes a long time to download. You may also only need it once or twice before you travel to your next destination. A website (or PWA) would provide a much smoother experience than requiring a mobile app to be downloaded.

Considerations

Regardless of which architecture you decide to use for building your mobile application, there are some considerations to keep in mind. First, your developers will generally have better hardware and internet connections than many of your users. Most users do not have a high-end iPhone or Android device, and are not on 5G or gigabit internet. Whether you're building a PWA or a native app, remember that every megabyte takes substantially longer to download and initialize for these users, and your application will run slower. If possible, test your applications on slower or intermittent connections, and on lower-end hardware.

In general, if you are going to build a mobile application, it has to be lean, fast to load, and capable of working offline. Many companies (and some end users) will push for new features or content without regard for the experience of all users. Setting up a truly performant and offline-friendly experience is complicated, but it is worth accounting for all potential users as you build and deploy it.

If you decide that building a progressive web app is the way to go for your app or company, it is important to remember that PWA support in Safari and on iOS/iPadOS is limited. On both iPhones and iPads, the only browser engine available is WebKit, regardless of which browser you are using. This means that users may not be able to install your PWA on Apple mobile devices, and some browser APIs may not be available. Take this into account while building your app, and allow for graceful degradation when features are not available. This is not to say that you shouldn't build a PWA if you want to target Apple's ecosystem - quite the opposite! The more PWAs that exist and attract a large number of users on Apple's devices, the better the chance that Apple will support the standardized browser features that enable PWAs.
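Graceful degradation often reduces to "try the rich API, fall back to an in-app equivalent". Here is a hedged sketch for notifications, where `inAppMessages` is a hypothetical fallback queue rendered by the app's own UI:

```javascript
// Hypothetical fallback queue, rendered by the app's own UI.
const inAppMessages = [];

// Prefer a system notification; fall back to an in-app message when the
// Notification API is unavailable or permission has not been granted.
function notifyUser(title, body, env = typeof window !== 'undefined' ? window : {}) {
  if ('Notification' in env && env.Notification.permission === 'granted') {
    new env.Notification(title, { body });
    return 'notification';
  }
  inAppMessages.push({ title, body });
  return 'in-app';
}
```

Because the fallback path is part of the design rather than an afterthought, the same code serves platforms with full notification support and those without.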

At the end of the day, choose the architecture that best supports your users, and build with them in mind. Your app should help your users and customers in some way, and should not be a burden to them. It may be fun to try out a new mobile framework, or build a PWA with enhanced features, but not if it does not serve the end user.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines or helping keep your strategic digital initiatives on course. Check out our case studies and our clients that trust us with their engineering.

You might also like

The Dangers of ORMs and How to Avoid Them cover image

The Dangers of ORMs and How to Avoid Them

Background We recently engaged with a client reporting performance issues with their product. It was a legacy ASP .NET application using Entity Framework to connect to a Microsoft SQL Server database. As the title of this post and this information suggest, the misuse of this Object-Relational Mapping (ORM) tool led to these performance issues. Since our blog primarily focuses on JavaScript/TypeScript content, I will leverage TypeORM to share examples that communicate the issues we explored and their fixes, as they are common in most ORM tools. Our Example Data If you're unfamiliar with TypeORM I recommend reading their docs or our other posts on the subject. To give us a variety of data situations, we will leverage a classic e-commerce example using products, product variants, customers, orders, and order line items. Our data will be related as follows: The relationship between customers and products is one we care about, but it is optional as it could be derived from customer->order->order line item->product. The TypeORM entity code would look as follows: ` With this example set, let's explore some common misuse patterns and how to fix them. N + 1 Queries One of ORMs' superpowers is quick access to associated records. For example, we want to get a product and its variants. We have 2 options for writing this operation. The first is to join the table at query time, such as: ` Which resolves to a SQL statement like (mileage varies with the ORM you use): ` The other option is to query the product variants separately, such as: ` This example executes 2 queries. The join operation performs 1 round trip to the database at 200ms, whereas the two operations option performs 2 round trips at 100ms. Depending on the location of your database to your server, the round trip time will impact your decision here on which to use. In this example, the decision to join or not is relatively unimportant and you can implement caching to make this even faster. 
But let's imagine a different situation/query that we want to run. Let's say we want to fetch a set of orders for a customer and all their order items and the products associated with them. With our ORM, we may want to write the following: ` When written out like this and with our new knowledge of query times, we can see that we'll have the following performance: 100ms * 1 query for orders + 100ms * order items count + 100ms * order items' products count. In the best case, this only takes 100ms, but in the worst case, we're talking about seconds to process all the data. That's an O(1 + N + M) operation! We can eagerly fetch with joins or collection queries based on our entity keys, and our query performance becomes closer to 100ms for orders + 100ms for order line items join + 100ms for product join. In the worst case, we're looking at 300ms or O(1)! Normally, N+1 queries like this aren't so obvious as they're split into multiple helper functions. In other languages' ORMs, some accessors look like property lookups, e.g. order.orderItems. This can be achieved with TypeORM using their lazy loading feature (more below), but we don't recommend this behavior by default. Also, you need to be wary of whether your ORM can be utilized through a template/view that may be looping over entities and fetching related records. In general, if you see a record being looped over and are experiencing slowness, you should verify if you've prefetched the data to avoid N+1 operations, as they can be a major bottleneck for performance. Our above example can be optimized by doing the following: ` Here, we prefetch all the needed order items and products and then index them in a hash table/dictionary to look them up during our loops. This keeps our code with the same readability but improves the performance, as in-memory lookups are constant time and nearly instantaneous. 
For performance comparison, we'd need to compare it to doing the full operation in the database, but this removes the egregious N+1 operations. Eager v Lazy Loading Another ORM feature to be aware of is eager loading and lazy loading. This implementation varies greatly in different languages and ORMs, so please reference your tool's documentation to confirm its behavior. TypeORM's eager v lazy loading works as follows: - If eager loading is enabled for a field when you fetch a record, the relationship will automatically preload in memory. - If lazy loading is enabled, the relationship is not available by default, and you need to request it via a key accessor that executes a promise. - If neither is enabled, it defaults to the behavior ascribed above when we explained handling N+1 queries. Sometimes, you don't want these relationships to be preloaded as you don't use the data. This behavior should not be used unless you have a set of relations that are always loaded together. In our example, products likely will always need product variants loaded, so this is a safe eager load, but eager loading order items on orders wouldn't always be used and can be expensive. You also need to be aware of the nuances of your ORM. With our original problem, we had an operation that looked like product.productVariants.insert({ … }). If you read more on Entity Framework's handling of eager v lazy loading, you'll learn that in this example, the product variants for the product are loaded into memory first and then the insert into the database happens. Loading the product variants into memory is unnecessary. A product with 100s (if not 1000s) of variants can get especially expensive. This was the biggest offender in our client's code base, so flipping the query to include the ID in the insert operation and bypassing the relationship saved us _seconds_ in performance. Database Field Performance Issues Another issue in the project was loading records with certain data types in fields. 
Specifically, the text type. The text type can be used to store arbitrarily long strings like blog post content or JSON blobs in the absence of a JSON type. Most databases use a technique that stores text fields off rows, which requires a special file system lookup operation to fetch that data. This can make a typical database lookup that would take 100ms under normal conditions to take 200ms. If you combine this problem with some of the N+1 and eager loading problems we've mentioned, this can lead to seconds, if not minutes, of query slowdown. For these, you should consider not including the column by default as part of your ORM. TypeORM allows for this via their hidden columns feature. In this example, for our product description, we could change the definition to be: ` This would allow us to query products without descriptions quickly. If we needed to include the product description in an operation, we'd have to use the addSelect function to our query to include that data in our result like: ` This is an optimization you should be wary of making in existing systems but one worth considering to improve performance, especially for data reports. Alternatively, you could optimize a query using the select method to limit the fields returned to those you need. Database v. In-Memory Fetching Going back to one of our earlier examples, we wrote the following: ` This involves loading our data in memory and then using system memory to perform a group-by operation. Our database could have also returned this result grouped. We opted to perform this operation like this because it made fetching the order item IDs easier. This takes some performance challenges away from our database and puts the performance effort on our servers. Depending on your database and other system constraints, this is a trade-off, so do some benchmarking to confirm your best options here. 
This example is not too bad, but let's say we wanted to get all the orders for a customer with an item that cost more than $20. Your first inclination might be to use your ORM to fetch all the orders and their items and then filter that operation in memory using JavaScript's filter method. In this case, you're loading data you don't need into memory. This is a time to leverage the database more. We could write this query as follows in SQL: ` This just loads the orders that had the data that matched our condition. We could write this as: ` We constrained this to a single customer, but if it were for a set of customers, it could be significantly faster than loading all the data into memory. If our server has memory limitations, this is a good concern to be aware of when optimizing for performance. We noticed a few instances on our client's implementation where the filter operations were applied in functions that appeared to run the operation in the database but were running the operation in memory, so this was preloading more data into memory than needed on a memory-constrained server. Refer to your ORM manual to avoid this type of performance hit. Lack of Indexes The final issue we encountered was a lack of indexes on key lookups. Some ORMs do not support defining indexes in code and are manually applied to databases. These tend not to be documented, so an out-of-sync issue can happen in different environments. To avoid this challenge, we prefer ORMs that support indexes in code like TypeORM. In our last example, we filtered on the cost of an order item, but the cost field does not contain an index. This leads to a full table scan of our data collection filtered by the customer. The query cost can be very expensive if a customer has thousands of orders. Adding the index can make our query super fast, but it comes at a cost. Each new index makes writing to the database slower and can exponentially increase the size of our database needs. 
You should only add indexes to fields that you are querying against regularly. Again, be sure you can notate these indexes in code so they can be replicated across environments easily. In our client's system, the previous developer did not include indexes in the code, so we retroactively added the database indexes to the codebase. We recommend using your database's recommended tool for inspection to determine what indexes are in place and keep these systems in sync at all times. Conclusion ORMs can be an amazing tool to help you and your teams build applications quickly. However, they have gotchas for performance that you should be aware of and can identify while developing or during code reviews. These are some of the most common examples I could think of for best practices. When you hear the horror stories about ORMs, these are some of the challenges typically discussed. I'm sure there are more, though. What are some that you know? Let us know!...

It's Impossible For This Code to Fail - with Loren Sands-Ramshaw  cover image

It's Impossible For This Code to Fail - with Loren Sands-Ramshaw

Loren Sands-Ramshaw, Developer Relations Engineer at Temporal joins Rob Ocel to talk about reliable application development. They introduce the topic of durable execution and talk about reliability in systems, unraveling common issues developers face and showcase the benefits that durable execution can bring to software development. They also talk about the challenges of traditional programming and the complexities of event-driven architecture. Listen to the full podcast here: https://modernweb.podbean.com/e/modern-web-podcast-s11e19-its-impossible-for-this-code-to-fail/...

Vue 3.2 - Using Composition API with Script Setup cover image

Vue 3.2 - Using Composition API with Script Setup

Introduction Vue 3 introduced the Composition API as a new way to work with reactive state in a Vue application. Rather than organizing code by functionality, (data, computed, methods, watch, etc), you can group code by feature (users, API, form). This allows for a greater amount of flexibility while building a Vue application. We've already talked about the Composition in other articles (if you haven't read them, check them out!), but with the release of Vue 3.2, another Composition-related feature has been released as stable - . In short, allows developers to define a component without having to export anything from your JavaScript block - simply define your variables and use them in your template! This style of writing a component resembles Svelte in many ways, and is a massive improvement for anyone coming into Vue for the first time. Basics Let's look at an example. If you were using the Options API (the standard of Vue 2), all of your single-file components would look something like this: ` We have our template (a simple form), and our script block. Within the script block, we export an object with three keys: name, computed, and methods. If you are familiar with Vue, this should look familiar to you. Now, let's switch this code to use the Composition API. ` Our component does the exact same thing as before. We define our state (name), a computed property (isNamePresent), and our submit function. If any of this is unfamiliar, check out my previous articles on the Vue Composition API. Rather than having to scaffold our application within an object being exported, we are free to define our variables as we want. This flexibility also allows us to extract repeated logic from the component if we want to, but in this case our component is pretty straightforward. However, we still have that awkward export default statement. Our code all lives within the setup function, while the rest is really just boilerplate. Can't we just remove it? Actually, we can now! 
This is where comes in. Let's switch to use script setup instead of the standard script block. ` Let's go over what changed here. First, we added the word "setup" to our script tag, which enables this new mode for writing Vue components. Second, we took our code from within the setup function, and replaced our existing exported object with just our code. And everything works as expected! Note that everything declared within the script tags is available in the template of your component. This includes non-reactive variables or constants, as well as utility functions or other libraries. The major benefit of this is that you don't need to manually bind an external file (Constants.js, for example) as a value of your component - Vue handles this for you now. Additional Features You may be wondering how to handle some of the core aspects of writing Vue components, like utilizing other components or defining props. Vue 3.2 has us covered for those use cases as well! Let's take a look at some of the additional features provided by this approach to building Vue single-file components. Defining Components When using , we don't have to manually define our imported components any more. By importing a component into the file, the compiler is able to automatically add it to our application. Let's update our component by abstracting the form into its own component. We'll call it Form.vue. For now, it will simply be the template, and we'll get to the logic in a moment. ` That's it! Our component now has to be imported into our Vue file, and it's automatically available in our template. No more components block taking up space in our file! Now, we need to pass name into our child component as a prop. But wait, we can't define props! We don't have an object to add the props option to! Also, we need to emit that the form was submitted so that we can trigger our submission. How can we define what our child component emits? 
defineProps and defineEmits We can still define our components props and emits by using new helper methods defineProps and defineEmits. From the Vue docs, "defineProps and defineEmits are compiler macros only usable inside . They do not need to be imported, and are compiled away when is processed." These compile-time functions take the same arguments as the standard keys would use with a full export object. Let's update our app to use defineProps and defineEmits. ` Let's go over what changed here. - First, we used defineProps to expect a modelValue (the expected prop for use with v-model in Vue 3). - We then defined our emits with defineEmits, so that we are both reporting what this component emits, and are also getting access to the emit function (previously available on `this.$emit). - Next, we create a computed property that utilizes a custom getter and setting. We do this so we can easily use v-model on our form input, but it's not a requirement. The getter returns our prop, where the setter emits the update event to our parent component. - Last of all, we hook up our submitHandler function to emit a submit event as well. Our App.vue component is more or less as we left it, with the addition of v-model="name" and @submit="submitForm" to the Form child component. With that, our application is working as expected again! Other Features There are a lot more features available to us here, but they have fewer use cases in a typical application. - Dynamic Components - Since our components are immediately available in the template, we can utilize them when writing a dynamic component (, for example). - Namespaced Components - If you have a number of components imported from the same file, these can be namespaced by using the import * as Form syntax. You then have access to or , for example, without any extra work on your part. 
- Top-Level Await - If you need to make an API request as part of the setup for a component, you are free to use async/await syntax at the top level of your component - no wrapping in an async function required! Keep in mind that a component that utilizes this must be wrapped externally by a component - read more here to learn how to use Suspense in Vue. Another point to keep in mind is that you aren't locked into using . If you are using this new syntax for a component and run into a case where you aren't able to get something done, or simply want to use the Options syntax for a particular case, you are free to do so by adding an additional block to your component. Vue will mix the two together for you, so your Composition code and Options code can remain separate. This can be extremely useful when using frameworks like Nuxt that provide additional methods to the standard Options syntax that are not exposed in . See the Vue docs for a great example of this. Conclusion This is a big step forward for Vue and the Composition API. In fact, Evan You has gone on the record as saying this is intended to be the standard syntax for Vue single-file components going forward. From a discussion on Github: > There's some history in this because the initial proposal for Composition API indicated the intention to entirely replace Options API with it, and was met with a backlash. Although we did believe that Composition API has the potential to be "the way forward" in the long run, we realized that (1) there were still ergonomics/tooling/design issues to be resolved and (2) a paradigm shift can't be done in one day. We need time and early adopters to validate, try, adopt and experiment around the new paradigm before we can confidently recommend something new to all Vue users. 
> That essentially led to a "transition period" during which we intentionally avoided declaring Composition API as "the new way" so that we can perform the validation process and build the surrounding tooling /ecosystem with the subset of users who proactively adopted it. > Now that has shipped, along with improvements in IDE tooling support, we believe Composition API has reached a state where it provides superior DX and scalability for most users. But we needed time to get to this point. Earlier in that same thread, Evan expressed his views on what development looks like going forward for Vue: > The current recommended approach is: > - Use SFC + + Composition API > - Use VSCode + Volar (or WebStorm once its support for ships soon) > - Not strictly required for TS, but if applicable, use Vite for build tooling. If you're looking to use Vue 3 for either a new or existing application, I highly recommend trying out this new format for writing Vue single-file components. Looking to try it out? Here's a Stackblitz project using Vite and the example code above....

What Sets the Best Autonomous Coding Agents Apart? cover image

What Sets the Best Autonomous Coding Agents Apart?

Must-have Features of Coding Agents Autonomous coding agents are no longer experimental, they are becoming an integral part of modern development workflows, redefining how software is built and maintained. As models become more capable, agents have become easier to produce, leading to an explosion of options with varying depth and utility. Drawing insights from our experience using many agents, let's delve into the features that you'll absolutely want to get the best results. 1. Customizable System Prompts Custom agent modes, or roles, allow engineers to tailor the outputs to the desired results of their task. For instance, an agent can be set to operate in a "planning mode" focused on outlining development steps and gathering requirements, a "coding mode" optimized for generating and testing code, or a "documentation mode" emphasizing clarity and completeness of written artifacts. You might start with the off-the-shelf planning prompt, but you'll quickly want your own tailored version. Regardless of which modes are included out of the box, the ability to customize and extend them is critical. Agents must adapt to your unique workflows and prioritize what's important to your project. Without this flexibility, even well-designed defaults can fall short in real-world use. Engineers have preferences, and projects contain existing work. The best agents offer ways to communicate these preferences and decisions effectively. For example, 'pnpm' instead of 'npm' for package management, requiring the agent to seek root causes rather than offer temporary workarounds, or mandating that tests and linting must pass before a task is marked complete. Rules are a layer of control to accomplish this. Rules reinforce technical standards but also shape agent behavior to reflect project priorities and cultural norms. They inform the agent across contexts, think constraints, preferences, or directives that apply regardless of the task. 
Rules can encode things like style guidelines, risk tolerances, or communication boundaries. By shaping how the agent reasons and responds, rules ensure consistent alignment with desired outcomes. Roo Code is an agent that makes great use of custom modes, and rules are ubiquitous across coding agents. Together, these features form a meta-agent framework that allows engineers to construct the most effective agent for their unique project and workflow.

2. Usage-based Pricing

The best agents provide as much relevant information as possible to the model, and they give transparency and control over what information is sent. This allows engineers to leverage their knowledge of the project to improve results. Being liberal with the relevant information sent to models is more expensive; however, it also significantly improves results. The pricing model of some agents prioritizes fixed, predictable costs that include model fees. This creates an incentive to minimize the amount of information sent to the model in order to control costs. To get the most out of these tools, you have to get the most out of the models, which typically implies usage-based pricing.

3. Autonomous Workflows

The way we accomplish work has phases: for example, creating tests and then making them pass, creating diagrams or plans, or reviewing work before submitting PRs. The best agents have mechanisms to facilitate these phases autonomously. For the best results, each phase should have full use of a context window without watering down the main session's context, and it should leverage your custom modes, which excel at each phase of your workflow.

4. Working in the Background

The best agents are more effective at producing desired results, and are thus able to be more autonomous. As agents become more autonomous, the ability to work in the background, or on multiple tasks at once, becomes increasingly necessary to unlock their full potential.
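Under the hood, working on multiple tasks at once amounts to scheduling independent units of work with a bound on how many run concurrently. The following is a minimal sketch of that idea; `AgentTask` and `runWithLimit` are hypothetical names for illustration, not part of any specific agent's API:

```typescript
// A background "task" is just an async unit of work that eventually
// resolves with a result (e.g. a proposed change set).
type AgentTask<T> = () => Promise<T>;

// Run the given tasks with at most `limit` in flight at a time,
// preserving the order of results.
async function runWithLimit<T>(tasks: AgentTask<T>[], limit: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;

  // Each worker repeatedly claims the next unstarted task until none remain.
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }

  // Spawn up to `limit` workers and wait for all of them to drain the queue.
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, () => worker());
  await Promise.all(workers);
  return results;
}
```

A real agent adds isolation (containers, working copies) and progress reporting on top of this pattern, but the core is the same: queue tasks, bound concurrency, and collect results for review.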
Agents that leverage local or cloud containers to perform work independently of IDEs or working copies on an engineer's machine further increase their utility. This allows engineers to focus on drafting plans and reviewing proposed changes, and ultimately to manage multiple tasks at once, overseeing their agent-powered workflows as if guiding a team.

5. Integrations with Your Tools

The Model Context Protocol (MCP) serves as a standardized interface that allows agents to interact with your tools and data sources. The best agents seamlessly integrate with the platforms that engineers rely on, such as Confluence for documentation, Jira for tasks, and GitHub for source control and pull requests. These integrations ensure the agent can participate meaningfully across the full software development lifecycle.

6. Support for Multiple Model Providers

Reliance on a single AI provider can be limiting. Top-tier agents support multiple providers, allowing teams to choose the best models for specific tasks. This flexibility enhances performance, enables use of the latest and greatest models, and safeguards against potential downtime or vendor-specific issues.

Final Thoughts

Selecting the right autonomous coding agent is a strategic decision. By prioritizing the features above, technology leaders can adopt agents that can be tuned for their team's success. Tuning agents to projects and teams takes time, as does configuring the plumbing to integrate well with other systems. However, unlocking massive productivity gains is worth the squeeze. Models will keep getting better, and the best agents capitalize on these improvements with little to no added effort. Set your organization and teams up to tap into the power of AI-enhanced engineering, and be more effective and more competitive.
