
Reducing Mental Fatigue: NestJS + ObjectionJS


▶️ Introduction

For most of this article, you'll find me ranting about what helps me enjoy my job.

But you'll also figure out how to start using Objection with Nest and what they are all about. I won't explain in detail how Nest or Objection work. I think both have wonderful documentation that is fun and worth exploring, BUT I'll show you how one can start using them together 😌

☝️ Prerequisites

You need to have PostgreSQL available locally. This can be achieved in various ways, but I would suggest one of two approaches:

  1. You can install Docker [Windows, macOS, Ubuntu] and execute a package.json script described later in the article, which essentially runs PostgreSQL in a Docker container and destroys it once you hit ctrl+c in your terminal

  2. You can install it directly on your machine (though I’d recommend option #1 if you don’t have it already installed)

💆 Mental fatigue

Lots of programmers nowadays are dealing with mental fatigue in software development. And it's not only because of the new tools, approaches, and paradigms that pop up every minute.

With every new software development project (let's assume a back-end app, because I'm more of a back-end person), a programmer or a team has to decide on a variety of things:

  • Web framework
  • Project structure
  • Linting rules, code formatting (based on Google, Airbnb, Microsoft or homegrown conventions)
  • Data storage (SQL, NoSQL)
  • Deployment platform (Amazon, Google Cloud, Azure, Netlify, etc.)
  • CI/CD
  • Testing tools and strategies
  • Documentation
  • And so on and so forth ...

As you can see, this list can grow on and on. There are a whole lot of high-level decisions involved in this process, not to mention the millions of small ones.

Every decision made depletes our mental energy, even if it's trivial. After lots of minor decisions, we are less capable of making a good major one.

The tools landscape we have right now in the JavaScript world is a blessing and a curse at the same time. On one side, it forces us to make more decisions; on the other, there are gems that come with lots of good decisions already made for us, decisions trusted by thousands of developers.

So why not use this opportunity to reduce our mental fatigue and the number of decisions we make by adopting well-proven opinions?

By choosing the right tools, we have a chance to develop a project that is easy to reason about and easy to onboard new people onto.

Almost any back-end project gets built on top of a web framework and an ORM of some kind, both of which heavily influence the future project architecture.

We'll have a look at the way they can simplify the mental models we build in our minds and reduce the number of decisions we usually have to make. We are going to develop a simple (but sufficient to demonstrate the powers of Nest and Objection) back-end for a note-taking app.

🕸️ Web framework

There are plenty of web frameworks available in the Node.js land: Express, Hapi, Koa, Fastify, Restify, etc. They are flexible and time-tested folks that allow you to structure a project in many different ways.

So you need to decide how you want to organize routes, handlers, views, authentication, services, repositories, etc. This gives you a lot of freedom, but it comes at a cost: you need to make plenty of decisions to properly architect the app, and the way the project is organized will differ from any other project built with the same framework, because the developers of that project made their decisions in a slightly different way.

You have to start over again and grasp the way the framework is used in that particular project. You're losing the feeling of familiarity and awareness you developed on the previous project; in other words, the level of framework knowledge conversion is not that high.

For me personally, those frameworks (though I think they are very powerful) are missing one important thing: a shared conceptual base on top of which you can start growing the actual business logic. This base would repeat from project to project and let you quickly familiarize new developers with the codebase.

Such a conceptual base increases the framework's knowledge conversion and reduces the amount of mental effort needed to start using it.

What do I mean by a conceptual base? It is a minimal set of concepts, or building blocks, that the framework gives you. And if those building blocks align well with what you need to develop, it becomes easy to reason about the project and to communicate its different parts to other team members (both seasoned developers and newcomers).

For me, such a framework is Nest! It's written in TypeScript and has good, concise documentation. So for those who don't like to read lengthy manuals (I don't), this documentation gives just enough information and examples to do the job - no more, no less.

Nest has a module system heavily inspired by Angular, so Angular developers should be quite comfortable reading Nest code. Angular and Nest are usually a good combination because their conceptual bases have a high intersection, and you can transfer some of your Angular knowledge to Nest.

I don’t want to repeat the docs, and encourage you to have a look on your own.

Though I’ll describe Nest’s main building blocks:

  • Guard - protects system from unauthenticated/unauthorized access
  • Interceptor - intercepts incoming requests or outgoing responses
  • Controller - processes the requests
  • Provider - basically a service dedicated to some set of tasks; it can be injected into any other thing from this list thanks to Nest's built-in dependency injection capabilities
  • Pipe - transforms/validates incoming request body
  • Middleware - the purpose of the middleware is to intercept the request, execute some logic and pass the control flow to the next middleware
  • Module - helps to organize your application structure; it has the same purpose as Angular modules do

Also, I suggest reading this series of articles: Nest.js Step by Step.

You might ask me, "Why do I need the other stuff listed above if I have middlewares?"

Middlewares are too generic, whereas Guards, Interceptors, etc. are each dedicated to one particular task. So just by hearing the word "Guard", you already know what it is responsible for.

You might need middlewares if you want to implement something beyond those concepts listed here.
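To make the distinction concrete, here is a minimal guard sketch; the class name and the header check are purely illustrative, not from the project:

```ts
import { CanActivate, ExecutionContext, Injectable } from '@nestjs/common';

@Injectable()
export class AuthHeaderGuard implements CanActivate {
  canActivate(context: ExecutionContext): boolean {
    // Reject any request that doesn't carry an Authorization header.
    const request = context.switchToHttp().getRequest();
    return Boolean(request.headers.authorization);
  }
}
```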

💥 Big Bang!

For the sake of brevity, I won't be describing every file of our future project, but rather will highlight key concepts along the way.

We're starting with a Nest application which has all the plumbing but no database. This way it'll be easy to talk about the Nest side of things and then gradually move to Objection. As I already said, we are going to develop a toy notes API. Our notes can have a theme and tags.

This is what our app structure looks like initially:

(Image: initial app structure)

Go ahead and investigate the code we have so far (the codebase is at the initial commit at the moment). We're gonna start building on top of it.

Just by looking at the names we have in the codebase, the purpose and responsibilities of the different classes become immediately clear.

Let's have a look at the notes folder in more detail (because tags and themes work in exactly the same vein).

The first thing is NotesModule.

In NotesModule we've registered NotesService.
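A minimal sketch of how it looks (the file in the repo may differ slightly):

```ts
// notes.module.ts
import { Module } from '@nestjs/common';
import { NotesController } from './notes.controller';
import { NotesService } from './notes.service';

@Module({
  controllers: [NotesController],
  providers: [NotesService],
})
export class NotesModule {}
```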

NotesService is used by NotesController and is injected by Nest once it discovers that the latter is dependent on the former.
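In code, that dependency looks something like this (a sketch):

```ts
// notes.controller.ts
import { Controller, Get } from '@nestjs/common';
import { NotesService } from './notes.service';

@Controller('notes')
export class NotesController {
  // Nest sees this constructor parameter and injects NotesService for us.
  constructor(private readonly notesService: NotesService) {}

  @Get()
  findAll() {
    return this.notesService.findAll();
  }
}
```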

You might have noticed that NotesService (as well as other services) is just a stub at the moment and does nothing. We’re going to fix that soon, after a small conversation about ORMs.

🦖 ORM

Historically, the purpose of ORMs was to remove the object-relational impedance mismatch. They do this by abstracting away the RDBMS and relational concepts as much as possible; they are especially good at hiding SQL from you and forcing you to use their DSL, which still sucks because it is a prominent example of a leaky abstraction.

I remember lots of situations when I struggled with such a DSL for hours, trying to mimic a query I'd already written in SQL (and had spent only minutes on). Even when we manage to write the proper DSL, it still might be converted into monstrous (and not always performant) SQL we have no control over.

The true power of relational databases comes with SQL and its declarative expressiveness. In reality, the majority of my colleagues are quite good with RDBMS concepts. It's comfortable for them to think in terms of SQL queries, and more often than not developers have an intuitive understanding of how a DB record should be represented as an object (dictionary, map, you name it) in their language of choice.

It simply makes no sense for ORMs to hide SQL from developers: you still have to know it to fetch anything at all from the DB, and on top of that you need to run a compiler in your head that converts the DSL to SQL in order to understand what kind of query will eventually be generated and whether it'll give you what you want.

It's double work that puts extra pressure on a brain which is already trying to keep and reconcile a million other little things about the project you are working on.

If you are already proficient with SQL, why do you need to learn another language (a DSL) to fetch/update stuff from/in the database? Wouldn't it be better for ORMs to implement an API that is as close to SQL as possible, allowing developers to transfer their existing SQL knowledge to that API and flattening the learning curve? Such an API would take advantage of language features like auto-completion and static code analysis while still staying close to the generated SQL.

Solutions like Hibernate, TypeORM, and similar ones are overloaded, heavyweight, and over-complicated in my opinion.

And here is where Objection comes in. Compared to other ORMs, it doesn't try to hide SQL and the relational model behind the curtains. Here is how the Objection developers describe their product:

Objection.js is an ORM for Node.js that aims to stay out of your way and make it as easy as possible to use the full power of SQL and the underlying database engine while still making the common stuff easy and enjoyable.

🍽️ Integrating Objection with Nest

TL;DR: If you just need to know what should be done to have Objection support in Nest, here is the diff which shows the changes that should be applied on top of our initial commit.

1️⃣ Installing required dependencies

`npm i @types/dotenv dotenv objection knex pg`

  • dotenv populates process.env with environment variables defined in the .env file
  • objection - the ORM
  • knex is a SQL query builder that Objection uses under the hood. It also provides migrations and data-seeding support (we’ll talk about this a bit later)
  • pg is a client for the PostgreSQL database

2️⃣ Relational model

The next step is to define our relational model (for now, just get comfortable with the tables we are about to build):

(Image: note app relational model)
  • Notes might have a theme
  • Notes can have multiple tags
  • One tag can belong to multiple notes

knex_migrations and knex_migrations_lock are tables created and managed by Knex. They are not relevant for our data model.

3️⃣ Extending package.json with helper scripts

Before we start creating the migrations, let's add a couple of commands to our package.json. No worries, their purpose will become clear in later sections.
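Here's a sketch of what those scripts might look like; the exact entries live in the repo, and the Docker flags plus the knex CLI wiring are my assumptions:

```json
{
  "scripts": {
    "migrate": "knex migrate:latest --knexfile knexfile.ts",
    "migrate:make": "knex migrate:make --knexfile knexfile.ts",
    "migrate:rollback": "knex migrate:rollback --knexfile knexfile.ts",
    "seed": "knex seed:run --knexfile knexfile.ts",
    "seed:make": "knex seed:make --knexfile knexfile.ts",
    "run:pg-docker": "docker run --rm -it -p 5432:5432 -e POSTGRES_PASSWORD=docker postgres"
  }
}
```

(Knex can read a TypeScript knexfile as long as ts-node is installed.)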

4️⃣ Knexfile

In the package.json excerpt above, you might have noticed --knexfile knexfile.ts. This argument points to the Knex configuration file, so let's create one at the root of the project.

knexSnakeCaseMappers converts camelCase names in code to snake_case names in the database. So while our database model has a themes table with a font_family column, in code you can refer to it as fontFamily, and the mappers will transform fontFamily → font_family (and back) automatically.
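With that in mind, a knexfile.ts for this setup might look like the following sketch (the directory layout matches what's described below; DATABASE_URL is populated by dotenv, which we wire up in step 1️⃣2️⃣):

```ts
// knexfile.ts: a sketch
import { knexSnakeCaseMappers } from 'objection';

module.exports = {
  client: 'pg',
  connection: process.env.DATABASE_URL,
  migrations: {
    directory: './database/migrations',
    stub: './database/migration.stub',
  },
  seeds: {
    directory: './database/seeds',
    stub: './database/seed.stub',
  },
  // Translates camelCase identifiers in code to snake_case in the database.
  ...knexSnakeCaseMappers(),
};
```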

The purpose of migrations is to create the database schema and to capture the subsequent changes to that schema that come up over time. This lets you version your database and roll the schema back to a previous state when needed.

Seeds are useful in the development environment when you need to populate your database with some data.

migration.stub and seed.stub are template files that Knex uses to generate our migrations and seeds. Put them under the database folder, as specified in the config.

  • migration.stub
  • seed.stub
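Minimal versions of both stubs might look like this (a sketch; Knex only needs the exported function signatures):

```ts
// migration.stub
import * as Knex from 'knex';

export async function up(knex: Knex): Promise<void> {}

export async function down(knex: Knex): Promise<void> {}
```

```ts
// seed.stub
import * as Knex from 'knex';

export async function seed(knex: Knex): Promise<void> {}
```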

5️⃣ Migrations

Now that we have knexfile.ts created, we can start using the Knex commands we added to package.json earlier.

  • npm run migrate:make CreateTags
  • npm run migrate:make CreateThemes
  • npm run migrate:make CreateNotes
  • npm run migrate:make CreateNoteTags

Those will generate migration files under the database/migrations folder using our migration.stub.

It's time to define our tables. Let's do it together for the CreateNotes migration; for the others, please have a look at the final solution.
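A sketch of what the CreateNotes migration might contain (the exact column set is an assumption; check the final solution for the real one):

```ts
// database/migrations/..._CreateNotes.ts: a sketch
import * as Knex from 'knex';

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable('notes', (table) => {
    table.increments('id').primary();
    table.text('content').notNullable();
    // camelCase here ends up as theme_id in the DB via knexSnakeCaseMappers.
    table.integer('themeId').references('id').inTable('themes');
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.dropTable('notes');
}
```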

6️⃣ Connect our models with Objection

In order to reflect the relational tables in our code, we need to create a bunch of corresponding classes called models. For now, they are just plain TypeScript classes located under the database/models directory, so let's sprinkle some Objection on them.

  • BaseModel (base.model.ts)
  • TagModel (tag.model.ts)
  • NoteTagModel (note-tag.model.ts)
  • ThemeModel (theme.model.ts)
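The repo holds the real files; minimal sketches of the base model and one concrete model might look like this (the field names are assumptions based on the tables above):

```ts
// base.model.ts
import { Model } from 'objection';

export class BaseModel extends Model {
  id!: number;
}
```

```ts
// theme.model.ts
import { BaseModel } from './base.model';

export class ThemeModel extends BaseModel {
  static tableName = 'themes';

  name!: string;
  // Maps to the font_family column thanks to knexSnakeCaseMappers.
  fontFamily!: string;
}
```

The remaining models follow the same pattern, each with its own tableName and fields.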

7️⃣ Mapping relations

Especially interesting for us is how Objection handles relations between tables and the way we can express them in code.
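Here is a sketch of how NoteModel might express its theme and tags relations (table and column names follow the relational model above; treat the details as assumptions):

```ts
// note.model.ts: a sketch
import { Model } from 'objection';
import { BaseModel } from './base.model';
import { TagModel } from './tag.model';
import { ThemeModel } from './theme.model';

export class NoteModel extends BaseModel {
  static tableName = 'notes';

  content!: string;
  themeId?: number;

  // A function, so circular imports between models resolve lazily.
  static relationMappings = () => ({
    theme: {
      relation: Model.BelongsToOneRelation,
      modelClass: ThemeModel,
      join: { from: 'notes.themeId', to: 'themes.id' },
    },
    tags: {
      relation: Model.ManyToManyRelation,
      modelClass: TagModel,
      join: {
        from: 'notes.id',
        // The join table; camelCase maps to note_tags.* in the database.
        through: { from: 'noteTags.noteId', to: 'noteTags.tagId' },
        to: 'tags.id',
      },
    },
  });
}
```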

8️⃣ Connecting models to the database (database.module.ts)

Each model class can be used to perform various SQL queries, but for that we need to wire those classes up with a Knex database connection.

Once they are wired up, we can expose those classes as injectable services to other modules.

DatabaseModule needs to be registered under the main ApplicationModule, so all its exported services are available to other modules.
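Putting it together, database.module.ts might look like this sketch (the provider tokens and the way the config is read are my assumptions):

```ts
// database.module.ts: a sketch
import { Global, Module } from '@nestjs/common';
import * as Knex from 'knex';
import { Model, knexSnakeCaseMappers } from 'objection';
import { NoteModel } from './models/note.model';
import { NoteTagModel } from './models/note-tag.model';
import { TagModel } from './models/tag.model';
import { ThemeModel } from './models/theme.model';

const models = [NoteModel, NoteTagModel, TagModel, ThemeModel];

// Expose every model class under its own name so services can inject it.
const modelProviders = models.map((model) => ({
  provide: model.name,
  useValue: model,
}));

const providers = [
  ...modelProviders,
  {
    provide: 'KnexConnection',
    useFactory: async () => {
      const knex = Knex({
        client: 'pg',
        connection: process.env.DATABASE_URL,
        ...knexSnakeCaseMappers(),
      });
      // Give all Objection models a connection to run their queries through.
      Model.knex(knex);
      return knex;
    },
  },
];

@Global()
@Module({
  providers: [...providers],
  exports: [...providers],
})
export class DatabaseModule {}
```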

9️⃣ Implementing .service.ts files

To start manipulating the data we have in the database, we need to implement the methods defined in the .service.ts files.

Each service relies on the model class(es) we've exposed through the module's exports above. Here is the NotesService implementation (notes.service.ts).
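A sketch along these lines (the create flow and the injection token are my assumptions):

```ts
// notes.service.ts: a sketch
import { Inject, Injectable } from '@nestjs/common';
import { NoteModel } from '../database/models/note.model';

@Injectable()
export class NotesService {
  constructor(
    @Inject(NoteModel.name) private readonly noteModel: typeof NoteModel,
  ) {}

  findAll() {
    return this.noteModel.query();
  }

  findOne(id: number) {
    return this.noteModel.query().findById(id);
  }

  create(note: { content: string; themeId?: number }, tagIds: number[] = []) {
    // Any error thrown inside the callback rolls the whole transaction back.
    return this.noteModel.transaction(async (trx) => {
      const created = await this.noteModel.query(trx).insert(note);
      await created.$relatedQuery('tags', trx).relate(tagIds);
      return created;
    });
  }
}
```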

As you can see, the .query() method is the gateway to building rich queries. The example above also includes a transaction: any error thrown in the transaction callback will cause the database changes made inside that callback to roll back.

🔟 Loading Note relations

Let's have a look at the findOne method in NotesController.

The notable change is the $loadRelated invocation. Here we're asking Objection to load the relations for this particular note.
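A sketch of the method (an excerpt; error handling omitted):

```ts
// notes.controller.ts (excerpt)
@Get(':id')
async findOne(@Param('id') id: string) {
  const note = await this.notesService.findOne(Number(id));
  // Ask Objection to fetch the tags and theme relations for this instance.
  return note.$loadRelated('[tags, theme]');
}
```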

tags and theme are the names of the relations defined in the NoteModel class. This is how Objection knows how to fetch them.

All fetched relations get transformed into appropriate model instances. Once fetched, Objection will create tags and theme fields for this particular note instance.

So, by default, all relations are loaded only on demand.

In case you want to fetch lots of objects with their relations already loaded, there is another way you can use.
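A sketch, using Objection's graph-fetching API (named withGraphFetched in Objection 2; older versions called it eager):

```ts
// Fetch all notes with their tags already attached.
const notes = await NoteModel.query().withGraphFetched('tags');
```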

Here, once all the notes are loaded, Objection loads the tags relation for all of them.

1️⃣1️⃣ Seeds

Now we're ready to generate seed files:

  • npm run seed:make 01-Tags
  • npm run seed:make 02-Themes
  • npm run seed:make 03-Notes
  • npm run seed:make 04-NoteTags

Seeds get generated under the database/seeds folder using our seed.stub.

Seed files are executed by Knex in order, so we have to ensure that the order is correct. This is the reason we've prefixed the seed files with numbers: we want tags created before note-tags, because the latter depends on the former.

Let's have a look at the 02-Themes.ts seed implementation.
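A sketch of it (the actual theme rows are illustrative):

```ts
// database/seeds/02-Themes.ts: a sketch
import * as Knex from 'knex';

export async function seed(knex: Knex): Promise<void> {
  // Clear the table first so the seed can be re-run safely.
  await knex('themes').del();

  // fontFamily maps to the font_family column via knexSnakeCaseMappers.
  await knex('themes').insert([
    { name: 'light', fontFamily: 'Arial' },
    { name: 'dark', fontFamily: 'Georgia' },
  ]);
}
```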

1️⃣2️⃣ dotenv

dotenv is a library that loads environment variables from a .env file into process.env. We're going to use it to define the DATABASE_URL env var, which will then be used throughout the app, including in the migration and seed scripts.

All you need to do is:

  1. Create a .env file at the root of the app and put this single line in it: DATABASE_URL=postgres://postgres:docker@localhost:5432/postgres

This connection string is derived from the run:pg-docker command in package.json. Postgres uses postgres as the name of both the default user and the default database.

  2. Add the dotenv import at the very top of knexfile.ts and main.ts
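That import is the standard dotenv bootstrap, e.g.:

```ts
// Must be the very first thing, before anything reads process.env.
import * as dotenv from 'dotenv';

dotenv.config();
```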

1️⃣3️⃣ Running PostgreSQL

At this point, we need to start our PostgreSQL instance:

`npm run run:pg-docker`

Then create the schema (by executing migrations) and populate it with data (by executing seeds):

`npm run migrate && npm run seed`

🚀 Playing with the app

Now you should have a fully working Nest application with Objection support.

You can run it using

`npm run start`

README.md contains example HTTP requests (using curl) that you can modify and execute against the server.

And we’re done 🎉

✍️ Summary

In this article, I shared my thoughts on mental fatigue and on how, with the right tools, it can be reduced by adopting clear and intuitive concepts that help you communicate and share knowledge with others.

Nest does this by providing a conceptual base, which is great not only for reasoning about the project but also for communicating the way it works to other developers.

Objection gives you a framework that lets you think in SQL terms and avoid wasting time debugging esoteric DSLs. I would call it "the ORM without the pain".

I hope you've enjoyed the article and gained some understanding of how to start using Objection with Nest.

You can find the full project on my GitHub.
