Build Typescript Project with Bazel Chapter 2: File Structure

In the last chapter, we introduced the basic concepts of Bazel. In this blog, I would like to talk about the file structure of a Bazel project.

Concept and Terminology

Before we introduce the file structure, we need to understand several key concepts and terminology in Bazel.

  • Workspace
  • Package
  • Target
  • Rule

These concepts and pieces of terminology come together in build files, which Bazel analyzes and executes.

The basic relationship among these concepts is hierarchical: a workspace contains packages, a package contains targets, and targets are defined by rules or files. We will discuss the details one by one.

Workspace

A "workspace" refers to the directories, which contain

  1. The source files of the project.
  2. Symbolic links contain the build output.

The workspace definition lives in a file named WORKSPACE, or WORKSPACE.bazel, at the root of the project directory. NOTE: one project can only have one WORKSPACE definition file.

Here is an example of the WORKSPACE file.

workspace(
    name = "com_thisdot_bazel_demo",
)

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# Fetch rules_nodejs so we can install our npm dependencies
http_archive(
    name = "build_bazel_rules_nodejs",
    sha256 = "ad4be2c6f40f5af70c7edf294955f9d9a0222c8e2756109731b25f79ea2ccea0",
    urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/0.38.3/rules_nodejs-0.38.3.tar.gz"],
)

load("@build_bazel_rules_nodejs//:defs.bzl", "node_repositories", "yarn_install")

node_repositories()

yarn_install(
    name = "npm",
    package_json = "//:package.json",
    yarn_lock = "//:yarn.lock",
)

# Install all Bazel dependencies of the @npm npm packages
load("@npm//:install_bazel_dependencies.bzl", "install_bazel_dependencies")

install_bazel_dependencies()

# Setup the rules_typescript toolchain
load("@npm_bazel_typescript//:index.bzl", "ts_setup_workspace")

ts_setup_workspace()

In a WORKSPACE file, we should

  1. Define the name of the workspace. The name should be globally unique, or at least unique within your organization. You could use a reverse DNS name, such as com_thisdot_bazel_demo, or the name of the project on GitHub.
  2. Install environment-related packages, such as yarn/npm/bazel.
  3. Set up the toolchains needed to build/test the project, such as typescript/karma.

Once WORKSPACE is ready, application developers don't really need to touch this file.

Package

  • The primary unit of code organization (something like a module) in a repository.
  • A collection of related files and a specification of the dependencies among them.
  • A directory containing a file named BUILD or BUILD.bazel, residing beneath the top-level directory of the workspace.
  • A package includes all files in its directory, plus all subdirectories beneath it, except those which themselves contain a BUILD file.

It is important to know how to split a project into packages. It should be easy for users to develop, test, and share the unit of a package. If the unit is too big, the package has to be rebuilt on every file change. If the unit is too small, it becomes very hard to maintain and share. This is not a Bazel-specific issue; it is a general problem of project management.

In Bazel, every package will have a BUILD.bazel file, containing all of the build/test/bundle target definitions.

For example, in the Angular repository, every directory under the packages directory is a unit of code organization, and also a Bazel package.
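
A simplified, hypothetical layout of such a repository might look like this:

packages/
├── animations/
│   ├── BUILD.bazel
│   └── src/
├── common/
│   ├── BUILD.bazel
│   └── src/
└── core/
    ├── BUILD.bazel
    └── src/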

Let's take a look at how gulp tasks were defined in Angular, so we can better understand the difference between Bazel and gulpjs.

gulp.task('build-animations', () => {});
gulp.task('build-core', () => {});
gulp.task('build-core-schematics', () => {});

In most cases,

  • a gulpjs file doesn't have a 1:1 relationship with the package directory.
  • a gulpjs file can reference any files inside the project.

But for Bazel,

  • Each package should have its own BUILD.bazel file.
  • A BUILD.bazel file can only reference files inside the current package. If the current package depends on other packages, we need to reference Bazel build targets from those packages instead of their files directly, as shown in the sketch below.
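
Here is a minimal sketch of such a cross-package dependency. The //common:utils target and the file names are hypothetical:

load("@npm_bazel_typescript//:index.bzl", "ts_library")

ts_library(
    name = "app",
    srcs = ["app.ts"],
    deps = [
        # Allowed: a target exported by another package, referenced by label.
        "//common:utils",
        # Not allowed: referencing a file in another package directly,
        # e.g. "//common/utils.ts".
    ],
)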

This is exactly the structure used in the Angular repo: each Bazel package directory contains its sources together with its own BUILD.bazel file.

Build File

Before we talk about targets, let's take a look at the content of a BUILD.bazel file.

load("@npm_bazel_typescript//:index.bzl", "ts_library")

package(default_visibility = ["//visibility:private"])

ts_library(
    name = "lib",
    srcs = [":lib.ts"],
    visibility = ["//visibility:public"],
)

The language of the BUILD.bazel file is Starlark.

  • Starlark is a subset of Python.
  • It is a very feature-limited language. Many Python features, such as class, import, while, yield, lambda, is, and raise, are not supported.
  • Recursion is not allowed.
  • Most of Python's built-in functions are not supported.

So Starlark is a deliberately simple language that supports only a limited subset of Python syntax.
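
As a rough illustration (the file and function below are hypothetical), a .bzl file can define constants and simple functions within this restricted syntax:

# common.bzl -- a hypothetical helper file
SRC_EXTENSIONS = [".ts"]

def ts_sources(names):
    """List comprehensions are allowed; while loops and classes are not."""
    return [name + ext for name in names for ext in SRC_EXTENSIONS]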

Target

The BUILD.bazel file contains build targets. Those targets are the definitions of the build, test, and bundle work we want to achieve.

The build target can represent:

  • Files
  • Rules

A target can also depend on other targets:

  • Circular dependencies are not allowed.
  • Two targets generating the same output will cause an error.
  • Target dependencies must be declared explicitly.

Let's look at the previous sample again:

load("@npm_bazel_typescript//:index.bzl", "ts_library")

package(default_visibility = ["//visibility:private"])

ts_library(
    name = "lib",
    srcs = [":lib.ts"],
    visibility = ["//visibility:public"],
)

Here, ts_library is a rule imported from the @npm_bazel_typescript workspace, and ts_library(name = "lib") declares a target. Its name is lib, and it defines the metadata for compiling lib.ts with the ts_library rule.

Label

Every target has a unique name called a label. For example, if the BUILD.bazel file above is under the /lib directory, then the label of the target is

@com_thisdot_bazel_demo//lib:lib

The label is composed of several parts.

  1. the name of the workspace: @com_thisdot_bazel_demo.
  2. the name of the package: lib.
  3. the name of the target: lib.

So, the composition is <workspace name>//<package name>:<target name>.

Most of the time, the name of the workspace can be omitted, so the label above can also be expressed as //lib:lib.

Additionally, if the name of the target is the same as the package's name, the name of the target can also be omitted. Therefore, the label above can also be expressed as //lib.

NOTE: The label for the target needs to be unique in the workspace.
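
To make this concrete, here is a hypothetical target whose deps attribute refers to the //lib target; the comments show the three equivalent label forms (only one is needed):

load("@npm_bazel_typescript//:index.bzl", "ts_library")

ts_library(
    name = "app",
    srcs = ["app.ts"],
    # Equivalent labels for the same target:
    #   "@com_thisdot_bazel_demo//lib:lib"  (fully qualified)
    #   "//lib:lib"                         (workspace name omitted)
    #   "//lib"                             (target name equals package name)
    deps = ["//lib"],
)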

Visibility

We can also set the visibility to control whether the targets inside this package can be used by other packages.

package(default_visibility = ["//visibility:private"])

The visibility can be:

  • private: the targets can only be used inside the current package.
  • public: the targets can be used everywhere.
  • //some_package:package_scope: the targets can only be used in the specified scope under //some_package. The package_scope can be __pkg__, __subpackages__, or a package group. An example follows this list.
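
Here is a hypothetical BUILD.bazel snippet showing these visibility settings (the package and target names are made up):

load("@npm_bazel_typescript//:index.bzl", "ts_library")

package(default_visibility = ["//visibility:private"])

ts_library(
    name = "utils",
    srcs = ["utils.ts"],
    # Visible to the //app package and every package beneath it.
    visibility = ["//app:__subpackages__"],
)

ts_library(
    name = "internal",
    srcs = ["internal.ts"],
    # No visibility attribute: falls back to the package default (private).
)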

And if the rules defined in another package are visible to us, we can use load to import them. For example:

load("@npm_bazel_typescript//:index.bzl", "ts_library")

Here, we import the ts_library rule from the Bazel typescript package.

Target

  • A target can be a file or a rule.
  • A target has inputs and outputs, which are known at build time.
  • A target will only be rebuilt when its inputs change.

Let's take a look at Rule first.

Rule

A rule is just like a function or macro: it accepts named parameters as options. As mentioned in the previous post, calling a rule does not execute an action. It only records metadata, and Bazel decides what to do and when.

ts_library(
    name = "lib",
    srcs = [":lib.ts"],
    visibility = ["//visibility:public"],
)

So here, we use the ts_library rule to define a target named lib. Its srcs is lib.ts in the same directory, and its visibility is public, so this target can be accessed from other packages.

Rule Naming

It is very important to follow the naming convention when you want to create your own rule.

  • *_binary: an executable program in a given language (e.g. nodejs_binary)
  • *_test: a special *_binary rule for testing
  • *_library: a compiled module for a given language (e.g. ts_library)

Rule Common Attributes

Several common attributes exist in almost all rules. For example:

ts_library(
    name = "lib",
    srcs = [":index.ts"],
    tags = ["build-target"],
    visibility = ["//visibility:public"],
    deps = [
        ":date",
        ":user",
    ],
)

  • name: unique name within this package
  • srcs: inputs of the target, typically files
  • deps: compile-time dependencies
  • data: runtime dependencies
  • testonly: the target should only be built and run as part of testing (bazel test)
  • visibility: specifies who can declare a dependency on the given target
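
For instance, a hypothetical test library using these attributes might look like this (the file and target names are made up):

load("@npm_bazel_typescript//:index.bzl", "ts_library")

ts_library(
    name = "lib_test_sources",
    testonly = True,           # only usable from test targets
    srcs = ["lib.spec.ts"],
    deps = [":lib"],           # compile-time dependency on the lib target
)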

Let's see another example:

http_server(
    name = "prodserver",
    data = [
        "index.html",
        ":bundle",
        "styles.css",
    ],
)

Here, we use the data attribute. Files listed in data are not compile-time inputs; they are runtime dependencies that Bazel makes available when the target is executed.

So, in this blog, we introduced the basic concepts of Bazel's file structure. In the next blog, we will introduce how to query Bazel targets.
