Getting Started With Docker

Introduction

Docker is quickly becoming one of the most popular technologies for hosting web applications. It is a set of tools for packaging, distributing, and running software applications. Developers write configuration files to create packages called images, which are distributed via web-based repositories (some public, some private). Images downloaded from repositories are used as templates to create isolated environments called "containers" that run applications within them. Many containers can run alongside each other on a single host. Memory and CPU resources are shared among all the containers running on a machine, but each container has its own fully isolated file system and environment. This is convenient for a number of reasons, but most of all, it simplifies the process of installing and running one or more applications on a single host machine.

Installing Docker

If you are on macOS or Windows, the best way to install Docker is by installing Docker Desktop. It provides a complete installation of Docker along with a GUI for managing it. You can use the GUI to start or stop the Docker daemon, or to install updates to the Docker platform. (Bonus: Docker Desktop can also manage a local Kubernetes cluster for you. It's not relevant to this article, but it provides a straightforward way to get started with Kubernetes, a platform for managing running containers across a scalable number of hosts.) Linux users can install Docker from their distribution's package manager, though the Docker Desktop GUI is not included. Installation instructions for the most popular Linux distributions can be found in the Docker documentation.
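
For example, on a recent Ubuntu release, a minimal install might look like the following (this assumes the docker.io package is available in your distribution's repositories; the exact package name and setup vary by distribution):

sudo apt-get update
sudo apt-get install -y docker.io

# Verify the install by printing the version and running a test container
docker --version
sudo docker run hello-world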

Working With 3rd Party Containers

The first thing to try once you've installed Docker on your computer is running containers based on 3rd party images. This exercise is a great way to quickly demonstrate the power of Docker. First, open your favorite system terminal and enter docker pull nginx.

This command downloads the official nginx image from Docker Hub. Docker Hub is a managed host for Docker images. You can think of it sort of like npm for Docker. We've pulled the newest version of the nginx image. However, as with npm, we could have chosen a specific version to download by changing the command to docker pull nginx:1.18. You can find more details about an image, including which versions are available for download, on its Docker Hub page.
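
For example, to pin a specific version and then confirm what's in your local image cache, you might run something like this (the image ID, age, and size shown are illustrative):

docker pull nginx:1.18
docker images

# REPOSITORY   TAG    IMAGE ID       CREATED       SIZE
# nginx        1.18   c2c45d506085   2 years ago   133MB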

Now that we've downloaded an image, we can use it to create a container on our local machine just as easily as we downloaded it. Run docker run -d -p 8080:80 nginx to start an nginx container. I've added a couple of options to the command. By default, nginx listens on port 80, and port 80 on your machine may already be in use or may require elevated privileges to bind. Therefore, we use -p 8080:80 to map port 80 in the container to port 8080 on your local machine. We use -d to detach the running container from the terminal session, which allows us to continue using the same terminal while the nginx container runs in the background.
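
Putting that together, you can start the container and check that it's serving traffic without leaving your terminal (this assumes curl is installed; a browser works just as well):

# Start nginx in the background, mapping host port 8080 to container port 80
docker run -d -p 8080:80 nginx

# Request the welcome page through the mapped port
curl http://localhost:8080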

Now, you can navigate to http://localhost:8080 with your web browser, and see the nginx welcome page that is being served from within Docker. You can stop the nginx container running in the background by using the docker kill command. First, you'll need to use docker ps to get its container ID, then you can run docker kill <container ID>. Now, if you navigate to http://localhost:8080 again, you will be met with an error, and docker ps will show no containers running.
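
The whole stop sequence looks something like this; the container ID and name below are illustrative and will differ on your machine:

docker ps
# CONTAINER ID   IMAGE   COMMAND                  PORTS                  NAMES
# f3b7a2c91d04   nginx   "/docker-entrypoint.…"   0.0.0.0:8080->80/tcp   elegant_morse

docker kill f3b7a2c91d04

# Listing containers again should show nothing running
docker ps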

The ability to simply download and run any published image is one of the most powerful features of Docker. Docker Hub hosts millions of prebuilt images, many of which are officially supported by the developer of the software they contain. This lets you quickly and easily deploy 3rd party software to your servers and workstations without having to follow bespoke installation processes. However, this isn't all that Docker can do. You can also use it to build your own images so that you can benefit from the same streamlined deployment processes for your own software.

Build Your Own

As I said before, Docker isn't only good for running software from 3rd parties. You can build and publish your own images so that your applications can also benefit from the streamlined deployment workflows Docker provides. Docker images are built using two configuration files, Dockerfile and .dockerignore. Dockerfile is the more important of the two. It contains the instructions that tell Docker how to build an image that runs your application within a container. The .dockerignore file is similar to Git's .gitignore file. It contains a list of project files that should never be copied into container images.

For this example, we'll Dockerize a dead-simple "hello world" app, written with Node.js and Express. Our example project has a package.json and index.js like the following:

package.json:

{
  "name": "hiwrld",
  "version": "1.0.0",
  "description": "hi world",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "Adam Clarke",
  "license": "MIT",
  "dependencies": {
    "express": "^4.17.1"
  }
}

index.js:

const express = require('express')
const app = express()
const port = 3000
// Allow the greeting to be overridden through an environment variable
const greeting = process.env.GREETING

app.get('/', (req, res) => {
  res.send(greeting || 'Hello World!')
})

app.listen(port, () => {
  console.log('Example app listening at http://localhost:' + port)
})

The package.json manages our single Express dependency and configures an npm start command with which to start the application. In index.js, I've defined a basic Express app that responds to requests on the root path with a greeting message; the greeting can be overridden through the GREETING environment variable and falls back to 'Hello World!'.
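
Before containerizing anything, it's worth confirming the app runs locally (this assumes you have Node.js and npm installed on your machine):

# Install dependencies and start the server on port 3000
npm install
npm start

# In a second terminal, confirm the greeting
curl http://localhost:3000
# Hello World!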

The first step to Dockerizing this application is creating a Dockerfile. The first thing we should do with our empty Dockerfile is add a FROM directive. This tells Docker which image we want to use as the base for our application image. Any Docker image published to a repository can be used in your FROM directive. Since we've created a Node.js application, we'll use the official node Docker image, which saves us from having to install Node.js ourselves. Add the following to the top of your empty Dockerfile:

FROM node:15

Next, we need to make sure that our npm dependencies are installed into the container so that the application will run. We will use the COPY and RUN directives to copy our package.json file (along with the package-lock.json that was generated when modules were installed locally) and run npm install. We'll also use the WORKDIR directive to create a folder and make it the image's working directory. Copying only the package files before running npm install, rather than the whole project, lets Docker cache the installed-dependencies layer so that later builds can skip this step when only application code has changed. Add the following to the bottom of your Dockerfile:

# Create a directory for the app and make it the working directory
WORKDIR /usr/src/app

# Copy package files from the local filesystem directory to the working directory of the container
# You can use a wildcard character to capture multiple files for copying. In this case we capture
# package.json and package-lock.json
COPY package*.json ./

# Now install the dependencies into the container image
RUN npm install

Now that we've configured the image so that Docker installs the application dependencies, we need to copy our application code and tell Docker how to run our application. We will again use COPY, but we'll add CMD and EXPOSE directives as well. These tell Docker how to start our application and document which port it listens on. Note that EXPOSE is informational; the port is actually published at runtime with the -p option, as we saw with nginx. Add these lines to your Dockerfile:

# Copy everything from the local filesystem directory to the working
# directory, including the source code
COPY . .

# The app runs on port 3000
EXPOSE 3000

# Use the start script defined in package.json to start the application
CMD ["npm", "start"]

Your completed Dockerfile should look like this:

FROM node:15

# Create a directory for the app and make it the working directory
WORKDIR /usr/src/app

# Copy package files from the local filesystem directory to the working directory of the container
# You can use a wildcard character to capture multiple files for copying. In this case we capture
# package.json and package-lock.json
COPY package*.json ./

# Now install the dependencies into the container image
RUN npm install

# Copy everything from the local filesystem directory to the working
# directory, including the source code
COPY . .

# The app runs on port 3000
EXPOSE 3000

# Use the start script defined in package.json to start the application
CMD ["npm", "start"]

Now that we have a complete Dockerfile, we need to create a .dockerignore as well. Since our project is simple, we only need to ignore our local node_modules folder. That ensures the locally installed modules aren't copied from your local disk via the COPY . . directive in our Dockerfile after they've already been installed into the container image with npm. We'll also ignore npm debug logs since they're never needed in the image, and it's a best practice to keep Docker images' storage footprints as small as possible. Add the following .dockerignore to the project directory:

node_modules
npm-debug.log

On a larger project, you would want to add things like the .git folder and any text or configuration files that aren't required for the app to run, like continuous integration configuration or project readme files.
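
For instance, a larger project's .dockerignore might look something like this; the entries are illustrative and depend entirely on what your project contains:

node_modules
npm-debug.log
.git
.github
docs
README.md
.env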

Now that we've got our Docker configuration files, we can build an image and run it! To build your Docker image, open your terminal, navigate to the directory containing your Dockerfile, and run docker build -t hello-world . (the trailing dot tells Docker to use the current directory as the build context). Docker will look for your Dockerfile in the working folder and build an image, giving it a tag of "hello-world". The "tag" is just a name we can use later to reference the image.
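
After the build completes, you can confirm that the new image exists in your local cache:

docker build -t hello-world .

# Confirm the image was created and tagged
docker images hello-world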

Once your image build has completed, you can run it! Just as you did before with nginx, simply run docker run -d -p 3000:3000 hello-world. Now, you can navigate your browser to http://localhost:3000, and you will be politely greeted by our example application. You may also use docker ps and docker kill as before in order to verify or stop the running container.
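
And since our index.js reads the GREETING environment variable, you can pass a custom greeting at runtime with docker run's -e option (the message below is just an example):

docker run -d -p 3000:3000 -e GREETING="Hi from Docker" hello-world

curl http://localhost:3000
# Hi from Docker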

Conclusion

By now, the power Docker provides should be clear. Not only does Docker make it incredibly easy to run 3rd party software and applications in your cloud, it also gives you tools that make deploying your own applications just as simple. Here, we've only scratched the surface of what Docker is capable of. Stay tuned to the This Dot blog for more information about how you can use Docker and other cloud native technologies with your applications.

