Dustin Goodman

AUTHOR

Dustin Goodman

Engineering Manager

Engineering Manager with a passion for web and application development. Speaker and writer on work experiences and software development. Dog dad and musician fueled by coffee.


Configure your project with Drizzle for Local & Deployed Databases

It was a fun Friday, and Jason Lengstorf and I both decided to try and use Drizzle on our respective projects. Jason went the SQLite route and wrote an amazing article about how he got his setup working. My approach was a bit different. I started with Vercel's Postgres + Drizzle Next.js Starter and wanted to use PostgreSQL. If you don't know what Drizzle is, it's a type-safe ORM similar to Prisma. My colleague, Dane Grant, wrote a great intro post on it, so go check his article out if you want to learn more about Drizzle.

Getting my project off the ground took longer than I expected, especially coming from a starter kit, but I figured it out. This is the article I wish I had at the time to help get this project set up with less friction. I will focus on using local and Vercel PostgreSQL, but this same setup should work with other databases and adapters, and I'll note where those places are. While I did use Next.js here, these setup instructions work for other projects, too.

Configuring Drizzle

Every project that leverages Drizzle requires a drizzle.config in the root. Because I'm leveraging TypeScript, I named mine drizzle.config.ts, and to secure secrets, I also installed dotenv. My final file appeared as follows: `

The schema field is used to identify where your project's database schema is defined. Mine is in a file called schema.ts, but you can split your schema into multiple files and use glob patterns to detect all of them. The out field determines where your migration outputs will be stored. I recommend putting them in a folder in the same directory as your schema to keep all your database-related information together. Additionally, the config requires a driver and dbCredentials.connectionString to be specified so Drizzle knows which APIs to leverage and where your database lives. For the connectionString, I'm using dotenv to store the value in a secret and protect it. The connectionString should be in a valid connection format for your database. For PostgreSQL, this format is postgresql://user:password@host:port/database.
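For reference, a minimal config along these lines works with the pg driver. The schema/out paths and the DATABASE_URL variable name here are illustrative, not the exact values from my project:

```ts
// drizzle.config.ts: a minimal sketch, not the exact file from this project
import "dotenv/config";
import type { Config } from "drizzle-kit";

export default {
  // where the schema lives; a glob also works if the schema is split across files
  schema: "./db/schema.ts",
  // where generated migration files are written
  out: "./db/migrations",
  driver: "pg",
  dbCredentials: {
    // assumed variable name; set it in your .env file
    connectionString: process.env.DATABASE_URL ?? "",
  },
} satisfies Config;
```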
Getting your connection string

Now, you may be wondering how to get that connection string. If you're hosting on Vercel using their Postgres offering, you need to go to your Vercel dashboard, select their Postgres option, and attach it to your app. This sets environment variables for you that you can pull into your local development environment. This is all covered in their "Getting Started with Vercel Postgres" guide. If you use a different database hosting solution, they'll offer similar instructions for fetching your connection string.

However, I wanted a local database I could modify and blow away as needed for this project. Out of the box, Drizzle does not offer a database initialization command, and I needed something that could be easily and quickly replicated across different machines. For this, I pulled in Docker Compose and set up my docker-compose.yaml as follows: `

The 3 most important values to note here are the values in the environment key and the ports. These are what allowed me to determine my connection string. For this example, it would be: postgresql://postgres:postgres@localhost:5432/my-local-db. With the compose file set, I ran docker-compose up -d to get the container running, which also initializes the database. Now, we can connect to and operate on the database as needed.

Creating the database connection

To make operations in our app, we need a database connection instance. I put mine in db/drizzle.ts to keep all my related database files together. My file looks like: `

This is a bit more complicated because we're using two different Drizzle adapters depending on our environment. For local development, we're using the generic PostgreSQL adapter, but for production, we're using the Vercel adapter. While these have different initializers, they have the same output interface, which is why this works. The same wouldn't be true if you used MySQL locally and PostgreSQL in production. If we chose RDS or a similar PostgreSQL solution, we could use the same postgres adapter in both cases and only change the connection string. That's all this file does in the end: it detects the environment and uses the corresponding adapter.
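The shape of that file is roughly the following sketch. The adapter entry points follow drizzle-orm's node-postgres and Vercel Postgres packages; the environment check and the DATABASE_URL name are illustrative:

```ts
// db/drizzle.ts: a sketch of the environment-based adapter switch described above
import { drizzle as drizzleNode } from "drizzle-orm/node-postgres";
import { drizzle as drizzleVercel } from "drizzle-orm/vercel-postgres";
import { sql } from "@vercel/postgres";
import { Pool } from "pg";
import * as schema from "./schema";

const isProduction = process.env.NODE_ENV === "production";

// Both adapters expose the same query interface, so the rest of the app
// doesn't care which one it received.
export const db = isProduction
  ? drizzleVercel(sql, { schema })
  : drizzleNode(
      new Pool({ connectionString: process.env.DATABASE_URL }), // assumed variable name
      { schema }
    );
```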
If we go to use this exported instance, it won't be able to find our tables or provide type safety. This is because we haven't created our database tables yet.

Creating database tables

To get our database tables created, we're going to leverage Drizzle's Migrations. This allows us to make atomic changes to our database as our schema evolves. To accomplish this, we define the schema changes in our schema files as specified in our config. Then we can run npm run drizzle-kit generate:pg (or whatever script runner you use) to generate the migration SQL file, which will be located where we specified in our config. You want to check this file into source control! By default, Drizzle doesn't allow you to override migration names _yet_ (they're working on it!), so if you want to make your migration file more descriptive, you need to take both of these steps:

1. Rename the migration file. Take note of the old name.
2. Locate _journal.json. It should be in a folder called meta inside your migration folder. From there, find the old file name and replace it with the new file name.

Now, we need to run the migrations. I had some issues with top-level awaits and tsx like the Drizzle docs recommend, so I had to go a slightly different route, and I'm still not thrilled about it. I made a file called migrate.mts that I stored next to my drizzle.ts. In theory, I should have been able to import my drizzle connection instance here and use that, but I ran out of time to figure it out and ended up repeating myself across files. Here's the file: `

Here, I'm connecting to the correct database depending on the environment and then running the Drizzle migrate command. For local development, I set my connection pool to max out at 1. This probably isn't necessary for this use case, but when connecting to a cluster, it is a recommended best practice from the Drizzle team. For the local case, I also had to close the connection to the database when I was done. For both cases, though, I had to specify the migrations folder location. I could probably DRY this up a bit, but hopefully the Drizzle team will eliminate this need and use the config to set this value in the future.

With the above file set and our schema generated, we can now run npm run tsx db/migrate.mts, and our database will have our latest schema. We can now use the db client to fetch and store data in our database.

Note: Jason uses the push command here. This is fine for an initial database creation, but it will override tables in the future. The migration path is the recommended pattern for non-destructive database updates.

Conclusion

Congratulations! We can connect to our database and perform CRUD operations against our tables. We can also use Drizzle Studio to modify and inspect our data. To review, we had to:

1. Set up a local PostgreSQL server via a tool like Docker Compose
2. Configure the database adapter to work in local mode
3. Generate a schema
4. Create a script to execute migrations so our database is aligned with our schema

This was my first experience with Drizzle, and I enjoyed its SQL-like interfaces, which made it easy to quickly prototype my project. I noticed in their Discord that they're about to have a full-time maintainer, so I'm excited to see what their future looks like. I hope you enjoy it too!...


Demystifying React Server Components

React Server Components (RSCs) are the latest addition to the React ecosystem, and they've caused a bit of a disruption to how we think about React....


How to configure and optimize a new Serverless Framework project with TypeScript

Elevate your Serverless Framework project with TypeScript integration. Learn to configure TypeScript, enable offline mode, and optimize deployments to AWS with tips on AWS profiles, function packaging, memory settings, and more....


Getting the most out of your project management tool

Does your team constantly complain about your project management tool? In this post, we'll talk about why this might be true and how you can get the most out of your project management tools!...


Utilizing API Environment Variables on Next.js Apps Deployed to AWS Amplify

Although Next.js is a Vercel product, you may choose not to deploy to Vercel due to their pricing model or concerns with vendor lock-in. Fortunately, several other platforms fully support deploying Next.js, including AWS Amplify. Whether you're using the Next.js app directory or not, you still have API routes that get deployed as serverless functions to whatever cloud provider you choose. This is no different on AWS Amplify. However, Amplify may require an extra step for the serverless functions if you're using environment variables. Let's explore how AWS Amplify deploys your API routes, and how you can properly utilize environment variables in this context.

How AWS Amplify manages Next.js API Routes

When you deploy Next.js apps via Amplify, it takes the standard build outputs, stores them in S3, and serves them from behind a CloudFront distribution. However, when you start introducing server-side rendering, Amplify utilizes Lambda@Edge functions. These edge functions execute the functionality required to properly render the server-rendered page. This same flow works for API routes in a Next.js app: they're deployed to individual Lambdas.

In Next.js apps, you have two types of environment variables. There are the variables prefixed with NEXT_PUBLIC_ that indicate to Next.js that the variable is available on the frontend of your application and can be exposed to the general public. At build time, Amplify injects these variables, with values stored in the Amplify Console UI, into your frontend application. You also have other environment variables that represent secrets that should not be exposed to users. These will not be included in your build. However, neither set of these variables will be injected into your API routes. If you need any environment variable in your API routes, you will need to explicitly inject these values into your application at build time so they can be referenced by the Next.js systems and stored alongside your Lambdas.

Injecting Environment Variables into the Amplify Build

By default, Amplify generates the following amplify.yml file that controls your application's continuous delivery (CD). The following is that default file for Next.js applications: `

To inject variables into our build, we need to write them to a .env.production file before the application build runs in the build phase. We can do that using the following bash command: `

env pulls all accessible environment variables. We use the pipe operator (|) to pass the result of that command to grep -e, which searches the output for the matching pattern. In this case, that's our environment variable, and grep outputs the line it is on. We then use the >> operator to append to the .env.production file, or create it if it does not exist. Be careful not to use a single > operator, as that will overwrite your file's full content. Our amplify.yml should now look like this: `

It is important to note that you have to do this for all environment variables you wish to use in an API route, whether they have the NEXT_PUBLIC_ prefix or not. Now, you can use process.env.VARIABLE_NAME in your API routes to access these values without any problems. If you want to learn more about environment variables in Next.js, check out their docs.
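As a quick illustration, an API route reading one of these injected values might look like the following. The route path and the MY_API_KEY name are placeholders for whatever variable you injected during the build:

```ts
// pages/api/example.ts: illustrative only; the route and variable names are placeholders
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // Available at runtime because it was written to .env.production during the build
  const apiKey = process.env.MY_API_KEY;

  if (!apiKey) {
    return res.status(500).json({ error: "MY_API_KEY is not configured" });
  }

  return res.status(200).json({ configured: true });
}
```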
Conclusion

In short, AWS Amplify deploys your Next.js API routes as Lambda@Edge functions that can't access your console-set environment variables by default. As a result, you'll need to use the method described above to get environment variables into your functions as needed. If you want to get started with Next.js on Amplify today, check out our starter.dev kit and deploy it to your AWS Amplify account. It'll auto-connect to your git repository and auto-deploy on push, and collaborating with others won't cost you extra per seat....


Leveraging Astro's Content Collections

Astro's content-focused approach to building websites got a major improvement with their v2 release. If you're not familiar with Astro, it is a web framework geared towards helping developers create content-rich websites that are highly performant. It enables developers to use their favorite UI framework to build components leveraging an islands architecture, and provides the end user with just the minimal download needed to interact with the site, progressively enhancing the site as needed.

Astro is a fantastic tool for building technical documentation sites and blogs because it provides markdown and MDX support out of the box, which enables a rich writing experience when you need more than just base markdown. The React Docs leverage MDX to help the documentation writers provide the amazing experience we've all been enjoying with the new docs. In Astro v2, they launched Content Collections, which has significantly improved their already impressive developer experience (DX). In this post, we're going to look into how Astro (and other frameworks) managed content before Content Collections, what Content Collections are, and some of the superpowers Content Collections give us in our websites.

How Is Content Managed in Projects? A little bit of history…

Content management for websites has always been an interesting challenge. The question is typically: where should I store my content and manage it? We have content management systems (CMS), like WordPress, that people have historically used, and still use, to quickly build out websites. We also have headless CMSs like Contentful and Sanity that enable writers to enter their content, and then bring on developers to build out the site utilizing modern web frameworks to display that content. All these solutions have enabled us to manage our content in a meaningful way, especially when the content writers aren't developers or technical content writers. However, these tools and techniques can be limiting for writers who want to use rich content objects. For example, in the React Docs, they use Sandpack to create interactive code samples. How can we achieve these same results in our projects?

The Power of MDX

This is where MDX comes in. We can create reusable markdown components that allow writers to progressively enhance their blog posts with interactive elements without requiring them to write custom code into their article. In the example below, we can see the HeaderLink component that allows the writer to add a custom click handler on the link that executes a script. While this is a simple example, we could expand this to create charts, graphs, and other interactive elements that we normally couldn't with plain markdown. Most CMS systems haven't been upgraded to handle MDX yet, so to provide this type of experience, we need to provide a good writing experience in our codebases.

The MDX Experience

Before Content Collections, we had two main approaches for structuring content in our projects. The first was to write each new document as a markdown or MDX page in our pages directory, and allow the file system router to handle the routing and define pages for us. This makes it easy to map a blog post to a page quickly. However, this leads to clutter: as our content grows, our directory grows. This can make it harder to find files or articles unless a clear naming convention is utilized, which can be hard to enforce and maintain. It also mixes our implementation details and content documents, which can cause some organizational mess.

The second approach is to store our content in a separate directory, and then create a page to collect the data out of this directory and organize it. This is the approach the React Docs take. This model has the clear advantage that the content and implementation details are separated. However, in these models, the page responsible for bringing the content together becomes a glue file trying to do file system operations and join data in a logical way. This can be very brittle, as any refactor could cause breakage in this model. Astro enables doing this using their Astro.glob API, but it has some limitations we'll go over a little later.

So… What Are Content Collections?

Content Collections enable you to better manage content files in your project. They provide a standard for organizing content, validating aspects of the content, and providing type-safety features for your content. Content Collections took the best parts of the separate directory approach, similar to the React Docs, and did their best to eliminate all the cons of this approach. You can leverage Content Collections by simply moving your content into the src/content directory of your project under a folder named for the type of content it represents. Is it a blog post? Stick it in blog. Working with a newsletter? Toss it in newsletter. These folders are the "collections". You can stick either .md or .mdx files in these folders, and those are your "content entries". Once your content is in this structure, you can use Astro's new content APIs to query your data in a structured way and start using its superpowers.

Supercharging your Content!

Query Your Content like a Database

Astro's content API provides two functions for querying your data: getCollection() and getEntryBySlug(). getCollection() takes two arguments: the collection name and a filter function. This enables you to fetch all the content in a collection and filter it to only specific files/entries based on frontmatter parameters of your choosing. getEntryBySlug() takes the collection name and file slug and returns the specific requested file. What's particularly meaningful about these functions is that they return content with full TypeScript typings so you can validate your entries. You don't need to write file system connecting logic and manage it yourself anymore.
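For example, querying a blog collection looks roughly like this. The collection name, the draft field, and the slug are placeholders for whatever your project defines:

```ts
// In an .astro frontmatter block or a .ts module; names are illustrative
import { getCollection, getEntryBySlug } from "astro:content";

// All entries in the "blog" collection, filtered on a frontmatter field of our choosing
const publishedPosts = await getCollection("blog", ({ data }) => !data.draft);

// A single entry, looked up by its slug
const post = await getEntryBySlug("blog", "my-first-post");
```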
Configuring Content Entry Types

Collection entries can be configured to meet specific requirements. In src/content/config.ts, you can define collections and their schemas using Zod, and then register those with the framework, as demonstrated below.
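A config along these lines illustrates the idea; the collection name and schema fields are examples, not a prescription:

```ts
// src/content/config.ts: example schema; the collection and field names are illustrative
import { defineCollection, z } from "astro:content";

const blog = defineCollection({
  schema: z.object({
    title: z.string(),
    // Comes back from the content API as a real Date object, not a string
    pubDate: z.date(),
    draft: z.boolean().default(false),
  }),
});

export const collections = { blog };
```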
This is extremely powerful because now Astro can handle validating our markdown to ensure all the required fields are defined, AND it returns those entries in their target format through the content API. When you used the Astro.glob API, you would get all frontmatter data as strings or numbers, requiring you to parse your data into other standard primitives. With this change, you can now put dates into your frontmatter and get them out as Date objects via the content API. You can remove all your previous validation and remapping code and convert it all to Zod types in your collection config. And instead of having to run linters and tests to find the issues, the Astro runtime will let you know about your collection errors as you're creating them, through your IDE or server runtime.

Content Collection Gotchas

Content collections can only be top-level folders in the src/content directory. This means you can't nest collections. However, you can organize content within a collection using subdirectories, and use the filtering feature of the content API to create sub-selections. The main use case for this would be i18n translations for a collection: you can place the content for each language in its own directory and use the filter function to select it at runtime for display.

The other main "gotcha" is routing. Before, we were leveraging the file-based router to handle rendering our pages. But now, there are no explicit routes defined for these pages. In order to get your pages to render properly, you'll need to leverage Astro's dynamic route features to generate pages from your entries. If you're in static mode (the default), you need to define a getStaticPaths() function on your specified catch-all route. If you're in SSR mode, you'll need to parse the route at runtime and query for the expected data.

Some Notes on Migrating from File-Based Routing

If you had a project using Astro before v2, you probably want to upgrade to using content collections. Astro has a good guide on how to accomplish this in their docs. There are two main gotchas to highlight for you. The first is that layouts no longer need to be explicitly defined in the markdown files. Because you're shifting content to use a specified layout, this property is unnecessary. However, if you leave it, it will cause the layout to be applied to the page, resulting in weird double layouting, so be sure to remove these properties from your frontmatter. The second is that the content API shifts the frontmatter properties onto a new data property on the returned entries. Before, you might have had a line of code like post.frontmatter.pubDate. This now needs to be post.data.pubDate. Also, if this was a stringified date before, you now need to stringify the date to make it behave properly, e.g. post.data.pubDate.toDateString(). Finally, you can remove any custom types you made before, because now you can get those directly from your collection config.

In summary…

Astro Content Collections are a great way to manage your content and websites, especially if they're content-focused and rich. I've put together some code demonstrating all the patterns and techniques described in this post that you can check out here. At This Dot, we love utilizing the right tool for the right job. Astro is increasingly becoming one of our favorites for content site projects. We use it for our open source projects - framework.dev and starter.dev - and are always considering it for additional projects....


Starter.dev: Bootstrap your project with zero configuration!

Starter.dev: Bootstrap your project with zero configuration!

Table of contents
- Why Starter Kits?
- Why Showcases?
- Getting Started
- What Kits Exist Today?
- Collaborate with Us
- What's Next?

We're excited to present you with starter.dev, a set of zero-configuration project kits built with your favorite tools. Each kit is configured with the following so you can focus on building features instead of spending time on configuration:
- State Management
- Data Fetching
- Testing
- Storybook
- Community standardized ESLint and Prettier

We're also launching starter.dev showcases, which utilize the kits and demonstrate how to build at-scale applications that go beyond the basic TodoMVC app, though we love a good TodoMVC!

Why Starter Kits?

This Dot Labs engages with its clients to build modern applications, and many of them are requesting greenfield projects using the latest and greatest technologies. While starting these projects, our architects found that they were repeating a bunch of the same steps each time they needed to start a new one. Most meta frameworks ship with the bare minimum, and don't include features like testing or Storybook out of the box, and configuring these technologies can be time consuming. With this challenge in mind, we sought to create zero-config _starter_ templates to help kick _start_ projects. And thus, starter.dev was born!

Why Showcases?

During starter.dev's development, Minko Gechev from the Angular team approached us about a project to help enhance Angular education tooling. You can learn more about this specific effort in the blog post announcing the GitHub Clone Showcases. Minko's idea was to demonstrate building applications that utilize these key features of the Angular framework:
- Routing
- Forms
- State Management
- API interactions - REST or GraphQL
- Authentication

This idea laid the groundwork for many of the starter kits we created. We've developed several GitHub Clone showcases to help developers understand how to best utilize the kits and build the at-scale applications that accompany our kits.

Getting Started

Getting started with starter.dev (pun intended) is as simple as running a scaffolding script:
- Run npx @this-dot/create-starter to run the scaffolding tool
- Select one of the kits from our library from the CLI
- Name your project
- cd into your project directory and install dependencies using the tool of your choice

After completing these steps, you can start building features in your new project immediately.

What Kits Exist Today?

This Dot is happy to ship starter.dev with the following kits:
- Angular + Apollo Client + Tailwind CSS
- Angular + NgRx + SCSS
- Create React App + RxJS + Styled Components
- Next.js + TanStack Query (formerly React Query) + Tailwind CSS
- Remix + GraphQL + Tailwind CSS
- Vue 3 + Apollo Client + Quasar
- Qwik + GraphQL + Tailwind CSS
- SvelteKit + SASS

Each kit ships with the following out of the box:
- Testing via jest or vitest
- Storybook
- ESLint and Prettier configuration
- State Management
- Data Fetching for either REST or GraphQL
- Some starter components to demonstrate global state management and data fetching

These starter kits don't ship with E2E testing libraries, such as Cypress or Playwright, for now, because these tools come with amazing out-of-the-box configurations and do not require additional setup. However, the showcases use Cypress tests consistently, which you can check out in the showcases repo.
Collaborate with us

Starter.dev began as an internal need, but anyone can benefit from the existence of these kits. While there is a set structure for building out new kits, This Dot welcomes requests for new kits. We'll work with you to determine what the structure of your kit should be and then scaffold out all of the issues needed to complete the work. Our team will help build out new issues, but we encourage the community to jump in and help build as well. This is a great opportunity to collaborate with the community as both a learner and a teacher.

At This Dot, two of our guiding principles are: Getting Better Together and Giving Back to the Community. We believe that starter.dev is a perfect opportunity for us to get better through collaborative efforts with community members, and to help the community through what we hope is a great open source resource.

What's Next?

We want to get this tool into your hands and improve what exists. Tell us what is working for you and what isn't, and we'll do our best to address those issues. Next, we want to expand our library of kits. Tell us what kits you would like to see. We're looking into building out kits for SolidJS and Node backend technologies as part of our next iterations, but we're sure there are other tools people would like to see. Finally, we'll be publishing some additional educational materials around some of the decisions and design patterns we've chosen for each kit and showcase. We're excited to share our process with you....


GitHub Actions for Serverless Framework Deployments

Background

Our team was building a Serverless Framework API for a client that wanted to use the Serverless Dashboard for deployment and monitoring. Based on some challenges from last year, we agreed with the client that using a monorepo tool like Nx would be beneficial moving forward, as we were potentially shipping multiple Serverless APIs and frontend applications. Unfortunately, we discovered several challenges integrating with the Serverless Dashboard, and eventually opted into custom CI/CD with GitHub Actions. We'll cover the challenges we faced and the solution we created to mitigate them.

Serverless Configuration Restrictions

By default, the Serverless Framework does all its configuration via a serverless.yml file. However, the framework officially supports alternative formats, including .json, .js, and .ts. Our team opted into the TypeScript format, as we wanted to set up some validation through type checks for our engineers who were newer to the framework. When we eventually went to configure our CI/CD via the Serverless Dashboard UI, the dashboard itself restricted the file format to just YAML. This was unfortunate, but we were able to quickly revert back to YAML, as our configuration was relatively simple, and we were able to bypass this hurdle.

Prohibitive Project Structures

With our configuration now working, we were able to select the project and launch our first attempt at deploying the app through the dashboard. Immediately, we ran into a build issue: `

What we found was that having our package.json in a parent directory of our serverless app prevented the dashboard CI/CD from being able to appropriately detect and resolve dependencies prior to deployment. We had been deploying using an Nx command, npx nx run api:deploy --stage=dev, which was able to resolve our dependency tree, which looked like:

To resolve this, we thought maybe we could customize the build commands utilized by the dashboard. Unfortunately, the only way to customize these commands is via the package.json of our project. Nx allows for a package.json per app in its structure, but that defeated the purpose of us opting into Nx and made leveraging the tool nearly obsolete.

Moving to GitHub Actions with the Serverless Dashboard

We decided to move all of our CI/CD to GitHub Actions while still proxying the dashboard for deployment credentials and monitoring. In the dashboard docs, we found that you could set a SERVERLESS_ACCESS_KEY and still deploy through the dashboard. It took us a few attempts to understand exactly how to specify this key in our action code, but eventually, we discovered that it had to be set explicitly in the .env file due to the usage of the Nx build system to deploy. Thus the following actions were born:

api-ci.yml `

api-clean.yml `

These actions ran smoothly and allowed us to leverage the dashboard appropriately. All in all, this seemed like a success.

Local Development Problems

The above is a great solution if your team is willing to pay for everyone to have a seat on the dashboard. Unfortunately, our client wanted to avoid the cost of additional seats because the pricing was too high. Why is this a problem? Our configuration looks similar to this (I've highlighted the important lines with a comment):

serverless.ts `

The app and org variables make it so that a valid dashboard login is required. This meant our developers working on the API couldn't do local development, because the client was not paying for dashboard logins.
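The relevant shape of a typed serverless.ts looks roughly like this sketch; the service, runtime, and function names are placeholders, and the point is the app and org fields that tie the project to the dashboard:

```ts
// serverless.ts: a sketch of the relevant shape; names are placeholders
import type { AWS } from "@serverless/typescript";

const serverlessConfiguration: AWS = {
  service: "api",
  app: "my-app", // requires a valid dashboard login to resolve
  org: "my-org", // requires a valid dashboard login to resolve
  provider: {
    name: "aws",
    runtime: "nodejs18.x",
  },
  functions: {
    hello: {
      handler: "src/handlers/hello.handler",
      events: [{ http: { method: "get", path: "hello" } }],
    },
  },
};

module.exports = serverlessConfiguration;
```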
They would get the following error:

Resulting Configuration

At this point, we had to opt to bypass the dashboard entirely in CI/CD. We had to make the following changes to our actions and configuration to get everything 100% working:

serverless.ts
- Remove the app and org fields
- Remove accessing environment secrets via the param option
`

api-ci.yml
- Add all our secrets to GitHub and include them in the scripts
- Add serverless config
`

api-cleanup.yml
- Add serverless config
- Remove secrets
`

Conclusions

The Serverless Dashboard is a great product for monitoring and seamless deployment in simple applications, but it still has a ways to go to support different architectures and setups while being scalable for teams. I hope to see them make the following changes:
- Add support for different configuration file types
- Add better support for custom deployment commands
- Update the framework to not fail on login so local development works regardless of dashboard credentials

The Nx + GitHub Actions setup was a bit unnatural as well, with its reliance on the .env file existing, so we hope the above action code will help someone in the future. That being said, we've been working with this setup on the team, and it's been a very seamless and positive change, as our developers can quickly reference their deploys and already know how to interact with Lambda directly for debugging issues....


Git Strategies for Working on Teams

Background

Every team I've worked on has had different expectations and rules around git. Some had strict guidelines that created developer experience issues. Others had rules so loose that there was no consistency on the team. These are my thoughts on what I've found to be a healthy balance between strict and loose rules on teams. I hope to make suggestions for your team and process that don't sound dogmatic and that ultimately help.

Always Branch and Pull Request to Main

This may seem like a no-brainer to some, but no one should ever be interacting with your repo's main branch directly. All changes to upstream main should be handled via pull requests (PRs), with the exception of the initial repository commit. This will make all the suggestions throughout the article function properly. It should also keep your main branch healthy, assuming you have CI/CD running on PRs. It also enables peer review on work, which is a good habit. I still open PRs on my personal projects to make sure I understand a set of changes in context as well.

Merges to main via Squash and Merge

All git hosting services have a "Squash and Merge" button on PRs, which is an alias for: `

This takes all the commits on the specified branch and reduces them into a single atomic commit that gets inserted into the main branch. This is great for a few reasons:

1. It keeps your git history clean
   1. It omits merge commits
   2. It keeps the merge history fully sequential
2. It creates a single commit describing a bulk of changes, but still makes the history of those changes available via a link to the original PR for full context
3. It helps bisect strategies identify when a change was introduced, and references the PR in which that change was introduced

The following is an example output of the squash and merge strategy:

Individual Branching

I have two rules I suggest for individual branches:

1. Conform to some naming convention the team sets
2. Use a non-destructive strategy for commits and updates once a review has occurred

Rule 1 exists for a variety of reasons, but primarily to avoid naming collisions. Rule 2, on the other hand, exists for a reviewer's sake. All net-new changes should be done via commits once a review has been initiated. Otherwise, it's hard for a reviewer to track what they've already looked at versus what's changed. Be nice to your reviewers. Once that PR is open, you may not use --amend or rebase on existing commits moving forward. Otherwise, how you keep your branch up to date with the main branch is entirely your choice. Merge is typically the easiest if there are conflicts, especially when trying to resolve them, and is usually best for novice developers. Rebase is great if you don't have any merge conflicts or merge commits, and lets you get all the latest changes from the main branch. At the end of the day, do what is best for you. The squash and merge strategy makes it so your branch can be whatever you want it to be. Do you commit every 5 minutes? Fine! Have 15 merge commits from main? That's also fine! All these actions tell a story, but squash and merge to main keeps that history on your branch and PR, and does not impact upstream main.

Long Running Epic Branches

All my rules above are great for single changes applied directly to main. However, some teams work on long-running epic branches to avoid introducing changes to main that are not ready for release. The best way to avoid long-running epic branches is feature flags, but some teams aren't able to use these for one reason or another.
That's OK! For this situation, I recommend the following:

1. Create a base integration branch that is kept up to date with main using the merge strategy
2. Create individual branches off the integration branch and follow my instructions on individual branching from above
3. Squash merge branches back into the integration branch
4. Squash merge the integration branch back into main when ready

I've been on teams that try to do fancy rebase --onto strategies with the integration branch and the individual branches that follow. At the end of the day, this creates a lot of unnecessary git work for teams and wastes time that is better spent working on features, bugs, and tech debt. This is because branches have to be updated in a particular way, and sometimes the conflicts that arise lead to lost changes and duplicated effort and work. Simplify your process and simplify your team's life.

Conclusion

This only covers a few of the most common situations, but these general rules should help teams and individuals work in the way that is best for them while keeping the upstream main branch healthy and easier to debug when critical issues are identified. I hope this helps you and your team on your next project, and allows more time for technical debt and other important work that may have been lost to git-related issues....


Announcing Angular GitHub Clone for starter.dev showcases

The Engineering team at This Dot Labs loves building open-source software, providing resources to the community, and collaborating with our clients and partners. In cooperation with the Angular team, we have developed an implementation of GitHub's user interface to deliver a project to the community that demonstrates the power of Angular while providing a fully featured open-source application for developers to use as a learning tool. We're including this Angular GitHub Clone as part of a new initiative we're working on called "starter.dev GitHub showcases", and we are excited about its future. If you want to jump into the code, visit the GitHub repository, or you can check out the deployed application at https://angular-apollo-tailwind.starter.dev/.

---

Our Goals

Based on the initial projects by Trung Vo (Spotify clone / JIRA clone), we were looking to create an example project that goes beyond the basics of a TodoMVC application and really showcases the power of the technology. Specifically, we wanted to showcase Angular with routing, forms, and global state management.

What we built

For this project, we utilized Angular v13 with Apollo Client and Tailwind CSS to bring a GitHub user interface to life that features:
- User login via GitHub OAuth
- A user dashboard featuring a user's top repositories and gists for quick access
- Repositories, including file tree exploration, file viewing, issue lists with filtering, and pull requests with filtering
- User profiles with near feature parity with the GitHub UI

We also created a Serverless Framework based server to manage the OAuth handshake between the application and GitHub servers for security purposes.

Why Apollo Client?

At This Dot Labs, we love GraphQL, and GitHub has an amazing public GraphQL API for consumption that we were able to leverage to build all the features in this application. Angular Apollo Client provides us with a GraphQL client and global state management out of the box. Check out our Issues Store to see how we were able to leverage Apollo Client to fetch data from GitHub and inject it into a ComponentStore for consumption.

Why TailwindCSS?

Other projects we've seen demonstrate how to extend existing component libraries, and we didn't want to replicate this behavior. Instead, we thought it would be great to demonstrate how to use a CSS utility framework in conjunction with Angular component stylesheets. This afforded us an opportunity to show the Angular community the power of a new tool and further community excitement around a framework-agnostic technology that has reshaped the CSS space.

Why Serverless Framework?

We wanted an easy to use backend ecosystem for any developers wanting to fork and test this application. Additionally, for those looking to deploy this backend, we wanted to find an economical solution. With the serverless-offline plugin, we were able to provide a local-first development environment to developers while giving them the ability to ship the API to their favorite cloud hosting provider that supports serverless functions.

How to use starter.dev showcases

We recommend starting with an exploration of the codebase. To get started, follow these steps: `

In both .env files, you'll see missing values for GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET. To generate these values, please follow GitHub's Creating an OAuth App instructions to generate your personal client and secret that you can then input into your .env files.
Now, you can run yarn start in both directories and point your browser to localhost:4200 to see the app running. From here, you can start exploring and experimenting with project components.

What should I do next?

The project README includes several ideas and features that we've vetted for feasibility. We've given some high-level details about each feature and included how difficult we believe it will be to accomplish. We think these are great features to both improve the app and provide you with an opportunity to experience Angular without having to start your own project or find projects that need contributions. New to Angular and want feedback on your approach? Send us a pull request with your changes, and we'd be happy to review it and provide feedback. If it's the first submission for a particular feature, we'd love to include your change and attribute it back to you.

---

We plan on extending starter.dev showcases to include more Angular stacks and examples to give the community even more resources. If you're interested in contributing to starter.dev showcases or have ideas for new features or showcases, we'd love to hear from you. Our GitHub issues are open for collaboration, and pull requests are always welcome, but not required....


Migrating a classic Express.js to Serverless Framework

Problem

Classic Express.js applications are great for building backends. However, their deployment can be a bit tricky. There are several solutions on the market for making deployment "easier", like Heroku, AWS Elastic Beanstalk, Qovery, and Vercel. However, "easier" means special configurations or higher service costs. In our case, we were trying to deploy an Angular frontend served through CloudFront, and needed a separately deployed backend to manage an OAuth flow. We needed an easy to deploy solution that supported HTTPS and could be automated via CI.

Serverless Framework

The Serverless Framework is a framework for building and deploying applications onto AWS Lambda, and it allowed us to easily migrate and deploy our Express.js server at a low cost with long-term maintainability. This was so simple that it only took us an hour to migrate our existing API and get it deployed so we could start using it in our production environment.

Serverless Init Script

To start this process, we used the Serverless CLI to initialize a new Serverless Express.js project. This is an example of the settings we chose for our application: `

Here's a quick explanation of our choices:

What do you want to make? This prompt offers several possible scaffolding options. In our case, the Express API was the perfect solution since that's what we were migrating.

What do you want to call this project? You should put whatever you want here. It'll name the directory and define the naming schema for the resources you deploy to AWS.

What org do you want to add this service to? This question assumes you are using the serverless.com dashboard for managing your deployments. We're choosing to use GitHub Actions and AWS tooling directly, though, so we've opted out of this option.

Do you want to deploy your project? This will attempt to deploy your application immediately after scaffolding. If you don't have your AWS credentials configured correctly, this will use your default profile. We needed a custom profile configuration since we have several projects on different AWS accounts, so we opted out of the default deploy.

Serverless Init Output

The init script from above outputs the following:
- .gitignore
- handler.js
- package.json
- README.md
- serverless.yml

The key here is the serverless.yml and handler.js files that are outputted.

serverless.yml `

handler.js `

As you can see, this gives us a standard Express server ready to just work out of the box.
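Conceptually, the generated handler just wraps the Express app with the serverless-http package so that Lambda events are translated into Express requests. A TypeScript-flavored sketch of that idea (the scaffolded file itself is plain CommonJS, and the route here is a placeholder):

```ts
// Handler sketch: the real scaffold output is CommonJS; this only illustrates the shape
import serverless from "serverless-http";
import express from "express";

const app = express();

// Placeholder route showing where your existing Express routes plug in
app.get("/hello", (_req, res) => {
  res.json({ message: "Hello from Lambda" });
});

// Lambda entry point referenced from serverless.yml
export const handler = serverless(app);
```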
However, we needed to make some quality of life changes to help us migrate with confidence and to allow us to use our API locally for development.

Quality of Life Improvements

There are several things that Serverless Framework doesn't provide out of the box that we needed to help our development process. Fortunately, there are great plugins we were able to install and configure quickly.

Environment Variables

We need per-environment variables, as our OAuth providers are specific per host domain. Serverless Framework supports .env files out of the box, but it does require you to install the dotenv package and to turn on the useDotenv flag in the serverless.yml.

Babel/TypeScript Support

As you can see in the above handler.js file, we're getting CommonJS instead of modern JavaScript or TypeScript. To get these, you need webpack or some other bundler. serverless-webpack exists if you want full control over your ecosystem, but there is also serverless-bundle, which gives you a set of reasonable defaults on webpack 4 out of the box. We opted into this option to get us started quickly.

Offline Mode

With classic Express servers, you can use a simple node script to get the server up and running to test locally. Serverless wants to be run in the AWS ecosystem, which makes local testing harder. Lucky for us, David Hérault has built and continues to maintain serverless-offline, allowing us to emulate our functions locally before we deploy.

Final Configuration

Given these changes, our serverless.yml file now looks as follows: `

Some important things to note:
- The order of serverless-bundle and serverless-offline in the plugins list is critically important.
- The custom port for serverless-offline can be any unused port. Keep in mind what port your frontend server is using when setting this value for local development.
- We set the profile and stage in our provider configuration. This allowed us to specify the environment settings and the AWS profile credentials to use for our deployment.

With all this set, we're now ready to deploy the basic API.

Deploying the new API

Serverless deployment is very simple. We can run the following command in the project directory: `

This command will deploy the API to AWS and create the necessary resources, including the API Gateway and related Lambdas. The first deploy will take roughly 5 minutes, and each subsequent deploy will only take a minute or two! In its output, you'll receive a bunch of information about the deployment, including the deployed URL, which will look like: `

You can now point your app at this API and start using it.

Next Steps

A few issues we still have to resolve, but that are easily fixed:
- New Lambdas are not deploying with their environment variables, and they have to be set via the AWS console. We're just missing some minor configuration in our serverless.yml.
- Our deploys don't run on merges to main. For this, though, we can just use the official Serverless GitHub Action. Alternatively, we could purchase a license to the Serverless Dashboard, but this option is a bit more expensive, and we're not using all of its features on this project. However, we've used it on other client projects, and it really helped us manage and monitor our deployments.

Conclusion

Given all the above steps, we were able to get our API up and running in a few minutes. And because it is a 1-to-1 replacement for an existing Express server, we were able to port our existing implementation into this new Serverless implementation, deploy it to AWS, and start using it in just a couple of hours. This particular architecture is a great means for bootstrapping a new project, but it does come with some scaling issues for larger projects. As such, we do not recommend a pure Serverless Framework and Express setup for monolithic projects, and instead suggest utilizing some of the amazing capabilities of Serverless Framework with AWS to horizontally scale your application into smaller Lambda functions....


Building a Multi Platform Community Engagement Tool

Intro

We're building a community forum for our consumers to discuss our products here at A Latte Java. This is going to be a new greenfield project that is a companion app to our ecommerce site. Our team has determined that we really need both desktop and mobile presences and need to get an MVP to market in the next 3 months, so we're on a relatively tight timeline. For this, we're starting with just the basic product idea and are having our first round table discussion to think through the requirements and identify options for creating our solution. The only requirement from the business is to create a space where people can safely discuss how they use our products and share their how-to guides.

Mobile & Desktop

When we said we needed both mobile and desktop presences, what are some of the options to get us there? Do we have to build a website and a mobile application? Are there ways to share or reuse code across the different platforms? Hybrid apps and PWAs are a few solutions that can get us to the solution we're seeking. Hybrid apps let us use tools like React Native or Ionic to use web technology we're familiar with and compile it to native app code.

> One of my first thoughts would be to create some kind of a hybrid app so people can comment on the go as they feel like it. Maybe they want to snap a picture of their coffee as they're brewing it in the morning or take a quick video. - Morgan Worrell

On the other hand, we have solutions like PWAs that allow us to leverage our existing website code to generate an app-like version of our website that users can use instead.

> I think what's popular nowadays is building PWAs. This will basically be a web app, but it can act as a kind of native app within the mobile device. That would definitely cut us some time to deliver it more quickly, especially in the beginning. - Chris Trzesniewski

Going purely native requires us to hire specialty developers in Objective-C and Java and maintain 3 different code bases: web, iOS, and Android. But is it worth going down this path?

> The average mobile user installs zero apps per month. - Rob Ocel

Mobile Device Differences

A key problem with supporting multiple devices is the look and feel of the app on different devices. Users on different devices have expectations of how apps should work on their respective device. Forgetting some of these details can even be detrimental to your overall build and leave your app feeling boring or lackluster.

> You can see a picture of the app without seeing the device frame itself and you can go 'that's an Android app' or 'that's an iOS app' and there's a lot of those subtle differences. - Rob Ocel

So how do we keep our app engaging? What can we do to drive traffic into the application and keep users active in the community?

Gamification

Creating feedback-driven interactive features can be the major differentiator. Look at how other companies or products in the same or similar space are doing things. On one hand, you could go the social route like Untappd, where you can keep a rating of all the different drinks you've tried and share with your friends, which then gives you badges and other little gamey features. On the other, you could try to follow the Yelp model, where you become an amateur food critic and others can follow you to get opinions on restaurants to try. You could also go the StackOverflow approach, where different actions get you points in some internal "ranking" system.

Do we really need both then?
For speed, we've decided that a website is the most practical approach. There are certain features of a mobile device, like its constant availability and camera, that make it a must-have for our app. Because we decided on a website approach, though, our app is accessible from a desktop, so it needs to look good there as well. Using CSS responsive design practices, we can easily achieve a site that looks good on both desktop devices and mobile devices.

Post MVP

Eventually, we might want to have a native app to get some of the other awesome features that mobile hardware allows that aren't necessarily available through the web version of the app. This will lead us to an eventual migration, but one we can focus on more intentionally after the initial launch.

> All migrations are effort. It's just whether you're going to spend the effort or not. Sometimes there's more mandatory upfront work and other times there's a lot of hidden work that's sort of labeled optional. - Rob Ocel

Moderation

Our app is allowing arbitrary data from the community and isn't in any way curated. This means we could get some content we don't necessarily want, such as expletives, rude or harmful messaging, or inappropriate images. What strategies could we employ to counter this risk? Are there ways to automate these processes? How can we build these tools and processes in a way that scales with our community growth? Our business doesn't necessarily want to commit dedicated resources full time to this process long-term, so how can we work around this problem?

Basic Version

An early version of the app might do some simple content filtering and moderation. We could have a list of words we don't want to be posted and use a CMS system to keep that list updated. We could then filter out words that are on the list from appearing on the site, either through algorithms that censor those words or by changing the display settings on the site to hide them. We could also build some basic moderation tooling to allow admins to delete inappropriate posts and ban users from the site. These features could be combined with a reporting system where community members can flag certain content as inappropriate. We could also create moderator roles where we empower certain figures in the community to purge bad content or users from our system. This comes with the risk of having moderators who fail to do the role, or who go on a power trip and delete a bunch of content or ban a bunch of users because they can. The system will need some checks and balances and some data security features, like shallow deletes, where content isn't actually purged but is filtered from appearing in the UI.

Negative Side Effects of Moderation

We have to be careful about our implementation of moderation, as it could lead to a subpar community where there is snobbery or other forms of toxicity that we don't want. If we choose an AI or the wrong moderators, this could lead to a downturn in community sentiment. We may want to run experiments with tools like IBM Watson's sentiment analysis or other AIs to see what would happen. We also have to be wary of what happens when our moderation method of choice stops operating at optimal capacity. Do we have a fallback plan that works? Is there any redundancy in our system to prevent outages in moderation tooling?

Positive Moderation

We also want to encourage people to contribute positively.
We could build quick and simple features to award a user points in a commendation category where they prove to be an expert in certain areas. It encourages them to engage more and lets them become a community leader chosen by the community.

Where does the moderation tooling live?

We can build the moderation tooling in its own separate admin application. Alternatively, we could follow the YouTube model where everything lives inside the main user interface. Given our goal to have a single app for users and our desire to leverage the community, it probably makes the most sense for our application to have the tooling live alongside the main application.

> Let's say performance. Yes, there are things like code splitting, but how many additional bytes over the wire are you going to send down in service of an admin portal that maybe 99% of your users will never see? And again, there are ways to mitigate that, but you know, that's another thing to consider: do you really want to ship two completely orthogonal experiences in one bundle?

That being said, we should absolutely evaluate how large this additional tooling is and how memory-costly it is going to be to send it to every user.

Conclusion

I want to thank my guests Rob Ocel, Morgan Worrell, and Chris Trzesniewski with This Dot for joining me on Build IT Better. It was an amazing conversation and knowledge share. This article would not be possible without their time and insight. Thank you....