
This Dot Blog

This Dot provides teams with technical leaders who bring deep knowledge of the web platform. We help teams set new standards, and deliver results predictably.

Tags: Git

Ensuring Accurate Workflow Status in GitHub for Enhanced Visibility

Master the nuances of GitHub workflows with our latest blog post. Discover key strategies to ensure your workflows accurately reflect the true status of tests and tasks, preventing misleading green checks....


Deploying Multiple Apps From a Monorepo to GitHub Pages

Explore deploying multiple front-end applications on GitHub Pages with our guide. Learn how to navigate the challenges of client-side routing and efficiently manage multiple apps in one repository....


OAuth2 for JavaScript Developers

Using GitHub as an example, the post guides JavaScript developers through the OAuth2 process, emphasizing its importance for secure and efficient third-party integrations in web applications....


Mastering Git Rerere: Solving Repetitive Merge Conflicts with Ease

Are you curious to discover one of the hidden powers of Git? Incorporate git rerere into your Git workflow, and say goodbye to the frustration of repetitive merge conflicts....


How to Create a Bot That Sends Slack Messages Using Block Kit and GitHub Actions

Have you ever wanted to get custom notifications in Slack about new interactions in your GitHub repository? If so, then you're in luck. With the help of GitHub Actions and Slack's Block Kit, it is super easy to set up automated workflows that will send custom messages to your Slack channel of choice. In this article, I will guide you through setting up the Slack bot and sending automatic messages using GitHub Actions.

Create a Slack app

First, we need to create a new Slack application. Go to Slack's app page. If you haven't created an app before, you will see an empty page; otherwise, you will see a list of your existing apps. Click the Create an App button. From the modal that shows up, choose the From scratch option. In the next step, we can choose the app's name (e.g. My Awesome Slack App) and pick a workspace that we want to use for testing the app.

After the app is created successfully, we need to configure a couple of additional options. First, we need to configure the OAuth & Permissions section. In the Scopes section, we need to add a proper scope for our bot: click Add an OAuth Scope in the Bot Token Scopes section, and select the incoming-webhook scope. Next, in the OAuth Tokens for Your Workspace section, click Install to Workspace and choose the channel that you want messages to be posted to. Finally, go to the Incoming Webhooks page, and activate the incoming hooks toggle (if it wasn't already activated). Copy the webhook URL (we will need it for our GitHub Action).

Create a GitHub Actions workflow

In this section, we will focus on setting up the GitHub Actions workflow that will post messages on behalf of the app we've just created. You can use any of your existing repositories, or create a new one.

Setting up secrets

In your repository, go to Settings -> Secrets and variables -> Actions and create a New Repository Secret. We will call the secret SLACK_WEBHOOK_URL and paste the URL we've previously copied as its value.

Create a workflow

To actually send a message, we can use the slackapi/slack-github-action GitHub Action. To get started, we need to create a workflow file in the .github/workflows directory. Let's add a .github/workflows/slack-message.yml file to the repository with the following content, and commit the changes to the main branch.
`
In this workflow, we've created a job that uses the slackapi/slack-github-action action and sends a basic message with the action run id. The important thing is that we need to set our webhook URL as an environment variable. This way, the action can use it to send the message to the correct endpoint. We've configured the action so that it can be triggered manually. Let's trigger it by going to Actions -> Send Slack notification; we can run the workflow manually from the top right corner. After running the workflow, we should see our first message in the Slack channel that we configured earlier.

Manually triggering the workflow to send a message is not very useful. However, we now have the basics to create more useful actions.

Automatic message on pull request merge

Let's create an action that will send a notification to Slack about a new contribution to our repository. We will use Slack's Block Kit to construct the message. First, we need to modify our workflow so that instead of being triggered manually, it runs automatically when a pull request to the main branch is merged. This can be configured in the on section of the workflow file:
`
Secondly, let's make sure that we only run the workflow when a pull request is merged, and not e.g. closed without merging.
We can configure that by using an if condition on the job:
`
We've used the repository name (github.repository) as well as the login of the user that created the pull request (github.event.pull_request.user.login), but we could customize the message with as much information as we can find in the pull_request event. If you want to quickly edit and preview the message template, you can use Slack's Block Kit Builder. Now we can create any PR (e.g. add some changes to README.md), and after the PR is merged, we will get a Slack message in the configured channel.

Summary

As I have shown in this article, sending Slack messages automatically using GitHub Actions is quite easy. If you want to see a real-life example, visit the starter.dev project, where we are using slackapi/slack-github-action to get notifications about new contributions (send-slack-notification.yml). If you have any questions, you can always Tweet or DM me at @ktrz. I'm always happy to help!...
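The original code snippets are not included in this preview. To tie the pieces above together, here is a minimal sketch of what such a workflow could look like, assuming webhook-based posting with slackapi/slack-github-action; the channel setup and exact Block Kit payload in the article may differ:

```yaml
name: Send Slack notification

on:
  pull_request:
    types: [closed]
    branches: [main]

jobs:
  notify:
    runs-on: ubuntu-latest
    # Only notify when the PR was actually merged, not just closed.
    if: github.event.pull_request.merged == true
    steps:
      - uses: slackapi/slack-github-action@v1
        with:
          # A Block Kit payload; the exact blocks here are an illustrative assumption.
          payload: |
            {
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "New contribution to *${{ github.repository }}* by ${{ github.event.pull_request.user.login }} :tada:"
                  }
                }
              ]
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
          SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
```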


Tag and Release Your Project with GitHub Actions Workflows

GitHub Actions is a powerful automation tool that enables developers to automate various workflows in their repositories. One common use case is to automate the process of tagging and releasing new versions of a project. This ensures that your project's releases are properly versioned, documented, and published in a streamlined manner. In this blog post, we will walk you through two GitHub Actions workflows that can help you achieve this.

Understanding GitHub Tags and Releases

GitHub tags and releases are essential features that help manage and communicate the progress and milestones of a project. Let's take a closer look at what they are, why they are useful, and how they can be used effectively.

GitHub Tags

A GitHub tag is a specific reference point in a repository's history that marks a significant point of development, such as a release or a specific commit. Tags are typically used to identify specific versions of a project. They are lightweight and do not contain any additional metadata by default. Tags are useful for several reasons:
1. Versioning: Tags allow you to assign meaningful version numbers to your project, making it easier to track and reference specific releases.
2. Stability: By tagging stable versions of your project, you can provide users with a reliable and tested codebase.
3. Collaboration: Tags enable contributors to work on specific versions of the project, ensuring that everyone is on the same page.

GitHub Releases

GitHub releases are a way to package and distribute specific versions of your project to users. A release typically includes the source code, compiled binaries, documentation, and release notes. Releases provide a convenient way for users to access and download specific versions of your project. Releases offer several benefits:
1. Communication: Releases allow you to communicate important information about the changes, improvements, and bug fixes included in a specific version.
2. Distribution: By packaging your project into a release, you make it easier for users to download and use your software.
3. Documentation: Including release notes in a release helps users understand the changes made in each version and any potential compatibility issues.

Effective Use of Tags and Releases

To make the most of GitHub tags and releases, consider the following tips:
1. Semantic Versioning: Follow a consistent versioning scheme, such as semantic versioning (e.g., MAJOR.MINOR.PATCH), to clearly communicate the nature of changes in each release.
2. Release Notes: Provide detailed and concise release notes that highlight the key changes, bug fixes, and new features introduced in each version. This helps users understand the impact of the changes and make informed decisions.
3. Release Automation: Automate the release process using workflows, like the ones described in this blog post, to streamline the creation of tags and releases. This saves time and reduces the chances of human error.

By leveraging GitHub tags and releases effectively, you can enhance collaboration, improve communication, and provide a better experience for the users of your project.

The Goal

The idea is to have a GitHub Action that, once triggered, updates our project's version, creates a new tag for our repository, and pushes the updates to the main branch. Unfortunately, the main branch is a protected branch, and it's not possible to push changes directly to a protected branch through a GitHub Action.
Therefore, we need to go through a pull request on the main branch which, once merged, will apply the version update to the main branch. We had to split the workflow into two different GitHub Actions: one that creates a pull request towards the main branch with the code changes necessary to update the repository's version, and another one that creates a new tag and releases the updated main branch. This way, we have one additional click to perform (the one required to merge the PR), but we also have an intermediate step where we can verify that the version update has been carried out correctly. Let's dive into these two workflows.

Update Version and Create Release PR Workflow
`
Walkthrough:

Step 1: Define the Workflow
The workflow starts by specifying the workflow name and the event that triggers it using the on keyword. In this case, the workflow is triggered manually using the workflow_dispatch event, which means it can be run on demand by a user. Additionally, the workflow accepts an input parameter called "version", which allows the user to specify the type of version bump (major, minor, or patch) when running the workflow.

Step 2: Prepare the Environment
The workflow runs on an Ubuntu environment (ubuntu-latest) using a series of steps under the jobs section. The first job is named "version".

Step 3: Check Out the Code
The workflow starts by checking out the code of the repository using the actions/checkout@v3 action. This step ensures that the workflow has access to the latest codebase before making any modifications.

Step 4: Set Up Node.js
Next, the workflow sets up the Node.js environment using the actions/setup-node@v3 action, specifying Node.js version 16.x. It's essential to use the Node.js version required by your project to avoid compatibility issues.

Step 5: Install Dependencies
To ensure the project's dependencies are up to date, the workflow runs npm install to install the packages defined in the package.json file.

Step 6: Configure Git
To perform the version bump and create a pull request, the workflow configures Git with a user name and email. This allows Git to identify the author when making changes in the repository.

Step 7: Update the Version
The workflow now performs the actual version bump using the npm version command. The new version is determined based on the "version" input provided when running the workflow. The updated version number is stored in an output variable named update_version, which can be referenced later in the workflow.

Step 8: Update the Changelog
After bumping the version, the workflow updates the CHANGELOG.md file to reflect the new release version. It replaces the placeholder "Unreleased" with the updated version using the sed command. [We will return to this step later.]

Step 9: Create a Pull Request
Finally, the workflow creates a pull request using the peter-evans/create-pull-request@v5 action. This action automatically creates a pull request with the changes made in the workflow. The pull request will have a branch name following the pattern "release/<version>", where <version> corresponds to the updated version number.

The outcome of this workflow is a new open PR in the project with the package.json and CHANGELOG.md files changed. [We will talk about the changelog file later.] Now we can check whether the changes are good, approve the PR, and merge it into main.
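The workflow file itself is not reproduced in this preview. Based on the walkthrough above, a minimal sketch could look like this (the step names, the sed expression, and the PR title are assumptions reconstructed from the description, not the article's exact code):

```yaml
name: Update version and create release PR

on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Version bump type (major, minor, patch)'
        required: true
        default: 'patch'

jobs:
  version:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16.x
      - run: npm install
      - name: Configure Git
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
      - name: Update version
        id: update_version
        # npm version bumps package.json and prints the new version (e.g. v1.2.3).
        run: echo "version=$(npm version ${{ github.event.inputs.version }} --no-git-tag-version)" >> "$GITHUB_OUTPUT"
      - name: Update changelog
        # Replace the "Unreleased" heading with the freshly bumped version.
        run: sed -i "s/Unreleased/${{ steps.update_version.outputs.version }}/" CHANGELOG.md
      - name: Create pull request
        uses: peter-evans/create-pull-request@v5
        with:
          branch: release/${{ steps.update_version.outputs.version }}
          title: 'Release: ${{ steps.update_version.outputs.version }}'
          commit-message: 'Release: ${{ steps.update_version.outputs.version }}'
```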
Merging a PR with a title that starts with "Release:" automatically triggers the second workflow.

Tag & Release Workflow
`
Walkthrough:

As you can see, we added a check on the PR title before starting the job once the PR is merged and closed. Only PRs with a title that starts with "Release:" will trigger the workflow. The first three steps are the same as the ones described in the previous workflow: we check out the code from the repository, set up Node, and install dependencies. Let's continue with:

Step 4: Check Formatting
To maintain code quality, we run the npm run format:check command to check whether the code adheres to the specified formatting rules. This step helps catch any formatting issues before proceeding further.

Step 5: Build
The npm run build command is executed in this step to build the project. This step is particularly useful for projects that require compilation or bundling before deployment.

Step 6: Set Up Git
To perform Git operations, such as tagging and pushing changes, we need to configure the Git user's name and email. This step ensures that the correct user information is associated with the Git actions performed later in the workflow.

Step 7: Get the Tag
In this step, we retrieve the current version of the project from the package.json file. The version is then stored in an output variable called get_tag.outputs.version for later use.

Step 8: Tag the Commit
Using the version obtained in the previous step, we create a Git tag for the commit. The tag is annotated with a message indicating the version number. Finally, we push the tag and associated changes to the repository.

Step 9: Create the Changelog Diff
To generate release notes, we extract the relevant changelog entries from the CHANGELOG.md file. This step helps summarize the changes made since the previous release. (We will return to this step later.)

Step 10: Create the Release
Using the actions/create-release action, we create a new release on GitHub. The release is associated with the tag created in the previous step, and the release notes are provided in the body of the release.

Step 11: Delete the release_notes File
Finally, we delete the temporary release_notes.md file created in Step 9. This step helps keep the repository clean and organized.

Once the second workflow has also finished, our project is tagged and the new release has been created.

The "Changelog Steps"

As you can see, the release notes are filled automatically, with a detailed description of what has been added, fixed, or updated in the project. This was made possible thanks to the "changelog steps" in our workflows, but to use them correctly, we need to pay attention to a couple of things while developing our project. The first is the format of the CHANGELOG.md file. This will be our generic template:
`
But the most important aspect, in addition to keeping the file up to date during development by adding the news or improvements we are making to the code under their respective sections, is that every time we start working on a new project release, we begin the paragraph with ## [Unreleased]. This is because, in the first workflow, the step related to the changelog replaces the word "Unreleased" with the newly created project version. In the second workflow, we create a temporary file (which is then deleted in the last step of the workflow) where we extract the part of the changelog file related to the new version, and populate the release notes with it.
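Again, the preview omits the workflow file. A condensed sketch matching the walkthrough might look like the following (the changelog-extraction command and bot identity are illustrative assumptions):

```yaml
name: Tag & Release

on:
  pull_request:
    types: [closed]
    branches: [main]

jobs:
  release:
    runs-on: ubuntu-latest
    # Run only for merged PRs whose title starts with "Release:".
    if: github.event.pull_request.merged == true && startsWith(github.event.pull_request.title, 'Release:')
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16.x
      - run: npm install
      - run: npm run format:check
      - run: npm run build
      - name: Set up Git
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
      - name: Get tag
        id: get_tag
        run: echo "version=v$(node -p "require('./package.json').version")" >> "$GITHUB_OUTPUT"
      - name: Tag the commit
        run: |
          git tag -a ${{ steps.get_tag.outputs.version }} -m "Release ${{ steps.get_tag.outputs.version }}"
          git push origin ${{ steps.get_tag.outputs.version }}
      - name: Create changelog diff
        # Copy the first "## [...]" section of CHANGELOG.md into a temp file.
        run: awk '/^## \[/{n++} n==1' CHANGELOG.md > release_notes.md
      - name: Create release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ steps.get_tag.outputs.version }}
          release_name: ${{ steps.get_tag.outputs.version }}
          body_path: release_notes.md
      - name: Delete release_notes file
        run: rm release_notes.md
```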
Conclusion

Following these tag and release workflows, you can automate the process of creating releases for your GitHub projects. This approach saves time, ensures consistency, and improves collaboration among team members. Remember to customize the workflows to fit your project's specific requirements, and enjoy the benefits of streamlined release management....


How to Add Continuous Benchmarking to Your Projects Using GitHub Actions

Over the lifetime of a project, performance issues may arise from time to time. A lot of the time, these issues don't get detected until they reach production. Adding continuous benchmarking to your project and build pipeline can help you catch these issues before that happens.

What is Continuous Benchmarking?

Benchmarking is the process of measuring the performance of an application. Continuous benchmarking builds on top of this by doing so either on a regular basis, or whenever new code is pushed, so that performance regressions can be identified as soon as they are introduced. Adding continuous benchmarking to your build pipeline can help you effectively catch performance issues before they ever make it to production. Much like with tests, you are still responsible for writing the benchmark logic. But once that's done, integrating it with your build pipeline can be done easily using the github-action-benchmark GitHub Action.

github-action-benchmark

github-action-benchmark allows you to easily integrate your existing benchmarks, written with your benchmark framework of choice, with your build pipeline, with a wide range of configuration options. This action allows you to track the performance of benchmarks against branches in your repository over the history of your project. You can also set thresholds in PR workflows, so that performance regressions automatically prevent PRs from merging. Benchmark results can vary from framework to framework. This action supports a few different frameworks out of the box, and if yours is not supported, it can be extended. For your benchmark results to be consumed, they must be kept in a file named output.txt, and formatted in a way that the action will understand. Each benchmark framework has a different format; this action supports a few of the most popular ones.

Example Benchmark in Rust

First, we need a benchmark to test with, and we're going to use Rust. I am not going to detail everything about setting up Rust projects in general, but a full example can be found here. In this case, there is just a simple Fibonacci number generator.
`
Then, a benchmark for this function can be written like so:
`
In this case, we have two benchmarks that use the fib function with a different number of iterations. The more iterations you execute, the more accurate your results will be. Finally, if your project is already set up to compile with cargo, running the benchmarks should be as simple as running cargo bench. Now that the benchmark itself is set up, it's time to move on to the action.

GitHub Action Setup

The most basic use case of this action is setting it up against your main branch, so it can collect performance data from every merge moving forward. GitHub Actions are configured using YAML files. Let's go over an example configuration that will run benchmarks on a Rust project every time code gets pushed to main, starting with the event trigger.
`
If you aren't familiar with GitHub Actions already, the on key allows us to specify the circumstances under which this workflow will run. In our case, we want it to trigger when pushes happen against the main branch. If we want to, we can add additional triggers and branches as well. But for this example, we're only focusing on push for now.
`
The jobs portion is relatively standard. The code gets checked out from source control, the tooling needed to build the Rust project is installed, the benchmarks are run, and then the results get pushed. For the results-storing step, a GitHub API token is required.
This token is generated automatically when the workflow runs, and is not something that you need to add yourself. The results are then pushed to a special gh-pages branch where the performance data is stored. This branch needs to exist already for this step to work.

Considerations

There are some performance considerations to be aware of when using GitHub Actions to execute benchmarks. Although the specifications of the machines used for different action executions are similar, the runtime performance may vary. GitHub Actions are executed in virtual machines that are hosted on servers. The workloads of other actions on the same servers can affect the runtime performance of your benchmarks. Usually, this is not an issue at all, and results in minimal deviations. It is just something to keep in mind if you expect the results of each of your runs to be extremely accurate. Running benchmarks with more iterations does help, but isn't a magic-bullet solution.

Here are the hardware specifications being used by GitHub Actions at the time of writing this article. This information comes from the GitHub Actions documentation.

Hardware specification for Windows and Linux virtual machines:
- 2-core CPU (x86_64)
- 7 GB of RAM
- 14 GB of SSD space

Hardware specification for macOS virtual machines:
- 3-core CPU (x86_64)
- 14 GB of RAM
- 14 GB of SSD space

If you need more consistent performance out of your runners, you should use self-hosted runners. Setting these up is outside the scope of this article, and deserving of its own.

Conclusion

Continuous benchmarking can help detect performance issues before they cause problems in production, and with GitHub Actions, it is easier than ever to implement. If you want to learn more about GitHub Actions, and even implement your own, check out this JS Marathon video by Chris Trzesniewski....
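The workflow snippets referenced above were stripped from this preview. As a minimal sketch, assuming the cargo tool support of github-action-benchmark and a pre-existing gh-pages branch, the full file might look roughly like this:

```yaml
name: Continuous benchmarking

on:
  push:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Install the stable Rust toolchain.
      - uses: dtolnay/rust-toolchain@stable
      # Run the benchmarks and capture the output for the action to parse.
      - name: Run benchmarks
        run: cargo bench | tee output.txt
      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'cargo'
          output-file-path: output.txt
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Push the collected performance data to the gh-pages branch.
          auto-push: true
```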


A Deep Dive into SvelteKit Routing with Our Starter.dev GitHub Showcase Example

Introduction

SvelteKit is an excellent framework for building web applications of all sizes, with a beautiful development experience and flexible filesystem-based routing. At the heart of SvelteKit is a filesystem-based router: the routes of your app, i.e. the URL paths that users can access, are defined by the directories in your codebase. In this tutorial, we are going to discuss SvelteKit routing with an awesome SvelteKit GitHub showcase built by This Dot Labs. The showcase is built with the SvelteKit starter kit on starter.dev. We are going to tackle:
- Filesystem-based router
- +page.svelte
- +page.server
- +layout.svelte
- +layout.server
- +error.svelte
- Advanced routing
- Rest parameters
- (group) layouts
- Matching

Below is the current routes folder.

Prerequisites

You will need a development environment running Node.js; this tutorial was tested on Node.js version 16.18.0 and npm version 8.19.2.

Filesystem-based router

src/routes is the root route. You can change src/routes to a different directory by editing the project config.
`
Each route directory contains one or more route files, which can be identified by their + prefix.

+page.svelte

A +page.svelte component defines a page of your app. By default, pages are rendered both on the server (SSR) for the initial request, and in the browser (CSR) for subsequent navigation. In the example below, we see how to render a simple login page component:
`
+page.ts

Often, a page will need to load some data before it can be rendered. For this, we add a +page.js (or +page.ts, if you're TypeScript-inclined) module that exports a load function.

+page.server.ts

If your load function can only run on the server (i.e. if it needs to fetch data from a database, or you need to access private environment variables like an API key), then you can rename +page.js to +page.server.js, and change the PageLoad type to PageServerLoad. To pass the user's top repository data and gists to the client-rendered page, we do the following:
`
The page.svelte gets access to the data by using the data variable, which is of type PageServerData.
`
+layout.svelte

There are elements that should be visible on every page, such as top-level navigation or a footer. Instead of repeating them in every +page.svelte, we can put them in layouts. The only requirement is that the component includes a <slot> for the page content. For example, let's add a nav bar:
`
+layout.server.ts

Just like +page.server.ts, your +layout.svelte component can get data from a load function in +layout.server.js; just change the type from PageServerLoad to LayoutServerLoad.
`
+error.svelte

If an error occurs during load, SvelteKit will render a default error page. You can customize this error page on a per-route basis by adding an +error.svelte file. In the showcase, an error.svelte page has been added for the authenticated view in case of an error.
`
Advanced Routing

Rest Parameters

If the number of route segments is unknown, you can use spread operator syntax. This is how GitHub's file viewer is implemented.
`
svelte-kit-scss.starter.dev/thisdot/starter.dev/blob/main/starters/svelte-kit-scss/README.md would result in the following parameters being available to the page:
`
(group) layouts

By default, the layout hierarchy mirrors the route hierarchy. In some cases, that might not be what you want. In the GitHub showcase, we would like an authenticated user to have access to the navigation bar, error page, and user information.
This is done by grouping all the relevant pages that an authenticated user can access. Grouping can also be used to tidy your file tree and "group" similar pages together, for easy navigation and understanding of the project.

Matching

In the GitHub showcase, we needed a page to show issues and pull requests for a single repo. The route src/routes/(authenticated)/[username]/[repo]/[issues] would match /thisdot/starter.dev-github-showcases/issues or /thisdot/starter.dev-github-showcases/pull-requests, but also /thisdot/starter.dev-github-showcases/anything, and we don't want that. You can ensure that route parameters are well-formed by adding a matcher, which accepts only issues or pull-requests and returns true if the parameter is valid, to your params directory.
`
`
...and augmenting your routes:
`
If the pathname doesn't match, SvelteKit will try to match other routes (using the sort order specified below), before eventually returning a 404. Note: matchers run both on the server and in the browser.

Conclusion

In this article, we learned about basic and advanced routing in SvelteKit by using the SvelteKit showcase example. We looked at how to work with SvelteKit's filesystem-based router, rest parameters, and (group) layouts. If you want to learn more about SvelteKit, please check out the SvelteKit and SCSS starter kit and the SvelteKit and SCSS GitHub showcase. All the code for our showcase project is open source. If you want to collaborate with us or have suggestions, we always welcome new contributions. Thanks for reading! If you have any questions, or run into any trouble, feel free to reach out on Twitter....
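The matcher code itself is omitted from this preview. A minimal sketch of such a param matcher, assuming a file like src/params/issue_search_type.ts (the file and parameter names are illustrative choices, not necessarily the showcase's), could be:

```typescript
// src/params/issue_search_type.ts
import type { ParamMatcher } from '@sveltejs/kit';

// Only "issues" and "pull-requests" are valid values for this segment;
// anything else falls through to other routes and eventually a 404.
export const match: ParamMatcher = (param) => {
  return param === 'issues' || param === 'pull-requests';
};
```

A route directory would then reference it with the [param=matcher] syntax, e.g. src/routes/(authenticated)/[username]/[repo]/[type=issue_search_type].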


How to Retry Failed Steps in GitHub Action Workflows

Sometimes things can go wrong in your GitHub Actions workflow step(s), and you may want to retry them. In this article, we'll cover two methods for doing this!

Prerequisites
- Git: this should be installed in your path.
- A GitHub account: we'll need this to use GitHub Actions.

Initial setup

In order to follow along, here are the steps you can take to set up your GitHub Actions workflow:

Initialize your git repository
In your terminal, run git init to create an empty git repository, or skip this step if you already have an existing git repository.

Create a workflow file
GitHub workflow files are usually .yaml/.yml files that contain a series of jobs and steps to be executed by GitHub Actions. These files often reside in .github/workflows. If the directories do not exist, go ahead and create them. Then create a file retry.yml in .github/workflows. For now, the file can contain the following:
`
Testing your workflow
You can test your GitHub Actions workflow by pushing your changes to GitHub and going to the Actions tab of the repository. You can also choose to test locally using Act.

Retrying failed steps

Approach 1: Using the retry-step action
By using the retry-step action, we can retry any failed shell commands. If our step or series of steps are shell commands, we can use the retry-step action to retry them.
> If, however, you'd like to retry a step that is using another action, then the retry-step action will NOT work for you. In that case, you may want to try the alternative approach mentioned below.
Modify your action file to contain the following:
`
Approach 2: Duplicate steps
If you are trying to retry steps that use other actions, the retry-step action may not get the job done. In this case, you can still retry steps by running them conditionally, depending on whether or not a previous attempt failed. GitHub provides us with three main additional attributes for our steps (a sketch combining them follows at the end of this article):
- continue-on-error - setting this to true means that even if the current step fails, the job will continue on to the next one (by default, failure stops a job's run).
- steps.{id}.outcome - where {id} is an id you add to the steps you want to retry. This can be used to tell whether a step failed or not; potential values include 'failure' and 'success'.
- if - allows us to conditionally run a step.
`
Bonus: Retrying multiple steps
If you want to retry multiple steps at once, you can use composite actions to group the steps you want to retry, and then use the duplicate-steps approach mentioned above.

Conclusion

How do you decide which approach to use?
- If you are retrying a step that consists only of shell commands, then you can use the retry-step action.
- If you are retrying a step that needs to use another action, then you can use duplicated steps with conditional running to manually retry the steps....
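Since the preview strips the article's snippets, here is a minimal sketch of the duplicate-steps approach using the attributes listed above (the flaky script path is a stand-in assumption):

```yaml
name: Retry failed steps

on: push

jobs:
  retry-demo:
    runs-on: ubuntu-latest
    steps:
      # First attempt: allow the job to continue even if this step fails.
      - name: Flaky step (attempt 1)
        id: attempt1
        continue-on-error: true
        run: ./scripts/flaky-task.sh   # stand-in for any command or action
      # Second attempt: runs only if the first attempt failed.
      - name: Flaky step (attempt 2)
        id: attempt2
        if: steps.attempt1.outcome == 'failure'
        continue-on-error: true
        run: ./scripts/flaky-task.sh
      # Fail the job explicitly if both attempts failed.
      - name: Fail if retries exhausted
        if: steps.attempt1.outcome == 'failure' && steps.attempt2.outcome == 'failure'
        run: exit 1
```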


Git Reflog: A Guide to Recovering Lost Commits

Losing data can be very frustrating. Sometimes data is lost because of hardware dying, but other times it's lost by mistake. Thankfully, Git has tools that can assist with the latter case at least. In this article, I will demonstrate how one can use the git-reflog tool to recover lost code and commits.

What is Reflog?

Whenever you add data to your local Git repository or perform destructive operations, Git keeps track of all of it using reference logs, also known as reflogs. These log entries contain a SHA-1 hash of the commit associated with the operation, and any references ("refs" for short). Refs themselves are branch names, tags, and symbolic refs like HEAD, which always points to the ref or commit id that's currently checked out. These reflogs can prove very useful in assisting with data recovery against a Git repository if some code is lost in a destructive operation. Reflog records contain data such as the SHA-1 hash that HEAD was pointing to when an operation was performed, and a description of that operation. Here is an example of what a reflog might look like:
`
The first part, 956eb2f, is the commit hash of the currently checked-out commit when this entry was added to the reflog. If a ref currently exists in the repo that points to the commit id, such as the branch-prefix/v2-1-4 branch in this case, then those refs will be printed alongside the commit id in the reflog entry. It should be noted that the actual refs themselves are not always stored in the entry, but are instead inferred by Git from the commit id when dumping the reflog. This means that if we were to remove the branch named branch-prefix/v2-1-4, it would no longer appear in this reflog entry.

There's also a HEAD part. This tells us that HEAD is currently pointing to the commit id in the entry. If we were to navigate to a different branch, such as main, then the HEAD -> section would disappear from that specific entry. The HEAD@{n} section is an index that specifies where HEAD was n moves ago. In this example it is zero, which means that is where HEAD currently is. Finally, what follows is a text description of the operation that was performed. In this case, it was just a commit. Descriptions for supported operations include, but are not limited to, commit, pull, checkout, reset, rebase, and squash.

Basic Usage

Running git reflog with no other arguments, or git reflog show, will give you a list of records that show when the tips of branches and other references in the repository were updated, in the order in which the operations were performed. The output for a fresh repository with an initial commit will look something like this:
`
Now let's create a new branch called feature with git switch -c feature, and then commit some changes. Doing this will add a couple of entries to the reflog: one for the checkout of the branch, and one for committing some changes.
`
This log will continue to grow as we perform more operations that write data to Git.

A Rebase Gone Wrong

Let's do something slightly more complex. We're going to make some changes to main, and then rebase our feature branch on top of it. This is the current history once a few more commits are added:
`
And this is what main looks like:
`
After doing a git rebase main while checked into the feature branch, let's say some merge conflicts got resolved incorrectly, and some code was accidentally lost. A Git log after doing such a rebase might look something like this:
`
Fun fact: if the contents of a commit are not used after a rebase between the tip of the branch and the merge base, Git will discard those commits from the active branch after the rebase is concluded. In this example, I entirely discarded the contents of two commits "by mistake", and this resulted in Git discarding them from the current branch.

Alright. So we lost some code from some commits, and in this case, even the commits themselves. So how do we get them back, given that they're in neither the main branch nor the feature branch?

Reflog to the Rescue

Although our commits are inaccessible on all of our branches, Git did not actually delete them. If we look at the output of git reflog, we will see the following entries detailing all of the changes we've made to the repository up to this point:
`
This can look like a bit much, but we can see that the latest commit on our feature branch before the rebase reads 138afbf HEAD@{6}: commit: here's some more. The SHA-1 hash associated with this entry is still being stored in Git, and we can get back to it by using git-reset. In this case, we can run git reset --hard 138afbf. However, git reset --hard ORIG_HEAD also works. The ORIG_HEAD in the latter command is a special variable that indicates the last position of HEAD since the last drastic operation, which includes, but is not limited to, merging and rebasing. If we run either of those commands, we'll get output saying HEAD is now at 138afbf here's some more, and our Git log for the feature branch should look like the following:
`
Any code that was accidentally removed should now be accessible once again! Now the rebase can be attempted again.

Reflog Pruning and Garbage Collection

One thing to keep in mind is that the reflog is not permanent. It is subject to garbage collection by Git on occasion. In practice, this isn't a big deal, since most uses of reflog target records that were created recently. By default, reflog records are set to expire after 90 days. This duration can be controlled via the gc.reflogExpire key in your Git config. Once reflog records have expired, they become eligible for removal by git-gc. git gc can be invoked manually, but it usually isn't: git pull, git merge, git rebase, and git commit are all examples of commands that will trigger git gc to run behind the scenes. I will abstain from going into detail about git gc, as that would deserve an article of its own, but it's important to know about in the context of git reflog, as it does have an effect on it.

Conclusion

git reflog is a very helpful tool that allows one to recover lost code and commits when used in conjunction with git reset. We learned how to use git reflog to view changes made to a repository since we cloned it, and to undo a bad rebase to recover some lost commits....
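The log listings themselves are omitted from this preview. As a compact sketch of the recovery flow described above (the hash 138afbf comes from the article's example; your own reflog will show different hashes):

```sh
# Inspect the reference log to find the commit HEAD pointed to before the rebase
git reflog

# Example entry (from the article):
#   138afbf HEAD@{6}: commit: here's some more

# Reset the current branch back to that commit...
git reset --hard 138afbf

# ...or equivalently, jump back to where HEAD was before the last
# drastic operation (merge, rebase, etc.)
git reset --hard ORIG_HEAD
```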


Git Strategies For Teams, Another Take

Recently, we published an article about a pragmatic approach to using Git in teams. It outlines a strategy which is easy to implement while safeguarding against common issues when using Git. It is, however, in my opinion, a compromise: it accepts some issues with edge cases as a trade-off for ease of use.

Status Quo

Before jumping into another strategy, let us establish the pros and cons of what we're up against. This will give us a reference for determining whether we did better, or not.

Pro: Squash Merging
One of the stronger arguments is the squash merge. It offers freedom to all developers within the team to develop as they see fit. One might prefer to develop everything at the same time, and push a big commit in the end. Others might like to play it safe and simply commit every 5 minutes, allowing them to roll back or back up changes. The only rule is that at the end of the work, all changes get squashed into a single commit that has to adhere to the team's standards.

Con: You Get a Single Commit
Each piece of work gets a single commit. To circumvent this, one could create multiple PRs and break up the work into bite-sized chunks. This is, undeniably, a good practice. But it can be cumbersome and disruptive when trying to keep pace. In addition, you put the team at risk of running into Git conflicts. Say you're in the middle of building your feature and uncover a bug which requires fixing. As a good scout, you implement the fix and create a separate PR to deliver it to the team. At the same time, you keep the fix in your branch, as you need it for your feature. At the point of squashing, your newly squashed commit conflicts with the stand-alone fix you've delivered to your team. This is nothing that Git can't fix, but it is a nuisance nonetheless.

Con: Review Potential
A good PR shouldn't have too many changes, making it easy to review. In the real world, however, things get messy. Commits can give us some insight into how the complete changeset came to be, but this requires the team to write well-curated commits, which conflicts with the strength we're getting from allowing freedom to commit as one sees fit.

The History Rewriting Controversy

It is good to know that what I'm about to suggest is considered blasphemy by many. Rewriting history is not without its dangers: changes can go missing, and others who have based their work on now-changed history need to deal with conflicts. However, when applied prudently, rewriting history can yield benefits as well. In this context, some advanced Git knowledge is required.

The Alternative

There was a soft hint towards using conventional commits in Dustin's article. Let's go ahead and fully endorse adopting it. The convention is simple enough, and the documentation is exhaustive. Now I hear you think: "but we just concluded that allowing us to commit as we like was a good thing". And you are not wrong. This is where history rewriting comes into play. As you're working, commit as you like. Then, when it's time to put your changes up for review, edit your branch to ensure each change is nicely wrapped and documented in a proper commit. Finally, after getting a thumbs-up on the PR, rebase your changes on top of the branch you're merging into, and do a normal merge. Most Git hosting services offer this workflow for you. While I endorse rewriting history in your own branch, refrain from altering shared branches like "main" or "develop". By sticking to this small rule, you've already negated most disadvantages of rewriting history.
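As a quick sketch of this flow under the rules just stated (branch names are illustrative):

```sh
# Work freely on your own branch, committing as you like
git switch -c feature/my-feature
git commit -m "wip"

# Before review: rewrite your branch into well-curated conventional commits
git rebase -i main

# After approval: replay your curated commits on top of the target branch,
# then perform a normal (non-squash) merge
git rebase main
git switch main
git merge feature/my-feature
```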
Shared Changes

If we look back at our scenario where both main and our feature branch include a fix, we get to the same point where we want to merge. In this case, however, given that you've made the same commit in both branches, Git is clever enough to fix the flow of history and remove the fix from your new changes. The following flow...
...will look like this after merging:

Fixes On Your Own Features

Although this is part of the conventional commit strategy, I feel it deserves some special attention. If you have introduced a new feature in your branch and committed the changes, it can happen that you've also introduced a bug. Your first intuition might be to create a "fix" commit. Instead, consider going back to your feature commit and amending the fix to it. This has two advantages. First of all, the history will be less cluttered: looking back at what changed, it's easy to see which features got introduced and what bugs we found along the way. On top of that, it will prevent confusion for your reviewers. The code presented to others is fixed code; at no time in its history does it ever contain the bug. Your co-workers are not going to have any comments on it.

How To Rewrite Your Branch

Now that we know why to clean up, let's look at strategies to actually do the orchestration. The most obvious route is to keep the changeset you want to present in mind. Doing so, one prevents having to go back and rewrite everything from scratch. As an added bonus, I've found that it helps me better separate concerns.

Complete Wipe
If you like making periodic commits (or follow some other strategy that results in arbitrary commits), chances are you are going to completely wipe all commits (not the changes) in your branch. The simplest way to accomplish this is by doing a soft reset to where you forked from the main branch. This can be achieved by rebasing onto, and then resetting to, main (given main is where you want to merge into). This is a good approach, as you also prepare your branch for being merged back.
`
This can also be accomplished by counting the number of commits and taking that number of steps back from HEAD. For example, if you have made 4 commits in your branch:
`
And lastly, you can do this by knowing where you started off. One can find this by looking at the logs:
`
Using any of these methods will leave you with no commits in your branch, and all your changes in your workspace. From this point, you can start cherry-picking your changes and making well-curated commits.

Interactive Rebase
If you already have somewhat of a structure, interactive rebasing might be a better solution for you. This will allow you to go over each commit and decide how to alter it. The most interesting options are:
- s, squash - this will add the changes from this commit to its parent, followed by allowing you to change the commit message, and thus append the message with the squashed changes.
- e, edit - using this option, the rebase will stop right before the commit gets added to the branch, as if you went back in time and had just done the development work. From this point, you can add files, split the commit into multiple different commits, change the commit message, or do whatever you'd like to do.
- d, drop - for the rare occasion when you simply don't want this commit anymore.
- r, reword - like edit, but you're only offered the option to change the commit message.
To start an interactive rebase, simply run:
`
Conclusion

By embracing history rewriting and dropping squash merging,
a team can produce an even cleaner Git history. This option might not be for everyone, as it requires a little work and Git knowledge. But if done well, it will circumvent some of the drawbacks of our pragmatic approach....


GitHub Actions for Serverless Framework Deployments

Background

Our team was building a Serverless Framework API for a client that wanted to use the Serverless Dashboard for deployment and monitoring. Based on some challenges from last year, we agreed with the client that using a monorepo tool like Nx would be beneficial moving forward, as we were potentially shipping multiple Serverless APIs and frontend applications. Unfortunately, we discovered several challenges integrating with the Serverless Dashboard, and eventually opted into custom CI/CD with GitHub Actions. We'll cover the challenges we faced, and the solution we created to mitigate them.

Serverless Configuration Restrictions

By default, the Serverless Framework does all its configuration via a serverless.yml file. However, the framework officially supports alternative formats, including .json, .js, and .ts. Our team opted into the TypeScript format, as we wanted to set up some validation through type checks for our engineers who were newer to the framework. When we eventually went to configure our CI/CD via the Serverless Dashboard UI, the dashboard itself restricted the file format to just YAML. This was unfortunate, but configuration was relatively simple, so we were able to quickly revert back to YAML and bypass this hurdle.

Prohibitive Project Structures

With our configuration now working, we were able to select the project and launch our first attempt at deploying the app through the dashboard. Immediately, we ran into a build issue:
`
What we found was that having our package.json in a parent directory of our serverless app prevented the dashboard CI/CD from appropriately detecting and resolving dependencies prior to deployment. We had been deploying using an Nx command, npx nx run api:deploy --stage=dev, which was able to resolve our dependency tree. To resolve the issue, we thought maybe we could customize the build commands utilized by the dashboard. Unfortunately, the only way to customize these commands is via the package.json of the project. Nx allows for a package.json per app in their structure, but that defeated the purpose of us opting into Nx and made leveraging the tool nearly obsolete.

Moving to GitHub Actions with the Serverless Dashboard

We decided to move all of our CI/CD to GitHub Actions while still proxying the dashboard for deployment credentials and monitoring. In the dashboard docs, we found that you could set a SERVERLESS_ACCESS_KEY and still deploy through the dashboard. It took us a few attempts to understand exactly how to specify this key in our action code, but eventually we discovered that it had to be set explicitly in the .env file, due to the usage of the Nx build system to deploy. Thus the following actions were born:

api-ci.yml
`
api-clean.yml
`
These actions ran smoothly and allowed us to leverage the dashboard appropriately. All in all, this seemed like a success.

Local Development Problems

The above is a great solution if your team is willing to pay for everyone to have a seat on the dashboard. Unfortunately, our client wanted to avoid the cost of additional seats, because the pricing was too high. Why is this a problem? Our configuration looked similar to this (I've highlighted the important lines with a comment):

serverless.ts
`
The app and org variables make a valid dashboard login mandatory. This meant our developers working on the API couldn't do local development, because the client was not paying for dashboard logins.
They would get an error, because the framework fails when there is no valid dashboard login.

Resulting Configuration

At this point, we had to opt to bypass the dashboard entirely via CI/CD. We had to make the following changes to our actions and configuration to get everything 100% working:

serverless.ts
- Remove the app and org fields
- Remove accessing environment secrets via the param option
`
api-ci.yml
- Add all our secrets to GitHub and include them in the scripts
- Add serverless config
`
api-cleanup.yml
- Add serverless config
- Remove secrets
`
Conclusions

The Serverless Dashboard is a great product for monitoring and seamless deployment in simple applications, but it still has a ways to go in supporting different architectures and setups while being scalable for teams. I hope to see them make the following changes:
- Add support for different configuration file types
- Add better support for custom deployment commands
- Update the framework to not fail on login, so local development works regardless of dashboard credentials

The Nx + GitHub Actions setup was a bit unnatural as well, with its reliance on the .env file existing, so we hope the above action code will help someone in the future. That being said, we've been working with this setup on the team, and it's been a very seamless and positive change, as our developers can quickly reference their deploys and already know how to interact with Lambda directly for debugging issues....
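The action files themselves were stripped from this preview. A rough sketch of what the final, dashboard-free CI action could look like, assuming the Nx deploy target described above (the AWS secret names and trigger are illustrative assumptions, not the team's exact configuration):

```yaml
name: API CI

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16.x
      - run: npm ci
      # The Nx-driven serverless deploy reads credentials from a .env file.
      - name: Create .env
        run: |
          echo "AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }}" >> .env
          echo "AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}" >> .env
      - name: Deploy
        run: npx nx run api:deploy --stage=dev
```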