
Tag and Release Your Project with GitHub Actions Workflows



GitHub Actions is a powerful automation tool that enables developers to automate various workflows in their repositories. One common use case is to automate the process of tagging and releasing new versions of a project. This ensures that your project's releases are properly versioned, documented, and published in a streamlined manner. In this blog post, we will walk you through two GitHub Actions workflows that can help you achieve this.

Understanding GitHub Tags and Releases

GitHub tags and releases are essential features that help manage and communicate the progress and milestones of a project. Let's take a closer look at what they are, why they are useful, and how they can be used effectively.

GitHub Tags

A GitHub tag is a specific reference point in a repository's history that marks a significant point of development, such as a release or a specific commit. Tags are typically used to identify specific versions of a project. They are lightweight and do not contain any additional metadata by default.
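For example, here is how a tag is created and inspected from the command line (the repository and version number below are illustrative, using a throwaway demo repo so the snippet is self-contained):

```shell
# Demo in a throwaway repository (the path and version are illustrative)
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Demo"
git config user.email "demo@example.com"
git commit -q --allow-empty -m "initial commit"

# Annotated tag: stores the tagger, date, and a message alongside the ref
git tag -a v1.2.0 -m "Version 1.2.0"

git tag -l                          # lists: v1.2.0
# Publishing it to the remote would be: git push origin v1.2.0
```

The workflows later in this post run essentially these same commands, with the version computed from package.json.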

Tags are useful for several reasons:

  1. Versioning: Tags allow you to assign meaningful version numbers to your project, making it easier to track and reference specific releases.

  2. Stability: By tagging stable versions of your project, you can provide users with a reliable and tested codebase.

  3. Collaboration: Tags enable contributors to work on specific versions of the project, ensuring that everyone is on the same page.

GitHub Releases

GitHub releases are a way to package and distribute specific versions of your project to users. A release typically includes the source code, compiled binaries, documentation, and release notes. Releases provide a convenient way for users to access and download specific versions of your project.

Releases offer several benefits:

  1. Communication: Releases allow you to communicate important information about the changes, improvements, and bug fixes included in a specific version.

  2. Distribution: By packaging your project into a release, you make it easier for users to download and use your software.

  3. Documentation: Including release notes in a release helps users understand the changes made in each version and any potential compatibility issues.

Effective Use of Tags and Releases

To make the most of GitHub tags and releases, consider the following tips:

  1. Semantic Versioning: Follow a consistent versioning scheme, such as semantic versioning (e.g., MAJOR.MINOR.PATCH), to clearly communicate the nature of changes in each release.

  2. Release Notes: Provide detailed and concise release notes that highlight the key changes, bug fixes, and new features introduced in each version. This helps users understand the impact of the changes and make informed decisions.

  3. Release Automation: Automate the release process using workflows, like the one described in this blog post, to streamline the creation of tags and releases. This saves time and reduces the chances of human error.
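As a toy illustration of the MAJOR.MINOR.PATCH scheme, here is what each kind of bump computes. The `bump` helper is hypothetical, written only to make the rules explicit; in the workflows below, `npm version major|minor|patch` does this for you:

```shell
# Hypothetical helper mirroring what `npm version <part>` computes
bump() {
  part=$1
  maj=${2%%.*}
  rest=${2#*.}
  min=${rest%%.*}
  pat=${rest#*.}
  case "$part" in
    major) echo "$((maj + 1)).0.0" ;;       # breaking change: reset minor and patch
    minor) echo "${maj}.$((min + 1)).0" ;;  # new feature: reset patch
    patch) echo "${maj}.${min}.$((pat + 1))" ;;
  esac
}

bump major 1.2.3   # → 2.0.0
bump minor 1.2.3   # → 1.3.0
bump patch 1.2.3   # → 1.2.4
```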

By leveraging GitHub tags and releases effectively, you can enhance collaboration, improve communication, and provide a better experience for users of your project.

The Goal

The idea is to have a GitHub action that, once triggered, updates our project's version, creates a new tag for our repository, and pushes the updates to the main branch. Unfortunately, the main branch is a protected branch, and it's not possible to directly push changes to a protected branch through a GitHub action. Therefore, we need to go through a pull request on the main branch, which, once merged, will apply the changes due to the version update to the main branch.

We had to split the workflow into two different GitHub actions: one that creates a pull request towards the main branch with the necessary code changes to update the repository's version, and another one that creates a new tag and releases the updated main branch. This way, we have one additional click to perform (the one required to merge the PR), but we also have an intermediate step where we can verify that the version update has been carried out correctly.

Let’s dive into these two workflows.

Update version and create Release's PR Workflow

name: Update version and create Release's PR Workflow

on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Version name'
        required: true
        default: 'minor'
        type: choice
        options:
          - major
          - minor
          - patch

jobs:
  version:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: "16.x"
      - name: Install dependencies
        run: npm install
      - name: Set up Git
        run: |
          git config user.name "Your GitHub User Name"
          git config user.email "Your GitHub User Email"
      - name: Update the version
        id: update_version
        run: |
          echo "version=$(npm version ${{ github.event.inputs.version }} --no-git-tag-version)" >> $GITHUB_OUTPUT
      - name: Update Changelog
        id: update_changelog
        run: |
          sed -i 's/Unreleased/${{ steps.update_version.outputs.version }}/g' CHANGELOG.md
      - name: Create pull request
        id: create_pr
        uses: peter-evans/create-pull-request@v5
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          branch: release/${{ steps.update_version.outputs.version }}
          title: "Release: ${{ steps.update_version.outputs.version }} Pull Request"
          body: "This pull request contains the updated package.json with the new release version"
          base: main

Walkthrough:

Step 1: Define the Workflow

The workflow starts by specifying its name and the event that triggers it using the on keyword. In this case, the workflow is triggered manually via the workflow_dispatch event, which means it can be run on demand by a user. The workflow also accepts an input parameter called "version", which lets the user choose the type of version bump (major, minor, or patch) when running it.


Step 2: Prepare the Environment

The workflow will run on an Ubuntu environment (ubuntu-latest) using a series of steps under the jobs section. The first job is named "version."

Step 3: Checkout the Code

The workflow starts by checking out the code of the repository using the actions/checkout@v3 action. This step ensures that the workflow has access to the latest codebase before making any modifications.

Step 4: Set up Node.js

Next, the workflow sets up the Node.js environment using the actions/setup-node@v3 action and specifying the Node.js version 16.x. It's essential to use the appropriate Node.js version required by your project to avoid compatibility issues.

Step 5: Install Dependencies

To ensure the project's dependencies are up-to-date, the workflow runs npm install to install the necessary packages as defined in the package.json file.

Step 6: Configure Git

To perform the version bump and create a pull request, the workflow configures Git with a user name and email. This allows Git to identify the author of the changes made in the repository.

Step 7: Update the Version

The workflow now performs the actual version bump using the npm version command. The new version is determined by the "version" input provided when running the workflow. The updated version number is stored in an output variable named version on the step with id update_version, so later steps can reference it as steps.update_version.outputs.version.
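Step outputs work by appending key=value lines to the file whose path the runner exposes as $GITHUB_OUTPUT. Outside of Actions you can simulate the mechanism like this (the version string stands in for the real npm version result):

```shell
# Simulate the outputs file; in a real run the Actions runner sets
# $GITHUB_OUTPUT to a per-step file path automatically.
GITHUB_OUTPUT=$(mktemp)

# What the step does: capture the bump result as an output.
# "v1.3.0" stands in for: $(npm version minor --no-git-tag-version)
echo "version=v1.3.0" >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"   # → version=v1.3.0
```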

Step 8: Update the Changelog

After bumping the version, the workflow updates the CHANGELOG.md file to reflect the new release version. It replaces the placeholder "Unreleased" with the updated version using the sed command. (We will return to this step later.)
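You can try the substitution locally on a minimal changelog. The file contents and version are illustrative, with the version hard-coded in place of the workflow's steps.update_version.outputs.version expression:

```shell
cd "$(mktemp -d)"

# A minimal CHANGELOG.md with the placeholder heading
printf '## [Unreleased]\n### Added\n- New login page\n' > CHANGELOG.md

# The same substitution the workflow runs (GNU sed, as on ubuntu-latest)
sed -i 's/Unreleased/v1.3.0/g' CHANGELOG.md

head -n 1 CHANGELOG.md   # → ## [v1.3.0]
```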

Step 9: Create a Pull Request

Finally, the workflow creates a pull request using the peter-evans/create-pull-request@v5 action. This action automatically creates a pull request with the changes made in the workflow. The pull request's branch name follows the pattern "release/<version>", where <version> corresponds to the updated version number.

The outcome of this workflow is a new open PR in the project with the package.json and CHANGELOG.md files changed. (We will talk about the changelog file later.)


Now we can check whether the changes are good, approve the PR, and merge it into main. Merging a PR whose title starts with "Release:" automatically triggers the second workflow.


Tag & Release Workflow

name: Tag and Release Workflow

on:
  pull_request:
    types:
      - closed

jobs:
  release:
    runs-on: ubuntu-latest
    if: github.event.pull_request.merged == true && startsWith(github.event.pull_request.title, 'Release:')
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: "16.x"
      - name: Install dependencies
        run: npm install
      - name: Check formatting
        run: npm run format:check
      - name: Build
        run: npm run build
      - name: Set up Git
        run: |
          git config user.name "Your GitHub User Name"
          git config user.email "Your GitHub User Email"
      - name: Get tag
        id: get_tag
        run: |
          git branch --show-current
          git pull
          echo "version=v$(npm pkg get version | tr -d '\"')" >> $GITHUB_OUTPUT
      - name: Tag the commit
        run: |
          next_version=${{ steps.get_tag.outputs.version }}
          git tag -a "$next_version" -m "Version $next_version"
          git push --follow-tags
      - name: Create changelog diff
        id: changelog_diff
        run: |
          sed -n "/^## \[${{ steps.get_tag.outputs.version }}\]/,/^## \[$(git describe --abbrev=0 --tags $(git rev-list --tags --skip=1 --max-count=1))\]/{/^## \[$(git describe --abbrev=0 --tags $(git rev-list --tags --skip=1 --max-count=1))\]/!p;}" CHANGELOG.md > release_notes.md
      - name: Create release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ steps.get_tag.outputs.version }}
          release_name: Release ${{ steps.get_tag.outputs.version }}
          body_path: ./release_notes.md
          draft: false
          prerelease: false
      - name: Delete release_notes file
        run: rm release_notes.md

Walkthrough:

As you can see, we added a check on the PR before starting the job: the workflow fires when a PR is closed, and the job only runs for PRs that were actually merged and whose title starts with "Release:". The first three steps are the same as in the previous workflow: we check out the code from the repository, set up Node.js, and install dependencies. Let's start with:

Step 4: Check formatting

To maintain code quality, we run the npm run format:check command to check if the code adheres to the specified formatting rules. This step helps catch any formatting issues before proceeding further.

Step 5: Build

The npm run build command is executed in this step to build the project. This step is particularly useful for projects that require compilation or bundling before deployment.

Step 6: Set up Git

To perform Git operations, such as tagging and pushing changes, we need to configure the Git user's name and email. This step ensures that the correct user information is associated with the Git actions performed later in the workflow.

Step 7: Get tag

In this step, we retrieve the current version of the project from the package.json file. The version is stored in an output variable named version on the get_tag step, referenced later as steps.get_tag.outputs.version.
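npm pkg get version prints the version quoted (e.g. "1.2.3"), so the workflow strips the quotes with tr and adds a v prefix. Here is the same pipeline with the npm output stubbed, so the sketch runs without a package.json:

```shell
# Stand-in for: raw=$(npm pkg get version)
raw='"1.2.3"'

# Strip the quotes and prefix "v", exactly as the workflow step does
version="v$(echo "$raw" | tr -d '"')"

echo "$version"   # → v1.2.3
```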

Step 8: Tag the commit

Using the version obtained in the previous step, we create a Git tag for the commit. The tag is annotated with a message indicating the version number. Finally, we push the tag and associated changes to the repository.

Step 9: Create changelog diff

To generate release notes, we extract the relevant changelog entries from the CHANGELOG.md file.

This step helps summarize the changes made since the previous release. (We will return to this step later)
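Conceptually, the sed command prints everything between the heading for the new version and the heading for the previous one. In the workflow the previous tag is looked up with git describe; here it is hard-coded, and the changelog contents are illustrative, to keep the sketch self-contained:

```shell
cd "$(mktemp -d)"

# A sample CHANGELOG.md with two released versions
printf '%s\n' \
  '## [v1.3.0]' \
  '### Added' \
  '- New login page' \
  '## [v1.2.0]' \
  '### Fixed' \
  '- Old bug' > CHANGELOG.md

# Print from the v1.3.0 heading up to, but not including, the v1.2.0 heading
sed -n '/^## \[v1.3.0\]/,/^## \[v1.2.0\]/{/^## \[v1.2.0\]/!p;}' CHANGELOG.md > release_notes.md

cat release_notes.md
# → ## [v1.3.0]
#   ### Added
#   - New login page
```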

Step 10: Create release

Using the actions/create-release action, we create a new release on GitHub. The release is associated with the tag created in the previous step, and the release notes are provided in the body of the release.

Step 11: Delete release_notes file

Finally, we delete the temporary release_notes.md file created in Step 9. This step helps keep the repository clean and organized.

Once the second workflow has finished, our project is tagged and the new release has been created.


The "Changelog Steps"

As you can see, the release notes are automatically filled with a detailed description of what has been added, fixed, or updated in the project.

This was made possible thanks to the "Changelog steps" in our workflows, but to use them correctly, we need to pay attention to a couple of things while developing our project.

Firstly, we need to pay attention to the format of the CHANGELOG.md file: each release gets its own heading, with the changes grouped under sections such as Added, Fixed, and Updated.
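A minimal template in this spirit might look as follows (the section names follow the common Keep a Changelog convention; adapt them to your project):

```markdown
## [Unreleased]
### Added
- ...

### Fixed
- ...

## [v1.2.0]
### Added
- Initial public release
```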

The most important aspect, besides keeping the file up to date during development by adding new features and improvements under their respective sections, is that every time we start working on a new project release, we begin the new paragraph with ## [Unreleased].

This is because, in the first workflow, the changelog step replaces the word "Unreleased" with the newly created project version. In the second workflow, we create a temporary file (deleted in the last step of the workflow) containing the part of the changelog related to the new version, and use it to populate the release notes.

Conclusion

Following these Tag and Release Workflows, you can automate the process of creating releases for your GitHub projects. This workflow saves time, ensures consistency, and improves collaboration among team members. Remember to customize the workflow to fit your project's specific requirements and enjoy the benefits of streamlined release management.


You might also like

Ensuring Accurate Workflow Status in GitHub for Enhanced Visibility cover image

Ensuring Accurate Workflow Status in GitHub for Enhanced Visibility

Introduction In the world of software development, GitHub workflows are crucial for automating CI/CD processes. However, a key challenge emerges when these workflows report a 'success' despite underlying issues, like failed tests. This is especially common in scenarios involving tests (e.g., Cypress) and notifications (e.g., Slack) within the same workflow. This blog post aims to highlight the importance of accurate GitHub workflow statuses for better visibility and team response, and how to ensure your workflows reflect the true outcome of their runs. The Problem with Misleading Workflow Statuses Consider a scenario in a GitHub workflow where end-to-end tests are run using Cypress. ` If these tests fail, but the workflow proceeds to a subsequent step, like sending a notification via Slack, which completes successfully, the entire workflow might still show a green checkmark. This misleading success status suggests everything is functioning as intended, when in fact, there could be significant underlying issues. The core issue is the determination of workflow success. Even if critical steps like testing fail, later steps without errors can override this, resulting in a false sense of security. This not only delays bug detection but can also lead to faulty code advancing in the CI/CD pipeline. It's crucial for the overall workflow status to accurately reflect failures in critical steps to ensure prompt and appropriate responses to issues. Crafting a Solution and Best Practices Ensuring Accurate Status Reporting To address the issue of misleading workflow statuses, it’s essential to configure your GitHub Actions properly. The goal is to ensure that the workflow accurately reflects the success or failure of critical tasks, such as running tests, regardless of the success of subsequent steps. Adjusting the Workflow Conditional Notifications: First, set up notifications to execute conditionally based on the outcome of critical steps. 
This ensures you're alerted of the workflow status without altering the overall result. For example, sending a Slack message if a Cypress test fails: ` Explicit Failure Handling: After configuring conditional notifications, explicitly handle failure scenarios. If a critical step like a Cypress test fails, force the workflow to exit with a failure status. This step is crucial to ensure that the overall workflow reflects the true status: ` Best Practices: Clear Step Separation: Clearly separate and label each step in your workflow for easier readability and troubleshooting. Regular Reviews: Periodically review your workflows to ensure they are aligned with the latest project requirements and best practices. Document Workflow Logic: Maintain documentation for your workflows, especially for complex ones, to aid in understanding and future modifications. By first setting up conditional notifications and then enforcing explicit failure handling, you maintain both alertness to issues and accuracy in workflow status. This approach ensures that failures in critical steps like tests are not overshadowed by subsequent successful steps, keeping the reported status of your workflow true to its actual state. Conclusion Accurate GitHub workflow statuses are vital for a transparent and efficient CI/CD process. By implementing conditional notifications and explicit failure handling, we ensure that our workflows truthfully represent the success or failure of critical tasks. This not only fosters better issue awareness and response but also upholds the integrity of our development practices. Embrace these steps as part of your commitment to maintaining clear and reliable automation processes in your projects. Happy coding!...

Deploying Multiple Apps From a Monorepo to GitHub Pages cover image

Deploying Multiple Apps From a Monorepo to GitHub Pages

Deploying Multiple Apps from a Monorepo to GitHub Pages When it comes to deploying static sites, GitHub Pages is a popular solution thanks to being free and easy to set up in CI. The thing is, however, while it's perfectly suited for hosting a single application, such as a demo of your library, it does not support hosting multiple applications out of the box. It kind of just expects you to have a single app in your repository. It just so happened I ended up with a project that originally had a single app deployed to GitHub Pages via a GitHub Actions workflow, and I had to extend it to be a monorepo with multiple apps. Once a second app was deploy-worthy, I had to figure out how to deploy it to GitHub Pages as well. As I found myself struggling a little bit while figuring out the best way to do it, I decided to write this post to share my experience and hopefully help someone else with a similar problem. The Initial Setup Initially, the project had a GitHub Actions workflow to test, build, and deploy the single app to GitHub Pages. The configuration looked something like this: ` The URL structure for GitHub Pages is [your-organization-name].github.io/[your-repo-name], which means on a merge to the main branch, this action deployed my app to thisdot.github.io/my-repo. Accommodating Multiple Apps As I converted the repository to an Nx monorepo and eventually developed the second application, I needed to deploy it to GitHub Pages too. I researched some options and found a solution to deploy the apps as subdirectories. In the end, the changes to the workflow were not very drastic. As Nx was now building my apps into the dist/apps folder alongside each other, I just had to update the build step to build both apps and the upload step to upload the dist/apps directory instead of the dist/my-app directory. The final workflow at this point looked like this: ` And that seemed to work fine. 
The apps were deployed to thisdot.github.io/my-repo/app1 and thisdot.github.io/my-repo/app2 respectively. But, then I noticed something was off... Addressing Client-Side Routing My apps were both written with React and used react-router-dom. And as GitHub Pages doesn't support client-side routing out of the box, the routing wasn't working properly and I've been getting 404 errors. One of the apps had a workaround using a custom 404.html from spa-github-pages. The script in that file redirects all 404s to the index.html, preserving the path and query string. But that workaround wasn't working anymore at this point, and adding it to the second app didn't work either. The reason why it wasn't working was that the 404.html wasn't in the root directory of the GitHub pages for that repository, as the apps were now deployed to subdirectories. So, the 404.html was not being picked up by the server. I needed to move the 404.html to the root directory of the apps. I moved the 404.html to a shared folder next to the apps and updated the build script to copy it to the dist/apps directory alongside the two app subdirectories: ` So the whole workflow definition now looked like this: ` Another thing to do was to increase the segmentsToKeep variable in the 404.html script to accommodate the app subdirectories: ` Handling Truly Missing URLs At this point, the routing was working fine for the apps and I thought I was done with this ordeal. But then someone mistyped the URL and the page just kept redirecting to itself and I was getting an infinite loop of redirects. It just kept adding ?/&/and/and/and/and/and/and over and over again to the URL. I had to fix this. So I dug into the 404.html page and figured out, that I'll just check the path segment corresponding to the app name and only execute the redirect logic for known app subdirectories. 
So I added a allowedPathSegments array and check if the path segment matches one of the allowed ones: ` At that point, the infinite redirect loop was gone. But the 404 page was still not very helpful. It was just blank. So I also took this opportunity to enhance the 404.html to list the available apps and provide some helpful information to the user in case of a truly missing page. I just had to add a bit of HTML code into the body: ` And a bit of javascript to populate the list of apps and show the content: ` Now, when a user mistypes the URL, they get a helpful message and a list of available apps to choose from. If they use one of the available apps, the routing works as expected. This is the final version of the 404.html page: ` Conclusion Deploying multiple apps from an Nx monorepo to GitHub Pages required some adjustments, both in the GitHub Actions workflow and in handling client-side routing. With these changes, I was able to deploy and manage two apps effectively and I should be able to deploy even more apps in the future if they get added to the monorepo. And, while the changes were not very drastic, it wasn't easy to find information on the topic and figure out what to do. That's why I decided to write this post. and I hope it will help someone else with a similar problem....

Next.js Route Groups cover image

Next.js Route Groups

Starting from Next.js 13.4, Vercel introduced the App Router with a whole set of new and exciting features. The way we organize the routing in our application has changed radically compared to previous versions of Next.js, as well as the definition and usage of Layouts for our pages. In this article, we will focus on what is called Route Groups, their use cases, and how they can help us in our developer experience. Basic introduction to the new App Router In version 13, Next.js introduced a new App Router built on React Server Components, which supports shared layouts, nested routing, loading states, error handling, and more. The App Router works in a new directory named app Creating a page.tsx file inside the app/test-page folder allows you to define what users are going to see when they navigate to /test-page. So folder’s names inside app directory define your app routes. You can also have nested routes like this: In this case, the URL of your page will be /test-page/nested-page By default, components inside app are React Server Components. This is a performance optimization and allows you to easily adopt them, and you can also use Client Components. Layouts In Next.js, a Layout file is a special component that is used to define the common structure and layout of multiple pages in your application. It acts as a wrapper around the content of each page, providing consistent styling, structure, and functionality. The purpose of a Layout file is to encapsulate shared elements such as headers, footers, navigation menus, sidebars, or any other components that should be present on multiple pages. By using a Layout file, you can avoid duplicating code across multiple pages and ensure a consistent user experience throughout your application. To create a Layout file in Next.js, you typically create a separate component file, such as Layout.tsx, and define the desired layout structure within it. 
This component can then be imported and used on individual pages where you want to apply the shared layout. By wrapping your page content with the Layout component, Next.js will render the shared layout around each page, providing a consistent look and feel. This approach simplifies the management of common elements and allows for easy updates or modifications to the layout across multiple pages. Here is an example of how to use a Layout file with the new App Router Route Groups In Next.js, the folders in your app directory usually correspond to URL paths. But if you mark a folder as a Route Group, it won't be included in the URL path of the route. This means you can organize your routes and project files into groups without changing the URL structure. Route groups are helpful for: 1. Organizing routes into groups based on site sections, intent, or teams. 2. Creating nested layouts within the same route segment level: - You can have multiple nested layouts in the same segment, even multiple root layouts. - You can add a layout to only a subset of routes within a common segment. To create a route group inside your app folder you just need to wrap the folder’s name in parenthesis: (folderName) Since route groups won’t change the URL structure your page.tsx content will be shown under the/inside-route-group path. Use cases Route groups are amazing when you want to create multiple layouts inside your page: Or if you want to specify a layout for a specific group of pages You need to be careful because all the examples above can lead you to some misunderstanding. *What is root layout? The top-most layout is called the Root Layout. This required layout is shared across all pages in an application.* As you can see, the route folder in the two examples above always has a well-defined root layout. This means that the specific layouts we have defined for the various groups will not replace the root layout, but will be added to it. 
However, Route Groups also allow us to redefine the root layout. Specifically, they allow us to define different root layouts for different segments of pages. All we have to do is remove the common Root Layout file, create some Route groups, and re-define the different Root layout files for every group in the route folder: In this way, we will have pages with different root layouts, and our paths will once again not be affected by the folder name used in parentheses. Conclusion In conclusion, Next.js route groups offer a powerful and flexible solution for organizing and managing routes in your Next.js applications. By grouping related routes together, you can improve code organization, enhance maintainability, and promote code reusability. Route groups allow for the use of shared layout components, and the customization of root layouts for different segments of pages. With Next.js route groups, you can streamline your development process, create a more intuitive routing structure, and ultimately deliver a better user experience....

Advanced Authentication and Onboarding Workflows with Docusign Extension Apps cover image

Advanced Authentication and Onboarding Workflows with Docusign Extension Apps

Advanced Authentication and Onboarding Workflows with Docusign Extension Apps Docusign Extension Apps are a relatively new feature on the Docusign platform. They act as little apps or plugins that allow building custom steps in Docusign agreement workflows, extending them with custom functionality. Docusign agreement workflows have many built-in steps that you can utilize. With Extension Apps, you can create additional custom steps, enabling you to execute custom logic at any point in the agreement process, from collecting participant information to signing documents. An Extension App is a small service, often running in the cloud, described by the Extension App manifest. The manifest file provides information about the app, including the app's author and support pages, as well as descriptions of extension points used by the app or places where the app can be integrated within an agreement workflow. Most often, these extension points need to interact with an external system to read or write data, which cannot be done anonymously, as all data going through Extension Apps is usually sensitive. Docusign allows authenticating to external systems using the OAuth 2 protocol, and the specifics about the OAuth 2 configuration are also placed in the manifest file. Currently, only OAuth 2 is supported as the authentication scheme for Extension Apps. OAuth 2 is a robust and secure protocol, but not all systems support it. Some systems use alternative authentication schemes, such as the PKCE variant of OAuth 2, or employ different authentication methods (e.g., using secret API keys). In such cases, we need to use a slightly different approach to integrate these systems with Docusign. In this blog post, we'll show you how to do that securely. We will not go too deep into the implementation details of Extension Apps, and we assume a basic familiarity with how they work. Instead, we'll focus on the OAuth 2 part of Extension Apps and how we can extend it. 
Extending the OAuth 2 Flow in Extension Apps For this blog post, we'll integrate with an imaginary task management system called TaskVibe, which offers a REST API to which we authenticate using a secret API key. We aim to develop an extension app that enables Docusign agreement workflows to communicate with TaskVibe, allowing tasks to be read, created, and updated. TaskVibe does not support OAuth 2. We need to ensure that, once the TaskVibe Extension App is connected, the user is prompted to enter their secret API key. We then need to store this API key securely so it can be used for interacting with the TaskVibe API. Of course, the API key can always be stored in the database of the Extension App. Sill, then, the Extension App has a significant responsibility for storing the API key securely. Docusign already has the capability to store secure tokens on its side and can utilize that instead. After all, most Extension Apps are meant to be stateless proxies to external systems. Updating the Manifest To extend OAuth 2, we will need to hook into the OAuth 2 flow by injecting our backend's endpoints into the authorization URL and token URL parts of the manifest. In any other external system that supports OAuth 2, we would be using their OAuth 2 endpoints. In our case, however, we must use our backend endpoints so we can emulate OAuth 2 to Docusign. ` The complete flow will look as follows: In the diagram, we have four actors: the end-user on behalf of whom we are authenticating to TaskVibe, DocuSign, the Extension App, and TaskVibe. We are only in control of the Extension App, and within the Extension App, we need to adhere to the OAuth 2 protocol as expected by Docusign. 1. In the first step, Docusign will invoke the /authorize endpoint of the Extension App and provide the state, client_id, and redirect_uri parameters. Of these three parameters, state and redirect_uri are essential. 2. 
In the /authorize endpoint, the app needs to store state and redirect_uri, as they will be used in the next step. It then needs to display a user-facing form where the user is expected to enter their TaskVibe API key.
3. Once the user submits the form, we take the API key and encode it in a JWT token, as we will send it over the wire back to Docusign in the form of the code query parameter. This is the "custom" part of our implementation. In a typical OAuth 2 flow, the code is generated by the OAuth 2 server, and the client can then use it to request the access token. In our case, we'll utilize the code to pass the API key to Docusign so it can send it back to us in the next step. Since we are still in control of the user session, we redirect the user to the redirect URI provided by Docusign, along with the code and the state as query parameters.
4. The redirect URI on Docusign displays a temporary page to the user and, in the background, attempts to retrieve the access token from our backend by providing the code and state to the /api/token endpoint.
5. The /api/token endpoint takes the code parameter and decodes it to extract the TaskVibe secret API key. It can then verify whether the API key is even valid by making a dummy call to TaskVibe using the API key. If the key is valid, we encode it in a new JWT token and return it as the access token to Docusign.
6. Docusign stores the access token securely on its side and uses it when invoking any of the remaining extension points on the Extension App.

By following the above steps, we ensure that the API key is stored in an encoded format on Docusign, and the Extension App effectively extends the OAuth 2 flow. The app is still stateless and does not have the responsibility of storing any secure information locally. It acts as a pure proxy between Docusign and TaskVibe, as it's meant to be.
Writing the Backend

Most Extension Apps are backend-only, but ours needs a frontend component for collecting the secret API key. A good fit for such an app is Next.js, which allows us to easily set up both the frontend and the backend.

We'll start by implementing the form for entering the secret API key. This form takes the state, client ID, and redirect URI from the enclosing page, which reads these parameters from the URL. The form is relatively simple, with only an input field for the API key, but it can also host any additional onboarding questions. If you ever need to store additional information on Docusign that you want to use implicitly in your Extension App workflow steps, this is a good place to collect it and store it alongside the secret API key on Docusign.

Submitting the form invokes a Next.js server action, which takes the entered API key, the state, and the redirect URI. It then creates a JWT token using jose that contains the API key, and redirects the user to the redirect URI, sending the JWT token in the code query parameter along with the state query parameter. This JWT token can be short-lived, as it's only meant to be a temporary holder of the API key while the authentication flow is running.

After the user is redirected to Docusign, Docusign invokes the /api/token endpoint to obtain the access token. This endpoint will also be invoked occasionally after the authentication flow, before any extension endpoint is invoked, to get the latest access token using a refresh token. Therefore, the endpoint needs to cover two scenarios.

In the first scenario, during the authentication phase, Docusign sends the code and state to the /api/token endpoint. In this scenario, the endpoint must retrieve the value of the code parameter (holding the JWT value), parse the JWT, and extract the API key. Optionally, it can verify the API key's validity by invoking an endpoint on TaskVibe using that key.
Then, it should return an access token and a refresh token back to Docusign. Since we are not using refresh tokens in our case, we can create a new JWT token containing the API key and return it as both the access token and the refresh token to Docusign.

In the second scenario, Docusign sends the most recently obtained refresh token to get a new access token. Again, because we are not using refresh tokens, we can return the retrieved token as both the access token and the refresh token to Docusign.

The /api/token endpoint is implemented as a Next.js route handler. In all the remaining endpoints defined in the manifest file, Docusign will provide the access token as the bearer token. It's up to each endpoint to read this value, parse the JWT, and extract the secret API key.

Conclusion

In conclusion, your Extension App does not need to be limited by the fact that the external system you are integrating with lacks OAuth 2 support or requires additional onboarding. We can safely build upon the existing OAuth 2 protocol and add custom functionality on top of it. This is also the drawback of the approach: it involves custom development, which requires additional work on our part to ensure all cases are covered. Fortunately, the scope of that custom work is limited. All remaining endpoints are implemented in the same manner as in any other OAuth 2 integration, and the app remains a stateless proxy between Docusign and the external system, as all necessary information, such as the secret API key and other onboarding details, is stored as an encoded token on the Docusign side.

We hope this blog post was helpful. Keep an eye out for more Docusign content soon, and if you need help building an Extension App of your own, feel free to reach out. The complete source code for this project is available on StackBlitz.
