
Announcing October JavaScript Marathon - Free, online training!




Join us October 28th, 2020 for our next JavaScript Marathon!

JavaScript Marathon is a full day of free, online courses on React, TypeScript, GraphQL, and more. Come learn about some of the leading web development technologies and concepts!

Stay for one training, or stick around for the whole day! No two sessions will be the same!


React Hooks 201

Featuring Kyle Hill @ 11:00am - 12:00pm EDT

It might have taken a little while, but have you mastered useState, and maybe even (mostly) understand what useEffect is doing at this point? Cool. Let’s level up our React Hooks skills further by diving into some of the other tools in our box. We’ll leverage useContext and useReducer to create an easy-to-implement interaction and data flow that scales along with your application and your team.
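If you'd like a taste of that pattern before the session, here is a minimal sketch (ours, not the presenter's material) of how useReducer and useContext can combine into an app-wide data flow; all names are illustrative:

```tsx
import {
  createContext,
  useContext,
  useReducer,
  type Dispatch,
  type ReactNode,
} from 'react';

type State = { count: number };
type Action = { type: 'increment' } | { type: 'reset' };

// A reducer centralizes state transitions in one testable function.
function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'increment':
      return { count: state.count + 1 };
    case 'reset':
      return { count: 0 };
  }
}

const CounterContext = createContext<{
  state: State;
  dispatch: Dispatch<Action>;
} | null>(null);

// Wrap your app (or a subtree) once; any descendant can then read
// shared state and dispatch actions without prop drilling.
export function CounterProvider({ children }: { children: ReactNode }) {
  const [state, dispatch] = useReducer(reducer, { count: 0 });
  return (
    <CounterContext.Provider value={{ state, dispatch }}>
      {children}
    </CounterContext.Provider>
  );
}

export function useCounter() {
  const ctx = useContext(CounterContext);
  if (!ctx) throw new Error('useCounter must be used inside CounterProvider');
  return ctx;
}
```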


Getting Started with Eleventy

Featuring Kyle Mitofsky @ 12:30pm - 1:30pm EDT

Eleventy is a JavaScript-based Static Site Generator (SSG) designed around performance, accessibility, and simplicity. Take your first steps building out front-end designs, websites, and content with this walkthrough that'll get you set up with your first Eleventy site. You'll also learn some of the basics, along with patterns and best practices.


TypeScript for Beginners

Featuring Dacey Nolan @ 2:00pm - 3:00pm EDT

During this introduction to TypeScript, we will cover what TypeScript is, what benefits it provides, and how you can use it in your projects. Code with me or just follow along as we dive into each of the types and learn how to use them.
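As a quick preview of the ground an introduction like this typically covers, here is a generic sample of basic TypeScript annotations (our illustration, not the presenter's material):

```typescript
// Primitive and array annotations.
let count: number = 3;
let title: string = 'JavaScript Marathon';
let tags: string[] = ['typescript', 'training'];

// An interface describes the shape of an object.
interface Session {
  speaker: string;
  startsAt: Date;
  online: boolean;
}

// Parameter and return types make the contract explicit.
function describe(session: Session): string {
  return `${session.speaker} @ ${session.startsAt.toLocaleTimeString()}`;
}
```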


Automating GraphQL Client Code Generation

Featuring Kevin Wolf @ 3:30pm - 4:30pm EDT

In this talk, we will learn how to streamline front-end development by generating type-safe GraphQL operations for Apollo Client.
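For context on what such automation can look like, this is a hypothetical GraphQL Code Generator configuration; the schema URL and file globs are placeholders, and the session's own setup may differ:

```typescript
// codegen.ts - a sketch of a typed Apollo Client codegen workflow.
import type { CodegenConfig } from '@graphql-codegen/cli';

const config: CodegenConfig = {
  schema: 'https://api.example.com/graphql', // placeholder endpoint
  documents: ['src/**/*.{ts,tsx}'], // where your operations live
  generates: {
    './src/gql/': {
      // The "client" preset emits typed documents that Apollo Client's
      // useQuery/useMutation hooks can infer result types from.
      preset: 'client',
    },
  },
};

export default config;
```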


Building GraphQL Resolvers

Featuring Josh Oppenheim @ 5:00pm - 6:00pm EDT

Have you ever wanted to know how GraphQL resolvers work, or why we use them? In this training, we will talk about the philosophy behind resolvers and how they fit into the greater GraphQL flow. We will take a look at how the graphql-js library implements them, and we will write some of our own. We will discuss some possible requests from consumers and write resolvers to accommodate the needs presented.
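As a primer, here is a small sketch of the resolver signature the session builds on; the context and database shape are invented for illustration:

```typescript
// Resolvers receive (parent, args, context, info). The db on the
// context is a made-up assumption for this sketch.
interface Context {
  db: {
    findUser(id: string): Promise<{ id: string; name: string }>;
    postsByAuthor(authorId: string): Promise<{ id: string; title: string }[]>;
  };
}

export const resolvers = {
  Query: {
    // Top-level field: resolve a user by id from the args.
    user: (_parent: unknown, args: { id: string }, ctx: Context) =>
      ctx.db.findUser(args.id),
  },
  User: {
    // Field-level resolver: "parent" is the User returned above.
    posts: (parent: { id: string }, _args: unknown, ctx: Context) =>
      ctx.db.postsByAuthor(parent.id),
  },
};
```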


Need private trainings for your company? If you would like to learn more about how you can leverage This Dot’s expertise to upskill your team and reinvigorate your developers with new knowledge about the web’s leading development technologies, visit the trainings page.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams rescue projects that have missed their deadlines and keeping strategic digital initiatives on course. Check out our case studies and the clients that trust us with their engineering.

You might also like


Incremental Hydration in Angular

Some time ago, I wrote a post about SSR finally becoming a first-class citizen in Angular. It turns out that the Angular team really treats SSR as a priority, and they have been working tirelessly to make SSR even better. As the previous blog post mentioned, full-page hydration was launched in Angular 16 and made stable in Angular 17, providing a great way to improve your Core Web Vitals. Another feature aimed at helping you improve your INP and other Core Web Vitals was introduced in Angular 17: deferrable views. Using @defer blocks allows you to reduce the initial bundle size and defer the loading of heavy components based on certain triggers, such as the section entering the viewport. Then, in September 2024, the smart folks at Angular figured out that they could build upon those two features, allowing you to mark parts of your application to be server-rendered dehydrated and then hydrated incrementally when needed - hence, incremental hydration.

I'm sure you know what hydration is. In short, the server sends fully formed HTML to the client, ensuring that the user sees meaningful content as quickly as possible. Once JavaScript is loaded on the client side, the framework reconciles the rendered DOM with component logic, event handlers, and state - effectively hydrating the server-rendered content.

But what exactly does "dehydrated" mean, you might ask? Here's what will happen when you mark a part of your application to be incrementally hydrated:

1. Server-Side Rendering (SSR): The content marked for incremental hydration is rendered on the server.
2. Skipped During Client-Side Bootstrapping: The dehydrated content is not initially hydrated or bootstrapped on the client, reducing initial load time.
3. Dehydrated State: The code for the dehydrated components is excluded from the initial client-side bundle, optimizing performance.
4. Hydration Triggers: The application listens for specified hydration conditions (e.g., on interaction, on viewport), defined with a hydrate trigger in the @defer block.
5. On-Demand Hydration: Once the hydration conditions are met, Angular downloads the necessary code and hydrates the components, allowing them to become interactive without layout shifts.

How to Use Incremental Hydration

Thanks to Mark Thompson, who recently hosted a feature showcase on incremental hydration, we can show some code. The first step is to enable incremental hydration in your Angular application's appConfig using the provideClientHydration provider function:

`

Then, you can mark the components you want to be incrementally hydrated using the @defer block with a hydrate trigger:

`

And that's it! You now have a component that will be server-rendered dehydrated and hydrated incrementally when it becomes visible to the user. But what if you want to hydrate the component on interaction or some other trigger? Or maybe you don't want to hydrate the component at all? The same triggers already supported in @defer blocks are available for hydration:

- idle: Hydrate once the browser reaches an idle state.
- viewport: Hydrate once the component enters the viewport.
- interaction: Hydrate once the user interacts with the component through click or keydown triggers.
- hover: Hydrate once the user hovers over the component.
- immediate: Hydrate immediately when the component is rendered.
- timer: Hydrate after a specified time delay.
- when: Hydrate when a provided conditional expression is met.

And on top of that, there's a new trigger available for hydration:

- never: When used, the component will remain static and not hydrated.

The never trigger is handy when you want to exclude a component from hydration altogether, making it a completely static part of the page.

Personally, I'm very excited about this feature and can't wait to try it out. How about you?
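Since the two code snippets referenced above did not survive this excerpt, here is a rough reconstruction of what the steps might look like, based on the public Angular incremental hydration API (withIncrementalHydration and the @defer hydrate syntax); treat it as an illustration rather than the article's exact code:

```typescript
// app.config.ts - enabling incremental hydration (assumes an Angular
// 19+ application already set up for SSR with hydration).
import { ApplicationConfig } from '@angular/core';
import {
  provideClientHydration,
  withIncrementalHydration,
} from '@angular/platform-browser';

export const appConfig: ApplicationConfig = {
  providers: [provideClientHydration(withIncrementalHydration())],
};

// comments-section.component.ts - a hypothetical component whose heavy
// section stays dehydrated until it scrolls into the viewport.
import { Component } from '@angular/core';

@Component({
  selector: 'app-comments-section',
  template: `
    @defer (hydrate on viewport) {
      <section class="comments"><!-- heavy, interactive content --></section>
    }
  `,
})
export class CommentsSectionComponent {}
```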


The Importance of a Scientific Mindset in Software Engineering: Part 2 (Debugging)

In the first part of my series on the importance of a scientific mindset in software engineering, we explored how the principles of the scientific method can help us evaluate sources and make informed decisions. Now, we will focus on how these principles can help us tackle one of the most crucial and challenging tasks in software engineering: debugging.

In software engineering, debugging is often viewed as an art - an intuitive skill honed through experience and trial and error. In a way, it is: just as a GP, even a very evidence-based one, will likely diagnose most of their patients based on experience and intuition rather than researching the scientific literature every time, a software engineer will often rely on experience and intuition to identify and fix common bugs. However, an internist faced with a complex case will likely not be able to rely on intuition alone and must apply the scientific method to diagnose the patient. Similarly, a software engineer can benefit from using the scientific method when faced with a complex bug.

From that perspective, treating engineering challenges like scientific inquiries can transform the way we tackle problems. Rather than resorting to guesswork or gut feelings, we can apply the principles of the scientific method - forming hypotheses, designing controlled experiments, gathering and evaluating evidence - to identify and eliminate bugs systematically. This approach, sometimes referred to as "scientific debugging," reframes debugging from a haphazard process into a structured, disciplined practice. It encourages us to be skeptical, methodical, and transparent in our reasoning. For instance, as Andreas Zeller notes in the book _Why Programs Fail_, the key aspect of scientific debugging is its explicitness: using the scientific method, you make your assumptions and reasoning explicit, which helps you understand those assumptions and often reveals hidden clues that can lead to the root cause of the problem at hand.

Note: If you'd like to read an excerpt from the book, you can find it on Embedded.com.

Scientific Debugging

At its core, scientific debugging applies the principles of the scientific method to the process of finding and fixing software defects. Rather than attempting random fixes or relying on intuition, it encourages engineers to move systematically, guided by data, hypotheses, and controlled experimentation. By treating debugging as a rigorous inquiry, we can reduce guesswork, speed up the resolution process, and ensure that our fixes are based on solid evidence.

Just as a scientist begins with a well-defined research question, a software engineer starts by identifying the specific symptom or error condition. For instance, if our users report inconsistencies in the data they see across different parts of the application, our research question could be: _"Under what conditions does the application display outdated or incorrect user data?"_

From there, we can follow a structured debugging process that mirrors the scientific method:

1. Observe and Define the Problem: First, we need to clearly state the bug's symptoms and the environment in which it occurs. We should isolate whether the issue is deterministic or intermittent and identify any known triggers if possible. Such a structured definition serves as the groundwork for further investigation.
2. Formulate a Hypothesis: A hypothesis in debugging is a testable explanation for the observed behavior. For instance, you might hypothesize: _"The data inconsistency occurs because a caching layer is serving stale data when certain user profiles are updated."_ The key is that this explanation must be falsifiable; if experiments don't support the hypothesis, it must be refined or discarded.
3. Collect Evidence and Data: Evidence often includes logs, system metrics, error messages, and runtime traces. Similar to reviewing primary sources in academic research, treat your raw debugging data as crucial evidence. Evaluating these data points can reveal patterns. In our example, such patterns could be whether the bug correlates with specific caching mechanisms, increased memory usage, or database query latency. During this step, it's essential to approach data critically, just as you would analyze the quality and credibility of sources in a research literature review. Don't forget that even logs can be misleading, incomplete, or even incorrect, so cross-referencing multiple sources is key.
4. Design and Run Experiments: Design minimal, controlled tests to confirm or refute your hypothesis. In our example, you may try disabling or shortening the cache's time-to-live (TTL) to see if more recent data is displayed correctly. By manipulating one variable at a time - such as cache invalidation intervals - you gain clearer insights into causation. Tools such as profilers, debuggers, or specialized test harnesses can help isolate factors and gather precise measurements.
5. Analyze Results and Refine Hypotheses: If the experiment's outcome doesn't align with your hypothesis, treat it as a stepping stone, not a dead end. Adjust your explanation, form a new hypothesis, or consider additional variables (for example, whether certain API calls bypass caching). Each iteration should bring you closer to a better understanding of the bug's root cause. Remember, the goal is not to prove an initial guess right but to arrive at a verifiable explanation.
6. Implement and Verify the Fix: Once you're confident in the identified cause, you can implement the fix. Verification doesn't stop at deployment - re-test under the same conditions and, if possible, beyond them. By confirming the fix in a controlled manner, you ensure that the solution is backed by evidence rather than wishful thinking. Personally, I consider implementing end-to-end tests (e.g., with Playwright) that reproduce the bug and verify the fix to be a crucial part of this step. This both ensures that the bug doesn't reappear in the future due to changes in the codebase and avoids possible imprecisions of manual testing.

Now, we can explore these steps in more detail, highlighting how the scientific method can guide us through the debugging process.

Establishing Clear Debugging Questions (Formulating a Hypothesis)

A hypothesis is a proposed explanation for a phenomenon that can be tested through experimentation. In a debugging context, that phenomenon is the bug or issue you're trying to resolve. Having a clear, falsifiable statement that you can prove or disprove ensures that you stay focused on the real problem rather than jumping haphazardly between possible causes. A properly formulated hypothesis lets you design precise experiments to evaluate whether your explanation holds true. To formulate a hypothesis effectively, you can follow these steps:

1. Clearly Identify the Symptom(s): Before forming any hypothesis, pin down the specific issue users are experiencing. For instance:
- "Users intermittently see outdated profile information after updating their accounts."
- "Some newly created user profiles don't reflect changes in certain parts of the application."
Having a well-defined problem statement keeps your hypothesis focused on the actual issue. Just like a research question in science, the clarity of your symptom definition directly influences the quality of your hypothesis.

2. Draft a Tentative Explanation: Next, convert your symptom into a statement that describes a _possible root cause_, such as:
- "Data inconsistency occurs because the caching layer isn't invalidating or refreshing user data properly when profiles are updated."
- "Stale data is displayed because the cache timeout is too long under certain load conditions."
This step makes your assumption about the root cause explicit. As with the scientific method, your hypothesis should be something you can test and either confirm or refute with data or experimentation.

3. Ensure Falsifiability: A valid hypothesis must be falsifiable - meaning it can be proven _wrong_. You'll struggle to design meaningful experiments if a hypothesis is too vague or broad. For example:
- Not falsifiable: "Occasionally, the application just shows weird data."
- Falsifiable: "Users see stale data when the cache is not invalidated within 30 seconds of profile updates."
Making your hypothesis specific enough to fail a test will pave the way for more precise debugging.

4. Align with Available Evidence: Match your hypothesis to what you already know - logs, stack traces, metrics, and user reports. For example:
- If logs reveal that cache invalidation events aren't firing, form a hypothesis explaining why those events fail or never occur.
- If metrics show that data served from the cache is older than the configured TTL, hypothesize about how or why the TTL is being ignored.
If your current explanation contradicts existing data, refine your hypothesis until it fits.

5. Plan for Controlled Tests: Once you have a testable hypothesis, figure out how you'll attempt to _disprove_ it. This might involve:
- Reproducing the environment: Set up a staging/local system that closely mimics production, for instance with the same cache layer configuration.
- Varying one condition at a time: For example, only adjust cache invalidation policies or TTLs and then observe how data freshness changes.
- Monitoring metrics: In our example, such monitoring would involve tracking user profile updates, cache hits/misses, and response times. These metrics should lead to confirming or rejecting your explanation.
These plans become your blueprint for experiments in further debugging stages.

Collecting and Evaluating Evidence

After formulating a clear, testable hypothesis, the next crucial step is to gather data that can either support or refute it. This mirrors how scientists collect observations in a literature review or initial experiments.

1. Identify "Primary Sources" (Logs, Stack Traces, Code History):
- Logs and Stack Traces: These are your direct pieces of evidence - treat them like raw experimental data. For instance, look closely at timestamps, caching-related events (e.g., invalidation triggers), and any error messages related to stale reads.
- Code History: Look for related changes in your source control, e.g., using Git bisect. In our example, we would look for changes to caching mechanisms or references to cache libraries in commits, which could pinpoint when the inconsistency was introduced. Sometimes, reverting a commit that altered cache settings helps confirm whether the bug originated there.

2. Corroborate with "Secondary Sources" (Documentation, Q&A Forums):
- Documentation: Check official docs for known behavior or configuration details that might differ from your assumptions.
- Community Knowledge: Similar issues reported on GitHub or StackOverflow may reveal known pitfalls in a library you're using.

3. Assess Data Quality and Relevance:
- Look for Patterns: For instance, does stale data appear only after certain update frequencies or at specific times of day?
- Check Environmental Factors: For instance, does the bug happen only with particular deployment setups, container configurations, or memory constraints?
- Watch Out for Biases: Avoid seeking only the data that confirms your hypothesis. Look for contradictory logs or metrics that might point to other root causes.

By treating logs, stack traces, and code history as primary data - akin to raw experimental results - you keep your hypothesis grounded in real-world system behavior. This evidence-first approach reduces guesswork and guides more precise experiments.

Designing and Running Experiments

With a hypothesis in hand and evidence gathered, it's time to test it through controlled experiments - much like scientists isolate variables to verify or debunk an explanation.

1. Set Up a Reproducible Environment:
- Testing Environments: Replicate production conditions as closely as possible. In our example, that would involve ensuring the same caching configuration, library versions, and relevant data sets are in place.
- Version Control Branches: Use a dedicated branch to experiment with different settings or configurations, e.g., cache invalidation strategies. This streamlines reverting changes if needed.

2. Control Variables One at a Time:
- For instance, if you suspect data inconsistency is tied to cache invalidation events, first adjust only the invalidation timeout and re-test.
- Or, if concurrency could be a factor (e.g., multiple requests updating user data simultaneously), test different concurrency levels to see if stale data issues become more pronounced.

3. Measure and Record Outcomes:
- Automated Tests: Tests provide a great way to formalize and verify your assumptions. For instance, you could develop tests that intentionally update user profiles and check if the displayed data matches the latest state (see the sketch after this list).
- Monitoring Tools: Monitor relevant metrics before, during, and after each experiment. In our example, we might want to track cache hit rates, TTL durations, and query times.
- Repeat Trials: Consistency across multiple runs boosts confidence in your findings.

4. Validate Against a Baseline:
- If baseline tests show normal behavior but your experimental changes manifest the bug, you've isolated the variable causing the issue - e.g., the baseline shows that data is consistently fresh under normal caching conditions, but your experimental changes cause stale data.
- Conversely, if your change eliminates the buggy behavior, it supports your hypothesis - e.g., that the cache configuration was the root cause.

Each experiment outcome is a data point supporting or contradicting your hypothesis. Over time, these data points guide you toward the true cause.
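To make the "Automated Tests" point above concrete, here is a minimal sketch of such an experiment written as a Playwright end-to-end test; the routes, labels, and test ID are assumptions invented for the stale-profile example, not part of the original article:

```typescript
import { test, expect } from '@playwright/test';

// A hypothetical end-to-end experiment for the stale-data example:
// update a profile, then check that a different view reflects the change.
test('profile updates are visible immediately after saving', async ({ page }) => {
  await page.goto('/settings/profile');

  // A unique value per run defeats any accidentally cached assertion.
  const newName = `Ada ${Date.now()}`;
  await page.getByLabel('Display name').fill(newName);
  await page.getByRole('button', { name: 'Save' }).click();

  // Re-visit another part of the app that renders the same user data.
  await page.goto('/dashboard');
  await expect(page.getByTestId('current-user-name')).toHaveText(newName);
});
```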
Analyzing Results and Iterating

In scientific debugging, an unexpected result isn't a failure - it's valuable feedback that brings you closer to the right explanation.

1. Compare Outcomes to the Hypothesis. For instance:
- Did user data stay consistent after you reduced the cache TTL or fixed invalidation logic?
- Did logs show caching events firing as expected, or did they reveal unexpected errors?
- Are there only partial improvements that suggest multiple overlapping issues?

2. Incorporate Unexpected Observations:
- Sometimes, debugging uncovers side effects - e.g., performance bottlenecks exposed by more frequent cache invalidations. Note these for future work.
- If your hypothesis is disproven, revise it. For example, the cache may only be part of the problem, and a separate load balancer setting also needs attention.

3. Avoid Confirmation Bias:
- Don't dismiss contrary data. For instance, if you see evidence that updates are fresh in some modules but stale in others, you may have found a more nuanced root cause (e.g., partial cache invalidation).
- Consider other credible explanations if your teammates propose them. Test those with the same rigor.

4. Decide If You Need More Data:
- If results aren't conclusive, add deeper instrumentation or enable debug modes to capture more detailed logs.
- For production-only issues, implement distributed tracing or sampling logs to diagnose real-world usage patterns.

5. Document Each Iteration:
- Record the results of each experiment, including any unexpected findings or new hypotheses that arise.

Through iterative experimentation and analysis, each cycle refines your understanding. By letting evidence shape your hypothesis, you ensure that your final conclusion aligns with reality.

Implementing and Verifying the Fix

Once you've identified the likely culprit - say, a misconfigured or missing cache invalidation policy - the next step is to implement a fix and verify its resilience.

1. Implement the Change:
- Scoped Changes: Adjust just the component pinpointed in your experiments. Avoid large-scale refactoring that might introduce other issues.
- Code Reviews: Peer reviews can catch overlooked logic gaps or confirm that your changes align with best practices.

2. Regression Testing:
- Re-run the same experiments that initially exposed the issue. In our stale data example, confirm that the data remains fresh under various conditions.
- Conduct broader tests - like integration or end-to-end tests - to ensure no new bugs are introduced.

3. Monitoring in Production:
- Even with positive test results, real-world scenarios can differ. Monitor logs and metrics (e.g., cache hit rates, user error reports) closely post-deployment.
- If the buggy behavior reappears, revisit your hypothesis or consider additional factors, such as unpredicted user behavior.

4. Benchmarking and Performance Checks (If Relevant):
- When making changes that affect the frequency of certain processes - such as how often a cache is refreshed - be sure to measure the performance impact. Verify you meet any latency or resource usage requirements.
- Keep an eye on the trade-offs: for instance, more frequent cache invalidations might solve stale data but could also raise system load.

By systematically verifying your fix - similar to confirming experimental results in research - you ensure that you've addressed the true cause and maintained overall software stability.

Documenting the Debugging Process

Good science relies on transparency, and so does effective debugging. Thorough documentation guarantees your findings are reproducible and valuable to future team members.

1. Record Your Hypothesis and Experiments:
- Keep a concise log of your main hypothesis, the tests you performed, and the outcomes.
- A simple markdown file within the repo can capture critical insights without being cumbersome.

2. Highlight Key Evidence and Observations:
- Note the logs or metrics that were most instrumental - e.g., seeing repeated stale cache hits 10 minutes after updates.
- Document any edge cases discovered along the way.

3. List Follow-Up Actions or Potential Risks:
- If you discover additional issues - like memory spikes from more frequent invalidation - note them for future sprints.
- Identify parts of the code that might need deeper testing or refactoring to prevent similar issues.

4. Share with Your Team:
- Publish your debugging report on an internal wiki or ticket system. A well-documented troubleshooting narrative helps educate other developers.
- Encouraging open discussion of the debugging process fosters a culture of continuous learning and collaboration.

By paralleling scientific publication practices in your documentation, you establish a knowledge base to guide future debugging efforts and accelerate collective problem-solving.

Conclusion

Debugging can be as much a rigorous, methodical exercise as an art shaped by intuition and experience. By adopting the principles of scientific inquiry - forming hypotheses, designing controlled experiments, gathering evidence, and transparently documenting your process - you make your debugging approach both systematic and repeatable. The explicitness and structure of scientific debugging offer several benefits:

- Better Root-Cause Discovery: Structured, hypothesis-driven debugging sheds light on the _true_ underlying factors causing defects rather than simply masking symptoms.
- Informed Decisions: Data and evidence lead the way, minimizing guesswork and reducing the chance of reintroducing similar issues.
- Knowledge Sharing: As in scientific research, detailed documentation of methods and outcomes helps others learn from your process and fosters a collaborative culture.

Ultimately, whether you are diagnosing an intermittent crash or chasing elusive performance bottlenecks, scientific debugging brings clarity and objectivity to your workflow. By aligning your debugging practices with the scientific method, you build confidence in your solutions and empower your team to tackle complex software challenges with precision and reliability.

But most importantly, do not get discouraged by the number of rigorous steps outlined above or by the fact that you won't always manage to follow them all religiously. Debugging is a complex and often frustrating process, and it's okay to rely on your intuition and experience when needed. Feel free to adapt the debugging process to your needs and constraints, and as long as you keep the scientific mindset at heart, you'll be on the right track.


Announcing July JavaScript Marathon - Free, online training!

Join us July 22nd, 2020 for our next JavaScript Marathon! JavaScript Marathon is a full day of free, online courses on Angular, React, Vue, RxJS, and Web Performance. Come learn about some of the leading web development technologies and concepts! Stay for one training, or stick around for the whole day! No two sessions will be the same!

Featuring Shawn Wang @ 11:00am - 12:00pm EDT
In this session we will learn how to build a fullstack serverless React + GraphQL app from scratch with authentication, storage, and multiplayer realtime collaboration, all atop infinitely scalable AWS components, with AWS Amplify! It's never been this easy to go from idea to prototype, and each piece will be livecoded in front of your very eyes!

Featuring Michael Hladky @ 12:30pm - 1:30pm EDT
The async pipe is boring! Understand the guts of Angular's change detection and why zone.js is your biggest enemy. Learn the tricks of template bindings, component rendering, and where you pay the biggest price. As a cutting-edge demo, you will understand how to analyze blocking UIs over flame charts and how to avoid them. In the end, you will be able to get zone-less performance even in zone-full Angular applications!

Featuring Nathan Walker @ 2:00pm - 3:00pm EDT
During this introduction to NativeScript, you’ll get a brief overview of what NativeScript is and how it works. You’ll also learn how to create a TypeScript, Angular, Vue, and React based app, + so much more!

Featuring Cecelia Martinez @ 3:30pm - 4:30pm EDT
Looking to add testing to your skill set or just feel more confident pushing to production? In this beginner-level talk, we will walk through the process of installing, configuring, and writing a critical-path test using Cypress. Written in JavaScript and built on the popular Mocha and Chai libraries, the free and open-source Cypress Test Runner gets you up to speed with end-to-end testing fast. We will also cover general testing strategies for beginners, including how to decide what to test and how to ensure your test suite is effective.

Featuring Jesse Tomchak @ 5:00pm - 6:00pm EDT
Setting up user authorization and authentication can be a minefield of security practices, token verification, valid callback URLs, salt hashes, and more. Now take all those struggles and sprinkle them over serverless functions, when all we want to do is get past the login page to our actual application! We'll walk through setting up secure OAuth with AWS Lambda functions, covering common pitfalls, so that you can get back to the fun part of your project.

Tune in next month for another full day of JavaScript Marathon! Need private trainings for your company? If you would like to learn more about how you can leverage This Dot’s expertise to upskill your team and reinvigorate your developers with new knowledge about the web’s leading development technologies, visit the trainings page.


Advanced Authentication and Onboarding Workflows with Docusign Extension Apps

Docusign Extension Apps are a relatively new feature on the Docusign platform. They act as little apps or plugins that allow building custom steps in Docusign agreement workflows, extending them with custom functionality. Docusign agreement workflows have many built-in steps that you can utilize. With Extension Apps, you can create additional custom steps, enabling you to execute custom logic at any point in the agreement process, from collecting participant information to signing documents.

An Extension App is a small service, often running in the cloud, described by the Extension App manifest. The manifest file provides information about the app, including the app's author and support pages, as well as descriptions of extension points used by the app or places where the app can be integrated within an agreement workflow. Most often, these extension points need to interact with an external system to read or write data, which cannot be done anonymously, as all data going through Extension Apps is usually sensitive. Docusign allows authenticating to external systems using the OAuth 2 protocol, and the specifics of the OAuth 2 configuration are also placed in the manifest file. Currently, only OAuth 2 is supported as the authentication scheme for Extension Apps.

OAuth 2 is a robust and secure protocol, but not all systems support it. Some systems use alternative authentication schemes, such as the PKCE variant of OAuth 2, or employ different authentication methods (e.g., using secret API keys). In such cases, we need to use a slightly different approach to integrate these systems with Docusign. In this blog post, we'll show you how to do that securely. We will not go too deep into the implementation details of Extension Apps, and we assume a basic familiarity with how they work. Instead, we'll focus on the OAuth 2 part of Extension Apps and how we can extend it.

Extending the OAuth 2 Flow in Extension Apps

For this blog post, we'll integrate with an imaginary task management system called TaskVibe, which offers a REST API to which we authenticate using a secret API key. We aim to develop an Extension App that enables Docusign agreement workflows to communicate with TaskVibe, allowing tasks to be read, created, and updated. TaskVibe does not support OAuth 2.

We need to ensure that, once the TaskVibe Extension App is connected, the user is prompted to enter their secret API key. We then need to store this API key securely so it can be used for interacting with the TaskVibe API. Of course, the API key can always be stored in the database of the Extension App. Still, the Extension App would then carry a significant responsibility for storing the API key securely. Docusign already has the capability to store secure tokens on its side, and we can utilize that instead. After all, most Extension Apps are meant to be stateless proxies to external systems.

Updating the Manifest

To extend OAuth 2, we will need to hook into the OAuth 2 flow by injecting our backend's endpoints into the authorization URL and token URL parts of the manifest. For any other external system that supports OAuth 2, we would be using their OAuth 2 endpoints. In our case, however, we must use our backend endpoints so we can emulate OAuth 2 to Docusign.

`

The complete flow will look as follows. In the diagram, we have four actors: the end-user on behalf of whom we are authenticating to TaskVibe, Docusign, the Extension App, and TaskVibe. We are only in control of the Extension App, and within the Extension App, we need to adhere to the OAuth 2 protocol as expected by Docusign.

1. In the first step, Docusign will invoke the /authorize endpoint of the Extension App and provide the state, client_id, and redirect_uri parameters. Of these three parameters, state and redirect_uri are essential.
2. In the /authorize endpoint, the app needs to store state and redirect_uri, as they will be used in the next step. It then needs to display a user-facing form where the user is expected to enter their TaskVibe API key.
3. Once the user submits the form, we take the API key and encode it in a JWT token, as we will send it over the wire back to Docusign in the form of the code query parameter. This is the "custom" part of our implementation. In a typical OAuth 2 flow, the code is generated by the OAuth 2 server, and the client can then use it to request the access token. In our case, we'll utilize the code to pass the API key to Docusign so it can send it back to us in the next step. Since we are still in control of the user session, we redirect the user to the redirect URI provided by Docusign, along with the code and the state as query parameters.
4. The redirect URI on Docusign will display a temporary page to the user and, in the background, attempt to retrieve the access token from our backend by providing the code and state to the /api/token endpoint.
5. The /api/token endpoint takes the code parameter and decodes it to extract the TaskVibe API secret key. It can then verify whether the API key is even valid by making a dummy call to TaskVibe using the API key. If the key is valid, we encode it in a new JWT token and return it as the access token to Docusign.
6. Docusign stores the access token securely on its side and uses it when invoking any of the remaining extension points on the Extension App.

By following the above steps, we ensure that the API key is stored in an encoded format on Docusign, and the Extension App effectively extends the OAuth 2 flow. The app is still stateless and does not have the responsibility of storing any secure information locally. It acts as a pure proxy between Docusign and TaskVibe, as it's meant to be.

Writing the Backend

Most Extension Apps are backend-only, but ours needs a frontend component for collecting the secret API key. A good fit for such an app is Next.js, which allows us to easily set up both the frontend and the backend.

We'll start by implementing the form for entering the secret API key. This form takes the state, client ID, and redirect URI from the enclosing page, which takes these parameters from the URL. The form is relatively simple, with only an input field for the API key. However, it can also be used for any additional onboarding questions. If you ever need to store additional information on Docusign that you want to use implicitly in your Extension App workflow steps, this is a good place to store it alongside the API secret key on Docusign.

`

Submitting the form invokes a server action on Next.js, which takes the entered API key, the state, and the redirect URI. It then creates a JWT token using Jose that contains the API key and redirects the user to the redirect URI, sending the JWT token in the code query parameter, along with the state query parameter. This JWT token can be short-lived, as it's only meant to be a temporary holder of the API key while the authentication flow is running. This is the server action:

`

After the user is redirected to Docusign, Docusign will then invoke the /api/token endpoint to obtain the access token. This endpoint will also be invoked occasionally after the authentication flow, before any extension endpoint is invoked, to get the latest access token using a refresh token. Therefore, the endpoint needs to cover two scenarios.

In the first scenario, during the authentication phase, Docusign will send the code and state to the /api/token endpoint. In this scenario, the endpoint must retrieve the value of the code parameter (storing the JWT value), parse the JWT, and extract the API key. Optionally, it can verify the API key's validity by invoking an endpoint on TaskVibe using that key. Then, it should return an access token and a refresh token back to Docusign. Since we are not using refresh tokens in our case, we can create a new JWT token containing the API key and return it as both the access token and the refresh token to Docusign.

In the second scenario, Docusign will send the most recently obtained refresh token to get a new access token. Again, because we are not using refresh tokens, we can return both the retrieved access token and the refresh token to Docusign. The /api/token endpoint is implemented as a Next.js route handler (see the sketch below):

`

In all the remaining endpoints defined in the manifest file, Docusign will provide the access token as the bearer token. It's up to each endpoint to then read this value, parse the JWT, and extract the secret API key.

Conclusion

In conclusion, your Extension App does not need to be limited by the fact that the external system you are integrating with lacks OAuth 2 support or requires additional onboarding. We can safely build upon the existing OAuth 2 protocol and add custom functionality on top of it. This is also a drawback of the approach - it involves custom development, which requires additional work on our part to ensure all cases are covered. Fortunately, the scope of the Extension App does not grow significantly. All remaining endpoints are implemented in the same manner as in any other OAuth 2 system, and the app remains a stateless proxy between Docusign and the external system, as all necessary information, such as the secret API key and other onboarding details, is stored as an encoded token on the Docusign side.

We hope this blog post was helpful. Keep an eye out for more Docusign content soon, and if you need help building an Extension App of your own, feel free to reach out. The complete source code for this project is available on StackBlitz.
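To give the route handler described above some shape, here is a rough, hypothetical sketch of the /api/token endpoint in a Next.js App Router project using the jose library. It is not the article's actual code (that lives on StackBlitz); the payload shape, environment variable, and error handling are assumptions made for illustration:

```typescript
// app/api/token/route.ts - a sketch of the token endpoint; everything
// here is illustrative, not the article's original implementation.
import { NextRequest, NextResponse } from 'next/server';
import { SignJWT, jwtVerify } from 'jose';

const secret = new TextEncoder().encode(process.env.JWT_SECRET!);

// Wrap the API key in a JWT that Docusign will store and send back as
// the bearer token on subsequent extension calls.
async function issueToken(apiKey: string): Promise<string> {
  return new SignJWT({ apiKey })
    .setProtectedHeader({ alg: 'HS256' })
    .setIssuedAt()
    .sign(secret);
}

export async function POST(req: NextRequest) {
  const form = await req.formData();
  const grantType = form.get('grant_type');

  // Scenario 1: authorization code exchange - "code" is the JWT minted
  // in the /authorize step and carries the TaskVibe API key.
  // Scenario 2: refresh token exchange - we simply re-issue the token.
  const raw =
    grantType === 'refresh_token'
      ? String(form.get('refresh_token'))
      : String(form.get('code'));

  try {
    const { payload } = await jwtVerify(raw, secret);
    const apiKey = payload.apiKey as string;
    // Optionally verify the key against TaskVibe here before answering.
    const accessToken = await issueToken(apiKey);
    return NextResponse.json({
      access_token: accessToken,
      refresh_token: accessToken, // no real refresh flow in this design
      token_type: 'Bearer',
    });
  } catch {
    return NextResponse.json({ error: 'invalid_grant' }, { status: 400 });
  }
}
```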
