
Angular Signals for Simpler State Management and DOM Performance

In this episode of the Modern Web Podcast, host Rob Ocel is joined by Adam Rackis, Danny Thompson, and guest Braydon Coyer, Senior Front-End Developer at LogicGate, to talk about using Angular Signals for improved state management and DOM performance. Braydon explains how Signals simplify Angular development and offer better readability and efficiency compared to traditional approaches like RxJS. The conversation also touches on hiring in the AI era, discussing challenges around take-home tests and live coding, and how AI tools like ChatGPT are changing the interview process.

Chapters

  • 00:00 - Introduction
  • 00:57 - The Angular Renaissance
  • 02:24 - Signals in Angular
  • 03:27 - Transitioning to Signals
  • 04:19 - Signals in Utility Development
  • 05:09 - RxJS and Signals
  • 07:52 - Signals vs Other State Management Solutions
  • 09:34 - Testing Signals
  • 10:29 - Control Flow and Standalone Components in Angular
  • 12:02 - Angular's Evolution and Accessibility
  • 13:28 - Angular’s Framework Governance
  • 17:10 - Hiring in the Age of AI
  • 19:15 - Pair Programming and Real-Time Problem Solving
  • 22:24 - The Role of AI in Interviews
  • 27:58 - Wrapping Up

Follow Braydon Coyer

  • Twitter: https://x.com/BraydonCoyer
  • LinkedIn: https://www.linkedin.com/in/braydon-coyer/
  • GitHub: https://github.com/braydoncoyer

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams rescue projects that have missed their deadlines and keeping strategic digital initiatives on course. Check out our case studies and the clients that trust us with their engineering.

You might also like


Exploring Angular Forms: A New Alternative with Signals

In the world of Angular, forms are essential for user interaction, whether you're crafting a simple login page or a more complex user profile interface. Angular traditionally offers two primary approaches: template-driven forms and reactive forms. In my previous series on Angular Reactive Forms, I explored how to harness the power of reactive forms to manage complex logic, create dynamic forms, and build custom form controls. A new tool for managing reactivity - signals - was introduced in version 16 of Angular and has been a focus of the Angular maintainers ever since, becoming stable with version 17. Signals allow you to handle state changes declaratively, offering an exciting alternative that combines the simplicity of template-driven forms with the robust reactivity of reactive forms. This article will examine how signals can add reactivity to both simple and complex forms in Angular.

Recap: Angular Forms Approaches

Before diving into enhancing template-driven forms with signals, let's quickly recap Angular's traditional forms approaches:

1. Template-Driven Forms: Defined directly in the HTML template using directives like ngModel, these forms are easy to set up and are ideal for simple forms. However, they may not provide the fine-grained control required for more complex scenarios. Here's a minimal example of a template-driven form: ` `

2. Reactive Forms: Managed programmatically in the component class using Angular's FormGroup, FormControl, and FormArray classes, reactive forms offer granular control over form state and validation. This approach is well-suited for complex forms, as my previous articles on Angular Reactive Forms discussed. And here's a minimal example of a reactive form: ` `

Introducing Signals as a New Way to Handle Form Reactivity

With the release of Angular 16, signals have emerged as a new way to manage reactivity. Signals provide a declarative approach to state management, making your code more predictable and easier to understand. When applied to forms, signals can enhance the simplicity of template-driven forms while offering the reactivity and control typically associated with reactive forms. Let's explore how signals can be used in both simple and complex form scenarios.

Example 1: A Simple Template-Driven Form with Signals

Consider a basic login form. Typically, this would be implemented using template-driven forms like this: ` `

This approach works well for simple forms, but by introducing signals, we can keep the simplicity while adding reactive capabilities: ` `

In this example, the form fields are defined as signals, allowing for reactive updates whenever the form state changes. The formValue signal provides a computed value that reflects the current state of the form. This approach offers a more declarative way to manage form state and reactivity, combining the simplicity of template-driven forms with the power of signals.

You may be tempted to define the form directly as an object inside a single signal. While that approach may seem more concise, typing into the individual fields does not dispatch reactivity updates, which is usually a deal breaker. Here's an example StackBlitz with a component suffering from such an issue:

Therefore, if you'd like to react to changes in the form fields, it's better to define each field as a separate signal; doing so ensures that changes to individual fields trigger reactivity updates correctly.
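Since the original code blocks were lost in extraction, here is a minimal sketch of what a signals-based login form like Example 1 might look like. The selector, handler names, and template are assumptions, not the article's original code:

```ts
// login-form.component.ts - a minimal sketch of the elided Example 1 code,
// assuming Angular 17+; names beyond email/password/formValue are invented.
import { Component, computed, signal } from '@angular/core';
import { FormsModule } from '@angular/forms';

@Component({
  selector: 'app-login-form',
  standalone: true,
  imports: [FormsModule],
  template: `
    <form (ngSubmit)="onSubmit()">
      <!-- Read the signal with [ngModel]; write back on every change. -->
      <input name="email" [ngModel]="email()" (ngModelChange)="email.set($event)" />
      <input
        name="password"
        type="password"
        [ngModel]="password()"
        (ngModelChange)="password.set($event)"
      />
      <button type="submit">Log in</button>
    </form>
  `,
})
export class LoginFormComponent {
  // Each field is its own signal, so edits trigger fine-grained updates.
  email = signal('');
  password = signal('');

  // Recomputed automatically whenever either field changes.
  formValue = computed(() => ({
    email: this.email(),
    password: this.password(),
  }));

  onSubmit() {
    console.log(this.formValue());
  }
}
```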
Example 2: A Complex Form with Signals

You may see little benefit in using signals for simple forms like the login form above, but they truly shine when handling more complex forms. Let's explore a more intricate scenario: a user profile form that includes fields like firstName, lastName, email, phoneNumbers, and address. The phoneNumbers field is dynamic, allowing users to add or remove phone numbers as needed. Here's how this form might be defined using signals: ` `

Notice that the phoneNumbers field is defined as a signal of an array of signals. This structure allows us to track changes to individual phone numbers and update the form state reactively. The addPhoneNumber and removePhoneNumber methods update the phoneNumbers signal array, triggering reactivity updates in the form. ` `

In the template, we use the phoneNumbers signal array to dynamically render the phone number input fields. The addPhoneNumber and removePhoneNumber methods allow users to reactively add or remove phone numbers, updating the form state. Note the usage of the track function, which is necessary to ensure that changes to the phoneNumbers array are tracked correctly when rendering the list. Here's a StackBlitz demo of the complex form example for you to play around with:

Validating Forms with Signals

Validation is critical to any form, ensuring that user input meets the required criteria before submission. With signals, validation can be handled in a reactive and declarative manner. In the complex form example above, we've implemented a computed signal called formValid, which checks whether all fields meet specific validation criteria. The validation logic can easily be customized to accommodate different rules, such as checking for valid email formats or ensuring that all required fields are filled out. Using signals for validation allows you to create more maintainable and testable code, as the validation rules are clearly defined and react automatically to changes in form fields. The logic can even be abstracted into a separate utility to make it reusable across different forms.

In the complex form example, the formValid signal ensures that all required fields are filled and validates the format of the email and phone numbers. This approach to validation is fairly simplistic, though, and isn't tightly connected to the actual form fields. While it will work for many use cases, in some cases you might want to wait until explicit "signal forms" support is added to Angular. Tim Deschryver has started implementing some abstractions around signal forms, including validation, and wrote an article about it. Let's see if something like this gets added to Angular in the future.
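As a rough reconstruction of the ideas above, here is a sketch combining a dynamic phoneNumbers signal array with a computed formValid signal. Only the names mentioned in the article (phoneNumbers, addPhoneNumber, removePhoneNumber, formValid) come from it; the fields shown and the validation rules are assumptions:

```ts
// user-profile.component.ts - a sketch of Example 2 plus signal validation.
import { Component, computed, signal, WritableSignal } from '@angular/core';

@Component({
  selector: 'app-user-profile',
  standalone: true,
  template: `<!-- template with inputs bound to the signals, omitted here -->`,
})
export class UserProfileComponent {
  firstName = signal('');
  email = signal('');
  // A signal holding an array of signals keeps each number individually reactive.
  phoneNumbers = signal<WritableSignal<string>[]>([signal('')]);

  addPhoneNumber() {
    this.phoneNumbers.update((numbers) => [...numbers, signal('')]);
  }

  removePhoneNumber(index: number) {
    this.phoneNumbers.update((numbers) => numbers.filter((_, i) => i !== index));
  }

  // Declarative validation: recomputes whenever any referenced signal changes.
  formValid = computed(() => {
    const emailOk = /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(this.email());
    const phonesOk = this.phoneNumbers().every((phone) => phone().trim().length > 0);
    return this.firstName().trim().length > 0 && emailOk && phonesOk;
  });
}
```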
Why Use Signals in Angular Forms?

The adoption of signals in Angular provides a powerful new way to manage form state and reactivity. Signals offer a flexible, declarative approach that can simplify complex form handling by combining the strengths of template-driven and reactive forms. Here are some key benefits of using signals in Angular forms:

1. Declarative State Management: Signals allow you to define form fields and computed values declaratively, making your code more predictable and easier to understand.
2. Reactivity: Signals provide reactive updates to form fields, ensuring that changes to the form state trigger updates automatically.
3. Granular Control: Signals allow you to define form fields at a granular level, enabling fine-grained control over form state and validation.
4. Dynamic Forms: Signals can be used to create dynamic forms with fields that can be added or removed on the fly, providing a flexible way to handle complex form scenarios.
5. Simplicity: Signals can offer a simpler, more concise way to manage form state than traditional reactive forms, making complex forms easier to build and maintain.

Conclusion

In my previous articles, we explored the powerful features of Angular reactive forms, from dynamic form construction to custom form controls. With the introduction of signals, Angular developers have a new tool that merges the simplicity of template-driven forms with the reactivity of reactive forms. While many use cases still warrant reactive forms, signals provide a fresh, powerful alternative for managing form state in Angular applications that favor a more straightforward, declarative approach. As Angular continues to evolve, experimenting with these new features will help you build more maintainable, performant applications. Happy coding!...


Angular 18 Announced: Zoneless Change Detection and More

Angular 18 has officially landed, and yet again the Angular team has proven that they are listening to the community and are committed to continuing the framework's renaissance. The release polishes existing features, addresses common developer requests, and introduces experimental zoneless change detection. Let's examine the major updates and enhancements that Angular 18 offers.

1. Zoneless Angular

One of the most exciting features of Angular 18 is the introduction of experimental support for zoneless change detection. Historically, Zone.js has been responsible for triggering Angular's change detection. However, this approach has downsides, especially regarding performance and debugging complexity. The Angular team has been working towards making Zone.js optional, and Angular 18 marks a significant milestone in this journey.

Key Features of Zoneless Angular

1. Hybrid Change Detection: In Angular 18, a new hybrid change detection mode is enabled by default. This mode allows Angular to listen to changes from signals and other notifications regarding changes that occur either inside or outside an Angular zone. That effectively means you can write (library) code that works regardless of whether Zone.js is being used, paving the way to fully zoneless apps without compromising backward compatibility. For new applications, Angular enables zone coalescing by default, which reduces the number of change detection cycles and improves performance. For existing applications, you can enable zone coalescing by configuring your NgZone provider in bootstrapApplication: ` `

2. Experimental API for Zoneless Mode: Angular 18 introduces an experimental API to disable Zone.js entirely. This API allows developers to run applications in a fully zoneless mode, paving the way for improved performance and simpler debugging. Zoneless change detection requires an entirely new scheduler, which relies on notifications from the Angular core APIs, such as ChangeDetectorRef.markForCheck (called automatically by the AsyncPipe), ComponentRef.setInput, signal updates, host listener updates, or attaching a view that was marked dirty.

3. Improved Composability and Interoperability: Zoneless change detection enhances composability for micro-frontends and interoperability with other frameworks. It also offers faster initial render and runtime, smaller bundle sizes, more readable stack traces, and simpler debugging.

How to Try Zoneless Angular

To experiment with zoneless change detection, you can add the provideExperimentalZonelessChangeDetection provider to your application bootstrap: ` `

After adding the provider, remove Zone.js from your polyfills in angular.json. You can read more about the experimental zoneless change detection in the official documentation. By the way, angular.dev is now considered the official home for Angular developers!
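The bootstrap snippet above was lost in extraction; here is a minimal sketch of what it might look like, assuming the standard standalone-app layout (file and component names are assumptions, and the API is experimental in v18):

```ts
// main.ts - sketch of opting into zoneless change detection in Angular 18.
import { bootstrapApplication } from '@angular/platform-browser';
import { provideExperimentalZonelessChangeDetection } from '@angular/core';
import { AppComponent } from './app/app.component';

bootstrapApplication(AppComponent, {
  providers: [
    // Replaces Zone.js-driven change detection with the zoneless scheduler.
    provideExperimentalZonelessChangeDetection(),
  ],
}).catch((err) => console.error(err));
```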
2. Server-side Rendering and Hydration Enhancements

A feature I'm particularly excited about is the set of improvements to Angular's server-side rendering (SSR) and hydration in terms of developer experience and debugging:

1. Enhanced DevTools Support: Angular DevTools now includes overlays and detailed error breakdowns to help visualize and debug hydration issues directly in the browser. This refreshing focus on developer experience shows that the Angular team wants to make the framework more approachable and user-friendly.
2. Hydration Support for Angular Material: All Angular Material components now support client hydration and are no longer skipped, which enhances performance and user experience.
3. Event Replay: Available in developer preview, event replay captures user interactions during SSR and replays them once the application is hydrated, ensuring a seamless user experience before complete hydration. It is powered by the same library as Google Search.
4. i18n Hydration Support: Up to v17, Angular skipped hydration for components with i18n blocks. From v18, hydration support for i18n blocks is in developer preview, allowing developers to use client hydration in internationalized applications.

3. Stable Material Design 3

After introducing experimental support for Material Design 3 in Angular 17, Angular 18 now includes stable support. The key features of Material Design 3 in Angular 18 include:

1. Simplified Theme Styles: Based on CSS variables, the new theming styles offer more granular customization and a flexible API for applying color variants to components.
2. Theming Generation Schematics: Using the Ng CLI, you can generate Material 3 themes.
3. Sass APIs: New Sass APIs allow developers to read colors, typography, and other properties from the Material 3 theme, making it easier to create custom components.

How to use Material Design 3 in Angular 18

To use Material Design 3 in Angular 18, you can define a theme in your application's styles.scss file using the mat.define-theme function: ` `

Or generate a Material 3 theme using the Ng CLI: ` `

You can then apply the theme to your application using the mat.theme mixin: ` `

Head to the official guide for more details. You'll also notice they have refreshed the docs with the new themes and further documentation.

4. Signal-Based APIs

The path to fully signal-based components includes new signal inputs, model inputs, and signal query APIs. We already wrote about them when they were in developer preview in v17, but they have been further refined in v18. These APIs offer a type-safe, reactive way to manage state changes and interactions within components:

1. Signal Input API: Signal inputs allow values to be bound from parent to child components. Those values are exposed using a signal and can change during the component's life cycle.
2. Model Input API: Model inputs are a special input type that enables a component to propagate new values back to the parent component. That allows developers to keep the parent component in sync with the child component via two-way binding.
3. Signal Query API: This was a particularly requested feature from the community. The signal query APIs work the same way as ViewChild and ContentChild under the hood, but they return signals, providing more predictable timing and type safety.

5. Fallback Content For ng-content

Another much-requested feature from the community: Angular 18 lets developers define fallback content that ng-content renders when no content is projected into a component. This feature is particularly useful for creating reusable components with default content. Here's an example of the new ng-content fallback. Using the following component ` ` like this ` ` will render Howdy World.
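The two snippets above were elided, so here is a sketch of the idea; the component and selector names are hypothetical, but the fallback syntax matches the v18 feature described above:

```ts
// greeting.component.ts - sketch of ng-content fallback content.
import { Component } from '@angular/core';

@Component({
  selector: 'app-greeting',
  standalone: true,
  // The text inside <ng-content> renders only when nothing is projected.
  template: `<ng-content>Howdy World</ng-content>`,
})
export class GreetingComponent {}

// Usage:
//   <app-greeting></app-greeting>               renders "Howdy World"
//   <app-greeting>Hello there!</app-greeting>   renders "Hello there!"
```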
6. Other Improvements

In addition to the major updates mentioned above, Angular 18 also includes several other improvements:

1. TypeScript 5.4: Angular 18 now supports TypeScript 5.4, which lets you take advantage of new features such as preserved narrowing in closures following last assignments.
2. Global Observable in Angular Forms: Angular 18 introduces a global events observable in Angular forms, which allows you to track all changes around any abstract control and its children, including touched and dirty state changes, in a single observable. Here's an example of how you can use the global observable: ` `
3. Stable Deferrable Views: Deferrable views are now stable in Angular 18.
4. Stable Control Flow: The built-in control flow is now stable in Angular 18! It is more performant than its predecessor, and it also received improved type checking, including guardrails for certain performance-related anti-patterns.
5. Route Redirects as Functions: For added flexibility in managing redirects, Angular v18 now lets you use a function for redirectTo that returns a string, which allows you to create more sophisticated redirection logic based on runtime conditions. For example: ` `

Conclusion

Angular 18 is a significant release that brings many new features, enhancements, and experimental APIs to the Angular ecosystem. The introduction of zoneless change detection, improvements to server-side rendering and hydration, stable Material Design 3 support, signal-based APIs, and fallback content for ng-content are just a few of the highlights. The Angular team has again demonstrated its commitment to improving the framework's developer experience, performance, and flexibility, along with a clear vision for Angular's future. If you're curious about what's next, you can check out the Angular roadmap....


The Importance of a Scientific Mindset in Software Engineering: Part 2 (Debugging)

In the first part of my series on the importance of a scientific mindset in software engineering, we explored how the principles of the scientific method can help us evaluate sources and make informed decisions. Now, we will focus on how these principles can help us tackle one of the most crucial and challenging tasks in software engineering: debugging.

In software engineering, debugging is often viewed as an art - an intuitive skill honed through experience and trial and error. In a way, it is. Just as a GP, even a very evidence-based one, will likely diagnose most of their patients based on experience and intuition rather than researching the scientific literature every time, a software engineer will often rely on experience and intuition to identify and fix common bugs. However, an internist faced with a complex case will likely not be able to rely on intuition alone and must apply the scientific method to diagnose the patient. Similarly, when faced with a complex bug, a software engineer can benefit from using the scientific method to identify and fix the problem.

From that perspective, treating engineering challenges like scientific inquiries can transform the way we tackle problems. Rather than resorting to guesswork or gut feelings, we can apply the principles of the scientific method - forming hypotheses, designing controlled experiments, gathering and evaluating evidence - to identify and eliminate bugs systematically. This approach, sometimes referred to as "scientific debugging," reframes debugging from a haphazard process into a structured, disciplined practice. It encourages us to be skeptical, methodical, and transparent in our reasoning. For instance, as Andreas Zeller notes in the book _Why Programs Fail_, the key aspect of scientific debugging is its explicitness: using the scientific method, you make your assumptions and reasoning explicit, which helps you understand those assumptions and often reveals hidden clues that can lead to the root cause of the problem at hand. Note: If you'd like to read an excerpt from the book, you can find it on Embedded.com.

Scientific Debugging

At its core, scientific debugging applies the principles of the scientific method to the process of finding and fixing software defects. Rather than attempting random fixes or relying on intuition, it encourages engineers to move systematically, guided by data, hypotheses, and controlled experimentation. By treating debugging as a rigorous inquiry, we can reduce guesswork, speed up the resolution process, and ensure that our fixes are based on solid evidence.

Just as a scientist begins with a well-defined research question, a software engineer starts by identifying the specific symptom or error condition. For instance, if our users report inconsistencies in the data they see across different parts of the application, our research question could be: _"Under what conditions does the application display outdated or incorrect user data?"_ From there, we can follow a structured debugging process that mirrors the scientific method:

1. Observe and Define the Problem: First, we need to clearly state the bug's symptoms and the environment in which it occurs. We should isolate whether the issue is deterministic or intermittent and identify any known triggers if possible. Such a structured definition serves as the groundwork for further investigation.
2. Formulate a Hypothesis: A hypothesis in debugging is a testable explanation for the observed behavior. For instance, you might hypothesize: _"The data inconsistency occurs because a caching layer is serving stale data when certain user profiles are updated."_ The key is that this explanation must be falsifiable; if experiments don't support the hypothesis, it must be refined or discarded.

3. Collect Evidence and Data: Evidence often includes logs, system metrics, error messages, and runtime traces. Similar to reviewing primary sources in academic research, treat your raw debugging data as crucial evidence. Evaluating these data points can reveal patterns. In our example, such patterns could be whether the bug correlates with specific caching mechanisms, increased memory usage, or database query latency. During this step, it's essential to approach data critically, just as you would analyze the quality and credibility of sources in a research literature review. Don't forget that even logs can be misleading, incomplete, or incorrect, so cross-referencing multiple sources is key.

4. Design and Run Experiments: Design minimal, controlled tests to confirm or refute your hypothesis. In our example, you may try disabling or shortening the cache's time-to-live (TTL) to see if more recent data is displayed correctly. By manipulating one variable at a time - such as cache invalidation intervals - you gain clearer insights into causation. Tools such as profilers, debuggers, or specialized test harnesses can help isolate factors and gather precise measurements.

5. Analyze Results and Refine Hypotheses: If the experiment's outcome doesn't align with your hypothesis, treat it as a stepping stone, not a dead end. Adjust your explanation, form a new hypothesis, or consider additional variables (for example, whether certain API calls bypass caching). Each iteration should bring you closer to a better understanding of the bug's root cause. Remember, the goal is not to prove an initial guess right but to arrive at a verifiable explanation.

6. Implement and Verify the Fix: Once you're confident in the identified cause, you can implement the fix. Verification doesn't stop at deployment - re-test under the same conditions and, if possible, beyond them. By confirming the fix in a controlled manner, you ensure that the solution is backed by evidence rather than wishful thinking. Personally, I consider implementing end-to-end tests (e.g., with Playwright) that reproduce the bug and verify the fix to be a crucial part of this step (see the sketch after this list). This both ensures that the bug doesn't reappear in the future due to changes in the codebase and avoids the possible imprecision of manual testing.
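To make that last step concrete, here is a hypothetical Playwright regression test for the running stale-profile-data example; the routes, labels, and test ids are assumptions, not taken from a real codebase:

```ts
// profile-update.spec.ts - sketch of an e2e test that pins down the bug.
import { test, expect } from '@playwright/test';

test('profile changes are visible immediately after saving', async ({ page }) => {
  // Reproduce the user action that exposed the bug.
  await page.goto('/profile/edit');
  await page.getByLabel('First name').fill('Ada');
  await page.getByRole('button', { name: 'Save' }).click();

  // Visit another view that renders the same user data.
  await page.goto('/dashboard');

  // If the caching bug regresses, this assertion fails with the stale value.
  await expect(page.getByTestId('user-name')).toHaveText('Ada');
});
```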
Now, we can explore these steps in more detail, highlighting how the scientific method can guide us through the debugging process.

Establishing Clear Debugging Questions (Formulating a Hypothesis)

A hypothesis is a proposed explanation for a phenomenon that can be tested through experimentation. In a debugging context, that phenomenon is the bug or issue you're trying to resolve. Having a clear, falsifiable statement that you can prove or disprove ensures that you stay focused on the real problem rather than jumping haphazardly between possible causes. A properly formulated hypothesis lets you design precise experiments to evaluate whether your explanation holds true. To formulate a hypothesis effectively, you can follow these steps:

1. Clearly Identify the Symptom(s): Before forming any hypothesis, pin down the specific issue users are experiencing. For instance:
- "Users intermittently see outdated profile information after updating their accounts."
- "Some newly created user profiles don't reflect changes in certain parts of the application."
Having a well-defined problem statement keeps your hypothesis focused on the actual issue. Just like a research question in science, the clarity of your symptom definition directly influences the quality of your hypothesis.

2. Draft a Tentative Explanation: Next, convert your symptom into a statement that describes a _possible root cause_, such as:
- "Data inconsistency occurs because the caching layer isn't invalidating or refreshing user data properly when profiles are updated."
- "Stale data is displayed because the cache timeout is too long under certain load conditions."
This step makes your assumption about the root cause explicit. As with the scientific method, your hypothesis should be something you can test and either confirm or refute with data or experimentation.

3. Ensure Falsifiability: A valid hypothesis must be falsifiable - meaning it can be proven _wrong_. You'll struggle to design meaningful experiments if a hypothesis is too vague or broad. For example:
- Not falsifiable: "Occasionally, the application just shows weird data."
- Falsifiable: "Users see stale data when the cache is not invalidated within 30 seconds of profile updates."
Making your hypothesis specific enough to fail a test will pave the way for more precise debugging.

4. Align with Available Evidence: Match your hypothesis to what you already know - logs, stack traces, metrics, and user reports. For example:
- If logs reveal that cache invalidation events aren't firing, form a hypothesis explaining why those events fail or never occur.
- If metrics show that data served from the cache is older than the configured TTL, hypothesize about how or why the TTL is being ignored.
If your current explanation contradicts existing data, refine your hypothesis until it fits.

5. Plan for Controlled Tests: Once you have a testable hypothesis, figure out how you'll attempt to _disprove_ it. This might involve:
- Reproducing the environment: Set up a staging/local system that closely mimics production, for instance with the same cache layer configurations.
- Varying one condition at a time: For example, only adjust cache invalidation policies or TTLs, and then observe how data freshness changes.
- Monitoring metrics: In our example, such monitoring would involve tracking user profile updates, cache hits/misses, and response times. These metrics should lead to confirming or rejecting your explanation.
These plans become your blueprint for experiments in further debugging stages.

Collecting and Evaluating Evidence

After formulating a clear, testable hypothesis, the next crucial step is to gather data that can either support or refute it. This mirrors how scientists collect observations in a literature review or initial experiments.

1. Identify "Primary Sources" (Logs, Stack Traces, Code History):
- Logs and Stack Traces: These are your direct pieces of evidence - treat them like raw experimental data. For instance, look closely at timestamps, caching-related events (e.g., invalidation triggers), and any error messages related to stale reads.
- Code History: Look for related changes in your source control, e.g., using Git bisect.
In our example, we would look for changes to caching mechanisms or references to cache libraries in commits, which could pinpoint when the inconsistency was introduced. Sometimes, reverting a commit that altered cache settings helps confirm whether the bug originated there.

2. Corroborate with "Secondary Sources" (Documentation, Q&A Forums):
- Documentation: Check official docs for known behavior or configuration details that might differ from your assumptions.
- Community Knowledge: Similar issues reported on GitHub or Stack Overflow may reveal known pitfalls in a library you're using.

3. Assess Data Quality and Relevance:
- Look for Patterns: For instance, does stale data appear only after certain update frequencies or at specific times of day?
- Check Environmental Factors: For instance, does the bug happen only with particular deployment setups, container configurations, or memory constraints?
- Watch Out for Biases: Avoid seeking only the data that confirms your hypothesis. Look for contradictory logs or metrics that might point to other root causes.

By treating logs, stack traces, and code history as primary data - akin to raw experimental results - you keep your hypothesis grounded in real-world system behavior. This evidence-first approach reduces guesswork and guides more precise experiments.

Designing and Running Experiments

With a hypothesis in hand and evidence gathered, it's time to test it through controlled experiments - much like scientists isolate variables to verify or debunk an explanation.

1. Set Up a Reproducible Environment:
- Testing Environments: Replicate production conditions as closely as possible. In our example, that would involve ensuring the same caching configuration, library versions, and relevant data sets are in place.
- Version Control Branches: Use a dedicated branch to experiment with different settings or configurations, e.g., cache invalidation strategies. This streamlines reverting changes if needed.

2. Control Variables One at a Time:
- For instance, if you suspect data inconsistency is tied to cache invalidation events, first adjust only the invalidation timeout and re-test.
- Or, if concurrency could be a factor (e.g., multiple requests updating user data simultaneously), test different concurrency levels to see if stale data issues become more pronounced.

3. Measure and Record Outcomes:
- Automated Tests: Tests provide a great way to formalize and verify your assumptions. For instance, you could develop tests that intentionally update user profiles and check whether the displayed data matches the latest state.
- Monitoring Tools: Monitor relevant metrics before, during, and after each experiment. In our example, we might want to track cache hit rates, TTL durations, and query times.
- Repeat Trials: Consistency across multiple runs boosts confidence in your findings.

4. Validate Against a Baseline:
- If baseline tests show normal behavior but your experimental changes manifest the bug, you've isolated the variable causing the issue - e.g., if the baseline shows that data is consistently fresh under normal caching conditions but your experimental changes cause stale data.
- Conversely, if your change eliminates the buggy behavior, it supports your hypothesis - e.g., that the cache configuration was the root cause.

Each experiment outcome is a data point supporting or contradicting your hypothesis. Over time, these data points guide you toward the true cause.
Analyzing Results and Iterating

In scientific debugging, an unexpected result isn't a failure - it's valuable feedback that brings you closer to the right explanation.

1. Compare Outcomes to the Hypothesis: For instance:
- Did user data stay consistent after you reduced the cache TTL or fixed invalidation logic?
- Did logs show caching events firing as expected, or did they reveal unexpected errors?
- Are there only partial improvements that suggest multiple overlapping issues?

2. Incorporate Unexpected Observations:
- Sometimes debugging uncovers side effects - e.g., performance bottlenecks exposed by more frequent cache invalidations. Note these for future work.
- If your hypothesis is disproven, revise it. For example, the cache may only be part of the problem, and a separate load balancer setting also needs attention.

3. Avoid Confirmation Bias:
- Don't dismiss contrary data. For instance, if you see evidence that updates are fresh in some modules but stale in others, you may have found a more nuanced root cause (e.g., partial cache invalidation).
- Consider other credible explanations if your teammates propose them, and test those with the same rigor.

4. Decide If You Need More Data:
- If results aren't conclusive, add deeper instrumentation or enable debug modes to capture more detailed logs.
- For production-only issues, implement distributed tracing or sampled logging to diagnose real-world usage patterns.

5. Document Each Iteration:
- Record the results of each experiment, including any unexpected findings or new hypotheses that arise.

Through iterative experimentation and analysis, each cycle refines your understanding. By letting evidence shape your hypothesis, you ensure that your final conclusion aligns with reality.

Implementing and Verifying the Fix

Once you've identified the likely culprit - say, a misconfigured or missing cache invalidation policy - the next step is to implement a fix and verify its resilience.

1. Implementing the Change:
- Scoped Changes: Adjust just the component pinpointed in your experiments. Avoid large-scale refactoring that might introduce other issues.
- Code Reviews: Peer reviews can catch overlooked logic gaps or confirm that your changes align with best practices.

2. Regression Testing:
- Re-run the same experiments that initially exposed the issue. In our stale data example, confirm that the data remains fresh under various conditions.
- Conduct broader tests - like integration or end-to-end tests - to ensure no new bugs are introduced.

3. Monitoring in Production:
- Even with positive test results, real-world scenarios can differ. Monitor logs and metrics (e.g., cache hit rates, user error reports) closely post-deployment.
- If the buggy behavior reappears, revisit your hypothesis or consider additional factors, such as unpredicted user behavior.

4. Benchmarking and Performance Checks (If Relevant):
- When making changes that affect the frequency of certain processes - such as how often a cache is refreshed - be sure to measure the performance impact. Verify you meet any latency or resource usage requirements.
- Keep an eye on the trade-offs: for instance, more frequent cache invalidations might solve stale data but could also raise system load.

By systematically verifying your fix - similar to confirming experimental results in research - you ensure that you've addressed the true cause and maintained overall software stability.

Documenting the Debugging Process

Good science relies on transparency, and so does effective debugging.
Thorough documentation guarantees your findings are reproducible and valuable to future team members.

1. Record Your Hypothesis and Experiments:
- Keep a concise log of your main hypothesis, the tests you performed, and the outcomes.
- A simple markdown file within the repo can capture critical insights without being cumbersome.

2. Highlight Key Evidence and Observations:
- Note the logs or metrics that were most instrumental - e.g., seeing repeated stale cache hits 10 minutes after updates.
- Document any edge cases discovered along the way.

3. List Follow-Up Actions or Potential Risks:
- If you discover additional issues - like memory spikes from more frequent invalidation - note them for future sprints.
- Identify parts of the code that might need deeper testing or refactoring to prevent similar issues.

4. Share with Your Team:
- Publish your debugging report on an internal wiki or ticket system. A well-documented troubleshooting narrative helps educate other developers.
- Encouraging open discussion of the debugging process fosters a culture of continuous learning and collaboration.

By paralleling scientific publication practices in your documentation, you establish a knowledge base that guides future debugging efforts and accelerates collective problem-solving.

Conclusion

Debugging can be as much a rigorous, methodical exercise as an art shaped by intuition and experience. By adopting the principles of scientific inquiry - forming hypotheses, designing controlled experiments, gathering evidence, and transparently documenting your process - you make your debugging approach both systematic and repeatable. The explicitness and structure of scientific debugging offer several benefits:

- Better Root-Cause Discovery: Structured, hypothesis-driven debugging sheds light on the _true_ underlying factors causing defects rather than simply masking symptoms.
- Informed Decisions: Data and evidence lead the way, minimizing guesswork and reducing the chance of reintroducing similar issues.
- Knowledge Sharing: As in scientific research, detailed documentation of methods and outcomes helps others learn from your process and fosters a collaborative culture.

Ultimately, whether you are diagnosing an intermittent crash or chasing elusive performance bottlenecks, scientific debugging brings clarity and objectivity to your workflow. By aligning your debugging practices with the scientific method, you build confidence in your solutions and empower your team to tackle complex software challenges with precision and reliability. But most importantly, don't get discouraged by the number of rigorous steps outlined above or by the fact that you won't always manage to follow them all religiously. Debugging is a complex and often frustrating process, and it's okay to rely on your intuition and experience when needed. Feel free to adapt the debugging process to your needs and constraints; as long as you keep the scientific mindset at heart, you'll be on the right track....


Docusign Momentum 2025 From A Developer’s Perspective

*What if your contract details stuck in PDFs could ultimately become the secret sauce of your business automation workflows?*

In a world drowning in PDFs and paperwork, I never thought I'd get goosebumps about agreements - until I attended Docusign Momentum 2025. I went in expecting talks about e-signatures; I left realizing that the big push among many enterprise-level organizations will be around Intelligent Agreement Management (IAM). It is positioned to transform how we build business software, so let's talk about it.

As Director of Technology at This Dot Labs, I had a front-row seat to all the exciting announcements at Docusign Momentum. Our team also had a booth there showing off the six Docusign extension apps This Dot Labs has released this year. We met 1-on-1 with a lot of companies and leaders to discuss the exciting promise of IAM. What can your company accomplish with IAM? Is it really worth it for you to start adopting IAM? Let's dive in and find out.

After his keynote, I met up with Robert Chatwani, President of Docusign, and he said this: "At Docusign, we truly believe that the power of a great platform is that you won't be able to exactly predict what can be built on top of it, and builders and developers are at the heart of driving this type of innovation. Now with AI, we have entered what I believe is a renaissance era for new ideas and business models, all powered by developers."

Docusign's annual conference in NYC was an eye-opener: agreements are no longer just documents to sign and shelve, but dynamic data hubs driving key processes. Here's my take on what I learned, why it matters, and why developers should pay attention.

From E-Signatures to Intelligent Agreements - A New Era

Walking into Momentum 2025, you could feel the excitement. Docusign's CEO and product team set the tone in the keynote: "Agreements make the world go round, but for too long they've been stuck in inboxes and PDFs, creating drag on your business." Their message was clear - Docusign is moving from a product to a platform. In other words, the company that pioneered e-signatures now aims to turn static contracts into live, integrated assets that propel your business forward.

I saw this vision click when I chatted with an attendee from a major financial services firm. His team manages millions of forms a year - loan applications, account forms, you name it. He admitted they were still "just scanning and storing PDFs" and struggled to imagine how IAM could help. We discussed how much value was trapped in those documents (what Docusign calls the "Agreement Trap" of disconnected processes). By the end of our coffee, the lightbulb was on: with the right platform, those forms could be automatically routed, data-extracted, and made to trigger workflows in other systems - no more black hole of PDFs. His problem wasn't unique; many organizations have critical data buried in agreements, and they're waking up to the idea that it doesn't have to be this way.

What Exactly is Intelligent Agreement Management (IAM)?

So what is Docusign's Intelligent Agreement Management? In essence, IAM is an AI-powered platform that connects every part of the agreement lifecycle. It's not a single product, but a collection of services and tools working in concert. Docusign IAM helps transform agreement data into insights and actions, accelerate contract cycles, and boost productivity across departments.
The goal is to address the inefficiencies in how agreements are created, signed, and managed - inefficiencies that cost businesses time and money. At Momentum, Docusign showcased the core components of IAM:

- Docusign Navigator: A smart repository to centrally store, search, and analyze agreements. It uses AI to convert your signed documents (which are basically large chunks of text) into structured, queryable data. Instead of manually digging through contracts for a specific clause, you can search across all agreements in seconds. Navigator gives you a clear picture of your organization's contractual relationships and obligations (think of it as Google for your contracts). Bonus: it comes with out-of-the-box dashboards for things like renewal dates, so you can spot risks and opportunities at a glance.

- Docusign Maestro: A no-code workflow engine to automate agreement workflows from start to finish. Maestro lets you design customizable workflows that orchestrate Docusign tasks and integrate with third-party apps - all without writing code. For example, you could have a workflow for new vendor onboarding: once a vendor contract is signed, Maestro could automatically notify your procurement team, create a task in your project tracker, and update a record in your ERP system. At the conference, they demoed how Maestro can streamline processes like employee onboarding and compliance checks through simple drag-and-drop steps, or archive PDFs of signed agreements into Google Drive or Dropbox.

- Docusign Iris (AI Engine): The brains of the operation. Iris is the new AI engine powering all of IAM's "smarts" - from reading documents to extracting data and making recommendations. It's behind features like automatic field extraction, AI-assisted contract review, intelligent search, and even document summarization. In the keynote, we saw examples of Iris in action: identifying key terms (e.g., payment terms or renewal clauses) across a stack of contracts, or instantly generating a summary of a lengthy agreement. These capabilities aren't just gimmicks; as one Docusign executive put it, they're "signals of a new way of working with agreements". Iris essentially gives your agreement workflow a brain - it can understand the content of agreements and help you act on it.

- Docusign App Center: A hub to connect the tools of your trade into Docusign. App Center is like an app store for integrations - it lets you plug other software (project management, CRM, HR systems, etc.) directly into your Maestro workflows. This is huge for developers (and frankly, anyone tired of building one-off integrations). Instead of treating Docusign as an isolated e-signature tool, App Center makes it a platform you can extend. I'll dive more into this in the next section, since it's close to my heart - my team helped build some of these integrations!

In short, IAM ties together the stages of an agreement (create → sign → store → manage) and supercharges each with automation and AI. It's modular, too - you can adopt the pieces you need. Docusign essentially unbundled the agreement process into building blocks that developers and admins can mix and match. The future of agreements, as Docusign envisions it, is a world where organizations *"seamlessly add, subtract, and rearrange modular solutions to meet ever-changing needs"* on a single trusted platform.

The App Center and Real-World Integrations (Yes, We Built Those!)
One of the most exciting parts of Momentum 2025 for me was seeing the Docusign App Center come alive. As someone who works on integrations, I was practically grinning during the App Center demos. Docusign highlighted several partner-built apps that snap into IAM, and I'm proud to say This Dot Labs built six of them - including integrations for Monday.com, Slack, Jira, Asana, Airtable, and Mailchimp.

Why are these integrations a big deal? Because developers often spend countless hours wiring up systems that need to talk to each other. With App Center, a lot of that heavy lifting is already done. You can install an app with a few clicks and configure data flows in minutes instead of coding for months. In fact, a study found it takes the average org 12 months to develop a custom workflow via APIs, whereas with Docusign's platform you can do it via configuration almost immediately. That's a game-changer for time-to-value.

At our This Dot Labs booth, I spoke with many developers who were intrigued by these possibilities. For example, we showed how our Docusign Slack Extension lets teams send Slack messages and notifications when agreements are sent and signed. If a sales contract gets signed, the Slack app can automatically post a notification in your channel and even attach the signed PDF - no more emailing attachments around. People loved seeing how easily Docusign and Slack now talk to each other using this extension. Another popular one was our Monday.com app. With it, as soon as an agreement is signed, you can trigger actions in Monday - like assigning onboarding tasks for a new client or employee. Essentially, signing the document kicks off the next steps automatically.

These integrations showcase why IAM is not just about Docusign's own features, but about an ecosystem. App Center already includes connectors for popular platforms like Salesforce, HubSpot, Workday, ServiceNow, and more. The apps we built for Monday, Slack, Jira, etc., extend that ecosystem. Each app means one less custom integration a developer has to build from scratch. And if an app you need doesn't exist yet - well, that's an opportunity. (Shameless plug: we're happy to help build it!)

The key takeaway here is that Docusign is positioning itself as a foundational layer in the enterprise software stack. Your agreement workflow can now natively include things like project management updates, CRM entries, notifications, and data syncs. As a developer, I find that pretty powerful. It's a shift from thinking of Docusign as a single SaaS tool to thinking of it as a platform that glues processes together.

Not Just Another Contract Tool - Why IAM Matters for Business

After absorbing all the Momentum keynotes and sessions, one thing is clear: IAM is not "just another contract management tool." It's aiming to be the platform that automates critical business processes which happen to revolve around agreements. The use cases discussed were not theoretical - they were tangible scenarios every developer or IT lead will recognize:

- Procurement Automation: We heard how companies are using IAM to streamline procurement. Imagine a purchase order process where a procurement request triggers an agreement that goes out for e-signature, and once signed, all relevant systems update instantly. One speaker described connecting Docusign with their ERP so that vendor contracts and purchase orders are generated and tracked automatically. This reduces the back-and-forth with legal and ensures nothing falls through the cracks.
It's easy to see the developer opportunity: instead of coding a complex procurement approval system from scratch, you can leverage Docusign's workflow and integration hooks to handle it. Docusign IAM is designed to connect to systems like CRM, HR, and ERP so that agreements flow into the same stream of data. For developers, that means using pre-built connectors and APIs rather than reinventing them.

- Faster Employee Onboarding: Onboarding a new hire or client typically involves a flurry of forms and tasks - offer letters or contracts to sign, NDAs, account setup, and so on. We saw how IAM can accelerate onboarding by combining e-signature with automated task generation. For instance, the moment a new hire signs their offer letter, Maestro could trigger an onboarding workflow: provisioning the employee in systems, scheduling orientation, and creating tasks in tools like Asana or Monday. All those steps get kicked off by the signed agreement. Docusign Maestro's integration capabilities shine here - it can tie into HR systems or project management apps to carry the baton forward. The result is a smoother day-one experience for the new hire and less manual coordination for IT and HR. As developers, we can appreciate how this modular approach saves us from writing yet another "onboarding script"; we configure the workflow, and IAM handles the rest.

- Reducing Contract Auto-Renewal Risk: If your company manages a lot of recurring contracts (think vendor services, subscriptions, leases), missing a renewal deadline can be costly. One real-world story shared at Momentum was about using IAM to prevent unwanted auto-renewals. With traditional tracking (spreadsheets or calendar reminders), it's easy to forget a termination notice and end up locked into a contract for another year. Docusign's solution: let the AI engine (Iris) handle it. It can scan your repository, surface any renewal or termination dates, and proactively remind stakeholders - or even kick off a non-renewal workflow if desired. As the Bringing Intelligence to Obligation Management session highlighted, "Missed renewal windows lead to unwanted auto-renewals or lost revenue… A forgotten termination deadline locks a company into an unneeded service for another costly term." With IAM, those pitfalls are avoidable. The system can automatically flag and assign tasks well before a deadline hits. For developers, this means we can deliver risk-reduction features without building a custom date-tracking system - the platform's AI and notification framework has us covered.

These examples all connect to a bigger point: agreements are often the linchpin of larger business processes (buying something, hiring someone, renewing a service). By making agreements "intelligent," Docusign IAM is essentially automating chunks of those processes. This translates to real outcomes - faster cycle times, fewer errors, and less risk. From a technical perspective, it means we developers have a powerful ally: we can offload a lot of workflow logic to the IAM platform. Why code it from scratch if a combination of Docusign and a few integration apps can do it?

Why Developers Should Care about IAM (Big Time)

If you're a software developer or architect building solutions for business teams, you might be thinking: this sounds cool, but is it relevant to me? Let me put it this way - after Momentum 2025, I'm convinced that ignoring IAM would be a mistake for anyone in enterprise software.
Here's why:

- Faster time-to-value for your clients or stakeholders: Business teams are always pressuring IT to deliver solutions faster. With IAM, you have ready-made components to accelerate projects. Need to implement a contract approval workflow? Use Maestro, not months of coding. Need to integrate Docusign with an internal system? Check App Center for an app, or use their APIs with far less glue code. Docusign's own research shows that connecting systems via App Center and Maestro can cut development time dramatically (from roughly 12 months of custom development to mere weeks or less). For us developers, that means we can deliver results sooner, which definitely wins points with the business.

- Fewer custom builds (and less maintenance): Let's face it - maintaining custom scripts or one-off integrations is not fun. Every time a SaaS API changes or a new requirement comes in, you're back in the code. IAM's approach offers more reuse and configuration instead of raw code. The platform does the hard work of staying updated (for example, when Slack or Salesforce change something in their API, Docusign's connector app will handle it). By leveraging these pre-built connectors and templates, you write less custom code, which means fewer bugs and lower maintenance overhead. You can focus your coding effort on the unique parts of your product, not the boilerplate integration logic.

- Reusable and modular workflows: I love designing systems as Lego blocks - and IAM encourages that. You can build a workflow once and reuse it across multiple projects or clients with slight tweaks. For instance, an approval workflow for sales contracts might be 90% similar to one for procurement contracts - with IAM, you can reuse that blueprint. The fact that everything is on one platform also means these workflows can talk to each other or be combined. This modularity is a developer's dream because it leads to cleaner architecture. Docusign explicitly touts this modular approach, noting that organizations can easily rearrange solutions on the fly to meet new needs. It's like having a library of proven patterns to draw from.

- AI enhancements with minimal effort: Adding AI to your apps can be daunting if you have to build or train models yourself. IAM essentially gives you AI-as-a-service for agreements. Need to extract key data from 1,000 contracts? Iris can do that out of the box. Want to implement risk scoring for contracts? The AI can flag unusual terms or deviations. As a developer, being able to call an API or trigger a function that returns "these are the 5 clauses to look at" is incredibly powerful - you're injecting intelligence without needing a data science team. It means you can offer more value in your applications (and impress those end users!) by simply tapping into IAM's AI features.

Ultimately, Docusign IAM empowers developers to build more with less code. It's about higher-level building blocks. This doesn't replace our jobs - it makes our jobs more focused on the interesting problems. I'd rather spend time designing a great user experience or tackling a complex business rule than coding yet another Docusign-to-Slack integration. IAM takes care of the plumbing and adds a layer of smarts on top.

Don't Underestimate Agreement Intelligence - Your Call to Action

Momentum 2025 left me with a clear call to action: embrace agreement intelligence. If you're a developer or tech leader, it's time to explore what Docusign IAM can do for your projects.
This isn't just hype from a conference - it's a real shift in how we can deliver solutions. Here are a few ways to get started:

- Browse the IAM App Center: Take a look at the growing list of apps in the Docusign App Center. You might find that the integration you've been meaning to build is already available (or one very close to it). Installing an app is trivial, and you can configure it to fit your workflow. This is the low-hanging fruit to immediately add value to your existing Docusign processes. If you have Docusign eSignature or CLM in your stack, App Center is where you extend it.

- Think about integrations that could unlock value: Consider the systems in your organization that aren't talking to each other. Is there a manual step where someone re-enters data from a contract into another system? Maybe an approval that's done via email and could be automated? Those are prime candidates for an IAM solution. For example, if Legal and Sales use different tools, an integration through IAM can bridge them, ensuring no agreement data falls through the cracks. Map out your agreement process end-to-end and identify gaps - chances are, IAM has a feature to fill them.

- Experiment with Maestro and the API: If you're technical, spin up a trial of Docusign IAM. Try creating a Maestro workflow for a simple use case, or use the Docusign API/SDKs to trigger some AI analysis on a document. Seeing it in action will spark ideas. I was amazed how quickly I could set up a workflow with conditions and parallel steps - things that would take significant coding time if I did them manually. The barrier to entry for adding complex logic has gotten a lot lower.

- Stay informed and involved: Docusign's developer community and IAM documentation are growing. Momentum may be over, but the "agreement intelligence" movement is just getting started. Keep an eye on upcoming features (they hinted at even more AI-assisted tools coming soon). Engage with the community forums or join Docusign's IAM webinars. And if you're building something cool with IAM, consider sharing your story - the community benefits from hearing real use cases.

My final thought: don't underestimate the impact that agreement intelligence can have on modern workflows. We spend so much effort optimizing various parts of our business, yet often overlook the humble agreement - the contracts, forms, and documents that initiate or seal every deal. Docusign IAM is shining a spotlight on these and saying, "Here is untapped gold. Let's mine it." As developers, we have an opportunity (and now the tools) to lead that charge.

I'm incredibly excited about this new chapter. After seeing what Docusign has built, I'm convinced that intelligent agreements can be a foundational layer for digital transformation. It's not just about getting documents signed faster; it's about connecting dots and automating workflows in ways we couldn't before. As I reflect on Momentum 2025, I'm inspired and already coding with new ideas in mind. I encourage you to do the same - check out IAM, play with the App Center, and imagine what you could build when your agreements start working intelligently for you. The future of agreements is here, and it's time for us developers to take full advantage of it.

Ready to explore? Head to the Docusign App Center and IAM documentation and see how you can turn your agreements into engines of growth. Trust me - the next time you attend Momentum, you might just have your own success story to share. Happy building!...

Let's innovate together!

We're ready to be your trusted technical partners in your digital innovation journey.

Whether it's modernization or custom software solutions, our team of experts can guide you through best practices and how to build scalable, performant software that lasts.

Prefer email? hi@thisdot.co