
Transforming Platform Engineering Through Chargeback Programs with Shuchi Mittal

Shuchi Mittal, Head of Cloud Enablement at Honeywell, discusses how, as a leader in platform engineering, she has transformed internal platform engineering teams at several organizations by offering value-added services and managing costs effectively. She talks about treating internal teams as customers: delivering services that meet their needs, but also charging them for their usage.

By providing policy-compliant infrastructure and unique services tailored to their requirements, platform engineering teams can showcase their commitment to supporting the success of other teams within an organization. This approach not only fosters trust but also positions platform engineering as a strategic partner rather than just a service provider. By going beyond basic infrastructure provisioning, the platform engineering team can offer services that help development and product teams streamline their processes and improve efficiency.

Shuchi shares her work at Fiserv where she implemented a chargeback system to track usage and costs effectively, incentivizing better development practices. This not only helped in managing costs but also encouraged teams to optimize their resource utilization, leading to improved overall efficiency and more easily scalable systems. Her platform engineering team recognized the importance of effectively managing financial aspects to secure upfront investment for product development. By implementing a billing system based on gigabyte hours, they aligned costs with usage, enabling teams to have a clear understanding of their resource consumption. This approach not only provided transparency but also incentivized teams to adopt cost-effective practices.
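The gigabyte-hour model described above is simple to reason about: a team is billed for the memory its workloads reserve, multiplied by how long they run. As a minimal sketch of the arithmetic (the rate and workload figures below are hypothetical illustrations, not Fiserv's actual numbers):

```python
# Hypothetical chargeback sketch: bill each team for memory reserved over time.
# The rate and workloads are illustrative, not any organization's real figures.

RATE_PER_GB_HOUR = 0.02  # assumed internal rate in dollars


def gb_hours(memory_gb: float, hours: float) -> float:
    """Resource consumption in gigabyte-hours."""
    return memory_gb * hours


def monthly_charge(workloads: list[tuple[float, float]]) -> float:
    """Total charge for a team's workloads, given as (memory_gb, hours) pairs."""
    total = sum(gb_hours(mem, hrs) for mem, hrs in workloads)
    return round(total * RATE_PER_GB_HOUR, 2)


# A team running a 4 GB service all month (720 h) plus a 16 GB batch job for 40 h:
team_usage = [(4.0, 720.0), (16.0, 40.0)]
print(monthly_charge(team_usage))  # prints 70.4 (3,520 GB-hours at $0.02)
```

Because the bill scales directly with reserved memory and runtime, teams see an immediate payoff for right-sizing instances and shutting down idle workloads, which is exactly the incentive the chargeback program is after.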

By strategically managing financial aspects, the platform engineering team gained the trust of stakeholders and secured the necessary resources to drive innovation and deliver value to the organization. Shuchi’s journey of platform engineering serves as a valuable example of how a team can transition from being a basic service provider to becoming an innovation partner within an organization. By building trust, offering value-added services, and strategically managing financial aspects, the platform engineering team successfully elevated their role and became a strategic enabler of innovation.

Download this episode here.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines and keeping your strategic digital initiatives on course. Check out our case studies and the clients that trust us with their engineering.

You might also like


The Importance of a Scientific Mindset in Software Engineering: Part 1 (Source Evaluation & Literature Review)

Today, I will write about something very dear to me - science. But not about science as a field of study but rather as a way of thinking. It's easy nowadays to get lost in the sea of information, fall for marketing hype, or even be trolled by a hallucinating LLM. A scientific mindset can be a powerful tool for navigating the complex modern world and the world of software engineering in particular. Not only is it a powerful tool, but I'll argue that it's a must nowadays if you want to make informed decisions, solve problems effectively, and become a better engineer.

As software engineers, we are constantly confronted with an overwhelming array of frameworks, technologies, and infrastructure choices. Sometimes, it feels like there's a new tool or platform every day, each accompanied by its own wave of hype and marketing. It's easy to feel lost in the myriad of information or even suffer from FOMO and insecurity about not jumping on the latest bandwagon. But it's not only about the abundance of information and making technological decisions. As engineers, we often write documentation, blog posts, talks, or even books. We need to be able to communicate our ideas clearly and effectively. Furthermore, we have to master the art of debugging code, which is essentially a scientific process where we form hypotheses, test them, and iterate until we find the root cause of the problem. Therefore, here's my hot take: engineering is a science; hence, to deserve an engineer title, one needs to think like a scientist. So, let's _review_ (pun intended) what it means to think like a scientist in the context of software engineering.

Systematic Review

In science, systematic review is not only an essential means to understand a topic and map the current state of knowledge in the field, but it also has a well-defined methodology.
You can't just google whatever supports your hypothesis and call it a day. You must define your research question, choose the databases you will search, set your inclusion and exclusion criteria, systematically search for relevant studies, evaluate their quality, and synthesize the results. Most importantly, you must be transparent about and describe your methodology in detail. The general process of systematic review can be summarized in the following steps:

1. Define your research question(s)
2. Choose databases and other sources to search
3. Define keywords and search terms
4. Define inclusion and exclusion criteria
   a. Define practical criteria such as publication date, language, etc.
   b. Define methodological criteria such as study design, sample size, etc.
5. Search for relevant studies
6. Evaluate the quality of the studies
7. Synthesize the results

Source: Conducting Research Literature Reviews: From the Internet to Paper by Dr. Fink

I'm pretty sure you can see where I'm going with this. There are many use cases in software engineering where a process similar to systematic review can be applied. Whether you're evaluating a new technology, choosing a tech stack for a new project, or researching for a blog post or a conference talk, it's important to be systematic in your approach, transparent about your methodology, and honest about the limitations of your research. Of course, when choosing a tech stack to learn or researching for a blog post, you don't have to be as rigorous as in a scientific study. But a few of these steps will always be worth following. Let's focus on those and see how we can apply them in the context of software engineering.

Defining Your Research Question(s)

Before you start researching, it's important to define your research questions. What are you trying to find out? What problem are you trying to solve? What are the goals of your research? These questions will help you stay focused and avoid getting sidetracked by irrelevant information.
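For an engineering investigation like a tech-stack evaluation, the review steps above can be sketched as a lightweight record you fill in as you go. The structure and field names below are this sketch's own invention, not part of any formal methodology:

```python
from dataclasses import dataclass, field

# A lightweight, illustrative structure for tracking an engineering
# "systematic review" -- e.g. evaluating a new technology. The class
# and field names are this sketch's own invention, not a standard.


@dataclass
class ReviewProtocol:
    research_questions: list[str]
    sources: list[str]             # databases, docs, community sites
    search_terms: list[str]
    inclusion_criteria: list[str]  # e.g. "first-hand benchmarks with methodology"
    exclusion_criteria: list[str]  # e.g. "vendor claims without data"
    findings: list[dict] = field(default_factory=list)

    def add_finding(self, source: str, claim: str, evidence: str) -> None:
        """Record a claim together with where it came from and what backs it."""
        self.findings.append(
            {"source": source, "claim": claim, "evidence": evidence}
        )


protocol = ReviewProtocol(
    research_questions=["How well does bundler A handle our monorepo?"],
    sources=["official docs", "GitHub issues", "StackOverflow"],
    search_terms=["bundler A monorepo", "bundler A incremental build"],
    inclusion_criteria=["first-hand benchmarks with stated methodology"],
    exclusion_criteria=["claims without data"],
)
protocol.add_finding(
    "official docs", "supports workspaces", "docs section on monorepos"
)
print(len(protocol.findings))  # prints 1
```

Even this informal version forces the two habits the process is really about: stating your questions and criteria before you search, and keeping every claim attached to its source and evidence.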
> A practical example: If you're evaluating, say, whether to use bundler _A_ or bundler _B_ without a clear research question, you might end up focusing on marketing claims about how bundler _A_ is faster than bundler _B_ or how bundler _B_ is more popular than bundler _A_, even though such aspects may have minimal impact on your project. With a clear research question, you can focus on what really matters for your project, like how well each bundler integrates with your existing tools, how well they handle your specific use case, or how well they are maintained.

A research question is not a hypothesis - you don't have to have a clear idea of the answer. It's more about defining the scope of your research and setting clear goals. It can be as simple and general as "What are the pros and cons of using React vs. Angular for a particular project?" but also more specific and focused, like "What are the legal implications of using open-source library _X_ for purpose _Y_ in project _Z_?". You can have multiple research questions, but keeping them focused and relevant to your goals is essential. In my personal opinion, part of the scientific mindset is automatically having at least a vague idea of a research question in your head whenever you're facing a problem or a decision, and that alone can make you a more confident and effective engineer.

Choosing Databases and Other Sources to Search

In engineering, some information (especially when researching rare bugs) can be scarce, and you have to search wherever you can and take what you can get. Hence, this step is arguably much easier in science, where you can include well-established databases and publications in your search. Information in science is simply more standardized and easier to find. There are, however, still some decisions to be made about where to search. Do you want to include community websites like StackOverflow or Reddit?
Do you want to include marketing materials from the companies behind the technologies you're evaluating? These can all be valid sources of information, but they have their limitations and biases, and it's important to be aware of them. Or do you want to ask an LLM? I haven't included LLMs in the list of valid sources of information on purpose, as they are not literature databases in the traditional sense, and I wouldn't consider them a search source for literature research. And for a very good reason - they are essentially a black box, and therefore, you cannot reliably describe a reproducible methodology of your search. That doesn't mean you shouldn't ask an LLM for inspiration or a TL;DR, but you should always verify the information you get from them and be aware of their limitations.

Defining Keywords and Search Terms

This section will be short, as most of you are familiar with the concept of keywords and search terms and how to use search engines. However, I still wanted to highlight how important it is for a software engineer to know how to search effectively. It's not just about typing in a few keywords and hoping for the best. It's about learning how to use advanced search operators, filter out irrelevant results, and find the information you need quickly and efficiently. If you're not familiar with advanced search operators, I highly recommend you take some time to learn them, for example, at FreeCodeCamp. Please note, however, that the article is specific to Google, and different search engines may have different operators and syntax. This is especially true for scientific databases, which often have their own search syntax and operators. So, if you're doing more formal research, familiarize yourself with the database's search syntax. The underlying principles, however, are pretty much the same everywhere; just the syntax and UI might differ. With a solid search strategy in place, the next critical step is to assess the quality of the information we find.
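To make the point about advanced operators concrete, here is a small helper that assembles a Google-style query. The operators used (`site:`, exact-phrase quotes, and `-` exclusion) are standard Google search syntax; the technology names and the helper itself are made up for illustration:

```python
# Illustrative Google-style query builder using standard operators.
# The helper function and example terms are this sketch's own invention.


def build_query(terms, site=None, exact_phrase=None, exclude=None):
    """Assemble a search query string from common Google operators."""
    parts = list(terms)
    if exact_phrase:
        parts.append(f'"{exact_phrase}"')      # exact-phrase match
    if site:
        parts.append(f"site:{site}")           # restrict to one domain
    if exclude:
        parts.extend(f"-{word}" for word in exclude)  # drop noisy results
    return " ".join(parts)


# Only StackOverflow results about a specific error, minus sponsored noise:
q = build_query(
    ["bundler", "monorepo"],
    site="stackoverflow.com",
    exact_phrase="out of memory",
    exclude=["sponsored"],
)
print(q)  # bundler monorepo "out of memory" site:stackoverflow.com -sponsored
```

The point isn't the helper itself but the habit: a deliberately scoped query like this surfaces first-hand bug reports in seconds, where a vague two-keyword search returns pages of marketing.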
Methodological Criteria and Evaluation of Sources

This is where things get interesting. In science, evaluating the quality of the studies is a crucial step in the systematic review process. You can't just take the results of a study at face value - you need to critically evaluate its design, the sample size, the methodology, and the conclusions - and you need to be aware of the limitations of the study and the potential biases that may have influenced the results. In science, there is a pretty straightforward yet helpful categorization of sources that my students surprisingly needed help understanding because no one ever explained it to them. So let me lay out and explain the three categories to you now:

1. Primary sources

Primary sources represent original research. You can find them in studies, conference papers, etc. In science, this is what you generally want to cite in your own research. However, remember that only some of what you find in an original research paper is a primary source. Only the parts that present the original research are primary sources. For example, the introduction can contain citations to other studies, which are not primary, but secondary sources. While primary sources can sometimes be perceived as hard to read and understand, in many cases, they can actually be easier to reach and understand, as the methods and results are usually presented in a condensed form in the abstract, and often you need only skim the introduction and discussion to get a good idea of the study. In software engineering, primary sources can sometimes be papers, but more often, they are original documentation, case studies, or even blog posts that present original research or data. For example, if you're evaluating a new technology, the official documentation, case studies, and blog posts from its developers can be considered primary sources.

2. Secondary sources

Secondary sources are typically reviews, meta-analyses, and other sources that summarize, analyze, or reference the primary sources. A good way to identify a source as secondary is to look for citations to other studies. If a claim has a citation, it's likely a secondary source. On the other hand, something is likely wrong if it doesn't have a citation and doesn't seem to present original research. Secondary sources can be very useful for getting an overview of a topic, understanding the current state of knowledge, and finding relevant primary sources. Meta-analyses, in particular, can provide a beneficial point of view on a subject by combining the results of multiple studies and looking for patterns and trends. The downside of secondary sources is that they can introduce information noise, as they are basically introducing another layer of interpretation and analysis. So, it's always a good idea to go back to the primary sources and verify the information you get from secondary sources. Secondary sources in software engineering include blog posts, talks, or articles that summarize, analyze, or reference primary sources. For example, if you're researching a new technology, a blog post that compares different technologies based on their documentation and/or studies made by their authors can be considered a secondary source.

3. Tertiary sources

Tertiary sources represent a further level of abstraction. They are typically textbooks, encyclopedias, and other sources that summarize, analyze, or reference secondary sources. They are useful for getting a broad overview of a topic, understanding the basic concepts, and finding relevant secondary sources. One example I see as a tertiary source is Wikipedia, and while you shouldn't ever cite Wikipedia in academic research, it can be a good starting point for getting an overview of a topic and finding relevant primary and secondary sources, as you can easily click through the references.
> Note: It's fine to reference Wikipedia in a blog post or a talk to give your audience a convenient explanation of a term or concept. I'm even doing it in this post. However, you should always verify that the article is up to date and that the information is correct.

The distinction between primary, secondary, and tertiary sources in software engineering is not as clear-cut as in science, but the general idea still applies. When researching a topic, knowing the different types of sources and their limitations is essential. Primary sources are generally the most reliable and should be your go-to when seeking evidence to support your claims. Secondary sources can help you get an overview of a topic, but they should be used cautiously, as they can introduce bias and noise. Tertiary sources are good for getting a broad overview of a topic but should not be used as evidence in academic research.

Evaluating Sources

Now that we have the categories laid out, let's talk about evaluating the quality of the sources because, realistically, not all sources are created equal. In science, we have some well-established criteria for evaluating the quality of a source. Some focus on the general credibility of the source, like the reputation of the journal or the author. In contrast, others focus on the quality of the study itself, like the study design, the sample size, and the methodology. First, we usually look at the number of citations and the impact factor of the journal in which the study was published. These numbers can give us an idea of how well the scientific community received the study and how much other researchers have cited it. In software engineering, we don't have the concept of impact factor when researching a concept or a technology, but we can still look at how widely a particular piece of information is shared, how well the professional community receives it, and how reputable the person sharing it is.
Second, we look at the study design and the methodology. Does the study have a clear research question? Is the study design appropriate for the research question? Are the methods well-described and reproducible? Are the results presented clearly and honestly? Do the data support the conclusions? Arguably, in software engineering, the honest and clear presentation of the method and results can be even more important than in science, given the amounts of money circulating in the industry and the potential for conflicts of interest. Therefore, it's important to understand where the data is coming from, how it was collected, and how it was analyzed. If a company (or their DevRel person) is presenting data that show their product is the best (fastest, most secure...), it's important to be aware of the potential biases and conflicts of interest that may have influenced the results. The ways in which the results can be skewed include:

- Missing, incomplete, or inappropriate methodology. Often, the methodology is not described in enough detail to be reproducible, or the whole experiment is designed in a way that doesn't actually answer the research question properly. For example, the methodology can omit important details, such as the environment in which the experiment was conducted or even the way the data was collected (e.g., to hide selection bias).
- Selection bias can be a common issue in software engineering experiments. For example, if someone is comparing two technologies, they might choose a dataset that they expect to perform better with one of the technologies, or a metric that they expect to show a difference. Selection bias can lead to skewed results that don't reflect the technologies' real-world performance.
- Publication bias is a common issue in science, where studies that show a positive result are more likely to be published than studies that show a negative outcome. In software engineering, this can manifest as a bias towards publishing success stories and case studies while ignoring failures and negative results.
- Confirmation bias is a problem in science and software engineering alike. It's the tendency to look for evidence that confirms your hypothesis and ignore evidence that contradicts it. Confirmation bias can lead to cherry-picking data, misinterpreting results, and drawing incorrect conclusions.
- Conflict of interest. While less common in academic research, conflicts of interest can be a big issue in industry research. If a company is funding a study that shows its product in a positive light, it's important to be aware of the potential biases that may have influenced the results.

Another thing we look at is the conclusions. Do the data support the conclusions? Are they reasonable and justified? Are they overstated or exaggerated? Are the limitations of the study acknowledged? Are the implications of the study discussed? It all goes back to honesty and transparency, which are crucial for evaluating the quality of the source. Last but not least, we should look at the citations and references included in the source. In the same way we apply the systematic review process to our own research, we should also apply it to the sources we use. I would argue that this is even more important in software engineering, where the information is often less standardized and you come across many unsupported claims. If a source doesn't provide citations or references to back up its claims, it's a red flag that the information may not be reliable. This brings us to something called anecdotal evidence. Anecdotal evidence is a personal story or experience used to support a claim. While anecdotal evidence can be compelling and persuasive, it is generally considered a weak form of evidence, as it is based on personal experience rather than empirical data.
So when someone tells you that X is better than Y because they tried it and it worked for them, or that Z is true because they heard it from someone, take it with a massive grain of salt and look for more reliable sources of information. That, of course, doesn't mean you should ask for a source under every post on social media, but it's important to recognize what's a personal opinion and what's a claim based on evidence.

Synthesizing the Results

Once you have gathered all the relevant information, it's time to synthesize the results. This is where you combine all the evidence you have collected, analyze it, and draw conclusions. In science, this is often done as part of a meta-analysis, where the results of multiple studies are combined and analyzed to look for patterns and trends using statistical methods. A meta-analysis is a powerful tool for synthesizing the results of multiple studies and drawing more robust conclusions than can be drawn from any single study. You might not be doing a formal meta-analysis in software engineering, but you can still apply the same principles to your research. Look for common themes and trends in the information you have gathered, compare and contrast different sources, and draw conclusions based on the evidence.

Conclusion

Adopting a scientific way of thinking isn't just a nice-to-have in software engineering - it's essential to make informed decisions, solve problems effectively, and navigate the vast amount of information around you with confidence. Applying systematic review principles to your research allows you to gather reliable information, evaluate it critically, and draw sound conclusions based on evidence. Let's summarize what such a systematic research approach can look like:

- Define Clear Research Questions:
  - Start every project or decision-making process by clearly stating what you aim to achieve or understand.
  - Example: "What factors should influence our choice between Cloud Service A and Cloud Service B for our application's specific needs?"
- Critically Evaluate Sources:
  - Identify the type of sources (primary, secondary, tertiary) and assess their credibility.
  - Be wary of biases and seek out multiple perspectives for a well-rounded understanding.
- Be Aware of Biases:
  - Recognize common biases that can cloud judgment, such as confirmation or selection bias.
  - Actively counteract these biases by seeking disconfirming evidence and questioning assumptions.
- Systematically Synthesize Information:
  - Organize your findings and analyze them methodically.
  - Use tools and frameworks to compare options based on defined criteria relevant to your project's goals.

I encourage you to embrace this scientific approach in your daily work. The next time you're facing a critical decision - be it selecting a technology stack, debugging complex code, or planning a project - apply these principles:

- Start with a Question: Clearly define what you need to find out.
- Gather and Evaluate Information: Seek out reliable sources and scrutinize them.
- Analyze Systematically: Organize your findings and look for patterns or insights.
- Make Informed Decisions: Choose the path supported by evidence and sound reasoning.

By doing so, you will enhance your problem-solving skills and contribute to a culture of thoughtful, evidence-based practice in the software engineering community. The best part is that once you start applying a critical and systematic approach to your sources of information, it becomes second nature. You'll automatically start asking questions like, "Where did this information come from?" "Is it reliable?" and "Can I reproduce the results?" Doing so will make you much less susceptible to hype, marketing, and new shiny things, ultimately making you happier and more confident.
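One simple "tool or framework to compare options" is a weighted decision matrix: score each option against your criteria, weight the criteria by importance, and sum. A minimal sketch, where all criteria, weights, and scores are invented for illustration:

```python
# Minimal weighted decision matrix -- one way to systematically synthesize
# an evaluation. All criteria, weights, and scores are invented examples.


def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Sum of score * weight over all criteria."""
    return sum(scores[c] * weights[c] for c in weights)


# Weights reflect what matters for *this* project and sum to 1.0.
weights = {"docs quality": 0.2, "maintenance": 0.3, "fit for use case": 0.5}

options = {
    "Cloud Service A": {"docs quality": 4, "maintenance": 3, "fit for use case": 5},
    "Cloud Service B": {"docs quality": 5, "maintenance": 4, "fit for use case": 3},
}

ranked = sorted(
    options, key=lambda o: weighted_score(options[o], weights), reverse=True
)
print(ranked[0])  # prints Cloud Service A
```

Note how the weights encode the research question: Service B scores higher on two of three criteria, yet Service A wins because fit for the use case was declared most important up front, before the scoring.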
In the next part of this series, we'll look at applying the scientific mindset to debugging and using hypothesis testing and experimentation principles to solve problems more effectively....


Docusign Momentum 2025 From A Developer’s Perspective

*What if your contract details stuck in PDFs could ultimately become the secret sauce of your business automation workflows?* In a world drowning in PDFs and paperwork, I never thought I’d get goosebumps about agreements – until I attended Docusign Momentum 2025. I went in expecting talks about e-signatures; I left realizing that the big push for many enterprise-level organizations will be around Intelligent Agreement Management (IAM). It is positioned to transform how we build business software, so let’s talk about it.

As Director of Technology at This Dot Labs, I had a front-row seat to all the exciting announcements at Docusign Momentum. Our team also had a booth there showing off the 6 Docusign extension apps This Dot Labs has released this year. We met 1-on-1 with a lot of companies and leaders to discuss the exciting promise of IAM. What can your company accomplish with IAM? Is it really worth it for you to start adopting IAM? Let’s dive in and find out.

After his keynote, I met up with Robert Chatwani, President of Docusign, and he said this:

> “At Docusign, we truly believe that the power of a great platform is that you won’t be able to exactly predict what can be built on top of it, and builders and developers are at the heart of driving this type of innovation. Now with AI, we have entered what I believe is a renaissance era for new ideas and business models, all powered by developers.”

Docusign’s annual conference in NYC was an eye-opener: agreements are no longer just documents to sign and shelve, but dynamic data hubs driving key processes. Here’s my take on what I learned, why it matters, and why developers should pay attention.

From E-Signatures to Intelligent Agreements – A New Era

Walking into Momentum 2025, you could feel the excitement.
Docusign’s CEO and product team set the tone in the keynote: “Agreements make the world go round, but for too long they’ve been stuck in inboxes and PDFs, creating drag on your business.” Their message was clear – Docusign is moving from a product to a platform. In other words, the company that pioneered e-signatures now aims to turn static contracts into live, integrated assets that propel your business forward.

I saw this vision click when I chatted with an attendee from a major financial services firm. His team manages millions of forms a year – loan applications, account forms, you name it. He admitted they were still “just scanning and storing PDFs” and struggled to imagine how IAM could help. We discussed how much value was trapped in those documents (what Docusign calls the “Agreement Trap” of disconnected processes). By the end of our coffee, the lightbulb was on: with the right platform, those forms could be automatically routed, data-extracted, and trigger workflows in other systems – no more black hole of PDFs. His problem wasn’t unique; many organizations have critical data buried in agreements, and they’re waking up to the idea that it doesn’t have to be this way.

What Exactly is Intelligent Agreement Management (IAM)?

So what is Docusign’s Intelligent Agreement Management? In essence, IAM is an AI-powered platform that connects every part of the agreement lifecycle. It’s not a single product, but a collection of services and tools working in concert. Docusign IAM helps transform agreement data into insights and actions, accelerate contract cycles, and boost productivity across departments. The goal is to address the inefficiencies in how agreements are created, signed, and managed – inefficiencies that cost businesses time and money. At Momentum, Docusign showcased the core components of IAM:

- Docusign Navigator: A smart repository to centrally store, search, and analyze agreements.
It uses AI to convert your signed documents (which are basically large chunks of text) into structured, queryable data. Instead of manually digging through contracts for a specific clause, you can search across all agreements in seconds. Navigator gives you a clear picture of your organization’s contractual relationships and obligations (think of it as Google for your contracts). Bonus: it comes with out-of-the-box dashboards for things like renewal dates, so you can spot risks and opportunities at a glance.
- Docusign Maestro: A no-code workflow engine to automate agreement workflows from start to finish. Maestro lets you design customizable workflows that orchestrate Docusign tasks and integrate with third-party apps – all without writing code. For example, you could have a workflow for new vendor onboarding: once a vendor contract is signed, Maestro could automatically notify your procurement team, create a task in your project tracker, and update a record in your ERP system. At the conference, they demoed how Maestro can streamline processes like employee onboarding and compliance checks through simple drag-and-drop steps, or archive PDFs of signed agreements into Google Drive or Dropbox.
- Docusign Iris (AI Engine): The brains of the operation. Iris is the new AI engine powering all of IAM’s “smarts” – from reading documents to extracting data and making recommendations. It’s behind features like automatic field extraction, AI-assisted contract review, intelligent search, and even document summarization. In the keynote, we saw examples of Iris in action: identifying key terms (e.g. payment terms or renewal clauses) across a stack of contracts, or instantly generating a summary of a lengthy agreement. These capabilities aren’t just gimmicks; as one Docusign executive put it, they’re “signals of a new way of working with agreements”.
Iris essentially gives your agreement workflow a brain – it can understand the content of agreements and help you act on it.
- Docusign App Center: A hub to connect the tools of your trade into Docusign. App Center is like an app store for integrations – it lets you plug other software (project management, CRM, HR systems, etc.) directly into your Maestro workflows. This is huge for developers (and frankly, anyone tired of building one-off integrations). Instead of treating Docusign as an isolated e-signature tool, App Center makes it a platform you can extend. I’ll dive more into this in the next section, since it’s close to my heart – my team helped build some of these integrations!

In short, IAM ties together the stages of an agreement (create → sign → store → manage) and supercharges each with automation and AI. It’s modular, too – you can adopt the pieces you need. Docusign essentially unbundled the agreement process into building blocks that developers and admins can mix and match. The future of agreements, as Docusign envisions it, is a world where organizations *“seamlessly add, subtract, and rearrange modular solutions to meet ever-changing needs”* on a single trusted platform.

The App Center and Real-World Integrations (Yes, We Built Those!)

One of the most exciting parts of Momentum 2025 for me was seeing the Docusign App Center come alive. As someone who works on integrations, I was practically grinning during the App Center demos. Docusign highlighted several partner-built apps that snap into IAM, and I’m proud to say This Dot Labs built six of them – including integrations for Monday.com, Slack, Jira, Asana, Airtable, and Mailchimp. Why are these integrations a big deal? Because developers often spend countless hours wiring up systems that need to talk to each other. With App Center, a lot of that heavy lifting is already done. You can install an app with a few clicks and configure data flows in minutes instead of coding for months.
In fact, a study found it takes the average org 12 months to develop a custom workflow via APIs, whereas with Docusign’s platform you can do it via configuration almost immediately. That’s a game-changer for time-to-value.

At our This Dot Labs booth, I spoke with many developers who were intrigued by these possibilities. For example, we showed how our Docusign Slack Extension lets teams send Slack messages and notifications when agreements are sent and signed. If a sales contract gets signed, the Slack app can automatically post a notification in your channel and even attach the signed PDF – no more emailing attachments around. People loved seeing how easily Docusign and Slack now talk to each other using this extension. Another popular one was our Monday.com app. With it, as soon as an agreement is signed, you can trigger actions in Monday – like assigning onboarding tasks for a new client or employee. Essentially, signing the document kicks off the next steps automatically.

These integrations showcase why IAM is not just about Docusign’s own features, but about an ecosystem. App Center already includes connectors for popular platforms like Salesforce, HubSpot, Workday, ServiceNow, and more. The apps we built for Monday, Slack, Jira, etc., extend that ecosystem. Each app means one less custom integration a developer has to build from scratch. And if an app you need doesn’t exist yet – well, that’s an opportunity. (Shameless plug: we’re happy to help build it!)

The key takeaway here is that Docusign is positioning itself as a foundational layer in the enterprise software stack. Your agreement workflow can now natively include things like project management updates, CRM entries, notifications, and data syncs. As a developer, I find that pretty powerful. It’s a shift from thinking of Docusign as a single SaaS tool to thinking of it as a platform that glues processes together.
Not Just Another Contract Tool – Why IAM Matters for Business

After absorbing all the Momentum keynotes and sessions, one thing is clear: IAM is not “just another contract management tool.” It’s aiming to be the platform that automates critical business processes which happen to revolve around agreements. The use cases discussed were not theoretical – they were tangible scenarios every developer or IT lead will recognize:

- Procurement Automation: We heard how companies are using IAM to streamline procurement. Imagine a purchase order process where a procurement request triggers an agreement that goes out for e-signature, and once signed, all relevant systems update instantly. One speaker described connecting Docusign with their ERP so that vendor contracts and purchase orders are generated and tracked automatically. This reduces the back-and-forth with legal and ensures nothing falls through the cracks. It’s easy to see the developer opportunity: instead of coding a complex procurement approval system from scratch, you can leverage Docusign’s workflow + integration hooks to handle it. Docusign IAM is designed to connect to systems like CRM, HR, and ERP so that agreements flow into the same stream of data. For developers, that means using pre-built connectors and APIs rather than reinventing them.

- Faster Employee Onboarding: Onboarding a new hire or client typically involves a flurry of forms and tasks – offer letters or contracts to sign, NDAs, setup of accounts, etc. We saw how IAM can accelerate onboarding by combining e-signature with automated task generation. For instance, the moment a new hire signs their offer letter, Maestro could trigger an onboarding workflow: provisioning the employee in systems, scheduling orientation, and creating tasks in tools like Asana or Monday. All those steps get kicked off by the signed agreement.
Docusign Maestro’s integration capabilities shine here – it can tie into HR systems or project management apps to carry the baton forward. The result is a smoother day-one experience for the new hire and less manual coordination for IT and HR. As developers, we can appreciate how this modular approach saves us from writing yet another “onboarding script”; we configure the workflow, and IAM handles the rest.

- Reducing Contract Auto-Renewal Risk: If your company manages a lot of recurring contracts (think vendor services, subscriptions, leases), missing a renewal deadline can be costly. One real-world story shared at Momentum was about using IAM to prevent unwanted auto-renewals. With traditional tracking (spreadsheets or calendar reminders), it’s easy to forget a termination notice and end up locked into a contract for another year. Docusign’s solution: let the AI engine (Iris) handle it. It can scan your repository, surface any renewal or termination dates, and proactively remind stakeholders – or even kick off a non-renewal workflow if desired. As the Bringing Intelligence to Obligation Management session highlighted, “Missed renewal windows lead to unwanted auto-renewals or lost revenue… A forgotten termination deadline locks a company into an unneeded service for another costly term.” With IAM, those pitfalls are avoidable. The system can automatically flag and assign tasks well before a deadline hits. For developers, this means we can deliver risk-reduction features without building a custom date-tracking system – the platform’s AI and notification framework has us covered.

These examples all connect to a bigger point: agreements are often the linchpin of larger business processes (buying something, hiring someone, renewing a service). By making agreements “intelligent,” Docusign IAM is essentially automating chunks of those processes. This translates to real outcomes – faster cycle times, fewer errors, and less risk.
From a technical perspective, it means we developers have a powerful ally: we can offload a lot of workflow logic to the IAM platform. Why code it from scratch if a combination of Docusign + a few integration apps can do it?

Why Developers Should Care about IAM (Big Time)

If you’re a software developer or architect building solutions for business teams, you might be thinking: This sounds cool, but is it relevant to me? Let me put it this way – after Momentum 2025, I’m convinced that ignoring IAM would be a mistake for anyone in enterprise software. Here’s why:

- Faster time-to-value for your clients or stakeholders: Business teams are always pressuring IT to deliver solutions faster. With IAM, you have ready-made components to accelerate projects. Need to implement a contract approval workflow? Use Maestro, not months of coding. Need to integrate Docusign with an internal system? Check App Center for an app or use their APIs with far less glue code. Docusign’s own research shows that connecting systems via App Center and Maestro can cut development time dramatically (from ~12 months of custom dev to mere weeks or less). For us developers, that means we can deliver results sooner, which definitely wins points with the business.

- Fewer custom builds (and less maintenance): Let’s face it – maintaining custom scripts or one-off integrations is not fun. Every time a SaaS API changes or a new requirement comes in, you’re back in the code. IAM’s approach offers more reuse and configuration instead of raw code. The platform is doing the hard work of staying updated (for example, when Slack or Salesforce change something in their API, Docusign’s connector app will handle it). By leveraging these pre-built connectors and templates, you write less custom code, which means fewer bugs and lower maintenance overhead. You can focus your coding effort on the unique parts of your product, not the boilerplate integration logic.
- Reusable and modular workflows: I love designing systems as Lego blocks – and IAM encourages that. You can build a workflow once and reuse it across multiple projects or clients with slight tweaks. For instance, an approval workflow for sales contracts might be 90% similar to one for procurement contracts – with IAM, you can reuse that blueprint. The fact that everything is on one platform also means these workflows can talk to each other or be combined. This modularity is a developer’s dream because it leads to cleaner architecture. Docusign explicitly touts this modular approach, noting that organizations can easily rearrange solutions on the fly to meet new needs. It’s like having a library of proven patterns to draw from.

- AI enhancements with minimal effort: Adding AI into your apps can be daunting if you have to build or train models yourself. IAM essentially gives you AI-as-a-service for agreements. Need to extract key data from 1,000 contracts? Iris can do that out-of-the-box. Want to implement risk scoring for contracts? The AI can flag unusual terms or deviations. As a developer, being able to call an API or trigger a function that returns “these are the 5 clauses to look at” is incredibly powerful – you’re injecting intelligence without needing a data science team. It means you can offer more value in your applications (and impress those end-users!) by simply tapping into IAM’s AI features.

Ultimately, Docusign IAM empowers developers to build more with less code. It’s about higher-level building blocks. This doesn’t replace our jobs – it makes our jobs more focused on the interesting problems. I’d rather spend time designing a great user experience or tackling a complex business rule than coding yet another Docusign-to-Slack integration. IAM is taking care of the plumbing and adding a layer of smarts on top.
Don’t Underestimate Agreement Intelligence – Your Call to Action

Momentum 2025 left me with a clear call to action: embrace agreement intelligence. If you’re a developer or tech leader, it’s time to explore what Docusign IAM can do for your projects. This isn’t just hype from a conference – it’s a real shift in how we can deliver solutions. Here are a few ways to get started:

- Browse the IAM App Center – Take a look at the growing list of apps in the Docusign App Center. You might find that the integration you’ve been meaning to build is already available (or one very close to it). Installing an app is trivial, and you can configure it to fit your workflow. This is the low-hanging fruit for immediately adding value to your existing Docusign processes. If you have Docusign eSignature or CLM in your stack, App Center is where you extend it.

- Think about integrations that could unlock value – Consider the systems in your organization that aren’t talking to each other. Is there a manual step where someone re-enters data from a contract into another system? Maybe an approval that’s done via email and could be automated? Those are prime candidates for an IAM solution. For example, if Legal and Sales use different tools, an integration through IAM can bridge them, ensuring no agreement data falls through the cracks. Map out your agreement process end to end and identify gaps – chances are, IAM has a feature to fill them.

- Experiment with Maestro and the API – If you’re technical, spin up a trial of Docusign IAM. Try creating a Maestro workflow for a simple use case, or use the Docusign API/SDKs to trigger some AI analysis on a document. Seeing it in action will spark ideas. I was amazed at how quickly I could set up a workflow with conditions and parallel steps – things that would take significant coding time if I did them manually. The barrier to entry for adding complex logic has gotten a lot lower.
- Stay informed and involved – Docusign’s developer community and IAM documentation are growing. Momentum may be over, but the “agreement intelligence” movement is just getting started. Keep an eye on upcoming features (they hinted at even more AI-assisted tools coming soon). Engage with the community forums or join Docusign’s IAM webinars. And if you’re building something cool with IAM, consider sharing your story – the community benefits from hearing real use cases.

My final thought: don’t underestimate the impact that agreement intelligence can have on modern workflows. We spend so much effort optimizing various parts of our business, yet often overlook the humble agreement – the contracts, forms, and documents that initiate or seal every deal. Docusign IAM is shining a spotlight on these and saying, “Here is untapped gold. Let’s mine it.” As developers, we have an opportunity (and now the tools) to lead that charge.

I’m incredibly excited about this new chapter. After seeing what Docusign has built, I’m convinced that intelligent agreements can be a foundational layer for digital transformation. It’s not just about getting documents signed faster; it’s about connecting dots and automating workflows in ways we couldn’t before. As I reflect on Momentum 2025, I’m inspired and already coding with new ideas in mind. I encourage you to do the same – check out IAM, play with the App Center, and imagine what you could build when your agreements start working intelligently for you. The future of agreements is here, and it’s time for us developers to take full advantage of it.

Ready to explore? Head to the Docusign App Center and IAM documentation to see how you can turn your agreements into engines of growth. Trust me – the next time you attend Momentum, you might just have your own success story to share. Happy building!


Next.js + MongoDB Connection Storming

Building a Next.js application connected to MongoDB can feel like a match made in heaven. MongoDB stores all of its data as JSON objects, which don’t require transformation into JavaScript objects like relational SQL data does. However, when deploying your application to a serverless production environment such as Vercel, it is crucial to manage your database connections properly. If you encounter errors like these, you may be experiencing connection storming:

* MongoServerSelectionError: connect ECONNREFUSED <IP_ADDRESS>:<PORT>
* MongoNetworkError: failed to connect to server [<hostname>:<port>] on first connect
* MongoTimeoutError: Server selection timed out after <x> ms
* MongoTopologyClosedError: Topology is closed, please connect
* Mongo Atlas: Connections % of configured limit has gone above 80

Connection storming occurs when your application has to mount a connection to MongoDB for every serverless function or API endpoint call. Vercel executes your application’s code in a highly concurrent and isolated fashion, so if you create new database connections on each request, your app might quickly exceed the connection limit of your database. We can leverage Vercel’s fluid compute model to keep our database connection objects warm across function invocations. Traditional serverless architecture was designed for quick, stateless web app transactions. Now, especially with the rise of LLM-oriented applications built with Next.js, interactions with applications are becoming more sequential. We just need to ensure that we assign our MongoDB connection to a global variable.

Protip: Use global variables

Vercel’s fluid compute model means all memory, including global constants like a MongoDB client, stays initialized between requests as long as the instance remains active. By assigning your MongoDB client to a global constant, you avoid redundant setup work and reduce the overhead of cold starts.
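The protip above can be sketched as a minimal caching pattern. This is a sketch, not the article’s original code: the `mongodb` driver is stubbed out so the caching logic stands alone. In a real app, the factory would be something like `() => new MongoClient(process.env.MONGODB_URI).connect()`.

```javascript
// Cache the connection promise on globalThis so that warm serverless
// invocations reuse one connection instead of opening a new one per request.
function getClientPromise(createClient) {
  if (!globalThis._mongoClientPromise) {
    globalThis._mongoClientPromise = createClient();
  }
  return globalThis._mongoClientPromise;
}

// Simulate two requests hitting the same warm instance with a stub factory.
let connections = 0;
const fakeConnect = () => {
  connections += 1;
  return Promise.resolve({ db: () => ({}) });
};

const first = getClientPromise(fakeConnect);
const second = getClientPromise(fakeConnect);

console.log(connections);      // 1: only one connection was created
console.log(first === second); // true: both requests share the same promise
```

In a Next.js app, a helper like this would live in a shared module (e.g. a hypothetical `lib/mongodb.js`) that every API route imports, so all routes share the one cached client.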
This enables a more efficient approach to reusing connections for your application’s MongoDB client. The example in the original post demonstrates how to retrieve an array of users from the users collection in MongoDB and either return them through an API request to /api/users or render them as an HTML list at the /users route. To support this, we initialize a global clientPromise variable that maintains the MongoDB connection across warm serverless executions, avoiding re-initialization on every request. Using this database connection in your API route code is easy, and you can use it in your server-side rendered React components as well.

In serverless environments like Vercel, managing database connections efficiently is key to avoiding connection storming. By reusing global variables and understanding the serverless execution model, you can ensure your Next.js app remains stable and performant.
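As a rough sketch of the /api/users handler described above (not the original post’s code): the client is injected as a parameter so the logic is visible without a live database, and the `app` database name is an assumption. In a real route, `clientPromise` would come from the shared global module.

```javascript
// Sketch of the users route: await the shared client promise, query the
// `users` collection, and return the results as the response body.
async function usersHandler(clientPromise) {
  const client = await clientPromise;
  const users = await client
    .db('app')               // assumed database name
    .collection('users')
    .find({})
    .toArray();
  return { status: 200, body: users };
}

// A fake client standing in for the globally cached MongoClient promise.
const fakeClient = {
  db: () => ({
    collection: () => ({
      find: () => ({
        toArray: async () => [{ name: 'Ada' }, { name: 'Lin' }],
      }),
    }),
  }),
};

usersHandler(Promise.resolve(fakeClient)).then((res) => {
  console.log(res.status, res.body.length); // 200 2
});
```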


The simplicity of deploying an MCP server on Vercel

The current Model Context Protocol (MCP) spec is shifting developers toward lightweight, stateless servers that serve as tool providers for LLM agents. These MCP servers communicate over HTTP, with OAuth handled client-side. Vercel’s infrastructure makes it easy to iterate quickly and ship agentic AI tools without overhead.

Example of Lightweight MCP Server Design

At This Dot Labs, we built an MCP server that leverages the Docusign Navigator API. The tools, like `get_agreements`, make a request to the Docusign API to fetch data and then respond in an LLM-friendly way.

Before the MCP server can request anything, it needs to guide the client on how to kick off OAuth. This involves providing some MCP-spec metadata API endpoints that include the necessary information about where to obtain authorization tokens and what resources they grant access to. By understanding these details, the client can seamlessly initiate the OAuth process, ensuring secure and efficient data access. The OAuth flow begins when the user’s LLM client makes a request without a valid auth token. In this case, it gets a 401 response from our server with a WWW-Authenticate header, and the client then leverages the metadata we exposed to discover the authorization server. Next, the OAuth flow kicks off directly with Docusign, as directed by the metadata. Once the client has the token, it passes it in the Authorization header on tool requests to the API. This minimal set of API routes enables me to fetch Docusign Navigator data using natural language in my agent chat interface.

Deployment Options

I deployed this MCP server two different ways: first as a Fastify backend, and then as Vercel functions. Seeing how simple my Fastify MCP server was, and not really having a plan for deployment yet, I was eager to rewrite it for Vercel.
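Before getting into the deployment comparison, the 401 handshake described in the OAuth flow above can be sketched roughly like this. This is a hedged illustration, not our actual server code: the handler shape and the metadata URL are placeholder assumptions.

```javascript
// When a tool request arrives without a bearer token, answer 401 with a
// WWW-Authenticate header pointing at the OAuth metadata the server exposes.
function handleToolRequest(req) {
  const auth = req.headers['authorization'] || '';
  if (!auth.startsWith('Bearer ')) {
    return {
      status: 401,
      headers: {
        // The client follows this to discover the authorization server.
        'WWW-Authenticate':
          'Bearer resource_metadata="https://example.com/.well-known/oauth-protected-resource"',
      },
    };
  }
  // Token present: pass it along and run the tool (stubbed out here).
  return { status: 200, body: { result: 'ok' } };
}

const denied = handleToolRequest({ headers: {} });
const allowed = handleToolRequest({ headers: { authorization: 'Bearer abc123' } });
console.log(denied.status, allowed.status); // 401 200
```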
The case for Vercel:

* My own familiarity with Next.js API deployment
* Fit for the architecture
* The extremely simple deployment process
* Deploy previews (the eternal Vercel customer conversion feature, IMO)

Previews of unfamiliar territory

Did you know that the MCP spec doesn’t “just work” for use as ChatGPT tooling? Neither did I, and I had to experiment to prove out requirements I was unfamiliar with. Part of moving fast for me was deploying Vercel previews right out of the CLI so I could test my API as a Connector in ChatGPT. This was a great workflow for me, and invaluable for the team in code review.

Stuff I’m Not Worried About

Vercel’s mcp-handler package made setup effortless by abstracting away some of the complexity of implementing the MCP server. It gives you a drop-in way to define tools, set up HTTP streaming, and handle OAuth. By building on Vercel’s ecosystem, I can focus entirely on shipping my product without worrying about deployment, scaling, or server management. Everything just works.

A Brief Case for MCP on Next.js

Building an API without Next.js on Vercel is straightforward. That said, I’d be happy deploying this as a Next.js app, with the frontend features serving as the documentation, or the tools being part of your website’s agentic capabilities. Overall, this lowers the barrier to building any MCP server you want for yourself, and I think that’s cool.

Conclusion

I’ll avoid quoting Vercel documentation in this post. AI tooling is a critical component of this natural-language UI, and we just want to ship. Vercel is excellent for stateless MCP servers served over HTTP.
