
Reducing Fatigue for On-Call SWEs with AI, Mentorship, & More with Dr. Sally Wahba

In this episode of the Modern Web Podcast, recorded live at All Things Open in Raleigh, NC, hosts Rob Ocel and Danny Thompson sit down with Dr. Sally Wahba, Principal Software Engineer at Splunk. Dr. Wahba shares her experience tackling on-call burnout, offering insights into reducing fatigue through better observability, automation, and thoughtful team practices. The conversation also touches on mentorship and growth in the tech industry, including practical advice for junior engineers navigating the transition from academics to professional roles and tips for companies to better support new talent.

Chapters:

00:00:13 - Introduction to Marketing This Dot
00:01:00 - Asking for Help Effectively
00:02:21 - Reducing On-Call Fatigue
00:04:42 - Observability Best Practices
00:07:07 - Balancing Alerts and On-Call Efficiency
00:09:30 - The Role of On-Call in Modern Engineering
00:11:29 - Insights from the Grace Hopper Celebration
00:13:56 - Mentorship and Team Dynamics
00:16:14 - Rapid Changes in Technology and Adaptation
00:18:39 - Automation, Observability, and Debugging Challenges
00:21:04 - Addressing the Talent Gap and Junior Engineer Growth
00:24:00 - Closing Thoughts and Where to Learn More

Follow Dr. Sally Wahba on social media:
Twitter: https://x.com/sallyky
LinkedIn: https://linkedin.com/in/sallywahba/

Sponsored by This Dot: thisdot.co

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams rescue projects that have missed their deadlines and keeping your strategic digital initiatives on course. Check out our case studies and the clients who trust us with their engineering.

You might also like


“Music and code have a lot in common,” freeCodeCamp’s Jessica Wilkins on what the tech community is doing right to onboard new software engineers

Before she was a software developer at freeCodeCamp, Jessica Wilkins was a classically trained clarinetist performing across the country. Her days were filled with rehearsals, concerts, and teaching, and she hadn't considered a tech career until the world changed in 2020.

> "When the pandemic hit, most of my gigs were canceled," she says. "I suddenly had time on my hands and an idea for a site I wanted to build."

That site, a tribute to Black musicians in classical and jazz music, turned into much more than a personal project. It opened the door to a whole new career where her creative instincts and curiosity could thrive just as much as they had in music. Now at freeCodeCamp, Jessica maintains and develops the very JavaScript curriculum that has helped her and millions of developers around the world.

We spoke with Jessica about her advice for JavaScript learners, why musicians make great developers, and how inclusive communities are helping more women thrive in tech.

Jessica's Top 3 JavaScript Skill Picks for 2025

If you ask Jessica what it takes to succeed as a JavaScript developer in 2025, she won't point you straight to the newest library or trend. Instead, she lists three skills that sound simple but take real time to build:

> "Learning how to ask questions and research when you get stuck. Learning how to read error messages. And having a strong foundation in the fundamentals."

She says those skills don't come from shortcuts or shiny tools. They come from building.

> "Start with small projects and keep building," she says. "Books like You Don't Know JS help you understand the theory, but experience comes from writing and shipping code. You learn a lot by doing."

And don't forget the people around you.

> "Meetups and conferences are amazing," she adds. "You'll pick up things faster, get feedback, and make friends who are learning alongside you."

Why So Many Musicians End Up in Tech

A musical past like Jessica's isn't unheard of in the JavaScript industry. In fact, she's noticed a surprising number of musicians making the leap into software.

> "I think it's because music and code have a lot in common," she says. "They both require creativity, pattern recognition, problem-solving… and you can really get into flow when you're deep in either one."

That crossover between artistry and logic feels like home to people who've lived in both worlds.

What the Tech Community Is Getting Right

Jessica has seen both the challenges and the wins when it comes to supporting women in tech.

> "There's still a lot of toxicity in some corners," she says. "But the communities that are doing it right—like Women Who Code, Women in Tech, and Virtual Coffee—create safe, supportive spaces to grow and share experiences."

She believes those spaces aren't just helpful; they're essential.

> "Having a network makes a huge difference, especially early in your career."

What's Next for Jessica Wilkins?

With a catalog of published articles, open-source projects under her belt, and a growing audience of devs following her journey, Jessica is just getting started. She's still writing. Still mentoring. Still building. And still proving that creativity doesn't stop at the orchestra pit—it just finds a new stage.

Follow Jessica Wilkins on X and LinkedIn to keep up with her work in tech, her musical roots, and whatever she's building next.

Sticker illustration by Jacob Ashley.


The Importance of a Scientific Mindset in Software Engineering: Part 1 (Source Evaluation & Literature Review)

Today, I will write about something very dear to me - science. But not about science as a field of study, rather as a way of thinking. It's easy nowadays to get lost in the sea of information, fall for marketing hype, or even be trolled by a hallucinating LLM. A scientific mindset can be a powerful tool for navigating the complex modern world, and the world of software engineering in particular. Not only is it a powerful tool, but I'll argue that it's a must nowadays if you want to make informed decisions, solve problems effectively, and become a better engineer.

As software engineers, we are constantly confronted with an overwhelming array of frameworks, technologies, and infrastructure choices. Sometimes, it feels like there's a new tool or platform every day, each accompanied by its own wave of hype and marketing. It's easy to feel lost in the myriad of information or even suffer from FOMO and insecurity about not jumping on the latest bandwagon.

But it's not only about the abundance of information and making technological decisions. As engineers, we often write documentation, blog posts, talks, or even books, so we need to be able to communicate our ideas clearly and effectively. Furthermore, we have to master the art of debugging code, which is essentially a scientific process where we form hypotheses, test them, and iterate until we find the root cause of a problem.

Therefore, here's my hot take: engineering is a science; hence, to deserve an engineer title, one needs to think like a scientist. So, let's _review_ (pun intended) what it means to think like a scientist in the context of software engineering.

Systematic Review

In science, systematic review is not only an essential means to understand a topic and map the current state of knowledge in the field, but it also has a well-defined methodology. You can't just google whatever supports your hypothesis and call it a day. You must define your research question, choose the databases you will search, set your inclusion and exclusion criteria, systematically search for relevant studies, evaluate their quality, and synthesize the results. Most importantly, you must be transparent about and describe your methodology in detail.

The general process of systematic review can be summarized in the following steps:

1. Define your research question(s)
2. Choose databases and other sources to search
3. Define keywords and search terms
4. Define inclusion and exclusion criteria
   a. Define practical criteria such as publication date, language, etc.
   b. Define methodological criteria such as study design, sample size, etc.
5. Search for relevant studies
6. Evaluate the quality of the studies
7. Synthesize the results

Source: Conducting Research Literature Reviews: From the Internet to Paper by Dr. Fink

I'm pretty sure you can see where I'm going with this. There are many use cases in software engineering where a process similar to systematic review can be applied. Whether you're evaluating a new technology, choosing a tech stack for a new project, or researching for a blog post or a conference talk, it's important to be systematic in your approach, transparent about your methodology, and honest about the limitations of your research. Of course, when choosing a tech stack to learn or researching for a blog post, you don't have to be as rigorous as in a scientific study. But a few of these steps will always be worth following.
Let's focus on those and see how we can apply them in the context of software engineering.

Defining Your Research Question(s)

Before you start researching, it's important to define your research questions. What are you trying to find out? What problem are you trying to solve? What are the goals of your research? These questions will help you stay on track and avoid getting sidetracked by irrelevant information.

> A practical example: If you're evaluating, say, whether to use bundler _A_ or bundler _B_ without a clear research question, you might end up focusing on marketing claims about how bundler _A_ is faster than bundler _B_ or how bundler _B_ is more popular than bundler _A_, even though such aspects may have minimal impact on your project. With a clear research question, you can focus on what really matters for your project, like how well each bundler integrates with your existing tools, how well it handles your specific use case, or how well it is maintained.

A research question is not a hypothesis - you don't have to have a clear idea of the answer. It's more about defining the scope of your research and setting clear goals. It can be as simple and general as "What are the pros and cons of using React vs. Angular for a particular project?" but also more specific and focused, like "What are the legal implications of using open-source library _X_ for purpose _Y_ in project _Z_?". You can have multiple research questions, but keeping them focused and relevant to your goals is essential. In my personal opinion, part of the scientific mindset is automatically having at least a vague research question in your head whenever you're facing a problem or a decision, and that alone can make you a more confident and effective engineer.

Choosing Databases and Other Sources to Search

In engineering, some information (especially when researching rare bugs) can be scarce, and you have to search wherever you can and take what you can get. Hence, this step is arguably much easier in science, where you can include well-established databases and publications in your search. Information in science is simply more standardized and easier to find.

There are, however, still some decisions to be made about where to search. Do you want to include community websites like StackOverflow or Reddit? Do you want to include marketing materials from the companies behind the technologies you're evaluating? These can all be valid sources of information, but they have their limitations and biases, and it's important to be aware of them.

Or do you want to ask an LLM? I haven't included LLMs in the list of valid sources of information on purpose, as they are not literature databases in the traditional sense, and I wouldn't consider them a search source for literature research - and for a very good reason: they are essentially a black box, and therefore, you cannot reliably describe a reproducible methodology for your search. That doesn't mean you shouldn't ask an LLM for inspiration or a TL;DR, but you should always verify the information you get from them and be aware of their limitations.

Defining Keywords and Search Terms

This section will be short, as most of you are familiar with the concept of keywords and search terms and how to use search engines. However, I still wanted to highlight the importance of knowing how to search effectively as a software engineer. It's not just about typing in a few keywords and hoping for the best.
It's about learning how to use advanced search operators, filter out irrelevant results, and find the information you need quickly and efficiently. If you're not familiar with advanced search operators, I highly recommend you take some time to learn them, for example, at FreeCodeCamp. Please note, however, that that article is specific to Google, and different search engines may have different operators and syntax. This is especially true for scientific databases, which often have their own search syntax and operators. So, if you're doing more formal research, familiarize yourself with the database's search syntax. The underlying principles, however, are pretty much the same everywhere; just the syntax and UI might differ.

With a solid search strategy in place, the next critical step is to assess the quality of the information we find.

Methodological Criteria and Evaluation of Sources

This is where things get interesting. In science, evaluating the quality of the studies is a crucial step in the systematic review process. You can't just take the results of a study at face value - you need to critically evaluate its design, sample size, methodology, and conclusions, and you need to be aware of the limitations of the study and the potential biases that may have influenced the results.

In science, there is a pretty straightforward yet helpful categorization of sources that my students surprisingly needed help understanding because no one had ever explained it to them. So let me lay out and explain the three categories to you now:

1. Primary sources

Primary sources represent original research. You can find them in studies, conference papers, etc. In science, this is what you generally want to cite in your own research. However, remember that not everything you find in an original research paper is a primary source. Only the parts that present the original research are primary sources. For example, the introduction can contain citations to other studies, which are not primary but secondary sources.

While primary sources can sometimes be perceived as hard to read and understand, in many cases, they can actually be easier to reach and understand, as the methods and results are usually presented in a condensed form in the abstract, and often you can just skim the introduction and discussion to get a good idea of the study.

In software engineering, primary sources can sometimes be papers, but more often, they are original documentation, case studies, or even blog posts that present original research or data. For example, if you're evaluating a new technology, the official documentation, case studies, and blog posts from its developers can be considered primary sources.

2. Secondary sources

Secondary sources are typically reviews, meta-analyses, and other sources that summarize, analyze, or reference primary sources. A good way to identify a source as secondary is to look for citations to other studies. If a claim has a citation, it's likely a secondary source. On the other hand, something is likely wrong if it doesn't have a citation and doesn't seem to present original research.

Secondary sources can be very useful for getting an overview of a topic, understanding the current state of knowledge, and finding relevant primary sources. Meta-analyses, in particular, can provide a beneficial point of view on a subject by combining the results of multiple studies and looking for patterns and trends.
The downside of secondary sources is that they can introduce information noise, as they are basically adding another layer of interpretation and analysis. So, it's always a good idea to go back to the primary sources and verify the information you get from secondary sources.

Secondary sources in software engineering include blog posts, talks, or articles that summarize, analyze, or reference primary sources. For example, if you're researching a new technology, a blog post that compares different technologies based on their documentation and/or studies made by their authors can be considered a secondary source.

3. Tertiary sources

Tertiary sources represent a further level of abstraction. They are typically textbooks, encyclopedias, and other sources that summarize, analyze, or reference secondary sources. They are useful for getting a broad overview of a topic, understanding the basic concepts, and finding relevant secondary sources. One example I see as a tertiary source is Wikipedia, and while you shouldn't ever cite Wikipedia in academic research, it can be a good starting point for getting an overview of a topic and finding relevant primary and secondary sources, as you can easily click through the references.

> Note: It's fine to reference Wikipedia in a blog post or a talk to give your audience a convenient explanation of a term or concept. I'm even doing it in this post. However, you should always verify that the article is up to date and that the information is correct.

The distinction between primary, secondary, and tertiary sources in software engineering is not as clear-cut as in science, but the general idea still applies. When researching a topic, knowing the different types of sources and their limitations is essential. Primary sources are generally the most reliable and should be your go-to when seeking evidence to support your claims. Secondary sources can help you get an overview of a topic, but they should be used cautiously, as they can introduce bias and noise. Tertiary sources are good for getting a broad overview of a topic but should not be used as evidence in academic research.

Evaluating Sources

Now that we have the categories laid out, let's talk about evaluating the quality of the sources because, realistically, not all sources are created equal. In science, we have some well-established criteria for evaluating the quality of a source. Some focus on the general credibility of the source, like the reputation of the journal or the author, while others focus on the quality of the study itself, like the study design, the sample size, and the methodology.

First, we usually look at the number of citations and the impact factor of the journal in which the study was published. These numbers can give us an idea of how well the scientific community received the study and how much other researchers have cited it. In software engineering, we don't have the concept of an impact factor when researching a concept or a technology, but we can still look at how many people are sharing a particular piece of information, how well the professional community receives it, and how reputable the person sharing it is.

Second, we look at the study design and the methodology. Does the study have a clear research question? Is the study design appropriate for the research question? Are the methods well-described and reproducible? Are the results presented clearly and honestly? Do the data support the conclusions?
Arguably, in software engineering, the honest and clear presentation of the method and results can be even more important than in science, given the amounts of money circulating in the industry and the potential for conflicts of interest. Therefore, it's important to understand where the data is coming from, how it was collected, and how it was analyzed. If a company (or their DevRel person) is presenting data that show their product is the best (fastest, most secure...), it's important to be aware of the potential biases and conflicts of interest that may have influenced the results. The ways in which results can be skewed include:

- Missing, incomplete, or inappropriate methodology. Often, the methodology is not described in enough detail to be reproducible, or the whole experiment is designed in a way that doesn't actually answer the research question properly. For example, the methodology can omit important details, such as the environment in which the experiment was conducted or even the way the data was collected (e.g., to hide selection bias).
- Selection bias can be a common issue in software engineering experiments. For example, if someone is comparing two technologies, they might choose a dataset that they expect to perform better with one of the technologies or a metric that they expect to show a difference. Selection bias can lead to skewed results that don't reflect the technologies' real-world performance.
- Publication bias is a common issue in science, where studies that show a positive result are more likely to be published than studies that show a negative outcome. In software engineering, this can manifest as a bias towards publishing success stories and case studies while ignoring failures and negative results.
- Confirmation bias is a problem in science and software engineering alike. It's the tendency to look for evidence that confirms your hypothesis and ignore evidence that contradicts it. Confirmation bias can lead to cherry-picking data, misinterpreting results, and drawing incorrect conclusions.
- Conflict of interest. While less common in academic research, conflicts of interest can be a big issue in industry research. If a company is funding a study that shows its product in a positive light, it's important to be aware of the potential biases that may have influenced the results.

Another thing we look at is the conclusions. Do the data support the conclusions? Are they reasonable and justified? Are they overstated or exaggerated? Are the limitations of the study acknowledged? Are the implications of the study discussed? It all goes back to honesty and transparency, which are crucial for evaluating the quality of a source.

Last but not least, we should look at the citations and references included in the source. In the same way we apply the systematic review process to our research, we should also apply it to the sources we use. I would argue that this is even more important in software engineering, where the information is often less standardized and you come across many unsupported claims. If a source doesn't provide citations or references to back up its claims, it's a red flag that the information may not be reliable.

This brings us to something called anecdotal evidence. Anecdotal evidence is a personal story or experience used to support a claim. While anecdotal evidence can be compelling and persuasive, it is generally considered a weak form of evidence, as it is based on personal experience rather than empirical data.
So when someone tells you that X is better than Y because they tried it and it worked for them, or that Z is true because they heard it from someone, take it with a massive grain of salt and look for more reliable sources of information. That, of course, doesn't mean you should ask for a source under every post on social media, but it's important to recognize what's a personal opinion and what's a claim based on evidence.

Synthesizing the Results

Once you have gathered all the relevant information, it's time to synthesize the results. This is where you combine all the evidence you have collected, analyze it, and draw conclusions. In science, this is often done as part of a meta-analysis, where the results of multiple studies are combined and analyzed to look for patterns and trends using statistical methods. A meta-analysis is a powerful tool for synthesizing the results of multiple studies and drawing more robust conclusions than can be drawn from any single study.

You might not be doing a formal meta-analysis in software engineering, but you can still apply the same principles to your research. Look for common themes and trends in the information you have gathered, compare and contrast different sources, and draw conclusions based on the evidence.

Conclusion

Adopting a scientific way of thinking isn't just a nice-to-have in software engineering - it's essential if you want to make informed decisions, solve problems effectively, and navigate the vast amount of information around you with confidence. Applying systematic review principles to your research allows you to gather reliable information, evaluate it critically, and draw sound conclusions based on evidence.

Let's summarize what such a systematic research approach can look like:

- Define Clear Research Questions:
  - Start every project or decision-making process by clearly stating what you aim to achieve or understand.
  - Example: "What factors should influence our choice between Cloud Service A and Cloud Service B for our application's specific needs?"
- Critically Evaluate Sources:
  - Identify the type of source (primary, secondary, tertiary) and assess its credibility.
  - Be wary of biases and seek out multiple perspectives for a well-rounded understanding.
- Be Aware of Biases:
  - Recognize common biases that can cloud judgment, such as confirmation or selection bias.
  - Actively counteract these biases by seeking disconfirming evidence and questioning assumptions.
- Systematically Synthesize Information:
  - Organize your findings and analyze them methodically.
  - Use tools and frameworks to compare options based on defined criteria relevant to your project's goals (a small sketch follows below).

I encourage you to embrace this scientific approach in your daily work. The next time you're facing a critical decision - be it selecting a technology stack, debugging complex code, or planning a project - apply these principles:

- Start with a Question: Clearly define what you need to find out.
- Gather and Evaluate Information: Seek out reliable sources and scrutinize them.
- Analyze Systematically: Organize your findings and look for patterns or insights.
- Make Informed Decisions: Choose the path supported by evidence and sound reasoning.

By doing so, you will enhance your problem-solving skills and contribute to a culture of thoughtful, evidence-based practice in the software engineering community.
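As a toy illustration of the "Systematically Synthesize Information" step, here is a minimal weighted decision matrix in TypeScript. The options, criteria, weights, and scores are entirely made-up placeholders; the point is only that writing your evaluation down forces you to make your criteria and their relative importance explicit.

```typescript
// Toy weighted decision matrix: every value below is an illustrative
// placeholder, not real benchmark data.
type Scores = Record<string, number>; // criterion -> score (1-5)

const weights: Scores = { integration: 0.4, maintenance: 0.35, docs: 0.25 };

const options: Record<string, Scores> = {
  bundlerA: { integration: 4, maintenance: 3, docs: 5 },
  bundlerB: { integration: 5, maintenance: 4, docs: 3 },
};

for (const [name, scores] of Object.entries(options)) {
  // Weighted sum over all criteria for this option.
  const total = Object.entries(weights).reduce(
    (sum, [criterion, weight]) => sum + weight * (scores[criterion] ?? 0),
    0,
  );
  console.log(`${name}: ${total.toFixed(2)}`);
}
```

Even a throwaway script like this surfaces hidden assumptions: choosing the weights is exactly where your research questions and biases become visible.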
The best part is that once you start applying a critical and systematic approach to your sources of information, it becomes second nature. You'll automatically start asking questions like "Where did this information come from?", "Is it reliable?", and "Can I reproduce the results?" Doing so will make you much less susceptible to hype, marketing, and shiny new things, ultimately making you happier and more confident.

In the next part of this series, we'll look at applying the scientific mindset to debugging, using hypothesis testing and experimentation principles to solve problems more effectively.


What Sets the Best Autonomous Coding Agents Apart?

Must-have Features of Coding Agents

Autonomous coding agents are no longer experimental; they are becoming an integral part of modern development workflows, redefining how software is built and maintained. As models become more capable, agents have become easier to produce, leading to an explosion of options with varying depth and utility. Drawing on our experience using many agents, let's delve into the features you'll absolutely want in order to get the best results.

1. Customizable System Prompts

Custom agent modes, or roles, allow engineers to tailor outputs to the desired results of their task. For instance, an agent can be set to operate in a "planning mode" focused on outlining development steps and gathering requirements, a "coding mode" optimized for generating and testing code, or a "documentation mode" emphasizing clarity and completeness of written artifacts. You might start with the off-the-shelf planning prompt, but you'll quickly want your own tailored version. Regardless of which modes are included out of the box, the ability to customize and extend them is critical. Agents must adapt to your unique workflows and prioritize what's important to your project. Without this flexibility, even well-designed defaults can fall short in real-world use.

Engineers have preferences, and projects contain existing work. The best agents offer ways to communicate these preferences and decisions effectively: for example, 'pnpm' instead of 'npm' for package management, requiring the agent to seek root causes rather than offer temporary workarounds, or mandating that tests and linting must pass before a task is marked complete. Rules are a layer of control to accomplish this. Rules reinforce technical standards but also shape agent behavior to reflect project priorities and cultural norms. They inform the agent across contexts - think constraints, preferences, or directives that apply regardless of the task. Rules can encode things like style guidelines, risk tolerances, or communication boundaries. By shaping how the agent reasons and responds, rules ensure consistent alignment with desired outcomes. Roo Code is an agent that makes great use of custom modes, and rules are ubiquitous across coding agents. Together, these features form a meta-agent framework that allows engineers to construct the most effective agent for their unique project and workflow details.
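As a purely illustrative sketch, a project rules file often looks something like the snippet below. The exact file name and format vary by agent (Cursor, Roo Code, and others each have their own conventions), so treat this as a shape, not a spec:

```
# Project rules (illustrative only; file name and format vary by agent)
- Use pnpm, never npm or yarn, for all package management commands.
- When a test fails, find and fix the root cause; do not add workarounds.
- All tests and lint checks must pass before a task is marked complete.
- Match the existing code style; do not reformat unrelated files.
- Ask before adding new runtime dependencies.
```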
2. Usage-based Pricing

The best agents provide as much relevant information as possible to the model, and they give transparency and control over what information is sent. This allows engineers to leverage their knowledge of the project to improve results. Being liberal with relevant information to the models is more expensive; however, it also significantly improves results. The pricing model of some agents prioritizes fixed, predictable costs that include model fees, which creates an incentive to minimize the amount of information sent to the model in order to control costs. To get the most out of these tools, you've got to get the most out of the models, which typically implies usage-based pricing.

3. Autonomous Workflows

The way we accomplish work has phases: for example, creating tests and then making them pass, creating diagrams or plans, or reviewing work before submitting PRs. The best agents have mechanisms to facilitate these phases in an autonomous way. For the best results, each phase should have full use of a context window without watering down the main session's context. This should leverage your custom modes, which excel at each phase of your workflow.

4. Working in the Background

The best agents are more effective at producing desired results and thus are able to be more autonomous. As agents become more autonomous, the ability to work in the background or on multiple tasks at once becomes increasingly necessary to unlock their full potential. Agents that leverage local or cloud containers to perform work independently of IDEs or working copies on an engineer's machine further increase their utility. This allows engineers to focus on drafting plans and reviewing proposed changes, ultimately working toward managing multiple tasks at once and overseeing their agent-powered workflows as if guiding a team.

5. Integrations with Your Tools

The Model Context Protocol (MCP) serves as a standardized interface that allows agents to interact with your tools and data sources. The best agents seamlessly integrate with the platforms engineers rely on, such as Confluence for documentation, Jira for tasks, and GitHub for source control and pull requests. These integrations ensure the agent can participate meaningfully across the full software development lifecycle.

6. Support for Multiple Model Providers

Reliance on a single AI provider can be limiting. Top-tier agents support multiple providers, allowing teams to choose the best models for specific tasks. This flexibility enhances performance, lets teams adopt the latest and greatest as it ships, and safeguards against potential downtimes or vendor-specific issues.

Final Thoughts

Selecting the right autonomous coding agent is a strategic decision. By prioritizing the features mentioned above, technology leaders can adopt agents that can be tuned for their team's success. Tuning agents to projects and teams takes time, as does configuring the plumbing to integrate well with other systems. However, unlocking massive productivity gains is worth the squeeze. Models will become better and better, and the best agents capitalize on these improvements with little to no added effort. Set your organization and teams up to tap into the power of AI-enhanced engineering, and be more effective and more competitive.


Introduction to Vercel’s Flags SDK

In this blog, we will dig into Vercel's Flags SDK. We'll explore how it works, highlight its key capabilities, and discuss best practices to get the most out of it. You'll also understand why you might prefer this tool over other feature flag solutions out there. And, despite its strong integration with Next.js, this SDK isn't limited to just one framework—it's fully compatible with React and SvelteKit. We'll use Next.js for examples, but feel free to follow along with the framework of your choice.

Why should I use it?

You might wonder, "Why should I care about yet another feature flag library?" Unlike some other solutions, Vercel's Flags SDK offers unique, practical features. It offers simplicity, flexibility, and smart patterns to help you manage feature flags quickly and efficiently.

It's simple

Let's start with a basic example:

`

This might look simple — and it is! — but it showcases some important features. Notice how easily we can define and call our flag without repeatedly passing context or configuration. Many other SDKs require passing the flag's name and context every single time you check a flag, like this:

`

This can become tedious and error-prone, as you might accidentally use different contexts throughout your app. With the Flags SDK, you define everything once upfront, keeping things consistent across your entire application. By "context", I mean the data needed to evaluate the flag, like user details or environment settings. We'll get into more detail shortly.

It's flexible

Vercel's Flags SDK is also flexible. You can integrate it with other popular feature flag providers like LaunchDarkly or Statsig using built-in adapters. And if the provider you want to use isn't supported yet, you can easily create your own custom adapter. While we'll use Next.js for demonstration, remember that the SDK works just as well with React or SvelteKit.

Latency solutions

Feature flags require definitions and context evaluations to determine their values — imagine checking conditions like, "Is the user ID equal to 12?" Typically, these evaluations involve fetching necessary information from a server, which can introduce latency. These evaluations happen through two primary functions: identify and decide. The identify function gathers the context needed for evaluation, and this context is then passed as an argument named entities to the decide function. Let's revisit our earlier example to see this clearly:

`

You could add a custom evaluation context when reading a feature flag, but it's not best practice and is usually not recommended.
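As a rough sketch of the declaration pattern described above, assuming the SDK's flags/next entry point in a Next.js app (the flag key, cookie name, and Entities shape here are hypothetical):

```typescript
// flags.ts - a minimal sketch, not a definitive implementation.
import { flag, dedupe } from 'flags/next';

type Entities = { user?: { id?: string } };

// identify() gathers the evaluation context ("entities"). Wrapping it in
// dedupe() makes the lookup run only once per request, even if several
// flags share it (this is the Dedupe feature discussed below).
const identify = dedupe(async ({ cookies }): Promise<Entities> => {
  return { user: { id: cookies.get('user-id')?.value } };
});

// Key, context, and decision logic are co-located and defined once.
export const exampleFlag = flag<boolean, Entities>({
  key: 'example-flag',
  identify,
  // decide() receives the entities produced by identify().
  decide({ entities }) {
    return entities?.user?.id === '12';
  },
});
```

A server component or route handler would then simply call `await exampleFlag()`, with no flag name or context passed at the call site.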
Using Edge Config

When loading our flags, these definitions and evaluation contexts normally get bootstrapped by making a network request and then opening a web socket to listen for changes on the server. The problem is that if you do this in Serverless Functions with a short lifespan, you would need to bootstrap the definitions not just once but multiple times, which could cause latency issues. To handle latency efficiently, especially in short-lived Serverless Functions, you can use Edge Config. Edge Config stores flag definitions at the Edge, allowing super-fast retrieval via Edge Middleware or Serverless Functions, significantly reducing latency.

Cookies

For more complex contexts requiring network requests, avoid making these requests directly in Edge Middleware or CDNs, as this can drastically increase latency. Edge Middleware and CDNs are fast precisely because they avoid making network requests to the origin server. Depending on the end user's location, accessing a distant origin can introduce significant latency; for example, a user in Tokyo might need to connect to a server in the US before the page can load.

Instead, a good pattern the Flags SDK offers to avoid this is cookies. You can use cookies to store context data. The browser automatically sends cookies with each request in a standard format, providing consistent (no matter whether you are in Edge Middleware, the App Router, or the Pages Router), low-latency access to evaluation context data:

`

You can also encrypt or sign cookies for additional security on the client side.

Dedupe

Dedupe helps you cache function results to prevent redundant evaluations. If multiple flags rely on a common context method, like checking a user's region, Dedupe ensures the method executes only once per runtime, regardless of how many times it's invoked. Additionally, similar to cookies, the Flags SDK standardizes headers, allowing easy access to them. Let's illustrate this with the following example:

`

Server-side patterns for static pages

You can use feature flags on the client side, but that leads to unnecessary loaders/skeletons or layout shifts, which are never great. It does bring benefits, like keeping pages statically rendered. To maintain those static rendering benefits while evaluating flags on the server, the SDK provides a method called precompute.

Precompute

Precompute lets you decide which page version to display based on feature flags, and that version can then be cached and served statically. You can precompute flag combinations in Middleware or Route Handlers:

`

Next, inside a middleware (or route handler), we precompute these flags and create static pages for each combination of them:

`

The user will never notice this because we use a rewrite, so they only ever see the original URL. Now, on our page, we "invoke" our flags, passing the code from the params:

`

By sending the code, we are not really invoking the flag again but getting its value right away. Our middleware decides which variation of our pages to display to the user. Finally, after rendering our page, we can enable Incremental Static Regeneration (ISR). ISR allows us to cache the page and serve it statically for subsequent user requests:

`

Using precompute is particularly beneficial when enabling ISR for pages that depend on flags whose values cannot be determined at build time. Values like headers or geolocation can't be known at build time, so we use precompute() and let the Edge evaluate them on the fly. In these cases, we rely on Middleware to dynamically determine the flag values, generate the HTML content once, and then cache it. At build time, we simply create an initial HTML shell.

Generate Permutations

If we prefer to generate static pages at build time instead of runtime, we can use the generatePermutations function from the Flags SDK. This method enables us to pre-generate static pages with different combinations of flags at build time. It's especially useful when the flag values are known beforehand, for example, in scenarios involving A/B testing or a marketing site with a single on/off banner flag:

`

`
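Tying the precompute pieces together, here is a rough sketch of the middleware-plus-page flow described above. It assumes two hypothetical flags (bannerFlag, ctaFlag) defined with flag() as earlier and grouped as `export const marketingFlags = [bannerFlag, ctaFlag] as const;` in flags.ts, plus Next.js 15-style async params; consult the SDK docs for the exact signatures. First, the middleware evaluates the group and rewrites to a per-combination route:

```typescript
// middleware.ts - a sketch of precomputing a flag group.
import { type NextRequest, NextResponse } from 'next/server';
import { precompute } from 'flags/next';
import { marketingFlags } from './flags';

export const config = { matcher: ['/'] };

export async function middleware(request: NextRequest) {
  // Evaluate all flags once and encode the combination into a short code.
  const code = await precompute(marketingFlags);
  // A rewrite, not a redirect: the user only ever sees the original URL.
  return NextResponse.rewrite(new URL(`/${code}`, request.url));
}
```

A dynamic segment then reads the precomputed values via the code param, and can pre-build every combination at build time:

```typescript
// app/[code]/page.tsx - a sketch; flag names are hypothetical.
import { generatePermutations } from 'flags/next';
import { bannerFlag, marketingFlags } from '../../flags';

export async function generateStaticParams() {
  // One static page per flag combination, generated at build time.
  const codes = await generatePermutations(marketingFlags);
  return codes.map((code) => ({ code }));
}

export default async function Page(props: {
  params: Promise<{ code: string }>;
}) {
  const { code } = await props.params;
  // Passing the code and the flag group returns the precomputed value
  // instead of re-evaluating the flag.
  const showBanner = await bannerFlag(code, marketingFlags);
  return <main>{showBanner ? 'New banner' : 'Old banner'}</main>;
}
```

From here, enabling ISR caches each rendered variant and serves it statically on subsequent requests.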
Conclusion

Vercel's Flags SDK stands out as a powerful yet straightforward solution for managing feature flags efficiently. With its ease of use, remarkable flexibility, and effective patterns for reducing latency, this SDK streamlines the development process and enhances your app's performance. Whether you're building a Next.js, React, or SvelteKit application, the Flags SDK provides intuitive tools that keep your application consistent, responsive, and maintainable. Give it a try, and see firsthand how it can simplify your feature management workflow!

Let's innovate together!

We're ready to be your trusted technical partners in your digital innovation journey.

Whether it's modernization or custom software solutions, our team of experts can guide you through best practices and help you build scalable, performant software that lasts.

Prefer email? hi@thisdot.co