
STATE OF GRAPHQL

This article was written over 18 months ago and may contain information that is out of date. Some content may be relevant but please refer to the relevant official documentation or available resources for the latest information.

Sashko Stubailo, a core team member of the Apollo/GraphQL project, gave a State of GraphQL update at This.JavaScript. In this update, Sashko talks about the resources available in the GraphQL ecosystem and shares what you need to know to get started.

# History and Introduction to GraphQL

GraphQL was introduced in 2015 and is maintained by Facebook and other open source community members. It is a query language for APIs that helps define the data in your application. Its features allow for self-documentation and the elimination of both over-fetching and request waterfalls. GraphQL also provides an extensive community of clients, servers, and developer tools.

GraphQL is neither a library nor a product, but rather a specification and a community — one that provides a schema for your entire set of data and a query language for retrieving it. One of the benefits of GraphQL is that it is incrementally adoptable in virtually any architecture.
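To make that concrete, here is a minimal sketch (my own illustration, not from the talk) of a query that asks for exactly the fields a view needs; the `user`, `name`, and `avatarUrl` fields are assumptions invented for the example.

```typescript
// A GraphQL query names only the fields the view actually needs,
// which is how over-fetching is avoided.
const USER_CARD_QUERY = /* GraphQL */ `
  query UserCard($id: ID!) {
    user(id: $id) {
      name
      avatarUrl
    }
  }
`;

// The response mirrors the shape of the query: no more, no less.
type UserCardResult = {
  user: {
    name: string;
    avatarUrl: string;
  } | null;
};
```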

# Using GraphQL Subscriptions vs Live Query

GraphQL was originally released with queries for fetching data and mutations for updating that data. Subscriptions, a newer addition, allow you to retrieve real-time streams of data.

The way subscriptions work is that a server-side pub/sub system (or something similar) is used to push specific messages to clients. Subscriptions are different from live queries: a live query behaves like an optimized form of polling on a query result, whereas subscriptions follow a more discrete, event-based approach that is easier to manage.
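As a rough illustration (mine, not from the talk), a client registers interest once with a subscription operation, and the server then pushes a payload over the open connection each time a matching event is published through the pub/sub layer; the `commentAdded` field and its shape are invented for this example.

```typescript
// The client sends a subscription document once...
const COMMENT_ADDED_SUBSCRIPTION = /* GraphQL */ `
  subscription OnCommentAdded($postId: ID!) {
    commentAdded(postId: $postId) {
      id
      body
      author
    }
  }
`;

// ...and then receives a stream of payloads, one per published event,
// rather than repeatedly re-running the query the way a live query
// (optimized polling) would.
type CommentAddedPayload = {
  commentAdded: {
    id: string;
    body: string;
    author: string;
  };
};
```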

# Open Source Tooling

Servers are one of the most important parts of the GraphQL architecture. There are currently a number of them in the works as open source projects. If you are looking for different server libraries, they are available in every popular server-side technology and framework and can be found at graphql.org/code.
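The talk doesn't prescribe a particular library, but as a minimal sketch of what any of these servers boils down to, here is the JavaScript reference implementation (the `graphql` npm package) executing a query against a tiny schema; the `hello` field is invented for the example.

```typescript
import { buildSchema, graphql } from "graphql";

// A tiny schema plus a matching resolver; purely illustrative.
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

const rootValue = {
  hello: () => "Hello from GraphQL",
};

// Execute a query against the schema. Any HTTP layer (Express, Koa, etc.)
// can sit in front of this call in a real server.
async function main() {
  const result = await graphql({ schema, source: "{ hello }", rootValue });
  console.log(JSON.stringify(result)); // {"data":{"hello":"Hello from GraphQL"}}
}

main();
```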

One of the most recent GraphQL implementations was made by Walmart Labs. It is a server called Lacinia and serves both samsclub.com and Walmart Grocery.

# Clients

By using basic HTTP on a per-request basis, individuals can fetch all kinds of queries with varying shapes and results. However, greater benefits can be obtained if more sophisticated clients are used to help with caching. Presently, there are two major GraphQL client libraries (for JavaScript) with 1.0 releases. The first is Apollo, which was released on March 30 and works with all JavaScript technologies, and the second is Relay, a more opinionated architecture announced by Facebook.
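As an illustration of what such a client looks like in practice, here is a minimal sketch using today's `@apollo/client` API (which post-dates the talk); the endpoint URL and the query are placeholders.

```typescript
import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

// The client normalizes and caches results, so repeated or overlapping
// queries can often be answered without another network round trip.
const client = new ApolloClient({
  uri: "https://example.com/graphql", // placeholder endpoint
  cache: new InMemoryCache(),
});

const USER_QUERY = gql`
  query User($id: ID!) {
    user(id: $id) {
      id
      name
    }
  }
`;

async function loadUser(id: string) {
  const { data } = await client.query({ query: USER_QUERY, variables: { id } });
  return data.user;
}
```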

On a side note, with native mobile clients, it is important to make sure that the API technology can be accessed from those clients. Libraries are therefore available that let you fetch GraphQL queries easily from Swift on iOS and from Java on Android. These libraries are also capable of code generation, which brings numerous benefits.
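To give a sense of what code generation buys you (a sketch of my own, not tied to any specific tool), a generator reads the schema and your operations and emits types, so results are checked at compile time; the operation and type names here are invented.

```typescript
// A hand-written operation, validated against the schema at build time.
const VIEWER_QUERY = /* GraphQL */ `
  query Viewer {
    viewer {
      id
      email
    }
  }
`;

// Roughly what a code generator would emit for that operation, giving
// application code compile-time checking instead of untyped results.
export interface ViewerQueryResult {
  viewer: {
    id: string;
    email: string;
  } | null;
}
```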

Developers are currently working to bring similar capabilities to JavaScript clients. They are collaborating with the New York Times, Shopify, and Airbnb to find ways of sharing data between separate parts of an app, since applications are often built from several different pieces of code.

# Developer Tools

Since there is a strongly typed API with a query language to go along with it, a whole new space of developer tools becomes accessible. With these tools, APIs and data fetching can be explored in more depth.

The autocompleting query editor called GraphiQL was one of the first dev tools to be made, and it is the tool that most people use to interact with their API. Other tools have also been created, such as persisted queries and Chrome dev tools, as well as a few others responsible for query validation and for extracting those queries from code.
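To illustrate one of those ideas, persisted queries, here is a hypothetical sketch: a build step extracts each operation from the code and replaces it with a stable id, and the server only executes documents it already knows. The ids and the lookup function are invented for the example.

```typescript
// Operations extracted from the client code at build time, keyed by id.
const persistedQueries: Record<string, string> = {
  a1b2c3: "query Todos { todos { id title } }",
};

// At runtime the client sends only the id; the server refuses anything
// it hasn't seen before, which also shrinks request payloads.
function resolveOperation(id: string): string {
  const source = persistedQueries[id];
  if (!source) {
    throw new Error(`Unknown persisted query: ${id}`);
  }
  return source;
}
```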

# GraphQL Language Service

Another new project developed by Facebook is the GraphQL language service, which gives an experience similar to the GraphiQL tool. It brings the capabilities of that standalone web IDE into one's normal code editor. This enables individuals to open an editor such as Visual Studio Code and type GraphQL queries right in their code. Validation and autocompletion then take place, so the API docs don't have to be referred to.
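As a small sketch of what that looks like in practice (the query itself is invented), the editor integration recognizes GraphQL embedded in tagged template literals and offers schema-aware autocompletion and validation as you type:

```typescript
import gql from "graphql-tag";

// Inside the tagged template, the language service integration provides
// autocompletion against the schema and flags invalid fields inline.
export const TODOS_QUERY = gql`
  query Todos {
    todos {
      id
      title
      completed
    }
  }
`;
```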

To find out more about GraphQL, follow the project on Twitter at @GraphQL.

By Trinh Kien

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines or helping keep your strategic digital initiatives on course. Check out our case studies and our clients that trust us with their engineering.

