
RxJS — OSS “Behind Closed Doors”


Behind every successful open source project is a group of motivated individuals who come together and combine ideas to produce something great.

RxJS Contributor Days brought together this group of people this year. Co-hosted with the RxJS Core Team, it was an event where OSS developers came together to discuss the underlying components of the library. Led by Ben Lesh (@benlesh), project lead for RxJS 5+ and software engineer at Google; Jay Phelps (@_jayphelps), core contributor to RxJS and software engineer at Netflix; Paul Taylor (@trxcllnt), core contributor to RxJS; and Tracy Lee (@ladyleet), the event was a success.

One of the goals of Contributor Days was to strengthen relationships within the existing RxJS community. Large consumers of RxJS, contributors, and a few external developers were invited, and a number of topics were brought up for discussion. Subjects such as operator philosophy, versioning, standardization, and size and modularity vs. ergonomics were presented and met with a great exchange of ideas.

In addition to maximizing the strengths and improving the weaker elements of the library, RxJS Contributor Days was also hosted to create greater opportunities for new contributors to explore. The entire day and all conversations were recorded to share a “behind the scenes” look at how decisions are made and why certain features are implemented, in order to improve visibility into the RxJS community.

As you watch these videos, don’t let your insight or desire to get involved go to waste. Reach out and ask how you can help, or gather ideas and action items from the videos below.

RECORDED DISCUSSIONS

  1. RxJS Contributor Days — A Summary : https://www.youtube.com/watch?v=VBXMso56Ovw&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM
  2. RxJS 5+ and Beyond Talk by Ben Lesh at Contributor Days : https://www.youtube.com/watch?v=KbMLN7oYO7E&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM&index=2
  3. RxJS 5 Operator Architecture Talk by Paul Taylor at Contributor Days : https://www.youtube.com/watch?v=FwsITFVks38&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM&index=3
  4. RxJS Contributor Days Discussion — 1 Operator Philosophy and Versioning : https://www.youtube.com/watch?v=1JKpmJf_BGU&index=4&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM
  5. RxJS Contributor Days Discussion — 2 Size & Modularity vs. Ergonomics vs. Performance : https://www.youtube.com/watch?v=2elgbSierX0&index=5&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM
  6. RxJS Contributor Days Discussion — 3 Breaking Changes & Communication : https://www.youtube.com/watch?v=srPQ3xnDt4M&index=6&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM
  7. RxJS Contributor Days Discussion — 4 Zone.js Integration : https://www.youtube.com/watch?v=nfmS9LtqXnI&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM&index=7
  8. RxJS Contributor Days Discussion — 5 Helping Newbies Understand the value of Rx : https://www.youtube.com/watch?v=22o4-kXt3es&index=8&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM
  9. RxJS Contributor Days Discussion — 6 Debugger : https://www.youtube.com/watch?v=KEpfpheklSM&index=9&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM
  10. RxJS Contributor Days Discussion — 7 Standardization : https://www.youtube.com/watch?v=qXs2YJKCEsA&index=10&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM
  11. RxJS Contributor Days Discussion — 8 Reducing Beginner Pain : https://www.youtube.com/watch?v=a66bjmP1tR4&list=PL-G5r6j4GptEDVrYv140S9PRJZfeFmZoM&index=11

CONCLUSION

Contributor Days seeks to develop relationships and unity between developers and users alike, and RxJS Contributor Days is just one example. Creating great open source software is a community effort; these libraries simply could not exist without collaboration and input.

You can find out about upcoming Contributor Days at http://contributordays.com.

You might also like

How to Contribute to RxJS

This Dot Media is kicking off a brand new series of videos in 2022 to help developers learn how they can contribute code to some of the world’s most popular JavaScript frameworks and technologies. Subscribe to This Dot Media’s YouTube Channel. To continue our series highlighting best practices for developers interested in contributing to their favorite technologies, I met up with my good friend and creator of RxJS, Ben Lesh. If you would like to check out that interview, you can see it here.

What is RxJS?

RxJS is a library for reactive programming that uses Observables to make it easier to compose asynchronous or callback-based code. It is a rewrite of Reactive-Extensions/RxJS with better performance, better modularity, and better debuggable call stacks, all while staying mostly backwards compatible, with some breaking changes that reduce the API surface. (A short illustrative sketch follows at the end of this excerpt.)

Ben Lesh

Ben Lesh is an international speaker and RxJS Core Team member. His involvement with the framework started a few years ago, when he was asked to work on a rewrite due to his open-source experience on other projects at Netflix.

What to Know Before You Get Started

According to Ben, the codeofconduct.md and contributing.md documents are the best places for new developers to start. It is crucial that new contributors abide by community standards and use the correct commit format, so be sure to read through!

Quick Tips!
- Everything in RxJS is written under src/internal, where you can look at all the internals of RxJS.
- The commit message generates the changelog, which can be found in changelog.md.

The Best Way to Get Started

When I asked Ben what areas of the repository were best for new contributors, and what areas needed the most attention, he of course said: documentation. But it’s true! According to Ben, devs can simply click the “Improve Documentation” button and make a direct commit. Presently, the team could significantly benefit from contributors interested in checking links to ensure that they are not broken, fixing grammatical errors, ensuring all information is up to date, and adding edit buttons to individual documentation pages. However, if developers are looking for a different challenge, they can see open requests for contribution by checking out the “help wanted” tag in the issues section.

Quick Tips!
- Don’t simply submit a bunch of PRs to get on the contributors list! Focus on contributing helpful, meaningful work that will advance RxJS!
- Because issues are not checked every day, make sure that there is not already a PR for an issue, and that it hasn’t already been assigned to someone. Also, look through both open and closed PRs to ensure the issue is not already being addressed.

What About Attending Core Team Meetings?

Core Team meetings typically address in-depth issues and questions related to RxJS, and are not open to the public. However, contributors can receive invitations based on their PRs and need, so if developers are interested in attending Core Team meetings, the best thing to do is create a presence in the RxJS community! Though meetings aren’t open or recorded, community members can check the “agenda item” tag on issues to see which topics will be addressed at upcoming meetings. Usually, the discussions are commented on in the issues.

Quick Tips!
- If you are interested in getting involved but don’t want to contribute code, go out into the world, review external articles or Stack Overflow answers, and help in the community!
- Blog posts and explanations about RxJS are also valued by the community and Core Team!

Ready to Begin?

You can find the RxJS repository here!...
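To make the “composing asynchronous code with Observables” description above a little more concrete, here is a minimal sketch. It is not from the interview or the RxJS docs; it assumes RxJS 7.2+ installed from npm, and the values and timings are arbitrary.

```ts
import { interval, map, take } from 'rxjs';

// Emit 0, 1, 2 once per second, transform each value, then complete.
interval(1000)
  .pipe(
    take(3),
    map((n) => `tick ${n}`)
  )
  .subscribe({
    next: (value) => console.log(value),
    complete: () => console.log('done'),
  });
```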

Using Custom Async Validators in Angular Reactive Forms

What is a validator in Angular?

First, let's talk about validators: what a validator is, and how we can use one to improve our forms. Occasionally, we want to validate some input in our fields before submission against an asynchronous source. For instance, you may want to check if a label or some data exists before submission. In Angular, you can do this using async validators.

Create a basic application

We are going to create a very minimalist form that does one thing: check if the username already exists. In this case, we are going to use a mix of async and sync validators to accomplish our task. It's very common for registration forms to require a unique username, so, using an API call, we want to check whether that username is already in use according to the database.

We created a basic form that has a FormGroup called registrationForm with two FormControls. We also added two sync validators, Validators.required and Validators.minLength, to make the input required and to check that it has a minimum length. I'm using @angular/material just to simplify our styles, and because it provides a clean way to display errors using the mat-error component. This component will be shown to the user if the required error is present. The button also checks whether the FormGroup is valid; otherwise it will be disabled.

Async validators

Our current app works pretty well and, in most cases, will meet all of the requirements. But what if we want to make sure that the username is unique before allowing the user to submit their information? Creating an async validator is simple. Let's see how to create one and add it to our current form (a minimal sketch of the result is included at the end of this excerpt).

Our method to check if the username already exists is called checkIfUsernameExists. It returns an observable with a 5 second delay to simulate a very slow API call. Now, we can create our async validator to check the username against that method.

Our UsernameValidator class takes our UserService as an argument. Its method returns an AsyncValidatorFn, which receives the FormControl it is placed on, providing us access to the current value. An AsyncValidatorFn must return either a Promise or an Observable of type ValidationErrors. We use the RxJS map operator to check the value emitted, and either return null if the user doesn't exist, or return a ValidationErrors object with an error type of usernameAlreadyExists.

We import our UsernameValidator and UserService into our component, and declare them in the component constructor. The UsernameValidator is added to our FormControl as the third parameter, calling the createValidator method and passing a reference to the UserService.

We update our template to check for an additional error called usernameAlreadyExists, so if our UserService finds that the username already exists, an error is shown to the user based on the control's validation status.

> The validation status of the control. There are four possible validation status values:
>
> VALID: This control has passed all validation checks.
>
> INVALID: This control has failed at least one validation check.
>
> PENDING: This control is in the midst of conducting a validation check.
>
> DISABLED: This control is exempt from validation checks.

Let's see it in action on StackBlitz.

Conclusion

Creating beautiful forms for our web app might seem simple, but it's typically more complicated than one would expect. Sometimes, we also need to verify the information, because we don't want to blindly send data to the backend only to have it rejected. This can lead to a poor experience for users and result in low adoption rates. Creating an AsyncValidator can help us improve the user experience, and also avoid sending data to the backend that would only be rejected....
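Since the original post's code blocks were lost in this excerpt, here is a minimal sketch of the approach described above: a UserService lookup wrapped in an AsyncValidatorFn that maps the result to either null or a usernameAlreadyExists error. The service implementation, field names, and minimum lengths are illustrative assumptions, not the article's exact code.

```ts
import { Component, Injectable } from '@angular/core';
import {
  AbstractControl,
  AsyncValidatorFn,
  FormControl,
  FormGroup,
  ValidationErrors,
  Validators,
} from '@angular/forms';
import { Observable, of } from 'rxjs';
import { delay, map } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class UserService {
  // Simulated slow API call: reports whether the username is already taken.
  checkIfUsernameExists(username: string): Observable<boolean> {
    return of(username === 'taken-username').pipe(delay(5000));
  }
}

export class UsernameValidator {
  // Builds an AsyncValidatorFn that maps the service response to either
  // null (valid) or a { usernameAlreadyExists: true } validation error.
  static createValidator(userService: UserService): AsyncValidatorFn {
    return (control: AbstractControl): Observable<ValidationErrors | null> =>
      userService
        .checkIfUsernameExists(control.value)
        .pipe(map((exists) => (exists ? { usernameAlreadyExists: true } : null)));
  }
}

@Component({
  selector: 'app-registration',
  template: '<!-- form markup with mat-error omitted for brevity -->',
})
export class RegistrationComponent {
  registrationForm: FormGroup;

  constructor(private userService: UserService) {
    this.registrationForm = new FormGroup({
      // Sync validators go in the second argument, async validators in the third.
      username: new FormControl('', [Validators.required, Validators.minLength(4)], [
        UsernameValidator.createValidator(this.userService),
      ]),
      password: new FormControl('', [Validators.required, Validators.minLength(8)]),
    });
  }
}
```

In the template, a mat-error bound to registrationForm.get('username')?.hasError('usernameAlreadyExists') would surface the error once the control's status moves from PENDING to INVALID.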

The simplicity of deploying an MCP server on Vercel

The current Model Context Protocol (MCP) spec is shifting developers toward lightweight, stateless servers that serve as tool providers for LLM agents. These MCP servers communicate over HTTP, with OAuth handled client-side. Vercel’s infrastructure makes it easy to iterate quickly and ship agentic AI tools without overhead.

Example of Lightweight MCP Server Design

At This Dot Labs, we built an MCP server that leverages the DocuSign Navigator API. The tools, like `get_agreements`, make a request to the DocuSign API to fetch data and then respond in an LLM-friendly way.

Before the MCP server can request anything, it needs to guide the client on how to kick off OAuth. This involves exposing the MCP spec’s metadata API endpoints, which tell the client where to obtain authorization tokens and what resources it can access. With these details, the client can seamlessly initiate the OAuth process, ensuring secure and efficient data access. The OAuth flow begins when the user’s LLM client makes a request without a valid auth token. In that case, it gets a 401 response from our server with a WWW-Authenticate header, and the client then uses the metadata we exposed to discover the authorization server. Next, the OAuth flow kicks off directly with DocuSign, as directed by the metadata. Once the client has the token, it passes it in the Authorization header on tool requests to the API (a rough sketch of this handshake appears at the end of this excerpt). This minimal set of API routes lets me fetch DocuSign Navigator data using natural language in my agent chat interface.

Deployment Options

I deployed this MCP server two different ways: first as a Fastify backend, and then as Vercel functions. Seeing how simple my Fastify MCP server was, and not really having a plan for deployment yet, I was eager to rewrite it for Vercel. The case for Vercel:
* My own familiarity with Next.js API deployment
* Fit for the architecture
* The extremely simple deployment process
* Deploy previews (the eternal Vercel customer conversion feature, IMO)

Previews of unfamiliar territory

Did you know that the MCP spec doesn’t “just work” for use as ChatGPT tooling? Neither did I, and I had to experiment to prove out requirements I was unfamiliar with. Part of moving fast, for me, was deploying Vercel previews right out of the CLI so I could test my API as a Connector in ChatGPT. This was a great workflow for me, and invaluable for the team in code review.

Stuff I’m Not Worried About

Vercel’s mcp-handler package made setup effortless by abstracting away some of the complexity of implementing the MCP server. It gives you a drop-in way to define tools, set up HTTP streaming, and handle OAuth. By building on Vercel’s ecosystem, I can focus entirely on shipping my product without worrying about deployment, scaling, or server management. Everything just works.

A Brief Case for MCP on Next.js

Building an API without Next.js on Vercel is straightforward. That said, I’d be happy deploying this as a Next.js app, with the frontend features serving as the documentation, or with the tools being part of your website’s agentic capabilities. Overall, this lowers the barrier to building any MCP server you want for yourself, and I think that’s cool.

Conclusion

I’ll avoid quoting Vercel documentation in this post. AI tooling is a critical component of this natural language UI, and we just want to ship. Vercel is excellent for stateless MCP servers served over HTTP....
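The route snippets from the original post were stripped from this excerpt, so here is a rough, hypothetical sketch of the 401 handshake described above, written as a Next.js-style route handler. The deployment URL, metadata path, upstream endpoint, and response shape are assumptions for illustration; this is not the actual This Dot Labs code, it only covers the auth handshake (not full MCP message handling), and real servers should follow the MCP authorization spec and DocuSign's OAuth documentation.

```ts
// app/api/mcp/route.ts (hypothetical path)
// Unauthenticated requests get a 401 whose WWW-Authenticate header points the
// MCP client at our protected-resource metadata, which in turn names the
// authorization server (DocuSign). Authenticated requests carry a Bearer token.
export async function POST(req: Request) {
  const auth = req.headers.get('authorization');

  if (!auth?.startsWith('Bearer ')) {
    return new Response('Unauthorized', {
      status: 401,
      headers: {
        'WWW-Authenticate':
          'Bearer resource_metadata="https://example-mcp.vercel.app/.well-known/oauth-protected-resource"',
      },
    });
  }

  // With a valid token, a tool like get_agreements can call the upstream API
  // and return data in an LLM-friendly shape. (Endpoint URL is illustrative.)
  const token = auth.slice('Bearer '.length);
  const upstream = await fetch('https://api.example-navigator.com/agreements', {
    headers: { Authorization: `Bearer ${token}` },
  });
  const agreements = await upstream.json();

  return Response.json({ agreements });
}
```

In the real project, the article notes that Vercel's mcp-handler package takes care of the MCP message handling, streaming, and tool registration, so a handler along these lines would only need to supply the tool logic itself.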

Quo v[AI]dis, Tech Stack?

Since we've started extensively leveraging AI at This Dot to enhance development workflows and experimenting with different ways to make it as helpful as possible, there's been a creeping thought on my mind: is AI just helping us write code faster, or is it silently reshaping what code we choose to write? Eventually, this thought led to an interesting conversation on our company's Slack about the impact of AI on our tech stack choices. Some of the views shared there included:

- "The battle between static and dynamic types is over. TypeScript won."
- "The fast-paced development of new frameworks and the excitement around new shiny technologies is slowing down. AI can make existing things work with a workaround in a few minutes, so why create or adopt something new?"
- "AI models are more trained on the most popular stacks, so they will naturally favor those, leading to a self-reinforcing loop."
- "A lot of AI coding assistants serve as marketing funnels for specific stacks, such as v0 being tailored to Next.js and Vercel or Lovable using Supabase and Clerk."

All of these points are valid and interesting, but they also made me think about the bigger picture. So I decided to do some extensive research (read "I decided to make the OpenAI Deep Research tool do it for me") and summarize my findings in this article. So without further ado, here are some structured thoughts on how AI is reshaping our tech stack choices, and what it means for the future of software development.

1. LLMs as the New Developer Platform

If software development is a journey, LLMs have become the new high-speed train line. Long gone are the days when we used Copilot as a fancy autocomplete tool. Don't get me wrong, it was mind-bogglingly good when it first came out, and I've used it extensively. But now, a few years later, LLMs have evolved into something much more powerful. With the rise of tools like Cursor, Windsurf, Roo Code, or Claude Code, LLMs are essentially becoming the new developer platform. They are no longer just a helper that autocompletes a switch statement or a function signature, but a full-fledged platform that can generate entire applications, write tests, and even refactor code.

And it is not just a few evangelists or early adopters who are using these tools. They have become mainstream, with millions of developers relying on them daily. According to Deloitte, nearly 20% of devs in tech firms were already using generative AI coding tools by 2024, and 76% of respondents to the 2024 StackOverflow Developer Survey were using or planning to use AI tools in their development process. They've become an integral part of the development workflow, mediating how code is written, reviewed, and learned. I've argued in the past that LLMs are becoming a new layer of abstraction in software development, but now I believe they are evolving into something even more powerful - a new developer platform that is shaping how we think about and approach software development.

2. The Reinforcement Loop: Popular Stacks Get Smarter

As we travel this AI-guided road, we find that certain routes become highways, while others lead to narrow paths or even dead ends. AI tools are not just helping us write code faster; they are also shaping our preferences for certain tech stacks. The most popular frameworks and languages, such as React.js on the frontend and Node.js on the backend (both with 40% adoption), are the ones that AI tools perform best with. Their increasing popularity is not just a coincidence; it's the result of a self-reinforcing loop. AI models are trained on vast amounts of code, and the most popular stacks naturally have more data available for training, given their widespread use, leading to more questions, answers, and examples in the training data. This means that AI tools are inherently better at understanding and generating code for these stacks. As an anecdotal example, I've noticed that AI tools tend to suggest React.js even when I specify a preference for another framework. As someone working with multiple tech stacks, I can attest that AI tools are significantly more effective with React.js or Node.js than, say, Yii2 or CakePHP.

This phenomenon is not limited to just one or two stacks; it applies to the entire ecosystem. The more a stack is used, the more data there is for AI to learn from, and the better it gets at generating code for that stack, resulting in a feedback loop:

1. AI performs better on popular stacks.
2. Popular stacks get more adoption as developers find them easier to work with.
3. More developers using those stacks means more data for AI to learn from.
4. The cycle continues, reinforcing the popularity of those stacks.

The issue is maybe even more evident with CSS frameworks. TailwindCSS, for example, has gained immense popularity thanks to its utility-first approach, which aligns well with AI's ability to generate and manipulate styles. As more developers adopt TailwindCSS, AI tools become better at understanding its conventions and generating appropriate styles, further driving its adoption. However, the Tailwind CSS example also highlights a potential pitfall of this reinforcement loop. Tailwind CSS v4 was released in January 2025. From my experience, AI tools still attempt to generate code using v3 concepts and often need to be reminded to use Tailwind CSS v4, requiring spoon-feeding with documentation to get it right. Effectively, this phenomenon can lead to a situation where AI tools not only reinforce the popularity of certain stacks but also potentially slow down the adoption of newer versions or alternatives.

3. Frontend Acceleration: React, Angular, and Beyond

Navigating the frontend landscape has always been tricky, but with AI, some paths feel like smooth expressways while others remain bumpy dirt roads. AI is particularly transformative in frontend development, where the complexity and boilerplate code can be overwhelming. Established frameworks like React and Angular, which are already popular, are seeing even more adoption due to AI's ability to generate components, tests, and optimizations. React's widespread adoption and its status as the most popular framework on the frontend make it the go-to choice for many developers, especially with AI tools that can quickly scaffold new components or entire applications. However, Angular's strict structure and type safety also make it a strong contender. Angular's opinionated nature can actually benefit AI-generated code, as it provides a clear framework for the AI to follow, reducing ambiguity and potential bugs.

> Call me crazy but I think that long term Angular is going to work better with AI tools for frontend work.
>
> More strict rules to follow, easier to build and scale. Just like for humans.
>
> We just need to keep Angular opinionated enough.
>
> — Daniel Glejzner on X

But it's not just about how the frameworks are structured; it's also the documentation they provide. It has recently become the norm for frameworks to have AI-friendly documentation. Angular, for instance, has an llms.txt file that you can reference in your AI prompts to get more relevant results. The best example, however, in my opinion, is the Nuxt UI documentation, which provides the option to copy each documentation page as markdown or a link to its markdown version, making it easy to reference in AI prompts. Frameworks that incorporate AI-friendly documentation and tooling are likely to experience increased adoption, as they facilitate developers' ability to leverage AI's capabilities.

4. Full-Stack TS/JS: The Sweet Spot

On this AI-accelerated journey, some stacks have emerged as the smoothest rides, and full-stack JavaScript/TypeScript is leading the way. The combination of React on the frontend and Node.js on the backend provides a unified language ecosystem, making the road less bumpy for developers. Shared types, common tooling, and mature libraries enable faster prototyping and reduced context switching. AI seems to enjoy these well-paved highways too. I've observed numerous instances where AI tools default to suggesting Next.js and Tailwind CSS for new projects, even when users prompt otherwise. While you can force a slight detour to something like Nuxt or SvelteKit, the road suddenly gets patchier - AI becomes less confident, requires more hand-holding, and sometimes outright stalls. So while still technically being in the sweet spot of full-stack JavaScript/TypeScript, deviating from the "main highway" even slightly can lead to a much rougher ride. React-based full-stack frameworks are becoming mainstream, not necessarily because they are always the best solution, but because they are the path of least resistance for both humans and AI.

5. The Polyglot Shift: AI Enables Multilingual Devs

One fascinating development on this journey is how AI is enabling more developers to become polyglots. Where switching stacks used to feel like taking detours into unknown territory, AI now acts like an on-demand guide. Whether it’s switching from Laravel to Spring Boot or from Angular to Svelte, AI helps bridge those knowledge gaps quickly. At This Dot, we've always taken pride in our polyglot approach, but AI is lowering the barriers for everyone. Yes, we've done this before the rise of AI tooling. If you are an experienced engineer with a strong understanding of programming concepts, you'll be able to adapt to different stacks and projects quickly. But AI is now enabling even junior developers to become polyglots, and it's making it even easier for the experienced ones to switch between stacks seamlessly. AI doesn’t just shorten the journey - it makes more destinations accessible. This "AI boost" not only facilitates the job of a software consultant, such as myself, who often has to switch between different projects, but it also opens the door to unlimited possibilities for companies to mix and match stacks based on their needs - particularly useful for companies that have diverse tech stacks, as it allows them to leverage the strengths of different languages and frameworks without the steep learning curve that usually comes with it.

6. AI-Generated Stack Bundles: The Trojan Horse

> Trend I'm seeing: AI app generators are a sales funnel.
>
> - Chef uses Convex.
> - V0 is optimized for Vercel.
> - Lovable uses Supabase and Clerk.
> - Firebase Studio uses Google services.
>
> These tools act like a trojan horse - they "sell" a tech stack.
>
> Choose wisely.
>
> — Cory House on X

Some roads come pre-built, but with toll booths you may not notice until you're halfway through the trip. AI-generated apps from tools like v0, Firebase Studio, or Lovable are convenience highways - fast, smooth, and easy to follow - but they quietly nudge you toward specific tech stacks, backend services, databases, and deployment platforms. It's a smart business model. These tools don't just scaffold your app; they bundle in opinions on hosting, auth providers, and DB layers. The convenience is undeniable, but there's a trade-off in flexibility and long-term maintainability. Engineering leaders must stay alert, like seasoned navigators, ensuring that the allure of speed doesn't lead their teams down the alleyways of vendor lock-in.

7. From 'Buy vs Build' to 'Prompt vs Buy'

The classic dilemma used to be _"buy vs build"_ - now it’s becoming "prompt vs buy." Why pay for a bloated tour bus of a SaaS product, packed with destinations and detours you’ll never take (and priced accordingly), when you can chart a custom route with a few well-crafted prompts and have a lightweight internal tool up and running in days, or even hours? Do you need a simple tool to track customer contacts with a few custom fields and a clean interface? In the past, you might have booked a seat on the nearest SaaS solution - one that gets you close enough to your destination but comes with unnecessary stops and baggage. With AI, you can now skip the crowded bus altogether and build a tailor-made vehicle that drives exactly where you need to go, no more, no less.

AI reshapes the travel map of product development. The road to MVPs has become faster, cheaper, and more direct. This shift is already rerouting the internal tooling landscape, steering companies away from bulky, one-size-fits-all platforms toward lean, AI-assembled solutions. And over time, it may change not just _how_ we build, but _where_ we build - with the smoothest highways forming around AI-friendly, modular ecosystems like Node, React, and TypeScript, while older "corporate" expressways like .NET, Java, or even Angular risk becoming the slow scenic routes of enterprise tech.

8. Strategic Implications: Velocity vs Maintainability

Every shortcut comes with trade-offs. The fast lane that AI offers boosts productivity but can sometimes encourage shortcuts in architecture and design. Speeding to your destination is great - until you hit the maintenance toll booth further down the road. AI tooling makes it easier to throw together an MVP, but without experienced oversight, the resulting codebases can turn into spaghetti highways. Teams need to implement AI-era best practices: structured code reviews, prompt hygiene, and deliberate stack choices that prioritize long-term maintainability over short-term convenience. Failing to do so can lead to a "quick and dirty" mentality, where the focus is on getting things done fast rather than building robust, maintainable solutions. This is particularly concerning for companies that rely on in-house developers or junior teams who may not have the experience to recognize potential pitfalls in AI-generated code.

9. Closing Reflection: Are We Still Choosing Our Stacks?

So, where are we heading? Looking at the current "traffic" on the modern software development pathways, one thing becomes clear: AI isn't just a productivity tool - the roads themselves are starting to shape the journey. What was once a deliberate process of choosing the right vehicle for the right terrain - picking our stacks based on product goals, team expertise, and long-term maintainability - now feels more like following GPS directions that constantly recalculate to the path of least resistance. AI is repaving the main routes, widening the lanes for certain tech stacks, and putting up "scenic route" signs for some frameworks while leaving others on neglected backroads. This doesn't mean we've lost control of the steering wheel, but it does mean that the map is changing beneath us in ways that are easy to overlook.

The risk is clear: we may find ourselves taking the smoothest on-ramps without ever asking if they lead to where we actually want to go. Convenience can quietly take priority over appropriateness. Productivity gains in the short term can pave over technical debt potholes that become unavoidable down the road. But the story isn't entirely one of caution. There's a powerful opportunity here too. With AI as a co-pilot, we can explore more destinations than ever before - venturing into unfamiliar tech stacks, accelerating MVP development, or rapidly prototyping ideas that previously seemed out of reach. The key is to remain intentional about when to cruise with AI autopilot and when to take the wheel with both hands and steer purposefully.

In this new era of AI-shaped development, the question every engineering team should be asking is not just "how fast can we go?" but "are we on the right road?" and "who's really choosing our route?" And let's not forget: some of these roads are still being built. Open-source maintainers and framework authors play a pivotal role in shaping which paths become highways. By designing AI-friendly architectures, providing structured, machine-readable documentation, and baking in patterns that are easy for AI models to learn and suggest, they can guide where AI directs traffic. Frameworks that proactively optimize for AI tooling aren't just improving developer experience - they're shaping the very flow of adoption in this AI-accelerated landscape.

If we're not mindful, we risk becoming passengers on a journey defined by default choices. However, if we remain vigilant, we can utilize AI to create more accurate maps, not just follow the fastest roads, but also chart new ones. Because while the routes may be getting redrawn, the destination should always be ours to choose. In the end, the real competitive advantage will belong to those who can harness AI's speed while keeping their hands firmly on the wheel - navigating not by ease, but by purpose. In this new era, the most valuable skill may not be prompt engineering - it might be strategic discernment....
