Make it Accessible: Navigation in Angular

Today, we are going to talk about navigation. Let's start from the beginning. What's navigation?

"Web navigation refers to the process of navigating a network of information resources in the World Wide Web." (Wikipedia)

In those terms, we could say that when a user clicks a link, a navigation event is triggered, the browser captures it, and redirects the user to a new page. In pure HTML, this forces the browser to load the entire HTML document again. When you use Angular, things are different: the browser doesn't reload the whole document. Instead, it fetches only what changed and swaps it in place.

I thought that was a magical thing, and that the benefits were huge compared to the way HTML links normally behave. But that's only true up to a point: when you want to build accessible applications, things get more complicated. Why? If you have read my last article, Make it Accessible, you know how important HTML5 semantic elements are.

If you haven't read it yet, you can find it here: Make it Accessible: Headings in Angular.

Just like native HTML buttons make things more accessible out of the box by providing keyboard support and focusability, anchors are here to make your life easier.

Anchors to the rescue

In pure HTML, we use anchor elements with the href attribute. That way, we tell the browser which URL to redirect the user to on click. This triggers a full reload of the app, BUT there's a benefit to it: built-in accessibility support. Screen reader users are used to the way native HTML navigation works: the title of the new page is announced, focus is set to the top of the document, and the page title changes so the user always knows where they are. The snippet after the list below shows the kind of markup we are comparing.

So it basically allows:

  • Sighted users to know the current page by reading the title
  • Visually impaired users to know the current page from a screen reader announcing the title
  • The focus to be set to the top of the document
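
To make the contrast concrete, here is roughly the markup being compared; the routerLink version is the kind of link the rest of this article works with (simplified for illustration):

<!-- Plain HTML link: full page load; the browser announces the new title and resets focus -->
<a href="/page-a">Page A</a>

<!-- Angular link: the Router swaps the view in place, so none of that happens automatically -->
<a routerLink="/page-a">Page A</a>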

If you have used the Angular Router, you know that all the accessibility features just mentioned are lost. So, if you want to make your Angular app more accessible, sooner or later you are going to have to face this.

We are going to solve each of these problems, one at a time. If you want to do all the coding on your own, grab this broken version of the code and follow my lead.

Current page for Sighted Users

In this step, we are going to focus on making sure the user has a way to know what the current page is. In the code I just gave you, you'll find a simple app with a header and some navigation. Right now, there's no way for the user to know the current page (other than reading the URL and hoping it's as readable as in this example).

This could be solved by having a different color for the currently active link in the header, so let's do that.

First, we'll need to use the routerLinkActive directive on the navigation anchors. To do that, go to the src/app/app.component.html file and replace the nav element with this one:

<nav class="header__nav">
  <ul>
    <li>
      <a routerLink="/page-a" routerLinkActive="active">Page A</a>
    </li>
    <li>
      <a routerLink="/page-b" routerLinkActive="active">Page B</a>
    </li>
  </ul>
</nav>

So now, Angular will add the class active to the anchor whose route is currently active. Let's change the color of the active anchor. Go to the src/app/app.component.scss file, and set the color to white when the anchor has the active class.

a {
  // ...

  &.active {
    color: white;
  }
}

Make sure to put the &.active selector after all the ones that are already there.

Is the navigation accessible now? Well, not really. What about color blind users? We need to give them another cue. For that, we'll add an underline and an outline to the active anchor. Let's go back to the src/app/app.component.scss file.

a {
  // ...

  &.active,
  &:hover,
  &:focus {
    color: white;
    outline: 1px solid white;
  }

  &.active {
    text-decoration: underline;
  }
}

Since the hover and focus states share the outline and color we want, I reorganized the selectors to reduce duplicated code.

The last thing we have to do is make sure we update the page title every time the URL changes. For this, I followed the instructions from Todd Motto's article Dynamic page titles in Angular 2 with router events and made some changes to it.

This leads us to change the src/app/app-routing.module.ts file:

const routes: Routes = [
  {
    path: 'page-a',
    data: { title: 'I am the super Page A' },
    loadChildren: () =>
      import('./page-a/page-a.module').then(m => m.PageAModule)
  },
  {
    path: 'page-b',
    data: { title: 'I am the not that super Page B' },
    loadChildren: () =>
      import('./page-b/page-b.module').then(m => m.PageBModule)
  }
];

The key here is that I included a data property on each route and gave each one a title. Next, we have to update the src/app/app.component.ts file.

//...
import {
  map,
  distinctUntilChanged,
  startWith,
  filter,
  mergeMap
} from 'rxjs/operators';
import { Router, ActivatedRoute, NavigationEnd } from '@angular/router';
import { Title } from '@angular/platform-browser';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  // ...
  title$: Observable<string>;

  constructor(
    private router: Router,
    private activatedRoute: ActivatedRoute,
    private titleService: Title
  ) {}

  ngOnInit() {
    // Get the activated route on Navigation end
    const route$ = this.router.events.pipe(
      filter(event => event instanceof NavigationEnd),
      map(() => this.activatedRoute)
    );

    // Walk down to the deepest activated child route (the one rendered in the primary outlet)
    const primaryRoute$ = route$.pipe(
      map(route => {
        while (route.firstChild) route = route.firstChild;
        return route;
      }),
      filter(route => route.outlet === 'primary')
    );

    // Pull the data object off that route
    const routeData$ = primaryRoute$.pipe(mergeMap(route => route.data));
    // Get the actual title from the route data
    this.title$ = routeData$.pipe(map(({ title }) => title));

    this.title$.subscribe(title => {
      // Set title to the page
      this.titleService.setTitle(title);
    });
  }
}

Above, I injected the services we need, built a stream from the router events to get the current title, and updated the browser title using the Title service. If you want to learn more about this, read Todd Motto's article.
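
One side note, not covered in the original code: the subscription above lives as long as the AppComponent does, which is fine for the root component, but if you reuse this pattern elsewhere you'll want to clean it up. Here is a minimal sketch of one way to do that with takeUntil; the destroy$ name and the trimmed-down component are my own illustration, not part of the article's code:

import { Component, OnDestroy, OnInit } from '@angular/core';
import { Observable, Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent implements OnInit, OnDestroy {
  title$: Observable<string>;
  private destroy$ = new Subject<void>();

  ngOnInit() {
    // Build title$ from the router events exactly as shown above, then:
    this.title$
      .pipe(takeUntil(this.destroy$))
      .subscribe(title => {
        // setTitle, scroll and focus logic goes here
      });
  }

  ngOnDestroy() {
    // Emitting and completing tears down every stream piped through takeUntil
    this.destroy$.next();
    this.destroy$.complete();
  }
}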

You have just solved the first problem.

Current page for Visually Impaired Users

You are here for accessibility, so it's time to take visually impaired users into account. For this, you can use the aria-live attribute.

"Simple content changes, which are not interactive, should be marked as live regions." (MDN Web Docs)

That seems to be our use case: we want to announce to users that a page transition happened. For that, we'll create an element with aria-live that contains the title.

To get started, go to the src/app/app.component.html file, and use Angular's async pipe to render the title.

<div *ngIf="title$ | async as title" aria-live="assertive">
  <span [attr.aria-label]="title"></span>
</div>

If we put the title inside the span instead of using aria-label, we would need to hide that element from sighted users; using aria-label is a little trick I love to use instead. Also, notice that we set aria-live to assertive to make sure the title gets announced as soon as possible.
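
For reference, the alternative mentioned above (putting the title inside the span and hiding it from sighted users) is usually done with a visually-hidden utility class along these lines. This is just a sketch of that other approach; the demo app doesn't need it:

// Keeps the text available to screen readers while rendering it invisible on screen
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}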

NOTE: Normally, I wouldn't put the ngIf on the element with aria-live, because the live region only starts announcing after it has been instantiated. In this case, that's exactly what we need, because we don't want to announce the title again on the first load.

Now every user of the app will know which page they are on, no matter their condition. We are almost done making the navigation more inclusive.

Manage focus and scroll

Let's make things even better now. You have probably noticed that when an Angular page transition occurs, the scroll position is kept where it was, unless the page we just navigated to is shorter than the current scroll position. So the first step is to set the scroll to the very top on every page transition.

Just go back to the src/app/app.component.ts file and do this:

// ...
@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  // ...
  ngOnInit() {
    // ...
    this.title$.subscribe(title => {
      // ...
      // Scroll to top
      window.scrollTo(0, 0);
      // ...
    });
  }
  // ...
}

We added a call to the scrollTo method on window with the arguments (0, 0); that tells the browser to scroll to the top of the document.
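
As an aside, depending on your Angular version, the Router can do this for you: since Angular 6.1 there's a scrollPositionRestoration option you can pass to RouterModule.forRoot. A minimal sketch, assuming the module-based setup this article uses:

import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  // ... the routes defined earlier ...
];

@NgModule({
  imports: [
    // 'enabled' restores the scroll position on back/forward and scrolls to top on new navigations
    RouterModule.forRoot(routes, { scrollPositionRestoration: 'enabled' })
  ],
  exports: [RouterModule]
})
export class AppRoutingModule {}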

Whenever a page transition occurs in a pure HTML website, the focus is cleared and set to the first focusable element in the document. Replicating that is slightly harder, but there's a trick for it, so let's do it together. Go to the same file again, and do this:

import { /* ... */ ViewChild, ElementRef } from '@angular/core';
// ...
@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  // ...
  @ViewChild('appHeader', { static: true }) appHeader: ElementRef;

  ngOnInit() {
    // ...
    this.title$.subscribe(title => {
      // ...
      // Set focus to the appHeader
      this.appHeader.nativeElement.focus();
      // ...
    });
  }
  // ...
}

This is almost as easy as the previous step, but instead of just calling a method on the window object, we need a reference to an element in the DOM. We use the ViewChild decorator for that. So now, in the title$ subscription, we set the title, scroll to the top, and move focus to the header.

Don't forget to add the template reference in src/app/app.component.html and to make the header focusable:

<header class="header" tabindex="-1" #appHeader>
  <!-- ... -->
</header>

We don't want the focus outline on the header, so you can do this:

.header {
  // ...
  &:focus {
    outline: none;
  }
  // ...
}

Conclusion

After playing with Angular a little bit, we were able to make the navigation feel like the native one. It's not the most accessible navigation in the world, but it can get you there, and it's WAY BETTER than nothing. If you want a finished solution, take a look at this working version of the app.

Icons made by Freepik from Flaticon
