
From Pizza Driver to Director of Engineering! Fostering Inclusivity and Growth in Tech Teams with Anthony Martinelli


In this episode, Tracy Lee chats with Anthony Martinelli, Director of Engineering at Charter Communications. The episode explores Anthony's journey from a high school dropout with early family responsibilities to his current leadership role in the tech industry.

Anthony shares his story, which began with early challenges: he became a father at 17 and married at 19. He found his initial footing in the pizza business, gradually working his way up through various positions. The discussion then turns to the 2007 financial crisis, during which Anthony and his family lost their home. This pivotal moment prompted him to consider a career change, ultimately leading him to the world of tech.

Anthony's passion for mathematics and computers guided his choice to pursue computer science. He spent six years earning his degree while shouldering significant life responsibilities, including a family and a full-time job, and then landed his first role as a full-stack engineer. Anthony attributes his success to determination, caffeine, and learning from past mistakes.

Tracy Lee reflects on the idea that sometimes life's challenges and obstacles shape a person more than their successes, echoing Anthony's sentiment about learning from mistakes.

The two then discuss Anthony’s role as Director of Engineering at Charter. Anthony emphasizes the difficulties that come with overseeing a larger team and the challenge of maintaining one-on-one relationships. He shares his strategies for staying personally connected with his team, allocating time to help members transition to new technology stacks, and actively addressing their questions and challenges.

Tracy asks Anthony about his involvement with the code, noting that not all directors of engineering are as deeply engaged technically. Anthony explains his long tenure at Charter and how his in-depth domain knowledge drives his hands-on approach.

The two discuss the issue of imposter syndrome, with Anthony candidly admitting that he grapples with it daily. He highlights the importance of rational thinking and seeking solutions to overcome self-doubt.

Tracy asks about the challenges in engineering leadership today. Anthony stresses the ongoing importance of retaining engineering talent and fostering inclusivity within teams.

To summarize, Anthony Martinelli's remarkable journey from adversity to leadership in the tech industry serves as a testament to the power of determination and a strong work ethic. His insights into engineering leadership, imposter syndrome, and the challenges of managing a growing team offer valuable lessons for both aspiring and seasoned professionals in the field.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams rescue projects that have missed their deadlines and keeping your strategic digital initiatives on course. Check out our case studies and the clients who trust us with their engineering.

You might also like


How to be an Effective Technology Leader in an Agile Startup Environment with Daniel Chopson

Daniel Chopson, CTO and co-founder of Cove.Tool, discusses key aspects of engineering leadership, team management, and software development in the fast-paced startup environment. Cove.Tool, initially a sustainability-focused software company, has evolved to offer AI-driven solutions for architects and engineers. Daniel shared valuable insights on the importance of productive retrospectives, agile planning, and strategic team structuring.

Daniel emphasized the significance of conducting productive retrospectives to foster team improvement and effective communication. By celebrating wins and establishing clear action items, teams can identify areas for growth and implement necessary changes. These retrospectives provide a platform for open and honest discussions, enabling teams to learn from their successes and failures. Encouraging a culture of continuous improvement allows engineering leaders to drive innovation and enhance team collaboration.

In a startup environment, balancing planning and agility is crucial for success. Cove.Tool prioritizes shorter-term sprint planning to allow for real-time feedback and adaptability. By aligning work with business objectives while maintaining flexibility in planning, the team can respond quickly to changing market demands. This approach enables Cove.Tool to stay ahead of the curve and deliver high-quality solutions to its clients. The key lies in finding the right balance between long-term strategic planning and the ability to pivot when necessary.

Team structuring plays a vital role in engineering leadership. Daniel highlighted the importance of specialized roles like engineering managers and tech leads for effective people development and technical guidance. Engineering managers focus on nurturing the growth and well-being of team members, while tech leads provide technical expertise and mentorship. This division of responsibilities ensures that both the personal and technical aspects of team development are adequately addressed, leading to a more productive and motivated workforce.

The conversation underscored the significance of adaptability, feedback-driven decision-making, and strategic team structuring in successful software development endeavors. By embracing change and continuously seeking feedback, engineering leaders can make informed decisions and drive innovation. Strategic team structuring, with specialized roles and clear responsibilities, ensures that the right people are in the right positions to maximize productivity and foster growth. Effective engineering leadership is essential for adapting to changing market demands and building teams equipped to tackle future challenges.

Download this episode here.


“We were seen as amplifiers, not collaborators,” Ashley Willis, Sr. Director of Developer Relations at GitHub, on How DevRel has Changed, Open Source, and Holding Space as a Leader

Ashley Willis has seen Developer Relations evolve from being on the sidelines of the tech team to having a seat at the strategy table. In her ten years in the space, she’s done more than give great conference talks or build community: she’s helped shape what the DevRel role looks like for software providers. Now, as Senior Director of Developer Relations at GitHub, Ashley is focused on building spaces where developers feel heard, seen, and supported.

> “A decade ago, we were seen as amplifiers, not collaborators,” she says. “Now we’re influencing product roadmaps and shaping developer experience end to end.”

DevRel Has Changed

For Ashley, the biggest shift hasn’t been the work itself, but how it’s understood.

> “The work is still outward-facing, but it’s backed by real strategic weight,” she explains. “We’re showing up in research calls and incident reviews, not just keynotes.”

That shift matters, but it’s not the finish line. Ashley is still pushing for change when it comes to burnout, representation, and sustainable metrics that go beyond conference ROI.

> “We’re no longer fighting to be taken seriously. That’s a win. But there’s more work to do.”

Talking Less as a Leader

When we asked about the best advice she has ever received, Ashley shared an early lesson from a mentor: “Your presence should create safety, not pressure.”

> “It reframed how I saw my role,” she says. “Not as the one with answers, but the one who holds the space.”

Ashley knows what it’s like to be in rooms where it’s hard to speak up. She leads with that memory in mind: listening more than talking, normalizing breaks, and creating environments where others can lead too.

> “Leadership is emotional labor. It’s not about being in control. It’s about making it safe for others to lead, too.”

Scaling More Than Just Tech

Having worked inside high-growth companies, Ashley knows firsthand that scaling tech is one thing; scaling trust is another.

> “Tech will break. Roadmaps will shift. But if there’s trust between product and engineering, between company and community—you can adapt.”

And she’s learned not to fall for premature optimization. Scale what you have. Don’t over-design for problems you don’t have yet.

Free Open Source Isn’t Free

There’s one myth Ashley is eager to debunk: that open source is “free.”

> “Open source isn’t free labor. It’s labor that’s freely given,” she says. “And it includes more than just code. There’s documentation, moderation, mentoring, emotional care. None of it is effortless.”

Open source runs on human energy. When we treat contributors like an infinite resource, we risk burning them out and breaking the ecosystem we all rely on.

> “We talk a lot about open source as the foundation of innovation. But we rarely talk about sustaining the people who maintain that foundation.”

Burnout Is Not Admirable

Early in her career, Ashley wore burnout like a badge of honor. She doesn’t anymore.

> “Burnout doesn’t prove commitment,” she says. “It just dulls your spark.”

Now, she treats rest as productive. And she’s learned that clarity is kindness, especially when giving feedback.

> “I thought being liked was the same as being kind. It’s not. Kindness is honesty with empathy.”

The Most Underrated GitHub Feature?

Ashley’s pick: personal instructions in GitHub Copilot. Most users don’t realize they can shape how Copilot writes: its tone, its assumptions, and its context awareness. Her own instructions are specific: empathetic, plainspoken, technical without being condescending. For Ashley, that helps reduce cognitive load and makes the tool feel more human.

> “Most people skip over this setting. But it’s one of the best ways to make Copilot more useful—and more humane.”

Connect with Ashley Willis

She has been building better systems for over a decade. Whether it’s shaping Copilot UX, creating safer teams, or speaking truth about the labor behind open source, she’s doing the quiet work that drives sustainable change. Follow Ashley on BlueSky to learn more about her work, her maker projects, and the small things that keep her grounded in a fast-moving industry.

Sticker Illustration by Jacob Ashley.


What Sets the Best Autonomous Coding Agents Apart?

Must-have Features of Coding Agents

Autonomous coding agents are no longer experimental; they are becoming an integral part of modern development workflows, redefining how software is built and maintained. As models become more capable, agents have become easier to produce, leading to an explosion of options with varying depth and utility. Drawing on our experience using many agents, let's delve into the features you'll absolutely want in order to get the best results.

1. Customizable System Prompts

Custom agent modes, or roles, allow engineers to tailor outputs to the task at hand. For instance, an agent can operate in a "planning mode" focused on outlining development steps and gathering requirements, a "coding mode" optimized for generating and testing code, or a "documentation mode" emphasizing clarity and completeness of written artifacts. You might start with the off-the-shelf planning prompt, but you'll quickly want your own tailored version. Regardless of which modes are included out of the box, the ability to customize and extend them is critical. Agents must adapt to your unique workflows and prioritize what's important to your project. Without this flexibility, even well-designed defaults can fall short in real-world use.

Engineers have preferences, and projects contain existing work. The best agents offer ways to communicate these preferences and decisions effectively: for example, 'pnpm' instead of 'npm' for package management, requiring the agent to seek root causes rather than offer temporary workarounds, or mandating that tests and linting must pass before a task is marked complete. Rules are a layer of control to accomplish this. Rules reinforce technical standards but also shape agent behavior to reflect project priorities and cultural norms. They inform the agent across contexts: think constraints, preferences, or directives that apply regardless of the task. Rules can encode things like style guidelines, risk tolerances, or communication boundaries. By shaping how the agent reasons and responds, rules ensure consistent alignment with desired outcomes. Roo Code is an agent that makes great use of custom modes, and rules are ubiquitous across coding agents. Together, these features form a meta-agent framework that allows engineers to construct the most effective agent for their unique project and workflow (a sketch of what modes and rules might look like follows this overview).

2. Usage-based Pricing

The best agents provide as much relevant information as possible to the model, and they give transparency and control over what information is sent. This allows engineers to leverage their knowledge of the project to improve results. Being liberal with relevant information is more expensive; however, it also significantly improves results. The pricing model of some agents prioritizes fixed, predictable costs that include model fees, which creates an incentive to minimize the amount of information sent to the model in order to control costs. To get the most out of these tools, you have to get the most out of the models, which typically implies usage-based pricing.

3. Autonomous Workflows

The way we accomplish work has phases: for example, creating tests and then making them pass, creating diagrams or plans, or reviewing work before submitting PRs. The best agents have mechanisms to facilitate these phases in an autonomous way. For the best results, each phase should have full use of a context window without watering down the main session's context. This should leverage your custom modes, which excel at each phase of your workflow.

4. Working in the Background

The best agents are more effective at producing desired results and thus are able to be more autonomous. As agents become more autonomous, the ability to work in the background or on multiple tasks at once becomes increasingly necessary to unlock their full potential. Agents that leverage local or cloud containers to perform work independently of IDEs or working copies on an engineer's machine further increase their utility. This allows engineers to focus on drafting plans and reviewing proposed changes, and ultimately to manage multiple tasks at once, overseeing their agent-powered workflows as if guiding a team.

5. Integrations with your Tools

The Model Context Protocol (MCP) serves as a standardized interface that allows agents to interact with your tools and data sources. The best agents seamlessly integrate with the platforms engineers rely on, such as Confluence for documentation, Jira for tasks, and GitHub for source control and pull requests. These integrations ensure the agent can participate meaningfully across the full software development lifecycle.

6. Support for Multiple Model Providers

Reliance on a single AI provider can be limiting. Top-tier agents support multiple providers, allowing teams to choose the best models for specific tasks. This flexibility enhances performance and access to the latest and greatest, and it also safeguards against potential downtimes or vendor-specific issues.

Final Thoughts

Selecting the right autonomous coding agent is a strategic decision. By prioritizing the features above, technology leaders can adopt agents that can be tuned for their team's success. Tuning agents to projects and teams takes time, as does configuring the plumbing to integrate well with other systems. However, the productivity gains are worth the squeeze. Models will keep getting better, and the best agents capitalize on these improvements with little to no added effort. Set your organization and teams up to tap into the power of AI-enhanced engineering, and be more effective and more competitive.
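To make the first feature concrete, here is a hypothetical sketch, in TypeScript, of how custom modes and rules might be expressed. The shape is illustrative only; each agent (Roo Code, Cursor, and others) has its own real configuration format.

```ts
// Hypothetical agent configuration. This interface and layout are
// illustrative assumptions, not any specific agent's actual schema.
interface AgentMode {
  name: string;
  systemPrompt: string; // tailors behavior for one phase of work
}

// Rules apply across all modes: constraints, preferences, and directives
// that hold regardless of the task.
const rules: string[] = [
  "Use pnpm, not npm, for package management.",
  "Seek root causes; do not propose temporary workarounds.",
  "Tests and linting must pass before a task is marked complete.",
];

const modes: AgentMode[] = [
  { name: "planning", systemPrompt: "Outline development steps and gather requirements before writing any code." },
  { name: "coding", systemPrompt: "Generate and test code; keep changes small and reviewable." },
  { name: "documentation", systemPrompt: "Favor clarity and completeness in written artifacts." },
];

export const agentConfig = { rules, modes };
```

However it is spelled in a given tool, the point is the same: modes scope the agent's behavior per phase, while rules encode standing project decisions.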


Next.js Rendering Strategies and how they affect core web vitals

When it comes to building fast and scalable web apps with Next.js, it’s important to understand how rendering works, especially with the App Router. Next.js organizes rendering around two main environments: the server and the client. On the server side, you’ll encounter three key strategies: Static Rendering, Dynamic Rendering, and Streaming. Each one comes with its own set of trade-offs and performance benefits, so knowing when to use which is crucial for delivering a great user experience. In this post, we'll break down each strategy, what it's good for, and how it impacts your site's performance, especially Core Web Vitals. We'll also explore hybrid approaches and provide practical guidance on choosing the right strategy for your use case.

What Are Core Web Vitals?

Core Web Vitals are a set of metrics defined by Google that measure real-world user experience on websites. These metrics play a major role in search engine rankings and directly affect how users perceive the speed and smoothness of your site.

* Largest Contentful Paint (LCP): measures loading performance as the time taken for the largest visible content element to render. A good LCP is 2.5 seconds or less.
* Interaction to Next Paint (INP): measures responsiveness to user input. A good INP is 200 milliseconds or less.
* Cumulative Layout Shift (CLS): measures the visual stability of the page by quantifying layout instability during load. A good CLS is 0.1 or less.

If you want to dive deeper into Core Web Vitals and understand their impact on your website's performance, I recommend reading this detailed guide on New Core Web Vitals and How They Work.

Next.js Rendering Strategies and Core Web Vitals

Let's explore each rendering strategy in detail.

1. Static Rendering (Server Rendering Strategy)

Static Rendering is the default for Server Components in Next.js. With this approach, components are rendered at build time (or during revalidation), and the resulting HTML is reused for each request. This pre-rendering happens on the server, not in the user's browser. Static rendering is ideal for routes where the data is not personalized to the user, which makes it suitable for:

* Content-focused websites: blogs, documentation, marketing pages
* E-commerce product listings: when product details don't change frequently
* SEO-critical pages: when search engine visibility is a priority
* High-traffic pages: when you want to minimize server load

How Static Rendering Affects Core Web Vitals

* Largest Contentful Paint (LCP): Static rendering typically leads to excellent LCP scores (typically under 1 second). The pre-rendered HTML can be cached and delivered instantly from CDNs, resulting in very fast delivery of the initial content, including the largest element. There is also no waiting for data fetching or rendering on the client.
* Interaction to Next Paint (INP): Static rendering provides a good foundation for INP but doesn't guarantee optimal performance (typically 50-150 ms, depending on implementation). While Server Components don't require hydration, any Client Components within the page still need JavaScript to become interactive. To achieve a very good INP score, keep the Client Components on the page to a minimum.
* Cumulative Layout Shift (CLS): Static rendering delivers the complete page structure upfront, which can be very beneficial for CLS, but achieving excellent CLS requires additional optimization strategies:
  * Static HTML alone doesn't prevent layout shifts if resources load asynchronously.
  * Image dimensions must be properly specified to reserve space before the image loads.
  * Web fonts can cause text to reflow if not handled properly with font-display strategies.
  * Dynamically injected content (ads, embeds, lazy-loaded elements) can disrupt layout stability.
  * CSS implementation significantly impacts CLS: immediate availability of styling information helps maintain visual stability.

Code examples in the full article cover basic static rendering, static rendering with revalidation (ISR), and static path generation.

2. Dynamic Rendering (Server Rendering Strategy)

Dynamic Rendering generates HTML on the server for each request at request time. Unlike static rendering, the content is not pre-rendered or cached but freshly generated for each user. This kind of rendering works best for:

* Personalized content: user dashboards, account pages
* Real-time data: stock prices, live sports scores
* Request-specific information: pages that use cookies, headers, or search parameters
* Frequently changing data: content that needs to be up to date on every request

How Dynamic Rendering Affects Core Web Vitals

* Largest Contentful Paint (LCP): With dynamic rendering, the server needs to generate HTML for each request, so responses can't be fully cached at the CDN level. It is still faster than client-side rendering because the HTML is generated on the server.
* Interaction to Next Paint (INP): Performance is similar to static rendering once the page is loaded. However, it can become slower if the dynamic content includes many Client Components.
* Cumulative Layout Shift (CLS): Dynamic rendering can introduce CLS if the data fetched at request time significantly alters the layout compared to a static structure. However, if the layout is stable and the dynamic content fits within predefined areas, CLS can be managed effectively.

Code examples in the full article cover explicit dynamic rendering, implicit dynamic rendering with cookies, and dynamic routes.

3. Streaming (Server Rendering Strategy)

Streaming allows you to progressively render UI from the server. Instead of waiting for all the data to be ready before sending any HTML, the server sends chunks of HTML as they become available. This is implemented using React's Suspense boundary. React Suspense works by creating boundaries in your component tree that can "suspend" rendering while waiting for asynchronous operations. When a component inside a Suspense boundary throws a promise (which happens automatically with data fetching in React Server Components), React pauses rendering of that component and its children, renders the fallback UI specified in the Suspense component, continues rendering other parts of the page outside the boundary, and eventually replaces the fallback with the actual component once the promise resolves. When streaming, this mechanism allows the server to send the initial HTML with fallbacks for suspended components while continuing to process those components in the background. The server then streams additional HTML chunks as each suspended component resolves, including instructions for the browser to seamlessly replace fallbacks with final content.
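As a concrete illustration of that mechanism, here is a minimal sketch of a streamed page using the App Router. The component name and API endpoint are hypothetical.

```tsx
// app/dashboard/page.tsx (a minimal streaming sketch; names are hypothetical)
import { Suspense } from "react";

// An async Server Component backed by a slow data source. Awaiting the
// fetch is what causes this subtree to suspend during streaming.
async function SlowStats() {
  const res = await fetch("https://api.example.com/stats", { cache: "no-store" });
  const data = await res.json();
  return <p>Requests today: {data.count}</p>;
}

export default function DashboardPage() {
  return (
    <main>
      {/* This shell is sent immediately in the first HTML chunk. */}
      <h1>Dashboard</h1>
      <Suspense fallback={<p>Loading stats...</p>}>
        {/* SlowStats suspends; its HTML streams in later and replaces the fallback. */}
        <SlowStats />
      </Suspense>
    </main>
  );
}
```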
It works well for:

* Pages with mixed data requirements: some fast, some slow data sources
* Improving perceived performance: show users something quickly while slower parts load
* Complex dashboards: different widgets have different loading times
* Handling slow APIs: prevent slow third-party services from blocking the entire page

How Streaming Affects Core Web Vitals

* Largest Contentful Paint (LCP): Streaming can improve perceived LCP. By sending the initial HTML content quickly, potentially including the largest element, the browser can render it sooner. Even if other parts of the page are still loading, the user sees the main content faster.
* Interaction to Next Paint (INP): Streaming can contribute to a better INP. When used with React's <Suspense />, interactive elements in the faster-loading parts of the page can become interactive earlier, even while other components are still being streamed in. This allows users to engage with the page sooner.
* Cumulative Layout Shift (CLS): Streaming can cause layout shifts as new content streams in. However, when implemented carefully, it should not negatively impact CLS. The initially streamed content should establish the main layout, and subsequent chunks should ideally fit within this structure without causing significant reflows. Using placeholders and ensuring dimensions are known helps prevent CLS.

Code examples in the full article cover basic streaming with Suspense (sketched above), nested Suspense boundaries for more granular control, and the Next.js loading.js convention.

4. Client Components and Client-Side Rendering

Client Components are defined using the React 'use client' directive. They are pre-rendered on the server but then hydrated on the client, enabling interactivity. This is different from pure client-side rendering (CSR), where rendering happens entirely in the browser. In the traditional sense of CSR (where the initial HTML is minimal and all rendering happens in the browser), Next.js has moved away from this as a default approach, but it can still be achieved by using dynamic imports and setting ssr: false.

Despite the shift toward server rendering, there are valid use cases for CSR:

1. Private dashboards: where SEO doesn't matter and you want to reduce server load
2. Heavy interactive applications: like data visualization tools or complex editors
3. Browser-only APIs: when you need access to browser-specific features like localStorage or WebGL
4. Third-party integrations: some widgets or libraries only work in the browser

While these are valid use cases, using Client Components is generally preferable to pure CSR in Next.js. Client Components give you the best of both worlds: server-rendered HTML for the initial load (improving SEO and LCP) with client-side interactivity after hydration. Pure CSR should be reserved for scenarios where server rendering is impossible or counterproductive. Client Components are good for:

* Interactive UI elements: forms, dropdowns, modals, tabs
* State-dependent UI: components that change based on client state
* Browser API access: components that need localStorage, geolocation, etc.
* Event-driven interactions: click handlers, form submissions, animations
* Real-time updates: chat interfaces, live notifications

How Client Components Affect Core Web Vitals

* Largest Contentful Paint (LCP): The initial HTML includes the server-rendered version of Client Components, so LCP is reasonably fast. Hydration can delay interactivity but doesn't necessarily affect LCP.
* Interaction to Next Paint (INP): Hydration can cause input delay during page load, and once the page is hydrated, performance depends on the efficiency of event handlers. Complex state management can also impact responsiveness.
* Cumulative Layout Shift (CLS): Client-side data fetching can cause layout shifts as new data arrives, and state changes might alter the layout unexpectedly. Client Components require careful implementation to prevent shifts.

Code examples in the full article cover a basic Client Component and a Client Component with server data.

Hybrid Approaches and Composition Patterns

In real-world applications, you'll often use a combination of rendering strategies to achieve the best performance. Next.js makes it easy to compose Server and Client Components together.

Server Components with Islands of Interactivity

One of the most effective patterns is to use Server Components for the majority of your UI and add Client Components only where interactivity is needed. This approach:

1. Minimizes the JavaScript sent to the client
2. Provides excellent initial load performance
3. Maintains good interactivity where needed

Partial Prerendering (Next.js 15)

Next.js 15 introduced Partial Prerendering, a new hybrid rendering strategy that combines static and dynamic content in a single route. This allows you to:

1. Statically generate a shell of the page
2. Stream in dynamic, personalized content
3. Get the best of both static and dynamic rendering

Note: At the time of this writing, Partial Prerendering is experimental and is not ready for production use.

Measuring Core Web Vitals in Next.js

Understanding the impact of your rendering strategy choices requires measuring Core Web Vitals in real-world conditions. Here are some approaches:

1. Vercel Analytics: If you deploy on Vercel, Vercel Analytics can automatically track Core Web Vitals for your production site.
2. Web Vitals API: You can manually track Core Web Vitals using the web-vitals library.
3. Lighthouse and PageSpeed Insights: For development and testing, use the Chrome DevTools Lighthouse tab, PageSpeed Insights, or the Chrome User Experience Report.

The full article includes setup snippets for Vercel Analytics and the web-vitals library.

Making Practical Decisions: Which Rendering Strategy to Choose?

Choosing the right rendering strategy depends on your specific requirements. Here's a decision framework:

Choose Static Rendering when:
* Content is the same for all users
* Data can be determined at build time
* The page doesn't need frequent updates
* SEO is critical
* You want the best possible performance

Choose Dynamic Rendering when:
* Content is personalized for each user
* Data must be fresh on every request
* You need access to request-time information
* Content changes frequently

Choose Streaming when:
* The page has a mix of fast and slow data requirements
* You want to improve perceived performance
* Some parts of the page depend on slow APIs
* You want to prioritize showing critical UI first

Choose Client Components when:
* The UI needs to be interactive
* A component relies on browser APIs
* The UI changes frequently based on user input
* You need real-time updates

Conclusion

Next.js provides a powerful set of rendering strategies that allow you to optimize for both performance and user experience. By understanding how each strategy affects Core Web Vitals, you can make informed decisions about how to build your application.
Remember that the best approach is often a hybrid one, combining different rendering strategies based on the specific requirements of each part of your application. Start with Server Components as your default, use Static Rendering where possible, and add Client Components only where interactivity is needed. By following these principles and measuring your Core Web Vitals, you can create Next.js applications that are fast, responsive, and provide an excellent user experience.

Let's innovate together!

We're ready to be your trusted technical partners in your digital innovation journey.

Whether it's modernization or custom software solutions, our team of experts can guide you through best practices and how to build scalable, performant software that lasts.

Prefer email? hi@thisdot.co