
Lessons from Building Netlify with Matt Biilmann, CEO at Netlify

Matt Biilmann, CEO and co-founder of Netlify, joins us for an in-depth discussion about the company's incredible growth journey—from a bootstrapped two-person startup to a global platform serving over 5 million developers and powering sites for major companies like Unilever and Asana. Matt reflects on the key lessons he's learned while scaling Netlify, including raising $212 million in venture capital and growing the team to 200 employees. He shares valuable insights on balancing day-to-day operations with long-term vision, navigating the challenges of hiring experienced leaders, and fostering a culture of clarity and focus. Matt also highlights the importance of reducing friction for web development teams and ensuring fast time-to-market for web projects.

Chapters

  • 00:00 - Introduction
  • 01:00 - The Origins of Netlify
  • 02:30 - Netlify’s Growth Journey
  • 04:00 - Impact of Netlify on the Web Ecosystem
  • 05:30 - Building the Right Team
  • 07:45 - From Developer to CEO: Evolving as a Leader
  • 10:00 - The Balance Between Vision and Operations
  • 12:00 - Delegating vs. Staying Hands-On
  • 15:30 - Hiring Experienced Leaders
  • 18:00 - Building Diverse Teams
  • 20:00 - Intuition in Leadership
  • 22:30 - Simplifying Goals and Objectives
  • 25:00 - The Shift in Tech Leadership
  • 28:00 - Changing Expectations for Engineers
  • 30:00 - Advice for Startup Founders
  • 32:00 - Where to Find Matt Online
  • 33:00 - Conclusion

Follow Matt Biilmann on Social Media

  • Twitter: https://x.com/biilmann
  • LinkedIn: https://www.linkedin.com/in/mathias-biilmann-christensen-a5a3805/
  • GitHub: https://github.com/biilmann

Sponsored by This Dot: thisdot.co

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines, and keeping your strategic digital initiatives on course. Check out our case studies and the clients that trust us with their engineering.

You might also like


Building a Bot to Fetch Discord Scheduled Events with 11ty and Netlify

If you haven't heard of 11ty (Eleventy) yet, we are here to share why you should be excited about this new-ish static site generator on the block! For clarification, a static site generator (SSG) is a tool that generates a static HTML website from raw data and a set of templates. 11ty is an SSG with a small bundle size and no runtime, which means it ships only your code to the browser. It supports multiple templating engines, giving you the flexibility to use templating languages like .njk, .html, .md, and more. For this article, we will be using Nunjucks (.njk) templating. It's worth pointing out that 11ty is NOT a JavaScript framework.

This article is a recap of the JS Drop training by Domitrius Clark, where he showed us how to build a Discord scheduled-event list with 11ty and Netlify. In this workshop, you will learn how to build a statically generated site listing scheduled events fetched from a Discord server, using 11ty and Netlify. A Notion document outlines the steps for this workshop. (Thanks, Dom!)

👋 Before getting into the steps, make sure you have:
- A GitHub account and a repo created to house your workshop code
- A Discord account and a server to connect your bot to
- A Netlify account to deploy your application at the end

Initial Project Scaffold and Install Dependencies

- Run npm init -y to initialize a package.json with default values
- Run yarn add -D @11ty/eleventy @netlify/functions tailwindcss node-fetch@2 dotenv npm-run-all
- Open the new project in your favorite code editor
- Edit the scripts property of the package.json file: `

The scripts above are the build commands for the 11ty production build and the Tailwind CSS build, plus the dev server for testing our application. Now that we have our packages and scripts defined, let's scaffold out the folders and files we'll need for the site.
First, edit the .gitignore: `

Next, define the 11ty configs:
- The types of templating files (Nunjucks)
- The directories to use, including the build output and where to find components, layouts, and includes
- The plugins

Edit the _eleventy/config.js file with the following: `

Next, we edit the Netlify config file, netlify.toml, to configure the Netlify deployment with the following:
- Build commands
- The path to Netlify functions

`

Creating a base layout and setting up Tailwind

We created the _includes folder with two sub-folders: one for our components (or macros), simply named components, and layouts, which is where we're going to be focused in this lesson. 11ty exposes the _includes folder so that we have access to our layouts and components inside of our pages and inside of each other (for example, using macros inside of other macros). Let's go ahead and create the HTML scaffold for our pages. Inside of /src/_includes/layouts/, we'll create the base.njk file: `

This layout will be used to wrap all of our pages. We can also create sub-layouts and new layouts depending on the needs of a page; for this tutorial, we will need only this base layout. In base.njk:
- We made the Tailwind styles visible to our page by adding a link tag for /styles.css
- We are using the title variable; thanks to 11ty's data cascade, we're able to pull variables in from our pages' frontmatter. In our files, we'll need to define a title to ensure our layout doesn't break.
- Notice the {{ content | safe }}. The content variable is the page content itself; in our case, our .njk page and components. The safe filter is built into Nunjucks and ensures the content will not be HTML-escaped.
Next, we will modify tailwind.config.js to make Tailwind work as expected: `

And modify the styles.css file to import the Tailwind utilities, base, and components: `

Then we edit the index.njk file with some default content and frontmatter: `

Now, to test that everything works, start the dev server: `

Everything should work! Navigate to http://localhost:8080 in your browser.

Creating a Navbar component in Nunjucks

Let's create a navbar component for our layout with Nunjucks. In src/_includes/components/, add a navbar.njk: `

Next, we modify the index.njk file to include the navbar in our pages: `

The final document should look like this: `

Initializing a Discord bot from the Discord server

Now that we have the template and base files set up, we should connect a Discord bot to the page. Before we initialize the bot, we need to put some things in place, so head over to Discord. Go to User Settings, navigate to the Advanced tab, and enable Developer Mode. Then head over to the Discord Developer Portal and click on New Application to create an application. Fill in the necessary details and click Create. In the sidebar menu, navigate to Bot and click Add Bot.

We will need a few details for our app to connect with the Discord server. Let's copy them to the .env file. Add the environment variables DISCORD_BOT_TOKEN and DISCORD_GUILD_ID, and assign the Discord token value from the Bot page by clicking Reset Token. For the DISCORD_GUILD_ID, head over to a Discord server that you manage (or create one for this tutorial), right-click on the server, and click Copy ID. Paste the ID as the value for the DISCORD_GUILD_ID environment variable.

Next, add the bot to your server: https://discordapp.com/api/oauth2/authorize?scope=bot&client_id=YOUR_CLIENT_ID — you can find the client ID in the OAuth2 tab and click Copy. Now we are all set and connected to the server.
Using global data files in 11ty to fetch scheduled events from Discord

In 11ty, data is merged from multiple different sources before the template is rendered, in what 11ty calls the Data Cascade. We will fetch data from Discord in a JavaScript function inside a global data file. Inside src/_data, create a new file named events.js. Earlier, we created environment variables called DISCORD_BOT_TOKEN and DISCORD_GUILD_ID. Now we can call the events endpoint, grab our events, and inject them into our templates. Our file will look like this: `

Creating the events page

In the src directory, create an events.njk file: `

Currently, we've just got a page rendering some static content. Let's use Nunjucks loops to render a card for each of our events. The data we care about right now from the large event object coming back are:
- The creator
- The name
- The scheduled start time
- The description
- And, if the event isn't hosted inside Discord, where it is

We also need to check the event for any metadata that could point us toward an external link for the event. Thankfully, this is another quick fix with Nunjucks if blocks. Our final card should end up looking something like this: `

Before we test the application, schedule a test event on Discord, restart the dev server, then click on the Events tab in the navbar. You should see your newly scheduled events.

Pushing to GitHub and deploying to Netlify

Pushing to GitHub

Let's initialize a repo so we can track our changes and deploy live to the web as we go. Start off with a quick command to initialize the repo: `

Then let's get all of our current changes added and pushed to main, so we can create a repo: `

Using the GitHub CLI, create the repo and push to it: `

This will create your repo, name it, and push up the commits, all in one command. To confirm that the repo is up, run: `

Deploy to Netlify

To create a new project on Netlify with the new repo as the base, run: `

Fill in the prompts. You should be asked the following:
- Choose to create and configure a new site
- Choose your team
- Set your unique site name

Now you should have an admin URL and base URL link in the console. There will be a few more prompts:
- Authenticate GitHub through Netlify
- Leave the build command blank
- Leave the Netlify functions folder blank

Once all that is done, run:
- git push
- netlify open

If something went wrong with the initial linking of your code, try a new production deploy using:
- netlify deploy --prod

The Netlify CLI will deploy the local project to the Netlify servers and generate a random URL you can visit to see the live app.

Conclusion

In this workshop, you learned to use 11ty to fetch and display your scheduled events from a Discord server and deploy the app to Netlify. That was pretty easy! Did you run into any issues? There is more! Watch the full training on the This Dot YouTube channel. Are you excited about 11ty? What are you building with it? Tell us what excites you!...


What Sets the Best Autonomous Coding Agents Apart?

Must-have Features of Coding Agents

Autonomous coding agents are no longer experimental; they are becoming an integral part of modern development workflows, redefining how software is built and maintained. As models become more capable, agents have become easier to produce, leading to an explosion of options with varying depth and utility. Drawing on our experience using many agents, let's delve into the features you'll absolutely want in order to get the best results.

1. Customizable System Prompts

Custom agent modes, or roles, allow engineers to tailor outputs to the desired results of their task. For instance, an agent can be set to operate in a "planning mode" focused on outlining development steps and gathering requirements, a "coding mode" optimized for generating and testing code, or a "documentation mode" emphasizing the clarity and completeness of written artifacts. You might start with the off-the-shelf planning prompt, but you'll quickly want your own tailored version. Regardless of which modes are included out of the box, the ability to customize and extend them is critical. Agents must adapt to your unique workflows and prioritize what's important to your project. Without this flexibility, even well-designed defaults can fall short in real-world use.

Engineers have preferences, and projects contain existing work. The best agents offer ways to communicate these preferences and decisions effectively: using pnpm instead of npm for package management, requiring the agent to seek root causes rather than offer temporary workarounds, or mandating that tests and linting must pass before a task is marked complete. Rules are a layer of control that accomplishes this. Rules reinforce technical standards, but they also shape agent behavior to reflect project priorities and cultural norms. They inform the agent across contexts: think constraints, preferences, or directives that apply regardless of the task.
Rules can encode things like style guidelines, risk tolerances, or communication boundaries. By shaping how the agent reasons and responds, rules ensure consistent alignment with desired outcomes. Roo Code is an agent that makes great use of custom modes, and rules are ubiquitous across coding agents. Together, these features form a meta-agent framework that lets engineers construct the most effective agent for the unique details of their project and workflow.

2. Usage-based Pricing

The best agents provide as much relevant information as possible to the model, and they give you transparency and control over what is sent. This allows engineers to leverage their knowledge of the project to improve results. Being liberal with relevant information is more expensive; however, it also significantly improves results. The pricing model of some agents prioritizes fixed, predictable costs that include model fees, which creates an incentive to minimize the amount of information sent to the model in order to control costs. To get the most out of these tools, you have to get the most out of the models, which typically implies usage-based pricing.

3. Autonomous Workflows

The way we accomplish work has phases: creating tests and then making them pass, creating diagrams or plans, or reviewing work before submitting PRs. The best agents have mechanisms to facilitate these phases autonomously. For the best results, each phase should have full use of a context window without watering down the main session's context, and it should leverage your custom modes, which excel at each phase of your workflow.

4. Working in the Background

The best agents are more effective at producing desired results and are therefore able to be more autonomous. As agents become more autonomous, the ability to work in the background, or on multiple tasks at once, becomes increasingly necessary to unlock their full potential.
Agents that leverage local or cloud containers to perform work independently of IDEs or working copies on an engineer's machine further increase their utility. This allows engineers to focus on drafting plans and reviewing proposed changes, and ultimately to manage multiple tasks at once, overseeing their agent-powered workflows as if guiding a team.

5. Integrations with Your Tools

The Model Context Protocol (MCP) serves as a standardized interface that allows agents to interact with your tools and data sources. The best agents seamlessly integrate with the platforms engineers rely on, such as Confluence for documentation, Jira for tasks, and GitHub for source control and pull requests. These integrations ensure the agent can participate meaningfully across the full software development lifecycle.

6. Support for Multiple Model Providers

Reliance on a single AI provider can be limiting. Top-tier agents support multiple providers, allowing teams to choose the best models for specific tasks. This flexibility enhances performance and the ability to use the latest and greatest models, and it also safeguards against potential downtimes or vendor-specific issues.

Final Thoughts

Selecting the right autonomous coding agent is a strategic decision. By prioritizing the features above, technology leaders can adopt agents that can be tuned for their team's success. Tuning agents to projects and teams takes time, as does configuring the plumbing to integrate well with other systems, but unlocking massive productivity gains is worth the squeeze. Models will keep getting better, and the best agents capitalize on these improvements with little to no added effort. Set your organization and teams up to tap into the power of AI-enhanced engineering, and be more effective and more competitive....


Introduction to Vercel’s Flags SDK

Introduction to Vercel's Flags SDK

In this blog, we will dig into Vercel's Flags SDK. We'll explore how it works, highlight its key capabilities, and discuss best practices to get the most out of it. You'll also understand why you might prefer this tool over other feature flag solutions out there. And despite its strong integration with Next.js, this SDK isn't limited to one framework: it's fully compatible with React and SvelteKit. We'll use Next.js for examples, but feel free to follow along with the framework of your choice.

Why should I use it?

You might wonder, "Why should I care about yet another feature flag library?" Unlike some other solutions, Vercel's Flags SDK offers unique, practical features: simplicity, flexibility, and smart patterns to help you manage feature flags quickly and efficiently.

It's simple

Let's start with a basic example: `

This might look simple (and it is!), but it showcases some important features. Notice how easily we can define and call our flag without repeatedly passing context or configuration. Many other SDKs require passing the flag's name and context every single time you check a flag, like this: `

This can become tedious and error-prone, as you might accidentally use different contexts throughout your app. With the Flags SDK, you define everything once upfront, keeping things consistent across your entire application. By "context", I mean the data needed to evaluate the flag, like user details or environment settings. We'll get into more detail shortly.

It's flexible

Vercel's Flags SDK is also flexible. You can integrate it with other popular feature flag providers like LaunchDarkly or Statsig using built-in adapters. And if the provider you want to use isn't supported yet, you can easily create your own custom adapter. While we'll use Next.js for demonstration, remember that the SDK works just as well with React or SvelteKit.
Latency solutions

Feature flags require definitions and context evaluations to determine their values; imagine checking a condition like "Is the user ID equal to 12?" Typically, these evaluations involve fetching the necessary information from a server, which can introduce latency. The evaluations happen through two primary functions: identify and decide. The identify function gathers the context needed for evaluation, and this context is then passed as an argument named entities to the decide function. Let's revisit our earlier example to see this clearly: `

You could add a custom evaluation context when reading a feature flag, but it's not best practice and is not usually recommended.

Using Edge Config

When loading our flags, the definitions and evaluation contexts normally get bootstrapped by making a network request and then opening a web socket that listens for changes on the server. The problem is that if you do this in Serverless Functions with short lifespans, you would need to bootstrap the definitions not just once but multiple times, which can cause latency issues. To handle latency efficiently, especially in short-lived Serverless Functions, you can use Edge Config. Edge Config stores flag definitions at the Edge, allowing super-fast retrieval via Edge Middleware or Serverless Functions and significantly reducing latency.

Cookies

For more complex contexts that require network requests, avoid making those requests directly in Edge Middleware or CDNs, as this can drastically increase latency. Edge Middleware and CDNs are fast precisely because they avoid making network requests to the origin server. Depending on the end user's location, accessing a distant origin can introduce significant latency: a user in Tokyo might need to connect to a server in the US before the page can load. Instead, a good pattern the Flags SDK offers to avoid this is cookies. You can use cookies to store context data.
The browser automatically sends cookies with each request in a standard format, providing consistent (no matter whether you are in Edge Middleware, the App Router, or the Pages Router), low-latency access to evaluation context data: `

You can also encrypt or sign cookies for additional security on the client side.

Dedupe

Dedupe helps you cache function results to prevent redundant evaluations. If multiple flags rely on a common context method, like checking a user's region, Dedupe ensures the method executes only once per runtime, regardless of how many times it's invoked. Additionally, similar to cookies, the Flags SDK standardizes headers, allowing easy access to them. Let's illustrate this with the following example: `

Server-side patterns for static pages

You can use feature flags on the client side, but that leads to unnecessary loaders/skeletons or layout shifts, which are never great. It does bring benefits, though, like static rendering. To maintain the benefits of static rendering while using server-side flags, the SDK provides a method called precompute.

Precompute

Precompute lets you decide which version of a page to display based on feature flags, and then cache that page so it can be statically rendered. You can precompute flag combinations in Middleware or Route Handlers: `

Next, inside middleware (or a route handler), we precompute these flags and create a static page for each combination of them: `

The user will never notice this because, as we use a rewrite, they only see the original URL. Now, on our page, we "invoke" our flags, passing the code from the params: `

By passing the code, we are not really invoking the flag again but getting its value right away. Our middleware decides which variation of our pages to display to the user. Finally, after rendering our page, we can enable Incremental Static Regeneration (ISR).
ISR allows us to cache the page and serve it statically on subsequent user requests: `

Using precompute is particularly beneficial when enabling ISR for pages that depend on flags whose values cannot be determined at build time. Headers, geolocation, and the like can't be known at build time, so we use precompute() so the Edge can evaluate them on the fly. In these cases, we rely on Middleware to dynamically determine the flag values, generate the HTML content once, and then cache it. At build time, we simply create an initial HTML shell.

Generate Permutations

If we prefer to generate static pages at build time instead of at runtime, we can use the generatePermutations function from the Flags SDK. This method lets us pre-generate static pages with different combinations of flags at build time. It's especially useful when the flag values are known beforehand; scenarios like A/B testing, or a marketing site with a single on/off banner flag, are ideal use cases. ` `

Conclusion

Vercel's Flags SDK stands out as a powerful yet straightforward solution for managing feature flags efficiently. With its ease of use, remarkable flexibility, and effective patterns for reducing latency, this SDK streamlines the development process and enhances your app's performance. Whether you're building a Next.js, React, or SvelteKit application, the Flags SDK provides intuitive tools that keep your application consistent, responsive, and maintainable. Give it a try, and see firsthand how it can simplify your feature management workflow!...
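The excerpt above omits the original code samples, so here is a toy sketch in plain JavaScript of two of the ideas it describes: the define-once flag with identify/decide, and a dedupe-style memoizer. This is an illustration of the concepts only, not the real Flags SDK API; check Vercel's documentation for the actual signatures.

```javascript
// Toy versions of the ideas described above: NOT the real Flags SDK API.

// dedupe: run the wrapped function once per runtime and cache its result.
function dedupe(fn) {
  let called = false;
  let result;
  return (...args) => {
    if (!called) {
      called = true;
      result = fn(...args);
    }
    return result;
  };
}

// A context method shared by several flags; dedupe ensures one execution.
let regionLookups = 0;
const getRegion = dedupe(() => {
  regionLookups += 1; // in real life this would be a network call
  return "eu-west";
});

// flag: define the key, identify (build entities), and decide (evaluate) once.
function flag({ key, identify, decide }) {
  const evaluate = (request) => {
    const entities = identify ? identify(request) : {};
    return decide({ entities });
  };
  evaluate.key = key;
  return evaluate;
}

// Defined once, with its context gathering and decision logic attached...
const showEuBanner = flag({
  key: "show-eu-banner",
  identify: () => ({ region: getRegion() }),
  decide: ({ entities }) => entities.region.startsWith("eu"),
});

// ...and called anywhere without re-supplying the key or the context.
console.log(showEuBanner({})); // true for our fake "eu-west" region
console.log(regionLookups);    // 1, no matter how many flags call getRegion()
```

The point of the pattern is visible even in this sketch: callers never repeat the flag's name or rebuild its context, so evaluations stay consistent across the app.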
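Under the hood, generating every permutation of a set of boolean flags is a small combinatorial exercise: n on/off flags yield 2^n page variants. A hedged sketch of the idea (this helper is hypothetical and is not the SDK's generatePermutations):

```javascript
// Hypothetical helper illustrating what "generate permutations" means:
// enumerate every combination of boolean flag values (2^n for n flags).
function permutations(flagKeys) {
  const combos = [];
  const n = flagKeys.length;
  for (let mask = 0; mask < 2 ** n; mask++) {
    const combo = {};
    flagKeys.forEach((key, i) => {
      // Read bit i of the mask to decide this flag's value in the combo
      combo[key] = Boolean(mask & (1 << i));
    });
    combos.push(combo);
  }
  return combos;
}

// A single on/off banner flag yields two pages to pre-render at build time.
console.log(permutations(["show-banner"]));
// Two flags yield four combinations, and so on.
console.log(permutations(["show-banner", "new-pricing"]).length); // 4
```

This is why build-time permutations suit cases like the on/off marketing banner mentioned above: the combination count stays tiny and every variant is known in advance.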

Let's innovate together!

We're ready to be your trusted technical partners in your digital innovation journey.

Whether it's modernization or custom software solutions, our team of experts can guide you through best practices and how to build scalable, performant software that lasts.

Prefer email? hi@thisdot.co