
Integrating Playwright Tests into Your GitHub Workflow with Vercel

Vercel previews offer a great way to test PRs for a project. They provide a predefined environment and don’t require any additional setup work from the reviewer to test changes quickly. Many projects also run end-to-end tests with Playwright as part of the review process to ensure that no regressions slip through uncaught.

Workflows usually configure Playwright to run against a copy of the project running on the GitHub Actions worker itself, perhaps with its dependencies in Docker containers. But why bother setting all of that up and configuring yet another environment for your app to run in when there’s a working preview right there? Better still, the Vercel preview runs on the same infrastructure as production, so testing against it gives you more confidence in the accuracy of your tests.

In this article, I’ll show you how you can run Playwright against the Vercel preview associated with a PR.

Setting up the Vercel Project

To set up a project in Vercel, we first need a codebase. I’m going to use the Next.js starter, but you can use whatever you like. The technology stack you choose doesn’t matter, as integrating Playwright will be the same experience either way.

You can create a Next.js project with the following command:

npx create-next-app@latest

If you’ve selected all of the defaults, you should be able to run npm run dev and navigate to the app at http://localhost:3000. Once the app runs locally, push the repository to GitHub and import it into Vercel to create the project. From then on, every PR will automatically get a preview deployment.

Setting up Playwright

We will set up Playwright the standard way and make a few small changes to the configuration and the example test so that they run against our site instead of the Playwright site. Set up Playwright in the existing project by running the following command:

npm init playwright@latest

Install all browsers when prompted, and answer no to the GitHub Actions workflow question, since the workflow we’re going to write works differently from the default one. The default workflow doesn’t set up a development server by default, and if one is enabled, it runs on the GitHub Actions virtual machine instead of against our Vercel deployment.
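For reference, a recent version of the Playwright scaffold adds roughly the following files to the project (the exact layout can vary by version):

playwright.config.ts
tests/example.spec.ts
tests-examples/demo-todo-app.spec.ts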

To make Playwright run tests against the Vercel deployment, we’ll need to define a baseURL in playwright.config.ts and send an additional header called X-Vercel-Protection-Bypass, where we'll pass a bypass secret so that our requests to the deployment don’t get blocked by deployment protection. I’ll cover how to generate this secret and add it to GitHub as an environment variable later.

export default defineConfig({
  ...

  use: {
    /* Base URL to use in actions like `await page.goto('/')`. */
    baseURL: process.env.DEPLOYMENT_URL ?? "http://127.0.0.1:3000",
    extraHTTPHeaders: {
      "X-Vercel-Protection-Bypass":
        process.env.VERCEL_AUTOMATION_BYPASS_SECRET ?? "",
    },

    /* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
    trace: "on-first-retry",
  },

  ...
});

Our GitHub workflow will set the DEPLOYMENT_URL environment variable automatically.
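If you want to try this locally against a specific deployment, you can also set the variable yourself before running the tests (the URL below is a placeholder):

DEPLOYMENT_URL=https://your-preview.vercel.app npx playwright test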

Now, in tests/example.spec.ts let’s rewrite the tests to work against the Next.js starter that we generated earlier:

import { test, expect } from "@playwright/test";

test("has title", async ({ page }) => {
  await page.goto("/");
  await expect(page).toHaveTitle(/Create Next App/);
});

test("has deploy button", async ({ page }) => {
  await page.goto("/");
  await expect(page.getByRole("link", { name: "Deploy now" })).toBeVisible();
});

This is similar to the default test provided by Playwright. The main difference is that we’re loading pages relative to baseURL instead of Playwright’s website. With that done and your Next.js dev server running, you should be able to run npx playwright test and see 6 passing tests against your local server: our two tests, each run in the three installed browser projects. Now that the boilerplate is handled, let’s get to the interesting part.

The Workflow

There is a lot going on in the workflow that we’ll be using, so we’ll go through it step by step, starting from the top. At the top of the file, we name the workflow and specify when it will run.

name: E2E Tests (Playwright)

on:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main

This workflow will run against new PRs that target the default branch and whenever new commits are merged into it. If you only want the workflow to run against PRs, you can remove the push trigger, as shown below.
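For example, a PR-only trigger would look like this:

on:
  pull_request:
    branches:
      - main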

Be careful about running workflows against your main branch if the deployment associated with it in Vercel is the production deployment. Some tests might not be safe to run against production, such as destructive tests or tests that modify customer data. In our simple example, however, this isn’t something to worry about.

Installing Playwright in the Virtual Machine

Workflows have jobs associated with them, and each job has multiple steps. Our test job takes a few steps to set up our project and install Playwright.

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: 'npm'

      - name: Install npm dependencies
        run: npm ci

      - name: Install system dependencies needed by Playwright
        run: sudo npx playwright install-deps

      - name: Install all supported Playwright browsers
        run: npx playwright install

The actions/checkout@v4 step clones our code, since it isn’t available straight out of the gate. After that, we install Node v22 with actions/setup-node@v4, which, at the time of writing, is the latest LTS available. The latest LTS version of Node should always work with Playwright. With the project cloned and Node installed, we can install our dependencies: npm ci installs packages using the exact versions specified in the lock file.

With our JS dependencies installed, we also have to install Playwright’s system dependencies. sudo npx playwright install-deps installs everything Playwright needs using apt, Ubuntu’s package manager. The command needs to run with sudo since elevated privileges are required to install system packages. These dependencies aren’t available through npm because the browser engines are native code with native library dependencies that aren’t in the registry.
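As a side note, recent Playwright releases can combine the browser and system dependency installation into a single command, which could replace the two install steps above:

npx playwright install --with-deps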

Vercel Preview URL and GitHub Action Await Vercel

The next couple of steps are where the magic happens. Two things need to happen before we can run our tests against the deployment. First, we need the URL of the deployment we want to test. Second, we want to wait until the deployment is ready before running our tests. We have written about this topic before on our blog if you want more information about this step, but we’ll reiterate some of it here.

Thankfully, the community has created two GitHub actions that allow us to do this: zentered/vercel-preview-url and UnlyEd/github-action-await-vercel. Here is how you can use them:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      ...

      - name: Get the Vercel preview url
        id: vercel_preview_url
        uses: zentered/vercel-preview-url@v1.4.0
        env:
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
        with:
          vercel_app: 'playwright-vercel-preview-demo'

      - uses: UnlyEd/github-action-await-vercel@v2.0.0
        env:
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
        with:
          deployment-url: ${{ format('https://{0}', steps.vercel_preview_url.outputs.preview_url) }}
          timeout: 420
          poll-interval: 15

There are a few things to take note of here. Firstly, some variables need to be set that will differ from project to project. vercel_app in the zentered/vercel-preview-url step needs to be set to the name of the Vercel project you created earlier.

The other variable that you need is the VERCEL_TOKEN environment variable. You can get this by going to Vercel > Account Settings > Tokens and creating a token in the form that appears. For the scope, select the account that has your project.

To put VERCEL_TOKEN into GitHub, navigate to your repo, go to Settings > Secrets and variables > Actions and add it to Repository secrets.
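If you prefer the command line, the GitHub CLI can set the secret as well (assuming gh is installed and authenticated against your repo); it will prompt you for the value:

gh secret set VERCEL_TOKEN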

We should also add VERCEL_AUTOMATION_BYPASS_SECRET. In Vercel, go to your project, then navigate to Settings > Deployment Protection > Protection Bypass for Automation. From there you can add the secret, copy it to your clipboard, and put it in your repository secrets just like we did with VERCEL_TOKEN.

With the variables taken care of, let’s take a look at how these two steps work together. You will notice that the zentered/vercel-preview-url step has an id set to vercel_preview_url. We need this so we can pass the URL it outputs to the UnlyEd/github-action-await-vercel action, which needs a URL to know which deployment to wait on.

Running Playwright

After the steps we just added, our deployment should be ready to go, and we can run our tests! The following steps run the Playwright tests against the deployment and save the results to GitHub:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      ...

      - name: Run E2E tests
        run: npx playwright test
        env:
          DEPLOYMENT_URL: ${{ format('https://{0}', steps.vercel_preview_url.outputs.preview_url) }}
          VERCEL_AUTOMATION_BYPASS_SECRET: ${{ secrets.VERCEL_AUTOMATION_BYPASS_SECRET }}

      - name: Upload the Playwright report
        uses: actions/upload-artifact@v4
        if: always() # Always run, regardless of whether the tests pass or fail
        with:
          name: playwright-report
          path: ${{ format('{0}/playwright-report/', github.workspace) }}
          retention-days: 30

In the first step, where we run the tests, we pass in the environment variables needed by our Playwright configuration in playwright.config.ts. DEPLOYMENT_URL is set to the Vercel deployment URL we got in an earlier step, and VERCEL_AUTOMATION_BYPASS_SECRET gets the secret of the same name directly from the GitHub secret store.

The second step uploads a report of the test results to GitHub, regardless of whether they passed or failed. If you need to access these reports, you can find them in the GitHub Actions log; there will be a link in the last step that lets you download a zip file.
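Once you’ve downloaded and unzipped a report, you can open it locally with Playwright’s built-in report viewer (the path below is a placeholder for wherever you extracted it):

npx playwright show-report path/to/playwright-report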

Once this workflow is in the default branch, it should start working for all new PRs! It’s important to note that this won’t work for forked PRs unless they are explicitly approved, as that’s a potential security hazard that can lead to secrets being leaked. You can read more about this in the GitHub documentation.

One Caveat

There’s one caveat worth mentioning with this approach: latency. Since your application is served by Vercel and not locally on the GitHub Actions instance itself, round trips to it will take longer, which can make your tests slower to execute. How much latency you see varies based on the region your runner ends up being hosted in and whether the pages you’re loading are served from the edge.
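If the extra latency causes timeouts or flaky assertions, you can give Playwright more headroom in playwright.config.ts. A minimal sketch; the values here are illustrative, not recommendations:

import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Allow each test more time overall to absorb slower round trips.
  timeout: 60_000,
  expect: {
    // Let assertions like toBeVisible() wait longer before failing.
    timeout: 10_000,
  },

  ...
});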

Conclusion

Running your Playwright tests against Vercel preview deployments is a robust way to exercise new code in an environment that closely mirrors production. It also eliminates the need to create and maintain a second test environment in which your project needs to work.

