A Look at Playwright Parallelism

Playwright is an open-source automation library developed by Microsoft, designed for testing web applications across multiple browsers. It enables developers and testers to write scripts that simulate real user actions when interacting with web pages. Playwright supports the major browser engines, Chromium, Firefox, and WebKit (which power Chrome, Firefox, and Safari), including their headless modes. Recently, it has gained a lot of popularity for its robustness, ease of use, and cross-browser compatibility. In this blog post, we will take a look at one very useful aspect of Playwright: running tests in parallel.

Parallelism in Playwright

Parallelism in Playwright refers to running multiple spec files, or even multiple test cases within a spec file, simultaneously, greatly improving test execution speed. This is achieved by running the tests in worker processes, where each worker is an independent OS process orchestrated by the test runner. All workers have identical environments, and each starts its own browser instance.

Parallel execution is particularly beneficial in continuous integration and continuous deployment (CI/CD) pipelines, reducing overall build and testing times. Playwright's architecture inherently supports parallelism, and most modern test runners integrating with Playwright can leverage this feature. The parallel execution feature makes Playwright a highly efficient tool for large-scale web application testing.

Enabling Parallelism

By default, if you scaffold the project using npm init playwright@latest, parallelism is already enabled. Assuming that Playwright's configuration includes three browsers and there is a single spec file with two test cases, the total number of tests that Playwright needs to execute is 3x2 = 6.
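For reference, here is a minimal playwright.config.ts with three browser projects, a trimmed-down sketch of what the default scaffold generates (the real scaffolded file contains additional options):

import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  // Run test cases within each spec file in parallel
  fullyParallel: true,
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});

The spec file with the two test cases looks like this: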

import { test, expect } from '@playwright/test';

test('has title', async ({ page, browserName }) => {
  console.log(`Running 'has title' test in worker ${process.env.TEST_WORKER_INDEX} using browser ${browserName}`)
  await page.goto('https://playwright.dev/');

  // Expect a title "to contain" a substring.
  await expect(page).toHaveTitle(/Playwright/);
});

test('get started link', async ({ page, browserName }) => {
  console.log(`Running 'get started link' test in worker ${process.env.TEST_WORKER_INDEX} using browser ${browserName}`)
  await page.goto('https://playwright.dev/');

  // Click the get started link.
  await page.getByRole('link', { name: 'Get started' }).click();

  // Expects page to have a heading with the name of Installation.
  await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
});

Playwright decides how many workers to use based on several factors:

  • Hardware: by default, Playwright uses half of the machine's logical CPU cores, and the available system resources bound how many worker processes it can spin up.
  • The workers property in the playwright.config.ts file.
  • The --workers argument to the playwright test command. For example, npx playwright test --workers 4.

The --workers argument overrides the workers property in the configuration. In both cases, however, the effective number of workers is still capped by what the machine's hardware can support.
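The workers property accepts a number or a percentage string such as '50%' of the logical CPU cores. As a sketch, a common pattern is to cap workers on CI while falling back to Playwright's default locally:

import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Use at most 2 workers on CI; undefined falls back to the default
  // (half of the logical CPU cores)
  workers: process.env.CI ? 2 : undefined,
});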

Once it has determined the number of workers, and if the number of workers is larger than 1, Playwright will then decide how to split the work between workers. This decision also depends on several factors, such as the number of browsers involved and the granularity of the parallelism, which can be controlled in several ways:

  • In the test spec file, you can specify whether to run the test cases in parallel. To run tests in parallel, you can invoke test.describe.configure({ mode: 'parallel' }); before your test cases.
  • Alternatively, you can configure it per project by setting the fullyParallel: true property.
  • And finally, you can set it globally in the config, using the same property: fullyParallel: true.
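For instance, the project-level and global options both live in playwright.config.ts, while the spec-level option is the test.describe.configure call shown above. A sketch:

import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  // Global: run test cases in all spec files in parallel
  fullyParallel: true,
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
      // A per-project override is also possible
      fullyParallel: true,
    },
  ],
});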

Therefore, if there is more than one worker and parallel mode is enabled, Playwright will assign test cases to each worker as it becomes available. This scenario is ideal when each test is stateless, because resources are then used most efficiently.

> npx playwright test

Running 6 tests using 4 workers
[chromium] › example.spec.ts:3:5 › has title
Running 'has title' test in worker 0 using browser chromium
[chromium] › example.spec.ts:11:5 › get started link
Running 'get started link' test in worker 1 using browser chromium
[firefox] › example.spec.ts:11:5 › get started link
Running 'get started link' test in worker 3 using browser firefox
[firefox] › example.spec.ts:3:5 › has title
Running 'has title' test in worker 2 using browser firefox
[webkit] › example.spec.ts:3:5 › has title
Running 'has title' test in worker 4 using browser webkit
[webkit] › example.spec.ts:11:5 › get started link
Running 'get started link' test in worker 5 using browser webkit
  6 passed (3.9s)

Disabling Parallelism

What if, however, the tests are not stateless? Imagine one test changes the global configuration of the app via some sort of administration page, and the configuration affects different parts of the app, like enabling or disabling features. Other tests, which may be testing those features, would be impacted and would report incorrect results. In such cases, you might want to disable parallelism.

You can disable parallelism entirely by allowing just a single worker at any time. Following the instructions from the previous sections for configuring the number of workers, either set the workers: 1 option in the configuration file or pass --workers=1 on the command line.

> npx playwright test --workers=1

Let's have a look at our test output in this case:

Running 6 tests using 1 worker
[chromium] › example.spec.ts:3:5 › has title
Running 'has title' test in worker 0 using browser chromium
[chromium] › example.spec.ts:11:5 › get started link
Running 'get started link' test in worker 0 using browser chromium
[firefox] › example.spec.ts:3:5 › has title
Running 'has title' test in worker 1 using browser firefox
[firefox] › example.spec.ts:11:5 › get started link
Running 'get started link' test in worker 1 using browser firefox
[webkit] › example.spec.ts:3:5 › has title
Running 'has title' test in worker 2 using browser webkit
[webkit] › example.spec.ts:11:5 › get started link
Running 'get started link' test in worker 2 using browser webkit
  6 passed (8.2s)

Now, compare the time it took with one worker versus four: 8.2 seconds compared to only 3.9 seconds. Running everything in a single worker can be an inefficient use of resources, especially if you have a large test suite in which many tests are stateless and can run without impacting other tests.

Tweaking Parallelism

If you want some tests not to run in parallel, but still want to utilize your workers, you can do that too. Mirroring the configuration options from the previous sections:

  • In the test spec file, invoke test.describe.configure({ mode: 'serial' }); before your test cases to run them sequentially. Note that in serial mode, if one test fails, the remaining tests in the file are skipped.
  • Alternatively, set the fullyParallel: false property per project to run that project's tests sequentially.
  • And finally, you can use the same property to run tests sequentially at the global level.
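As a sketch, serializing a single spec file (with hypothetical stateful tests) looks like this:

import { test, expect } from '@playwright/test';

// All tests in this file run sequentially, in the same worker.
// Caveat: in 'serial' mode, if one test fails, the remaining tests are skipped.
test.describe.configure({ mode: 'serial' });

test('enable feature', async ({ page }) => {
  // hypothetical test that mutates global app state
  await page.goto('https://example.com/admin');
});

test('feature is visible', async ({ page }) => {
  // hypothetical test that depends on the state set up above
  await page.goto('https://example.com');
  await expect(page.getByRole('heading')).toBeVisible();
});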

This means you can still split the tests between workers, but tests run sequentially within each worker, depending on where the configuration is applied. For example, let's set the number of workers to 4, but set fullyParallel: false globally.
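The only configuration change needed for this experiment is the fullyParallel flag (a sketch):

import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Test cases now run sequentially within each worker;
  // the worker count is passed on the command line below
  fullyParallel: false,
});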

> npx playwright test --workers=4

Running 6 tests using 3 workers
[chromium] › example.spec.ts:3:5 › has title
Running 'has title' test in worker 0 using browser chromium
[webkit] › example.spec.ts:3:5 › has title
Running 'has title' test in worker 2 using browser webkit
[chromium] › example.spec.ts:11:5 › get started link
Running 'get started link' test in worker 0 using browser chromium
[webkit] › example.spec.ts:11:5 › get started link
Running 'get started link' test in worker 2 using browser webkit
[firefox] › example.spec.ts:3:5 › has title
Running 'has title' test in worker 1 using browser firefox
[firefox] › example.spec.ts:11:5 › get started link
Running 'get started link' test in worker 1 using browser firefox
  6 passed (3.7s)

With this setup, tests run sequentially within each worker, but since each worker provides an isolated environment with its own browser instance, tests in one project cannot impact tests in another, and it is safe to run the projects side by side. This is why setting fullyParallel: false globally is not the same as workers: 1. However, since there are only three such environments (one per browser), Playwright cannot fully utilize the 4 workers we asked for, and runs with 3 instead.

Conclusion

In conclusion, Playwright's workers are the core of its parallelism capabilities, enabling tests to run concurrently across different environments and browsers with efficiency. Through its many configuration properties, Playwright allows you to configure parallelism at multiple levels, offering a granular approach to optimizing your test runs. Beyond just executing tests in parallel on a single machine, Playwright's parallelism can even extend to splitting work across multiple machines through sharding, significantly enhancing the scalability of your testing.
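For example, a CI pipeline with three machines could give each machine a third of the suite via the --shard flag, with the second machine running --shard=2/3, and so on:

> npx playwright test --shard=1/3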

We hope this blog post was useful. For those interested in delving deeper into the world of Playwright and its powerful features, we've recently released a JS Drop titled Awesome Web Testing with Playwright, and we hosted Debbie O'Brien from Microsoft in a Modern Web episode.

