
Quick Guide to Playwright Fixtures: Enhancing Your Tests

Introduction

Following our recent blog post on migrating E2E tests from Cypress to Playwright, we've identified opportunities to enhance our test scripts further. In this guide, we'll delve into the basics of Playwright fixtures, demonstrating their utility and flexibility in test environments.

Playwright fixtures are reusable components that set up and tear down the environment or conditions necessary for tests. They are crucial for writing clean, maintainable, and scalable tests. Fixtures can handle tasks like opening a browser, initializing a database, or logging into an application—actions you might need before running your tests.

As a practical example, we'll revisit one of the tests from our previous post, enhancing it with a new fixture to streamline the testing process and significantly improve maintainability. This post is designed to provide the foundational skills to integrate fixtures into your testing workflows effectively, giving you the confidence to manage and maintain your test scripts more efficiently.

Creating Our First Fixture

To illustrate the power of fixtures in Playwright, let’s consider a practical example from a test scenario in our project. Below is a snippet of a test case from our newsletter page:

import { test, expect } from "@playwright/test";

// SIGN_UP_EMAIL_TEST is a test-data constant defined elsewhere in the suite
test.describe("Newsletter page", () => {
  test("subscribing to email newsletter should show success message", async ({ page }) => {
    await page.goto("/newsletter");
    await page
      .locator(`[data-testid="newsletter-email-input"]`)
      .fill(SIGN_UP_EMAIL_TEST);
    await page.locator(`[data-testid="newsletter-submit-button"]`).click();
    await expect(
      page.getByText(/You subscribed to the newsletter successfully!/).first(),
    ).toBeVisible();
  });
});

This test scenario navigates to the newsletter page, fills in an email, submits the form, and checks for a success message. To optimize our test suite, we'll refactor common actions such as navigation, form completion, and submission into a reusable fixture. This approach makes our tests cleaner and more maintainable, and it reduces redundancy across similar test scenarios.
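Note that the test calls page.goto("/newsletter") with a relative path, which only resolves if a baseURL is configured. A minimal playwright.config.ts fragment that would support this is shown below; the port and URL are assumptions, so adjust them to match your own dev server:

```typescript
// playwright.config.ts (fragment)
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    // Relative URLs like page.goto("/newsletter") resolve against this base.
    // "http://localhost:3000" is an assumption; point it at your environment.
    baseURL: "http://localhost:3000",
  },
});
```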

Implementing the Fixture

Here’s how the fixture looks:

// NewsletterPage.fixture.ts
import type { Page, Locator } from "@playwright/test";

export class NewsletterPage {
  private readonly emailInput: Locator;
  private readonly submitButton: Locator;

  constructor(public readonly page: Page) {
    this.emailInput = this.page.locator(
      `[data-testid="newsletter-email-input"]`,
    );
    this.submitButton = this.page.locator(
      `[data-testid="newsletter-submit-button"]`,
    );
  }

  async goto() {
    await this.page.goto("/newsletter");
  }

  async addEmail(text: string) {
    await this.emailInput.fill(text);
  }

  async submitNewsletter() {
    await this.submitButton.click();
  }
}

This fixture, implemented here as a page object class, encapsulates the actions of navigating to the page, filling out the email field, and submitting the form. By abstracting these actions, we keep our test cases simple and focused.

Refactoring the Test

With the fixture in place, let’s see how it changes our original test file:

import { test, expect } from "@playwright/test";
import { NewsletterPage } from "playwright/fixtures/NewsletterPage.fixture";

test.describe("Newsletter page", () => {
  let newsletterPage: NewsletterPage;

  test.beforeEach(({ page }) => {
    newsletterPage = new NewsletterPage(page);
  });

  test("subscribing to email newsletter should show success message", async ({ page }) => {
    await newsletterPage.goto();
    await newsletterPage.addEmail(SIGN_UP_EMAIL_TEST);
    await newsletterPage.submitNewsletter();
    await expect(
      page.getByText(/You subscribed to the newsletter successfully!/).first(),
    ).toBeVisible();
  });
});

The beforeEach hook creates a fresh NewsletterPage instance for every test, ensuring a clean and consistent starting point for each scenario. This practice is crucial for maintaining the integrity and reliability of your tests.

By leveraging the NewsletterPage fixture, each test within the "Newsletter page" suite starts with a clean and pre-configured environment. This setup improves test clarity and efficiency and aligns with best practices for scalable test architecture.

Conclusion

As we've seen, fixtures are powerful tools that help standardize test environments, reduce code redundancy, and ensure that each test operates in a clean state. By abstracting common setup and teardown tasks into fixtures, we can focus our testing efforts on what matters most, verifying the behavior and reliability of the software we're developing.

Remember, the key to successful test management is choosing the right tools and using them wisely to create scalable, maintainable, and robust testing frameworks. Playwright fixtures offer a pathway towards achieving these goals, empowering teams to build better software faster and more confidently.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.
