Quick Guide to Playwright Fixtures: Enhancing Your Tests

Introduction

Following our recent blog post on migrating E2E tests from Cypress to Playwright, we've identified opportunities to enhance our test scripts further. In this guide, we'll delve into the basics of Playwright fixtures, demonstrating their utility and flexibility in test environments.

Playwright fixtures are reusable components that set up and tear down the environment or conditions necessary for tests. They are crucial for writing clean, maintainable, and scalable tests. Fixtures can handle tasks like opening a browser, initializing a database, or logging into an application—actions you might need before running your tests.

As a practical example, we'll revisit one of the tests from our previous post and enhance it with a new fixture to streamline the testing process and improve maintainability. This post provides the foundational skills to integrate fixtures into your testing workflows, giving you the confidence to manage and maintain your test scripts more effectively.

Creating Our First Fixture

To illustrate the power of fixtures in Playwright, let’s consider a practical example from a test scenario in our project. Below is a snippet of a test case from our newsletter page:

test.describe("Newsletter page", () => {
  test("subscribing to email newsletter should show success message", async ({ page }) => {
    await page.goto("/newsletter");
    await page
      .locator(`[data-testid="newsletter-email-input"]`)
      .fill(SIGN_UP_EMAIL_TEST);
    await page.locator(`[data-testid="newsletter-submit-button"]`).click();
    await expect(
      page.getByText(/You subscribed to the newsletter successfully!/).first(),
    ).toBeVisible();
  });
});

This test navigates to the newsletter page, fills in an email, submits the form, and checks for a success message. To optimize our test suite, we'll refactor common actions like navigation, form completion, and submission into reusable fixtures. This approach makes our tests cleaner and more maintainable, and it reduces redundancy across similar test scenarios.

Implementing the Fixture

Here’s how the fixture looks:

// NewsletterPage.fixture.ts
import type { Page, Locator } from "@playwright/test";

export class NewsletterPage {
  private readonly emailInput: Locator;
  private readonly submitButton: Locator;

  constructor(public readonly page: Page) {
    this.emailInput = this.page.locator(
      `[data-testid="newsletter-email-input"]`,
    );
    this.submitButton = this.page.locator(
      `[data-testid="newsletter-submit-button"]`,
    );
  }

  async goto() {
    await this.page.goto("/newsletter");
  }

  async addEmail(text: string) {
    await this.emailInput.fill(text);
  }

  async submitNewsletter() {
    await this.submitButton.click();
  }
}

This fixture encapsulates the actions of navigating to the page, filling out the email field, and submitting the form. By abstracting these actions, we simplify and focus our test cases.
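
As an aside, Playwright also ships a built-in helper for test id attributes. The locator calls above could equally be written with page.getByTestId(), assuming the default testIdAttribute of "data-testid"; this is an optional variation, not part of the original fixture:

// Optional alternative inside the constructor, using Playwright's
// built-in test id helper (resolves the data-testid attribute by default):
this.emailInput = this.page.getByTestId("newsletter-email-input");
this.submitButton = this.page.getByTestId("newsletter-submit-button");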

Refactoring the Test

With the fixture in place, let’s see how it changes our original test file:

import { test, expect } from "@playwright/test";
import { NewsletterPage } from "playwright/fixtures/NewsletterPage.fixture";

test.describe("Newsletter page", () => {
  let newsletterPage: NewsletterPage;

  test.beforeEach(({ page }) => {
    newsletterPage = new NewsletterPage(page);
  });

  test("subscribing to email newsletter should show success message", async ({ page }) => {
    await newsletterPage.goto();
    await newsletterPage.addEmail(SIGN_UP_EMAIL_TEST);
    await newsletterPage.submitNewsletter();
    await expect(
      page.getByText(/You subscribed to the newsletter successfully!/).first(),
    ).toBeVisible();
  });
});

The beforeEach hook re-creates the NewsletterPage fixture before every test, so each scenario starts from a clean, consistent state. This practice is crucial for maintaining the integrity and reliability of your tests.

With the NewsletterPage fixture in place, each test in the "Newsletter page" suite starts from a clean, pre-configured environment. This setup improves test clarity and efficiency, and it aligns with best practices for scalable test architecture.
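
Going one step further, Playwright's own fixture mechanism, test.extend, can construct the page object for you and eliminate the beforeEach wiring entirely. The sketch below is a minimal illustration rather than part of the original project, and the file path is hypothetical:

// playwright/fixtures/test.fixture.ts (hypothetical path)
import { test as base } from "@playwright/test";
import { NewsletterPage } from "./NewsletterPage.fixture";

type Fixtures = {
  newsletterPage: NewsletterPage;
};

export const test = base.extend<Fixtures>({
  // Playwright builds a fresh NewsletterPage for every test that
  // requests this fixture, so no manual beforeEach reset is needed.
  newsletterPage: async ({ page }, use) => {
    await use(new NewsletterPage(page));
  },
});

export { expect } from "@playwright/test";

Tests then import test and expect from this file and receive the fixture directly in their arguments:

import { test, expect } from "playwright/fixtures/test.fixture";

test("subscribing to email newsletter should show success message", async ({ newsletterPage, page }) => {
  await newsletterPage.goto();
  await newsletterPage.addEmail(SIGN_UP_EMAIL_TEST);
  await newsletterPage.submitNewsletter();
  await expect(
    page.getByText(/You subscribed to the newsletter successfully!/).first(),
  ).toBeVisible();
});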

Conclusion

As we've seen, fixtures are powerful tools that help standardize test environments, reduce code redundancy, and ensure that each test operates in a clean state. By abstracting common setup and teardown tasks into fixtures, we can focus our testing efforts on what matters most: verifying the behavior and reliability of the software we're developing.

Remember, the key to successful test management is choosing the right tools and using them wisely to create scalable, maintainable, and robust testing frameworks. Playwright fixtures offer a pathway towards achieving these goals, empowering teams to build better software faster and more confidently.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams whose projects have missed their deadlines and keeping strategic digital initiatives on course. Check out our case studies and the clients who trust us with their engineering.

