This event was co-hosted by Tracy Lee (@ladyleet) of This Dot Labs and Gil Tayar (@giltayar), Senior Architect at Applitools.
A knowledgeable group of panelists shared their views on the state of front-end testing, which led to a lively discussion. Topics ranged from best practices to the evolution of tools and the future of testing in 2019.
Conversations ranged from the downsides of implementation testing details to assessing the reasons why more developers aren’t testing regularly and comparing the best use cases for integrated vs. isolated tests.
Featured guests and topics included:
Kent C. Dodds (The dangers of implementation testing details)
Gleb Bahmutov (Code coverage: Is it useless in e2e tests?)
Gil Tayar (Fast cross-browser visual testing with Applitools)
Rotem Mizrachi-Meidan (Eliminating performance regression)
James Evans (State of Selenium)
Kevin Lamping (Why aren’t enough devs testing?)
Shai Reznik (Isolated vs. integrated testing)
Here are some of the key takeaways:
The dangers of implementation testing details
with Kent C. Dodds (@kentcdodds), web development educator
Kent C. Dodds covered the dangers of using implementation details in testing, and the risk of inaccuracy if you don’t avoid them.
What are implementation details?
“You can think of an implementation detail as something about the implementation that has no relevance to the user of that component, library, or application.” — Kent C. Dodds
In other words, users of your code won’t typically use, see, or even know about implementation details.
Implementation details are risky business when it comes to testing. But why?
· Testing implementation details can break your test when you refactor application code. This leads to false negatives.
· On the other hand, the test may not fail when you break application code. This leads to false positives.
But: There are alternatives.
To avoid implementation details in testing, start by asking yourself a question: “What part of your untested codebase would be really bad if it broke?”
Next, try to narrow that code down to a unit or few units. Consider who its users are, and write down a list of ways that users could manually test it to make sure it’s not broken. Finally, turn that list into an automated test.
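The contrast Kent describes can be sketched in plain JavaScript. This is a minimal, illustrative example (the `createCounter` module and its names are hypothetical, not from Kent's talk): one test reaches into internal state, the other asserts only on what the user sees.

```javascript
// The unit under test: a tiny counter "component".
function createCounter() {
  const state = { clicks: 0 }; // internal state: an implementation detail
  return {
    state,
    increment() { state.clicks += 1; },
    render() { return `Count: ${state.clicks}`; }, // what the user actually sees
  };
}

// Brittle: reaches into internal state. A refactor that renames `clicks`
// (say, to `count`) breaks this test with no user-visible change — a false negative.
const a = createCounter();
a.increment();
console.assert(a.state.clicks === 1);

// Robust: asserts on the rendered output, so it survives refactors and
// fails only when the behavior the user relies on actually breaks.
const b = createCounter();
b.increment();
console.assert(b.render() === 'Count: 1');
```

The second test follows the advice above: it is the automated version of how a user would manually check that the counter isn't broken.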
For more on how to avoid implementation details in testing and be as accurate as you can, see Kent’s blog post on this topic.
Code coverage: Is it useless in e2e tests?
with Gleb Bahmutov (@bahmutov), VP of Engineering at Cypress
Gleb Bahmutov answered the question: how do we know when we’re done running tests?
Code coverage is often used, but it’s tricky. It’s only useful for unit tests. Code coverage doesn’t take all the different kinds of data and possible edge cases into account, making it less than ideal.
Data coverage and code coverage aren’t useful for end-to-end tests. Imagine trying to test each part of a car by just driving it around. Wouldn’t be very effective, right? Similarly, you can’t reach all parts of your code through its user interface alone.
The answer to this problem is element coverage, which measures how much of the application’s user interface your tests actually exercise.
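To make the idea concrete, here is a hypothetical sketch of element coverage: record which interactive elements the e2e tests touched, then report the gap. (The names `createElementCoverage` and `recordInteraction` are illustrative, not a Cypress API.)

```javascript
// Track which interactive elements (by selector) the tests touched.
function createElementCoverage(allSelectors) {
  const touched = new Set();
  return {
    recordInteraction(selector) { touched.add(selector); },
    report() {
      const missed = allSelectors.filter((s) => !touched.has(s));
      const percent = Math.round((touched.size / allSelectors.length) * 100);
      return { percent, missed };
    },
  };
}

// The app exposes four interactive elements; the tests click only three.
const coverage = createElementCoverage(['#login', '#search', '#cart', '#help']);
['#login', '#search', '#cart'].forEach((s) => coverage.recordInteraction(s));

const { percent, missed } = coverage.report();
console.assert(percent === 75);
console.assert(missed.length === 1 && missed[0] === '#help');
```

Unlike line-based code coverage, a report like this speaks the language of the UI: it tells you directly which parts of the interface no test ever drove.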
Head to his blog post for more information, and hopefully, more accurate testing.
Fast cross-browser visual testing with Applitools
with Gil Tayar (@giltayar), evangelist and architect at Applitools
Gil Tayar of Applitools educated us about the virtues of fast, cross-browser visual testing.
There are many types of automated testing, but only some have visual components — including component, functional, e2e, and production monitoring tests. And until recently, there weren’t many real tools to do visual testing. But luckily, that’s changing.
In the Applitools approach to testing, developers take screenshots of what they want to visually test and compare it with a baseline. If there are no differences, we’re good! If there are, it could be either a good difference… or a bug.
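The baseline-comparison idea above can be sketched very simply. This is not how Applitools works internally (real visual-testing tools use far smarter, perceptual comparisons); it is just a minimal illustration of diffing a new "screenshot" against a stored baseline, with pixels modeled as an array of values.

```javascript
// Diff a new screenshot (array of pixel values) against the baseline.
function diffAgainstBaseline(baseline, screenshot) {
  const changed = [];
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== screenshot[i]) changed.push(i);
  }
  // No differences: we're good. Differences: someone decides whether
  // it's an intended change (a new baseline) or a bug.
  return { matches: changed.length === 0, changed };
}

const baseline = [0, 0, 255, 255];
console.assert(diffAgainstBaseline(baseline, [0, 0, 255, 255]).matches === true);
console.assert(diffAgainstBaseline(baseline, [0, 9, 255, 255]).changed.length === 1);
```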
For curious potential users, Gil also shared a few key features of Applitools Cypress SDK:
· It’s a fast, well-thought-out e2e framework.
· It allows for both cross-browser and cross-widths visual testing, because responsive design is so important in today’s digital landscape.
· Applitools Cypress SDK takes DOM snapshots and uploads them to the visual grid, where the visual checks for many browser and viewport combinations run in parallel from a single test run.
· It offers root cause analysis: diffing the baseline DOM against the actual DOM to pinpoint which CSS or DOM change caused the visual error, for greater accuracy and efficiency.
Go to Applitools.com to try it out for yourself!
Eliminating performance regression
with Rotem Mizrachi-Meidan (@rotemmiz), Detox E2E at Wix
Rotem Mizrachi-Meidan wants to eliminate performance regression. To that end, he shared what the Detox e2e folks over at Wix are doing to ensure long-term performance success.
The purpose of CI is to automate the complex and repetitive tasks we do. The goal is, as always, more efficient and accurate testing.
To make sure that’s the case long-term, Rotem uses Detox instruments. These tools run inside the instrumented process, sampling and collecting metrics from inside the app. The recorded data is then readily accessible and can be put on graphs and dashboards to chart trends over time.
These instruments could serve as a model for what we can do to ensure that testing remains useful and accurate for the future…not just the moment.
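The sample-then-chart idea can be sketched in a few lines. This is not the Detox instruments API — `detectRegression` and its threshold are hypothetical — but it shows the core move: compare the latest run against the historical trend instead of a fixed number.

```javascript
// Flag a regression when the latest sample exceeds the historical
// average by more than a threshold factor (default: 20% slower).
function detectRegression(history, latest, threshold = 1.2) {
  const average = history.reduce((sum, v) => sum + v, 0) / history.length;
  return { average, regressed: latest > average * threshold };
}

// Startup-time samples (ms) from previous CI runs, plus today's run.
const history = [410, 395, 405, 390]; // average: 400 ms
console.assert(detectRegression(history, 400).regressed === false);
console.assert(detectRegression(history, 520).regressed === true);
```

Charting the same recorded samples on a dashboard, as Rotem describes, lets a human spot slower, gradual drifts that a single-run threshold would miss.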
His key takeaway?
“You can’t improve what you don’t measure.” -Rotem Mizrachi-Meidan
State of Selenium
with James Evans (@jimevansmusic), Selenium expert and QA at Salesforce
For those who don’t know, Selenium is a browser automation library that’s primarily used for UI-based testing, and it excels at cross-browser testing.
Wonderful innovations are in store for Selenium’s evolution in 2019. All WebDriver implementations currently support the spec to a high degree (test results are available at wpt.fyi).
Work has already begun on taking it to the next level for 2019, including bidirectional communication between the test code and the app under test.
Like most of the testing ecosystem, things for Selenium are looking up for the new year! We’re excited to see what comes next.
Why aren’t enough devs testing?
with Kevin Lamping (@klamping), front-end engineer and tester
Data suggests that at least half of developers are writing tests in some form. However, many don’t seem happy with their testing.
But why? Is it the tools? Not exactly: Surveys show that developers have actually been getting happier with the available testing tools over time.
Is it the documentation? Not exactly: Most developers don’t think testing is too hard to learn or implement. Courses are getting the hint that testing is significant, and education is more widely available than ever before.
Instead, many developers say they don’t have enough time or that they don’t know how to start writing automated tests.
Quoting Kent C. Dodds, Kevin thinks the problem is simple: there’s no obvious ROI to testing. If the benefits aren’t obvious, the drawbacks take precedence.
His solution: “Either sell testing, or stop asking permission and just do it.” — Kevin Lamping
You should write a test because it makes it easier to update your code. Your test should be functional for you, but also for your customers.
Better testing makes for better code, and better code makes for happier customers.
Isolated vs. integrated testing
with Shai Reznik (@Shai.Reznik), founder of Hirez.io, workshop leader at TestAngular.com
Shai Reznik stands up for testing in isolation.
The major benefits of all testing are confidence and code quality. But “unit” and “integration” testing are loaded terms. Instead, we should talk about testing dimensions: scope, boundaries, and subject, for example.
By “boundaries,” we’re referring to integrated vs. isolated tests.
Integrated tests involve testing several parts of the code together, while isolated tests test only one part of the code and fake its dependency.
Integrated testing effectively tests connectivity and how the different parts of our code work together.
However, it also has a few problems, like:
· Unclear boundaries
· Taking longer to run
· Longer test setup
· Multi-coverage
· Tests break more often
· Harder to trace bugs
· Hard to cover edge cases
· Looser code design
Ultimately, this results in a false sense of coverage.
Meanwhile, isolated tests have a few key benefits:
· Clear boundaries
· No multi-coverage issues
· Faster, easier setup
· Easier to trace bugs
· Easier to cover edge cases
Ultimately, in many cases, this leads to better code design.
So how do you decide between integrated and isolated tests?
The answer depends on what your project is. For small apps, just integrated testing is fine. But with mid-sized or larger apps, isolated testing or a combination is likely best.
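The isolated approach Shai describes can be sketched with a fake dependency. This is an illustrative example (the `checkout` function and `priceService` are hypothetical names): the real dependency, which might hit a network or database, is replaced so failures point straight at the unit under test.

```javascript
// The unit under test: totals the items and applies a 10% discount over 100.
function checkout(priceService, items) {
  const total = items.reduce((sum, item) => sum + priceService.priceOf(item), 0);
  return total > 100 ? total * 0.9 : total;
}

// Isolated test: the dependency is faked, so the test is fast, its
// boundaries are clear, and edge cases (like crossing the discount
// threshold) are easy to set up.
const fakePriceService = { priceOf: (item) => ({ book: 30, laptop: 90 }[item]) };

console.assert(checkout(fakePriceService, ['book', 'book']) === 60);    // under 100: no discount
console.assert(checkout(fakePriceService, ['book', 'laptop']) === 108); // 120 * 0.9
```

An integrated test of the same code would wire in the real price service, verifying connectivity at the cost of slower runs and fuzzier blame when something breaks, which is exactly the trade-off behind the small-app vs. larger-app advice above.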
🔗Where Do We Go From Here?
The State of Testing solidified the importance of testing for developers. Contrary to the popular belief that only QA engineers test, many developers are doing testing themselves and want better ways to do so.
Some of our key takeaways on front-end testing in 2019:
Devs are testing more than many think, and they want to be armed with the tools to do so more effectively.
Element coverage is more comprehensive, and often more effective, than code coverage alone.
Testing in isolation can lead to better accuracy and better code design for mid-sized and larger apps.
There is a broad scope and rich diversity of testing possibilities when it comes to tools, best practices, and innovation.
If you didn’t catch the State of Testing, you can watch it on YouTube here.
We want to extend special thanks to our co-hosts and sponsors at Applitools, the best way to implement any visual testing in your applications.