Cross-browser Testing Archives – Automated Visual Testing | Applitools
https://applitools.com/blog/tag/cross-browser-testing/
Applitools delivers the next generation of test automation, powered by AI-assisted computer vision technology known as Visual AI.

How We Identified and Resolved a Bug Before Release Using Applitools Ultrafast Grid
February 7, 2023 – https://applitools.com/blog/cross-browser-testing-at-applitools-using-ultrafast-grid/

This is a story about how standard tests were unable to identify a bug: the CSS and HTML files were valid, but when rendered in Chrome on a Mac, images were not displayed correctly. Applitools Ultrafast Grid (UFG) helped us identify the bug at an early stage in development, before deploying the change. These types of bugs are a regular occurrence in any organization, and without UFG they can easily make it to production and remain there undetected until a customer complains about the problem. Translation support from Michael Sedley.

Front-end development is complicated, and it involves a wide range of knowledge and tools. The sheer variety of systems, browsers, and devices makes it almost impossible to be sure that an application will display correctly everywhere and that a minor code change introduces no visual regressions.

I ran into a real-life example during my first week as an Applitools employee, when I fixed a minor bug on my Linux machine. In the process, I inadvertently created a more serious visual bug that was only visible on certain devices.

Had UFG not alerted me to the bug, the code would have gone to production, and the result would have affected the usability of our flagship product on a Mac. This would have reflected badly on the professionalism of the company and damaged our reputation, trust in our product, and sales.

Understanding the problem

In recent months, we improved Applitools Eyes’ ability to perform visual testing on semi-transparent images. In the past, Eyes would test an entire screen or a defined region, but now, using the Storybook SDK, users can automatically test each component separately, without needing to define a test for each one.

For example, when testing a gallery component, Eyes can identify visual bugs and regressions across all screen elements – including the appearance of buttons, controls, fonts, shadows, and images – as well as backgrounds that include a transparency gradient.

Figure 1: Transparent Background

After implementing the transparency feature, a visual bug was reported.  In a screen capture of a semi-transparent screen region, unexpected grid lines appeared on top of the tested image.

The root cause of these lines wasn’t clear, so as a first step, we developed a test plan to reproduce the issue. I created a semi-transparent image, all gray (RGB = 127, 127, 127), with a constant alpha (transparency) channel (alpha = 0.5). Fortunately, the bug was simple to reproduce, and I managed to create clearly identifiable grid lines:

As I experimented with different transparency settings, it was clear that the color of the grid lines was the same as the color of the image, and that the lines became stronger as the image’s transparency decreased.

After further investigation, I discovered that the image viewer component uses tiles to represent large images, and the tiles had a one-pixel overlap. In the past, when all images were RGB images with no transparency, the overlapping pixel was not visible to the human eye. When I added semi-transparency, the adjacent tiles were stacked on top of each other, and sampling the resulting color inside the grid line produced a 192 grayscale value, which is the exact outcome of stacking two half-transparent gray layers over a white background:

(white × 0.5 + gray × 0.5) × 0.5 + gray × 0.5, given white = 255, gray = 128
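
To make the arithmetic concrete, here is that compositing formula evaluated directly (a quick JavaScript sketch of the math above, not code from the image viewer):

const white = 255;  // page background
const gray = 128;   // image color
const alpha = 0.5;  // image transparency

// One semi-transparent gray layer composited over the white background.
const oneLayer = gray * alpha + white * (1 - alpha);      // 191.5

// A second identical layer composited on top – the tile-overlap region.
const twoLayers = gray * alpha + oneLayer * (1 - alpha);  // 159.75

console.log(oneLayer, twoLayers);

Because the overlap region composites the gray twice, its value differs from the surrounding single-layer area – which is exactly why the grid lines stand out.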

Solving the preliminary bug

To resolve this bug, I recalculated the position and scale of each region so that there would be no overlap and no line between regions.
For example, if the first tile has width: 480px and left: 0, the next adjacent tile should be positioned at left: 480px, so that there is zero overlap between tiles.
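
As a rough sketch of the idea (illustrative JavaScript, not the actual viewer code):

// Lay out tiles edge to edge: each tile starts exactly where the previous one ends.
const tileWidth = 480;

function tileLeft(tileIndex) {
  // The old layout offset each tile by (tileWidth - 1), creating a
  // one-pixel overlap; exact multiples of the width remove the overlap.
  return tileIndex * tileWidth;
}

console.log(tileLeft(0), tileLeft(1)); // 0, 480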

I tested the results on my local (Linux) machine and assumed that the issue was resolved.

I didn’t realize that when I fixed this bug, I had also created a new issue which would have been almost impossible to anticipate.

How UFG identified the bug I created before deployment

At Applitools, we understand the importance of quality visual testing across browsers, so before deployment, every code change that impacts the Eyes interface must be tested by Applitools Eyes using UFG.

We are proud to “eat our own dogfood.” We rely on our visual testing tools to make sure that our products are visually perfect before release.

Our integration pipeline is configured to use UFG to test the UI change on multiple devices and screen settings so that we can confirm that the interface is consistent on every browser, operating system, and screen size.

We discovered that fixing the one-pixel-overlap bug created a new bug on certain systems, where a visible gap appeared between tiles. Frustratingly, this bug was not reproducible on any of the devices used in development and could not have been discovered with conventional manual visual testing.

The bug was only visible on Retina displays, which use HiDPI scaling.

What was interesting about this bug is that it highlighted an inconsistency in the way the same browser (Chrome) displays the same UI on different screen types.

What happened?

The bug and the solution

After some research, it turned out that there is (seemingly) a bug in the way Chrome behaves on Mac computers with Retina displays (see 1, 2, 3): using percentages or fractions of pixels for positioning and scaling elements can lead to unexpected results.

So, what is the solution?

The solution itself is very elegant – all we had to do was round each canvas scale factor so that the scaled canvas size would always be an integer:

// Snap the scale factor so that scale * canvasSize is always an integer
scale = Math.round(scale * canvasSize) / canvasSize;

Thus, if the width of the canvas is 480 and our scale factor is 0.17, the width of our scaled canvas would not be 480 × 0.17 = 81.6 but 82 – this way we maintain compatibility with Retina displays and prevent unwanted gaps from being created.
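
You can see the effect of the rounding with a quick sketch (the numbers mirror the example above):

const canvasSize = 480;
let scale = 0.17;

console.log(scale * canvasSize); // 81.6 – a fractional size that trips up Retina rendering

// Snap the scale so the scaled canvas size lands on a whole pixel.
scale = Math.round(scale * canvasSize) / canvasSize;
console.log(scale * canvasSize); // 82 (up to floating-point noise)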

This bug was easy to resolve once we were aware of it, but without UFG we would never have identified it using any of our test computers.

Conclusion

Maintaining a quality front end across all configurations is an ongoing challenge in every organization.

Solving a bug for one audience can create a bigger bug for a wider audience. In this article, we saw a classic example of a malfunction, where the initial solution we implemented only made things worse.

The number of users who use Applitools Eyes to test semi-transparent components is significantly lower than the number of Eyes users who work with Retina displays (most Apple users) – so the initial approach we took to solve the problem could have caused significantly more harm than good. Even worse, we could have caused significant damage to the user experience without knowing about it. No modern organization wants to rely on frustrated customer feedback to discover bugs in its applications or websites.

Using UFG reduces the likelihood that errors of this type will pass under the radar and allows developers, product managers, and all stakeholders in the development process to significantly reduce the fear factor in deploying new features. The UFG is insurance against platform-dependent visual bugs and provides the ability to perform true multi-platform coverage.

Don’t wait to discover your visual bugs from user reports. We invite you to try UFG – our team of experts is here to help with any questions or problems and to assist you in integrating Applitools Eyes and UFG into your pipeline. For more information, see Introduction to the Ultrafast Grid in the Applitools Knowledge Center.

Ultrafast Cross Browser Testing with Selenium Java
September 9, 2022 – https://applitools.com/blog/cross-browser-testing-selenium/

Learn why cross-browser testing is so important and an approach you can take to make cross-browser testing with Selenium much faster.

What is Cross Browser Testing?

Cross-browser testing is a form of functional testing in which an application is tested on multiple browsers (Chrome, Firefox, Edge, Safari, IE, etc.) to validate that functionality performs as expected.

In other words, it is designed to answer the question: Does your app work the way it’s supposed to on every browser your customers use?

Why is Cross Browser Testing Important?

While modern browsers generally conform to key web standards today, important problems remain. Differences in interpretations of web standards, varying support for new CSS or other design features, and rendering discrepancies between the different browsers can all yield a user experience that is different from one browser to the next.

A modern application needs to perform as expected across all major browsers. Not only is this a baseline user expectation these days, but it is critical to delivering a positive user experience and a successful app.

At the same time, the number of screen combinations (across screen sizes, devices, and versions) is rising quickly. In recent years the number of screens requiring testing has exploded, rising to an industry average of 81,480 screens and reaching 681,296 for the top 30% of companies.

Ensuring complete coverage of each screen on every browser is a common challenge. Effective and fast cross-browser testing can help alleviate the bottleneck from all these screens that require testing.

Source: 2019 State of Automated Visual Testing

How to Perform Modern Cross Browser Testing in Selenium with Visual Testing

Traditional approaches to cross-browser testing in Selenium have existed for a while, and while they still work, they have not scaled well to handle the challenge of complex modern applications. They can be time-consuming to build, slow to execute and challenging to maintain in the face of apps that change frequently.

Applitools Developer Advocate and Test Automation University Director Andrew Knight (AKA Pandy Knight) recently conducted a hands-on workshop where he explored the history of cross-browser testing, its evolution over time and the pros and cons of different approaches.

Andrew then explores a modern cross-browser testing solution with Selenium and Applitools. He walks you through a live demo (which you can replicate yourself by following his shared GitHub repo) and explains the benefits and how to get started. He also covers how you can accelerate test automation with integration into CI/CD to achieve Continuous Testing.

Check out the workshop below, and follow along with the GitHub repo here.

More on Cross Browser Testing in Cypress, Playwright or Storybook

At Applitools we are dedicated to making software testing faster and easier so that testers can be more effective and apps can be visually perfect. That’s why we used our industry-leading Visual AI to build the Applitools Ultrafast Grid, a key component of the Applitools Test Cloud that enables ultrafast cross-browser testing. If you’re looking to do cross-browser testing better but don’t use Selenium, be sure to check out our related guides for Cypress, Playwright, and Storybook for more info on how we can help.

What is Cross Browser Testing? Examples & Best Practices
July 14, 2022 – https://applitools.com/blog/guide-to-cross-browser-testing/

In this guide, learn everything you need to know about cross-browser testing, including examples, a comparison of different implementation options and how you can get started with cross-browser testing today.

What is Cross Browser Testing?

Cross-browser testing is a method for validating that the application under test works as expected on different browsers and devices, at varying viewport sizes. It can be done manually or as part of a test automation strategy. The tooling required for this activity can be built in-house or provided by external vendors.

Why is Cross Browser Testing Important?

When I began in QA I didn’t understand why cross-browser testing was important. But it quickly became clear to me that applications frequently render differently at different viewport sizes and with different browser types. This can be a complex issue to test effectively, as the number of combinations required to achieve full coverage can become very large.

A Cross Browser Testing Example

Here’s an example of what you might look for when performing cross-browser testing. Let’s say we’re working on an insurance application. I, as a user, should be able to view my insurance policy details on the website, using any browser on my laptop or desktop. 

This should be possible while ensuring:

  • The features remain the same
  • The look and feel, UI or cosmetic effects are the same
  • Security standards are maintained

How to Implement Cross Browser Testing 

There are various aspects to consider while implementing your cross-browser testing strategy.

Understand the scope == Data!

“Different devices and browsers: Chrome, Safari, Firefox, Edge”

Thankfully IE is not in the list anymore (for most)!

You should first figure out the important combinations of devices, browsers, and viewport sizes your user base accesses your application from.

PS: Each team member should have access to the product’s analytics data to understand usage patterns. This data, which includes OS and browser details (type, version, viewport sizes), is essential for planning and testing proactively, instead of reacting to situations (= defects) later.

This will tell you the different browser types, browser versions, devices, and viewport sizes you need to consider in your testing and test automation strategy.

Cross Browser Testing Techniques

There are various ways you can perform cross-browser testing. Let’s understand them.

Local Setup on a Single Dev / QA Machine

We usually have multiple browsers on our laptops / desktops. While there are other ways to get started, it is probably simplest to start implementing your cross-browser tests here. You also need a local setup to enable debugging and maintaining / updating the tests.

If mobile-web is part of the strategy, then you also need to have the relevant setup available on local machines to enable that.

Setting up the Infrastructure

While this may seem the easiest option, it can get out of control very quickly.

Examples:

  • You may not be able to install all supported browsers on your computer (ex: Safari is not supported on Windows OS). 
  • Browser vendors keep releasing new versions very frequently. You need to keep your browser drivers in sync with this.
  • Maintaining / using older versions of the browsers may not be very straightforward.
  • If you need to run tests on mobile devices, you may not have access to all the variety of devices. So setting up local emulators may be a way to proceed.

The right choice varies based on the requirements of the project, on a case-by-case basis.

As alternatives, we have the liberty to either create an in-house testing solution or go for a platform / license / third-party tool to support our device-farm needs.

In-House Setup of Central Infrastructure

You can set up a central infrastructure of browsers and emulators or real devices in your organization that can be leveraged by the teams. You will also need some software to manage the usage and allocation of these browsers and devices. 

This infrastructure can potentially be used in the following ways:

  • Triggered from local machine
    Tests can be triggered from any dev / QA machine to run on the central infrastructure.
  • For CI execution
    Tests triggered via Continuous Integration (CI), like Jenkins, CircleCI, Azure DevOps, TeamCity, etc., can be run against browsers / emulators set up on the central infrastructure.

Cloud Solution    

You can also opt to run tests against browsers / devices in a cloud-based solution. You can select from the device / browser options offered by various providers in the market, which give you wide coverage per your requirements without having to build, maintain, or manage it yourself. This can also be used to run tests triggered from local machines or from CI.

Modern, AI-Based Cross Browser Testing Solution: Applitools Ultrafast Test Cloud 

It is important to understand the evolution of browsers in recent years. 

  • They have started conforming to the W3C standard. 
  • They seem to have started adopting Continuous Delivery – well, at least releasing new versions at a very fast pace, sometimes multiple versions a week.
  • In a major development, many major browsers are adopting and building on the Chromium codebase. This makes these browsers very similar, except for rendering – which is still quite browser-specific.

We need to factor this change into our cross-browser testing strategy.

In addition, AI-based cross-browser testing solutions are becoming quite popular, which use machine learning to help scale your automation execution and get deep insights into the results – from a functional, performance and user-experience perspective.

To get hands-on experience with this, I signed up for a free Applitools account, which provides powerful Visual AI, and implemented a few tests using this tutorial as a reference.

How Does Applitools Visual AI Work as a Solution for Cross Browser Testing?

Integration with Applitools

Integrating Applitools with your functional automation is extremely easy. Simply select the relevant Applitools SDK based on your functional automation tech stack from here, and follow the detailed tutorial to get started.

Now, at any place in your test execution where you need functional or visual validation, add methods like eyes.checkWindow(), and you are set to run your test against any browser or device of your choice.
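
For instance, with the Applitools Eyes SDK for Cypress, a test might look like this (a minimal sketch – the app name, test name, and route are placeholders, and it assumes the @applitools/eyes-cypress SDK is installed and configured):

describe('Insurance policy page', () => {
  it('shows policy details on any browser', () => {
    cy.eyesOpen({ appName: 'Insurance App', testName: 'Policy details' });
    cy.visit('/policy');                  // navigate like a real user
    cy.eyesCheckWindow('Policy details'); // one call covers functional + visual validation
    cy.eyesClose();
  });
});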

Reference: https://applitools.com/tutorials/overview/how-it-works.html

AI-Based Cross Browser Testing

Now you have your tests ready and running against a specific browser or device, scaling for cross-browser testing is the next step.

What if I told you that, just by adding the different device combinations, you could leverage the same single script to get functional and visual test results on every combination specified – covering the cross-browser testing aspect as well?

Seems too far-fetched?

It isn’t. That is exactly what Applitools Ultrafast Test Cloud does!

Adding the lines of code below will do the magic. You can also change the configuration per your requirements.

(The example below is from the Selenium-Java SDK. Similar configuration can be supplied for the other SDKs.)

// Add browsers with different viewports
config.addBrowser(800, 600, BrowserType.CHROME);
config.addBrowser(700, 500, BrowserType.FIREFOX);
config.addBrowser(1600, 1200, BrowserType.IE_11);
config.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
config.addBrowser(800, 600, BrowserType.SAFARI);

// Add mobile emulation devices in Portrait mode
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
config.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);

// Set the configuration object to Eyes
eyes.setConfiguration(config);

Now, when you run the test again – say, against Chrome on your laptop – you will see results in the Applitools dashboard for all the browser and device combinations provided above.

You may be wondering: the test ran just once, on Chrome. How did the results from all the other browsers and devices come up – and so fast?

This is what Applitools Ultrafast Grid (a part of the Ultrafast Test Cloud) does under the hood:

  • When the test starts, the browser configuration is passed from the test execution to the Ultrafast Grid.
  • For every eyes.checkWindow call, the information captured (DOM, CSS, etc.) is sent to the Ultrafast Grid.
  • The Ultrafast Grid will render the same page / screen on each browser / device provided by the test (think of this as playing a downloaded video in airplane mode).
  • Once rendered in each browser / device, a visual comparison is done and the results are sent to the Applitools dashboard.

What I like about this AI-based solution is that:

  • I create my automation scripts for different purposes – functional, visual, and cross-browser testing – in one go
  • There is no need to maintain devices
  • There is no need to create different setups for different types of testing
  • The AI algorithms start providing results from the first run – “no training required”
  • I can leverage the solution on any kind of setup
    • i.e., running the scripts through my IDE, terminal, or CI/CD
  • I can leverage the solution for web, mobile web, and native apps
  • I can integrate visual testing results as part of my CI execution
  • Rich information is available in the dashboard, including easy baseline updates, Root Cause Analysis, and defect reporting in Jira or Rally
  • I can ensure there are no contrast issues (part of accessibility testing) in my execution at scale

Here is a screenshot of the Applitools dashboard after I ran my sample tests.

Cross Browser Testing Tools and Applitools Visual AI

The Ultrafast Test Grid and Applitools Visual AI can be integrated into many popular free and open-source test automation frameworks to easily supercharge their effectiveness as cross-browser testing tools.

Cross Browser Testing in Selenium

As you saw above in my code sample, Ultrafast Grid is compatible with Selenium. Selenium is the most popular open source test automation framework. It is possible to perform cross browser testing with Selenium out of the box, but Ultrafast Grid offers some significant advantages. Check out this article for a full comparison of using an in-house Selenium Grid vs using Applitools.

Cross Browser Testing in Cypress

Cypress is another very popular open source test automation framework. However, it can only natively run tests against a few browsers at the moment – Chrome, Edge and Firefox. The Applitools Ultrafast Grid allows you to expand this list to include all browsers. See this post on how to perform cross-browser tests with Cypress on all browsers.

Cross Browser Testing in Playwright

Playwright is an open source test automation framework that is newer than both Cypress and Selenium, but it is growing quickly in popularity. Playwright has some limitations on doing cross-browser testing natively, because it tests “browser projects” and not full browsers. The Ultrafast Grid overcomes this limitation. You can read more about how to run cross-browser Playwright tests against any browser.

Pros and Cons of Each Technique (Comparison)

Infrastructure

  • Local Setup – Pros: fast feedback on the local machine. Cons: needs to be repeated for each machine where the tests need to execute; not all configurations can be set up locally.
  • In-House Setup – Pros: no inbound / outbound connectivity required. Cons: needs considerable effort to set up, maintain, and update the infrastructure on a continued basis.
  • Cloud Solution – Pros: no effort required to build / maintain / update the infrastructure. Cons: needs inbound and outbound connectivity from the internal network; latency issues may be seen as requests go to cloud-based browsers / devices.
  • AI-Based Solution (Applitools) – Pros: no effort required to set up.

Setup and Maintenance

  • Local Setup – to be taken care of by each team member from time to time, including OS / browser version updates.
  • In-House Setup – to be taken care of by the internal team from time to time, including OS / browser version updates.
  • Cloud Solution – to be taken care of by the service provider.
  • AI-Based Solution (Applitools) – to be taken care of by the service provider.

Speed of Feedback

  • Local Setup – slowest, as all dependencies must be taken care of and the test needs to be repeated for each browser / device combination.
  • In-House Setup – depends on concurrent usage due to multiple test runs.
  • Cloud Solution – depends on network latency; network issues may cause intermittent failures; also depends on the reliability and connectivity of the service provider.
  • AI-Based Solution (Applitools) – fast and seamless scaling.

Security

  • Local Setup – best, as everything is in-house, using internal firewalls, VPNs, network, and data storage.
  • In-House Setup – best, as everything is in-house, using internal firewalls, VPNs, network, and data storage.
  • Cloud Solution – high risk: needs inbound network access from the service provider to the internal test environments; browsers / devices have access to the data generated by running the test, so cleanup is essential; no control over who has access to the cloud service provider’s infrastructure or whether they access your internal resources.
  • AI-Based Solution (Applitools) – low risk: there is no inbound connection to your internal infrastructure; tests run on your internal network, so no data is stored on Applitools servers (other than the screenshots used for comparison with the baseline).

My Learning from this Experience

  • A good cross-browser testing strategy reduces the risk that functionality or the visual experience will not work as expected on the browsers and devices your users actually use. A good strategy will also optimize the testing effort required to do this. To enable this, you need data that provides insights about your users.
  • It is important to have a holistic view of how your team will leverage cross-browser testing (ex: manual testing, automation, local execution, CI-based execution, etc.) before you start your implementation.
  • Sometimes the easiest way may not be the best – ex: automating against the browsers on your own computer will not scale. At the same time, using technology like the Applitools Ultrafast Test Cloud is very easy – you end up writing less code and get increased functional and visual coverage at scale.
  • You need to think about the ROI of your approach and whether it achieves the objectives of cross-browser testing. The ROI calculation should include:
    • Effort to implement, maintain, execute, and scale the tests
    • Effort to set up and maintain the infrastructure (hardware and software components)
    • Ability to get deterministic and reliable feedback from test execution

Summary

Depending on your project strategy, scope, manual or automation requirements, and of course the hardware or infrastructure combinations, you should make a choice that not only suits the requirements but gives you the best returns and results.

Based on my past experiences, I am very excited about the Applitools Ultrafast Test Cloud – a unique way to scale test automation seamlessly. In the process, I ended up writing less code and got amazingly high test coverage with very high accuracy. I recommend everyone try it and experience it for themselves!

Get Started Today

Want to get started with Applitools? Sign up for a free account and check out our docs to get up and running today, or schedule a demo and we’ll be happy to answer any questions you may have.

Editor’s Note: This post was originally published in January 2022, and has been updated for accuracy and completeness.

Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid
June 29, 2022 – https://applitools.com/blog/comparing-cross-browser-testing-tools-selenium-grid-vs-applitools-ultrafast-grid/

How can you choose the best cross-browser testing tool for your needs? We’ll review the challenges of cross-browser testing and consider some leading cross-browser testing solutions.

Nowadays, testing a website or an app on a single browser or device can lead to disastrous consequences, and testing the same website or app on multiple browsers using ONLY the traditional functional testing approach may still let production issues and plenty of visual bugs slip through.

Combinations of browsers, devices, viewports, and screen orientations (portrait or landscape) can reach into the thousands. Manually testing this vast number of possibilities is no longer feasible – nor is simply running the usual functional testing scripts and hoping to cover the most critical aspects, regions, or functionalities of our sites.

In this article, we are going to focus on the challenges and leading solutions for cross-browser testing. 

The Challenges of Cross Browser Testing 

What is Cross Browser Testing?

Cross-browser testing makes sure that your web apps work across different web browsers and devices. Usually, you want to cover the most popular browser configurations or the ones specified as supported browsers/devices based on your organization’s products and services.

Why Do We Need Cross Browser Testing?

Basically, because rendering differs and modern web apps use responsive design. You also have to consider that each web browser handles JavaScript differently, and each browser may render things differently depending on viewport or device screen size. These rendering differences can result in costly bugs and a negative user experience.

Challenges of Cross Browser Testing Today

Cross-browser testing has been around for quite some time now. Traditionally, testers run multiple tests in parallel on different browsers, and this is fine from a functional point of view.

Today, we know for a fact that running only these kinds of traditional functional tests across a set of browsers does not guarantee your website or app’s integrity. But let’s define and understand the difference between Traditional Functional Testing and Visual Testing. Traditional functional testing is a type of software testing where the basic functionalities of an app are tested against a set of specifications. On the other hand, Visual Testing allows you to test for visual bugs, which are extremely difficult to uncover with the traditional functional testing approach.

As mentioned, traditional functional testing on its own will not capture the visual aspect and can lead to a lack of coverage. You have to take into consideration the possibility of visual bugs, regardless of the number of elements you actually test. Even if you tested all of them, you may encounter visual bugs that lead to false negatives – meaning your testing was done, your tests passed, and you still did not catch the bug.

Today we have mobile and IoT device proliferation, complex responsive design viewport requirements, and dynamic content. Since rendering the UI is subjective, the majority of cross-browser defects are visual.

To handle all these possibilities or scenarios, you need a tool or framework that not only runs tests but provides reliable feedback – and not just false positives or tests pending to be approved or rejected. 

When it comes to cross-browser testing, you have several options, same as for visual testing. In this article, we will explore some of the most popular cross-browser testing tools. 

Cross-Browser Testing with Your Own In-House Selenium Grid 

If you have the resources, time, and knowledge, you can spin up your own Selenium Grid and do some cross-browser testing. This may be useful based on your project size and approach.

As mentioned, if you understand the components and steps to accomplish this, go for it! 

Now, be aware: maintaining a home-grown Selenium Grid cluster is not an easy task. You may run into difficulties when running and maintaining hundreds of browser nodes. Because of this, most companies end up outsourcing this task to vendors like BrowserStack or LambdaTest in order to save time and energy and bring more stability to their Selenium Grid infrastructure.

Most of these vendors are really expensive, which means you will need a dedicated project budget just for running your UI tests on their cloud – not to mention the packages or plans you’ll have to acquire to run a decent number of parallel tests.

Considerations when Choosing Selenium Grid Solutions

When it comes to cross-browser testing and visual testing, you could use any of the available tools or frameworks, for instance LambdaTest or BrowserStack. But how can we choose? Which one is better? Are they all offering the same thing? 

Before choosing any Selenium Grid solution, there are some key inherent issues that we must take into consideration:

  1. With a Selenium Grid solution, you need to run each test multiple times, on each and every browser / device you would like to cover, resulting in much higher maintenance (if your tests fail 5% of the time and you now need to run each test 10 times on 10 different environments, you are adding much more failure / maintenance overhead).
  2. Cloud-based Selenium Grid solutions require a constant connection between the machine inside your network that is running the test and the browser in the cloud, for the entire test execution time. Many grid solutions have reliability issues around this, causing environment / connection failures on some tests; when executing tests at scale, this results in additional failures the team needs to analyze.
  3. If you try to use a cloud-based Selenium Grid solution to test an internal application, you need to set up a tunnel from the cloud grid to your company’s network, which creates a security risk and adds performance / reliability issues.
  4. Another critical factor for traditional “WebDriver-as-a-Service” platforms is speed. Tests can take 2–4x as long to complete on those platforms compared to running them on local machines.

Cross-Browser Testing with Applitools Ultrafast Grid

Applitools Ultrafast Grid is the next generation of cross-browser testing. With the Ultrafast Grid, you can run functional and visual tests once, and it instantly renders all screens across all combinations of browsers, devices, and viewports. 

Visual AI is a technology that improves snapshot comparisons. It goes deeper than pixel-to-pixel comparisons to identify changes that would be meaningful to the human eye.

Visual snapshots provide a much more robust, comprehensive, and simpler mechanism for automating verifications. Instead of writing hundreds of lines of assertions with locators, you can write a single-line snapshot capture using Applitools Eyes.

When you compound that stability with the modern cross-platform testing technology of the Ultrafast Test Grid, it multiplies. This improved efficiency helps deliver high-quality apps on time, without the need for multiple suites or test scripts.

Consider the time it currently takes to complete a full testing cycle on your end using traditional cross-browser testing solutions – installing, writing, running, analyzing, reporting on, and maintaining your tests. Engineers now have Ultrafast Grid and Visual AI technology that can easily be set up in your framework and is capable of testing large, modern apps across multiple environments in just minutes.

Traditional cross-browser testing solutions that offer visual testing usually provide it as a separate feature or add-on that you have to pay for. What this feature does is basically take screenshots for you to compare with previously taken screenshots. You can imagine the amount of time it takes to accept or reject all these tests – and most of them will not necessarily bring useful intel, as the website or app may not change from one day to the next.

The Ultrafast Grid goes beyond simple screenshots. Applitools SDKs upload DOM snapshots, not screenshots, to the Ultrafast Grid. Snapshots include all the resources needed to render a page (HTML, CSS, etc.) and are much smaller than screenshots, so they upload faster.

To learn more about the Ultrafast Grid functionality and configuration, take a look at this article > https://applitools.com/docs/topics/overview/using-the-ultrafast-grid.html

Benefits and Differences when using the Applitools Ultrafast Grid

Here are some of the benefits and differences you’ll find when using this framework:

  1. The Ultrafast Grid uses containers to render web pages on different browsers in a much faster and more reliable way, maximizing speed.
  2. The Ultrafast Grid does not always upload a snapshot for every page. If a page’s resources didn’t change, the Ultrafast Grid doesn’t upload them again. Since most page resources don’t change from one test run to another, there’s less to transfer, and upload times are measured in milliseconds.
  3. As mentioned above, with the Applitools Ultrafast Grid you only need to run the test once, and you’ll get results from all browsers / devices. Now that most browsers are W3C compliant, the chances of facing functional differences between browsers (e.g., a button clicks on one browser and doesn’t click on another) are negligible, so it’s sufficient to run the functional tests just once – this will still find common browser compatibility issues like rendering / visual differences between browsers.
  4. You can use one algorithm on top of another. Other solutions only offer a level of comparison based on three modes – Strict, Suggested (Normal), or Relax – and this is useful to some extent. But what happens if you need a certain region of the page to use a different comparison algorithm? This is possible using the Applitools Region Types feature:
  (Images courtesy of the AKC.)

  5. All of the above occurs on multiple browser and device combinations at the same time. This is possible using the Ultrafast Grid configuration. For more information, check out this article > https://applitools.com/docs/topics/sdk/vg-configuration.html
  6. Applitools offers a free version that lets you use almost all of the framework’s features. This is really cool and helpful, as you can explore and use high-level features like Visual AI, cross-browser testing, and visual testing without having to worry about the minutes left on your free trial, as with other solutions.
  7. One of the unique and cool features of Applitools is the power of its automated maintenance capabilities, which prevent the need to approve or reject the same change across different screens / devices. This reduces the overhead involved with managing baselines from different browser and device configurations.
  (Images courtesy of the AKC.)

Final Thoughts

Selenium Grid solutions are everywhere, and the price varies between vendors and features. If you have infinite time, infinite resources, and an infinite budget, it would be ideal to run all the tests on all the browsers and analyze the results on every code change / build. But for a company trying to optimize velocity and run tests on every pull request / build, the Applitools Ultrafast Grid provides a compelling balance between performance, stability, cost, and risk.

Our Best Test Automation Videos of 2022 (So Far)
May 20, 2022 – https://applitools.com/blog/best-test-automation-videos-2022/

We’re approaching the end of May, which means we’re just a handful of weeks from the midpoint of 2022 already. If you’re like me, you’re wondering where the year has gone. Maybe it has to do with life in the northeastern US where I live, where we’ve really just had our first week of warm weather. Didn’t winter just end?

As always, the year is flying by, and it can be hard to keep up with all the great videos or events you might have wanted to watch or attend. To help you out, we’ve rounded up some of our most popular test automation videos of 2022 so far. These are all top-notch workshops or webinars with test automation experts sharing their knowledge and their stories – you’ll definitely want to check them out.

Cross Browser Test Automation

Cross-browser testing is a well-known challenge to test automation practitioners. Luckily, Andy Knight, AKA the Automation Panda, is here to walk you through a modern approach to getting it done. Whether you use Cypress, Playwright, or are testing Storybook components, we have something for you.

Modern Cross Browser Testing with Cypress

For more, see this blog post: How to Run Cross Browser Tests with Cypress on All Browsers (plus bonus post specifically covering the live Q&A from this workshop).

Modern Cross Browser Testing in JavaScript Using Playwright

For more, see this blog post: Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser.

Modern Cross Browser Testing for Storybook Components

For more, see this blog post: Testing Storybook Components in Any Browser – Without Writing Any New Tests!

Test Automation with GitHub or Chrome DevTools

GitHub and Chrome DevTools are both incredibly popular with the developer and testing communities – odds are if you’re reading this you use one or both on a regular basis. We recently spoke with developer advocates Rizel Scarlett of GitHub and Jecelyn Yeen of Google as they explained how you can leverage these extremely popular tools to become a better tester and improve your own testing experience. Click through for more info about each video and get watching.

Make Testing Easy with GitHub

For more, see this blog post: Using GitHub Copilot to Automate Tests.

Automating Tests with Chrome DevTools Recorder

For more, see this blog post: Creating Your First Test With Google Chrome DevTools Recorder.

Test Automation Stories from Our Customers

When it comes to implementing and innovating around test automation, you’re never alone, even though it doesn’t always feel that way. Countless others are struggling with the same challenges that you are and coming up with solutions. Sometimes all it takes is hearing how someone else solved a similar problem to spark an idea or gain a better understanding of how to solve your own.

Accelerating Visual Testing

Nina Westenbrink, Software Engineer at a leading European telecom, talks about how visual testing of the company’s design system was sped up and simplified, offering helpful tips and tricks along the way. Nina also speaks about her career as a woman in testing and how to empower women and overcome biases in software engineering.

Continuously Testing UX for Enterprise Platforms

Govind Ramachandran, Head of Testing and Quality Assurance for Asia Technology Services at Manulife Asia, discusses challenges around UI/UX testing for enterprise-wide digital programs. Check out his blueprint for continuous testing of the customer experience using Figma and Applitools.

This is just a taste of our favorite videos that we’ve shared with the community from 2022. What were yours? You can check out our full video library here, and let us know your own favorites @Applitools.

Testing Storybook Components in Any Browser – Without Writing Any New Tests!
March 31, 2022 – https://applitools.com/blog/storybook-components-cross-browser-testing/

Learn how to automatically do ultrafast cross-browser testing for Storybook components without needing to write any new test automation code.

Let’s face it: modern web apps are complex. If a team wants to provide a seamless user experience on a deadline, they need to squeeze the most out of the development resources they have. Component libraries help tremendously. Developers can build individual components for small things like buttons and big things like headers to be used anywhere in the frontend with a consistent look and feel.

Storybook is one of the most popular tools for building web components. It works with all the popular frameworks, like React, Angular, and Vue. With Storybook, you can view tweaks to components as you develop their “stories.” It’s awesome! However, manually inspecting components only works small-scale when you, as the developer, are actively working on any given component. How can a team test their Storybook components at scale? And how does that fit into a broader web app testing strategy?

What if I told you that you could automatically do cross-browser testing for Storybook components without needing to define any new tests or write any new automation code? And what if I told you that it could fit seamlessly into your existing development workflow? You can do this with the power of Applitools and your favorite CI tool! Let’s see how.

Adding Visual Component Testing to Your Strategy

Historically, web app testing strategies divide functional testing into three main levels:

  1. Unit testing
  2. Integration testing for APIs
  3. End-to-end testing for UIs and APIs

These three levels make up the classic Testing Pyramid. Each level of testing mitigates a unique type of risk. Unit tests pinpoint problems in code, integration tests catch problems where entities meet, and end-to-end tests exercise behaviors like a user.

The rise of frontend component libraries raises an interesting question: Where do components belong among these levels? Components are essentially units of the UI. In that sense, they should be tested individually as “UI units” to catch problems before they become widespread across multiple app views. One buggy component could unexpectedly break several pages. However, to test them properly, they should be rendered in a browser as if they were “live.” They might even call APIs indirectly. Thus, arguably, component testing should be sandwiched between traditional integration and end-to-end testing.

Web app testing levels, showing where component testing belongs in relation to other levels.

Wait, another level of testing? Nobody has time for that! It’s hard enough to test adequate coverage at the three other levels, let alone automate those tests. Believe me, I understand the frustration. Unfortunately, component libraries bring new risks that ought to be mitigated.

Thankfully, Applitools provides a way to visually test all the components in a Storybook library with the Applitools Eyes SDK for Storybook. All you need to do is install the @applitools/eyes-storybook package into your web app project, configure a few settings, and run a short command to launch the tests. Applitools Eyes will turn each story for each component into a visual test case. On the first run, it will capture a visual snapshot for each story as a “baseline” image. Then, subsequent runs will capture “checkpoint” snapshots and use Visual AI to detect any changes. You don’t need to write any new test code – tests become a side effect of creating new components and stories!
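
For example, a story file as simple as the sketch below is all the test input Applitools needs – every named export becomes a visual test case (this is illustrative React/Storybook code, not taken from a specific repo):

// Button.stories.jsx
import React from 'react';
import { Button } from './Button';

export default {
  title: 'Example/Button',
  component: Button,
};

// Each named export is a story – and, with eyes-storybook, a visual test.
export const Primary = () => <Button primary label="Button" />;
export const Secondary = () => <Button label="Button" />;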

In this sense, visual component testing with Applitools is like autonomous testing. Test generation and execution is completely automated, and humans review the results. Since testing can be done autonomously, component testing is easy to add to an existing testing strategy. It mitigates lots of risk for low effort. Since it covers components very well, it can also reduce the number of tests at other layers. Remember, the goal of a testing strategy is not to cover all the things but rather to use available resources to mitigate as much risk as possible. Covering a whole component library with an autonomous test run frees up folks to focus on other areas.

Adding Applitools Eyes to Your Web App

Let’s walk through how to set up visual component tests for a Storybook library. You can follow the steps below to add visual component tests to any web app that has a Storybook library. Give it a try on one of your own apps, or follow along with my example React app. You’ll also need Node.js installed as a prerequisite.

To get started, you’ll need an Applitools account to run visual tests. If you don’t already have an Applitools account, you can register for free using your email or GitHub account. That will let you run visual tests with basic features.

Once you get your account, store your API key as an environment variable. On macOS or Linux, use this command:

export APPLITOOLS_API_KEY=<your-api-key>

On Windows:

set APPLITOOLS_API_KEY=<your-api-key>

Next, you need to add the eyes-storybook package to your project. To install this package into a new project, run:

npm install --save-dev @applitools/eyes-storybook

Finally, you’ll need to add a little configuration for the visual tests. Add a file named applitools.config.js to the project’s root directory, and add the following contents:

module.exports = {
    concurrency: 1,
    batchName: "Visually Testing Storybook Components"
}

The concurrency setting defines how many visual snapshot comparisons the Applitools Ultrafast Test Cloud will perform in parallel. (With a free account, you are limited to 1.) The batchName setting defines a name for the batch of tests that will appear in the Applitools dashboard. You can learn about these settings and more under Advanced Configuration in the docs.

That’s it! Now, we’re ready to run some tests. Launch them with this command:

npx eyes-storybook

Note: If your components use static assets like image files, then you will need to append the -s option with the path to the directory for static files. In my example React app, this would be -s public.

The command line will print progress as it tests each story. Once testing is done, you can see all the results in the Applitools dashboard:

Results for visual component tests establishing baseline snapshots.

Run the tests a second time for checkpoint comparisons:

Results for visual component tests comparing checkpoints to baselines.

If you change any of your components, then tests should identify the changes and report them as “Unresolved.” You can then visually compare differences side-by-side in the Applitools dashboard. Applitools Eyes will highlight the differences for you. Below is the result when I changed a button’s color in my React app:

Comparing visual differences between two buttons after changing the color.

You can give the changes a thumbs-up if they are “right” or a thumbs-down if they are due to a regression. Applitools makes it easy to pinpoint changes. It also provides auto-maintenance features to minimize the number of times you need to accept or reject changes.

Adding Cross-Browser Tests for All Components

When Applitools performs visual testing, it captures snapshots from tests running on your local machine, but it does everything else in the Ultrafast Test Cloud. It rerenders those snapshots – which contain everything on the page – against different browser configurations and uses Visual AI to detect any changes relative to baselines.

If no browsers are specified for Storybook components, Applitools will run visual component tests against Google Chrome running on Linux. However, you can explicitly tell Applitools to run your tests against any browser or mobile device.

You might not think you need to do cross-browser testing for components at first. They’re just small “UI units,” right? Well, however big or small, different browsers render components differently. For example, a button may have rectangular edges instead of round ones. Bigger components are more susceptible to cross-browser inconsistencies. Think about a navbar with responsive rendering based on viewport size. Cross-browser testing is just as applicable for components as it is for full pages.

Configuring cross-browser testing for Storybook components is easy. All you need to do is add a list of browser configs to your applitools.config.js file like this:

module.exports = {
  concurrency: 1,
  batchName: "Visually Testing Storybook Components",
  browser: [
    // Desktop
    {width: 800, height: 600, name: 'chrome'},
    {width: 700, height: 500, name: 'firefox'},
    {width: 1600, height: 1200, name: 'ie11'},
    {width: 1024, height: 768, name: 'edgechromium'},
    {width: 800, height: 600, name: 'safari'},
    // Mobile
    {deviceName: 'iPhone X', screenOrientation: 'portrait'},
    {deviceName: 'Pixel 2', screenOrientation: 'portrait'},
    {deviceName: 'Galaxy S5', screenOrientation: 'portrait'},
    {deviceName: 'Nexus 10', screenOrientation: 'portrait'},
    {deviceName: 'iPad Pro', screenOrientation: 'landscape'},
  ]
}

This declaration includes ten unique browser configurations: five desktop browsers with different viewport sizes, and five mobile devices with both portrait and landscape orientations. Every story will run against every specified browser. If you run the test suite again, there will be ten times as many results!

Results from running visual component tests across multiple browsers.

As shown above, my batch included 90 unique test instances. Even though that’s a high number of tests, Applitools Ultrafast Test Cloud ran them in only 32 seconds! That really is ultrafast for UI tests.

Running Visual Component Tests Autonomously

Applitools Eyes makes it easy to run visual component tests, but to become truly autonomous, these tests should be triggered automatically as part of regular development workflows. Any time someone makes a change to these components, tests should run, and the team should receive feedback.

We can configure Continuous Integration (CI) tools like Jenkins, CircleCI, and others for this purpose. Personally, I like to use GitHub Actions because they work right within your GitHub repository. Here’s a GitHub Action I created to run visual component tests against my example app every time a change is pushed or a pull request is opened for the main branch:

name: Run Visual Component Tests

on:
  push:
  pull_request:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v2
      
      - name: Set up Node.js
        uses: actions/setup-node@v2

      - name: Install dependencies
        run: npm install

      - name: Run visual component tests
        run: npx eyes-storybook -s public
        env:
          APPLITOOLS_API_KEY: ${{ secrets.APPLITOOLS_API_KEY }}

The only extra configuration needed was to add my Applitools API key as a repository secret.
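
If you use the GitHub CLI, one way to add that secret (assuming you run this from the repository directory) is:

gh secret set APPLITOOLS_API_KEY --body "<your-api-key>"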

Maximizing Your Testing Value

Components are just one layer of complex modern web apps. A robust testing strategy should include adequate testing at all levels. Thankfully, visual testing with Applitools can take care of the component layer with minimal effort. Unit tests can cover how the code works, such as a component’s play method. Integration tests can cover API requests, and end-to-end tests can cover user-centric behaviors. Tests at all these levels together provide great protection for your app. Don’t neglect any one of them!

The post Testing Storybook Components in Any Browser – Without Writing Any New Tests! appeared first on Automated Visual Testing | Applitools.

]]>
How to Visually Test a Remix App with Applitools and Cypress https://applitools.com/blog/how-to-visually-test-remix-app-applitools-cypress/ Tue, 22 Mar 2022 20:59:50 +0000 https://applitools.com/?p=35712 Is Remix too new to be visually tested? Let’s find out with Applitools and Cypress.

The post How to Visually Test a Remix App with Applitools and Cypress appeared first on Automated Visual Testing | Applitools.

]]>


In this blog post, we answer a single question: how do we best visually test a Remix-based app?

We walk through Remix and build a demo app to best showcase the framework. Then, we take a deep dive into visual testing with Applitools and Cypress. We close on scaling our test coverage with the Ultrafast Test Cloud to perform cross-browser validation of the app.

So let’s begin our exciting journey of learning how to visually test a Remix-based app.

What is Remix?

Remix Logo

Web development is an ever-growing space, with almost as many ways to build web apps as there are stars in the sky. That variety ultimately translates into the sheer number of different User Interface (UI) frameworks and libraries available. One such library is React, which most people in the web app space have heard about, or have even used to build a website or two.

For those unfamiliar with React, it’s a declarative, component-based library that developers can use to build web apps across different platforms. While React is a great way to develop robust and responsive UIs, many moving pieces still happen behind the scenes. Things like data loading, routing, and more complex work like Server-Side Rendering are what a new framework called Remix can handle for React apps.

Remix is a full-stack web framework that optimizes data loading and routing, making pages load faster and improving overall User Experience (UX). The days are long past when our customers would wait minutes for a website to reload while moving from one page to another or waiting for their feed to update. Features like Server-Side Rendering, effective routing, and data loading have become a must for giving our users the experience they want and need. The Remix framework is an excellent open-source solution for delivering these features to our audience and improving their UX.

What Does Remix Mean For UI Testing?

Testing Remix with Cypress and Applitools

Our end-users shouldn’t care what framework we used to build a website. What matters to our users is that the app works and lets them achieve their goals as fast as possible. In the same way, testing principles always remain the same, so UI testing shouldn’t be impacted by the frameworks used to create an app. The basics of how we test stay the same, although some testing details may change. For example, in the case of an Angular app, we might need to adjust how we wait for the site to fully load by using a specialized test framework like Protractor.

Most tests follow a straightforward pattern of Arrange, Act, and Assert. Whether you are writing a unit test, an integration test, or an end-to-end test, everything follows this cycle of setting up the data, running through a set of actions and validating the end state.

When writing these end-to-end tests, we need to put ourselves in the shoes of our users. What matters most in this type of testing is replicating a set of core use-cases that our end-users go through. It could be logging into an app, writing a new post, or navigating to a new page. That’s why UI test automation frameworks like Applitools and Cypress are fantastic for testing – they are largely agnostic of the platform they are testing. With these tools in hand, we can quickly check Remix-based apps the same way we would test any other web application.

What about Remix and Visual Testing?

The main goal of testing is to confirm the app’s behavior as our users see and experience it. This is why simply loading UI elements and validating inner text or styling is not enough. Our customers are not interested in HTML or CSS. What they care about is what they can see and interact with on our site, not the code behind it. Markup checks alone cannot provide robust coverage of the complex UI that modern web apps have. We can close this gap with visual testing.

Contrasting what our tests see (with an image of code) with what our customers see (with an image of a website's UI).
Functional vs Visual Testing Perspective

Visual testing allows us to see our app from our customers’ point of view. And that’s where the Applitools Eyes SDK comes in! This visual testing tool can enhance the existing end-to-end test coverage to ensure our app is pixel-perfect.

Put simply, Applitools allows developers to efficiently compare visual elements across various screens to find visible defects. Applitools can record our UI elements in its platform and then monitor any visual regressions that our customers might encounter. More specifically, this testing framework exposes the visible differences between baseline snapshots and future snapshots.

Applitools has integrations with numerous testing platforms like Cypress, WebdriverIO, Selenium, and many others. For this article, we will showcase Applitools with Cypress to add visual test coverage to our Remix app.

Introducing Remix Demo App

We can’t talk about a framework like Remix without seeing it in practice. That’s why we put together a demo app to best showcase Remix and later test it with Applitools and Cypress.

A screenshot of a Remix demo app
Remix Demo App

We based this app on the Remix Developer Blog app that highlights the core functionalities of Remix: data loading, actions, redirects, and more. We shared this demo app and all the tests we cover in this article in this repository so that our readers can follow along.

Running the Demo App

Before diving into writing tests, we must ensure that our Remix demo application is running.

To start, we need to clone a project from this repository:

git clone https://github.com/dmitryvinn/remix-demo-app-applitools

Then, we navigate into the project’s root directory and install all dependencies:

cd remix-demo-app-applitools
npm install

After we install the necessary dependencies, our app is ready to start:

npm run dev

After we launch the app, it should be available at http://localhost:3000/, unless the port is already taken. With our Remix demo app fully functional, we can transition into testing Remix with Applitools and Cypress.

Visual Testing of Remix App with Applitools and Cypress

There is this great quote from a famous American economist, Richard Thaler: “If you want people to do something, make it easy.” That’s what Applitools and Cypress did by making testing easy for developers, so people don’t see it as a chore anymore.

To run our visual test automation using Applitools, we first need to set up Cypress, which will play the role of test runner. We can think about Cypress as a car’s body, whereas Applitools is an engine that powers the vehicle and ultimately gets us to our destination: a well-tested Remix web app.

Setting up Cypress

Cypress is an open-source JavaScript end-to-end testing framework developers can use to write fast, reliable, and maintainable tests. But rather than reinventing the wheel and talking about the basics of Cypress, we invite our readers to learn more about using this automation framework on the official site, or from this course at Test Automation University.

To install Cypress, we only need to run a single command:

npm install cypress

Then, we need to initialize the cypress folder to write our tests. The easiest way to do it is by running the following:

npx cypress open

This command will open Cypress Studio, which we will cover later in the article, but for now we can safely close it. We also recommend deleting sample test suites that Cypress created for us under cypress/integration.

Note: If npx is missing on the local machine, follow these steps on how to update the Node package manager, or run ./node_modules/.bin/cypress open instead.

Setting up Applitools

Installing the Applitools Eyes SDK with Cypress is a very smooth process. In our case, because we already had Cypress installed, we only need to run the following:

npm install @applitools/eyes-cypress --save-dev

To run Applitools tests, we need to get the Applitools API key, so our test automation can use the Eyes platform, including recording the UI elements, validating any changes on the screen, and more. This page outlines how to get this APPLITOOLS_API_KEY from the platform.

After getting the API key, we have two options on how to add the key to our tests suite: using a CLI or an Applitools configuration file. Later in this post, we explore how to scale Applitools tests, and the configuration file will play a significant role in that effort. Hence, we continue by creating applitools.config.js in our root directory.

Our configuration file will begin with the most basic setup: a single test thread (testConcurrency) for one browser (the browser field). We also need to add our APPLITOOLS_API_KEY under the apiKey field. The result will look something like this:

module.exports = {
  testConcurrency: 1,
  apiKey: "DONT_SHARE_OUR_APPLITOOLS_API_KEY",
  browser: [
    // Add browsers with different viewports
    { width: 800, height: 600, name: "chrome" },
  ],
  // set batch name to the configuration
  batchName: "Remix Demo App",
};
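
Alternatively, the key can stay out of the configuration file entirely: the Eyes SDK also reads the APPLITOOLS_API_KEY environment variable, so we could omit the apiKey field and export the key before running our tests:

export APPLITOOLS_API_KEY="DONT_SHARE_OUR_APPLITOOLS_API_KEY"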

Now, we are ready to move on to the next stage of writing our visual tests with Applitools and Cypress.

Writing Tests with Applitools and Cypress

One of the best things about Applitools is how nicely it integrates with our existing tests through a straightforward API.

For this example, we visually test a simple form on the Actions page of our Remix app.

An Action form in the demo remix app, showing a question: "What is more useful when it is broken?" with an answer field and an answer button.
Action Form in Remix App

To begin writing our tests, we need to create a new file named actions-page.spec.js in the cypress/integration folder:

Basic Applitools Test File

Since we rely on Cypress as our test runner, we will continue using its API for writing the tests. For the basic Actions page tests where we validate that the page renders visually correctly, we start with this code snippet:

describe("Actions page form", () => {
  it("Visually confirms action form renders", () => {
    // Arrange
    // ...

    // Act
    // ..

    // Assert
    // ..

    // Cleanup
    // ..
  });
});

We continue following the same pattern of Arrange-Act-Assert, but now we also want to ensure that we close all the resources we used while performing the visual testing. To begin our test case, we need to visit the Action page:

describe("Actions page form", () => {
  it("Visually confirms action form renders", () => {
    // Arrange
    cy.visit("http://localhost:3000/demos/actions");

    // Act
    // ..

    // Assert
    // ..

    // Cleanup
    // ..
  });
});

Now, we can begin the visual validation using the Applitools Eyes framework. We need to “open our eyes,” so to speak, by calling cy.eyesOpen(). It initializes our test runner so that Applitools can capture critical visual elements, just as we would with our own eyes:

describe("Actions page form", () => {
  it("Visually confirms action form renders", () => {
    // Arrange
    cy.visit("http://localhost:3000/demos/actions");

    // Act
    cy.eyesOpen({
      appName: "Remix Demo App",
      testName: "Validate Action Form",
    });

    // Assert
    // ..

    // Cleanup
    // ..
  });
});

Note: Technically speaking, cy.eyesOpen() should be a part of the Arrange step of writing the test, but for educational purposes, we are moving it under the Act portion of the test case.

Now, to move to the validation phase, we need Applitools to take a screenshot and match it against the existing version of the same UI elements. These screenshots are saved on our Applitools account, and unless we are running the test case for the first time, the Applitools framework will match these UI elements against the version that we previously saved:

describe("Actions page form", () => {
  it("Visually confirms action form renders", () => {
    // Arrange
    cy.visit("http://localhost:3000/demos/actions");

    // Act
    cy.eyesOpen({
      appName: "Remi Demo App",
      testName: "Validate Action Form",
    });

    // Assert
    cy.eyesCheckWindow("Action Page");

    // Cleanup
    // ..
  });
});

Lastly, we need to close our test runner for Applitools by calling cy.eyesClose(). With this step, we now have a complete Applitools test case for our Actions page:

describe("Actions page form", () => {
  it("Visually confirms action form renders", () => {
    // Arrange
    cy.visit("http://localhost:3000/demos/actions");

    // Act
    cy.eyesOpen({
      appName: "Remi Demo App",
      testName: "Validate Action Form",
    });

    // Assert
    cy.eyesCheckWindow("Action Page");

    // Cleanup
    cy.eyesClose();
  });
});

Note: Although we added a cleanup-stage with cy.eyesClose() in the test case itself, we highly recommend moving this method outside of the it() function into the afterEach() that will run for every test, avoiding code duplication.
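
A minimal sketch of that recommended structure:

describe("Actions page form", () => {
  afterEach(() => {
    // Runs after every test in this suite, so no case can forget the cleanup
    cy.eyesClose();
  });

  it("Visually confirms action form renders", () => {
    // Arrange
    cy.visit("http://localhost:3000/demos/actions");

    // Act
    cy.eyesOpen({
      appName: "Remix Demo App",
      testName: "Validate Action Form",
    });

    // Assert
    cy.eyesCheckWindow("Action Page");
  });
});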

Running Applitools Tests

After the hard work of planning and then writing our test suite, we can finally start running our tests. And it couldn’t be easier than with Applitools and Cypress! 

We have two options of either executing our tests by using Cypress CLI or Cypress Studio.

Cypress Studio is a great option when we first write our tests because we can walk through every case, stop the process at any point, or replay any failures. For these reasons, we will use Cypress Studio to demonstrate how these tests work.

We begin running our cases by invoking the following from the project’s root directory:

npm run cypress-open

This operation opens Cypress Studio, where we can select what test suite to run:

The Cypress Studio dashboard, where we can select actions-page.spec.js
Actions Tests in Cypress Studio
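
As a side note, cypress-open is not a built-in npm command; it assumes a package.json configured along these lines (a hypothetical sketch matching the commands used throughout this post):

{
  "scripts": {
    "dev": "remix dev",
    "cypress-open": "cypress open",
    "test": "cypress run"
  }
}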

To validate the result, we need to visit our Applitools dashboard:

The Applitools dashboard, displaying the Remix Demo App test with the Action Page.
Basic Visual Test in the Applitools Dashboard

To make it interesting, we can cause this test to fail by changing the text on the Actions page. We could change the heading to say “Failed Actions!” instead of the original “Actions!” and re-run our test. 

This change will cause our original test case to fail because it will catch a difference in the UI (in our case, it’s because of the intentional renaming of the heading). This error message is what we will see in the Cypress Studio:

Cypress Studio showing a red error message that reads, in part: "Eyes-Cypress detected diffs or errors during execution of visual tests."
Failed Visual Test in Cypress Studio

To further deal with this failure, we need to visit the Applitools dashboard:

Applitools dashboard showing the latest test results as "Unresolved."
Failed Visual Test in Applitools Dashboard

As we can see, the latest test run is shown as Unresolved, and we need to resolve the failure. To see what changed in the newest test run, we only need to click on the image in question:

A closer look at the Applitools visual test results, highlighting the areas where the text changed in magenta.
Closer Look at the Failed Test in Applitools Dashboard

A great thing about Applitools is that their visual AI algorithm is so advanced that it can test our application on different levels to detect content changes as well as layout or color updates. What’s especially important is that Applitools’ algorithm prevents false positives with built-in functionalities like ignoring content changes for apps with dynamic content. 

In our case, the test correctly shows that the heading changed, and it’s now up to us to either accept the new UI or reject it and call this failure a legitimate bug. Applitools makes it easy to choose the correct course of action as we only need to press thumbs up to accept the test result or thumbs down to decline it.

Accepting or Rejecting Test Run in Applitools Dashboard

In our case, the test case failed due to a visual bug that we introduced by “unintentionally” updating the heading. 

After finishing our work in the Applitools Dashboard, we can bring the test results back to the developers and file a bug on whoever made the UI change.

But are we done? What about testing our web app on different browsers and devices? Fortunately, Applitools has a solution to quickly scale the test automation and add cross-browser coverage.

Scaling Visual Tests Across Browsers

Testing an application against one browser is great, but what about all others? We have checked our Remix app on Chrome, but we didn’t see how the app performs on Firefox, Microsoft Edge, and so on. We haven’t even started looking into mobile platforms and our web app on Android or iOS. Introducing this additional test coverage can get out of hand quickly, but not with Applitools and their Ultrafast Test Cloud. It’s just one configuration change away!

With this cloud solution from Applitools, we can test our app across different browsers without any additional code. We only have to update our Applitools configuration file, applitools.config.js.

Below is an example of how to add coverage for desktop browsers like Chrome, Firefox, Safari, and IE11, plus two extra test cases for different models of mobile phones:

module.exports = {
  testConcurrency: 1,
  apiKey: "DONT_SHARE_YOUR_APPLITOOLS_API_KEY",
  browser: [
    // Add browsers with different viewports
    { width: 800, height: 600, name: "chrome" },
    { width: 700, height: 500, name: "firefox" },
    { width: 1600, height: 1200, name: "ie11" },
    { width: 800, height: 600, name: "safari" },
    // Add mobile emulation devices in Portrait or Landscape mode
    { deviceName: "iPhone X", screenOrientation: "landscape" },
    { deviceName: "Pixel 2", screenOrientation: "portrait" },
  ],
  // set batch name to the configuration
  batchName: "Remix Demo App",
};

It’s important to note that when specifying the configuration for different browsers, we need to define their width and height, with an additional property for screenOrientation to cover non-desktop devices. These settings are critical for testing responsive apps because many modern websites visually differ depending on the devices our customers use.

After updating the configuration file, we need to re-run our test suite with npm test. Fortunately, with the Applitools Ultrafast Test Cloud, it only takes a few seconds to finish running our tests on all browsers, so we can visit our Applitools Dashboard to view the results right away:

The Applitools dashboard, showing passed tests for our desired suite of browsers.
Cross-browser Coverage with Applitools

The Applitools dashboard, showing a visual checkpoint with a Pixel 2 mobile phone in portrait orientation.
Mobile Coverage with Applitools

As we can see, with only a few lines in the configuration file, we scaled our visual tests across multiple devices and browsers. We save ourselves time and money whenever we can get extra test coverage without explicitly writing new cases. Maintaining test automation that we write is one of the most resource-consuming steps of the Software Development Life Cycle. With solutions like Applitools Ultrafast Test Cloud, we can write fewer tests while increasing our test coverage for the entire app.

Verdict: Can Remix Apps be Visually Tested with Applitools and Cypress?

Hopefully, this article showed that the answer is yes; we can successfully visually test Remix-based apps with Applitools and Cypress! 

Remix is a fantastic framework to take User Experience to the next level, and we invite you to learn more about it during the webinar by Kent C. Dodds “Building Excellent User Experiences with Remix”.

For more information about Applitools, visit their website, blog and YouTube channel. They also provide free courses through Test Automation University that can help take anyone’s testing skills to the next level.

The post How to Visually Test a Remix App with Applitools and Cypress appeared first on Automated Visual Testing | Applitools.

]]>
Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser https://applitools.com/blog/lightning-fast-playwright-tests-cross-browser/ Tue, 15 Mar 2022 19:31:23 +0000 https://applitools.com/?p=35443 Learn how you can run cross browser tests against any stock browser using Playwright – not just the browser projects like Chromium, Firefox, and WebKit, and not just Chrome and Edge.

The post Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser appeared first on Automated Visual Testing | Applitools.

]]>


These days, there are a plethora of great web test automation tools. Although Selenium WebDriver seems to retain its top spot in popularity, alternatives like Playwright are quickly growing their market share. Playwright is an open-source test framework developed at Microsoft by the same folks who worked on Puppeteer. It is notable for its concise syntax, execution speed, and advanced features. Things like automatic waiting and carefully designed assertions protect tests against flakiness. And like Selenium, Playwright has bindings for multiple languages: TypeScript, JavaScript, Python, .NET, and Java.

However, Playwright has one oddity that sets it apart from other frameworks: instead of testing browser applications, Playwright tests browser projects. What does this mean? Major modern browser applications like Chrome, Edge, and Safari are built on top of browser projects that they use internally as their bases. For example, Google Chrome is based on the Chromium project. Typically, these internal projects are open source and provide the rendering engine for web pages.

The table below shows the browser projects used by major browser apps:

Browser project    Browser app
Chromium           Google Chrome, Microsoft Edge, Opera
Firefox (Gecko)    Mozilla Firefox
WebKit             Apple Safari

Browser projects offer Playwright unique advantages. Setup is super easy, and tests are faster using browser contexts. However, some folks need to test full browser applications, not just browser projects. Some teams are required to test specific configurations for compliance or regulations. Other teams may feel like testing projects instead of “stock” browsers is too risky. Playwright can run tests directly against Google Chrome and Microsoft Edge with a little extra configuration, but it can’t hit stock Firefox, Safari, or IE, and in my anecdotal experience, tests against Chrome and Edge run many times slower than the same tests against Chromium. Playwright’s focus on browser projects over browser apps is a double-edged sword: while it arguably helps most testers, it inherently precludes others.

Thankfully, there is a way to run Playwright tests against full browser apps, not just browser projects: using Applitools Visual AI with the Ultrafast Test Cloud. With the help of Applitools, you can achieve true cross-browser testing with Playwright at lightning speed, even for large test suites. Let’s see how it’s done. We’ll start with a basic Playwright test in JavaScript, and then we’ll add visual snapshots that can be rendered using any browser in the Applitools cloud.

Defining a Test Case

Let’s define a basic web app login test for the Applitools demo site. The site mimics a basic banking app. The first page is a login screen:

A demo login form, with a username field, password field, and a few other selectable items.

You can enter any username or password to login. Then, the main page appears:

A main page for our demo banking app, showing total balance, amount due today, recent transactions and more.

Nothing fancy here. The steps for our test case are straightforward:

Scenario: Successful login
  Given the login page is displayed
  When the user enters their username and password
  And the user clicks the login button
  Then the main page is displayed

These steps would be the same for the login behavior of any other application.

Automating a Playwright Test

Let’s automate our login test in JavaScript using Playwright. We could automate our test in TypeScript (which is arguably better), but I’ll use JavaScript for this example to keep the code plain and simple.

Create a new project, and install Playwright. Under the tests folder, create a new file named login.spec.js, and add the following test stub:

const { test, expect } = require('@playwright/test');

test.describe.configure({ mode: 'parallel' })

test.describe('Login', () => {

   test.beforeEach(async ({ page }) => {
       await page.setViewportSize({width: 1600, height: 1200});
   });

   test('should log into the demo app', async ({ page }) => {
      
       // Load login page
       // ...

       // Verify login page
       // ...
      
       // Perform login
       // ...

       // Verify main page
       // ...
   });
})

Playwright uses a Mocha-like structure for test cases. The test.beforeEach(...) call sets an explicit viewport size for testing to make sure the responsive layout renders as expected. The test(...) call includes sections for each step.

Let’s implement the steps using Playwright calls. Here’s the first step to load the login page:

       // Load login page
       await page.goto('https://demo.applitools.com');

The second step verifies that elements like username and password fields appear on the login page. Playwright’s assertions automatically wait for the elements to appear:

       // Verify login page
       await expect(page.locator('div.logo-w')).toBeVisible();
       await expect(page.locator('id=username')).toBeVisible();
       await expect(page.locator('id=password')).toBeVisible();
       await expect(page.locator('id=log-in')).toBeVisible();
       await expect(page.locator('input.form-check-input')).toBeVisible();

The third step actually logs into the site like a human user:

       // Perform login
       await page.fill('id=username', 'andy')
       await page.fill('id=password', 'i<3pandas')
       await page.click('id=log-in')

The fourth and final step makes sure the main page loads correctly. Again, assertions automatically wait for elements to appear:

       // Verify main page
      
       //   Check various page elements
       await expect.soft(page.locator('div.logo-w')).toBeVisible();
       await expect.soft(page.locator('ul.main-menu')).toBeVisible();
       await expect.soft(page.locator('div.avatar-w img')).toHaveCount(2);
       await expect.soft(page.locator('text=Add Account')).toBeVisible();
       await expect.soft(page.locator('text=Make Payment')).toBeVisible();
       await expect.soft(page.locator('text=View Statement')).toBeVisible();
       await expect.soft(page.locator('text=Request Increase')).toBeVisible();
       await expect.soft(page.locator('text=Pay Now')).toBeVisible();
       await expect.soft(page.locator(
           'div.element-search.autosuggest-search-activator > input'
       )).toBeVisible();

       //    Check time message
       await expect.soft(page.locator('id=time')).toContainText(
           /Your nearest branch closes in:( \d+[hms])+/);

       //    Check menu element names
       await expect.soft(page.locator('ul.main-menu li span')).toHaveText([
           'Card types',
           'Credit cards',
           'Debit cards',
           'Lending',
           'Loans',
           'Mortgages'
       ]);

       //    Check transaction statuses
       let statuses =
           await page.locator('span.status-pill + span').allTextContents();
       statuses.forEach(item => {
           expect.soft(['Complete', 'Pending', 'Declined']).toContain(item);
       });

The first three steps are nice and concise, but the code for the fourth step is quite long. Despite making several assertions for various page elements, there are still things left unchecked!

Run the test locally to make sure it works:

$ npx playwright test

This command will run the test against all three Playwright browsers – Chromium, Firefox, and WebKit – in headless mode and in parallel. You can append the “--headed” option to see the browsers open and render the pages. The tests should take only a few short seconds to complete, and they should all pass.

Introducing Visual Snapshots

You could run this login test on your local machine or from your Continuous Integration (CI) service, but in its present form, it can’t run against certain “stock” browsers like Apple Safari or Internet Explorer. If you attempt to use a browser channel to test stock Chrome or Edge browsers, tests would probably run much slower compared to Chromium. To run against any browser at lightning speed, we need the help of visual testing techniques using Applitools Visual AI and the Ultrafast Test Cloud.

Visual testing is the practice of inspecting visual differences between snapshots of screens in the app you are testing. You start by capturing a “baseline” snapshot of, say, the login page to consider as “right” or “expected.” Then, every time you run the tests, you capture a new snapshot of the same page and compare it to the baseline. By comparing the two snapshots side-by-side, you can detect any visual differences. Did a button go missing? Did the layout shift to the left? Did the colors change? If nothing changes, then the test passes. However, if there are changes, a human tester should review the differences to decide if the change is good or bad.

Manual testers have done visual testing since the dawn of computer screens. Applitools Visual AI simply automates the process. It highlights differences in side-by-side snapshots so you don’t miss them. Furthermore, Visual AI focuses on meaningful changes that human eyes would notice. If an element shifts one pixel to the right, that’s not a problem. Visual AI won’t bother you with that noise.

If a picture is worth a thousand words, then a visual snapshot is worth a thousand assertions. We could update our login test to take visual snapshots using Applitools Eyes SDK in place of lengthy assertions. Visual snapshots provide stronger coverage than the previous assertions. Remember how our login test made several checks but still didn’t cover all the elements on the page? A visual snapshot would implicitly capture everything with only one line of code. Visual testing like this enables more effective functional testing than traditional assertions.

But back to the original problem: how does this enable us to run Playwright tests against any stock browser? That’s the magic of snapshots. Notice how I said “snapshot” and not “screenshot.” A screenshot is merely a grid of static pixels. A snapshot, however, captures full page content – HTML, CSS, and JavaScript – that can be re-rendered in any browser configuration. If we update our Playwright test to take visual snapshots of the login page and the main page, then we could run our test one time locally to capture the snapshots. Then, the Applitools Eyes SDK would upload the snapshots to the Applitools Ultrafast Test Cloud to render them in any target browser – including browsers not natively supported by Playwright – and compare them against baselines. All the heavy work for visual checkpoints would be done by the Applitools Ultrafast Test Cloud, not by the local machine. It also works fast, since re-rendering snapshots takes much less time than re-running full tests.

Updating the Playwright Test

Let’s turn our login test into a visual test. First, make sure you have an Applitools account. You can register for a free account to get started.

Next, install the Applitools Eyes SDK for Playwright into your project:

$ npm install -D @applitools/eyes-playwright

Add the following import statement to login.spec.js:

const {
   VisualGridRunner,
   Eyes,
   Configuration,
   BatchInfo,
   BrowserType,
   DeviceName,
   ScreenOrientation,
   Target,
   MatchLevel
} = require('@applitools/eyes-playwright');

Next, we need to specify which browser configurations to run in Applitools Ultrafast Grid. Update the test.beforeEach(...) call to look like this:

test.describe('Login', () => {
   let eyes, runner;

   test.beforeEach(async ({ page }) => {
       await page.setViewportSize({width: 1600, height: 1200});

       runner = new VisualGridRunner({ testConcurrency: 5 });
       eyes = new Eyes(runner);
  
       const configuration = new Configuration();
       configuration.setBatch(new BatchInfo('Modern Cross Browser Testing Workshop'));
  
       configuration.addBrowser(800, 600, BrowserType.CHROME);
       configuration.addBrowser(700, 500, BrowserType.FIREFOX);
       configuration.addBrowser(1600, 1200, BrowserType.IE_11);
       configuration.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
       configuration.addBrowser(800, 600, BrowserType.SAFARI);
  
       configuration.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Galaxy_S5, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Nexus_10, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.iPad_Pro, ScreenOrientation.LANDSCAPE);
  
       eyes.setConfiguration(configuration);
   });
})

That’s a lot of new code! Let’s break it down:

  1. The page.setViewportSize(...) call remains unchanged. It will set the viewport only for the local test run.
  2. The runner object points visual tests to the Ultrafast Grid.
  3. The testConcurrency setting controls how many visual tests will run in parallel in the Ultrafast Grid. A higher concurrency means shorter overall execution time. (Warning: if you have a free account, your concurrency limit will be 1.)
  4. The eyes object watches the browser for taking visual snapshots.
  5. The configuration object sets the test batch name and the various browser configurations to test in the Ultrafast Grid.

This configuration will run our visual login test against 10 different browsers: 5 desktop browsers of various viewports, and 5 mobile browsers of various orientations.

Time to update the test case. We must “open” Applitools Eyes at the beginning of the test to capture screenshots, and we must “close” Eyes at the end:

test('should log into the demo app', async ({ page }) => {
      
       // Open Applitools Eyes
       await eyes.open(page, 'Applitools Demo App', 'Login');

       // Test steps
       // ...

       // Close Applitools Eyes
       await eyes.close(false)
   });

The load and login steps do not need any changes because the interactions are the same. However, the “verify” steps reduce drastically to one-line snapshot calls:

   test('should log into the demo app', async ({ page }) => {
      
       // ...

       // Verify login page
       await eyes.check('Login page', Target.window().fully());
      
       // ...
      
       // Verify main page
       await eyes.check('Main page', Target.window().matchLevel(MatchLevel.Layout).fully());

       // ...
      
   });

These snapshots capture the full window for both pages. The main page also sets a match level to “layout” so that differences in text and color are ignored. Snapshots will be captured once locally and uploaded to the Ultrafast Grid to be rendered on each target browser. Bye bye, long and complicated assertions!
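
For reference, the same check API also supports narrower targets and other match levels. Here are a couple of hedged variations (the CSS selector is hypothetical; see the Eyes SDK documentation for the full set of options):

// Check a single element region instead of the full window (hypothetical selector)
await eyes.check('Login form only', Target.region('#login-form'));

// Compare content while ignoring color differences by using Content matching
await eyes.check('Main page text', Target.window().matchLevel(MatchLevel.Content).fully());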

Finally, after each test, we should add safety handling and result dumping:

   test.afterEach(async () => {
       await eyes.abort();

       const results = await runner.getAllTestResults(false);
       console.log('Visual test results', results);
   });

The completed code for login.spec.js should look like this:

const { test } = require('@playwright/test');
const {
   VisualGridRunner,
   Eyes,
   Configuration,
   BatchInfo,
   BrowserType,
   DeviceName,
   ScreenOrientation,
   Target,
   MatchLevel
} = require('@applitools/eyes-playwright');


test.describe.configure({ mode: 'parallel' })

test.describe('A visual test', () => {
   let eyes, runner;

   test.beforeEach(async ({ page }) => {
       await page.setViewportSize({width: 1600, height: 1200});

       runner = new VisualGridRunner({ testConcurrency: 5 });
       eyes = new Eyes(runner);
  
       const configuration = new Configuration();
       configuration.setBatch(new BatchInfo('Modern Cross Browser Testing Workshop'));
  
       configuration.addBrowser(800, 600, BrowserType.CHROME);
       configuration.addBrowser(700, 500, BrowserType.FIREFOX);
       configuration.addBrowser(1600, 1200, BrowserType.IE_11);
       configuration.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
       configuration.addBrowser(800, 600, BrowserType.SAFARI);
  
       configuration.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Galaxy_S5, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Nexus_10, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.iPad_Pro, ScreenOrientation.LANDSCAPE);
  
       eyes.setConfiguration(configuration);
   });

   test('should log into the demo app', async ({ page }) => {
      
       // Open Applitools Eyes
       await eyes.open(page, 'Applitools Demo App', 'Login');

       // Load login page
       await page.goto('https://demo.applitools.com');

       // Verify login page
       await eyes.check('Login page', Target.window().fully());
      
       // Perform login
       await page.fill('id=username', 'andy')
       await page.fill('id=password', 'i<3pandas')
       await page.click('id=log-in')

       // Verify main page
       await eyes.check('Main page', Target.window().matchLevel(MatchLevel.Layout).fully());

       // Close Applitools Eyes
       await eyes.close(false)
   });

   test.afterEach(async () => {
       await eyes.abort();

       const results = await runner.getAllTestResults(false);
       console.log('Visual test results', results);
   });
})

Now, it’s a visual test! Let’s run it.

Running the Test

Your account comes with an API key. Visual tests using Applitools Eyes need this API key for uploading results to your account. On your machine, set this key as an environment variable.

On Linux and macOS:

$ export APPLITOOLS_API_KEY=<value>

On Windows:

> set APPLITOOLS_API_KEY=<value>

Then, launch the test using only one browser locally:

$ npx playwright test --browser=chromium

(Warning: If your playwright.config.js file has projects configured, you will need to use the “--project” option instead of the “--browser” option. Playwright may automatically configure this if you run npm init playwright to set up the project.)
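
For context, a projects-based playwright.config.js typically looks something like this sketch (the project names follow Playwright’s defaults):

// playwright.config.js
const { devices } = require('@playwright/test');

module.exports = {
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
};

With a file like that in place, the equivalent command would be npx playwright test --project=chromium.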

When this test runs, it will upload snapshots for both the login page and the main page to the Applitools test cloud. It needs to run only one time locally to capture the snapshots. That’s why we set the command to run using only Chromium.

Open the Applitools dashboard to view the visual results:

The Applitools dashboard, displaying the results of our new visual tests, each marked with a Status of 'New'.

Notice how this one login test has one result for each target configuration. All results have “New” status because they are establishing baselines. Also, notice how little time it took to run this batch of tests:

The listed batch duration for all 10 tests, with a total of 20 steps, is 36 seconds.

Running our test across 10 different browser configurations with 2 visual checkpoints each at a concurrency level of 5 took only 36 seconds to complete. That’s ultra fast! Running that many test iterations with a Selenium Grid or similar scale-out platform could take several minutes.

Run the test again. The second run should succeed just like the first. However, the new dashboard results now say “Passed” because Applitools compared the latest snapshots to the baselines and verified that they had not changed:

The Applitools dashboard, displaying the results of our visual tests, each marked with a Status of 'Passed'.

This time, all variations took 32 seconds to complete – about half a minute.

Passing tests are great, but what happens if a page changes? Consider an alternate version of the login page:

A demo login form, with a username field, password field, and a few other selectable items. This time there is a broken image and changed login button.

This version has a broken icon and a different login button. Modify the Playwright call to load the login page to test this version of the site like this:

       await page.goto('https://demo.applitools.com/index_v2.html');

Now, when you rerun the test, results appear as “Unresolved” in the Applitools dashboard:

The Applitools dashboard, displaying the results of our latest visual tests, each marked with a Status of 'Unresolved'.

When you open each result, the dashboard will display visual comparisons for each snapshot. If you click the snapshot, it opens the comparison window:

A comparison window showing the baseline and the new visual checkpoint, with the changes highlighted in magenta by Visual AI.

The baseline snapshot appears on the left, while the latest checkpoint snapshot appears on the right. Differences will be highlighted in magenta. As the tester, you can choose to either accept the change as a new baseline or reject it as a failure.

Taking the Next Steps

Playwright truly is a nifty framework. Thanks to the Applitools Ultrafast Grid, you can upgrade any Playwright test with visual snapshots and run them against any browsers, even ones not natively supported by Playwright. Applitools enables Playwright tests to become cross-browser tests. Just note that this style of testing focuses on cross-browser page rendering, not cross-browser page interactions. You may still want to run your Playwright tests locally against Firefox and WebKit in addition to Chromium, while using the Applitools Ultrafast Grid to validate rendering on different browser and viewport configurations.

Want to see the full code? Check out this GitHub repository: applitools/workshop-cbt-playwright-js.

Want to try visual testing for yourself? Register for a free Applitools account.

Want to see how to do this type of cross-browser testing with Cypress? Check out this article.

The post Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser appeared first on Automated Visual Testing | Applitools.

]]>
Cross Browser Testing with Cypress Workshop Q&A https://applitools.com/blog/cross-browser-testing-cypress-workshop-qa/ Wed, 09 Feb 2022 21:30:32 +0000 https://applitools.com/?p=34261 After our webinar on Cross Browser Testing with Cypress we had so many great questions we couldn’t answer them all at the time, so we're tackling them now.

The post Cross Browser Testing with Cypress Workshop Q&A appeared first on Automated Visual Testing | Applitools.

]]>

On February 1, 2022, I gave a webinar entitled Cross Browser Testing with Cypress, in which I explained how to run Cypress tests against any browser using Applitools Visual AI and the Ultrafast Test Cloud. We had many attendees and lots of great questions – so many questions that we couldn’t answer them all during the event. In this article, I do my best to provide answers to as many remaining questions as possible.

Questions about the Webinar

Is there a recording for the webinar?

Yes, indeed! The recording is here.

Where is the repository for the example code?

Here it is: https://github.com/applitools/workshop-cbt-cypress. The repository also contains a full walkthrough in the WORKSHOP.md file.

Questions about Cypress

Can we run Cypress tests through the command line?

Yes! The npx cypress open command opens the Cypress browser window for launching tests, while the npx cypress run command launches tests purely from the command line. Use npx cypress run for Continuous Integration (CI) environments.

So, in a Cypress test case, we don’t need to create any driver object for opening a browser?

Correct! When you initialize a Cypress project, it sets up all the imports and references you need. Just call the cy object. To navigate to our first page, call cy.visit(…) and provide the URL as a string.

Can Cypress handle testing with iFrames?

Yes! Check out this cookbook recipe, Working with iFrames in Cypress.

How does cy.contains(…) work?

The cy.contains(…) call selects elements based on their text. For example, if a button has the text “Add Account”, then cy.contains(“Account”) would locate it. Check the Cypress docs for more detailed information.
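
A quick sketch:

// Locates the button whose text contains "Account" and clicks it
cy.contains('Account').click();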

Can Cypress access backend intranet APIs?

I haven’t done that myself, but it looks like there are ways to set up Windows Authentication and proxies with Cypress.

Questions about Applitools

How do I establish baseline snapshots?

The first time you take a visual snapshot with Applitools Eyes, it saves the snapshot as the baseline. The next time the same snapshot is taken, it is treated as a checkpoint and compared against the baseline.

Does Applitools Eyes fail a test if a piece of content changes on a page?

Every time Applitools Eyes detects a change, it asks the tester to decide if the change is good (“thumbs up”) or bad (“thumbs down”). Applitools enables testers to try different match levels for comparisons. For example, if you want to check for layout changes but ignore differences in text and color, you can use “layout” matching. Alternatively, if the text matters to you but layout changes don’t, you can use “content” matching. Applitools also enables testers to ignore regions of the snapshots. For example, a timestamp field will be different for each snapshot, so those could easily be ignored.
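
With the Cypress SDK, the match level can be set per snapshot. A minimal sketch (the tag name is arbitrary):

cy.eyesCheckWindow({
  tag: 'Main page',
  matchLevel: 'Layout' // check layout while ignoring differences in text and color
});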

How do we save a new baseline snapshot if pages change during development?

When a tester marks a change as “good,” the new snapshot is automatically saved as the new baseline.

What happens if we have thousands of tests and developers change the UI? Will I need to modify thousands of baselines?

Hopefully not! Most UI changes are localized to certain pages or areas of an app. In that case, only those baselines would need updates. However, if the UI changes affect every screen, such as a theme change, then you might need to refresh all baselines. That isn’t as bad as it sounds: Applitools has AI-powered maintenance capabilities. When you accept one new snapshot as a baseline, Applitools will automatically scan all other changes in the current batch and accept similar changes. That way, you don’t need to grind through a thousand “thumbs-up” clicks. Alternatively, you could manually delete old baselines through the Applitools dashboard and rerun your tests to establish fresh ones. You could also establish regions to ignore on snapshots for things like headers or sidebars to mitigate the churn caused by cross-cutting changes.
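
For instance, a dynamic region such as the demo app’s timestamp could be excluded from comparison entirely; here is a sketch using the eyes-cypress ignore option:

cy.eyesCheckWindow({
  tag: 'Main page',
  ignore: [{ selector: '#time' }] // the dynamic clock element is never compared
});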

Does Applitools Eyes wait for assets such as images, videos, and animations to load before taking snapshots?

No. The browser automation tool or the automation code you write must handle waiting.

(* Actually, there is a way when not using the Ultrafast Test Cloud. The classic SDKs include a parameter that you can set for controlling Eyes snapshot retries when there is no match.)

Can we accept some visual changes while rejecting others for one checkpoint?

Yes, you can set regions on a snapshot to use different match levels or to be ignored entirely.

Can we download the snapshot images from our tests?

Yes, you can download snapshot images from the Applitools Eyes dashboard.

Does Applitools offer any SDKs for visually testing Windows desktop apps?

Yes! Applitools offers SDKs for Windows CodedUI, Windows UFT, and Windows Apps. Applitools also works with Power Automate Desktop.

Does the Applitools Ultrafast Grid use real or emulated mobile devices?

Presently, it uses emulated mobile devices.

Can I publicly share my Applitools API key?

No, do NOT share your API key publicly! That should be kept secret. Don’t let others run their tests using your account!

Questions about Applitools with Cypress

How do I set up Applitools to work with Cypress?

Follow the Applitools Cypress Tutorial. You’ll need to:

  1. Register an Applitools account.
  2. Install the @applitools/eyes-cypress package.
  3. Run npx eyes-init to set up Applitools Eyes.
  4. Set the APPLITOOLS_API_KEY environment variable to your API key.

Cypress cannot locally run tests against Safari, Internet Explorer, or mobile browsers. Can Cypress tests check snapshots on these browsers in the Applitools Ultrafast Test Cloud?

Yes! Snapshots capture the whole page, not just pixelated images. Applitools can render snapshots using any browser configuration, even ones not natively supported by Cypress.

Can Applitools Eyes focus on specific elements instead of an entire page?

Yes! You can check a specific web element as a “region” of a page like this:

cy.eyesCheckWindow({
  target: 'region',
  selector: {
    type: 'css',
    selector: '.my-element'
  }
});

Can we run visual tests with Cypress using a free Applitools account?

Yes, but you will not be able to access all of Applitools’ features with a free account, and your test concurrency will be limited to 1.

Can we perform traditional assertions together with visual snapshots?

Sure! Visual testing eliminates the need for most traditional assertions, but sometimes, old-school assertions can be helpful for checking things like text formatting. Cypress uses Chai for assertions.

Questions about Testing

If a project has little-to-no test automation in place, should we start writing visual tests right away, or should we start with traditional functional tests and add visual tests later?

Visual tests are arguably easier to automate than traditional functional tests because they simplify assertions. Apply the 80/20 rule: start with a small “smoke” test suite that simply captures snapshots of different pages in your web app. Run that suite for every code change and see the value it delivers. Next, build on it by covering interactions beyond simple navigation. Then, once those are doing well, try to automate more complicated behaviors. At that point, you might need some traditional assertions to complement visual checkpoints.

Can we compare snapshots from a staging environment to a production environment?

Yes, you can compare results across test environments as long as the snapshots have the same app, test, and tag names.

Can we schedule visual tests to run every day?

Certainly! Both Applitools and Cypress can integrate with any Continuous Integration (CI) system.

Does visual testing have any disadvantages when compared to traditional functional testing?

Presently, visual testing does not check specific text formatting, such as dates or currencies. You’ll need to use traditional assertions for that type of pattern matching. Nevertheless, you can use visual testing together with traditional techniques to automate the best functional tests possible.

How do we test UX things like heading levels, fonts, and text sizes?

If you take visual snapshots of pages, then Applitools Eyes will detect differences like these. You could also automate traditional assertions to verify specific attributes such as a specific heading number or font name, but those kinds of assertions tend to be brittle.

What IDE should we use for developing Cypress tests with Applitools?

Any JavaScript editor should work. Visual Studio Code and JetBrains WebStorm are popular choices.

What tool or framework should we use for API testing?

Cypress has built-in API support with the cy.request(...) method, making it easy to write end-to-end tests that interact with both the frontend and backend. However, if you want to automate tests purely for APIs, then you should probably use a tool other than Cypress. Postman is one of the most popular tools for API testing. If you want to stick to JavaScript, you could look into SuperTest and Nock.
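
As a quick sketch of that built-in support (the endpoint here is hypothetical):

// Call a backend endpoint directly and assert on the response status
cy.request('GET', 'https://demo.applitools.com/api/health') // hypothetical URL
  .its('status')
  .should('eq', 200);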

Can we do load testing with cross-browser testing?

Load testing is the practice of adding different intensities of “load” to a system while running functional and performance tests. For web apps, “load” is typically a rate of requests (like 100 requests per second). As load increases, performance degrades, and functionality might start failing. You can do load testing with cross-browser testing, but keep in mind that any failures due to load would probably happen the same way for any browser. Load hits the backend, not the frontend. Repeating load tests for a multitude of different browser configurations may not be worthwhile.

The post Cross Browser Testing with Cypress Workshop Q&A appeared first on Automated Visual Testing | Applitools.

]]>
How to Run Cross Browser Tests with Cypress on All Browsers https://applitools.com/blog/cross-browser-tests-cypress-all-browsers/ Fri, 04 Feb 2022 17:37:50 +0000 https://applitools.com/?p=34121 Learn how you can run cross-browser Cypress tests against any browser, including Safari, IE and mobile browsers.

The post How to Run Cross Browser Tests with Cypress on All Browsers appeared first on Automated Visual Testing | Applitools.

]]>


Ah, Cypress – the darling end-to-end test framework of the JavaScript world. In the past few years, Cypress has surged in popularity due to its excellent developer experience. It runs right in the browser alongside web apps, making it a natural fit for frontend developers. Its API is both concise and powerful. Its interactions automatically handle waiting to avoid any chance of flakiness. Cypress almost seems like a strong contender to dethrone Selenium WebDriver as the king of browser automation tools.

However, Cypress has a critical weakness: it cannot natively run tests against all browser types. At the time of writing this article, Cypress supports only a limited set of browsers: Chrome, Edge, and Firefox. That means no support for Safari or IE. Cypress also doesn’t support mobile web browsers. Ouch! These limitations alone could make you think twice about choosing to automate your tests in Cypress.

Thankfully, there is a way to run Cypress tests against any browser type, including Safari, IE, and mobile browsers: using the Applitools Ultrafast Grid. With the help of Applitools, you can achieve full cross-browser testing with Cypress, even for large-scale test suites. Let’s see how it’s done. We’ll start with a basic Cypress test, and then we’ll add visual snapshots that can be rendered in any browser in the Applitools cloud.

Defining a Test Case

Let’s define a basic web app login test for the Applitools demo site. The site mimics a basic banking app. The first page is a login screen:

Demo login form including logo, username and password.

You can enter any username or password to login. Then, the main page appears:

Demo main page displaying a financial app, including balance and recent transactions.

Nothing fancy here. The steps for our test case are straightforward:

Scenario: Successful login
  Given the login page is displayed
  When the user enters their username and password
  And the user clicks the login button
  Then the main page is displayed

These steps would be the same for the login behavior of any other application.

Automating a Cypress Test

Let’s automate our login test using Cypress. Create a JavaScript project and install Cypress. Then, create a new test case spec: cypress/integration/login.spec.js. Add the following test case to the spec file:

describe('Login', () => {

    beforeEach(() => {
        cy.viewport(1600, 1200)
    })

    it('should log into the demo app', () => {
        loadLoginPage()
        verifyLoginPage()
        performLogin()
        verifyMainPage()
    })
})

Cypress uses Mocha as its core test framework. The beforeEach call makes sure the browser viewport is large enough to show all elements in the demo app. The test case itself has a helper function for each test step.

The first function, loadLoginPage, loads the login page:

function loadLoginPage() {
    cy.visit('https://demo.applitools.com')
}

The second function, verifyLoginPage, makes sure that the login page loads correctly:

function verifyLoginPage() {
    cy.get('div.logo-w').should('be.visible')
    cy.get('#username').should('be.visible')
    cy.get('#password').should('be.visible')
    cy.get('#log-in').should('be.visible')
    cy.get('input.form-check-input').should('be.visible')
}

The third function, performLogin, actually does the interaction of logging in:

function performLogin() {
    cy.get('#username').type('andy')
    cy.get('#password').type('i<3pandas')
    cy.get('#log-in').click()
}

The fourth and final function, verifyMainPage, makes sure that the main page loads correctly:

function verifyMainPage() {

    // Check various page elements
    cy.get('div.logo-w').should('be.visible')
    cy.get('div.element-search.autosuggest-search-activator > input').should('be.visible')
    cy.get('div.avatar-w img').should('be.visible')
    cy.get('ul.main-menu').should('be.visible')
    cy.contains('Add Account').should('be.visible')
    cy.contains('Make Payment').should('be.visible')
    cy.contains('View Statement').should('be.visible')
    cy.contains('Request Increase').should('be.visible')
    cy.contains('Pay Now').should('be.visible')

    // Check time message
    cy.get('#time').invoke('text').should('match', /Your nearest branch closes in:( \d+[hms])+/)

    // Check menu element names
    cy.get('ul.main-menu li span').should(items => {
        expect(items[0]).to.contain.text('Card types')
        expect(items[1]).to.contain.text('Credit cards')
        expect(items[2]).to.contain.text('Debit cards')
        expect(items[3]).to.contain.text('Lending')
        expect(items[4]).to.contain.text('Loans')
        expect(items[5]).to.contain.text('Mortgages')
    })

    // Check transaction statuses
    const statuses = ['Complete', 'Pending', 'Declined']
cy.get('span.status-pill + span').each(($span) => {
        expect(statuses).to.include($span.text())
    })
}

The first three functions are fairly concise, but the fourth one is a doozy. The main page has so many things to check, and despite its length, this step doesn’t even check everything!

Run this test locally to make sure it works (npx cypress open). It should pass using any local browser (Chrome, Edge, Electron, or Firefox).
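
If you prefer a headless run – in CI, for example – the standard Cypress CLI works as well (the browser named below is just an example):

$ npx cypress run --browser chrome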

Introducing Visual Snapshots

You could run this login test on your local machine or from your Continuous Integration (CI) service, but in its present form, it can’t run on those extra browsers (Safari, IE, mobile). To do that, we need the help of visual testing techniques using Applitools Visual AI and the Ultrafast Test Cloud.

Visual testing is the practice of inspecting visual differences between snapshots of screens in the app you are testing. You start by capturing a “baseline” snapshot of, say, the login page to consider as “right” or “expected.” Then, every time you run the tests, you capture a new snapshot of the same page and compare it to the baseline. By comparing the two snapshots side-by-side, you can detect any visual differences. Did a button go missing? Did the layout shift to the left? Did the colors change? If nothing changes, then the test passes. However, if there are changes, a human tester should review the differences to decide if the change is good or bad.

Manual testers have done visual testing since the dawn of computer screens. Applitools Visual AI simply automates the process. It highlights differences in side-by-side snapshots so you don’t miss them. Furthermore, Visual AI focuses on meaningful changes that human eyes would notice. If an element shifts one pixel to the right, that’s not a problem. Visual AI won’t bother you with that noise.

If a picture is worth a thousand words, then a visual snapshot is worth a thousand assertions. We could update our login test to take visual snapshots using Applitools Eyes SDK in place of lengthy assertions. Visual snapshots provide stronger coverage than the previous assertions. Remember how verifyMainPage had several checks but still didn’t cover all the elements on the page? A visual snapshot would implicitly capture everything with only one line of code. Visual testing like this enables more effective functional testing than traditional assertions.
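
For instance, that entire main-page verification could, in principle, collapse to a single call like this (using the shorthand form of the eyes-cypress API, shown here ahead of the full setup covered in the next section):

cy.eyesCheckWindow('Main page')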

But back to the original problem: how does this enable us to run Cypress tests in Safari, IE, and mobile browsers? That’s the magic of snapshots. Notice how I said “snapshot” and not “screenshot.” A screenshot is merely a grid of static pixels. A snapshot, however, captures full page content – HTML, CSS, and JavaScript – that can be re-rendered in any browser configuration. If we update our Cypress test to take visual snapshots of the login page and the main page, then we could run our test one time locally to capture the snapshots. Then, the Applitools Eyes SDK would upload the snapshots to the Applitools Ultrafast Test Cloud to render them in any target browser – including browsers not natively supported by Cypress – and compare them against baselines. All the heavy work for visual checkpoints would be done by the Applitools Ultrafast Test Cloud, not by the local machine. It also works fast, since re-rendering snapshots takes much less time than re-running full Cypress tests.

Updating the Cypress Test

Let’s turn our login test into a visual test. First, make sure you have an Applitools account. You can register for a free account to get started.

Your account comes with an API key. Visual tests using Applitools Eyes need this API key for uploading results to your account. On your machine, set this key as an environment variable.

On Linux and macOS:

$ export APPLITOOLS_API_KEY=<value>

On Windows:

> set APPLITOOLS_API_KEY=<value>
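
If your project does not yet include the Applitools Eyes SDK for Cypress, install it and run its setup script, which wires the eyesOpen / eyesCheckWindow / eyesClose commands into Cypress (commands as documented for the @applitools/eyes-cypress package):

$ npm install --save-dev @applitools/eyes-cypress
$ npx eyes-setup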

Time for coding! The test case steps remain the same, but the test case must be wrapped by calls to Applitools Eyes:

describe('Login', () => {

    it('should log into the demo app', () => {

        cy.eyesOpen({
            appName: 'Applitools Demo App',
            testName: 'Login',
        })

        loadLoginPage()
        verifyLoginPage()
        performLogin()
        verifyMainPage()
    })

    afterEach(() => {
        cy.eyesClose()
    })
})

Before the test begins, cy.eyesOpen(...) tells Applitools Eyes to start watching the browser. It also sets names for the app under test and the test case itself. Then, at the conclusion of the test, cy.eyesClose() tells Applitools Eyes to stop watching the browser.

The interaction functions, loadLoginPage and performLogin, do not need any changes. The verification functions do:

function verifyLoginPage() {
    cy.eyesCheckWindow({
        tag: "Login page",
        target: 'window',
        fully: true
    });
}

function verifyMainPage() {
    cy.eyesCheckWindow({
        tag: "Main page",
        target: 'window',
        fully: true,
        matchLevel: 'Layout'
    });
}

All the assertion calls are replaced by one-line snapshots using Applitools Eyes. These snapshots capture the full window for both pages. The main page also sets the match level to “Layout” so that differences in text and color are ignored.
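
As a side note, eyesCheckWindow can also target a single element rather than the full window – handy when only part of a page should be checked. A minimal sketch using the same API (this helper and its selector are illustrative, not part of the original test):

function verifyLoginLogo() {
    cy.eyesCheckWindow({
        tag: "Login logo",
        target: 'region',
        selector: 'div.logo-w'
    });
}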

The test code changes are complete, but you need to do one more thing: you must specify browser configurations to test in the Applitools Ultrafast Test Cloud. Add a file named applitools.config.js to the root level of the project, and add the following content:

module.exports = {
    testConcurrency: 5,
    apiKey: 'APPLITOOLS_API_KEY',
    browser: [
        // Desktop
        {width: 800, height: 600, name: 'chrome'},
        {width: 700, height: 500, name: 'firefox'},
        {width: 1600, height: 1200, name: 'ie11'},
        {width: 1024, height: 768, name: 'edgechromium'},
        {width: 800, height: 600, name: 'safari'},
        // Mobile
        {deviceName: 'iPhone X', screenOrientation: 'portrait'},
        {deviceName: 'Pixel 2', screenOrientation: 'portrait'},
        {deviceName: 'Galaxy S5', screenOrientation: 'portrait'},
        {deviceName: 'Nexus 10', screenOrientation: 'portrait'},
        {deviceName: 'iPad Pro', screenOrientation: 'landscape'},
    ],
    batchName: 'Modern Cross-Browser Testing Workshop'
}

This config file contains four settings:

  1. testConcurrency sets the level of parallel execution in the Applitools Ultrafast Test Cloud. (Free accounts are limited to 1 concurrent test.)
  2. apiKey sets the environment variable name for the Applitools API key.
  3. browser declares a list of browser configurations to test. This config file provides ten total configs: five desktop, and five mobile. Notice that Safari and IE11 are included. Desktop browser configs include viewport sizes, while mobile browser configs include screen orientations.
  4. batchName sets a name that all results will share in the Applitools Dashboard.

Done! Let’s run the updated test.

Running our Cypress Cross Browser Test

Run the test locally to make sure it works (npx cypress open). Then, open the Applitools dashboard to view visual test results:

Applitools dashboard with baseline results.

Notice how this one login test has one result for each target configuration. All results have “New” status because they are establishing baselines. Also, notice how little time it took to run this batch of tests:

Results of test showing batch of 10 tests ran in 36 seconds

Running our test across 10 different browser configurations with 2 visual checkpoints each at a concurrency level of 5 took only 36 seconds to complete. That’s ultra fast! Running that many test iterations locally or in a traditional Cypress parallel environment could take several minutes.

Run the test again. The second run should succeed just like the first. However, the new dashboard results now say “Passed” because Applitools compared the latest snapshots to the baselines and verified that they had not changed:

Applitools dashboard with passing results.

This time, all variations took 32 seconds to complete – about half a minute.

Passing tests are great, but what happens if a page changes? Consider an alternate version of the login page:

Demo login form including logo, username and password, with broken icon for logo and different login button.

This version has a broken icon and a different login button. Modify the loadLoginPage function to test this version of the site like this:

function loadLoginPage() {
    cy.visit('https://demo.applitools.com/index_v2.html')
}

Now, when you rerun the test, results appear as “Unresolved” in the Applitools dashboard:

Applitools dashboard with unresolved changes.

When you open each result, the dashboard will display visual comparisons for each snapshot. If you click the snapshot, it opens the comparison window:

Applitools dashboard showing comparison window, which highlights differences.

The baseline snapshot appears on the left, while the latest checkpoint snapshot appears on the right. Differences will be highlighted in magenta. As the tester, you can choose to either accept the change as a new baseline or reject it as a failure.

Taking the Next Steps

Even though Cypress can’t natively run tests against Safari, IE, or mobile browsers, it can when paired with visual testing through the Applitools Ultrafast Test Cloud. You can use Cypress tests to capture snapshots and then render them under any number of different browser configurations to achieve true cross-browser testing with both visual and functional coverage.

Want to see the full code? Check out this GitHub repository: applitools/workshop-cbt-cypress.

Want to try visual testing for yourself? Register for a free Applitools account.

Want to see more examples? Check out other articles here, here, and here.

The post How to Run Cross Browser Tests with Cypress on All Browsers appeared first on Automated Visual Testing | Applitools.

]]>
What is Visual AI? https://applitools.com/blog/visual-ai/ Wed, 29 Dec 2021 14:27:00 +0000 https://applitools.com/?p=33518 Learn what Visual AI is, how it’s applied today, and why it’s critical across many industries - in particular software development and testing.

The post What is Visual AI? appeared first on Automated Visual Testing | Applitools.

]]>

In this guide, we’ll explore Visual Artificial Intelligence (AI) and what it means. Read on to learn what Visual AI is, how it’s being applied today, and why it’s critical across a range of industries – and in particular for software development and testing.

From the moment we open our eyes, humans are highly visual creatures. The visual data we process today increasingly comes in digital form. Whether via a desktop, a laptop, or a smartphone, most people and businesses rely on an incredible amount of readily available computing power and on millions of easy-to-use applications to display that data.

The modern digital world we live in, with so much visual data to process, would not be possible without Artificial Intelligence to help us. Visual AI is the ability for computer vision to see images in the same way a human would. As digital media becomes more and more visual, the power of AI to help us understand and process images at a massive scale has become increasingly critical.

What is AI? Background on Artificial Intelligence and Machine Learning

Artificial Intelligence refers to a computer or machine that can understand its environment and make choices to maximize its chance of achieving a goal. As a concept, AI has been with us for a long time, with our modern understanding informed by stories such as Mary Shelley’s Frankenstein and by the science fiction writers of the early 20th century. Many of the modern mathematical underpinnings of AI were advanced by English mathematician Alan Turing over 70 years ago.

Image of Frankenstein

Since Turing’s day, our understanding of AI has improved. However, even more crucially, the computational power available to the world has skyrocketed. AI is able to easily handle tasks today that were once only theoretical, including natural language processing (NLP), optical character recognition (OCR), and computer vision.

What is Visual Artificial Intelligence (Visual AI)?

Visual AI is the application of Artificial Intelligence to what humans see, meaning that it enables a computer to understand what is visible and make choices based on this visual understanding.

In other words, Visual AI lets computers see the world just as a human does, and make decisions and recommendations accordingly. It essentially gives software a pair of eyes and the ability to perceive the world with them.

As an example, seeing “just as a human does” means going beyond simply comparing the digital pixels in two images. This “pixel comparison” kind of analysis frequently uncovers slight “differences” that are in fact invisible – and often of no interest – to a genuine human observer. Visual AI is smart enough to understand how and when what it perceives is relevant for humans, and to make decisions accordingly.
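
To make that contrast concrete, here is a minimal sketch (plain Node.js, not Applitools code) of a naive pixel comparison. Two RGBA pixels that differ by one unit in a single channel are indistinguishable to a person, yet a strict pixel diff still flags them:

// Count how many raw byte values differ between two same-sized RGBA buffers.
function pixelDiff(a, b) {
    let diffs = 0
    for (let i = 0; i < a.length; i++) {
        if (a[i] !== b[i]) diffs++
    }
    return diffs
}

const baseline   = new Uint8ClampedArray([127, 127, 127, 255]) // one gray pixel
const checkpoint = new Uint8ClampedArray([127, 127, 128, 255]) // blue channel off by 1

console.log(pixelDiff(baseline, checkpoint)) // 1 – a "difference" no human would notice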

Representation of Visual AI

How is Visual AI Used Today?

Visual AI is already in widespread use today, and has the potential to dramatically impact a number of markets and industries. If you’ve ever logged into your phone with Apple’s Face ID, let Google Photos automatically label your pictures, or bought a candy bar at a cashierless store like Amazon Go, you’ve engaged with Visual AI. 

Technologies like self-driving cars, medical image analysis, advanced image editing capabilities (from Photoshop tools to TikTok filters) and visual testing of software to prevent bugs are all enabled by advances in Visual AI.

How Does Visual AI Help?

One of the most powerful use cases for AI today is to complete tasks that would be repetitive or mundane for humans to do. Humans are prone to miss small details when working on repetitive tasks, whereas AI can repeatedly spot even minute changes or issues without loss of accuracy. Any issues found can then either be handled by the AI, or flagged and sent to a human for evaluation if necessary. This has the dual benefit of improving the efficiency of simple tasks and freeing up humans for more complex or creative goals.

Visual AI, then, can help humans with visual inspection of images. While there are many potential applications of Visual AI, the ability to automatically spot changes or issues without human intervention is significant. 

Cameras at Amazon Go can watch a vegetable shelf and understand both the type and the quantity of items taken by a customer. When monitoring a production line for defects, Visual AI can not only spot potential defects but understand whether they are dangerous or trivial. Similarly, Visual AI can observe the user interface of software applications to not only notice when changes are made in a frequently updated application, but also to understand when they will negatively impact the customer experience.

How Does Visual AI Help in Software Development and Testing Today?

Traditional testing methods for software testing often require a lot of manual testing. Even at organizations with sophisticated automated testing practices, validating the complete digital experience – requiring functional testing, visual testing and cross browser testing – has long been difficult to achieve with automation. 

Without an effective way to validate the whole page, Automation Engineers are stuck writing cumbersome locators and complicated assertions for every element under test. Even after that’s done, Quality Engineers and other software testers must spend a lot of time squinting at their screens, trying to ensure that no bugs were introduced in the latest release. This has to be done for every platform, every browser, and sometimes every single device their customers use. 

At the same time, software development is growing more complex. Applications have more pages to evaluate and increasingly faster – even continuous – releases that need testing. This can result in tens or even hundreds of thousands of potential screens to test (see below). Traditional testing, which scales linearly with the resources allocated to it, simply cannot scale to meet this demand. Organizations relying on traditional methods are forced to either slow down releases or reduce their test coverage.

A table showing the number of screens in production by modern organizations - 81,480 is the market average, and the top 30% of the market is 681,296
Source: The 2019 State of Automated Visual Testing

At Applitools, we believe AI can transform the way software is developed and tested today. That’s why we invented Visual AI for software testing. We’ve trained our AI on over a billion images and use numerous machine learning and AI algorithms to deliver 99.9999% accuracy. Using our Visual AI, you can achieve automated testing that scales with you, no matter how many pages or browsers you need to test. 

That means Automation Engineers can quickly take snapshots that Visual AI can analyze rather than writing endless assertions. It means manual testers will only need to evaluate the issues Visual AI presents to them rather than hunt down every edge and corner case. Most importantly, it means organizations can release better quality software far faster than they could without it.

Visual AI is 5.8x faster, 5.9x more efficient, 3.8x more stable, and catches 45% more bugs
Source: The Impact of Visual AI on Test Automation Report

How Visual AI Enables Cross Browser/Cross Device Testing

Additionally, due to the high level of accuracy, and efficient validation of the entire screen, Visual AI opens the door to simplifying and accelerating the challenges of cross browser and cross device testing. Leveraging an approach for ‘rendering’ rather than ‘executing’ across all the device/browser combinations, teams can get test results 18.2x faster using the Applitools Ultrafast Test Cloud than traditional execution grids or device farms.

Traditional test cycle takes 29.2 hours, modern test cycle takes just 1.6 hours.
Source: Modern Cross Browser Testing Through Visual AI Report

How Will Visual AI Advance in the Future?

As computing power increases and algorithms are refined, the impact of Artificial Intelligence, and Visual AI in particular, will only continue to grow.

In the world of software testing, we’re excited to use Visual AI to move past simply improving automated testing – we are paving the way towards autonomous testing. For this vision (no pun intended), we have been repeatedly recognized as a leader by industry analysts and our customers.

Keep Reading: More about Visual AI and Visual Testing

What is Visual Testing (blog)

The Path to Autonomous Testing (video)

What is Applitools Visual AI (learn)

Why Visual AI Beats Pixel and DOM Diffs for Web App Testing (article)

How AI Can Help Address Modern Software Testing (blog)

The Impact of Visual AI on Test Automation (report)

How Visual AI Accelerates Release Velocity (blog)

Modern Functional Test Automation Through Visual AI (free course)

Computer Vision defined (Wikipedia)

The post What is Visual AI? appeared first on Automated Visual Testing | Applitools.

]]>
Automating Functional / End-2-End Tests Across Multiple Platforms https://applitools.com/blog/automating-functional-end-to-end-tests-cross-platform/ Tue, 01 Jun 2021 20:06:00 +0000 https://applitools.com/?p=29024 This post talks about an approach to Functional (end-to-end) Test Automation that works for a product available on multiple platforms.  It shares details on the thought process & criteria involved...

The post Automating Functional / End-2-End Tests Across Multiple Platforms appeared first on Automated Visual Testing | Applitools.

]]>

This post talks about an approach to Functional (end-to-end) Test Automation that works for a product available on multiple platforms. 

It shares details on the thought process & criteria involved in creating a solution, including how to write the tests and run them across multiple platforms without any code change.

Lastly, the open-sourced solution also includes examples of how to implement a single test that orchestrates multiple devices / browsers, simulating multiple users interacting with each other as part of the same test.

Background

How many times do we see products available only on a single platform? For example, Android app only, or iOS app only?

Organisations typically start building the product on a particular platform, but then expand to other platforms as well.

Once the product is available on multiple platforms, does the functionality differ between them? There would definitely be some UX differences, and in some cases the way a feature is accomplished would be different, but the business objectives and features would still be similar across platforms. Also, one platform may be ahead of the other in terms of feature parity.

The above aspects of product development are not new.

The interesting question is – how do you build your Functional (End-2-End / UI / System) Test Automation for such products?

Case Study

To answer this question, let’s take an example of any video conferencing application – something that we would all be familiar with in these times. We will refer to this application as “MySocialConnect” for the remainder of this post.

MySocialConnect is available on the following platforms:

  • All modern browsers (Chrome / Firefox / Edge / Safari) available on laptop / desktop computers as well as on mobile devices
  • Android app via Google’s PlayStore
  • iOS app via Apple’s App Store

In terms of functionality, the majority of the functionality is the same across all these platforms. Example:

  • Signup / Login
  • Start an instant call
  • Schedule a call
  • Invite registered users to join an on-going call
  • Invite non-registered users to join a call
  • Share screen
  • Video on-off
  • Audio on-off
  • And so on…

There are also some functionality differences that would exist. Example:

  • Safe driving mode is available only in Android and iOS apps
  • Flip video camera is available only in Android and iOS apps

Test Automation Approach

So, to repeat the big question for MySocialConnect – how do you build your Functional (End-2-End / UI / System) Test Automation for such a product?

I would approach Functional automation of MySocialConnect as follows:

  1. The test should be specified only once. The implementation should take care of executing it on any of the supported platforms
  2. For the common functionalities, we should implement the business logic only once
  3. There should be a way to address differences in business functionality across platforms
  4. The value of the automation for MySocialConnect is in simulating “real calls” – i.e. more than one user in the call, interacting with each other

In addition, I need the following capabilities in my automation:

  • Rich reports
    • With on-demand screenshots attached in the report
    • Details of the devices / browsers where the tests ran
    • Understand trends of test execution results
    • Test Failure analysis capabilities
  • Support parallel / distributed execution of tests to get faster feedback
  • Visual Testing support using Applitools Visual AI
    • To reduce the number of validations I need to write (less code)
    • Increase coverage (functional and UI / UX)
    • Contrast Advisor to ensure my product meets the WCAG 2.0 / 2.1 guidelines for Accessibility
  • Ability to run on local machines or in the CI
  • Ability to run the full suite or a subset of tests, on demand, and without any code change
  • Ability to run tests across any environment
  • Ability to easily specify test data for each supported environment 

Test Automation Implementation

To help implement the criteria mentioned above, I built (and open-sourced on github) my automation framework – teswiz. The implementation is based on the discussion and guidelines in [Visual] Mobile Test Automation Best Practices and Test Automation in the World of AI & ML.

Tech Stack

After a lot of consideration, I chose the following tech stack and toolset to implement my automated tests in teswiz: Cucumber for test specification, Selenium WebDriver for web browsers, Appium for mobile apps, Applitools Visual AI for visual testing, and ReportPortal.io for reporting – each of which is covered below.

Test Intent Specification

Using Cucumber, the tests are specified with the following criteria:

  • The test intent should be clear and “speak” business requirements
  • The same test should be able to execute against all supported platforms (assuming feature parity)
  • The clutter of the assertions should not pollute the test intent. That is an implementation detail

Based on these criteria, here is a simple example of how the test can be written.
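
A representative sketch of such a scenario – the step wording below is illustrative, not the exact test:

@android @web
Scenario: Registered user starts an instant call
  Given "I" sign in to MySocialConnect
  When "I" start an instant call
  Then "I" should see myself in the call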

The tags on the above test indicate that the test is implemented and ready for execution against the Android apk and the web browser. 

Multi-User Scenarios

Given the context of MySocialConnect, implementing tests that are able to simulate real meeting scenarios would add the most value – as that is the crux of the product.

Hence, there is support built-in to the teswiz framework to allow implementation of multi-user scenarios. The main criteria for implementing such scenarios are:

  • One test to orchestrate the simulation of multi-user scenarios
  • The test step should indicate “who” is performing the action, and on “which” platform
  • The test framework should be able to manage the interactions for each user on the specified platform.

Here is a simple example of how this test can be specified.
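
An illustrative sketch of such a multi-user scenario (again, the step wording is representative rather than the exact test):

@android @web
Scenario: Two users interact in the same call
  Given "I" start an instant call on "android"
  When "you" join the call on "web"
  Then "I" should see "you" as a participant in the call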

In the above example, there are 2 users – “I” and “you”, each on a different platform – “android” and “web” respectively.

Configurable Framework

The automated tests are run in different ways – depending on the context.

For example, in CI we may want to run all the tests for each of the supported platforms.

However, on local machines, the QA / SDET / Developers may want to run only a specific subset of the tests – be it for debugging, or for verifying a new test implementation.

Also, there may be cases where you want to run the tests against your application deployed in a different environment.

The teswiz framework supports all these configurations, which can be controlled from the command line. This avoids having to make any code or configuration file changes to run a specific subset of tests.
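
As a purely hypothetical illustration – the actual property names and Gradle task depend on your teswiz setup, so check the project README for the real switches – such a run might be narrowed from the command line like this:

# Hypothetical: run only @login-tagged tests against the Android platform
$ PLATFORM=android TAG=@login ./gradlew run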

teswiz Framework Architecture

This is the high-level architecture of the teswiz framework.

Visual Testing & Contrast Advisor

Based on the data from the study done on the “Impact of Visual AI on Test Automation,” Applitools Visual AI helps automate your Functional Tests faster, while making the execution more stable. Along with this, you will get increased test coverage and will be able to find significantly more functional and visual issues compared to the traditional approach.

You can also scale your Test Automation execution seamlessly with the Applitools Ultrafast Test Cloud and use the Contrast Advisor capability to ensure the application-under-test meets the accessibility guidelines of the WCAG 2.0 / 2.1 standards very early in the development stage.

Read this blog post about “Visual Testing – Hype or Reality?” to see some real data of how you can reduce the effort, while increasing the test coverage from our implementation significantly by using Applitools Visual AI.

Hence it was a no-brainer to integrate Applitools Visual AI in the teswiz framework to support adding visual assertions to your implementation simply by providing the APPLITOOLS_API_KEY. Advanced configurations to override the defaults for Applitools can be done via the applitools_config.json file. 

This integration works for all the supported browsers of WebDriver and all platforms supported by Appium.

Reporting

It is very important to have good, rich reports of your test execution. These reports not only help pinpoint the reasons for failing tests, but should also give an understanding of execution trends and the quality of the product under test. 

I have used ReportPortal.io as my reporting tool – it is extremely easy to set up and use, and it lets me attach screenshots, log files, and other relevant information to the test execution to make root cause analysis easy.

How Can You Get Started?

I have open-sourced this teswiz framework so you do not need to reinvent the wheel. See this page to get started – https://github.com/znsio/teswiz#what-is-this-repository-about

Feel free to raise issues / PRs against the project for adding more capabilities that will benefit all.

The post Automating Functional / End-2-End Tests Across Multiple Platforms appeared first on Automated Visual Testing | Applitools.

]]>