Ultrafast Grid Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/ultrafast-grid/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

The Ultimate Guide To End-to-End Testing With Cypress
https://applitools.com/blog/the-ultimate-guide-to-end-to-end-testing-with-cypress/ | Mon, 19 Jun 2023
A guide to the anatomy of the Cypress framework, how it compares to other frameworks, and why it's so popular!


Today’s software applications are becoming more complicated, so every testing team needs to focus on expanding test coverage. To achieve this goal, it is important to use a combination of testing types, such as unit testing, integration testing, system testing, and end-to-end testing, depending on the software application’s complexity and requirements.

End-to-End (E2E) testing is designed to ensure that all components of a software application are working together correctly and that the system as a whole meets the desired functionality, performance, and reliability requirements.

Cypress is a popular open-source end-to-end testing framework for web applications. It is designed to make the testing process easier and more efficient for developers. One of the unique features of Cypress is that it runs the tests within the browser, which means that it can provide better control and visibility over the application under test.
In this blog on end-to-end testing, we will dive deep into performing Cypress end-to-end testing on a local machine and explain how to start automating visual tests with Applitools Eyes and the Ultrafast Grid using Cypress in JavaScript.

What is End to End Testing?

End-to-end (E2E) testing is a software testing strategy that verifies an application’s complete flow from beginning to end. It is a type of functional testing that tests the application’s behavior as a complete system, rather than testing individual components in isolation.

E2E testing simulates a real user scenario and covers all aspects of the application, including user interfaces, APIs, databases, and other integrations. It typically involves testing multiple components of an application to ensure that they work together as expected and fulfill the requirements of the business or end-users.

E2E testing is typically performed after other types of testing, such as unit testing and integration testing, have been completed. It is used to validate that the entire system works together seamlessly and to identify any issues that may have been missed in earlier stages of testing.

Why is end-to-end testing necessary?

End-to-end testing (E2E testing) is a type of software testing that tests the entire system or application from start to finish, simulating real-world user scenarios.

Unit testing alone is not enough to ensure the quality and reliability of software. While unit testing is an important part of the testing process, it only verifies the behavior of individual components or modules of the software in isolation. It does not guarantee that the software will work correctly when integrated with other components or modules.

This is where integration testing enters into the picture. Integration testing focuses on testing the interaction between two or more components of a system, to ensure that they work together correctly. However, even if all the individual components pass integration testing, there may still be issues with the overall system when all the components are put together. This is where end-to-end testing comes in – it tests the entire system from start to finish.

Cypress is a popular automation testing framework that is designed specifically for end-to-end testing. It runs tests directly in the browser, allowing it to provide an experience that is similar to how users interact with the application. This makes it easier to identify any issues that users might face, as the testing environment is as close to the real-world experience as possible.

To understand end-to-end testing, let’s take a closer look at Mike Cohn’s test automation pyramid. Each level of testing listed in this pyramid comes into play when running automated Cypress tests.

Testing Pyramid Layers

The automation pyramid is a popular framework introduced by Mike Cohn that helps teams to plan and prioritize their testing efforts. It includes three levels of testing, which are:

  1. Unit Tests: At the base of the pyramid are the unit tests, which test individual code components such as functions, methods, and classes. Unit tests are typically written by developers and are executed frequently during the development cycle. They are essential in ensuring that individual components of the application work as expected and can catch issues early in the development process.
  2. Integration Tests: The middle layer of the pyramid consists of integration tests, which test how different components of the system work together. Integration tests ensure that the various parts of the application can communicate and interact with each other seamlessly. These tests are typically automated and are executed after the unit tests have passed.
  3. End-to-End Tests: The top layer of the pyramid is end-to-end testing, which tests the entire application workflow from start to finish. These tests simulate real user scenarios and help ensure that the application functions as expected in a production environment. End-to-end tests are typically automated and are executed less frequently than the lower level tests.

Benefits of End-to-End Testing

End-to-end testing offers several benefits, including:

  1. Increased Confidence: E2E testing provides a higher level of confidence in the software application by testing all components together. This testing approach ensures that all the components are integrated correctly and are working as expected.
  2. Improved Quality: Testing the application from end to end helps to identify and fix bugs earlier in the development process. This enhances the overall quality of the software.
  3. Enhanced User Experience: E2E testing ensures that the application is working as expected for the end user. This helps to provide a better user experience and can lead to increased customer satisfaction.
  4. Time and Cost Savings: E2E testing helps to identify issues early in the development cycle, which can save time and money by reducing the need for costly rework later in the process.
  5. Better Collaboration: E2E testing promotes better collaboration between different teams working on the same application. This testing approach helps to identify issues that may be caused by a lack of communication between teams.
  6. Increased Productivity: By automating the testing process, E2E testing can help to increase productivity by reducing the time and effort required to manually test the application.
  7. Faster Time-to-Market: By catching defects earlier in the development process, end-to-end testing can help to reduce delays and accelerate the time-to-market of the application.

Frameworks for End to End testing

There are several popular frameworks for end-to-end testing, including:

Cypress

Cypress is a JavaScript-based end-to-end testing framework that provides a simple and intuitive API for testing web applications. Cypress supports modern web development technologies like React, Angular, Vue.js, and more. It provides a built-in test runner, and it runs tests in the browser, which makes it fast and reliable.

Cypress runs tests inside the browser; it also provides detailed information about what’s happening at every step of the test, including network requests, console output, and DOM changes. This makes it easier to identify and troubleshoot issues and helps ensure that the application is working as intended.

Cypress Trends on GitHub

The following information is taken from the official Cypress GitHub repository:

  • Stars: 43.3k
  • Forks: 2.8k
  • Used By: 797k
  • Releases: 303
  • Contributors: 427

WebdriverIO

WebdriverIO is a popular open-source testing framework for Node.js that allows developers to automate web applications in a simple and efficient way. It uses the WebDriver API to communicate with browsers and supports a variety of testing frameworks, including Mocha, Jasmine, and Cucumber.

WebdriverIO Trends on GitHub

The following information is taken from the official WebdriverIO GitHub repository:

  • Stars: 8.1k
  • Forks: 2.3k
  • Used By: 50.5k
  • Releases: 305
  • Contributors: 491

Nightwatch.js

Nightwatch.js is an open-source Node.js-based end-to-end testing framework used to automate browser testing. It provides a simple and easy-to-use syntax for writing automated tests in JavaScript and allows you to run tests in real web browsers like Chrome, Firefox, and Safari.

Nightwatch.js uses the WebDriver protocol to communicate with the browser and control its behavior. It also includes a powerful built-in assertion library that makes it easy to write test assertions and helps you quickly identify issues with your web application.

Nightwatch.js Trends on GitHub

The following information is taken from the official Nightwatch.js GitHub repository:

  • Stars: 11.4k
  • Forks: 1.1k
  • Used By: 142k
  • Releases: 219
  • Contributors: 112

Protractor

Protractor is an open-source end-to-end testing framework for Angular and AngularJS applications. It is built on top of WebDriverJS and uses Jasmine syntax for writing test scripts. Protractor is designed to simulate user interactions with the application and to verify that the application behaves as expected.

Protractor Trends on GitHub

The following information is taken from the official Protractor GitHub repository:

  • Stars: 8.8k
  • Forks: 2.4k
  • Used By: 1.9m
  • Contributors: 250

TestCafe

TestCafe is an open-source end-to-end testing framework that allows you to automate web testing without using browser plugins. TestCafe is built on top of Node.js and provides a simple and powerful API for testing web applications.

TestCafe Trends on GitHub

The following information is taken from the official TestCafe GitHub repository:

  • Stars: 9.6k
  • Forks: 677
  • Used By: 12.3k
  • Releases: 390
  • Contributors: 117

Benefits of End-to-End Testing Using Cypress

Here are some of the key features and benefits of end-to-end testing with Cypress:

  1. Easy Setup: Cypress has a simple setup process that doesn’t require any additional drivers or libraries. You can get started with Cypress by installing a single package.
  2. Automatic Waiting: Cypress automatically waits for elements to appear and become interactable before executing commands. This ensures that the tests are not affected by the timing of the application’s response.
  3. Real-time Reloads: Cypress provides real-time reloads, which means that as you make changes to your code or tests, the application will automatically reload, and the tests will be re-run.
  4. Interactive Debugging: Cypress provides an interactive test runner, which allows you to debug your tests by stepping through them, setting breakpoints, and viewing the application’s state at any point in time.
  5. Time Travel: Cypress allows you to go back and forth in time to see what happened during the execution of a test. This feature is useful for debugging and understanding the behavior of your application.
  6. Cross-browser Testing: Cypress allows you to run your tests on multiple browsers and viewports simultaneously. This helps you ensure that your application works correctly across different environments.
  7. Network Traffic Control: Cypress allows you to control the network traffic of your application. You can stub, spy on, and mock network requests to simulate different scenarios (a brief sketch follows this list).
  8. Automatic screenshots and videos: Cypress automatically takes screenshots and records videos of your tests, which makes it easy to see what went wrong when a test fails.
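As a brief illustration of the network traffic control mentioned above, the sketch below stubs an API call with cy.intercept(); the route, fixture, and selector are placeholders, not taken from the original article.

it('stubs the users API and asserts against the stubbed response', () => {
  // Replace the real response with a fixture and alias the route for later waiting.
  cy.intercept('GET', '/api/users', { fixture: 'users.json' }).as('getUsers');

  cy.visit('/');        // assumes baseUrl is configured
  cy.wait('@getUsers'); // proceed only after the stubbed request has fired

  cy.get('[data-testid="user-list"] li').should('have.length.greaterThan', 0);
});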

Set up Cypress For End to End Testing

To create a new project for Cypress automated testing, follow the steps listed below.

Step 1: Generate package.json.

  • Create a project folder; let’s name it cypress_applitools
  • Use the npm init command to create a package.json file

Step 2: Install Cypress.

Install Cypress by running the command in the newly created folder:

npm install cypress --save-dev

OR

yarn add cypress --dev

The above command will install Cypress locally as a dev dependency for your project.

As shown below, Cypress version 12.11.0 is reflected after installation. This was the newest Cypress version at the time this blog was written.
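Once installed, you can launch the interactive test runner with npx cypress open (or yarn cypress open); on its first launch, Cypress scaffolds the default folder structure described below.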

Below is a diagram of Cypress’s default folder layout. The “e2e” folder is where test cases can be created.

About Project structure of Cypress

Cypress builds a default folder hierarchy when it opens for the first time, as can be seen in the screenshots. Each of the files and folders that Cypress creates is described in detail below.

  • e2e: All test cases are stored under this folder. This folder contains the actual test files, written in JavaScript, that define the tests to be run.
  • fixtures: This folder contains any data files that are needed for the tests, such as JSON or CSV files.
  • support: There are two files inside the support folder: commands.js and e2e.js.
    • commands.js: This is the file where your frequently used functions and custom commands are added. It can hold functions, such as a login function, that you may reuse across various tests. You can also override some of the commands Cypress provides for you right here.
    • e2e.js: This file is executed before each and every spec file. It is an excellent location for global configuration and behavior that modifies Cypress, much like a global before or beforeEach hook. By default it just imports commands.js, but you can import or require more files to keep things organized.
  • node_modules: The node_modules directory contains all the installed Node packages, and all test files have access to them. When you install Node packages using npm, they are downloaded and installed into the node_modules directory, which is located in the root directory of your project.
  • cypress.config.js: This is the configuration file used by Cypress to override the default configuration settings for a project. It replaces the cypress.json file used in versions prior to Cypress 10.

Some examples of configuration options that can be set in the Cypress configuration file include (a minimal sketch of such a file follows this list):

  • baseUrl: The base URL for the application being tested.
  • specPattern / excludeSpecPattern: Patterns describing which spec files to include in or exclude from the test suite.
  • video: Configuration options for Cypress video recording.
  • screenshots: Configuration options for Cypress screenshots.
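For illustration, a minimal Cypress 10+ configuration file might look like the sketch below; the baseUrl and spec path are placeholders, not values from the original article.

// cypress.config.js - a minimal sketch; values are placeholders.
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  video: false,                 // disable video recording for faster local runs
  screenshotOnRunFailure: true, // capture a screenshot when a test fails
  e2e: {
    baseUrl: 'https://example.cypress.io',      // base URL for the application under test
    specPattern: 'cypress/e2e/**/*.cy.{js,ts}', // which spec files to run
  },
});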

Basic constructs of Cypress

Cypress uses Mocha’s syntax for developing test cases. Key constructs that are frequently used in Cypress test development are listed below, followed by a short example spec.

  • describe(): This method is used in Cypress (via Mocha’s syntax) to group together related test cases. It takes two arguments: a string that describes the group of test cases (e.g. “Login Page Tests”) and a callback function that contains the individual test cases (defined using the it() method).
  • it(): This method is used to define an individual test case. It requires two arguments: a string that specifies the test scenario and a callback function that has the test code itself.
  • before(): This method runs its callback once, before any test case in the block. It takes one argument: a callback function that contains the setup code to be executed before any of the test cases.
  • after(): This method runs a cleanup once all the test cases have executed. It takes one argument: a callback function that contains the cleanup code to be executed after all the test cases.
  • beforeEach(): This method runs its callback before each test case. It takes one argument: a callback function that contains the code to be executed before each test case.
  • afterEach(): This method runs a cleanup function after each test case. It takes one argument: a callback function that contains the cleanup code to be executed after each test case.
  • .only(): It is used to run a specified suite or test exclusively, ignoring all other tests and suites. This can be useful when you’re debugging a specific test case or working on a specific suite of tests, and you want to focus on that specific test case or suite without running any others.
  • .skip(): It is used to skip a specified suite or test, effectively ignoring it during test execution. This can be useful when you’re working on a test suite or test case that isn’t ready to be run yet, or when you want to temporarily disable a test without deleting it.
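To tie these constructs together, here is a minimal example spec; it is a sketch, and the URL, names, and selectors are placeholders rather than values from the original article.

// cypress/e2e/login.cy.js - a minimal sketch using the constructs above.
describe('Login Page Tests', () => {
  before(() => {
    // Runs once before all tests in this block, e.g. to seed test data.
    cy.log('setting up test data');
  });

  beforeEach(() => {
    // Runs before every test: start each case from a known page.
    cy.visit('https://example.cypress.io');
  });

  it('loads the page', () => {
    cy.contains('Kitchen Sink').should('be.visible');
  });

  it.skip('is not ready yet', () => {
    // Skipped until the feature under test is implemented.
  });

  afterEach(() => {
    // Runs after every test, e.g. to clear state.
    cy.clearCookies();
  });

  after(() => {
    cy.log('cleaning up');
  });
});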

How We Identified and Resolved a Bug Before Release Using Applitools Ultrafast Grid
https://applitools.com/blog/cross-browser-testing-at-applitools-using-ultrafast-grid/ | Tue, 07 Feb 2023

Applitools Ultrafast Grid cross-browser testing

This is a story about how standard tests were not able to identify a bug, as the CSS and HTML files were valid, but when rendered on Chrome on a Mac images were not displayed correctly. Applitools Ultrafast Grid (UFG) helped us identify the bug at an early stage in development, before deploying the change. These types of bugs are a regular occurrence in any organization, and without UFG, these bugs can easily make it to production and remain there undetected until a customer complains about the problem. Translation support from Michael Sedley.

Front end development is complicated, and it involves a wide range of knowledge and tools to develop web applications. Regression testing across different systems, browsers, and devices makes it almost impossible to be sure that an application will display correctly on every system and that there are no visual regressions as a result of a minor code change.

A real-life example occurred to me during my first week as an Applitools employee, when I fixed a minor bug using my Linux machine. Inadvertently, in the process, I created a more serious visual bug which was only visible on certain devices.

Had UFG not alerted me to the bug, the code would have gone to production and the result would have affected the usability of our flagship product on a Mac. This would have reflected badly on the professionalism of the company and would have damaged the company’s reputation, trust in our product, and sales.

Understanding the problem

In recent months, we improved Applitools Eyes’ ability to perform visual testing on images which are semi-transparent. In the past, Eyes would test an entire screen or defined region, but now using the Storybook SDK, users can automatically test each component separately, without needing to define a test for each component.

For example, when testing a gallery component, Eyes can identify visual bugs and regressions over all screen elements, including the appearance of buttons, controls, fonts, shadows, images, as well as backgrounds that include a transparency gradient.

Figure 1: Transparent Background

After implementing the transparency feature, a visual bug was reported.  In a screen capture of a semi-transparent screen region, unexpected grid lines appeared on top of the tested image.

The root cause of these lines wasn’t clear, so as a first step, we developed a test plan to reproduce the issue. I created a semi-transparent image, all gray (rgb = 127,127,127), with a constant alpha (transparency) channel (alpha=0.5). Fortunately, the bug was easily reproduced and I managed to create easily identifiable grid lines:

As I experimented with different transparency settings, it was clear that the color of the grid lines was the same as the color of the image, and it became stronger as the image transparency was lower.

After further investigation, I discovered that the image viewer component uses tiles to represent large images, and the tiles had a one pixel overlap. In the past, when all images were RGB images with no transparency, the overlapping pixel was not visible to the human eye. When I added semi-transparency, as the adjacent tiles were stacked on top of each other, sampling the resulting color inside the grid line produced a 192 grayscale value, which is the exact outcome of stacking two half-transparent gray layers over a white background:

(white ⋅ 0.5 + gray ⋅ 0.5) ⋅ 0.5 + gray ⋅ 0.5, given white = 255 and gray = 128

Solving the preliminary bug

To resolve this bug, I recalculated the position and scale of each region so that there would not be an overlap and the line between regions would not appear.
For example, if the first tile had width: 480px and left: 0, the next adjacent tile should be positioned using left: 480, so that there is a zero pixel overlap between tiles.
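As a rough sketch of the idea (the names below are hypothetical, not the actual viewer code):

// Lay tiles edge-to-edge so adjacent tiles no longer overlap by one pixel.
const TILE_WIDTH = 480;

function tileLeft(tileIndex) {
  // Tile i starts exactly where tile i-1 ends: 0, 480, 960, ...
  return tileIndex * TILE_WIDTH;
}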

I tested the results on my local (Linux) machine and assumed that the issue was resolved.

I didn’t realize that when I fixed this bug, I had also created a new issue which would have been almost impossible to anticipate.

How UFG identified the bug I created before deployment

At Applitools, we understand the importance of quality visual testing across browsers, so before deployment, every code change that impacts the Eyes interface must be tested by Applitools Eyes using UFG.

We are proud to “eat our own dogfood.” We rely on our visual testing tools to make sure that our products are visually perfect before release.

Our integration pipeline is configured to use UFG to test the UI change on multiple devices and screen settings so that we can confirm that the interface is consistent on every browser, operating system, and screen size.

We discovered that fixing the bug of a one pixel overlap created a new bug on certain systems where there was a gap visible between tiles.  Frustratingly, this bug was not reproducible in any of the devices used in development, and could not have been discovered with conventional manual visual testing.

The bug was only visible on screens with a Retina display, which use HiDPI scaling.

What was interesting about this bug is that it highlighted an inconsistency in the way the same browser (Chrome) displays the same UI on different screen types.

What happened?

The bug and the solution

After some research, it turns out that there is (seemingly) a bug in the way Chrome behaves on Mac computers with Retina display (see 1, 2, 3). It turns out that using percentages or fractions of pixels for positioning and scaling of elements can lead to unexpected results.

So, what is the solution?

The solution itself is very elegant – all we had to do was to round each canvas scaling so that the canvas size would always be an integer:

scale = Math.round(scale * canvasSize) / canvasSize;

Thus, if the width of the canvas is 480 and our scale factor is 0.17, the width of our scaled canvas would not be 480 * 0.17 = 81.6, but would be 82 – this way we maintain compatibility with Retina displays and prevent unwanted gaps from being created.
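Wrapped up as a small helper (a sketch, not the production code):

// Snap a scale factor so the scaled canvas size is always a whole number of pixels.
function snapScale(scale, canvasSize) {
  return Math.round(scale * canvasSize) / canvasSize;
}

const adjusted = snapScale(0.17, 480); // ~0.170833
console.log(480 * adjusted);           // 82: an integer width, so no sub-pixel gaps on HiDPI screens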

This bug was easy to resolve once we were aware of it, but without UFG we would never have identified it using any of our test computers.

Conclusion

Maintaining a quality front end for all configurations is an ongoing challenge in every company and every organization.

Solving a bug for one audience can create a bigger bug for a wider audience. In this article, we saw a classic example of a malfunction, where the initial solution we implemented only made things worse.

The number of users who use Applitools Eyes for testing semi-transparent components is significantly lower than the number of Eyes users who work with Retina displays (most Apple users) – so the initial approach we took to solve the problem could have caused significantly more harm than good. Even worse – we could have caused significant damage to the user experience and not known about it. No modern organization wants to rely on frustrated customer feedback to discover bugs in their application or websites.

Using UFG reduces the likelihood that errors of this type will pass under the radar and allows developers, product managers, and all stakeholders in the development process to significantly reduce the fear factor in deploying new features. The UFG is insurance against platform-dependent visual bugs and provides the ability to perform true multi-platform coverage.

Don’t wait to discover your visual bugs from user reports. We invite you to try UFG – our team of experts is here to help with any questions or problems and to assist you in migrating Applitools Eyes and UFG into your integration pipeline. For more information, see Introduction to the Ultrafast Grid in the Applitools Knowledge Center.

What is Cross Browser Testing? Examples & Best Practices
https://applitools.com/blog/guide-to-cross-browser-testing/ | Thu, 14 Jul 2022


In this guide, learn everything you need to know about cross-browser testing, including examples, a comparison of different implementation options and how you can get started with cross-browser testing today.

What is Cross Browser Testing?

Cross Browser Testing is a testing method for validating that the application under test works as expected across different browsers, devices, and viewport sizes. It can be done manually or as part of a test automation strategy. The tooling required for this activity can be built in-house or provided by external vendors.

Why is Cross Browser Testing Important?

When I began in QA I didn’t understand why cross-browser testing was important. But it quickly became clear to me that applications frequently render differently at different viewport sizes and with different browser types. This can be a complex issue to test effectively, as the number of combinations required to achieve full coverage can become very large.

A Cross Browser Testing Example

Here’s an example of what you might look for when performing cross-browser testing. Let’s say we’re working on an insurance application. I, as a user, should be able to view my insurance policy details on the website, using any browser on my laptop or desktop. 

This should be possible while ensuring:

  • The features remain the same
  • The look and feel, UI or cosmetic effects are the same
  • Security standards are maintained

How to Implement Cross Browser Testing 

There are various aspects to consider while implementing your cross-browser testing strategy.

Understand the scope == Data!

“Different devices and browsers: chrome, safari, firefox, edge”

Thankfully IE is not in the list anymore (for most)!

You should first figure out the important combinations of devices and browsers and viewport sizes your userbase is accessing your application from. 

PS: Each team member should have access to the analytics data of the product to understand patterns of usage of the product. This data, which includes OS and browser details (type, version, viewport sizes), is essential to plan and test proactively, instead of later reacting to situations (= defects).

This will tell you the different browser types, browser versions, devices, viewport sizes you need to consider in your testing and test automation strategy.

Cross Browser Testing Techniques

There are various ways you can perform cross-browser testing. Let’s understand them.

Local Setup -> On a Single (Dev / QA) Machine

We usually have multiple browsers on our laptop / desktops. While there are other ways to get started, it is probably simplest to start implementing your cross browser tests here. You also need a local setup to enable debugging and maintaining / updating the tests. 

If mobile-web is part of the strategy, then you also need to have the relevant setup available on local machines to enable that.

Setting up the Infrastructure

While this may seem the easiest, it can get out of control very quickly. 

Examples:

  • You may not be able to install all supported browsers on your computer (ex: Safari is not supported on Windows OS). 
  • Browser vendors keep releasing new versions very frequently. You need to keep your browser drivers in sync with this.
  • Maintaining / using older versions of the browsers may not be very straightforward.
  • If you need to run tests on mobile devices, you may not have access to all the variety of devices. So setting up local emulators may be a way to proceed.

The choices can actually vary based on the requirements of the project and on a case by case basis.

As alternatives, we can either build an in-house testing solution or go for a platform / license / third-party tool to support our device farm needs.

In-House Setup of Central Infrastructure

You can set up a central infrastructure of browsers and emulators or real devices in your organization that can be leveraged by the teams. You will also need some software to manage the usage and allocation of these browsers and devices. 

This infrastructure can potentially be used in the following ways:

  • Triggered from local machine
    Tests can be triggered from any dev / QA machine to run on the central infrastructure.
  • For CI execution
    Tests triggered via Continuous Integration (CI), like Jenkins, CircleCI, Azure DevOps, TeamCity, etc. can be run against browsers / emulators setup on the central infrastructure. 

Cloud Solution    

You can also opt to run the tests against browsers / devices in a cloud-based solution. You can select different device / browser options offered by various providers in the market that give you the wide coverage as per your requirements, without having to build / maintain / manage the same. This can also be used to run tests triggered from local machines, or from CI.

Modern, AI-Based Cross Browser Testing Solution: Applitools Ultrafast Test Cloud 

It is important to understand the evolution of browsers in recent years. 

  • They have started conforming to the W3C standard. 
  • They seem to have started adopting Continuous Delivery – well, at least releasing new versions at a very fast pace, sometimes multiple versions a week.
  • In a major development a lot of major browsers are adopting and building on the Chromium codebase. This makes these browsers very similar, except the rendering part – which is still pretty browser specific.

We need to factor this change in our cross browser testing strategy. 

In addition, AI-based cross-browser testing solutions are becoming quite popular, which use machine learning to help scale your automation execution and get deep insights into the results – from a functional, performance and user-experience perspective.

To get hands-on experience in this, I signed-up for a free Applitools account, which uses a powerful Visual AI, and implemented a few tests using this tutorial as a reference.

How Does Applitools Visual AI Work as a Solution for Cross Browser Testing

Integration with Applitools

Integrating Applitools with your functional automation is extremely easy. Simply select the relevant Applitools SDK based on your functional automation tech stack from here, and follow the detailed tutorial to get started.

Now, at any place in your test execution where you need functional or visual validation, add methods like eyes.checkWindow(), and you are set to run your test against any browser or device of your choice.

Reference: https://applitools.com/tutorials/overview/how-it-works.html
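For example, with the Applitools Eyes SDK for Cypress (@applitools/eyes-cypress), a visual check might look like the sketch below; the app name, test name, and URL are illustrative, not taken from the original article.

describe('Visual checks', () => {
  it('captures the home page', () => {
    cy.eyesOpen({ appName: 'Demo App', testName: 'Home page' }); // start an Eyes test
    cy.visit('https://example.cypress.io');
    cy.eyesCheckWindow('Home page');                             // capture and send a full-page check
    cy.eyesClose();                                              // close the Eyes test
  });
});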

AI-Based Cross Browser Testing

Now you have your tests ready and running against a specific browser or device, scaling for cross-browser testing is the next step.

What if I told you that, with just the addition of the different device combinations, you could leverage the same single script to get functional and visual test results on all of the specified combinations, covering the cross-browser testing aspect as well?

Seems too far-fetched?

It isn’t. That is exactly what Applitools Ultrafast Test Cloud does!

Adding the lines of code below will do the magic. You can also change the configurations as per your requirements.

(Below example is from the Selenium-Java SDK. Similar configuration can be supplied for the other SDKs.)

// Add browsers with different viewports
config.addBrowser(800, 600, BrowserType.CHROME);
config.addBrowser(700, 500, BrowserType.FIREFOX);
config.addBrowser(1600, 1200, BrowserType.IE_11);
config.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
config.addBrowser(800, 600, BrowserType.SAFARI);

// Add mobile emulation devices in Portrait mode
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
config.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);

// Set the configuration object to eyes
eyes.setConfiguration(config);

Now when you run the test again, say against Chrome browser on your laptop, in the Applitools dashboard, you will see results for all the browser and device combinations provided above.

You may be wondering, the test ran just once on the Chrome browser. How did the results from all other browsers and devices come up? And so fast?

This is what Applitools Ultrafast Grid (a part of the Ultrafast Test Cloud) does under the hood:

  • When the test starts, the browser configuration is passed from the test execution to the Ultrafast Grid.
  • For every eyes.checkWindow call, the information captured (DOM, CSS, etc.) is sent to the Ultrafast Grid.
  • The Ultrafast Grid will render the same page / screen on each browser / device provided by the test – (think of this as playing a downloaded video in airplane mode).
  • Once rendered in each browser / device, a visual comparison is done and the results are sent to the Applitools dashboard.

What I like about this AI-based solution, is that:

  • I create my automation scripts for different purposes – functional, visual, cross browser testing, in one go
  • There is no need of maintaining devices 
  • There is no need to create different set-ups for different types of testing
  • The AI algorithms start providing results from the first run – “no training required”
  • I can leverage the solution on any kind of setup 
    • i.e. running the scripts through my IDE, terminal, or CI/CD 
  • I can leverage the solution for web, mobile web, and native apps
  • I can integrate visual testing results as part of my CI execution
  • Rich information available in the dashboard including ease of updating the baselines, doing Root Cause Analysis, reporting defects in Jira or Rally, etc.
  • I can ensure there are no Contrast issues (part of Accessibility testing) in my execution at scale

Here is the screenshot of the Applitools dashboard after I ran my sample tests:

Cross Browser Testing Tools and Applitools Visual AI

The Ultrafast Test Grid and Applitools Visual AI can be integrated into many popular free and open source test automation frameworks to easily supercharge their effectiveness as cross-browser testing tools.

Cross Browser Testing in Selenium

As you saw above in my code sample, Ultrafast Grid is compatible with Selenium. Selenium is the most popular open source test automation framework. It is possible to perform cross browser testing with Selenium out of the box, but Ultrafast Grid offers some significant advantages. Check out this article for a full comparison of using an in-house Selenium Grid vs using Applitools.

Cross Browser Testing in Cypress

Cypress is another very popular open source test automation framework. However, it can only natively run tests against a few browsers at the moment – Chrome, Edge and Firefox. The Applitools Ultrafast Grid allows you to expand this list to include all browsers. See this post on how to perform cross-browser tests with Cypress on all browsers.
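For illustration, when using the @applitools/eyes-cypress SDK, the Ultrafast Grid browsers can be declared in applitools.config.js; the sketch below uses example browser names and sizes, not values from the original article.

// applitools.config.js - example Ultrafast Grid configuration for the Cypress Eyes SDK.
module.exports = {
  apiKey: process.env.APPLITOOLS_API_KEY, // read the API key from the environment
  testConcurrency: 5,
  browser: [
    { width: 1280, height: 800, name: 'chrome' },
    { width: 1280, height: 800, name: 'firefox' },
    { width: 1280, height: 800, name: 'safari' },
    { deviceName: 'iPhone X', screenOrientation: 'portrait' },
  ],
};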

Cross Browser Testing in Playwright

Playwright is an open source test automation framework that is newer than both Cypress and Selenium, but it is growing quickly in popularity. Playwright has some limitations on doing cross-browser testing natively, because it tests “browser projects” and not full browsers. The Ultrafast Grid overcomes this limitation. You can read more about how to run cross-browser Playwright tests against any browser.

Pros and Cons of Each Technique (Comparison)

Infrastructure
  • Local Setup – Pros: fast feedback on the local machine. Cons: setup needs to be repeated for each machine where the tests need to execute; not all configurations can be set up locally.
  • In-House Setup – Pros: no inbound / outbound connectivity required. Cons: needs considerable effort to set up, maintain, and update the infrastructure on a continued basis.
  • Cloud Solution – Pros: no effort required to build / maintain / update the infrastructure. Cons: needs inbound and outbound connectivity from the internal network; latency issues may be seen as requests go to cloud-based browsers / devices.
  • AI-Based Solution (Applitools) – Pros: no effort required to set up.

Setup and Maintenance
  • Local Setup: to be taken care of by each team member from time to time, including OS / browser version updates.
  • In-House Setup: to be taken care of by the internal team from time to time, including OS / browser version updates.
  • Cloud Solution: to be taken care of by the service provider.
  • AI-Based Solution (Applitools): to be taken care of by the service provider.

Speed of Feedback
  • Local Setup: slowest, as all dependencies must be taken care of and the test needs to be repeated for each browser / device combination.
  • In-House Setup: depends on concurrent usage due to multiple test runs.
  • Cloud Solution: depends on network latency; network issues may cause intermittent failures; also depends on the reliability and connectivity of the service provider.
  • AI-Based Solution (Applitools): fast and seamless scaling.

Security
  • Local Setup: best, as everything is in-house, using internal firewalls, VPNs, network, and data storage.
  • In-House Setup: best, as everything is in-house, using internal firewalls, VPNs, network, and data storage.
  • Cloud Solution: high risk; needs inbound network access from the service provider to the internal test environments; browsers / devices have access to the data generated by running the test, so cleanup is essential; no control over who has access to the cloud service provider's infrastructure and whether they access your internal resources.
  • AI-Based Solution (Applitools): low risk; there is no inbound connection to your internal infrastructure; tests run on the internal network, so no data is stored on Applitools servers other than the screenshots used for comparison with the baseline.

My Learning from this Experience

  • A good cross browser testing strategy allows you to reduce the risk of functionality and visual experience not working as expected on the browsers and devices used by your users. A good strategy will also optimize the testing efforts required to do this. To allow this, you need data to provide the insights from your users.
  • Having a holistic view of how your team will be leveraging cross browser testing (ex: manual testing, automation, local executions, CI-based execution, etc.) is important to know before you start off with your implementation.
  • Sometimes the easiest way may not be the best – for example, using the browsers on your computer to automate against will not scale. At the same time, using technology like Applitools Ultrafast Test Cloud is very easy – you end up writing less code and get increased functional and visual coverage at scale.
  • You need to think about the ROI of your approach and if it achieves the objectives of the need for cross browser testing. ROI calculation should include:
    • Effort to implement, maintain, execute and scale the tests
    • Effort to set up, and maintain the infrastructure (hardware and software components)
    • Ability to get deterministic & reliable feedback from test execution

Summary

Depending on your project strategy, scope, manual or automation requirements and of course, the hardware or infrastructure combinations, you should make a choice that not only suits the requirements but gives you the best returns and results. 

Based on my past experiences, I am very excited about the Applitools Ultrafast Test Cloud – a unique way to scale test automation seamlessly. In the process, I ended up writing less code and got amazingly high test coverage with very high accuracy. I recommend everyone try this and experience it for themselves!

Get Started Today

Want to get started with Applitools today? Sign up for a free account and check out our docs to get up and running today, or schedule a demo and we’ll be happy to answer any questions you may have.

Editor’s Note: This post was originally published in January 2022, and has been updated for accuracy and completeness.

Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid
https://applitools.com/blog/comparing-cross-browser-testing-tools-selenium-grid-vs-applitools-ultrafast-grid/ | Wed, 29 Jun 2022


How can you choose the best cross-browser testing tool for your needs? We’ll review the challenges of cross-browser testing and consider some leading cross-browser testing solutions.

Nowadays, testing a website or an app using a single browser or device will lead to disastrous consequences, and testing the same website or app on multiple browsers using ONLY the traditional functional testing approach may lead to production issues and lots of visual bugs.

Combinations of browsers, devices, viewports, and screen orientations (portrait or landscape) can reach the thousands. Performing manual testing on this vast number of possibilities is no longer feasible, nor is simply running the usual functional testing scripts and hoping to cover the most critical aspects, regions, or functionalities of our sites.

In this article, we are going to focus on the challenges and leading solutions for cross-browser testing. 

The Challenges of Cross Browser Testing 

What is Cross Browser Testing?

Cross-browser testing makes sure that your web apps work across different web browsers and devices. Usually, you want to cover the most popular browser configurations or the ones specified as supported browsers/devices based on your organization’s products and services.

Why Do We Need Cross Browser Testing?

Basically, because rendering differs between browsers and modern web apps use responsive design. You also have to consider that each web browser handles JavaScript differently, and each browser may render things differently based on different viewports or device screen sizes. These rendering differences can result in costly bugs and a negative user experience.

Challenges of Cross Browser Testing Today

Cross-browser testing has been around for quite some time now. Traditionally, testers run multiple tests and test in parallel on different browsers and this is fine, from a functional point of view. 

Today, we know for a fact that running only these kinds of traditional functional tests across a set of browsers does not guarantee your website or app’s integrity. But let’s define and understand the difference between Traditional Functional Testing and Visual Testing. Traditional functional testing is a type of software testing where the basic functionalities of an app are tested against a set of specifications. On the other hand, Visual Testing allows you to test for visual bugs, which are extremely difficult to uncover with the traditional functional testing approach.

As mentioned, traditional functional testing on its own will not capture the visual testing aspect and could lead to a lack of coverage. You have to take into consideration the possibility of visual bugs, regardless of the number of elements you actually test. Even if you tested all of them, you may encounter visual bugs that lead to false negatives, which means your testing was done, your tests passed, and you still did not capture the bug.

Today we have mobile and IoT device proliferation, complex responsive design viewport requirements, and dynamic content. Since rendering the UI is subjective, the majority of cross-browser defects are visual.

To handle all these possibilities or scenarios, you need a tool or framework that not only runs tests but provides reliable feedback – and not just false positives or tests pending to be approved or rejected. 

When it comes to cross-browser testing, you have several options, same as for visual testing. In this article, we will explore some of the most popular cross-browser testing tools. 

Cross-Browser Testing with Your Own In-House Selenium Grid 

If you have the resources, time, and knowledge, you can spin up your own Selenium Grid and do some cross-browser testing. This may be useful based on your project size and approach.

As mentioned, if you understand the components and steps to accomplish this, go for it! 

Now, be aware that maintaining a home-grown Selenium Grid cluster is not an easy task. You may find some difficulties or issues when running and maintaining hundreds of browser nodes. Because of this, most companies end up outsourcing this task to vendors like BrowserStack or LambdaTest, in order to save time and energy and bring more stability to their Selenium Grid infrastructure.

Most of these vendors are really expensive, which means that you will need to have a dedicated project budget just for running your UI tests on their cloud. Not to mention the packages or plans you’ll have to acquire to run a decent amount of parallel tests.  

Considerations when Choosing Selenium Grid Solutions

When it comes to cross-browser testing and visual testing, you could use any of the available tools or frameworks, for instance LambdaTest or BrowserStack. But how can we choose? Which one is better? Are they all offering the same thing? 

Before choosing any Selenium Grid solution, there are some key inherent issues that we must take into consideration:

  1. With a Selenium Grid solution, you need to run each test multiple times on each and every browser/device that you would like to cover, resulting in much higher maintenance (if your tests fail 5% of the time, and you now need to run each test 10 times on 10 different environments, you are adding much more failure/maintenance overhead).
  2. Cloud-based Selenium Grid solutions require a constant connection between the machine inside your network that is running the test and the browser in the cloud for the entire test execution time. Many grid solutions have reliability issues around this, causing environment/connection failures on some tests, and when executing tests at scale this results in additional failures that the team needs to analyze.
  3. If you try to use cloud-based Selenium Grid solutions to test an internal application, you need to set up a tunnel from the cloud grid to your company’s network, which creates a security risk and adds additional performance/reliability issues.
  4. Another critical factor for traditional “WebDriver-as-a-Service” platforms is speed. Tests can take 2-4x as much time to complete on those platforms compared to running them on local machines.

Cross-Browser Testing with Applitools Ultrafast Grid

Applitools Ultrafast Grid is the next generation of cross-browser testing. With the Ultrafast Grid, you can run functional and visual tests once, and it instantly renders all screens across all combinations of browsers, devices, and viewports. 

Visual AI is a technology that improves snapshot comparisons. It goes deeper than pixel-to-pixel comparisons to identify changes that would be meaningful to the human eye.

Visual snapshots provide a much more robust, comprehensive, and simpler mechanism for automating verifications. Instead of writing hundreds of lines of assertions with locators, you can write a single-line snapshot capture using Applitools Eyes.

When you compound that stability with the modern cross-platform testing technology of the Ultrafast Test Grid, that stability multiplies. This improved efficiency guarantees delivery of high-quality apps, on time and without the need for multiple suites or test scripts.

Think about the time it currently takes to complete a full testing cycle on your end using traditional cross-browser testing solutions, from installing, writing, running, analyzing, and reporting to maintaining your tests. Engineers now have the Ultrafast Grid and Visual AI technology, which can easily be set up in your framework and is capable of testing large, modern apps across multiple environments in just minutes.

Traditional cross-browser testing solutions that offer visual testing usually provide it as a separate feature or add-on that you have to pay for. What this feature basically does is take screenshots for you to compare with other screenshots previously taken. You can imagine the amount of time it takes to accept or reject all these tests, and most of them will not necessarily bring useful intel, as the website or app may not change from one day to the next.

The Ultrafast Grid goes beyond simple screenshots. Applitools SDKs upload DOM snapshots, not screenshots, to the Ultrafast Grid. Snapshots include all the resources needed to render a page (HTML, CSS, …) and are much smaller than screenshots, so they upload faster.

To learn more about the Ultrafast Grid functionality and configuration, take a look at this article > https://applitools.com/docs/topics/overview/using-the-ultrafast-grid.html

Benefits and Differences when using the Applitools Ultrafast Grid

Here are some of the benefits and differences you’ll find when using this framework:

  1. The Ultrafast Grid uses containers to render web pages on different browsers in a much faster and more reliable way, maximizing speed.
  2. The Ultrafast Grid does not always upload a snapshot for every page. If a page’s resources didn’t change, the Ultrafast Grid doesn’t upload them again. Since most page resources don’t change from one test run to another, there’s less to transfer, and upload times are measured in milliseconds.
  3. As mentioned above, with the Applitools Ultrafast Grid, you only need to run the test once and you’ll get the results from all browsers/devices. Now that most browsers are W3C compliant, the chances of facing functional differences between browsers (e.g. a button clicks on one browser and doesn’t click on another) are negligible, so it’s sufficient to run the functional tests just once, and you will still find the common browser compatibility issues, such as rendering/visual differences between browsers.
  4. You can use one algorithm on top of the other. Other solutions only offer the possibility of setting a level of comparison based on three modes, either Strict, Suggested (Normal), or Relax, and this is useful to some extent. But what happens if you need a certain region of the page to use a different comparison algorithm? This is possible using the Applitools Region Types feature (see the sketch after this list):
  Images courtesy of the AKC

  5. All of the above happens on multiple browser and device combinations at the same time. This is possible using the Ultrafast Grid configuration. For more information check out this article > https://applitools.com/docs/topics/sdk/vg-configuration.html
  6. Applitools offers a free account that lets you use most of the framework’s features. This is really helpful, as you can explore and use high-level features like Visual AI, cross-browser testing, and visual testing without having to worry about the minutes left on your free trial, as with other solutions.
  7. One of the unique features of Applitools is its automated maintenance capability, which removes the need to approve or reject the same change across different screens/devices. This reduces the overhead involved with managing baselines from different browsers and device configurations.
Images courtesy of the AKC
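
To make the region idea concrete, here is a rough sketch using the Eyes SDK for Cypress syntax; the selectors are invented for illustration, and option names can vary slightly between SDK versions:

// Mix comparison algorithms within a single snapshot (illustrative selectors)
cy.eyesCheckWindow({
  tag: 'Product page',
  fully: true,
  layout: [{ selector: '.recommended-products' }], // layout algorithm for a dynamic region
  strict: [{ selector: '#header' }],               // stricter comparison for the header
});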

Final Thoughts

Selenium Grid solutions are everywhere, and the price varies between vendors and features. If you have infinite time, infinite resources, and an infinite budget, it would be ideal to run all the tests on all the browsers and analyze the results on every code change/build. But for a company trying to optimize its velocity and run tests on every pull request/build, the Applitools Ultrafast Grid provides a compelling balance between performance, stability, cost, and risk.

The post Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid appeared first on Automated Visual Testing | Applitools.

]]>
Applitools Recognized as ‘Major Player’ in IDC MarketScape: Worldwide Cloud Testing 2022 Vendor Assessment https://applitools.com/blog/applitools-recognized-major-player-idc-marketscape-worldwide-cloud-testing-2022/ Fri, 29 Apr 2022 19:31:44 +0000 https://applitools.com/?p=37669 Applitools is proud to announce its positioning in the Major Players category in the IDC MarketScape: Worldwide Cloud Testing 2022 Assessment.

The post Applitools Recognized as ‘Major Player’ in IDC MarketScape: Worldwide Cloud Testing 2022 Vendor Assessment appeared first on Automated Visual Testing | Applitools.

]]>

Applitools is proud to announce its positioning in the Major Players category in the IDC MarketScape: Worldwide Cloud Testing 2022 Vendor Assessment — Empowering Business Velocity. Applitools was also positioned in the Major Players category in the IDC MarketScape: Worldwide Mobile Testing and Digital Quality 2022 Vendor Assessment — Enabling Multimodal Dynamism for Digital Innovation.

Written by Melinda-Carol Ballou, Research Director, Agile ALM, Quality & Portfolio Strategies at IDC Research, this IDC study uses the IDC MarketScape model to provide an assessment of 24 vendors for worldwide cloud testing and enterprise automated software quality (ASQ) SaaS solutions.

“Participating vendors needed to have sufficient cloud testing automated software quality capabilities available in key areas of concern (e.g., test infrastructure provisioning and configuration management; deep analytics for analysis of performance optimization, service virtualization, and architectural and other analysis to enable visibility into the health of applications deployed in native and hybrid cloud; readiness for software targeting the cloud; and/or delivery of their ASQ software solution in the cloud with partner integration for other capabilities) for IDC clients.”

Source: IDC

Applitools products considered as part of these vendor assessments include Applitools Eyes and Applitools Ultrafast Test Cloud (the combination of Applitools Eyes, Applitools Ultrafast Grid and Applitools Native Mobile Grid).

How Applitools Enables Automated Software Quality

As software development teams rapidly deliver new products and services to market through more frequent and shorter release cycles, they struggle to fully test the customer experience due to increasing application complexity and an explosion of device/browser combinations. When development teams are confident they can fix functional and visual bugs faster, they can push more high-quality code faster than ever before.

To enable automated software quality consistently, quickly, and at a fraction of the cost, Applitools extends to its customers the power of Applitools Visual AI—the only AI-powered computer vision that accurately mimics the human eye and brain to avoid undetected functional and visual bugs, minimizing false positive bug alerts.

Visual AI is trained on more than 1 billion images and supports analysis across custom regions, with advanced comparison modes/match levels and auto-maintenance of test results. Tests infused with Visual AI (via Applitools Eyes) are created 5.8x faster, are 3.8x more stable, and catch 45% more bugs compared to traditional functional testing. In addition, tests powered by Visual AI can take advantage of the ultrafast speed and stability of Applitools Ultrafast Test Cloud.

The Ultrafast Test Cloud can instantly validate entire application pages and detect issues on even the most complex and dynamic pages. It allows users to write and execute tests once locally with support for more than 50 test frameworks and programming languages including: Cypress, Storybook, Selenium Java, Selenium JavaScript, Selenium C#, Selenium Python, Selenium Ruby, Selenium IDE, Webdriver.IO, TestCafe, and more.

A single functional test run captures the DOM & CSS rules for every browser state, automatically rendering it in parallel across all browsers (Chrome, Firefox, Safari, Edge, and Internet Explorer) and viewports using the Ultrafast Grid. Screenshots are then instantly analyzed by Applitools Eyes to find functional and visual bugs. 

Applitools integrates with all the major CI/CD platforms, including GitHub, GitLab, Bitbucket, Jenkins, Azure DevOps, Travis CI, Circle CI, Semaphore, TeamCity, and Bamboo, as well as with defect-tracking/collaboration systems, including Jira, CA Rally, Microsoft Teams, and Slack. The Applitools Eyes dashboard also enables collaboration between design, product, development, testing, and DevOps teams.

Automated software testing powered by Visual AI can help developers test 18 times faster across the full test cycle including writing, running, analyzing, reporting, and maintaining tests. That’s because Visual AI-powered automated tests that leverage the Ultrafast Test Cloud can run 30-50x faster than traditional solutions, with 99.9999% accuracy.* 

“References contacted by IDC found that Applitools’ solution led to greater efficiency in testing operations. They reported cost savings of up to 95% by making the switch to Applitools in combination with adoption of containers — additional testing tools were not required. Another customer reported that it was able to increase the speed of testing to reduce approval time for moving websites to production from 29 days to 1.5 hours. This is especially significant for a company maintaining 2,400 websites — Applitools helped ensure that sites maintained the same digital experience look and feel across pages as they changed dynamically, supporting up to five browsers and four viewpoints”.

Source: IDC

Applitools Customers, Use Cases and Community 

Applitools is helping more than 400 of the world’s top digital brands release, test, and monitor flawless mobile, web, and native apps in a fully automated way. We help our customers modernize critical test automation use cases — functional and visual regression testing, web and mobile UI/UX testing, cross browser / cross device testing, localization testing, PDF testing, digital accessibility and legal/compliance testing — to transform the way their businesses deliver innovation at the speed of DevOps without jeopardizing brand integrity.

Learn about the impact Applitools is having on these organizations by reading the customer stories in our case study library: https://applitools.com/case-studies/.  

To support the community, Applitools also maintains and manages Test Automation University, a free online learning community with over 100,000 members, that hosts more than 50 courses on a wide range of test automation topics and best practices.

“Being recognized as a major player in the IDC MarketScape for worldwide cloud testing underscores our ability to bring speed and stability to automated software quality with Visual AI and Ultrafast Test Cloud. We’re the world’s fastest-growing software testing company because our customers are using the most advanced, yet simple way to ensure brand integrity across any digital end-user experience. Their success is our success.”

Moshe Milman, COO and co-founder, Applitools

Thank You

It takes a village. Our fantastic team has worked long and hard to achieve this recognition from IDC Research, but this is also the direct result of the feedback and collaboration we’ve had with our customers – we could not have done it without you. Because of our strong community and our valued partnerships, the industry has also recognized Applitools as the:

Thank you for trusting Applitools to deliver flawless automated testing for you, and we’re excited to head into the future of testing together.

Learn More

For more, schedule a demo and see for yourself how Visual AI is helping industry leaders deliver visually perfect digital experiences.

*Modern Cross Browser Testing Through Visual AI Report https://applitools.com/modern-cross-browser-testing-report/ 

The post Applitools Recognized as ‘Major Player’ in IDC MarketScape: Worldwide Cloud Testing 2022 Vendor Assessment appeared first on Automated Visual Testing | Applitools.

]]>
Introducing the Next Generation of Native Mobile Test Automation https://applitools.com/blog/introducing-next-generation-native-mobile-test-automation/ Tue, 29 Jun 2021 17:38:15 +0000 https://applitools.com/?p=29695 Native mobile testing can be slow and error-prone with questionable ROI. With Ultrafast Test Cloud for Native Mobile, you can now leverage Applitools Visual AI to test native mobile apps...

The post Introducing the Next Generation of Native Mobile Test Automation appeared first on Automated Visual Testing | Applitools.

]]>

Native mobile testing can be slow and error-prone with questionable ROI. With Ultrafast Test Cloud for Native Mobile, you can now leverage Applitools Visual AI to test native mobile apps with stability, speed, and security – in parallel across dozens of devices. The new offering extends the innovation of the Ultrafast Test Cloud beyond browsers and into native mobile applications.

You can sign up for the early access program today!

The Challenge of Testing Native Mobile Apps

Mobile testing has a long and difficult history. Many industry-standard tools and solutions have struggled with the challenge of testing across an extremely wide range of devices, viewports and operating systems.

The approach currently in use by much of the industry is to utilize a lab made up of emulators, simulators, or even large farms of real devices, and then run the tests on every device independently. The process is not only costly, slow, and insecure, but prone to errors as well.

At Applitools, we had already developed technology to solve a similar problem for web testing, and we were determined to solve this issue for mobile testing too.

Announcing the Ultrafast Test Cloud for Native Mobile

Today, we are introducing the Ultrafast Test Cloud for Native Mobile. We built on the success of the Ultrafast Test Cloud Platform, which is already being used to boost the performance and quality of responsive web testing by 150 of the world’s top brands. The Ultrafast Test Cloud for Native Mobile allows teams to run automated tests on native mobile apps on a single device and instantly render them across any desired combination of devices.

“This is the first meaningful evolution of how to test native mobile apps for the software industry in a long time,” said Gil Sever, CEO and co-founder of Applitools. “People are increasingly going to mobile for everything. One major area of improvement needed in delivering better mobile apps faster, is centered around QA and testing. We’re building upon the success of Visual AI and the Ultrafast Test Cloud to make the delivery and execution of tests for native mobile apps more consistent and faster than ever, and at a fraction of the cost.”

The Power of Visual AI and Ultrafast Test Grid

Last year we introduced our Ultrafast Test Grid, enabling teams to test for the web and responsive web applications against all combinations of browsers, devices and viewports with blazing speed. We’ve seen how some of the largest companies in the world have used the power of Visual AI and the Ultrafast Test Grid to execute their visual and functional tests more rapidly and reliably on the web.

We’re excited to now offer the same speed, agility, and security for native mobile applications. If you’re familiar with our current Ultrafast Test Grid offering, you’ll find the experience a familiar one.

The Ultrafast Test Cloud for Native Mobile comparing an iPhone 7 to an iPhone 8

Mobile Apps Are an Increasingly Critical Channel

Mobile usage continues to rise globally, and more and more critical activity – from discovery to research and purchase – is taking place online via mobile devices. Consumers are demanding higher and higher quality mobile experiences, and a poorly functioning site or visual bugs can detract significantly from the user’s experience. There is a growing portion of your audience you can only convert with a five-star quality app experience.

While testing has traditionally been challenging on mobile, the Ultrafast Test Cloud for Native Mobile increases your ability to test quickly, early and often. That means you can develop a superior mobile experience at less cost than the competition, and stand out from the crowd.

Get Early Access

With this announcement, we’re also launching our free early access program, with access to be granted on a limited basis at first. Prioritization will be given to those who register early. To learn more, visit the link below.

The post Introducing the Next Generation of Native Mobile Test Automation appeared first on Automated Visual Testing | Applitools.

]]>
Skeptics Who Recommend Cross Browser Testing https://applitools.com/blog/skeptics-who-recommend-cross-browser-testing/ Fri, 12 Feb 2021 15:50:42 +0000 https://applitools.com/?p=26860 Whether you classify yourself as a cross browser skeptic or a grudging participant, it is clear that Applitools Ultrafast Grid offers something that differs from your current conception of cross browser testing.

The post Skeptics Who Recommend Cross Browser Testing appeared first on Automated Visual Testing | Applitools.

]]>

Who recommends cross browser testing to their organizations? 

In this series, we discuss the results of the Applitools Ultrafast Cross Browser Hackathon. Today, we will explain how those who were skeptics about cross browser testing would recommend that their organizations run Applitools Ultrafast Grid for cross browser testing.

Reviewing Past Posts In This Series

In this series we have covered the results of the Applitools Ultrafast Cross Browser Hackathon. 

  • My first post covered the overall results. I wrote specifically about the ease of creating cross browser tests with Applitools and introduced the topics to follow. 
  • My second post covered how the speed of Applitools Ultrafast Grid makes cross browser testing a reality within the application build process.
  • In the third, I explained how Applitools Visual AI tests provide much greater code stability compared with legacy cross browser tests, making the code easier to develop and maintain over time.

With this post, I will cover the survey topic in the hackathon: would Hackathon participants recommend the legacy cross browser testing approach to their peers and organizations, and would they recommend Applitools Ultrafast Grid for that purpose?

Methodology

For this survey, Applitools used an approach called Net Promoter Score (NPS). Net Promoter Score uses a survey question with the highest correlation to satisfaction:

“On a scale of 0 to 10, with 0 being not likely and 10 being highly likely, how likely are you to recommend [the survey object] to others?”

Researchers have shown that this question correlates most highly with satisfaction. Respondents who give a 9 or 10 (highly likely to recommend) get classified as “promoters.” Promoters have high satisfaction. Those who give a 7 or 8 get classified as neutral – they are neither satisfied nor dissatisfied. Others with a 6 or below get classified as detractors. Detractors have been dissatisfied with something and have no willingness to recommend the survey object.

To compute the score, assign each response a value: add 1 for each 9 or 10, give 0 for each 7 or 8, and subtract 1 for each 6 or below. Then normalize the result to a 100-point scale by dividing the total by the number of respondents and multiplying by 100.
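
As a quick illustration, the calculation might look like this in JavaScript (the sample responses are made up):

// Compute a Net Promoter Score from an array of 0-10 survey responses
function netPromoterScore(responses) {
  const total = responses.reduce((sum, score) => {
    if (score >= 9) return sum + 1; // promoter
    if (score >= 7) return sum;     // neutral
    return sum - 1;                 // detractor
  }, 0);
  return (total / responses.length) * 100;
}

// e.g. netPromoterScore([10, 9, 9, 8, 3]) === 40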

Results can range from -100 to 100. According to Bain & Co, the developers of NPS:

  • 80 rates as ‘world class’
  • 50 rates as ‘excellent’
  • 20 rates as ‘favorable’
  • Around 0 rates as ‘neutral’
  • -10 and lower rates as ‘negative’

Asking Hackathon Participants

Applitools surveyed the 2,224 Hackathon participants about their willingness to recommend the use of Applitools Ultrafast Grid to their peers. The survey also asked their willingness to recommend the use of legacy cross browser testing. Of the participants, 203 engineers were able to run both the legacy and the Ultrafast cross browser tests.

Here were the survey responses:

Few Fans Of Traditional Cross Browser Testing

For legacy cross browser testing, using a traditional test application and validation process, 68% of participants got classified as detractors. 17% got classified as passives. Only 15% promoted the legacy approach. This gave an NPS of -54 (rounded down). Participants were, in general, not fans of legacy cross browser testing.

This result mirrors how little the legacy approach to cross browser testing actually gets used. The companies listed in the graphic above provide the infrastructure to run tests. They don’t reduce the test load or the code load. Their pricing reflects the cost for them to set up and maintain that infrastructure of devices, browsers, operating systems, and viewport sizes. Given the cost of setting up and maintaining legacy cross browser tests, it makes sense that not a lot of companies use cross browser testing actively.

Willing to Recommend Applitools Ultrafast Grid for Cross Browser Testing

Also, the survey asked participants their willingness to recommend Applitools Ultrafast Grid for Cross Browser testing. Here, 75% gave a 10 or 9 and got classified as promoters. Another 20% responded 7 or 8 and got classified as neutral. 

The promoters valued:

  • Speed of tests – noting they could run their tests in the build process
  • Simplicity of management – no tests needed to be run and tuned across multiple platforms
  • Simplified code management – fewer locators made tests easier to set up and manage
  • More accurate – the underlying Visual AI wasn’t plagued by false positives and caught all the code errors

The promoters discovered the ease of creating and maintaining cross browser tests using Applitools Ultrafast Grid. The promoters also realized that, with tests completed and analyzed accurately in well under 10 minutes, Applitools Ultrafast Grid made it possible to run cross browser tests in the scope of a build or during unit tests. Legacy tests, even when run in parallel, took tens of minutes to complete and analyze.

Implications of Recommending Cross Browser Testing

If you read my earlier posts, you know that two camps existed related to cross browser testing. One camp ran cross browser tests because they had encountered issues in the past and saw cross browser tests as a safe approach. The other camp avoided it altogether and thought cross browser testing unnecessary.

There is a third camp using Applitools Ultrafast Grid. This camp recognizes that the combination of:

  • test speed appropriate for software build processes
  • ease of deployment
  • lack of infrastructure to manage, and 
  • simplified test code management, 

made cross browser testing feasible. This third camp can deploy Applitools Ultrafast Grid to run and validate rendering behavior for any combination of browser, operating system, and viewport size and run this test set quickly at the unit, build, merge, and final test timeframes. 

What then are the implications for the Applitools Ultrafast Cross Browser Hackathon? 75% of the 208 engineers who completed both sets of tests could see the value of Applitools Ultrafast Grid just by using it, and they would be willing to recommend it. They realized that, whether or not they had previously released a bug to the field that cross browser testing could have caught, Ultrafast Grid changed their understanding of this kind of testing completely.

Applitools users know that Applitools Visual AI makes it possible to run visual tests as part of their unit tests. These users can incorporate visual tests at every build and merge. And with Applitools Ultrafast Grid, they can incorporate cross browser tests as well.

Importantly, Hackathon participants learned these lessons just through their time on the Hackathon. 

What These Recommendations Mean For You

When you have 75% of engineers recommending something, it might be worth trying. Whether you classify yourself as a cross browser skeptic or a grudging participant, it is clear that Applitools Ultrafast Grid offers something that differs from your current conception of cross browser testing. 

At the very least, read the Applitools results in detail. You will learn why the engineers gave these recommendations.

More importantly, why not give Applitools a try? Sign up for a free account. Or, if you prefer, request a demo from an Applitools representative. 

Next Week

Next week, in my last blog post in this series, I will help you draw your own conclusions about Modern Cross Browser Testing.

The post Skeptics Who Recommend Cross Browser Testing appeared first on Automated Visual Testing | Applitools.

]]>
Stability In Cross Browser Test Code https://applitools.com/blog/stability-in-cross-browser-test-code/ Thu, 04 Feb 2021 23:56:57 +0000 https://applitools.com/?p=26674 If you read my previous blog, Fast Testing Across Multiple Browsers, you know that participants in the Applitools Ultrafast Cross Browser Hackathon learned the following: Applitools Ultrafast Grid requires an...

The post Stability In Cross Browser Test Code appeared first on Automated Visual Testing | Applitools.

]]>
Test Code Stability

If you read my previous blog, Fast Testing Across Multiple Browsers, you know that participants in the Applitools Ultrafast Cross Browser Hackathon learned the following:

  • Applitools Ultrafast Grid requires an application test to be run just once. Legacy approaches require repeating tests for each browser, operating system, and viewport size of interest.
  • Cross browser tests and analysis typically complete within 10 minutes, meaning that test times match the scale of application build times. Legacy test and analysis cycles take several hours to generate results.
  • Applitools makes it possible to incorporate cross browser tests into the build process, with both speed and accuracy.

Today, we’re going to talk about another benefit of using Applitools Visual AI and Ultrafast Grid: test code stability.

What is Test Code Stability?

Test code stability is the property of test code continuing to give consistent and appropriate results over time. With stable test code, tests that pass continue to pass correctly, and tests that fail continue to fail correctly. Stable tests do not generate false positives (report a failure in error) or generate false negatives (missing a failure).

Stable test code produces consistent results. Unstable test code requires maintenance to address the sources of instability. So, what causes test code instability?

Anand Bagmar did a great review of the sources of flaky tests. Some of the key sources of instability:

  • Race conditions – you apply inputs too quickly to ensure a consistent output
  • Ignoring settling time – your output becomes stable only after your sampling time
  • Network delay – your network infrastructure causes unexpected behavior
  • Dynamic environments – your inputs cannot guarantee all the outputs
  • Incompletely scoped test conditions – you have not specified the correct changes
  • Myopia – you only look for expected changes and actual changes occur elsewhere
  • Code changes – your code uses obsolete controls or measures obsolete output.

When you develop tests for an evolving application, code changes introduce the most instability in your tests. UI tests, whether they exercise the UI alone or complete end-to-end behavior, depend on the underlying UI code. You use your knowledge of the app code to build the test interfaces. Locator changes – whether changes to coded identifiers, CSS selectors, or XPath locators – can cause your tests to break.

When test code depends on the app code, each app release will require test maintenance. Otherwise, no engineer can be sure that a “passing” test hasn’t masked an actual failure, or that a “failing” test indicates a real failure rather than a locator change.

Test Code Stability and Cross Browser Testing

Considering the instability sources, a tester like you takes on a huge challenge with cross browser tests. You need to ensure that your cross browser test infrastructure addresses these sources of instability so that your cross browser behavior matches expected results.

If you use a legacy approach to cross browser testing, you need to ensure that your physical infrastructure does not introduce network or other infrastructure sources of test flakiness.  Part of your maintenance ensures that your test infrastructure does not become a source of false positives or false negatives.  

Another check you make relates to responsive app design. How do you ensure responsive app behavior? How do you validate page location based on viewport size?

If you use legacy approaches, you spend a lot of time ensuring that your infrastructure, your tests, and your results all match expected app user behavior. In contrast, the Applitools approach does not require debugging and maintenance of multiple test infrastructures, since the purpose of the test involves ensuring proper rendering of server response.

Finally, you have to account for the impact of every new app coding change on your tests. How do you update your locators? How do you ensure that your test results match your expected user behavior?

Improving Stability: Limiting Dependency on Code Changes

One thing we have observed over time: code changes drive test code maintenance. We demonstrated this dependency relationship in the Applitools Visual AI Rockstar Hackathon, and again in the Applitools Ultrafast Cross Browser Hackathon. 

The legacy approach uses locators to both apply test conditions and measure application behavior. As locators can change from release to release, test authors must consider appropriate actions.

Many teams have tried to address the locator dependency in test code. 

Some test developers sit inside the development team. They create their tests as they develop their application, and they build the dependencies into the app development process. This approach can ensure that locators remain current. On the flip side, they provide little information on how the application behavior changes over time. 

Some developers provide a known set of identifiers in the development process. They work to ensure that the UI tests use a consistent set of identifiers. These tests can run the risk of myopic inspection. By depending on supplied identifiers, especially to measure application behavior, these tests run the risk of false negatives. While the identifiers do not change, they may no longer reflect the actual behavior of the application. 

The modern approach limits identifier use to applying test conditions. Applitools Visual AI measures the application response of the UI. This approach still depends on identifier consistency, but on far fewer identifiers. In both hackathons, participants cut their dependence on identifiers by 75% to 90%. Their code ran more consistently and required less maintenance.

Modern cross browser testing reduces locator dependence by up to 90% - resulting in more stable tests over time.

Implications of Modern Cross Browser Testing

Applitools Ultrafast Grid overcomes many of the hurdles that testers experience running legacy cross browser test approaches. Beyond the pure speed gains, Applitools offers improved stability and reduced test maintenance.

Modern cross browser testing reduces dependency on locators. By using Visual AI instead of locators to measure application response, Applitools Ultrafast Grid can show when an application behavior has changed – even if the locators remain the same. Or, alternatively, Ultrafast Grid can show when the behavior remains stable even though locators have changed. By reducing dependency on locators, Applitools ensures a higher degree of stability in test results.

Also, Applitools Ultrafast Grid reduces infrastructure setup and maintenance for cross browser tests. In the legacy setup, each unique browser requires its own setup and connection to the server. Each setup can have physical or other failure modes that must be identified and isolated independent of the application behavior. By capturing the response from a server once and validating the DOM across other target browsers, operating systems, and viewport sizes, Applitools reduces the infrastructure debug and maintenance efforts.

Conclusions

Participant feedback from the Hackathon provided us with consistent views on cross browser testing. From their perspective, participants viewed legacy cross browser tests as:

  • Likely to break on an app update
  • Susceptible to infrastructure problems
  • Expensive to maintain over time

In contrast, they saw Applitools Ultrafast Grid as:

  • Less expensive to maintain
  • More likely to expose rendering errors
  • Providing more consistent results.

You can read the entire report here.

What’s Next

What holds companies back from cross browser testing? Bad experiences getting results. But what if they could get good test results and have a good experience at the same time? We asked participants about their experience in the Applitools Cross Browser Hackathon.

The post Stability In Cross Browser Test Code appeared first on Automated Visual Testing | Applitools.

]]>
How I ran 100 UI tests in just 20 seconds [Revisited] https://applitools.com/blog/100-ui-tests-in-20-seconds-revisited/ Fri, 04 Dec 2020 07:13:55 +0000 https://applitools.com/?p=25031 Applitools Ultrafast Grid grabs a screenshot of each page, compares it to its baseline screenshot, and determines visual differences and the root cause underpinning them. In less than a minute, your cross-browser testing is done and your developers have what they need to fix any visual bugs.

The post How I ran 100 UI tests in just 20 seconds [Revisited] appeared first on Automated Visual Testing | Applitools.

]]>

From time to time, we revisit blog posts that show the benefit of Applitools across different use cases. This post by Bilal Haidar from 2019 was one of the first outlining the benefits of Applitools Ultrafast Grid.

Recently I’ve put together a bunch of tutorials: an overview of Applitools, how to test Storybook with Angular, how to quickly troubleshoot bugs in React apps, and how to test Vue apps with Cypress.

If you’ve gone through these, you’ll notice that the visual tests complete in seconds — faster than Nic Cage can jack your ride.

How so fast?

The answer is simple: Applitools Ultrafast Grid. Each time you run a visual test in Applitools, Ultrafast Grid renders that page across a broad range of browsers and configurations. In parallel. In seconds.

Applitools Visual AI then grabs a screenshot of each page, compares it to its baseline screenshot, and determines visual differences and the root cause underpinning them. In less than a minute, your cross-browser testing is done and your developers have what they need to fix any visual bugs.

It’s our very own ludicrous mode (a.k.a. how a Tesla can go from zero to 60 mph in a little over two seconds).

But more than a cool technical trick, Ultrafast Grid can help you keep your app development projects on schedule.

In this article, I will demonstrate the Ultrafast Grid system with test runs and a variety of configuration settings, such as multiple browsers, viewports, and devices, to show Applitools’ efficiency in delivering results.

First off, make sure you have the following installed and configured on your machine:

Before we delve into writing code, let’s take a behind-the-scenes look at how Applitools Ultrafast Grid works.

Life before Ultrafast Grid

Before the arrival of the Ultrafast Grid, visual tests were run in this specific order:

  • A test run checked the pages one by one.
  • A screenshot was taken for each page.
  • The screenshot was uploaded to the backend server, where it was analyzed (that is, compared to previous baselines).
  • Based on the results, the test would either fail or keep running, checking other pages.

So far, so good!

But problems arise when hundreds of screenshots are uploaded to the backend server to be analyzed. Varying parameters ranging from different viewports, different browsers, and different device emulators can also slow things down.

To help solve this, Applitools designed the Ultrafast Grid system to run tests quickly.

Life with Ultrafast Grid

Applitools Ultrafast Grid is split into two components: server-side and client-side. Here’s a diagram of how it works:

(Diagram: the Ultrafast Grid client and backend server components)

The Ultrafast Grid client is a Node.js library used internally by both the Applitools Storybook and Cypress SDKs. Here’s more on the Ultrafast Grid client.

View the code on Gist.
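
The embedded gist is not reproduced here, but a rough sketch of that kind of snippet, based on the @applitools/visual-grid-client package (exact option names may differ between versions), looks like this:

// Instantiate an Ultrafast Grid (visual grid) client
const { makeVisualGridClient } = require('@applitools/visual-grid-client');

const visualGridClient = makeVisualGridClient({
  apiKey: process.env.APPLITOOLS_API_KEY,
});

// openEyes() is the method we care about here
const { openEyes } = visualGridClient;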

That code snippet instantiates a new Ultrafast Grid client and returns an object. In this case, we are only interested in the openEyes() method returned.

When called, the openEyes() method starts a new test run and sends the Ultrafast Grid backend server details about it, including (but not limited to) the application name, the batch or test name, the browsers used, and the different variations of browser settings to be used while running the tests. This method returns a Promise that resolves to an object with the following functions: checkWindow() and close().

View the code on Gist.
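
Again, the gist itself is not embedded here; a hedged sketch of such an openEyes() call (run inside an async function, with placeholder names) could be:

// Start a test run that renders against two browser configurations
const { checkWindow, close } = await openEyes({
  appName: 'Storybook Angular Applitools',   // placeholder
  testName: 'ContentEditable component',     // placeholder
  browser: [
    { deviceName: 'iPhone X' },                   // Chrome emulation of an iPhone X
    { width: 1024, height: 768, name: 'chrome' }, // desktop viewport
  ],
});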

This method call instructs the Ultrafast Grid backend server to prepare a testing environment for a new test run that requires running the test over two different browsers. The first browser emulates an iPhone X, while the second specifies the desktop viewport to run the test.

To start collecting snapshots, the test issues a call to the checkWindow() function. The Ultrafast Grid client handles this call and collects all the resources needed to store the snapshot locally.

At the end of the test, you issue a call to the close() method to close and finish the test run.

At this moment, all the DOM Snapshots collected locally during the test run are sent to the Ultrafast Grid backend server. Based on the test run requirements, sent previously with the openEyes() method, the Ultrafast Grid:

  • Launches a number of browsers with different viewports and device emulators.
  • Runs all of the DOM snapshots in parallel.
  • Collects image screenshots.
  • Sends all the screenshots to the Applitools AI server to do all the testing analysis and returns the results.

Finally, it’s worth mentioning that Ultrafast Grid sits behind a family of Applitools SDKs including the following:

Now that you understand how the Ultrafast Grid system works so efficiently, let’s throw a spanner in the works by running tests with a variety of browser configuration settings. Then, we’ll review the results on the Applitools Test Manager.

Source code

For this article, I’ve chosen to write a few Storybook stories and use Applitools for the custom Angular ContentEditable component. I’ve described and used this in my previous article on Mixing Storybook with Angular with a sprinkle of Applitools.

There have been some changes to the Applitools Eyes SDK for Storybook npm package since my previous article was published. I’d recommend removing the old package and installing the new one by following these steps:

Step 1: Remove the old @applitools/eyes.storybook NPM package by issuing the following command:

npm remove @applitools/eyes.storybook


Step 2: Install the new @applitools/eyes-storybook NPM package by issuing the following command:

npm i @applitools/eyes-storybook


That’s it!

Applitools Ultrafast Grid Demo

Before writing our tests, let’s run the application and make sure it’s working fine.

Step 1: Clone the application repo.

git clone https://github.com/thisdot/storybook-angular-applitools.git


Step 2: Install all app dependencies by issuing the following command:

Go to the folder /storybook-angular-applitools then run this command:

npm install


Step 3: Run the app by issuing the following command:

ng serve



Now that we are sure the application is up and running, switch back to the terminal and hit Ctrl+C twice. This terminates the running application.

Step 4: Run the Storybook stories by issuing the following command:

npm run storybook



Step 5: Run the Applitools Eyes SDK for Storybook by issuing the following command:

npx eyes-storybook


Make sure you get an Applitools API key and store it on your machine.

To set the APPLITOOLS_API_KEY environment variable locally on your machine you can use the export command on Mac OS or set command on Windows as follows.

View the code on Gist.
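
The gist is not shown here, but the commands are along these lines (replace the placeholder with your own key):

On macOS or Linux:

export APPLITOOLS_API_KEY="<your-api-key>"

On Windows:

set APPLITOOLS_API_KEY=<your-api-key>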

For a complete tutorial on how to install and run Applitools Storybook SDK for Angular, check out this article: Storybook Angular Tutorial.

Back to the command above: it simply opens the Storybook stories, runs them one by one, and then sends all the DOM snapshots to the Applitools Ultrafast Grid, which then renders them.


If you expand the first test run, you get the following results:


Now that the app, Storybook stories, and Applitools are running locally, let’s start playing with our visual tests.

Visual UI tests with custom viewports

Keep in mind that Applitools with Storybook doesn’t need any kind of configuration – the important test settings are auto-inferred by Applitools. However, if you want to play around with the test configuration, you have two ways to specify it:

  • Environment variables
  • The applitools.config.js file

For this demonstration, I will make use of the applitools.config.js file. For a full list of test configuration options, check the Advanced Configuration section of the @applitools/eyes-storybook repo on GitHub.

When you’re using Applitools with Storybook, there’s no need to add any specific Applitools commands or set test configuration to enable visual testing.

Applitools Eyes SDK runs the Storybook stories, takes a snapshot of each, and sends the DOM snapshots to the Ultrafast Grid backend server. It saves the test run with the default test configuration.

Step 1: Add a new JavaScript file to the root of your application. Give it a name of applitools.config.js.

Step 2: Add the appName and batchName configuration settings as follows:

View the code on Gist.
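
Since the gist is not reproduced here, a minimal sketch of that config file, with illustrative values, is:

// applitools.config.js (illustrative values)
module.exports = {
  appName: 'Storybook Angular Applitools',
  batchName: 'ContentEditable stories',
};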

Step 3: Run the test case on two different browsers with two different viewports by adding the following:

View the code on Gist.
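
A sketch of that configuration, with browser names and viewport sizes chosen purely for illustration, could look like this:

// applitools.config.js (illustrative browsers and viewports)
module.exports = {
  appName: 'Storybook Angular Applitools',
  batchName: 'ContentEditable stories',
  browser: [
    { width: 1024, height: 768, name: 'chrome' },
    { width: 800, height: 600, name: 'firefox' },
  ],
};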

Consequently, Ultrafast Grid launches a set of instances for each of the browsers and runs the tests simultaneously. The first time these tests are run, we create baseline images to compare against during subsequent test runs.

Step 4: Save and run the tests again by issuing the following command:

npx eyes-storybook


Step 5: Check the test run in Applitools Test Manager.


I’ve switched the Test Manager view to the Batch Summary View.

Notice how Applitools Ultrafast Grid ran your tests twice, as per the test configuration file above. The test configuration above specified two different browsers with two different viewports. This means that the Ultrafast Grid runs every test twice– once per browser configuration set.

Running this test yields an execution time of just four seconds! To be clear, this is the time spent rendering the baseline images once the DOM snapshots were uploaded. Even so, that’s really fast.

It’s the duty of the Ultrafast Grid to analyze the test configuration file attached to the test and spawn, in parallel, a number of browser instances to handle the test run efficiently.

Visual UI tests with device emulation

Let’s run another test. This time, instead of specifying the browser viewports, we will specify a device emulation for the browser. Device emulation uses a Chrome browser with the viewport dimensions and user agent string specific to that device.

Step 1: Replace the content of the applitools.config.js file with the following configuration:

View the code on Gist.
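
The gist is not embedded here; a hedged sketch of that configuration (key names may vary slightly between SDK versions) is:

// applitools.config.js (illustrative device emulation settings)
module.exports = {
  appName: 'Storybook Angular Applitools',
  batchName: 'ContentEditable stories',
  browser: [
    // Chrome emulating an iPhone X in landscape orientation
    { name: 'chrome', deviceName: 'iPhone X', screenOrientation: 'landscape' },
  ],
};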

In this test configuration, we are specifying to the Ultrafast Grid to run this test over a Chrome browser with a device emulation of an iPhone X using a landscape screen orientation.

Ultrafast Grid supports only the Chrome browser for running tests over a device emulation, so you can skip the name of the browser. I’m explicitly specifying the browser name to make it very clear that this is a browser emulation and not a real device.

For a complete list of device emulators supported by the Chrome browser, take a look at this page: Chromium Device Emulators.

Step 2: Save and run the tests by issuing the following command:

npx eyes-storybook


Step 3: Check the test run in Applitools Test Manager.


I’ve switched the Test Manager view to the Batch Summary View.

As per the new test configuration file, the Ultrafast Grid runs your tests on a Chrome browser with a device emulation for iPhone X. Hence, the browser name shown is Safari 11.0 and landscape with a viewport of 812×375.

Notice that running this test yields an execution time of just two seconds! Again, this is once the DOM snapshot has been uploaded. But still…

Finally, in Applitools, baseline snapshots are stored as per test configuration settings. Different test configuration settings generate different baseline snapshots. You can’t compare apples to oranges, right?

That’s why, if you click on the snapshot above (after switching back to the Batch steps view) and show both the baseline and the current test run, you won’t get any baseline snapshot, because it’s the first time we’re running the test with these configuration settings.


100 Visual UI tests

Let’s go crazy and run 100 Visual/UI tests with a combination of test configurations.

Step 1: Replace the content of the applitools.config.js file with the following configuration:

View the code on Gist.
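
The gist is not shown here; one way such a configuration might be generated (the viewport sizes and concurrency value are illustrative) is:

// applitools.config.js – 50 viewports x 2 browsers = 100 rendered test runs
const viewports = Array.from({ length: 50 }, (_, i) => ({
  width: 320 + i * 20,
  height: 768,
}));

module.exports = {
  appName: 'Storybook Angular Applitools',
  batchName: '100 UI tests',
  concurrency: 10,
  browser: viewports.flatMap(({ width, height }) => [
    { width, height, name: 'chrome' },
    { width, height, name: 'firefox' },
  ]),
};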

In this test configuration, we are running 100 tests with 50 unique viewports on two major browsers: Firefox and Google Chrome.

Step 2: Save and run the tests by issuing the following command:

npx eyes-storybook


During this time, the Eyes Storybook SDK will read all Storybook stories in the application, run them one by one using the Storybook engine, generate a DOM snapshot for each test, and finally upload the results to the Eyes server. The Eyes server will then render each DOM snapshot, capture an image snapshot, and store it as a baseline snapshot.

The Eyes SDK/server took only 21 seconds to complete all the above tasks, create baselines, and report the results on the Applitools Test Manager dashboard, for the entire batch of 100 test runs for a single Storybook story.


Step 3: Check the test run in Applitools Test Manager.


I’ve switched the Test Manager view to the Batch Summary View.

Let’s see the power of the Eyes server when running a regression testing cycle.

Step 4: Let’s introduce a visual change on the Storybook story and run the Eyes SDK once again.

Replace the content of the /src/stories/index.stories.js file with the following:

View the code on Gist.
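
The actual story change lives in the gist; as a sketch of the idea only, assuming a storiesOf-style Angular story (the import path, story name, and selector below are guesses, not the real repo contents), it might look like:

// src/stories/index.stories.js (illustrative)
import { storiesOf } from '@storybook/angular';
import { ContentEditableComponent } from '../app/content-editable/content-editable.component';

storiesOf('ContentEditable', module).add('default', () => ({
  component: ContentEditableComponent,
  // the visual change introduced for the regression cycle: a blue background
  styles: ['app-content-editable { display: block; background-color: blue; }'],
}));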


Now the ContentEditableComponent should render with a blue background.

Step 5: Run the tests again by issuing the following command:

npx eyes-storybook


Once again, the Eyes SDK/Server took only 21 seconds to upload the DOM snapshot, run the entire batch of visual UI tests, and display the results on the Applitools Test Manager Dashboard.


Notice the error message generated saying that “A total of 100 differences were found”.

Because the change we introduced affected the entire batch of test runs, all the current tests failed in comparison with the baseline snapshots.

Step 6: Check the test run in Applitools Test Manager.


The Test Manager highlights the differences between the baseline snapshot and the current test run and displays a mismatch icon near the snapshots.

At this moment, you can either accept the new changes and make them the new baseline or reject them and keep the old baseline.

Once again, it takes the power of the Ultrafast Grid to analyze the test configuration file and set up the testing environment as needed.

See for yourself

Here’s a step-by-step tutorial video of how I did all the steps above. You can see how it took just 20 seconds for the Applitools Storybook SDK to upload all the DOM snapshots, then use Applitools Visual AI to look for visual regressions:

Conclusion

By now, you have some insight into how Applitools Ultrafast Grid functions behind the scenes when running your visual tests.

Want to run your own tests ludicrously fast? Applitools Ultrafast Grid is GA for Storybook, Cypress, and Selenium IDE. If you use Selenium WebDriver with Java, JavaScript, C#, Python, or Ruby, request to join our early access program.

Happy Testing!

Photo by Sander Jeurissen on Unsplash

The post How I ran 100 UI tests in just 20 seconds [Revisited] appeared first on Automated Visual Testing | Applitools.

]]>
Can Automated Cross Browser Testing Be Ultrafast? https://applitools.com/blog/can-automated-cross-browser-testing-be-ultrafast/ Wed, 16 Sep 2020 00:07:23 +0000 https://applitools.com/?p=22991 Early January this year, Applitools announced the results of their Visual AI Rockstar Hackathon winners of which I am lucky to be included as one of their silver winners. I blogged about...

The post Can Automated Cross Browser Testing Be Ultrafast? appeared first on Automated Visual Testing | Applitools.

]]>

In early January this year, Applitools announced the winners of their Visual AI Rockstar Hackathon, and I was lucky to be included as one of the silver winners. I blogged about how I approached the hackathon and my honest feedback on why I think it’s modernising the way we do test automation, which you can find in this post – Applitools: Modernising the way we do test automation.

Six months later, they announced another hackathon, but this time the focus was on cross browser testing via the Ultrafast Grid, comparing it with traditional solutions such as running the same functional tests on different browsers and viewports locally. I participated in the hackathon again and ended up as one of the gold winners this time, which I’m extremely pleased about, because not only did I win one of their amazing prizes, but I also improved my technical skills and learned a lot about the true cost of cross browser testing.

Why Automated Cross Browser Testing?

First, let’s talk about why cross browser testing? Why another hackathon?

If you’re like me and have been testing web applications for some time, you’ll know that cross browser testing is a pain and time consuming. There is no way you can achieve 100% cross browser coverage unless you have a lot of time and want to devote all your testing effort to cross browser testing alone, and don’t forget that you also need to check other viewports. In time, it gets really boring and tedious. This is why it’s a great idea to automate cross browser tests as much as possible, so we can focus on other areas where we are also needed, such as exploratory and accessibility testing.

Nowadays, there are not a lot of functional differences between browsers. The code that gets deployed is mostly similar on any platform, but the way the code is rendered visually exposes differences that we need to catch. Rather than doing cross browser functional testing, where we test the functionality across different browsers, a better alternative is cross browser visual testing, where we validate the look of our pages, because this is what our users see.

The problem is that automated cross browser testing, whether functional or visual, can still take a considerable amount of time to set up, because you need to create an automation framework that can scale and be maintained easily. This can be quite a challenge for testers who are relatively new to test automation.


Cross Browser Testing in a nutshell

The purpose of this hackathon was to show how easy and how fast cross browser visual testing can be if you’re using modern tools such as Applitools. It’s also to highlight that existing testing tools are great for doing cross browser functional testing but not so great for cross browser visual testing, which I’ll expand on later.

The Hackathon Experience: Cypress

The hackathon was split into automating three different scenarios for a sample e-commerce website called AppliFashion. The scenarios had to be automated using any testing tool of your choice, and again with the same testing tool but with Applitools alongside it. The automated tests were then executed on two versions of the website – version 1, which is assumed to be bug free, and version 2, which has new bugs added in. On version 2, you had to update the automated tests to account for the bugs introduced and then compare the effort of doing this with the traditional testing tool that you had chosen as opposed to using Applitools.

I decided to use Cypress as my testing tool, and while it’s a great tool for automating the browser, I spent 5.5 hours doing cross browser visual testing with this approach and still felt that I missed a lot of visual bugs. Let’s look at this in more detail.

  • Installing Cypress: 5 mins
  • Writing tests for version 1: 2 hours
  • Test maintenance for version 2 and finding bugs: 1 hour
  • Test reporting and project refactoring: 2 hours
  • Documentation: 30 mins
  • Total Time: 5 hrs, 35 mins

Writing the tests took quite some time. I needed to verify a lot of the elements and get their selectors so I could assert that they were visible on the page. Some of the elements were hidden on certain viewports, so this had to be reflected in the tests. You can find example code showing how I handled this in my applitools ultrafast grid GitHub repo. The test execution time was also slightly longer because I had more viewports (desktop, tablet, mobile) and browsers (Chrome and Firefox) to cover locally.

When it was time to run the same tests on version 2, I had to make some adjustments to my tests and log the bugs that my automated tests found. This took me an hour, partly because I had to update the selectors to fix my tests, but also because I wasn’t confident that my tests found all the visual bugs on version 2. I had to find some of the bugs manually, since verifying CSS changes was difficult using Cypress alone.

When it came to test reporting and project refactoring, I invested 2 hours in this task. As I knew from experience, good reporting helps everyone make sense of test data. I wanted to integrate the Mochawesome reporter so I could present the test results nicely to the hackathon judges. I wrote a tutorial on how to do this, which you can find in this post – Test Reporting with Cypress and Mochawesome. I also started noticing that my test code had a lot of duplication, so I did some refactoring to clean up my automation framework.

The Hackathon Experience: Cypress with Applitools

Now let’s look at how long it took me to do cross browser visual testing with Cypress and Applitools.

  • Install Applitools Cypress SDK: 2 mins
  • Setup project structure with Cypress and Applitools: 10 mins
  • Writing tests for version 1: 20 mins
  • Running tests for version 2: 5 mins
  • Bug reporting with Applitools: 25 mins
  • Documentation: 10 mins
  • Total time: 1 hr, 12 min

In total, I spent just over an hour writing the tests for both version 1 and version 2 when using Cypress with Applitools. The time difference was massive! There were a few visual bugs that I missed that Applitools caught, but even so, I didn’t have to rewrite my tests at all. All the adjustments were done directly on the Applitools dashboard, marking the bugs through its annotation feature.

Writing the tests was considerably faster. Instead of verifying individual selectors and checking that each is visible, I took a snapshot of the whole page, which is a better approach for visual validation.


Visual validation test with Cypress and Applitools
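
The original post showed this snippet as an image; a hedged reconstruction of that kind of test (the names and visited path are placeholders for the hackathon app) looks roughly like this:

// Cypress + Applitools Eyes: one full-page visual snapshot (illustrative)
describe('AppliFashion', () => {
  it('should display the main page correctly', () => {
    cy.visit('/'); // baseUrl would point at the AppliFashion site
    cy.eyesOpen({ appName: 'AppliFashion', testName: 'Main page' });
    cy.eyesCheckWindow({ tag: 'Main page', fully: true });
    cy.eyesClose();
  });
});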

The code snippet above is simpler and will catch more visual bugs with less or even no test code maintenance.

So you might be wondering, from the above code, how I handled the cross browser capabilities. This was easily achieved by creating a file called `applitools.config.js` at the root of the project and specifying the list of browsers that you want your tests to execute on. By utilising the Ultrafast Grid and setting my concurrency to 10, I was able to run the tests more quickly too.


Achieving Cross Browser Visual Testing with Applitools
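
That screenshot is not reproduced here; a sketch of such a config (the exact browsers, viewports, and key names depend on your SDK version and the hackathon requirements) might be:

// applitools.config.js (illustrative cross-browser configuration)
module.exports = {
  concurrency: 10,
  browser: [
    { width: 1200, height: 700, name: 'chrome' },
    { width: 1200, height: 700, name: 'firefox' },
    { width: 1200, height: 700, name: 'edgechromium' },
    { width: 768, height: 700, name: 'chrome' },
    { deviceName: 'iPhone X', screenOrientation: 'portrait' },
  ],
};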

The Impact of Visual Cross Browser Testing for Testers

Overall, this was another excellent hackathon from Applitools, and it showed that cross browser testing can be easy and fast. I’ve mentioned in the past that one of the trends I’m seeing is that more and more testing tools are becoming user friendly, and if you are new to test automation, this is great news!

Also, from my experience, the production bugs that get missed most frequently are visual bugs. A page that hasn’t loaded any of its CSS files can still work functionally, and your automated functional test will still pass. Rather than doing cross browser functional testing, it’s better to do cross browser visual testing to get maximum value.

Finally, the massive time saving that it provides means that we, as testers, have more time to explore the areas that automated tests can’t catch and that is a big win.

For More Information

The post Can Automated Cross Browser Testing Be Ultrafast? appeared first on Automated Visual Testing | Applitools.

]]>
How Easy Is Cross Browser Testing? https://applitools.com/blog/how-easy-is-cross-browser-testing/ Mon, 03 Aug 2020 22:21:23 +0000 https://applitools.com/?p=20477 In June Applitools invited any and all to its “Ultrafast Grid Hackathon”. Participants tried out the Applitools Ultrafast Grid for cross-browser testing on a number of hands-on real-world testing tasks....

The post How Easy Is Cross Browser Testing? appeared first on Automated Visual Testing | Applitools.

]]>

In June Applitools invited any and all to its “Ultrafast Grid Hackathon”. Participants tried out the Applitools Ultrafast Grid for cross-browser testing on a number of hands-on real-world testing tasks.

Having worked as a software tester for more than 6 years, I have spent the majority of my time on web projects. On these projects, cross-browser compatibility is always a requirement. Since we cannot control how our customers access websites, we have to do our best to validate their potential experience. We need to validate functionality, layout, and design across operating systems, browser engines, devices, and screen sizes.

Applitools offers an easy, fast and intelligent approach to cross browser testing that requires no extra infrastructure for running client tests.

Getting Started

We needed to demonstrate proficiency with two different approaches to the task:

  • What Applitools referred to as modern tests (using their Ultrafast Grid)
  • Traditional tests, where we would set up a test framework from scratch, using our preferred tools.

In total there were 3 tasks that needed to be automated, on different breakpoints, in all major desktop browsers, and on Mobile Safari:

  • validating a product listing page’s responsiveness and layout,
  • using the product listing page’s filters and validating that their functionality is correct
  • validating the responsiveness and layout of a product details page

These tasks would be executed against a V1 of the website (considered “bug-free”) and would then be used as a regression pack against a V2 / rewrite of the website.

Setting up Cypress for the Ultrafast Grid

I chose Cypress as I wanted a tool where I could quickly iterate, get human-readable errors and feel comfortable. The required desktop browsers (Chrome, Firefox and Edge Chromium) are all compatible. The system under test was on a single domain, which meant I would not be disadvantaged by choosing Cypress. None of Cypress' more advanced features were needed (e.g. stubbing or intercepting network responses).

The modern cross browser tests were extremely easy to set up. The only steps required were two npm package installs (Cypress and the Applitools SDK) and running

npx eyes-setup

to import the SDK.

Easy cross browser testing means easy to maintain as well. Configuring the needed browsers, layouts and concurrency happened inside `applitools.config.js`, a mighty elegant approach compared to the many, many lines of capabilities that plague Selenium-based tools.

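As a rough sketch of that kind of configuration, covering the desktop browsers at two breakpoints plus a mobile device (the widths, device name and concurrency below are illustrative, and the iOS entry reflects an option available in current SDK versions):

```javascript
// applitools.config.js - a sketch only; widths, device and concurrency are illustrative
module.exports = {
  testConcurrency: 5,
  browser: [
    { width: 1200, height: 700, name: 'chrome' },
    { width: 768, height: 700, name: 'chrome' },
    { width: 1200, height: 700, name: 'firefox' },
    { width: 768, height: 700, name: 'firefox' },
    { width: 1200, height: 700, name: 'edgechromium' },
    { width: 768, height: 700, name: 'edgechromium' },
    { iosDeviceInfo: { deviceName: 'iPhone 11 Pro' } }, // Mobile Safari rendered on the Ultrafast Grid
  ],
};
```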

In total, I added three short spec files (between 23 and 34 lines, including all typical boilerplate). We were instructed to execute these tasks against the V1 website, then mark the runs as our baselines. We would then perform the needed refactors to execute the tasks against the V2 website and mark all the bugs in Applitools.

Applitools’ Visual AI did its job so well, all I had to do was mark the areas it detected and do a write-up!

In summary, for the modern tests:

  • two npm dependencies,
  • a one-line initialisation of the Applitools SDK,
  • 6 CSS selectors,
  • 109 total lines of code,
  • a 3 character difference to “refactor” the tests to run against a second website,

all done in under an hour. 

Performing a visual regression for all seven different configurations added no more than 20 seconds to the execution time. It all worked as advertised, on the first try. That is the proof of easy cross browser testing.


Setting up the traditional cross-browser tests

For the traditional tests I implemented features that most software testers are either used to or would implement themselves: a spec-file per layout, page objects, custom commands, (attempted) screenshot diff-ing, linting and custom logging.

This may sound like overkill compared to the above, but I aimed for feature parity and reached this end structure iteratively.


Unfortunately, none of the plug-ins I tried for screenshot diff-ing (`cypress-image-snapshot`, `cypress-visual-regression` and `cypress-plugin-snapshots`) gave results in any way similar to Applitools. I will not blame the plug-ins, though, as I had a limited amount of time to get everything working and most likely gave up sooner than I should have.

Since screenshot diff-ing was off the table, I chose to check each individual element. In total, I ended up with 57 CSS selectors, and to make future refactoring easier I implemented page objects. Additionally, I used a custom method to log test results to a text file, as this was a requirement for the hackathon.

I did not count all the lines of code in the traditional approach as the comparison would have been absurd, but I did keep track of the work needed to refactor for V2: 12 lines of code, meaning multiple CSS selectors and assertions. This work does not need to be done if Applitools is used; "selector maintenance" just isn't a thing!

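To give a feel for this traditional style, an element-by-element spec built on a small page object might look roughly like this (the selectors, file paths and structure are hypothetical, not my actual hackathon code):

```javascript
// pages/productListingPage.js - hypothetical page object for the traditional approach
class ProductListingPage {
  visit() {
    cy.visit('/');
  }
  get filterPanel() {
    return cy.get('#filter_col');
  }
  get productGridItems() {
    return cy.get('#product_grid .grid-item');
  }
}
module.exports = new ProductListingPage();

// specs/desktop.spec.js - every property needs its own selector and assertion
const listingPage = require('../pages/productListingPage');

describe('Product listing - desktop layout', () => {
  beforeEach(() => {
    cy.viewport(1200, 700);
    listingPage.visit();
  });

  it('shows the filter panel and the product grid', () => {
    listingPage.filterPanel.should('be.visible');
    listingPage.productGridItems.should('have.length.greaterThan', 0);
    // ...and so on, for each of the 57 selectors mentioned above
  });
});
```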

Applitools will intelligently find every single visual difference between your pages, while traditionally you’d have to know what to look for, define it and define what the difference should be. Is the element missing? Is it of a different colour? A different font or font size? Does the button label differ? Is the distance between these two elements the same? All of this investigative work is done automatically.

Conclusion

All in all, it has genuinely been an eye-opening experience, as the tasks were similar to what we'd need to do "in the real world" and the total work done exceeded the scope of the usual PoC.

My thanks to everyone at Applitools for offering this opportunity, with a special shout out to Stas M.!

For More Information

Dan Iosif serves as SDET at Dunelm in the United Kingdom. He participated in the recently-completed Applitools Ultrafast Cross Browser Hackathon.

Modern Cross Browser Testing with Cypress and Applitools https://applitools.com/blog/modern-cross-browser-testing-cypress/ Thu, 16 Jul 2020 22:35:04 +0000


Cypress, among other things, validates the structure of your DOM. It verifies that a button is visible or that it has the correct text. But what about the look and styling of our app? How can we test that our application looks good visually? We could use Cypress to verify that it has the correct CSS properties, but then our code would become very long and complex, and it's almost guaranteed that engineers will avoid maintaining such a test.

Visual Testing

This is why we need visual testing as part of our test strategy. With visual testing, we are validating what the user sees on different browsers, viewports and mobile devices. However, it's very time-consuming when you do it manually.

Imagine if someone told you that you had to compare two versions of a page manually, spot-the-difference style.

In the example I used there were 30 differences. You could probably find them after quite some time, but that is a really slow feedback loop. The obvious solution, of course, is to automate this!

Now, automated visual testing is not new. There are plenty of tools out there that can help you with visual testing. These tools use pixel-by-pixel comparison to compare two images. The idea is that you have a baseline image, which is your source of truth. Every time there is a code change and the automated tests run, a test image is taken. The visual testing tool then compares this test image with the baseline image and checks for differences. At the end, it reports whether the test passed or failed.
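
To make that workflow concrete, here is a rough sketch of a pixel-by-pixel comparison using the pixelmatch library (a CommonJS-era version; the file names and threshold are placeholders, and this is not the internals of any particular tool):

```javascript
// pixel-diff.js - minimal pixel-by-pixel comparison sketch
const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

const baseline = PNG.sync.read(fs.readFileSync('baseline.png')); // the source of truth
const current = PNG.sync.read(fs.readFileSync('test.png'));      // taken on this test run
const { width, height } = baseline;
const diff = new PNG({ width, height });

// Returns the number of mismatched pixels; even a 1px shift counts as a difference
const mismatched = pixelmatch(baseline.data, current.data, diff.data, width, height, {
  threshold: 0.1, // per-pixel colour sensitivity, not an overall mismatch percentage
});

fs.writeFileSync('diff.png', PNG.sync.write(diff));
console.log(`${mismatched} pixels differ`);
```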

Problem with pixel-by-pixel visual testing

The problem with pixel-by-pixel visual testing, though, is that it's very sensitive, even to small changes. Even a 1px difference will fail your test, even though to the human eye the two images look exactly the same.

You also get the issue that if you run these tests in your build pipelines, you might see a lot of pixel differences, especially if the base image was generated locally. Ignoring the mismatch overlay and comparing the baseline and test images side by side, some of the changes that were reported look OK. But because the images were taken on different machine setups, the tool reports a lot of pixel differences. You can use Docker to solve this and generate the base image using the same configuration as the test image, but from personal experience, I still get flakiness when using pixel-by-pixel comparison tools.

Also, what if you have dynamic data? In that case the data changes between runs but the overall layout stays the same. You can set the mismatch threshold slightly higher so your tests only fail once they reach the threshold you defined, but the problem with this is that you might miss actual visual bugs.

Cross Browser Visual Validation

Most of the existing open source tools for visual testing only run on one or two browsers. For example, BackstopJS, a popular visual testing framework we were using before, only runs visual tests headlessly on Chrome. AyeSpy, a tool created by one of the teams here at News UK, hooks into your Selenium Grid to run your visual tests on different browsers, but it's still a bit limited: if you are using the Selenium Docker images, they only provide Chrome and Firefox. What if you want to run your visual tests on Safari or Internet Explorer? You can definitely verify those browsers yourself, but again, as mentioned, it's time-consuming.

How can we solve these different visual testing issues?

Applitools

This is where Applitools comes in. Unlike existing visual tools, Applitools uses AI comparison algorithms, so images are compared the way a human would compare them. It was founded in 2013 and integrates with almost all testing frameworks out there. You name it: Selenium, Cypress, Appium, WebdriverIO, Espresso, XCUITest, even Storybook! With Applitools, you can validate visual differences on a wide range of browsers and viewports. By using different comparison algorithms (exact, strict, layout and content), you have different options for comparing images and can cater for different scenarios such as dynamic data or animations.

Rather than taking a screenshot, Applitools extracts a snapshot of your DOM. This is one of the reasons why visual tests run fast in Applitools. Once the DOM snapshots have been uploaded to the Applitools Cloud, the Applitools Ultrafast Grid renders the screens in multiple browsers and viewports simultaneously, generates the screenshots, and performs the AI-powered image comparison.

Getting Started

To get started, you need to install the following package in your project. This one is specific to Cypress, so you would need to install the correct package for your test framework of choice.

npm i -D @applitools/eyes-cypress

Once this has been installed, you need to configure the plugin, and the easiest way to do this is to run the following command in your terminal:

npx eyes-setup

This will automatically add the necessary imports needed to get Applitools working in your Cypress project.

Writing our first test with just Cypress

Let's start doing some coding and add some validations to a sample React image app that I created a while back. It is a simple image gallery which uses the Unsplash API for the backend. An example GitHub repo containing the different code examples can be found here.

Our Cypress test can look like the following code snippet. Keep in mind that this only asserts the application to some extent. I could add more code to verify that it has the correct CSS attributes, but I don't want to make the code too lengthy.
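
A sketch of what such a test could look like (the selectors below are hypothetical, chosen just to show how the assertions pile up):

```javascript
// Plain Cypress: each element we care about needs its own selector and assertion
describe('Image gallery - structural checks', () => {
  it('renders the main elements', () => {
    cy.visit('/');
    cy.get('.title').should('contain.text', 'Image Gallery');        // hypothetical selector
    cy.get('.subtitle').should('be.visible');
    cy.get('.search-bar input').should('be.visible');
    cy.get('.image-grid img').should('have.length.greaterThan', 0);
    // We could keep going and assert CSS properties as well:
    // cy.get('.search-bar').should('have.css', 'background-color', 'rgb(255, 255, 255)');
  });
});
```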

Writing our first test with Cypress and Applitools

Now, let’s look at how we can write the test using Cypress and Applitools.

Applitools provides the following commands to Cypress as a minimum setup. `cy.eyesOpen` initiates the Applitools Eyes SDK, and we pass some arguments such as our app name, batch name and the browser size (which defaults to Chrome). The command `cy.eyesCheckWindow` takes a DOM snapshot, so every call to this command generates a new snapshot. You can call it after every action, such as visiting your page under test or clicking buttons and dropdown menus. Finally, once you are finished, you just call `cy.eyesClose`. To learn more about the Eyes SDK for Cypress, please visit the documentation here.
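
Putting those three commands together, a minimal version of the test might look like this (the app and batch names are placeholders):

```javascript
// Cypress + Applitools Eyes: one DOM snapshot instead of many element assertions
describe('Image gallery - visual check', () => {
  it('looks as expected', () => {
    cy.eyesOpen({
      appName: 'Image Gallery',                              // placeholder app name
      batchName: 'Visual tests',                             // placeholder batch name
      browser: { width: 1280, height: 720, name: 'chrome' }, // omit to use the default (Chrome)
    });
    cy.visit('/');
    cy.eyesCheckWindow('Gallery page'); // takes a DOM snapshot of the current state
    cy.eyesClose();
  });
});
```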

In order to run this in Applitools, you need to export an API key, which is detailed in this article. Once you have the API key, run the following in your terminal:

export APPLITOOLS_API_KEY=<your_key>
npx cypress open

Once the test has finished, if you go to the Applitools dashboard, you should see your test run. The first time you run it, there will be no baseline image; then, when you rerun the test and everything looks good, the run is compared against that baseline and shown as passing.

Handling Dynamic Data

Since we are using the Unsplash API, we don't have control over what data gets returned. When we refresh the app, we might get different results. As an example, the request that I am making to Unsplash fetches the popular photos on a given day. If I ran my test again tomorrow, the images would be different.

The good thing is that we can apply a layout region so the actual data is ignored, or we can set the match level to Layout in our test, which we can preview on the dashboard. If the layout of our image gallery changes, Applitools will report it as an issue.
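
As a sketch of both options (the selector below is hypothetical):

```javascript
// Option 1: compare the whole snapshot with the Layout match level
cy.eyesCheckWindow({
  tag: 'Gallery page',
  matchLevel: 'Layout',
});

// Option 2: keep the default matching but treat just the image grid as a layout region
cy.eyesCheckWindow({
  tag: 'Gallery page',
  layout: [{ selector: '.image-grid' }], // hypothetical selector
});
```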

Making code changes to our application

Now, let’s create some changes in our application (code references found here) and introduce the following:

  • Update the background colour of the search bar (bug)
  • Add a new footer component (new feature)
  • Remove camera icon on the subtitle (bug)

If we run the test where we only use Cypress, how many of these changes do you think the test will catch? Will it catch that there is a new footer component? How about the updated background colour of the search bar? How about the missing camera icon? Probably not, because we didn't write any assertions for them. Now, let's rerun the test written in Cypress and Applitools.

Applitools caught all the changes, and we didn't have to update our test since all the maintenance is done on the Applitools side. Any issues can be raised directly on the dashboard, and you can also configure it to integrate with your JIRA projects.

Cross Browser Visual Validation

To run the same test on different browsers, you just need to specify the browser options in your Applitools configuration. I've refactored the code a bit, created a file called `applitools.config.js`, and moved some of the setup we initially added in `cy.eyesOpen` into this file.
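
A sketch of what that file can contain; the browser list and device below are illustrative:

```javascript
// applitools.config.js - setup moved out of cy.eyesOpen (values are illustrative)
module.exports = {
  appName: 'Image Gallery',
  batchName: 'Cross browser visual tests',
  browser: [
    { width: 1280, height: 720, name: 'chrome' },
    { width: 1280, height: 720, name: 'firefox' },
    { width: 1280, height: 720, name: 'safari' },
    { deviceName: 'iPhone X', screenOrientation: 'portrait' },
  ],
};
```

With this in place, `cy.eyesOpen()` can be called without arguments in the spec itself.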

Simply rerun your test and check the results in the Applitools dashboard.

Final Thoughts

This is just an introductory post on how you can use Applitools, so if you want to know more about its other features, check out the Applitools documentation and blog.

While open-source pixel-by-pixel comparison tools can help you get started with visual testing, using Applitools can modernize the way you do testing. As always, do a thorough analysis of a tool to see if it will meet your testing needs.
