Visual Testing Tools Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/visual-testing-tools/

Power Up Your Test Automation with Playwright
https://applitools.com/blog/power-up-your-test-automation-with-playwright/ (Thu, 31 Aug 2023)

Locator Strategies with Playwright

As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust set of features to create fast, reliable, and maintainable tests.

In a recent webinar, Playwright Ambassador and TAU instructor Renata Andrade shared several use cases and best practices for using the framework. Here are some of the most valuable takeaways for test automation engineers:

Use Playwright’s built-in locators for resilient tests.
Playwright recommends using attributes like “text”, “aria-label”, “alt”, and “placeholder” to find elements. These locators are less prone to breakage, leading to more robust tests.
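
Below is a minimal sketch of these built-in locators using Playwright’s Java bindings (Java is used for the code examples throughout this archive); the URL and field labels are hypothetical:

import com.microsoft.playwright.*;

public class BuiltInLocators {
  public static void main(String[] args) {
    try (Playwright playwright = Playwright.create()) {
      Page page = playwright.chromium().launch().newPage();
      page.navigate("https://example.com/login"); // placeholder URL

      // User-facing attributes survive markup refactors better than brittle CSS chains.
      page.getByPlaceholder("Email").fill("user@example.com");
      page.getByLabel("Password").fill("s3cret");
      page.getByText("Sign in").click();
    }
  }
}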

Speed up test creation with the code generator.
The Playwright code generator can automatically generate test code for you. This is useful when you’re first creating tests to quickly get started. You can then tweak and build on the generated code.
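
For the Java bindings, the generator is launched through Playwright’s CLI class. This Maven invocation is a sketch that assumes a project already declaring the Playwright dependency; the target URL is a placeholder:

mvn exec:java -e -D exec.mainClass=com.microsoft.playwright.CLI -D exec.args="codegen https://example.com"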

Debug tests and view runs with UI mode and the trace viewer.
Playwright’s UI mode and VS Code extension provide visibility into your test runs. You can step through tests, pick locators, view failures, and optimize your tests. The trace viewer gives you a detailed trace of all steps in a test run, which is invaluable for troubleshooting.

Add visual testing with Applitools Eyes.
For complete validation, combine Playwright with Applitools for visual and UI testing. Applitools Eyes catches unintended changes in UI that can be missed by traditional test automation.

Handle dynamic elements with the right locators.
Use a combination of attributes like “text”, “aria-label”, “alt”, “placeholder”, CSS, and XPath to locate dynamic elements that frequently change. This enables you to test dynamic web pages.
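
Continuing the Java sketch above (the selectors here are purely illustrative), CSS and XPath work as fallbacks for elements whose generated ids or classes change between builds:

// Partial-attribute CSS match for an element with a generated class name.
Locator toast = page.locator("css=div[class*='notification']");
toast.waitFor(); // blocks until the dynamic element appears

// XPath for a row that only exists after data loads.
page.locator("xpath=//table[@aria-label='Orders']//tr[last()]").click();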

Set cookies to test personalization.
You can set cookies in Playwright to handle scenarios like A/B testing where the web page or flow differs based on cookies. This is important for testing personalization on websites.
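
A hedged sketch of cookie seeding with the Java bindings; the cookie name, value, and domain are hypothetical A/B-test settings:

import com.microsoft.playwright.*;
import com.microsoft.playwright.options.Cookie;
import java.util.List;

public class AbVariantTest {
  public static void main(String[] args) {
    try (Playwright playwright = Playwright.create()) {
      BrowserContext context = playwright.chromium().launch().newContext();
      // Seed the variant cookie before any page loads.
      context.addCookies(List.of(
          new Cookie("ab_variant", "B").setDomain("example.com").setPath("/")));
      Page page = context.newPage();
      page.navigate("https://example.com"); // the page now renders the "B" experience
    }
  }
}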

Playwright provides a robust set of features to build, run, debug, and maintain end-to-end web tests. By leveraging the use cases and best practices shared in the webinar, you can power up your test automation and build a successful testing strategy using Playwright. Watch the full recording and see the session materials.

Modern Cross Device Testing for Android & iOS Apps
https://applitools.com/blog/cross-device-testing-mobile-apps/ (Wed, 13 Jul 2022)


Learn the cross device testing practices you need to implement to get closer to Continuous Delivery for native mobile apps.

What is Cross Device Testing

Modern cross device testing is the practice by which you verify that an application delivers the desired results on a wide variety of devices and form factors. Ideally, this testing is done quickly and continuously.

There are many articles explaining how to do CI/CD for web applications, and many companies are already doing it successfully, but there is not much information available out there about how to achieve the same for native mobile apps.

This post will shed light on the cross device testing practices you need to implement to get a step closer to Continuous Delivery for native mobile apps.

Why is Cross Device Testing Important

The number of mobile devices used globally is staggering. Based on data from bankmycell.com, there are 6.64 billion smartphones in use.

Source: https://www.bankmycell.com/blog/how-many-phones-are-in-the-world#part-1

Even if we are building and testing an app that reaches only a fraction of this number, that is still a huge number of users.

The chart linked below shows the market share held by some leading smartphone vendors over the years.

Source: https://www.statista.com/statistics/271496/global-market-share-held-by-smartphone-vendors-since-4th-quarter-2009/

Challenges of Cross Device Testing

One of the biggest challenges of testing mobile apps is that, across all manufacturers combined, there are thousands of device types in use today. Depending on the popularity of your app, there could be a huge number of different devices in your user base.

These devices will have variations based on:

  • OS types and versions
  • potentially customized OS
  • hardware resources (memory, processing power, etc.)
  • screen sizes
  • screen resolutions
  • storage with different available capacity for each
  • Wi-Fi vs. mobile data (from different carriers)
  • and many more

It is clear that you cannot run your tests on every type of device your users may own.

So how do you get quick feedback and confidence from your testing that (almost) no user will get impacted negatively when you release a new version of your app?

Mobile Test Automation Execution Strategy

Mobile Testing Strategy

Before we think about the strategy for running your automated tests for mobile apps, we need a good and holistic mobile testing strategy.

Along with testing the app functionality, mobile testing has additional dimensions, and hence complexities, compared with web-app testing.

You need to understand the impact of the aspects mentioned above and see what may, or may not be applicable to you.

Here are some high-level aspects to consider in your mobile testing strategy:

  • Know where and how to run the tests – real devices, emulators / simulators available locally versus in some cloud-based device farm
  • Increasing test coverage by writing less code – using Applitools Visual AI to validate functionality and user-experience
  • Scaling your test execution – using Applitools Native Mobile Grid
  • Testing on different text fonts and display densities 
  • Testing for accessibility conformance and impact of dark mode on functionality and user experience
  • Chaos & Monkey Testing
  • Location-based testing
  • Testing the impact of Network bandwidth
  • Planning and setting up the release strategy for your mobile application, including beta testing, on-field testing, and staged rollouts. This differs between the Google Play Store and the Apple App Store
  • Building and testing for Observability & Analytics events

Once you have figured out your Mobile Testing Strategy, you now need to think about how and what type of automated tests can give you good, reliable, deterministic and fast feedback about the quality of your apps. This will result in you identifying the different layers of your test automation pyramid.

Remember: It is very important to execute all types of automated tests on every code change and every new app build. The functional / end-to-end / UI tests for your app should also be run at this time.

Additionally, you need to be able to run the tests on a local developer / QA machine, as well as in your Continuous Integration (CI) system. In the case of native / hybrid mobile apps, developers and QAs should be able to install the app on the (local) devices available to them and run the tests against it. For CI-based execution, you need some form of device farm, available locally in your network or cloud-based, to allow execution of the tests.

This continuous testing approach will provide you with quick feedback and allow you to fix issues almost as soon as they creep into the app.

How to Run Functional Tests against Your Mobile Apps

Testing and automating mobile apps involves additional complexities. You need to install the app on a device before your automated tests can run against it.
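
With Appium, for example, the installation is driven by the session capabilities. The sketch below is only illustrative: the device name, app path, and server URL are placeholders for whatever device (real, emulator, or simulator) you have available locally or in CI.

import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.options.UiAutomator2Options;
import java.net.URL;

public class LocalDeviceSmokeTest {
  public static void main(String[] args) throws Exception {
    UiAutomator2Options options = new UiAutomator2Options()
        .setDeviceName("Pixel_4_Emulator")   // placeholder device
        .setApp("/path/to/app-debug.apk");   // Appium installs this build on the device
    AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723"), options);
    // ... run functional steps against the installed app ...
    driver.quit();
  }
}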

Let’s explore your options for devices.

Real Devices

Real devices are ideal for running the tests. Your users / customers are going to use your app on a variety of real devices.

In order to allow proper development and testing, each team member needs access to the relevant types of devices (based on your user base).

However, it is not easy to make a variety of devices available to each team member (developer / tester) for running the automated tests.

The challenges of having real devices typically relate to:

  • the cost of procuring a good variety of devices for each team member to allow seamless development and testing work
  • maintenance of the devices (OS/software updates, battery issues, other problems a device may have at any point in time, etc.)
  • logistical issues like the time to order and receive devices, tracking of the devices assigned to the team, etc.
  • deprecating / disposing of older devices that are no longer used or required

Hence we need a different strategy for executing tests on mobile devices. Emulators and Simulators come to the rescue!

What is the Difference between Emulators & Simulators

Before we get into specifics about the execution strategy, it is good to understand the differences between an emulator and a simulator.

Android-device emulators and iOS-device simulators make it easy for any team member to easily spin up a device.

An emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system.

An emulator can mimic the operating system, software, and hardware features of an Android device.

A Simulator runs on your Mac and behaves like a standard Mac app while simulating iPhone, iPad, Apple Watch, or Apple TV environments. Each combination of a simulated device and software version is considered its own simulation environment, independent of the others, with its own settings and files. These settings and files exist on every device you test within a simulation environment. 

An iOS simulator mimics the internal behavior of the device. It cannot mimic the OS / hardware features of the device.

Emulators / simulators are a great and cost-effective way to overcome the challenges of real devices. Any team member can easily create them to match the requirements at hand and use them for exploratory testing as well as for running automated tests. You can also relatively easily set up and use emulators / simulators in your CI execution environment.

While emulators / simulators may seem like they will solve all the problems, that is not the case. Like with anything, you need to do a proper evaluation and figure out when to use real devices versus emulators / simulators.

Below are some guidelines that I refer to.

When to use Emulators / Simulators

  • You are able to validate all application functionality
  • There is no performance impact on the application-under-test

Why use Emulators / Simulators

  • To reduce cost
  • Scale as per needs, resulting in faster feedback
  • Can use in CI environment as well

When to use Real Devices for Testing

  • If Emulators / Simulators are used, then run “sanity” / focused testing on real devices before release
  • If Emulators / Simulators cannot validate all application functionality reliably, then invest in Real-Device testing
  • If Emulators / Simulators cause performance issues or slowness of interactions with the application-under-test

Cases when Emulators / Simulators May not Help

  • If the application-under-test has streaming content, or has high resource requirements
  • Applications relying on hardware capabilities
  • Applications dependent on customized OS version

Cross-Device Test Automation Strategy

The above approach of using real devices or emulators / simulators will help your team shift left and achieve continuous testing.

There is one challenge that still remains – scaling! How do you ensure your tests run correctly on all supported devices?

A classic, or rather traditional, way to solve this problem is to repeat the automated test execution on a carefully chosen variety of devices. This means that if you have 5 important types of devices and 100 automated tests, you are essentially running 500 tests.

This approach has multiple disadvantages:

  1. The feedback cycle is substantially delayed. If 100 tests take 1 hour to complete on 1 device, 500 tests would take 5 hours (for 5 devices).
  2. The time to analyze the test results increases by 5x.
  3. The added tests can show flaky behavior based on device setup, location, or network issues. This can result in re-runs or targeted manual re-testing for validation.
  4. You need 5x more test data.
  5. You put 5x more load on your backend systems by executing the same test 5 times.

We all know these disadvantages; however, there is no better way to overcome this. Or is there?

Modern Cross-Device Test Automation Strategy

The Applitools Native Mobile Grid for Android and iOS apps can easily help you to overcome the disadvantages of traditional cross-device testing.

It does this by running your test on 1 device, but getting the execution results from all the devices of your choice, automatically. Well, almost automatically. This is how the Applitools Native Mobile Grid works:

  1. Integrate Applitools SDK in your functional automation.
  2. In the Applitools Eyes configuration, specify all the devices on which you want to run your functional tests. As an added bonus, you will be able to leverage Applitools Visual AI capabilities to also get increased functional and visual test coverage.

Below is an example of how to specify Android devices for Applitools Native Mobile Grid:

Configuration config = eyes.getConfiguration(); //Configure the 15 devices we want to validate asynchronously
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Pixel_4, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_ULTRA, ScreenOrientation.PORTRAIT));
eyes.setConfiguration(config);

Below is an example of how to specify iOS devices for Applitools Native Mobile Grid:

Configuration config = eyes.getConfiguration(); //Configure the 14 devices we want to validate asynchronously
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_mini));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XS));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_X));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XR));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_8));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_7));
eyes.setConfiguration(config);   

  3. Run the test on any 1 device – available locally or in CI. It could be a real device or a simulator / emulator.

Every call to Applitools to do a visual validation will automatically do the functional and visual validation for each device specified in the configuration above.
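
As a hedged illustration of that flow (reusing the eyes, config, and driver objects from the snippets above):

// Apply the device configuration and open a test session.
eyes.setConfiguration(config);
eyes.open(driver, "My Mobile App", "Checkout flow");

// One visual checkpoint; the Native Mobile Grid validates it
// against every device listed in the configuration.
eyes.check(Target.window());

eyes.close(); // collects the results from all configured devices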

  4. See the results from all the devices in the Applitools dashboard.

Advantages of using the Applitools Native Mobile Grid

The Applitools Native Mobile Grid has many advantages.

  1. You do not need to repeat the same test execution on multiple devices. This saves team members a lot of time on execution, flaky-test re-runs, and result analysis.
  2. Very fast feedback on test execution across all specified devices (10x faster than the traditional cross-device testing approach).
  3. There are no additional test data requirements.
  4. You do not need to procure, build, and maintain the devices.
  5. There is less load on your application backend systems.
  6. It is a secure solution: your application does not need to be shared outside your corporate network.
  7. Using visual assertions instead of functional assertions gives you increased test coverage while writing less code.

Read this post on How to Scale Mobile Automation Testing Effectively for more specific details of this amazing solution!

Summary of Modern Cross-Device Testing of Mobile Apps

Using Applitools Visual AI allows you to extend coverage at the top of your test automation pyramid by including AI-based visual testing along with your UI/UX testing.

Using the Applitools Native Mobile Grid for cross device testing of Android and iOS apps makes your CI loop faster by providing seamless scaling across all supported devices as part of the same test execution cycle. 

You can watch my video on Mobile Testing 360° (https://applitools.com/event/mobile-testing-360deg/), where I share many examples and details related to the above that you can include as part of your mobile testing strategy.

To start using the Native Mobile Grid, simply sign up at the link below to request access. You can read more about the Applitools Native Mobile Grid in our blog post or on our website.

Happy testing!

Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid
https://applitools.com/blog/comparing-cross-browser-testing-tools-selenium-grid-vs-applitools-ultrafast-grid/ (Wed, 29 Jun 2022)


How can you choose the best cross-browser testing tool for your needs? We’ll review the challenges of cross-browser testing and consider some leading cross-browser testing solutions.

Nowadays, testing a website or an app on one single browser or device can lead to disastrous consequences, and testing the same website or app on multiple browsers using ONLY the traditional functional testing approach may still lead to production issues and lots of visual bugs.

Combinations of browsers, devices, viewports, and screen orientations (portrait or landscape) can reach into the thousands. Performing manual testing across this vast number of possibilities is no longer feasible, and neither is simply running the usual functional testing scripts and hoping to cover the most critical aspects, regions, or functionalities of our sites.

In this article, we are going to focus on the challenges and leading solutions for cross-browser testing. 

The Challenges of Cross Browser Testing 

What is Cross Browser Testing?

Cross-browser testing makes sure that your web apps work across different web browsers and devices. Usually, you want to cover the most popular browser configurations or the ones specified as supported browsers/devices based on your organization’s products and services.

Why Do We Need Cross Browser Testing?

Primarily because rendering differs between browsers and modern web apps use responsive design. Each web browser handles JavaScript differently, and each browser may render things differently depending on the viewport or device screen size. These rendering differences can result in costly bugs and a negative user experience.

Challenges of Cross Browser Testing Today

Cross-browser testing has been around for quite some time now. Traditionally, testers run multiple tests in parallel on different browsers, and this is fine from a functional point of view.

Today, we know for a fact that running only these kinds of traditional functional tests across a set of browsers does not guarantee your website or app’s integrity. But let’s define and understand the difference between traditional functional testing and visual testing. Traditional functional testing is a type of software testing where the basic functionalities of an app are tested against a set of specifications. Visual testing, on the other hand, allows you to test for visual bugs, which are extremely difficult to uncover with the traditional functional testing approach.

As mentioned, traditional functional testing on its own will not capture the visual aspect and can lead to a lack of coverage. You have to take into consideration the possibility of visual bugs, regardless of the number of elements you actually test. Even if you tested all of them, you may encounter visual bugs that lead to false negatives: your testing was done, your tests passed, and you still did not catch the bug.

Today we have mobile and IoT device proliferation, complex responsive design viewport requirements, and dynamic content. Since rendering the UI is subjective, the majority of cross-browser defects are visual.

To handle all these possibilities and scenarios, you need a tool or framework that not only runs tests but provides reliable feedback – not just false positives or piles of tests pending approval or rejection.

When it comes to cross-browser testing, you have several options, same as for visual testing. In this article, we will explore some of the most popular cross-browser testing tools. 

Cross-Browser Testing with Your Own In-House Selenium Grid 

If you have the resources, time, and knowledge, you can spin up your own Selenium Grid and do some cross-browser testing. This may be useful based on your project size and approach.

As mentioned, if you understand the components and steps to accomplish this, go for it! 

Now, be aware: maintaining a home-grown Selenium Grid cluster is not an easy task. You may run into difficulties when running and maintaining hundreds of browser nodes. Because of this, most companies end up outsourcing this task to vendors like BrowserStack or LambdaTest, in order to save time and energy and bring more stability to their Selenium Grid infrastructure.

Most of these vendors are really expensive, which means you will need a dedicated project budget just for running your UI tests in their cloud. That is not to mention the packages or plans you will have to purchase to run a decent number of parallel tests.

Considerations when Choosing Selenium Grid Solutions

When it comes to cross-browser testing and visual testing, you could use any of the available tools or frameworks, for instance LambdaTest or BrowserStack. But how can we choose? Which one is better? Are they all offering the same thing? 

Before choosing any Selenium Grid solution, there are some key inherent issues that we must take into consideration:

  1. With a Selenium Grid solution, you need to run each test multiple times on each and every browser/device you would like to cover, resulting in much higher maintenance (if your tests fail 5% of the time and you now need to run each test 10 times on 10 different environments, you are adding far more failure/maintenance overhead).
  2. Cloud-based Selenium Grid solutions require a constant connection between the machine inside your network that runs the test and the browser in the cloud for the entire test execution time. Many grid solutions have reliability issues here, causing environment/connection failures on some tests; when executing tests at scale, this results in additional failures that the team needs to analyze.
  3. If you try to use a cloud-based Selenium Grid solution to test an internal application, you need to set up a tunnel from the cloud grid to your company’s network, which creates a security risk and adds performance/reliability issues.
  4. Another critical factor for traditional “WebDriver-as-a-Service” platforms is speed. Tests can take 2-4x as much time to complete on those platforms compared to running them on local machines.

Cross-Browser Testing with Applitools Ultrafast Grid

Applitools Ultrafast Grid is the next generation of cross-browser testing. With the Ultrafast Grid, you can run functional and visual tests once, and it instantly renders all screens across all combinations of browsers, devices, and viewports. 

Visual AI is a technology that improves snapshot comparisons. It goes deeper than pixel-to-pixel comparisons to identify changes that would be meaningful to the human eye.

Visual snapshots provide a much more robust, comprehensive, and simpler mechanism for automating verifications. Instead of writing hundreds of lines of assertions with locators, you can write a single-line snapshot capture using Applitools Eyes.
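
As a hedged illustration (JUnit-style assertions, hypothetical selectors, and an already-open Eyes session with a Selenium WebDriver):

// Instead of many element-level assertions tied to locators...
assertEquals("Welcome back", driver.findElement(By.cssSelector(".banner h1")).getText());
assertTrue(driver.findElement(By.id("checkout")).isDisplayed());

// ...a single visual checkpoint validates the entire rendered page:
eyes.check(Target.window().fully());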

When you compound that stability with the modern cross-platform testing technology of the Ultrafast Grid, the benefits multiply. This improved efficiency means delivery of high-quality apps, on time and without the need for multiple suites or test scripts.

Consider the time it currently takes to complete a full testing cycle using traditional cross-browser testing solutions: installing, writing, running, analyzing, reporting on, and maintaining your tests. Engineers now have the Ultrafast Grid and Visual AI technology, which can easily be set up in your framework and is capable of testing large, modern apps across multiple environments in just minutes.

Traditional cross-browser testing solutions that offer visual testing usually provide it as a separate feature or paid add-on. What this feature does is basically take screenshots for you to compare with other screenshots taken previously. You can imagine the amount of time it takes to accept or reject all these tests, and most of them will not necessarily bring useful intel, as the website or app may not change from one day to the next.

The Ultrafast Grid goes beyond simple screenshots. Applitools SDKs upload DOM snapshots, not screenshots, to the Ultrafast Grid. Snapshots include all the resources needed to render a page (HTML, CSS, and so on) and are much smaller than screenshots, so they upload much faster.

To learn more about the Ultrafast Grid functionality and configuration, take a look at this article: https://applitools.com/docs/topics/overview/using-the-ultrafast-grid.html

Benefits and Differences when using the Applitools Ultrafast Grid

Here are some of the benefits and differences you’ll find when using this framework:

  1. The Ultrafast Grid uses containers to render web pages on different browsers in a much faster and more reliable way, maximizing speed.
  2. The Ultrafast Grid does not always upload a snapshot for every page. If a page’s resources didn’t change, the Ultrafast Grid doesn’t upload them again. Since most page resources don’t change from one test run to another, there’s less to transfer, and upload times are measured in milliseconds.
  3. As mentioned above, with the Applitools Ultrafast Grid you only need to run the test once and you’ll get the results from all browsers/devices. Now that most browsers are W3C compliant, the chances of facing functional differences between browsers (e.g. a button that clicks on one browser and doesn’t click on another) are negligible, so it’s sufficient to run the functional tests just once; this will still find the common browser compatibility issues like rendering/visual differences between browsers.
  4. You can use one algorithm on top of the other. Other solutions only offer a comparison level based on three modes (Strict, Suggested/Normal, or Relaxed), and this is useful to some extent. But what happens if you need a certain region of the page to use a different comparison algorithm? This is possible using the Applitools Region Types feature; see the sketch after this list.
  5. All of the above occurs across multiple browser and device combinations at the same time. This is possible using the Ultrafast Grid configuration. For more information, check out this article: https://applitools.com/docs/topics/sdk/vg-configuration.html
  6. Applitools offers a free version that includes most of the platform’s features. This is really helpful, as you can explore high-level features like Visual AI, cross-browser testing, and visual testing without having to worry about the minutes left on your free trial, as with other solutions.
  7. One of the unique features of Applitools is the power of its automated maintenance capabilities, which prevent the need to approve or reject the same change across different screens/devices. This reduces the overhead involved with managing baselines from different browsers and device configurations.
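
Below is a hedged sketch of mixing region types with the Java SDK’s fluent API (it assumes an open Eyes session; the selectors are hypothetical):

// Check the full window, but relax specific regions:
eyes.check(Target.window().fully()
    .layout(By.cssSelector(".news-feed"))   // compare this region by layout only
    .ignore(By.id("ad-banner")));           // skip this region entirely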

Final Thoughts

Selenium Grid solutions are everywhere, and the price varies between vendors and features. If you have infinite time, infinite resources, and an infinite budget, it would be ideal to run all the tests on all the browsers and analyze the results on every code change/build. But for a company trying to optimize its velocity and run tests on every pull request/build, the Applitools Ultrafast Grid provides a compelling balance between performance, stability, cost, and risk.

What is Visual Regression Testing?
https://applitools.com/blog/visual-regression-testing/ (Fri, 17 Jun 2022)


In this guide, you’ll learn what visual regression testing is and why visual regression tests are important. We’ll go through a use case with an example and talk about how to get started and choose the best tool for your needs.

What is Visual Regression Testing?

Visual regression testing is a method of validating that changes made to an application do not negatively affect the visual appearance of the application’s user interface (UI). By verifying that the layout and visual elements align with expectations, the goal of visual regression testing is to ensure the user experience is visually perfect.

Visual regression testing is a kind of regression testing. In regression testing, an application is tested to ensure that a new change to the code doesn’t break existing functionality. Visual regression testing specifically focuses on verifying the appearance and the usability of the UI after a code change.

In other words, visual regression testing (also called just visual testing or UI testing) is focused on validating the appearance of all the visual elements a user interacts with or sees. These visual validations include the location, brightness, contrast and color of buttons, menus, components, text and much more.

Why is Visual Regression Testing Important?

Visual regression tests are important to prevent costly visual bugs from escaping into production. Failure to validate visually can severely compromise the user experience and, in many cases, lead directly to lost sales. Traditional functional testing works by simply validating data input and output; this method catches many bugs, but it cannot discover visual bugs. Without visual testing, these bugs are prone to slipping through even on an otherwise well-tested application.

As an example, here is a screenshot of a visual bug in production on the Southwest Airlines website:

Visual Bug on Southwest Airlines App

This page would pass a typical suite of functional tests because all of the elements are present on the page and have loaded successfully. However, the visual bug is obvious. Not only that, but because the Terms and Conditions are inadvertently overlapping the button, the user literally cannot check out and complete their purchase. Visual regression testing would catch this kind of bug easily before it slipped into production.

Visual testing can also enhance functional testing practices and make them more efficient. Because visual tests can “see” the elements on a page they do not have to rely on individual coded assertions using unique selectors to validate each element. In a traditional functional testing suite, these assertions are often time-consuming to create and maintain as the application changes. Visual testing greatly simplifies that process.


How Do Visual Regression Tests Work?

At its core, visual regression testing works by capturing screenshots of the UI before a change is made and comparing it to a screenshot taken after. Differences are then highlighted for a test engineer to review. In practice, there are several different visual regression testing techniques available.

Types of Visual Regression Testing

  • Manual visual testing: Visual regression testing can be done manually and without any tools. Designers and developers take time during every release to scan pages, manually looking for visual defects. While it is slow and extremely cumbersome to do this for an entire application, not to mention prone to human error, manual testing in this way can allow for ad-hoc or exploratory testing of the UI, especially at early stages of development.
  • Pixel-by-Pixel comparison: This approach compares the two screenshots and analyzes each at the pixel level, alerting the test engineer of any discrepancies found (see the sketch after this list). Pixel comparison, also called pixel diffs, is certain to flag all possible issues, but it will also include many irrelevant differences that are invisible to the human eye and have no effect on usability (such as rendering, anti-aliasing, or padding/margin differences). These “false positives” must be painstakingly sifted through manually by the test engineer with every test run.
  • DOM-based comparison: A comparison based on the Document Object Model (DOM) analyzes the DOM before and after a state change and flags any differences. This will be effective in drawing attention to any alterations in the code that comprises the DOM, but is not truly a visual comparison. False negatives/positives are frequently produced when the code does not change but the UI does (e.g.: dynamic content, embedded content, etc.) or when the code changes but the UI does not. As a result, test results are often flaky and must be slowly and carefully reviewed to avoid escaped visual bugs.
  • Visual AI comparison: This type of visual regression testing leverages Visual AI, which uses computer vision to “see” the UI the same way a human would. A well-trained AI will be able to assist test engineers by only surfacing the kind of differences a human would notice, eliminating the time-consuming “false-positive” issues that plague pixel and DOM comparison tests. It can also include other capabilities, such as the ability to test dynamic content and flag issues only in the areas or regions where changes are not expected.
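
For illustration, below is a minimal pixel-by-pixel comparison in Java (the class and method names are ours, not from any tool). It shows why the approach is so strict: one differing pixel, visible to a human or not, fails the entire comparison.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PixelDiff {
  // Returns true only if every pixel of both screenshots matches exactly.
  public static boolean identical(File baseline, File checkpoint) throws Exception {
    BufferedImage a = ImageIO.read(baseline);
    BufferedImage b = ImageIO.read(checkpoint);
    if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) return false;
    for (int y = 0; y < a.getHeight(); y++)
      for (int x = 0; x < a.getWidth(); x++)
        if (a.getRGB(x, y) != b.getRGB(x, y)) return false; // a 1-pixel shift flags everything
    return true;
  }
}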

Automated Visual Testing Use Case and Example

Getting started with automated visual regression testing takes only a few steps. Let’s walk through the typical visual regression testing process and then consider a brief example.

  1. Define your test scenarios. What will be captured in the screenshots, and at what point in the test will they be taken? With some automated tools, a basic test can be as simple as a single line of code that will take a screenshot of an entire page at the end of a test.
  2. Use an automated testing tool to compare the new screenshots against a baseline image. The baseline is the most recent existing screenshot of the application that has already been approved by a tester.
  3. The tool automatically generates a report highlighting the differences found between the two images. With pixel diffing, this will include every pixel difference found; with Visual AI, you will see a report showing only meaningful differences.
  4. A test engineer reviews the report and determines what is a bug and what is an acceptable or valid change (a false positive). After all bugs are resolved, the baseline is updated with the new screenshot.

Visual Regression Testing Example

Let’s review a quick example of the four steps above with a basic use case, such as a login screen.

  1. Define your test scenario: In this case, we’ll capture the entire screen and review it for any changes, starting from an approved baseline image of the login screen.
  2. Next, we’ll make some changes to the code, such as adding a row of social buttons. Unfortunately, doing so has pushed up the login button so that it is unusable.
  3. The tool will then compare the baseline and the new screenshot and generate a report. In our example, we’ll use Visual AI, which highlights only the relevant areas of change that a user would notice: the row with the new social buttons, and the area with the now-unusable login button.
  4. A test engineer will then review the comparison. Any intentional changes that were flagged are marked as accepted changes. Similarly, expected changes in dynamic areas can be flagged for Visual AI to ignore going forward. The remaining flagged areas are marked as bugs to be addressed: the social buttons need to be shifted down, and the login button needs to come down out of the password field. Once these are addressed, the test is run again, and a new baseline is created only when everything passes, leaving the end result free of visual defects.

How to Choose a Visual Testing Tool

Choosing the best tool for your visual regression tests will depend on your needs, as there are many options available. Here are some questions you should be asking as you consider a new tool:

  • Automated or Manual? How frequently do you want to conduct visual tests? For occasional spot checks, manual testing may suffice. If you want to run tests with every change to ensure no visual bugs escape, automated testing will be much more efficient. For automated testing, consider the learning curve of the tool and how easy it is to integrate into your existing CI/CD workflow.
  • Is your UI dynamic or static? How often does your user interface change? For completely static pages, simpler tools may serve to spot any visual bugs. Pages with dynamic content that changes regularly may be better served by tools with advanced capabilities like Visual AI.
  • How many browsers/devices/platforms? Do you have many browsers, devices or platforms to cover with your tests? A tool may be efficient for a single combination but quite inefficient when attempting to cover a wide range of configurations. If you need to cover a broad range of situations, you need to make sure you pick a tool that can quickly re-render visual snapshots on different configurations, or achieving full coverage can become a time-consuming headache.
  • Does your team have time? How much time does your QA team have to spend on UI testing? If they have capacity, sifting through potential false positives from pixel diff tools may not be an issue, or manual testing could be an option. For teams looking to be as efficient as possible, particularly with large or dynamic applications, automated visual testing with Visual AI will save time.
  • What is your build/release frequency? Are you running tests infrequently, daily, or even multiple times a day? If testing is quite infrequent, you may be able to absorb some inefficiency in test execution. Organizations running tests regularly, or seeking to increase their test velocity, should place significant value in a tool that can enable their QA team to achieve full coverage by easily executing a large quantity of tests quickly.
  • How many bugs are slipping through? For many teams, due to the increasing complexity of web and mobile development, visual bugs that can harm a company’s reputation or even sales escape more than they would like. In this case the value of automated visual testing is clear. However, if your team is catching all bugs or you can live with the level of bugs escaping, you may not need to invest in visual testing, at least for now.

Automated visual testing tools can be paid or open source. Visual testing tools are typically paired with an automated testing tool to automatically handle interactions and take screenshots. Some popular open source automated testing tools compatible with visual testing include Selenium for web testing and Appium for mobile testing.

Why Choose Automated Visual Regression Testing with Applitools

Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.

The Applitools Ultrafast Test Cloud includes unique features like the Ultrafast Grid, which can run your functional & visual tests once locally and instantly render them across any combination of browsers, devices, and viewports. Our automated maintenance capabilities make use of Visual AI to identify and group similar differences found across your test suite, allowing you to verify multiple checkpoint images at once and to replicate maintenance actions you perform for one step in other relevant steps within a batch.

You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.

Happy Testing!


The Benefits of Visual AI over Pixel-Matching & DOM-Based Visual Testing Solutions
https://applitools.com/blog/visual-ai-vs-pixel-matching-dom-based-comparisons/ (Fri, 10 Jun 2022)


The visual aspect of a website or an app is the first thing that end users encounter when using the application. For businesses to deliver the best possible user experience, having appealing and responsive websites is an absolute necessity.

More than ever, customers expect apps and sites to be intuitive, fast, and visually flawless. The number of screens across applications, websites, and devices is growing fast, and the cost of testing is rising with it. Managing visual quality effectively is now becoming a must.

Visual testing is the automated process of comparing the visible output of an app or website against an expected baseline image.

In its most basic form, visual testing, sometimes referred to as Visual UI testing, Visual diff testing or Snapshot testing, compares differences in a website page or device screen by looking at pixel variations. In other words, testing a web or native mobile application by looking at the fully rendered pages and screens as they appear before customers.

The Different Approaches Of Visual Testing

While visual testing has been a popular solution for validating UIs, there have been many flaws in the traditional methods of getting it done. In the past, there have been two traditional methods of visual testing: DOM diffs and pixel diffs. These methods have led to an enormous amount of false positives and a lack of confidence from the teams that have adopted them.

Applitools Eyes, the only visual testing solution to use Visual AI, solves all the shortcomings of visual testing, vastly improving test creation, execution, and maintenance.

The Pixel-Matching Approach

This refers to Pixel-by-pixel comparisons, in which the testing framework will flag literally any difference it sees between two images, regardless of whether the difference is visible to the human eye, or not.

While such comparisons provide an entry point into visual testing, they tend to be flaky and can lead to a lot of false positives, which is time-consuming.

When working with the web, you must take into consideration that things tend to render slightly differently between page loads and browser updates. If the browser renders the page off by 1 pixel due to a rendering change, your text cursor is showing, or an image renders differently, your release may be blocked by these false positives.

Pixel-based comparisons exhibit the following deficiencies:

  • They are considered successful ONLY if the compared image/checkpoint and the baseline image are identical, which means that every single pixel of every single component has been placed in exactly the same way.
  • These types of comparisons are very sensitive, so if anything changes (the font, colors, component size) or the page is rendered differently, you will get a false positive.
  • As mentioned above, these comparisons cannot handle dynamic content, shifting elements, or different screen sizes, so this is not a good approach for modern responsive websites.

Take for instance these two examples:

  1. When a “-” sign in a line of text is changed to a “+” sign, many browsers will add a few pixels of padding around the line based on formatting rules. This small change will throw off your entire baseline and flag the whole page as one massive bug.
  2. When the version of your favorite browser updates, the engine it uses to render colors often changes subtly, introducing pixel-level differences in your UI that are not visible to the human eye. This means that colors with no perceptible change will still fail visual tests.

The DOM-Based Approach


In this approach, the tool captures the DOM of the page and compares it with the DOM captured of a previous version of the page.

Comparing DOM snapshots does not mean the output in the browser is visually identical. Your browser renders the page from the HTML, CSS and JavaScript, which comprises the DOM. Identical DOM structures can have different visual outputs and different DOM outputs can render identically.

Some differences that a DOM diff misses:

  •  IFrame changes but the filename stays the same
  •  Broken embedded content
  •  Cross-browser issues
  •  Dynamic content behavior (DOM is static)

DOM comparators exhibit three clear deficiencies:

  1. Code can change and yet render identically, and the DOM comparator flags a false positive.
  2. Code can be identical and yet render differently, and the DOM comparator ignores the difference, leading to a false negative.
  3. The impact of responsive pages on the DOM: if the viewport changes or the app is loaded on a different device, component sizes and locations may change, and this will flag another set of false positives.

In short, DOM diffing ensures that the page structure remains the same from page to page. DOM comparisons on their own are insufficient for ensuring visual integrity.
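
To make the limitation concrete, here is a deliberately naive DOM-diff sketch (assuming a Selenium WebDriver session; the class is ours for illustration). It only compares serialized markup, so equal strings say nothing about equal rendering, and vice versa:

import org.openqa.selenium.WebDriver;

public class DomDiff {
  // Equal markup can still render differently (CSS, viewport, fonts),
  // and different markup can render identically: the false negative
  // and false positive cases described above.
  public static boolean sameDom(WebDriver driver, String previousPageSource) {
    return driver.getPageSource().equals(previousPageSource);
  }
}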

A combination of pixel and DOM diffs can mitigate some of these limitations (e.g., identifying DOM differences that render identically) but is still susceptible to many false-positive results.

The Visual AI Approach

Modern approaches have incorporated artificial intelligence, known as Visual AI, to view the UI as a human eye would and avoid false positives.

Visual AI is a form of computer vision invented by Applitools in 2013 to help quality engineers test and monitor today’s modern apps at the speed of CI/CD. It is a combination of hundreds of AI and ML algorithms that identify when things go wrong in your UI in ways that actually matter. Visual AI inspects every page, screen, viewport, and browser combination for both web and native mobile apps and reports back any regression it sees. Visual AI looks at applications the same way the human eye and brain do, but without tiring or making mistakes. It helps teams greatly reduce the false positives that arise from small, imperceptible differences, which has been the biggest challenge for teams adopting visual testing.

Visual AI overcomes the problems of pixel and DOM comparison for visual validations and has 99.9999% accuracy, making it suitable for production functional testing. Visual AI captures the screen image, breaks it into visual elements using AI, compares those visual elements with the elements of an older screen image (also broken down using AI), and identifies visible differences.

Each given page renders as a visual image composed of visual elements. Visual AI treats elements as they appear:

  • Text, not a collection of pixels
  • Geometric elements (rectangles, circles), not a collection of pixels
  • Pictures as images, not a collection of pixels

Check Entire Page With One Test

QA engineers can’t reasonably test the hundreds of UI elements on every page of a given app; they are usually forced to test a subset of these elements, leading to production bugs due to lack of coverage.

With Visual AI, you take a screenshot and validate the entire page. This limits the tester’s reliance on DOM locators, labels, and messages. Additionally, you can test all elements rather than having to pick and choose. 

Fine Tune the Sensitivity Of Tests

Visual AI identifies the layout at multiple levels, using thousands of data points between location and spacing. Within the layout, Visual AI identifies elements algorithmically. For any checkpoint image compared against a baseline, Visual AI identifies all the layout structures and all the visual elements and can test at different levels. Visual AI can move between validating the snapshot with exact precision and focusing on differences in the layout, as well as differences within the content contained in that layout.

Easily Handle Dynamic Content

Visual AI can intelligently test interfaces that have dynamic content like ads, news feeds, and more with the fidelity of the human eye. No more false positives due to a banner that constantly rotates or the newest sale pop-up your team is running.

Quickly Compare Across Browsers & Devices

Visual AI also understands the context of the browser and viewport for your UI, so it can accurately test across them at scale. Visual testing tools using traditional methods get tripped up by small inconsistencies in browsers and UI elements. Visual AI understands them and can validate across hundreds of different browser combinations in minutes.

Automate Maintenance At Scale

One of the unique and cool features of Applitools is the power of the automated maintenance capabilities, which prevent the need to approve or reject the same change across different screens/devices. This significantly reduces the overhead involved with managing baselines from different browsers and device configurations.

When it comes to reviewing your test results, this is a major step towards saving the team’s and testers’ time, as it helps apply the same change to a large number of tests and identifies that same change in future tests as well. Reducing the amount of time required to accomplish these tasks translates to reducing the cost of the project.

Use Cases of Visual AI

Testing eCommerce Sites

ECommerce websites and applications are some of the best candidates for visual testing, as buyers are incredibly sensitive to poor UI/UX. But previously, eCommerce sites had too many moving parts to be practically tested by visual testing tools that use DOM diffs or pixel diffs. Items constantly changing and going in and out of stock, sales happening all the time, and the growth of personalization in digital commerce made such sites impractical to validate with those tools. Too many things get flagged on each change!

Using Visual AI, tests can omit entire sections of the UI, validate only layouts, or dynamically assert changing data.

Testing Dashboards 

Dashboards can be incredibly difficult to test via traditional methods due to the large amount of customized data that can change in real-time.

Visual AI can not only visually test around these dynamic regions of heavy data, but can actually replace many of the repeated and customized assertions used on dashboards with a single line of code.

Let’s take the example of a simple bank dashboard.

It would require hundreds of different assertions for the name, total balance, recent transactions, amount due, and more. With Visual AI, you can assign profiles to full-page screenshots, meaning that the entire UI of “Jack Gomez’s” bank dashboard can be tested via a single assertion.

Testing Components Across Browsers

Design Systems are a common way to have design and development collaborate on building frontends in a fast, consistent manner. Design Systems output components, which are reusable pieces of UI, like a date-picker or a form entry, that can be mixed and matched together to build application screens and interfaces.

Visual AI can test these components across hundreds of different browsers and mobile devices in just seconds, making sure that they are visibly correct on any size screen. 

Testing PDF Documents 

PDFs are still a staple of many business and legal transactions between businesses of all sizes. Many PDFs are generated automatically and need to be tested for accuracy and correctness. Visual AI can scan through hundreds of pages of PDFs in just seconds, making sure they are pixel-perfect.

Conclusion

DOM-based tools don’t make visual evaluations; they identify DOM differences, and these differences may or may not have visual implications. DOM-based tools produce false positives: differences that don’t matter but require human judgment to decide the difference is unimportant. They also produce false negatives, meaning they will pass something that is visually different.

Pixel-based tools don't make evaluations, either. Pixel-based tools highlight pixel differences, and they are liable to report false positives from incidental pixel shifts on a page. In some cases, all the pixels shift because an element near the top was enlarged. Pixel technology cannot distinguish the elements as elements; it cannot see the forest for the trees.

Automated visual testing powered by Visual AI can successfully meet the challenges of digital transformation and CI/CD by driving higher testing coverage while at the same time helping teams increase their release velocity and improve visual quality.

Be mindful when selecting the right tool for your team and/or project, and always take into consideration:

  • Organizational maturity and opportunities for test tool support 
  • Appropriate objectives for test tool support 
  • Tool information analyzed against your objectives and project constraints 
  • The cost-benefit ratio, estimated from a solid business case 
  • The tool's compatibility with the components of your current system under test

The post The Benefits of Visual AI over Pixel-Matching & DOM-Based Visual Testing Solutions appeared first on Automated Visual Testing | Applitools.

]]>
Creating Your First Test With Google Chrome DevTools Recorder https://applitools.com/blog/creating-first-test-google-chrome-devtools-recorder/ Wed, 04 May 2022 01:27:50 +0000 https://applitools.com/?p=37910 Chrome’s new Recorder tool allows you to record and replay tests from the browser and more. See how you can get started with it today.

The post Creating Your First Test With Google Chrome DevTools Recorder appeared first on Automated Visual Testing | Applitools.

]]>

There are many record and playback tools available, such as Selenium IDE, Testim, Katalon, and others.

Just recently, Google decided to launch its own Recorder tool embedded directly into Chrome. When Google joins the game, it's worth paying attention, so we decided to check it out.

The new tool is called Chrome DevTools Recorder.

What Is Chrome DevTools Recorder?

Chrome’s new Recorder tool allows you to record and replay tests from the browser, export them as a JSON file (and more), as well as measure test performance. The DevTools Recorder was released in November 2021, and you can read all about it here.

Right off the bat, we were excited to see that the tool is straightforward and simple. Since it is embedded in the browser we have the convenience of not having to context switch or deal with an additional external tool.

Let's see what Google has in store for us with the tool and check out just how easily and quickly we can run our first test.

We'll do so by recording a test on the Coffee cart website and exporting it as a Puppeteer Replay script. To top it off, we will sprinkle some Applitools magic onto it and see how easy it is to integrate visual testing into the new tool. Let's go!

First things first, let’s open up our new tool and record a test. 

How to Record a Test with Chrome DevTools Recorder

  • Open this page. We will be using this demo page for recording
  • Open Chrome DevTools
  • Click on More options (⋮) > More tools > Recorder
  • Click on the Start new recording button to begin
  • Enter your recording name
  • Click the Start a new recording button at the bottom of the recording window
  • The recording has started (the panel showing Recording… indicates that a recording is in progress)
  • Try to click around and order some coffees
  • Every interaction with the webpage is now recorded – clicking on buttons, switching web pages, waiting for elements to load and much more

Once the recording is done, we have our first automation script ready to run.

Given the recording, we can see some options before us:

  1. Replay the recording – playback what was recorded.
  2. Measure performance – replays the test and opens the new Performance insights panel or the Performance panel of the DevTools. This way we can analyze the performance of our test, such as the amount of time it took to load each resource.
  3. Edit and add steps – we can manually edit our tests and add steps, even complex ones (e.g. wait until 9 images have been loaded and then continue the test). All without code.

Lastly, we have the option to export the test as a JSON file. This is a great feature as you can share the files with other users. 

You can also export it as a Puppeteer Replay script right away. It allows you to customize, extend and replay the tests with the Puppeteer Replay library, which makes the tool even more useful for more experienced users.

One of the main 'weaknesses' of Chrome's Recorder tool is its very basic validation and fairly rigid flow, with no option in the UI to build on top of it. 

The ability to quickly record a stable automated test and export it to make it more customizable is an incredible feature. It can help create tests quickly and efficiently.

Downloading the Puppeteer Script

  • Click on the Export button in the Recorder panel
  • Click on the Export as a @puppeteer/replay script option and save it as main.mjs (we will customize this file to add in Applitools visual testing)
  • Also click on the Export as a JSON file as shown below (we will customize our script later to read from this JSON file)

Understanding the Puppeteer Replay script

Open the main.mjs file we exported just now. This is what the script looks like:

import url from 'url';
import { createRunner } from '@puppeteer/replay';

export const flow = {
    "title": "order-a-coffee",
    "steps": [
        {
            "type": "setViewport",
            "width": 380,
            "height": 604,
            "deviceScaleFactor": 1,
            "isMobile": false,
            "hasTouch": false,
            "isLandscape": false
        },
        ...
    ]
};

export async function run(extension) {
  const runner = await createRunner(flow, extension);
  await runner.run();
}

if (process && import.meta.url === url.pathToFileURL(process.argv[1]).href) {
  await run();
}

After we npm install all the dependencies, we can replay the script above with the node main.mjs command.
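
For reference, installing those dependencies might look like this (both packages are published on npm; puppeteer drives the browser and @puppeteer/replay replays the recording):

npm install puppeteer @puppeteer/replay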

The Puppeteer Replay library provides us with an API to replay and stringify recordings created using the Chrome DevTools Recorder. 

The flow variable is our recorded test steps. It is a JSON object. You can replace the flow value to read from a JSON file instead. Here is an example:

/* main.mjs */
import url from 'url';
import fs from 'fs';
import { createRunner, parse } from '@puppeteer/replay';

// Puppeteer: read the JSON user flow
const recordingText = fs.readFileSync('./your-exported-file.json', 'utf8');
export const flow = parse(JSON.parse(recordingText));


export async function run(extension) {
  ...
}

...

Run the script again. It returns the same result.

Extend the Puppeteer Replay script

The Puppeteer Replay library offers a way to customize how a recording is run using the PuppeteerRunnerExtension class, which introduces very powerful and simple hooks such as beforeAllSteps, beforeEachStep, afterEachStep, and afterAllSteps.

Puppeteer must be installed to customize the tests further. For example, the tests will launch in headless mode by default; in order to watch the browser run the automated test, we can turn headless mode off.

Below you can see an example of extending this class and running in headful mode:

/* main.mjs */
...
import puppeteer from 'puppeteer';
import { PuppeteerRunnerExtension } from "@puppeteer/replay";

// Extend runner to log message in the Console
class Extension extends PuppeteerRunnerExtension {
  async beforeAllSteps(flow) {
    await super.beforeAllSteps(flow);
    console.log("starting");
  }

  async afterEachStep(step, flow) {
    await super.afterEachStep(step, flow);
    console.log("after", step);
  }
}

// Puppeteer: launch browser
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();

// Puppeteer: read the JSON user flow
..

// Puppeteer: Replay the script
if (process && import.meta.url === url.pathToFileURL(process.argv[1]).href) {
  // add extension
  await run(new Extension(browser, page));
}

// Puppeteer: clean up
await browser.close();

Now that we understand the code it’s time to kick it up a notch by adding Applitools Eyes to the mix to enable visual testing.

How to add Applitools Eyes to Chrome DevTools Recorder

Applitools Eyes is powered by Visual AI, the only AI-powered computer vision that replicates the human eyes and brain to quickly spot functional and visual regressions. Tests infused with Visual AI are created 5.8x faster, run 3.8x more stably, and detect 45% more bugs vs traditional functional testing.

Applitools also offers the Ultrafast Grid, which provides massively parallel test automation across all browsers, devices, and viewports. With the Ultrafast Grid, you run your test setup script once on your local machine, then the Applitools code takes a snapshot of the page HTML & CSS, and sends it to the grid for processing. This provides an out-of-the-box solution for cross-browser tests, so you don’t have to set up and maintain an in-house QA lab with multiple machines and devices.

Incorporating Applitools Eyes into Chrome DevTools Recorder only takes a few steps. Here’s an overview of the process, with the full details about each step below.

  1. Install Applitools Puppeteer SDK
    using npm: npm i -D @applitools/eyes-puppeteer
  2. Add the eyes-puppeteer dependency:
    const {Eyes, Target, VisualGridRunner, BrowserType, DeviceName} = require('@applitools/eyes-puppeteer')
    Eyes – the Eyes instance
    Target – Eyes Fluent API
    VisualGridRunner – using the Ultrafast Grid (UFG) with Eyes
    BrowserType – UFG browsers configuration
    DeviceName – UFG devices configuration
  3. Initialize Eyes and set the desired configuration and browsers for the UFG
  4. We must first open Eyes in order to perform visual validations – the browser must be defined for this step
  5. Perform visual validation with Eyes in the afterEachStep hook
  6. Finally, after closing the browser we can close Eyes and gather the test results
  7. To run the test, run the command:
    node <path_to_test.js>

Step 1 – Install Applitools Puppeteer SDK

As indicated above, to install the Applitools Puppeteer SDK run the following command:

npm i -D @applitools/eyes-puppeteer

Step 2 – Dependencies 

/* main.mjs */
import { Eyes, Target, VisualGridRunner } from '@applitools/eyes-puppeteer';
/* applitools.config.mjs */
import { BrowserType, DeviceName } from '@applitools/eyes-puppeteer';

Step 3 – Instance and Configuration

We define an Eyes instance alongside a Visual Grid runner, which is used with the Applitools Ultrafast Grid. We can use the runner at the end of the test to gather all the test results across all Eyes instances in the test; therefore, the runner is defined globally. Eyes is usually defined globally as well but may also be defined locally for a specific test case. In Applitools terminology, a test is equivalent to opening Eyes, performing any number of visual validations, and closing Eyes when we're done. This will define a batch in Applitools that will hold our test, meaning we can have multiple tests in a single batch.

/* main.mjs */

// Puppeteer: launch browser
...

// Applitools: launch visual grid runner & eyes
const visualGridRunner = new VisualGridRunner({ testConcurrency: 5 });
const eyes = new Eyes(visualGridRunner);

We then create a function, setupEyes, that applies our configuration to Eyes before the test starts and before Eyes is opened.

/* applitools.config.mjs */

import { BrowserType, DeviceName } from '@applitools/eyes-puppeteer';

export async function setupEyes(eyes, batchName, apiKey) {
  eyes.setApiKey(apiKey);

  const configuration = eyes.getConfiguration();

  configuration.setBatch({ name: batchName })
  configuration.setStitchMode('CSS');

  // Add browser
  configuration.addBrowser({ width: 1200, height: 800, name: BrowserType.CHROME });
  configuration.addBrowser({ width: 1200, height: 800, name: BrowserType.FIREFOX });
  configuration.addBrowser({ width: 1200, height: 800, name: BrowserType.SAFARI });
  configuration.addBrowser({ width: 1200, height: 800, name: BrowserType.EDGE_CHROMIUM });
  configuration.addBrowser({ width: 1200, height: 800, name: BrowserType.IE_11 });
  configuration.addBrowser({ deviceName: DeviceName.Pixel_2 });
  configuration.addBrowser({ deviceName: DeviceName.iPhone_X });

  eyes.setConfiguration(configuration);
};

Step 4 – Opening Eyes

In this step we open Eyes right after initializing the browser and defining the page. The page is required in order to communicate and interact with the browser.

/* main.mjs */

// Applitools: launch visual grid runner & eyes
...
const apiKey = process.env.APPLITOOLS_API_KEY || 'REPLACE_YOUR_APPLITOOLS_API_KEY';
const name = 'Chrome Recorder Demo';

await setupEyes(eyes, name, apiKey);
await eyes.open(page, {
  appName: 'Order a coffee',
  testName: 'My First Applitools Chrome Recorder test!',
  visualGridOptions: { ieV2: true }
});

This is a good opportunity to explain what a baseline is. A baseline is a set of images that represent the expected result of a specific test that runs on a specific application in a specific environment. A baseline is created the first time you run a test in a specific environment. It is then updated whenever you make changes to any of the pages in your app and accept those changes in the Applitools Eyes Test Manager. Any future run of the same test in the same environment will be compared against the baseline.

By default, creating a test on a specific browser for the first time (e.g. Firefox) creates a new baseline; running the same test on a different browser (e.g. Chrome) forms another, separate baseline.

The baseline is a unique combination of the following parameters:

  • OS
  • Viewport Size
  • Browser
  • Application name
  • Test name

This means that by default a new baseline will be created for every combination that was not used before.

Step 5 – Visual Validation

By calling eyes.check(), we are telling Eyes to perform a visual validation. Using the Fluent API we can specify which target we would like to capture. Here we perform visual validation in an afterEachStep hook to validate each step of the replay along the way. The target is specified to capture the window (the viewport only); passing true to fully would force a full-page screenshot instead.

/* main.mjs */
...
// Extend runner to take screenshot after each step
class Extension extends PuppeteerRunnerExtension {
  async afterEachStep(step, flow) {
    await super.afterEachStep(step, flow);
    await eyes.check(`recording step: ${step.type}`, Target.window().fully(false));
    console.log(`after step: ${step.type}`);
  }
}

Step 6 – Close Eyes and Gather Results

We must close Eyes at the end of our test; not closing Eyes will leave the Applitools test running indefinitely, since while Eyes is open you may perform any number of visual validations you desire.

By calling eyes.abortAsync, we essentially tell Eyes to abort the test in case Eyes was not properly closed for some reason.

/* main.mjs */
...

// Puppeteer: clean up
await browser.close();

// Applitools: clean up
await eyes.closeAsync();
await eyes.abortAsync(); // abort if Eyes were not properly closed

Finally, after Eyes and the browser are closed, we may gather the test results using the runner.

/* main.mjs */
...

// Manage tests across multiple Eyes instances
const testResultsSummary = await visualGridRunner.getAllTestResults()
for (const testResultContainer of testResultsSummary) {
  const testResults = testResultContainer.getTestResults();
  console.log(testResults);
}

You can find the full code in this GitHub repository.

Viewing Test Results in the Applitools Dashboard

After running the test, you’ll see the results populate in the Applitools dashboard. In this case, our baseline and our checkpoint have no visual differences, so everything passed.

Last but not Least – Export Cypress Tests from Google Chrome DevTools Recorder

As we have already mentioned, the ability to quickly record a stable automated test and export it to make it more customizable is an incredible feature. For advanced users, you may also customize how a recording is stringified by extending the PuppeteerStringifyExtension class.

For example, I'd like to introduce you to the Cypress Chrome Recorder library, where you can convert the JSON file into a Cypress test script with one simple command. The library is built on top of Puppeteer Replay's stringify feature.

We can convert our JSON recording file to a Cypress test with the following CLI command:

npm install -g @cypress/chrome-recorder

npx @cypress/chrome-recorder <relative path to target test file>

The output will be written to the cypress/integration folder. If you do not have that folder, you can create it by installing Cypress with npm install -D cypress in your project.

Once the test file is ready, we can simply run the test as we would run a standard Cypress test.

Conclusion

Although record and playback testing tools have their drawbacks and challenges, this looks like a very simple and useful tool from Google. Given how easy it is to use, it can be a good solution for creating simple scenarios or quick tests.

What we loved most about the tool was its simplicity. It is plain record and playback at the moment, with no advanced features, which makes it a great stepping stone for beginners in testing or even non-coders.

As with any record-playback tool, one of the challenges is validation. Combined with the ease and speed of adding and running Applitools Eyes, you can start validating your UI in no time, find all the visual regressions, and make sure your application is visually perfect.

Applitools Eyes has many advanced features, including AI-powered auto-maintenance, which analyzes differences across all your tests and shows only distinct differences, allowing you to approve or reject changes that automatically apply across all similar changes in your test suite. Learn more about the Applitools platform and sign up for your own free account today.

The post Creating Your First Test With Google Chrome DevTools Recorder appeared first on Automated Visual Testing | Applitools.

]]>
Proving a Concept, Automation Style https://applitools.com/blog/how-to-choose-test-automation-tool/ Thu, 07 Apr 2022 21:15:49 +0000 https://applitools.com/?p=36399 Learn how to choose a new test automation tool and the top considerations you need to keep in mind.

The post Proving a Concept, Automation Style appeared first on Automated Visual Testing | Applitools.

]]>

Learn how to choose a new test automation tool and the top considerations you need to keep in mind as you develop a proof-of-concept.

When people ask me about which tool they should use for their automation, I typically explain my view of the automation ecosystem to them. As I discuss in my Bad Spin blog post, this ecosystem is made up of strategy, audience, and environment, but as Alton Brown says, that's a different show; I have a one-hour talk about the automation ecosystem and choosing a tool; you can contact me if you are interested in hearing it…. But I digress.

As part of the aforementioned talk, I recommend doing one or more proofs of concept or prototypes using the tools that you’ve decided are possible candidates. Yes, there is a subtle, or not so subtle, difference between a prototype and proof of concept, but for our purposes in this writing, we’ll call them the same thing. With that assumption in mind, here are some considerations that are usually appropriate for most automation prototypes; these thoughts have served me well over the years.

Prototype Against Your Application or Product

Creating automation prototypes against “test” websites such as The Internet by Dave Haeffner or Restful Booker by Mark Winteringham can be a good way to exercise an automation tool across multiple application constructs; I’m a big fan of these sites and I do use them from time to time. Nothing, however, compares to creating your prototype against your applications. You know where the “icky bits” of your app are, you know where the 3rd party components are used… or you can find out by asking the developers. There is no substitute for prototyping against your own app(s).

When doing this prototype, don’t shy away from the “hard to automate” portions of your app. These portions are very important because, depending on the frequency with which they are used, they might rule in or rule out specific tools.

Use a Free or a Trial License

As attractive as it may be to focus on “free”, i.e., open source software, you should not automatically discount vendor-sold software. If you find that a vendor-sold product might be a viable candidate for your automation tool, you should consider creating a prototype with it; if you don’t, you won’t know whether it is, in fact, an appropriate tool for you. In fact, it might be the most appropriate tool for you. To be clear, I’m not saying that you should buy a license for that tool just to do a prototype. Most vendors have free-with-limited-features versions or temporary trial licenses. Trial licenses usually have a time limit; trial durations of 7 – 30 days are common.

Run Against Your App’s APIs

Does the tool or framework you’re using for your prototype support testing web services? If yes, awesome! Make sure you prototype against your application’s API in addition to any GUI you provide. Note that most API-capable tools can handle your basic APIs, so make sure to automate against “more challenging” APIs.

Also, does the tool work with your squirrelly authentication and authorization scheme? Does it work with your 3rd party authentication provider? Is there some non-standard payload that your APIs deliver? If so, make sure you check the automation tool against that; to insert some concreteness, I’m living through a difficult authentication paradigm at the time of this writing.

Exercise Concurrency or Parallelization

Not all automation tools support concurrent (i.e., parallel) execution. Even the ones that do may have limitations with respect to your specific context. Try running test scripts in parallel to ensure you are getting the behavior you expect in addition to the performance you desire. Of note, are the logs and reports you get when running in parallel less helpful than those you get when running sequentially?

3rd Party Partners

Which 3rd party service providers does the tool support? More specifically, which managed browser grids and device farms does it support? Is this capability open-ended or does the tool only support specific 3rd parties? Be sure to automate against as many 3rd parties as is feasible to make a responsible decision.

Note that if a tool only supports specific 3rd party infrastructure that is not necessarily an issue. If, however, you do choose that tool, you must be willing to also work with the supported 3rd parties or avoid them completely, e.g., managing your own Selenium grid, device farm, etc.

Simulate a Major Change

One of the challenges in any automation endeavor, regardless of tool choice, is keeping maintenance effort to a minimum, so it’s important to understand a tool’s capability to handle a refactor or pervasive change. During your prototyping activities, try to simulate having to change values in, say, 500 or more test scripts. This simulation may not be easy to set up, but the information you’ll gain about your future maintainability with this tool will be invaluable.
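
One cheap way to set up that simulation in a code-based JavaScript framework is to check whether the tool lets you centralize the values your scripts share, so the "pervasive change" becomes a single edit. A sketch under that assumption; the file and selector names are invented:

/* locators.mjs - a single source of truth imported by every test script */
export const selectors = {
  loginButton: '#login',    // change this once here...
  username: '#username',    // ...instead of editing 500 scripts
  password: '#password',
};

If the candidate tool forces each of those 500 scripts to carry its own copy of such values, that is exactly the maintenance pain this simulation is designed to expose.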

Look at the Result, Log, and Report Files

Though we understand that test automation development is, in fact, software development, there is an important difference from general application development. For example, the result of buying a product on a website is not an email with an order number; the result is that the buyer receives the product that they ordered. In contrast, the result of a test automation script is not just a pass or fail, a yes or a no, a red or a green. The most valuable “products” of a test automation script are its log/report/result files. This is where we can determine the pass/fail status but also, we can determine what did and didn’t happen during a script run. When prototyping with a tool, evaluating the generated artifacts is essential to performing a responsible evaluation of the tool itself.

Some considerations when performing this part of an evaluation include:

  • Are the logged steps sufficient for you to understand what did and did not occur during the script’s execution?
  • If the script failed, is the failure reason sufficiently descriptive for you to debug the issue or report it to another team member?
  • Can you add additional log messages or other execution artifacts to the test run to make it easier to debug?

Most assuredly, the considerations above are a subset of what you want to exercise during a prototyping activity. Every team, organization, application, company, etc. has different needs and requirements. In fact, some of the above may not apply to your specific context.

There is one other thing of which to be mindful. When we create code for a prototype or proof of concept, we are creating it to prove that a concept or an implementation is feasible and is a good candidate for our needs. The code we create during this process should be developed as quickly and economically as is responsible. This means taking shortcuts, "making" things work, and driving to an "it works" or "it doesn't work" answer as soon as is reasonable. Further, this means that we need to be prepared to throw away the code we created during these endeavors.

“Wait! No! We just spent weeks creating this and it’s working! We can’t just throw it away!”

Yes, you can, and you should; in some cases, you must. Because this code was created by taking shortcuts, "making" things work, and driving to an "it works" or "it doesn't work" conclusion, it is typically not in a supportable, future-ready state. In many cases, it will be more economical to rewrite the code than to maintain it over the life of that code. For code that is sufficiently close to an appropriate state of supportability, a refactor of the existing code may be more appropriate than a complete rewrite, but that decision is situationally dependent.

Like this? Catch me at an upcoming event!

The post Proving a Concept, Automation Style appeared first on Automated Visual Testing | Applitools.

]]>
New Release: Applitools Eyes 10.14 https://applitools.com/blog/applitools-eyes-10-14/ Tue, 01 Mar 2022 18:21:23 +0000 https://applitools.com/?p=34687 We are excited to announce the latest update to Applitools Eyes, 10.14. Applitools Eyes 10.14 ensures that users in large and small organizations are able to get their work done...

The post New Release: Applitools Eyes 10.14 appeared first on Automated Visual Testing | Applitools.

]]>

We are excited to announce the latest update to Applitools Eyes, 10.14. Applitools Eyes 10.14 ensures that users in large and small organizations are able to get their work done – and to do that as fast as possible. With this new release, you'll be able to easily onboard new team members, manage your teams' test results, and share them with all team members. We've also implemented several usability and accessibility enhancements to meet your organization's compliance needs. We hope you'll find these enhancements useful!

Assign A Test

The 'Assign test' functionality has been extended and improved in Applitools Eyes 10.14, allowing users to follow up on their assigned tests, not only on sessions. Users can now filter by assignee and efficiently manage their tests.
Learn more

Assigning a test in the new release of Applitools Eyes 10.14

Auto-Reject

The “Reject” functionality has been updated so that now, when users reject a specific checkpoint image, Eyes will automatically mark this checkpoint image as rejected on the next test runs as well. This will reduce the amount of time needed for reviewing the test results.
Learn more

Export Batch Results API

You can now export full batch results via our API, making it easy for teams to pull down large sets of test results from the Applitools Test Cloud to use in their own internal systems and workflows.
Learn more

New Filter

One of our most requested features: you can now filter test results and batches by who ran the test. A new "Run by" filter makes it easy to deep dive into a particular team member's tests quickly.

Adding a new filter in the new release of Applitools Eyes 10.14

Onboarding Videos

Users who are new to Applitools Eyes or looking to brush up on their knowledge will benefit from a new video tours section in the learning center. This section includes short video tours for both new and advanced users.

Creating a test in the new release of Applitools Eyes 10.14

And there is more…

  • Eyes accessibility improvement – Additional accessibility enhancements were added to the product as part of our ongoing effort to make Eyes as accessible as possible to all users.
  • Step-by-step onboarding guide – New users in an existing team can benefit from a step-by-step guide focused on their framework needs. The guide helps users with their first test creation and follows up with a “Getting Started” panel containing relevant tutorial links.
  • An easier provisioning process for dedicated cloud and on-prem systems – The SAML integration now supports automatic deletion of Eyes users when removed from the organization. Contact support to enable this functionality.

How to upgrade

To upgrade your version of Applitools Eyes, just log in to the Applitools Test Cloud, and you'll be updated to the latest version, 10.14.

The post New Release: Applitools Eyes 10.14 appeared first on Automated Visual Testing | Applitools.

]]>
Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io https://applitools.com/blog/applitools-testim-io-codeless-end-to-end-ai-powered-cross-browser-ui-testing/ Fri, 18 Feb 2022 17:27:57 +0000 https://applitools.com/?p=34425 The newly enhanced integration makes it easier for all testers to use Applitools and our AI-powered visual testing platform with Testim.io.

The post Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io appeared first on Automated Visual Testing | Applitools.

]]>

As a product manager at Applitools I am excited to announce an enriched and updated integration with Testim.io! This enhanced integration makes it easier for testers of any technical ability to use Applitools and our AI-powered visual testing platform by using Testim.io to easily create your test scripts.

What Is Testim.io Used For?

Testim.io is a cloud platform that allows users to create, execute, and maintain automated tests without using code.

It is a perfect tool for getting started with your first automated tests, if you do not have an existing automated testing framework or if you have not started to run tests yet. Testim.io allows you to integrate your own custom code into their steps so you can implement custom validations if you need to.

How Do Applitools and Testim.io Integrate?

The visual validation powered by Applitools Eyes allows you to compare expected results (the baseline) with actual results after creating the tests in Testim.io. By using Visual AI to compare snapshots, Applitools Eyes can spot any unexpected changes and highlight them visually. This lets you expand your test coverage to include everything on a given page as well as visually verify your results quickly.

As part of the integration, you can modify test parameters to customize Eyes while working with the Testim UI.

This AI-based visual validation functionality is provided by Applitools and requires simple integration setup in the Eyes application. Learn more.

So, What’s New With Applitools and Testim.io?

This up-to-date integration provides access to Applitools’ latest and greatest capabilities, including Ultrafast Test Cloud, enabling ultrafast cross-browser and cross-platform functional and visual testing. Testim users also now have access to Root Cause Analysis and many more powerful Applitools features!

The new integration also greatly improves the user experience for test creators adding Applitools Eyes checkpoints to their Testim.io tests. Visual validations can be added right inside Testim, and the maintenance and analysis of test results are much simpler.

What Kind of Visual Validations Can You Do?

You can perform the following visual validations:

  • Element Visualization – The Validate Element step compares the visual differences of a specific element between your baseline and your current test run.
  • Viewport Visualization – The Validate Viewport step compares your viewport between your baseline and the current test run.
  • Full-page Visualization – Full-page validation compares your entire page between your baseline and your current test run.

What Are the New Visual Validation Settings?

Whether you select the element, viewport, or full-page visualization option you can always override the visual setting for that test or step.

The following Applitools Eyes settings can be accessed via the Testim.io UI:

  • Add Environment (New) – allows you to select Ultrafast Test Cloud environments. You can select the same test to run on multiple environments: different browser types and viewports for web, Chrome emulation, or iOS simulation for mobile devices. Using Applitools Ultrafast Test Cloud you can now increase your coverage and accelerate your release cycles.
  • Match Level – When writing a visual test, sometimes we will want to change the comparison method between our test and its baseline, especially when dealing with applications that consist of dynamic content. Here you can update the Applitools Eyes match level directly from Testim UI.
  • Enable RCA [Root Cause Analysis] (New) – when this flag is on, it provides insights into the causes of visual mismatches, so that when looking at the Eyes dashboard you will be able to see the DOM and CSS captured with the image.
  • Ignore displacement (New) – when this flag is on it will hide differences caused by element displacements. This feature is useful, for example, where content is added or deleted, causing other elements on the page to be displaced and generating additional differences. 

User Experience Improvements

In addition to exposing new features in the Testim UI, we have provided better visibility to Testim tests in Applitools Eyes:

  • Testim test properties are passed to the Eyes Dashboard to allow better filtering and grouping on all Testim test properties.
  • Testim multi-step and test suites are now also grouped on the Applitools Eyes dashboard and are displayed as one batch to create a better user experience when moving between the two products.
  • Testim Selenium and extension modes are supported.

Complete and Scalable AI-Powered UI Testing

Testim.io allows users to quickly create and maintain tests through record and playback. Adding Applitools visual testing with Ultrafast Test Cloud capabilities will make sure your release cycles are short and test analysis and maintenance are easier than ever!

Learn More about Testim.io-Applitools Integration

If you want to learn more about how you can integrate your codeless Testim tests with Applitools and benefit from the latest Applitools capabilities, head over to Testim.io documentation.

Contact us if you have any queries about Applitools!

Happy testing!

The post Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io appeared first on Automated Visual Testing | Applitools.

]]>
Applitools Recognized as Testing Leader by DevOps.com, Deloitte, GetApp and More https://applitools.com/blog/applitools-recognized-testing-leader-by-industry-customers/ Tue, 30 Nov 2021 22:24:02 +0000 https://applitools.com/?p=33109 We know Applitools can make a dramatic difference in our customers' lives, and many organizations recognized the value Applitools brings just this month.

The post Applitools Recognized as Testing Leader by DevOps.com, Deloitte, GetApp and More appeared first on Automated Visual Testing | Applitools.

]]>

Here at Applitools, we are relentlessly focused on making software testing easier, faster and more reliable. Our industry-leading Visual AI, trained on more than a billion images to deliver 99.9999% accuracy, can automatically detect visual and functional bugs just as a human would. The blazing fast Ultrafast Test Cloud, powered by our Ultrafast Grid, makes cross-browser, cross-device and cross-platform testing a snap. And we’re not done there, as we continue to grow and innovate along the path to autonomous testing.

Don’t just take our word for it though.  Hundreds of clients have determined that Applitools is the best visual testing tool and are using Applitools today, including nine of the top 10 software companies in the world, seven of the top 10 banks in the US and two of the top three retailers. We’ve empirically measured the impact of Visual AI and we know that the results – 5.8x faster test creation, 5.9x more efficient code (in terms of lines of code), 3.8x improvement in test code stability and 45% increase in effectiveness catching bugs early – speak for themselves.

We know that Applitools can make a dramatic difference in the lives of our customers – but again, you don’t need to just hear this from us. Numerous organizations have recognized the value Applitools brings just this month.

Update 1/19/22: We’re excited to share that we have also just been named the Best Testing Service/Tool in the DevOps Dozen² Awards for 2021. We’re honored by this latest recognition as well as the ones below!

Applitools Awarded Leading Vendor in Testing for North America

Software Testing News recently named Applitools a Leading Vendor. This award is granted only to the vendor who receives top marks for both their product/service and customer service. Software Testing News specifically looked for commitment to a high-quality product with excellent customer satisfaction, strong value for its cost, rock-solid reliability and ease of use, and proof of thought leadership that can drive the software testing/QE industry forward.

It's not easy to score highly on each of those metrics, and we're thrilled that Software Testing News has recognized us with this award as the leading software testing vendor.

Applitools Named Fastest-Growing Software Testing Vendor in North America

It’s one thing to be recognized for product innovation and customer service – and make no mistake, we are proud of that – but when the rubber meets the road it is only the businesses that are growing that can meet the increasingly complex needs of more customers over the long term. That’s why we’re so excited to be named the fastest-growing software testing vendor in North America by Deloitte

This placement on the Deloitte Technology Fast 500™ is a testament to the demand in the market for a solution that can make the demanding lives of test engineers easier and significantly increase the reliability of software.

Applitools a Leader on GetApp and Capterra

We’re always grateful to the testing community and our customers for their feedback, and we’re pleased that many of them are not shy about sharing their love for Applitools and how it helps make their lives easier and more productive. That’s why we’re a Category Leader in Automated Testing on GetApp and have a strong 4.6/5.0 rating on Capterra – with no reviewer giving us fewer than four stars.

Our commitment to our customers is ironclad. We know that it’s only by our continued dedication to giving our customers exactly what they need that we can continue to win awards for growth and innovation, and we’re honored that our customers choose to share their experiences with the world.

Try it for Yourself

See for yourself why everyone from our customers to market-defining publications are calling Applitools a leader in automated software testing and showcasing the tremendous impact we’re having on the software testing industry. Get your free account and dive in, or reach out to schedule your demo today.

The post Applitools Recognized as Testing Leader by DevOps.com, Deloitte, GetApp and More appeared first on Automated Visual Testing | Applitools.

]]>
Hot off the Press: How to Get Started with the New Applitools / Robot Framework Library! https://applitools.com/blog/how-to-get-started-applitools-robot-framework/ Tue, 23 Nov 2021 16:21:50 +0000 https://applitools.com/?p=32952 In this tutorial, learn how to use Robot Framework to perform automated visual tests with the new Applitools library.

The post Hot off the Press: How to Get Started with the New Applitools / Robot Framework Library! appeared first on Automated Visual Testing | Applitools.

]]>

I was really excited to hear that there was a new Applitools EyesLibrary. Being a Robot Framework coach and mentor, I was eager to see this new library come out. I have been working with several people on older versions of the Applitools EyesLibrary for Robot Framework, but with the recent updates to Robot Framework and Applitools, those were in need of repair.

Here I am going to take you through my first exploration of the new EyesLibrary and provide a short tutorial you can follow along with, gaining a good introduction to the new EyesLibrary.

Robot Framework and its Libraries

For those not familiar with Robot Framework, it is a natural language, keyword-based testing framework. That means that instead of reading like a syntactic programming language, it reads like a testing story. My test might read like "Navigate To The Home Page", "Edit The User's Preferences", "Add Applitools To My Skill Set", "Verify Applitools Is In My User Profile", "Perform Visual Check Of User Profile Ignore Newly Added Skills", and so on, as in the sketch below. And being a framework means that it can be applied to many different areas, like visual testing. This is done through libraries, which provide sets of task-specific keywords.
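
Here is a sketch of how those phrases become a runnable test, assuming each phrase has been implemented as a user keyword elsewhere in the suite:

*** Test Cases ***
Update A User's Skill Set
    Navigate To The Home Page
    Edit The User's Preferences
    Add Applitools To My Skill Set
    Verify Applitools Is In My User Profile
    Perform Visual Check Of User Profile Ignore Newly Added Skills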

Here I am going to talk about Applitools EyesLibrary, which is a library for performing visual testing using Applitools with Robot Framework.

Dive into the Applitools Library for Robot Framework

Coming from the Robot Framework side, the first thing I wanted to do was look at the keyword documentation for the library. I was curious to see what keywords were there and what I could do with the library. The first thing I noticed was the large number of keywords, much larger than I expected. Luckily I could use tag filtering to narrow the keywords down by category. From the categories I could see there were keywords for visual checks, targeting parts of the screen, some configuration, and something called the Ultrafast Grid (which I won't cover in this article).

You can get started with a free Applitools account today and follow along with this tutorial.

Documentation Is Key to Understanding

Taking a step back, I decided to review the documentation for Applitools. There is an "Overview of visual UI testing" which outlines what visual testing is and what the steps in the process are. I felt I had these concepts pretty well understood. To answer the question of how to use the EyesLibrary to do visual testing, I found the Robot Eyes Library SDK/API documentation key.

The Robot Eyes Library documentation under the SDK section really outlines the “how” whereas the keyword documentation gives us the “what” and Overview/The Core Concepts gives us the “why”.

The "how", i.e., what we need in our robot scripts, really just breaks down to this: one must open their Eyes, perform a visual check on a region (maybe with some special configuration), and then close their Eyes.
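
In Robot Framework terms, that skeleton might look something like this. This is a minimal sketch: Eyes Open, Eyes Close Async, the demo URL, and the SeleniumLibrary setup are my assumptions about the exact names, so check the EyesLibrary keyword documentation for the real signatures; Eyes Check Window and Fully are covered later in this post.

*** Settings ***
Library    SeleniumLibrary
Library    EyesLibrary

*** Test Cases ***
Open Check Close Skeleton
    Open Browser    https://demo.applitools.com    chrome    # assumed demo page
    Eyes Open                       # open your eyes
    Eyes Check Window    Fully      # perform a visual check
    Eyes Close Async                # close your eyes
    [Teardown]    Close All Browsers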

Setting up my Environment

Let’s start by setting up our environment. You will need Python (version 3.6 or greater, I recommend Python 3.8) installed on your system. The setup involves installing and initializing the library and then setting your API key. To simplify setup, I’ve created a script, setup_eyes.bat for Windows or setup_eyes.sh for Linux. Run the corresponding batch file or bash script at the command line or terminal prompt. It will ask you for your Applitools API Key so have that handy.

My First Test

For my first script, I wanted to do something simple – perform a visual check on the demo page without changing anything. Essentially I wanted a very simple passing test to validate everything was set up properly and to start to explore. Here it is:

(Gist embedded in the original post: firstsight.robot, a single test that opens the demo page and performs one full-window visual check, along the lines of the skeleton above.)

Go ahead and run this script by typing robot firstsight.robot at the command line or terminal prompt. You might see a verbose result summary showing matches. You should also see the batch appear within your Applitools Dashboard, where you can set the baseline image. Rerun the script and watch it pass each time.

Now execute the firsterror.robot script (robot firsterror.robot) which contains the one additional keyword line Click Link ?diff just before the visual check. This will change the demo page causing several visual differences. You should notice an error in the robot output, post-execution, noting the difference and referring you to the Applitools Dashboard URL.

Initial Observations

The first thing I noticed with these two scripts is the output, or result summary, from the visual checks. Passing or failing, we get information about the status of the visual checks. Also, the status of the visual check does not affect the status of the Robot Framework check. That is, the visual check lives outside the context of the robot checks, which opens interesting possibilities for "context switching".

The next initial observation, which is hidden in the scripts above, is the addition and usage of the Fully keyword/setting. We see from the documentation that this sets the visual check to the whole page. Initially I did not use this, and the size of the visual area I was checking would change even though the content did not, so my tests were failing. Think about that: the content, or what I was seeing, was the same, but the area I was checking was not, thus a "difference". This led me to a deeper understanding of what factors into a match against the baseline.

What Factors into a Match Against the Baseline

Quickly, the first factor is Viewport size, which is what I saw when I did or did not have Fully in my test. There are Host environment and Version information, which relate to what I call the environment under which I execute. Finally there are test suite/test case related factors, testName and appName, which label the application under test. We can set appName in a few spots.

Although these factors seem straightforward I do feel it is important to mention, so that one can see what factors into matching and how one sets these with either the robot script or the environment the script is run on.

Targeted Window, Region, Frame versus Generic Target

Looking beyond the basic Eyes Check on a specific window, region, or frame, we see one could instead do the generic Eyes Check and then target an area. Taking a small step forward, let's run both Eyes Check Window and the equivalent Eyes Check using Target Window, targeting the full window, with a name.

(Gist embedded in the original post: windowtarget.robot, which runs either the specific Eyes Check Window keyword or the generic Eyes Check with Target Window, switched by the useCheckWindow variable.)

To run with the specific Eyes Check Window keyword, type robot windowtarget.robot at the command line or prompt. Then re-run, this time typing robot -v useCheckWindow:False windowtarget.robot, to execute the generic Eyes Check instead.

More Observations

Up to now I haven't discussed how a Robot Framework test suite and test case relate to Applitools objects via EyesLibrary. As you have seen in the Applitools Dashboard, each time we execute we get a new batch. And from the examples above, each batch has a test. As we have had only one robot test case per file (test suite), we see only one Applitools test per batch. These tests have a visual representation for each visual check in the test case; they are steps in Applitools vernacular. The name we have used in keywords labels those steps. Within EyesLibrary (as well as elsewhere) they also refer to a tag, which appears to be the same as name; the two are used interchangeably.

In manipulating my robot test cases one observation was that the visual order of steps relates to the order in which multiple visual checks take place within a test case. This raised in my mind the question: how do I see the history of a visual check? It appears one can see history by grouping results within the Applitools Dashboard. There are also branches which allow you to version your baseline visual checks.

If these batches, tests, steps, branches, tags and names are confusing to you don’t worry. I experienced the same when first looking at Applitools Dashboard. With some experimentation I was able to start mapping the relationship between robot test suites and cases and Applitools.

The design of this EyesLibrary is slightly different than that of other libraries. Here keywords, especially the check settings and target keywords, act like what would be arguments in other libraries; but given the large amount of configuration, it works. One should also note that the keywords are case-sensitive within the library. For example, to check all the contents of the window, if we used FULLY (all caps) we would receive an incorrect keyword argument error. The proper usage, as we have seen, is Fully, as the sketch below shows.
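
For example, something like this (sketched from the behavior described above):

*** Test Cases ***
Case Sensitivity Example
    Eyes Check Window    FULLY    # fails with an incorrect keyword argument error
    Eyes Check Window    Fully    # correct: checks the full page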

Advanced Use: Configuring Your Visual Check and EyesLibrary Keywords

Let's explore this "building block" design of keywords with the script regioncheck.robot. First, as we have done before, we check a region using a single specific keyword, the new Eyes Check Region By Selector, naming the check. Next we build a visual check that involves the full window and, in addition, ignores a region: the random number within the sentence. Finally we start with the full window and then ignore multiple regions. We can run this script by typing robot regioncheck.robot at the command line or terminal prompt.

(Gist embedded in the original post: regioncheck.robot, containing the three region checks described above.)

This example starts to show how we can build complex visual checks using the keywords and the structure of the EyesLibrary.

Additional Features?

Asking what additional features could be added, I could see how the EyesLibrary could provide a keyword for getting the coordinates of a region that encompasses several different elements. That is, given the various selectors or elements as arguments, return a coordinate set which encompasses them all.

Maybe a configuration option to fail within the test, stopping the execution of remaining tests, in addition to the error at the end. Also, a method for saying "here is a visual check and we expect an error; if an error is not found, then fail", similar to the Robot Framework keywords Run Keyword And Expect Error and Run Keyword And Ignore Error.

Build versus Buy Decision

Before I conclude, I want to address a frequently asked question: "Can't I just build my own visual checking tool?" The answer is yes, but the real question is: at what cost? There are a lot of factors to consider when deciding whether to build or buy a solution. Image processing is not a simple task, and it takes a lot to make it work successfully. One should ask how much effort will go into getting it right and dealing with false positives. Another factor is maintaining the solution: if your developer leaves, will anybody be able to maintain your visual checking? Building is always an option, but there is also a cost to building it yourself. I encourage every organization to perform an in-depth build-versus-buy cost analysis.

What’s Next

I’ve explored WebTesting using Selenium. There are other areas that Applitools works in too – mobile, responsive design, even accessibility. How could Robot Framework and EyesLibrary help in testing those areas?

Denali Lumma, at the 2015 Selenium Conference in Portland, gave an excellent talk outlining what testing in the future should look like. Among her points was the goal of easy context switching, such as adding in visual checking. I would like to see examples of this vision made reality using Robot Framework and EyesLibrary.

I encourage you to explore the EyesLibrary even further. I look forward to seeing users combine Robot Framework and Applitools using the EyesLibrary.

The post Hot off the Press: How to Get Started with the New Applitools / Robot Framework Library! appeared first on Automated Visual Testing | Applitools.

]]>
What is Visual Testing? https://applitools.com/blog/visual-testing/ Mon, 22 Nov 2021 15:48:00 +0000 https://applitools.com/blog/?p=5069 Visual testing evaluates the visible output of an application and compares that output against the results expected by design. You can run visual tests at any time on any application with a visual user interface.

The post What is Visual Testing? appeared first on Automated Visual Testing | Applitools.

]]>

Learn what visual testing is, why visual testing is important, the differences between visual and functional testing and how you can get started with automated visual testing today.

Editor’s Note: This post was originally published in 2019, and has been recently updated for accuracy and completeness.

What is Meant By Visual Testing?

Visual testing evaluates the visible output of an application and compares that output against the results expected by design. In other words, it helps catch "visual bugs" in the appearance of a page or screen, which are distinct from strictly functional bugs. Automated visual testing tools, like Applitools, can help speed this visual testing up and reduce errors that occur with manual verification.

You can run visual tests at any time on any application with a visual user interface. Most developers run visual tests on individual components during development, and on a functioning application during end-to-end tests.

In today's world of HTML, web developers create pages that appear on a mix of browsers and operating systems. Because HTML and CSS are standards, frontend developers want to feel comfortable with a "write once, run anywhere" approach to their software. Which also translates to "Let QA sort out the implementation issues." QA is still stuck checking each possible output combination for visual bugs.

This explains why, when I worked in product management, QA engineers would ask me all the time, “Which platforms are most important to test against?” If you’re like most QA team members, your test matrix has probably exploded: multiple browsers, multiple operating systems, multiple screen sizes, multiple fonts — and dynamic responsive content that renders differently on each combination.

If you are with me so far, you’re starting to answer the question: why do visual testing?

Why is Visual Testing Important?

We do visual testing because visual errors happen — more frequently than you might realize. Take a look at this visual bug on Instagram’s app:

The text and ad are crammed together. If this was your ad, do you think there would be a revenue impact? Absolutely.

Visual bugs happen at other companies too: Amazon. Google. Slack. Robinhood. Poshmark. Airbnb. Yelp. Target. Southwest. United. Virgin Atlantic. OpenTable. These aren't cosmetic issues. In each case, visual bugs are blocking revenue.

If you need to justify spending money on visual testing, share these examples with your boss.

All these companies are able to hire some of the smartest engineers in the world. If it happens to Google, or Instagram, or Amazon, it probably can happen to you, too.

Why do these visual bugs occur? Don’t they do functional testing? They do — but it’s not enough.

Visual bugs are rendering issues. And rendering validation is not what functional testing tools are designed to catch. Functional testing measures functional behavior.

Why can’t functional test cover visual issues?

Sure, functional test scripts can validate the size, position, and color scheme of visual elements. But if you do this, your test scripts will soon balloon in size due to checkpoint bloat.

To see what I mean, let’s look at an Instagram ad screen that’s properly rendered. There are 21 visual elements by my count — various icons, text. (This ignores iOS elements at the top like WiFi signal and time, since those aren’t controlled by the Instagram app.)


If you used traditional checkpoints in a functional testing tool like Selenium WebDriver, Cypress, WebdriverIO, or Appium, you’d have to check the following for each of those 21 visual elements:

  1. Visible (true/false)
  2. Upper-left x,y coordinates
  3. Height
  4. Width
  5. Background color

That means you’d need the following number of assertions:

21 visual elements x 5 assertions per element = 105 lines of assertion code
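
To make that concrete, here is a minimal sketch of what those per-element checkpoints look like in Python with Selenium WebDriver; the locator, coordinates, and expected values are all hypothetical:

```python
# A sketch of checkpoint bloat: five assertions for a single visual element.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/feed")  # placeholder URL

# Hypothetical locator for one of the 21 elements.
heart = driver.find_element(By.CSS_SELECTOR, "[data-test='heart-icon']")
assert heart.is_displayed()                    # 1. visible
assert heart.location == {"x": 16, "y": 540}   # 2. upper-left x,y coordinates
assert heart.size["height"] == 24              # 3. height
assert heart.size["width"] == 24               # 4. width
assert heart.value_of_css_property("background-color") == "rgba(0, 0, 0, 0)"  # 5. background color

# ...then repeat these five assertions for the other 20 elements,
# and again for every OS/browser/screen-size combination.
```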

Even with all this assertion code, you wouldn’t be able to detect all visual bugs, such as a visual element that can’t be accessed because it’s covered up by another element. That is exactly the kind of failure that blocked revenue in the examples above from Yelp, Southwest, United, and Virgin Atlantic. You’d also miss subtleties like the brand logo or the red dot under the heart.

But it gets worse: if the OS, browser, screen orientation, screen size, or font size changes, your app’s appearance will change as a result. That means you have to write another 105 lines of functional test assertions for EACH combination of OS, browser, font size, screen size, and screen orientation.

You could end up with thousands of lines of assertion code — any of which might need to change with a new release. Trying to maintain that would be sheer madness. No one has time for that.

You need visual testing because visual errors occur. And you need visual testing because you cannot rely on functional tests to catch visual errors.

What is Manual Visual Testing?

Because automated functional testing tools are poorly suited for finding visual bugs, companies find visual glitches using manual testers. Lots of them (more on that in a bit).

For these manual testers, visual testing behaves a lot like this spot-the-difference game:

To understand how time-consuming visual testing can be, get out your phone and time how long it takes for you to find all six visual differences. I took a minute to realize that the writing in the panels doesn’t count. It took me about 3 minutes to spot all six. Or, you can cheat and look at the answers.

Why does it take so long? Some differences are difficult to spot. In other cases, our eyes trick us into finding differences that don’t exist.

Manual visual testing means comparing two screenshots, one from your known good baseline image, and another from the latest version of your app. For each pair of images, you have to invest time to ensure you’ve caught all issues. Especially if the page is long, or has a lot of visual elements. Think “Where’s Waldo”…

Challenges of manual testing

If you’re a manual tester or someone who manages them, you probably know how hard it is to visually test.

If you are a test engineer reading this paragraph, you already know this: web page testing only starts with checking the visual elements and their function on a single combination of operating system, browser, browser orientation, and browser dimensions. Then you continue on to the other combinations. That is where the huge amount of test effort lies: not in the functional testing, but in the inspection of visual elements across every combination of operating system, browser, screen orientation, and browser dimensions.

To put it in perspective, imagine you need to test your app on:

  • 5 operating systems: Windows, macOS, Android, iOS, and ChromeOS.
  • 5 popular browsers: Chrome, Firefox, Internet Explorer (Windows only), Microsoft Edge (Windows only), and Safari (Mac only).
  • 2 screen orientations for mobile devices: portrait and landscape.
  • 10 standard mobile device display resolutions and 18 standard desktop/laptop display resolutions, from XGA to 4K.

If you’re doing the math: that’s the browsers running on each platform (a total of 21 combinations), multiplied by the two orientations of the ten mobile resolutions (2 x 10 = 20) plus the 18 desktop display resolutions.

21 x (20+18) = 21 x 38 = 798 Unique Screen Configurations to test

That’s a lot of testing — for just one web page or screen in your mobile app.

Except that it’s worse. Let’s say your app has 100 pages or screens to test.

798 Screen Configurations x 100 Screens in-app = 79,800 Screen Configurations to test

Meanwhile, companies are releasing new app versions into production as frequently as once a week, or even once a day.

How many manual testers would you need to test 79,800 screen configurations in a week? Or a day? Could you even hire that many people?

Wouldn’t it be great if there was a way to automate this crazy-tedious process?

Well, yes there is…

What is Automated Visual Testing?

Automated visual testing uses software to automate the process of comparing visual elements across various screen combinations to uncover visual defects.

Automated visual testing piggybacks on your existing functional test scripts running in a tool like Selenium WebDriver, Cypress, WebdriverIO, or Appium. As your script drives your app, the app renders pages full of visual elements. Each functional test step changes those elements, so each step creates a new UI state you can visually test.

Automated visual testing evolved from functional testing. Rather than descending into the madness of writing assertions to check the properties of each visual element, automated visual testing tools check the appearance of an entire screen with just one assertion statement. This leads to test scripts that are MUCH simpler and easier to maintain.
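
As a rough illustration of that one-assertion approach, here is a minimal sketch using the Applitools Eyes SDK for Selenium in Python (the API key, app name, test name, and URL are placeholders, and method names may vary slightly between SDK versions):

```python
from selenium import webdriver
from applitools.selenium import Eyes, Target

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_API_KEY"  # placeholder

try:
    driver = eyes.open(driver, "Demo App", "Home page renders correctly")
    driver.get("https://example.com")  # placeholder URL
    eyes.check("Home page", Target.window())  # one visual assertion covers the whole screen
    eyes.close()
finally:
    eyes.abort_if_not_closed()
    driver.quit()
```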

But, if you’re not careful, you can go down an unproductive rat hole. I’m talking about Snapshot Testing.

What is Snapshot Testing?

First generation automated visual testing uses a technology called snapshot testing. With snapshot testing, a bitmap of a screen is captured at various points of a test run and its pixels are compared to a baseline bitmap.

Snapshot testing algorithms are very simplistic: iterate through each pixel pair, then check if the color hex code is the same. If the color codes are different, raise a visual bug.
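
A toy version of that algorithm, sketched in Python with the Pillow imaging library, might look like this:

```python
# A toy pixel-comparison snapshot test, sketched with the Pillow library.
from PIL import Image, ImageChops

def screens_match(baseline_path: str, latest_path: str) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    latest = Image.open(latest_path).convert("RGB")
    if baseline.size != latest.size:
        return False  # different dimensions always count as a failure
    # ImageChops.difference is zero wherever the two images agree;
    # getbbox() returns None only if every pixel pair is identical.
    diff = ImageChops.difference(baseline, latest)
    return diff.getbbox() is None
```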

Because they can be built relatively easily, there are a number of open-source and commercial snapshot testing tools. Unlike human testers, snapshot testing tools can spot pixel differences quickly and consistently. And that’s a step forward. A computer can highlight the visual differences in the Hocus Focus cartoon easily. A number of these tools market themselves as enabling “pixel perfect testing”.

Sounds like a good idea, right?

What are Problems With Snapshot Testing?

Alas, pixels aren’t visual elements. Font smoothing algorithms, image resizing, graphics cards, and even rendering algorithms generate pixel differences. And that’s just static content; the actual content can also vary between any two renderings. As a result, a tool that expects exact pixel matches between two images can report a flood of spurious pixel differences.

If you want to see some examples of bitmap differences affecting snapshot testing, take a look at the blog post we wrote on this topic last year.

Unfortunately, while snapshot testing might make intuitive sense, practitioners like you are finding that successful bitmap comparison requires a stationary target, while your company continues to develop dynamic websites across a range of browsers and operating systems. You can try to force your app to behave a certain way, but you may not always succeed.

Can you share some details of Snapshot Testing Problems?

For example, even when testing on a single browser and operating system, you must:

  • Identify and isolate (mute) fields that change over time, such as radio signal strength, battery state, and blinking cursors (see the masking sketch after this list).
  • Ignore user data that might otherwise change over time, such as visitor count.
  • Determine how to support testing content on your site that must change frequently – especially if you are a media company or have an active blog.
  • Consider how different hardware or software affects antialiasing.
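
One common workaround for that first item is to paint known-dynamic regions a solid color in both images before comparing them. Here is a hedged sketch, again using Pillow; the status-bar coordinates are hypothetical:

```python
from PIL import Image, ImageDraw

def mask_regions(image: Image.Image, boxes) -> Image.Image:
    """Paint known-dynamic regions (clock, battery meter, blinking cursor)
    solid black so they are excluded from the pixel comparison."""
    masked = image.copy()
    draw = ImageDraw.Draw(masked)
    for box in boxes:
        draw.rectangle(box, fill="black")
    return masked

# Hypothetical status-bar region at the top of a 390x844 mobile screenshot.
status_bar = [(0, 0, 390, 44)]
baseline = mask_regions(Image.open("baseline.png"), status_bar)
latest = mask_regions(Image.open("latest.png"), status_bar)
```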

When doing cross-browser testing, you must also consider:

  • Text wrapping, because you cannot guarantee where text wraps between two browsers, even when both follow the same specifications and use identical screen sizes.
  • Image rendering software, which can affect the pixels of font antialiasing as well as images and can vary from browser to browser (and even on a single browser among versions).
  • Image rendering hardware, which may render bitmaps differently.
  • Variations in browser font size and other elements that affect the text.

If you choose to pursue snapshot testing in spite of these issues, don’t be surprised if you end up joining the group of experienced testers who have tried, and then ultimately abandoned, snapshot testing tools.

Can I See Some Snapshot Testing Problems In Real Life?

Here are some quick examples of these real-life bitmap issues.

If you use pixel testing for mobile apps, you’ll need to deal with the very dynamic data at the top of nearly every screen: network strength, time, battery level, and more:

Then there’s dynamic content that shifts over time (news, ads, user-submitted content), where you want to check that everything is laid out with proper alignment and no overlaps. Pixel comparison tools can’t test for these cases. Twitter’s user-generated content is even more dynamic, with new tweets and like, retweet, and comment counts changing by the second.

Your app doesn’t even need to change to confuse pixel tools. If your baselines and test screenshots were captured on different machines with different display settings for anti-aliasing, that can turn pretty much the entire page into a false positive, like this:

Source: storybook.js.org

If you’re using pixel tools and you still have to track down false positives and expose false negatives, what does that say about your testing efficiency?

For these reasons, many companies throw out their pixel tools and go back to manual visual testing, with all of its issues.

There’s a better alternative: using AI — specifically computer vision — for visual testing.

How Do I Use AI for Automated Visual Testing?

The current generation of automated visual testing uses a class of artificial intelligence algorithms called computer vision as a core engine for visual comparison. Typically these algorithms are used to identify objects within images, as in facial recognition. We call them visual AI testing tools.

AI-powered automated visual testing uses a learning algorithm to interpret the relationship between the intended display of a page and the visual elements actually rendered, along with their locations. Like pixel tools, AI-powered automated visual testing takes page snapshots as your functional tests run. Unlike pixel-based comparators, AI-powered automated visual test tools use these algorithms, rather than raw pixel comparison, to determine when errors have occurred.

Unlike snapshot testers, AI-powered automated visual testing tools do not need special environments that remain static to ensure accuracy. Testing and real-world customer data show that AI testing tools have a high degree of accuracy even with dynamic content because the comparisons are based on relationships and not simply pixels.
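
To give a feel for the difference (as a toy illustration only, not Applitools’ actual Visual AI), a relationship-based check might compare which elements exist and roughly where they sit, instead of comparing raw pixels:

```python
# Toy illustration of layout-level comparison: compare element types and
# approximate positions, ignoring the content rendered inside them.
def layout_signature(elements, grid=10):
    # elements: dicts like {"tag": "img", "x": 0, "y": 120, "w": 320, "h": 180}.
    # Snapping coordinates to a coarse grid tolerates small rendering shifts.
    return sorted(
        (e["tag"], e["x"] // grid, e["y"] // grid, e["w"] // grid, e["h"] // grid)
        for e in elements
    )

def same_layout(baseline_elements, latest_elements) -> bool:
    return layout_signature(baseline_elements) == layout_signature(latest_elements)
```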

Here’s a comparison of the kinds of issues that AI-powered visual testing tools can handle compared to snapshot testing tools:

Visual Testing Use Case    | Snapshot Testing | Visual AI
Cross-browser testing      | No               | Yes
Account balances           | No               | Yes
Mobile device status bars  | No               | Yes
News content               | No               | Yes
Ad content                 | No               | Yes
User-submitted content     | No               | Yes
Suggested content          | No               | Yes
Notification icons         | No               | Yes
Content shifts             | No               | Yes
Mouse hovers               | No               | Yes
Cursors                    | No               | Yes
Anti-aliasing settings     | No               | Yes
Browser upgrades           | No               | Yes

Some AI-powered test tools have been tested at a false positive rate of 0.001% (about 1 false positive in every 100,000 comparisons).

AI-Powered Test Tools In Action

An AI-powered automated visual testing tool can test a wide range of visual elements across a range of OS/browser/orientation/resolution combinations. Running the first baseline of rendering and functional tests on a single combination is sufficient to guide an AI-powered tool toward test results across the full range of potential platforms.

Here are some examples of how AI-powered automated visual testing improves visual test results through awareness of content.

This is a comparison of two different USA Today homepage images. When an AI-powered tool looks at the layout comparison, the layout framework matters, not the content. Layout comparison ignores content differences; instead, it validates the existence of the content and its relative placement. Compare that with a bitmap comparison of the same two pages (also called an “exact comparison”):

Literally, every non-white space (and even some of the white space) is called out.

Which do you think would be more useful in your validation of your own content?

When Should I Use Visual Testing?

You can do automated visual testing with each check-in of front-end code, after unit testing and API testing, and before functional testing — ideally as part of your CI/CD pipeline running in Jenkins, Travis, or another continuous integration tool.

How often? On days ending with “y”. 🙂

Because of the accuracy of AI-powered automated visual testing tools, they can be deployed beyond pre-production functional and visual testing. AI-powered automated visual testing can help developers understand how visual components will render across various systems. In addition to running in development, test engineers can also validate new code against existing platforms, and new platforms against running code.

AI-powered tools like Applitools allow different levels of smart comparison.
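
For instance, the Applitools Eyes SDKs let you choose a match level per test or per check. A hedged sketch in Python (exact names may vary by SDK version):

```python
from applitools.selenium import Eyes, MatchLevel, Target

eyes = Eyes()
eyes.match_level = MatchLevel.LAYOUT  # validate structure and placement, ignore content churn

# Or choose the comparison mode for a single check:
# eyes.check("News feed", Target.window().layout())
```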

AI-powered visual testing tools are a key validation tool for any app or web presence that changes content and format regularly. For example, media companies that change their content as frequently as twice per hour use AI-powered automated testing to isolate the real errors that affect paying customers without drowning in content-change noise. And AI-powered visual test tools are key tools in the test arsenal for any app or web presence going through a brand revision or merger, as the low error rate and high accuracy let companies identify and fix problems caused by the major DOM, CSS, and JavaScript changes at the core of those updates.

Talk to Applitools

Applitools is the pioneer and leading vendor in AI-powered automated visual testing. Applitools has a range of options to help you become incredibly productive in application testing. We can help you test components in development. We can help you find the root cause of the visual errors you have encountered. And we can run your tests on an Ultrafast Grid that lets you recreate a visual test captured in one environment across a number of other browser and OS configurations. Our goal is to help you realize the vision we share with our customers: create functional tests for only one environment, and let Applitools run the validation across all your customer environments once your first test has passed. We’d love to talk testing with you – feel free to contact us anytime.

More To Read About Visual Testing

If you liked reading this, here are some more Applitools posts and webinars for you.

  1. Visual Testing for Mobile Apps by Angie Jones
  2. Visual Assertions – Hype or Reality? – by Anand Bagmar
  3. The Many Uses of Visual Testing by Angie Jones
  4. Visual UI Testing as an Aid to Functional Testing by Gil Tayar
  5. Visual Testing: A Guide for Front End Developers by Gil Tayar
  6. Visual Testing FAQ

Find out more about Applitools. Set up a live demo with us, or if you’re the do-it-yourself type, sign up for a free Applitools account and follow one of our tutorials.

The post What is Visual Testing? appeared first on Automated Visual Testing | Applitools.

]]>