Cross-device Testing Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/cross-device-testing/

Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

What is Cross Browser Testing? Examples & Best Practices
https://applitools.com/blog/guide-to-cross-browser-testing/ | Thu, 14 Jul 2022


In this guide, learn everything you need to know about cross-browser testing, including examples, a comparison of different implementation options and how you can get started with cross-browser testing today.

What is Cross Browser Testing?

Cross Browser Testing is a method for validating that the application under test works as expected across different browsers, devices, and viewport sizes. It can be done manually or as part of a test automation strategy. The tooling required for this activity can be built in-house or provided by external vendors.

Why is Cross Browser Testing Important?

When I began in QA, I didn’t understand why cross-browser testing was important. But it quickly became clear to me that applications frequently render differently at different viewport sizes and on different browser types. This can be a complex issue to test effectively, as the number of combinations required to achieve full coverage can become very large.

A Cross Browser Testing Example

Here’s an example of what you might look for when performing cross-browser testing. Let’s say we’re working on an insurance application. I, as a user, should be able to view my insurance policy details on the website, using any browser on my laptop or desktop. 

This should be possible while ensuring:

  • The features remain the same
  • The look and feel, UI or cosmetic effects are the same
  • Security standards are maintained

How to Implement Cross Browser Testing 

There are various aspects to consider while implementing your cross-browser testing strategy.

Understand the scope == Data!

“Different devices and browsers: Chrome, Safari, Firefox, Edge”

Thankfully IE is not in the list anymore (for most)!

You should first figure out the important combinations of devices, browsers, and viewport sizes from which your user base is accessing your application.

PS: Each team member should have access to the product’s analytics data to understand patterns of product usage. This data, which includes OS and browser details (type, version, viewport sizes), is essential to plan and test proactively, instead of reacting to situations (= defects) later.

This will tell you the different browser types, browser versions, devices, and viewport sizes you need to consider in your testing and test automation strategy.

Cross Browser Testing Techniques

There are various ways you can perform cross-browser testing. Let’s understand them.

Local Setup -> On a Single Dev / QA Machine

We usually have multiple browsers on our laptops / desktops. While there are other ways to get started, it is probably simplest to start implementing your cross browser tests here. You also need a local setup to enable debugging and maintaining / updating the tests.

If mobile-web is part of the strategy, then you also need to have the relevant setup available on local machines to enable that.

Setting up the Infrastructure

While this may seem the easiest, it can get out of control very quickly. 

Examples:

  • You may not be able to install all supported browsers on your computer (ex: Safari is not supported on Windows OS). 
  • Browser vendors keep releasing new versions very frequently. You need to keep your browser drivers in sync with this.
  • Maintaining / using older versions of the browsers may not be very straightforward.
  • If you need to run tests on mobile devices, you may not have access to all the variety of devices. So setting up local emulators may be a way to proceed.

The choices can vary based on the requirements of the project, on a case-by-case basis.

As alternatives, you can either create an in-house testing solution, or go for a platform / license / third-party tool to support your device farm needs.

In-House Setup of Central Infrastructure

You can set up a central infrastructure of browsers and emulators or real devices in your organization that can be leveraged by the teams. You will also need some software to manage the usage and allocation of these browsers and devices. 

This infrastructure can potentially be used in the following ways:

  • Triggered from local machine
    Tests can be triggered from any dev / QA machine to run on the central infrastructure.
  • For CI execution
    Tests triggered via Continuous Integration (CI) tools like Jenkins, CircleCI, Azure DevOps, TeamCity, etc. can be run against browsers / emulators set up on the central infrastructure. 

Cloud Solution    

You can also opt to run the tests against browsers / devices in a cloud-based solution. You can select from the device / browser options offered by various providers in the market that give you wide coverage per your requirements, without having to build, maintain, or manage the infrastructure yourself. This can also be used to run tests triggered from local machines, or from CI.

Modern, AI-Based Cross Browser Testing Solution: Applitools Ultrafast Test Cloud 

It is important to understand the evolution of browsers in recent years. 

  • They have started conforming to the W3C standard. 
  • They seem to have started adopting Continuous Delivery – well, at least releasing new versions at a very fast pace, sometimes multiple versions a week.
  • In a major development, many major browsers are adopting and building on the Chromium codebase. This makes them very similar, except for rendering, which is still fairly browser specific.

We need to factor this change in our cross browser testing strategy. 

In addition, AI-based cross-browser testing solutions are becoming quite popular, which use machine learning to help scale your automation execution and get deep insights into the results – from a functional, performance and user-experience perspective.

To get hands-on experience with this, I signed up for a free Applitools account, which uses a powerful Visual AI, and implemented a few tests using this tutorial as a reference.

How Does Applitools Visual AI Work as a Solution for Cross Browser Testing?

Integration with Applitools

Integrating Applitools with your functional automation is extremely easy. Simply select the relevant Applitools SDK based on your functional automation tech stack from here, and follow the detailed tutorial to get started.

Now, at any place in your test execution where you need functional or visual validation, add methods like eyes.checkWindow(), and you are set to run your test against any browser or device of your choice.
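To make this concrete, here is a minimal sketch using the Selenium-Java SDK (the app name, test name, and URL below are placeholders of my own, not taken from the tutorial):

import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class PolicyPageTest {
    public static void main(String[] args) {
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        WebDriver driver = new ChromeDriver();
        try {
            // Start the visual test (app and test names are placeholders)
            eyes.open(driver, "Insurance App", "View policy details");
            driver.get("https://example.com/policy"); // placeholder URL
            // One call validates the whole window, functionally and visually
            eyes.checkWindow("Policy details page");
            eyes.closeAsync();
        } finally {
            driver.quit();
            eyes.abortIfNotClosed();
        }
    }
}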

Reference: https://applitools.com/tutorials/overview/how-it-works.html

AI-Based Cross Browser Testing

Now you have your tests ready and running against a specific browser or device, scaling for cross-browser testing is the next step.

What if I told you that, with just the addition of the different device combinations, you could leverage the same single script to get functional and visual test results on all of the specified combinations, covering the cross browser testing aspect as well?

Seems too far-fetched?

It isn’t. That is exactly what Applitools Ultrafast Test Cloud does!

Adding the lines of code below will do the magic. You can also change the configurations as per your requirements. 

(The example below is from the Selenium-Java SDK. Similar configuration can be supplied for the other SDKs.)

// Create or retrieve the configuration object from eyes
Configuration config = eyes.getConfiguration();

// Add browsers with different viewports
config.addBrowser(800, 600, BrowserType.CHROME);
config.addBrowser(700, 500, BrowserType.FIREFOX);
config.addBrowser(1600, 1200, BrowserType.IE_11);
config.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
config.addBrowser(800, 600, BrowserType.SAFARI);

// Add mobile emulation devices in Portrait mode
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
config.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);

// Set the configuration object back on eyes
eyes.setConfiguration(config);

Now when you run the test again – say, against the Chrome browser on your laptop – you will see results in the Applitools dashboard for all the browser and device combinations provided above.

You may be wondering, the test ran just once on the Chrome browser. How did the results from all other browsers and devices come up? And so fast?

This is what Applitools Ultrafast Grid (a part of the Ultrafast Test Cloud) does under the hood:

  • When the test starts, the browser configuration is passed from the test execution to the Ultrafast Grid.
  • For every eyes.checkWindow call, the information captured (DOM, CSS, etc.) is sent to the Ultrafast Grid.
  • The Ultrafast Grid will render the same page / screen on each browser / device provided by the test – (think of this as playing a downloaded video in airplane mode).
  • Once rendered in each browser / device, a visual comparison is done and the results are sent to the Applitools dashboard.

What I like about this AI-based solution, is that:

  • I create my automation scripts for different purposes – functional, visual, cross browser testing, in one go
  • There is no need to maintain devices
  • There is no need to create different set-ups for different types of testing
  • The AI algorithms start providing results from the first run – “no training required”
  • I can leverage the solution on any kind of setup 
    • i.e. running the scripts through my IDE, terminal, or CI/CD 
  • I can leverage the solution for web, mobile web, and native apps
  • I can integrate Visual Testing results as part of my CI execution
  • Rich information is available in the dashboard, including easy baseline updates, Root Cause Analysis, reporting defects in Jira or Rally, etc.
  • I can ensure there are no Contrast issues (part of Accessibility testing) in my execution at scale

Here is the screenshot of the Applitools dashboard after I ran my sample tests:

Cross Browser Testing Tools and Applitools Visual AI

The Ultrafast Test Grid and Applitools Visual AI can be integrated into many popular free and open-source test automation frameworks to easily supercharge their effectiveness as cross-browser testing tools.

Cross Browser Testing in Selenium

As you saw above in my code sample, Ultrafast Grid is compatible with Selenium. Selenium is the most popular open source test automation framework. It is possible to perform cross browser testing with Selenium out of the box, but Ultrafast Grid offers some significant advantages. Check out this article for a full comparison of using an in-house Selenium Grid vs using Applitools.
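For reference, out-of-the-box cross browser testing in Selenium looks roughly like the sketch below – the same test logic repeated against each locally installed browser (this assumes ChromeDriver and geckodriver are available; the URL is a placeholder):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CrossBrowserSmokeTest {
    public static void main(String[] args) {
        for (String browser : new String[] {"chrome", "firefox"}) {
            // Each browser needs its own driver binary and its own full test run
            WebDriver driver = browser.equals("chrome") ? new ChromeDriver() : new FirefoxDriver();
            try {
                driver.get("https://example.com"); // placeholder URL
                System.out.println(browser + " page title: " + driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }
}

Every browser added to the loop multiplies the total execution time – exactly the scaling cost the Ultrafast Grid avoids by rendering a once-captured snapshot instead of re-running the test.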

Cross Browser Testing in Cypress

Cypress is another very popular open source test automation framework. However, it can only natively run tests against a few browsers at the moment – Chrome, Edge and Firefox. The Applitools Ultrafast Grid allows you to expand this list to include all browsers. See this post on how to perform cross-browser tests with Cypress on all browsers.

Cross Browser Testing in Playwright

Playwright is an open source test automation framework that is newer than both Cypress and Selenium, but it is growing quickly in popularity. Playwright has some limitations on doing cross-browser testing natively, because it tests “browser projects” and not full browsers. The Ultrafast Grid overcomes this limitation. You can read more about how to run cross-browser Playwright tests against any browser.

Pros and Cons of Each Technique (Comparison)

Local Setup

  • Infrastructure – Pros: fast feedback on the local machine. Cons: needs to be repeated for each machine where the tests need to execute; not all configurations can be set up locally.
  • Setup and Maintenance: to be taken care of by each team member from time to time, including OS / browser version updates.
  • Speed of Feedback: slowest, as all dependencies must be taken care of, and the test needs to be repeated for each browser / device combination.
  • Security: best, as everything is in-house, using internal firewalls, VPNs, network and data storage.

In-House Setup

  • Infrastructure – Pros: no inbound / outbound connectivity required. Cons: needs considerable effort to set up, maintain and update the infrastructure on a continued basis.
  • Setup and Maintenance: to be taken care of by the internal team from time to time, including OS / browser version updates.
  • Speed of Feedback: depends on concurrent usage due to multiple test runs.
  • Security: best, as everything is in-house, using internal firewalls, VPNs, network and data storage.

Cloud Solution

  • Infrastructure – Pros: no effort required to build / maintain / update the infrastructure. Cons: needs inbound and outbound connectivity from the internal network; latency issues may be seen as requests go to cloud-based browsers / devices.
  • Setup and Maintenance: to be taken care of by the service provider.
  • Speed of Feedback: depends on network latency; network issues may cause intermittent failures; also depends on the reliability and connectivity of the service provider.
  • Security: high risk. Needs inbound network access from the service provider to the internal test environments; browsers / devices have access to the data generated by running the tests, so cleanup is essential; there is no control over who has access to the cloud service provider’s infrastructure, or whether they access your internal resources.

AI-Based Solution (Applitools)

  • Infrastructure – Pros: no effort required to set up.
  • Setup and Maintenance: to be taken care of by the service provider.
  • Speed of Feedback: fast and seamless scaling.
  • Security: low risk. There is no inbound connection to your internal infrastructure; tests run on your internal network, so no data lives on the Applitools server other than the screenshots used for comparison with the baseline.

My Learning from this Experience

  • A good cross browser testing strategy allows you to reduce the risk of functionality and visual experience not working as expected on the browsers and devices used by your users. A good strategy will also optimize the testing efforts required to do this. To allow this, you need data to provide the insights from your users.
  • Having a holistic view of how your team will be leveraging cross browser testing (ex: manual testing, automation, local executions, CI-based execution, etc.) is important to know before you start off with your implementation.
  • Sometimes the easiest way may not be the best – e.g., automating against only the browsers on your computer will not scale. At the same time, using technology like the Applitools Ultrafast Test Cloud is very easy – you end up writing less code and get increased functional and visual coverage at scale. 
  • You need to think about the ROI of your approach and if it achieves the objectives of the need for cross browser testing. ROI calculation should include:
    • Effort to implement, maintain, execute and scale the tests
    • Effort to set up, and maintain the infrastructure (hardware and software components)
    • Ability to get deterministic & reliable feedback from test execution

Summary

Depending on your project strategy, scope, manual or automation requirements and of course, the hardware or infrastructure combinations, you should make a choice that not only suits the requirements but gives you the best returns and results. 

Based on my past experiences, I am very excited about the Applitools Ultrafast Test Cloud – a unique way to scale test automation seamlessly. In the process, I ended up writing less code, and got amazingly high test coverage, with very high accuracy. I recommend that everyone try it and experience it for themselves!

Get Started Today

Want to get started with Applitools today? Sign up for a free account and check out our docs to get up and running today, or schedule a demo and we’ll be happy to answer any questions you may have.

Editor’s Note: This post was originally published in January 2022, and has been updated for accuracy and completeness.

Modern Cross Device Testing for Android & iOS Apps
https://applitools.com/blog/cross-device-testing-mobile-apps/ | Wed, 13 Jul 2022


Learn the cross device testing practices you need to implement to get closer to Continuous Delivery for native mobile apps.

What is Cross Device Testing

Modern cross device testing is the system by which you verify that an application delivers the desired results on a wide variety of devices and formats. Ideally this testing will be done quickly and continuously.

There are many articles explaining how to do CI/CD for web applications, and many companies are already doing it successfully, but there is not much information available out there about how to achieve the same for native mobile apps.

This post will shed light on the cross device testing practices you need to implement to get a step closer to Continuous Delivery for native mobile apps.

Why is Cross Device Testing Important

The number of mobile devices used globally is staggering. Based on the data from bankmycell.com, we have 6.64 billion smartphones in use.

Source: https://www.bankmycell.com/blog/how-many-phones-are-in-the-world#part-1

Even if we are building and testing an app that impacts only a fraction of this number, that is still a huge number.

The below chart shows the market share by some leading smartphone vendors over the years.

Source: https://www.statista.com/statistics/271496/global-market-share-held-by-smartphone-vendors-since-4th-quarter-2009/

Challenges of Cross Device Testing

One of the biggest challenges of testing mobile apps is that, across all manufacturers combined, there are thousands of device types in use today. Depending on the popularity of your app, this means there are a huge number of devices your users could be using. 

These devices will have variations based on:

  • OS types and versions
  • potentially customized OS
  • hardware resources (memory, processing power, etc.)
  • screen sizes
  • screen resolutions
  • storage with different available capacity for each
  • Wi-Fi vs. mobile data (from different carriers)
  • And many more

It is clear that you cannot run your tests on every type of device that your users may use. 

So how do you get quick feedback and confidence from your testing that (almost) no user will get impacted negatively when you release a new version of your app?

Mobile Test Automation Execution Strategy

Mobile Testing Strategy

Before we think about the strategy for running your automated tests for mobile apps, we need to have a good and holistic mobile testing strategy.

Along with testing the app functionality, mobile testing has additional dimensions, and hence complexities as compared with web-app testing. 

You need to understand the impact of the aspects mentioned above and see what may, or may not be applicable to you.

Here are some high-level aspects to consider in your mobile testing strategy:

  • Know where and how to run the tests – real devices, emulators / simulators available locally versus in some cloud-based device farm
  • Increasing test coverage by writing less code – using Applitools Visual AI to validate functionality and user-experience
  • Scaling your test execution – using Applitools Native Mobile Grid
  • Testing on different text fonts and display densities 
  • Testing for accessibility conformance and impact of dark mode on functionality and user experience
  • Chaos & Monkey Testing
  • Location-based testing
  • Testing the impact of Network bandwidth
  • Planning and setting up the release strategy for your mobile application, including beta-testing, on-field testing, and staged rollouts. This differs between the Google Play Store and the Apple App Store
  • Building and testing for Observability & Analytics events

Once you have figured out your Mobile Testing Strategy, you now need to think about how and what type of automated tests can give you good, reliable, deterministic and fast feedback about the quality of your apps. This will result in you identifying the different layers of your test automation pyramid.

Remember: It is very important to execute all types of automated tests on every code change and every new app build. The functional / end-2-end / UI tests for your app should also be run at this time.

Additionally, you need to be able to run the tests on a local developer / QA machine, as well as in your Continuous Integration (CI) system. In the case of native / hybrid mobile apps, developers and QAs should be able to install the app on the (local) devices they have available, and run the tests against it. For CI-based execution, you need some form of device farm, available locally in your network or cloud-based, to allow execution of the tests.

This continuous testing approach will provide you with quick feedback and allow you to fix issues almost as soon as they creep into the app. 

How to Run Functional Tests against Your Mobile Apps

Testing and automating mobile apps has additional complexities. You need to install the app on a device before your automated tests can be run against it.
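As a minimal sketch (assuming the Appium Java client v8+; the server URL and .apk path are placeholders), the app is typically installed and launched through the driver’s capabilities:

import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.options.UiAutomator2Options;
import java.net.URL;

public class InstallAndLaunch {
    public static void main(String[] args) throws Exception {
        // Pointing 'app' at the build artifact installs it on the target device
        UiAutomator2Options options = new UiAutomator2Options()
                .setApp("/path/to/app-under-test.apk"); // placeholder path
        AndroidDriver driver = new AndroidDriver(
                new URL("http://127.0.0.1:4723"), options); // local Appium server
        try {
            System.out.println("Session started: " + driver.getSessionId());
        } finally {
            driver.quit();
        }
    }
}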

Let’s explore your options for devices.

Real Devices

Real devices are ideal for running the tests. Your users / customers are going to use your app on a variety of real devices. 

In order to allow proper development and testing to be done, each team member needs access to relevant types of devices (which is subject to their user-base).

However, it is not as easy to have a variety of devices available for running the automated tests, for each team member (developer / tester). 

The challenges of having the real devices could be related to:

  • cost of procuring a good variety of devices for each team member, to allow seamless development and testing work. 
  • maintenance of the devices (OS/software updates, battery issues, other problems the device may have at any point in time, etc.)
  • logistical issues like time to order and get devices, tracking of the devices assigned to the team, etc.
  • deprecating / disposing the older devices that are not used / required anymore.

Hence we need a different strategy for executing tests on mobile devices. Emulators and Simulators come to the rescue!

What is the Difference between Emulators & Simulators

Before we get into the specifics of the execution strategy, it is good to understand the differences between an emulator and a simulator.

Android-device emulators and iOS-device simulators make it easy for any team member to easily spin up a device.

An emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system.

An emulator can mimic the operating system, software, and hardware features of an Android device.

A Simulator runs on your Mac and behaves like a standard Mac app while simulating iPhone, iPad, Apple Watch, or Apple TV environments. Each combination of a simulated device and software version is considered its own simulation environment, independent of the others, with its own settings and files. These settings and files exist on every device you test within a simulation environment. 

An iOS simulator mimics the internal behavior of the device. It cannot mimic the OS / hardware features of the device.

Emulators / Simulators are a great and cost-effective way to overcome the challenges of real devices. These can easily be created as per the requirements and needs by any team member and can be used for testing and also running automated tests. You can also relatively easily set up and use the emulators / simulators in your CI execution environment.

While emulators / simulators may seem like they will solve all the problems, that is not the case. Like with anything, you need to do a proper evaluation and figure out when to use real devices versus emulators / simulators.

Below are some guidelines that I refer to.

When to use Emulators / Simulators

  • You are able to validate all application functionality
  • There is no performance impact on the application-under-test

Why use Emulators / Simulators

  • To reduce cost
  • Scale as per needs, resulting in faster feedback
  • Can use in CI environment as well

When to use Real Devices for Testing

  • If Emulators / Simulators are used, then run “Sanity” / focused testing on real devices before release
  • If Emulators / Simulators cannot validate all application functionality reliably, then invest in Real-Device testing
  • If Emulators / Simulators cause performance issues or slowness of interactions with the application-under-test

Cases when Emulators / Simulators May not Help

  • If the application-under-test has streaming content, or has high resource requirements
  • Applications relying on hardware capabilities
  • Applications dependent on customized OS version

Cross-Device Test Automation Strategy

The above approach of using real devices or emulators / simulators will help your team shift left and achieve continuous testing. 

There is one challenge that still remains – scaling! How do you ensure your tests run correctly on all supported devices?

A classic, or rather, a traditional way to solve this problem is to repeat the automated test execution on a carefully chosen variety of devices. This would mean, if you have 5 important types of devices, and you have a 100 tests automated, then you are essentially running 500 tests. 

This approach has multiple disadvantages:

  1. The feedback cycle is substantially delayed. If 100 tests took 1 hour to complete on 1 device, 500 tests would take 5 hours (for 5 devices). 
  2. The time to analyze the test results increases by 5x 
  3. The added number of tests could have flaky behavior based on device setup / location, network issues. This could result in re-runs or specific manual re-testing for validation.
  4. You need 5x more test data
  5. You are putting 5x more load on your backend systems to cater to executing the same test 5 times

We all know these disadvantages, however, there is no better way to overcome this. Or, is there?

Modern Cross-Device Test Automation Strategy

The Applitools Native Mobile Grid for Android and iOS apps can easily help you to overcome the disadvantages of traditional cross-device testing.

It does this by running your test on 1 device, but getting the execution results from all the devices of your choice, automatically. Well, almost automatically. This is how the Applitools Native Mobile Grid works:

  1. Integrate Applitools SDK in your functional automation.
  2. In the Applitools Eyes configuration, specify all the devices on which you want to do your functional testing. As an added bonus, you will be able to leverage the Applitools Visual AI capabilities to also get increased functional and visual test coverage.

Below is an example of how to specify Android devices for Applitools Native Mobile Grid:

Configuration config = eyes.getConfiguration();
// Configure the 15 devices we want to validate asynchronously
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Pixel_4, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_ULTRA, ScreenOrientation.PORTRAIT));
eyes.setConfiguration(config);

Below is an example of how to specify iOS devices for Applitools Native Mobile Grid:

Configuration config = eyes.getConfiguration();
// Configure the iOS devices we want to validate asynchronously
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_mini));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XS));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_X));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XR));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_8));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_7));
eyes.setConfiguration(config);   

  3. Run the test on any 1 device – available locally or in CI. It could be a real device or a simulator / emulator. 

Every call to Applitools to do a visual validation will automatically do the functional and visual validation for each device specified in the configuration above.

  4. See the results from all the devices in the Applitools dashboard
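Putting the pieces together, here is a minimal sketch assuming the Applitools Appium-Java SDK (the driver setup, app name, and test name are placeholders) – the single run below fans out to every device in the configuration:

import com.applitools.eyes.appium.Eyes;
import com.applitools.eyes.appium.Target;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.options.UiAutomator2Options;
import java.net.URL;

public class NativeMobileGridTest {
    public static void main(String[] args) throws Exception {
        AndroidDriver driver = new AndroidDriver(
                new URL("http://127.0.0.1:4723"),
                new UiAutomator2Options().setApp("/path/to/app-under-test.apk")); // placeholders
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        // Add the device list from the configuration snippets above before opening eyes
        try {
            eyes.open(driver, "MyMobileApp", "Home screen across devices"); // placeholder names
            // One check on the single device under test; the Native Mobile Grid
            // validates it on every device specified in the configuration
            eyes.check(Target.window().withName("Home screen"));
            eyes.closeAsync();
        } finally {
            driver.quit();
            eyes.abortIfNotClosed();
        }
    }
}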

Advantages of using the Applitools Native Mobile Grid

The Applitools Native Mobile Grid has many advantages.

  1. You do not need to repeat the same test execution on multiple devices. This saves the team a lot of time on execution, flaky-test triage, and result analysis
  2. Very fast feedback of test execution across all specified devices (10x faster than traditional cross device testing approach)
  3. There are no additional test data requirements 
  4. You do not need to procure, build and maintain the devices
  5. There is less load on your application backend-system
  6. A secure solution where your application does not need to be shared out of your corporate network
  7. Using visual assertions instead of functional assertions gives you increased test coverage while writing less code

Read this post on How to Scale Mobile Automation Testing Effectively for more specific details of this amazing solution!

Summary of Modern Cross-Device Testing of Mobile Apps

Using the Applitools Visual AI allows you to extend coverage at the top of your Test Automation Pyramid by including AI-based visual testing along with your UI/UX testing. 

Using the Applitools Native Mobile Grid for cross device testing of Android and iOS apps makes your CI loop faster by providing seamless scaling across all supported devices as part of the same test execution cycle. 

You can watch my video on Mobile Testing 360deg (https://applitools.com/event/mobile-testing-360deg/) where I share many examples and details related to the above to include as part of your mobile testing strategy.

To start using the Native Mobile Grid, simply sign up at the link below to request access. You can read more about the Applitools Native Mobile Grid in our blog post or on our website.

Happy testing!

What is Visual AI?
https://applitools.com/blog/visual-ai/ | Wed, 29 Dec 2021


In this guide, we’ll explore Visual Artificial Intelligence (AI) and what it means. Read on to learn what Visual AI is, how it’s being applied today, and why it’s critical across a range of industries – and in particular for software development and testing.

From the moment we open our eyes, humans are highly visual creatures. The visual data we process today increasingly comes in digital form. Whether via a desktop, a laptop, or a smartphone, most people and businesses rely on having an incredible amount of computing power available to them and the ability to display any of millions of applications that are easy to use.

The modern digital world we live in, with so much visual data to process, would not be possible without Artificial Intelligence to help us. Visual AI is the ability for computer vision to see images in the same way a human would. As digital media becomes more and more visual, the power of AI to help us understand and process images at a massive scale has become increasingly critical.

What is AI? Background on Artificial Intelligence and Machine Learning

Artificial Intelligence refers to a computer or machine that can understand its environment and make choices to maximize its chance of achieving a goal. As a concept, AI has been with us for a long time, with our modern understanding informed by stories such as Mary Shelley’s Frankenstein and the science fiction writers of the early 20th century. Many of the modern mathematical underpinnings of AI were advanced by English mathematician Alan Turing over 70 years ago.


Since Turing’s day, our understanding of AI has improved. However, even more crucially, the computational power available to the world has skyrocketed. AI is able to easily handle tasks today that were once only theoretical, including natural language processing (NLP), optical character recognition (OCR), and computer vision.

What is Visual Artificial Intelligence (Visual AI)?

Visual AI is the application of Artificial Intelligence to what humans see, meaning that it enables a computer to understand what is visible and make choices based on this visual understanding.

In other words, Visual AI lets computers see the world just as a human does, and make decisions and recommendations accordingly. It essentially gives software a pair of eyes and the ability to perceive the world with them.

As an example, seeing “just as a human does” means going beyond simply comparing the digital pixels in two images. This “pixel comparison” kind of analysis frequently uncovers slight “differences” that are in fact invisible – and often of no interest – to a genuine human observer. Visual AI is smart enough to understand how and when what it perceives is relevant for humans, and to make decisions accordingly.
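To see why, consider a naive pixel diff – a minimal Java sketch (the file names are placeholders, and the two images are assumed to have identical dimensions). Any anti-aliasing or sub-pixel rendering shift makes the count nonzero, even when both screenshots look identical to a human:

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class NaivePixelDiff {
    public static void main(String[] args) throws Exception {
        BufferedImage baseline = ImageIO.read(new File("baseline.png"));     // placeholder
        BufferedImage checkpoint = ImageIO.read(new File("checkpoint.png")); // placeholder
        long differingPixels = 0;
        for (int y = 0; y < baseline.getHeight(); y++) {
            for (int x = 0; x < baseline.getWidth(); x++) {
                // Flags even a one-bit color change no human eye can perceive
                if (baseline.getRGB(x, y) != checkpoint.getRGB(x, y)) {
                    differingPixels++;
                }
            }
        }
        System.out.println(differingPixels + " pixels differ");
    }
}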


How is Visual AI Used Today?

Visual AI is already in widespread use today, and has the potential to dramatically impact a number of markets and industries. If you’ve ever logged into your phone with Apple’s Face ID, let Google Photos automatically label your pictures, or bought a candy bar at a cashierless store like Amazon Go, you’ve engaged with Visual AI. 

Technologies like self-driving cars, medical image analysis, advanced image editing capabilities (from Photoshop tools to TikTok filters) and visual testing of software to prevent bugs are all enabled by advances in Visual AI.

How Does Visual AI Help?

One of the most powerful use cases for AI today is to complete tasks that would be repetitive or mundane for humans to do. Humans are prone to miss small details when working on repetitive tasks, whereas AI can repeatedly spot even minute changes or issues without loss of accuracy. Any issues found can then either be handled by the AI, or flagged and sent to a human for evaluation if necessary. This has the dual benefit of improving the efficiency of simple tasks and freeing up humans for more complex or creative goals.

Visual AI, then, can help humans with visual inspection of images. While there are many potential applications of Visual AI, the ability to automatically spot changes or issues without human intervention is significant. 

Cameras at Amazon Go can watch a vegetable shelf and understand both the type and the quantity of items taken by a customer. When monitoring a production line for defects, Visual AI can not only spot potential defects but understand whether they are dangerous or trivial. Similarly, Visual AI can observe the user interface of software applications to not only notice when changes are made in a frequently updated application, but also to understand when they will negatively impact the customer experience.

How Does Visual AI Help in Software Development and Testing Today?

Traditional testing methods for software testing often require a lot of manual testing. Even at organizations with sophisticated automated testing practices, validating the complete digital experience – requiring functional testing, visual testing and cross browser testing – has long been difficult to achieve with automation. 

Without an effective way to validate the whole page, Automation Engineers are stuck writing cumbersome locators and complicated assertions for every element under test. Even after that’s done, Quality Engineers and other software testers must spend a lot of time squinting at their screens, trying to ensure that no bugs were introduced in the latest release. This has to be done for every platform, every browser, and sometimes every single device their customers use. 

At the same time, software development is growing more complex. Applications have more pages to evaluate and increasingly faster – even continuous – releases that need testing. This can result in tens or even hundreds of thousands of potential screens to test (see below). Traditional testing, which scales linearly with the resources allocated to it, simply cannot scale to meet this demand. Organizations relying on traditional methods are forced to either slow down releases or reduce their test coverage.

A table showing the number of screens in production by modern organizations - 81,480 is the market average, and the top 30% of the market is 681,296
Source: The 2019 State of Automated Visual Testing

At Applitools, we believe AI can transform the way software is developed and tested today. That’s why we invented Visual AI for software testing. We’ve trained our AI on over a billion images and use numerous machine learning and AI algorithms to deliver 99.9999% accuracy. Using our Visual AI, you can achieve automated testing that scales with you, no matter how many pages or browsers you need to test. 

That means Automation Engineers can quickly take snapshots that Visual AI can analyze rather than writing endless assertions. It means manual testers will only need to evaluate the issues Visual AI presents to them rather than hunt down every edge and corner case. Most importantly, it means organizations can release better quality software far faster than they could without it.

Visual AI is 5.8x faster, 5.9x more efficient, 3.8x more stable, and catches 45% more bugs
Source: The Impact of Visual AI on Test Automation Report

How Visual AI Enables Cross Browser/Cross Device Testing

Additionally, due to the high level of accuracy, and efficient validation of the entire screen, Visual AI opens the door to simplifying and accelerating the challenges of cross browser and cross device testing. Leveraging an approach for ‘rendering’ rather than ‘executing’ across all the device/browser combinations, teams can get test results 18.2x faster using the Applitools Ultrafast Test Cloud than traditional execution grids or device farms.

Traditional test cycle takes 29.2 hours, modern test cycle takes just 1.6 hours.
Source: Modern Cross Browser Testing Through Visual AI Report

How Will Visual AI Advance in the Future?

As computing power increases and algorithms are refined, the impact of Artificial Intelligence, and Visual AI in particular, will only continue to grow.

In the world of software testing, we’re excited to use Visual AI to move past simply improving automated testing – we are paving the way towards autonomous testing. For this vision (no pun intended), we have been repeatedly recognized as a leader by the industry and our customers.

Keep Reading: More about Visual AI and Visual Testing

What is Visual Testing (blog)

The Path to Autonomous Testing (video)

What is Applitools Visual AI (learn)

Why Visual AI Beats Pixel and DOM Diffs for Web App Testing (article)

How AI Can Help Address Modern Software Testing (blog)

The Impact of Visual AI on Test Automation (report)

How Visual AI Accelerates Release Velocity (blog)

Modern Functional Test Automation Through Visual AI (free course)

Computer Vision defined (Wikipedia)

Introducing the Next Generation of Native Mobile Test Automation
https://applitools.com/blog/introducing-next-generation-native-mobile-test-automation/ | Tue, 29 Jun 2021


Native mobile testing can be slow and error-prone with questionable ROI. With the Ultrafast Test Cloud for Native Mobile, you can now leverage Applitools Visual AI to test native mobile apps with stability, speed, and security – in parallel across dozens of devices. The new offering extends the innovation of the Ultrafast Test Cloud beyond browsers and into mobile applications.

You can sign up for the early access program today!

The Challenge of Testing Native Mobile Apps

Mobile testing has a long and difficult history. Many industry-standard tools and solutions have struggled with the challenge of testing across an extremely wide range of devices, viewports and operating systems.

The approach currently in use by much of the industry is to utilize a lab made up of emulators, simulators, or even large farms of real devices. The tests must then be run on every device independently. The process is not only costly, slow, and insecure, but error-prone as well.

At Applitools, we had already developed technology to solve a similar problem for web testing, and we were determined to solve this issue for mobile testing too.

Announcing the Ultrafast Test Cloud for Native Mobile

Today, we are introducing the Ultrafast Test Cloud for Native Mobile. We built on the success of the Ultrafast Test Cloud Platform, which is already being used to boost the performance and quality of responsive web testing by 150 of the world’s top brands. The Ultrafast Test Cloud for Native Mobile allows teams to run automated tests on native mobile apps on a single device, and instantly render it across any desired combination of devices.

“This is the first meaningful evolution of how to test native mobile apps for the software industry in a long time,” said Gil Sever, CEO and co-founder of Applitools. “People are increasingly going to mobile for everything. One major area of improvement needed in delivering better mobile apps faster, is centered around QA and testing. We’re building upon the success of Visual AI and the Ultrafast Test Cloud to make the delivery and execution of tests for native mobile apps more consistent and faster than ever, and at a fraction of the cost.”

The Power of Visual AI and Ultrafast Test Grid

Last year we introduced our Ultrafast Test Grid, enabling teams to test web and responsive web applications against all combinations of browsers, devices, and viewports with blazing speed. We’ve seen how some of the largest companies in the world have used the power of Visual AI and the Ultrafast Test Grid to execute their visual and functional tests more rapidly and reliably on the web.

We’re excited to now be able to offer the same speed, agility, and security for native mobile applications. If you’re familiar with our current Ultrafast Test Grid offering, you’ll find the experience a familiar one.

The Ultrafast Test Cloud for Native Mobile comparing an iPhone 7 to an iPhone 8

Mobile Apps Are an Increasingly Critical Channel

Mobile usage continues to rise globally, and more and more critical activity – from discovery to research and purchase – is taking place online via mobile devices. Consumers are demanding higher and higher quality mobile experiences, and a poorly functioning site or visual bugs can detract significantly from the user’s experience. There is a growing portion of your audience you can only convert with a five-star quality app experience.

While testing has traditionally been challenging on mobile, the Ultrafast Test Cloud for Native Mobile increases your ability to test quickly, early and often. That means you can develop a superior mobile experience at less cost than the competition, and stand out from the crowd.

Get Early Access

With this announcement, we’re also launching our free early access program, with access to be granted on a limited basis at first. Prioritization will be given to those who register early. To learn more, visit the link below.

Automating Functional / End-2-End Tests Across Multiple Platforms
https://applitools.com/blog/automating-functional-end-to-end-tests-cross-platform/ | Tue, 01 Jun 2021


This post talks about an approach to Functional (end-to-end) Test Automation that works for a product available on multiple platforms. 

It shares details on the thought process and criteria involved in creating a solution, including how to write the tests and run them across multiple platforms without any code changes.

Lastly, the open-sourced solution also has examples of how to implement a test that orchestrates multiple devices / browsers to simulate multiple users interacting with each other as part of the same test.


Background

How many times do we see products available only on a single platform? For example, Android app only, or iOS app only?

Organisations typically start building the product on a particular platform, but then they do expand to other platforms as well. 

Once the product is available on multiple platforms, do they differ in their functionality? There definitely would be some UX differences, and in some cases, the way to accomplish the functionality would be different, but the business objectives and features would still be similar across both platforms. Also, one platform may be ahead of the other in terms of feature parity. 

The above aspects of product development are not new.

The interesting question is – how do you build your Functional (End-2-End / UI / System) Test Automation for such products?

Case Study

To answer this question, let’s take an example of any video conferencing application – something that we would all be familiar with in these times. We will refer to this application as “MySocialConnect” for the remainder of this post.

MySocialConnect is available on the following platforms:

  • All modern browsers (Chrome / Firefox / Edge / Safari) available on laptop / desktop computers as well as on mobile devices
  • Android app via Google’s PlayStore
  • iOS app via Apple’s App Store

The majority of the functionality is the same across all of these platforms. Example:

  • Signup / Login
  • Start an instant call
  • Schedule a call
  • Invite registered users to join an on-going call
  • Invite non-registered users to join a call
  • Share screen
  • Video on-off
  • Audio on-off
  • And so on…

There are also some functionality differences. Example:

  • Safe driving mode is available only in Android and iOS apps
  • Flip video camera is available only in Android and iOS apps

Test Automation Approach

So, repeating the big question for MySocialConnect is – how do you build your Functional (End-2-End / UI / System) Test Automation for such products?

I would approach Functional automation of MySocialConnect as follows:

  1. The test should be specified only once. The implementation details should figure out how to get the execution happening across any of the supported platforms
  2. For the common functionalities, we should implement the business logic only once
  3. There should be a way to address differences in business functionality across platforms
  4. The value of the automation for MySocialConnect is to simulate “real calls” – i.e. more than one user in the call – and interacting with each other

In addition, I need the following capabilities in my automation:

  • Rich reports
    • With on-demand screenshots attached in the report
    • Details of the devices / browsers where the test ran
    • Understand trends of test execution results
    • Test Failure analysis capabilities
  • Support parallel / distributed execution of tests to get faster feedback
  • Visual Testing support using Applitools Visual AI
    • To reduce the number of validations I need to write (less code)
    • Increase coverage (functional and UI / UX)
    • Contrast Advisor to ensure my product meets the WCAG 2.0 / 2.1 guidelines for Accessibility
  • Ability to run on local machines or in the CI
  • Ability to run the full suite or a subset of tests, on demand, and without any code change
  • Ability to run tests across any environment
  • Ability to easily specify test data for each supported environment 

Test Automation Implementation

To help implement the criteria mentioned above, I built (and open-sourced on GitHub) my automation framework – teswiz. The implementation is based on the discussion and guidelines in [Visual] Mobile Test Automation Best Practices and Test Automation in the World of AI & ML.

Tech Stack

After a lot of consideration, I chose a tech stack and toolset built around Cucumber (test specification), WebDriver and Appium (driving browsers and devices), Applitools Visual AI (visual testing), and ReportPortal.io (reporting) to implement my automated tests in teswiz.

Test Intent Specification

Using Cucumber, the tests are specified with the following criteria:

  • The test intent should be clear and “speak” business requirements
  • The same test should be able to execute against all supported platforms (assuming feature parity)
  • The clutter of the assertions should not pollute the test intent. That is implementation detail

Based on these criteria, here is a simple example of how the test can be written.
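A hypothetical sketch of such a scenario (the tag names and step wording are illustrative, not copied from the actual teswiz samples):

@android @web
Scenario: Registered user starts an instant call
  Given I sign in to MySocialConnect as a registered user
  When I start an instant call
  Then I should see that the call has started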

The tags on the above test indicate that the test is implemented and ready for execution against the Android apk and the web browser. 

Multi-User Scenarios

Given the context of MySocialConnect, implementing tests that are able to simulate real meeting scenarios would add the most value – as that is the crux of the product.

Hence, there is support built-in to the teswiz framework to allow implementation of multi-user scenarios. The main criteria for implementing such scenarios are:

  • One test to orchestrate the simulation of multi-user scenarios
  • The test step should indicate “who” is performing the action, and on “which” platform
  • The test framework should be able to manage the interactions for each user on the specified platform.

Here is a simple example of how this test can be specified.
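A hypothetical sketch of such a multi-user scenario (the step wording is illustrative):

@android @web @multiuser
Scenario: Two users interact in the same call
  Given "I" start an instant call on "android"
  When "you" join the call from "web"
  Then "I" should see "you" as a participant in the call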

In the above example, there are 2 users – “I” and “you”, each on a different platform – “android” and “web” respectively.

Configurable Framework

The automated tests are run in different ways – depending on the context.

Ex: In CI, we may want to run all the tests, for each of the supported platforms.

However, on local machines, the QA / SDET / developers may want to run only a specific subset of the tests – be it for debugging, or for verifying a new test implementation.

Also, there may be cases where you want to run the tests against your application in a different environment.

The teswiz framework supports all of these configurations, which can be controlled from the command line. This prevents having to make any code or configuration-file changes to run a specific subset or type of tests.

teswiz Framework Architecture

This is the high-level architecture of the teswiz framework.

Visual Testing & Contrast Advisor

Based on the data from the study done on the “Impact of Visual AI on Test Automation,” Applitools Visual AI helps automate your Functional Tests faster, while making the execution more stable. Along with this, you will get increased test coverage and will be able to find significantly more functional and visual issues compared to the traditional approach.

You can also scale your Test Automation execution seamlessly with the Applitools UltraFast Test Cloud and use the Contrast Advisor capability to ensure the application-under-test meets the accessibility guidelines of the WCAG 2.0 / 2.1 standards very early in the development stage.

Read this blog post about “Visual Testing – Hype or Reality?” to see some real data of how you can reduce the effort, while increasing the test coverage from our implementation significantly by using Applitools Visual AI.

Hence it was a no-brainer to integrate Applitools Visual AI into the teswiz framework, supporting visual assertions in your implementation simply by providing the APPLITOOLS_API_KEY. Advanced configurations to override the Applitools defaults can be done via the applitools_config.json file. 

This integration works for all the supported browsers of WebDriver and all platforms supported by Appium.

Reporting

It is very important to have good, rich reports of your test execution. These reports not only help pinpoint the reasons for failing tests, but also give an understanding of execution trends and the quality of the product under test. 

I have used ReportPortal.io as my reporting tool – it is extremely easy to set up and use and allows me to also add screenshots, log files and other information that may seem important along with the test execution to make root cause analysis easy.

How Can You Get Started?

I have open-sourced this teswiz framework so you do not need to reinvent the wheel. See this page to get started – https://github.com/znsio/teswiz#what-is-this-repository-about

Feel free to raise issues / PRs against the project for adding more capabilities that will benefit all.

The post Automating Functional / End-2-End Tests Across Multiple Platforms appeared first on Automated Visual Testing | Applitools.

]]>
Fast Testing Across Multiple Browsers
https://applitools.com/blog/fast-testing-multiple-browsers/ | Thu, 28 Jan 2021

Ultrafast testers seamlessly resolve unexpected browser behavior as they check in their code. This happens because, in less than 10 minutes on average, they know what differences exist. They could not do this if they had to wait the nearly three hours needed in the legacy approach. Who wants to wait half a day to see if their build worked?

The post Fast Testing Across Multiple Browsers appeared first on Automated Visual Testing | Applitools.

]]>

If you think like the smartest people in software, you conclude that testing time detracts from software productivity. Investments in parallel test platforms pay off by shortening the time to validate builds and releases. But, you wonder about the limits of parallel testing. If you invest in infrastructure for fast testing across multiple browsers, do you capture failures that justify such an investment?

The Old Problem: Browser Behavior

Back in the day, browsers used different code bases. In the 2000s and early 2010s, most application developers struggled to ensure consistent cross browser behavior. There were known behavior differences among Chrome, Firefox, Safari, and Internet Explorer.

Annoyingly, each major version of Internet Explorer had its own idiosyncrasies. When do you abandon users who still run IE 6 beyond its end-of-support date? How do you handle the IE 6 through IE 10 behavioral differences?

While Internet Explorer differences could be tied to major versions of operating systems, Firefox and Chrome released updates multiple times per year, and behaviors could change slightly between releases. How do you maintain your product's behavior on browsers in your customers' hands that you might not have developed with or tested against?

Cross browser testing proved itself a necessary evil to catch potential behavior differences. In the beginning, app developers needed to build their own cross browser infrastructure. Eventually, companies arose to provide cross browser (and then cross device) testing as a service.

The Current Problem: Speed Vs Coverage

In the 2020s, speed can provide a core differentiator for app providers. An app that delivers features more quickly can dominate a market. Quality issues can derail that app, so coverage matters. But, how do app developers ensure that they get a quality product without sacrificing speed of releases?

In this environment, some companies invest in cross browser test infrastructure or test services. They invest in the large parallel infrastructure needed to create and maintain cross browser tests. And the bulk of uncovered errors end up being rendering and visual differences, so these tests require some kind of visual validation. But do you really need to repeatedly run each test?

Applitools concluded that repeating tests required costly infrastructure as well as costly test maintenance. App developers intend that one server response work for all browsers. With its Ultrafast Grid, Applitools can capture the DOM state on one browser and then re-render it across the Applitools Ultrafast Test Cloud. Testers can choose among browsers, devices, viewport sizes, and multiple operating systems. How much faster can this be?

Hackathon Goal – Fast Testing With Multiple Browsers

In the Applitools Ultrafast Cross Browser Hackathon, participants used the traditional legacy method of running tests across multiple browsers to compare behavior results. Participants then compared their results with the more modern approach using the Applitools Ultrafast Grid. Read here about one participant’s experiences.

The time that matters is the time that lets a developer know the details about a discovered error after a test run. For the legacy approach, coders wrote tests for each platform of interest, including validating and debugging the function of each app test on each platform. Once the legacy test had been coded, the tests were run, analyzed, and reports were generated. 

For the Ultrafast approach, coders wrote their tests using Applitools to validate the application behavior. These tests used fewer lines of code and fewer locators. Then, the coders called the Applitools Ultrafast Grid and specified the browsers, viewports, and operating systems of interest to match the legacy test infrastructure.

Hackathon Results – Faster Tests Across Multiple Browsers

The report included a graphic showing the total test cycle time for the average Hackathon submission, legacy versus Ultrafast.

Here is a breakdown of the average participant time used for legacy versus Ultrafast across the Hackathon:

Activity | Legacy | Ultrafast
Actual Run Time | 9 minutes | 2 minutes
Analysis Time | 270 minutes | 10 minutes
Report Time | 245 minutes | 15 minutes
Test Coding Time | 1062 minutes | 59 minutes
Code Maintenance Time | 120 minutes | 5 minutes

The first three activities (test run, analysis, and report) make up the time between initiating a test and taking action. Across the three scenarios in the hackathon, the average legacy test required a total of 524 minutes. The average for Ultrafast was 27 minutes. Per scenario, then, the average was 175 minutes – almost three hours – for the legacy result, versus 9 minutes for the Ultrafast approach.

On top of the operations time for testing, the report showed the time taken to write and maintain the test code for the legacy and Ultrafast approaches. Legacy test coding took over 1060 minutes (17 hours, 40 minutes), while Ultrafast only required an hour. And, code maintenance for legacy took 2 hours, while Ultrafast only required 5 minutes.

Why Fast Testing Across Multiple Browsers Matters

As the Hackathon results showed, Ultrafast testing runs more quickly and gives results more quickly. 

Legacy cross-browser testing imposes a long delay from test start to action. The long run and analysis times make these tests unsuitable for any kind of software build validation. Most of these legacy tests get run in final end-to-end acceptance, with the hope that no visual differences get uncovered.

Ultrafast approaches enable app developers to build fast testing across multiple browsers into software build. Ultrafast analysis catches unexpected build differences quickly so they can be resolved during the build cycle.

By running tests across multiple browsers during build, Ultrafast Grid users shorten their find-to-resolve cycle to branch validation even prior to code merge. They catch the rendering differences and resolve them as part of the feature development process instead of the final QA process. 

Ultrafast testers seamlessly resolve unexpected browser behavior as they check in their code. This happens because, in less than 10 minutes on average, they know what differences exist. They could not do this if they had to wait the nearly three hours needed in the legacy approach. Who wants to wait half a day to see if their build worked?

Combine that with the other speed differences in coding and maintenance, and it becomes clear why Ultrafast testing across multiple browsers makes it practical for developers to run the Ultrafast Grid during development.

What’s Next

Next, we will cover code stability – the reason why Ultrafast tests take, on average, 5 minutes to maintain, instead of two hours. 

The post Fast Testing Across Multiple Browsers appeared first on Automated Visual Testing | Applitools.

]]>
The Many Uses of Visual Testing https://applitools.com/blog/many-uses-of-visual-testing/ Fri, 20 Nov 2020 01:53:50 +0000 https://applitools.com/?p=24593 As toolsmiths, let’s explore how else we might be able to use visual testing tools to meet our regression testing needs.

The post The Many Uses of Visual Testing appeared first on Automated Visual Testing | Applitools.

]]>

Oftentimes, when we're talking about tools to help us with testing, specifically automation tools, we hear a lot of preaching about not misusing these tools.

For example, people often ask how to use Selenium WebDriver – which is a browser automation tool – to do API testing. This clearly isn’t the right tool for the job.

While I most certainly agree that using the wrong tool for the job is not really efficient, I can also appreciate creative uses of tools for other means.

People "misuse" tools every day to meet their needs, and they end up realizing that while a given tool was not designed for a particular use case, it actually works extremely well for it!

For example, here is a clothes hanger. It is obviously designed to hang clothing.

But necessity is the mother of innovation. So when you lock yourself out of your car, this tool all of a sudden has a new use!

Hangers can be used to unlock car doors

Coca-cola was actually created as a medicine but after the manufacturing company was purchased, they began selling Coca-cola as a soft drink.

As if that wasn't enough of a repurpose, Coke can also be used to clean corrosion from batteries! (I should probably stop drinking this.)

Coca-cola can be used to clean batteries

So as we see, misusing a tool isn’t always bad. Some tools can be used for more than their intended purpose.

As engineers, most of us are curious and creative. This is a recipe for innovation!

Visual Testing

I’m working with automated visual testing a lot these days. It’s an approach that uses UI snapshots to allow you to verify that your application looks the way it’s supposed to.

Applitools does this by taking a screenshot of your application when you first write your test, and then comparing that screenshot with how the UI looks during regression runs. It is designed to find cosmetic bugs that could negatively impact your customers' experience – visual bugs that cannot be discovered by functional testing tools that query the DOM.

Take a look at a few examples of visual bugs:

Credit card prompt shows unaligned labels making it confusing for users
Sponsored Instagram post shows garbled, unreadable text and no image

While Applitools is second to none at finding cosmetic issues that may be costly for companies, I began to wonder how I could misuse this tool for good. I explored some of the hard problems in test automation to see if I could utilize the Applitools Eyes API to solve those as well!

Increase Coverage

Let’s look at a common e-commerce scenario where I want to test the flow of buying this nice dress. I select the color, size, and quantity. Then I add it to the cart and head over to the cart to verify.

Shopping flow

And here’s the code to test this scenario:

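The original post embeds the code as a Gist; here is a minimal stand-in for it, with the product URL and selectors assumed rather than taken from the author's code:

it('adds the Tokyo Talkies dress to the cart', () => {
  cy.visit('/dress/tokyo-talkies');            // product URL assumed
  cy.get('[data-test="color-navy"]').click();  // select color
  cy.get('[data-test="size-s"]').click();      // select size
  cy.get('[data-test="quantity"]').select('1');
  cy.contains('Add to Cart').click();
  cy.visit('/cart');
  cy.contains('Tokyo Talkies');                // verifies the item by name only
});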

Looking at the shopping cart, we’ve only verified that it contains the Tokyo Talkies dress. And that verification is by name. There’s a LOT more on this screen that is going unverified. And not just the look and feel, but the color, size, quantity, price, buttons, etc.

Sure, we can write more assertions, but this starts getting really long. We have doubled the size of the test here, and this is just a subset of all the checks we could possibly do.

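Again sketching what the embedded Gist conveys (selectors assumed), the assertion-heavy version balloons quickly:

it('adds the Tokyo Talkies dress to the cart', () => {
  cy.visit('/dress/tokyo-talkies');
  cy.get('[data-test="color-navy"]').click();
  cy.get('[data-test="size-s"]').click();
  cy.get('[data-test="quantity"]').select('1');
  cy.contains('Add to Cart').click();
  cy.visit('/cart');
  cy.contains('Tokyo Talkies');
  cy.get('[data-test="item-color"]').should('have.text', 'Navy');
  cy.get('[data-test="item-size"]').should('have.text', 'S');
  cy.get('[data-test="item-quantity"]').should('have.value', '1');
  cy.get('[data-test="item-price"]').should('contain', '$');
  cy.get('[data-test="item-image"]').should('be.visible');
  cy.contains('Checkout').should('be.visible');
  // ...and we still haven't checked layout, fonts, or spacing
});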

What if I used visual testing to not only make sure the app looks good, but to also increase my coverage?

On line 10 of the test below, I added a visual assertion. This covers everything I've thought about and even the stuff that I didn't. And I'm now back to 11 lines – so less code and more coverage!

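A stand-in approximating that 11-line test (selectors and names assumed; the visual checkpoint lands on line 10):

it('adds the Tokyo Talkies dress to the cart', () => {
  cy.eyesOpen({ appName: 'Shop Demo', testName: 'Cart' });
  cy.visit('/dress/tokyo-talkies');
  cy.get('[data-test="color-navy"]').click();
  cy.get('[data-test="size-s"]').click();
  cy.get('[data-test="quantity"]').select('1');
  cy.contains('Add to Cart').click();
  cy.visit('/cart');
  cy.eyesCheckWindow('Cart');   // one visual assertion covers it all
  cy.eyesClose();
});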

Localization Testing

I worked on a localized product, and automating the tests was really tough. We originally supported only the English version of the product; after the product was internationalized, we synced in the localized strings that development used, so we were at least able to assert on the text we needed.

However, not all languages are written left to right. Some are right to left, like Arabic. How could I verify this without visual testing?

Twitter home page in Arabic

Netflix internationalized their product and quickly saw UI issues. Their product assumed English string lengths, line heights, and graphic symbols. They translated the product into 26 languages – which is essentially 26 different versions of the product that need to be maintained and regression-tested.

And good localization also accounts for cultural variances and includes things like icon and image replacements. All of these are highly visual – which makes it a good case for visual testing.

Using Applitools, writing the test for different locales is not too bad, especially since you don't need to deal with the translated content in the assertions. The visual tests will verify the site in each locale, as in the sketch below.

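A minimal sketch of that idea, assuming locale-prefixed URLs (the locales, names, and URL pattern are illustrative, not the author's code):

['en', 'es', 'ar'].forEach((locale) => {
  it(`renders the ${locale} home page correctly`, () => {
    cy.eyesOpen({ appName: 'News Site', testName: `home-${locale}` });
    cy.visit(`/${locale}/`);                      // locale-specific URL assumed
    cy.eyesCheckWindow(`Home page (${locale})`);  // one visual baseline per locale
    cy.eyesClose();
  });
});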

Looking at the English-translated version of this website, I can see a few bugs here.

CNN website translated from Spanish to English
  • The Spanish logo is being used
  • The image overlay is still in Spanish
  • The video captions are also still in Spanish

Trying to verify everything on this page programmatically without visual testing would be painful and can easily miss some of these localization issues.

Cross-Platform Testing

I'm sure anyone who has had to write test automation that works on multiple platforms would agree with me that this is not fun at all! In fact, it's quite the chore. And yet, our applications are expected to work on so many different configurations: multiple browsers, multiple phones, tablets, you name it!

For example, here’s a web view and a mobile view of the Doordash application.

Doordash app in mobile and web views

There are quite a few differences, such as:

  • On the web view, the address is on the top row to the right of the menu, but on mobile it’s on the 2nd row and centered
  • The site title exists on the web view but is not on the mobile view at all – only the logo
  • The search field exists on the web view but only the search icon on the mobile view
  • And the shopping cart shows the quantity on the web view but not on the mobile view

Because of these differences, we either need to write totally different framework code for the various configurations, or include conditional logic for every place where the app differs. Like I said, painful! The sketch below shows the conditional-logic approach.

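A stand-in for the embedded Gist, showing the kind of per-viewport branching this forces (the selectors and breakpoint are assumptions):

// branch on the viewport and duplicate the intent for each platform
const isMobile = Cypress.config('viewportWidth') < 768;  // breakpoint assumed

it('shows the header for the current platform', () => {
  cy.visit('/');
  if (isMobile) {
    cy.get('[data-test="search-icon"]').should('be.visible');   // icon only
    cy.get('[data-test="search-field"]').should('not.exist');
    cy.get('[data-test="site-title"]').should('not.exist');
  } else {
    cy.get('[data-test="search-field"]').should('be.visible');  // full field
    cy.get('[data-test="site-title"]').should('be.visible');
  }
});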

But the worst part of it all is that the return on investment is really low. I hardly find any cross-platform bugs using this approach. And it's not because they don't exist – it's because most cross-platform bugs are visual bugs!

The viewport size changes, and all of a sudden, your app looks goofy!

Visual bugs on mobile viewports

So what if, instead of just using visual testing to make sure my app looks nice, I bent this technology a bit to execute my cross-platform tests more efficiently?

Like instead of a functional grid that executes my tests step by step across all of the devices I specify, what about a visual grid that allows me to write my test only once, without the conditional viewport logic? It would then execute my test and blast the application's state across all of my supported devices, browsers, and viewports in parallel so that I can find the visual bugs.

That’s pretty powerful and yes, we can use visual testing to do this too!

Accessibility Testing

There’s a lot of talk about accessibility testing lately. It’s one of those important things that often gets missed by development teams.

You may have heard of the recent Supreme Court case where a blind man sued a pizza franchise because their site was not accessible.

This is not a game. We have to take this seriously. Can visual testing help with this at all?

Yep! What if we used visual testing to detect accessibility violations such as insufficient contrast between colors, font sizes, and so on? This could make a nice complement to other accessibility tools that analyze the DOM.

Visual testing detecting unacceptable contrast levels

A/B Testing

A/B testing is a nightmare for test automation, and sometimes impossible. It’s where you have two variations of your product as an experiment to see which one performs better with your users.

Let’s say Variation B did much better than Variation A. We’d assume that’s because our users really liked Variation B. But what if Variation A had a serious bug that prevented many users from converting?

The problem is that many teams don't automate tests for both variations, because it's throwaway code and you're not entirely sure which variation you'll get each time the test runs.

Instead of writing a bunch of conditionals and trying to maintain the locators for both variations, what if we used visual testing instead? Could that make things easier to automate?

Indeed! Applitools supports A/B testing by allowing you to save multiple baseline images for your app’s variations.

I could write one test and, instead of coding all the differences between the two variations, simply do the visual check and provide baseline images of both variations.

Dark Mode

All the cool apps now provide a dark mode option. But how do you write automated tests for this? It's kind of an A/B scenario where the app can be in either variation, but the content is the same. That makes it relatively easy to write the code – but then we miss stuff.

For example, when Slack first offered dark mode, I noticed that I couldn’t see any code samples.

As much as I work with visual testing, it didn’t dawn on me that I could use visual testing for this until Richard Bradshaw pointed it out to me. In hindsight, DUH of course this can be tackled by visual testing. But in the moment, it wasn’t apparent to me because visual tools don’t advertise this as a use case.

Which brings me back to my original point…

Misuse Your Tools!

Most creators make a solution for a specific problem. They aren’t thinking of ALL of our use cases. So, I encourage you to not just explore your products, but explore your toolset and don’t be afraid to misuse (but not abuse) your tools where it makes sense.

The post The Many Uses of Visual Testing appeared first on Automated Visual Testing | Applitools.

]]>
Can Automated Cross Browser Testing Be Ultrafast? https://applitools.com/blog/can-automated-cross-browser-testing-be-ultrafast/ Wed, 16 Sep 2020 00:07:23 +0000 https://applitools.com/?p=22991 Early January this year, Applitools announced the results of their Visual AI Rockstar Hackathon winners of which I am lucky to be included as one of their silver winners. I blogged about...

The post Can Automated Cross Browser Testing Be Ultrafast? appeared first on Automated Visual Testing | Applitools.

]]>

Early January this year, Applitools announced the results of their Visual AI Rockstar Hackathon, and I am lucky to be included as one of their silver winners. I blogged about how I approached the hackathon and gave my honest feedback on why I think it's modernising the way we do test automation, which you can find in this post – Applitools: Modernising the way we do test automation.

Six months in, they announced another hackathon, but this time the focus was on cross browser testing via the Ultrafast Grid, comparing it with traditional solutions such as running the same functional tests on different browsers and viewports locally. I participated in the hackathon again and ended up as one of their gold winners this time, which I'm extremely pleased about, because not only did I win one of their amazing prizes, but I also improved my technical skills and learned a lot about the true cost of cross browser testing.

Why Automated Cross Browser Testing?

First, let’s talk about why cross browser testing? Why another hackathon?

If you're like me and have been testing web applications for some time, you'll know that cross browser testing is a pain and time consuming. There is no way you can reach 100% cross browser coverage unless you have a lot of time and want to devote all your testing efforts to cross browser testing alone – and don't forget that you also need to check other viewports. In time, it gets really boring and tedious. This is why it's a great idea to automate cross browser tests as much as possible, so we can focus on other areas where we are also needed, such as exploratory and accessibility testing.

Nowadays, there are not a lot of functional differences between browsers. The code that gets deployed is mostly the same on any platform, but the way that code is rendered visually exposes differences that we need to catch. Rather than doing cross browser functional testing, where we test the functionality across different browsers, a better alternative is cross browser visual testing, where we validate the look of our pages, because this is what our users see.

The problem is that automated cross browser testing, whether functional or visual, can still take a considerable amount of time to set up, because you need to create an automation framework that can scale and be easily maintained. This can be quite a challenge for testers who are fairly new to test automation.


Cross Browser Testing in a nutshell

The purpose of this hackathon was to show how easy and how fast cross browser visual testing can be if you're using modern tools such as Applitools. It also highlights that existing testing tools are great for cross browser functional testing but not so great at cross browser visual testing, which I'll expand on later.

The Hackathon Experience: Cypress

The hackathon was split into automating three different scenarios for a sample e-commerce website called AppliFashion. The scenarios had to be automated twice: with any testing tool of your choice, and with that same tool plus Applitools alongside. The automated tests were then executed against two versions of the website – version 1, assumed to be bug free, and version 2, with new bugs added in. On version 2, you had to update the automated tests to catch the bugs introduced, and then compare the effort of doing this with your chosen traditional tool as opposed to with Applitools.

I decided to use Cypress as my testing tool and while it’s a great tool for automating the browser, I spent 5.5 hours doing cross browser visual testing with this approach and still felt that I missed a lot of visual bugs. Let’s look at this in more detail.

  • Installing Cypress: 5 mins
  • Writing tests for version 1: 2 hours
  • Test maintenance for version 2 and finding bugs: 1 hour
  • Test reporting and project refactoring: 2 hours
  • Documentation: 30 mins
  • Total Time: 5 hrs, 35 mins

Writing the tests took quite some time. I needed to verify a lot of the elements and get their selectors so I could assert that they were visible on the page. Some elements are hidden on certain viewports, so this had to be reflected in the tests. You can find example code showing how I handled this in my applitools ultrafast grid GitHub repo. The test execution time was also slightly longer because I had more viewports (desktop, tablet, mobile) and browsers (Chrome and Firefox) to cover locally.

When it was time to run the same tests on version 2, I had to make some adjustments and log the bugs that my automated tests found. This took me an hour because I had to update the selectors to fix my tests. Even then, I wasn't confident that my tests had found all the visual bugs on version 2; I had to find some of them manually, since verifying CSS changes was difficult with Cypress alone.

When it came to test reporting and project refactoring, I invested 2 hours on this task. As I knew from experience, good reporting helps everyone make sense of test data. I wanted to integrate Mochawesome reporter so I could present the test results nicely to the hackathon judges. I wrote a tutorial on how to do this which you can find in this post – Test Reporting with Cypress and Mochawesome. I also started noticing that my test code was getting a lot of duplication so I did some refactoring to clean up my automation framework.

The Hackathon Experience: Cypress with Applitools

Now let’s look at how long it took me to do cross browser visual testing with Cypress and Applitools.

  • Install Applitools Cypress SDK: 2 mins
  • Setup project structure with Cypress and Applitools: 10 mins
  • Writing tests for version 1: 20 mins
  • Running tests for version 2: 5 mins
  • Bug reporting with Applitools: 25 mins
  • Documentation: 10 mins
  • Total time: 1 hr, 12 min

In total, I spent just over an hour writing the tests for both version 1 and version 2 when using Cypress with Applitools. The time difference was massive! There were a few visual bugs that I missed but Applitools caught, and even then I didn't have to rewrite my tests at all. All the adjustments were made directly on the Applitools dashboard, marking the bugs with its annotation feature.

Writing the tests was considerably faster. As opposed to verifying individual selectors and checking that each is visible, I took a screenshot of the whole page, which is a better approach for visual validation – see the sketch below.

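The original post embeds the snippet as an image; this stand-in shows the idea, with the demo URL and names assumed:

it('should display the main page correctly', () => {
  cy.eyesOpen({ appName: 'AppliFashion', testName: 'Main Page' });
  cy.visit('https://demo.applitools.com/gridHackathonV1.html');  // URL assumed
  cy.eyesCheckWindow('Main Page');  // one full-page visual checkpoint
  cy.eyesClose();
});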

Visual validation test with Cypress and Applitools

The code snippet above is simpler and will catch more visual bugs with less or even no test code maintenance.

So you might be wondering, from the above code, how did I handle the cross browser capabilities? This was easily achieved by creating a file called `applitools.config.js` in the root of the project and specifying the list of browsers that the tests should execute on. By utilising the Ultrafast Grid and setting my concurrency to 10, I was able to run the tests quicker too.

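The embedded config is also an image in the original; a stand-in sketch, with the browser entries illustrative:

module.exports = {
  appName: 'AppliFashion',
  batchName: 'UFG Hackathon',
  concurrency: 10,  // newer eyes-cypress versions name this key testConcurrency
  browser: [
    { width: 1200, height: 700, name: 'chrome' },
    { width: 1200, height: 700, name: 'firefox' },
    { width: 768, height: 700, name: 'safari' },
    { deviceName: 'iPhone X', screenOrientation: 'portrait' },
  ],
};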

Achieving Cross Browser Visual Testing with Applitools

The Impact of Visual Cross Browser Testing for Testers

Overall, this was another excellent hackathon from Applitools, and it showed that cross browser testing can be easy and fast. One of the trends I've mentioned in the past is that more and more testing tools are becoming user friendly – if you are new to test automation, this is great news!

Also, from my experience, the production bugs that get missed most frequently are visual bugs. A page that hasn't loaded any of its CSS files can still work functionally, and your automated functional test will still pass. Rather than doing cross browser functional testing, it's better to do cross browser visual testing to get maximum value.

Finally, the massive time saving that it provides means that we, as testers, have more time to explore the areas that automated tests can’t catch and that is a big win.

For More Information

The post Can Automated Cross Browser Testing Be Ultrafast? appeared first on Automated Visual Testing | Applitools.

]]>
Modern Cross Browser Testing with Cypress and Applitools https://applitools.com/blog/modern-cross-browser-testing-cypress/ Thu, 16 Jul 2020 22:35:04 +0000 https://applitools.com/?p=20096 Cypress, among other things, validates the structure of your DOM. It verifies that a button is visible or that it has the correct text. But what about the look and...

The post Modern Cross Browser Testing with Cypress and Applitools appeared first on Automated Visual Testing | Applitools.

]]>

Cypress, among other things, validates the structure of your DOM. It verifies that a button is visible or that it has the correct text. But what about the look and styling of our app? How can we test that our application looks good visually? We could use Cypress to verify that it has the correct CSS properties, but then our code would become very long and complex – and it's practically guaranteed that engineers will avoid maintaining such a test.

Visual Testing

This is why we need visual testing as part of our test strategy. With visual testing, we are validating what the user sees on different browsers/viewports/mobile devices. However, it’s very time consuming when you do it manually.

Imagine if someone told you that you had to compare two nearly identical screenshots of your app manually, spot-the-differences style.

There are 30 differences between them. You could probably find them all, given quite some time, but that is a really slow feedback loop. The obvious solution, of course, is to automate this!

Now, automated visual testing is not new. There are many tools out there that can help you with visual testing. These tools use pixel-by-pixel comparison to compare two images. The idea is that you have a baseline image, which is your source of truth. Every time there is a code change and the automated tests run, a test image is taken. The visual testing tool then compares this test image with the baseline image and checks for differences. At the end, it reports whether the test passed or failed.

Problem with pixel-by-pixel visual testing

The problem with pixel-by-pixel visual testing, though, is that it is very sensitive to even small changes. With just a 1px difference, your test will fail, even though to the human eye the two images look identical.

You also get the issue that if you run these tests in your build pipelines, you might see a lot of pixel differences, especially if the base image was generated locally. If you ignore the mismatch overlay and compare the two images side by side, some of the reported changes look fine – but because the images were taken on different machine setups, the tool reports a lot of pixel differences. You can solve this with Docker and generate the base image using the same configuration as the test image, but from personal experience, I still get flakiness with pixel-by-pixel comparison tools.

Also, what if you have dynamic data? The data changes between runs, but the overall layout stays the same. You can set the mismatch threshold slightly higher, so your tests fail only when they reach the threshold you defined, but then you might miss actual visual bugs.

Cross Browser Visual Validation

Most of the existing open source tools for visual testing run on only one or two browsers. For example, one of the tools we used before, BackstopJS, a popular visual testing framework, only runs visual tests on Chrome headlessly. AyeSpy, a tool created by one of the teams here at News UK, hooks into your Selenium Grid to run your visual tests on different browsers. But it is still a bit limited: the Selenium Docker images only exist for Chrome and Firefox. What if you want to run your visual tests on Safari or Internet Explorer? You can definitely verify those yourself, but again, as mentioned, that is time consuming.

How can we solve these different visual testing issues?

Applitools

This is where Applitools comes in. Unlike existing visual tools, Applitools uses AI comparison algorithms, so images are compared the way a human would compare them. It was founded in 2013 and integrates with almost every testing framework out there. You name it – Selenium, Cypress, Appium, WebdriverIO, Espresso, XCUITest, even Storybook! With Applitools, you can validate visual differences on a wide range of browsers and viewports. By using different comparison algorithms (exact, strict, layout, and content), you have different options to compare images and can cater for different scenarios such as dynamic data or animations.

Rather than taking a screenshot, Applitools extracts a snapshot of your DOM. This is one of the reasons visual tests run so fast in Applitools. Once the DOM snapshots have been uploaded to the Applitools Cloud, the Applitools Ultrafast Grid renders them in multiple browsers and viewports simultaneously, generates the screenshots, and performs the AI-powered image comparison.

Getting Started

To get you started, install the following package in your project. This one is specific to Cypress, so install the correct package for your test framework of choice.

npm i -D @applitools/eyes-cypress

Once this has been installed, you need to configure the plugin, and the easiest way to do this is to run the following command in your terminal:

npx eyes-setup

This will automatically add the necessary imports needed to get Applitools working in your Cypress project.

Writing our first test with just Cypress

Let’s start doing some coding and add some validations on a sample react image app that I created a while back. It is a simple image gallery which uses unsplash API for the backend. An example github repo which contains the different code examples can be found here.

Our Cypress test might look like the following snippet. Keep in mind, this only asserts the application to some extent. I could add more code to verify that it has the correct CSS attributes, but I don't want to make the code too lengthy.
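The original snippet is embedded in the post; this is a minimal stand-in with the app URL and selectors assumed:

describe('Image gallery', () => {
  it('shows the most popular photos', () => {
    cy.visit('http://localhost:3000');  // app URL assumed
    cy.get('.search-bar').should('be.visible');
    cy.get('.image-card').should('have.length.greaterThan', 0);
    cy.contains('Image Gallery').should('be.visible');
  });
});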

Writing our first test with Cypress and Applitools

Now, let’s look at how we can write the test using Cypress and Applitools.

Applitools provides the following commands to Cypress as a minimal setup. `cy.eyesOpen` initiates the Eyes SDK, and we pass it some arguments such as our app name, batch name, and browser size (defaults to Chrome). The command `cy.eyesCheckWindow` takes a DOM snapshot, so every call to this command generates a snapshot; you can call it after each action, such as visiting your page under test or clicking buttons and dropdown menus. Finally, once you are finished, you just call `cy.eyesClose`. To know more about the Eyes Cypress SDK, please visit the documentation here to find more information. A skeleton of this flow is sketched below.
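As a rough skeleton (the names and viewport size are illustrative):

describe('Image gallery - visual', () => {
  it('looks as designed', () => {
    cy.eyesOpen({
      appName: 'Image Gallery',  // illustrative names
      batchName: 'Visual Tests',
      browser: { width: 1280, height: 720, name: 'chrome' },
    });
    cy.visit('http://localhost:3000');   // app URL assumed
    cy.eyesCheckWindow('Gallery page');  // one DOM snapshot
    cy.eyesClose();
  });
});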

In order to run this in Applitools, you need to export an API key, as detailed in this article. Once you have the API key, run the following in your terminal:

export APPLITOOLS_API_KEY=<your_key>
npx cypress open

Once the test is finished, go to the Applitools dashboard and you should see your test run. The first time you run it, there will be no baseline image; when you rerun the test and everything looks good, you should see the results in the dashboard.

Handling Dynamic Data

Since we are using the unsplash API, we don't have control over what data gets returned. When we refresh the app, we might get different results. As an example, the request I am making to unsplash gets the popular photos on a given day; if I rerun my test tomorrow, the images will be different.

The good thing is that we can apply a layout region so the actual data is ignored, or we can set the match level to Layout in our test, which we can preview on the dashboard. If the layout of our image gallery changes, Applitools will report it as an issue.

Making code changes to our application

Now, let’s create some changes in our application (code references found here) and introduce the following:

  • Update the background colour of the search bar (bug)
  • Add a new footer component (new feature)
  • Remove camera icon on the subtitle (bug)

If we run the test where we only use Cypress, how many of these changes do you think the test will catch? Will it catch the new footer component? How about the updated background colour of the search bar? Or the missing header icon? Probably not, because we didn't write any assertions for them. Now, let's rerun the test written in Cypress and Applitools.

This time, all the changes were caught, and we didn't have to update our test, since all the maintenance is done on the Applitools side. Any issues can be raised directly on the dashboard, and you can also configure it to integrate with your JIRA projects.

Cross Browser Visual Validation

To run the same test on different browsers, you just need to specify the browser options in your Applitools configuration. I've refactored the code a bit: I created a file called `applitools.config.js` and moved into it some of the setup we initially passed to `cy.eyesOpen`, as sketched below.
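A minimal sketch of that file; the browser entries and names are illustrative:

module.exports = {
  appName: 'Image Gallery',
  batchName: 'Visual Tests',
  browser: [
    { width: 1280, height: 720, name: 'chrome' },
    { width: 1280, height: 720, name: 'firefox' },
    { width: 1280, height: 720, name: 'safari' },
  ],
};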

Simply rerun your test and check the results in the Applitools dashboard.

Final Thoughts

This is just an introductory post on how you can use Applitools so if you want to know more about its other features, check out the following resources:

While open source pixel-by-pixel comparison tools can help you get started with visual testing, using Applitools can modernize the way you do testing. As always, do a thorough analysis of any tool to see if it will meet your testing needs.

The post Modern Cross Browser Testing with Cypress and Applitools appeared first on Automated Visual Testing | Applitools.

]]>