Native Mobile Grid Archives – Automated Visual Testing | Applitools
https://applitools.com/blog/tag/native-mobile-grid/

Applitools delivers the next generation of test automation, powered by the AI-assisted computer vision technology known as Visual AI.

iOS 16 – What’s New for Test Engineers
https://applitools.com/blog/ios-16-whats-new-test-engineers/ – Fri, 16 Sep 2022

Learn about what’s new in iOS 16, including some new updates test engineers should be looking out for.

It’s an exciting time of the year for anyone who uses Apple devices – and that includes QA engineers charged with mobile testing. Apple has just unveiled iOS 16, and as usual it is filled with new features for iOS users to enjoy.

Many of these new features, of course, affect the look and feel and usability of any application running on iOS. If you’re in QA, that means you’ve now got a lot of new testing to do to make sure your application works as perfectly on iOS 16 as it did on previous versions of the operating system.

For example, Apple has just upgraded its iconic “notch” into a “Dynamic Island.” This is a significant redesign of a small but highly visible component that your users will see every time they look at their phones. If your app doesn’t handle this new UI change appropriately, your users will notice.

If you’re using the Native Mobile Grid for your mobile testing, there’s no need to worry: it already supports automated testing of iOS 16 on Apple devices.

With this in mind, let’s take a look through some of the most exciting new features of iOS 16, with a focus on how they can affect your life as a test engineer.

Customizable Lock Screen

The lock screen on iOS 16 devices can now be customized far more than before, going beyond changing the background image – you can now alter the appearance of the time as well as add new widgets. Another notable change here is that notifications now pop up from the bottom instead of the top.

As a QA engineer, there are a few things to consider here. First, if your app will offer a new lock screen widget, you certainly need to test it carefully. Visual regression testing and getting contrast right will be especially important against an unpredictable, user-chosen background.

Even if you don’t develop a widget, it’s worth thinking about (and then verifying) whether the user experience could be affected by your notifications moving from the top of the user’s screen to the bottom. Be sure to take a look at how they appear when stacked, too, so that the right information is always visible.

Stacked bottom notifications in iOS 16 – Image via Apple

Notch –> Dynamic Island

As we mentioned above, the notch is getting redesigned into a “Dynamic Island.” This new version of the cutout required for the front-facing camera can now present contextual information about the app you’re using. It will expand and contract based on the info it’s displaying, so it’s not a fixed size.

That means your app may now be resizing around the new “Dynamic Island” in ways it never did with the old notch. Similarly, your contextual notifications may not look quite the same either. This is definitely something worth testing to make sure the user experience is still exactly the way you meant it to be.

Dynamic Island transitioning from smaller to larger – Image via Apple

Other New iOS 16 Features

There are a lot of other new features, of course. Some of these may not have as direct an impact on the UI or functionality of your own applications, but it’s worth being familiar with them all. Here are a few of the other biggest changes – check them carefully against your own app and be sure to test accordingly.

  • Send, Edit and Unsend Messages: You can now send, edit and unsend content in the Messages app, and the Mail app now lets you unsend messages and schedule delivery as well
  • Notifications and Live Activities: As mentioned, notifications now come up from the bottom. They can also “update” so that you don’t need to get repeated new notifications from the same app (e.g., sports scores, rideshare ETAs)
  • Live Text and Visual Lookup: iOS users can now extract live text from both photos and videos, as well as copy the subject of an image out of its background and paste it elsewhere
  • Focus Mode and Focus Filters: Focus mode (to limit distractions) can now be attached to custom lock screens and applied not just to an app but within an app (e.g., restricting specific tabs in a browser)
  • Private Access Tokens: For some apps and websites, Apple will use these tokens to verify that users are human and bypass traditional CAPTCHA checks
  • Other improvements: The Fitness app, Health app, Maps app, iCloud, Wallet and more all got various improvements as well. Siri did too (you can now “speak” emojis). See the full list of iOS 16 updates.

Make Your Mobile Testing Easier

Mobile testing is a challenge for many organizations. The number of devices, browsers and screens in play makes achieving full coverage extremely time-consuming using traditional mobile testing solutions. At Applitools, we’re focused on making software testing easier and more effective – that’s why we pioneered our industry-leading Visual AI. With the new Native Mobile Grid, you can significantly reduce the time you spend testing mobile apps while ensuring full coverage in a native environment.

Learn more about how you can scale your mobile automation testing with Native Mobile Grid, and sign up for access to get started with Native Mobile Grid today.

Mobile Testing for the First Time with Android, Appium, and Applitools
https://applitools.com/blog/mobile-testing-android-appium-applitools/ – Thu, 21 Jul 2022

Learn how to get started with mobile testing using Android and Appium, and then how to incorporate native mobile visual testing using Applitools.

For some of us, it’s hard to believe how long smartphones have existed. I remember when the first iPhone came out in June 2007. I was working at my first internship at IBM, and I remember hearing in the breakroom that someone on our floor got one. Oooooooh! So special! That was 15 years ago!

In that decade and a half, mobile devices of all shapes and sizes have become indispensable parts of our modern lives: The first thing I do every morning when I wake up is check my phone. My dad likes to play Candy Crush on his tablet. My wife takes countless photos of our French bulldog puppy on her phone. Her mom uses her tablet for her virtual English classes. I’m sure, like us, you would feel lost if you had to go a day without your device.

It’s vital for mobile apps to have high quality. If they crash, freeze, or plain don’t work, then we can’t do the things we need to do. So, being the Automation Panda, I wanted to give mobile testing a try! I had three main goals:

  1. Learn about mobile testing for Android – specifically how it relates to other kinds of testing.
  2. Automate my own Appium tests – not just run someone else’s examples.
  3. Add visual assertions to my tests with Applitools – instead of coding a bunch of checks with complicated locators.

This article covers my journey. Hopefully, it can help you get started with mobile testing, too! Let’s jump in.

Getting Started with Mobile

The mobile domain is divided into two ecosystems: Android and iOS. That means any app that wants to run on both operating systems must essentially have two implementations. To keep things easier for me, I chose to start with Android because I already knew Java and I actually did a little bit of Android development a number of years ago.

I started by reading a blog series by Gaurav Singh on getting started with Appium. Gaurav’s articles showed me how to set up my workbench and automate a basic test:

  1. Hello Appium, Part 1: What is Appium? An Introduction to Appium and its Tooling
  2. Hello Appium, Part 2: Writing Your First Android Test
  3. Appium Fast Boilerplate GitHub repository

Test Automation University also has a set of great mobile testing courses that go well beyond a quickstart guide.

Choosing an Android App

Next, I needed an Android app to test. Thankfully, Applitools had the perfect app ready: Applifashion, a shoe store demo. The code is available on GitHub at https://github.com/dmitryvinn/applifashion-android-legacy.

To do Android development, you need several tools, including the Java JDK, Android Studio, and the Android SDK.

I followed Gaurav’s guide to a T for setting these up. I also had to set the ANDROID_HOME environment variable to the SDK path.

Be warned: it might take a long time to download and install these tools. It took me a few hours and occupied about 13 GB of space!

Once my workbench was ready, I opened the Applifashion code in Android Studio, created a Pixel 3a emulator in Device Manager, and ran the app. Here’s what it looked like:

The Applifashion main page

An Applifashion product page

I chose to use an emulator instead of a real device because, well, I don’t own a physical Android phone! Plus, managing a lab full of devices can be a huge hassle. Phone manufacturers release new models all the time, and phones aren’t cheap. If you’re working with a team, you need to swap devices back and forth, keep them protected from theft, and be careful not to break them. As long as your machine is powerful and has enough storage space, you can emulate multiple devices.

Choosing Appium for Testing

It was awesome to see the Applifashion app running through Android Studio. I played around with scrolling and tapping different shoes to open their product pages. However, I really wanted to do some automated testing. I chose to use Appium for automation because its API is very similar to Selenium WebDriver, with which I am very familiar.

Appium adds its own layer of tools on top, including the Appium server, Appium Inspector, and Appium Doctor.

Again, I followed Gaurav’s guide for full setup. Even though Appium has bindings for several popular programming languages, it still needs a server for relaying requests between the client (e.g., the test automation) and the app under test. I chose to install the Appium server via the NPM module, and I installed version 1.22.3. Appium Doctor gave me a little bit of trouble, but I was able to resolve all but one of the issues it raised, and the one remaining failure regarding ANDROID_HOME turned out not to be a problem for running tests.

Before jumping into automation code, I wanted to make sure that Appium was working properly. So, I built the Applifashion app into an Android package (.apk file) through Android Studio by doing Build → Build Bundle(s) / APK(s) → Build APK(s). Then, I configured Appium Inspector to run this .apk file on my Pixel 3a emulator. My settings looked like this:

My Appium Inspector configuration for targeting the Applifashion Android package in my Pixel 3a emulator

Here were a few things to note:

  • The Appium server and Android device emulator were already running.
  • I used the default remote host (127.0.0.1) and remote port (4723).
  • Since I used Appium 1.x instead of 2.x, the remote path had to be /wd/hub.
  • appium:automationName had to be uiautomator2 – it could not be an arbitrary name.
  • The platform version, device name, and app path were specific to my environment. If you try to run this yourself, you’ll need to set them to match your environment.

I won’t lie – I needed a few tries to get all my capabilities right. But once I did, things worked! The app appeared in my emulator, and Appium Inspector mirrored the page from the emulator with the app source. I could click on elements within the inspector to see all their attributes. In this sense, Appium Inspector reminded me of my workflow for finding elements on a web page using Chrome DevTools. Here’s what it looked like:

The Appium Inspector with the Applifashion app loaded

Writing My First Appium Test

So far in my journey, I had done lots of setup, but I hadn’t yet automated any tests! Mobile testing certainly required a heftier stack than web app testing, but when I looked at Gaurav’s example test project, I realized that the core concepts were consistent.

I set up my own Java project with JUnit, Gradle, and Appium:

  • I chose Java to match the app’s code.
  • I chose JUnit to be my core test framework to keep things basic and familiar.
  • I chose Gradle to be the dependency manager to mirror the app’s project.

My example code is hosted here: https://github.com/AutomationPanda/applitools-appium-android-webinar.

Warning: The example code I share below won’t perfectly match what’s in the repository. Furthermore, the example code below will omit import statements for brevity. Nevertheless, the code in the repository should be a full, correct, executable example.

My build.gradle file looked like this with the required dependencies:

plugins {
    id 'java'
}

group 'com.automationpanda'
version '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'io.appium:java-client:8.1.1'
    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.8.2'
    testImplementation 'org.seleniumhq.selenium:selenium-java:4.2.1'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.8.2'
}

test {
    useJUnitPlatform()
}

My test case class was located at /src/test/java/com/automationpanda/ApplifashionTest.java. Inside the class, I had two instance variables: the Appium driver for mobile interactions, and a WebDriverWait object for synchronization:

public class ApplifashionTest {

    private AppiumDriver driver;
    private WebDriverWait wait;

    // …
}

I added a setup method to initialize the Appium driver. Basically, I copied all the capabilities from Appium Inspector:

    @BeforeEach
    public void setUpAppium(TestInfo testInfo) throws IOException {

        // Create Appium capabilities
        // Hard-coding these values is typically not a recommended practice
        // Instead, they should be read from a resource file (like a properties or JSON file)
        // They are set here like this to make this example code simpler
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("platformName", "android");
        capabilities.setCapability("appium:automationName", "uiautomator2");
        capabilities.setCapability("appium:platformVersion", "12");
        capabilities.setCapability("appium:deviceName", "Pixel 3a API 31");
        capabilities.setCapability("appium:app", "/Users/automationpanda/Desktop/Applifashion/main-app-debug.apk");
        capabilities.setCapability("appium:appPackage", "com.applitools.applifashion.main");
        capabilities.setCapability("appium:appActivity", "com.applitools.applifashion.main.activities.MainActivity");
        capabilities.setCapability("appium:fullReset", "true");

        // Initialize the Appium driver
        driver = new AppiumDriver(new URL("http://127.0.0.1:4723/wd/hub"), capabilities);
        wait = new WebDriverWait(driver, Duration.ofSeconds(30));
    }

I also added a cleanup method to quit the Appium driver after each test:

    @AfterEach
    public void quitDriver() {
        driver.quit();
    }

I wrote one test case that performs shoe shopping. It loads the main page and then opens a product page using locators I found with Appium Inspector:

    @Test
    public void shopForShoes() {

        // Tap the first shoe
        final By shoeMainImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeMainImageLocator));
        driver.findElement(shoeMainImageLocator).click();

        // Wait for the product page to appear
        final By shoeProductImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image_product_page");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeProductImageLocator));
    }

At this stage, I hadn’t written any assertions yet. I just wanted to see if my test could successfully interact with the app. Indeed, it could, and the test passed when I ran it! As the test ran, I could watch it interact with the app in the emulator.

Adding Visual Assertions

My next step was to write assertions. I could have picked out elements on each page to check, but there were a lot of shoes and words on those pages. I could’ve spent a whole afternoon poking around for locators through the Appium Inspector and then tweaking my automation code until things ran smoothly. Even then, my assertions wouldn’t capture things like layout, colors, or positioning.
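
For contrast, here is a minimal sketch of what one of those element-by-element assertions might have looked like, in the same style as the test above (imports are omitted, as elsewhere in this post, and the price locator and expected text are hypothetical, invented just for illustration):

        // A "traditional" assertion: every checked element needs its own locator, wait, and expected value.
        // The resource ID and the expected price here are hypothetical examples.
        final By shoePriceLocator = By.id("com.applitools.applifashion.main:id/shoe_price");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoePriceLocator));
        String actualPrice = driver.findElement(shoePriceLocator).getText();
        Assertions.assertEquals("$33.00", actualPrice);

Multiply that by every label, image, and price on the page, and the appeal of a single full-screen snapshot becomes obvious.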

I wanted to use visual assertions to verify app correctness. I could use the Applitools SDK for Appium in Java to take one-line visual snapshots at the end of each test method. However, I wanted more: I wanted to test multiple devices, not just my Pixel 3a emulator. There are countless Android device models on the market, and each has unique aspects like screen size. I wanted to make sure my app would look visually perfect everywhere.

In the past, I would need to set up each target device myself, either as an emulator or as a physical device. I’d also need to run my test suite in full against each target device. Now, I can use Applitools Native Mobile Grid (NMG) instead. NMG works just like Applitools Ultrafast Grid (UFG), except that instead of browsers, it provides emulated Android and iOS devices for visual checkpoints. It’s a great way to scale mobile test execution. In my Java code, I can set up Applitools Eyes to upload results to NMG and run checkpoints against any Android devices I want. I don’t need to set up a bunch of devices locally, and the visual checkpoints will run much faster than any local Appium reruns. Win-win!

To get started, I needed my Applitools account. If you don’t have one, you can register one for free.

Then, I added the Applitools Eyes SDK for Appium to my Gradle dependencies:

   testImplementation 'com.applitools:eyes-appium-java5:5.12.0'

I added a “before all” setup method to ApplifashionTest to set up the Applitools configuration for NMG. I put this in a “before all” method instead of a “before each” method because the same configuration applies for all tests in this suite:

    private static InputReader inputReader;
    private static Configuration config;
    private static VisualGridRunner runner;

    @BeforeAll
    public static void setUpAllTests() {

        // Create the runner for the Ultrafast Grid
        // Warning: If you have a free account, then concurrency will be limited to 1
        runner = new VisualGridRunner(new RunnerOptions().testConcurrency(5));

        // Create a configuration for Applitools Eyes
        config = new Configuration();

        // Set the Applitools API key so test results are uploaded to your account
        config.setApiKey("<insert-your-API-key-here>");

        // Create a new batch
        config.setBatch(new BatchInfo("Applifashion in the NMG"));

        // Add mobile devices to test in the Native Mobile Grid
        config.addMobileDevices(
                new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21),
                new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10),
                new AndroidDeviceInfo(AndroidDeviceName.Pixel_4));
    }

The configuration for NMG was almost identical to a configuration for UFG. I created a runner, and I created a config object with my Applitools API key, a batch name, and all the devices I wanted to target. Here, I chose three different phones: Galaxy S21, Galaxy Note 10, and Pixel 4. Currently, NMG supports 18 different Android devices, and support for more is coming soon.
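
If you also want landscape coverage, the device list can include an orientation. Here’s a minimal sketch using the AndroidDeviceInfo overload that takes a ScreenOrientation; this is just an assumption of what you might add, not something my test needed:

        // Optionally add a device in landscape orientation as well
        config.addMobileDevices(
                new AndroidDeviceInfo(AndroidDeviceName.Pixel_4, ScreenOrientation.LANDSCAPE));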

At the bottom of the “before each” method, I added code to set up the Applitools Eyes object for capturing snapshots:

    private Eyes eyes;

    @BeforeEach
    public void setUpAppium(TestInfo testInfo) throws IOException {

        // …

        // Initialize Applitools Eyes
        eyes = new Eyes(runner);
        eyes.setConfiguration(config);
        eyes.setIsDisabled(false);
        eyes.setForceFullPageScreenshot(true);

        // Open Eyes to start visual testing
        eyes.open(driver, "Applifashion Mobile App", testInfo.getDisplayName());
    }

Likewise, in the “after each” cleanup method, I added code to “close eyes,” indicating the end of a test for Applitools:

    @AfterEach
    public void quitDriver() {

        // …

        // Close Eyes to tell the server it should display the results
        eyes.closeAsync();
    }

Finally, I added code to each test method to capture snapshots using the Eyes object. Each snapshot is a one-line call that captures the full screen:

    @Test
    public void shopForShoes() {

        // Take a visual snapshot
        eyes.check("Main Page", Target.window().fully());

        // Tap the first shoe
        final By shoeMainImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeMainImageLocator));
        driver.findElement(shoeMainImageLocator).click();

        // Wait for the product page to appear
        final By shoeProductImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image_product_page");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeProductImageLocator));

        // Take a visual snapshot
        eyes.check("Product Page", Target.window().fully());
    }

When I ran the test with these visual assertions, it ran one time locally, and then NMG ran each snapshot against the three target devices I specified. Here’s a look from the Applitools Eyes dashboard at some of the snapshots it captured:

My first visual snapshots of the Applifashion Android app using Applitools Native Mobile Grid!

The results are marked “New” because these are the first “baseline” snapshots. All future checkpoints will be compared to these images.

Another cool thing about these snapshots is that they capture the full page. For example, the main page will probably display only 2-3 rows of shoes within its viewport on a device. However, Applitools Eyes effectively scrolls down over the whole page and stitches together the full content as if it were one long image. That way, visual snapshots capture everything on the page – even what the user can’t immediately see!

The full main page for the Applifashion app, fully scrolled and stitched
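
Full-page capture isn’t the only option, either. If I ever wanted just the visible viewport, or a single element, the check target could be narrowed. A minimal sketch, assuming the Appium flavor of Target supports region checks by locator the same way the Selenium one does:

        // Capture only what is currently visible, without scrolling and stitching
        eyes.check("Product Page (viewport only)", Target.window());

        // Capture a single region of the screen by its locator
        eyes.check("Product Image", Target.region(shoeProductImageLocator));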

Injecting Visual Bugs

Capturing baseline images is only the first step with visual testing. Tests should be run regularly, if not continuously, to catch problems as soon as they happen. Visual checkpoints should point out any differences to the tester, and the tester should judge if the change is good or bad.

I wanted to try this change detection with NMG, so I reran tests against a slightly broken “dev” version of the Applifashion app. Can you spot the bug?

The “main” version of the Applifashion product page compared to a “dev” version

The formatting for the product page was too narrow! “Traditional” assertions would probably miss this type of bug because all the content is still on the page, but visual assertions caught it right away. Visual checkpoints worked the same on NMG as they would on UFG or even with the classic (i.e., local machine) Applitools runner.

When I switched back to the “main” version of the app, the tests passed again because the visuals were “fixed”:

Applifashion tests marked as “passed” after fixing visual bugs

While running all these tests, I noticed that mobile test execution is pretty slow. The one test running on my laptop took about 45 seconds to complete. It needed time to load the app in the emulator, make its interactions, take the snapshots, and close everything down. However, I also noticed that the visual assertions in NMG were relatively fast compared to my local runs. Rendering six snapshots took about 30 seconds to complete – three times the coverage in significantly less time. If I had run tests against more devices in parallel, I could probably have seen an even greater coverage-to-time ratio.

Conclusion

My first foray into mobile testing was quite a journey. It required much more tooling than web UI testing, and setup was trickier. Overall, I’d say testing mobile is indeed more difficult than testing web. Thankfully, the principles of good test automation were the same, so I could still develop decent tests. If I were to add more tests, I’d create a class for reading capabilities as inputs from environment variables or resource files, and I’d create another class to handle Applitools setup.

Visual testing with Applitools Native Mobile Grid also made test development much easier. Setting everything up just to start testing was enough of a chore. Coding the test cases felt straightforward because I could focus my mental energy on interactions and take simple snapshots for verifications. Trying to decide all the elements I’d want to check on a page and then fumbling around the Appium Inspector to figure out decent locators would have multiplied my coding time. NMG also enabled me to run my tests across multiple different devices at the same time without needing to pay hundreds of dollars per device or sacrifice a few gigs of storage and memory on my laptop. I’m excited to see NMG grow with support for more devices and more mobile development frameworks in the future.

Despite the prevalence of mobile devices in everyday life, mobile testing still feels far less mature as a practice than web testing. Anecdotally, it seems that there are fewer tools and frameworks for mobile testing, fewer tutorials and guides for learning, and fewer platforms that support mobile environments well. Perhaps this is because mobile test automation is an order of magnitude more difficult and therefore more folks shy away from it. There’s no reason for it to be left behind anymore. Given how much we all rely on mobile apps, the risks of failure are just too great. Technologies like Visual AI and Applitools Native Mobile Grid make it easier for folks like me to embrace mobile testing.

Modern Cross Device Testing for Android & iOS Apps
https://applitools.com/blog/cross-device-testing-mobile-apps/ – Wed, 13 Jul 2022

Learn the cross device testing practices you need to implement to get closer to Continuous Delivery for native mobile apps.

What is Cross Device Testing

Modern cross device testing is the system by which you verify that an application delivers the desired results on a wide variety of devices and formats. Ideally this testing will be done quickly and continuously.

There are many articles explaining how to do CI/CD for web applications, and many companies are already doing it successfully, but there is not much information available out there about how to achieve the same for native mobile apps.

This post will shed light on the cross device testing practices you need to implement to get a step closer to Continuous Delivery for native mobile apps.

Why is Cross Device Testing Important

The number of mobile devices used globally is staggering. According to data from bankmycell.com, there are 6.64 billion smartphones in use.

Source: https://www.bankmycell.com/blog/how-many-phones-are-in-the-world#part-1

Even if we are building and testing an app that reaches only a fraction of this number, that is still a huge number of users.

The chart below shows the market share held by leading smartphone vendors over the years.

Source: https://www.statista.com/statistics/271496/global-market-share-held-by-smartphone-vendors-since-4th-quarter-2009/

Challenges of Cross Device Testing

One of the biggest challenges of testing mobile apps is that, across all manufacturers combined, there are thousands of device types in use today. Depending on the popularity of your app, your users could be on a huge number of different devices.

These devices will have variations based on:

  • OS types and versions
  • potentially customized OS
  • hardware resources (memory, processing power, etc.)
  • screen sizes
  • screen resolutions
  • storage with different available capacity for each
  • Wi-Fi vs. mobile data (from different carriers)
  • And many more

It is clear that you cannot run your tests on every type of device that your users may be using.

So how do you get quick feedback and confidence from your testing that (almost) no user will get impacted negatively when you release a new version of your app?

Mobile Test Automation Execution Strategy

Mobile Testing Strategy

Before we think about the strategy for running your automated tests for mobile apps, we need to have a good, holistic mobile testing strategy.

Along with testing the app functionality, mobile testing has additional dimensions, and hence complexities, compared with web-app testing.

You need to understand the impact of the aspects mentioned above and see what may or may not be applicable to you.

Here are some high-level aspects to consider in your mobile testing strategy:

  • Know where and how to run the tests – real devices, emulators / simulators available locally versus in some cloud-based device farm
  • Increasing test coverage by writing less code – using Applitools Visual AI to validate functionality and user-experience
  • Scaling your test execution – using Applitools Native Mobile Grid
  • Testing on different text fonts and display densities 
  • Testing for accessibility conformance and impact of dark mode on functionality and user experience
  • Chaos & Monkey Testing
  • Location-based testing
  • Testing the impact of Network bandwidth
  • Planning and setting up the release strategy for your mobile application, including beta testing, field testing, and staged rollouts. This differs between the Google Play Store and the Apple App Store
  • Building and testing for Observability & Analytics events

Once you have figured out your mobile testing strategy, you need to think about how and what type of automated tests can give you good, reliable, deterministic and fast feedback about the quality of your apps. This will help you identify the different layers of your test automation pyramid.

Remember: it is very important to execute all types of automated tests on every code change and every new app build. The functional / end-to-end / UI tests for your app should also run at this time.

Additionally, you need to be able to run the tests on a local developer / QA machine as well as in your Continuous Integration (CI) system. In the case of native / hybrid mobile apps, developers and QAs should be able to install the app on the (local) devices they have available and run the tests against it. For CI-based execution, you need some form of device farm, either locally in your network or cloud-based, to allow execution of the tests.
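
One simple way to keep the same test code runnable on a local machine and in CI is to read the environment-specific pieces (Appium server URL, device name, app path) from environment variables with local defaults. A minimal Java sketch; the variable names here are arbitrary examples, not a standard:

// Environment-specific values with local defaults; APPIUM_SERVER_URL, DEVICE_NAME and APP_PATH are example names
String serverUrl = System.getenv().getOrDefault("APPIUM_SERVER_URL", "http://127.0.0.1:4723/wd/hub");
String deviceName = System.getenv().getOrDefault("DEVICE_NAME", "Pixel 3a API 31");
String appPath = System.getenv().getOrDefault("APP_PATH", "/path/to/app-debug.apk");

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("appium:deviceName", deviceName);
capabilities.setCapability("appium:app", appPath);
// ... set the remaining capabilities as usual ...
AppiumDriver driver = new AppiumDriver(new URL(serverUrl), capabilities);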

This continuous testing approach will provide you with quick feedback and allow you to fix issues almost as soon as they creep into the app.

How to Run Functional Tests against Your Mobile Apps

Testing and automating mobile apps has additional complexities. You need to install the app on a device before your automated tests can run against it.

Let’s explore your options for devices.

Real Devices

Real devices are ideal for running tests. Your users / customers are going to use your app on a variety of real devices.

In order to allow proper development and testing to be done, each team member needs access to the relevant types of devices (which depends on your user base).

However, it is not easy to make a variety of devices available to each team member (developer / tester) for running the automated tests.

The challenges of relying on real devices include:

  • cost of procuring a good variety of devices for each team member to allow seamless development and testing work.
  • maintenance of the devices (OS/software updates, battery issues, other problems the device may have at any point in time, etc.)
  • logistical issues like the time to order and receive devices, tracking of the devices assigned to the team, etc.
  • deprecating / disposing of older devices that are no longer used or required.

Hence we need a different strategy for executing tests on mobile devices. Emulators and Simulators come to the rescue!

What is the Difference between Emulators & Simulators

Before we get into specifics about the execution strategy, it is good to understand the differences between an emulator and a simulator.

Android-device emulators and iOS-device simulators make it easy for any team member to spin up a device.

An emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system.

An emulator can mimic the operating system, software, and hardware features of an Android device.

A Simulator runs on your Mac and behaves like a standard Mac app while simulating iPhone, iPad, Apple Watch, or Apple TV environments. Each combination of a simulated device and software version is considered its own simulation environment, independent of the others, with its own settings and files. These settings and files exist on every device you test within a simulation environment. 

An iOS simulator mimics the internal behavior of the device. It cannot mimic the OS / hardware features of the device.

Emulators / simulators are a great, cost-effective way to overcome the challenges of real devices. Any team member can easily create them as required and use them for manual testing as well as for running automated tests. You can also set up and use emulators / simulators in your CI execution environment relatively easily.

While emulators / simulators may seem like they will solve all the problems, that is not the case. As with anything, you need to do a proper evaluation and figure out when to use real devices versus emulators / simulators.

Below are some guidelines that I refer to.

When to use Emulators / Simulators

  • You are able to validate all application functionality
  • There is no performance impact on the application-under-test

Why use Emulators / Simulators

  • To reduce cost
  • Scale as per needs, resulting in faster feedback
  • Can use in CI environment as well

When to use Real Devices for Testing

  • If Emulators / Simulators are used, then run “Sanity” / focused testing on real devices before release
  • If Emulators / Simulators cannot validate all application functionality reliably, then invest in Real-Device testing
  • If Emulators / Simulators cause performance issues or slowness of interactions with the application-under-test

Cases when Emulators / Simulators May not Help

  • If the application-under-test has streaming content, or has high resource requirements
  • Applications relying on hardware capabilities
  • Applications dependent on customized OS version

Cross-Device Test Automation Strategy

The above approach of using real devices or emulators / simulators will help your team shift left and achieve continuous testing.

There is one challenge that still remains – scaling! How do you ensure your tests run correctly on all supported devices?

A classic, or rather traditional, way to solve this problem is to repeat the automated test execution on a carefully chosen variety of devices. This would mean that if you have 5 important types of devices and 100 automated tests, then you are essentially running 500 tests.
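
In code, the traditional approach usually looks like a parameterized run that repeats the same test once per device. A rough JUnit sketch for illustration only; the device names and the startAppOn helper are made up:

// The same test body repeats once per device: 100 tests x 5 devices = 500 executions
@ParameterizedTest
@ValueSource(strings = {"Galaxy S21", "Galaxy Note 10", "Pixel 4", "Galaxy S9", "Pixel 3a"})
public void shopForShoes(String deviceName) throws Exception {
    AppiumDriver driver = startAppOn(deviceName); // hypothetical helper that builds capabilities per device
    try {
        // ... the same interactions and assertions, repeated for every device ...
    } finally {
        driver.quit();
    }
}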

This approach has multiple disadvantages:

  1. The feedback cycle is substantially delayed. If 100 tests took 1 hour to complete on 1 device, 500 tests would take 5 hours (for 5 devices). 
  2. The time to analyze the test results increases by 5x.
  3. The added number of tests could have flaky behavior based on device setup / location or network issues. This could result in re-runs or specific manual re-testing for validation.
  4. You need 5x more test data.
  5. You are putting 5x more load on your backend systems by executing the same test 5 times.

We all know these disadvantages; however, there has been no better way to overcome them. Or is there?

Modern Cross-Device Test Automation Strategy

The Applitools Native Mobile Grid for Android and iOS apps can easily help you to overcome the disadvantages of traditional cross-device testing.

It does this by running your test on 1 device, but getting the execution results from all the devices of your choice, automatically. Well, almost automatically. This is how the Applitools Native Mobile Grid works:

  1. Integrate Applitools SDK in your functional automation.
  2. In the Applitools Eyes configuration, specify all the devices you want to use for your functional testing. As an added bonus, you will be able to leverage Applitools Visual AI to get increased functional and visual test coverage.

Below is an example of how to specify Android devices for Applitools Native Mobile Grid:

Configuration config = eyes.getConfiguration(); //Configure the 15 devices we want to validate asynchronously
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Pixel_4, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_ULTRA, ScreenOrientation.PORTRAIT));
eyes.setConfiguration(config);

Below is an example of how to specify iOS devices for Applitools Native Mobile Grid:

Configuration config = eyes.getConfiguration(); //Configure the 15 devices we want to validate asynchronously
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_mini));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XS));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_X));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XR));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_8));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_7));
eyes.setConfiguration(config);   

  3. Run the test on any one device – available locally or in CI. It could be a real device or a simulator / emulator.

Every visual validation call to Applitools will automatically perform the functional and visual validation on each device specified in the configuration above (see the sketch after step 4).

  4. See the results from all the devices in the Applitools dashboard.
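
Putting the steps together, the test itself only needs ordinary interactions plus the Eyes calls. A minimal sketch; the app name, test name, and check tags are placeholders:

// One local run; every check is validated on each device in the configuration above
eyes.open(driver, "My Mobile App", "Shop for shoes");   // start the visual test
eyes.check("Main Page", Target.window().fully());       // rendered and validated on all configured devices
// ... ordinary Appium interactions ...
eyes.check("Product Page", Target.window().fully());
eyes.closeAsync();                                       // end the test; results upload asynchronously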

Advantages of using the Applitools Native Mobile Grid

The Applitools Native Mobile Grid has many advantages.

  1. You do not need to repeat the same test execution on multiple devices. This will save team members a lot of time on execution, flaky-test triage, and result analysis.
  2. Very fast feedback of test execution across all specified devices (10x faster than traditional cross device testing approach)
  3. There are no additional test data requirements.
  4. You do not need to procure, build and maintain the devices
  5. There is less load on your application backend-system
  6. A secure solution where your application does not need to be shared out of your corporate network
  7. Using visual assertions instead of functional assertions gives you increased test coverage while writing less code

Read this post on How to Scale Mobile Automation Testing Effectively for more specific details of this amazing solution!

Summary of Modern Cross-Device Testing of Mobile Apps

Using Applitools Visual AI allows you to extend coverage at the top of your test automation pyramid by including AI-based visual testing along with your UI/UX testing.

Using the Applitools Native Mobile Grid for cross device testing of Android and iOS apps makes your CI loop faster by providing seamless scaling across all supported devices as part of the same test execution cycle. 

You can watch my video on Mobile Testing 360deg (https://applitools.com/event/mobile-testing-360deg/), where I share many examples and details related to the above that you can include as part of your mobile testing strategy.

To start using the Native Mobile Grid, simply sign up to request access. You can read more about the Applitools Native Mobile Grid in our blog post or on our website.

Happy testing!

How to Scale Mobile Automation Testing Effectively
https://applitools.com/blog/how-to-scale-mobile-automation-testing/ – Wed, 15 Jun 2022

Learn the best way to scale mobile automation testing today. We’ll look at a history of traditional mobile automation testing strategies and discuss why each one is and isn’t effective, and then see why a new tool, the Native Mobile Grid, is different.

In today’s mobile testing world there are many different approaches to scaling your test automation for native mobile applications. Your options range from running locally with virtual devices (simulators/emulators) or real devices, to a local mobile grid/lab, to Docker containers / virtual machines, to remote cloud test services.

As you’re probably aware, testing native mobile applications can sometimes be a difficult endeavor. There are a lot of moving parts and many points of failure involved. To successfully execute, everything needs to work in complete harmony.  

For example, just to execute a single Appium test, you need:

  • An Appium server & all required dependencies installed
  • A mobile device or emulator/simulator
  • Valid test code logic
  • A compiled mobile application 
  • Application web service APIs running and stable (if applicable)

Now let’s say you want to scale your tests across multiple devices for your cross-device validation needs. We’re now introducing more points of failure for each device that is tested. A test on one device may execute just fine but on another, it may fail for various unknown reasons. We then often have to spend a vast amount of time investigating and debugging these failures to find the root cause.

Let’s also consider that because we’re adding more devices, we will likely need to add more conditional logic to our test code to accommodate them. We may need to add conditionals for different device resolutions/screen sizes, OS versions, orientations, scroll views, locators/selectors, or maybe gestures for specific devices. This all adds more coded logic to our test suite or framework that we need to maintain, and thus refactor in the future when our application changes.
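
As an illustration of the kind of conditional logic that tends to creep in (a sketch only; the device names, locators, and the scrollToElement helper are all made up):

// Device-specific branches like this accumulate quickly, and every one of them needs maintenance
String deviceName = String.valueOf(driver.getCapabilities().getCapability("deviceName"));
if (deviceName.contains("iPhone SE")) {
    scrollToElement(shoeImageLocator);                 // hypothetical helper: small screens need an extra scroll
} else if (deviceName.contains("iPad")) {
    driver.findElement(tabletMenuLocator).click();     // hypothetical: tablets expose a different menu layout
} else {
    driver.findElement(phoneMenuLocator).click();
}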

For these and perhaps other reasons, many people don’t scale their mobile test coverage across different devices: it might introduce more test maintenance, more test flakiness, or longer test execution times, or access to different devices may simply not be possible. Crossing fingers and hoping for the best has been the approach for many…

Scaling Mobile Automation Testing

Now let’s cover some common mobile test scaling approaches. I’ll go over the benefits and drawbacks of each of these approaches and finally introduce you to a new modern approach and technology, the Applitools Native Mobile Grid.

Sequential Execution

The simplest but also the most inefficient and slowest approach. The example diagram below shows two tests (Test A & Test B) executing across three different mobile devices.

As stated before, this is the simplest approach but very inefficient and slow. Perhaps some people aren’t aware of different or better approaches. Or maybe some bad test practices are being used, such as one test (Test B) depending on another (Test A) completing before it can proceed. Generally, a good test practice is to never have any test depend on another in order to execute. Test suites/frameworks should be architected so that any test can run in any order, utilizing test hooks/annotations, database seeding, mocking, or API test data injection to set up each test independently.

For example, say I have an app with a shopping cart and a test, “Add items to shopping cart,” which fails due to a bug in the app. If I relied on that test in order to run my shopping-cart-specific tests, I couldn’t run them. That is where good test architecture comes into play: set up each test independently using some of the methods mentioned above or other means.
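
A sketch of that idea: instead of depending on an earlier UI test, each test seeds its own state through the backend before it starts. The ShoppingCartApi client and TEST_USER constant here are hypothetical stand-ins for whatever data-setup mechanism your app provides:

// Each test arranges its own data via the API, so tests can run in any order
@BeforeEach
public void seedShoppingCart() {
    ShoppingCartApi api = new ShoppingCartApi(System.getenv("API_BASE_URL")); // hypothetical API client
    api.clearCart(TEST_USER);
    api.addItem(TEST_USER, "shoe-123", 1); // the state the test needs, created without touching the UI
}

@Test
public void removeItemFromCart() {
    // ... drive only the behavior under test through the UI ...
}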

Parallel Device Execution

We live in a fast-paced CI/CD world these days, and feedback loops that are as fast as possible are crucial. So what can you do to help get this test feedback ASAP? Parallelization! 

In this scenario, we are parallelizing the devices needed for cross-device validation. However, it’s still inefficient, since the tests themselves are running sequentially. Still, it’s much better than the fully sequential approach.

Parallel Test Execution

This approach to mobile automation testing is a bit more efficient, as it parallelizes the tests in your test suite. Ultimately it should reduce the execution time and also promote good test practices by keeping tests independent from one another. However, the inefficiency now is that each device is tested sequentially.

Parallel Test & Parallel Device Execution

This is the most efficient and fastest traditional approach, as it parallelizes both your test suite and your devices for cross-device validations. However, it comes at the cost of machine resources, cloud concurrency or minutes counted against license limits (if applicable), and added complexity in your test framework, which has to manage two levels of parallelization. Some frameworks have this architecture “baked in,” but for many others it’s up to the developer/tester to implement this logic themselves.
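
For example, with JUnit 5 you can mark a class to run its test methods concurrently; it also requires enabling the junit.jupiter.execution.parallel.enabled=true configuration parameter (for instance in junit-platform.properties), and each test then needs its own driver session so parallel runs don’t collide. A minimal sketch:

import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;

// Runs the @Test methods in this class concurrently once parallel execution is enabled
@Execution(ExecutionMode.CONCURRENT)
public class ParallelShoeTests {
    // each test creates and quits its own AppiumDriver so sessions don't interfere
}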

With that said, all of the approaches above share some of the challenges below, among others:

  • Susceptible to continual dependency and versioning conflicts. 
  • Any single device (or devices), either locally or on a cloud test service, can have issues at any given time, resulting in test flakiness.
  • Network or latency timeout issues can occur.
  • Service crashes or issues occurring on any running parallel process or thread.
  • Added test code complexity for parallelization.
  • Added test code conditionals to accommodate different device form factors/resolutions and application layouts. 

Applitools Native Mobile Grid

The Future Is Now!

Now that we’ve talked about the typical traditional approaches to mobile automation testing, let’s talk about the next generation of mobile cross-device testing using the Applitools Native Mobile Grid! Our Native Mobile Grid (NMG) uses a new technological approach to asynchronously validate your native mobile application, in parallel and easily, across many different devices in a single execution.

What this means is that the parallelization of devices is handled by the Applitools NMG. Since it’s asynchronous, you are not waiting on devices to connect or on test results, which frees your tests to execute as fast as possible!

Some key benefits:

  • Execute your mobile tests on just one device (emulator, simulator, or a real device)! 
  • Simple test authoring!
    • Create tests with only one device and form factor/app layout in mind! You won’t need to add additional logic in your code for different devices or resolutions.
  • Less code maintenance. 
    • No extra coded conditionals for specific devices or supporting multiple execution environments (Local and Cloud). Tests only need to work on any single device and the environment you execute in.
    • Visually Perfect with Applitools Visual AI to perform full-page mobile UI validations. No more coded UI assertion logic!
  • Fast and easy cross-device scaling without complicated multi-parallelization logic, with up to 10x faster executions compared to traditional approaches.
  • Fewer points of failure.
    • No longer is there a need to execute the same test redundantly across different devices, increasing test flakiness and execution time. Instead, run once on any real device or simulator/emulator and get validations across many devices.
  • Fast CI/CD feedback loop. 
    • Release faster with assurance of cross-device coverage and a visually perfect application UI.
  • No Appium version pinning.
    • Use any version of Appium that works for you and your tests.
  • Secure!
    • No need to upload an application to 3rd party cloud test vendors and risk exposing sensitive proprietary data and/or information.  
    • Tests can run locally on your network without opening firewall access or proxies.

Now let’s look at some example execution architectures using the Applitools Native Mobile Grid! 

Asynchronous Parallel Device Execution

This next diagram is similar to the traditional Parallel Device Execution we talked about above. However, the parallelization is offloaded to the NMG, and no additional logic for parallel threads or processes is required. This approach is still not the most efficient, since each test runs sequentially, but it’s light-years faster than the traditional approach.

Asynchronous Parallel Test Execution

Out of all the approaches for mobile test automation we’ve discussed thus far, this next example is by far the most efficient and fastest approach to scaling your native mobile test coverage! The added benefit of this approach is that your entire test suite now only takes as long as your slowest test!

This example is similar to the traditional Parallel Test & Parallel Device Execution above but the device parallelization is now offloaded to the NMG and handled asynchronously.

Code Example Using the Native Mobile Grid for Mobile Test Automation

The code below is an example using Appium with Java, testing a native iOS application. For those of you familiar with the Applitools Ultrafast Test Cloud for desktop and mobile web applications, this should look very similar. All we need to do in the test code is define the devices we want to validate for our cross-device coverage needs. In this case, we’ve specified 15 iOS devices that we’ll validate in the same execution time as it takes to run the tests on just one device! Additional settings, such as the OS version and orientation per device, can be specified (not shown).

public class iOSNativeUFGTest {
   private WebDriverWait wait;
   private IOSDriver driver;
   private VisualGridRunner runner;
   private Eyes eyes;

   private static IOSDriver startApp() throws Exception {
       DesiredCapabilities capabilities = new DesiredCapabilities();
       capabilities.setCapability("platformName", "iOS");
       capabilities.setCapability("automationName", "XCUITest");
       capabilities.setCapability("deviceName", "iPhone 12 Pro Max"); //Launch a single local simulator or real device
       capabilities.setCapability("platformVersion", "15.4");
       capabilities.setCapability("app", "/Users/justin/Desktop/MyApp.app");
       return new IOSDriver(new URL("http://localhost:4723/wd/hub"), capabilities);
   }

   @Before
   public void setup() throws Exception {
       runner = new VisualGridRunner(new RunnerOptions().testConcurrency(15));
       eyes = new Eyes(runner);
       eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

       Configuration conf = eyes.getConfiguration(); //Configure the 15 devices we want to validate asynchronously
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro_Max));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro_Max));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_mini));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro_Max));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XS));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_X));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XR));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_8));
       conf.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_7));

       eyes.setConfiguration(conf);

       driver = startApp();
   }

   @Test
   public void iOSNativeTest() throws Exception {
       eyes.open(driver, "My Native App", "Login View"); //Start a visual test
       eyes.check("Login", Target.window().fully(true)); //Capture the mobile UI view
       eyes.closeAsync(); //End the visual test
   }

   @After
   public void tearDownTest(){
        // The results object contains validation results on all 15 devices, which can then be asserted for visual and accessibility checks
       TestResultsSummary results = runner.getAllTestResults(false); 
       eyes.abortAsync();
       driver.quit();
   }
}

Conclusion

Now for the first time ever, native mobile developers can perform continuous testing, running their entire test suites on every pull request or code push across many different devices to get immediate quality feedback at once – just like web application developers have been doing for many years now!

Hopefully, this article illustrated the benefits and superiority of the Applitools Native Mobile Grid over traditional approaches to mobile automation testing. Now that you can see how simple and efficient it is to expand your cross-device coverage with the Native Mobile Grid, there are no more excuses not to. So what’s stopping you?!

How do you get started with the Native Mobile Grid, you may ask? Simply sign up to request access. You can read more about the Applitools Native Mobile Grid in our introductory blog post or on our website.

 Happy testing!

Introducing Applitools Native Mobile Grid
https://applitools.com/blog/introducing-applitools-native-mobile-grid/ – Thu, 14 Apr 2022

Last year, Applitools launched the Ultrafast Grid, the next generation of browser testing clouds for faster testing across multiple browsers in parallel. The success of the new grid with our customer base has been nothing short of amazing, with over 200 customers using the Ultrafast Grid in the last year. But our customers are hungry for more innovation, and we wanted to focus on extending our Applitools Test Cloud to the next frontier: native mobile apps.

Today, Applitools is excited to announce that the Native Mobile Grid is now ready for general availability – giving companies’ engineering and QA teams access to the next generation of cross-device testing.

For those developing native mobile apps, there are often many challenges with testing across multiple devices and orientations, resulting in a high number of bugs slipping into production. Local devices are hard to set up, and owning a vast collection doesn’t work well for remote companies in a post-Covid world. Not to mention that each device takes a bit of custom configuration and wizardry to get running without flakiness, and mobile test frameworks are often flaky on the big cloud providers.

Applitools Native Mobile Grid is a cloud-based testing grid that allows testers and developers to automate testing of their mobile applications across different iOS and Android devices quickly, accurately, and without hassle. After running just one test locally, the Applitools Native Mobile Grid will asynchronously run the tests in parallel using Visual AI, speeding up total execution tremendously and reducing flakiness. We’ve seen test time reduced by over 80% compared to other popular testing clouds.

The Benefits Of The Native Mobile Grid

Faster Test Execution, Broad Coverage

With access to over 40 devices, Applitools’ revolutionary async parallel test execution can reduce testing time by up to 90% compared to traditional device clouds while still expanding coverage beyond that single device you’ve been testing with.

Less Test Flakiness

Visual AI helps power Applitools’ industry-leading stability and reliability, with flakiness and false positives reduced by 99%.

More Bugs Caught

Testing faster, on more devices, with Visual AI means that more bugs & defects are caught without having to write more tests.

Added Security

The Native Mobile Grid does not need to open a tunnel into your network, so your application stays safe and secure.

Get Started

To get started with Native Mobile Grid, just head on over and fill out this form.
