Android Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/android/
Applitools delivers the next generation of test automation powered by AI assisted computer vision technology known as Visual AI.
Fri, 01 Dec 2023 18:44:44 +0000

Mobile Testing for the First Time with Android, Appium, and Applitools
https://applitools.com/blog/mobile-testing-android-appium-applitools/
Thu, 21 Jul 2022 16:41:51 +0000
Learn how to get started with mobile testing using Android and Appium, and then how to incorporate native mobile visual testing using Applitools.

The post Mobile Testing for the First Time with Android, Appium, and Applitools appeared first on Automated Visual Testing | Applitools.


For some of us, it’s hard to believe how long smartphones have existed. I remember when the first iPhone came out in June 2007. I was working at my first internship at IBM, and I remember hearing in the breakroom that someone on our floor got one. Oooooooh! So special! That was 15 years ago!

In that decade and a half, mobile devices of all shapes and sizes have become indispensable parts of our modern lives: The first thing I do every morning when I wake up is check my phone. My dad likes to play Candy Crush on his tablet. My wife takes countless photos of our French bulldog puppy on her phone. Her mom uses her tablet for her virtual English classes. I’m sure, like us, you would feel lost if you had to go a day without your device.

It’s vital for mobile apps to have high quality. If they crash, freeze, or plain don’t work, then we can’t do the things we need to do. So, being the Automation Panda, I wanted to give mobile testing a try! I had three main goals:

  1. Learn about mobile testing for Android – specifically how it relates to other kinds of testing.
  2. Automate my own Appium tests – not just run someone else’s examples.
  3. Add visual assertions to my tests with Applitools – instead of coding a bunch of checks with complicated locators.

This article covers my journey. Hopefully, it can help you get started with mobile testing, too! Let’s jump in.

Getting Started with Mobile

The mobile domain is divided into two ecosystems: Android and iOS. That means any app that wants to run on both operating systems must essentially have two implementations. To keep things easier for me, I chose to start with Android because I already knew Java and I actually did a little bit of Android development a number of years ago.

I started by reading a blog series by Gaurav Singh on getting started with Appium. Gaurav’s articles showed me how to set up my workbench and automate a basic test:

  1. Hello Appium, Part 1: What is Appium? An Introduction to Appium and its Tooling
  2. Hello Appium, Part 2: Writing Your First Android Test
  3. Appium Fast Boilerplate GitHub repository

Test Automation University also has a set of great mobile testing courses that go deeper than a quickstart guide.

Choosing an Android App

Next, I needed an Android app to test. Thankfully, Applitools had the perfect app ready: Applifashion, a shoe store demo. The code is available on GitHub at https://github.com/dmitryvinn/applifashion-android-legacy.

To do Android development, you need quite a few tools.

I followed Gaurav’s guide to a T for setting these up. I also had to set the ANDROID_HOME environment variable to the SDK path.
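Setting that variable looks something like this (a sketch for macOS, where Android Studio installs the SDK under the home directory by default; the exact path varies by machine and OS):

```shell
# Assumed default SDK path on macOS -- adjust to wherever the SDK Manager
# actually installed the SDK on your machine
export ANDROID_HOME="$HOME/Library/Android/sdk"

# It's also handy to put adb and the emulator binaries on PATH
export PATH="$PATH:$ANDROID_HOME/platform-tools:$ANDROID_HOME/emulator"

echo "ANDROID_HOME=$ANDROID_HOME"
```

Putting these lines in your shell profile (e.g. ~/.zshrc) makes them persist across terminal sessions.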

Be warned: it might take a long time to download and install these tools. It took me a few hours and occupied about 13 GB of space!

Once my workbench was ready, I opened the Applifashion code in Android Studio, created a Pixel 3a emulator in Device Manager, and ran the app. Here’s what it looked like:

The Applifashion main page

An Applifashion product page

I chose to use an emulator instead of a real device because, well, I don’t own a physical Android phone! Plus, managing a lab full of devices can be a huge hassle. Phone manufacturers release new models all the time, and phones aren’t cheap. If you’re working with a team, you need to swap devices back and forth, keep them protected from theft, and be careful not to break them. As long as your machine is powerful and has enough storage space, you can emulate multiple devices.

Choosing Appium for Testing

It was awesome to see the Applifashion app running through Android Studio. I played around with scrolling and tapping different shoes to open their product pages. However, I really wanted to do some automated testing. I chose to use Appium for automation because its API is very similar to Selenium WebDriver, with which I am very familiar.

Appium adds its own layer of tools on top.

Again, I followed Gaurav’s guide for full setup. Even though Appium has bindings for several popular programming languages, it still needs a server for relaying requests between the client (e.g., the test automation) and the app under test. I chose to install the Appium server via the NPM module, and I installed version 1.22.3. Appium Doctor gave me a little bit of trouble, but I was able to resolve all but one of the issues it raised, and the one remaining failure regarding ANDROID_HOME turned out not to be a problem for running tests.
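For reference, the install-and-check sequence looked roughly like this (a sketch; it assumes Node.js and npm are already installed, and the commands are shown as comments so the snippet itself stays side-effect-free):

```shell
# Install the Appium 1.x server globally via NPM (version pinned to match this post):
#   npm install -g appium@1.22.3
# Install and run the diagnostic tool to verify the Android toolchain:
#   npm install -g appium-doctor
#   appium-doctor --android
# Start the server (listens on 127.0.0.1:4723 by default):
#   appium
# The major version matters: Appium 1.x serves under the /wd/hub base path
APPIUM_VERSION="1.22.3"
echo "Appium ${APPIUM_VERSION%%.*}.x => base path /wd/hub"
```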

Before jumping into automation code, I wanted to make sure that Appium was working properly. So, I built the Applifashion app into an Android package (.apk file) through Android Studio via Build → Build Bundle(s) / APK(s) → Build APK(s). Then, I configured Appium Inspector to run this .apk file on my Pixel 3a emulator. My settings looked like this:

My Appium Inspector configuration for targeting the Applifashion Android package in my Pixel 3a emulator

Here are a few things to note:

  • The Appium server and Android device emulator were already running.
  • I used the default remote host (127.0.0.1) and remote port (4723).
  • Since I used Appium 1.x instead of 2.x, the remote path had to be /wd/hub.
  • appium:automationName had to be uiautomator2 – it could not be an arbitrary name.
  • The platform version, device name, and app path were specific to my environment. If you try to run this yourself, you’ll need to set them to match your environment.
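Put together, that Inspector session corresponds to a capabilities document roughly like this (the platform version and device name match my emulator; the app path below is a placeholder, and all of these values will differ in your environment):

```json
{
  "platformName": "android",
  "appium:automationName": "uiautomator2",
  "appium:platformVersion": "12",
  "appium:deviceName": "Pixel 3a API 31",
  "appium:app": "/path/to/main-app-debug.apk"
}
```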

I won’t lie – I needed a few tries to get all my capabilities right. But once I did, things worked! The app appeared in my emulator, and Appium Inspector mirrored the page from the emulator with the app source. I could click on elements within the inspector to see all their attributes. In this sense, Appium Inspector reminded me of my workflow for finding elements on a web page using Chrome DevTools. Here’s what it looked like:

The Appium Inspector with the Applifashion app loaded

Writing my First Appium Test

So far in my journey, I had done lots of setup, but I hadn’t yet automated any tests! Mobile testing certainly required a heftier stack than web app testing, but when I looked at Gaurav’s example test project, I realized that the core concepts were consistent.

I set up my own Java project with JUnit, Gradle, and Appium:

  • I chose Java to match the app’s code.
  • I chose JUnit to be my core test framework to keep things basic and familiar.
  • I chose Gradle to be the dependency manager to mirror the app’s project.

My example code is hosted here: https://github.com/AutomationPanda/applitools-appium-android-webinar.

Warning: The example code I share below won’t perfectly match what’s in the repository. Furthermore, the example code below will omit import statements for brevity. Nevertheless, the code in the repository should be a full, correct, executable example.

My build.gradle file looked like this with the required dependencies:

plugins {
    id 'java'
}

group 'com.automationpanda'
version '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'io.appium:java-client:8.1.1'
    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.8.2'
    testImplementation 'org.seleniumhq.selenium:selenium-java:4.2.1'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.8.2'
}

test {
    useJUnitPlatform()
}

My test case class was located at /src/test/java/com/automationpanda/ApplifashionTest.java. Inside the class, I had two instance variables: the Appium driver for mobile interactions, and a WebDriver waiting object for synchronization:

public class ApplifashionTest {

    private AppiumDriver driver;
    private WebDriverWait wait;

    // …
}

I added a setup method to initialize the Appium driver. Basically, I copied all the capabilities from Appium Inspector:

    @BeforeEach
    public void setUpAppium(TestInfo testInfo) throws IOException {

        // Create Appium capabilities
        // Hard-coding these values is typically not a recommended practice
        // Instead, they should be read from a resource file (like a properties or JSON file)
        // They are set here like this to make this example code simpler
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("platformName", "android");
        capabilities.setCapability("appium:automationName", "uiautomator2");
        capabilities.setCapability("appium:platformVersion", "12");
        capabilities.setCapability("appium:deviceName", "Pixel 3a API 31");
        capabilities.setCapability("appium:app", "/Users/automationpanda/Desktop/Applifashion/main-app-debug.apk");
        capabilities.setCapability("appium:appPackage", "com.applitools.applifashion.main");
        capabilities.setCapability("appium:appActivity", "com.applitools.applifashion.main.activities.MainActivity");
        capabilities.setCapability("appium:fullReset", "true");

        // Initialize the Appium driver
        driver = new AppiumDriver(new URL("http://127.0.0.1:4723/wd/hub"), capabilities);
        wait = new WebDriverWait(driver, Duration.ofSeconds(30));
    }

I also added a cleanup method to quit the Appium driver after each test:

    @AfterEach
    public void quitDriver() {
        driver.quit();
    }

I wrote one test case that performs shoe shopping. It loads the main page and then opens a product page using locators I found with Appium Inspector:

    @Test
    public void shopForShoes() {

        // Tap the first shoe
        final By shoeMainImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeMainImageLocator));
        driver.findElement(shoeMainImageLocator).click();

        // Wait for the product page to appear
        final By shoeProductImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image_product_page");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeProductImageLocator));
    }

At this stage, I hadn’t written any assertions yet. I just wanted to see if my test could successfully interact with the app. Indeed, it could, and the test passed when I ran it! As the test ran, I could watch it interact with the app in the emulator.

Adding Visual Assertions

My next step was to write assertions. I could have picked out elements on each page to check, but there were a lot of shoes and words on those pages. I could’ve spent a whole afternoon poking around for locators through the Appium Inspector and then tweaking my automation code until things ran smoothly. Even then, my assertions wouldn’t capture things like layout, colors, or positioning.

I wanted to use visual assertions to verify app correctness. I could use the Applitools SDK for Appium in Java to take one-line visual snapshots at the end of each test method. However, I wanted more: I wanted to test multiple devices, not just my Pixel 3a emulator. There are countless Android device models on the market, and each has unique aspects like screen size. I wanted to make sure my app would look visually perfect everywhere.

In the past, I would need to set up each target device myself, either as an emulator or as a physical device. I’d also need to run my test suite in full against each target device. Now, I can use Applitools Native Mobile Grid (NMG) instead. NMG works just like Applitools Ultrafast Grid (UFG), except that instead of browsers, it provides emulated Android and iOS devices for visual checkpoints. It’s a great way to scale mobile test execution. In my Java code, I can set up Applitools Eyes to upload results to NMG and run checkpoints against any Android devices I want. I don’t need to set up a bunch of devices locally, and the visual checkpoints will run much faster than any local Appium reruns. Win-win!

To get started, I needed my Applitools account. If you don’t have one, you can register one for free.
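One hedge worth knowing: rather than hard-coding the API key in source (as the setup code in this post does for simplicity), the Applitools SDKs can typically read it from the APPLITOOLS_API_KEY environment variable, so you can export it once per shell:

```shell
# Keep the secret out of source control; the placeholder below is not a real key
export APPLITOOLS_API_KEY="<insert-your-API-key-here>"
echo "APPLITOOLS_API_KEY is set (${#APPLITOOLS_API_KEY} characters)"
```

In Java, the same value is also reachable explicitly via System.getenv("APPLITOOLS_API_KEY") if you prefer to pass it to setApiKey yourself.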

Then, I added the Applitools Eyes SDK for Appium to my Gradle dependencies:

   testImplementation 'com.applitools:eyes-appium-java5:5.12.0'

I added a “before all” setup method to ApplifashionTest to set up the Applitools configuration for NMG. I put this in a “before all” method instead of a “before each” method because the same configuration applies for all tests in this suite:

    private static InputReader inputReader;
    private static Configuration config;
    private static VisualGridRunner runner;

    @BeforeAll
    public static void setUpAllTests() {

        // Create the runner for the Ultrafast Grid
        // Warning: If you have a free account, then concurrency will be limited to 1
        runner = new VisualGridRunner(new RunnerOptions().testConcurrency(5));

        // Create a configuration for Applitools Eyes
        config = new Configuration();

        // Set the Applitools API key so test results are uploaded to your account
        config.setApiKey("<insert-your-API-key-here>");

        // Create a new batch
        config.setBatch(new BatchInfo("Applifashion in the NMG"));

        // Add mobile devices to test in the Native Mobile Grid
        config.addMobileDevices(
                new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21),
                new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10),
                new AndroidDeviceInfo(AndroidDeviceName.Pixel_4));
    }

The configuration for NMG was almost identical to a configuration for UFG. I created a runner, and I created a config object with my Applitools API key, a batch name, and all the devices I wanted to target. Here, I chose three different phones: Galaxy S21, Galaxy Note 10, and Pixel 4. Currently, NMG supports 18 different Android devices, and support for more is coming soon.

At the bottom of the “before each” method, I added code to set up the Applitools Eyes object for capturing snapshots:

    private Eyes eyes;

    @BeforeEach
    public void setUpAppium(TestInfo testInfo) throws IOException {

        // …

        // Initialize Applitools Eyes
        eyes = new Eyes(runner);
        eyes.setConfiguration(config);
        eyes.setIsDisabled(false);
        eyes.setForceFullPageScreenshot(true);

        // Open Eyes to start visual testing
        eyes.open(driver, "Applifashion Mobile App", testInfo.getDisplayName());
    }

Likewise, in the “after each” cleanup method, I added code to “close eyes,” indicating the end of a test for Applitools:

    @AfterEach
    public void quitDriver() {

        // …

        // Close Eyes to tell the server it should display the results
        eyes.closeAsync();
    }

Finally, I added code to each test method to capture snapshots using the Eyes object. Each snapshot is a one-line call that captures the full screen:

    @Test
    public void shopForShoes() {

        // Take a visual snapshot
        eyes.check("Main Page", Target.window().fully());

        // Tap the first shoe
        final By shoeMainImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeMainImageLocator));
        driver.findElement(shoeMainImageLocator).click();

        // Wait for the product page to appear
        final By shoeProductImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image_product_page");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeProductImageLocator));

        // Take a visual snapshot
        eyes.check("Product Page", Target.window().fully());
    }

When I ran the test with these visual assertions, it ran one time locally, and then NMG ran each snapshot against the three target devices I specified. Here’s a look from the Applitools Eyes dashboard at some of the snapshots it captured:

My first visual snapshots of the Applifashion Android app using Applitools Native Mobile Grid!

The results are marked “New” because these are the first “baseline” snapshots. All future checkpoints will be compared to these images.

Another cool thing about these snapshots is that they capture the full page. For example, the main page will probably display only 2-3 rows of shoes within its viewport on a device. However, Applitools Eyes effectively scrolls down over the whole page and stitches together the full content as if it were one long image. That way, visual snapshots capture everything on the page – even what the user can’t immediately see!

The full main page for the Applifashion app, fully scrolled and stitched

Injecting Visual Bugs

Capturing baseline images is only the first step with visual testing. Tests should be run regularly, if not continuously, to catch problems as soon as they happen. Visual checkpoints should point out any differences to the tester, and the tester should judge if the change is good or bad.

I wanted to try this change detection with NMG, so I reran tests against a slightly broken “dev” version of the Applifashion app. Can you spot the bug?

The “main” version of the Applifashion product page compared to a “dev” version

The formatting for the product page was too narrow! “Traditional” assertions would probably miss this type of bug because all the content is still on the page, but visual assertions caught it right away. Visual checkpoints worked the same on NMG as they would on UFG or even with the classic (i.e., local-machine) Applitools runner.

When I switched back to the “main” version of the app, the tests passed again because the visuals were “fixed”:

Applifashion tests marked as “passed” after fixing visual bugs

While running all these tests, I noticed that mobile test execution is pretty slow. The one test running on my laptop took about 45 seconds to complete. It needed time to load the app in the emulator, make its interactions, take the snapshots, and close everything down. However, I also noticed that the visual assertions in NMG were relatively fast compared to my local runs. Rendering six snapshots took about 30 seconds to complete – three times the coverage in significantly less time. If I had run tests against more devices in parallel, I could probably have seen an even greater coverage-to-time ratio.
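A quick back-of-the-envelope check on that coverage-to-time ratio, using the approximate timings above (45 seconds for one device locally versus 30 seconds for three devices on NMG):

```java
// Rough device-coverage-per-second comparison; the timings are the
// approximate figures observed in the runs described above.
public class CoverageRatio {
    public static void main(String[] args) {
        double localDevicesPerSecond = 1 / 45.0; // one emulator, ~45 s per test
        double nmgDevicesPerSecond = 3 / 30.0;   // three devices, ~30 s total
        double speedup = nmgDevicesPerSecond / localDevicesPerSecond;
        System.out.printf("NMG coverage rate is %.1fx the local rate%n", speedup);
    }
}
```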

Conclusion

My first foray into mobile testing was quite a journey. It required much more tooling than web UI testing, and setup was trickier. Overall, I’d say testing mobile is indeed more difficult than testing web. Thankfully, the principles of good test automation were the same, so I could still develop decent tests. If I were to add more tests, I’d create a class for reading capabilities as inputs from environment variables or resource files, and I’d create another class to handle Applitools setup.
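As a sketch of that first improvement, here is a hypothetical CapabilityReader (my own name, not part of the example repo) that prefers environment variables and falls back to a bundled properties file:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical helper: resolve a capability from an environment variable
// first (e.g. appium.device.name -> APPIUM_DEVICE_NAME), then from a
// properties resource on the classpath, then from a default value.
public class CapabilityReader {
    private final Properties props = new Properties();

    public CapabilityReader(String resourceName) {
        try (InputStream in = CapabilityReader.class.getResourceAsStream(resourceName)) {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            // Missing or unreadable resource: fall back to env vars/defaults
        }
    }

    public String get(String key, String defaultValue) {
        String envValue = System.getenv(key.toUpperCase().replace('.', '_'));
        if (envValue != null) {
            return envValue;
        }
        return props.getProperty(key, defaultValue);
    }

    public static void main(String[] args) {
        CapabilityReader reader = new CapabilityReader("/appium.properties");
        System.out.println(reader.get("appium.device.name", "Pixel 3a API 31"));
    }
}
```

The setup method could then build its DesiredCapabilities from reader.get(...) calls instead of hard-coded strings.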

Visual testing with Applitools Native Mobile Grid also made test development much easier. Setting everything up just to start testing was enough of a chore. Coding the test cases felt straightforward because I could focus my mental energy on interactions and take simple snapshots for verifications. Trying to decide all the elements I’d want to check on a page and then fumbling around the Appium Inspector to figure out decent locators would multiply my coding time. NMG also enabled me to run my tests across multiple different devices at the same time without needing to pay hundreds of dollars per device or sucking up a few gigs of storage and memory on my laptop. I’m excited to see NMG grow with support for more devices and more mobile development frameworks in the future.

Despite the prevalence of mobile devices in everyday life, mobile testing still feels far less mature as a practice than web testing. Anecdotally, it seems that there are fewer tools and frameworks for mobile testing, fewer tutorials and guides for learning, and fewer platforms that support mobile environments well. Perhaps this is because mobile test automation is an order of magnitude more difficult and therefore more folks shy away from it. There’s no reason for it to be left behind anymore. Given how much we all rely on mobile apps, the risks of failure are just too great. Technologies like Visual AI and Applitools Native Mobile Grid make it easier for folks like me to embrace mobile testing.


Writing Your First Appium Test For iOS Devices
https://applitools.com/blog/how-to-write-appium-ios-test/
Wed, 13 Apr 2022 01:47:33 +0000
Learn how to create your first Appium test for iOS. Set up and run your first test in this easy-to-follow guide.

The post Writing Your First Appium Test For iOS Devices appeared first on Automated Visual Testing | Applitools.


This is the third and final post in our Hello World introduction series to Appium, and we’ll discuss how to create your first Appium test for iOS. You can read the first post for an introduction to Appium, or the second to learn how to create your first Appium test for Android.

Congratulations on having made it so far. I hope you are slowly becoming more comfortable with Appium and realizing just how powerful a tool it really is for mobile automation, and that it’s not that difficult to get started with it.

If you need a refresher, the earlier parts cover what Appium is and how to write your first Android test with it.

Say Hello to iOS!

In this post, we’ll learn how to set up your dev environment and write your first Appium-based iOS test.

Setup Dependencies

You’ll need some dependencies preinstalled on your dev machine.

Let’s go over them one by one.

Also, remember that it’s completely okay if you don’t understand all the details of these tools in the beginning. Appium abstracts most of those details away, and you can always dig deeper later if you need some very specific capability of these libraries.

Step 1: Install Xcode Command Line Tools

To run iOS tests, we need a machine running macOS with Xcode installed.

The command below sets up the command-line tools that we need to run our first test:

xcode-select --install

Step 2: Install Carthage

You can think of Carthage as a tool for adding frameworks to your Cocoa applications and building their required dependencies:

brew install carthage

Step 3: Install libimobiledevice

The libimobiledevice library allows Appium to talk to iOS devices using native protocols:

brew install libimobiledevice

Step 4: Install ios-deploy

ios-deploy helps install and debug iOS apps from the command line:

brew install ios-deploy

Step 5: Install ios-webkit-debug-proxy

  • ios_webkit_debug_proxy (aka iwdp) proxies requests from the usbmuxd daemon over a web socket connection, allowing developers to send commands to MobileSafari and UIWebViews on real and simulated iOS devices.

brew install ios-webkit-debug-proxy

Step 6: Optional Dependencies

IDB (iOS Device Bridge) is a set of utilities made by Facebook for working with iOS simulators and devices; the commands below install its companion and Python client:

brew tap facebook/fb
brew install idb-companion
pip3 install fb-idb

If you are curious, the reference blogs that helped me come up with this shortlist of dependencies are good reads for more context.

Your First Appium iOS Test

For our first iOS test, we’ll use a sample demo app provided by Appium.

You can download the zip file from here, unzip it, and copy it under the src/test/resources directory in the project, so that you have a TestApp.app file under the test resources folder.

If you are following along by checking out the GitHub repo appium-fast-boilerplate, you’ll see the iOS app path mentioned in the file ios-caps.json under src/main/resources/.

This file represents Appium capabilities in JSON format; you can change them based on which iOS device you want to run on.

When we run the test, DriverManager will pick these up and help create the Appium session. You can read part 2 of this blog series to learn more about this flow.

{
  "platformName": "iOS",
  "automationName": "XCUITest",
  "deviceName": "iPhone 13",
  "app": "src/test/resources/TestApp.app"
}

Which Steps Would We Automate?

Our app has a set of UI controls, with one section representing a calculator where we can enter two numbers and get their sum (see the snapshot below):

TestApp from appium showing two text buttons, a compute sum button and a result textbox

We would automate the below flow:

  1. Open the AUT (application under test)
  2. Enter the first number in its text box
  3. Enter the second number in its text box
  4. Tap the Compute Sum button
  5. Verify the total is correct

Pretty basic, right?

Below is what a sample test looks like (see the code here):

import constants.TestGroups;
import org.testng.Assert;
import org.testng.annotations.Test;
import pages.testapp.home.HomePage;

public class IOSTest extends BaseTest {

   @Test(groups = {TestGroups.IOS})
   public void addNumbers() {
       String actualSum = new HomePage(this.driver)
               .enterTwoNumbersAndCompute("5", "5")
               .getSum();

       Assert.assertEquals(actualSum, "10");
   }
}

Here, we follow the same good patterns that have served us well (the fluent style, page objects, a base test, and a driver manager) in our tests, just as we did in our Android test.

You can read about these in detail in this earlier blog.

Your First iOS Page Object

The beauty of the page object pattern is that it looks very similar regardless of the platform.

Below is the complete page object that implements the desired behavior for this test.

package pages.testapp.home;

import core.page.BasePage;
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;

public class HomePage extends BasePage {
   private final By firstNumber = By.name("IntegerA");
   private final By secondNumber = By.name("IntegerB");
   private final By computeSumButton = By.name("ComputeSumButton");
   private final By answer = By.name("Answer");

   public HomePage(AppiumDriver driver) {
       super(driver);
   }

   public HomePage enterTwoNumbersAndCompute(String first, String second) {
       typeFirstNumber(first);
       typeSecondNumber(second);
       compute();
       return this;
   }

   public HomePage typeFirstNumber(String number) {
       WebElement firstNoElement = getElement(firstNumber);
       type(firstNoElement, number);
       return this;
   }

   public HomePage typeSecondNumber(String number) {
       WebElement secondNoElement = getElement(secondNumber);
       type(secondNoElement, number);
       return this;
   }

   public HomePage compute() {
       WebElement computeBtn = getElement(computeSumButton);
       click(computeBtn);
       return this;
   }

   public String getSum() {
       waitForElementToBePresent(answer);
       return getText(getElement(answer));
   }
}

Let’s unpack this and understand its components.

We create a HomePage class that inherits from BasePage that has wrappers over Appium API methods.

public class HomePage extends BasePage

We define our selectors of type By, using Appium Inspector to discover that name is the unique selector for these elements. In your own projects, depending on an ID is probably a safer bet.

private final By firstNumber = By.name("IntegerA");
private final By secondNumber = By.name("IntegerB");
private final By computeSumButton = By.name("ComputeSumButton");
private final By answer = By.name("Answer");

Next, we initialize this class with a driver instance that’s passed in from the test, calling the parent class constructor to ensure we have the appropriate driver instance set:

public HomePage(AppiumDriver driver) {
   super(driver);
}

We then create a wrapper function that takes two numbers as strings, types them into the two text boxes, and taps the button.

public HomePage enterTwoNumbersAndCompute(String first, String second) {
   typeFirstNumber(first);
   typeSecondNumber(second);
   compute();
   return this;
}

We implement these methods by reusing methods from BasePage while ensuring the correct page object is returned.

Since there is no redirection happening in these tests and it’s a single screen, we just return this (i.e., the current page object in Java syntax). This enables writing tests in the fluent style that you saw earlier.

public HomePage typeFirstNumber(String number) {
   WebElement firstNoElement = getElement(firstNumber);
   type(firstNoElement, number);
   return this;
}

public HomePage typeSecondNumber(String number) {
   WebElement secondNoElement = getElement(secondNumber);
   type(secondNoElement, number);
   return this;
}

public HomePage compute() {
   WebElement computeBtn = getElement(computeSumButton);
   click(computeBtn);
   return this;
}

Finally, we return the string that has the sum of two numbers in the getSum() method and let the test perform desired assertions:

public String getSum() {
   waitForElementToBePresent(answer);
   return getText(getElement(answer));
}

Running the Test

Before running the test, ensure that the Appium server is running in another terminal and that your Appium 2.0 server has the XCUITest driver installed, by following the steps below:

# Ensure driver is installed
appium driver install xcuitest

# Start the appium server before running your test
appium

Within the project, you can run the test using the command below, or use IntelliJ’s (or an equivalent editor’s) test runner to run the desired test.

gradle wrapper clean build runTests -Dtag="IOS" -Dtarget="IOS"

Conclusion

With this, we come to the end of this short three-part series on getting started with Appium: from a general introduction, to working with Android, to this post on iOS. Hopefully, this series makes it a little bit easier for you or your friends to get set up with Appium.

Exploring the remainder of Appium’s API, capabilities, and tooling is left as an exercise to you, my brave and curious reader. I’m sure pretty soon you’ll also be sharing similar posts, and hopefully I’ll learn a thing or two from you as well. Remember, the Appium docs, the community, and Appium Conf are great sources to go deeper into Appium.

So, what are you waiting for? Go for it!

Remember, you can see the entire project on GitHub at appium-fast-boilerplate; clone or fork it and play around with it. Hopefully, this post helps you a little bit in starting on iOS automation using Appium. If you found it valuable, do leave a star on the repo, and in case there is any feedback, don’t hesitate to create an issue.

You can also check out https://automationhacks.io for other posts I’ve written about software engineering and testing, and this page for a talk that I gave on the same topic.

As always, please do share this with your friends or colleagues and if you have thoughts or feedback, I’d be more than happy to chat over on Twitter or in the comments. Until next time. Happy testing and coding.


Writing Your First Appium Test For Android Mobile Devices
https://applitools.com/blog/how-to-write-android-test-appium/
Fri, 11 Mar 2022 20:51:39 +0000
Learn how to create your first Appium test for Android in this comprehensive, step-by-step walkthrough.

The post Writing Your First Appium Test For Android Mobile Devices appeared first on Automated Visual Testing | Applitools.

]]>

This is the second post in our Hello World introduction series to Appium, and we’ll discuss how to create your first Appium test for Android. You can read the first post where we discussed what Appium is, including its core concepts and how to set up the Appium server. You can also read the next post on setting up your first Appium iOS test.

Key Takeaways

In this post, we’ll build on top of earlier basics and focus on the below areas:

  • Setup Android SDK
  • Setup Android emulator/real devices
  • Setup Appium Inspector
  • Setup our first Android project with framework components

We have lots to cover, but don’t worry: by the end of this post, you will have run your first Appium-based Android test. Excited? ✊ Let’s go.

Setup Android SDK

To run Android tests, we need to set up the Android SDK, ADB (Android Debug Bridge), and some other utilities.

  • The Android SDK sets up Android on your machine and provides you with all the tools required for development, or in this case, automation.
  • The Android Debug Bridge (ADB) is a command-line tool that lets you communicate with an Android device (either an emulator or a physical device).

The easiest way to set these up is to go to the Android site and download Android Studio (an IDE for developing Android apps), which will install all the required libraries and give us everything we need to run our first Android test.

Setup SDK Platforms and Tools

Once downloaded and installed, open Android Studio, click on Configure, and then SDK Manager. Using this, you can download any Android API version from SDK Platforms.

Shows Android studio home page and how someone can open SDK Manager by tapping on Configure button

Also, you can install any desired SDK Tools from here.

We’ll go with the defaults for now. The SDK Manager can also install updates for these tools, which is a convenient way of upgrading.

Shows SDK Tools in android studio

Add Android Home to Environment Variables

The Appium server needs to know where the Android SDK and other tools, like the Emulator and Platform Tools, are located in order to help us run the tests.

We can do so by adding the below variables in the system environment.

On Mac/Linux:

  • Add the below environment variables in the shell of your choice (.bash_profile for bash or .zshrc for zsh). These are usually the paths where Android Studio installs these tools.
  • Run source <shell_profile_file_name>, e.g. source ~/.zshrc
export ANDROID_HOME=$HOME/Library/Android/sdk
export PATH=$ANDROID_HOME/emulator:$PATH
export PATH=$ANDROID_HOME/tools:$PATH
export PATH=$ANDROID_HOME/tools/bin:$PATH
export PATH=$ANDROID_HOME/platform-tools:$PATH

If you are on Windows, you’ll need to add the path to Android SDK in the ANDROID_HOME variable under System environment variables.

Once done, run the adb command on the terminal to verify ADB is set up:

➜  appium-fast-boilerplate git:(main) adb
Android Debug Bridge version 1.0.41
Version 30.0.5-6877874
Installed as /Users/gauravsingh/Library/Android/sdk/platform-tools/adb

These are quite a few tedious steps; in case you want to set them up quickly, you can execute this excellent script written by Anand Bagmar.

Set up an Android Emulator or Real Device

Our Android tests will run either on an emulator or a real Android device plugged in. Let’s see how to create an Android emulator image.

  • Open Android Studio
  • Click configure
  • Click on AVD Manager

Shows Android studio home page and how someone can open AVD Manager by tapping on Configure button

You’ll be greeted with a blank screen with no virtual devices listed. Tap on Create a virtual device to launch the Virtual Device Configuration flow:

Shows blank virtual devices screen with create virtual device button

Next, select an Android device like TV, Phone, Tablet, etc., and the desired size and resolution combination. 

It’s usually a good idea to set up an emulator with Play Store services available (see the icon in the Play Store column), as certain apps might need the latest Play services to be installed.

We’ll go with Pixel 3a with Play Store available.

Shows device definition screen to select a device model

Next, we’ll need to select which Android version this emulator should have. You can choose any of the desired versions and download the image. We’ll choose Android Q (10.0 version).

Shows Android OS system image with option to Download and select one

You need to give this image a name. We’ll need to use this later in Appium capabilities so you can give any meaningful name or go with the default. We’ll name it Automation.

Shows Android emulator configuration with AVD Name

Nice ✋, we have created our emulator. You can fire it up by tapping the Play icon under the Actions section.

Android Virtual Device Manager screen with an emulator named Automation

You should see an emulator boot up on your machine, looking much like a real phone.

Setting up a Real Device

While the emulator is an in-memory image of the Android OS that you can quickly spin up and destroy, it does consume physical resources like RAM and CPU. It’s also always a good idea to verify your app on a real device.

We’ll see how to set up a real device so that we can run automation on it.

You need to connect your device to your machine via USB. Once done:

  • Go to About Phone.
  • Tap on the Android build number multiple times until developer mode is enabled; you’ll see a Toast message show up as you get closer to enabling this mode.
  • Go to Developer Options and Enable USB Debugging.

USB Debugging under Developer Options

And that’s all you need to run our automation on a connected real device.

Setting up Appium Inspector

Appium comes with a nifty inspector desktop app that can inspect your application under test, help you identify element locators (i.e. ways to identify elements on the screen), and even let you play around with the app.

It can connect to any running Appium server and is a really handy utility for identifying element locators and developing Appium scripts.

Download and Install Appium Inspector

You can download it by going to the Appium GitHub repo and searching for appium-inspector.

Appium Inspector Github page with releases option highlighted

Go to Releases and find the latest .dmg (on Mac) or .exe (on Windows) to download and install.

Appium Inspector Github releases page with mac .dmg file highlighted

On Mac, you may get a warning stating: “Appium Inspector” can’t be opened because Apple cannot check it for malicious software. 

 Apple warning for malicious software

To mitigate this, just go to System preferences > Security & Privacy > General and say Open Anyway for Appium Inspector. For more details see Issue 1217.

Give open anyway permission to Appium in System preferences

Setting up Desired Capabilities

Once you launch Appium Inspector, you’ll be greeted with the below home screen. Note the JSON Representation area under the Desired Capabilities section.

 Steps highlighting how we can add our desired caps and save this config

What are Desired Capabilities?

Think of them as properties that you want your driver’s session to have. For example, you may want to reset your app after your test, or launch the app in a particular orientation. These are all achievable by specifying key-value pairs in a JSON payload that we provide to the Appium server session.

Please see the Appium docs for more details on these capabilities.

Below are some sample capabilities we can give for Android:

{
    "platformName": "android",
    "automationName": "uiautomator2",
    "platformVersion": "10",
    "deviceName": "Automation",
    "app": "/<absolute_path_to_app>/ApiDemos-debug.apk",
    "appPackage": "io.appium.android.apis",
    "appActivity": "io.appium.android.apis.ApiDemos"
}
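The same capabilities can also be assembled in code. Here is a minimal sketch using only plain Java collections (the `/tmp/...` path is a placeholder; Selenium's `DesiredCapabilities` is ultimately constructed from exactly such a map):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AndroidCaps {
    // Builds the same key-value pairs as the JSON above; DesiredCapabilities
    // (org.openqa.selenium.remote) can be constructed from such a map.
    static Map<String, String> androidCaps(String appPath) {
        Map<String, String> caps = new LinkedHashMap<>();
        caps.put("platformName", "android");
        caps.put("automationName", "uiautomator2");
        caps.put("platformVersion", "10");
        caps.put("deviceName", "Automation");
        caps.put("app", appPath); // absolute path to ApiDemos-debug.apk
        return caps;
    }

    public static void main(String[] args) {
        Map<String, String> caps = androidCaps("/tmp/ApiDemos-debug.apk");
        System.out.println(caps.get("automationName"));
    }
}
```

Keeping the capabilities in one place like this makes it easy to later load them from a JSON file per platform, as the boilerplate project does.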

For this post, we’ll use the sample apps provided by Appium; you can see them here. Once you’ve downloaded the app, keep track of its absolute path and update the app key in the JSON with it.

We’ll add this JSON under JSON representation and then tap on the Save button.

Saved caps in grid

It would be a good idea to save this config for the future. You can tap on ‘Save as’ and give it a meaningful name.

 Saving and giving caps a name

Starting the Inspector Session

To start the inspection session, you need an Appium server running locally; start it by typing appium on the command line:

➜  appium-fast-boilerplate git:(main) appium
[Appium] Welcome to Appium v2.0.0-beta.25
[Appium] Attempting to load driver uiautomator2...
[Appium] Appium REST http interface listener started on 0.0.0.0:4723
[Appium] Available drivers:
[Appium]   - uiautomator2@2.0.1 (automationName 'UiAutomator2')
[Appium] No plugins have been installed. Use the "appium plugin" command to install the one(s) you want to use.

Let’s make sure our emulator is also up and running. We can start the emulator via the AVD Manager in Android Studio, or, in case you are more command-line savvy, I have written an earlier post on how to do this via the command line as well.

Once done, tap on the Start Session button; this should launch the API Demos app and show the inspector home screen.

Appium inspector home screen with controls highlighted

Using this, you can tap on any element in the app and see its element hierarchy and properties. This is very useful for authoring Appium scripts, and I encourage you to explore each section of this tool and get familiar with it, since you’ll be using it a lot.

Writing Our First Android Test Using Appium

Phew, that seemed like a lot of setup steps, but don’t worry: you only have to do this once. Now we can get down to the real business of writing our first automated test on Android.

You can download the project from GitHub at appium-fast-boilerplate.

We’ll also cover the fundamental concepts of writing page objects and the tests based on them. Let’s take a look at the high-level architecture.

High-level architecture of an Android test written using Appium

What Would This Test Do?

Before automating any test, we need to be clear on what is the purpose of that test. I’ve found the Arrange Act Assert pattern quite useful to reason about it. Read this post by Andrew Knight in case you are interested to know more about it.

Our test would perform the below:

  • Open the API Demos app
  • Navigate to the Log Text box and tap on Add once
  • Verify the log text is displayed

Our Test Class

Let’s start by seeing our test.

import constants.TestGroups;
import org.testng.Assert;
import org.testng.annotations.Test;
import pages.apidemos.home.APIDemosHomePage;

public class AndroidTest extends BaseTest {

    @Test(groups = {TestGroups.ANDROID})
    public void testLogText() {
        String logText = new APIDemosHomePage(this.driver)
                .openText()
                .tapOnLogTextBox()
                .tapOnAddButton()
                .getLogText();

        Assert.assertEquals(logText, "This is a test");
    }
}

There are a few things to notice above:

public class AndroidTest extends BaseTest

Our class extends a BaseTest. This is useful since we can perform common setup and teardown functions there, including setting up the driver session and closing it once our script is done.

This keeps the tests as simple as possible and does not overload the reader with more details than they need to see.

String logText = new APIDemosHomePage(this.driver)
                .openText()
                .tapOnLogTextBox()
                .tapOnAddButton()
                .getLogText();

We see our tests read like plain English with a series of actions following each other. This is called a Fluent pattern and we’ll see how this is set up in just a moment.
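The fluent pattern itself is independent of Appium. A stripped-down sketch with hypothetical page classes and no driver shows the mechanics: each action returns either `this` or the next page object, so calls chain naturally:

```java
public class FluentSketch {
    // Hypothetical page classes (no driver): each action returns either
    // `this` or the next page object, so calls chain fluently.
    static class HomePage {
        HomePage openText() { return this; }
        LogPage tapOnLogTextBox() { return new LogPage(); }
    }

    static class LogPage {
        String getLogText() { return "This is a test"; }
    }

    public static void main(String[] args) {
        String logText = new HomePage().openText().tapOnLogTextBox().getLogText();
        System.out.println(logText);
    }
}
```

The real page objects later in this post follow the same shape, with the Appium driver threaded through each constructor.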

Base Test and Driver Setup

Let’s see our BaseTest class:

import constants.Target;
import core.driver.DriverManager;
import core.utils.PropertiesReader;
import exceptions.PlatformNotSupportException;
import io.appium.java_client.AppiumDriver;
import org.testng.ITestContext;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

import java.io.IOException;

public class BaseTest {
    protected AppiumDriver driver;
    protected PropertiesReader reader = new PropertiesReader();

    @BeforeMethod(alwaysRun = true)
    public void setup(ITestContext context) {
        context.setAttribute("target", reader.getTarget());

        try {
            Target target = (Target) context.getAttribute("target");
            this.driver = new DriverManager().getInstance(target);
        } catch (IOException | PlatformNotSupportException e) {
            e.printStackTrace();
        }
    }

    @AfterMethod(alwaysRun = true)
    public void teardown() {
        driver.quit();
    }
}

Let’s unpack this class.

protected AppiumDriver driver;

We set our driver instance as protected so that all test classes will have access to it.

protected PropertiesReader reader = new PropertiesReader();

We create an instance of PropertiesReader class to read relevant properties. This is useful since we want to be able to switch our driver instances based on different test environments and conditions. If curious, please see its implementation here.
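Reading such configuration typically boils down to `java.util.Properties`. Below is a minimal stdlib-only sketch of the idea (the repo's actual `PropertiesReader` implementation may differ):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class PropertiesReaderSketch {
    public static void main(String[] args) throws IOException {
        // The real project would load a .properties file from the resources
        // folder; here we read from an in-memory string for illustration.
        Properties props = new Properties();
        props.load(new StringReader("target=ANDROID\n"));
        System.out.println(props.getProperty("target"));
    }
}
```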

Target target = (Target) context.getAttribute("target");
this.driver = new DriverManager().getInstance(target);

We get the relevant Target and then use it to get an instance of AppiumDriver from a class called DriverManager.

Driver Manager to Setup Appium Driver

We’ll use this reusable class to:

  • Read a capabilities JSON file based on the platform (Android/iOS)
  • Set up a local driver instance with these capabilities
  • This class could independently evolve to set up the desired driver instances, whether local, in-house, or on a remote cloud lab
package core.driver;

import constants.Target;
import exceptions.PlatformNotSupportException;
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;

import static core.utils.CapabilitiesHelper.readAndMakeCapabilities;

public class DriverManager {
    private static AppiumDriver driver;
    // For Appium < 2.0, append /wd/hub to the APPIUM_SERVER_URL
    String APPIUM_SERVER_URL = "http://127.0.0.1:4723";

    public AppiumDriver getInstance(Target target) throws IOException, PlatformNotSupportException {
        System.out.println("Getting instance of: " + target.name());
        switch (target) {
            case ANDROID:
                return getAndroidDriver();
            case IOS:
                return getIOSDriver();
            default:
                throw new PlatformNotSupportException("Please provide supported target");
        }
    }

    private AppiumDriver getAndroidDriver() throws IOException {
        HashMap map = readAndMakeCapabilities("android-caps.json");
        return getDriver(map);
    }

    private AppiumDriver getIOSDriver() throws IOException {
        HashMap map = readAndMakeCapabilities("ios-caps.json");
        return getDriver(map);
    }

    private AppiumDriver getDriver(HashMap map) {
        DesiredCapabilities desiredCapabilities = new DesiredCapabilities(map);

        try {
            driver = new AppiumDriver(
                    new URL(APPIUM_SERVER_URL), desiredCapabilities);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }

        return driver;
    }
}

You can observe:

  • The getInstance method takes a target and, based on that, tries to get either an Android or an iOS driver. In the future, if we want to run our tests against a cloud provider like HeadSpin, Sauce Labs, BrowserStack, or Applitools, this class could handle creating the relevant session.
  • Both getAndroidDriver and getIOSDriver read a JSON file with capabilities similar to those we saw in the Inspector section, convert it into a Java HashMap, and pass it into the getDriver method, which returns an Appium instance. With this, we could later pass in, say, an environment context and choose a different capabilities file based on it.

A Simple Page Object with the Fluent Pattern

Let’s take a look at the example of a page object that enables a Fluent pattern.

package pages.apidemos.home;

import core.page.BasePage;
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import pages.apidemos.logtextbox.LogTextBoxPage;

public class APIDemosHomePage extends BasePage {
    private final By textButton = By.xpath("//android.widget.TextView[@content-desc=\"Text\"]");
    private final By logTextBoxButton = By.xpath("//android.widget.TextView[@content-desc=\"LogTextBox\"]");

    public APIDemosHomePage(AppiumDriver driver) {
        super(driver);
    }

    public APIDemosHomePage openText() {
        WebElement text = getElement(textButton);
        click(text);

        return this;
    }

    public LogTextBoxPage tapOnLogTextBox() {
        WebElement logTextBoxButtonElement = getElement(logTextBoxButton);
        waitForElementToBeVisible(logTextBoxButtonElement);

        click(logTextBoxButtonElement);

        return new LogTextBoxPage(driver);
    }
}

Notice the following about this example page object class:

  • Two XPath locators are defined with By clauses
  • The driver is initialized in the BasePage class
  • The test actions return either this (i.e. the current page) or the next page object to enable a Fluent pattern. Following this, we can create a sort of action graph, where any action a test takes connects to the next set of actions that page supports, which makes writing test scripts a breeze.

Base Page Class

package core.page;

import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.util.List;

public class BasePage {
    protected AppiumDriver driver;

    public BasePage(AppiumDriver driver) {
        this.driver = driver;
    }

    public void click(WebElement elem) {
        elem.click();
    }

    public WebElement getElement(By by) {
        return driver.findElement(by);
    }

    public List<WebElement> getElements(By by) {
        return driver.findElements(by);
    }

    public String getText(WebElement elem) {
        return elem.getText();
    }

    public void waitForElementToBeVisible(WebElement elem) {
        WebDriverWait wait = new WebDriverWait(driver, 10);
        wait.until(ExpectedConditions.visibilityOf(elem));
    }

    public void waitForElementToBePresent(By by) {
        WebDriverWait wait = new WebDriverWait(driver, 10);
        wait.until(ExpectedConditions.presenceOfElementLocated(by));
    }

    public void type(WebElement elem, String text) {
        elem.sendKeys(text);
    }
}

Every page object inherits from a BasePage that wraps Appium methods.

  • This provides us an abstraction and allows us to create our own project-specific reusable actions and methods. This is a very good pattern to follow for a few reasons:
    • Say we want to provide some custom functionality, like adding a logger line to our own logging infrastructure whenever an Appium action is performed; we can wrap those methods and do this in one place.
    • Also, if a future Appium version breaks an API, the change does not cascade to all our page objects, but only to this class, where we can handle it appropriately to provide backward compatibility.
  • A word of caution ⚠: Do not dump every method in this class; try to compose only relevant actions.
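The explicit waits in BasePage delegate to WebDriverWait, which at its core is just a timed polling loop. Here is a stdlib-only sketch of that idea (an illustration of the concept, not Selenium's actual implementation):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class PollingWait {
    // Poll a condition until it returns true or the timeout elapses,
    // which is conceptually what WebDriverWait.until(...) does.
    static boolean until(Supplier<Boolean> condition, Duration timeout, Duration interval)
            throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (condition.get()) {
                return true;
            }
            Thread.sleep(interval.toMillis());
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // A condition that becomes true after roughly 150 ms.
        Instant ready = Instant.now().plusMillis(150);
        boolean ok = until(() -> Instant.now().isAfter(ready),
                Duration.ofSeconds(2), Duration.ofMillis(50));
        System.out.println(ok);
    }
}
```

Understanding this loop makes it clear why explicit waits are preferable to fixed sleeps: they return as soon as the condition holds instead of always waiting the full duration.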

Congratulations, you’ve written your first Appium Android test. You can run it either via the IDE or via a Gradle command:

./gradlew clean build runTests -Dtag="ANDROID" -Ddevice="ANDROID"

Conclusion

You can see the entire project on GitHub at appium-fast-boilerplate. Clone it and play around with it. Hopefully, this post helps you a little bit in starting on Android automation using Appium.

In the next post, we’ll dive into the world of iOS Automation with Appium and write our first hello world test.

You could also check out https://automationhacks.io for other posts that I’ve written about software engineering and testing. 

As always, do share this with your friends or colleagues and if you have thoughts or feedback, I’d be more than happy to chat over at Twitter or elsewhere. Until next time. Happy testing and coding.

The post Writing Your First Appium Test For Android Mobile Devices appeared first on Automated Visual Testing | Applitools.

]]>
Introducing the Next Generation of Native Mobile Test Automation https://applitools.com/blog/introducing-next-generation-native-mobile-test-automation/ Tue, 29 Jun 2021 17:38:15 +0000 https://applitools.com/?p=29695 Native mobile testing can be slow and error-prone with questionable ROI. With Ultrafast Test Cloud for Native Mobile, you can now leverage Applitools Visual AI to test native mobile apps...

The post Introducing the Next Generation of Native Mobile Test Automation appeared first on Automated Visual Testing | Applitools.

]]>

Native mobile testing can be slow and error-prone with questionable ROI. With Ultrafast Test Cloud for Native Mobile, you can now leverage Applitools Visual AI to test native mobile apps with stability, speed, and security, in parallel across dozens of devices. The new offering extends the innovation of the Ultrafast Test Cloud beyond browsers and into mobile applications.

You can sign up for the early access program today!

The Challenge of Testing Native Mobile Apps

Mobile testing has a long and difficult history. Many industry-standard tools and solutions have struggled with the challenge of testing across an extremely wide range of devices, viewports and operating systems.

The approach in use by much of the industry today is to utilize a lab made up of emulators, simulators, or even large farms of real devices, and then run the tests on every device independently. The process is not only costly, slow, and insecure, but error-prone as well.

At Applitools, we had already developed technology to solve a similar problem for web testing, and we were determined to solve this issue for mobile testing too.

Announcing the Ultrafast Test Cloud for Native Mobile

Today, we are introducing the Ultrafast Test Cloud for Native Mobile. We built on the success of the Ultrafast Test Cloud Platform, which is already being used to boost the performance and quality of responsive web testing by 150 of the world’s top brands. The Ultrafast Test Cloud for Native Mobile allows teams to run automated tests on native mobile apps on a single device, and instantly render it across any desired combination of devices.

“This is the first meaningful evolution of how to test native mobile apps for the software industry in a long time,” said Gil Sever, CEO and co-founder of Applitools. “People are increasingly going to mobile for everything. One major area of improvement needed in delivering better mobile apps faster, is centered around QA and testing. We’re building upon the success of Visual AI and the Ultrafast Test Cloud to make the delivery and execution of tests for native mobile apps more consistent and faster than ever, and at a fraction of the cost.”

The Power of Visual AI and Ultrafast Test Grid

Last year we introduced our Ultrafast Test Grid, enabling teams to test for the web and responsive web applications against all combinations of browsers, devices and viewports with blazing speed. We’ve seen how some of the largest companies in the world have used the power of Visual AI and the Ultrafast Test Grid to execute their visual and functional tests more rapidly and reliably on the web.

We’re excited to now be able to offer the same speed, agility, and security for native mobile applications. If you’re familiar with our current Ultrafast Test Grid offering, you’ll find the experience a familiar one.

A side-by-side image of the Ultrafast Test Cloud for Native Mobile comparing an iPhone 7 to an iPhone 8
The Ultrafast Test Cloud for Native Mobile comparing an iPhone 7 to an iPhone 8

Mobile Apps Are an Increasingly Critical Channel

Mobile usage continues to rise globally, and more and more critical activity – from discovery to research and purchase – is taking place online via mobile devices. Consumers are demanding higher and higher quality mobile experiences, and a poorly functioning site or visual bugs can detract significantly from the user’s experience. There is a growing portion of your audience you can only convert with a five-star quality app experience.

While testing has traditionally been challenging on mobile, the Ultrafast Test Cloud for Native Mobile increases your ability to test quickly, early and often. That means you can develop a superior mobile experience at less cost than the competition, and stand out from the crowd.

Get Early Access

With this announcement, we’re also launching our free early access program, with access to be granted on a limited basis at first. Prioritization will be given to those who register early. To learn more, visit the link below.

The post Introducing the Next Generation of Native Mobile Test Automation appeared first on Automated Visual Testing | Applitools.

]]>
A Comprehensive Guide to Testing and Automating Data Analytics Events on Web & Mobile https://applitools.com/blog/guide-testing-automating-data-analytics-events-web-mobile/ Mon, 14 Jun 2021 21:07:07 +0000 https://applitools.com/?p=29312 I have been testing Analytics for the past 10+ years. In the initial days, it was very painful and error-prone, as I was doing this manually. Over the years, as...

The post A Comprehensive Guide to Testing and Automating Data Analytics Events on Web & Mobile appeared first on Automated Visual Testing | Applitools.

]]>

I have been testing Analytics for the past 10+ years. In the initial days, it was very painful and error-prone, as I was doing this manually. Over the years, as I understood this niche area better, and spent time understanding the reason and impact of data Analytics on any product and business, I started getting smarter about how to test analytics events well.

This post will focus on how to test Analytics for Mobile apps (Android / iOS), and also answer some questions I have gotten from the community regarding the same.

What is Analytics?

Analytics is the “air your product breathes”.  Analytics allows teams to:

  • Know their users
  • Measure outcome and value
  • Make decisions

Why is Analytics important?

Analytics allows the business team and product team to understand how well (or not) the features are being used by the users of the system. Without this data, the team would (almost) be shooting in the dark for the ways the product needs to evolve.

The analytics information is critical to understanding where in the feature journeys the user “drops off”; the inference will then provide insights into whether the drop is because of the way the features have been designed, because the user experience is not adequate, or, of course, because there is a defect in the implementation.

How do teams use Analytics?

For any team to know how their product is used by the users, you need to instrument your product so that it can share with you meaningful (non-private) information about the usage of your product. From this data, the team would try to infer context and usage patterns which would serve as inputs to make the product better. 

The instrumentation I refer to above is of different types. 

This can be logs sent to your servers – typically these are technical information about the product. 

Another form of instrumentation is analytics events. These capture the nature of an interaction and its associated metadata, and send that information to (typically) a separate server / tool. This information is sent asynchronously and has no impact on the functioning or performance of the product.

This is typically a 4 step process:

  • Capture
    • You need to know what data you want, and why. 
    • Implement the capturing of data based on specific user action(s)
  • Collect
    • The captured data needs to be collected in a central server. 
    • There are many Analytics tools available (commercial & open-source)
    • Many organizations end up building their own tool based on specific customisations / requirements
  • Prepare data for Analysis
    • The collected data needs to be analysed and put in context to make meaning
  • Report
    • Based on the context of the analysed data, reports would be generated that show patterns with details and reasons
    • This allows teams to evolve the product in better ways for business and their users

How to implement Analytics in your product?

Once you know what information you want to capture and when, implementing Analytics into your product goes through the same process as for your regular product features & functionalities.


Implementing Analytics

Embedding and Triggering an Analytics Library

Step 1: Embed Analytics library

The analytics library is typically a very lightweight library, and is added as a part of your web pages or your native apps (Android or iOS).

Step 2: Trigger the event

Once the library is embedded in the product, whenever the user does any specific, predetermined actions, the front-end client code would capture all the relevant information regarding the event, and then trigger a call to the analytic tool being used with that information.

Ex: Trigger an analytics event when user “clicks on the search button”

The data in the triggered event can be sent in 2 ways:

  1. As part of query parameters in the request.
  2. As part of the POST body in the request. This is a preferred approach if the data to be sent is large.

What is an Analytics Event?

An analytics event is a simple HTTPS request sent to the analytics tool(s) your product uses. Yes, your product may be using multiple tools to capture and visualise different types of information.

Below is an example of an analytics event.

An example of an analytics event

Let’s dissect this call to understand what it is doing:

  • The request in the above example is from my blog, which is using Google Analytics as the tool to capture and understand the readers of my blog.
  • The request itself is a straightforward HTTPS call to the “collect” API endpoint.
  • The real information, as shown in this call, is in the query parameters associated with the request.
    • For the above request, here is a closer look at the query parameters
A look at the query parameters of our example
  • The query parameters are the collection of information captured to understand what the user (reader of my blog) did.
  • The name-value pairs of the query parameters may seem cryptic – and that is not wrong. They are probably designed in this fashion for the following reasons:
    • To reduce the packet size of these requests – which reduces network load, and the eventual processing load on the analytics tool as well
    • To try and mask what information is being captured. This was probably more relevant in the http days. Ex: “dnt=1” may indicate that the user has set the preference “do-not-track=true”
    • The mapping is created based on the analytics tool
  • Even if the request is sent as part of the POST body, it would have a similar payload
  • When the request reaches the analytics tool, the tool processes each request based on the mapping it had created, and generates reports and charts from the information it received
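To make the shape of such a request concrete, here is a small sketch that assembles a Measurement-Protocol-style collect URL from name-value pairs (the parameter values below are illustrative, not taken from a real session):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectUrl {
    // Join URL-encoded name-value pairs into a query string and append
    // it to the collect endpoint, as an analytics client library would.
    static String buildCollectUrl(String endpoint, Map<String, String> params) {
        String query = params.entrySet().stream()
                .map(e -> e.getKey() + "="
                        + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
        return endpoint + "?" + query;
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("v", "1");        // protocol version
        params.put("t", "pageview"); // hit type
        params.put("dp", "/home");   // document path
        System.out.println(buildCollectUrl(
                "https://www.google-analytics.com/collect", params));
    }
}
```

Note how even a path like `/home` gets percent-encoded, which is part of why captured analytics requests look cryptic at first glance.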

Different ways to test Analytics events

There are different ways to test Analytics events. Let’s understand the same.

Test at the source

Well, if testing the end report is too late, then we need to shift-left and test at the source.

During Development

Based on requirements, the (front-end) developers add the analytics library to the web pages or native apps. They then set the trigger points at which an event should be captured and sent to the analytics tool.

A good practice is for the analytics event generation and trigger to be implemented as a common function / module, which will be called by any functionality that needs to send an analytics event.

This will allow the developers to write unit tests to ensure:

  1. All the functionalities that need to trigger an event are collecting the correct and expected data (which will be converted to query parameters) to be sent to the common module
  2. The event generation module is working as expected – i.e. the request is created with the right structure and parameters (as received from its callers)
  3. The event can be sent / triggered with the correct structure and details as expected

This approach will ensure that your event triggering and generation logic is well tested. These tests can run on developer machines as well as in the build pipelines / jobs on your CI (Continuous Integration) server, so you get quick feedback in case anything goes wrong.
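As a sketch of what that common module and its unit checks might look like – the function name, endpoint, and the “t” (event type) parameter below are all hypothetical, not any real tool’s API:

```python
from urllib.parse import urlencode

def build_analytics_event(endpoint, data):
    """Common module: turn collected data into an analytics request URL.

    `endpoint` and the name-value mapping in `data` are hypothetical --
    in a real product they come from the analytics tool's documentation.
    """
    if not data.get("t"):
        raise ValueError("every event needs a type ('t') parameter")
    return f"{endpoint}?{urlencode(data)}"

# Unit-test style checks mirroring the three goals above:
# 1. callers collect the correct data, 2. the request has the right
# structure, 3. the event is generated with the expected details.
event = build_analytics_event(
    "https://analytics.example.com/collect",
    {"t": "pageview", "dl": "https://example.com/home"},
)
assert event.startswith("https://analytics.example.com/collect?")
assert "t=pageview" in event
```

Because the module is plain logic with no UI dependency, these checks run in milliseconds on any machine.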

During Manual / Exploratory Testing

While unit testing is critical to ensure all aspects of the code work as expected, the context of dynamic data generated by real users cannot be understood from unit tests. Hence, we also need System Tests / End-2-End tests to understand whether analytics is working well.

A sink next to a motion-sensing paper towel dispenser, arranged so that turning on the faucet automatically activates the dispenser. Titled "when you write 2 unit tests and no integration tests".

Reference: https://devrant.com/rants/754857/when-you-write-2-unit-tests-and-no-integration-tests

Let’s look at the details of how you can test Analytics Events during Testing in any of your internal testing environments:

  1. Requirements for testing
    1. You need the ability to capture / see the events being sent from your browser / mobile app
    2. For Browsers, you can simply refer to the Network tab in the Developer Tools
    3. For Native Apps, set up a proxy server on your computer and configure the device to route its traffic through that proxy. Then launch the app and interact with the functionality; all API requests (including analytics event requests) will be captured by the proxy server on your computer
  2. Based on the types of actions performed by you in the browser or the native app, you will be able to verify the details of those requests from the Network tab / Proxy server.

These details include the name of the event and the values of the query parameters.

This step is very important, and different from what your unit tests are able to validate. With this approach, you would be able to verify:

  • Aspects like dynamic data (in the query parameters)
  • If any request is repeated / duplicated
  • Whether any request is not getting triggered from your product
  • If requests get triggered on different browsers or devices

All of the above can be tested and verified even if you do not have the analytics tool set up or configured per business requirements.

The advantage of this approach is that it complements the unit testing, and ensures that your product is behaving as expected in all scenarios.

The main challenge / disadvantage of this approach is that it is manual testing. It is easy to miss certain scenarios or details in each manual test cycle, and the approach is impossible to scale and repeat.

As part of Test Automation 

Hence, we need a better approach. Just as unit tests are automated, the testing activity above should be automated as well. The next section describes a solution for automating the testing of analytics events as part of your System / end-2-end test automation.

Test the end-report

This is unfortunately the most common approach teams take to test whether analytics events are being captured correctly – and it may end up happening only in production, after the app has been released to its users. But you need to test early. Hence the above technique of Testing at the source is critical, so the team knows whether the events are being triggered and validated as soon as the implementation is completed.

I would recommend this strategy after you have completed Testing at the Source.

A collection of charts and graphs for Testing the End Report

There are pros and cons of this approach.

Pros and Cons of Testing the End Report - pros include ensuring the report is set up correctly, cons include licensing, reports not yet set up, and validating all requests are sent / captured.

The biggest disadvantage though of the above approach is that it is too late!

The biggest problem with testing the end report is that it's too late!

That said, there is still a lot of value in doing this. It indicates that your analytics tool is configured correctly to accept the data, and that you are able to set up meaningful charts and reports that reveal patterns and allow you to identify and prioritise the next steps to make the product better.

Automating Analytics Events 

Let’s look at the approach to automate testing of Analytics events as part of your System / end-2-end Test Automation.

We will talk separately about Web & Mobile, as each needs a slightly different approach.

Web

Assumptions

  • The below technique assumes you are using Selenium WebDriver for your System / end-2-end automation. But you could implement a similar solution based on any other tools / technologies of your choice.

Prerequisites

  1. You already have System / end-2-end test automated using Selenium Webdriver
  2. For each System / end-2-end test automated, have a full list of the Analytics events that are expected to be triggered, with all the expected query parameters (name & value)

Integrating with Functional Automation

There are 2 options to accomplish the Analytics event test automation for Web. They are as follows:

  1. Use WAAT

I built WAAT – Web Analytics Automation Testing – in Java & Ruby back in 2010. Integrate it into your automation framework using the instructions on the corresponding GitHub pages.

Here is an example of how this test would look using WAAT.

A test shown using WAAT - Web Analytics Automation Testing.

This approach will let you find the correct request and do the appropriate matching of parameters automatically.

  2. Selenium 4 (beta) with the Chrome DevTools Protocol

With Selenium 4 almost available, you could potentially use the new APIs to query network requests via the Chrome DevTools Protocol.

With this approach, you will need to write code to query the appropriate Analytics request from the list of requests captured, and compare the actual query parameters with what is expected.
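Whichever way the requests are captured (WAAT or the DevTools Protocol), the query-and-compare logic is tool-agnostic. A minimal sketch in Python, with made-up captured traffic and parameter names:

```python
from urllib.parse import urlparse, parse_qs

def find_analytics_request(captured_urls, endpoint):
    """Pick the analytics request(s) out of all captured network traffic."""
    return [u for u in captured_urls if urlparse(u).path == endpoint]

def verify_event(url, expected_params):
    """Return the expected name-value pairs missing from the request.

    An empty dict means the event matched completely.
    """
    actual = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    return {k: v for k, v in expected_params.items() if actual.get(k) != v}

# Captured traffic (illustrative): one analytics call among other requests.
captured = [
    "https://example.com/api/products",
    "https://analytics.example.com/collect?t=pageview&dl=home",
]
matches = find_analytics_request(captured, "/collect")
assert len(matches) == 1  # also catches duplicated or missing events
assert verify_event(matches[0], {"t": "pageview", "dl": "home"}) == {}
```

The same comparison also surfaces duplicated requests (more than one match) and requests that never fired (zero matches).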

That said, I will be working on enhancing WAAT with a Chrome DevTools Protocol based plugin. Keep an eye out for updates to the WAAT project in the near future.

Mobile (Android & iOS)

Assumptions

  • The below technique assumes you are using Appium for your System / end-2-end automation. But you could implement a similar solution based on any other tools / technologies of your choice.

Prerequisites

  1. You already have System / end-2-end test automated using Appium
  2. For each System / end-2-end test automated, have a full list of the Analytics events that are expected to be triggered, with all the expected query parameters (name & value)

Integrating with Functional Automation

There are 2 options to accomplish the Analytics event test automation for Mobile apps (Android / iOS). They are as follows:

  1. Use WAAT

As described for the web, you can integrate WAAT – Web Analytics Automation Testing – into your automation framework using the instructions on the corresponding GitHub pages.

On the device where the test is running, you would need to do the additional setup described in the Proxy setup for Android device section.

This approach will let you find the correct request and do the appropriate matching of parameters automatically.

  2. Instrument the app

This is a customized implementation, but can work great in some contexts. This is what you can do:

  • With developer help, instrument the app to write each analytics event to the logs as a clear, easily identifiable message
  • For each System / end-2-end test you run, follow these steps
    1. Have a list of expected analytics events with query parameters (in sequence) for this test
    2. Clear the logs on the device
    3. Run the System / end-2-end test
    4. Retrieve the logs from the device
    5. Extract all the analytics events that were added to the logs while running the System / end-2-end test
    6. Compare the actual analytics events captured with the expected results

This approach will allow us to validate events as they are being sent as a result of running the System / end-2-end tests. 
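Steps 4–6 above can be sketched as follows. The "ANALYTICS_EVENT" log marker and the log lines are hypothetical – the actual marker is whatever your developers agree to use when instrumenting the app:

```python
def extract_events(log_lines, marker="ANALYTICS_EVENT"):
    """Pull the instrumented analytics events, in order, out of device logs.

    The marker is a hypothetical tag added by the developers; everything
    after "<marker>: " on a matching line is the event payload.
    """
    return [line.split(marker + ": ", 1)[1]
            for line in log_lines if marker in line]

# Illustrative device log retrieved after a test run (e.g. via adb logcat).
device_log = [
    "07-21 10:15:01 D/App: screen rendered",
    "07-21 10:15:02 I/App: ANALYTICS_EVENT: t=screenview&sn=Home",
    "07-21 10:15:05 I/App: ANALYTICS_EVENT: t=event&ea=AddToCart",
]
expected = ["t=screenview&sn=Home", "t=event&ea=AddToCart"]
assert extract_events(device_log) == expected  # sequence and details match
```

Comparing the extracted list against the expected list verifies both the sequence and the details of the events for that test.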

Differences in Analytics for Mobile Apps Vs Web sites

As you may have noticed in the sections above for Web and Mobile, the actual testing of analytics events is essentially the same in either case. The differences lie mainly in how the events are captured, and in any proxy setup required.

There is another aspect that is different for Analytics testing for Mobile.

The analytics tool SDK / library added to the mobile app has an optimising feature – batching! This configurable feature (in most tools) allows customizing the number of requests that are collected together. Once the batch is full, or on the trigger of specific events (like closing the app), all the events in the batch are sent to the analytics tool and then cleared / reset.

This feature is important for mobile devices, as users may be on the move (or using the app in Airplane mode) and may not have internet connectivity while using the app. In such cases, if the device does not cache the analytics requests, that data may be lost. Hence it is important for the app to store the analytics events and send them at a later point when connectivity is available.

Also, another reason batching of analytics events helps is to minimize the network traffic generated by the app.

So when automating mobile analytics events, make sure at the end of the test that the batched events have actually been triggered (i.e. flushed) from the app – only then will they be seen in the logs or proxy server, and only then can validation be done.

While batching can be a problem for Test Automation (since the events will not be generated / seen immediately), you could take one of these 2 approaches to make your tests deterministic:

  • Configure the batch size to be 1, or turn off batching, so that events are triggered immediately. This can be done for your apps in non-prod environments or as part of debug builds.
  • Trigger the flushing of the batch through an action in the app (ex. Closing / minimizing the app). Talk to the developers to understand what actions will work for your app.
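To see why these two approaches make tests deterministic, here is a simplified model of batching – not any real SDK’s API, just the behaviour described above:

```python
class AnalyticsBatcher:
    """Toy model of an analytics SDK's batching behaviour."""

    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.pending = []   # events cached on the device
        self.sent = []      # stands in for requests seen by the proxy/logs

    def track(self, event):
        self.pending.append(event)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        # e.g. triggered by closing / minimizing the app
        self.sent.extend(self.pending)
        self.pending.clear()

# With batch size 1, every event is sent immediately -- deterministic tests.
b = AnalyticsBatcher(batch_size=1)
b.track("t=pageview")
assert b.sent == ["t=pageview"]

# With a larger batch, the test must flush before validating.
b = AnalyticsBatcher(batch_size=10)
b.track("t=event")
assert b.sent == []  # not visible to the proxy yet!
b.flush()
assert b.sent == ["t=event"]
```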

A Comprehensive System / end-2-end Test Automation Solution

I like to have my System Tests / end-2-end Test Automation solution to have the following capabilities built in:

  • Ability to run tests on multiple platforms (web and native mobile – android & iOS)
  • Run tests in parallel
  • Tests manage their own test data
  • Rich reporting
  • Visual Testing using Applitools Visual AI
  • Analytics Events validation

See this post on Automating Functional / End-2-End Tests Across Multiple Platforms for implementation details on building a robust, scalable and maintainable cross-platform Test Automation Framework.

Answers to questions from community

  • How do you add the automated event tests on Android and iOS to the delivery pipeline?
    • If you have automated the tests for Mobile using either of the above approaches, the same would work from CI as well. Of course, the CI agent machines (where the tests will be running) would need to have the same setup as discussed here.
  • How do you make sure that new and old builds are working fine?
    • The expected analytics events are compared with every new build / app generated. Any difference found there will be highlighted as part of your System / end-2-end test execution
  • Is UI testing mandatory to do event testing?
    • There are different levels of testing for Analytics. Refer to the Test at the source section. Ask yourself the question – what is the risk to the business team IF any set of events does not work correctly or relevant details are not captured? If there is a big risk, then it is advisable to do some form of System / end-2-end test automation and integrate Analytics automation along with that.
  • Any suggestions on a shift-right approach here?
    • We should actually be shifting left. Ensure Analytics requirements are part of the requirements, and build and test this along with the actual implementation and testing to prevent surprises later.
  • How do we make sure everything is working fine in Production? Should we consider an alerting mechanism in case of sudden spike or loss of events?
    • You could have a smoke test suite that runs against Production. These tests can validate functionality and analytics events.
    • Regarding the alerts, it is always good to have these setup. The alerts would depend on the Analytics tool that you are using. That said, the nature of alerts would depend on specific functionality of the application-under-test. 
  • What happens when there are a lot of events to be automated? How do you prioritize?
    • Take help from your product team to prioritise. While all events are important, not all are critical. Do a cost / value analysis and start based on that.
  • Testing at source means only UI testing or native testing? You mentioned about a debug app file, so is it possible to automate the events with native frameworks like espresso and XCUITest or only with Appium?
    • There are 2 aspects of testing at the source – development & testing. Based on this understanding, figure out what unit testing can be done, and what will trigger the tests in the context of integrated testing. If your automated tests using either Espresso or XCUITest can simulate the user actions, which will in turn trigger the events from the app when the test runs, then you can do analytics automation at that level as well.
  • Once the events are sent to the Analytics tool, the data would be stored in the database. How do you ensure that events are saved in the database? Did you have any other end to end tests to verify that? How do we make sure that? Verifying the network logs alone doesn’t guarantee that events will be dispatched to database
    • The product / app does not write the events to the database. 
      • You are testing your product, and not the Analytics tool
      • The app makes an https call to send the event with details to an independent Analytics server – which chooses to put this in some data store, in their defined schema. 
      • This aspect in most likelihood will be transparent to you. 
      • Also, in most cases, no one will have access to the Analytics tool’s data store directly. So it does not make sense to verify the data is there in the database.
    • Another thing to consider – you / the team would be choosing the Analytics tool based on the features it offers, its reliability and stability. So you should not be needing to “test” the Analytics tool, but instead, focus on the integration to ensure everything your team is building, is tested well.
    • So my suggestion is:
      • Test at the source (unit tests + System / end-2-end tests), for each new build
      • Test the end report to ensure final integration is working well, in test and production
  • Sometimes we make use of 3rd party products like https://segment.com/ to segregate events to different endpoints. As a result, sometimes only a subset of the events (based on business rules / cost optimizations) might reach the target endpoint. How do you manage these in an automation environment?
    • Same answer as above.

The post A Comprehensive Guide to Testing and Automating Data Analytics Events on Web & Mobile appeared first on Automated Visual Testing | Applitools.

]]>
What is Mobile Testing? https://applitools.com/blog/what-is-mobile-testing/ Thu, 15 Apr 2021 18:29:57 +0000 https://applitools.com/?p=28516 In this guide, you’ll learn the basics of what it means to test mobile applications. We’ll talk about why mobile testing is important, key types of mobile testing, as well as considerations and best practices to keep in mind.

The post What is Mobile Testing? appeared first on Automated Visual Testing | Applitools.

]]>

In this guide, you’ll learn the basics of what it means to test mobile applications. We’ll talk about why mobile testing is important, key types of mobile testing, as well as considerations and best practices to keep in mind.

What is Mobile Application Testing?

Mobile testing is the process by which applications for modern mobile devices are tested for functionality, usability, performance and much more. 

Note: This includes testing for native mobile apps as well as for responsive web or hybrid apps. We’ll talk more about the differences between these types of mobile applications below.

Mobile application testing can be automated or manual, and helps you ensure that the application you’re delivering to users meets all business requirements as well as user expectations.

Why is Mobile Testing Important?

Mobile internet usage continues to rise even as desktop/laptop internet usage is declining, a trend that has continued unabated for years. As more and more users spend an increasing amount of their time on mobile devices, it’s critical to provide a good experience on your mobile apps.

If you’re not testing the mobile experience your users are receiving, then you can’t know how well your application serves a large and growing portion of your users. Failing to understand this leads to dreaded one-star app reviews and negative feedback on social media.

Mobile app testing ensures your mobile experience is strong, no matter what kind of app you’re using or what platform it is developed for.

Key Considerations of Mobile Testing

As you consider your mobile testing strategy, there are a number of things that are important to keep in mind in order to plan and execute an optimal approach.

Types of Mobile Apps

There are three general categories of mobile applications that you may need to test today:

  • Native Apps are designed specifically for a particular mobile platform (today this typically means either Android or iOS) and are generally downloaded and installed via an app store like Apple’s App Store or Google’s Play Store. This includes both pure native apps built on Java/Kotlin for Android or Objective-C/Swift for iOS, as well as cross-platform native applications built with frameworks like ReactNative, Flutter and NativeScript.
  • Responsive Web Apps are designed to be accessed on a mobile browser. Web apps can be either a responsive version of a website or a progressive web app (PWA), which adds additional mobile-friendly features.
  • Hybrid Apps are designed as a compromise between native and web apps. Hybrid apps can be installed via app stores just like native apps and may have some native functionality, but at least partially rely on operating essentially as web apps wrapped in a native shell.

Differences between Mobile and Web Testing

There are additional complexities that you need to consider when testing mobile applications, even if you are testing a web app. Mobile users will interact with your app on a large variety of operating systems and devices (Android in particular has numerous operating system versions and devices in wide circulation), with any number of standard resolutions and device-specific functionalities. 

Even beyond the unique devices themselves, mobile users find themselves in different situations than desktop/laptop web users that need to be accounted for in testing. This includes signal strength, battery life, and even contrast and brightness as the environment frequently changes.

Ensuring broad test coverage across even just the most common scenarios can be a complex challenge.

Key Types of Mobile Testing

There are a lot of different and important ways to test your mobile application. Here are some of the most common.

Functional Testing

Functional testing is necessary to ensure the basic functions are performing as expected. It provides the appropriate input and verifies the output. It focuses on things like checking standard functionalities and error conditions, along with basic usability.

Usability Testing

Usability testing, or user experience testing, goes further than functional testing in evaluating ease of use and intuitiveness. It focuses on trying to simulate the real experience of a customer using the app to find places where they might get stuck or struggle to utilize the application as intended, or just generally have a poor experience.

Compatibility, performance, accessibility and load testing are other common types of mobile tests to consider.

Manual Testing vs Automated Testing for Mobile

Manual testing is testing done solely by a human, who independently tests the app and methodically searches for issues that a user might encounter and logs them. Automated testing takes certain tasks out of the hands of humans and places them into an automation tool, freeing up human testers for other tasks.

Both types of testing have their advantages. Manual testing can take advantage of human intuitiveness to uncover unexpected errors, but can also be extremely time-consuming. Automated testing saves much of this time and is particularly effective on repetitive tests, but can miss less obvious cases that manual testing might catch.

Whether you use one method or a hybrid approach in your testing will depend on the requirements of your application.

Top Open Source Tools for Mobile Test Automation

There are a number of popular and open source tools and frameworks for testing your mobile apps. A few of the most common include:

  • Espresso – Android-specific and geared towards developers (recommended by Google).
  • XCUITest – iOS specific and geared towards developers (recommended by Apple).
  • Appium – Cross-platform and easy to use, with strong community support.
  • Calabash – Cross-platform with support for Cucumber, Xamarin-based and also easy to use.

For more, you can see a comparison of Appium vs Espresso vs XCUITest here.

Automated Visual Testing for Mobile

Another type of testing to keep in mind is automated visual testing. Traditional testing approaches rely on validating against the code, but this can result in flaky tests in some situations, particularly in complex mobile environments. Visual testing works by comparing visual screenshots instead.

Visual testing can be powerful on mobile applications. While the traditional pixel-to-pixel approach can still be quite flaky and prone to false positives, advances in visual AI – trained against billions of images – make automated visual testing today increasingly accurate.

You can read more about the benefits of visual testing for mobile apps and see a quick example here.

Wrapping Up

Mobile testing can be a complex challenge due to the wide variety of hardware and software variations in common usage today. However, as mobile internet use continues to soar, the quality of your mobile applications is more critical than ever. Understanding the types of tests you need to run, and then executing them with the tools that will make you most effective, will ensure you can deliver your mobile apps in less time and with a superior user experience.

Happy testing!

Keep Reading: Top Educational Resources about Mobile Testing

The post What is Mobile Testing? appeared first on Automated Visual Testing | Applitools.

]]>
Appium vs Espresso vs XCUITest – Understanding how Appium Compares to Espresso & XCUITest https://applitools.com/blog/appium-vs-espresso-vs-xcui/ Fri, 12 Mar 2021 02:06:42 +0000 https://applitools.com/?p=27600 In this article we shall look at the Appium, Espresso and XCUITest test automation frameworks. We’ll learn the key differences between them, as well as when and why you should use...

The post Appium vs Espresso vs XCUITest – Understanding how Appium Compares to Espresso & XCUITest appeared first on Automated Visual Testing | Applitools.

]]>

In this article we shall look at the Appium, Espresso and XCUITest test automation frameworks. We’ll learn the key differences between them, as well as when and why you should use them in your own testing environment.

What is Appium

Appium is an open source test automation framework which is completely maintained by the community. Appium can automate native, hybrid, mobile web, Mac apps and Windows apps. Appium follows the W3C WebDriver protocol, which enables the use of the same test code for both Android and iOS applications.

Under the hood, Appium uses Espresso or UIAutomator2 to communicate with Android apps, and XCUITest for iOS. In a nutshell, Appium provides a stable WebDriver interface on top of the automation backends provided by Google and Apple.

Installing Appium was a bit of a hassle for a long time; hence, from Appium 2.0 the architecture lets you choose to install only the drivers and plugins you want. You can find more details about Appium 2.0 here.

Highlights of Appium

  • When Espresso or XCUITest upgrades its API contract, Appium makes the necessary changes under the hood; your tests remain unchanged and work as before.
  • Supports cross-platform testing, i.e., write one test that runs across many platforms.
  • Allows users to write tests in WebDriver-compatible languages – Java, Python, C#, Ruby, JS, etc.
  • Does not require the application to be recompiled, as it uses standard automation APIs across all platforms.
  • A black box testing tool which also supports gray box testing to some extent via the Espresso driver’s backdoor capability.
  • Can switch between the Espresso driver and the UIAutomator2 driver for Android in a single session. (For example: create a session with the Espresso driver, then move to UIAutomator2 to perform actions outside of the application under test.)
View the code on Gist.
  • The latest Webdriver W3C Actions API is designed in such a way that any complex gestures can be designed and executed on any platform, e.g., Android, iOS. Below is an example of a swipe gesture that runs on Android and iOS platforms.
View the code on Gist.
View the code on Gist.
  • Appium has locator strategies specific to Espresso, e.g., the Data Matcher strategy, and to XCUITest, e.g., NSPredicate and Class Chain.
View the code on Gist.

What is Espresso

Espresso is an Android test framework developed by Google for UI testing. Espresso automatically synchronizes test actions with the user interface of the mobile app and ensures that the activity is started well before the actual test runs.

Highlights of Espresso

  • Feedback cycle is fast as it doesn’t require server communication. 
  • Allows users to create custom view matchers and is based on Hamcrest matchers.
  • The Espresso framework sits between black box and white box testing, commonly called a gray box testing framework.
  • Accessibility testing is possible in Native Espresso.
  • Requires the entire code base to run the test.
  • The locator strategy is IDs from the R file.
  • Applitools Ultrafast Grid can be used to perform visual testing.
  • For testing webviews, Espresso internally uses WebDriver APIs to control the behaviour of a webview.
View the code on Gist.

What is XCUITest

The XCUITest framework from Apple helps users write UI tests straight inside Xcode, with a separate UI testing target in the app.

XCUITest uses accessibility identifiers to interact with the main iOS app. XCUITests can be written in Swift or Objective-C. 

There isn’t a reliable framework out there which easily supports testing on Apple TV devices; XCUITest is the only way to verify tvOS apps. Since Xcode 7, Apple has shipped XCTest prebuilt into its development kit.

Highlights of XCUITest

  • Runs in a separate process from the main iOS app and doesn’t access the application’s internal methods.
  • XCUIElement class in XCUITest provides gestures such as tap, press, swipe, pinch and rotate.
View the code on Gist.
  • Offers several inbuilt expectation types for assertions, e.g., XCTKVOExpectation, XCTNSNotificationExpectation, XCTDarwinNotificationExpectation, XCTNSPredicateExpectation, etc.
View the code on Gist.

Conclusion

Appium, Espresso and XCUITest can each fill different needs for UI testing. The way to choose between them is to consider the requirements of your project. If your scope is limited to one platform and you want comprehensive, embedded UI testing, XCUITest or Espresso is a great fit. For cross-platform testing across iOS, Android, and hybrid apps, Appium is your best choice.

The post Appium vs Espresso vs XCUITest – Understanding how Appium Compares to Espresso & XCUITest appeared first on Automated Visual Testing | Applitools.

]]>
How Do I Test Mobile Apps At Scale With Google Firebase TestLab And Applitools? https://applitools.com/blog/validate-google-firebase/ Tue, 01 Sep 2020 23:56:57 +0000 https://applitools.com/?p=22033 Google Firebase Test Lab is a cloud-based app-testing infrastructure. With one operation, you can test your Android or iOS app across a wide variety of devices and device configurations, and see the results—including...

The post How Do I Test Mobile Apps At Scale With Google Firebase TestLab And Applitools? appeared first on Automated Visual Testing | Applitools.

]]>

Google Firebase Test Lab is a cloud-based app-testing infrastructure. With one operation, you can test your Android or iOS app across a wide variety of devices and device configurations, and see the results—including logs, videos, and screenshots—in the Firebase console.

Firebase Test Lab runs Espresso and UI Automator 2.0 tests on Android apps, and XCTest tests on iOS apps. Write tests using one of those frameworks, then run them through the Firebase console or the gcloud command line interface.

Firebase Test Lab lets you run the following types of tests:

  • Instrumentation test: A test you write that allows you to drive the UI of your app with the actions you specify. An instrumentation test can also make explicit assertions about the state of your app to verify correct functionality using AndroidJUnitRunner APIs. Test Lab supports Espresso and UI Automator 2.0 instrumentation test frameworks.
  • Robo test: A test that analyses your app’s interface and then explores it automatically by simulating user activities.
  • Game loop test: A test that uses a “demo mode” to simulate player actions in game apps.

How do I validate the Visual UI in Multiple Devices?

As with all web and mobile applications, Applitools offers an easy, consistent way to collect visual data from multiple device types running different viewport sizes. In the rest of this article, you will run through a demonstration of using Applitools with Google Firebase Test Lab.

Demo

In this demo I have chosen a simple “Hello World” app. To get you up and running, we already have an example Espresso instrumentation test; you can find the complete project here: https://github.com/applitools/eyes-android-hello-world

Prerequisites

I know you have already looked into the GitHub repo. Let’s just get a few more prerequisites installed and make sure they are ready to use before we dive in. Make sure you have installed and/or configured the following:

  • Java Installed
  • JAVA_HOME environment variable is set to the Java SDK path
  • Android Studio IDE

Installing the Android Studio

Now let’s install Android Studio / SDK so that you can run the test script on an emulator or real device. You could install the Android SDK only, but then you would have to perform additional advanced steps to properly configure the Android environment on your computer. I highly recommend installing Android Studio, as it makes your life easier.

Download the Android Studio executable. Follow the steps below to install it locally on your computer:

Run the Visual UI Test locally

1. Get the code:

  • Option 1: git clone https://github.com/applitools/eyes-android-hello-world
  • Option 2: Download it as a Zip file and unzip it.

2. Import the project into Android Studio


The Script

Let’s look at the instrumented test ExampleInstrumentedTest under androidTest.

https://github.com/applitools/eyes-android-hello-world/blob/master/app/src/androidTest/java/com/applitools/helloworld/android/ExampleInstrumentedTest.java


Before we run the test on Firebase, let’s run it on a local emulator.

  1. Configure a local device emulator from the AVD Manager in Android Studio.
  2. Insert your Applitools API key in the test.
  3. Launch SimpleTest() from ExampleInstrumentedTest.java. This will launch the instrumentation test in the emulator.

That’s pretty easy, isn’t it? Applitools will now capture each screen where eyes.checkWindow() is called and create a baseline on the first run.
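The same instrumented test can also be launched from the command line with the Gradle wrapper against whichever emulator or device adb currently sees (a sketch, assuming the repo’s standard Gradle setup):

```shell
# Run all instrumented (androidTest) tests on the connected emulator/device.
# Make sure the emulator is booted and visible via `adb devices` first.
./gradlew connectedDebugAndroidTest
```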

Once the test completes, you can analyze the test results on the Applitools dashboard.


Firebase

Now let’s run the test on Firebase devices. To do this, we first need a Firebase account, so let’s create one.

Step 1: Navigate to https://firebase.google.com/ and click Sign In.

Step 2: Click Go to Console to navigate to the console dashboard.


Step 3: Create a project. Once you have created it, you are all set to explore the dashboard and look through all the features available.


Step 4: Let’s add the configurations in Android Studio to run the tests.

  • Click Run and select Edit Configurations.
  • Click the + button to create a new Android launch/debug configuration based on a template.
  • Select Android Instrumented Tests > in the Module field select app >
    Test as Class > com.applitools.helloworld.android.ExampleInstrumentedTest
  • Now, under Deployment Target Options, select Firebase Test Lab Device Matrix.

Step 5: Sign in with your Google Firebase account and click OK.

Step 6: Re-open Edit Configurations.

Now you can see the configuration settings for the device matrix and the cloud project.

Select your project and add one or more custom devices from the list of 150+. For now, let’s add two devices running Android 9.x (API level 28, Pie), choosing a locale and orientation for each.

We will use these devices to run our Instrumentation test on Firebase.


That’s it! We are all set to run our test on Firebase.

Click Run Example Instrumented Test. This will execute your tests on the devices you selected on Firebase.

Go back to Test Lab on Firebase, and you can see your tests running there in parallel, with visual comparison checks done on the Applitools AI platform.


Conclusion

Applitools allows you to test your mobile app by running it on any device lab. Google Firebase provides a streamlined platform for developers (build) and quality engineers (test) to run tests on any device configuration. The integration makes it easier to use the best of both platforms for the best quality applications.

For More Information

The post How Do I Test Mobile Apps At Scale With Google Firebase TestLab And Applitools? appeared first on Automated Visual Testing | Applitools.

]]>
Visual Testing with Applitools, Appium, and Amazon AWS Device Farm https://applitools.com/blog/visual-testing-appium-amazon/ Thu, 18 Jun 2020 17:28:18 +0000 https://applitools.com/?p=19734 Visual UI testing is more than just testing your app on Desktop browsers and Mobile emulators. In fact, you can do more with Visual UI testing to run your tests...

The post Visual Testing with Applitools, Appium, and Amazon AWS Device Farm appeared first on Automated Visual Testing | Applitools.

]]>

Visual UI testing is more than just testing your app on Desktop browsers and Mobile emulators. In fact, you can do more with Visual UI testing to run your tests over physical mobile devices.

Visual UI testing compares the visually-rendered output of an application against itself in older iterations. Users call this type of test version checking. Some users apply visual testing for cross-browser tests. They run the same software version across different target devices/operating systems/browsers/viewports. For either purpose, we need a testing solution that has high accuracy, speed, and works with a range of browsers and devices. For these reasons, we chose Applitools.

Running your Visual UI testing across physical devices means having to set up your own local environment to run the tests. Imagine the number of devices, screen resolutions, operating systems, and computers you’d need! It would be frustratingly boring, expensive, and extremely time-consuming.

This is where Amazon’s AWS Device Farm comes into play. This powerful service can build a testing environment. It uses physical mobile devices to run your tests! All you do is upload your tests to Amazon, specify the devices you want, and it will take it from there!

In one of my recent articles, How Visual UI Testing Can Speed Up DevOps Flow, I showed how you can configure a CI/CD service to run your Visual UI tests. The end result would be the same whether you are running your tests locally or via such services. Once the tests run, you can always check the results in the Applitools Test Manager dashboard.

In this article, I will show you how you can run your Visual UI tests, whether you’ve written them for your mobile or web app, on real physical mobile devices in the cloud. For this, I will be employing Applitools, Appium, and AWS Device Farm.

AWS Device Farm for mobile visual testing

AWS Device Farm is a mobile app testing platform that helps developers automatically test their apps on hundreds of real devices in minutes.

When it comes to testing your app over mobile devices, the choices are numerous. Amazon helps to build a “Device Farm” on behalf of the developers and testers, hence the name.

Here are some of the major advantages and features for using this service:

  1. Cross-platform. Android and iOS platforms (Native, Hybrid, and Web) are all supported. This includes native apps built with:
    • Java/Kotlin for Android.
    • Swift for iOS. 
    • PhoneGap.
    • Xamarin.
    • Unity.
    • And web apps built for mobile browsers.
  2. Scale. AWS Device Farm supports hundreds of unique physical devices, categorized by make, model, and operating system. You may also choose to run your tests across multiple instances of the same device. All these devices are available for you with a few mouse clicks!
  3. Safety and Security. AWS Device Farm provides full hardware and software isolation. The devices are physically isolated from one another! They cannot feed each other, so there’s no way for one phone to take a photo, video, or audio recording of a device sitting next to it. In addition, the devices are not visible to each other from a wireless or network point of view. Bluetooth and Wi-Fi traffic is not shared. On the software side, the devices are dynamically tethered to a host machine. When you run your test on a host machine, it has the device plugged into it over USB. That very host machine, that executes your code, is brought up on the fly, runs your code, and is then torn down. It’s never reused between customers.
  4. Reporting. The test results, together with any screenshots, videos, logs and performance logs, are all logged, and saved in the cloud. The AWS Device Farm offers a rich Dashboard to allow you to browse any of these logs in order to debug your test runs.

AWS Device Farm supports a number of test runners. This includes Appium Java JUnit, Appium Python, Appium Ruby, and Appium Java TestNG. Back in January 2019, Amazon announced support for the Appium Node.js test runner. This means you can build your tests with Selenium Webdriver, for instance, and have it run on top of AWS Device Farm.

Now that you have an idea about AWS Device Farm, let’s move on, and discover the Appium automation testing framework.

Selenium Webdriver for browser app automation

Selenium WebDriver is a browser automation framework that allows a developer to write commands, and send them to the browser. It offers a set of clients with a variety of programming languages (Java, JavaScript, Ruby, Python, PHP and others). 

Figure 1 below shows the Selenium WebDriver architecture:

Figure 1: Selenium WebDriver Architecture

Selenium WebDriver architecture consists of:

  • Selenium Language Bindings. These bindings are Selenium Client Libraries, offered in multiple programming languages, that developers use to send control commands to the browser. For instance, a developer can open a browser instance, and query for an element in DOM, among other tasks.
  • JSON Wire Protocol. This is a REST API Protocol (JSONWP) that all WebDriver Server implementations adhere to and understand. Each of the queries and commands the developers write using the Selenium Client Library are converted to HTTP Requests, with the query or command, as payload in the JavaScript Object Notation (JSON) format, and is sent to the WebDriver Server.
  • Browser Drivers. These are WebDriver Server implementations for a variety of browsers. A WebDriver Server implementation is nothing but an HTTP Server that receives requests from the Selenium Client Library using the JSON Wire Protocol format. It then analyzes the HTTP Request and prepares a browser-specific command to execute against the browser. If the request is a GET request, the browser-driver should return a response. Otherwise, a POST request is a one-way request to execute an action only without any response.
  • Browsers. These are the browsers that have a corresponding WebDriver Server implementation. The WebDriver Server implementations communicate with the Browsers over HTTP via the DevTools Protocol of each browser.
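To make the JSON Wire Protocol described above concrete, here is a sketch of what two representative commands look like on the wire. The endpoint paths follow JSONWP conventions; the capability and locator values are illustrative:

```javascript
// Sketch: JSONWP wraps each WebDriver command in an HTTP request with a JSON body.

// Creating a new session is a POST carrying the desired capabilities:
const newSession = {
  method: "POST",
  path: "/wd/hub/session",
  body: { desiredCapabilities: { browserName: "chrome", platformName: "Android" } },
};

// Locating an element is another POST against the session, with a strategy + value:
const findElement = {
  method: "POST",
  path: "/wd/hub/session/:sessionId/element",
  body: { using: "css selector", value: "button.navbar__burger" },
};

console.log(JSON.stringify(newSession.body));
console.log(JSON.stringify(findElement.body));
```

A GET request (for example, fetching the page title) returns a JSON response body, while a POST like the ones above executes an action.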

Selenium 4 is obsoleting the JSONWP in favor of the new W3C WebDriver standard.

Here’s a quick tutorial on using and learning Selenium WebDriver.

With that brief overview of Selenium WebDriver, let’s move on and explore Appium.

Appium for mobile app automation

Appium is an open-source tool to automate mobile app testing. It’s cross-platform, supporting test scripts for both Android and iOS, and tests can run on simulators (iOS), emulators (Android), and real devices (iOS, Android).

It’s an HTTP Server written in Node.js that creates and handles WebDriver sessions. When you install Appium, you are actually installing the Appium Server. It follows the same approach as the Selenium WebDriver, which receives HTTP requests from the Client Libraries in JSON format with the help of JSONWP. It then handles those HTTP Requests in different ways. That’s why you can make use of Selenium WebDriver language bindings, client libraries and infrastructure to connect to the Appium Server. 

Instead of connecting a Selenium WebDriver to a specific browser WebDriver, you will be connecting it to the Appium Server. Appium uses an extension of the JSONWP called the Mobile JSON Wire Protocol (MJSONWP) to support the automation of testing for native and hybrid mobile apps. 

It supports the same Selenium WebDriver clients with a variety of multiple programming languages such as Java, JavaScript, Ruby, Python, PHP and others. 

Being a Node.js HTTP Server, it works in a client-server architecture. Figure 2 below depicts the Appium Client-Server Architecture model:

Figure 2: Appium Server Architecture

Appium architecture consists of:

  • Appium Client. The Client Library that communicates with the Appium Server via a session by sending the commands and queries over HTTP in JSON format. These requests are eventually executed against the specific Mobile device (emulator or real device). 
  • Mobile JSONWP. This is the communication protocol that both Appium Clients and the Appium Server understand, used to pass along the commands and queries to be executed. The Appium Server differentiates between iOS and Android requests using the Desired Capabilities argument. It’s a collection of keys and values encoded in a JSON object, sent by Appium clients to the server when a new automation session is requested. They contain all the information about the device the tests will run against. Here’s a detailed tutorial on all possible desired capabilities to use with Appium.
  • Appium Server. It’s a Node.js HTTP server that receives requests from the client libraries, translates them into meaningful commands and passes them over to the specific UI Automator. 
  • UI Automators. They are used to execute commands against the mobile device/emulator/simulator. Examples are UiAutomator2 (Android) and XCUITest (iOS).

The results of the test session are then communicated back to the Appium Server, and back to the Client in the form of logs, using the Mobile JSONWP.
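As a concrete illustration of the Desired Capabilities object described above, here is a small sketch contrasting an Android and an iOS session request. The values are illustrative defaults, not a definitive device list:

```javascript
// Sketch: Desired Capabilities are a plain JSON object; the Appium server reads
// platformName/automationName to decide which UI Automator backend to drive.
function buildCaps(platform) {
  if (platform === "android") {
    return {
      platformName: "Android",
      automationName: "UiAutomator2", // Android backend
      deviceName: "Android Emulator",
      browserName: "Chrome",
    };
  }
  return {
    platformName: "iOS",
    automationName: "XCUITest", // iOS backend
    deviceName: "iPhone Simulator",
    browserName: "Safari",
  };
}

console.log(JSON.stringify(buildCaps("android")));
console.log(JSON.stringify(buildCaps("ios")));
```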

Now that you are well equipped with knowledge for Selenium WebDriver and Appium, let’s go to the demo section of this article.

Demo

In this section, we will write a Visual UI test script to test a Web page. We will run the tests over an Android device both locally and on AWS Device Farm. 

I will be using both Selenium WebDriver and Appium to write the test script.

Prerequisites

Before you can start writing and running the test script, you have to make sure you have the following components installed and ready to be used on your computer:

  • Java installed
  • JAVA_HOME environment variable is set to the Java SDK path
  • Node.js installed

Assuming you are working on a MacOS computer, you can verify the above installations by running the following bash commands:

echo $JAVA_HOME // this should print the Java SDK path
node -v // this should print the version of Node.js installed
npm -v // this should print the version of the Node Package Manager installed

Component Installations

For this demo we need to install Appium Server, Android Studio / SDK and finally make sure to have a few environment variables properly set.

Let’s start by installing Appium Server. Run the following command to install Appium Server locally on your computer.

npm install -g appium

The command installs the Appium NPM package globally on your computer. To verify the installation, run the command:

appium -v // this should print the Appium version

Now let’s install Android Studio / SDK so that you can run the test script on an emulator or real device. You could install the Android SDK only, but then you would have to perform additional advanced steps to properly configure the Android environment on your computer. I highly recommend installing Android Studio, as it makes your life easier.

Download the Android Studio executable. Follow the steps below to install locally on your computer:

Notice the location where the Android SDK was installed. It’s /Users/{User Account}/Library/Android/sdk.

Wait until the download and installation is complete. That’s all!

Because I want to run the test script locally over an Android emulator, let’s add one.

Open the Android Studio app:

Click the Configure icon:

Select the AVD Manager menu item.

Click the + Create Virtual Device button.

Locate and click the Pixel XL device then hit Next.

Locate the Q release and click the Download link. 

Read and accept the Terms and Conditions then hit Next.

Android 10, also known as the Q release, starts downloading.

Once the installation is complete, click the Next button to continue setting up an Android device emulator.

The installation is complete. Grab the AVD Name as you will use it later on in the test script, and hit Finish.
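You can confirm the AVD name (and boot the emulator) from the command line as well. A sketch, assuming the SDK’s emulator binary is on your PATH:

```shell
# List the AVDs the SDK knows about; one of them should be the device you just created.
emulator -list-avds

# Boot it by name — this is the same name the test script's `avd` capability expects.
emulator -avd Pixel_XL_API_29
```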

Finally, we need to make sure the following environment variables are set on your computer. Open the ~/.bash_profile file, and add the following environment variables:

APPLITOOLS_API_KEY={Get the Applitools API Key from Applitools Test Manager}
export APPLITOOLS_API_KEY
ANDROID_HOME=/Users/{Use your account name here}/Library/Android/sdk
export ANDROID_HOME
ANDROID_HOME_TOOLS=$ANDROID_HOME/tools
export ANDROID_HOME_TOOLS
ANDROID_HOME_TOOLS_BIN=$ANDROID_HOME_TOOLS/bin
export ANDROID_HOME_TOOLS_BIN
ANDROID_HOME_PLATFORM=$ANDROID_HOME/platform-tools
export ANDROID_HOME_PLATFORM
APPIUM_ENV="Local"
export APPIUM_ENV

Finally, add the above environment variables to the $PATH as follows:

export PATH=$PATH:$ANDROID_HOME:$ANDROID_HOME_TOOLS:$ANDROID_HOME_TOOLS_BIN:$ANDROID_HOME_PLATFORM

One last major component that you need to download, and have on your computer, is the ChromeDriver. Navigate to the Appium ChromeDriver website, and download the latest workable ChromeDriver release for Appium. Once downloaded, make sure to move the file to the location: /usr/local/bin/chromedriver

That’s it for the installations! Let’s move on and explore the Visual UI test script in depth.

Run the Visual UI Test Script locally

You can find the source code demo of this article on this GitHub repo.

Let’s explore the main test script in this repo.

"use strict";

;(async () => {

    const webdriver = require("selenium-webdriver");
    const LOCAL_APPIUM = "http://127.0.0.1:4723/wd/hub";

    // Initialize the eyes SDK and set your private API key.
    const { Eyes, Target, FileLogHandler, BatchInfo, StitchMode } = require("@applitools/eyes-selenium");

    const batchInfo = new BatchInfo("AWS Device Farm");
    batchInfo.id = process.env.BATCH_ID
    batchInfo.setSequenceName('AWS Device Farm Batches');
    
    // Initialize the eyes SDK
    let eyes = new Eyes();
    eyes.setApiKey(process.env.APPLITOOLS_API_KEY);
    eyes.setLogHandler(new FileLogHandler(true));
    eyes.setForceFullPageScreenshot(true)
    eyes.setStitchMode(StitchMode.CSS)
    eyes.setHideScrollbars(true)
    eyes.setBatch(batchInfo);

    const capabilities = {
        platformName: "Android",
        deviceName: "Android Emulator",
        automationName: "UiAutomator2",
        browserName: 'Chrome',
        waitforTimeout: 30000,
        commandTimeout: 30000,
    };

    if (process.env.APPIUM_ENV === "Local") {
        capabilities["avd"] = 'Pixel_XL_API_29';
    }
    
    // Open browser.
    let driver = new webdriver
        .Builder()
        .usingServer(LOCAL_APPIUM)
        .withCapabilities(capabilities)
        .build();

    try {
        // Start the test
        await eyes.open(driver, 'Vuejs.org Conferences', 'Appium on Android');

        await driver.get('https://us.vuejs.org/');

        // Visual checkpoint #1.
        await eyes.check('Home Page', Target.window());

        // display title of the page
        await driver.getTitle().then(function (title) {
            console.log("Title: ", title);
        });

        // locate and click the burger button
        await driver.wait(webdriver.until.elementLocated(webdriver.By.css('button.navbar__burger')), 2000).click();
        
        // locate and click the hyperlink with href='/#location' inside the second nav element
        await driver.wait(webdriver.until.elementLocated(webdriver.By.xpath("//web.archive.org/web/20221206000829/https://nav[2]/ul/li[3]/a[contains(text(), 'Location')]")), 2000).click();

        const h2 = await driver.wait(webdriver.until.elementLocated(webdriver.By.xpath("(//h2[@class='section-title'])[4]")), 2000);
        console.log("H2 Text: ", await h2.getText());

        // Visual checkpoint #2.
        await eyes.check('Home Loans', Target.window());

        // Close Eyes
        await eyes.close();
    } catch (error) {
        console.log(error);
    } finally {
        // Close the browser.
        await driver.quit();

        // If the test was aborted before eyes.close was called, ends the test as aborted.
        await eyes.abort();
    }
})();

The test script starts by importing the selenium-webdriver NPM package.

It imports a bunch of objects from the @applitools/eyes-selenium NPM package.

It constructs a BatchInfo object used by Applitools API. 

const batchInfo = new BatchInfo("AWS Device Farm");
batchInfo.id = process.env.BATCH_ID
batchInfo.setSequenceName('AWS Device Farm Batches');

It then creates the Eyes object that we will use to interact with the Applitools API.

// Initialize the eyes SDK
let eyes = new Eyes();
eyes.setApiKey(process.env.APPLITOOLS_API_KEY);
eyes.setLogHandler(new FileLogHandler(true));
eyes.setForceFullPageScreenshot(true)
eyes.setStitchMode(StitchMode.CSS)
eyes.setHideScrollbars(true)
eyes.setBatch(batchInfo);

It’s important to set the Applitools API key at this stage; otherwise, you won’t be able to run this test. The code above also directs the Applitools API logs to a file named eyes.log at the root of the project.

Next, we define the device capabilities that we are going to send to Appium.

const capabilities = {
        platformName: "Android",
        deviceName: "Android Emulator",
        automationName: "UiAutomator2",
        browserName: 'Chrome',
        waitforTimeout: 30000,
        commandTimeout: 30000,
    };
    if (process.env.APPIUM_ENV === "Local") {
        capabilities["avd"] = 'Pixel_XL_API_29';
    }

We are using an Android emulator to run our test script over a Chrome browser with the help of the UIAutomator 2 library.

We need to set the avd capability only when running this test script locally. For this property, grab the AVD ID of the Android Device Emulator we set above.

Now, we create and build a new WebDriver object by specifying the Appium Server local URL and the device capabilities as:

const LOCAL_APPIUM = "http://127.0.0.1:4723/wd/hub";
let driver = new webdriver
        .Builder()
        .usingServer(LOCAL_APPIUM)
        .withCapabilities(capabilities)
        .build();

Appium is configured to listen on Port 4723 under the path of /wd/hub.

The rest of the script is usual Applitools business. In brief, the script:

  • Opens a new Applitools test session
  • Sends a command to navigate the browser to https://us.vuejs.org/
  • Grabs the page title and displays it on screen.
  • Clicks the Burger Button to expand the menu on a Mobile device.
  • Finds the Location section on the page.
  • Finally, it prints the H2 text of the Location section.

Notice that the script asserts two Eyes SDK Snapshots. The first captures the home page of the website, while the second captures the Location section.

Finally, some important cleanup is happening to close the WebDriver and Eyes SDK sessions.

Open the package.json file, and locate the two scripts there:

"appium": "appium --chromedriver-executable /usr/local/bin/chromedriver --log ./appium.log",
"test": "node appium.js"

The first starts the Appium Server, and the second runs the test script.

Let’s first run the Appium server by issuing this command:

npm run-script appium

Then, once Appium is running, let’s run the test script by issuing this command:

npm run-script test

Verify Test Results on Applitools Test Manager

Login to the Applitools Test Manager located at: https://applitools.com/users/login

You will see the following test results:

The two snapshots have been recorded!

Run the Visual UI Test Script on AWS Device Farm

Now that the test runs locally, let’s run it on AWS Device Farm. Start by creating a new account on Amazon Web Service website.

Login to your AWS account on this page: https://console.aws.amazon.com/devicefarm

Create a new project by following the steps below:

  • Select the Mobile Device Project and name your project Appium.
  • Click the Create project button.
  • Locate and click the + Create a new run button.
  • Select the HTML5 option since we are testing a Web Page on a mobile device. 
  • Assign your test run a name. 
  • Click the Next step button.
  • Select the Appium Node.js test runner
  • Upload your tests packaged in a zip file.

Let’s package our app in a zip file in order to upload it at this step.

Switch back to the code editor, open a command window, and run the following:

npm install

This command is essential to make sure all the NPM package dependencies for this app are installed.

npm install -g npm-bundle

The command above installs the npm-bundle NPM package globally on your machine.

Then, run the command to package and bundle your app:

npm-bundle

The command bundles and packages your app files, and folders, including the node_modules folder.

The output of this step creates the file with the .tgz extension.

The final step before uploading is to compress the file by running the command:

zip -r appium-aws.zip *.tgz 

Name the file whatever you wish.

Now you can upload the .zip file to AWS Device Farm.

Once the file uploads, scroll down the page to edit the .yaml file of this test run like so:

  • Make sure you insert your Applitools API Key as shown in the diagram.
  • Add the node appium.js command to run your test script.
  • Click the Save button to save the test spec file; you can name it anything you want.
  • It’s time to select the devices that you want to run the test script against. I will pick a customized list of devices. Therefore, click the Create a new device pool button.
  • Give this new pool a name.
  • Pick the desired Android devices. You may select others too.
  • Click the Save device pool button.
  • Now, you can see the new device pool selected with the devices listed. 
  • Click the Next step button.
  • Locate and click the Confirm and start run button.
  • The test run starts!
  • Select the test run listed.
  • You can watch the progress on both devices as the test is running. Usually, this step takes a bit of time to run and complete.
  • Finally, the results are displayed clearly, and in our case, all the green indicates a pass.

Verify Test Results on Applitools Test Manager

Switch back to the Applitools Test Manager, and verify the results of this second run via AWS Device Farm.

As expected, we get exactly the same results as running the test script locally.

Conclusion

Given the massive integrations that Applitools offers with its rich SDKs, we saw how easily and quickly we can run our Visual UI tests in the cloud using the AWS Device Farm service. This service, and similar ones, enrich the visual regression testing ecosystem and make large-scale visual testing practical.

For More Information

The post Visual Testing with Applitools, Appium, and Amazon AWS Device Farm appeared first on Automated Visual Testing | Applitools.

]]>
Using Genymotion, Appium & Applitools to visually test Android apps https://applitools.com/blog/genymotion-appium-android/ Thu, 13 Jun 2019 05:29:12 +0000 https://applitools.com/blog/?p=5553 How to use Genymotion, Appium, and Applitools to do visual UI testing of native mobile Android applications.

The post Using Genymotion, Appium & Applitools to visually test Android apps appeared first on Automated Visual Testing | Applitools.

]]>

If you want to run mobile applications, you want to run on Android. Android devices dominate the smartphone market. Genymotion allows you to run your Appium tests in parallel on a range of virtual Android devices. Applitools lets you rapidly validate how each device renders each Appium test. Together, Genymotion and Applitools give you coverage with speed for your functional and visual tests.

As a QA automation professional, you know you need to test on Android.  Then you look at the market and realize just how fragmented the market is.

How fragmented is Android?

Fragmented is an understatement. A study by OpenSignal measured over 24,000 unique models of Android devices in use, running nine different Android OS versions across over three dozen different screen sizes, manufactured by 1,294 distinct device vendors. That is fragmentation. These numbers are mind-boggling, so here’s a chart to explain. Each box represents the usage share of one phone model.

Plenty of other studies confirm this. There are 19 major vendors of Android devices. Leading manufacturers include Samsung, Huawei, OnePlus, Xiaomi, and Google. The market share of the leading Android device is less than 2% of the market, and the market share of the top 10 devices is 11%. The most popular Android version accounts for only 31% of the market.

We would all like to think that Android devices behave exactly the same way.  But, no one knows for sure without testing. If you check through the Google Issue Tracker, you’ll find a range of issues that end up as platform-specific.

Implications for Android Test Coverage

So, if every Android device might behave differently, exactly how should you test your Android apps? One way is to run the test functionally on each platform and measure behavior in code – that’s costly. Another way is to run functionally on one platform and hope the code works on the others. Functionally, this can tell you that the app works – but you are left vulnerable to device-specific behaviors that may not be obvious without testing.

To visualize the challenge of testing against 24,000 unique platforms, imagine your application has just 10 screens. If you placed these ten different screens on 24,000 unique devices end-to-end, they would stretch over 30 miles. That’s longer than the distance of a marathon!

Could you imagine manually checking a marathon’s worth of screens with every release?

I can’t run a marathon, much less do so while examining thousands of screens. Thankfully there’s a better way, which I’ll explain in this post: using Genymotion, Appium, and Applitools.

What is Genymotion?

Genymotion is the industry-leading provider of cloud-based Android emulation and virtual mobile infrastructure solutions. Genymotion frees you from having to build your own Android device farm.

Once you integrate your Appium tests with Genymotion Cloud, you can run them in parallel across many Android devices at once, to detect bugs as soon as possible and spend less time on test runs. That’s powerful.

With Genymotion Cloud, you can choose to test against just the most popular Android device/OS combinations. Or, you can test the combinations for a specific platform vendor in detail. Genymotion gives you the flexibility to run whatever combination of Androids you need.


Why use Genymotion Cloud & Applitools?

Genymotion Cloud can run your Android functional tests across multiple platforms. However, functional tests cover only a subset of the device and OS version issues you might encounter with your application. In addition to functional problems, you can run into visual issues that affect how your app looks as well as how it runs. How do you run visual UI tests with Genymotion Cloud? Applitools.

Applitools provides AI-powered visual testing of applications and allows you to test cross-platform easily to identify visual bugs. Visual regressions seem like they might be simply a distraction to your customers. At worst, though, visual errors block your customers from completing transactions. Visual errors have real costs – and without visual testing, they often don’t appear until a user encounters them in the field.

Here’s one example of what I’m talking about. This messed-up layout blocked Instagram from making any money on this ad, and probably led to an upset customer and engineering VP. All the elements are present, so this screen probably passed functional testing.


You can find plenty of other examples of visual regressions by following #GUIGoneWrong on Twitter.

Applitools uses an AI-powered visual testing engine to highlight issues that customers would identify. More importantly, Applitools ignores differences that customers would not notice. If you ever used snapshot testing, you may have stopped because you tracked down too many false positives. Applitools finds the issues that matter and ignores the ones that don’t.

How to use Genymotion, Appium & Applitools?

Applitools already works with Appium to provide visual testing for your Android applications. Now, you can use Applitools and Genymotion to run your visual tests across numerous Android virtual devices. To sum up:

  1. Write your tests in Appium using the Applitools SDK to capture visual images.
  2. Launch the Genymotion cloud devices via command line.
  3. Your Appium scripts will run visual tests across the Genymotion virtual devices.
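To make step 1 concrete, here is a rough sketch of the kind of capability set an Appium test might use for each Genymotion device. This is a hedged illustration, not the official setup: the device names, app path, and `build_caps` helper are hypothetical placeholders — check the Genymotion and Appium documentation for the exact values your environment requires.

```python
# Sketch: one Appium capabilities dict per Android virtual device.
# Device names and the app path below are hypothetical placeholders.

def build_caps(device_name, app_path):
    """Build an Appium capabilities dict for one Android virtual device."""
    return {
        "platformName": "Android",
        "deviceName": device_name,       # e.g. a Genymotion Cloud instance
        "app": app_path,                 # path to the .apk under test
        "automationName": "UiAutomator2",
    }

# One capability set per device/OS combination you want to cover in parallel.
devices = ["Google Pixel 3 - 10.0", "Samsung Galaxy S10 - 9.0"]
all_caps = [build_caps(d, "app/build/outputs/apk/app-debug.apk") for d in devices]
```

Each capability set would then be passed to its own Appium session, so the same test script can fan out across every device in the list.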

That’s the overview. To dive into the details, check out this step-by-step tutorial on using Genymotion, Appium, and Applitools.

While the tutorial is fairly complete, we've also put together a series of step-by-step video walkthroughs using Genymotion, Appium, and Applitools. Here's the first one:

https://www.youtube.com/watch?v=qXuMglfNEeo

Genymotion, Appium, and Applitools: Better Together

When you run Appium, Applitools, and Genymotion together, you get a huge boost in test productivity. You get to re-use your existing Appium test scripts. Genymotion lets you run all your functional and visual tests in parallel. And, with the accuracy of Applitools AI-powered visual testing, you track down only issues that matter, without the distraction of false positives.

Find Out More

Read more about how to use our products together from this Genymotion blog post.

Visit Applitools at the Appium Conference 2019 in Bengaluru, India.

Sign up for our upcoming webinar on July 9 with Jonathan Lipps: Easy Distributed Visual Testing for Mobile Apps and Sites.

Find out more about Genymotion Cloud, and sign up for a free account to get started.

Find out more about Applitools. You can request a demo, sign up for a free account, and view our tutorials.


The post Using Genymotion, Appium & Applitools to visually test Android apps appeared first on Automated Visual Testing | Applitools.

]]>
Test Automation for Android Wearable Devices with Appium https://applitools.com/blog/test-automation-android-wearable-devices-appium/ Wed, 13 Aug 2014 08:02:00 +0000 http://162.243.59.116/2014/08/13/test-automation-for-android-wearable-devices-with/ Have you already hopped on the Android Wear train? It’s riding fast since Google’s recent SDK release. In this post I will help you ensure a safe ride by showing...

The post Test Automation for Android Wearable Devices with Appium appeared first on Automated Visual Testing | Applitools.

]]>

Have you already hopped on the Android Wear train? It’s riding fast since Google’s recent SDK release. In this post I will help you ensure a safe ride by showing you how to automatically test your new ‘wearable app’.

This post contains advanced techniques for test automation using Appium and is mainly intended for technical readers, but non-technical readers will surely benefit from reading it as well (and watching the demo video).

So, here we go: 

Please remember these two words: automation & validation.

We need automation in order to run our tests without any manual intervention; we need validation in order to ensure everything looks as we intended. If automation replaces the hands of manual QA testers, then validation replaces their eyes.

After thoroughly researching this topic and trying it first-hand, I created the following example for automating a wearable device. I recorded all the steps described below for your convenience.

Prerequisites:

  • Android SDK with wearable SDK and emulator image
  • Appium server
  • My language of choice was Java + JUnit, but you can use your favorite programming language
  • UI Validation tool – I used Applitools Eyes
  • ‘Android wear app’ installed on your Android device (it can be a physical or emulated device)
  • Last, but not least, you will obviously need an application to test – I chose Google Keep

I created the emulator with the Android Virtual Device (AVD) Manager. I wanted it to look cool, so I selected a round device. Make sure to select “Android 4.4W – API Level 20” with an ARM CPU. Do not select “Use host GPU”, otherwise visual validation won’t work. Use the “AndroidWearRound” skin to display the round layout (that’s the coolness factor).
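The emulator constraints above (API level 20 “W”, ARM CPU, round skin, host GPU off) are easy to get wrong, so here is a small sanity check that captures them. This is just an illustrative sketch — `check_wear_avd` is a hypothetical helper, not part of any SDK:

```python
# Sanity-check an AVD configuration against the constraints described above:
# Android 4.4W (API 20), ARM CPU, the AndroidWearRound skin, and host GPU
# disabled (host GPU breaks visual validation).

def check_wear_avd(config):
    """Return a list of problems with a proposed wearable AVD config."""
    problems = []
    if config.get("api_level") != 20:
        problems.append("select Android 4.4W - API Level 20")
    if config.get("cpu") != "ARM":
        problems.append("select an ARM CPU image")
    if config.get("use_host_gpu"):
        problems.append("disable 'Use host GPU' or visual validation won't work")
    if config.get("skin") != "AndroidWearRound":
        problems.append("use the AndroidWearRound skin for the round layout")
    return problems

good = {"api_level": 20, "cpu": "ARM", "use_host_gpu": False, "skin": "AndroidWearRound"}
bad = {"api_level": 20, "cpu": "ARM", "use_host_gpu": True, "skin": "AndroidWearRound"}
```

Running `check_wear_avd(good)` returns an empty list; the `bad` config is flagged for the host GPU setting.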

After running the new wearable device I created, I connected my Android phone to the emulated device. Make sure to forward TCP port 5601 via ADB, as shown in the video, so the wearable device shows a “Connected” status on the Android host device.

Now I was almost ready to go (and so are you, if you’re following my lead on this one…).

Once the devices are all connected, there is one thing left to do before running the automation code: start the Appium server. Since I automated two devices in parallel (the wearable device and the hosting Android device), I needed two Appium servers on two different ports – make sure to specify the port and device UDID as shown in the video.
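The two-server setup can be sketched like this: one Appium server URL per device, each on its own port. The UDIDs below are hypothetical placeholders — read yours from `adb devices` — and the helper is just for illustration:

```python
# Sketch: one Appium server URL per device when driving the wearable
# emulator and the host phone in parallel. UDIDs are placeholders.

APPIUM_HOST = "127.0.0.1"

def server_url(port):
    """Build the WebDriver endpoint URL for an Appium server on a given port."""
    return "http://%s:%d/wd/hub" % (APPIUM_HOST, port)

devices = [
    {"udid": "emulator-5554", "port": 4723},      # wearable emulator
    {"udid": "0123456789ABCDEF", "port": 4724},   # hosting Android phone
]

# One (udid, server URL) pair per Appium session to start.
sessions = [(d["udid"], server_url(d["port"])) for d in devices]
```

Each test script would then point its driver at the matching server URL, so the two devices can be automated side by side without stepping on each other.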

Now I was ready to go.

Let’s create awesome automation code; mine is shown in the video.

And… run it!

To make validation reliable and as complete as possible, I used the Applitools Eyes UI validation tool. It lets you validate the entire screen with one line of code, which keeps you agile and fast and, most importantly, as free of UI issues as possible.
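The “one line per screen” flow looks roughly like the sketch below. The open → check window → close sequence mirrors the Eyes SDK, but `FakeEyes` is a stand-in that just records calls so the sketch runs without the SDK installed; in a real test you would use the Eyes class from the Applitools SDK and pass your Appium driver to its open call.

```python
# Sketch of the open -> check window -> close flow used with Applitools
# Eyes. FakeEyes is an illustrative stand-in, not the real SDK class.

class FakeEyes:
    def __init__(self):
        self.checked = []
    def open(self, driver, app_name, test_name):
        self.driver = driver
    def check_window(self, tag):
        self.checked.append(tag)   # the real call captures and compares a screenshot
    def close(self):
        return self.checked

def validate_screens(eyes, driver, tags):
    """Each screen's entire visual validation is a single check call."""
    eyes.open(driver, "Google Keep", "Wearable smoke test")
    for tag in tags:
        eyes.check_window(tag)
    return eyes.close()

result = validate_screens(FakeEyes(), driver=None, tags=["Main screen", "New note"])
```

The point of the sketch: the test script stays short because each screen costs one call, while the comparison work happens on the Applitools side.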

Wearable Tech Testing
My Applitools account with UI changes highlighted in pink, after the test

And here is the demo video: 


If you have any questions, or comments, please feel free to leave them below, or contact me directly: yanir.taflev(at)applitools.com.

Additional reading: Want to radically reduce UI automation code and increase test coverage? Learn how CloudShare achieved this dual goal by simply automating UI testing.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.


Boost your Selenium tests with Automated Visual Testing - Open your FREE account now.

The post Test Automation for Android Wearable Devices with Appium appeared first on Automated Visual Testing | Applitools.

]]>