Product Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/category/product/
Applitools delivers the next generation of test automation powered by AI assisted computer vision technology known as Visual AI.

AI and The Future of Test Automation with Adam Carmi | A Dave-reloper’s Take
https://applitools.com/blog/ai-and-the-future-of-test-automation-with-adam-carmi/
Mon, 16 Oct 2023

We have a lot of great webinars and virtual events here at Applitools. I’m hoping posts like this give you a high-level summary of the key points with plenty of room for you to form your own impressions.

Dave Piacente

Curious if the software robots are here to take our jobs? Or maybe you’re not a fan of the AI hype train? During a recent session, The Future of AI-Based Test Automation, CTO Adam Carmi discussed—in practical terms—the current and future state of AI-based test automation, why it matters, and what you can do today to level up your automation practice.

  • He describes how AI can be used to overcome common everyday challenges in end-to-end test automation, how the need for skilled testers will only increase, and how AI-based tooling can help supercharge any automated testing practice.
  • He also puts his money where his mouth is by demonstrating how the never-ending maintenance overhead of tests can be mitigated with AI-driven tooling that already exists today, using concrete examples (e.g., visual validation and self-healing locators).
  • He also discusses the role that AI will play in the future, including the development of autonomous testing platforms. These platforms will be able to automatically explore applications, add validations, and fill gaps in test coverage. (Spoiler alert: Applitools is building one, and Adam shows a bit of a teaser for it: a real-time in-browser REPL that automates the browser using natural language, similar to ChatGPT.)

You can watch the full recording and find the session materials here, and I’ve included a quick breakdown with timestamps for ease of reference.

  • Challenges with automating end-to-end tests using traditional approaches (02:34-10:22)
  • How AI can be used to overcome these challenges (10:23-44:56)
  • The role of AI in the future of test automation (e.g., autonomous testing) (44:57-58:56)
  • The role of testers in the future (58:57-1:01:47)
  • Q&A session with the speaker (1:01:48-1:12:30)

Want to see more? Don’t miss Future of Testing: AI in Automation.

Functional Testing’s New Friend: Applitools Execution Cloud
https://applitools.com/blog/functional-testings-new-friend-applitools-execution-cloud/
Mon, 11 Sep 2023
Dmitry Vinnik explores how the Execution Cloud and its self-healing capabilities can be used to run functional test coverage.

In the fast-paced and competitive landscape of software development, ensuring application quality is of utmost importance. Functional testing plays a vital role in verifying the robustness and reliability of software products. As applications grow more complex, with long lists of use cases and ever-faster release cycles, organizations are challenged to conduct thorough functional testing across different platforms, devices, and screen resolutions.

This is where Applitools, a leading provider of functional testing solutions, becomes a must-have tool with its innovative offering: the Execution Cloud.

Applitools’ Execution Cloud is a game-changing platform that revolutionizes functional testing practices. By harnessing the power of cloud computing, the Execution Cloud eliminates the need for resource-heavy local infrastructure, providing organizations with enhanced efficiency, scalability, and reliability in their testing efforts. The cloud-based architecture integrates with existing testing frameworks and tools, empowering development teams to execute tests across various environments effortlessly.

This article explores how the Execution Cloud and its self-healing capabilities can be used to run our functional test coverage. We demonstrate this cloud platform’s features, like auto-fixing selectors broken by a change in the production code.

Why Execution Cloud

As discussed, the Applitools Execution Cloud is a great tool to enhance any team’s quality pipeline.

One of the main features of this cloud platform is that it can “self-heal” our tests using AI. For example, if, during refactoring or debugging, one of the web elements had its selectors changed and we forgot to update related test coverage, the Execution Cloud would automatically fix our tests. This cloud platform would use one of the previous runs to deduce another relevant selector and let our tests continue running. 

This self-healing capability of the Execution Cloud allows us to focus on actual production issues without getting distracted by outdated tests. 

Functional Testing and Execution Cloud

It’s fair to say that Applitools has been one of the leading innovators and pioneers in visual testing with its Eyes platform. However, with the Execution Cloud in place, Applitools offers its users broader, more scalable test capabilities. This cloud platform lets us focus on all types of functional testing, including non-visual testing.

One of the best features of the Execution Cloud is that it’s effortless to integrate into any test case with just one line (shown in the setup section below). There is also no requirement to use the Applitools Eyes framework. In other words, we can run any functional test without creating screenshots for visual validation while still utilizing the self-healing capability of the Execution Cloud.

Adam Carmi, Applitools CTO, demos the Applitools Execution Cloud and explores how self-healing works under the hood in this on-demand session.

Writing the Test Suite

As we mentioned earlier, the Execution Cloud can be integrated with most test cases we already have in place! The only consideration is that, at the time of writing this post, the Execution Cloud only supports Selenium WebDriver across all languages (Java, JavaScript, Python, C#, and Ruby), WebdriverIO, and any other WebDriver-based framework. More test frameworks will be supported in the near future.

Fortunately, Selenium is a widely used testing framework, giving us plenty of room to demonstrate the power of the Execution Cloud and functional testing.

Setting Up the Demo App

Our demo application will be a documentation site built using the Vercel Documentation template. It’s a simple app that uses Next.js, a React framework created by Vercel, a cloud platform that lets us deploy web apps quickly and easily.

Note that all the code for our version of the application is available here.

First, we need to clone the demo app’s repository: 

git clone git@github.com:dmitryvinn/docs-demo-app.git

We will need Node.js version 10.13 or later to work with this demo app, which can be installed by following the steps here.

After we set up Node.js, we should open a terminal and navigate into the project’s directory:

cd docs-demo-app

Then, we run the following command to install the necessary dependencies:

npm install

The next step is to start the app locally:

npm run dev

Now our demo app is accessible at `http://localhost:3000/` and ready to be tested.

Docs Demo App 

Deploying the Demo App

While the Execution Cloud allows us to run the tests against a local deployment, we will simulate the production use case by running our demo app on Vercel. The steps for deploying a basic app are very well outlined here, so we won’t spend time reviewing them. 

After we deploy our demo app, it will appear as running on the Vercel Dashboard:

Demo App Deployed on Vercel

Now, we can write our tests for a production URL of our demo application available at `https://docs-demo-app.vercel.app/`.

Setting Up Test Automation

Execution Cloud offers great flexibility when it comes to working with our tests. Rather than rewriting our test suites to run against this self-healing cloud platform, we simply need to update a few lines of code in the setup portion of our tests, and we can use the Execution Cloud.

Our test case will validate navigating to a specific page and pressing a counter button.

To make our work even more effortless, Applitools offers a great set of quickstart examples that were recently updated to support the Execution Cloud. We will start with one of these samples using JavaScript with Selenium WebDriver and Jest as our baseline.

We can use any Integrated Development Environment (IDE), like IntelliJ IDEA or Visual Studio Code, to write the tests. Since we use JavaScript as our programming language, we will rely on NPM for the build system and our test runner.

Our tests will use Jest as their primary testing framework, so we must add a configuration file called `jest.config.js`. We can copy-paste a basic setup from here, but in its shortest form, the required configuration is the following:

module.exports = {
  clearMocks: true,
  coverageProvider: "v8",
};

Our tests will require a `package.json` file that includes the Jest, Selenium WebDriver, and Applitools packages. The dependencies section of the `package.json` file should eventually look like the one below:

"dependencies": {

      "@applitools/eyes-selenium": "^4.66.0",

      "jest": "^29.5.0",

      "selenium-webdriver": "^4.9.2"

    },

After we install the above dependencies, we are ready to write and execute our tests.

Writing the Tests

Since we are running a purely functional Applitools test with Eyes disabled (meaning we do not have a visual component), we need to initialize each test and properly wrap it up.

In `beforeAll()`, we can set our test batching and naming along with configuring an Applitools API key.
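As a rough sketch, that setup could look like the following, assuming the `@applitools/eyes-selenium` package from our dependencies; the batch name and the `APP_NAME` constant (reused when starting the test below) are illustrative placeholders:

const { Eyes, BatchInfo } = require('@applitools/eyes-selenium');
const { Builder } = require('selenium-webdriver');

// Hypothetical names for this walkthrough.
const APP_NAME = 'Documentation Demo App';

let batch;
let driver;

beforeAll(async () => {
  // Group every test from this run into one named batch on the dashboard.
  batch = new BatchInfo('Docs Demo App Tests');
  // The Applitools API key is read from the APPLITOOLS_API_KEY environment
  // variable, which we export before running the tests (see "Running the Test").
});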

To enable Execution Cloud for our tests, we need to ensure that we activate this cloud platform on the account level. After that’s done, in our tests’ setup, we will need to initialize the WebDriver using the following code:

let url = await Eyes.getExecutionCloudUrl();
driver = new Builder().usingServer(url).withCapabilities(capabilities).build();

For our test case, we will open a demo app, navigate to another page, press a counter button, and validate that the click incremented the value of clicks by one.

describe('Documentation Demo App', () => {
…
    test('should navigate to another page and increment its counter', async () => {
        // Arrange - go to the home page
        await driver.get('https://docs-demo-app.vercel.app/');

        // Act - go to another page and click a counter button
        await driver.findElement(By.xpath("//*[text() = 'Another Page']")).click();
        await driver.findElement(By.className('button-counter')).click();

        // Assert - validate that the counter was clicked
        const finalClickCount = await driver.findElement(By.className('button-counter')).getText();
        expect(finalClickCount).toContain('Clicked 1 times');
    });
…

Another critical aspect of running our test is that it’s a non-Eyes test. Since we are not taking screenshots, we need to tell the Execution Cloud when a test begins and ends. 

To start the test, we should add the following snippet inside the `beforeEach()` that will name the test and assign it to a proper test batch:

await driver.executeScript(
    'applitools:startTest',
    {
        'testName': expect.getState().currentTestName,
        'appName': APP_NAME,
        'batch': { "id": batch.getId() }
    }
)

Lastly, we need to tell our automation when the test is done and what its results were. We will add the following code, which sets the status of our test, in the `afterEach()` hook:

await driver.executeScript('applitools:endTest',
    { 'status': testStatus })

Now, our test is ready to be run on the Execution Cloud.

Running the Test

To run our test, we need to set the Applitools API key. We can do it in a terminal or have it set as an environment variable:

export APPLITOOLS_API_KEY=[API_KEY]

In the above command, we need to replace [API_KEY] with the API key for our account. The key can be found in the Applitools Dashboard, as shown in this FAQ article.

Now, we need to navigate to where our tests are located and run the following command in the terminal:

npm test

It will trigger the test suite that can be seen on the Applitools Dashboard:

Applitools Dashboard with Execution Cloud enabled

Execution Cloud in Action

Apps go through a lifecycle: they get created, get bugs, change, and are ultimately shut down. This ever-changing lifecycle is what causes our tests to break. Whether it’s due to a bug or an accidental regression, it’s common for a test to fail after a change in an app.

Let’s say a developer working on a counter button component changes its class name from the original `button-counter` to `button-count`. There could be many reasons for such a change, but regardless, modifications like this to production code are extremely common.

What’s even more common is that the developer who made the change might forget about, or simply not find, all the tests that use the original class name, `button-counter`, to validate this component. As a result, these outdated tests would start failing, distracting us from investigating real production issues that could significantly impact our users.

Execution Cloud and its self-healing capabilities were built specifically to address this problem. This cloud platform would be able to “self-heal” our tests that were previously running against a class name `button-counter`, and rather than failing these tests, the Execution Cloud would find another selector that hasn’t changed. With this highly scalable solution, our test coverage would remain the same and let us focus on correcting issues that are actually causing a regression in production.
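To make this concrete, here is the interaction from our test as it would run after the rename; the comments describe the platform behavior explained above, not extra code we write:

// Our test still references the old class name:
await driver.findElement(By.className('button-counter')).click();
// Run locally, this line would now fail with a no-such-element error.
// Run on the Execution Cloud, a previous passing run is used to deduce
// another relevant selector for the same button, and the test continues.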

Although we are running non-Eyes tests, the Applitools Dashboard still gives us several valuable artifacts, like a video recording of our test and exportable WebDriver commands!

Want to see more? Request a free trial of Applitools Execution Cloud.

Conclusion

Whether you are a small startup that prioritizes quick iterations, or a large organization that focuses on scale, Applitools Execution Cloud is a perfect choice for any scenario. It offers a reliable way for tests to become what they should be – the first line of defense in ensuring the best customer experience for our users.

With the self-healing capabilities of the Execution Cloud, we get to focus on real production issues that actively affect our customers. With this cloud platform, we are moving towards a space where tests don’t become something we accept as constantly failing or a detriment to our developer velocity. Instead, we treat our test coverage as a trusted companion that raises problems before our users do. 

With these capabilities, Applitools and its Execution Cloud quickly become a must-have for any developer workflow, supercharging the productivity and efficiency of every engineering team.

Welcome Preflight To The Applitools Family
https://applitools.com/blog/welcome-preflight-to-the-applitools-family/
Thu, 29 Jun 2023
We’re thrilled to announce the acquisition of Preflight by Applitools!

We are excited to share some fantastic news with our valued customers and the broader testing community. Applitools has acquired Preflight, a pioneering no-code platform that streamlines the creation, execution, and management of complex end-to-end tests. This acquisition marks a significant step in our journey to provide you with breakthrough technology that empowers your teams to increase test coverage, reduce test execution time, and deliver superior applications that your customers will love.

Introducing Applitools Preflight

Preflight is a robust no-code testing tool that empowers teams of all skill levels to automate complex testing scenarios. It runs these tests at an impressive scale across various browsers and screen sizes. Preflight’s user-friendly web recorder captures every element accurately and includes a data generator to simulate even the most complex test cases. This is a game-changer for manual testers, QA engineers, and product teams as it empowers them to automate test scenarios regardless of their skillset, effectively multiplying their QA abilities instantly.

Preflight ensures businesses achieve the test coverage necessary to consistently delight customers with each new experience, all without writing a single line of code.

The Benefits of Preflight

Simplified Test Creation: With Preflight, anyone on the team can create and run tests, democratizing the testing process. This inclusivity leads to more thorough testing and faster feedback cycles.

Expanded Test Coverage: Preflight enables teams to create comprehensive test suites that cover more functionality in less time. It can easily create UI tests and API tests, verify emails during sign-up, generate synthetic data, and more. This means teams can test more scenarios and edge cases that may have been overlooked with manual testing or traditional automated testing.

Enhanced Maintainability and Reusability: Preflight allows customers to reuse sections of test suites, workflows, login profiles, data, and more across different tests, reducing redundancy. It also simplifies test maintenance with a powerful test editor and live test replay that makes editing tests fast and intuitive, reducing one of the biggest gripes of record-and-replay tools.

The Future of Applitools and Preflight

While Preflight will continue to be available as a standalone product, we are actively integrating it into the Applitools platform to bring Visual AI to the masses! To get an exclusive first look at Preflight today, we invite you to sign up for a demo with one of our engineers.

Add self-healing to your Selenium tests with Applitools Execution Cloud
https://applitools.com/blog/add-self-healing-to-your-selenium-tests-with-applitools-execution-cloud/
Tue, 06 Jun 2023
A tutorial to get you started with the Applitools Execution Cloud!

Applitools just released an exciting new product: the Applitools Execution Cloud.

The Applitools Execution Cloud is extraordinary. Like several other testing platforms (such as Selenium Grid), it runs web browser sessions in the cloud – rather than on your machine – to save you the hassle of scaling and maintaining your own resources. However, unlike other platforms, Execution Cloud will automatically wait for elements to be ready for interactions and then fix locators when they need to be updated, which solves two of the biggest struggles when running end-to-end tests. It’s the first test cloud that adds AI power to your tests with self-healing capabilities. It also works with open source tools like Selenium rather than proprietary “low-code-no-code” tools.

Execution Cloud can run any WebDriver-based test today, even ones that don’t use Applitools Eyes. Execution Cloud also works seamlessly with Applitools Ultrafast Grid, so tests can still cover multiple browser types, devices, and viewports. The combination of Execution Cloud with Ultrafast Grid enables functional and visual testing to work together beautifully!

I wanted to be one of the first engineers to give this new platform a try. The initial release supports Selenium WebDriver across all languages (Java, JavaScript, Python, C#, and Ruby), WebdriverIO, and any other WebDriver-based framework. Future releases will support others like Cypress and Playwright. In this article, I’m going to walk through my first experiences with Execution Cloud using Selenium WebDriver in my favorite language – Python. Let’s go!

Starting with plain-old functional tests

Recently, I’ve been working on a little full-stack Python web app named Bulldoggy, the reminders app. Bulldoggy has a login page and a reminders page. It uses HTMX to handle dynamic interactions like adding, editing, and deleting reminder lists and items. (If you want to learn how I built this app, watch my PyTexas 2023 keynote.) Here are quick screenshots of the login and reminders pages:

The Bulldoggy login page.

The Bulldoggy reminders page.

Writing a test with Selenium

My testing setup for Bulldoggy is very low-tech: I run the app locally in one terminal, and I launch my tests against it from a second terminal. I wrote a fairly basic login test with Selenium WebDriver and pytest. Here’s the test code:

import pytest

from selenium.webdriver import Chrome, ChromeOptions
from selenium.webdriver.common.by import By


@pytest.fixture(scope='function')
def local_webdriver():
  options = ChromeOptions()
  driver = Chrome(options=options)
  yield driver
  driver.quit()


def test_login_locally(local_webdriver: Chrome):

  # Load the login page
  local_webdriver.get("http://127.0.0.1:8000/login")

  # Perform login
  local_webdriver.find_element(By.NAME, "username").send_keys('pythonista')
  local_webdriver.find_element(By.NAME, "password").send_keys("I<3testing")
  local_webdriver.find_element(By.XPATH, "//button[.='Login']").click()

  # Check the reminders page
  assert local_webdriver.find_element(By.ID, 'bulldoggy-logo')
  assert local_webdriver.find_element(By.ID, 'bulldoggy-title').text == 'Bulldoggy'
  assert local_webdriver.find_element(By.XPATH, "//button[.='Logout']")
  assert local_webdriver.title == 'Reminders | Bulldoggy reminders app'

If you’re familiar with Selenium WebDriver, then you’ll probably recognize the calls in this code, even if you haven’t used Python before. The local_webdriver function is a pytest fixture – it handles setup and cleanup for a local ChromeDriver instance. The test_login_locally function is a test case function that calls the fixture and receives the ChromeDriver instance via dependency injection. The test then loads the Bulldoggy web page, performs login, and checks that the reminders page loads correctly.

When I ran this test locally, it worked just fine: the browser window opened, the automation danced across the pages, and the test reported a passing result. That was all expected. It was a happy path, after all.

Hitting broken locators

Oftentimes, when making changes to a web app, we (or our developers) will change the structure of a page’s HTML or CSS without actually changing what the user sees. Unfortunately, this frequently causes test automation to break because locators fall out of sync. For example, the input elements on the Bulldoggy login page had the following HTML markup:

<input type="text" placeholder="Enter username" name="username" required />
<input type="password" placeholder="Enter password" name="password" required />

My test used the following locators to interact with them:

local_webdriver.find_element(By.NAME, "username").send_keys("pythonista")
local_webdriver.find_element(By.NAME, "password").send_keys("I<3testing")

My locators relied on the input elements’ name attributes. If I changed those names, then the locators would break and the test would crash. For example, I could shorten them like this:

<input type="text" placeholder="Enter username" name="user" required />
<input type="password" placeholder="Enter password" name="pswd" required />

What seems like an innocuous change on the front-end can be devastating for automated tests. It’s impossible to know if an HTML change will break tests without deeply investigating the test code or cautiously running the whole test suite to shake out discrepancies.

Sure enough, when I ran my test against this updated login page, it failed spectacularly with the following error message:

selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[name="username"]"}

It was no surprise. The CSS selectors no longer found the desired elements.

A developer change like the one I showed here with the Bulldoggy app is only one source of fragility for locators. Many Software-as-a-Service (SaaS) applications like Salesforce and even some front-end development frameworks generate element IDs dynamically, which makes it hard to build stable locators. A/B testing can also introduce page structure variations that break locators. Web apps in development are always changing for one reason or another, making locators perpetually susceptible to failure.

Automatically healing broken locators

One of the most appealing features of Execution Cloud is that it can automatically heal broken locators. Instead of running your WebDriver session on your local machine, you run it remotely on Execution Cloud. In that sense, it’s like Selenium Grid or other popular cross-browser testing platforms. However, unlike those other platforms, it learns the interactions your tests take, and it can dynamically substitute broken locators for working ones whenever they happen. That makes your tests robust against flakiness for any reason: changes in page structure, poorly-written selectors, or dynamically-generated IDs.

Furthermore, Execution Cloud can run “non-Eyes” tests. A non-Eyes test is a traditional, plain-old functional test with no visual assertions or “visual testing.” Our basic login test is a non-Eyes test because it does not capture any checkpoints with Visual AI – it relies entirely on Selenium-based interactions and verifications.

I wanted to put these self-healing capabilities to the test with our non-Eyes test.

Setting up the project

To start, I needed my Applitools account handy (which you can register for free), and I needed to set my API key as the APPLITOOLS_API_KEY environment variable. I also installed the latest version of the Applitools Eyes SDK for Selenium in Python (eyes-selenium).

In the test module, I imported the Applitools Eyes SDK:

from applitools.selenium import *

I wrote a fixture to create a batch of tests:

@pytest.fixture(scope='session')
def batch_info():
  return BatchInfo("Bulldoggy: The Reminders App")

I also wrote another fixture to create a remote WebDriver instance that would run in Execution Cloud:

@pytest.fixture(scope='function')
def non_eyes_driver(
  batch_info: BatchInfo,
  request: pytest.FixtureRequest):

  options = ChromeOptions()
  options.set_capability('applitools:tunnel', 'true')

  driver = Remote(
    command_executor=Eyes.get_execution_cloud_url(),
    options=options)

  driver.execute_script(
    "applitools:startTest",
    {
      "testName": request.node.name,
      "appName": "Bulldoggy: The Reminders App",
      "batch": {"id": batch_info.id}
    }
  )
  
  yield driver

  status = 'Failed' if request.node.test_result.failed else 'Passed'
  driver.execute_script("applitools:endTest", {"status": status})
  driver.quit()

Execution Cloud setup requires a few extra things. Let’s walk through them together:

  • Since I’m running the Bulldoggy app on my local machine, I need to set up a tunnel between the remote session and my machine. There are two ways to do this. One way is to set up ChromeOptions with options.set_capability('applitools:tunnel', 'true'), which I put in the code above. If you don’t want to hardcode the Applitools tunnel setting, the second way is to set the APPLITOOLS_TUNNEL environment variable to True (see the snippet after this list). That way, you could toggle between local web apps and publicly-accessible ones. Tunnel configuration is documented at the bottom of the Execution Cloud setup and installation page.
  • The WebDriver session will be a remote one in Execution Cloud. Instead of creating a local ChromeDriver instance, the test creates a remote instance using the Execution Cloud URL by calling driver = Remote(command_executor=Eyes.get_execution_cloud_url(), options=options).
  • Since this is a non-Eyes test, we need to explicitly indicate when a test starts and stops. The driver.execute_script call sends a "applitools:startTest" event with inputs for the test name, app name, and batch ID.
  • At the end of the test, we need to likewise explicitly indicate the ending with the test status. That’s the second driver.execute_script call. Then, we can quit the browser.
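For example, the environment-variable route could be set in the terminal like this, mirroring how the API key is exported:

export APPLITOOLS_TUNNEL=True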

In order to get the test result from pytest using request.node.test_result, I had to add the following hook to my conftest.py file:

import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
  outcome = yield
  setattr(item, 'test_result', outcome.get_result())

This is a pretty standard pattern for pytest.

Updating the test case

The only change I had to make to the test case function was the fixture it called. The body of the function remained the same:

def test_login_with_execution_cloud(non_eyes_driver: Remote):

  # Load the login page
  non_eyes_driver.get("http://127.0.0.1:8000/login")

  # Perform login
  non_eyes_driver.find_element(By.NAME, "username").send_keys('pythonista')
  non_eyes_driver.find_element(By.NAME, "password").send_keys("I<3testing")
  non_eyes_driver.find_element(By.XPATH, "//button[.='Login']").click()

  # Check the reminders page
  assert non_eyes_driver.find_element(By.ID, 'bulldoggy-logo')
  assert non_eyes_driver.find_element(By.ID, 'bulldoggy-title').text == 'Bulldoggy'
  assert non_eyes_driver.find_element(By.XPATH, "//button[.='Logout']")
  assert non_eyes_driver.title == 'Reminders | Bulldoggy reminders app'

Running the test in Execution Cloud

I reverted the login page’s markup to its original state, and then I ran the test using the standard command for running pytest: python -m pytest tests. (I also had to set my APPLITOOLS_API_KEY environment variable, as previously mentioned.) Tests ran like normal, except that the browser session did not run on my local machine; it ran in the Execution Cloud.

To view the results, I opened the Eyes Test Manager. Applitools captured a few extra goodies as part of the run. When I scrolled all the way to the right and clicked the three-dots icon for one of the tests, there was a new option called “Execution Cloud details”. Under that option, there were three more options:

  1. Download video
  2. Download WebDriver commands
  3. Download console log

Execution Cloud details for a non-Eyes test.

The option that stuck out to me the most was the video. Video recordings are invaluable for functional test analysis because they show how a test runs in real time. Screenshots along the way are great, but they aren’t always helpful when an interaction goes wrong or just takes too long to complete. When running a test locally, you can watch the automation dance in front of your eyes, but you can’t do that when running remotely or in Continuous Integration (CI).

Here’s the video recording for one of the tests:

The WebDriver log and the console log can be rather verbose, but they can be helpful traces to investigate when something fails in a test. For example, here’s a snippet from the WebDriver log showing one of the commands:

{
  "id": 1,
  "request": {
    "path": "execute/sync",
    "params": {
      "wdSessionId": "9c65e0c2-6742-4bc1-a2af-4672166faf21",
      "*": "execute/sync"
    },
    "method": "POST",
    "body": {
      "script": "return (function(arg){\nvar s=function(){\"use strict\";var t=function(t){var n=(void 0===t?[]:t)[0],e=\"\",r=n.ownerDocument;if(!r)return e;for(var o=n;o!==r;){var a=Array.prototype.filter.call(o.parentNode.childNodes,(function(t){return t.tagName===o.tagName})).indexOf(o);e=\"/\"+o.tagName+\"[\"+(a+1)+\"]\"+e,o=o.parentNode}return e};return function(){var n,e,r;try{n=window.top.document===window.document||\"root-context\"===window.document[\"applitools-marker\"]}catch(t){n=!1}try{e=!window.parent.document===window.document}catch(t){e=!0}if(!e)try{r=t([window.frameElement])}catch(t){r=null}return[document.documentElement,r,n,e]}}();\nreturn s(arg)\n}).apply(null, arguments)",
      "args": [
        null
      ]
    }
  },
  "time": "2023-05-01T03:52:03.917Z",
  "offsetFromCreateSession": 287,
  "duration": 47,
  "response": {
    "statusCode": 200,
    "body": "{\"value\":[{\"element-6066-11e4-a52e-4f735466cecf\":\"ad7cff25-c2d8-4558-9034-b1727ed289d6\"},null,true,false]}"
  }
}

It’s pretty cool to see the Eyes Test Manager providing all these helpful testing artifacts.

Running the test with self-healing locators

After the first test run with Execution Cloud, I changed the names for those input fields:

<input type="text" placeholder="Enter username" name="user" required />
<input type="password" placeholder="Enter password" name="pswd" required />

The login page effectively looked the same, but its markup had changed. I also had to update these form values in the get_login_form_creds function in the app.utils.auth module.

I reran the test (python -m pytest tests), and sure enough, it passed! The Eyes Test Manager showed a little wand icon next to its name:

The wand icon in the Eyes Test Manager showing locators that were automatically healed.

The wand icon indicates that locators in the test were broken but Execution Cloud was able to heal them. I clicked the wand icon and saw this:

Automatically healed locators.

Execution Cloud changed the locators from using CSS selectors for the name attributes to using XPaths for the placeholder text. That’s awesome! With Applitools, the test overcame page changes so it could run to completion. Applitools also provided the “healed” locators it used so I could update my test code as appropriate.

Running tests with Execution Cloud and Ultrafast Grid together

Visual assertions backed by Visual AI can greatly improve the coverage of traditional functional tests, like our basic login scenario for the Bulldoggy app. If we scrutinize the steps we automated, we can see that (a) we didn’t check anything on the login page itself, and (b) we only checked the basic appearance of three elements on the reminders page plus the title. That’s honestly very shallow coverage. The test doesn’t check important facets like layout, placement, or color. We could add assertions to check more elements, but that would add more brittle locators for us to maintain as well as take more time to develop. Visual assertions could cover everything on the page implicitly with a one-line call.

We can use the Applitools Eyes SDK for Selenium in Python to add visual assertions to our Bulldoggy test. That would transform it from a “non-Eyes” test to an “Eyes” test, meaning it would use Applitools Eyes to capture visual snapshots and find differences with Visual AI in addition to making standard functional interactions. Furthermore, we can perform cross-browser testing with Eyes tests using Applitools Ultrafast Grid, which will re-render the snapshots it captures during testing on any browser configurations we declare.

Thankfully, Execution Cloud and Ultrafast Grid can run Eyes tests together seamlessly. I updated my login test to make it happen.

Setting up Applitools Eyes

Setting up Applitools Eyes for our test will be no different than the setup for any other visual test you may have written with Applitools. I already created a fixture for the batch info, so I needed to add fixtures for the Ultrafast Grid runner and the browsers to test on the Ultrafast Grid:

@pytest.fixture(scope='session')
def runner():
  run = VisualGridRunner(RunnerOptions().test_concurrency(5))
  yield run
  print(run.get_all_test_results())


@pytest.fixture(scope='session')
def configuration(batch_info: BatchInfo):
  config = Configuration()
  config.set_batch(batch_info)

  config.add_browser(800, 600, BrowserType.CHROME)
  config.add_browser(1600, 1200, BrowserType.FIREFOX)
  config.add_browser(1024, 768, BrowserType.SAFARI)
  config.add_device_emulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT)
  config.add_device_emulation(DeviceName.Nexus_10, ScreenOrientation.LANDSCAPE)

  return config

In this configuration, I targeted three desktop browsers and two mobile browsers.

I also wrote a simpler fixture for creating the remote WebDriver session:

@pytest.fixture(scope='function')
def remote_webdriver():
  options = ChromeOptions()
  options.set_capability('applitools:tunnel', 'true')

  driver = Remote(
    command_executor=Eyes.get_execution_cloud_url(),
    options=options)

  yield driver
  driver.quit()

This fixture still uses the Execution Cloud URL and the tunnel setting, but since our login test will become an Eyes test, we won’t need to call execute_script to declare when a test begins or ends. The Eyes session will do that for us.

Speaking of which, I had to write a fixture to create that Eyes session:

@pytest.fixture(scope='function')
def eyes(
  runner: VisualGridRunner,
  configuration: Configuration,
  remote_webdriver: Remote,
  request: pytest.FixtureRequest):

  eyes = Eyes(runner)
  eyes.set_configuration(configuration)

  eyes.open(
    driver=remote_webdriver,
    app_name='Bulldoggy: The Reminders App',
    test_name=request.node.name,
    viewport_size=RectangleSize(1024, 768))
  
  yield eyes
  eyes.close_async()

Again, all of this is boilerplate code for running tests with the Ultrafast Grid. I copied most of it from the Applitools tutorial for Selenium in Python. SDKs for other tools and languages need nearly identical setup. Note that the fixtures for the runner and configuration have session scope, meaning they run one time before all tests, whereas the fixture for the Eyes object has function scope, meaning it runs one time per test. All tests can share the same runner and config, while each test needs a unique Eyes session.

Rewriting the test with visual assertions

I had to change two main things in the login test:

  1. I had to call the new remote_webdriver and eyes fixtures.
  2. I had to add visual assertions with Applitools Eyes.

The code looked like this:

def test_login_with_eyes(remote_webdriver: Remote, eyes: Eyes):

  # Load the login page
  remote_webdriver.get("http://127.0.0.1:8000/login")

  # Check the login page
  eyes.check(Target.window().fully().with_name("Login page"))

  # Perform login
  remote_webdriver.find_element(By.NAME, "username").send_keys('pythonista')
  remote_webdriver.find_element(By.NAME, "password").send_keys("I<3testing")
  remote_webdriver.find_element(By.XPATH, "//button[.='Login']").click()

  # Check the reminders page
  eyes.check(Target.window().fully().with_name("Reminders page"))
  assert remote_webdriver.title == 'Reminders | Bulldoggy reminders app'

I actually added two visual assertions – one for the login page, and one for the reminders page. In fact, I removed all of the traditional assertions that verified elements since the visual checkpoints are simpler and add more coverage. The only traditional assertion I kept was for the page title, since that’s a data-oriented verification. Eyes tests can handle both functional and visual testing!

Fewer locators means less risk of breakage, and Execution Cloud’s self-healing capabilities should take care of any lingering locator problems. Furthermore, if I wanted to add any more tests, then I already have all the fixtures ready, so test case code should be fairly concise.

Running the Eyes test

I ran the test one more time with the same command. This time, Applitools treated it as an Eyes test, and the Eyes Test Manager showed the visual snapshots along with all the Execution Cloud artifacts:

Test results for an Eyes test run with both Execution Cloud and Ultrafast Grid.

Execution Cloud worked together great with Ultrafast Grid!

Taking the next steps

Applitools Execution Cloud is a very cool new platform for running web UI tests. As an engineer, what I like about it most is that it provides AI-powered self-healing capabilities to locators without requiring me to change my test cases. I can make the same, standard Selenium WebDriver calls as I’ve always coded. I don’t need to rewrite my interactions, and I don’t need to use a low-code/no-code platform to get self-healing locators. Even though Execution Cloud supports only Selenium WebDriver for now, there are plans to add support for other open source test frameworks (like Cypress) in the future.

If you want to give Execution Cloud a try, all you need to do is register a free Applitools account and request access! Then, take one of our Selenium WebDriver tutorials – they’ve all been updated with Execution Cloud support.

Introducing Applitools Execution Cloud Self-Healing Test Infrastructure
https://applitools.com/blog/ai-self-healing-test-cloud/
Thu, 04 May 2023

Introducing Applitools Execution Cloud: The World’s First Self-Healing Test Infrastructure for Open-Source Test Frameworks

We are excited to announce the launch of Applitools Execution Cloud, a revolutionary self-healing, cloud-based testing platform that enables teams to run their existing tests against AI-powered testing infrastructure. This new addition to the Applitools Ultrafast Test platform is designed to provide teams that use open-source frameworks like Selenium or WebDriver.io with best-in-class AI capabilities, such as self-healing, that are currently only available in proprietary tools.

For years, Applitools Eyes has brought Visual AI to the validation portion of tests, helping engineers reduce assertion code while boosting test coverage. While Eyes has continued to grow as the industry leader in AI validation, we have worked closely with our customers to help solve problems with the other portion of testing: interaction.

Test flakiness most often occurs during the interaction phase of tests – more specifically, when a test uses a locator as its anchor for navigation and that locator has changed for some reason. This can be due to dynamic class or ID generation on certain builds, or simply changes to the framework from the dev team. Either situation can wreak havoc on otherwise soundly running tests.

Reduce test flakiness with Applitools Execution Cloud

With Execution Cloud, teams can run tests at massive scale in parallel while quickly healing broken tests as they run, reducing flakiness and execution time. Small changes to the UI, like text, color, or slight layout shifts that would normally fail a Selenium test, will be able to heal themselves.

Execution Cloud self-healing

The platform also allows for testing at extreme scale, allowing teams to run tests in the cloud in parallel for faster CI/CD pipelines.

And remember, with Execution Cloud, teams can easily run both functional and visual tests, as well as any Selenium and WebDriver.io tests, using any binding language.

The platform also features implicit waits, automatically waiting for all critical elements to load before running the next step, which drastically reduces test flakiness. Furthermore, teams can access test logs, including video, command logs, and browser console logs, to help debug faster.

Unlike its competitors, Execution Cloud is the world’s first intelligent test infrastructure for running open-source test frameworks. It is not locked to a specific test creation tool, and it operates on a pay-as-you-go model that is cost-effective for developers and test engineers. Additionally, Execution Cloud is designed with AI capabilities that its open-source competitors lack, making it the smart choice for teams that want to accelerate product delivery and improve testing resilience.

Get started today

Overall, Applitools Execution Cloud offers a complete testing solution that helps teams improve their testing process and accelerate their product delivery speed. Run faster, more resilient Selenium tests with the Applitools Self-Healing Execution Cloud. Try it today and see the difference for yourself!

Learn more about Applitools Execution Cloud in our upcoming webinar.

Announcing Applitools Centra: UI Validation From Design to Implementation
https://applitools.com/blog/announcing-applitools-centra-ui-validation-from-design-to-implementation/
Mon, 10 Apr 2023

Centra collaboration

The user interface (UI) is the last frontier of differentiation for companies of all sizes. When you think about financial institutions, a lot of the services that they offer digitally are exactly the same. A lot of the services and the data they all tap into have been commoditized. What hasn’t been commoditized is the actual digital online experience – what it looks like and how you complete actions.

“Examined at an organizational level, a mature design thinking practice can achieve an ROI between 71% and 107%, based on a consistent series of inputs and outputs.”

The ROI Of Design Thinking, Forrester Business Case Report

The challenges of building UI

Easy version of design to production

Modern UIs today are built by a diverse set of teams that work together at different parts of the process. The pace at which these design, development, QA, operations, marketing, and product teams ship their work is continuing to accelerate – creating new challenges around communication, collaboration, and validation across the workflow.

Realistic version of design to production

Getting from design mock-ups in Figma to a live UI is a process that includes a lot of feedback and testing. It starts with the designer, who passes designs to the product manager for approval before the developer can start building. Feedback during development requires rework to make those updates before the result can get approval from the product manager. All of this happens before the testing team has even started their review.

You can see the game of telephone that’s played through different stakeholders on the way to production, and we end up with something that’s slightly different at multiple levels. This makes it incredibly hard to measure what actually happened and what actually needs to change, placing a huge burden on teams trying to ship clean UIs at a fast pace. Some of our main challenges here are:

  • Lack of communication between the growing group of stakeholders
  • Breadth of technology during implementation causing inconsistencies
  • No continued source of truth across tooling as the app UI evolves

How Applitools Centra helps UI teams collaborate

Applitools’ newest product Centra is a collaboration platform for teams of all sizes to alleviate these challenges. Applitools Centra enables organizations to track, validate, and collaborate on UIs from design to production. Centra uploads application designs from tools like Figma to the Applitools Test Cloud. Then, Centra compares the designs against current baselines in local, staging, or production environments. Designers, developers, testers, or digital leaders then validate that their application interface looks exactly as it was intended.

Benefits of using Applitools Centra

  • Less drift in the UI: By comparing design and implementation throughout the development lifecycle, teams can cut down on the amount of drift between design and production that occurs in their UI.
  • Design as documentation: Disseminate designs as a single source of truth across teams so that QA teams will know exactly what interfaces are supposed to look like during validation. 
  • Increased cross-functional collaboration: Teams from different functions across the design-to-experience process can all communicate over the interfaces that they are shipping. Product Managers, Designers, and Developers can all have equal visibility into what actually makes it to production.
  • Catching bugs earlier: Shift left into design and catch bugs earlier in the SDLC – right at the moment of implementation, when the cost to fix is at its lowest.

Start using Applitools Centra

Check out the full demo of Centra in our announcement webinar. Centra is free to use for teams, and you can sign up for the waitlist to start using it on your teams.

Top 10 Visual Testing Tools
https://applitools.com/blog/top-10-visual-testing-tools/
Fri, 03 Mar 2023

Introduction

Visual regression testing, a process to validate user interfaces, is a critical aspect of DevOps and CI/CD pipelines. UI often determines the drop-off rate of an application and is directly tied to customer experience. A misbehaving front end is detrimental to a tech brand and must be avoided like the plague.

What is Visual Testing?

Manual testing procedures are not enough to catch intricate UI modifications. Automation scripts could be a solution but are often tedious to write and deploy. Visual testing, therefore, is a crucial element that detects changes to the UI and helps devs flag unwanted modifications.

Every visual regression testing cycle has a similar structure: some baseline images or screenshots of a UI are captured and stored. After every change to the source code, a visual testing tool takes snapshots of the visual interface and compares them with the initial baseline repository. The test fails if the images do not match, and a report is generated for your dev team.
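As a concrete illustration of that cycle, here is a minimal sketch of the compare step in JavaScript using the open-source pngjs and pixelmatch libraries; the libraries, file paths, and threshold are our own example choices, not part of any tool reviewed below:

const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

// Load the stored baseline and the freshly captured screenshot.
const baseline = PNG.sync.read(fs.readFileSync('baseline/home.png'));
const current = PNG.sync.read(fs.readFileSync('current/home.png'));
const { width, height } = baseline;
const diff = new PNG({ width, height });

// Count mismatched pixels; differences are drawn into the diff image.
const numDiffPixels = pixelmatch(
  baseline.data, current.data, diff.data, width, height, { threshold: 0.1 });

// Persist the highlighted diff for the dev team's report.
fs.writeFileSync('diff/home.png', PNG.sync.write(diff));

// Fail the cycle if the images do not match.
if (numDiffPixels > 0) {
  console.error(`Visual test failed: ${numDiffPixels} pixels differ.`);
  process.exit(1);
}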

Revolutionizing visual testing is Visual AI – a game-changing technology that automates the detection of visual issues in user interfaces. It also enables software testers to improve the accuracy and speed of testing. With machine learning algorithms, Visual AI can analyze visual elements and compare them to an established baseline to identify changes that may affect user experience. 

From font size and color to layout inconsistencies, Visual AI can detect issues that would otherwise go unnoticed. Automated visual testing tools powered by Visual AI, such as Applitools, improve testing efficiency and provide faster and more reliable feedback. The future of visual testing lies in Visual AI, and it has the potential to significantly enhance the quality of software applications.

Benefits of Visual Testing for Functional Testing

Visual testing is a critical aspect of software testing that involves analyzing the user interface and user experience of an application. It aims to ensure that the software looks and behaves as expected, and all elements are correctly displayed on different devices and platforms. Visual testing detects issues such as layout inconsistencies, broken images, and text overlaps that can negatively impact the user experience. 

Automated visual testing tools like Applitools can scan web and mobile applications and identify any changes to visual elements. Effective visual testing can help improve application usability, increase user satisfaction, and ultimately enhance brand loyalty.

Visual testing and functional testing are two essential components of software testing that complement each other. While functional testing ensures the application’s features work as expected, visual testing verifies that the application’s visual elements, such as layout, fonts, and images, are displayed correctly. Visual testing benefits functional testing by enhancing test coverage, reducing testing time and resources, and improving the accuracy of the testing process.

Some more benefits of visual testing for functional testing are as follows:

  1. Quicker test script creation: Tedious functional tests built on undependable assertion code can be replaced by automated visual tests for a page or region. This can be achieved with Applitools Eyes, which captures your screen and sends it to the Visual AI system for in-depth analysis.
  2. Slash debugging time to minutes: Visual testing cuts the time spent debugging functional tests down to minutes. Applitools’ Root Cause Analysis shows the CSS and DOM differences behind each web app bug, pinpointing the visual variance and cutting time requirements.
  3. Maintaining functional tests more effectively: Applitools Eyes, which uses Visual AI, groups similar modifications from the various screens of the application. Each change can then be classified as expected or unexpected with one easy click, making maintenance much simpler than evaluating assertion code.

Further reading: https://applitools.com/solutions/functional-testing/

Top 10 Visual Testing Tools

The following section consists of 10 visual testing tools that you can integrate with your current testing suite.

1. Aye Spy

A visual regression tool, often underrated, Aye Spy is open-source and heavily inspired by BackstopJS and Wraith. At its core, the creators had one issue they wanted to tackle: performance. Most visual regression tools on the market miss this key element; Aye Spy incorporates it, performing 40 UI comparisons in under 60 seconds (with an optimal setup, of course)!

Features:

  • Aye Spy requires Selenium Grid to work. Selenium Grid aids parallel testing on several computers, helping devs breeze through cross-browser testing. The creators of Aye Spy recommend using Docker images of Selenium for consistent results.
  • Amazon’s S3 is a data storage service used by firms across the globe. Aye Spy supports AWS S3 bucket for storing snapshots in the cloud.
  • The tool aims to maximize the testing performance by comparing up to 40 images in less than a minute with a robust setup. 

Advantages:

  • Aye Spy comes with clean documentation that helps you navigate the tool efficiently.
  • It is easy to set up and use. Aye Spy comes in a Docker package that is simple and straightforward to execute on multiple machines.

2. Applitools

One of the most popular tools in the market, Applitools, is best known for employing AI in visual regression testing. It offers feature-rich products like Eyes, Ultrafast Test Cloud, and Ultrafast Grid for efficient, intelligent, and automated testing. 

Applitools is 20x faster than conventional test clouds, is highly scalable for your growing enterprise, and is super simple to integrate with all popular frameworks, including Selenium, WebDriver IO, and Cypress. The tool is state of the art for all your visual testing requirements, with the ‘smarts’ to know what minor changes to ignore, without any prior settings.

Applitools’ Auto-Maintenance and Auto-Grouping features are handy. According to the World Quality Report 2022-23, maintainability is the most important factor in determining test automation approaches, yet it typically demands a small army of testers and DevOps professionals on their toes, ready to resolve a wave of bugs.

Cumbersome and expensive, this can derail your strategy and harm your reputation. This is where Applitools comes in: Auto-Grouping categorizes the bugs while Auto-Maintenance resolves them, leaving you the flexibility to jump in wherever needed.

Applitools Eyes is a Visual AI product that dramatically minimizes coding while maximizing bug detection and simplifying test maintenance. Eyes mimics the human eye to catch visual regressions with every app release. It can identify dynamic elements like ads or other customizations and ignore or compare them as desired.

Features:

  • Applitools invented Visual AI – a concept combining artificial intelligence with visual testing, making the tool indispensable in a competitive market. 
  • Applitools Eyes is intelligent enough to ignore dynamic content and minor modifications, without your intervention.
  • Applitools acts as an extension to your available test suite. It integrates seamlessly with all popular test automation frameworks like Selenium, Cypress, Playwright, and others, as well as low-code tools like Tosca, Testim.io, and Selenium IDE.
  • Applitools provides Smart Assist that suggests improvements to your tests. You can analyze the generated report containing high-fidelity snapshots with regressions highlighted and execute the recommended tests with one click. 
  • Applitools simplifies bug fixes by automating maintenance – a feature that can minimize your testing hassles to almost zero.

Advantages:

  • Applitools makes cross-browser testing a breeze. With its Ultrafast Test Cloud, you can test your app across varying devices, browsers, and viewports with much faster and more efficient throughput (see the sketch after this list).
  • Not only does Eyes allow mobile and web access, but it also facilitates testing on PDFs and Components. 
  • Applitools is all for cyber security and eliminates the requirement for tunnel configuration. You can choose where to deploy the tool – a private cloud or a public one, without any security woes. 
  • Applitools uses Root Cause Analysis to tell you exactly where the regressions are without any unnecessary information or jargon.
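
Here’s a rough sketch of how a cross-browser run is configured against the Ultrafast Grid in the Selenium JavaScript SDK – the browser list and concurrency are illustrative, and builder names can differ slightly between SDK versions:

const {
  Eyes,
  VisualGridRunner,
  Configuration,
  BrowserType,
  DeviceName,
} = require('@applitools/eyes-selenium');

// One locally-run test is rendered against every environment below in parallel.
const runner = new VisualGridRunner({ testConcurrency: 5 });
const config = new Configuration();
config.addBrowser(1200, 800, BrowserType.CHROME);
config.addBrowser(1200, 800, BrowserType.FIREFOX);
config.addBrowser(1200, 800, BrowserType.SAFARI);
config.addDeviceEmulation(DeviceName.iPhone_X);

const eyes = new Eyes(runner);
eyes.setConfiguration(config);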

Read more: Applitools makes your cross-browser testing 20x faster. Sign up for a free account to try this feature.

3. Hermione.js

Hermione, an open-source tool, streamlines integration and visual regression testing, though it is best suited to simpler websites. Prior knowledge of Mocha and WebdriverIO makes it easier to get started, and the tool facilitates parallel testing across multiple browsers. Hermione uses subprocesses to tame the computational load of parallel testing, and it lets you run a subset of tests simply by adding a path to the test folder.

Features:

  • Hermione reruns failed tests but uses new browser sessions to eliminate issues related to dynamic environments. 
  • Hermione can be configured with either the DevTools or the WebDriver protocol; the latter requires Selenium Grid (Selenium-standalone packages work too). See the config sketch after this list.
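
Here’s a rough sketch of a .hermione.conf.js – field names follow the project’s documentation, but double-check them against the Hermione version you install:

// .hermione.conf.js
module.exports = {
  // WebDriver endpoint – a Selenium Grid or selenium-standalone instance.
  gridUrl: 'http://localhost:4444/wd/hub',
  baseUrl: 'http://localhost:3000',

  browsers: {
    chrome: {
      desiredCapabilities: { browserName: 'chrome' },
      // Re-run a failed test in a fresh session before reporting it.
      retry: 1,
    },
    firefox: {
      desiredCapabilities: { browserName: 'firefox' },
    },
  },
};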

Advantages:

  • Hermione is user-friendly, allows custom commands, and offers plugins as hooks. Developers use this attribute to design test ecosystems.
  • Flaky failures are considerably reduced, since Hermione re-runs failed tests in fresh browser sessions.

4. Needle

Needle, supported by Selenium and Nose, is an open-source tool that is free to use. It follows the conventional visual testing structure and uses a standard suite of previously collected images to compare the layout of an app.

Features:

  • Needle executes the ‘baseline saving’ settings first to capture the initial screenshots of the interface. Running the same test again takes you to the testing mode where newer snapshots are taken and compared against the test suite.
  • Needle allows you to play with viewport sizes to optimize testing interactive websites.
  • Needle supports ImageMagick, PerceptualDiff, and PIL as comparison engines, with PIL being the default. ImageMagick and PerceptualDiff are faster than PIL and generate separate PNG files for failed test cases, highlighting the differences between the baseline and current layouts.

Advantages:

  • Needle saves images to your local machine, allowing you to archive or delete them. File cleanup can be easily activated from the CLI.
  • Needle has straightforward documentation that is beginner friendly and easy to follow.

5. Vizregress

Vizregress, a popular open-source tool, was created as a research project based on AForge.NET. Colin Williamson, the tool’s creator, set out to solve a crucial gap: Selenium WebDriver (which Vizregress uses under the hood) could not tell two layouts apart if the CSS elements stayed the same and only the visual rendering changed – a problem that could quietly break a website.

Vizregress uses AForge to compare every pixel of the new and baseline images and determine whether they are equal – a precise approach, though an inherently fragile one.

Features:

  • Vizregress automates visual regression testing using Selenium WebDriver. It uses Jenkins for continuous delivery. 
  • Vizregress allows you to mark zones on your webpage that you would like the tool to ignore during testing.
  • Vizregress requires consistent browser attributes like version and size.

Advantages:

  • Vizregress combines the features of Selenium WebDriver and AForge to provide a robust solution to a complex problem. 
  • Based on pixel analysis, the tool does an excellent job of identifying differences between baseline and new screenshots.

6. iOSSnapshotTestCase

Created by Jonathan Dann and Todd Krabach, iOSSnapshotTestCase was previously known as FBSnapshotTestCase and developed within Facebook – although Uber now maintains it. The tool uses the visual testing structure, where test screenshots are compared with baseline images of the UI.

iOSSnapshotTestCase uses tools like Core Animation and UIKit to generate screenshots of an iOS interface. These are then compared to specimen images in a repository. The test inevitably fails if the snapshots do not match. 

Features:

  • iOSSnapshotTestCase renames screenshots on the disk automatically. The names are generated based on the image’s selector and test class. Additionally, the tool generates a description of all failed tests.
  • The tool must be executed inside an app bundle or the Simulator to access UIKit. However, screenshot tests can still be written inside a framework but have to be saved as a test library bundle devoid of a Test Host.
  • A single test on iOSSnapshotTestCase can accommodate several screenshots. The tool also offers an identifier for this purpose.

Advantages:

  • iOSSnapshotTestCase lets a single test record and verify snapshots across multiple devices and operating system versions.
  • The tool automates manual tasks like renaming test cases and generates failure messages.

7. VisualCeption

VisualCeption uses a straightforward, 5-step process to perform visual regression testing. It uses WebDriver to capture a snapshot, JavaScript for calculating element sizes and positions, and Imagick for cropping and comparing visual components. An exception, if raised, is handled by Codeception.

It is essential to note here that VisualCeption is a function created for Codeception. Hence, you cannot use it as a standalone tool – you must have access to Codeception, Imagick, and WebDriver to make the most out of it.

Features:

  • VisualCeption generates HTML reports for failed tests.
  • The visual testing process spans just 5 steps. However, the long list of prerequisites could become a limitation for some teams.

Advantages:

  • VisualCeption is user-friendly once the setup is complete.
  • The report generation is automated on VisualCeption and can help you visualize the cause of test failure.

8. BackstopJS

BackstopJS is a testing tool that can be seamlessly integrated with CI/CD pipelines for catching visual regressions. Like others mentioned above, BackstopJS compares webpage screenshots with a standard test suite to flag any modifications exceeding a minimum threshold.

A popular visual testing tool, BackstopJS has formed the basis of similar tools like Aye Spy. 

Features

  • BackstopJS can be easily automated using CI/CD pipelines to catch and fix regressions as and when they appear.
  • Report generation is hassle-free and elaborates why a test failed – with appropriately marked components highlighting the regressions.
  • BackstopJS can be configured for multiple devices and operating systems, taking into account varying resolutions and viewport sizes (see the config sketch after this list).
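
Here’s a minimal config sketch using BackstopJS’s JS config flavor (URLs and labels are placeholders). Capture baselines once with backstop reference, then compare on every change with backstop test:

// backstop.config.js – run with: backstop test --config=backstop.config.js
module.exports = {
  id: 'example_suite',
  viewports: [
    { label: 'phone', width: 375, height: 667 },
    { label: 'desktop', width: 1440, height: 900 },
  ],
  scenarios: [
    {
      label: 'Homepage',
      url: 'https://example.com',
      // Only flag differences above this percentage threshold.
      misMatchThreshold: 0.1,
    },
  ],
  paths: {
    bitmaps_reference: 'backstop_data/bitmaps_reference',
    bitmaps_test: 'backstop_data/bitmaps_test',
    html_report: 'backstop_data/html_report',
  },
  engine: 'puppeteer',
  report: ['browser'],
};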

Advantages:

  • BackstopJS is open-source and hence, free to use. You can customize the tool per your demands (although this could often be more expensive in terms of resources).
  • The tool is easy to operate with an intuitive, beginner-friendly interface.

9. Visual Regression Tracker

Visual Regression Tracker is an exciting tool that goes the extra mile to protect your data. It is self-hosted, meaning your information never leaves your intranet.

In addition to the usual visual testing procedure, the tool helps you track your baseline images to understand how they change over time. Moreover, Visual Regression Tracker supports multiple languages including Python, Java, and JavaScript. 

Features:

  • Visual Regression Tracker is simple to use and straightforward to automate. It has no preference in automation tools and integrates easily with whichever one you use (see the sketch after this list).
  • The tool can ignore areas of an image you don’t want it to consider during testing.
  • Visual Regression Tracker works with any device, including smartphones, as long as it can capture screenshots.
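
As a rough sketch of wiring up the project’s JavaScript agent – class and field names are quoted from the SDK’s documentation as best we recall, so treat them as illustrative:

const { VisualRegressionTracker } = require('@visual-regression-tracker/sdk-js');

const vrt = new VisualRegressionTracker({
  apiUrl: 'http://localhost:4200', // your self-hosted VRT instance
  project: 'Default project',
  apiKey: process.env.VRT_API_KEY,
  branchName: 'main',
});

(async () => {
  // A base64 PNG captured by whatever automation tool you already use.
  const screenshotAsBase64 = '...';

  await vrt.start();
  await vrt.track({ name: 'Homepage', imageBase64: screenshotAsBase64 });
  await vrt.stop();
})();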

Advantages:

  • The tool is open-source and user-friendly. It is available in a Docker container, making it easy to set up and kickstart testing.
  • Your data is kept safe within your network with the self-hosting capabilities of Visual Regression Tracker.

10. Galen Framework

Galen Framework is an open-source tool for testing web UI. It is primarily used for interactive websites. Although developed in Java, the tool offers multi-language support, including CSS and JavaScript. Galen Framework runs on Selenium Grid and can be integrated with any cloud testing platform. 

Features:

  • Galen is great for testing responsive website designs. It allows you to specify a screen size, then resizes the browser window and captures screenshots as required.
  • Galen has built-in functions that facilitate more straightforward testing methods. These modules support complex operations like color scheme verification.

Advantages:

  • Galen Framework simplifies testing with enhanced syntax readability. 
  • The tool also offers HTML reports generated automatically for easy visualization of test failures.

Takeaway

Here is a quick recap of all the 10 tools mentioned above:

  1. Aye Spy: It runs up to 40 UI comparisons in less than a minute. Aye Spy could be your solution if you are looking for a high-performance tool.
  2. Applitools: It has numerous offerings, from Eyes to the Ultrafast Test Cloud, that automate the visual testing process and make it smart. Customers have noted a 50% reduction in maintenance effort and a 75% reduction in testing time. With Applitools, AI validation takes the front-row seat and helps you create robust test cases effortlessly while saving you the most critical resource in the world – time.
  3. Hermione: Hermione.js eliminates environment issues by re-running failed tests in a new browser session, which minimizes unexpected failures.
  4. Needle: Besides the usual visual regression testing functionality, the tool makes file cleanup easy: you choose whether to archive or delete your test images.
  5. Vizregress: Vizregress analyzes and compares every pixel to mark regressions. If your browser attributes (like size and version) stay constant throughout your testing process, Vizregress can be a good fit.
  6. iOSSnapshotTestCase: The tool caters to apps for iOS devices and automates test case naming and failure reporting.
  7. VisualCeption: Built for Codeception, VisualCeption combines several tools to achieve its results. The con is that the prerequisites are plenty – something either of the top 2 tools on this list avoids (note: Aye Spy does require Selenium Grid to function).
  8. BackstopJS: Multiple viewport sizes and screen resolutions are handled seamlessly by BackstopJS. Want a tool for multi-device testing? BackstopJS could be a good choice.
  9. Visual Regression Tracker: A holistic tool overall, Visual Regression Tracker lets you mark sections of your image for the tool to ignore, making your testing process more flexible and efficient.
  10. Galen Framework: Galen has built-in methods that make repetitive tasks easier.

A comparison chart of all the crucial features shows that most tools have attributes that are ambiguous or undocumented. Applitools stands out in this list, giving you a clear view of its properties.

This summary gives you a good idea of the critical features of all the tools mentioned in this article. However, if you are looking for one tool that does it all with minimal resources and effort, select Applitools. Not only did they spearhead Visual AI testing, but they also fully automate cross-browser testing, requiring little to no intervention from you.

Customers have reported excellent results – 75% less time spent on testing and a 50% reduction in maintenance effort. To learn how Applitools can seamlessly integrate with your DevOps pipeline, request your demo today.
Register for a free Applitools account.

How We Identified and Resolved a Bug Before Release Using Applitools Ultrafast Grid https://applitools.com/blog/cross-browser-testing-at-applitools-using-ultrafast-grid/ Tue, 07 Feb 2023 16:43:16 +0000 https://applitools.com/?p=46837

Applitools Ultrafast Grid cross-browser testing

This is a story about how standard tests failed to identify a bug: the CSS and HTML files were valid, but when rendered in Chrome on a Mac, images were not displayed correctly. Applitools Ultrafast Grid (UFG) helped us identify the bug at an early stage in development, before deploying the change. Bugs like these are a regular occurrence in any organization, and without UFG they can easily make it to production and remain there undetected until a customer complains about the problem. Translation support from Michael Sedley.

Front-end development is complicated, and it involves a wide range of knowledge and tools to build web applications. With so many systems, browsers, and devices to cover, regression testing can hardly guarantee that an application will display correctly everywhere, or that a minor code change has introduced no visual regressions.

A real-life example occurred to me during my first week as an Applitools employee, when I fixed a minor bug using my Linux machine. Inadvertently, in the process, I created a more serious visual bug which was only visible on certain devices.

Had UFG not alerted me to the bug, the code would have gone to production and the result would have affected the usability of our flagship product on a Mac. This would have reflected badly on the professionalism of the company and would have affected the company representation, trust in our product, and sales.

Understanding the problem

In recent months, we improved Applitools Eyes’ ability to perform visual testing on images which are semi-transparent. In the past, Eyes would test an entire screen or defined region, but now using the Storybook SDK, users can automatically test each component separately, without needing to define a test for each component.

For example, when testing a gallery component, Eyes can identify visual bugs and regressions over all screen elements, including the appearance of buttons, controls, fonts, shadows, images, as well as backgrounds that include a transparency gradient.

Figure 1: Transparent Background

After implementing the transparency feature, a visual bug was reported.  In a screen capture of a semi-transparent screen region, unexpected grid lines appeared on top of the tested image.

The root cause of these lines wasn’t clear, so as a first step, we developed a test plan to reproduce the issue. I created a semi-transparent image, all gray (rgb = 127,127,127), with a constant alpha (transparency) channel (alpha=0.5). Fortunately, the bug reproduced right away, with clearly identifiable grid lines:

As I experimented with different transparency settings, it was clear that the color of the grid lines was the same as the color of the image, and it became stronger as the image transparency was lower.

After further investigation, I discovered that the image viewer component uses tiles to represent large images, and the tiles had a one pixel overlap. In the past, when all images were RGB images with no transparency, the overlapping pixel was not visible to the human eye. Once I added semi-transparency, adjacent tiles were stacked on top of each other along the overlap, and sampling the resulting color inside a grid line produced a grayscale value of about 160 – noticeably darker than the roughly 192 of a single half-transparent layer – which is exactly the outcome of stacking two half-transparent gray layers over a white background:

(white ⋅ 0.5 + gray ⋅ 0.5) ⋅ 0.5 + gray ⋅ 0.5 = 159.75
given (white := 255, gray := 128)
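
To make the arithmetic concrete, here’s a small JavaScript sketch of the source-over blending rule used above (the helper name over is ours, not from the Eyes codebase):

// Source-over compositing: blend a foreground channel value onto a
// background at the given alpha.
function over(fg, bg, alpha) {
  return fg * alpha + bg * (1 - alpha);
}

const white = 255, gray = 128, alpha = 0.5;
const oneLayer = over(gray, white, alpha);     // 191.5 – the image away from the grid lines
const twoLayers = over(gray, oneLayer, alpha); // 159.75 – the darker line where tiles overlap
console.log(oneLayer, twoLayers);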

Solving the preliminary bug

To resolve this bug, I recalculated the position and scale of each region so that there would not be an overlap and the line between regions would not appear.
For example, if the first tile had width: 480px and left: 0, the next adjacent tile should be positioned using left: 480, so that there is a zero pixel overlap between tiles.
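
Here’s a tiny illustrative JavaScript sketch of the idea – not the actual viewer code – where each tile’s left edge starts exactly where the previous one ends:

// Position tiles edge-to-edge so adjacent tiles never share a pixel:
// tile 0 at 0px, tile 1 at 480px, tile 2 at 960px, and so on.
const tileWidth = 480;

function positionTiles(tiles) {
  tiles.forEach((tile, i) => {
    tile.style.left = `${i * tileWidth}px`;
    tile.style.width = `${tileWidth}px`;
  });
}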

I tested the results on my local (Linux) machine and assumed that the issue was resolved.

I didn’t realize that when I fixed this bug, I had also created a new issue which would have been almost impossible to anticipate.

How UFG identified the bug I created before deployment

At Applitools, we understand the importance of quality visual testing across browsers, so before deployment, every code change that impacts the Eyes interface must be tested by Applitools Eyes using UFG.

We are proud to “eat our own dogfood.” We rely on our visual testing tools to make sure that our products are visually perfect before release.

Our integration pipeline is configured to use UFG to test the UI change on multiple devices and screen settings so that we can confirm that the interface is consistent on every browser, operating system, and screen size.

We discovered that fixing the bug of a one pixel overlap created a new bug on certain systems where there was a gap visible between tiles.  Frustratingly, this bug was not reproducible in any of the devices used in development, and could not have been discovered with conventional manual visual testing.

The bug was only visible on screens with Retina display, which use HiDPI scaling.

What was interesting about this bug, is that it highlighted an inconsistency in the way that the same browser (Chrome) displays the same UI on different screen types.

What happened?

The bug and the solution

After some research, it turns out that there is (seemingly) a bug in the way Chrome behaves on Mac computers with Retina display (see 1, 2, 3). It turns out that using percentages or fractions of pixels for positioning and scaling of elements can lead to unexpected results.

So, what is the solution?

The solution itself is very elegant – all we had to do was round the scale factor so that the scaled canvas size would always be an integer:

scale = Math.round(scale * canvasSize) / canvasSize;

Thus, if the width of the canvas is 480 and our scale factor is 0.17, the width of our scaled canvas would not be 480 * 0.17 = 81.6, but would be 82 – this way we maintain compatibility with Retina displays and prevent unwanted gaps from being created.
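
Working through those numbers as a quick JavaScript sketch:

const canvasSize = 480;
let scale = 0.17;

// Naive scaling yields a fractional pixel width that Chrome on
// Retina/HiDPI screens may turn into a visible gap between tiles.
console.log(canvasSize * scale); // 81.6

// The fix: snap the scale so the scaled canvas lands on a whole pixel.
scale = Math.round(scale * canvasSize) / canvasSize;
console.log(canvasSize * scale); // 82 (up to floating-point rounding)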

This bug was easy to resolve once we were aware of it, but without UFG we would never have identified it using any of our test computers.

Conclusion

Maintaining a quality front end for all configurations is an ongoing challenge in every company and every organization.

Solving a bug for one audience can create a bigger bug for a wider audience. In this article, we saw a classic example of a malfunction, where the initial solution we implemented only made things worse.

Far fewer users test semi-transparent components with Applitools Eyes than work with Eyes on Retina displays (most Apple users) – so the initial approach we took to solve the problem could have caused significantly more harm than good. Even worse – we could have significantly damaged the user experience without knowing about it. No modern organization wants to rely on frustrated customer feedback to discover bugs in its applications or websites.

Using UFG reduces the likelihood that errors of this type will pass under the radar and allows developers, product managers, and all stakeholders in the development process to significantly reduce the fear factor in deploying new features. The UFG is insurance against platform-dependent visual bugs and provides the ability to perform true multi-platform coverage.

Don’t wait to discover your visual bugs from user reports. We invite you to try UFG – our team of experts is here to help with any questions or problems and to assist you in bringing Applitools Eyes and UFG into your integration pipeline. For more information, see Introduction to the Ultrafast Grid in the Applitools Knowledge Center.

How Applitools Eyes Uses AI To Automate Test Analysis and Maintenance at Scale https://applitools.com/blog/automating-test-maintenance-and-analysis/ Mon, 07 Nov 2022 11:54:26 +0000 https://applitools.com/?p=44166

As teams get bigger and mature their testing strategy alongside the needs of business, new challenges in their process often arise. One of those challenges is that the analysis and maintenance of tests and their results at scale can be incredibly cumbersome and time-consuming.

While a lot of emphasis gets put on “creating” new tests and reducing the time it takes to run them across different environments, there doesn’t seem to be the same emphasis on dealing with the results and repercussions of them.

Let’s say you have a test that validates a checkout experience and you want to expand that testing to the top 10 browsers. Just two bugs along that test scenario would produce 20 errors that need to be analyzed and then actioned on. This entire back and forth can become untenable in the rapid CI/CD environments present in many businesses. We basically have to choose to ignore our test results at this point if we want to get anything productive done.

This is where Auto-Grouping and Auto-Maintenance from Applitools come in, as it allows AI to quickly and accurately assess results just as an army of testers would!

What Is Automatic Analysis?

Applitools Auto-Grouping helps group together similar bugs that occur in different environments like browsers, devices, and screen sizes. You can group these bugs across entire test runs, test steps, or specific environments, letting you really fine-tune your automation.

Building on the scenario above, let’s assume we scale up to 20 browsers and hit the same 2 bugs in each run – a total of 40 errors! When we enable Auto-Grouping, those errors are grouped together and presented as just 2 bugs – making it much easier to analyze what is actually going wrong in our interface and cutting down on chasing repeat bugs.

What is Automatic Maintenance?

Auto-Maintenance builds on Auto-Grouping by automating the process of updating tests based on their test results. Auto-Maintenance also enables users to set granular controls over what gets updated automatically between checkpoints, test runs, and more.

Again, taking a look at the above example, if we accepted a new baseline on one browser, we’d have to accept it on the other 19 browsers manually – taking up a ton of time. When a new baseline is accepted, Auto-Maintenance can apply that acceptance across all similar environments saving you hours of writing new tests that would accommodate those new baselines.

How Sonatype Saves Money & Time With Automatic Maintenance

Jamie Whitehouse and the development team at Sonatype spent time on each release working to uncover and address new failures and bugs across different browsers. Often, this work took the form of spot checks across the 1,000+ pages of the application during development. In reality, this work, and the inherent risk of unintended changes, slowed the delivery of the product to market.

Now, if Sonatype engineers make a change in their margins across a number of pages, all the differences show up as highlights in Applitools. Features in Applitools like Auto-Maintenance make visual validation a time saver. Auto-Maintenance lets engineers quickly accept identical changes across a number of pages – leaving only the unanticipated differences. As Jamie says, Applitools takes the guesswork out of testing the rendered pages. 

Start Saving Your Testing Time

To get started with automatically maintaining and analyzing your tests, you can check out our documentation here.

You’ll need a free account, so be sure to sign up for Applitools.

Introducing Monorepo Support, New APIs, and more with Applitools 10.15 https://applitools.com/blog/introducing-monorepo-support-new-apis-and-more-with-applitools-10-15/ Tue, 19 Jul 2022 13:57:04 +0000 https://applitools.com/?p=40513

We’re excited to announce the latest release of Applitools Eyes, which comes with a number of new enhancements that our customers have been asking for. Applitools Eyes 10.15 is now available and can be accessed in the dashboard.

Support For Monorepos in Git

Applitools now supports monorepos for all major Git providers, allowing teams to add Visual AI to large, complex, shared code repositories, using tags and PR titles to separate teams and logic inside Applitools. A monorepo is a popular method of repository organization for teams looking for maximum speed and collaboration across their codebase, but it can introduce complexity for the tools that work with the repo. Applitools can now granularly run and test sections of the repo as if they were separate repositories.

Support For Multiple Git Repos In One Account

Continuing our Git hot streak, Applitools now also supports integrating multiple GitHub organizations into a single Applitools team. Partners, agencies, and large companies that split work across several GitHub organizations can now manage all their projects from one Applitools account.

Enhanced Support For Dynamic Region Validation

When using coded regions based on an element identifier, Applitools Eyes 10.15 can now adjust the region automatically and make sure it covers the most up-to-date element dimensions. This ignores irrelevant diffs and saves more of your time!
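
For context, a coded region pinned to an element looks roughly like this in the Selenium JavaScript SDK (a fragment that assumes an open Eyes session; the selector is a placeholder):

const { Target } = require('@applitools/eyes-selenium');

// The ignore region is anchored to a selector rather than fixed coordinates;
// as of 10.15, Eyes keeps it in sync with the element's current dimensions,
// so a banner that grows taller no longer produces irrelevant diffs.
await eyes.check('Dashboard', Target.window().fully().ignoreRegions('#ad-banner'));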

New REST API Endpoints

The Applitools REST API has a few new endpoints that enable teams to interact with Applitools at scale. In Applitools Eyes 10.15 we’ve added the ability to validate API keys and edit batches programmatically.

To try out these great new enhancements in the latest release of Applitools Eyes 10.15, get started with Applitools today.

Storybook Play Functions Now Supported in Applitools https://applitools.com/blog/storybook-play-functions-now-supported-in-applitools/ Tue, 07 Jun 2022 17:59:30 +0000 https://applitools.com/?p=39154

Example screenshot of Storybook Play Functions in Applitools

Summer is a time for new things and a time for play. We’re excited to announce that the Applitools Storybook SDK now supports Play Functions, giving modern frontend teams even more power when it comes to testing their component systems before production. Play Functions bring rich interactivity to your Storybook stories, letting you exercise components and test scenarios that previously required user intervention. That means interactions like form fills or date pickers in your component system can now be tested automatically! This capability was introduced in Storybook version 6.4 and is now available in Applitools Storybook SDK version 3.28.

Applitools Storybook SDK can now consume these interactions through Play Functions and apply Visual AI to help your team spot any visual regression or defect in a component. For stories that have the play function, Applitools will automatically take a screenshot after the play function is finished.
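
For example, a story with a play function might look like the following sketch (the LoginForm component and its labels are illustrative):

// LoginForm.stories.js – a CSF story whose play function drives the component.
import { within, userEvent } from '@storybook/testing-library';
import { LoginForm } from './LoginForm';

export default { title: 'Forms/LoginForm', component: LoginForm };

export const FilledForm = {
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement);
    // Interactions that used to need a human: fill the form, then submit.
    await userEvent.type(canvas.getByLabelText('Email'), 'user@example.com');
    await userEvent.type(canvas.getByLabelText('Password'), 'hunter2');
    await userEvent.click(canvas.getByRole('button', { name: 'Log in' }));
  },
};

With this in place, running the SDK’s eyes-storybook command captures the FilledForm story only after play() has finished, so the screenshot shows the filled-in state.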

To learn more about this specific feature, you can read our Storybook readme on NPM or the official Storybook Play Article.

Learn how to automatically do ultrafast cross-browser testing for Storybook components without needing to write any new test automation code in Testing Storybook Components in Any Browser by Andrew Knight.

Happy testing!

Introducing Applitools Native Mobile Grid https://applitools.com/blog/introducing-applitools-native-mobile-grid/ Thu, 14 Apr 2022 14:09:56 +0000 https://applitools.com/?p=36100

Last year, Applitools launched the Ultrafast Grid, the next generation of browser testing clouds for faster testing across multiple browsers in parallel. The success of the new grid with our customer base has been nothing short of amazing, having over 200 customers using Ultrafast Grid in the last year. But our customers are hungry for more innovation and we wanted to focus on extending our Applitools Test Cloud to the next frontier: native mobile apps.

Today, Applitools is excited to announce that the Native Mobile Grid is now generally available – giving engineering and QA teams access to the next generation of cross-device testing.

For those developing native mobile apps, testing across multiple devices and orientations poses many challenges, and a high number of bugs slip into production as a result. Local devices are hard to set up, and owning a vast collection doesn’t work well for remote companies in a post-Covid world. Not to mention that each device takes a bit of custom configuration and wizardry to run without flakiness – and mobile test frameworks are often flaky on the big cloud providers.

Applitools Native Mobile Grid is a cloud based testing grid that allows testers and developers to automate testing of their mobile applications across different iOS and Android devices quickly, accurately, and without hassle. After running just one test locally, the Applitools Native Mobile Grid will asynchronously run the tests in parallel using Visual AI, speeding up total execution tremendously and reducing flakiness. We’ve seen test time reduce by over 80% when run against other popular testing clouds.

The Benefits Of The Native Mobile Grid

Faster Test Execution, Broad Coverage

With access to over 40 devices, Applitools’ revolutionary async parallel test execution can reduce testing time by up to 90% compared to traditional device clouds, while expanding coverage far beyond the single device you’ve been testing with.

Less Test Flakiness

Visual AI powers Applitools’ industry-leading stability and reliability, with flakiness and false positives reduced by 99%.

More Bugs Caught

Testing faster, on more devices, with Visual AI means that more bugs & defects are caught without having to write more tests.

Added Security

The Native Mobile Grid does not need to open a tunnel into your network, so your application stays safe and secure.

Get Started

To get started with Native Mobile Grid, just head on over and fill out this form.
