Functional Testing’s New Friend: Applitools Execution Cloud

Dmitry Vinnik explores how the Execution Cloud and its self-healing capabilities can be used to run functional test coverage.


In the fast-paced and competitive landscape of software development, ensuring the quality of applications is of utmost importance. Functional testing plays a vital role in verifying the robustness and reliability of software products. As applications grow more complex, use cases multiply, and release cycles shorten, organizations are challenged to conduct thorough functional testing across different platforms, devices, and screen resolutions.

This is where Applitools, a leading provider of functional testing solutions, becomes a must-have tool with its innovative offering, the Execution Cloud.

Applitools’ Execution Cloud is a game-changing platform that revolutionizes functional testing practices. By harnessing the power of cloud computing, the Execution Cloud eliminates the need for resource-heavy local infrastructure, providing organizations with enhanced efficiency, scalability, and reliability in their testing efforts. The cloud-based architecture integrates with existing testing frameworks and tools, empowering development teams to execute tests across various environments effortlessly.

This article explores how the Execution Cloud and its self-healing capabilities can be used to run our functional test coverage. We demonstrate this cloud platform’s features, like automatically fixing selectors broken by a change in the production code.

Why Execution Cloud

As discussed, the Applitools Execution Cloud is a great tool to enhance any team’s quality pipeline.

One of the main features of this cloud platform is that it can “self-heal” our tests using AI. For example, if, during refactoring or debugging, one of the web elements had its selectors changed and we forgot to update related test coverage, the Execution Cloud would automatically fix our tests. This cloud platform would use one of the previous runs to deduce another relevant selector and let our tests continue running. 

This self-healing capability of the Execution Cloud allows us to focus on actual production issues without getting distracted by outdated tests. 

Functional Testing and Execution Cloud

It’s fair to say that Applitools has been one of the leading innovators and pioneers in visual testing with its Eyes platform. However, with the Execution Cloud in place, Applitools offers its users broader, more scalable test capabilities. This cloud platform lets us focus on all types of functional testing, including non-Visual testing.

One of the best features of the Execution Cloud is that it’s effortless to integrate into any test case with just one line. There is also no requirement to use the Applitools Eyes framework. In other words, we can run any functional test without creating screenshots for visual validation while utilizing the self-healing capability of the Execution Cloud.

Adam Carmi, Applitools CTO, demos the Applitools Execution Cloud and explores how self-healing works under the hood in this on-demand session.

Writing Test Suite

As we mentioned earlier, the Execution Cloud can be integrated with most test cases we already have in place! The only consideration is that, at the time of writing this post, the Execution Cloud only supports Selenium WebDriver across all languages (Java, JavaScript, Python, C#, and Ruby), WebdriverIO, and any other WebDriver-based framework. However, more test frameworks will be supported in the near future.

Fortunately, Selenium is a widely used testing framework, giving us plenty of room to demonstrate the power of the Execution Cloud and functional testing.

Setting Up Demo App

Our demo application will be a documentation site built using the Vercel Documentation template. It’s a simple app that uses Next.js, a React framework created by Vercel, a cloud platform that lets us deploy web apps quickly and easily.

To note, all the code for our version of the application is available here.

First, we need to clone the demo app’s repository: 

git clone git@github.com:dmitryvinn/docs-demo-app.git

We will need Node.js version 10.13 or later to work with this demo app, which can be installed by following the steps here.

After we set up Node.js, we should navigate into the project’s directory:

cd docs-demo-app

Next, open a terminal and run the following command to install the necessary dependencies:

npm install

Finally, we start the app locally:

npm run dev

Now our demo app is accessible at ‘http://localhost:3000/’ and ready to be tested.

Docs Demo App 

Deploying Demo App

While the Execution Cloud allows us to run the tests against a local deployment, we will simulate the production use case by running our demo app on Vercel. The steps for deploying a basic app are very well outlined here, so we won’t spend time reviewing them. 

After we deploy our demo app, it will appear as running on the Vercel Dashboard:

Demo App Deployed on Vercel

Now, we can write our tests for a production URL of our demo application available at `https://docs-demo-app.vercel.app/`.

Setting Up Test Automation

Execution Cloud offers great flexibility when it comes to working with our tests. Rather than re-writing our test suites to run against this self-healing cloud platform, we simply need to update a few lines of code in the setup part of our tests, and we can use the Execution Cloud. 

For our article, our test case will validate navigating to a specific page and pressing a counter button. 

To make our work even more effortless, Applitools offers a great set of quickstart examples that were recently updated to support the Execution Cloud. We will start with one of these samples using JavaScript with Selenium WebDriver and Jest as our baseline.

We can use any Integrated Development Environment (IDE) to write tests like IntelliJ IDEA or Visual Studio Code. Since we use JavaScript as our programming language, we will rely on NPM for the build system and our test runner.

Our tests will use Jest as its primary testing framework, so we must add a particular configuration file called `jest.config.js`. We can copy-paste a basic setup from here, but in its shortest form, the required configurations are the following.

module.exports = {
  clearMocks: true,
  coverageProvider: "v8",
};

Our tests will require a `package.json` file which should include the Jest, Selenium WebDriver, and Applitools packages. The dependencies section of the `package.json` file should eventually look like the one below:

"dependencies": {

      "@applitools/eyes-selenium": "^4.66.0",

      "jest": "^29.5.0",

      "selenium-webdriver": "^4.9.2"

    },

After we install the above dependencies, we are ready to write and execute our tests.

Writing the Tests

Since we are running a purely functional Applitools test with its Eyes disabled (meaning we do not have a visual component), we will need to initialize the test and have a proper wrap-up for it.

In `beforeAll()`, we can set our test batching and naming along with configuring an Applitools API key.
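For illustration, here is a minimal sketch of what that setup could look like with Jest and the `@applitools/eyes-selenium` package; the app and batch names are placeholders, and the API key itself comes from the environment variable shown later in this post:

const { BatchInfo } = require('@applitools/eyes-selenium');

const APP_NAME = 'Docs Demo App';
let driver;
let batch;

beforeAll(async () => {
  // Name the batch that all tests in this run will be grouped under
  batch = new BatchInfo('Docs Demo App - Execution Cloud');
});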

To enable Execution Cloud for our tests, we need to ensure that we activate this cloud platform on the account level. After that’s done, in our tests’ setup, we will need to initialize the WebDriver using the following code:

let url = await Eyes.getExecutionCloudUrl();

driver = new Builder().usingServer(url).withCapabilities(capabilities).build();
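For reference, the snippet above assumes imports and a capabilities object along these lines (the browser choice here is just an example):

const { Builder } = require('selenium-webdriver');
const { Eyes } = require('@applitools/eyes-selenium');

const capabilities = { browserName: 'chrome' };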

For our test case, we will open a demo app, navigate to another page, press a counter button, and validate that the click incremented the value of clicks by one.

describe('Documentation Demo App', () => {
  …
  test('should navigate to another page and increment its counter', async () => {
    // Arrange - go to the home page
    await driver.get('https://docs-demo-app.vercel.app/');

    // Act - go to another page and click a counter button
    await driver.findElement(By.xpath("//*[text() = 'Another Page']")).click();
    await driver.findElement(By.className('button-counter')).click();

    // Assert - validate that the counter was clicked
    const finalClickCount = await driver.findElement(By.className('button-counter')).getText();
    expect(finalClickCount).toContain('Clicked 1 times');
  });
  …
});

Another critical aspect of running our test is that it’s a non-Eyes test. Since we are not taking screenshots, we need to tell the Execution Cloud when a test begins and ends. 

To start the test, we should add the following snippet inside the `beforeEach()` that will name the test and assign it to a proper test batch:

await driver.executeScript(
  'applitools:startTest',
  {
    'testName': expect.getState().currentTestName,
    'appName': APP_NAME,
    'batch': { 'id': batch.getId() }
  }
);

Lastly, we need to tell our automation when the test is done and what its results were. We will add the following code that sets the status of our test in the `afterEach()` hook:

await driver.executeScript('applitools:endTest', { 'status': testStatus });

Now, our test is ready to be run on the Execution Cloud.

Running the Test

To run our test, we need to set the Applitools API key. We can set it in a terminal session or as a global environment variable:

export APPLITOOLS_API_KEY=[API_KEY]

In the above command, we need to replace [API_KEY] with the API key for our account. The key can be found in the Applitools Dashboard, as shown in this FAQ article.

Now, we need to navigate to the directory where our tests are located and run the following command in the terminal:

npm test
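For this to work, the `package.json` is assumed to include a test script that invokes Jest, for example:

"scripts": {
  "test": "jest"
},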

It will trigger the test suite that can be seen on the Applitools Dashboard:

Applitools Dashboard with Execution Cloud enabled

Execution Cloud in Action

It’s a well-known fact that apps go through a lifecycle. They get created, get bugs, change, and are ultimately shut down. This ever-changing lifecycle is what causes our tests to break. Whether it’s due to a bug or an accidental regression, it’s common for a test to fail after a change in an app.

Let’s say a developer working on a counter button component changes its class name to `button-count` from the original `button-counter`. There could be many reasons this change could happen, but nevertheless, these modifications to the production code are extremely common. 

What’s even more common is that the developer who made the change might forget or not find all the tests using the original class name, `button-counter`, to validate this component. As a result, these outdated tests would start failing, distracting us from investigating real production issues, which could significantly impact our users.

Execution Cloud and its self-healing capabilities were built specifically to address this problem. This cloud platform would be able to “self-heal” our tests that were previously running against a class name `button-counter`, and rather than failing these tests, the Execution Cloud would find another selector that hasn’t changed. With this highly scalable solution, our test coverage would remain the same and let us focus on correcting issues that are actually causing a regression in production.

Although we are running non-Eyes tests, the Applitools Dashboard still gives us several valuable artifacts, like a video recording of our test and the ability to export WebDriver commands!

Want to see more? Request a free trial of Applitools Execution Cloud.

Conclusion

Whether you are a small startup that prioritizes quick iterations, or a large organization that focuses on scale, Applitools Execution Cloud is a perfect choice for any scenario. It offers a reliable way for tests to become what they should be – the first line of defense in ensuring the best customer experience for our users.

With the self-healing capabilities of the Execution Cloud, we get to focus on real production issues that actively affect our customers. With this cloud platform, we are moving towards a space where tests don’t become something we accept as constantly failing or a detriment to our developer velocity. Instead, we treat our test coverage as a trusted companion that raises problems before our users do. 

With these capabilities, Applitools and its Execution Cloud quickly become a must-have for any developer workflow, supercharging the productivity and efficiency of every engineering team.

Top 10 Visual Testing Tools


Introduction

Visual regression testing, a process to validate user interfaces, is a critical aspect of the DevOps and CI/CD pipelines. UI often determines the drop-off rate of an application and is directly concerned with customer experience. A misbehaving front end is detrimental to a tech brand and must be avoided like the plague.

What is Visual Testing?

Manual testing procedures are not enough to understand intricate UI modifications. Automation scripts could be a solution but are often tedious to write and deploy. Visual testing, therefore, is a crucial element that determines changes to the UI and helps devs flag unwanted modifications. 

Every visual regression testing cycle has a similar structure – some baseline images or screenshots of a UI are captured and stored. After every change to the source code, a visual testing tool takes snapshots of the visual interface and compares them with the initial baseline repository. The test fails if the images do not match and a report is generated for your dev team.
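To make that cycle concrete, here is a rough sketch of the comparison step in JavaScript using the open-source pixelmatch and pngjs libraries. This is a generic illustration of pixel diffing, not how any particular tool in this list implements it, and the file paths are placeholders:

const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

// Read the stored baseline and the freshly captured screenshot
const baseline = PNG.sync.read(fs.readFileSync('baseline/homepage.png'));
const current = PNG.sync.read(fs.readFileSync('current/homepage.png'));
const { width, height } = baseline;
const diff = new PNG({ width, height });

// Count mismatched pixels and build a highlighted diff image
const mismatched = pixelmatch(baseline.data, current.data, diff.data, width, height, { threshold: 0.1 });

if (mismatched > 0) {
  fs.writeFileSync('diff/homepage.png', PNG.sync.write(diff));
  throw new Error(`Visual regression: ${mismatched} pixels differ from the baseline`);
}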

Revolutionizing visual testing is Visual AI – a game-changing technology that automates the detection of visual issues in user interfaces. It also enables software testers to improve the accuracy and speed of testing. With machine learning algorithms, Visual AI can analyze visual elements and compare them to an established baseline to identify changes that may affect user experience. 

From font size and color to layout inconsistencies, Visual AI can detect issues that would otherwise go unnoticed. Automated visual testing tools powered by Visual AI, such as Applitools, improve testing efficiency and provide faster and more reliable feedback. The future of visual testing lies in Visual AI, and it has the potential to significantly enhance the quality of software applications.

Benefits of Visual Testing for Functional Testing

Visual testing is a critical aspect of software testing that involves analyzing the user interface and user experience of an application. It aims to ensure that the software looks and behaves as expected, and all elements are correctly displayed on different devices and platforms. Visual testing detects issues such as layout inconsistencies, broken images, and text overlaps that can negatively impact the user experience. 

Automated visual testing tools like Applitools can scan web and mobile applications and identify any changes to visual elements. Effective visual testing can help improve application usability, increase user satisfaction, and ultimately enhance brand loyalty.

Visual testing and functional testing are two essential components of software testing that complement each other. While functional testing ensures the application’s features work as expected, visual testing verifies that the application’s visual elements, such as layout, fonts, and images, are displayed correctly. Visual testing benefits functional testing by enhancing test coverage, reducing testing time and resources, and improving the accuracy of the testing process.

Some more benefits of visual testing for functional testing are as follows:

  1. Quicker test script creation: Tedious functional tests built on undependable assertion code can be replaced by automated visual tests of a page or region. This can be achieved with Applitools Eyes, which captures your screen and sends it to the Visual AI system for in-depth analysis.
  2. Slash debugging time to minutes: Visual testing cuts the time spent debugging functional tests to minutes. Applitools’ Root Cause Analysis shows the CSS and DOM differences behind a web app bug, highlighting the visual variance and cutting the time required.
  3. Maintain functional tests more effectively: Applitools Eyes, which uses Visual AI, groups similar modifications from various screens of the application. Each change can then be classified as expected or unexpected with one easy click, making it much simpler than evaluating assertion code.

Further reading: https://applitools.com/solutions/functional-testing/

Top 10 Visual Testing Tools

The following section consists of 10 visual testing tools that you can integrate with your current testing suite.

1. Aye Spy

A visual regression tool, often underrated, Aye Spy is open-source and heavily inspired by BackstopJS and Wraith. At its core, the creators had one issue they wanted to tackle: performance. Many visual regression tools in the market miss this key element, and Aye Spy addresses it with 40 UI comparisons in under 60 seconds (with an optimal setup, of course)!

Features:

  • Aye Spy requires Selenium Grid to work. Selenium Grid aids parallel testing on several computers, helping devs breeze through cross-browser testing. The creators of Aye Spy recommend using Docker images of Selenium for consistent results.
  • Amazon’s S3 is a data storage service used by firms across the globe. Aye Spy supports AWS S3 buckets for storing snapshots in the cloud.
  • The tool aims to maximize the testing performance by comparing up to 40 images in less than a minute with a robust setup. 

Advantages:

  • Aye Spy comes with clean documentation that helps you navigate the tool efficiently.
  • It is easy to set up and use. Aye Spy comes in a Docker package that is simple and straightforward to execute on multiple machines.

2. Applitools

One of the most popular tools in the market, Applitools, is best known for employing AI in visual regression testing. It offers feature-rich products like Eyes, Ultrafast Test Cloud, and Ultrafast Grid for efficient, intelligent, and automated testing. 

Applitools is 20x faster than conventional test clouds, is highly scalable for your growing enterprise, and is super simple to integrate with all popular frameworks, including Selenium, WebDriver IO, and Cypress. The tool is state of the art for all your visual testing requirements, with the ‘smarts’ to know what minor changes to ignore, without any prior settings.

Applitools’ Auto-Maintenance and Auto-Grouping features are handy. According to the World Quality Report 2022-23, maintainability is the most important factor in determining test automation approaches, but it often requires a sea of testers and DevOps professionals on their toes, ready to resolve a wave of bugs.

Cumbersome and expensive, this can break your strategies and harm your reputation. This is where Applitools comes in: Auto-Grouping categorizes the bugs while Auto-Maintenance resolves them, offering you the flexibility to jump in wherever needed.

Applitools Eyes is a Visual AI product that dramatically minimizes coding while maximizing bug detection and simplifying test updates. Eyes mimics the human eye to catch visual regressions with every app release. It can identify dynamic elements like ads or other customizations and ignore or compare them as desired.

Features:

  • Applitools invented Visual AI – a concept combining artificial intelligence with visual testing, making the tool indispensable in a competitive market. 
  • Applitools Eyes is intelligent enough to ignore dynamic content and minor modifications, without your intervention.
  • Applitools acts as an extension to your available test suite. It integrates seamlessly with all leading test automation frameworks like Selenium, Cypress, Playwright and others, as well as low-code tools like Tosca, Testim.io, and Selenium IDE.
  • Applitools provides Smart Assist that suggests improvements to your tests. You can analyze the generated report containing high-fidelity snapshots with regressions highlighted and execute the recommended tests with one click. 
  • Applitools simplifies bug fixes by automating maintenance – a feature that can minimize your testing hassles to almost zero.

Advantages:

  • Applitools makes cross-browser testing a breeze. With its Ultrafast Test Cloud, you can test your app across varying devices, browsers, and viewports with much faster and more efficient throughput. 
  • Not only does Eyes allow mobile and web access, but it also facilitates testing on PDFs and Components. 
  • Applitools is all for cyber security and eliminates the requirement for tunnel configuration. You can choose where to deploy the tool – a private cloud or a public one, without any security woes. 
  • Applitools uses Root Cause Analysis to tell you exactly where the regressions are without any unnecessary information or jargon.

Read more: Applitools makes your cross-browser testing 20x faster. Sign up for a free account to try this feature.

3. Hermione.js

Hermione, an open-source tool, streamlines integration and visual regression testing, although it is best suited to more straightforward websites. It is easier to kickstart Hermione with prior knowledge of Mocha and WebdriverIO, and the tool facilitates parallel testing across multiple browsers. Additionally, Hermione effectively uses subprocesses to tackle the computation issues associated with parallel testing. Besides this, the tool allows you to run only part of a test suite by simply adding a path to the test folder.

Features:

  • Hermione reruns failed tests but uses new browser sessions to eliminate issues related to dynamic environments. 
  • Hermione can be configured either with the DevTools or the WebDriver Protocol, requiring Selenium Grid (you can use Selenium-standalone packages) for the latter. 

Advantages:

  • Hermione is user-friendly, allows custom commands, and offers plugins as hooks. Developers use this attribute to design test ecosystems.
  • Incidental test failures are considerably reduced because Hermione re-executes failed tests.

4. Needle

Needle, supported by Selenium and Nose, is an open-source tool that is free to use. It follows the conventional visual testing structure and uses a standard suite of previously collected images to compare the layout of an app.

Features:

  • Needle executes the ‘baseline saving’ settings first to capture the initial screenshots of the interface. Running the same test again takes you to the testing mode where newer snapshots are taken and compared against the test suite.
  • Needle allows you to play with viewport sizes to optimize testing interactive websites.
  • Needle uses ImageMagick, PerceptualDiff, and PIL for image comparison, with PIL being the default. ImageMagick and PerceptualDiff are faster than PIL and generate separate PNG files for failed test cases, highlighting the differences between the baseline and current layouts.

Advantages:

  • Needle saves images to your local machine, allowing you to archive or delete them. File cleanup can be easily activated from the CLI.
  • Needle has straightforward documentation that is beginner friendly and easy to follow.

5. Vizregress

Vizregress, a popular open-source tool, was created as a research project based on AForge.NET. Colin Williamson, the creator of the tool, tried to resolve a crucial issue: Selenium WebDriver (which Vizregress uses in the background) could not distinguish between layouts if the CSS elements stayed the same and only the visual representation was modified. This was a problem that could disrupt a website.

Vizregress uses AForge to compare every pixel of the new and baseline images to determine if they are equal. This is a complex task, and the approach can be fragile.

Features:

  • Vizregress automates visual regression testing using Selenium WebDriver. It uses Jenkins for continuous delivery. 
  • Vizregress allows you to mark zones on your webpage that you would like the tool to ignore during testing.
  • Vizregress requires consistent browser attributes like version and size.

Advantages:

  • Vizregress combines the features of Selenium WebDriver and AForge to provide a robust solution to a complex problem. 
  • Based on pixel analysis, the tool does an excellent job of identifying differences between baseline and new screenshots.

6. iOSSnapshotTestCase

Created by Jonathan Dann and Todd Krabach, iOSSnapshotTestCase was previously known as FBSnapshotTestCase and developed within Facebook – although Uber now maintains it. The tool uses the visual testing structure, where test screenshots are compared with baseline images of the UI.

iOSSnapshotTestCase uses tools like Core Animation and UIKit to generate screenshots of an iOS interface. These are then compared to specimen images in a repository. The test inevitably fails if the snapshots do not match. 

Features:

  • iOSSnapshotTestCase renames screenshots on the disk automatically. The names are generated based on the image’s selector and test class. Additionally, the tool generates a description of all failed tests.
  • The tool must be executed inside an app bundle or the Simulator to access UIKit. However, screenshot tests can still be written inside a framework but have to be saved as a test library bundle devoid of a Test Host.
  • A single test on iOSSnapshotTestCase can accommodate several screenshots. The tool also offers an identifier for this purpose.

Advantages:

  • iOSSnapshotTestCase lets a snapshot test run across multiple devices and several operating system versions.
  • The tool automates manual tasks like renaming test cases and generates failure messages.

7. VisualCeption

VisualCeption uses a straightforward, 5-step process to perform visual regression testing. It uses WebDriver to capture a snapshot, JavaScript for calculating element sizes and positions, and Imagick for cropping and comparing visual components. An exception, if raised, is handled by Codeception.

It is essential to note here that VisualCeption is a module created for Codeception. Hence, you cannot use it as a standalone tool – you must have access to Codeception, Imagick, and WebDriver to make the most out of it.

Features:

  • VisualCeption generates HTML reports for failed tests.
  • The visual testing process spans 5 steps. However, the long list of tool prerequisites could become a team’s limitation.

Advantages:

  • VisualCeption is user-friendly once the setup is complete.
  • The report generation is automated on VisualCeption and can help you visualize the cause of test failure.

8. BackstopJS

BackstopJS is a testing tool that can be seamlessly integrated with CI/CD pipelines for catching visual regressions. Like others mentioned above, BackstopJS compares webpage screenshots with a standard test suite to flag any modifications exceeding a minimum threshold.

A popular visual testing tool, BackstopJS has formed the basis of similar tools like Aye Spy. 

Features:

  • BackstopJS can be easily automated using CI/CD pipelines to catch and fix regressions as and when they appear.
  • Report generation is hassle-free and elaborates why a test failed – with appropriately marked components highlighting the regressions.
  • BackstopJS can be configured for multiple devices and operating systems, taking into account varying resolutions and viewport sizes.

Advantages:

  • BackstopJS is open-source and hence, free to use. You can customize the tool per your demands (although this could often be more expensive in terms of resources).
  • The tool is easy to operate with an intuitive, beginner-friendly interface.

9. Visual Regression Tracker

Visual Regression Tracker is an exciting tool that goes the extra mile to protect your data. It is self-hosted, meaning your information is unavailable outside your intranet network. 

In addition to the usual visual testing procedure, the tool helps you track your baseline images to understand how they change over time. Moreover, Visual Regression Tracker supports multiple languages including Python, Java, and JavaScript. 

Features:

  • Visual Regression Tracker is simple to use and straightforward to automate. It has no preference in terms of automation tools and can be integrated easily with whichever one you use.
  • The tool can ignore areas of an image you don’t want it to consider during testing.
  • Visual Regression Tracker can work on any device, including smartphones, as long as the device can provide screenshots.

Advantages:

  • The tool is open-source and user-friendly. It is available in a Docker container, making it easy to set up and kickstart testing.
  • Your data is kept safe within your network with the self-hosting capabilities of Visual Regression Tracker.

10. Galen Framework

Galen Framework is an open-source tool for testing web UI. It is primarily used for interactive websites. Although developed in Java, the tool offers multi-language support, including CSS and JavaScript. Galen Framework runs on Selenium Grid and can be integrated with any cloud testing platform. 

Features:

  • Galen is great for testing responsive website designs. It allows you to specify the screen size and then reformat the browser window to capture screenshots as required.
  • Galen has built-in functions that facilitate more straightforward testing methods. These modules support complex operations like color scheme verification.

Advantages:

  • Galen Framework simplifies testing with enhanced syntax readability. 
  • The tool also offers HTML reports generated automatically for easy visualization of test failures.

Takeaway

Here is a quick recap of all the 10 tools mentioned above:

  1. Aye Spy: It helps you take 40 screenshots in less than a minute. Aye Spy could be your solution if you are looking for a high-performance tool.
  2. Applitools: It has numerous offerings from Eyes to Ultrafast Test Cloud that automate the visual testing process and make it smart. Customers have noted a 50% reduction in maintenance efforts and a 75% reduction in testing time. With Applitools, AI validation takes the front-row seat and helps you create robust test cases effortlessly while saving you the most critical resource in the world – time.
  3. Hermione: Hermione.js eliminates environment issues by re-running failed tests in a new browser session. This minimizes unexpected failures.
  4. Needle: Besides the usual visual regression testing functionalities, the tool makes file cleanup easy. You choose to either archive or delete your test images.
  5. Vizregress: Vizregress analyzes and compares every pixel to mark regressions. If your browser attributes (like size and version) remain constant throughout your testing process, Vizregress can be a good tool.
  6. iOSSnapshotTestCase: The tool caters to apps for your iOS devices and automates test case naming and report generation.
  7. VisualCeption: Built for Codeception, VisualCeption uses several frameworks to achieve the desired results. The con is that the prerequisites are plenty and can be easily avoided with either of the top two tools on this list (note: Aye Spy requires Selenium Grid to function).
  8. BackstopJS: Multiple viewport sizes and screen resolutions can be seamlessly handled by BackstopJS. Want a tool for multi-device testing? BackstopJS could be a good choice.
  9. Visual Regression Tracker: A holistic tool overall, Visual Regression Tracker allows you to mark sections of your image that you would like the tool to ignore. This makes your testing process more flexible and efficient.
  10. Galen Framework: Galen has built-in methods that make repetitive functionalities easier.

The following comparison chart gives you an overview of all crucial features at a glance. Note how most tools have attributes that are ambiguous or undocumented. Applitools stands out in this list, giving you a clear view of its properties.

This summary gives you a good idea of the critical features of all the tools mentioned in this article. However, if you are looking for one tool that does it all with minimal resources and effort, select Applitools. Not only did they spearhead Visual AI testing, but they also fully automate cross-browser testing, requiring little to no intervention from you.

Customers have reported excellent results – 75% less time required for testing and a 50% reduction in maintenance effort. To learn how Applitools can seamlessly integrate with your DevOps pipeline, request your demo today.
Register for a free Applitools account.

Future-Proofing Your Test Automation Pipeline

Cypress Heroes app homepage

Learn how to future-proof your test automation pipeline with Cypress and Applitools by adding tests that run from GitHub Actions. In this article, we’ll share how to ensure your test automation pipeline can scale while staying reliable and easy to maintain.

Automating different types of tests

To illustrate our different types of test automation, we’ll be using the example project Cypress Heroes. In this full-stack TypeScript app, users can take the following actions:

  • Log in with an email and password
  • Like heroes, which increments the hero’s number of fans
  • Hire heroes, which increments the hero’s number of saves
  • Manage hero profile information like name, superpowers, and price

ICYMI: Watch the on-demand recording of Future-Proofing Your Automation Pipeline to see Ely Lucas from Cypress demo the example project.

End-to-end testing

Cypress is traditionally known for end-to-end testing. You automate user interactions for specific scenarios from start to finish in the browser, and then run functional assertions to check the state of elements at each step. End-to-end tests hit an actual web server and exercise the site just like a user would.

Measurable stats for code coverage of your end-to-end testing can act as a health metric for your website or app. Adding coverage reports to your automation pipeline on each commit can help ensure you’re testing all parts of your code.

Component testing

If you’re using a component-based framework like React or Angular or a design system like Storybook, you can also do component testing to test UI components. In this example, we have a button component with a few tests that pass, the hero card test, and a test for the login form. These components are being mounted in isolation outside of your typical web server.

Think of component tests as “UI unit” tests. While they don’t give end-to-end coverage, they’re quick and easy to run.

API testing

For your back end, you’ll need to automate API tests. The example project is using a community-built plugin called cypress-plugin-api. This plugin provides an interface inside the Cypress app to test APIs. It’s really cool and it’s super fun, and it allows you to write tests that you would have to do manually in a tool like Postman.

Fun fact: Cypress Ambassador Filip Hric developed the cypress-plugin-api. Check out Filip’s Test Automation University courses.

The API tests in our example are in the separate server project. We can use the command npx cypress open, and then we can run those tests in Chrome. We can see all of our results, including the response status codes. We can view a post request, the headers that were sent, the headers that were returned, and the other details that you normally get from a tool like Postman.

And it’s just baked into the app, which is really nice. Cypress is basically a web app that tests a web app. And so, you could extend Cypress as an app with things like this to help you do your testing and to have it all seamlessly integrated.

Running a pipeline with GitHub Actions

The example project uses GitHub Actions to set up the test automation pipeline. When working on smaller projects, it’s easy to have CI interactions baked into your repository, all in one place.

Configuring your GitHub Action

With GitHub Actions, you declare everything you need in a YAML file in the .github/workflows folder. Your actions become part of your repository and are covered by version control. If you make any changes, you can review them easily with a simple line-by-line diff. GitHub Actions make it easy to automate processes alongside other interactions you make with your repository. For example, if you open a pull request, you can have it automatically kick off your tests and do linting. You can even perform static code analysis before merging changes.

Some environment variables are set at the top of the YAML file. The API URL is what the client app uses to communicate to the API. The example app is hooked up to send test results to the Cypress Cloud. Those results can then be used for analytics, diagnostics, reporting, and optimizing our test workflows. The Cypress cloud also requires a GitHub token, so it can do things like correctly identify what pull request is being merged.

For those new to GitHub Actions: You can define environment variables per step in a job, but declaring them at the top helps you update them painlessly.

Running each of our tests

To keep things simple, there is only one job right now in this GitHub Action. First, it checks out the code straight from GitHub. Next, it builds the project using the Cypress GitHub Action. The Cypress GitHub Action does a few things for you like building your application or npm installing or yarn installing the dependencies.

Building first means that subsequent jobs don’t have to build the app again. We’ve set the runTests parameter of the Cypress GitHub Action to false because we don’t want to run the tests here. We’ll be running the tests separately below.

We have our component tests in our GitHub Action. We set install to false, since the dependencies were installed up above. And then we run our custom test command, which opens Cypress in run mode and initiates component testing. The record option tells the GitHub Action to send the results to Cypress Cloud.

And then we have to start the client and server. Unlike component tests, both the end-to-end tests and the API tests need the application to be up and running. For these tests, the example app is hitting live servers.

This run command will start both the React app and the Node server, and then it will run the end-to-end tests. We’re telling it again not to install the dependencies, since they were already installed. Then, we’re running the command to start the end-to-end testing. The wait command will make sure that both the client URL and the API URL are up and running before the tests start. If the tests start before both URLs are up and running, some tests will fail.

Another thing that the Cypress GitHub Action does is that you have the option to wait for these services to be live before the testing starts. By default, the npm run test commands are going to use the Chromium browser built into Electron. If you want to test on other browsers, you must make sure those browsers are installed on the runner. Cypress provides Docker images that you can add to your configuration to download the different browsers. However, downloading additional browsers increases the file size and makes the runs take longer.

Make sure that the Cypress binary itself is downloaded and installed. It’s going to run headlessly. This is because the command set up in these scripts is run mode, which is headless, whereas open mode is with the UI.

And then it will run the API test, which is very similar to end-to-end tests, except that since we’re not hitting the actual client app – only hitting the API app – we’re only waiting to make sure that the API URL is up and running.

If your end-to-end test cases are written against your local database before pushing to GitHub Actions, those test cases could fail when someone else runs them on their system. In whatever kind of test automation you develop, you’ll need to handle test data properly to avoid collisions. There are many different strategies you can follow. For more information on this and solving sample data dependency, watch my talk Managing the Test Data Nightmare.

How long do the test suites take?

When running your tests with Cypress and GitHub Actions, the results are uploaded to Cypress Cloud. You can go into Cypress Cloud and actually watch replays of all these tests that happened. The entire pipeline run in the example was 3 minutes and 50 seconds for all three test suites.

The individual test suites we ran took the following times:

  • Component tests: 49 seconds
  • End-to-end tests: 1 minute and 18 seconds
  • API tests: 13 seconds

Improving test coverage with visual assertions

Since all the Cypress tests run inside of the browser window, you can see them visually and inspect them to make sure they look correct. But this type of review is a manual step. If someone accidentally makes a change to the stylesheet, the site could no longer look right, but if we run the tests, they’ll still pass.

We can use Applitools Eyes to fix this issue.

Visual testing is meant to automate the things that traditional automation is not so good at. For example, as long as particular IDs on your page are in the DOM somewhere, your traditional automation scripts with something like Cypress are still going to find and interact with the elements. Applitools Eyes uses visual AI to look at an app and be able to detect these kinds of visual differences that traditional assertions struggle to capture. Let’s add some visual snapshots to these end-to-end tests.

Adding Applitools to your project

First, you’ll need an Applitools account. You can register a free Applitools account with your GitHub username or your email, and you’ll be good to go. You’ll need your Applitools API key for when we run tests with Applitools.

Next, we’ll need to install the Applitools SDK using npm install @applitools/eyes-cypress.

It can be a dev dependency or a regular dependency – whichever you prefer. In the example project, it’s a dev dependency. We’re using the Applitools Eyes SDK for Cypress here, but Applitools has SDKs for basically every tool and framework out there.

Next, we’ll need to create an Applitools configuration file. Just as Cypress projects have a cypress.config.js file, we want one called applitools.config.js.

Configuring your Applitools runner

In the Applitools config file, we will specify the configuration for running visual tests. There’s a separation between declaring configuration and actually adding test steps.
One of the settings we want is called batchName, and we’re going to set that to “cy heroes visual tests” to reflect the name of our demo app. The batch name will appear in the Eyes Test Manager (or the Applitools “dashboard”) after we run our visual tests.

Next, we’ll set the browsers. This will be a list, with each item being an entry that specifies a browser configuration, including name, width, and height.

Typically, since Cypress runs inside of an Electron app, it can be challenging to test mobile browsers. However, the Applitools Ultrafast Grid enables us to render our visual snapshots on mobile devices. The settings for mobile devices are going to be a bit different than those for browsers. Instead of having a name, we’re going to have a device name.

Our applitools.config.js file is complete. When we run our tests – either locally or in the GitHub Action – Applitools will render the snapshots it captures on these four browser configurations in the Ultrafast Grid and report results using the batch name. Furthermore, the local platform doesn’t matter. Even if you run this test on Windows, the Ultrafast Grid can still render snapshots on Safari and mobile emulators. A snapshot is just going to be a capture of that full page. Applitools will do the re-rendering with the appropriate size, the appropriate browser configuration, and all that will happen in the cloud. Essentially you can do multi-browser and multi-platform testing with simple declarations.
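Putting those settings together, the finished file could look roughly like the sketch below; the specific browsers, viewport size, and device are placeholders rather than required values:

module.exports = {
  batchName: 'cy heroes visual tests',
  browser: [
    // Desktop browsers rendered in the Ultrafast Grid
    { name: 'chrome', width: 1280, height: 720 },
    { name: 'firefox', width: 1280, height: 720 },
    { name: 'safari', width: 1280, height: 720 },
    // Mobile emulation uses deviceName instead of name/width/height
    { deviceName: 'iPhone X', screenOrientation: 'portrait' },
  ],
};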

Now that we have completed the configuration, let’s update the tests to capture visual snapshots, starting with the homepage.

Setting up our test suites

You need to make sure that your tests aren’t interfering with other tests. In these tests, we’re going through and modifying some of the heroes that are in the application. The state of the application changes per test, so to get around that, we’re creating a new hero just for working with our tests and deleting the hero after the tests.

In the example, we’re using Cypress tasks, which is code that actually runs on the Node process part of Cypress. It’s directly communicating with our database to add the hero, delete the hero, and all the other types of setup tasks that we want to do before we actually run our test.

This setup happens for each of the tests; then we visit the homepage and get access to the hero.

We get our new hero and then call cy.deleteHero, which calls the database to delete it. In the describe block, at the start of every test, we get our hero. And then, finally, we find the hero card by its name and the button with the right selector, so we can select it and click the button.

This test is making sure that you’re logged in before you can like this hero. We’re making sure that the modal popped up, clicking the okay on the modal, and then making sure that modal disappears and does not exist anymore.

Down below we have another suite for when a normal user is logged in. And so we’re using a custom Cypress command to log in with this username and password. You can define these custom commands that are like making your own function, encapsulating a little bit of logic so that it could be reusable.

So what we’re doing to test the login is going to the homepage, running the login process, and verifying the login was actually successful. The cy.session is caching a session for us to restore the session later from cookies. This helps speed up your test so you’re not having to go through the whole flow of actually logging in again.
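As a rough sketch, such a custom command might look like the following; the route and selectors are placeholders, not the actual Cypress Heroes markup:

Cypress.Commands.add('login', (email, password) => {
  // cy.session caches the authenticated state so later tests can restore it from cookies
  cy.session([email, password], () => {
    cy.visit('/login');
    cy.get('[name=email]').type(email);
    cy.get('[name=password]').type(password);
    cy.get('button[type=submit]').click();
    cy.url().should('not.include', '/login'); // confirm the login actually succeeded
  });
});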

We have another suite here for when an admin user is logged in, because an admin user can edit users and delete heroes.

 In the example, negative login tests – where you use the wrong username and/or password – are under the component tests.

In the login form component test, when the email and password are invalid, an error should show. The example uses cy.intercept to mock the API request to the auth endpoint and return a status code 401, which represents an invalid login.
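A minimal sketch of that kind of mock looks like this; the endpoint path and response body are placeholders:

// Stub the auth request so the form always receives an "invalid login" response
cy.intercept('POST', '/api/auth/login', {
  statusCode: 401,
  body: { message: 'Invalid email or password' },
}).as('login');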

You can either write a component test or an end-to-end test. In this case, a component test makes it easier to set up the mock data.

Adding a call to Applitools Eyes

With the test suites set up, we’re ready to add some visual snapshots here. We need to call an Eyes session using the Applitools Eyes SDK. The idea is that we open our eyes, and we can take visual snapshots. And then at the end of the test, we will close our eyes to say that we’ve captured all the snapshots for that session or for that test. And at that point, Applitools Eyes will upload the snapshots that are captured to the Applitools Eyes server, do all of the re-rendering of the things of those four browsers in the Ultrafast Grid. Then we can log into the Applitools dashboard and we can see exactly what happened with our testing.
To get autocomplete for the Eyes commands, we need to wire Applitools Eyes into the Cypress project. We already did npm install on the package, so next we’ll need to run npx eyes-setup.

We’ll want to use the command cy.eyesOpen in the homepage describe block, inside the beforeEach method. We want to pass an app name and test name for logging and reporting purposes. Putting the cy.eyesOpen call in the beforeEach means it doesn’t need to be duplicated in each test case.

Then, in the afterEach block, you’ll call cy.eyesClose.
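Put together, the hooks might look roughly like this; the app name is a placeholder:

beforeEach(() => {
  // Start an Eyes session for each test
  cy.eyesOpen({
    appName: 'Cypress Heroes',
    testName: Cypress.currentTest.title,
  });
});

afterEach(() => {
  // Close the session so captured snapshots are uploaded and processed
  cy.eyesClose();
});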

In this test, we make sure that the modal pops up, click okay in the modal, and then make sure the modal disappears, so we’ll need a snapshot when the modal is up and one when the modal goes away. In this case, we’ll capture the whole window.

If we didn’t want to capture everything, we could actually capture a region, like a div or even an individual element. On a small scale, using the region option does not make a measurable difference in execution speed, but it gives you a way to tune the type of snapshot we want.

For capturing the next step, we can basically copy the whole call there and paste it, changing the tag to homepage with the modal dismissed.
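For illustration, the two snapshot calls could look like this; the tag strings are just examples:

// Capture the full window while the modal is visible
cy.eyesCheckWindow({
  tag: 'homepage with modal',
  target: 'window',
  fully: true,
});

// ...dismiss the modal in the test, then capture the second state
cy.eyesCheckWindow({
  tag: 'homepage with the modal dismissed',
  target: 'window',
  fully: true,
});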

These snapshots are very straightforward to write, and it’s worth considering that some of those other assertions could arguably be removed. The visual snapshot is going to capture everything on that window, so if it’s there and visible, we’re going to capture it and track it over time.

You would still need to keep all of your interactions, but you can remove most of the assertions that check visible elements. However, if you want to check a very specific numeric value, you still want to keep that assertion.

Running the updated tests

All we need to do to run this test is make sure that we have our Applitools API key from our account saved as an environment variable of the Cypress application.

Note: If you happen to steal someone’s API key, it doesn’t really help you. It just means they’ll see your results, and you won’t. API keys should be kept secret and safe.

Using the Applitools Eyes dashboard

So to see the visual testing results, we will need to view them in the Applitools Eyes dashboard.

You can view your test results in a grid to see the UI quickly, or you can view your results in a list to see your configurations quickly.

On the left, you’ve got the batch name that was set. Then on the main part of the body, you’ll see there are actually four tests. We only wrote one test, but each test is run once per browser configuration we specified, providing cross-browser and cross-platform testing without additional steps.

If we open up the snapshots, you can see the two snapshots that we captured. These results are new, because this is the first time we’ve run the test.

We’ve established the snapshot as a baseline image, meaning anything in the future will be checked against that.

That’s where that visual aspect of the testing comes in. Your Cypress results will essentially tell you if it was bare bones basic functional, and then Eyes will tell you what it actually looked like. You get richer results together.

Resolving test results in the dashboard

Let’s see what this looks like if we make that visual change.

In the main file, we’ve updated the stylesheet and run the test again. There is no need to do anything in the Applitools Eyes dashboard before re-running the test.

The new test batch is in an unresolved state because Eyes detected a visual difference. In theory, a visual difference could be good or bad. You could be making an intentional change. Visual AI is basically a change detector that makes it obvious to you, the human, to decide what is good or bad. Then anytime Applitools Eyes sees the same kind of passing or failing behavior in the future, it’ll remember.

It’s important to note that the unresolved test results won’t stop your test automation job or your automation flow. Test automation would complete normally. You as the human tester would review visual test results in the Eyes Test Manager (the “dashboard”) afterwards. The pipeline would not wait for you to manually mark visual test results.

Let’s open up one of those snapshots so we can see it full screen.

In the upper left, below the View menu in the ribbon, there’s a dropdown to show both so that you can see the baseline and test side by side.

In the example, we had removed the stylesheet, so we can see very clearly that it’s very different. It’s not always this obvious. In this case, pretty much the whole screen is different. But if it were like a single button that was missing or something shunted a little bit, it would show that a specific area was different. That’s the power of the visual AI check.

Whenever Applitools detects a visual change, you can mark it as “passing” with a thumbs-up. Then that snapshot automatically becomes the new baseline against which future checkpoints are compared. Applitools will go to the background and track similar images. And it will automatically update those appropriately as well.

Note: If you ever want to “reset” snapshots, you can also delete the baselines and run your tests “fresh” as if for the first time. The snapshots they capture will automatically become new baseline images.

Once we’ve resolved all test results, we’ll need to save. And now if we were to rerun our test again, Applitools Eyes would see the new snapshots and pass tests as appropriate. If you have dynamic content or test data, you add region annotations, which will ignore anything in the region box.
It is possible to compare your production and staging environments. You can use our GitHub Integration to manage different branches or versions of your application. We also support different baselines for A/B testing.
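If you prefer to declare those ignore regions in code rather than in the dashboard, the Eyes SDK also accepts them on a snapshot call; a sketch, with a placeholder selector:

cy.eyesCheckWindow({
  tag: 'homepage',
  // Anything matching this selector is excluded from the visual comparison
  ignore: [{ selector: '.ad-banner' }],
});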

Closing thoughts

That’s basically how you would do visual testing with Applitools and Cypress. There are two big points to remember if you want to add visual testing to your own test suites:

  • To get these tests running in your pipeline, the only change you’d have to make is to inject the Applitools API key in those environment variables.
  • We didn’t really add a fourth suite of tests. Visual testing is more of a technique or an aspect of testing, not necessarily its own category of tests. All you have to do is work in the SDK, capture some snapshots, and you’re good to roll.

We hope this guide has helped you to build out your test automation pipeline to be more reliable and scalable. If you liked the guide, check out our Applitools tutorials for other guides on building your test automation pipeline. Watch the on-demand recording of Future-Proofing Your Automation Pipeline to see the full walkthrough. To keep up-to-date with test automation, you can peruse our latest courses taught by industry-leading testing experts on Test Automation University. Happy testing!

What’s New in Cypress 12

Cypress 12 is here

Right before the end of 2022, Cypress surprised us with their new major release: version 12. There wasn’t too much talk around it, but in terms of developer experience (DX), it’s arguably one of their best releases of the year. It removes some of the biggest friction points, adds new features, and provides better stability for your tests. Let’s break down the most significant ones and talk about why they matter.

No more “detached from DOM” errors

If you are a daily Cypress user, chances are you have seen an error that said something like, “the element was detached from DOM”. This is often caused by the fact that the element you tried to select was re-rendered, disappeared, or detached some other way. With modern web applications, this is something that happens quite often. Cypress could deal with this reasonably well, but the API was not intuitive enough. In fact, I listed this as one of the most common mistakes in my talk earlier this year.

Let’s consider the example from my talk. In a test, we want to do the following:

  1. Open the search box.
  2. Type “abc” into the search box.
  3. Verify that the first result is an item with the text “abc”.

As we type into the search box, an HTTP request is sent with every keystroke. Every response from that HTTP request then triggers re-rendering of the results.

The test will look like this:

it('Searching for item with the text "abc"', () => {
 
 cy.visit('/')
 
 cy.realPress(['Meta', 'k']) // open the search box (cy.realPress comes from the cypress-real-events plugin)
 
 cy.get('[data-cy=search-input]')
   .type('abc')
 
 cy.get('[data-cy=result-item]')
   .first()
   .should('contain.text', 'abc')
 
})

The main problem here is that we ignore the HTTP requests that re-render our results. Depending on the moment when we call cy.get() and cy.first() commands, we get different results. As the server responds with search results (different with each keystroke), our DOM is getting re-rendered, making our “abc” item shift from second position to first. This means that our cy.should() command might make an assertion on a different element than we expect.

Typically, we rely on Cypress’ built-in retry-ability to do the trick. The only problem is that the cy.should() command will retry itself and the previous command, but it will not climb up the command chain to the cy.get() command.

It was fairly easy to solve this problem in v11 and earlier, but the newest Cypress update brings much more clarity to the whole flow. Instead of the cy.should() command retrying only itself and the previous command, it will now retry the whole chain, including our cy.get() command from the example.
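
For reference, one common pre-v12 workaround was to wait for the search request(s) to settle before asserting. A sketch with an assumed route pattern and alias:

cy.intercept('GET', '/api/search*').as('search') // hypothetical search endpoint

cy.get('[data-cy=search-input]')
  .type('abc')

// wait for the search response before asserting; with one request per
// keystroke you may need to wait once per keystroke
cy.wait('@search')

cy.get('[data-cy=result-item]')
  .first()
  .should('contain.text', 'abc')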

In order to keep retry-ability sensible, the Cypress team has split commands into three categories:

  • assertions
  • actions
  • queries

These categories are reflected in the Cypress documentation. The fundamental principle introduced by version 12 is that a chain of queries is retried as a whole, instead of just the last and penultimate commands. This is best demonstrated by an example comparing the two versions:

// Cypress v11:
cy.get('[data-cy=result-item]') // ❌ not retried
 .first() // retried
 .should('contain.text', 'abc') // retried
 
// Cypress v12:
cy.get('[data-cy=result-item]') // ✅ retried
 .first() // retried
 .should('contain.text', 'abc') // retried

cy.get() and cy.first() both fall into the queries category, which means they are going to get retried when cy.should() does not pass immediately. As always, Cypress will keep retrying until the assertion passes or the timeout runs out.

cy.session() and cy.origin() are out of beta

One of the biggest criticisms of Cypress.io has been the limited ability to visit multiple domains during a test. This is a huge blocker for many test automation engineers, especially if you need to use a third-party domain to authenticate into your application.

Cypress has advised using programmatic login and generally avoiding testing applications you are not in control of. While this is good advice, it is much harder to follow in real life, especially when you are in a hurry to reach good test coverage. It is much easier (and more intuitive) to navigate your app like a real user and automate a flow similar to their behavior.

This is why it seems so odd that it took Cypress so long to implement the ability to navigate through multiple domains. The reason is actually rooted in how Cypress is designed. Instead of driving the browser externally the way tools like Playwright and Selenium do, Cypress injects the test script right inside the browser and automates actions from within. There are two iframes, one for the script and one for the application under test. Because of this design, browser security rules limit how these iframes interact and navigate. The groundwork for overcoming these limitations was laid in earlier Cypress releases and has finally landed in full with the version 12 release. If you want to read more about this, you should check out Cypress' official blog on this topic – it's an excellent read.

There are still some specifics to keep in mind when navigating to a third-party domain in Cypress, best shown by an example:

it('Google SSO login', () => {
 
 cy.visit('/login') // primary app login page
 
 cy.getDataCy('google-button') // custom command, typically shorthand for cy.get('[data-cy=google-button]')
   .click() // clicking the button will redirect to another domain
 
 cy.origin('https://accounts.google.com', () => {
   cy.get('[type="email"]')
     .type(Cypress.env('email')) // google email
   cy.get('[type="button"]')
     .click()
   cy.get('[type="password"]')
     .type(Cypress.env('password')) // google password
   cy.get('[type="button"]')
     .click()
 })
 
 cy.location('pathname')
   .should('eq', '/success') // check that we have successfully logged in
 
})

As you can see, all the actions that belong to another domain are wrapped in the callback of the cy.origin() command. This cleanly separates the actions that happen on the third-party domain.

The Cypress team actually developed this feature alongside another one that came out of beta, cy.session(). This command makes authentication in your end-to-end tests much more efficient. Instead of logging in before every test, you can log in just once, cache that login, and reuse it across all your specs. I recently wrote a walkthrough of this command on my blog and showed how you can use it instead of a classic page object.

This command is especially useful for the use case from the previous code example. Third-party login services usually have security measures in place that prevent bots or automated scripts from logging in too often. If you attempt to log in too many times, you might get hit with a CAPTCHA or some other rate-limiting mechanism. This is definitely a risk when running tens or hundreds of tests.

it('Google SSO login', () => {
 
 cy.visit('/login') // primary app login page
 cy.getDataCy('google-button')
   .click() // clicking the button will redirect to another domain
 
 cy.session('google login', () => {
   cy.origin('https://accounts.google.com', () => {
     cy.get('[type="email"]')
       .type(Cypress.env('email')) // google email
     cy.get('[type="button"]')
       .click()
     cy.get('[type="password"]')
       .type(Cypress.env('password')) // google password
     cy.get('[type="button"]')
       .click()
   })
 })
 
 cy.location('pathname')
   .should('eq', '/success') // check that we have successfully logged in
 
})

When running a test, Cypress will make a decision when it reaches the cy.session() command:

  • Is there a session called google login anywhere in the test suite?
    • If not, run the commands inside the callback and cache the cookies, local storage, and other browser data.
    • If yes, restore the cache assigned to a session called “google login.”

You can create multiple sessions like this and test your application using different accounts. This is useful if you want to test different account privileges or simply see how the application behaves for different types of users. Instead of going through the login sequence through the UI or logging in programmatically every time, you can quickly restore a session and reuse it across all your tests.

This also means that you will reduce your login attempts to a minimum and avoid getting rate-limited by your third-party login service.
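
A minimal sketch of that idea, assuming a form-based login page and hypothetical admin/viewer accounts stored in Cypress environment variables, is a small helper that creates one cached session per role:

const loginAs = (role) => {
  cy.session(role, () => {
    cy.visit('/login')
    cy.get('[data-cy=email]').type(Cypress.env(`${role}Email`))
    cy.get('[data-cy=password]').type(Cypress.env(`${role}Password`), { log: false })
    cy.get('[data-cy=submit]').click()
    cy.location('pathname').should('eq', '/dashboard')
  })
}

it('admin can open the settings page', () => {
  loginAs('admin')
  cy.visit('/settings')
  cy.get('[data-cy=settings-form]').should('be.visible')
})

it('viewer cannot open the settings page', () => {
  loginAs('viewer')
  cy.visit('/settings')
  cy.contains('Not authorized').should('be.visible')
})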

Run all specs in GUI

The Cypress GUI is a great companion for writing and debugging your tests. With the version 10 release, Cypress dropped support for the "Run all specs" button in the GUI. The community was not very happy about this change, so Cypress decided to bring it back.

The reason it was removed in the first place is that it could produce unexpected results. Simply put, this functionality merges all your tests into one single file. This can get tricky, especially if you use before(), beforeEach(), after() and afterEach() hooks in your tests, because these hooks can get stacked and executed in an unexpected order. Take the following example:

// file #1
describe('group 1', () => {
 it('test A', () => {
   // ...
 })
})
 
it('test B', () => {
 // ...
})
 
// file #2
before( () => {
 // ...
})
 
it('test C', () => {
 // ...
})

If this runs as a single file, the order of actions would go like this:

  • before() hook
  • test B
  • test C
  • test A

This is mainly caused by how the Mocha framework executes blocks of code. If you properly wrap every test in describe() blocks, you get far fewer surprises, but that's not always what people do.

On the other hand, running all specs can be really useful when developing an application. I use this feature to get immediate feedback on changes I make in my code when I work on my Cypress plugin for API testing. Whenever I make a change, all my tests re-run and I can see all the bugs that I've introduced.

Running all specs is now behind an experimental flag, so you need to set experimentalRunAllSpecs to true in your cypress.config.js configuration file.
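
As a sketch, enabling it in cypress.config.js would look roughly like this:

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // opt back in to the "Run all specs" button in the GUI
    experimentalRunAllSpecs: true,
  },
})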

Test isolation

It is always a good idea to keep your tests isolated. If your tests depend on one another, a failure can create a domino effect: the first failing test makes all the subsequent tests fail as well. Things get even hairier when you bring parallelisation into the equation.

You could say that Cypress is an opinionated testing framework, but my personal take is that this is a good opinion to have. The way Cypress enforces test isolation with this update is simple: between every test, Cypress navigates from your application to a blank page. So in addition to all the cleanup Cypress did before (clearing cookies and local storage), it will now "restart" the tested application as well.

In practice the test execution would look something like this:

it('test A', () => {
 cy.visit('https://staging.myapp.com')
 // ...
 // your test doing stuff
})
 
// navigates to about:blank
 
it('test B', () => {
 cy.get('#myElement') // nope, will fail, we are at about:blank
})

This behavior is configurable, so if you need some time to adjust to this change, you can set testIsolation to false in your configuration.
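
Following the same cypress.config.js structure shown earlier, a sketch of opting out looks like this (double-check the accepted value against the changelog for your exact 12.x version):

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // temporarily opt out of the new test isolation behavior
    testIsolation: false,
  },
})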

Removal of deprecated commands and APIs

Some of the APIs and commands reached end of life with the latest Cypress release. For example, cy.route() and cy.server() have been replaced by the much more powerful cy.intercept() command that was introduced back in version 6.

The more impactful change was the removal of the Cypress.Cookies.defaults() and Cypress.Cookies.preserveOnce() APIs, which were used to control how cookies are cleared and preserved. With the introduction of cy.session(), these APIs no longer fit well into the system. The migration from them to cy.session() might not seem straightforward, but it is quite simple when you look at it.

For example, instead of using the Cypress.Cookies.preserveOnce() function to prevent deletion of certain cookies, you can use cy.session() like this:

beforeEach(() => {
 cy.session('importantCookies', () => {
   cy.setCookie('authentication', 'top_secret');
 })
});
 
it('test A', () => {
 cy.visit('/');
});
 
it('test B', () => {
 cy.visit('/');
});

Also, instead of using Cypress.Cookies.defaults() to set up default cookies for your tests, you can go to your cypress/support/e2e.js support file and set up a global beforeEach() hook that will do the same as shown in the previous example.
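
A sketch of that support-file hook, reusing the session from the previous example:

// cypress/support/e2e.js
beforeEach(() => {
  cy.session('importantCookies', () => {
    cy.setCookie('authentication', 'top_secret')
  })
})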

Besides these, there were a couple of bug fixes and smaller tweaks, which can all be viewed in the Cypress changelog. Overall, I think the v12 release of Cypress is one of the unsung heroes. The rewrite of query commands and the availability of the cy.session() and cy.origin() commands may not seem like a big deal on paper, but they make the experience much smoother than it was before.

The new command queries might require some rewriting in your tests, but I would advise you to upgrade as soon as possible, as this update will bring much more stability to your tests. I'd also advise rethinking your test suite and integrating cy.session() into your tests, as it can not only handle your login actions more elegantly but also shave minutes off your test run.

If you want to learn more about Cypress, you can come visit my blog, subscribe to my YouTube channel, or connect with me on Twitter or LinkedIn.

The post What’s New in Cypress 12 appeared first on Automated Visual Testing | Applitools.

]]>
UI Testing: A Getting Started Guide and Checklist https://applitools.com/blog/ui-testing-guide/ Thu, 01 Sep 2022 20:32:38 +0000 https://applitools.com/?p=42155 Learn everything you need to know about how to perform UI testing, why it’s important, a demo of a UI test, and tips and tricks to make UI testing easier.

The post UI Testing: A Getting Started Guide and Checklist appeared first on Automated Visual Testing | Applitools.

]]>

Learn everything you need to know about how to perform UI testing, including why it’s important, a demo of a UI test, and tips and tricks to make UI testing easier.

When users explore web, mobile or desktop applications, the first thing they see is the User Interface (UI). As digital applications become more and more central to the way we all live and work, the way we interact with our digital apps is an increasingly critical part of the user experience.

There are many ways to test an application: Functional testing, regression testing, visual testing, cross-browser testing, cross-device testing and more. Where does UI testing fit into this mix?

UI testing is essential to ensure that the usability and functionality of an application performs as expected. This is critical for delivering the kinds of user experiences that ensure an application’s success. After all, nobody wants to use an app where text is unreadable, or where buttons don’t work. This article will explain the fundamentals of UI testing, why it’s important, and supply a UI testing checklist and examples to help you get started.

What is UI Testing?

UI testing is the process of validating that the visual elements of an application perform as expected. In UI Testing, graphical components such as text, radio buttons, checkboxes, buttons, colors, images and menus are evaluated against a set of specifications to determine if the UI is displaying and functioning correctly.

Why is UI Testing Important?

UI testing is an important way to ensure an application has a reliable UI that always performs as expected. It’s critical for catching visual and even functional bugs that are almost impossible to detect using other kinds of testing.

Modern UI testing, which typically utilizes visual testing, works by validating the visual appearance of an application, but it does much more than make sure things simply look correct. Your application’s functionality can be drastically affected by a visual bug. UI testing is critical for verifying the usability of your UI.

Note: What’s the difference between UI testing and GUI testing? Modern applications are heavily dependent on graphical user interfaces (GUIs). Traditional UI testing can include other forms of user interfaces, including CLIs, or can use DOM-based coded locators to try and verify the UI rather than images. Modern UI testing today frequently involves visual testing.

Let’s take an example of a visual bug that slipped into production from the Southwest Airlines website:

Visual Bug on Southwest Airlines App

Under a traditional functional testing approach this would pass the test suite. All the elements are present on the page and successfully loaded. But for the user, it’s easy to see the visual bug. 

This does more than deliver a negative user experience that may harm your brand. In this example, the Terms and Conditions are directly overlapping the ‘continue’ button. It’s literally impossible for the user to check out and complete the transaction. That’s a direct hit to conversions and revenue.

With good UI testing in place, bugs like these will be caught before they become visible to the user.

UI Testing Approaches

Manual Testing

Manual UI testing is performed by a human tester, who evaluates the application’s UI against a set of requirements. This means the manual tester must perform a set of tasks to validate that the appearance and functionality of every UI element under test meets expectations. The downsides of manual testing are that it is a time-consuming process and that test coverage is typically low, particularly when it comes to cross-browser or cross-device testing or in CI/CD environments (using Jenkins, etc.). Effectiveness can also vary based on the knowledge of the tester.

Record and Playback Testing

Record and Playback UI testing uses automation software and typically requires limited or no coding skill to implement. The software first records a set of operations executed by a tester, and then saves them as a test that can be replayed as needed and compared to the expected results. Selenium IDE is an example of a record and playback tool, and there is even one built directly into Google Chrome.

Model-Based Testing

Model-based UI testing uses a graphical representation of the states and transitions that an application may undergo in use. This model allows the tester to better understand the system under test. That means tests can be generated and potentially automated more efficiently. In its simplest form, the approach requires the steps below:

  1. Build a model representing the system
  2. Determine the inputs
  3. Understand the expected outputs
  4. Execute the tests and compare the results against expectations

Automated UI Testing vs Manual UI Testing

Benefits of Manual UI Testing

Manual testing, as we have seen above, has a few severe limitations. Because the process relies purely on humans performing tasks one at a time, it is a slow process that is difficult to scale effectively. Manual testing does, however, have advantages:

  • Manual testing can potentially be done with little to no tooling, and may be sufficient for early application prototypes or very small apps. 
  • An experienced manual tester may be able to discover bugs in edge-cases through ad-hoc or exploratory testing, as well as intuitively “feel” the user experience in a way that is difficult to understand with a scripted test.

Benefits of Automated UI Testing

In most cases automation will help testing teams save time by executing pre-determined tests repeatedly. Automation testing frameworks aren’t prone to human errors and can run continuously. They can be parallelized and executed easily at scale. With automated testing, as long as tests are designed correctly they can be run much more frequently with no loss of effectiveness. 

Automation testing frameworks may be able to increase efficiency even further with specialized capabilities for things like cross-browser testing, mobile testing, visual AI and more.

UI Testing Checklist of Test Cases

On the surface, UI testing is simple – just make sure everything “looks” good. Once you poke beneath that surface, testers can quickly find themselves encountering dozens of different types of UI elements that require verification. Here is a quick checklist you can use to make sure you’ve considered all the most common items.

UI Testing Checklist – Common Tests

  • Text: Can all text be read? Is the contrast legible? Is anything covered by another element?
  • Forms, Fields and Pickers: Are all text fields visible, and can text be entered and submitted? Do all dropdowns display correctly? Are validation requirements (such as a date in a datepicker) upheld?
  • Navigation and Sorting: Whether it’s a site menu, a sortable table or a multi-page form, can the user navigate via the UI? Do all dropdowns display? Can all options be clicked/tapped, and do they have the desired effect? 
  • Buttons and Links: Are all buttons and links visible? Are they formatted consistently? Can they be selected, and do they take the user to the intended pages?
  • Responsiveness: When you adjust the resolution, do all of the above UI elements continue to behave as intended?

Each of the above must be tested across every page, table, form and menu that your application contains. 

It’s also a good practice to test the UI for specific critical end-to-end user journeys. For example, making sure that it’s possible to journey smoothly from: User clicks Free Trial Signup (Button) > User submits Email Address (Form) > User Logs into Free Trial (Form) > User has trial access (Product)
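
A sketch of automating that journey with Cypress (all routes and data-cy selectors are hypothetical):

it('free trial signup journey', () => {
  cy.visit('/')
  cy.get('[data-cy=free-trial-button]').click()               // Free Trial Signup (Button)
  cy.get('[data-cy=signup-email]').type('user@example.com')   // submit Email Address (Form)
  cy.get('[data-cy=signup-submit]').click()
  cy.get('[data-cy=login-email]').type('user@example.com')    // log into Free Trial (Form)
  cy.get('[data-cy=login-password]').type('s3cret')
  cy.get('[data-cy=login-submit]').click()
  cy.location('pathname').should('eq', '/trial-dashboard')    // trial access (Product)
})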

Challenges of UI Testing

UI testing can be a challenge for many reasons. With the proper tooling and preparation these challenges can be overcome, but it’s important to understand them as you plan your UI testing strategy.

  • User Interfaces are complex: As we’ve discussed above, there are numerous distinct elements on each page that must be tested. Embedded forms, iFrames, dropdowns, tables, images, videos and more must all be tested to be sure the UI is working as intended.
  • User Interfaces change fast: For many applications the UI is in a near-constant state of flux, as frequent changes to the text, layout or links are implemented. Maintaining full coverage is challenging when this occurs.
  • User Interfaces can be slow: Testing the UI of an application can take time, especially compared to smaller and faster tests like unit tests. Depending on the tool you are using, this can make them feel difficult to run as regularly.
  • Testing script bottlenecks: Because the UI changes so quickly, not only do testers have to design new test cases, but depending on your tooling, you may have to constantly create new coded test scripts. Testing tools with advanced capabilities, like the Visual AI in Applitools, can mitigate this by requiring far less code to deliver the same coverage.

UI Testing Example

Let’s take an example of an app with a basic use case, such as a login screen.

Even a relatively simple page like this one will have numerous important test cases (TC):

  • TC 1: Is the logo at the top appropriate for the screen, and aligned with brand guidelines?
  • TC 2: Is the title of the page displaying correctly (font, label, position)?
  • TC 3: Is the dividing line displaying correctly? 
  • TC 4: Is the Username field properly labeled (font, label, position)?
  • TC 5: Is the icon by the Username field displaying correctly?
  • TC 6: Is the Username text field accepting text correctly (validation, error messages)?
  • TC 7: Is the Password field properly labeled (font, label, position)?
  • TC 8: Is the icon by the Password field displaying correctly?
  • TC 9: Is the Password text field accepting text correctly (validation, error messages)?
  • TC 10: Is the Log In button text displaying correctly (font, label, position)?
  • TC 11: Is the Log In button functioning correctly on click (clickable, verify next page)?
  • TC 12: Is the Remember Me checkbox title displaying correctly (font, label, position)?
  • TC 13: Is the Remember Me checkbox functioning correctly on click (clickable, checkbox displays, cookie is set)?

Simply testing each scenario on a single page can be a lengthy process. Then, of course, we encounter one of the challenges listed above – the UI changes quickly, requiring frequent regression testing.

How to Simplify UI Testing with Automation

Performing this regression testing manually while maintaining the level of test coverage necessary for a strong user experience is possible, but would be a laborious and time-consuming process. One effective strategy to simplify this process is to use automated tools for visual regression testing to verify changes to the UI.

Benefits of Automated Visual Regression Testing for UI Testing

Visual regression testing is a method of ensuring that the visual appearance of the application’s UI is not negatively affected by any changes that are made. While this process can be done manually, modern tools can help you automate your visual testing to verify far more tests far more quickly.

Automated Visual UI Testing Example

Let’s return to our login screen example from earlier. We’ve verified that it works as intended, and now we want to make sure any new changes don’t negatively impact our carefully tested screen. We’ll use automated visual regression testing to make this as easy as possible.

  1. As we saw above, our baseline screen looks like this:
  2. Next, we’ll make a change by adding a row of social buttons. Unfortunately, this will have the effect of inadvertently rendering our login button unusable by pushing it up into the password field:
  3. We’ll use our automated visual testing tool to evaluate our change against the baseline. In our example, we’ll use a tool that utilizes Visual AI to highlight only the relevant areas of change that a user would notice. The tool would then bring our attention to the new social buttons along with the section around the now unusable button as areas of concern.
  4. A test engineer will then review the comparison. Any intentional changes that were flagged are marked as accepted changes. On some screens we might expect changes in certain dynamic areas, and these can be flagged for Visual AI to ignore going forward.

    We need to address only the remaining areas that are flagged. In our example, every area flagged in red is problematic – we need to shift down the social buttons and move the login button out of the password field. Once we've done this, we run the test again, and a new baseline is created only when everything passes. The final result is free of visual defects (a code sketch of this kind of check follows below).
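
As a rough illustration, such an automated visual check could be expressed with the Applitools Eyes SDK for Cypress like this (the app name, test name, and route are assumptions):

it('login screen has no unintended visual changes', () => {
  cy.visit('/login')
  cy.eyesOpen({ appName: 'Demo App', testName: 'Login screen' })
  cy.eyesCheckWindow('Login page') // compared against the accepted baseline
  cy.eyesClose()
})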

Why Choose Automated Visual Regression Testing with Applitools for UI Testing

Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.

The Applitools Ultrafast Test Cloud includes unique features like the Ultrafast Grid, which can run your functional & visual tests once locally and instantly render them across any combination of browsers, devices, and viewports. Our automated maintenance capabilities make use of Visual AI to identify and group similar differences found across your test suite, allowing you to verify multiple checkpoint images at once and to replicate maintenance actions you perform for one step in other relevant steps within a batch.

You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.

Happy Testing!

Read More

The post UI Testing: A Getting Started Guide and Checklist appeared first on Automated Visual Testing | Applitools.

]]>
What is Regression Testing? Definition, Tutorial & Examples https://applitools.com/blog/regression-testing-guide/ Fri, 01 Jul 2022 16:08:00 +0000 https://applitools.com/?p=33704 Learn everything you need to know about what regression testing is, best practices, how you can apply it in your own organization and much more.

The post What is Regression Testing? Definition, Tutorial & Examples appeared first on Automated Visual Testing | Applitools.

]]>

In this detailed guide, learn everything you need to know about what regression testing is, along with best practices and examples. Learn how you can apply regression testing in your own organization and much more.

While regression testing is practiced in almost every organization, each team may have its own procedures and approaches. This article is a starter kit for organizations seeking a solid start to their regression testing strategy. It also assists teams in delving deeper into the missing links in their current regression testing technique in order to evolve their test strategy.

What is Regression Testing?

Regression testing is a type of software testing that verifies an application continues to work as intended after any code revisions, updates, or optimizations. As the application continues to evolve by adding new features, the team must perform regression testing to evaluate that the existing features work as expected and that there are no bugs introduced with the new feature(s). 

In this post, we will discuss various techniques for Regression Testing, and which to use depending on your team’s way of working. 

However, before we jump onto the how part, let us understand why having a regression test suite is essential.

Why Do We Need Regression Testing?

A software application gets directly modified due to new enhancements (functional, performance or even improved security), tweaks or changes to existing features, bug fixes, and updates. It is also indirectly affected by the third-party services it consumes to provide features through its interface. 

Changes in the application’s source code, both planned and unintended, demand verification. Additionally, the impact of modifications to external services used by the application should be verified.

Teams must ensure that the modified component of the application functions as expected and that the change had no adverse effect on the other sections of the application. 

A comprehensive regression testing technique aids the team in identifying regression issues, which are subsequently corrected and retested to ensure that the original faults are not present. 

Regression Testing Example

Let us quickly understand with the help of an example – Login functionality

  • A user can log into an app using either their username and password or their Gmail account via Google integration.
  • A new feature, LinkedIn integration, is added to enable users to log into the app using their LinkedIn account.
  • While it is vital to verify that LinkedIn login functions as expected, it is equally necessary to verify that other login methods continue to function (Form login and Google integration).

Smoke vs Sanity vs Regression Testing

People commonly use the terms smoke, sanity, and regression interchangeably in testing, which is misleading. These types of testing differ not only in how much of the application they cover, but also in when they are carried out.

What is Smoke Testing?

Smoke testing is done at the outset of a fresh build. The main goal is to see if the build is good enough to start testing. Some examples include being able to launch the site simply by entering its URL, or being able to run the app after installing a new executable.

What is Sanity Testing?

Sanity testing is surface-level testing on newly deployed environments. For instance, features are broadly tested on a staging environment before the build is passed on to User Acceptance Testing. Another example could be verifying that fonts have loaded correctly on the web page, that the expected components are interactive, and that overall things appear to be in order without a detailed investigation.

How is Regression Testing Different from Smoke and Sanity Testing?

Regression testing has more depth where the potentially impacted areas are thoroughly tested on the environment where new changes have been introduced.

Existing stable features are rigorously tested on a regular basis to ensure their accuracy in the face of purposeful and unintended changes. 

Regression Testing Approaches

The techniques can be grouped into the following categories:

Partial Regression Testing

As the name suggests, partial regression testing is an approach where a subset of the entire regression suite is selected and executed as part of regression testing.

This subset selection results from a combination of several logical criteria as follows:

  • The cases derived from identifying the potentially affected feature(s) due to the change(s)
  • Business-critical cases
  • Most commonly used paths

Partial regression testing works excellently when the team successfully identifies the impacted areas and the corresponding test cases through proven ways like the Requirement Traceability Matrix (RTM henceforth) or any other form of metadata approved by the team.

The following situations are more conducive to partial regression testing:

  • The project has a solid test automation framework in place, with a large number of Unit, API, Integration tests, and Acceptance tests in proportion as per the test pyramid.
  • Changes to the source code are always being tracked and communicated within the cross-functional team.
  • Short-term projects tight on budget and time.
  • The same team members have been working on the project for a long period.

While this method is effective, it is possible to overlook issues if:

  • The impacted regions aren’t identified appropriately.
  • The changes aren’t conveyed to the entire team.
  • The team doesn’t religiously follow the process of documenting test scenarios or test cases.

Complete Regression Testing

In many cases, reasons like significant software updates or changes to the tech stack demand that the team perform comprehensive regression testing to uncover new problems or problems introduced by the changes.

In this approach, the whole test suite is run to uncover issues every time new code is committed, or at agreed time intervals.

This is a significantly time-consuming approach compared to the other techniques and should ideally be adopted only when the situation demands.

To keep the feedback cycle faster, one must embrace automated testing to enable productive complete regression testing in their teams.

Which Regression Technique to Use?

Irrespective of the technique adopted, I always suggest that teams prioritize the most business-critical cases and the common use cases performed by end-users when it comes to execution. 

Remember, the main goal of regression testing is to ensure that the end-user is not impacted due to an unavailable/incorrect feature, which could affect business outcomes in many ways.

Best Practices for Regression Testing

To achieve better testing coverage of your application, plan your regression testing with a combination of technology and business scenarios. Apply the practices across the Test Pyramid. 

Leverage the Power of Visual Representation

Arranging the information in the form of a matrix enables the team to quickly identify the potentially impacted areas. 

  • In the RTM shown in the diagram below, any change made to REQ1 UC 1.3 lets us know that we have to test test cases 1.1.2, 1.1.4 and 1.1.7.
  • Also, since test case 1.1.2 is also related to UC 1.2, we would immediately test that for any regression issues. 
  • Of course, the RTM should be up-to-date at all times for the technique to work correctly for the team.

    (Image Source)

Alternatively, many test case management tools now have started providing inbuilt support to build a regression test suite with the help of appropriate tags and modules. These tools also let you systematically track and identify patterns in the regression test execution to dig into more related areas.

I have seen teams being most effective when they have automated most of their regression suite, and the non-automatable tests organised and represented in a meaningful way that allows quick filtering and meaningful information.

Test Data

We should leverage the power of automation to create test data instantly across different test environments. We need to ascertain that the updated feature is evaluated against both old and new data. 

For example, a new field added to a user profile should work consistently for both existing and newly created accounts.

Production Data

Production test data plays a vital role in identifying issues that might have been missed during the initial delivery.

In cases where possible, replicate the production environment to identify edge cases and add those scenarios to the regression test suite.

Using production data isn’t always viable, and it can lead to non-compliance issues. Teams frequently conceal / mask sensitive information from production data and use the information to fulfil the requirement for on-the-ground scenario analysis.

Test Environments

If you have multiple test environments, verify that the application works as intended in each of them.

Obtaining a Fresh Outlook

Every time a new person has joined the team while development was already in progress, they have asked meaningful questions about long-forgotten stable features. I also like having junior testers on my regression team to get a raw, holistic testing perspective.

Automate

Automate the regression test suite! If you have the budget, great; if not, create supporting mechanisms to use the team's idle time to implement automated tests.

Simply automating the business-critical scenarios or the most used workflows is a good enough start. Initiate this activity and work incrementally.

Either tag/annotate your automated scenarios by feature or segregate them into appropriate folders so that you can run specific automated regression scenarios, as in the sketch below.
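
One possible way to do this, sketched with the @cypress/grep plugin (assuming it is installed and registered in your support and config files):

// Tag scenarios by suite and feature
it('user can log in with Google', { tags: ['@regression', '@login'] }, () => {
  // ...
})

// Then run only the tagged scenarios, for example:
//   npx cypress run --env grepTags=@regression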

Even though automated test execution is faster, sequential execution won't scale with a rising number of test environments and permutations. As a result, concurrent test execution across environments is required to meet scalability requirements. Selenium Grid and cloud solutions like the Applitools Ultrafast Test Cloud enable you to execute automated tests in parallel across different configurations.

In addition to adhering to best practices when creating the test automation framework, these tests must run at a high pace and in parallel to provide faster feedback.

Choose What Works for You

Always! One cannot ignore business limitations and client demands when planning delivery. Based on your context, adopt the most suitable regression testing techniques.

Plan for Regression Testing in Sprints

I have seen it take a long time to automate a regression backlog. To keep making progress on this activity, while estimating the Sprint tasks, always account for regression testing efforts explicitly, or you might be increasing your technical debt in the form of uncovered bugs.

Communicate within the Cross-Functional Team

Changes are not always directly related to client needs, nor are they always conveyed. Internally, the development team continually optimises the code for reusability, performance, and other factors. Ensure that these source-code modifications are documented/tracked in a ticket so that the team can perform regression testing accordingly.

Regression Testing at Scale

An enterprise product results from multiple teams’ contributions across geographies. While the teams will independently conduct regression testing for their part, it mustn’t be done only in silos. The teams must also set up cadence structures/processes to test all integrational regression scenarios.

Crowdsourced Testing

Crowdsourced testing can help find brand new flaws in the programme, such as functionality, usability, and localization concerns, thereby improving the product’s quality.

Plan for Non-Functional Regression Testing

Non-functional elements like performance, security, accessibility, and usability must all be examined as part of your regression testing plan, in addition to functionality.

Benchmarking test execution results from past sessions and comparing them to test execution results after the most recent modifications is a simple but effective technique for detecting performance, accessibility, and other degradations.

Due to substantial faults in non-functional areas, applications with the best functionality have either failed to make it to production or have been shelved despite launching successfully.

In a similar vein, application security and accessibility issues have cost businesses millions of dollars in addition to a tarnished reputation.

The Need for an Automated Regression Test Suite

Regardless of your application architecture or development methodology, the importance of automating the regression tests can never fade away. Be it a small-scale application or an enterprise product, having automated tests will save you time, people’s energy and money in the longer run.

Let’s understand some reasons to automate the regression test suite:

Fast Feedback

Automated software verification is exponentially faster than manual verification. Automated continuous testing in the CI/CD pipeline is a powerful approach to identifying regression bugs as close to their introduction as possible, because of the increased speed and frequency at which it operates.

Equally important is to look at the test results from each automated suite execution and take meaningful steps to get the product and the test suite progressively better.

Timely identification of issues will avoid defect leakage in the most significant parts of the application and later stages of testing.

Consequently, this slight shift left always benefits the organisation in many ways beyond cost.

Automated Test Data Creation

Before getting to the actual testing, the testing teams spend a significant amount of time generating test data. Automation aids not only in the execution of tests but also in the rapid generation of large amounts of test data. The functional testing team may leverage data generated by scripts (SQL, APIs), allowing them to focus on testing rather than worrying about the data.

Features like pagination, infinite scroll, tabular representations, and app performance are a few examples where rapid test data generation gives the team instant test data.
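
A sketch of seeding such data through an API before a test (the endpoint and payload are hypothetical):

beforeEach(() => {
  // seed enough users for pagination to have something to page through
  cy.request('POST', '/api/test-data/users', { count: 60 })
})

it('paginates the user table', () => {
  cy.visit('/users')
  cy.get('[data-cy=user-row]').should('have.length', 25) // first page
  cy.get('[data-cy=next-page]').click()
  cy.get('[data-cy=user-row]').should('have.length', 25) // second page
})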

Banking and insurance are regulated sectors with several complex operations and subtleties. To exercise and address the data models and flows, a variety of test data is required. The ability to automate test data management has shown to be a critical component of successful testing.

Address Scalability

Parallel execution of the automated test suite answers the need for faster feedback. With the right infrastructure, and provided the automated test suite has been built to scale, teams can generate test results across a variety of environments, browsers, devices, and operating systems.

The Applitools Ultrafast Test Cloud is the next step forward in cross-browser testing. You run your functional and visual tests once locally using Ultrafast Grid (part of the Ultrafast Test Cloud), and the grid instantaneously generates all screens across whatever combination of browsers, devices, and viewports you choose. 

Use the Human Brain and Technology to Their Full Potential

Repetitive tasks are handled efficiently and consistently through automation. It does not make errors in the same way that people do.

It also allows humans to concentrate their ingenuity on exploratory testing, which machines cannot accomplish. You can deploy new features with a reduced time-to-market thanks to automation.

Maintenance of the Regression Test Suite

Now, let's complete the cycle by ensuring that the corresponding test cases (manual and automated) are also updated immediately with every modification or change request to any existing part of the application. These updated test cases should then become part of the regression suite.

Failing to adjust the test cases would create chaos for the teams involved. The resulting confusion might lead to incorrect testing of the underlying application and introduce unintended behavior and rollbacks.

Maintaining the regression test suite consists of adding new tests, modifying existing tests, and deleting irrelevant tests. These changes should be reflected in the manual and automated test suites.

Regression Testing Tools

There aren’t separate testing tools categorised as “regression testing tools.” The teams use the same testing tools; however, many test automation tools are utilised to automate the regression test suite. 

Depending on the project type, the following regression testing tools may be used in combination with the techniques mentioned in the previous section:

API Heavy Applications

APIs are the foundation of modern software development, especially as more and more teams abandon monolithic programmes in favour of a microservices-based strategy.

  • Contract-driven testing is gaining popularity, and rightly so because it avoids regression issues being committed to the repository in the first place as opposed to identifying them later in the process during the testing phase. Understand more about pacts/contracts here
  • Specmatic is an excellent open-source solution that uses the contracts available in OpenAPI spec format, and turns them into executable specifications which can be used by the provider and consumer in parallel. It also allows you to check the contract for backward compatibility via CI.
  • Testing APIs is comparatively faster than verifying functionality through the user interface. Hence, for faster and more accurate feedback flowing across the groups, adopt automated API and API workflow testing using open-source solutions like REST-Assured, Postman, etc.
Logos for Postman, Specmatic, Pact and Rest-Assured

UI Heavy Applications

UI accuracy is unquestionably vital for a successful business because it directly impacts end users.

Even when utilizing the most extraordinary development processes and frontend technology, testing the UI is one of the most significant bottlenecks in a release.

Applitools is a pioneer in AI-powered automated visual regression testing. Their solution allows you to integrate Visual Testing with functional and regression UI automation and in turn get increased test coverage, quick feedback, and seamless scaling by using the Applitools Ultrafast Grid – all while writing less code. You can try out their solutions by signing up for a free account and going through the tutorials available here.

Applitools is the leader in Visual Testing

Support & Maintenance Portfolio

Teams responsible for testing legacy applications often experience the need to explore the application before blindly getting started with the regression test suite.

Utilizing the results from your exploratory testing sessions to populate and validate your impact analysis documents and RTMs proves beneficial in making necessary modifications to the regression test suite.

Exploratory testing tools are incredibly valuable and can assist you in achieving your goal for the session, whether it’s to explore a component of the app, detect flaws, or determine the relationship between features.

Non-Functional Testing

Each of the following topics is a specialised field in and of itself, and it is impossible to cover them all in one blog post. This list, on the other hand, will undoubtedly get you thinking in that direction.

Performance Testing

  • Performance issues can occur at any tier of the software stack, including the operating system, network, disc, web, application, and database layer.
  • Open source regression testing tools such as Apache JMeter, Gatling, Locust, Taurus, and others help detect performance issues such as concurrency, throughput, peak load, performance bottlenecks, and so on throughout the software stack.
  • Application performance monitoring (APM) tools are also used by development teams to link coding practises to app performance throughout development.

Security Testing

  • Zed Attack Proxy (ZAP), Wfuzz, Wapiti, W3af, SQLMap, SonarQube, Nogotofail, Iron Wasp, Grabber, and Arachni are open source security testing tools that help with assessing security conditions such as Authentication, Authorization, Availability, Confidentiality, Integrity, and Non-repudiation. 
  • To reap the benefits of both methodologies, organisations combine static application security testing (SAST) with dynamic application security testing (DAST).

Accessibility Testing

  1. Use Applitools Contrast Advisor to identify contrast violations as part of your test automation execution. This solution works for native Android apps, native iOS apps, all Web Browsers including mobile-web, PDF documents and images.
  2. Screen readers – VoiceOver, NVDA, JAWS, Talkback, etc.
  3. WAT (Web accessibility toolbar) – WAVE, Colour Contrast Analyser, etc.

Summary

A well-thought-out regression testing plan will aid your team in achieving your QA and software development goals, whether the architecture is monolithic or microservices-based, and whether the application is new or old. You can learn about how Applitools can help with functional and visual regression testing here.

Editor’s Note: This post was originally published in January 2022, and has since been updated for completeness and accuracy.

The post What is Regression Testing? Definition, Tutorial & Examples appeared first on Automated Visual Testing | Applitools.

]]>
What is Functional Testing? Types and Example (Full Guide) https://applitools.com/blog/functional-testing-guide/ Fri, 13 May 2022 20:12:32 +0000 https://applitools.com/?p=38369 Learn what functional testing is in this complete guide, including an explanation of functional testing types and examples of techniques.

The post What is Functional Testing? Types and Example (Full Guide) appeared first on Automated Visual Testing | Applitools.

]]>

What is Functional Testing?

Functional testing is a type of software testing where the basic functionalities of an application are tested against a predetermined set of specifications. Using Black Box Testing techniques, functional tests measure whether a given input returns the desired output, regardless of any other details. Results are binary: tests pass or fail.

Why is Functional Testing Important?

Functional testing is important because without it, you may not accurately understand whether your application functions as intended. An application may pass non-functional tests and otherwise perform well, but if it doesn’t deliver the key expected outputs to the end-user, the application cannot be considered working.

What is the Difference between Functional and Non-Functional Testing?

Functional tests verify whether specified functional requirements are met, whereas non-functional tests evaluate qualities like performance, security, scalability or overall quality of the application. To put it another way, functional testing is concerned with whether key functions operate, and non-functional testing is more concerned with how those operations take place.

Examples of Functional Testing Types

There are many types of functional tests that you may want to complete as you test your application. 

A few of the most common include:

Unit Testing

Unit testing breaks down the desired outcome into individual units, allowing you to test whether a small number of inputs (sometimes just one) produce the desired output. Unit tests tend to be among the smallest tests to write and execute quickly, as each is designed to cover only a single section of code (a function, method, object, etc.) and verify its functionality.

Smoke Testing

Smoke testing is done to verify that the most critical parts of the application work as intended. It’s a first pass through the testing process, and is not intended to be exhaustive. Smoke tests ensure that the application is operational on a basic level. If it’s not, there’s no need to progress to more detailed testing, and the application can go right back to the development team for review.

Sanity Testing

Sanity testing is in some ways a cousin to smoke testing, as it is also intended to verify basic functionality and potentially avoid detailed testing of broken software. The difference is that sanity tests are done later in the process in order to test whether a new code change has had the desired effect. It is a “sanity check” on a specific change to determine if the new code roughly performs as expected. 

Integration Testing

Integration testing determines whether combinations of individual software modules function properly together. Individual modules may already have passed independent tests, but when they are dependent on other modules to operate successfully, this kind of testing is necessary to ensure that all parts work together as expected.

Regression Testing

Regression testing makes sure that the addition of new code does not break existing functionalities. In other words, did your new code cause the quality of your application to “regress” or go backwards? Regression tests target the changes that were made and ensure the whole application continues to remain stable and function as expected.

User Acceptance Testing (UAT)/Beta Testing

User acceptance (beta) testing involves exposing your application to a limited group of real users in a production environment. The feedback from these live users – who have no prior experience with the application and may discover critical bugs that were unknown to internal teams – is used to make further changes to the application before a full launch.

UI/UX Testing 

UI/UX testing evaluates the graphical user interface of the application. The performance of UI components such as menus, buttons, text fields and more are verified to ensure that the user experience is ideal for the application’s users. UI/UX testing is also known as visual testing and can be manual or automated.

Other classifications of functional testing include black box testing, white box testing, component testing, API testing, system testing and production testing.

How to Perform Functional Testing

The essence of a functional test involves three steps:

  • Determine the desired test input values
  • Execute the tests
  • Evaluate the resulting test output values

Essentially, when you execute a task with a given input (e.g., enter an email address into a text field and click submit), does your application generate the expected output (e.g., the user is subscribed and a thank-you page is displayed)?
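
Expressed as an automated test, that input/output check might look like this Cypress-style sketch (the route and selectors are hypothetical):

it('subscribes a user and shows the thank-you page', () => {
  cy.visit('/newsletter')
  cy.get('[data-cy=email]').type('user@example.com') // the input
  cy.get('[data-cy=submit]').click()
  cy.contains('Thank you').should('be.visible')      // the expected output
})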

We can understand this further with a quick example.

Functional Testing Example

Let’s begin with a straightforward application: a calculator. 

To create a set of functional tests, you would need to:

  • Evaluate all the possible inputs – such as numbers and mathematical symbols – and design assertions to test their functionality
  • Execute the tests (either automated or manually)
  • Ensure that the desired outputs are generated – e.g.: each mathematical function works as intended, the final result is given correctly in all cases, the formula history is displayed accurately, etc.

For more on how to create a functional test, you can see a full guide on how to write an automated functional test for this example.

Functional Testing Techniques 

There are many functional testing techniques you might use to design a test suite for this:

  • Boundary value tests evaluate what happens if inputs are received outside of specified limits – such as a user entering a number that was too large (if there is a specified limit) or attempting to enter non-numeric input
  • Decision-based tests verify the results after a user decides to take an action, such as clearing the history
  • User-based tests evaluate how components work together within an application – if the calculator’s history was stored in the cloud, this kind of test would verify that it did so successfully
  • Ad-Hoc tests can be done at the end to try and discover bugs other methods did not uncover by seeking to break the application and check its response

Other common functional testing techniques include equivalence testing, alternate flow testing, positive testing and negative testing.
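
For instance, a boundary value test for the calculator example above could be sketched like this (the selectors and the 15-digit input limit are assumptions):

it('ignores digits beyond the maximum supported length', () => {
  cy.visit('/calculator')
  cy.get('[data-cy=display]').type('9'.repeat(16))                 // one digit past the limit
  cy.get('[data-cy=display]').should('have.value', '9'.repeat(15)) // the extra digit is ignored
  cy.contains('Maximum length reached').should('be.visible')
})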

Automated Functional Testing vs Manual Functional Testing

Manual functional testing requires a developer or test engineer to design, create and execute every test by hand. It is flexible and can be powerful with the right team. However, as software grows in complexity and release windows get shorter, a purely manual testing strategy will face challenges keeping up a large degree of test coverage.

Automated functional testing automates many parts of the testing process, allowing tests to run continuously without human interaction – and with less chance for human error. Tests must still be designed and have their results evaluated by humans, but recent improvements in AI mean that with the right tool an increasing share of the load can be handled autonomously.

How to Use Automated Visual Testing for Functional Tests

One way to automate your functional tests is by using automated visual testing. Automated visual testing uses Visual AI to view software in the same way a human would, and can automatically highlight any unexpected differences with a high degree of accuracy.

Visual testing allows you to test for visual bugs, which are otherwise extremely challenging to uncover with traditional functional testing tools. For example, suppose an unrelated change shifted a “submit” button to the far right of a page where the user could no longer click it. If the button was still technically on the page and using the correct identifier, it would pass a traditional functional test. Visual testing would catch this bug and ensure functionality is not broken by a visual regression.
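As a rough illustration, here is how a page-level visual check could be added to an ordinary functional test with the Applitools Eyes SDK for Cypress. The app name, test name, and selectors are placeholders, and the sketch assumes the @applitools/eyes-cypress plugin is already installed and an API key is configured.

```typescript
/// <reference types="cypress" />
// Visual check sketch using @applitools/eyes-cypress alongside a functional test.
// Assumes the eyes-cypress plugin is installed and an API key is configured.
describe('checkout page', () => {
  it('still looks right and the submit button still works', () => {
    cy.visit('/checkout');

    cy.eyesOpen({ appName: 'Demo Shop', testName: 'Checkout page' });
    cy.eyesCheckWindow('Checkout form');  // Visual AI compares against the baseline
    cy.eyesClose();

    // The functional assertion alone would miss a button pushed out of its layout
    // but still present in the DOM; the visual check above covers that gap.
    cy.get('button[type="submit"]').click();
    cy.url().should('include', '/confirmation');
  });
});
```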

How to Choose an Automated Testing Tool?

Here are a few key considerations to keep in mind when choosing an automated testing tool:

  • Ease of Use: Is it something your existing QA team can use easily, or that is easy to hire for? Does it have a steep learning curve, or can it be picked up quickly?
  • Flexibility: Can it be used across different platforms? Can it easily integrate with your current testing environment, and does it allow you the freedom to change your environment in the future?
  • Reusability/AI Assistance: How easy is it to reuse tests, particularly if the UI changes? Is there meaningful AI that can help you test more efficiently, particularly at the scale you need?
  • Support: What level of customer support do you require, and how easily can you receive it from the provider of your tool?

Automated testing tools can be paid or open source. Some popular open source tools include Selenium for web testing and Appium for mobile testing.

Why Choose Automated Visual Testing with Applitools

Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.

You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.

Happy Testing!

Keep Learning

Looking to learn more about Functional Testing? Check out the resources below to find out more.

The post What is Functional Testing? Types and Example (Full Guide) appeared first on Automated Visual Testing | Applitools.

Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io https://applitools.com/blog/applitools-testim-io-codeless-end-to-end-ai-powered-cross-browser-ui-testing/ Fri, 18 Feb 2022 17:27:57 +0000 https://applitools.com/?p=34425 The newly enhanced integration makes it easier for all testers to use Applitools and our AI-powered visual testing platform with Testim.io.

The post Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io appeared first on Automated Visual Testing | Applitools.


As a product manager at Applitools, I am excited to announce an enriched and updated integration with Testim.io! This enhanced integration makes it easier for testers of any technical ability to use Applitools and our AI-powered visual testing platform by using Testim.io to easily create your test scripts.

What Is Testim.io Used For?

Testim.io is a cloud platform that allows users to create, execute, and maintain automated tests without using code.

It is a perfect tool for getting started with your first automated tests if you do not have an existing automated testing framework or have not started running tests yet. Testim.io also allows you to integrate your own custom code into its steps, so you can implement custom validations if you need to.

How Do Applitools and Testim.io Integrate?

The visual validation powered by Applitools Eyes allows you to compare expected results (the baseline) with the actual results of the tests you create in Testim.io. By using Visual AI to compare snapshots, Applitools Eyes can spot any unexpected changes and highlight them visually. This lets you expand your test coverage to include everything on a given page and visually verify your results quickly.

As part of the integration, you can modify test parameters to customize Eyes while working with the Testim UI.

This AI-based visual validation functionality is provided by Applitools and requires simple integration setup in the Eyes application. Learn more.

So, What’s New With Applitools and Testim.io?

This up-to-date integration provides access to Applitools’ latest and greatest capabilities, including Ultrafast Test Cloud, enabling ultrafast cross-browser and cross-platform functional and visual testing. Testim users also now have access to Root Cause Analysis and many more powerful Applitools features!

The new integration also greatly improves the user experience for test creators adding Applitools Eyes checkpoints to their Testim.io tests. Visual validations can be added right inside Testim, and the maintenance and analysis of test results are much simpler.

What Kind of Visual Validations Can You Do?

You can perform the following visual validations:

  • Element Visualization – The Validate Element step allows you to compare the visual differences of a specific element between your baseline and your current test run.
  • Viewport Visualization – The Validate Viewport step allows you to compare the visual differences between your baseline and your current test run at the viewport level.
  • Full-page Visualization – Full-page validation allows you to compare the visual differences between your baseline and your current test run across the entire page.

What Are the New Visual Validation Settings?

Whether you select the element, viewport, or full-page visualization option, you can always override the visual settings for that test or step.

The following Applitools Eyes settings can be accessed via the Testim.io UI:

  • Add Environment (New) – allows you to select Ultrafast Test Cloud environments. You can run the same test on multiple environments: different browser types and viewports for web, Chrome emulation, or iOS simulation for mobile devices. Using the Applitools Ultrafast Test Cloud you can now increase your coverage and accelerate your release cycles.
  • Match Level – When writing a visual test, sometimes we want to change the comparison method between our test and its baseline, especially when dealing with applications that contain dynamic content. Here you can update the Applitools Eyes match level directly from the Testim UI (a sketch of the SDK equivalent follows this list).
  • Enable RCA [Root Cause Analysis] (New) – when this flag is on, it provides insights into the causes of visual mismatches, so that when looking at the Eyes dashboard you can see the DOM and CSS captured with the image.
  • Ignore displacement (New) – when this flag is on, it hides differences caused by element displacements. This is useful, for example, when content is added or deleted, causing other elements on the page to be displaced and generating additional differences.
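For reference, teams that call an Eyes SDK directly (rather than through the Testim UI) set roughly the same options in code. The following is a hedged sketch using the JavaScript Eyes SDK for Selenium; exact class and method names can vary between SDK versions, so treat it as an outline rather than a definitive API reference.

```typescript
// Hedged sketch: configuring match level and ignore-displacements in code.
// Names follow the @applitools/eyes-selenium JavaScript SDK; check your SDK
// version's docs, as the exact API surface may differ.
import { Eyes, Configuration, MatchLevel } from '@applitools/eyes-selenium';

const configuration = new Configuration();
configuration.setAppName('My App');                  // hypothetical app name
configuration.setTestName('Dynamic content page');   // hypothetical test name
configuration.setMatchLevel(MatchLevel.Layout);      // tolerate dynamic text/content
configuration.setIgnoreDisplacements(true);          // hide diffs caused only by shifted elements

const eyes = new Eyes();
eyes.setConfiguration(configuration);
```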

User Experience Improvements

In addition to exposing new features in the Testim UI, we have provided better visibility to Testim tests in Applitools Eyes:

  • Testim test properties are passed to the Eyes Dashboard to allow better filtering and grouping by Testim test properties.
  • Testim multi-step tests and test suites are now also grouped on the Applitools Eyes dashboard and displayed as one batch, creating a better user experience when moving between the two products.
  • Testim Selenium and extension modes are supported.

Complete and Scalable AI-Powered UI Testing

Testim.io allows users to quickly create and maintain tests through record and playback. Adding Applitools visual testing with Ultrafast Test Cloud capabilities will make sure your release cycles are short and test analysis and maintenance are easier than ever!

Learn More about Testim.io-Applitools Integration

If you want to learn more about how you can integrate your codeless Testim tests with Applitools and benefit from the latest Applitools capabilities, head over to Testim.io documentation.

Contact us if you have any queries about Applitools!

Happy testing!

The post Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io appeared first on Automated Visual Testing | Applitools.

What is Visual AI? https://applitools.com/blog/visual-ai/ Wed, 29 Dec 2021 14:27:00 +0000 https://applitools.com/?p=33518 Learn what Visual AI is, how it’s applied today, and why it’s critical across many industries - in particular software development and testing.

The post What is Visual AI? appeared first on Automated Visual Testing | Applitools.


In this guide, we’ll explore Visual Artificial Intelligence (AI) and what it means. Read on to learn what Visual AI is, how it’s being applied today, and why it’s critical across a range of industries – and in particular for software development and testing.

From the moment we open our eyes, humans are highly visual creatures. The visual data we process today increasingly comes in digital form. Whether via a desktop, a laptop, or a smartphone, most people and businesses rely on having an incredible amount of computing power available to them and the ability to display any of millions of applications that are easy to use.

The modern digital world we live in, with so much visual data to process, would not be possible without Artificial Intelligence to help us. Visual AI is the ability for computer vision to see images in the same way a human would. As digital media becomes more and more visual, the power of AI to help us understand and process images at a massive scale has become increasingly critical.

What is AI? Background on Artificial Intelligence and Machine Learning

Artificial Intelligence refers to a computer or machine that can understand its environment and make choices to maximize its chance of achieving a goal. As a concept, AI has been with us for a long time, with our modern understanding informed by stories such as Mary Shelley’s Frankenstein and the science fiction writers of the early 20th century. Many of the modern mathematical underpinnings of AI were advanced by English mathematician Alan Turing over 70 years ago.


Since Turing’s day, our understanding of AI has improved. However, even more crucially, the computational power available to the world has skyrocketed. AI is able to easily handle tasks today that were once only theoretical, including natural language processing (NLP), optical character recognition (OCR), and computer vision.

What is Visual Artificial Intelligence (Visual AI)?

Visual AI is the application of Artificial Intelligence to what humans see, meaning that it enables a computer to understand what is visible and make choices based on this visual understanding.

In other words, Visual AI lets computers see the world just as a human does, and make decisions and recommendations accordingly. It essentially gives software a pair of eyes and the ability to perceive the world with them.

As an example, seeing “just as a human does” means going beyond simply comparing the digital pixels in two images. This “pixel comparison” kind of analysis frequently uncovers slight “differences” that are in fact invisible – and often of no interest – to a genuine human observer. Visual AI is smart enough to understand how and when what it perceives is relevant for humans, and to make decisions accordingly.
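To see why raw pixel comparison is so noisy, consider a naive diff like the sketch below: it counts any byte-level difference as a change, so anti-aliasing or sub-pixel rendering shifts that no human would notice still get flagged. (This is purely an illustration of the naive approach, not of how Visual AI works internally.)

```typescript
// Naive pixel diff: any byte-level difference counts as a "change",
// which is exactly why invisible rendering noise produces false positives.
function naivePixelDiff(baseline: Uint8Array, current: Uint8Array): number {
  if (baseline.length !== current.length) return Number.MAX_SAFE_INTEGER;
  let differingBytes = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== current[i]) differingBytes++; // anti-aliasing alone trips this
  }
  return differingBytes;
}

// A single sub-pixel font rendering change can make differingBytes > 0,
// flagging a "bug" that a human reviewer could never see on screen.
```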


How is Visual AI Used Today?

Visual AI is already in widespread use today, and has the potential to dramatically impact a number of markets and industries. If you’ve ever logged into your phone with Apple’s Face ID, let Google Photos automatically label your pictures, or bought a candy bar at a cashierless store like Amazon Go, you’ve engaged with Visual AI. 

Technologies like self-driving cars, medical image analysis, advanced image editing capabilities (from Photoshop tools to TikTok filters) and visual testing of software to prevent bugs are all enabled by advances in Visual AI.

How Does Visual AI Help?

One of the most powerful use cases for AI today is to complete tasks that would be repetitive or mundane for humans to do. Humans are prone to miss small details when working on repetitive tasks, whereas AI can repeatedly spot even minute changes or issues without loss of accuracy. Any issues found can then either be handled by the AI, or flagged and sent to a human for evaluation if necessary. This has the dual benefit of improving the efficiency of simple tasks and freeing up humans for more complex or creative goals.

Visual AI, then, can help humans with visual inspection of images. While there are many potential applications of Visual AI, the ability to automatically spot changes or issues without human intervention is significant. 

Cameras at Amazon Go can watch a vegetable shelf and understand both the type and the quantity of items taken by a customer. When monitoring a production line for defects, Visual AI can not only spot potential defects but understand whether they are dangerous or trivial. Similarly, Visual AI can observe the user interface of software applications to not only notice when changes are made in a frequently updated application, but also to understand when they will negatively impact the customer experience.

How Does Visual AI Help in Software Development and Testing Today?

Traditional testing methods for software testing often require a lot of manual testing. Even at organizations with sophisticated automated testing practices, validating the complete digital experience – requiring functional testing, visual testing and cross browser testing – has long been difficult to achieve with automation. 

Without an effective way to validate the whole page, Automation Engineers are stuck writing cumbersome locators and complicated assertions for every element under test. Even after that’s done, Quality Engineers and other software testers must spend a lot of time squinting at their screens, trying to ensure that no bugs were introduced in the latest release. This has to be done for every platform, every browser, and sometimes every single device their customers use. 

At the same time, software development is growing more complex. Applications have more pages to evaluate and increasingly faster – even continuous – releases that need testing. This can result in tens or even hundreds of thousands of potential screens to test (see below). Traditional testing, which scales linearly with the resources allocated to it, simply cannot scale to meet this demand. Organizations relying on traditional methods are forced to either slow down releases or reduce their test coverage.

A table showing the number of screens in production by modern organizations - 81,480 is the market average, and the top 30% of the market is  681,296
Source: The 2019 State of Automated Visual Testing

At Applitools, we believe AI can transform the way software is developed and tested today. That’s why we invented Visual AI for software testing. We’ve trained our AI on over a billion images and use numerous machine learning and AI algorithms to deliver 99.9999% accuracy. Using our Visual AI, you can achieve automated testing that scales with you, no matter how many pages or browsers you need to test. 

That means Automation Engineers can quickly take snapshots that Visual AI can analyze rather than writing endless assertions. It means manual testers will only need to evaluate the issues Visual AI presents to them rather than hunt down every edge and corner case. Most importantly, it means organizations can release better quality software far faster than they could without it.
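As a rough sketch of that shift, compare a locator-and-assertion approach with a single whole-page snapshot using the JavaScript Eyes SDK for Selenium. The application name and selectors are illustrative, and the exact method names may differ slightly between SDK versions.

```typescript
import { WebDriver } from 'selenium-webdriver';
import { Eyes, Target } from '@applitools/eyes-selenium';

// Before: one brittle assertion per element (illustrative selectors).
//   assert(await driver.findElement(By.css('.logo')).isDisplayed());
//   assert((await driver.findElement(By.css('h1')).getText()) === 'Welcome');
//   ...repeated for every element you care about.

// After: one whole-page snapshot that Visual AI validates as a whole.
export async function checkHomePage(driver: WebDriver): Promise<void> {
  const eyes = new Eyes();
  await eyes.open(driver, 'My App', 'Home page'); // hypothetical app/test names
  try {
    await eyes.check('Home page', Target.window().fully());
    await eyes.close();
  } finally {
    // Discard the test if a step above threw; recent SDKs call this abort(),
    // older ones abortIfNotClosed().
    await eyes.abort();
  }
}
```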

Visual AI is 5.8x faster, 5.9x more efficient, 3.8x more stable, and catches 45% more bugs
Source: The Impact of Visual AI on Test Automation Report

How Visual AI Enables Cross Browser/Cross Device Testing

Additionally, thanks to its high level of accuracy and efficient validation of the entire screen, Visual AI opens the door to simpler and faster cross browser and cross device testing. By rendering pages across all the device/browser combinations rather than re-executing tests on each one, teams can get test results 18.2x faster using the Applitools Ultrafast Test Cloud than with traditional execution grids or device farms.
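Concretely, the “render once, check everywhere” model is mostly a configuration concern in the Eyes SDKs. Here is a hedged sketch using the JavaScript SDK naming; constructor options and enum names can vary by version.

```typescript
// Hedged sketch: one test run, rendered against many browsers/devices in the
// Ultrafast Grid. Class and enum names follow the JS Eyes SDK and may vary.
import {
  Eyes,
  VisualGridRunner,
  Configuration,
  BrowserType,
  DeviceName,
  ScreenOrientation,
} from '@applitools/eyes-selenium';

const runner = new VisualGridRunner(5); // concurrency; exact constructor options vary by version
const configuration = new Configuration();
configuration.addBrowser(1200, 800, BrowserType.CHROME);
configuration.addBrowser(1200, 800, BrowserType.FIREFOX);
configuration.addBrowser(1200, 800, BrowserType.SAFARI);
configuration.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);

const eyes = new Eyes(runner);
eyes.setConfiguration(configuration);
// The test itself runs once; the grid renders and checks each environment.
```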

Traditional test cycle takes 29.2 hours, modern test cycle takes just 1.6 hours.
Source: Modern Cross Browser Testing Through Visual AI Report

How Will Visual AI Advance in the Future?

As computing power increases and algorithms are refined, the impact of Artificial Intelligence, and Visual AI in particular, will only continue to grow.

In the world of software testing, we’re excited to use Visual AI to move past simply improving automated testing – we are paving the way towards autonomous testing. For this vision (no pun intended), we have been repeatedly recognized as a leader by the industry and by our customers.

Keep Reading: More about Visual AI and Visual Testing

What is Visual Testing (blog)

The Path to Autonomous Testing (video)

What is Applitools Visual AI (learn)

Why Visual AI Beats Pixel and DOM Diffs for Web App Testing (article)

How AI Can Help Address Modern Software Testing (blog)

The Impact of Visual AI on Test Automation (report)

How Visual AI Accelerates Release Velocity (blog)

Modern Functional Test Automation Through Visual AI (free course)

Computer Vision defined (Wikipedia)

The post What is Visual AI? appeared first on Automated Visual Testing | Applitools.

Announcing Applitools Eyes 10.13: Enhanced Team Collaboration, Baseline across Browser/OS Versions and More https://applitools.com/blog/eyes-10-13-release/ Mon, 01 Nov 2021 15:55:39 +0000 https://applitools.com/?p=31771 We've just released a brand new version of Applitools which includes a new MS Teams integration, updates to Slack/RallyDev/GitHub integrations, new baseline option for testing across OS and browser versions, and UX enhancements on defining multiple regions!

The post Announcing Applitools Eyes 10.13: Enhanced Team Collaboration, Baseline across Browser/OS Versions and More appeared first on Automated Visual Testing | Applitools.


We are excited to announce the latest release of Applitools Eyes, 10.13. A big focus of this release is helping teams work efficiently, collaborate, and receive notifications on visual changes via the communication systems they are already using every day.

Along with the new and expanded integration options, we’ve added some additional improvements that we hope you’ll find useful!

Using Microsoft Teams to collaborate with your colleagues? You can now get Eyes test results directly in your Microsoft Teams chat

In addition to sharing test results on both Slack and via email, the new Applitools Eyes-Microsoft Teams integration provides you with the option to receive and view your test results via your Microsoft Teams chat. The Applitools Eyes App sends notifications to your Microsoft Teams chat to inform you when batches have finished running and to share a results summary with you. Learn more.

The Teams integration for Applitools Eyes, showing the toggle for on/off and notification options.

Enhanced baseline creation for new browsers and OS versions 

New browser or OS versions often introduce visual differences, so it is important to test across multiple versions to ensure visual perfection across all screens. Applitools Eyes now supports efficient and simple testing of your application on new browser and OS versions. To save you time and effort, Eyes identifies the most relevant baseline to reuse whenever you test on a new version. It also allows you to easily filter and group baselines according to the browser or OS version you would like to explore. This capability is enabled by default for new accounts; if you are an existing user, please contact support to turn it on for your accounts. Learn more.

And there is more…

  • Receive more focused test results notifications via Slack integration – Slack notifications can now be sent according to selected batch properties.
  • Ensure a perfect UI with each commit via the GitHub integration – the test status is now updated on all commits.
  • Enhanced Rally integration – in addition to opening issues from within the Eyes dashboard, closing issues in Eyes now automatically triggers an update in Rally.
  • Benefit from UX improvements for marking multiple regions – keep the draw mode active when adding multiple regions of the same type.

Explore this new release to find out more! Existing customers can upgrade today for free, or if you’re new to Applitools feel free to explore the latest features with a free trial below.

The post Announcing Applitools Eyes 10.13: Enhanced Team Collaboration, Baseline across Browser/OS Versions and More appeared first on Automated Visual Testing | Applitools.

What’s New In Selenium 4? https://applitools.com/blog/selenium-4/ Thu, 14 Oct 2021 07:37:00 +0000 https://applitools.com/?p=19463 There are a lot of cool new things coming up in Selenium 4. We're getting very close to the official release, and we've got a full review of what's coming for you.

The post What’s New In Selenium 4? appeared first on Automated Visual Testing | Applitools.


(Editor’s Note: This post has been recently updated for accuracy and completeness. It was originally published in June 2020 by Manoj Kumar.) 

There are a lot of cool and new things that just arrived in Selenium 4. If you haven’t heard, the official Selenium 4 release came out yesterday, and we’re excited by all the latest updates. We’ve got a full review of this long-awaited release ready for you, but first here’s a quick refresher on a few of the most interesting updates for Selenium 4.

What’s New in Selenium 4?

After an extensive alpha and beta period to get everything right, Selenium 4 has now been officially released!

In the new release, changes have been made to the highly anticipated Relative Locators feature: the returned elements are now sorted by proximity to make the results more deterministic. Proximity here means the distance between the midpoints of each element’s bounding client rect. Also new is the ability to use any selector (not just a tag name) when creating relative locators.
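For instance, with the JavaScript bindings a relative locator can now be anchored on any selector, roughly like the sketch below. It assumes locateWith is exported from selenium-webdriver as shown in the 4.x documentation; the page URL, IDs, and CSS selector are invented.

```typescript
// Relative locators sketch with the Selenium 4 JavaScript bindings.
// Assumes `locateWith` is exported by selenium-webdriver 4.x; the page,
// IDs, and CSS selector are hypothetical.
import { Builder, By, locateWith } from 'selenium-webdriver';

async function findSubmitNearEmail(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/signup');

    // Any selector can anchor a relative locator now, not just a tag name.
    const submit = await driver.findElement(
      locateWith(By.css('button.primary')).below(By.id('email'))
    );
    await submit.click();
  } finally {
    await driver.quit();
  }
}
```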

Also in this release, work on the NetworkInterceptor has begun. Once complete, this functionality will be part of the new Chrome DevTools feature and will allow testers to stub out responses to network requests!

A Refresher: Getting Started with Selenium 4

Here are a few links outlining how you can get started with Selenium 4:

Watch Simon Stewart Break Down the Selenium 4 Updates

Although Selenium 4 is designed as a drop-in replacement for Selenium 3, it has some new tricks to help make your life as a tester easier. These include things like “relative locators,” new support for intercepting network traffic, changes in how you can create a new Selenium instance, and more! Catch Selenium project lead Simon Stewart as he explains how these new features work and demonstrates how to use them. Learn how to take advantage of all that Selenium 4 can offer your tests!

What is your plan to move to Selenium 4.0? If you do not plan to upgrade, why not? What is preventing you from upgrading now that the official release is out?

To recap everything that’s new in the latest version of Selenium, keep reading for a full review of the cool things that have arrived in Selenium 4:

What’s New in Selenium 4

Selenium 4 is now released!

A lot of developments have happened since Selenium 4 was announced during the State of the Union Keynote by Simon Stewart and Manoj Kumar. There has been a significant amount of work done and we’ve released at least six alpha versions and four betas of Selenium 4 for users to try out and report back with any potential bugs so that we can make it right. Now, the official release is here.


These are exciting times for the Selenium community, as we have a lot of new features and enhancements that make Selenium WebDriver even more usable and scalable for practical use cases.

Selenium is a suite of tools designed to support different user groups:

  • Selenium IDE supports rapid test development, and doesn’t require extensive programming knowledge
  • WebDriver provides a friendly and flexible API for browser automation in most major programming languages
  • Grid makes it possible to distribute and run your tests across more than just one machine.

Let us dive in and take a look at some of the significant features that were released in each of these tools and share some of the cool upcoming features that are now available in Selenium 4.

Selenium WebDriver

One of the main reasons for releasing WebDriver as a major version (Selenium 4) is the complete adoption of the W3C protocol. The W3C protocol dialect has been available since Selenium WebDriver 3.8, alongside the JSON wire protocol. This change in protocol isn’t going to impact users in any way, as all major browser drivers (such as geckodriver and chromedriver), and many third party projects, have already fully adopted the W3C protocol.

However, there are some notable new APIs, as well as the removal of deprecated APIs in the WebDriver API, such as:

  • Elements:
    • The FindsBy* interfaces (e.g. FindsByID, FindsByCss …) have been deleted. The recommended alternative is to use a `By` instance passed to `findElements` instead.
    • “Relative locators”: a friendly way of locating elements using terms that users use, like “near”, “left of”, “right of”, “above” and “below”. This was inspired by an automation tool called Sahi by Narayan Raman, and the approach has also been adopted by tools like Taiko by ThoughtWorks.
    • A richer set of exceptions, providing better information about why a test might have failed. These include exceptions like ElementClickInterceptedError, NoSuchCookieError & more.
  • Chrome Debugging Protocol (CDP):
    • Although Selenium works on every browser, for those browsers that support it, Selenium 4 offers CDP integration, which allows us to take advantage of the enhanced visibility into the browser that a debugging protocol gives.
    • Because the CDP is, as the name suggests, designed for debuggers, it’s not the most user friendly of APIs. Fortunately, the Selenium team is working to provide comfortable cross-language APIs to cover common requirements, such as network stubbing, capturing logs, mocking geolocation, and more.
  • Browser Specifics:
    • A new ChromiumDriver class that the driver packages for both the Chrome and Edge browsers extend.
    • New methods to install and uninstall add-ons for the Firefox browser at runtime.
  • Window Handling:
    • Users can put the browser into full-screen mode during script execution.
    • Better control of whether new windows open as tabs, or in their own window.
  • Screenshots:
    • An option to grab a screenshot at the UI element level, rather than only the usual viewport-level screenshot (see the sketch after this list).
    • Full Page Screenshot support for Firefox browser.
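To make a couple of these additions concrete, here is a hedged sketch using the JavaScript bindings. It assumes the 4.x APIs for new windows and element-level screenshots are available in your driver version, and the URLs and selectors are illustrative.

```typescript
// Sketch of two Selenium 4 additions: opening a new tab and taking an
// element-level screenshot. URLs and selectors are illustrative only.
import { promises as fs } from 'fs';
import { Builder, By } from 'selenium-webdriver';

async function demoNewApis(): Promise<void> {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('https://example.com');

    // Better window handling: open a fresh tab (or 'window') explicitly.
    await driver.switchTo().newWindow('tab');
    await driver.get('https://example.com/pricing');

    // Element-level screenshot instead of the whole viewport.
    const header = await driver.findElement(By.css('header'));
    const base64Png = await header.takeScreenshot();
    await fs.writeFile('header.png', base64Png, 'base64');
  } finally {
    await driver.quit();
  }
}
```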

What’s next in WebDriver beyond Selenium 4?

It would be nice to let users extend the locator strategies with options like FindByImage or FindByAI (as in Appium) – right now we have a hardcoded list of element location strategies. Providing a lightweight way of extending this set, particularly when using Selenium Grid, is on the roadmap.

Selenium IDE

The original Selenium IDE reached its end of life in August 2017, when Mozilla released Firefox 55, which switched its add-ons from the Mozilla-specific “XPI” format to the standardised “Web Extension” mechanism. This meant that the original Selenium IDE would no longer work in Firefox versions moving forwards.

Thanks to Applitools, Selenium IDE has been revived! It is one of the significant improvements in Selenium 4 and includes notable changes like:

  • A shiny new UI for a better user experience.
  • A web-extension based plugin, which makes Selenium IDE available for the Chrome and Firefox browsers as well as any other browser that supports web-extension based plugins. It will soon be available in the MS Edge store.
  • Code export is now available for all the official language bindings such as Java, .Net, Python, Ruby & JavaScript.
  • A new plugin system that allows users to create new commands and code exports for new languages and frameworks. The plugins can be shipped as extensions. An example of a plugin is Applitools for Selenium IDE, which enables codeless visual testing.
  • A new CLI runner called “Selenium-side-runner”, running on Node.js. It allows users to execute the recorded tests in parallel with multi-browser capability.
  • A control flow mechanism which helps users write better tests using “while” & “if” conditions.
  • A backup element selector that can fall back and select elements using a different locator strategy like ID, CSS & XPath based on the recorded information. This helps make tests more stable and reliable.
  • Selenium IDE is accessible! We’ve gone above and beyond to make sure that it conforms to some of the latest accessibility guidelines and supports necessary controls like focus order, roles, tooltips, announcing the start of recording, color and design.

What’s next in Selenium IDE?

A remarkable milestone for Selenium IDE is that it’s going to be available as a standalone app, re-written as an Electron app. Because Electron binds tightly to the browser, this would allow us to listen for events from the browser, making test recording more powerful and feature-rich.

Selenium Grid

One of the essential improvements in Selenium 4 is the ability to use Docker to spin up containers instead of users setting up heavy virtual machines. Selenium Grid has been redesigned so that users can deploy it on Kubernetes for excellent scaling and self-healing capabilities.

Let’s look at some of the significant improvements:

  • We’ve enhanced Selenium Grid deployment for more scalable and traceable infrastructure.
  • Users can deploy Grid either as Standalone, as Hub and Node, or in a distributed mode with different processes, as in the picture below:
A deployment of Selenium Grid in a distributed mode with different processes.
  • Observability is a way of measuring a system’s internal state; it is a much-needed capability for tracing what happens when an API is invoked or a new session creation is requested. This can help admins and developers when debugging by providing insight into the root cause when strange problems arise.
  • Selenium Grid, by default, communicates via HTTP. This is fine for most use cases within the firewall but problematic when your server is exposed to the internet. Now users can have their Grid communicate via the HTTPS protocol with support for TLS connections.
  • Unlike older versions, which allowed only IPv4-based IP addresses, we now support IPv6 addresses as well.
  • Grid has always allowed you to use configuration files when spinning up Grid instances. In Grid 4, those files can be written using TOML, which makes them easier for humans to understand.

What’s next in Selenium Grid?

As you can see, there have been exciting changes and performance improvements. There are a few more expected to be added, such as:

  • A revived UI for Grid console
  • GraphQL for querying Grid
  • More work on Grid stability and resilience

More Goodies

We’ve also refreshed our branding, documentation, and the website, so check out Selenium.dev!

Selenium is an open-source project and we do this voluntarily, so there are never definite timelines that can be promised. Thanks for sticking with us, and we’re excited that the new release is now here.

Please come and give us a hand if you have the energy and time! Happy hacking!

Thanks to Simon Stewart for helping review this post!

Manoj Kumar is a Principal Consultant at ThoughtWorks. Manoj is an avid open-source enthusiast, a committer to the Selenium and Appium projects, and a member of the Selenium project leadership committee. He has also contributed to various libraries and frameworks in the automated testing ecosystem, such as ngWebDriver, Protractor and Serenity. An avid accessibility practitioner, he loves to share knowledge and is a voluntary member of the W3C ACT-R group. In his free time, he contributes to open-source projects, researches accessibility, and enjoys spending time with his family. He blogs at AssertSelenium.


Cover Photo by Sepp Rutz on Unsplash

The post What’s New In Selenium 4? appeared first on Automated Visual Testing | Applitools.

Front-End Test Fest 2021 Recap https://applitools.com/blog/front-end-test-fest-2021-recap/ Fri, 16 Jul 2021 18:20:44 +0000 https://applitools.com/?p=29960 Last month, Applitools and Cypress hosted the Front-End Test Fest, a free event that brought together leading experts in test automation for a full day of learning and discussion around...

The post Front-End Test Fest 2021 Recap appeared first on Automated Visual Testing | Applitools.


Last month, Applitools and Cypress hosted the Front-End Test Fest, a free event that brought together leading experts in test automation for a full day of learning and discussion around front-end testing. It was a great opportunity to hear about the latest in the industry and get to hear some really innovative and interesting stories.

We’ve got all the videos ready for you here, so feel free to jump right in, but first let’s recap the event below.

Opening Keynote: Applitools & Cypress: State of the Union – Angie Jones and Amir Rustamzadeh

This talk opened with Amir Rustamzadeh, Director of Developer Experience at Cypress, getting us all familiar with the latest and greatest in the testing tool. We all already know that Cypress is an excellent tool that is highly interactive and visual, but Amir took us on a tour of two new features that look pretty powerful.

These new features were Test Retries and Component Testing. Test Retries allows you to easily retry a test multiple times, helping you to catch and defeat test flake by highlighting how frequently a test passes with some handy analytics in the Cypress Dashboard. A related feature, Test Burn-In, allows you to do the same thing with brand new tests as they’re introduced. As for Component Testing, Amir noted that while this is typically done in a virtual DOM that you can’t debug, Cypress now has a beta where you can use the real browser DOM to mount a component and test it in isolation. Much better!
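For reference, test retries are just a configuration setting. In recent Cypress versions the sketch below is roughly what it looks like; older versions used cypress.json, but the retries option has the same shape.

```typescript
// cypress.config.ts sketch: retry failed tests to surface flake.
// The exact config file differs by Cypress version (cypress.json before v10),
// but the `retries` option has the same shape.
import { defineConfig } from 'cypress';

export default defineConfig({
  retries: {
    runMode: 2,   // retry up to twice in `cypress run` (CI)
    openMode: 0,  // no retries while developing interactively
  },
});
```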

Angie Jones, Senior Director of Developer Relations at Applitools, then helped us understand the dangers of all-too-common visual bugs. Angie walked us through how Applitools Eyes can give your code super powers to find visual bugs, thanks to Visual AI. This talk covered visual component testing, visual testing of dynamic content, accessibility and localization testing, as well as cross-browser/viewport testing using the Ultrafast Grid. Check it out for a great overview of how to improve your visual testing.

The Allure of Adding Tests to Azure DevOps – Bushra Alam

Azure DevOps is a powerful tool, and if you’re curious about it, this talk will help get you started. Bushra Alam, Software Quality Analyst, begins by covering the basics of what Azure, DevOps, and of course Azure DevOps mean.

Azure Pipelines, part of Azure DevOps, is a tool to build, test and deploy your apps. By running tests in the pipeline, we can discover bugs early and deliver faster feedback with quicker time to market and overall better code quality. Bushra takes us through a live demo that shows how to create a pipeline, run a test and check the results – all automated through Azure, and quickly. She went on to share some advanced tips for running tests in parallel and utilizing release triggers. Check it out for the whole demo.


Answer The Call – A Digital Learning Transformation Using Model-Based Testing – Greg Sypolt

EverFi is a socially-conscious educational platform with a large number of courses and individualized learner paths. Greg Sypolt joined them as their VP of Quality Assurance to solve a tricky testing challenge they had – with so many different courses and paths for learners to take, traditional testing just couldn’t cover it all, and it would only get worse as EverFi grew.

Greg’s solution was to launch a multi-pronged approach centered around model-based testing. In this eye-opening talk you’ll see the step-by-step approach Greg used to build his models. Cypress and Applitools are critical components of the process, but there’s a lot more to it. This one is hard to sum up in a couple of sentences but is definitely worth watching to get the full story.


Expert Panel: Successful Test Strategies

Stacy Kirk, CEO/Founder of QualityWorks Consulting Group, moderated this great panel with a trio of testing experts. Kristin Jackvony, Principal Engineer – Quality at Paylocity, Alfred Lucero, Senior Software Engineer at Twilio and Jeff Benton, Staff Software Engineer in Test at RxSaver share their experiences on a range of issues relevant to test engineers everywhere. Learn about the testing tools they used, tips for incorporating testing into the CI/CD process and how you can secure that crucial teamwide buy-in for testing. I won’t spoil it but the parting words from these experts make it clear that the first step for successful testing is to have the conversation with your team on the value of testing, and then just start – it’s ok if you start small with a quick win to get that buy-in quickly.

It’s a (Testing) Trap! – Common Testing Pitfalls and How to Solve Them – Ramona Schwering

How can we avoid trapping ourselves underneath tests that are hard to maintain or, worse, don’t even deliver any value? Ramona Schwering, a developer on the core team at shopware AG, shared her own mistakes here (and yes, her love of Star Wars) to try and make sure you don’t have to make them too. Ramona has worked as both a developer and in testing, so she knows how to speak to both experiences, and this was a very easy to follow, relatable talk. She shared three main pain points (or traps) that tests can fall into – they can be slow, they can be tough to maintain, and they can be “Heisen tests” that are so flaky they don’t tell you anything. Check this one out to hear more about these traps and their solutions, and how you can keep your tests simple.


Testing Beyond the DOM – Colby Fayock

Colby Fayock, Developer Advocate at Applitools, kicked off his talk with a game of “UI Gone Wrong,” taking us through some cringeworthy examples of UI bugs from major organizations that probably cost them revenue or customers. You all know the kind of bug – it happens to everyone sometimes, but does it need to? With Cypress and Applitools working together, Colby showed us that you can do better. He walked us through a live demonstration of how you can easily add Applitools to an existing Cypress test, enhancing the browser automation provided by Cypress with Visual AI to catch any visual bugs. Take a look and see how you can take your testing to the next level.


Our Journey to CI/CD: The Trials, Tribulations, & Triumphs – Hector Coronado and Joseph King

As projects get increasingly complex, they get harder to maintain and changes become slower to deploy. That was the issue Hector Coronado and Joseph King were running into as frontend and web application engineers, respectively, at Autodesk. They were working on a React app they had built, the “Universal Help Module,” that provides users several types of support while appearing in multiple locations with varying layouts and UIs. To keep up with the growing complexity, they set out to build a fast and thorough CI/CD pipeline that would include an automated testing strategy.

Hector and Joseph moved away from manual testing and tried many tools for automated functional and visual testing. In the end, Cypress won big as a free, all-in-one testing framework that is fast and open source, and they loved Applitools for its blazing speed, simple Cypress SDK, strong cross-browser capabilities and excellent customer support. They put the two together to achieve the dream they used to get buy-in – more coverage with less code! Check out their full journey below.

Practically Testing – Kent C. Dodds

You have limited time in your day – should you write that test or fix that bug? That’s the subhead for this talk by Kent C. Dodds, a JavaScript Engineer and Trainer at Kent C. Dodds Tech LLC. Unlike many of the presentations above, which are filled with awesome code examples and demos, Kent’s talk is intended to be a practical one with relatable examples to get you thinking about one key thing: How do you prioritize?

Kent describes his methodology for understanding what’s truly important to your company and its mission and how you can identify your role in pushing that forward. He also reminds all of us that we’re not simply hired as engineers to write code or tests, but as humans to advance a mission. Watch this video for some really humanizing inspiration and to spark some thoughts about how you can get more out of your day.


Looking for More? Learn about the Future of Mobile Testing

We’ve got you covered with another free event. Our next live Future of Testing: Mobile event takes place on August 10th, and registration is officially open. Check it out and reserve your spot today.

Past Future of Testing: Mobile Events

You can also check out all the videos from our Future of Testing: Mobile event in June here or get a full recap of our Future of Testing: Mobile event from April right here.

Happy testing!

The post Front-End Test Fest 2021 Recap appeared first on Automated Visual Testing | Applitools.
