Automated Visual Testing Archives (https://applitools.com/blog/tag/automated-visual-testing/)
Applitools delivers the next generation of test automation powered by AI assisted computer vision technology known as Visual AI.

Ultrafast Cross Browser Testing with Selenium Java (https://applitools.com/blog/cross-browser-testing-selenium/)

Learn why cross-browser testing is so important, and discover an approach you can take to make cross-browser testing with Selenium much faster.

What is Cross Browser Testing?

Cross-browser testing is a form of functional testing in which an application is tested on multiple browsers (Chrome, Firefox, Edge, Safari, IE, etc.) to validate that functionality performs as expected.

In other words, it is designed to answer the question: Does your app work the way it’s supposed to on every browser your customers use?

Why is Cross Browser Testing Important?

While modern browsers generally conform to key web standards today, important problems remain. Differences in interpretations of web standards, varying support for new CSS or other design features, and rendering discrepancies between the different browsers can all yield a user experience that is different from one browser to the next.

A modern application needs to perform as expected across all major browsers. Not only is this a baseline user expectation these days, but it is critical to delivering a positive user experience and a successful app.

At the same time, the number of screen combinations (across screen sizes, devices and versions) is rising quickly. In recent years the number of screens required to test has exploded, rising to an industry average of 81,480 screens and reaching 681,296 for the top 30% of companies.

Ensuring complete coverage of each screen on every browser is a common challenge. Effective and fast cross-browser testing can help alleviate the bottleneck from all these screens that require testing.

Source: 2019 State of Automated Visual Testing

How to Perform Modern Cross Browser Testing in Selenium with Visual Testing

Traditional approaches to cross-browser testing in Selenium have existed for a while, and while they still work, they have not scaled well to handle the challenge of complex modern applications. They can be time-consuming to build, slow to execute and challenging to maintain in the face of apps that change frequently.

Applitools Developer Advocate and Test Automation University Director Andrew Knight (AKA Pandy Knight) recently conducted a hands-on workshop where he explored the history of cross-browser testing, its evolution over time and the pros and cons of different approaches.

Andrew then explores a modern cross-browser testing solution with Selenium and Applitools. He walks you through a live demo (which you can replicate yourself by following his shared GitHub repo) and explains the benefits and how to get started. He also covers how you can accelerate test automation with integration into CI/CD to achieve Continuous Testing.

Check out the workshop below, and follow along with the GitHub repo here.
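
To give a flavor of the approach before the workshop, here is a minimal sketch of a Selenium Java test that runs once locally and is re-rendered across several browsers and devices by the Applitools Ultrafast Grid. It assumes the Applitools Eyes Selenium Java SDK is on the classpath and that the APPLITOOLS_API_KEY environment variable is set; the batch name, app name, test name, and demo URL are illustrative rather than taken from the workshop.

import com.applitools.eyes.BatchInfo;
import com.applitools.eyes.BrowserType;
import com.applitools.eyes.RectangleSize;
import com.applitools.eyes.selenium.Configuration;
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import com.applitools.eyes.visualgrid.model.DeviceName;
import com.applitools.eyes.visualgrid.model.ScreenOrientation;
import com.applitools.eyes.visualgrid.services.RunnerOptions;
import com.applitools.eyes.visualgrid.services.VisualGridRunner;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class UltrafastGridSketch {
    public static void main(String[] args) {
        // The runner sends captured snapshots to the Ultrafast Grid for parallel rendering.
        VisualGridRunner runner = new VisualGridRunner(new RunnerOptions().testConcurrency(1));
        Eyes eyes = new Eyes(runner);

        Configuration config = new Configuration();
        config.setBatch(new BatchInfo("Ultrafast Cross Browser Demo")); // illustrative batch name
        // Desktop browsers at chosen viewport sizes.
        config.addBrowser(1200, 800, BrowserType.CHROME);
        config.addBrowser(1200, 800, BrowserType.FIREFOX);
        config.addBrowser(1200, 800, BrowserType.SAFARI);
        // An emulated mobile device.
        config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
        eyes.setConfiguration(config);

        WebDriver driver = new ChromeDriver();
        try {
            // The test runs once in the local Chrome session...
            eyes.open(driver, "Demo App", "Login page cross-browser check",
                    new RectangleSize(1200, 800));
            driver.get("https://demo.applitools.com"); // illustrative demo page
            eyes.check(Target.window().fully().withName("Login page"));
            eyes.closeAsync();
        } finally {
            driver.quit();
            eyes.abortAsync();
        }
        // ...then the Grid re-renders the captured snapshot on every configured
        // browser and device, and this call waits for and prints the results.
        System.out.println(runner.getAllTestResults());
    }
}

Because the checkpoint is a visual snapshot of the whole window, adding another target browser is one more configuration line rather than another test run.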

More on Cross Browser Testing in Cypress, Playwright or Storybook

At Applitools we are dedicated to making software testing faster and easier so that testers can be more effective and apps can be visually perfect. That’s why we use our industry-leading Visual AI and built the Applitools Ultrafast Grid, a key component of the Applitools Test Cloud that enables ultrafast cross-browser testing. If you’re looking to do cross-browser testing better but don’t use Selenium, be sure to check out these links too for more info on how we can help:

UI Testing: A Getting Started Guide and Checklist (https://applitools.com/blog/ui-testing-guide/)

Learn everything you need to know about how to perform UI testing, including why it’s important, a demo of a UI test, and tips and tricks to make UI testing easier.

When users explore web, mobile or desktop applications, the first thing they see is the User Interface (UI). As digital applications become more and more central to the way we all live and work, the way we interact with our digital apps is an increasingly critical part of the user experience.

There are many ways to test an application: Functional testing, regression testing, visual testing, cross-browser testing, cross-device testing and more. Where does UI testing fit into this mix?

UI testing is essential to ensure that the usability and functionality of an application performs as expected. This is critical for delivering the kinds of user experiences that ensure an application’s success. After all, nobody wants to use an app where text is unreadable, or where buttons don’t work. This article will explain the fundamentals of UI testing, why it’s important, and supply a UI testing checklist and examples to help you get started.

What is UI Testing?

UI testing is the process of validating that the visual elements of an application perform as expected. In UI Testing, graphical components such as text, radio buttons, checkboxes, buttons, colors, images and menus are evaluated against a set of specifications to determine if the UI is displaying and functioning correctly.

Why is UI Testing Important?

UI testing is an important way to ensure an application has a reliable UI that always performs as expected. It’s critical for catching visual and even functional bugs that are almost impossible to detect using other kinds of testing.

Modern UI testing, which typically utilizes visual testing, works by validating the visual appearance of an application, but it does much more than make sure things simply look correct. Your application’s functionality can be drastically affected by a visual bug. UI testing is critical for verifying the usability of your UI.

Note: What’s the difference between UI testing and GUI testing? Modern applications are heavily dependent on graphical user interfaces (GUIs). Traditional UI testing can include other forms of user interfaces, such as CLIs, or can use DOM-based coded locators rather than images to try to verify the UI. Modern UI testing frequently involves visual testing.

Let’s take an example of a visual bug that slipped into production from the Southwest Airlines website:

Visual Bug on Southwest Airlines App

Under a traditional functional testing approach this would pass the test suite. All the elements are present on the page and successfully loaded. But for the user, it’s easy to see the visual bug. 

This does more than deliver a negative user experience that may harm your brand. In this example, the Terms and Conditions are directly overlapping the ‘continue’ button. It’s literally impossible for the user to check out and complete the transaction. That’s a direct hit to conversions and revenue.

With good UI testing in place, bugs like these will be caught before they become visible to the user.

UI Testing Approaches

Manual Testing

Manual UI testing is performed by a human tester, who evaluates the application’s UI against a set of requirements. This means the manual tester must perform a set of tasks to validate that the appearance and functionality of every UI element under test meets expectations. The downsides of manual testing are that it is a time-consuming process and that test coverage is typically low, particularly when it comes to cross-browser or cross-device testing or in CI/CD environments (using Jenkins, etc.). Effectiveness can also vary based on the knowledge of the tester.

Record and Playback Testing

Record and Playback UI testing uses automation software and typically requires limited or no coding skill to implement. The software first records a set of operations executed by a tester, and then saves them as a test that can be replayed as needed and compared to the expected results. Selenium IDE is an example of a record and playback tool, and there is even one built directly into Google Chrome.

Model-Based Testing

Model-based UI testing uses a graphical representation of the states and transitions that an application may undergo in use. This model allows the tester to better understand the system under test. That means tests can be generated and potentially automated more efficiently. In its simplest form, the approach requires the steps below (a minimal code sketch follows the list):

  1. Build a model representing the system
  2. Determine the inputs
  3. Understand the expected outputs
  4. Execute the tests and compare the results against expectations
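
As a hedged illustration of step 1, a model can be as simple as a map from each state to the inputs it accepts and the states they lead to. The states and inputs below are hypothetical; a real harness would drive the UI for each transition and verify the resulting state.

import java.util.List;
import java.util.Map;

public class LoginModelSketch {
    // Each state maps an input (user action) to the expected next state.
    static final Map<String, Map<String, String>> MODEL = Map.of(
            "LoggedOut", Map.of(
                    "submitValidCredentials", "LoggedIn",
                    "submitBadCredentials", "ErrorShown"),
            "ErrorShown", Map.of(
                    "submitValidCredentials", "LoggedIn"),
            "LoggedIn", Map.of(
                    "clickLogout", "LoggedOut"));

    public static void main(String[] args) {
        // Walk one generated path through the model and print the expected transitions.
        String state = "LoggedOut";
        for (String input : List.of("submitBadCredentials",
                "submitValidCredentials", "clickLogout")) {
            String next = MODEL.get(state).get(input);
            System.out.printf("%s --%s--> %s%n", state, input, next);
            state = next;
        }
    }
}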

Automated UI Testing vs Manual UI Testing

Benefits of Manual UI Testing

Manual testing, as we have seen above, has a few severe limitations. Because the process relies purely on humans performing tasks one at a time, it is a slow process that is difficult to scale effectively. Manual testing does, however, have advantages:

  • Manual testing can potentially be done with little to no tooling, and may be sufficient for early application prototypes or very small apps. 
  • An experienced manual tester may be able to discover bugs in edge-cases through ad-hoc or exploratory testing, as well as intuitively “feel” the user experience in a way that is difficult to understand with a scripted test.

Benefits of Automated UI Testing

In most cases automation will help testing teams save time by executing pre-determined tests repeatedly. Automation testing frameworks aren’t prone to human errors and can run continuously. They can be parallelized and executed easily at scale. With automated testing, as long as tests are designed correctly they can be run much more frequently with no loss of effectiveness. 

Automation testing frameworks may be able to increase efficiency even further with specialized capabilities for things like cross-browser testing, mobile testing, visual AI and more.

UI Testing Checklist of Test Cases

On the surface, UI testing is simple – just make sure everything “looks” good. Once you poke beneath that surface, testers can quickly find themselves encountering dozens of different types of UI elements that require verification. Here is a quick checklist you can use to make sure you’ve considered all the most common items.

UI Testing Checklist – Common Tests

  • Text: Can all text be read? Is the contrast legible? Is anything covered by another element?
  • Forms, Fields and Pickers: Are all text fields visible, and can text be entered and submitted? Do all dropdowns display correctly? Are validation requirements (such as a date in a datepicker) upheld?
  • Navigation and Sorting: Whether it’s a site menu, a sortable table or a multi-page form, can the user navigate via the UI? Do all dropdowns display? Can all options be clicked/tapped, and do they have the desired effect? 
  • Buttons and Links: Are all buttons and links visible? Are they formatted consistently? Can they be selected, and do they take the user to the intended pages?
  • Responsiveness: When you adjust the resolution, do all of the above UI elements continue to behave as intended?

Each of the above must be tested across every page, table, form and menu that your application contains. 
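
To make the checklist concrete, here is a minimal Selenium Java sketch of three of the checks above (text visibility, button state, and responsiveness); the URL and element locators are illustrative assumptions, not real identifiers.

import org.openqa.selenium.By;
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class UiChecklistSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com"); // placeholder URL

            // Text: the page heading is present and visible.
            WebElement heading = driver.findElement(By.tagName("h1"));
            check(heading.isDisplayed(), "Heading is not visible");

            // Buttons and Links: the signup button is visible and enabled.
            WebElement signup = driver.findElement(By.id("signup")); // hypothetical id
            check(signup.isDisplayed() && signup.isEnabled(), "Signup button is unusable");

            // Responsiveness: the same checks should hold at a phone-sized viewport.
            driver.manage().window().setSize(new Dimension(375, 812));
            check(signup.isDisplayed(), "Signup button hidden at small viewport");
        } finally {
            driver.quit();
        }
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }
}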

It’s also a good practice to test the UI for specific critical end-to-end user journeys. For example, making sure that a user can journey smoothly through: User clicks Free Trial Signup (Button) > User submits Email Address (Form) > User Logs into Free Trial (Form) > User has trial access (Product)

Challenges of UI Testing

UI testing can be a challenge for many reasons. With the proper tooling and preparation these challenges can be overcome, but it’s important to understand them as you plan your UI testing strategy.

  • User Interfaces are complex: As we’ve discussed above, there are numerous distinct elements on each page that must be tested. Embedded forms, iFrames, dropdowns, tables, images, videos and more must all be tested to be sure the UI is working as intended.
  • User Interfaces change fast: For many applications the UI is in a near-constant state of flux, as frequent changes to the text, layout or links are implemented. Maintaining full coverage is challenging when this occurs.
  • User Interfaces can be slow to test: Testing the UI of an application can take time, especially compared to smaller and faster tests like unit tests. Depending on the tool you are using, this can make UI tests difficult to run as regularly as you would like.
  • Testing script bottlenecks: Because the UI changes so quickly, not only do testers have to design new test cases, but depending on your tooling, you may have to constantly create new coded test scripts. Testing tools with advanced capabilities, like the Visual AI in Applitools, can mitigate this by requiring far less code to deliver the same coverage.

UI Testing Example

Let’s take an example of an app with a basic use case, such as a login screen.

Even a relatively simple page like this one will have numerous important test cases (TC):

  • TC 1: Is the logo at the top appropriate for the screen, and aligned with brand guidelines?
  • TC 2: Is the title of the page displaying correctly (font, label, position)?
  • TC 3: Is the dividing line displaying correctly? 
  • TC 4: Is the Username field properly labeled (font, label, position)?
  • TC 5: Is the icon by the Username field displaying correctly?
  • TC 6: Is the Username text field accepting text correctly (validation, error messages)?
  • TC 7: Is the Password field properly labeled (font, label, position)?
  • TC 8: Is the icon by the Password field displaying correctly?
  • TC 9: Is the Password text field accepting text correctly (validation, error messages)?
  • TC 10: Is the Log In button text displaying correctly (font, label, position)?
  • TC 11: Is the Log In button functioning correctly on click (clickable, verify next page)?
  • TC 12: Is the Remember Me checkbox title displaying correctly (font, label, position)?
  • TC 13: Is the Remember Me checkbox functioning correctly on click (clickable, checkbox displays, cookie is set)?

Simply testing each scenario on a single page can be a lengthy process. Then, of course, we encounter one of the challenges listed above – the UI changes quickly, requiring frequent regression testing.
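
As a rough sketch, automating just TC 10 and TC 11 with Selenium Java might look like the following; the URL, element id, and landing page are hypothetical placeholders.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginButtonSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // placeholder URL
            WebElement logIn = driver.findElement(By.id("log-in")); // hypothetical id

            // TC 10: the Log In button text displays correctly.
            if (!"Log In".equals(logIn.getText())) {
                throw new AssertionError("Unexpected button label: " + logIn.getText());
            }

            // TC 11: the button is clickable and leads to the next page.
            logIn.click();
            if (!driver.getCurrentUrl().contains("/dashboard")) { // assumed next page
                throw new AssertionError("Clicking Log In did not reach the next page");
            }
        } finally {
            driver.quit();
        }
    }
}

Notice that the coded test verifies the label and the click but says nothing about font, position or overlap. That gap is exactly what the visual approach below fills.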

How to Simplify UI Testing with Automation

Performing this regression testing manually while maintaining the level of test coverage necessary for a strong user experience is possible, but would be a laborious and time-consuming process. One effective strategy to simplify this process is to use automated tools for visual regression testing to verify changes to the UI.

Benefits of Automated Visual Regression Testing for UI Testing

Visual regression testing is a method of ensuring that the visual appearance of the application’s UI is not negatively affected by any changes that are made. While this process can be done manually, modern tools can help you automate your visual testing to verify far more tests far more quickly.

Automated Visual UI Testing Example

Let’s return to our login screen example from earlier. We’ve verified that it works as intended, and now we want to make sure any new changes don’t negatively impact our carefully tested screen. We’ll use automated visual regression testing to make this as easy as possible.

  1. As we saw above, our baseline screen looks like this:
  2. Next, we’ll make a change by adding a row of social buttons. Unfortunately, this will have the effect of inadvertently rendering our login button unusable by pushing it up into the password field:
  3. We’ll use our automated visual testing tool to evaluate our change against the baseline. In our example, we’ll use a tool that utilizes Visual AI to highlight only the relevant areas of change that a user would notice. The tool would then bring our attention to the new social buttons along with the section around the now unusable button as areas of concern.
  4. A test engineer will then review the comparison. Any intentional changes that were flagged are marked as accepted changes. On some screens we might expect changes in certain dynamic areas, and these can be flagged for Visual AI to ignore going forward.

    We need to address only the remaining areas that are flagged. In our example, every area flagged in red is problematic – we need to shift down the social buttons, and move the button out of the password field. Once we’ve done this, we run the test again, and a new baseline is created only when everything passes. The final result is free of visual defects:

Why Choose Automated Visual Regression Testing with Applitools for UI Testing

Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.

The Applitools Ultrafast Test Cloud includes unique features like the Ultrafast Grid, which can run your functional & visual tests once locally and instantly render them across any combination of browsers, devices, and viewports. Our automated maintenance capabilities make use of Visual AI to identify and group similar differences found across your test suite, allowing you to verify multiple checkpoint images at once and to replicate maintenance actions you perform for one step in other relevant steps within a batch.

You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.

Happy Testing!

What is Visual Regression Testing? (https://applitools.com/blog/visual-regression-testing/)

In this guide, you’ll learn what visual regression testing is and why visual regression tests are important. We’ll go through a use case with an example and talk about how to get started and choose the best tool for your needs.

What is Visual Regression Testing?

Visual regression testing is a method of validating that changes made to an application do not negatively affect the visual appearance of the application’s user interface (UI). By verifying that the layout and visual elements align with expectations, the goal of visual regression testing is to ensure the user experience is visually perfect.

Visual regression testing is a kind of regression testing. In regression testing, an application is tested to ensure that a new change to the code doesn’t break existing functionality. Visual regression testing specifically focuses on verifying the appearance and the usability of the UI after a code change.

In other words, visual regression testing (also called just visual testing or UI testing) is focused on validating the appearance of all the visual elements a user interacts with or sees. These visual validations include the location, brightness, contrast and color of buttons, menus, components, text and much more.

Why is Visual Regression Testing Important?

Visual regression tests are important to prevent costly visual bugs from escaping into production, where they can severely compromise the user experience and in many cases lead directly to lost sales. Traditional functional testing works by simply validating data input and output; it catches many bugs, but it cannot discover visual bugs. Without visual testing, these bugs are prone to slipping through even on an otherwise well-tested application.

As an example, here is a screenshot of a visual bug in production on the Southwest Airlines website:

Visual Bug on Southwest Airlines App

This page would pass a typical suite of functional tests because all of the elements are present on the page and have loaded successfully. However, the visual bug is obvious. Not only that, but because the Terms and Conditions are inadvertently overlapping the button, the user literally cannot check out and complete their purchase. Visual regression testing would catch this kind of bug easily before it slipped into production.

Visual testing can also enhance functional testing practices and make them more efficient. Because visual tests can “see” the elements on a page they do not have to rely on individual coded assertions using unique selectors to validate each element. In a traditional functional testing suite, these assertions are often time-consuming to create and maintain as the application changes. Visual testing greatly simplifies that process.


How Do Visual Regression Tests Work?

At its core, visual regression testing works by capturing a screenshot of the UI before a change is made and comparing it to a screenshot taken after. Differences are then highlighted for a test engineer to review. In practice, there are several different visual regression testing techniques available.

Types of Visual Regression Testing

  • Manual visual testing: Visual regression testing can be done manually and without any tools. Designers and developers take time during every release to scan pages, manually looking for visual defects. While it is slow and extremely cumbersome to do this for an entire application, not to mention prone to human error, manual testing in this way can allow for ad-hoc or exploratory testing of the UI, especially at early stages of development.
  • Pixel-by-Pixel comparison: This approach compares the two screenshots and analyzes each at the pixel level, alerting the test engineer of any discrepancies found. Pixel comparison, also called pixel diffs, will be certain to flag all possible issues, but will also include many irrelevant differences that are invisible to the human eye and have no effect on usability (such as rendering, anti-aliasing, or padding/margin differences). These “false-positives” must be painstakingly sifted through manually by the test engineer with every test run (a minimal pixel-diff sketch follows this list).
  • DOM-based comparison: A comparison based on the Document Object Model (DOM) analyzes the DOM before and after a state change and flags any differences. This will be effective in drawing attention to any alterations in the code that comprises the DOM, but is not truly a visual comparison. False negatives/positives are frequently produced when the code does not change but the UI does (e.g.: dynamic content, embedded content, etc.) or when the code changes but the UI does not. As a result, test results are often flaky and must be slowly and carefully reviewed to avoid escaped visual bugs.
  • Visual AI comparison: This type of visual regression testing leverages Visual AI, which uses computer vision to “see” the UI the same way a human would. A well-trained AI will be able to assist test engineers by only surfacing the kind of differences a human would notice, eliminating the time-consuming “false-positive” issues that plague pixel and DOM comparison tests. It can also include other capabilities, such as the ability to test dynamic content and flag issues only in the areas or regions where changes are not expected.
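
To illustrate why pixel comparison is both exhaustive and noisy, here is a minimal pixel-diff sketch using only the Java standard library; the two file names are placeholders for a stored baseline and a freshly captured checkpoint.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PixelDiffSketch {
    public static void main(String[] args) throws Exception {
        BufferedImage baseline = ImageIO.read(new File("baseline.png"));     // placeholder
        BufferedImage checkpoint = ImageIO.read(new File("checkpoint.png")); // placeholder

        if (baseline.getWidth() != checkpoint.getWidth()
                || baseline.getHeight() != checkpoint.getHeight()) {
            System.out.println("Size mismatch - flag the whole screen");
            return;
        }

        long differing = 0;
        for (int y = 0; y < baseline.getHeight(); y++) {
            for (int x = 0; x < baseline.getWidth(); x++) {
                // Any difference in the packed ARGB value counts, including
                // anti-aliasing shifts invisible to a human eye - the source
                // of the "false positives" described above.
                if (baseline.getRGB(x, y) != checkpoint.getRGB(x, y)) {
                    differing++;
                }
            }
        }
        System.out.println(differing + " differing pixels");
    }
}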

Automated Visual Testing Use Case and Example

Getting started with automated visual regression testing takes only a few steps. Let’s walk through the typical visual regression testing process (a short code sketch follows the list) and then consider a brief example.

  1. Define your test scenarios. What will be captured in the screenshots, and at what point in the test will they be taken? With some automated tools, a basic test can be as simple as a single line of code that will take a screenshot of an entire page at the end of a test.
  2. Use an automated testing tool to compare the new screenshots against a baseline image. The baseline is the most recent existing screenshot of the application that has already been approved by a tester.
  3. The tool will automatically generate a report highlighting the differences found between the two images. With pixel diffs, this will include every pixel difference found; with Visual AI, you will see a report showing only meaningful differences. 
  4. A test engineer reviews the report and determines what is a bug and what is an acceptable or valid change (a false positive). After all bugs are resolved, the baseline is updated with the new screenshot.
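
In code, steps 1 and 2 can be very compact. Here is a minimal sketch using the Applitools Eyes Selenium Java SDK; it assumes the APPLITOOLS_API_KEY environment variable is set, and the app name, test name, and URL are illustrative.

import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VisualRegressionSketch {
    public static void main(String[] args) {
        Eyes eyes = new Eyes(); // reads APPLITOOLS_API_KEY from the environment
        WebDriver driver = new ChromeDriver();
        try {
            // Step 1: the scenario is a full-page snapshot of the login screen.
            eyes.open(driver, "Demo App", "Login screen visual regression");
            driver.get("https://example.com/login"); // placeholder URL
            // Step 2: the first run stores this snapshot as the baseline;
            // every later run captures a checkpoint and compares it.
            eyes.check(Target.window().fully().withName("Login screen"));
            eyes.close();
        } finally {
            driver.quit();
            eyes.abortIfNotClosed();
        }
    }
}

Steps 3 and 4 happen outside the test code: the report is generated automatically, and the reviewer accepts or rejects the highlighted differences in the dashboard.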

Visual Regression Testing Example

Let’s review a quick example of the four steps above with a basic use case, such as a login screen.

  1. Define your test scenario: In this case, we’ll capture the entire screen and review for any changes. Our baseline might look like this:
  2. Next, we’ll make some changes to the code, such as adding a row of social buttons. Unfortunately, doing so has pushed up the login button so that it is unusable. Our new login screen might look like this:
  3. The tool will then compare the two and generate a report. In our example, we’ll use Visual AI, which will highlight only the relevant areas of change that a user would notice. In this case, that’s the row with the new social buttons, and the area with the now unusable button. The comparison would look like this:
  4. A test engineer will then review the comparison. If any intentional changes were flagged, these are marked as accepted changes. Similarly, if there are expected changes in dynamic areas, these can be flagged for Visual AI to ignore going forward. Remaining areas flagged are marked as bugs to be addressed. In this case, every area flagged in red is problematic – the social buttons need to be shifted down, and the button needs to come down out of the password field. Once these are addressed, the test is run again, and a new baseline is created only when everything passes. The end result is free of visual defects:

How to Choose a Visual Testing Tool

Choosing the best tool for your visual regression tests will depend on your needs, as there are many options available. Here are some questions you should be asking as you consider a new tool:

  • Automated or Manual? How frequently do you want to conduct visual tests? For occasional spot checks, manual testing may suffice. If you want to run tests with every change to ensure no visual bugs escape, automated testing will be much more efficient. For automated testing, consider the learning curve of the tool and how easy it is to integrate into your existing CI/CD workflow.
  • Is your UI dynamic or static? How often does your user interface change? For completely static pages, simpler tools may serve to spot any visual bugs. Pages with dynamic content that changes regularly may be better served by tools with advanced capabilities like Visual AI.
  • How many browsers/devices/platforms? Do you have many browsers, devices or platforms to cover with your tests? A tool may be efficient for a single combination but quite inefficient when attempting to cover a wide range of configurations. If you need to cover a broad range of situations, you need to make sure you pick a tool that can quickly re-render visual snapshots on different configurations, or achieving full coverage can become a time-consuming headache.
  • Does your team have time? How much time does your QA team have to spend on UI testing? If they have capacity, sifting through potential false positives from pixel diff tools may not be an issue, or manual testing could be an option. For teams looking to be as efficient as possible, particularly with large or dynamic applications, automated visual testing with Visual AI will save time.
  • What is your build/release frequency? Are you running tests infrequently, daily, or even multiple times a day? If testing is quite infrequent, you may be able to absorb some inefficiency in test execution. Organizations running tests regularly, or seeking to increase their test velocity, should place significant value in a tool that can enable their QA team to achieve full coverage by easily executing a large quantity of tests quickly.
  • How many bugs are slipping through? For many teams, due to the increasing complexity of web and mobile development, visual bugs that can harm a company’s reputation or even sales escape into production more often than they would like. In this case the value of automated visual testing is clear. However, if your team is catching all bugs or you can live with the level of bugs escaping, you may not need to invest in visual testing, at least for now.

Automated visual testing tools can be paid or open source. Visual testing tools are typically paired with an automated testing tool to automatically handle interactions and take screenshots. Some popular open source automated testing tools compatible with visual testing include Selenium for web testing and Appium for mobile testing.

Why Choose Automated Visual Regression Testing with Applitools

Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.

The Applitools Ultrafast Test Cloud includes unique features like the Ultrafast Grid, which can run your functional & visual tests once locally and instantly render them across any combination of browsers, devices, and viewports. Our automated maintenance capabilities make use of Visual AI to identify and group similar differences found across your test suite, allowing you to verify multiple checkpoint images at once and to replicate maintenance actions you perform for one step in other relevant steps within a batch.

You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.

Happy Testing!

What is Functional Testing? Types and Example (Full Guide) (https://applitools.com/blog/functional-testing-guide/)

Learn what functional testing is in this complete guide, including an explanation of functional testing types and examples of techniques.

What is Functional Testing?

Functional testing is a type of software testing where the basic functionalities of an application are tested against a predetermined set of specifications. Using Black Box Testing techniques, functional tests measure whether a given input returns the desired output, regardless of any other details. Results are binary: tests pass or fail.

Why is Functional Testing Important?

Functional testing is important because without it, you may not accurately understand whether your application functions as intended. An application may pass non-functional tests and otherwise perform well, but if it doesn’t deliver the key expected outputs to the end-user, the application cannot be considered working.

What is the Difference between Functional and Non-Functional Testing?

Functional tests verify whether specified functional requirements are met, while non-functional tests evaluate non-functional aspects like performance, security, scalability or quality of the application. To put it another way, functional testing is concerned with whether key functions are operating, and non-functional testing is more concerned with how those operations take place.

Examples of Functional Testing Types

There are many types of functional tests that you may want to complete as you test your application. 

A few of the most common include:

Unit Testing

Unit testing breaks down the desired outcome into individual units, allowing you to test whether a small number of inputs (sometimes just one) produce the desired output. Unit tests tend to be among the smallest tests to write and execute quickly, as each is designed to cover only a single section of code (a function, method, object, etc.) and verify its functionality.

Smoke Testing

Smoke testing is done to verify that the most critical parts of the application work as intended. It’s a first pass through the testing process, and is not intended to be exhaustive. Smoke tests ensure that the application is operational on a basic level. If it’s not, there’s no need to progress to more detailed testing, and the application can go right back to the development team for review.

Sanity Testing

Sanity testing is in some ways a cousin to smoke testing, as it is also intended to verify basic functionality and potentially avoid detailed testing of broken software. The difference is that sanity tests are done later in the process in order to test whether a new code change has had the desired effect. It is a “sanity check” on a specific change to determine if the new code roughly performs as expected. 

Integration Testing

Integration testing determines whether combinations of individual software modules function properly together. Individual modules may already have passed independent tests, but when they are dependent on other modules to operate successfully, this kind of testing is necessary to ensure that all parts work together as expected.

Regression Testing

Regression testing makes sure that the addition of new code does not break existing functionalities. In other words, did your new code cause the quality of your application to “regress” or go backwards? Regression tests target the changes that were made and ensure the whole application continues to remain stable and function as expected.

User Acceptance Testing (UAT)/Beta Testing

User acceptance testing (also called beta testing) involves exposing your application to a limited group of real users in a production environment. The feedback from these live users – who have no prior experience with the application and may discover critical bugs that were unknown to internal teams – is used to make further changes to the application before a full launch.

UI/UX Testing 

UI/UX testing evaluates the graphical user interface of the application. The performance of UI components such as menus, buttons, text fields and more are verified to ensure that the user experience is ideal for the application’s users. UI/UX testing is also known as visual testing and can be manual or automated.

Other classifications of functional testing include black box testing, white box testing, component testing, API testing, system testing and production testing.

How to Perform Functional Testing

The essence of a functional test involves three steps:

  • Determine the desired test input values
  • Execute the tests
  • Evaluate the resulting test output values

Essentially, when you execute a task with input (e.g.: enter an email address into a text field and click submit), does your application generate the expected output (e.g.: the user is subscribed and a thank you page is displayed)?
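
For instance, that email-subscription check could be scripted with Selenium Java roughly as follows; the URL, element ids, and confirmation text are hypothetical.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class NewsletterSignupSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/newsletter"); // placeholder URL

            // Input: enter an email address and submit the form.
            driver.findElement(By.id("email")).sendKeys("tester@example.com"); // hypothetical id
            driver.findElement(By.id("submit")).click();                       // hypothetical id

            // Output: the thank-you page should be displayed.
            boolean passed = driver.getPageSource().contains("Thank you");
            System.out.println(passed ? "PASS" : "FAIL");
        } finally {
            driver.quit();
        }
    }
}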

We can understand this further with a quick example.

Functional Testing Example

Let’s begin with a straightforward application: a calculator. 

To create a set of functional tests, you would need to:

  • Evaluate all the possible inputs – such as numbers and mathematical symbols – and design assertions to test their functionality
  • Execute the tests (either automated or manually)
  • Ensure that the desired outputs are generated – e.g.: each mathematical function works as intended, the final result is given correctly in all cases, the formula history is displayed accurately, etc.

For more on how to create a functional test, you can see a full guide on how to write an automated functional test for this example.
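
As a hedged sketch, two of those calculator assertions might look like this with JUnit 5; the Calculator class is a hypothetical unit under test, not code from the linked guide.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class CalculatorFunctionalTest {
    private final Calculator calc = new Calculator(); // hypothetical class under test

    @Test
    void additionReturnsTheDesiredOutput() {
        // A given input must return the desired output - pass/fail is binary.
        assertEquals(5, calc.add(2, 3));
    }

    @Test
    void divisionByZeroIsRejected() {
        // Boundary value test: input outside the specified limits.
        assertThrows(ArithmeticException.class, () -> calc.divide(1, 0));
    }
}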

Functional Testing Techniques 

There are many functional testing techniques you might use to design a test suite for this:

  • Boundary value tests evaluate what happens if inputs are received outside of specified limits – such as a user entering a number that was too large (if there is a specified limit) or attempting to enter non-numeric input
  • Decision-based tests verify the results after a user decides to take an action, such as clearing the history
  • User-based tests evaluate how components work together within an application – if the calculator’s history was stored in the cloud, this kind of test would verify that it did so successfully
  • Ad-Hoc tests can be done at the end to try and discover bugs other methods did not uncover by seeking to break the application and check its response

Other common functional testing techniques include equivalence testing, alternate flow testing, positive testing and negative testing.

Automated Functional Testing vs Manual Functional Testing

Manual functional testing requires a developer or test engineer to design, create and execute every test by hand. It is flexible and can be powerful with the right team. However, as software grows in complexity and release windows get shorter, a purely manual testing strategy will face challenges keeping up a large degree of test coverage.

Automated functional testing automates many parts of the testing process, allowing tests to run continuously without human interaction – and with less chance for human error. Tests must still be designed and have their results evaluated by humans, but recent improvements in AI mean that with the right tool an increasing share of the load can be handled autonomously.

How to Use Automated Visual Testing for Functional Tests

One way to automate your functional tests is by using automated visual testing. Automated visual testing uses Visual AI to view software in the same way a human would, and can automatically highlight any unexpected differences with a high degree of accuracy.

Visual testing allows you to test for visual bugs, which are otherwise extremely challenging to uncover with traditional functional testing tools. For example, if an unrelated change caused a “submit” button to be shifted to the far right of a page and it could no longer be clicked by the user, but it was still technically on the page and using the correct identifier, it would pass a traditional functional test. Visual testing would catch this bug and ensure functionality is not broken by a visual regression.
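
Here is a minimal sketch of that contrast, assuming Selenium plus the Applitools Eyes Java SDK and a hypothetical “submit” element id.

import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class SubmitButtonSketch {
    static void verifySignupForm(WebDriver driver, Eyes eyes) {
        // Functional assertion: passes as long as the element exists and is
        // displayed, even if it has drifted to an unusable position.
        WebElement submit = driver.findElement(By.id("submit")); // hypothetical id
        if (!submit.isDisplayed()) {
            throw new AssertionError("Submit button is not displayed");
        }

        // Visual checkpoint: one check covers the layout and appearance of
        // the whole page, so the shifted button is flagged as a difference.
        eyes.check(Target.window().fully().withName("Signup form"));
    }
}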

How to Choose an Automated Testing Tool?

Here are a few key considerations to keep in mind when choosing an automated testing tool:

  • Ease of Use: Is it something easy for your existing QA team to use, or easy to hire for? Does it require an extensive learning curve or can it be picked up quickly?
  • Flexibility: Can it be used across different platforms? Can it easily integrate with your current testing environment, and does it allow you the freedom to change your environment in the future?
  • Reusability/AI Assistance: How easy is it to reuse tests, particularly if the UI changes? Is there meaningful AI that can help you test more efficiently, particularly at the scale you need?
  • Support: What level of customer support do you require, and how easily can you receive it from the provider of your tool?

Automated testing tools can be paid or open source. Some popular open source tools include Selenium for web testing and Appium for mobile testing.

Why Choose Automated Visual Testing with Applitools

Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.

You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.

Happy Testing!

Testing Storybook Components in Any Browser – Without Writing Any New Tests! (https://applitools.com/blog/storybook-components-cross-browser-testing/)

Learn how to automatically do ultrafast cross-browser testing for Storybook components without needing to write any new test automation code.

Let’s face it: modern web apps are complex. If a team wants to provide a seamless user experience on a deadline, they need to squeeze the most out of the development resources they have. Component libraries help tremendously. Developers can build individual components for small things like buttons and big things like headers to be used anywhere in the frontend with a consistent look and feel.

Storybook is one of the most popular tools for building web components. It works with all the popular frameworks, like React, Angular, and Vue. With Storybook, you can view tweaks to components as you develop their “stories.” It’s awesome! However, manually inspecting components only works small-scale when you, as the developer, are actively working on any given component. How can a team test their Storybook components at scale? And how does that fit into a broader web app testing strategy?

What if I told you that you could automatically do cross-browser testing for Storybook components without needing to define any new tests or write any new automation code? And what if I told you that it could fit seamlessly into your existing development workflow? You can do this with the power of Applitools and your favorite CI tool! Let’s see how.

Adding Visual Component Testing to Your Strategy

Historically, web app testing strategies divide functional testing into three main levels:

  1. Unit testing
  2. Integration testing for APIs
  3. End-to-end testing for UIs and APIs

These three levels make up the classic Testing Pyramid. Each level of testing mitigates a unique type of risk. Unit tests pinpoint problems in code, integration tests catch problems where entities meet, and end-to-end tests exercise behaviors like a user.

The rise of frontend component libraries raises an interesting question: Where do components belong among these levels? Components are essentially units of the UI. In that sense, they should be tested individually as “UI units” to catch problems before they become widespread across multiple app views. One buggy component could unexpectedly break several pages. However, to test them properly, they should be rendered in a browser as if they were “live.” They might even call APIs indirectly. Thus, arguably, component testing should be sandwiched between traditional integration and end-to-end testing.

Web app testing levels, showing where component testing belongs in relation to other levels.

Wait, another level of testing? Nobody has time for that! It’s hard enough to test adequate coverage at the three other levels, let alone automate those tests. Believe me, I understand the frustration. Unfortunately, component libraries bring new risks that ought to be mitigated.

Thankfully, Applitools provides a way to visually test all the components in a Storybook library with the Applitools Eyes SDK for Storybook. All you need to do is install the @applitools/eyes-storybook package into your web app project, configure a few settings, and run a short command to launch the tests. Applitools Eyes will turn each story for each component into a visual test case. On the first run, it will capture a visual snapshot for each story as a “baseline” image. Then, subsequent runs will capture “checkpoint” snapshots and use Visual AI to detect any changes. You don’t need to write any new test code – tests become a side effect of creating new components and stories!

In this sense, visual component testing with Applitools is like autonomous testing. Test generation and execution is completely automated, and humans review the results. Since testing can be done autonomously, component testing is easy to add to an existing testing strategy. It mitigates lots of risk for low effort. Since it covers components very well, it can also reduce the number of tests at other layers. Remember, the goal of a testing strategy is not to cover all the things but rather to use available resources to mitigate as much risk as possible. Covering a whole component library with an autonomous test run frees up folks to focus on other areas.

Adding Applitools Eyes to Your Web App

Let’s walk through how to set up visual component tests for a Storybook library. You can follow the steps below to add visual component tests to any web app that has a Storybook library. Give it a try on one of your own apps, or follow along with the example React app that I use below. You’ll also need Node.js installed as a prerequisite.

To get started, you’ll need an Applitools account to run visual tests. If you don’t already have an Applitools account, you can register for free using your email or GitHub account. That will let you run visual tests with basic features.

Once you get your account, store your API key as an environment variable. On macOS or Linux, use this command:

export APPLITOOLS_API_KEY=<your-api-key>

On Windows:

set APPLITOOLS_API_KEY=<your-api-key>

Next, you need to add the eyes-storybook package to your project as a development dependency. To install it, run:

npm install --save-dev @applitools/eyes-storybook

Finally, you’ll need to add a little configuration for the visual tests. Add a file named applitools.config.js to the project’s root directory, and add the following contents:

module.exports = {
    concurrency: 1,
    batchName: "Visually Testing Storybook Components"
}

The concurrency setting defines how many visual snapshot comparisons the Applitools Ultrafast Test Cloud will perform in parallel. (With a free account, you are limited to 1.) The batchName setting defines a name for the batch of tests that will appear in the Applitools dashboard. You can learn about these settings and more under Advanced Configuration in the docs.

That’s it! Now, we’re ready to run some tests. Launch them with this command:

npx eyes-storybook

Note: If your components use static assets like image files, then you will need to append the -s option with the path to the directory for static files. In my example React app, this would be -s public.

The command line will print progress as it tests each story. Once testing is done, you can see all the results in the Applitools dashboard:

Results for visual component tests establishing baseline snapshots.

Run the tests a second time for checkpoint comparisons:

Results for visual component tests comparing checkpoints to baselines.

If you change any of your components, then tests should identify the changes and report them as “Unresolved.” You can then visually compare differences side-by-side in the Applitools dashboard. Applitools Eyes will highlight the differences for you. Below is the result when I changed a button’s color in my React app:

Comparing visual differences between two buttons after changing the color.

You can give the changes a thumbs-up if they are “right” or a thumbs-down if they are due to a regression. Applitools makes it easy to pinpoint changes. It also provides auto-maintenance features to minimize the number of times you need to accept or reject changes.

Adding Cross-Browser Tests for All Components

When Applitools performs visual testing, it captures snapshots from tests running on your local machine, but it does everything else in the Ultrafast Test Cloud. It rerenders those snapshots – which contain everything on the page – against different browser configurations and uses Visual AI to detect any changes relative to baselines.

If no browsers are specified for Storybook components, Applitools will run visual component tests against Google Chrome running on Linux. However, you can explicitly tell Applitools to run your tests against any browser or mobile device.

You might not think you need to do cross-browser testing for components at first. They’re just small “UI units,” right? Well, however big or small, different browsers render components differently. For example, a button may have rectangular edges instead of round ones. Bigger components are more susceptible to cross-browser inconsistencies. Think about a navbar with responsive rendering based on viewport size. Cross-browser testing is just as applicable for components as it is for full pages.

Configuring cross-browser testing for Storybook components is easy. All you need to do is add a list of browser configs to your applitools.config.js file like this:

module.exports = {
  concurrency: 1,
  batchName: "Visually Testing Storybook Components",
  browser: [
    // Desktop
    {width: 800, height: 600, name: 'chrome'},
    {width: 700, height: 500, name: 'firefox'},
    {width: 1600, height: 1200, name: 'ie11'},
    {width: 1024, height: 768, name: 'edgechromium'},
    {width: 800, height: 600, name: 'safari'},
    // Mobile
    {deviceName: 'iPhone X', screenOrientation: 'portrait'},
    {deviceName: 'Pixel 2', screenOrientation: 'portrait'},
    {deviceName: 'Galaxy S5', screenOrientation: 'portrait'},
    {deviceName: 'Nexus 10', screenOrientation: 'portrait'},
    {deviceName: 'iPad Pro', screenOrientation: 'landscape'},
  ]
}

This declaration includes ten unique browser configurations: five desktop browsers with different viewport sizes, and five mobile devices with both portrait and landscape orientations. Every story will run against every specified browser. If you run the test suite again, there will be ten times as many results!

Results from running visual component tests across multiple browsers.

As shown above, my batch included 90 unique test instances. Even though that’s a high number of tests, Applitools Ultrafast Test Cloud ran them in only 32 seconds! That really is ultrafast for UI tests.

Running Visual Component Tests Autonomously

Applitools Eyes makes it easy to run visual component tests, but to become truly autonomous, these tests should be triggered automatically as part of regular development workflows. Any time someone makes a change to these components, tests should run, and the team should receive feedback.

We can configure Continuous Integration (CI) tools like Jenkins, CircleCI, and others for this purpose. Personally, I like to use GitHub Actions because they work right within your GitHub repository. Here’s a GitHub Action I created to run visual component tests against my example app every time a change is pushed or a pull request is opened for the main branch:

name: Run Visual Component Tests

on:
  push:
  pull_request:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v2
      
      - name: Set up Node.js
        uses: actions/setup-node@v2

      - name: Install dependencies
        run: npm install

      - name: Run visual component tests
        run: npx eyes-storybook -s public
        env:
          APPLITOOLS_API_KEY: ${{ secrets.APPLITOOLS_API_KEY }}

The only extra configuration needed was to add my Applitools API key as a repository secret.

Maximizing Your Testing Value

Components are just one layer of complex modern web apps. A robust testing strategy should include adequate testing at all levels. Thankfully, visual testing with Applitools can take care of the component layer with minimal effort. Unit tests can cover how the code works, such as a component’s play method. Integration tests can cover API requests, and end-to-end tests can cover user-centric behaviors. Tests at all these levels together provide great protection for your app. Don’t neglect any one of them!

Autonomous Testing: Test Automation’s Next Great Wave (https://applitools.com/blog/autonomous-testing-test-automations-next-great-wave/)

“Full” test automation is approaching. We are riding the crest of the next great wave: autonomous testing. It will fundamentally change testing.

The word “automation” has become a buzzword in pop culture. It conjures things like self-driving cars, robotic assistants, and factory assembly lines. Most people don’t think about automation for software testing. In fact, many non-software folks are surprised to hear that what I do is “automation.”

The word “automation” also carries a connotation of “full” automation with zero human intervention. Unfortunately, most of our automated technologies just aren’t there yet. For example, a few luxury cars out there can parallel-park themselves, and Teslas have some cool autopilot capabilities, but fully-autonomous vehicles do not yet exist. Self-driving cars need several more years to perfect and even more time to become commonplace on our roads.

Software testing is no different. Even when test execution is automated, test development is still very manual. Ironic, isn’t it? Well, I think the day of “full” test automation is quickly approaching. We are riding the crest of the next great wave: autonomous testing. It’ll arrive long before cars can drive themselves. Like previous waves, it will fundamentally change how we, as testers, approach our craft.

Let’s look at the past two waves to understand this more deeply. You can watch the keynote address I delivered at Future of Testing: Frameworks 2022, or you can keep reading below.

Before Automation

In their most basic form, tests are manual. A human manually exercises the behavior of the software product’s features and determines if outcomes are expected or erroneous. There’s nothing wrong with manual testing. Many teams still do this effectively today. Heck, I always try a test manually before automating it. Manual tests may be scripted in that they follow a precise, predefined procedure, or they may be exploratory in that the tester relies instead on their sensibilities to exercise the target behaviors.

Testers typically write scripted tests as a list of steps with interactions and verifications. They store these tests in test case management repositories. Most of these tests are inherently “end-to-end:” they require the full product to be up and running, and they expect testers to attempt a complete workflow. In fact, testers are implicitly incentivized to include multiple related behaviors per test in order to gain as much coverage with as little manual effort as possible. As a result, test cases can become very looooooooooooong, and different tests frequently share common steps.

Large software products exhibit countless behaviors. A single product could have thousands of test cases owned and operated by multiple testers. Unfortunately, at this scale, testing is slooooooooow. Whenever developers add new features, testers need to not only add new tests but also rerun old tests to make sure nothing broke. Software is shockingly fragile. A team could take days, weeks, or even months to adequately test a new release. I know – I once worked at a company with a 6-month-long regression testing phase.

Slow test cycles forced teams to practice Waterfall software development. Rather than waste time manually rerunning all tests for every little change, it was more efficient to bundle many changes together into a big release and test them all at once. Teams would often pipeline development phases: while developers wrote code for the features going into release X+1, testers would test the features for release X. If testing cycles were long, testers might repeat tests a few times throughout the cycle. If testing cycles were short, then testers would reduce the number of tests to run to a subset most aligned with the new features. Test planning was just as much work as test execution and reporting due to the difficulty of judging risk-based tradeoffs.

A Waterfall release schedule showing overlapping cycles of Design, Development, Testing and Release.
Typical overlapping Waterfall release cycles

Slow manual testing was the bane of software development. It lengthened time to market and allowed bugs to fester. Anything that could shorten testing time would make teams more productive.

The First Wave: Manual Test Conversion

That’s when the first wave of test automation hit: manual test conversion. What if we could implement our manual test procedures as software scripts so they could run automatically? Instead of a human running the tests slowly, a computer could run them much faster. Testers could also organize scripts into suites to run a bunch of tests at one time. That’s it – that was the revolution. Let software test software!

During this wave, the main focus of automation was execution. Teams wanted to directly convert their existing manual tests into automated scripts to speed them up and run them more frequently. Both coded and codeless automation tools hit the market. However, they typically stuck with the same waterfall-minded processes. Automation didn’t fundamentally change how teams developed software; it just made testing better. For example, during this wave, running automated tests after a nightly build was in vogue. When teams would plan their testing efforts, they would pick a few high-value tests to automate and run more frequently than the rest of the manual tests.

A table showing “interaction” in one column and “verification” in another, with sample test steps.
An example of a typical manual test that would have likely been converted to an automated test during this wave.

Unfortunately, while this type of automation offered big improvements over pure manual testing, it had problems. First, testers still needed to manually trigger the tests and report results. On a typical day, a tester would launch a bunch of scripts while manually running other tests on the side. Second, test scripts were typically very fragile. Neither the tooling nor the understanding needed for good automation had matured yet. Large end-to-end tests and long development cycles also increased the risk of breakage. Many teams gave up attempting test automation due to the maintenance nightmare.

The first wave of test automation was analogous to cars switching from manual to automatic transmissions. Automation made the task of driving a test easier, but it still required the driver (or the tester) to start and stop the test.

The Second Wave: CI/CD

The second test automation wave was far more impactful than the first. After automating the execution of tests, focus shifted to automating the triggering of tests. If tests are automated, then they can run without any human intervention. Therefore, they could be launched at any time without human intervention, too. What if tests could run automatically after every new build? What if every code change could trigger a new build that could then be covered with tests immediately? Teams could catch bugs as soon as they happen. This was the dawn of Continuous Integration, or “CI” for short.

Continuous Integration revolutionized software development. Long Waterfall phases for coding and testing weren’t just passé – they were unnecessary. Bite-sized changes could be independently tested, verified, and potentially deployed. Agile and DevOps practices quickly replaced the Waterfall model because they enabled faster releases, and Continuous Integration enabled Agile and DevOps. As some would say, “Just make the DevOps happen!”

The types of tests teams automated changed, too. Long end-to-end tests that covered “grand tours” with multiple behaviors were great for manual testing but not suitable for automation. Teams started automating short, atomic tests focused on individual behaviors. Small tests were faster and more reliable. One failure pinpointed one problematic behavior.

Developers also became more engaged in testing. They started automating both unit tests and feature tests to be run in CI pipelines. The lines separating developers and testers blurred.

Teams adopted the Testing Pyramid as an ideal model for test count proportions. Smaller tests were seen as “good” because they were easy to write, fast to execute, less susceptible to flakiness, and caught problems quickly. Larger tests, while still important for verifying workflows, needed more investment to build, run, and maintain. So, teams targeted more small tests and fewer large tests. You may personally agree or disagree with the Testing Pyramid, but that was the rationale behind it.

The Testing Pyramid, showing a large amount of unit tests at the base, integration tests in the middle and end-to-end tests at the top.
The Classic Testing Pyramid

While the first automation wave worked within established software lifecycle models, the second wave fundamentally changed them. The CI revolution enabled tests to run continuously, shrinking the feedback loop and maximizing the value that automated tests could deliver. It gave rise to the SDET, or Software Development Engineer in Test, who had to manage tests, automation, and CI systems. SDETs carried more responsibilities than the automation engineers of the first wave.

If we return to our car analogy, the second wave was like adding cruise control. Once the driver gets on the highway, the car can just cruise on its own without much intervention.

Unfortunately, while the second wave enabled teams to multiply the value they can get out of testing and automation, it came with a cost. Test automation became full-blown software development in its own right. It entailed tools, frameworks, and design patterns. The continuous integration servers became production environments for automated tests. While some teams rose to the challenge, many others struggled to keep up. The industry did not move forward together in lock-step. Test automation success became a gradient of maturity levels. For some teams, success seemed impossible to reach.

Attempts at Improvement

Now, these two test automation waves I described do not denote precise playbooks every team followed. Rather, they describe the general industry trends regarding test automation advancement. Different teams may have caught these waves at different times, too.

Currently, as an industry, I think we are riding the tail end of the second wave, rising up to meet the crest of a third. Continuous Integration, Agile, and DevOps are all established practices. The innovation of the next wave hasn’t arrived yet.

Over the past few years, a number of nifty test automation features have hit the scene, such as screen recorders and smart locators. I’m going to be blunt: those are not the next wave; they’re just attempts to fix aspects of the previous waves.

  1. Screen recorders and visual step builders have been around forever, it seems. Although they can help folks who are new to automation or don’t know how to code, they produce very fragile scripts. Whenever the app under test changes its behavior, testers need to re-record tests.
  2. Self-healing locators don’t deliver much value on their own. When a locator breaks, it’s most likely due to a developer changing the behavior on a given page. Behavior changes require test step changes. There’s a good chance the target element would be changed or removed. Besides, even if the target element keeps its original purpose, updating its locator is a super small effort.
  3. Visual locators – ones that find elements based on image matching instead of textual queries – also don’t deliver much value on their own. They’re different but not necessarily “better.” The one advantage they do offer is finding elements that are hard to locate with traditional locators, like a canvas or gaming objects. Again, the challenge is handling behavior change, not element change.

You may agree or disagree with my opinions on the usefulness of these tools, but the fact is that they all share a common weakness: they are vulnerable to behavioral changes. Human testers must still intervene as development churns.

These tools are akin to a car that can park itself but can’t fully drive itself. They’re helpful to some folks but fall short of the ultimate dream of full automation.

The Third Wave: Autonomous Testing

The first two waves covered automation for execution and scheduling. Now, the bottleneck is test design and development. Humans still need to manually create tests. What if we automated that?

Consider what testing is: Testing equals interaction plus verification. That’s it! You do something, and you make sure it works correctly. It’s true for all types of tests: unit tests, integration tests, end-to-end tests, functional, performance, load; whatever! Testing is interaction plus verification.

At its core, testing is interaction plus verification

During the first two waves, humans had to dictate those interactions and verifications precisely. What we want – and what I predict the third wave will be – is autonomous testing, in which that dictation will be automated. This is where artificial intelligence can help us. In fact, it’s already helping us.

Applitools has already mastered automated validation for visual interfaces. Traditionally, a tester would need to write several lines of code to functionally validate behaviors on a web page. They would need to check for elements’ existence, scrape their texts, and make assertions on their properties. There might be multiple assertions to make – and other facets of the page left unchecked. Visuals like color and position would be very difficult to check. Applitools Eyes can replace almost all of those traditional assertions with single-line snapshots. Whenever it detects a meaningful change, it notifies the tester. Insignificant changes are ignored to reduce noise.

Automated visual testing like this fundamentally simplifies functional verification. It should not be seen as an optional extension or something nice to have. It automates the dictation of verification. It is a new type of functional testing.

The remaining problem to solve is dictation of interaction. Essentially, we need to train AI to figure out proper app behaviors on its own. Point it at an app, let it play around, and see what behaviors it identifies. Pair those interactions with visual snapshot validation, and BOOM – you have autonomous testing. It’s testing without coding. It’s like a fully-self-driving car!

Some companies already offer tools that attempt to discover behaviors and formulate test cases. Applitools is also working on this. However, it’s a tough problem to crack.

Even with significant training and refinement, AI agents still have what I call “banana peel moments”: times when they make surprisingly awful mistakes that a human would never make. Picture this: you’re walking down the street when you accidentally slip on a banana peel. Your foot slides out from beneath you, and you hit your butt on the ground so hard it hurts. Everyone around you laughs at both your misfortune and your clumsiness. You never saw it coming!

Banana peel moments are common AI hazards. Back in 2011, IBM created a supercomputer named Watson to compete on Jeopardy, and it handily defeated two of the greatest human Jeopardy champions at that time. However, I remember watching some of the promo videos at the time explaining how hard it was to train Watson to give the right answers. In one clip, Watson answered “banana” to some arbitrary question. Oops! Banana? Really?

IBM Watson is shown defeating other contestants with the correct answer of Bram Stoker in Final Jeopardy.
Watson (center) competing against Ken Jennings (left) and Brad Rutter (right) on Jeopardy in 2011. (Image source: https://i.ytimg.com/vi/P18EdAKuC1U/maxresdefault.jpg)

While Watson’s blunder was comical, other mistakes can be deadly. Remember those self-driving cars? Tesla autopilot mistakes have killed at least a dozen people since 2016. Autonomous testing isn’t a life-or-death situation like driving, but testing mistakes could be a big risk for companies looking to de-risk their software releases. What if autonomous tests miss critical application behaviors that then crash in production? Companies could lose lots of money, not to mention their reputations.

So, how can we give AI for testing the right training to avoid these banana peel moments? I think the answer is simple: set up AI for testing to work together with human testers. Instead of making AI responsible for churning out perfect test cases, design the AI to be a “coach” or an “advisor.” AI can explore an app and suggest behaviors to cover, and the human tester can pair that information with their own expertise to decide what to test. Then, the AI can take that feedback from the human tester to learn better for next time. This type of feedback loop can help AI agents not only learn better testing practices generally but also learn how to test the target app specifically. It teaches application context.

AI and humans working together is not just a theory. It’s already happened! Back in the 90s, IBM built a supercomputer named Deep Blue to play chess. In 1996, it lost 4-2 to grandmaster and World Chess Champion Garry Kasparov. One year later, after upgrades and improvements, it defeated Kasparov 3.5-2.5. It was the first time a computer beat a world champion at chess. After his defeat, Kasparov had an idea: What if human players could use a computer to help them play chess? Then, one year later, he set up the first “advanced chess” tournament. To this day, “centaurs,” or humans using computers, can play at nearly the same level as grandmasters.

Garry Kasparov staring at a chessboard across the table from an operator playing for the Deep Blue AI.
Garry Kasparov playing chess against Deep Blue. (Image source: https://cdn.britannica.com/62/71262-050-25BFC8AB/Garry-Kasparov-Deep-Blue-IBM-computer.jpg)

I believe the next great wave for test automation belongs to testers who become centaurs – and to those who enable that transformation. AI can learn app behaviors to suggest test cases that testers accept or reject as part of their testing plan. Then, AI can autonomously run approved tests. Whenever changes or failures are detected, the autonomous tests yield helpful results to testers like visual comparisons to figure out what is wrong. Testers will never be completely removed from testing, but the grindwork they’ll need to do will be minimized. Self-driving cars still have passengers who set their destinations.

This wave will also be easier to catch than the first two waves. Testing and automation were historically a do-it-yourself effort: you had to design, automate, and execute tests all on your own, and many teams struggled to make it successful. However, with autonomous testing and coaching capabilities, AI testing technologies will eliminate the hardest parts of automation. Teams can focus on what they want to test more than how to implement testing. They won’t stumble over flaky tests. They won’t need to spend hours debugging why a particular XPath won’t work. They won’t need to wonder which elements they should and shouldn’t verify on a page. Any time behaviors change, they can rerun the AI agents to relearn how the app works. Autonomous testing will revolutionize functional software testing by lowering the cost of entry for automation.

Catching the Next Wave

If you are plugged into software testing communities, you’ll hear from multiple testing leaders about their thoughts on the direction of our discipline. You’ll learn about trends, tools, and frameworks. You’ll see new design patterns challenge old ones. Something I want you to think about in the back of your mind is this: How can these things be adapted to autonomous testing? Will these tools and practices complement autonomous testing, or will they be replaced? The wave is coming, and it’s coming soon. Be ready to catch it when it crests.

The post Autonomous Testing: Test Automation’s Next Great Wave appeared first on Automated Visual Testing | Applitools.

]]>
How to Run Cross Browser Tests with Cypress on All Browsers https://applitools.com/blog/cross-browser-tests-cypress-all-browsers/ Fri, 04 Feb 2022 17:37:50 +0000 https://applitools.com/?p=34121 Learn how you can run cross-browser Cypress tests against any browser, including Safari, IE and mobile browsers.

The post How to Run Cross Browser Tests with Cypress on All Browsers appeared first on Automated Visual Testing | Applitools.

]]>

Learn how you can run cross-browser Cypress tests against any browser, including Safari, IE and mobile browsers.

Ah, Cypress – the darling end-to-end test framework of the JavaScript world. In the past few years, Cypress has surged in popularity due to its excellent developer experience. It runs right in the browser alongside web apps, making it a natural fit for frontend developers. Its API is both concise and powerful. Its interactions automatically handle waiting to avoid any chance of flakiness. Cypress almost seems like a strong contender to dethrone Selenium WebDriver as the king of browser automation tools.

However, Cypress has a critical weakness: it cannot natively run tests against all browser types. At the time of writing this article, Cypress supports only a limited set of browsers: Chrome, Edge, and Firefox. That means no support for Safari or IE. Cypress also doesn’t support mobile web browsers. Ouch! These limitations alone could make you think twice about choosing to automate your tests in Cypress.

Thankfully, there is a way to run Cypress tests against any browser type, including Safari, IE, and mobile browsers: using the Applitools Ultrafast Grid. With the help of Applitools, you can achieve full cross-browser testing with Cypress, even for large-scale test suites. Let’s see how it’s done. We’ll start with a basic Cypress test, and then we’ll add visual snapshots that can be rendered in any browser in the Applitools cloud.

Defining a Test Case

Let’s define a basic web app login test for the Applitools demo site. The site mimics a basic banking app. The first page is a login screen:

Demo login form including logo, username and password.

You can enter any username or password to login. Then, the main page appears:

Demo main page displaying a financial app, including balance and recent transactions.

Nothing fancy here. The steps for our test case are straightforward:

Scenario: Successful login
  Given the login page is displayed
  When the user enters their username and password
  And the user clicks the login button
  Then the main page is displayed

These steps would be the same for the login behavior of any other application.

Automating a Cypress Test

Let’s automate our login test using Cypress. Create a JavaScript project and install Cypress. Then, create a new test case spec: cypress/integration/login.spec.js. Add the following test case to the spec file:

describe('Login', () => {

    beforeEach(() => {
        cy.viewport(1600, 1200)
    })

    it('should log into the demo app', () => {
        loadLoginPage()
        verifyLoginPage()
        performLogin()
        verifyMainPage()
    })
})

Cypress uses Mocha as its core test framework. The beforeEach call makes sure the browser viewport is large enough to show all elements in the demo app. The test case itself has a helper function for each test step.

The first function, loadLoginPage, loads the login page:

function loadLoginPage() {
    cy.visit('https://demo.applitools.com')
}

The second function, verifyLoginPage, makes sure that the login page loads correctly:

function verifyLoginPage() {
    cy.get('div.logo-w').should('be.visible')
    cy.get('#username').should('be.visible')
    cy.get('#password').should('be.visible')
    cy.get('#log-in').should('be.visible')
    cy.get('input.form-check-input').should('be.visible')
}

The third function, performLogin, actually does the interaction of logging in:

function performLogin() {
    cy.get('#username').type('andy')
    cy.get('#password').type('i<3pandas')
    cy.get('#log-in').click()
}

The fourth and final function, verifyMainPage, makes sure that the main page loads correctly:

function verifyMainPage() {

    // Check various page elements
    cy.get('div.logo-w').should('be.visible')
    cy.get('div.element-search.autosuggest-search-activator > input').should('be.visible')
    cy.get('div.avatar-w img').should('be.visible')
    cy.get('ul.main-menu').should('be.visible')
    cy.contains('Add Account').should('be.visible')
    cy.contains('Make Payment').should('be.visible')
    cy.contains('View Statement').should('be.visible')
    cy.contains('Request Increase').should('be.visible')
    cy.contains('Pay Now').should('be.visible')

    // Check time message
    cy.get('#time').invoke('text').should('match', /Your nearest branch closes in:( \d+[hms])+/)

    // Check menu element names
    cy.get('ul.main-menu li span').should(items => {
        expect(items[0]).to.contain.text('Card types')
        expect(items[1]).to.contain.text('Credit cards')
        expect(items[2]).to.contain.text('Debit cards')
        expect(items[3]).to.contain.text('Lending')
        expect(items[4]).to.contain.text('Loans')
        expect(items[5]).to.contain.text('Mortgages')
    })

    // Check transaction statuses
    const statuses = ['Complete', 'Pending', 'Declined']
    cy.get('span.status-pill + span').each(($span, index) => {
        expect(statuses).to.include($span.text())
    })
}

The first three functions are fairly concise, but the fourth one is a doozy. The main page has so many things to check, and despite its length, this step doesn’t even check everything!

Run this test locally to make sure it works (npx cypress open). It should pass using any local browser (Chrome, Edge, Electron, or Firefox).
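If you prefer a headless run with results in the terminal (for example, in CI), Cypress also provides a run command; both of the commands below are standard Cypress CLI invocations:

npx cypress open   # interactive test runner
npx cypress run    # headless run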

Introducing Visual Snapshots

You could run this login test on your local machine or from your Continuous Integration (CI) service, but in its present form, it can’t run on those extra browsers (Safari, IE, mobile). To do that, we need the help of visual testing techniques using Applitools Visual AI and the Ultrafast Test Cloud.

Visual testing is the practice of inspecting visual differences between snapshots of screens in the app you are testing. You start by capturing a “baseline” snapshot of, say, the login page to consider as “right” or “expected.” Then, every time you run the tests, you capture a new snapshot of the same page and compare it to the baseline. By comparing the two snapshots side-by-side, you can detect any visual differences. Did a button go missing? Did the layout shift to the left? Did the colors change? If nothing changes, then the test passes. However, if there are changes, a human tester should review the differences to decide if the change is good or bad.

Manual testers have done visual testing since the dawn of computer screens. Applitools Visual AI simply automates the process. It highlights differences in side-by-side snapshots so you don’t miss them. Furthermore, Visual AI focuses on meaningful changes that human eyes would notice. If an element shifts one pixel to the right, that’s not a problem. Visual AI won’t bother you with that noise.

If a picture is worth a thousand words, then a visual snapshot is worth a thousand assertions. We could update our login test to take visual snapshots using Applitools Eyes SDK in place of lengthy assertions. Visual snapshots provide stronger coverage than the previous assertions. Remember how verifyMainPage had several checks but still didn’t cover all the elements on the page? A visual snapshot would implicitly capture everything with only one line of code. Visual testing like this enables more effective functional testing than traditional assertions.

But back to the original problem: how does this enable us to run Cypress tests in Safari, IE, and mobile browsers? That’s the magic of snapshots. Notice how I said “snapshot” and not “screenshot.” A screenshot is merely a grid of static pixels. A snapshot, however, captures full page content – HTML, CSS, and JavaScript – that can be re-rendered in any browser configuration. If we update our Cypress test to take visual snapshots of the login page and the main page, then we could run our test one time locally to capture the snapshots. Then, the Applitools Eyes SDK would upload the snapshots to the Applitools Ultrafast Test Cloud to render them in any target browser – including browsers not natively supported by Cypress – and compare them against baselines. All the heavy work for visual checkpoints would be done by the Applitools Ultrafast Test Cloud, not by the local machine. It also works fast, since re-rendering snapshots takes much less time than re-running full Cypress tests.

Updating the Cypress Test

Let’s turn our login test into a visual test. First, make sure you have an Applitools account. You can register for a free account to get started.

Your account comes with an API key. Visual tests using Applitools Eyes need this API key for uploading results to your account. On your machine, set this key as an environment variable.

On Linux and macOS:

$ export APPLITOOLS_API_KEY=<value>

On Windows:

> set APPLITOOLS_API_KEY=<value>

Time for coding! The test case steps remain the same, but the test case must be wrapped by calls to Applitools Eyes:

describe('Login', () => {

    it('should log into the demo app', () => {

        cy.eyesOpen({
            appName: 'Applitools Demo App',
            testName: 'Login',
        })

        loadLoginPage()
        verifyLoginPage()
        performLogin()
        verifyMainPage()
    })

    afterEach(() => {
        cy.eyesClose()
    })
})

Before the test begins, cy.eyesOpen(...) tells Applitools Eyes to start watching the browser. It also sets names for the app under test and the test case itself. Then, at the conclusion of the test, cy.eyesClose() tells Applitools Eyes to stop watching the browser.

The interaction functions, loadLoginPage and performLogin, do not need any changes. The verification functions do:

function verifyLoginPage() {
    cy.eyesCheckWindow({
        tag: "Login page",
        target: 'window',
        fully: true
    });
}

function verifyMainPage() {
    cy.eyesCheckWindow({
        tag: "Main page",
        target: 'window',
        fully: true,
        matchLevel: 'Layout'
    });
}

All the assertion calls are replaced by one-line snapshots using Applitools Eyes. These snapshots capture the full window for both pages. The main page also sets a match level to “layout” so that differences in text and color are ignored.

The test code changes are complete, but you need to do one more thing: you must specify browser configurations to test in the Applitools Ultrafast Test Cloud. Add a file named applitools.config.js to the root level of the project, and add the following content:

module.exports = {
    testConcurrency: 5,
    apiKey: 'APPLITOOLS_API_KEY',
    browser: [
        // Desktop
        {width: 800, height: 600, name: 'chrome'},
        {width: 700, height: 500, name: 'firefox'},
        {width: 1600, height: 1200, name: 'ie11'},
        {width: 1024, height: 768, name: 'edgechromium'},
        {width: 800, height: 600, name: 'safari'},
        // Mobile
        {deviceName: 'iPhone X', screenOrientation: 'portrait'},
        {deviceName: 'Pixel 2', screenOrientation: 'portrait'},
        {deviceName: 'Galaxy S5', screenOrientation: 'portrait'},
        {deviceName: 'Nexus 10', screenOrientation: 'portrait'},
        {deviceName: 'iPad Pro', screenOrientation: 'landscape'},
    ],
    batchName: 'Modern Cross-Browser Testing Workshop'
}

This config file contains four settings:

  1. testConcurrency sets the level of parallel execution in the Applitools Ultrafast Test Cloud. (Free accounts are limited to 1 concurrent test.)
  2. apiKey sets the environment variable name for the Applitools API key.
  3. browser declares a list of browser configurations to test. This config file provides ten total configs: five desktop, and five mobile. Notice that Safari and IE11 are included. Desktop browser configs include viewport sizes, while mobile browser configs include screen orientations.
  4. batchName sets a name that all results will share in the Applitools Dashboard.

Done! Let’s run the updated test.

Running our Cypress Cross Browser Test

Run the test locally to make sure it works (npx cypress open). Then, open the Applitools dashboard to view visual test results:

Applitools dashboard with baseline results.

Notice how this one login test has one result for each target configuration. All results have “New” status because they are establishing baselines. Also, notice how little time it took to run this batch of tests:

Results of test showing batch of 10 tests ran in 36 seconds

Running our test across 10 different browser configurations with 2 visual checkpoints each at a concurrency level of 5 took only 36 seconds to complete. That’s ultra fast! Running that many test iterations locally or in a traditional Cypress parallel environment could take several minutes.

Run the test again. The second run should succeed just like the first. However, the new dashboard results now say “Passed” because Applitools compared the latest snapshots to the baselines and verified that they had not changed:

Applitools dashboard with passing results.

This time, all variations took 32 seconds to complete – about half a minute.

Passing tests are great, but what happens if a page changes? Consider an alternate version of the login page:

Demo login form including logo, username and password, with broken icon for logo and different login button.

This version has a broken icon and a different login button. Modify the loadLoginPage function to test this version of the site like this:

function loadLoginPage() {
    cy.visit('https://demo.applitools.com/index_v2.html')
}

Now, when you rerun the test, results appear as “Unresolved” in the Applitools dashboard:

Applitools dashboard with unresolved changes.

When you open each result, the dashboard will display visual comparisons for each snapshot. If you click the snapshot, it opens the comparison window:

Applitools dashboard showing comparison window, which highlights differences.

The baseline snapshot appears on the left, while the latest checkpoint snapshot appears on the right. Differences will be highlighted in magenta. As the tester, you can choose to either accept the change as a new baseline or reject it as a failure.

Taking the Next Steps

Even though Cypress can’t natively run tests against Safari, IE, or mobile browsers, it can when paired with visual testing through the Applitools Ultrafast Test Cloud. You can use Cypress tests to capture snapshots and then render them under any number of different browser configurations to achieve true cross-browser testing with visual and functional test coverage.

Want to see the full code? Check out this GitHub repository: applitools/workshop-cbt-cypress.

Want to try visual testing for yourself? Register for a free Applitools account.

Want to see more examples? Check out other articles here, here, and here.

The post How to Run Cross Browser Tests with Cypress on All Browsers appeared first on Automated Visual Testing | Applitools.

]]>
Automatically Run Visual Tests on Every Netlify Deploy https://applitools.com/blog/instant-visual-testing-coverage-with-the-applitools-visual-diff-plugin-for-netlify/ Wed, 25 Aug 2021 17:32:25 +0000 https://applitools.com/?p=30390 Running tests shouldn’t be an afterthought, but it can seem challenging to dive in with the huge variety of frameworks and simply not knowing what to test. Instead, we can...

The post Automatically Run Visual Tests on Every Netlify Deploy appeared first on Automated Visual Testing | Applitools.

]]>
Applitools + Netlify

Running tests shouldn’t be an afterthought, but it can seem challenging to dive in with the huge variety of frameworks and simply not knowing what to test. Instead, we can use Netlify Build Plugins to instantly add automated visual testing to every deployment in just a few clicks with the Applitools Visual Diff Plugin.

What is a Netlify Build Plugin?

Netlify is a hosting and deployment platform that automates taking a web app and publishing it to the web. This also includes the ability to deploy serverless functions, allowing developers to build APIs and backend logic right alongside their apps.

As part of that process, Netlify gives developers the ability to install Build Plugins, which extend the default build and deploy process, allowing additional tools and services to tap in.

Netlify site showing installed plugins with Visual Diff (Applitools)
Visual diff (Applitools) plugin installed on Netlify

This makes for a perfect opportunity for things like code analysis or custom build caching for those who don’t want to maintain it themselves. But it’s also an opportunity for adding testing: we can start using Applitools to run visual tests without having any other type of infrastructure or frameworks configured.

How does Applitools work with Netlify Build Plugins?

With Build Plugins, Netlify provides a few different hook locations, meaning that throughout the build and deploy process there are different steps at which a Build Plugin can run code.

One of those steps is called Post Build: as soon as Netlify finishes building the application, we can determine all of the pages that were built and run a visual test with Applitools Eyes.

Netlify build logs showing onPostBuild with netlify-plugin-visual-diff
Running Applitools on Netlify Post Build

This provides instant, broad coverage for web apps, making sure that each time we deploy the site, we’re not creating any new regressions or visual bugs.

How to get started with the Applitools Build Plugin?

If you’re already up and running with Netlify, the only thing you need to get started is a free Applitools account. Note that this plugin only works for Netlify-deployed applications.

With your account, you’ll want to locate your Applitools API key. You can find it inside your Applitools dashboard under the account dropdown by selecting My API Key.

Finding and copying your API key in the Applitools dashboard
Finding your Applitools API key

Once you’re ready to go, there are two ways to install the Visual Diff plugin: using the Netlify dashboard or using the Netlify configuration file in your project.

Note: to use advanced settings such as custom browser configurations and ignore regions, you’ll need to use Option 2, using the Netlify configuration file in your project.

Option 1: Adding the Applitools Visual Diff Plugin with the Netlify UI

Setting up your API key

To start, go to your site inside of Netlify and navigate to the Site Settings section.

Once inside, select Build & deploy in the sidebar and scroll down to Environment.

Here, we want to click Edit variables, where we’re going to add a new variable.

For the name, add APPLITOOLS_API_KEY.

For the value, add your unique Applitools API key.

Environment variables section in Netlify showing APPLITOOLS_API_KEY
APPLITOOLS_API_KEY environment variable in Netlify

Adding the Applitools Visual Diff Build Plugin

Next, navigate to the Plugins section of the Netlify dashboard, where it will then give you an option to go to the Netlify plugins directory.

Here, you can either search for “Applitools” or scroll down to the bottom where the name of the plugin will be “Visual diff (Applitools)”.

Netlify plugins directory with Install button highlighted next to Visual diff (Applitools)
Installing Visual diff (Applitools) plugin on Netlify

Click the Install button; it will then ask you to select which site you want to add the plugin to. Find and select your site, then confirm the installation.

Running a visual test through Netlify

Applitools runs its visual tests by hooking into the Netlify build and deploy process, which means we need to trigger a new deploy.

Inside the Netlify dashboard, navigate to the Deploys section.

On the right, above the list of recent deployments, you’ll see a button that says Trigger deploy. Click Trigger deploy, then select Deploy site from the dropdown.

Deploy site button highlighted in the Trigger deploy dropdown in Netlify
Triggering a new Netlify deployment

This will now kick off a new deployment with Netlify. You can even view the logs if you select the new deployment.

Viewing Applitools Visual Testing results

Once the Netlify deployment has finished, Applitools will have run through your site, visually testing each page.

You can now head over to the Applitools dashboard, where you’ll see a new test with your project’s name!

Applitools dashboard showing new test
New test in Applitools

Every time you run your tests after that, Applitools will compare the active state of your project to this baseline and make sure everything is working as expected!

Option 2: Using a netlify.toml to add the Applitools Visual Diff Plugin to a project

Setting up your API key

Even though we’re going to be setting up the plugin with our Netlify configuration file, we’ll still need to add our API key via the UI.

To start, go to your site inside of Netlify and navigate to the Site Settings section.

Once inside, select Build & deploy in the sidebar and scroll down to Environment.

Here, we want to click Edit variables, where we’re going to add a new variable.

For the name, add APPLITOOLS_API_KEY.

For the value, add your unique Applitools API key.

Environment variables section in Netlify showing APPLITOOLS_API_KEY
APPLITOOLS_API_KEY environment variable in Netlify

Adding the Applitools Visual Diff Build Plugin

If you don’t already have one, create a netlify.toml file inside of the root of your Netlify project.

Next, add the following:

[[plugins]]
  package = "netlify-plugin-visual-diff"

This will tell Netlify that you want to use the Visual Diff plugin when deploying your site.

Running a visual test through Netlify

With your new configuration, commit the new file changes and push them out to your Git repository that’s connected to Netlify.

Unless disabled, this will automatically kick off a new Netlify deploy for your site, which will run the visual tests during the build process!

Note: if you have automatic deployments disabled, try going to the Deploys section and triggering a new build.

Viewing Applitools Visual Testing results

Once the Netlify deployment has finished, Applitools will have run through your site, visually testing each page.

You can now head over to the Applitools dashboard, where you’ll see a new test with your project’s name!

Applitools dashboard showing new test
New test in Applitools

Every time you run your tests after that, Applitools will compare the active state of your project to this baseline and make sure everything is working as expected!

Optional: Advanced Configuration

If using the netlify.toml to manage the plugin, you additionally have the option of passing in configurations via Netlify build plugin inputs.
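As a sketch of what that can look like, Netlify build plugin inputs go in a [plugins.inputs] table in netlify.toml. The input names below are hypothetical placeholders, so check the plugin’s documentation for the ones it actually supports:

[[plugins]]
  package = "netlify-plugin-visual-diff"

  # Hypothetical input names for illustration only --
  # see the plugin docs for the real supported inputs.
  [plugins.inputs]
    browser = ["chrome", "firefox"]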

Head over to the Applitools Visual Diff plugin repository on GitHub to learn more!

Learn more about Applitools

Getting started with Netlify and the Applitools Build Plugin is just one of the many ways you can provide broad visual testing coverage on your project.

Check out the Applitools tutorials to find your favorite framework or get in touch to schedule a demo with one of our engineers!

The post Automatically Run Visual Tests on Every Netlify Deploy appeared first on Automated Visual Testing | Applitools.

]]>
Tutorial: How to Automate with Power Automate Desktop https://applitools.com/blog/tutorial-how-to-automate-power-automate-desktop/ Thu, 25 Mar 2021 18:54:04 +0000 https://applitools.com/?p=28017 For anyone trying to automate repetitive and time-consuming tasks, Microsoft has just quietly released a free tool that will make your life easier. Power Automate Desktop is a codeless tool...

The post Tutorial: How to Automate with Power Automate Desktop appeared first on Automated Visual Testing | Applitools.

]]>

For anyone trying to automate repetitive and time-consuming tasks, Microsoft has just quietly released a free tool that will make your life easier. Power Automate Desktop is a codeless tool that arose out of their Softomotive acquisition a year ago. Now fully integrated into Windows, it is available for free to all Windows 10 users.

In this step-by-step tutorial, we’ll learn how to use Microsoft Power Automate Desktop to automate an application without code. We’ll even learn how to codelessly test it visually too.

Automating with Power Automate Desktop

So what can be done with Power Automate Desktop? A better question might be, what can’t you do with it? Power Automate Desktop provides many pre-built “Actions” to automate many different processes or applications out of the box. For example, you may want to automate sending an email triggered by a particular event. Or you may want to extract values from an Excel spreadsheet and store them somewhere else, like a database on Azure. If so, Power Automate Desktop has you covered, along with hundreds of other similar actions. If an action you need doesn’t exist, no problem either. It’s easy to create one – so let’s go ahead and do so!

A Codeless Example of Power Automate Desktop

For the context of this example, let’s say you’re a tester who has been tasked with automating your Windows desktop application. Where do you start?! You could go down the path of using a framework such as WinAppDriver, which is an Appium driver built and maintained by Microsoft to automate desktop Windows applications. However, with this approach you will need some coding experience and some knowledge of the framework to really provide a viable solution. There are other enterprise options such as Micro Focus’s UFT One, formerly named QTP/UFT. But this too requires some coding knowledge to be successful, along with a hefty paid subscription.

The beauty of Power Automate Desktop is that, not only is it completely free, but it can be 100% codeless – no coding knowledge or background is required. A Web and Desktop recorder is built in to record any UI elements or views you need to interact with for your automation, and these can easily be plugged into your “Flow.” A Flow in Power Automate Desktop is essentially the same thing as an “automation script” in the coding world.

For this example we are going to use a very simple application: the Calculator app. 

Creating Your First Flow

  1. If you haven’t done so already, download and install Power Automate Desktop.
  2. Open Power Automate Desktop and create a New Flow.
    Setting up a new flow called Calculator Image Validation Test in Power Automate Desktop,
  3. This will open up the Action flow builder screen.
  4. Let’s choose for our first action to “Run application” and set it to open the Calculator app.
    Using a Run Application action in Microsoft Power Automate Desktop to run the calculator application.
  5. We want to give it some time to wait for the Calculator application to start. There are many different wait actions we can use. I unfortunately didn’t have a lot of luck using explicit waits, so I used an implicit 5-second wait instead, which gave me the desired result.
    Using a Wait action in Microsoft Power Automate Desktop to wait 5 seconds.
  6. Next, let’s take a screenshot of the calculator application and store it in a folder in PNG format. We will use the “Take screenshot” action.

    Tip! Whenever you screenshot your application, make sure the application window is isolated (preferably with a dark background) and not overlapping any desktop icons or other windows. This will provide a much better image result.

    Using a Take Screenshot action in Microsoft Power Automate Desktop to take the initial screenshot of the calculator.
  7. We’ll now click on an element on the Calculator application. To do this we can use the action “Click UI element in window” and the built-in recorder by clicking the “Add a new UI element” button to detect an element, in this case the 8 button, to click. When hovering over the 8 button, you can use Ctrl+left click to store the element in your list. Set the Click type to Left click.
    Using a Click UI Element in Window action in Microsoft Power Automate Desktop to gather the first calculator button click we will be automating - 8.
    Using the built-in recorder in Microsoft Power Automate Desktop to gather our calculator button clicks that we will be automating.
  8. Go ahead and repeat the step above by adding 3 more “Click UI element in window” actions to multiply 8×8=64.
    The 4 "Click UI element in Window" actions in order - 8, multiply by, 8, and equals.
  9. Add one more screenshot action (like in step 6 above). Give the screenshot a different filename than in step 6 and store it in the same folder.
  10. Finally, use the action “Close window” to close the Calculator application.
    Using a Close Window action in Microsoft Power Automate Desktop to close our calculator app when we're done.
  11. Now go ahead and execute your Flow by clicking the Run button! If all goes to plan, you should see the Calculator application launch, take a screenshot, multiply 8×8, take another screenshot and finally close the Calculator application.
  12. Your two screenshot images should look similar to this:
    Two screenshots side by side, one showing a calculator displaying "0" and the other displaying "64".

Visually Validating Your Desktop App

So now let’s say you want to visually validate your desktop application with Applitools using our AI image comparison algorithms. But why would you want to visually validate your application? 

Why Visually Validate Your Application? 

Well, the old saying goes, “A picture is worth a thousand words.” That cannot be any truer than when it comes to testing. Factor in that, in today’s software development world, the pace of releasing software updates is drastically increasing. A visual snapshot of your application can uncover bugs, visual discrepancies, or layout issues you never knew existed, and also ensure your application’s UI still works and is functional.

Like in this example, if 8×8 didn’t equal 64, we’d know right away something was wrong because the screenshot would look different. With an image, every visual attribute (text, color, location, etc.) of every element is automatically captured.

Visual Testing with Power Automate Desktop

So let’s add an action to upload the screenshots we took above to Applitools!

  1. For this next step we have a few different options. Applitools provides more than 50 SDKs to do visual testing. You can integrate these SDKs directly into your automation framework or use one of our standalone codeless tools. Power Automate Desktop does provide actions to execute programmable scripts such as Python. However, for this example we are going to use our ImageTester CLI codeless tool (also known as the Screenshots CLI) to upload these screenshots from the command line.
  2. If you haven’t done so already, download the ImageTester and place it in a folder of your choosing on your PC. 
  3. Next add your Applitools API Key to your System Environment Variables as APPLITOOLS_API_KEY.
  4. Now we want to add another action called “Get environment variable” and place it after the “Close window” action.
    Using a Get Environment Variable action in Microsoft Power Automate Desktop to get our Applitools API Key.
  5. Finally, let’s add one more action, “Run DOS command,” to execute the ImageTester CLI we downloaded above (a sample command is sketched just after this list). This is where we’re going to use the API Key we are storing in the EnvironmentVariableValue above.
    Using a Run DOS Command action in Microsoft Power Automate Desktop to execute the ImageTester CLI.
  6. Your final Flow should look similar to this.
    The final Flow in Power Automate Desktop, with 11 steps, including running the application, taking screenshots, clicking UI elements, getting environment variables, and running the DOS command.
  7. Now let’s run our Flow again! After this execution we should see our images in Applitools as new baselines.
    The baseline images in Applitools, showing our two calculator screenshots from earlier (one showing 0, the other showing 64).
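For reference, the DOS command from step 5 might look something like the sketch below. ImageTester ships as a Java jar, so it runs through java -jar. The jar and screenshot folder paths are assumptions based on wherever you placed them in the earlier steps, and the flags are worth confirming against the ImageTester documentation:

REM %EnvironmentVariableValue% holds the Applitools API key retrieved in step 4
java -jar C:\Tools\ImageTester.jar -k %EnvironmentVariableValue% -f C:\Screenshots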

Testing that is Automated and Visually Perfect

And that’s it! You now have stored baseline images of your application. So whenever your development team makes changes to your application in the future, you can run this Power Automate Desktop Flow again to ensure your application still works the same functionally but is also visually perfect!

You can see the full Flow in action in the video below:

Applitools comes with many amazing features such as CI/CD integrations, bug tracking and accessibility testing, just to name a few. Visual testing with Applitools will help you improve your overall software quality, test smarter and more efficiently, and release application updates faster without sacrificing quality. Applitools is free, so join today and see for yourself.

And don’t forget to download Power Automate Desktop and play around with it now that it’s available. Happy testing!

The post Tutorial: How to Automate with Power Automate Desktop appeared first on Automated Visual Testing | Applitools.

]]>
Playing with Playwright https://applitools.com/blog/playing-with-playwright/ Mon, 21 Sep 2020 22:45:48 +0000 https://applitools.com/?p=23352 Coding recipes using the newest automation testing tool There’s a new kid on the block in the world of automation testing tools….Playwright! Created by Microsoft, Playwright is an open source...

The post Playing with Playwright appeared first on Automated Visual Testing | Applitools.

]]>

Coding recipes using the newest automation testing tool

There’s a new kid on the block in the world of automation testing tools… Playwright!

Created by Microsoft, Playwright is an open source browser automation framework that allows JavaScript engineers to test their web applications on Chromium, Webkit, and Firefox browsers.

While Playwright is still very new, there’s quite a buzz around the tool. So, I decided to try it out for myself.

The Playwright documentation contains a ton of information about how to get started as well as details on its APIs. So, I won’t rehash all of that here. Instead, I will automate real scenarios from the Automation Bookstore application, which will demonstrate examples of using the Playwright APIs together.

The Automation Bookstore application

Launching an Application

First things first…launching the browser and navigating to the application’s URL.

The first step to do this is to declare which browser engines you’d like to run against. Again, Playwright allows you to run your tests against Chromium, Firefox, and Webkit.

I’ll only run against Chromium for now.

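The original snippet was embedded as a gist; a minimal reconstruction, assuming the standard playwright package, looks like this:

// Playwright ships with Chromium, Firefox, and WebKit engines.
// I'm only pulling in Chromium for now.
const { chromium } = require('playwright');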

Sidenote: By default, Playwright runs in headless mode. If you’d like to actually see the browser as it’s executing the test, set headless to false in the launch call:

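Another gist reconstruction, this time with the headless option made explicit:

// Launch a visible (headed) browser instead of the default headless mode.
const browser = await chromium.launch({ headless: false });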

Next, I declare variables for the browser and the page objects (both appear in the sketch below).


An important thing to note about Playwright is that all of its APIs are asynchronous, so async/await is needed to call them. So, I utilize this pattern to launch the browser and initialize the page object.

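The embedded gists aren't shown here, so this sketch covers both the declarations and the async launch pattern:

let browser;
let page;

// Every Playwright call returns a promise, so each one must be awaited.
browser = await chromium.launch();
page = await browser.newPage();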

Now that I have the page object, I can go to my application using page.goto().

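A sketch of the navigation call; the URL is my assumption of where the Automation Bookstore demo lives:

await page.goto('https://automationbookstore.dev/');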

Adding an Assertion Library

A key thing to note about Playwright is that, like many other automated testing tools, it is designed to automate browser interaction, but you must use an assertion library for your verifications. I’ll use Mocha for this example.

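The gist presumably pulled in an assertion module to pair with Mocha; a minimal sketch using Node's built-in assert:

// Mocha supplies describe/it/before/after when tests run through the mocha CLI;
// Node's assert module handles the verifications.
const assert = require('assert');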

I’ll also add a suite (using ‘describe’) and move the Playwright setup calls into before/after functions.

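Reconstructing that suite structure (the suite name is a placeholder):

describe('Automation Bookstore', () => {

    let browser;
    let page;

    before(async () => {
        browser = await chromium.launch();
        page = await browser.newPage();
        await page.goto('https://automationbookstore.dev/');
    });

    after(async () => {
        await browser.close();
    });

    // tests go here...
});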

Accessing Elements

When trying a new tool, I like to start with a very simple example. Basically, the “Hello World” of web test automation is verifying the title on the page. So, let’s start there and make the first test!

Playwright offers many ways to access elements, including the typical CSS and XPath selectors.

When inspecting the DOM of this application, the element that displays the title has an id of ‘page-title’, and the text I want to verify is the inner text of this element.

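The gist likely showed the relevant markup, which would be roughly:

<h2 id="page-title">Automation Bookstore</h2>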

So, I’ll use page.innerText and pass in the CSS selector for this id.


And voila, just like that, I have my very first test!

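A reconstruction of that first test; the expected title text is an assumption:

it('should display the page title', async () => {
    const title = await page.innerText('#page-title');
    assert.equal(title, 'Automation Bookstore');
});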

To run a Playwright test, type npm test from your terminal.

Entering Text

Now that I’m up and running, I’d like to do something a little more interesting, such as search for a book.

To do so, I need to enter text into the Filter Books field (which has an id of ‘searchBar’).

To enter text, Playwright provides the fill() API, which accepts a selector as well as the text to enter.

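A sketch of the fill call (the search term is just an example):

await page.fill('#searchBar', 'agile');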

Waiting for Elements

Playwright does a great job of waiting for elements to be ready before attempting to interact with them, so I don’t have to worry about that. However, sometimes explicit waiting is still needed.

 For example, when I enter text into the field, Playwright will wait for the field since that’s the element we’re interacting with, but the verification needs to occur on the search results.

There is a slight delay between the time that the text is entered and when the results are shown, so my script must account for that.

Fortunately, in the latest release of Playwright (1.4), the waitForSelector API was introduced. I can use this to wait until hidden books become present in the DOM.

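A sketch of that wait. Books filtered out of the results get a ui-screen-hidden class (the same class used in the selector later in this post), so waiting for that selector to be attached to the DOM signals that filtering has occurred:

// Wait until at least one book has been hidden by the filter.
await page.waitForSelector('li.ui-screen-hidden', { state: 'attached' });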

Getting List of Elements

Now that I have the search results, I want to verify that there is only one book returned. To get a list of the elements, I use the $$ API on the page object to query for all elements matching a given selector.


Now that I have this list of elements, I can verify that it only contains one book.

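Reconstructing both steps – querying for the visible books and asserting on the count:

// $$ returns an array of element handles, one per match.
const visibleBooks = await page.$$('li:not(.ui-screen-hidden)');
assert.equal(visibleBooks.length, 1);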

Accessing Children Elements

In addition to verifying the quantity, I also need to verify that the title is as expected. Since I have a list of all of the visible books, I don’t know exactly what the ID will be for the book at runtime, so I’d prefer to just get the child element from the visible book that holds the title. This is an h2 element, as seen in the markup sketch below.

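The markup below is approximate (the title and classes are assumptions), but it shows the shape: the title lives in an h2 beneath the book's li:

<li class="ui-li-has-thumb">
    <a href="#">
        <img src="agile-testing.jpg" />
        <h2>Agile Testing</h2>
        <p>Lisa Crispin, Janet Gregory</p>
    </a>
</li>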

To access descendant elements in Playwright, you can use Clauses. Clauses are selectors that are separated by >>, where each clause is a selector that is relative to the one before it.

So, in my case, where I’d like to get the h2 that is a child of this particular li, I can do so with ‘li:not(.ui-screen-hidden) >> h2’

And now the second test is done! In the sketch below, the parent selector is defined first and then expanded with a clause to target the child node.

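A reconstruction of the full second test; the search term and expected title are assumptions:

it('should show the single matching book with the correct title', async () => {
    await page.fill('#searchBar', 'agile');
    await page.waitForSelector('li.ui-screen-hidden', { state: 'attached' });

    // Parent selector: the books still visible after filtering.
    const visibleBooks = await page.$$('li:not(.ui-screen-hidden)');
    assert.equal(visibleBooks.length, 1);

    // The same parent selector, expanded with a clause to reach the child h2.
    const title = await page.innerText('li:not(.ui-screen-hidden) >> h2');
    assert.equal(title, 'Agile Testing');
});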

This particular scenario used a list that only contains one element, so I added one more scenario that demonstrates how to work with a list of multiple elements.

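And a sketch of the multiple-results scenario; the search term and expectations are again assumptions:

it('should show all books matching a partial search', async () => {
    await page.fill('#searchBar', 'test');
    await page.waitForSelector('li.ui-screen-hidden', { state: 'attached' });

    const visibleBooks = await page.$$('li:not(.ui-screen-hidden)');
    assert(visibleBooks.length > 1);

    // Each element handle can be queried for its own children.
    for (const book of visibleBooks) {
        const title = await book.$('h2');
        assert.ok(await title.innerText());
    }
});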

Visual Testing

That last scenario got a bit long, and honestly, I’m only verifying a fraction of what I should be verifying. For more thorough tests that not only verify the count and title, but also verify everything else on the page in addition to making sure it’s displayed correctly, I’ll use visual testing.

Note that while Playwright offers image and video capture, these are meant as visual logging mechanisms and can’t be used in the assertions themselves. So, I’m going to use a proper visual testing library: Applitools.

After installing Applitools, I import the required Applitools classes.

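Assuming the Playwright flavor of the Applitools SDK, the import looks something like this:

```js
// Eyes drives the visual checks; Target describes what to capture
const { Eyes, Target } = require('@applitools/eyes-playwright');
```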

I declare and initialize the ‘eyes’ object, which is the API that does the visual testing.

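A sketch of that setup; reading the API key from an environment variable is my assumption:

```js
const eyes = new Eyes();
// The key is issued by your Applitools account; keeping it in an
// environment variable avoids hard-coding secrets
eyes.setApiKey(process.env.APPLITOOLS_API_KEY);
```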

Then I redo the last test using visual testing.

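Here's how that rewrite might look: the functional assertions collapse into a single visual checkpoint (app name, test name, and tag are illustrative):

```js
test('search for a single book (visual)', async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://automationbookstore.dev/');

  await page.fill('#searchBar', 'agile');
  await page.waitForSelector('li.ui-screen-hidden', { state: 'attached' });

  // One visual checkpoint replaces the count and title assertions
  await eyes.open(page, 'Automation Bookstore', 'search for a single book');
  await eyes.check('search results', Target.window());
  await eyes.close();

  await browser.close();
});
```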

Cross-Platform Tests

While Playwright provides support for Chromium, WebKit, and Firefox, there is no built-in support for IE.

By integrating Playwright with Applitools’ Ultrafast Grid, I can get even broader coverage for my tests!

As opposed to using the Applitools Classic Runner as I have above, I’m going to modify the setup to use the Visual Grid Runner. Notice that I pass the number 10 to the runner: this is the number of tests I’d like to run in parallel, making my run even faster!

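A sketch of that change, assuming the same SDK as above:

```js
const { Eyes, Target, VisualGridRunner } = require('@applitools/eyes-playwright');

// 10 is the number of visual test environments to run in parallel
const runner = new VisualGridRunner(10);
const eyes = new Eyes(runner);
```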

Now for the good part! I can specify all of the browsers, devices, and viewports I’d like to run against, giving me the ultimate cross-platform coverage for my tests: not only verifying them functionally, but visually as well.

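A sketch of such a configuration; the specific browsers, devices, and viewport sizes are illustrative, and the enum names follow the Applitools JavaScript SDKs:

```js
const {
  Configuration,
  BrowserType,
  DeviceName,
  ScreenOrientation,
} = require('@applitools/eyes-playwright');

const config = new Configuration();

// Desktop browsers at a couple of viewport sizes
config.addBrowser(1200, 800, BrowserType.CHROME);
config.addBrowser(1200, 800, BrowserType.FIREFOX);
config.addBrowser(1200, 800, BrowserType.EDGE_CHROMIUM);
config.addBrowser(1600, 1200, BrowserType.SAFARI);

// IE coverage that Playwright itself can't provide
config.addBrowser(1024, 768, BrowserType.IE_11);

// Emulated mobile devices
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
config.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.LANDSCAPE);

eyes.setConfiguration(config);
```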

No changes are needed to the tests themselves; they will automatically run across all of the specified configurations.


Play with Playwright Yourself

Reading blog posts helped me understand what Playwright is, but I only truly got a feel for the tool when I played around with it and explored realistic scenarios! I recommend you do the same.

You can use the code examples from this blog post as well as Playwright’s documentation to get started. Good luck!

The post Playing with Playwright appeared first on Automated Visual Testing | Applitools.

]]>
Applitools Eyes: Introduction to Automated Visual UI Testing https://applitools.com/blog/applitools-eyes-introduction-to-automated-visual-ui-testing/ Sun, 29 Jul 2018 21:56:20 +0000 https://applitools.com/blog/?p=2951 Applitools Eyes does more than just run visual UI testing—it is also a complete Application Visual Management (AVM) solution that manages and analyzes your test results. So, Applitools lets you manage the enormous number of baseline images generated by your visual UI tests, it records and aggregates the results of visual tests, and it shows you which tests passed and which failed.

The post Applitools Eyes: Introduction to Automated Visual UI Testing appeared first on Automated Visual Testing | Applitools.

]]>

Until the late 1960s, no one thought computers needed a user interface (UI), and the UIs that did exist were expensive and primitive. At the time, these UIs were being used by the military for air defense systems, were built as academic research projects, or were used as props in science fiction movies.

Then, UI pioneers such as Doug Engelbart, Alan Kay, and Ted Nelson had the crazy idea that people should be able to interact with computers and do useful work. By the late 1970s, the first UIs began to emerge at Stanford University and Xerox PARC, and they were then quickly copied by Apple and Microsoft. Today, UIs are everywhere: watches, refrigerators, cars, phones, and—of course—computers. In addition, software platform vendors have created tools that give anybody who wants to create a UI the ability to do so. Many of these tools, especially those used for web development, are simple to use and—even better—free!

But no matter how much easier or how much cheaper it becomes to build a UI, building a good UI will always be hard. In today’s world, any developer who creates customer-facing applications has to know more than how to write code; they need to understand design principles and have some artistic talent, too. With so many devices and UIs vying for our attention, and with the relentless progress of Moore’s Law continually forcing down the price of electronic hardware, the look and feel of software is becoming an increasingly important part of users’ purchasing decisions.

In short, your UI is the face of your application. Therefore, your application’s visual UI will determine whether a potential customer uses your product or service, or permanently deletes your application. Ensuring that your application looks and functions as expected requires intensive and expensive automated and manual testing. In this post, we explore a revolutionary solution from Applitools that gives you what no other automated testing solution can.

The Problem 

Most developers make tremendous efforts to ensure their applications work. Many of them invest time in running and debugging their source code and trying to build something that not only works, but also solves real problems. However, no matter how hard they try, developers are often working in isolation, and they can’t anticipate all the ways their software will be used by their potential customers. For their part, users don’t care how hard a developer worked. All they want is something that looks good and works as advertised. For a user, the way software looks matters—and if it doesn’t look right, it’s broken.


If all an application developer had to do was make things that looked decent, most of them would probably be able to develop the necessary skills and deliver great products to their users. Unfortunately, there is another unforgiving factor in this equation: time. Time is one thing that developers do not have. With today’s rapid development cycles and the large number of browsers, operating systems, devices, form factors, and screen resolutions, it is more difficult than ever to test whether your application works exactly as it should in every environment.

Until recently, the only way to comprehensively test an application was to put it in the hands of testers. These testers would run scripted or ad-hoc scenarios to manually test applications in real-life conditions. The advantage of using manual testers is that they can perform tasks that computers and automated test scripts can’t. For all its benefits, though, manual application testing has a number of drawbacks. It is an expensive and time-consuming process, and because of its boring and repetitive nature, manual testers may begin to miss or underplay visual issues.


The Solution

Applitools has developed a revolutionary solution for this problem. Applitools Eyes uses advanced cognitive vision technology, which makes it completely different from any other existing testing tool. Applitools Eyes detects any differences between two screen displays and can replace hundreds of lines of testing code with one smart visual scan.


Applitools Eyes is the first automated software testing tool that tests your application like a manual tester does—except that it works much more quickly and accurately. Applitools Eyes can test web applications on multiple browsers, including Chrome, Safari, Internet Explorer, etc., and it can test in as many screen resolutions as you want. It can also test a mobile application on an iPhone, iPad, or iPad mini in both portrait and landscape orientations.

Getting Started 

Getting started with Applitools Eyes is quick and easy. All you need to do is add calls to the Applitools Eyes cloud service from your existing automated tests. Alternatively, you can choose to send screen captures of your application directly to Applitools Eyes, in which case you will receive the added benefit of visual UI validation immediately.
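For a sense of what "adding calls" means in practice, here is a minimal sketch using the JavaScript Selenium flavor of the SDK; the page URL, app name, test name, and viewport are all illustrative:

```js
const { Builder } = require('selenium-webdriver');
const { Eyes, Target } = require('@applitools/eyes-selenium');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  const eyes = new Eyes();
  eyes.setApiKey(process.env.APPLITOOLS_API_KEY);

  try {
    // Wrap the existing test with open/close, and add a visual checkpoint
    await eyes.open(driver, 'My App', 'Login page looks right', { width: 1024, height: 768 });
    await driver.get('https://example.com/login'); // hypothetical page under test
    await eyes.check('Login page', Target.window());
    await eyes.close();
  } finally {
    await driver.quit();
  }
})();
```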

Applitools Eyes does more than just run visual UI testing—it is also a complete Application Visual Management (AVM) solution that manages and analyzes your test results. So, Applitools lets you manage the enormous number of baseline images generated by your visual UI tests, it records and aggregates the results of visual tests, and it shows you which tests passed and which failed. It also allows you to analyze test data and generate meaningful reports. Additionally, Applitools ensures your test baseline remains relevant and is not polluted by inaccurate data. Applitools Eyes provides unique tools to manage your testing baseline, and it prevents misleading information from propagating in ways that cause information overload or hide potentially damaging issues.

In addition to all the great and unique features Applitools Eyes offers, it also integrates well with many existing test automation and management systems, including Selenium, Appium, WebdriverIO, Micro Focus (formerly HP) Unified Functional Testing (formerly QuickTest Professional), and about 40 others.

Join the Visual UI Testing Revolution

In a relatively short time, computers have developed from massive industrial machines that did relatively simple tasks to what Steve Jobs referred to as “bicycles for the mind”. They have gone from communicating via punch cards and printers to using a range of visual and audio interfaces. In general, computer technology has changed rapidly, but one thing has remained constant: building an attractive UI for your apps is still hard, and if you get it wrong, you risk losing potential and existing customers.

To build great UIs, you need to invest in coding and testing. You need to create tests that behave like real people and that perceive the visual aspects of your software like real people. This is where Applitools can help you. Applitools Eyes is a smart, innovative testing solution. Using Applitools will give you the benefits of manual testing, but will also provide you with the added benefits that come with using advanced automated cognitive vision technology. With the help of this product, your software will be tested quickly, efficiently, and accurately.

To learn more about Applitools’ visual UI testing and AVM solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo, or sign up for a free Applitools account.

The post Applitools Eyes: Introduction to Automated Visual UI Testing appeared first on Automated Visual Testing | Applitools.

]]>
How Applitools changed visual UI testing at LexBlog https://applitools.com/blog/visual-ui-testing-lexblog/ Wed, 20 Sep 2017 10:47:22 +0000 http://blog.applitools.com/how-applitools-changed-product-development-at/ Guest post by Jared Sulzdorf, Director of Product Development at LexBlog There is an art to managing websites that many do not appreciate. The browser wars of the late 90s...

The post How Applitools changed visual UI testing at LexBlog appeared first on Automated Visual Testing | Applitools.

]]>
Change

Guest post by Jared Sulzdorf, Director of Product Development at LexBlog

There is an art to managing websites that many do not appreciate.

The browser wars of the late 90s carried over into the early 2000s, and while web standards have helped align how browsers render front-end code, there are still dozens of versions of the five major browsers (Edge has joined the crew!), each with its own understanding of how the web should look and act. Add to that the fact that most digital properties ship with many templates, each with its own way of displaying content, and that those templates in turn may run on dozens or hundreds of websites, and you’re left wondering if this internet thing is really worth it.

At LexBlog, where I hang my digital hat as the lead of the product development team, we manage well over one thousand websites. From the front-end to the back-end, it’s our responsibility to ensure things are running smoothly for our clients, who range from solo lawyers to some of the largest law firms in the world. With that in mind, it’s no surprise that expectations are high, and our team works to meet those standards during every update.

While our product development team makes great use of staging environments, functional testing, performance monitoring/server error logs, and unit tests to catch issues through the course of the development cycle, it still isn’t enough. The reality of developing on the web is that you can’t unit test how CSS and HTML will render in a browser, and humans can only look at so many screenshots before losing focus.

Prior to finding Applitools, LexBlog used Fake (http://fakeapp.com/), a web automation tool, to visit our sites in a staging environment after a front-end update and take screenshots. One of our support teammates would then leaf through these screenshots in an effort to find unexpected changes and report back when they were done. Unfortunately, this approach was just not scaling, and our team was running into a myriad of issues:

  • Deployments would wait for weeks while manual testing was performed
  • Manual tests would invariably miss errors, and when the code was deployed, hotfixes would quickly follow
  • The overhead in managing the humans and systems necessary for performing manual tests was untenable
  • All of the above led to low morale

The first major breakthrough was finding Selenium to replace Fake. This allowed the product team to better manage the writing and reviewing of tests. However, now that the ball was fully in our court, there was still the problem of reviewing all these screenshots.

Being technically inclined, we looked for programmatic solutions to comparing two screenshots (one taken before an update, the other after an update) and finding the diff. A myriad of tools like PhantomJS (http://phantomjs.org/) or BackstopJS (https://garris.github.io/BackstopJS/) were researched and thrown aside until we found Applitools.

Applitools had everything we were looking for:

  • An SDK that supported languages our team was familiar with
  • UI for reviewing screenshot comparisons
  • A cloud-based portal allowing multiple teammates to log in and view the results of the same test

After spending some time investing in a host of desktop scripts written in Python, we noticed immediate improvements. Deployments for front-end changes went out the door faster and with fewer errors, and our team was able to focus on higher-purpose work, leading to happier clients.

Over the years, we’ve consistently updated our approach to visual regression testing to the point where we can now introduce large changesets in code that renders on the front-end and catch nearly every regression in staging environments before deploying to production. Applitools, in combination with Selenium, Node, React, and the WordPress REST API, has allowed us to create a fully-fledged visual regression testing application that saves our team countless hours and headaches all while ensuring happy clients. It really is a match made in heaven.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.


The post How Applitools changed visual UI testing at LexBlog appeared first on Automated Visual Testing | Applitools.

]]>