Functional Test Archives - Automated Visual Testing | Applitools https://applitools.com/blog/tag/functional-test/ Applitools delivers the next generation of test automation powered by AI assisted computer vision technology known as Visual AI. Tue, 22 Mar 2022 16:03:47 +0000 en-US hourly 1 How To Remove Blind Spots With Visual Testing https://applitools.com/blog/remove-blind-spots/ Mon, 06 Apr 2020 15:00:00 +0000 https://applitools.com/?p=17364 Visual bugs are errors in the presentation of an application. They appear all the time, and frequently surface when applications are viewed in the various viewport sizes of our mobile...

The post How To Remove Blind Spots With Visual Testing appeared first on Automated Visual Testing | Applitools.

Visual bugs are errors in the presentation of an application. They appear all the time, and frequently surface when applications are viewed in the various viewport sizes of our mobile devices (laptops, phones, tablets, watches).

What’s horrifying about these pesky visual bugs is that they cannot be caught by typical automated tests. This is because most test automation tools consult the DOM (document object model) to report on the state of the application.

Take this tweet for example:

The application is not showing the last digit of the price, yet the full price is there in the DOM. The first price is NZ$1250 but is showing as NZ$125. Big difference!

Sadly, test automation tools will tell you that the price is correct—even though that’s not what the users see. Here’s how visual testing can help you catch these problems.

What is visual testing?

In order to catch visual bugs, we need to add “eyes” to our automated scripts—an ability for our tests to see the application as our users do and verify its appearance.

Visual testing allows us to do just that. Visual testing tools work by taking a screenshot of your application when it’s in its desired state, and then taking a new screenshot every time the test is executed.

The screenshots from the regression run are compared against the approved screenshot. If any meaningful differences are detected, the test fails.

Because Applitools uses artificial intelligence (AI) to view the screenshots just as we would as humans, it is much more effective than basic pixel-based image comparison techniques.
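To see why basic pixel comparison falls short, here is a conceptual sketch in plain JavaScript of the naive approach. This is my own illustration, not Applitools code; screenshots are modeled as 2-D arrays of pixel values.

```javascript
// Naive pixel-by-pixel diff, as used by basic image-comparison tools.
function pixelDiffCount(baseline, checkpoint) {
  let diffs = 0;
  for (let y = 0; y < baseline.length; y++) {
    for (let x = 0; x < baseline[y].length; x++) {
      if (baseline[y][x] !== checkpoint[y][x]) diffs++;
    }
  }
  return diffs;
}

// A one-pixel anti-aliasing shift fails the naive comparison,
// even though a human would see no meaningful difference.
const approved = [[0, 0, 1], [0, 1, 0]];
const regression = [[0, 1, 0], [0, 1, 0]]; // two pixels shifted
console.log(pixelDiffCount(approved, regression)); // 2
```

A human-like comparison would report no difference here; the naive diff flags two.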

Do I have to start over?

When I tell engineers about the awesomeness of visual testing, they often ask if they will have to throw out all of their existing tests and start over. The answer is no!

Visual testing can be integrated into whatever you already have. Applitools supports all the major programming languages as well as automation frameworks such as Selenium WebDriver, Cypress, Appium, and more.

And while the visual assertions can be added in addition to your existing functional assertions, an awesome benefit is that visual testing encompasses a lot of the functional checks as well.

For example, in the Wellington Suite example from the tweet above, the existing tests probably include assertions for the prices. While you certainly can keep those assertions and add a visual check to make sure the prices appear correctly, the comparison done by visual testing algorithms already ensures that the price is correct and displayed properly. This means less test code needs to be written and maintained.

What if I don’t want to capture the entire screen?

While capturing a screenshot of the entire screen will help you catch bugs you didn’t think to assert against, it’s not always the right approach to visual testing. Sometimes, the screen that you’re testing against contains areas that are still under development and therefore frequently changing, or there just may be data on the screen that is irrelevant for your tests.

Ignore regions

Applitools allows you to ignore certain regions of your screen. This is perfect for areas that are irrelevant to the test, for example status bars, date lines, and ads. For instance, in my mobile tests I always ignore the status bar.
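Conceptually, an ignore region acts like a mask applied to both images before comparison. The sketch below is illustrative only; the region shape and function names are my own, not the Applitools API.

```javascript
// Regions are {x, y, w, h} rectangles measured from the top-left.
function inRegion(x, y, region) {
  return x >= region.x && x < region.x + region.w &&
         y >= region.y && y < region.y + region.h;
}

// Count pixel differences, skipping any pixel inside an ignore region.
function diffIgnoring(baseline, checkpoint, ignoreRegions) {
  let diffs = 0;
  for (let y = 0; y < baseline.length; y++) {
    for (let x = 0; x < baseline[y].length; x++) {
      if (ignoreRegions.some(r => inRegion(x, y, r))) continue; // masked
      if (baseline[y][x] !== checkpoint[y][x]) diffs++;
    }
  }
  return diffs;
}

// The "status bar" occupies the top row; its clock changes every run.
const statusBar = { x: 0, y: 0, w: 3, h: 1 };
const approved   = [[1, 1, 1], [0, 0, 0]];
const regression = [[1, 9, 1], [0, 0, 0]]; // clock digit changed
console.log(diffIgnoring(approved, regression, [statusBar])); // 0
```

Without the ignore region, the same comparison would fail on the changed clock digit.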

Scope to regions

If there isn’t a specific area you’d like to ignore, but there is a specific area you want verified, you can do that with visual testing as well. By annotating the screenshot or programmatically specifying the locator of the region, you can scope your visual check to that area only.

Be mindful of gaps

Verifying an entire screen ensures that nothing is missed, because it inherently includes the verification of things you maybe wouldn’t have covered with traditional assertions. When narrowing the scope of your visual testing, you must be mindful of gaps you may be missing.

My advice is to broaden your scope as much as possible (maybe an entire section versus just a specific element within the section) and couple the visual assertions with traditional functional ones when needed.

What about dynamic data that changes every time the test is executed?

You may not think visual testing can handle applications where the data is different every time the test is run, but it actually can. By using AI, visual testing can determine the patterns that make up the layout of the screen and then use a different comparison algorithm to ensure that the layout is intact during regression runs, regardless of the actual data.
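A layout-level comparison can be sketched as checking each element's position and size while ignoring its content. This toy model is my own illustration of the idea, not the actual algorithm.

```javascript
// Verify that each visual element keeps its position and size,
// ignoring whatever content it happens to contain.
function layoutMatches(baselineElems, checkpointElems) {
  if (baselineElems.length !== checkpointElems.length) return false;
  return baselineElems.every((b, i) => {
    const c = checkpointElems[i];
    return b.x === c.x && b.y === c.y && b.w === c.w && b.h === c.h;
  });
}

// A news site: headline text changes daily, layout stays intact.
const monday  = [{ x: 0, y: 0, w: 300, h: 40, text: 'Storm hits coast' }];
const tuesday = [{ x: 0, y: 0, w: 300, h: 40, text: 'Markets rally' }];
console.log(layoutMatches(monday, tuesday)); // true
```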

This technique is perfect for applications such as news sites, social media apps, and any other application where the data is not static.

Does visual testing work only on web applications?

Visual testing is not limited to just web applications. Visual testing can also be used for mobile apps, design systems such as Storybook, PDF files, and standalone images.

You can also accomplish cross-platform testing using visual grids. These grids enable faster, smarter visual testing across a wide range of devices and viewport sizes—prime locations for visual bugs!

Visual grids differ from other testing grids in that they do not have to execute your entire scenario on every configuration. Instead, a visual grid runs the scenario on one configuration, captures the state of your application, and then blasts that exact state across all of your supported configurations—thus saving time and effort in writing and executing cross-platform tests.
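The savings can be sketched as: run the scenario once, then fan the captured state out to every configuration. The function and configuration names below are hypothetical, for illustration only.

```javascript
// Run the scenario exactly once, then render and check the captured
// state against every supported configuration.
function runOnVisualGrid(runScenario, renderAndCheck, configs) {
  const capturedState = runScenario(); // executed exactly once
  return configs.map(cfg => renderAndCheck(capturedState, cfg));
}

let runs = 0;
const results = runOnVisualGrid(
  () => { runs++; return '<html>...</html>'; },           // scenario
  (state, cfg) => `${cfg.browser}@${cfg.width}: checked`, // per-config check
  [{ browser: 'chrome', width: 1200 }, { browser: 'firefox', width: 375 }]
);
console.log(runs);           // 1, the scenario ran once, not per config
console.log(results.length); // 2, but both configs were checked
```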

How to get started with visual testing

Applitools has a free-forever account option which is perfect for smaller projects. To learn how to do all of the cool things I’ve mentioned and much more, head on over to Test Automation University for my free course on visual testing!

[Note: A version of this blog was published previously at TechBeacon.]

Visual Validation with Dynamic Data https://applitools.com/blog/visual-validation-dynamic-data/ Mon, 16 Dec 2019 23:02:24 +0000 https://applitools.com/blog/?p=6825 You’re running functional tests with visual validation, and you have dynamic data. Dynamic data looks different every time you inspect it. How do you do functional testing with visual validation,...

The post Visual Validation with Dynamic Data appeared first on Automated Visual Testing | Applitools.

You’re running functional tests with visual validation, and you have dynamic data. Dynamic data looks different every time you inspect it. How do you do functional testing with visual validation, when your data changes all the time?

I arrived at Chapter 7 of Raja Rao DV’s course on Test Automation University, Modern Functional Test Automation Through Visual AI. Chapter 7 discusses dynamic data – content that changes every time you run a test.

Dynamic data pervades client screens: digital clocks, wireless signal strength, location services, alarm settings, Bluetooth connectivity. All these elements change the on-screen pixels, but they don’t reflect a change in functional behavior.

Consider a bank website that displays the time remaining until the local branch closes, in case you want to visit the branch. That time information changes on the screen, and Visual AI captures visual differences. So, how do you automate tests that will contain an obvious visual difference?

Dynamic regions can be the rule, rather than the exception, in web apps. But, for the purposes of visual validation, dynamic elements on-screen comprise the group of test exceptions. You need a way to handle these exceptions.

Match Levels

Visual validation depends on having a good range of comparison methods, because dynamic data can affect the results. Visual AI groups pixels into visual elements and compares elements to each other. The strictness you use in your comparison is called the match level.

Applitools Visual AI determines the relative location, boundary, and properties (colors, contents, etc.) of each visual element. If there is no prior baseline, these elements are saved as the baseline. Once a baseline exists, Applitools checks the checkpoint image against the baseline.

Raja introduces three kinds of match levels for Applitools to compare your checkpoint against your baseline.  You can use these match levels to inspect a subset of a page, a screenful, or an entire web page.  Here are the three main match levels:

  • “Strict” – Visual AI distinguishes location, dimension, color, and content differences as a human viewer would.
  • “Content” – Visual AI distinguishes location, dimension, and content differences. Color differences are ignored as long as the content can be distinguished. Imagine wanting to see the impact of a global CSS color change.
  • “Layout” – Visual AI distinguishes location and dimension and ignores content like text and pictures. This match level makes it easy to validate the layout of shopping and publication sites with a consistent layout and dynamic content.

You choose the match level appropriate for your page. If you have a page with lots of changing content, you choose “Layout” – which checks the existence and placement of regions but ignores their content. If you just made a global color change, you use “Content.” In most cases, you use “Strict.”

You set the match level in your call to open Applitools Eyes.  If you don’t specify a match level, it defaults to “Strict.”
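The three match levels can be modeled as progressively stricter comparisons. This sketch is conceptual, to show what each level considers; it is not how Applitools implements match levels.

```javascript
// A match level selects which element properties take part in the
// comparison. Defaults to "Strict", as described above.
function elementsMatch(a, b, matchLevel = 'Strict') {
  const samePlace = a.x === b.x && a.y === b.y && a.w === b.w && a.h === b.h;
  switch (matchLevel) {
    case 'Layout':  return samePlace;                            // position + size only
    case 'Content': return samePlace && a.content === b.content; // ignore color
    case 'Strict':  return samePlace && a.content === b.content &&
                           a.color === b.color;                  // everything a human sees
  }
}

const before = { x: 0, y: 0, w: 100, h: 20, content: 'Buy', color: 'blue' };
const after  = { x: 0, y: 0, w: 100, h: 20, content: 'Buy', color: 'green' };
console.log(elementsMatch(before, after, 'Strict'));  // false, color changed
console.log(elementsMatch(before, after, 'Content')); // true, color ignored
```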

Handling Exceptions – Regions

We like to think of our applications on a page-by-page basis. Each locator points to a unique page that behaves a certain way. In some cases, the relevant application content resides on a single screen. Often, though, applications and pages extend beyond the bottom of the visible screen. Occasionally, content extends wider than the visible screen as well.

By default, Applitools captures a screenful – the current viewport. Raja covered this code specifically in the prior chapters when he showed us how to use:

eyes.checkWindow();

Using the same command, with the “fully” option, you can capture the full page, not just the current viewport. Assuming the page scrolls beyond the visible screen, you can have Applitools scroll down and across all the screens and stitch together a full page. So you can compare full pages of your application, even if it takes several screens to capture the full page.
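The stitching behavior can be sketched as slicing a tall page into viewport-sized screenfuls and concatenating them. A toy model, in which rows stand in for rendered content:

```javascript
// Capture a page one viewport at a time and stitch the slices
// together into a single full-page capture.
function captureFullPage(pageRows, viewportHeight) {
  const stitched = [];
  for (let top = 0; top < pageRows.length; top += viewportHeight) {
    const slice = pageRows.slice(top, top + viewportHeight); // one screenful
    stitched.push(...slice);
  }
  return stitched;
}

// A 5-row "page" captured through a 2-row "viewport".
const page = ['header', 'hero', 'products', 'reviews', 'footer'];
console.log(captureFullPage(page, 2).length); // 5, nothing below the fold is missed
```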

Be aware that the default comparison uses strict mode. You can choose a different mode for your comparison.  And, you can handle exceptions with regions.

So, now that you know that you can instruct Applitools to capture a full page, or a viewport, what happens when you have dynamic data, or other parts of a page that could change? You need to identify a region that behaves differently.

Applitools adds the concept of “regions.” As Raja describes, a “region” is a rectangular subset of the screen capture – identified by a starting point X pixels across and Y pixels down relative to the top of the page, plus W pixels wide and H pixels high.
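In code, such a region might look like a simple rectangle used to crop a capture. This is an illustrative model of the X/Y/W/H idea, not the Applitools region API.

```javascript
// A region as an {x, y, w, h} rectangle, measured in pixels from the
// top-left of the page, and the sub-image it selects from a capture.
function cropToRegion(image, { x, y, w, h }) {
  return image.slice(y, y + h).map(row => row.slice(x, x + w));
}

const capture = [
  [1, 2,  3,  4],
  [5, 6,  7,  8],
  [9, 10, 11, 12],
];
// 2 pixels across, 1 down, 2 wide, 2 high:
console.log(cropToRegion(capture, { x: 2, y: 1, w: 2, h: 2 })); // [[7, 8], [11, 12]]
```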

Control Region Comparisons

Once  you have a region, you can use one of the following selections to control the inspection of that region:

  • Ignore – ignore that region completely. Its contents do not matter when the test identifies differences. Useful for elements like counters.
  • Floating – the content within the region can shift position, such as text that reflows from run to run.
  • Strict – content that should stay in the same place and color from screen to screen.
  • Content – content that should stay in the same place but may vary in color from screen to screen.
  • Layout – content that can change but keeps a common layout structure from screen to screen.

Regions let you be permissive when you use a restrictive match level. “Ignore” literally means that – ignore the content of the region. There may be times you want to ignore a region. More often, though, you might want to ensure that the region boundary and content exist – for this you use “Layout.”

Regions let you handle exceptions on a more restrictive basis as well. For example, on a page using layout mode, you can create a region and use “strict” to compare content and color that should be identical – such as header or menu bar.

Tester’s Choice

One big point Raja makes is that you get to choose how to deploy Visual AI. Select the mode that matches the page behavior you expect, and then set the appropriate mode for handling exceptions.

Raja demonstrates how you can choose to define exceptions in the UI or in your test code. You can choose to set the exceptions in the Applitools UI. Once you set a region with a specific match level, that region with that match level persists through future comparisons. Alternatively, you can add regions to handle exceptions directly in your test code. Those region definitions persist as long as they persist in your code.

You don’t need to capture an entire window or page. You can run your eyes.open() and use:

eyes.checkRegion()

to capture just the individual regions you need, at an appropriate comparison level. This kind of test can be useful during app development, when you want to check the behavior of specific elements you are building.

If you’re really focused on using element-based checks, you can even run:

eyes.checkElement()

The checkElement instruction uses web element locators to find specific on-page elements.  checkElement lets you use a legacy identifier in a modern functional test approach. In general, though, checkElement adds more complexity compared with visual validation.

The key insight is that, for a given captured page, you can define your mode for the full capture and exceptions for specific regions, so that you cover the entire page.

Handling Expected Changes

When you make changes, all your captures must be updated. CSS, icons, menus, and other changes can affect multiple pages – or even your entire site. Imagine having to maintain all those changes – page by page. Yikes.

Fortunately, Applitools makes it easy to accept common changes across multiple pages.

Whenever Applitools encounters a difference on a page, you are asked to accept or reject the change. If you reject the change, it’s an error you can flag to development. But, if you accept the change, you can also use a feature called automated maintenance to accept the same change on all other pages where it has been discovered.

Update your corporate logo. Done.

Install a new menu system. Easy.

You can use Automated Maintenance to accept changes. You can also use Automated Maintenance to deploy regions across all the pages – such as ignore regions.

Of course, the more comprehensive your changes, the more challenging it is to use automated maintenance. If you make some significant changes in your layout, expect to create new baselines as well as use automated maintenance.

Conclusions about Visual Validation and Dynamic Data

We all want to build applications that achieve business outcomes. We often build visually interesting pages with changing content designed to keep buyers engaged. But we also know that testing requires repeatability – meaning that dynamic content may be great for business, but testing requires predictable results.

Dynamic data can limit the benefits of visual validation, so you need a way to handle it in your visual validation solution. Applitools gives you tools to handle dynamic parts of your application. You can handle truly dynamic sections by ignoring regions, treating those regions as layout regions, or even treating a whole page as a layout and letting sections and content change.

And, when you make global changes, automated maintenance eases the pain of updating all your baseline images.

As Raja makes clear, Applitools has thought not just about discovering visual changes, but also about handling unexpected changes that are defects, dynamic data that would otherwise produce false-positive defects, and expected global changes affecting multiple pages. All of these features make up key parts of a modern functional testing system.

And now, on to Chapter 8.

Complex Functional Testing – Simplified https://applitools.com/blog/functional-testing-simplified/ Tue, 03 Dec 2019 21:36:20 +0000 https://applitools.com/blog/?p=6776 Visual AI simplifies your functional test while adding visual coverage.  Visual AI compares rendered output against rendered output. If visual differences exist, you can identify those as intended or not.

The post Complex Functional Testing – Simplified appeared first on Automated Visual Testing | Applitools.

How does functional testing with visual assertions help simplify test development for complex real-world apps? Like, say, a retail app with inventory, product details, rotating displays, and shopping carts?

My special blog series discusses Modern Functional Testing with Visual AI, Raja Rao’s course on Test Automation University. I arrived at Chapter 6 – E-Commerce Real World Example. In this review, I hope to give you an overview of Raja’s examples and how they might apply to your test challenges.

Real-World Challenges

Raja starts by explaining the challenges in creating sophisticated functional tests for an e-commerce app. His demonstration site lets a shopper:

  • Scroll inventory
  • Select items
  • Put items in a shopping cart
  • Delete items from a cart
  • Process transactions

The app includes featured items that can change each time a shopper visits the site.

In this case, Raja uses an e-commerce app from SAP, with sample data and a sample setup. The SAP application includes all the features of any e-commerce site, and it does have a simple look-and-feel for all that complex functionality. It’s useful for demonstrating the challenges of implementing a complicated test for the app pages and functionality, which I’ll dive into in a moment. But first…

There may be those among you readers who think,

“Hey, this isn’t a legitimate real-world test. He’s using a canned application. How does that compare with complex real-world apps we code and test ourselves?”

I have to admit that I was thinking that for a sec. And then I remembered all the times that people hesitate to apply app upgrades – even when there’s a security issue. Why? Because nobody feels comfortable about how that app will behave in the areas outside the issue that got ‘fixed.’ How do we know how an app we have customized will look and behave after the upgrade? How many of us have felt burned once we discovered that the vendor’s testing likely didn’t include our specific configuration?

Hold that thought – I’ll get back to it in a sec.

Challenges Testing Complex Applications

In the first few chapters, Raja focused on individual technologies that result in complex functional tests. He covered testing tables, data-driven testing, testing dynamic content, and testing iFrames. With each of these technologies, a small change on a single input can have a significant change on the output. For example, how do you ensure that the sort function on a table behaves as expected?

All of these tests share a common test strategy: perform an action, and then assert that the output matches expectations. Legacy functional tests require the test coder to assert each expected element exists in the DOM. Testing gets complicated by many conditions – but let’s boil them down to three:

  • A small change in an input makes a big change on the output that needs to be checked
  • An app gets updated – and locators and formatting change
  • Lots of locators exist on the app page

And, in each of the prior chapters, Raja shows how visual assertions with Visual AI result in functional testing simplified.

Visual vs Coded Assertions

The common issues involve all the coded assertions of outputs. At a root level, do you inspect the entire DOM? Or, do you just inspect the elements you expect to change?

Some testers get functional test myopia – they focus only on the elements and behavior they expect to change in their tests. These testers think that checking every element on every page after every change seems silly. They make a change and look for expected behavior.

When you test a table sort or some other activity that changes many elements on a page, you have your hands full just writing the assertions for your expected differences. Everything else should just take care of itself.

Raja’s point in each of these early chapters shows that coded assertions for DOM inspection miss all sorts of behaviors that can vary from browser to browser or app version to app version. Visual assertions with visual AI allow users to simplify their functional test code. He shows why, compared with coded DOM checks,  visual assertions provide:

  • Simpler deployment
  • Simpler maintenance
  • More robust test infrastructure

So, in chapter 6, Raja applies this analysis to testing an e-commerce app.

Testing E-Commerce App Elements

Raja begins by pointing out that many of the elements in an e-commerce app behave like elements in his earlier chapters. We find:

  • A table of catalog elements with regularly placed structural parts, such as product name, description, price, and availability.
  • Parts of the app screen that depend on previous activity (e.g. recently viewed products)
  • App behaviors best tested as data-driven (add to cart on an in-stock vs. out of stock item)

With this example, each of the prior chapters in Raja’s course comes into play.

For instance, with the catalog, the legacy approach would require the tester to identify each web element locator in the catalog section, and then ensure that the value in the catalog matched the value in the test code.  With the modern approach – take a snapshot. This aligns with chapter 2 of Raja’s course.

With the items in the catalog, a shopper can inspect the details of an element and then click to add it to his or her cart.  The shopper can inspect both available and unavailable items – but unavailable items generate an error when the button is clicked to add the item to the cart.  Testing this behavior reminds me of chapter 3 in Raja’s course, the chapter about data-driven testing.

Legacy Testing – Assert Element Locators

Raja then walks through the examples and shows the legacy test code. Here is the legacy test code for the catalog test. Note that each output value must be validated.

View the code on Gist.

Basically, for each item in the catalog, check its:

  • Label
  • Price
  • Availability
  • Color
  • Image

Compare each against its expected value. The code calls out to each web element locator and asserts its value.
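The pattern can be sketched roughly like this. The field names and values below are invented for illustration; they are not taken from Raja’s actual Gist.

```javascript
// Legacy pattern: one coded assertion per property, per catalog item.
function assertCatalogItem(actual, expected) {
  const failures = [];
  for (const field of ['label', 'price', 'availability', 'color', 'image']) {
    if (actual[field] !== expected[field]) {
      failures.push(`${field}: expected ${expected[field]}, got ${actual[field]}`);
    }
  }
  return failures;
}

const expected = { label: 'Notebook', price: '$29', availability: 'In Stock',
                   color: 'black', image: 'notebook.png' };
const rendered = { label: 'Notebook', price: '$2',  availability: 'In Stock',
                   color: 'black', image: 'notebook.png' };
console.log(assertCatalogItem(rendered, expected)); // ['price: expected $29, got $2']
```

Multiply this by every item in the catalog, and by every locator change between app versions, and the maintenance burden becomes clear.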

Read this code, and then think about your own tests. Consider the similarities and differences. How much of this code remains valid between app upgrades? How much must you rewrite when development adds a new function – or a new feature? Or, if the locators change?

Applitools Simplifies Functional Testing

If you’re with me so far, you understand Raja’s next point – why not leave the assertions to Visual AI? Doing so lets you focus on code that exercises the application while Visual AI performs the output comparisons.

For testing the tables of inventory, Visual AI lets you perform tasks like sorting and filtering and check the output. For testing success versus error conditions – like trying to add an in-stock versus an out-of-stock item to the shopping cart – this is the data-driven testing scenario. As he showed in the prior chapters, and again here, Visual AI simplifies the entire test development process.
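The data-driven scenario can be sketched as the same action run over a table of inputs, each with its expected outcome. The item model here is hypothetical, for illustration only.

```javascript
// The action under test: adding an item to the cart either succeeds
// or produces an error, depending on stock.
function addToCart(item) {
  return item.inStock ? 'added' : 'error: out of stock';
}

// Data-driven cases: same action, different inputs and expectations.
const cases = [
  { name: 'in-stock item',     item: { inStock: true },  expect: 'added' },
  { name: 'out-of-stock item', item: { inStock: false }, expect: 'error: out of stock' },
];
const outcomes = cases.map(c => addToCart(c.item) === c.expect);
console.log(outcomes); // [ true, true ]
```

With Visual AI, the per-case assertion could be replaced by a visual check of the resulting screen, so new cases need only new input rows.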

When you add new behaviors, they become new checkpoints that you can choose to include in the future baseline.  And, if you encounter unexpected behaviors, you can reject those changes and send them back to your development team for repair.

As we have discussed elsewhere, visual AI provides the accuracy of AI as used in self-driving cars and other computer vision technology. Visual AI breaks a sea of pixels into visual elements without relying on the DOM to identify those elements. Once you have established a baseline, every subsequent snapshot of that screen will compare the visual elements on that page versus your checkpoint.

Instead of relying on a simplistic pixel comparison to determine whether the checkpoint and baseline differ, visual AI checks the boundaries of the element and determines if the element itself differs in color, font, or – in the case of a photograph or other image file – image completeness.

Rather than depend on DOM differences alone, visual AI compares the rendered page with the previously rendered page.

Visual AI and Packaged Application Upgrades

As I wrote earlier, how does testing a packaged application compare with testing a custom app?

As a test engineer, using the legacy approach leaves you at the mercy of the vendor. You can write all the functional tests you want, but the developer can change element locators between releases and leave you with a huge coding task – rewrite all your element locators to initiate behaviors and all your locators to measure responses.

You likely know the situation I’m describing. I know many companies that use a vendor’s web service for CRM, marketing, or commerce. Once you customize one of these sites for your needs, you worry that a vendor upgrade can break your customized app. The vendor may have tested the generic app, but not the version with your custom CSS, data, and layout.

In my experience, test engineers loathe having to test packaged applications. Too often, test engineers must react to changes made in an app that the company doesn’t own with tools that make it difficult to expose unexpected changes. Imagine having to use legacy tests to validate that the browser rendering remains unchanged after an upgrade. And, if the browser rendering has changed, to highlight that change and pass the information back to the app owners.

Visual AI bridges the gap for a packaged app and third-party app owners. If you have made no change and upgrade your app version, the rendering check with Visual AI makes it easy to test whether the pages have changed. If you make a CSS change or other local or global change, Visual AI ensures that the changes match your expectations.

What About Dynamic Content?

One of the conditions Raja discusses is the case of dynamic content on a page. You find this content on media sites – with layouts showing the latest stories that update regularly. You find this content on e-commerce sites as well – showing wares of interest to prospective buyers.

On the app for this chapter, it’s the content that shows featured items. In the two screenshots that follow, the “Deal of the Day” rotates to multiple images, and “Promoted Items” updates each time the page gets refreshed.

This behavior makes sense – a seller wants to show a range of wares to the shopper.

Neither the legacy approach nor the modern approach can handle dynamic content through automation alone.

Raja describes how Visual AI allows you to mark this region as an ignore region when comparing the pages for test automation. My takeaway is that dynamic content requires a different kind of testing, earlier in the test development process. For the functional behavior test, Applitools Visual AI lets you focus on what matters.

Conclusion – Visual AI Simplifies Functional Test

In walking through Chapter 6, you see how the legacy approach leans heavily on web locators that can change from version to version. You spend lots of resources on maintaining your tests – especially when locators change between app versions. The cost of test maintenance inclines most organizations to shy away from app version upgrades and even app behavior changes. Every time you make a change, you incur unexpected costs.

In contrast, Visual AI simplifies your functional test, while adding visual coverage.  Visual AI compares rendered output against rendered output. If visual differences exist, you can identify those as intended or not. Intended changes become the new baseline.  Unintended changes can go back to the app owners. And, all this is easy to manage and maintain – because you, as the tester, don’t worry about the individual identifiers you need to check along the way.

Functional Test Myopia https://applitools.com/blog/functional-test-myopia/ Tue, 22 Oct 2019 19:11:43 +0000 https://applitools.com/blog/?p=6352 Functional testing myopia results from the code-based nature of functional testing.  How can you tell that your app appears the way your customers expect?

The post Functional Test Myopia appeared first on Automated Visual Testing | Applitools.

Myopia means nearsightedness. If you write functional tests, you likely suffer from functional test myopia. You focus on your expected outcome and your test coverage. But you miss the big picture.

We all know the joke about the guy on his knees under a streetlamp one night. As he fumbles around, a passerby notices him.

“Sir,” says the passerby, “What are you doing down there?”

“I’m looking for my wallet,” replies the guy.

“Where did you lose it?” asks the passerby.

“Down there, I think,” says the guy, pointing to a place down the darkened street.

“Why are you looking here, then?” wonders the passerby.

“Ah,” says the guy, “I can see here.”

This is the case for functional testing of your web app. You test what you expect to see. But your functional tests can’t see beyond their code to the daylight of the rendered browser.

What Are Your Goals For Web App Testing?

Let me start by asking you – what goals do you set for your web app testing?

Have you ever wondered, if you’re building a web app, why do you bother using a browser to test the app? If you’re testing whether the app responds correctly to inputs and selectors, why not test using curl or a text-based browser like Browsh?

I’m being ironic. You know that the browser renders your app, and you want to make sure your app renders correctly.

But think about it: a text-based browser that converts web pages into something out of Minecraft will still pass a functional test — even if those pages are not rendered properly. That shows how limited traditional functional testing tools are.

I ask this question not to trip you up, but to help clarify your expectations.

In my experience, functional testers look for proper functionality for proper data input, handling data input exception cases, handling workflow exceptions, and handling server exceptions.

Whatever your goals, when you test your applications, you expect to see the application behave as expected when handling both normal and exception cases. You hope to cover as much of the app code as possible, and you want to validate that the app works as designed.

Why Functional Test Automation Can Fail

Functional testing myopia results from the code-based nature of functional testing. To drive web app activity, most engineers use appropriate technology, such as Selenium WebDriver. Selenium suits this need quite well. Other tools, like Cypress, can drive the application.

Similarly, a code-based tool evaluates the application response in HTML. TestNG, JUnit, Cypress, or some other tool inspects the DOM for an exact or relative locator that matches the coder’s intended response and validates that:

  • The response text field exists
  • The text in that field matches expectations

Potentially, a functional test might validate the entire DOM for the response page resulting from a given test action. Practically, only the expected response gets checked.
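That kind of check can be sketched in a few lines. A minimal illustration using only Python's standard-library HTML parser; the HTML snippet and the "price" id are hypothetical:

```python
# A minimal sketch of a DOM-level functional assertion. The HTML string
# and the "price" id are hypothetical, for illustration only.
from html.parser import HTMLParser

class PriceFinder(HTMLParser):
    """Collects the text of the element whose id is 'price'."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.price_text = ""

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("id") == "price":
            self.in_price = True

    def handle_endtag(self, tag):
        self.in_price = False

    def handle_data(self, data):
        if self.in_price:
            self.price_text += data

# The inline CSS clips the element so a user sees only part of the
# price, but the full text is still present in the DOM.
html = '<span id="price" style="width:48px; overflow:hidden">NZ$1250</span>'

finder = PriceFinder()
finder.feed(html)

# The functional assertion passes, even though a user would see NZ$125.
assert finder.price_text == "NZ$1250"
```

The assertion succeeds because the checked state lives in the DOM, not in the rendered pixels: exactly the myopia this article describes.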

Just like the person on his or her knees under the streetlamp, functional testing validates only the conditions the test can check. And herein lies the myopia. Even if the code could validate the entire DOM response, it cannot validate the rendered page. Functional testing tests pre-rendered code.

The difference between a functional test and the user experience explains functional test myopia. Even our best web browser automation code checks only a tiny portion of page attributes and cannot match the full visual experience a human user has.

Alone, functional tests miss things like:

  • Overlapping text
  • Overlapping action buttons
  • Action buttons colored the same as the surrounding text
  • User regions or action areas too small for a user to see
  • Invisible or off-page HTML elements that a user wouldn’t see
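Functional tests rarely look at geometry at all. As a crude illustration of the first two gaps, consider a bounding-box overlap check. The coordinates below are hypothetical; in a real test they might come from WebDriver's element.rect, a value typical functional assertions never consult:

```python
# A minimal sketch of a bounding-box overlap check. The coordinates are
# hypothetical; in a real Selenium test they might come from
# element.rect. Typical functional assertions never inspect them.
def rects_overlap(a, b):
    """True if two (x, y, width, height) rectangles intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

submit_button = (100, 200, 120, 40)   # x, y, width, height
cancel_button = (180, 210, 120, 40)   # drawn on top of the submit button

# Both buttons exist in the DOM, so a locator-based check passes,
# yet the rendered page shows them overlapping.
assert rects_overlap(submit_button, cancel_button)
```

A locator-based test asserts only that both buttons exist; the overlap never registers unless something examines the rendered layout.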


How Do You Know Your App Renders Correctly?

Can you tell whether your application appears the way your customers expect or the way your designers intended? How do you validate app rendering?

Many app developers have leaned on manual testing for validation. One of my good friends has made a career as a manual QA tester. His job – run an app through its paces as a real user might, and validate the app's behavior.

Manual testing can uncover visual bugs missed by functional test myopia. But manual testing suffers from three downsides:

  • Speed – you might have several apps, with dozens of responsive pages to test on several screen sizes, with every daily build. Even if a tester can check a page thoroughly in a couple of minutes, those minutes add up to delays in your release cycle.
  • Evaluation Inconsistency – manual testers can easily miss small issues that affect users or indicate a problem.
  • Coverage Inconsistency – test coverage by manual testers depends on their ability to follow steps and evaluate responses, which varies from tester to tester.

Test engineers who consider automation for rendered applications seek the automation equivalent of manual testers: a tool that can inspect the visual output and judge whether or not the rendered page matches expectations. They seek a visual testing strategy that speeds the testing process and validates visual behavior through visual test automation.

Conversely, test engineers don’t want a bot that doesn’t care if web pages are rendered Minecraft-style by Browsh.

[Image. Source: https://pxhere.com/en/photo/536919]

Legacy Visual Testing Technologies

Given the number of visual development tools available, one might expect visual testing tools to match them in scope and availability. Unfortunately, no. While one can develop a single app that runs on native mobile and desktop browsers, on screens ranging from a five-inch LCD to a 4K display, the two most commonly used visual testing technologies remain pixel diffing and DOM diffing. And both have issues.

DOM diff comparisons don’t actually compare the visual output. DOM diffs:

  • Identify web page regions through the DOM.
  • Can expose changes in the layout when the regions differ.
  • Indicate potential differences when a given region shows a content or CSS change.
  • Remain blind to a range of visual changes, such as different underlying content behind the same identifier.
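That last blind spot can be sketched with a toy diff over simplified (id, tag, src) tuples. This is an illustration, not any real tool's algorithm, and the URL is hypothetical:

```python
# A toy DOM diff over simplified (id, tag, src) tuples. Real DOM diffs
# walk full trees; this only illustrates the blind spot: the comparison
# keys on markup, not on what the page actually shows.
def dom_diff(baseline, current):
    """Return ids whose (tag, src) changed between snapshots."""
    base = {node[0]: node[1:] for node in baseline}
    curr = {node[0]: node[1:] for node in current}
    return sorted(i for i in base.keys() | curr.keys()
                  if base.get(i) != curr.get(i))

# Suppose the image file behind this URL was replaced on the CDN.
baseline = [("hero-img", "img", "https://cdn.example.com/banner.png")]
current  = [("hero-img", "img", "https://cdn.example.com/banner.png")]

# Identical markup, so the diff reports nothing, even though the
# rendered page may now show entirely different content.
assert dom_diff(baseline, current) == []
```

Any change behind an unchanged identifier, such as a swapped image file or altered web font, sails straight through a DOM-level comparison.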

Pixel diffing uses pixel comparisons to determine whether or not page content has changed. But pixel rendering can vary with the browser version, operating system, and graphics card. Screen anti-aliasing settings and image rendering algorithms can also produce pixel differences. So while pixel diffs can identify valid visual differences, they can also flag insignificant differences that a real user could not distinguish. As a result, pixel diffs suffer from false positives – reported differences that require you to investigate, only to conclude they're not different. In other words, a waste of time.
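The false-positive failure mode is easy to reproduce with a toy exact-match diff over tiny grayscale "screenshots". The nested lists of 0-255 values below are hypothetical stand-ins for real bitmaps:

```python
# A toy pixel diff over tiny grayscale "screenshots" (nested lists of
# hypothetical 0-255 values). Real tools compare full bitmaps, but the
# failure mode is the same.
def pixel_diff(img_a, img_b):
    """Count pixels whose values differ at all between two images."""
    return sum(
        1
        for row_a, row_b in zip(img_a, img_b)
        for a, b in zip(row_a, row_b)
        if a != b
    )

baseline = [[255, 255, 200], [255, 128, 200]]
# Anti-aliasing on a different GPU shifts two values by one step,
# imperceptible to a human, but every shifted pixel counts as a diff.
regression = [[255, 254, 200], [255, 129, 200]]

assert pixel_diff(baseline, regression) == 2  # two "failures" to triage
```

Scaled up to a full-page screenshot, the same one-step jitter produces thousands of flagged pixels, and every flagged run demands human triage.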

AI and Visual Testing

Over the past decade, visual testing has advanced from stand-alone tools to integrated visual testing tools. Some are open source; others are commercial tools that leverage new technologies.

The most promising approach uses computer vision technology – Visual AI, the technology underlying Applitools Eyes – to distinguish visual changes between versions of a rendered page. Visual AI uses the same computer vision technology found in self-driving car development to replace pixel diffing.

[Image: computer vision identifying traffic lights for a self-driving car. Source: https://www.teslarati.com/wp-content/uploads/2017/07/Traffic-light-computer-vision-lvl5.jpg]

However, instead of comparing pixels, Visual AI perceives distinct visual elements — buttons, text fields, etc. — and can compare their presentation as a human might see them. As a result, Visual AI more closely “sees” a web page with the same richness that a human user does. For this reason, Visual AI provides a much higher level of accuracy in test validation.
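Visual AI's element-level perception goes far beyond simple thresholds, but even a crude tolerance-based comparison hints at why perceptual matching beats exact matching. A sketch with hypothetical pixel values, not any real tool's algorithm:

```python
# A crude stand-in for perceptual comparison: ignore pixel differences
# smaller than a tolerance. The values are hypothetical; real Visual AI
# reasons about elements, not individual pixels.
def perceptual_diff(img_a, img_b, tolerance=8):
    """Count pixels differing by more than a perceptual tolerance."""
    return sum(
        1
        for row_a, row_b in zip(img_a, img_b)
        for a, b in zip(row_a, row_b)
        if abs(a - b) > tolerance
    )

baseline   = [[255, 255, 200], [255, 128, 200]]
noisy      = [[255, 254, 200], [255, 129, 200]]  # anti-aliasing jitter
regression = [[255, 255, 200], [255,   0, 200]]  # a genuine change

assert perceptual_diff(baseline, noisy) == 0       # noise ignored
assert perceptual_diff(baseline, regression) == 1  # real change caught
```

Where an exact diff flags the jitter and the real change alike, the tolerant comparison reports only the change a user could actually see.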

Here’s an example of Visual AI in action:

[Image: example of a Visual AI comparison]

As a Visual AI tool, Applitools Eyes has been used by a large number of companies to add visual test automation to their web app testing. As a result, these companies have not only expanded beyond functional test myopia, but they now have the ability to rapidly accept or reject visual changes in their applications with visual validation workflow.

Broaden Your Vision

Whether you code web apps or write test automation, you want to find and fix issues as early as possible. Visual validation tools let you bring visual testing into your development process, so you can automate behavior and rendering tests earlier. You can use visual validation to see both what you're building and what your customers will experience.

Don’t get stuck in functional test myopia.

For More Information

Get a free Applitools account.

Request a demo of Applitools Eyes.

Check out the Applitools tutorials.
