Hackathon Archives – Automated Visual Testing | Applitools
https://applitools.com/blog/tag/hackathon/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

My Holiday Hackathon 2020 – Kerry McKeever
https://applitools.com/blog/holiday-hackathon-mckeever/ – Thu, 11 Feb 2021


As a test automation architect, I am always on the lookout for the best solutions that can help reduce the feedback loop between development and QA. As development cycles speed up, having the ability to release features at breakneck speed has become crucial to the success of many organizations. With software infrastructure becoming ever more complicated, and multiple teams contributing to a single application to accommodate these release cycles, the need for effective solutions to catch bugs as fast as possible has never been greater. Enter Applitools.

Joining The Applitools Hackathon

Applitools has been making waves across the testing community for years by disrupting the traditional way test engineers build their automation: more coverage, fewer lines of code, and less time spent on test maintenance. In a stroke of genius, they have invited engineers worldwide to participate in hackathons leveraging the Applitools Ultrafast Grid, showing them firsthand how powerful the tool can be.

…standing up an automated test solution can easily take hours to lay the groundwork before you write a single test. The best part of leveraging Cypress and Applitools is that it took roughly one hour from start to finish including the development of my tests, which is almost unheard of. 

Kerry McKeever

During my first Applitools hackathon, I set up my framework the way I traditionally would for any company requiring strong cross-browser compatibility. To do this, I leveraged Selenium and C#, since that was the stack I had been working with across multiple employers up to that point. My challenge to myself for that hackathon was to see how few lines of code I needed to build comparable tests in Applitools relative to my core Selenium tests.

Once I completed the solution, I ran an analysis on both projects to see where I stood. These comparisons do not account for the shared framework code for the Driver classes, enums to handle driver types and viewport sizes, page object models, and helper classes used between both solutions – just the test code itself. In the end, my core Selenium test project contained 845 lines of just test code.

Conversely, to execute the same tests with the same, or greater, level of confidence in the solution, the Applitools test code required only 174 lines.

Though we SDETs do all we can to ensure that we have a robust test framework that is as resilient to change as possible, UI test code is, by nature, flaky. Having to maintain and refactor test code isn’t a possibility, it’s a promise. Keeping your test code lean = less time writing and maintaining test code = more time focusing on new application features and implementing other test solutions!

My Challenge – Maximize Coverage, Minimize Code

When I saw the opportunity to join Applitools’ Holiday Shopping Hackathon, I jumped on it. This time, I wanted to challenge myself to create both my solution and test framework in not only as few lines as possible, but also as fast as I possibly could. 

Traditionally, building out a test framework takes considerable time and effort. If you are working in Selenium, you can spend hours just building out a proper driver manager and populating your page object models. 

Instead, I opted to go another route and use Cypress, with my reasoning being two-pronged: 1) Cypress allows for fast ramp-up of testing, where you can easily have your test solution up and running against your application in as little as 5 minutes. Seriously. 2) Cypress is used as the test framework at my current place of employment, and it allowed me to build a functioning POC to showcase the power of Applitools to my colleagues. Win-win!

But there is a caveat… While Cypress allows for cross-browser automation against modern browsers, it does not support Safari or IE11. For many companies with a large client base using one or both of these browsers, this can be a non-starter. But that’s where Applitools comes in.

While Cypress has a browser limitation, Applitools allowed me to leverage the easy, quick setup of test automation in Cypress without sacrificing the ability to run these tests on Safari and IE11. ❤ Essentially, my test runs once with Cypress in a single browser, then Applitools re-renders the DOM from my test execution and runs the test against all browser combinations I specify. No browser limitations, no duplication of tests, in as few lines as possible.
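To make that concrete, here is a minimal sketch of the pattern (an illustration only, not the author’s actual 84-line solution, which is not reproduced in this archive). It assumes the @applitools/eyes-cypress SDK, which adds the cy.eyesOpen, cy.eyesCheckWindow, and cy.eyesClose commands; the URL and selector below are placeholders:

```js
// holiday-shopping.spec.js – illustrative sketch only
describe('Holiday shopping page', () => {
  it('renders correctly on every configured browser', () => {
    // Open a visual test; the Ultrafast Grid re-renders each checkpoint
    // on the browsers/viewports listed in applitools.config.js
    cy.eyesOpen({ appName: 'Holiday Shop', testName: 'Product grid' });

    cy.visit('https://demo.applitools.com/'); // placeholder URL
    cy.eyesCheckWindow('Home page'); // one visual checkpoint instead of many DOM assertions

    cy.get('#filterBtn').click(); // placeholder selector
    cy.eyesCheckWindow('Filtered results');

    cy.eyesClose();
  });
});
```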

How few? Let me share with you my simple solution that covered all necessary scenarios:

That’s it! A whopping 84 lines of very lean code that executes all test scenarios across all supported browsers and viewports. 

Fast From Concept To Completion

As mentioned previously, standing up an automated test solution can easily take hours to lay the groundwork before you write a single test. The best part of leveraging Cypress and Applitools is that it took roughly 1 hour from start to finish including the development of my tests, which is almost unheard of. 

Beyond just the speed and ease of using Applitools, having such a powerful and succinct dashboard where you can collaboratively work with the entire development team on bugs is where they really shine. Using AI to intelligently identify discrepancies between a base image and the test run, capturing the DOM for easy debugging, controlling A/B testing for variants of your application, and integrating with CI/CD pipelines and Jira…I could go on. 

All these capabilities provide great visibility into the health of a product, and allow product owners, managers, developers and QA engineers to work harmoniously together in management of test flows, bugs, and regressions from a single dashboard. In any organization adopting a shift-left mentality, you really can’t ask for much more. 

Lead image by James Osborne from Pixabay

More than a Hackathon to me – Ivana Dilparic
https://applitools.com/blog/holiday-hackathon-dilparic/ – Thu, 11 Feb 2021


I remember the feeling when I submitted my entry for the Holiday Shopping Hackathon. Sure, there is always a bit of relief once you wrap something up. But mostly I was just proud that I managed to handle each task from the Hackathon instructions.

I wasn’t eyeing any of the prizes, nor did I expect to ever hear back from the judges. I simply saw the Applitools Holiday Shopping Hackathon as a learning opportunity and went for it. The sense of pride came from having my learning mission accomplished.

I see a lot of potential in this kind of testing. I recognize the benefit for the current project my team is working on.

– Ivana Dilparic

But the Hackathon ended up being much more for me. Besides getting solid JavaScript and Cypress practice and being introduced to this amazing visual testing tool, I now also have lifetime bragging rights and a self-esteem boost to keep up with my new tech goals.

Why did I need new tech goals in the first place?

I have been in managerial and leadership roles in the IT industry for over 12 years. Even though I hold a Master’s degree in Computer Science and my first role after graduation was as a Software Engineer, 12 years is a lot of time to not be actively developing software.

All this time I have been making constant efforts to build and enhance an array of soft skills, to accumulate industry-specific knowledge (for at least 5 industries), and to be able to actively participate in tech discussions. It turned out that this was not enough, at least not for the tech part, as I started getting feedback that I was “behind with the tech side”.

One thing was clear: I needed to craft a plan that would turn things around.

The things I tried

I know by now that the best way to learn something is to start practicing it actively and to combine theory with practice. My work does not leave me much room for getting hands-on experience with cool new frameworks, so all the learning and practicing had to happen in the evenings and over the weekends.

I subscribed to several podcasts and blogs, and I handpicked some development courses that seemed related to the technologies currently used by my team. I was investing a lot of time and was absolutely sure there was no significant improvement. The courses I chose were either focused on very basic examples or too demanding in terms of mandatory coursework. Even if I managed to stretch my time and cover the self-assignments, whatever I learned there would fade away shortly because I was not actively using it.

Then came the Hackathon

The hackathon just sounded like a good idea. The instructions were very specific; it was very clear what was expected from participants. The timeframe for submission was very generous – from the moment I learned about the Hackathon, I had several weeks to complete my submission, so I didn’t need to put the rest of my life on pause and fall behind on sleep (something I had associated with hackathons until now).

For the Cypress part, I relied on the Introduction to Cypress course from Test Automation University. Mr. Gil Tayar did a great job!

Visual testing in Applitools

I admit that I ignored the manual and relied on exploring Applitools myself. Overall, I find the app to be intuitive and easy to use. All information about test runs is very well structured and easy to navigate through. 

Multi-browser testing worked like a charm. It took me no time to set this up, and the speed of multi-browser testing was more than I hoped for.

For one of the Hackathon tasks, I figured out how bugs work. That was straightforward. Potential issues were very clearly highlighted. They scream for action.

Another task was related to root causes. I didn’t figure this one out on the first attempt, but I obviously excelled on the second try.

Visual testing before I knew Applitools

I recall scenarios where the QA team on my projects was using Selenium to automate tests. The idea was to automate UI tests as well.

There were too many visual issues that the tests were not detecting. Even issues important to the end user went undetected. The QA engineers explained the causes and came up with workarounds to increase test coverage with a limited time investment, but this didn’t sit well with the client.

Conclusion

What can I say – this experience has turned me into an advocate for Applitools. I see a lot of potential in this kind of testing. I recognize the benefit for the current project my team is working on, and looking back I see there were many cases over the years where it would have helped the QA engineers I have worked with. It shortens the time to set up UI tests and it probably shortens running time. Plus, it provides better coverage.

Also, I find Test Automation University to be one of the best things that has happened in the testing community lately. Thank you for doing this, Applitools!

As for my personal development, the Hackathon was a great boost for me. It helped me carry on with my learning trajectory. And I expect more hackathons in my future.

Lead image by Antonis Kousoulas from Pixabay

Stability In Cross Browser Test Code
https://applitools.com/blog/stability-in-cross-browser-test-code/ – Thu, 04 Feb 2021

Test Code Stability

If you read my previous blog, Fast Testing Across Multiple Browsers, you know that participants in the Applitools Ultrafast Cross Browser Hackathon learned the following:

  • Applitools Ultrafast Grid requires an application test to be run just once. Legacy approaches require repeating tests for each browser, operating system, and viewport size of interest.
  • Cross browser tests and analysis typically complete within 10 minutes, meaning that test times match the scale of application build times. Legacy test and analysis times involve several hours to generate results.
  • Applitools makes it possible to incorporate cross browser tests into the build process, with both speed and accuracy.

Today, we’re going to talk about another benefit of using Applitools Visual AI and Ultrafast Grid: test code stability.

What is Test Code Stability?

Test code stability is the property of test code continuing to give consistent and appropriate results over time. With stable test code, tests that pass continue to pass correctly, and tests that fail continue to fail correctly. Stable tests do not generate false positives (report a failure in error) or generate false negatives (missing a failure).

Stable test code produces consistent results. Unstable test code requires maintenance to address the sources of instability. So, what causes test code instability?

Anand Bagmar did a great review of the sources of flaky tests. Some of the key sources of instability:

  • Race conditions – you apply inputs too quickly to ensure a consistent output
  • Ignoring settling time – your output becomes stable only after your sampling time
  • Network delay – your network infrastructure causes unexpected behavior
  • Dynamic environments – your inputs cannot guarantee all the outputs
  • Incompletely scoped test conditions – you have not specified the correct changes
  • Myopia – you only look for expected changes and actual changes occur elsewhere
  • Code changes – your code uses obsolete controls or measures obsolete output.

When you develop tests for an evolving application, code changes introduce the most instability in your tests. UI tests, whether they test the UI itself or complete end-to-end behavior, depend on the underlying UI code. You use your knowledge of the app code to build the test interfaces. Locator changes – whether changes to coded identifiers or to CSS or XPath locators – can cause your tests to break.

When test code depends on the app code, each app release will require test maintenance. Otherwise, no engineer can be sure that a “passing” test did not miss an actual failure, or that a “failing” test indicates a real failure rather than a locator change.

Test Code Stability and Cross Browser Testing

Considering the instability sources, a tester like you takes on a huge challenge with cross browser tests. You need to ensure that your cross browser test infrastructure addresses these sources of instability so that your cross browser behavior matches expected results.

If you use a legacy approach to cross browser testing, you need to ensure that your physical infrastructure does not introduce network or other infrastructure sources of test flakiness.  Part of your maintenance ensures that your test infrastructure does not become a source of false positives or false negatives.  

Another check you make relates to responsive app design. How do you ensure responsive app behavior? How do you validate page location based on viewport size?

If you use legacy approaches, you spend a lot of time ensuring that your infrastructure, your tests, and your results all match expected app user behavior. In contrast, the Applitools approach does not require debugging and maintenance of multiple test infrastructures, since the purpose of the test involves ensuring proper rendering of server response.

Finally, you have to account for the impact of every new app coding change on your tests. How do you update your locators? How do you ensure that your test results match your expected user behavior?

Improving Stability: Limiting Dependency on Code Changes

One thing we have observed over time: code changes drive test code maintenance. We demonstrated this dependency relationship in the Applitools Visual AI Rockstar Hackathon, and again in the Applitools Ultrafast Cross Browser Hackathon. 

The legacy approach uses locators to both apply test conditions and measure application behavior. As locators can change from release to release, test authors must consider appropriate actions.

Many teams have tried to address the locator dependency in test code. 

Some test developers sit inside the development team. They create their tests as they develop their application, and they build the dependencies into the app development process. This approach can ensure that locators remain current. On the flip side, they provide little information on how the application behavior changes over time. 

Some developers provide a known set of identifiers in the development process. They work to ensure that the UI tests use a consistent set of identifiers. These tests can run the risk of myopic inspection. By depending on supplied identifiers – especially to measure application behavior – these tests run the risk of false negatives. While the identifiers do not change, they may no longer reflect the actual behavior of the application.

The modern approach limits identifier use to applying test conditions, while Applitools Visual AI measures the application’s UI response. This approach still depends on identifier consistency – but on far fewer identifiers. In both hackathons, participants cut their dependence on identifiers by 75% to 90%. Their code ran more consistently and required less maintenance.
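As a rough illustration of the difference (a sketch, not code from either hackathon; the selectors are hypothetical), the legacy style asserts through many locators, while the modern style keeps locators only for applying test conditions and delegates measurement to a single visual checkpoint:

```js
// Legacy style: every assertion rides on a locator that can change between releases
cy.get('.product-card').should('have.length', 9);
cy.get('#cart-count').should('have.text', '3');
cy.get('header .logo').should('be.visible');
// ...dozens more locator-based checks to approximate "the page looks right"

// Modern style: a locator is used only to apply the test condition...
cy.get('#add-to-cart').click();
// ...and one visual checkpoint measures the whole application response
cy.eyesCheckWindow('Cart updated');
```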

Modern cross browser testing reduces locator dependence by up to 90% – resulting in more stable tests over time.

Implications of Modern Cross Browser Testing

Applitools Ultrafast Grid overcomes many of the hurdles that testers experience running legacy cross browser test approaches. Beyond the pure speed gains, Applitools offers improved stability and reduced test maintenance.

Modern cross browser testing reduces dependency on locators. By using Visual AI instead of locators to measure application response, Applitools Ultrafast Grid can show when an application behavior has changed – even if the locators remain the same. Or, alternatively, Ultrafast Grid can show when the behavior remains stable even though locators have changed. By reducing dependency on locators, Applitools ensures a higher degree of stability in test results.

Also, Applitools Ultrafast Grid reduces infrastructure setup and maintenance for cross browser tests. In the legacy setup, each unique browser requires its own setup and connection to the server. Each setup can have physical or other failure modes that must be identified and isolated independent of the application behavior. By capturing the response from a server once and validating the DOM across other target browsers, operating systems, and viewport sizes, Applitools reduces the infrastructure debug and maintenance efforts.

Conclusions

Participant feedback from the Hackathon provided us with consistent views on cross browser testing. From their perspective, participants viewed legacy cross browser tests as:

  • Likely to break on an app update
  • Susceptible to infrastructure problems
  • Expensive to maintain over time

In contrast, they saw Applitools Ultrafast Grid as:

  • Less expensive to maintain
  • More likely to expose rendering errors
  • Providing more consistent results.

You can read the entire report here.

What’s Next

What holds companies back from cross browser testing? Bad experiences getting results. But what if they could get good test results and have a good experience at the same time? We asked participants about their experience in the Applitools Cross Browser Hackathon.

Fast Testing Across Multiple Browsers
https://applitools.com/blog/fast-testing-multiple-browsers/ – Thu, 28 Jan 2021


If you think like the smartest people in software, you conclude that testing time detracts from software productivity. Investments in parallel test platforms pay off by shortening the time to validate builds and releases. But, you wonder about the limits of parallel testing. If you invest in infrastructure for fast testing across multiple browsers, do you capture failures that justify such an investment?

The Old Problem: Browser Behavior

Back in the day, browsers used different code bases. In the 2000s and early 2010s, most application developers struggled to ensure cross browser behavior. There were known behavior differences among Chrome, Firefox, Safari, and Internet Explorer. 

Annoyingly, each major version of Internet Explorer had its own idiosyncrasies. When do you abandon users who still run IE 6 beyond its end-of-support date? How do you handle the IE 6 through IE 10 behavioral differences?

While Internet Explorer differences could be tied to major versions of operating systems, Firefox and Chrome released updates multiple times per year. Behaviors could change slightly between releases. How do you maintain your product’s behavior on browsers in your customers’ hands that you might not have developed with or tested against?

Cross browser testing proved itself a necessary evil to catch potential behavior differences. In the beginning, app developers needed to build their own cross browser infrastructure. Eventually, companies arose to provide cross browser (and then cross device) testing as a service.

The Current Problem: Speed Vs Coverage

In the 2020s, speed can provide a core differentiator for app providers. An app that delivers features more quickly can dominate a market. Quality issues can derail that app, so coverage matters. But, how do app developers ensure that they get a quality product without sacrificing speed of releases?

In this environment, some companies invest in cross browser test infrastructure or test services. They invest in the large parallel infrastructure needed to create and maintain cross browser tests. And the bulk of uncovered errors end up being rendering and visual differences, so these tests require some kind of visual validation. But do you really need to run each test repeatedly?

Applitools concluded that repeating tests required costly infrastructure as well as costly test maintenance. App developers intend that one server response work for all browsers. With its Ultrafast Grid, Applitools can capture the DOM state on one browser and then repeat it across the Applitools Ultrafast Test Cloud. Testers can choose among browsers, devices, viewport sizes and multiple operating systems. How much faster can this be?

Hackathon Goal – Fast Testing With Multiple Browsers

In the Applitools Ultrafast Cross Browser Hackathon, participants used the traditional legacy method of running tests across multiple browsers to compare behavior results. Participants then compared their results with the more modern approach using the Applitools Ultrafast Grid. Read here about one participant’s experiences.

The time that matters is the time that lets a developer know the details about a discovered error after a test run. For the legacy approach, coders wrote tests for each platform of interest, including validating and debugging the function of each app test on each platform. Once the legacy test had been coded, the tests were run, analyzed, and reports were generated. 

For the Ultrafast approach, coders wrote their tests using Applitools to validate the application behavior. These tests used fewer lines of code and fewer locators. Then, the coders called the Applitools Ultrafast Grid and specified the browsers, viewports, and operating systems of interest to match the legacy test infrastructure.
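For example (a sketch only, assuming the JavaScript @applitools/eyes-selenium SDK; exact class and method names vary between Applitools SDKs and versions), specifying the target browsers, viewports, and devices looks roughly like this:

```js
const { Eyes, VisualGridRunner, Configuration, BrowserType, DeviceName } =
  require('@applitools/eyes-selenium');

// One runner fans a single Selenium test execution out across the Ultrafast Grid
const runner = new VisualGridRunner(5); // concurrency; newer SDK versions take an options object
const eyes = new Eyes(runner);

const config = new Configuration();
config.addBrowser(1200, 800, BrowserType.CHROME);  // desktop browsers and viewports
config.addBrowser(1200, 800, BrowserType.FIREFOX);
config.addBrowser(768, 1024, BrowserType.SAFARI);
config.addDeviceEmulation(DeviceName.iPhone_X);    // mobile rendering
eyes.setConfiguration(config);
// The test itself runs once; each checkpoint is rendered on every target listed above.
```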

Hackathon Results – Faster Tests Across Multiple Browsers

The report included this graphic showing the total test cycle time for the average Hackathon submission of legacy versus Ultrafast:

Here is a breakdown of the average participant time used for legacy versus Ultrafast across the Hackathon:

Activity | Legacy | Ultrafast
--- | --- | ---
Actual Run Time | 9 minutes | 2 minutes
Analysis Time | 270 minutes | 10 minutes
Report Time | 245 minutes | 15 minutes
Test Coding Time | 1062 minutes | 59 minutes
Code Maintenance Time | 120 minutes | 5 minutes

The first three activities, test run, analysis, and report, make up the time between initiating a test and taking action. Across the three scenarios in the hackathon, the average legacy test required a total of 524 minutes. The average for Ultrafast was 27 minutes. For each scenario, then, the average was 175 minutes – almost three hours – for the legacy result, versus 9 minutes for the Ultrafast approach.

On top of the operations time for testing, the report showed the time taken to write and maintain the test code for the legacy and Ultrafast approaches. Legacy test coding took over 1060 minutes (17 hours, 40 minutes), while Ultrafast only required an hour. And, code maintenance for legacy took 2 hours, while Ultrafast only required 5 minutes.

Why Fast Testing Across Multiple Browsers Matters

As the Hackathon results showed, Ultrafast testing runs more quickly and gives results more quickly. 

Legacy cross-browser testing imposes a long time from test start to action. Its long run and analysis times make it unsuitable for any kind of software build validation. Most of these legacy tests get run in final end-to-end acceptance, with the hope that no visual differences get uncovered.

Ultrafast approaches enable app developers to build fast testing across multiple browsers into software build. Ultrafast analysis catches unexpected build differences quickly so they can be resolved during the build cycle.

By running tests across multiple browsers during build, Ultrafast Grid users shorten their find-to-resolve cycle to branch validation even prior to code merge. They catch the rendering differences and resolve them as part of the feature development process instead of the final QA process. 

Ultrafast testers seamlessly resolve unexpected browser behavior as they check in their code. This happens because, in less than 10 minutes on average, they know what differences exist. They could not do this if they had to wait the nearly three hours needed in the legacy approach. Who wants to wait half a day to see if their build worked?

Combine the other speed differences in coding and maintenance, and it becomes clear why Ultrafast testing across multiple browsers makes it possible for developers to run Ultrafast Grid in development.

What’s Next

Next, we will cover code stability – the reason why Ultrafast tests take, on average, 5 minutes to maintain, instead of two hours. 

Visual Assertions – Hype or Reality?
https://applitools.com/blog/visual-ai-hype-or-reality/ – Thu, 21 Jan 2021


There is a lot of buzz around Visual Testing these days. You might have read or heard stories about the benefits of visual testing. You might have heard claims like, “more stable code,” “greater coverage,” “faster to code,” and “easier to maintain.” And you might be wondering: is this hype or reality?

So I conducted an experiment to see how true this really is.

I used the instructions from this recently concluded hackathon to conduct my experiment.

I was blown away by the results of this experiment. Feel free to try out my code, which I published on Github, for yourself.

Visual Assertions – my learnings

Before I share the details of this experiment, here are the key takeaways I had from this exercise:

  1. Functional Automation is limiting. You only simulate known user behaviors, in predetermined conditions, and in the process only validate and verify the conditions you know about. There has to be a more optimized, value-generating approach!
  2. The automation framework should have advanced capabilities like soft assertions and good reporting to allow quick root cause analysis (RCA).
  3. Save time by integrating Applitools’ Visual AI Testing with the Ultrafast Grid to increase your test coverage and scale your test executions.

Let us now look at the details of the experiment.

Context: What are we trying to automate?

We need to implement the following tests to check the functionality of https://demo.applitools.com/tlcHackathonMasterV1.html:

  1. Validate details on landing / home page
    This should include checking headers / footers, filters, displayed items
  2. Check if Filters are working correctly
  3. Check product details for a specific product

For this automation, I chose Selenium with Java, using Gradle as the build tool.

The code used for this exercise is available here: https://github.com/anandbagmar/visualAssertions

Step 1 – Pure Functional Automation using Selenium-Java

Once I had spent time understanding the functionality of the application, I was quickly able to automate the above-mentioned tests.

Here is some data from that exercise.

Refer to HolidayShoppingWithSeTest.java

Activity | Data (Time / LOC / etc.)
--- | ---
Time taken to understand the application and expected tests | 30 min
Time taken to implement the tests | 90 min
Number of tests automated | 3
Lines of code (actual Test method code) | 65 lines
Number of locators used | 23
Test execution time – Part 1: Chrome browser | 32 sec
Test execution time – Part 2: Chrome browser | 57 sec
Test execution time – Part 3: Chrome browser | 29 sec
Test execution time – Part 3: Firefox browser | 65 sec
Test execution time – Part 3: Safari browser | 35 sec

Observations

A few interesting observations from this test execution:

  1. I added only superficial validations for each test.
    • I only added validations for the number of filters and items in each filter, but I did not add validations for the actual content of the filters.
    • Adding actual validations for each aspect of the page would take 8-10x the time of my current implementation, and the number of locators and assertions would probably also increase by 4-6x.
    • It definitely does not seem worth the time and effort.
  2. The tests would not capture all errors based on the assertions added, as the first assertion failure would cause the test to stop.
  3. In order to check everything, instead of hard assertions, the framework would need to implement and support soft assertions.
  4. The test implementation is heavily dependent on the locators in the page. Any small change in the locators will cause the test to fail. For example, on the Product Details page, the locator of the footer is different from that of the home page.
  5. Scaling: I was limited by how many browsers / devices I could run my tests on. I needed to write additional code to manage browser drivers, and even then only for browsers that I had available on my laptop.

Step 2 – Add Applitools Visual Assertions to Functional Automation

When I added Applitools Visual AI to the already created Functional Automation (in Step 1), the data was very interesting.

Refer to HolidayShoppingWithEyesTest.java

Activity | Data (Time / LOC / etc.)
--- | ---
Time taken to add Visual Assertions to existing Selenium tests | 10 min
Number of tests automated | 3
Lines of code (actual Test method code) | 7 lines
Number of locators used | 3
Test execution time – Part 1: Chrome browser | 81 sec (test execution) + 38 sec (Applitools processing)
Test execution time – Part 2: Chrome browser | 92 sec (test execution) + 42 sec (Applitools processing)
Test execution time – Part 3 (Applitools Ultrafast Test Cloud): Chrome + Firefox + Safari + Edge + iPhone X | 125 sec (test execution) + 65 sec (Applitools processing)

Observations

Here are the observations from this test execution:

  1. My test implementation got simplified
    • Fewer lines of code
    • Fewer locators and assertions
    • Test became easier to read and extend
  2. Test became more stable
    • Fewer locators and assertions
    • It does not matter if the locators change for elements in the page as long as the user experience / look and feel remains as expected. (Of course, locators on which I need to do actions using Selenium need to be the same)
  3. Comprehensive coverage of functionality and user experience
    • My test focuses on specific functional aspects – but with Visual Assertions, I was able to get validation of the functional change from the whole page, automatically

See the examples below of the nature of validations reported by Applitools:

Version Check – Test 1:

Filter Check – Test 2:

Product Details – Test 3:

  4. Scaling test execution is seamless
    • I needed to run the tests on only 1 browser available on my machine; I chose Chrome
    • With the Applitools Ultrafast Test Cloud, I was able to get results of the functional and visual validations across all supported platforms without any code change, and almost in the same time as a single browser execution.

Lastly, an activity I thoroughly enjoyed in Step 2 was deleting code that had become irrelevant thanks to Visual Assertions.

Conclusion

To conclude, the experiment made it clear – Visual Assertions are not hype. The table below summarizes the differences between the two approaches discussed earlier in the post.

Activity | Pure Functional Testing | Using Applitools Visual Assertions
--- | --- | ---
Number of tests automated | 3 | 3
Time taken to implement tests | 90 min (implement + add relevant assertions) | 10 min to add Visual Assertions to the existing Selenium tests (includes time taken to delete the assertions and locators that became irrelevant)
Lines of code (actual Test method code) | 65 lines | 7 lines
Number of locators used | 23 | 3
Number of assertions in test implementation | 16 – validates only specific behavior based on the assertions; the first failing assertion stops the test, and the remaining assertions do not even get executed | 3 (1 for each test) – validates the full screen, capturing all regressions and new changes in 1 validation
Test execution time | 129 sec on Chrome + Firefox + Safari (3 browsers) | 125 sec test execution + 65 sec Applitools processing on the Applitools Ultrafast Test Cloud: Chrome + Firefox + Safari + Edge + iPhone X (4 browsers + 1 device)

Visual Assertions help in the following ways:

  • Make your tests more stable
  • Lower maintenance, as there are fewer locators to work with
  • Increase test coverage – you do not need to add assertions for each and every piece of functionality as part of your automation; with Visual Assertions, you get full functional and user experience validation in 1 call
  • Scaling is seamless – with the Ultrafast Test Cloud, you run your test just once, and get validation results across all supported browsers and devices

You can get started with Visual Testing by registering for a free account here. Also, you can take the Test Automation University course “Automated Visual Testing: A Fast Path To Test Automation Success”.

How Easy Is Cross Browser Testing?
https://applitools.com/blog/how-easy-is-cross-browser-testing/ – Mon, 03 Aug 2020


In June Applitools invited any and all to its “Ultrafast Grid Hackathon”. Participants tried out the Applitools Ultrafast Grid for cross-browser testing on a number of hands-on real-world testing tasks.

As a software tester of more than 6 years, I have spent the majority of my time on Web projects. On these projects, cross-browser compatibility is always a requirement. Since we cannot control how our customers access websites, we have to do our best to validate their potential experience. We need to validate functionality, layout, and design across operating systems, browser engines, devices, and screen sizes.

Applitools offers an easy, fast and intelligent approach to cross browser testing that requires no extra infrastructure for running client tests.

Getting Started

We needed to demonstrate proficiency with two different approaches to the task:

  • What Applitools referred to as modern tests (using their Ultrafast Grid)
  • Traditional tests, where we would set up a test framework from scratch, using our preferred tools.

In total there were 3 tasks that needed to be automated, on different breakpoints, in all major desktop browsers, and on Mobile Safari:

  • validating a product listing page’s responsiveness and layout,
  • using the product listing page’s filters and validating that their functionality is correct
  • validating the responsiveness and layout of a product details page

These tasks would be executed against a V1 of the website (considered “bug-free”) and would then be used as a regression pack against a V2 / rewrite of the website.

Setting up Cypress for the Ultrafast Grid

I chose Cypress as I wanted a tool where I could quickly iterate, get human-readable errors and feel comfortable. The required desktop browsers (Chrome, Firefox and Edge Chromium) are all compatible. The system under test was on a single domain, which meant I would not be disadvantaged choosing Cypress. None of Cypress’ more advanced features were needed (e.g. stubbing or intercepting network responses).

The modern cross browser tests were extremely easy to set up. The only steps required were two npm package installs (Cypress and the Applitools SDK) and running

npx eyes-setup

to import the SDK.

Easy cross browser testing means easy to maintain as well. Configuring the needed browsers, layouts and concurrency happened inside `applitools.config.js`, a mighty elegant approach to the many, many lines of capabilities that plague Selenium-based tools.

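The screenshot of that configuration is not preserved in this archive; a minimal applitools.config.js in the shape the author describes (all values here are hypothetical, not the hackathon submission) looks something like this:

```js
// applitools.config.js – illustrative values only
module.exports = {
  apiKey: process.env.APPLITOOLS_API_KEY,
  testConcurrency: 5,
  batchName: 'Ultrafast Grid Hackathon',
  browser: [
    // desktop browsers at the breakpoints under test
    { width: 1200, height: 800, name: 'chrome' },
    { width: 1200, height: 800, name: 'firefox' },
    { width: 1200, height: 800, name: 'edgechromium' },
    { width: 768, height: 700, name: 'safari' },
    // mobile rendering
    { deviceName: 'iPhone X', screenOrientation: 'portrait' },
  ],
};
```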

In total, I added three short spec files (between 23 and 34 lines, including all typical boilerplate). We were instructed to execute these tasks against the V1 website then mark the runs as our baselines. We would then perform the needed refactors to execute the tasks against the V2 website and mark all the bugs in Applitools.

Applitools’ Visual AI did its job so well, all I had to do was mark the areas it detected and do a write-up!

In summary, for the modern tests:

  • two npm dependencies,
  • a one-line initialisation of the Applitools SDK,
  • 6 CSS selectors,
  • 109 total lines of code,
  • a 3 character difference to “refactor” the tests to run against a second website,

all done in under an hour. 

Performing a visual regression for all seven different configurations added no more than 20 seconds to the execution time. It all worked as advertised, on the first try. That is the proof of easy cross browser testing.


Setting up the traditional cross-browser tests

For the traditional tests I implemented features that most software testers are either used to or would implement themselves: a spec-file per layout, page objects, custom commands, (attempted) screenshot diff-ing, linting and custom logging.

This may sound like overkill compared to the above, but I aimed for feature parity and reached this end structure iteratively.


Unfortunately, none of the plug-ins I tried for screenshot diff-ing (`cypress-image-snapshot`, `cypress-visual-regression` and `cypress-plugin-snapshots`) gave results in any way similar to Applitools. I will not blame the plug-ins, though, as I had a limited amount of time to get everything working and most likely gave up way sooner than one should have.

Since screenshot diff-ing was off the table, I chose to check each individual element. In total, I ended up with 57 CSS selectors, and to make future refactoring easier I implemented page objects. Additionally, I used a custom method to log test results to a text file, as this was a requirement for the hackathon.

I did not count all the lines of code in the traditional approach, as the comparison would have been absurd, but I did keep track of the work needed to refactor for V2 – 12 lines of code, meaning multiple CSS selectors and assertions. This work does not need to be done if Applitools is used; “selector maintenance” just isn’t a thing!


Applitools will intelligently find every single visual difference between your pages, while traditionally you’d have to know what to look for, define it and define what the difference should be. Is the element missing? Is it of a different colour? A different font or font size? Does the button label differ? Is the distance between these two elements the same? All of this investigative work is done automatically.

Conclusion

All in all, it has genuinely been an eye-opening experience, as the tasks were similar to what we’d need to do “in the real world” and the total work done exceeds the scope of usual PoCs.

My thanks to everyone at Applitools for offering this opportunity, with a special shout out to Stas M.!

For More Information

Dan Iosif serves as SDET at Dunelm in the United Kingdom. He participated in the recently-completed Applitools Ultrafast Cross Browser Hackathon.

Why Learn Modern Cross Browser Testing?
https://applitools.com/blog/ultrafast-cross-browser-testing/ – Wed, 29 Jul 2020


Why Learn Modern Cross Browser Testing?

100 Cross Browser Testing Hackathon Winners Share the Answer.

Today, we celebrate the over 2,200 engineers who participated in the Applitools Ultrafast Cross Browser Hackathon. To complete this task, engineers needed to create their own cross-browser test environment using the legacy multi-client, repetitive test approach. Then, they ran modern cross browser tests using the Applitools Ultrafast Grid, which required just a single test run that Applitools re-rendered on the different clients and viewports specified by the engineers.

Participants discovered what you can discover as well:

  • For organizations that use HTML, CSS and JavaScript as standards, Applitools Ultrafast Grid almost completely eliminates the incremental cost of cross browser testing.

Applitools Ultrafast Grid changes your approach from, “How do I justify an investment in cross-browser testing?” to “Why shouldn’t I be running cross-browser tests?”

Of the 2,200 participants, we are pleased to announce 100 winners. These engineers provided the best, most comprehensive responses to each of the challenges that made up the Hackathon.  

Celebrate the Winners

Before we go forward, let’s celebrate the winners. Here is the table of the top prize winners:

[Image: Applitools Rockstar Hackathon prize table, 28 Jul 2020]

Each of these engineers provided a high-quality effort across the hackathon tests. They demonstrated that they understood how to run both legacy and modern cross-browser tests successfully.

Collectively the 2,200 engineers provided 1,600 hours of engineering data as part of their experience with the Ultrafast Grid Hackathon. Over the coming weeks we will be sharing conclusions about modern cross-browser testing based on their experiences.

What’s the big deal about cross-browser testing?

At its core, cross-browser testing guards against client-specific failures.

Let’s say you write your application code and compile it to run in containers on a cloud-based service. For your end-to-end tests, you use Chrome on Windows. You write your end-to-end browser test automation using Cypress (or Selenium, etc.). You validate at the viewport size of your display. What happens if that is all you test?

A lot depends on your application. If you have a responsive application, how do you ensure that it resizes properly around specific viewport breakpoints? If your customers use mobile devices, have you validated the application on those devices? But if HTML, CSS, and JavaScript are standards, who needs cross-browser testing?

Until Applitools Ultrafast Grid, that question used to define the approach organizations took to cross-browser testing. Some organizations ran cross browser tests. Others avoided them.

Cross-Browser Testing Used To Be Costly

If you have thought about cross-browser testing, you know that most quality teams possessed a prejudice about the expense of cross-browser infrastructure. If asked, most engineers would cite the cost and complexity of setting up a multi-client and mobile device lab, the effort to define and maintain cross-browser test software, and the tools needed to measure application behavior across multiple devices.

When you look back on how quality teams approached cross-browser testing, most avoided it. Given the assumed expense, teams needed justification to run cross-browser tests. They approached the problem like insurance. If the probability of a loss exceeded the cost of cross-browser testing, they did it. Otherwise, no.

Even when companies provided the hardware and infrastructure as a cross-browser testing service, the costs still ran high enough that most organizations skipped cross-browser testing.

Applitools and Cross-Browser Testing

Some of our first customers recognized that Applitools Visual AI provides huge productivity gains for cross-browser tests. Some of our customers used popular third-party services for cross-browser infrastructure. All the companies that ran cross-browser tests did have significant risk associated with an application failure. Some had experienced losses associated with browser-specific failures.

We had helped our customers use Applitools to validate the visual output of cross-browser tests. We even worked with some of the popular third-party services that let customers run cross-browser tests without having to install or maintain an on-premise cross-browser lab.

Visual Differences With A Common DOM

Our experience with cross-browser testing gave us several key insights.

First, we rarely saw applications that had been coded separately for different clients. The vast majority of applications depended on HTML, CSS and JavaScript as standards for user interface. No matter which client ran the tests, the servers responded with the same code. So, each browser at a given step in the test had the same DOM.

Second, if differences arose in cross-browser tests, they were visual differences. Often, they were rendering differences – either due to the OS, browser, or for a given viewport size. But, they were clearly differences that could affect usability and/or user experience.

This led us to realize that organizations were trying to uncover visual behavior differences for a common server response. Instead of running the server multiple times, why not grab the DOM state on one browser and then duplicate the DOM state on every other browser? You need less server hardware. And you need less software – since you only need to automate a single browser.

Creating Applitools Ultrafast Grid

Using these insights, we created Applitools Ultrafast Grid. For each visual test, we capture the DOM state and reload for every other browser/os/viewport size we wish to test. We use cloud-based clients, but they do not need to access the server to generate test results. All we need to do is reload the server response on those cloud-based clients.

Ultrafast Grid provides a cloud-based service with multiple virtual clients. As a user, you specify the browser and viewport size to test against as part of the test specification. Applitools captures a visual snapshot and a DOM snapshot at each point you tell it to make a capture in an end-to-end, functional, or visual test. Applitools then applies the captured DOM on each target client and captures the visual output. This approach requires fewer resources and increases flexibility.

This infrastructure provides huge savings for anyone used to a traditional approach to cross-browser testing. And, Applitools is by far the most accurate visual testing solution, meaning we are the right solution for measuring cross-browser differences.

You might also be interested in using a flexible but limited test infrastructure. For example, Cypress.io has been a Chrome-only JavaScript browser driver. Would you rewrite tests in Selenium to run them on Firefox, Safari, or Android? No way.

Learn and Upskill – The Ultrafast Grid Hackathon

We knew that many organizations might benefit from a low-cost, highly accurate cross-browser testing solution. If cost had held people back from trying cross-browser testing, a low-cost, easy-to-deploy, accurate cross-browser solution might succeed. But how could we get the attention of organizations that had avoided cross-browser testing because their risks could not justify the costs?

We came up with the idea of a contest – the Ultrafast Grid Hackathon. This is our second Hackathon. In the first, the Applitools Visual AI Rockstar Hackathon, we challenged engineers who used assertion code to validate their functional tests to use Applitools Visual AI for the assertion instead. The empirical data we uncovered from our first Hackathon made it clear to participants that using Applitools increased test coverage even as it reduced coding time and code maintenance effort.

We hoped to upskill a similar set of engineers by getting them to learn Ultrafast Grid with a hackathon. So, we announced the Applitools Ultrafast Grid Hackathon. Today, we announced the winners. Shortly, we will share some of the empirical data and lessons gleaned from the experiences of hackathon participants.

These participants are engineers just like you. We think you will find their experiences insightful.

Some Ultrafast Grid Hackathon Insights

Here are two of the insights.

“The efforts to implement a comprehensive strategy using traditional approaches are astronomical. Applitools has TOTALLY changed the game with the Ultrafast Grid. What took me days of work with other approaches, only took minutes with the Ultrafast Grid! Not only was it easier, it’s smarter, faster, and provides more coverage than any other solution out there. I’ll be recommending the Ultrafast Grid to all of the clients I work with from now on.” – Oluseun Olugbenga Orebajo, Lead Test Practitioner at Fujitsu

“It was a wonderful experience which was challenging in multiple aspects and offered a great opportunity to learn cross browser visual testing. It’s really astounding to realize the coding time and effort that can be saved. Hands down, Applitools Ultrafast Grid is the tool to go for when making the shift to modern cross environment testing . Cheers to the team that made this event possible.” – Tarun Narula, Technical Test Manager at Naukri.com

What’s Next?

Look out for more insights and empirical data about the Applitools Ultrafast Grid Hackathon. And, think about how running cross-browser tests could help you validate your application and reduce some support costs you might have been incurring because you couldn’t justify the cost of cross-browser testing. With Applitools Ultrafast Grid, adding an affordable cross-browser testing solution to your application test infrastructure just makes sense.

For More Information

How Do You Catch More Bugs In Your End-To-End Tests?
https://applitools.com/blog/catch-more-bugs/ – Thu, 21 May 2020


How much is it worth to catch more bugs early in your product release process? Depending on where you are in your release process, you might be writing unit or systems tests. But, you need to run end-to-end tests to prove behavior, and quality engineers require a high degree of skill to write end-to-end tests successfully.

What would you say if a single validation engine could help you ensure data integrity, functional integrity, and graphical integrity in your web and mobile applications? And, as a result, catch more bugs earlier in your release process?

Catch Bugs or Die

Let’s start with the dirty truth: all software has bugs. Your desire to create bug-free code conflicts with the reality that you often lack the tools to uncover all the bugs until someone finds them way late in the product delivery process. Like, say, the customer.

With all the potential failure modes you design for – and then test against – you begin to realize that not all failure modes are created equal. You might even have your own triage list:

  • Security & penetration
  • Data integrity and consistency
  • Functional integrity and consistency

So, where does graphical integrity and consistency fit on your list? For many of your peers, graphical integrity might not even show up on their list. They might consider graphical integrity as managing cosmetic issues. Not a big deal.

Lots of us don’t have reliable tools to validate graphical integrity. We rely on our initial unit tests, systems tests, and end-to-end tests to uncover graphical issues – and we think that they’re solved once they’re caught. Realistically, though, any application evolution process introduces changes that can introduce bugs – including graphical bugs. But, who has an automation system to do visual validation with a high degree of accuracy?

Tradeoffs In End-to-End Testing

Your web and mobile apps behave at several levels. The level that matters to your users, though, happens at the user interface in the browser or on the device. Your server code, database code, and UI code turn into this representation of visual elements, with some kind of visual cursor that moves across a plane (or keyboard equivalent) to settle on different elements. The end-to-end test exercises all the levels of your code, and you can use it to validate the integrity of your code.

So, why don’t people think to run more of these end-to-end tests? You know the answers.

First, end-to-end tests run more slowly. Page rendering takes time – your test code needs to manipulate the browser or your mobile app, execute an HTTP request, receive an HTTP response, and render the received HTML, CSS, and JavaScript. Even if you run tests in parallel, they’re slower than unit or system tests.

Second, it takes a lot of effort to write good end-to-end tests. Your tests must exercise the application properly. You develop data and logic pre-conditions for each test so it can be run independently of others. And, you build test automation.

Third, you need two kinds of automation. You need a controller that allows you to control your app by entering data and clicking buttons in the user interface. And, most importantly, you need a validation engine that can capture your output conditions and match those with the ones a user would expect.

You can choose among many controllers for browsers or mobile devices. So why do your peers still write code that effectively spot-checks the DOM? Why not use a visual validation engine that can catch more bugs? The contrast looks roughly like the sketch below.
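
To make that contrast concrete, here is a minimal Java sketch of the two kinds of automation working together: Selenium WebDriver as the controller and the Applitools Eyes Selenium SDK as the validation engine. The URL, locator, and app/test names are placeholders, not taken from any real project.

import com.applitools.eyes.RectangleSize;
import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class EndToEndSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();        // the controller: enters data, clicks buttons
        Eyes eyes = new Eyes();                       // the validation engine: captures and compares output
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        try {
            eyes.open(driver, "Demo App", "Login page end-to-end", new RectangleSize(1024, 768));
            driver.get("https://example.org/login");  // placeholder URL

            // a DOM spot-check verifies only the one element it names
            if (!driver.findElement(By.id("username")).isDisplayed()) {
                throw new AssertionError("username field is missing");
            }

            // a visual checkpoint captures the entire rendered window in one call
            eyes.checkWindow("Login page");

            eyes.close();
        } finally {
            eyes.abortIfNotClosed();
            driver.quit();
        }
    }
}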

Visual AI For Code Integrity

You have peers who continue to rely on coded assertions to spot-check the DOM. Then you have the 288 of your peers who did something different: they participated in the Applitools Visual AI Rockstar Hackathon. And they got to experience first-hand the value of Visual AI for building and maintaining end-to-end tests.

As I wrote previously, we gave participants five different test cases, asked them to write conventional tests for those cases, and then to write test cases using Applitools Visual AI. For each submission, we checked the conditions each test writer covered, as well as the failing output behaviors each test-writer caught.

As a refresher, we chose five cases that one might encounter in any application:

  1. Comparing two web pages
  2. Data-driven verification of a function
  3. Sorting a table
  4. Testing a bar chart
  5. Handling dynamic web content

For these test cases, we discovered that the typical engineer writing conventional tests to spot-check the DOM spent the bulk of their time writing assertion code. Unfortunately, the typical spot-check assertions missed failure modes. The typical submission got about 65% coverage. Meanwhile, the engineers whose tests provided the highest coverage spent about 50% more time writing them.

However, when using Visual AI for visual validation, two good things happened. First, everyone spent way less time writing test code. The typical engineer went from 7 hours of coding tests and assertions to about 1.2 hours of coding tests and Visual AI. Second, the average test coverage jumped from 65% to 95%. So, simultaneously, engineers took less time and got more coverage.

Visual AI Helps You Catch More Bugs

When you find more bugs, more quickly, with less effort, that’s significant to your quality engineering efforts. You’re able to validate data, functional, and graphical integrity by focusing on the end-to-end test cases you run. You spend less time thinking about and maintaining all the assertion code that checks the result of each test case.

Using Visual AI makes you more effective. How much more effective? Based on the data we reviewed, you catch 45% of your bugs earlier in your release process (and, importantly, before they reach customers).

We have previously written about some of the other benefits that engineers get when using Visual AI, including:

  • 5.8x Faster Test Creation – Authoring new tests is vital especially for new features during a release cycle. Less time authoring means more time managing quality. Read more.
  • 5.9x More Test Code Efficient – Like your team’s feature code, test code efficiency means you write less code, yet provide far more coverage. Sounds impossible, right? It’s not. Read More.
  • 3.8x Improvement In Test Stability – Code-based frameworks rely on brittle locators and labels that break routinely. This maintenance kills your release velocity and reduces coverage. What you need is self-maintaining and self-healing code that eliminates most of the maintenance. It sounds amazing and it is! Read More.

By comparing and contrasting the top participants – the prize winners – with the average engineer who participated in the Hackathon, we learned how Visual AI helped the average engineer greatly – and the top engineers become much more efficient.

The bottom line with Visual AI — you will catch more bugs earlier than you do today.        

More About The Applitools Visual AI Rockstar Hackathon

Applitools ran the Applitools Visual AI Rockstar Hackathon in November 2019. Any engineer could participate, and 3,000 did so from around the world. 288 people actually completed the Hackathon and submitted code. Their submissions became the basis for this article.

You can read the full report we wrote: The Impact of Visual AI on Test Automation.

In creating the report, we looked at three groups of quality engineers including:

  • All 288 Submitters – This includes every quality engineer who successfully completed the hackathon project. While over 3,000 quality engineers signed up to participate, this group of 288 people is the foundation for the report and contributed 3,168 hours – about 80 weeks, or 1.5 years – of quality engineering data.
  • Top 100 Winners – To gather the data and engage the community, we created the Visual AI Rockstar Hackathon. The top 100 quality engineers who secured the highest point total for their ability to provide test coverage on all use cases and successfully catch potential bugs won over $40,000 in prizes.
  • Grand Prize Winners – This group of 10 quality engineers scored the highest representing the gold standard of test automation effort.

By comparing and contrasting the time, effort, and effectiveness of these groups, we were able to draw some interesting conclusions about the value of Visual AI in speeding test-writing, increasing test coverage, increasing test code stability, and reducing test maintenance costs.

What’s Next?

You now know five of the core benefits we calculate from engineers who use Visual AI.

  • Spend less time writing tests
  • Write fewer lines of test code
  • Maintain fewer lines of test code
  • Your test code remains much more stable
  • Catch more bugs

So, what’s stopping you from trying out Visual AI in your application delivery process? You can set up a free Applitools account and start using Visual AI on your own. You can download the white paper and read how Visual AI improved the efficiency of your peers. And you can check out the Applitools tutorials to see how Applitools works with your preferred test framework and your favorite test programming language.

Cover Photo by michael podger on Unsplash

The post How Do You Catch More Bugs In Your End-To-End Tests? appeared first on Automated Visual Testing | Applitools.

]]>
Functional vs Visual Testing: Applitools Hackathon https://applitools.com/blog/my-hackathon-test-experience/ Mon, 20 Apr 2020 21:20:24 +0000 https://applitools.com/?p=17475 Many thanks to Applitools for the exciting opportunity to learn more about Visual AI testing by competing in the hackathon. While trying to solve the five challenges step by step,...

The post Functional vs Visual Testing: Applitools Hackathon appeared first on Automated Visual Testing | Applitools.

]]>

Roman Iovlev, Guest Blogger

Many thanks to Applitools for the exciting opportunity to learn more about Visual AI testing by competing in the hackathon. While trying to solve the five challenges step by step, you truly grasp the fact that proper UI testing is impossible without visual testing. But let’s start from the beginning.

Two versions of the same website were provided, representing two different builds of the same application.

Build 1: https://demo.applitools.com/hackathon.html

Build 2: https://demo.applitools.com/hackathonV2.html

I will evaluate each task for both approaches from 1 to 5, where 1 is “not applicable” and 5 is “the best choice”, and draw some conclusions at the end.

In my tests, I used the JDI Light test automation framework, which is based on Selenium but is more effective and easier to use.

Here is the link to my final solution: https://github.com/RomanIovlev/applitools-hackathon

Here is the link to Applitools documentation: https://help.applitools.com/hc/en-us

Login Page View Test

In the first challenge, we needed to validate the view of the Login page in general, meaning to verify that all the elements are displayed properly.

It’s obvious that if you need to validate how the page looks, you can’t rely solely on functional validation. Why?

  1. You will spend a lot of time and lines of code to describe all elements.
  2. But more importantly, your validations can still miss critical issues:
    • Changes in elements’ size or position
    • Changes in images or backgrounds
    • Different colors or line lengths
    • Something unexpected appearing that your checks never look for

These are a few of the tricky yet important things that keep you from testing properly with traditional functional tests. Sometimes it is difficult to describe all the possible failure modes, and some validations are simply impossible to verify. And of course, with the functional approach, you usually can’t check that something unexpected does not appear – often you can’t even imagine what that something might be.

Here are the differences detected by Applitools between Build 1 and Build 2

How can you check this using the traditional approach to test automation? There are more than a dozen UI elements on the page, and you need to create a method that validates them all:

View the code on Gist.

That’s 40 lines of code, and we’ve only checked what we know is there, not any unexpected surprises.

How does this compare to using Applitools to verify this with visual testing? Only one line of code is needed! 

eyes.checkWindow("Login Page view");

Here we use the checkWindow() method which allows us to validate the entire page.

This one line of code covers far more risk. And at the same time, thanks to the AI used in Applitools, these validations will be stable and reduce the number of typical false-negative cases.
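
If you also want the checkpoint to include content below the fold, the SDK’s fluent check API can capture the full page instead of just the viewport. A minimal sketch, assuming the Applitools Java Selenium SDK’s Target class is on the classpath:

import com.applitools.eyes.selenium.fluent.Target;

// capture and match the entire scrollable page, not just the visible viewport
eyes.check("Login Page view", Target.window().fully());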

One last point: in addition to less code and broader validation, visual tests let you easily catch multiple issues at once, which is not a simple task with functional testing approaches.

Testing Type | Applicable (1-5) | Comment
Functional approach (JDI Light) | 2 | You can try to test most of the valuable elements, but this requires writing a lot of code and provides no guarantees.
Visual approach (Applitools) | 5 | With just one line of code, you can check most of the possible issues.

Login functionality (data-driven)

The second challenge represents a typical task for most applications. We needed to validate general login functionality for all the valuable cases: successful login, failed login, empty and oversized values, and all allowed symbols (maybe even some JS or SQL injection cases).

This task is definitely a good fit for functional testing with a data-driven approach: create test data for positive and negative cases and let it flow through your scenarios. Thanks to JDI, you can describe a simple form in one line, without complex page objects, and submit it using a data entity.

public static Form<User> loginForm;

And then you can use this form in your tests.

@Test(dataProvider = "correctUsers"...) 
public void loginSuccessTest(User user) { 
    loginForm.loginAs(user); 
...

We call this the Entity Driven Testing approach (or EDT). If you are interested in more details, please follow the link https://jdi-docs.github.io/jdi-light/?java#entity-driven-testing or see how it looks on Github.
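
For context, here is a minimal sketch of the kind of TestNG data provider that could feed the loginSuccessTest above. The User constructor and the credential values are hypothetical stand-ins, not the data from the original solution (it assumes org.testng.annotations.DataProvider is imported):

@DataProvider(name = "correctUsers")
public static Object[][] correctUsers() {
    // hypothetical valid credentials; the real test data lives in the hackathon repository
    return new Object[][] {
        { new User("standard_user", "Password123") },
        { new User("admin_user", "Admin123!") }
    };
}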

So, let’s get back to the test cases. When I wrote the tests and ran them against Build 2, they passed – but they should not have! See how the incorrect-password error message looks:

The text is correct – this is what we always validate with the functional approach – but the message’s background is broken. This is a really enlightening moment: you realize that the tests you’ve written throughout your career are not as thorough as you thought and are at risk of missing exactly these issues.

I have no idea how such cases can be tested using the functional approach, and what’s worse is that you don’t even consider all these types of issues when writing the tests.

The conclusion here is that in addition to functional validation, it’s always good to have visual checks as well. And with Applitools you can do it in a simple and stable manner.

eyes.checkRegion("Alert: "+message, By.cssSelector(".alert-warning"));

The check() or checkRegion() method used here allows us to validate the view of the exact element.

Testing Type | Applicable (1-5) | Comment
Functional approach (JDI Light) | 5 | This task is mostly for functional testing.
Visual approach (Applitools) | 3 | But in some cases, you can't avoid visual validation.

Table sorting test

The next challenge in the hackathon was to validate the sorting of a data table. The scenario called for clicking on the table header (the Amount column) and validating that the data in the sorted column is in the correct order. But this isn’t enough:

  • First of all, you should validate that the values in the sorted column are in the correct order.
  • Next, you need to check that the data in the table is still consistent. Data in rows should be the same, with only the order of the rows being changed.
  • The problem becomes more complex because some cells in a row contain images or colored status indicators, so you can’t just compare values.

This is a really interesting task and I would like you to try it yourself.

Let’s consider how we’d handle this with the functional approach. Programmatically, you’d need to:

  • Describe the table header (or at least the “Amount” element).
  • Write a method that reads the “Amount” column values (remember that we need numbers in order to compare them).
  • Validate all parts of each row, including images and colors.

With standard frameworks like Selenium, WebDriverIO, Cypress, or Selenide, you could write hundreds of lines of code to properly interact with the table (especially with the cells containing images and color-related elements).

Thanks to JDI Light, you can describe this table in just one line and test its data.

public static DataTable<TransactionRow, Transaction> transactionsTable;

And a few more lines if you would like to describe the complex structure of each row in detail. (See more details on Github.)

Via the functional approach, the validation has four steps:

  1. Get unsorted table rows with all expected data in each row.
  2. Click the Amount column header.
  3. Assert that values in the column amount are sorted in the correct order.
  4. Validate data consistency: the content of the rows is not mixed up, and each row still has all the same values.

So, using the functional approach, the test script is about 5-7 lines of code (including row comparisons):

List<Transaction> unsortedTransactions = transactionsTable.allData();
transactionsTable.headerUI().select("AMOUNT");
transactionsTable.assertThat()
    .rows(hasItems(toArray(unsortedTransactions)))
    .sortedBy((prev, next) -> prev.amount.value() < next.amount.value());

And what about Visual validation with Applitools? Frankly speaking, we can do this validation with just two lines of code:

amountHeader.click();
eyes.checkElement(transactions, "Transactions Ascending");

But this approach has its limitations:

  1. First of all, we will need to make sure that the data is sorted correctly before saving the baseline image. 
  2. Second, and more crucially, we would have to recreate the baseline screenshot every time the data in the table changes.

The best solution here is to mix functional and visual validations with Applitools:

  1. Get unsorted table rows data and images.
  2. Click the Amount column header.
  3. Assert that values in the column amount are sorted in the correct order.
  4. Validate data consistency: the data in the rows is not mixed up, and the row images are the same as before the sort.

See full code below:

List<Transaction> unsortedTransactions = transactionsTable.allData();
List images = transactionsTable.rowsImages();
transactionsTable.headerUI().select("AMOUNT");
transactionsTable.assertThat()
    .rows(hasItems(toArray(unsortedTransactions)))
    .sortedBy((prev, next) -> prev.amount.value() < next.amount.value())
    .rowsVisualValidation("Description", images);
Testing Type | Applicable (1-5) | Comment
Functional approach (JDI Light) | 5 | This task is mostly for functional testing.
Visual approach (Applitools) | 4 | In the simple case of a table with hardcoded data, you can check the whole table in just one line of code. And in some cases, you can't verify it without visual validation.

Canvas Chart Test

The next task was to validate a chart view – definitely a task for visual validation.

At first glance, you have no way to get the data from the canvas because the details of the chart do not exist in the DOM. With Applitools, you can validate the chart view “before” and “after”:

compareExpenses.click();
eyes.checkElement(chart, "Expenses Chart 2017-2018");
showNextYear.click();
eyes.checkElement(chart, "Expenses Chart 2017-2019");

P.S. I would like to show you just one trick that is useful in this particular case. (Note: In most cases with charts you can’t get data.)

Using the following JavaScript snippet, you can extract the data for this chart and use it in functional testing. Even so, you can’t check colors or the chart’s layout this way.

public ChartData get() {
  // run JS in the browser to pull the chart's underlying data out of the global barChartData object
  Object rowChartData = jsExecute("return { " +
    "labels:  window.barChartData.labels, " +
    "dataset: window.barChartData.datasets.map(ds => ({ " +
      "bgColor: ds.backgroundColor, " +
      "borderColor: ds.borderColor, " +
      "label: ds.label, " +
      "data: ds.data })) " +
    "}");
  // convert the returned JS object into a typed ChartData instance via JSON round-trip
  return gson.fromJson(gson.toJson(rowChartData), ChartData.class);
}

See more details on Github.
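
Once that data is available, a functional check might look roughly like the sketch below. It assumes the get() method lives on a chart page object and that ChartData mirrors the JavaScript object above (a labels list plus a dataset list); the assertions themselves are hypothetical examples, not the ones from the original solution.

ChartData before = chart.get();     // data while the 2017-2018 chart is shown
showNextYear.click();
ChartData after = chart.get();      // data after adding 2019

// e.g. the extra year should add one more dataset, while the month labels stay the same
Assert.assertEquals(after.dataset.size(), before.dataset.size() + 1);
Assert.assertEquals(after.labels, before.labels);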

Testing Type | Applicable (1-5) | Comment
Functional approach (JDI Light) | 1 | Only if you are lucky can you get some data from the canvas; in general, this is not a case for functional testing.
Visual approach (Applitools) | 5 | Visual validation is the best choice for testing canvas content.

Dynamic Content Test

The last task was to validate the advertisement content that comes from an external site.

I think the Applitools team added this task to show Applitools’ MatchLevel: Layout capability, which is great for validating an application’s layout without comparing the exact content.

Here’s the code to do this with visual validation:

eyes.setMatchLevel(MatchLevel.LAYOUT);
eyes.checkElement(advertisements, "Dynamic Advertisement");

But in this case, we can also just check isDisplayed() (or validate the size of the advert blocks), roughly as in the sketch below.
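
A rough sketch of that functional-only check, assuming plain Selenium and TestNG; the CSS locator is hypothetical and depends on the demo page’s markup:

// hypothetical locator for the advertisement container
WebElement adBlock = driver.findElement(By.cssSelector(".dynamic-ad"));

// we can confirm the block is present and has a reasonable size,
// but not what it actually displays
Assert.assertTrue(adBlock.isDisplayed(), "Ad block should be visible");
Assert.assertTrue(adBlock.getSize().getHeight() > 0, "Ad block should have a non-zero height");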

Here, visual validation is a little better because it also validates the layout, but the difference is not that big for a single ad. However, I can see how visually validating an entire page whose content changes completely would be really powerful.

Testing Type | Applicable (1-5) | Comment
Functional approach (JDI Light) | 4 | Cannot validate the layout.
Visual approach (Applitools) | 5 | Validates all possible layout problems.

Conclusion

The traditional approach we often use for functional test automation is good for validating scenarios and data on the site; but if you validate only these things, you can easily miss layout or visual issues. Users abandon sites with significant visual bugs, and in some cases a visual bug makes the app unusable altogether.

Your application is the face of your organization. If you really care about your clients’ experience and want to show that they can trust your company to deliver good solutions, you must include visual testing alongside functional testing in your overall testing strategy.

Roman Iovlev participated in the Applitools Hackathon in November 2019.

For More Information

Cover Photo by Markus Spiske on Unsplash

The post Functional vs Visual Testing: Applitools Hackathon appeared first on Automated Visual Testing | Applitools.

]]>
Visual AI Rockstar Hackathon Winners Announced! https://applitools.com/blog/hackathon-winners/ Thu, 23 Jan 2020 06:30:25 +0000 https://applitools.com/blog/?p=6880 We are thrilled to announce the Applitools Visual AI Rockstar Hackathon winners. We think the people who participated and won the Hackathon are not just some of the top QA engineers but also trailblazers who are leading the way in pushing the QA industry forward.

The post Visual AI Rockstar Hackathon Winners Announced! appeared first on Automated Visual Testing | Applitools.

]]>

We are thrilled to announce the Visual AI Rockstar Hackathon winners. We think the people who participated and won the Hackathon are not just some of the top QA engineers but also trailblazers who are leading the way in pushing the QA industry forward. Congrats to all of them!

You can find all the Hackathon results here.

In this blog, we summarize the Hackathon concept, share how we designed it, and cover some of the things we learned.

The Hackathon idea

Our idea for the Hackathon started with a question: what incentive gets an engineer to try a new approach to functional testing? Our customers had described how they use Applitools to accelerate the development of automated functional tests. We hoped engineers who weren’t regular Applitools users could have similar experiences and get comparable value from Visual AI. The Hackathon seemed like the right incentive for engineers like you to compare traditional functional test code with Visual AI.

When you think about it, you realize that all user-observable functionality has an associated UI change. So if you simply take a screenshot after the function has run and the UI didn’t change as expected, you’ve found a functional bug. And if the functionality worked but something else in the UI didn’t, you’ve found a visual bug. Since the screenshot captures both, you can very easily do both visual and functional testing through our Visual AI.

Visual validation overcomes a lot of functional test coding challenges. For example, many apps include tables – like a comparison list of purchase options. If you let your customers sort the table by price, how do you validate that the sorted output is in the correct order – and that all the row contents beyond the sort column behave as expected? Or what happens when you use a graphics library that must behave correctly but for which you only have HTML-level checks? For example, your app creates bar charts using Canvas. How do you automate the validation of the Canvas output?

Since we are simply taking screenshots after the functionality, we can capture everything. That simplifies all your validation tasks.

Creating the Hackathon Tests

Realistically, we know that free items still have real costs. To use our free Applitools account, you need to take the time to learn Visual AI and try it out. While you might be willing to try the free account, would your selected tests highlight the value of Applitools’ Visual AI? We were confident that the right experience would make the value of Applitools easy to see. So we built a test environment for the Hackathon in which you could run your tests.

We built a sample Hackathon app. Next, we designed five common use cases where Applitools and Visual AI result in simpler functional test code or make test automation possible at all. Finally, we ran the Hackathon and gave people like you the chance to compare writing tests using the traditional approach versus using Applitools. Engineers who tried the Hackathon generated many surprising and valuable experiences. Cumulatively, those experiences show the value of Visual AI in the workflows of many app development teams.

How We Judged The Hackathon Winners

We graded on a scale of 1-100 points, divided across all five use cases. Within each case, we scored the Visual AI and traditional approaches separately. What mattered included:

  • Coverage completeness on each test case with traditional functional test code
  • Coverage completeness using Visual AI
  • The economy of coding in both the traditional and Visual AI submissions
  • Availability and accuracy of results reported from testing

Our judges spent weeks looking through the submissions and judged every one very carefully. The winners scored anywhere from 79 points all the way to a perfect 100!

Summary Of The Results

Part of our test design included building on-page functionality that required thoughtful test engineering. Hackathon winners needed to cover the relevant test cases as well as verify page functionality. Generally speaking, the people who scored the highest wrote a lot of code and spent a lot of time on the traditional approach. Even with economical coding, the winners wrote many lines of code to validate a large number of on-page changes, such as sorting a table.

Unfortunately, we also found that many participants struggled to write proper tests with the traditional approach. Some struggled with test design, some with validating a given page structure, and some with other technical limitations of the code-based approach.

While participants either struggled or succeeded with traditional test code, pretty much every participant excelled at using Visual AI. Many of them succeeded on their first try! We found this to be very gratifying.

We plan to discuss this more in a future webinar, so stay tuned. But in the meantime, check out the Hackathon winners below.

Additional Hackathon Winners

After judging, we found some participants with the same score or with very close calls. So we decided to award an additional 9 people a $200 prize each! Instead of 80 $200 winners, we’ll have 89.

What The Hackathon Winners Say

We wanted to leave you with quotes from the Hackathon winners. We are glad to recognize them for their achievements and are pleased with their success.

About the Hackathon

  • “This hackathon was a great way to get introduced to Applitools and the power of Visual AI testing. I will definitely be using it in my next automation project.” – Gavin Samuels, Lead Consultant
  • “This is the most interesting and useful event of the year in the field of testing automation. This allows you to take a look at test automation from a different point of view and gives an opportunity to radically improve your existing approaches.” – Viktar Silakou, Lead QA Automation Engineer
  • “This hackathon was a fun and challenging way of getting to know Applitools. It made great use of common day-to-day problems to show where Applitools clearly outperforms traditional approaches in speed, simplicity, and coverage.” – Arjan Blok, QA lead
  • “Completing the Applitools Hackathon was a keystone achievement in my career! I learned more by participating in this hackathon than any other automation instruction I’ve taken in the past number of years. I’m now 100% convinced that visual AI testing is an essential tool for efficiently validating web and mobile software applications.” – Tracy Mazelin QA Engineer
  • “Solid Hackathon by Applitools! Provided a great experience to showcase the power of Visual AI Testing and how they are a leader in this field with functionality that their competitors do not have.” – Hung Hau, Sr. QA Automation Engineer
  • “I liked that the hackathon had practical applications and links with examples to work on. It was a pragmatic approach. It provided a good way of practicing and comparing it with the traditional method. Applitools provided an alternative to UI testing that was easy to learn and fast to set up. It’s interesting to explore further on its applicability.” – Adina Nicolae, QA Team Leader

About Visual AI

  • “Tool will be a game-changer in the near future if it hasn’t already.” – Oluseun Orebajo, Lead Test Practitioner
  • “This challenge propelled me into digging for alternative ways to traditional testing. While solving the challenge, I realized that a tool like Applitools will save time on the proposed scenarios while still delivering the same value as other traditional frameworks. Congratulations for the initiative and the elegant manner chosen for making “the rockstars understand how powerful and awesome is Applitools.” – Corina Zaharia, Test Engineer
  • “Although I prefer making apps accessible to screen readers so that they can also be tested, the importance of visual testing cannot be understated. Applitools made it so much easier to check if things are properly aligned, icons stay intact, and visualizations look correct, with just one line of code.” – Thai Pangsakulyanont, Frontend Architect
  • “It was a fun way to discover the limitations of current “traditional way” versus what AppliTools can provide: from simple image comparisons, broken layout, intended changes, broken sort algorithms, dynamic content, and JIRA integration. Bottom line: you still need the screenshot anyway (for bug reporting or discussion around the topic) !” – Ioan Cimpean, Senior QA Automation Engineer

For More Information

Blogs about chapters in Raja Rao DV’s series on Modern Functional Testing:

Actions you can take today:

The post Visual AI Rockstar Hackathon Winners Announced! appeared first on Automated Visual Testing | Applitools.

]]>