What is Regression Testing? Definition, Tutorial & Examples


In this detailed guide, learn what regression testing is, see best practices and examples, and find out how to apply regression testing in your own organization.

While regression testing is practiced in almost every organization, each team may have its own procedures and approaches. This article is a starter kit for organizations seeking a solid start to their regression testing strategy. It also helps teams dig into the gaps in their current regression testing approach so they can evolve their test strategy.

What is Regression Testing?

Regression testing is a type of software testing that verifies an application continues to work as intended after any code revisions, updates, or optimizations. As the application evolves with new features, the team must perform regression testing to verify that the existing features still work as expected and that no bugs have been introduced with the new feature(s).

In this post, we will discuss various techniques for Regression Testing, and which to use depending on your team’s way of working. 

However, before we get to the how, let us understand why having a regression test suite is essential.

Why Do We Need Regression Testing?

A software application gets directly modified due to new enhancements (functional, performance or even improved security), tweaks or changes to existing features, bug fixes, and updates. It is also indirectly affected by the third-party services it consumes to provide features through its interface. 

Changes in the application’s source code, both planned and unintended, demand verification. Additionally, the impact of modifications to external services used by the application should be verified.

Teams must ensure that the modified component of the application functions as expected and that the change had no adverse effect on the other sections of the application. 

A comprehensive regression testing technique aids the team in identifying regression issues, which are subsequently corrected and retested to ensure that the original faults are not present. 

Regression Testing Example

Let us quickly walk through an example: login functionality.

  • A user can log into an app using either their username and password or their Gmail account via Google integration.
  • A new feature, LinkedIn integration, is added to enable users to log into the app using their LinkedIn account.
  • While it is vital to verify that LinkedIn login functions as expected, it is equally necessary to verify that the other login methods continue to function (form login and Google integration). A small sketch of such a regression spec follows below.
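
To make this concrete, here is a minimal sketch of what such a regression spec could look like, written here in Cypress as one option. The routes, selectors, and credentials are illustrative, not taken from a real application.

```js
// A regression spec that keeps covering the existing login paths after the new
// LinkedIn integration ships (routes, selectors, and credentials are illustrative).
describe('Login - regression', () => {
  it('still logs in with username and password', () => {
    cy.visit('/login');
    cy.get('#username').type('existing-user');
    cy.get('#password').type('s3cret{enter}');
    cy.url().should('include', '/dashboard');
  });

  it('still logs in via Google', () => {
    // OAuth flows are typically stubbed or exercised with a dedicated test account in CI;
    // the point is that this existing path stays in the regression suite.
  });

  it('logs in via the new LinkedIn integration', () => {
    // The new-feature test lives alongside the regression checks above.
  });
});
```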

Smoke vs Sanity vs Regression Testing

People commonly use the terms smoke, sanity, and regression interchangeably in testing, which is misleading. These terms differ not only in the scope of the application they cover, but also in when they are carried out.

What is Smoke Testing?

Smoke testing is done at the outset of a fresh build. The main goal is to see whether the build is good enough to start testing. Examples include being able to launch the site by simply entering the URL, or being able to run the app after installing a new executable.

What is Sanity Testing?

Sanity testing is surface-level testing on newly deployed environments. For instance, features are broadly tested on staging environments before passing them on to User Acceptance Testing. Another example could be verifying that fonts have loaded correctly on the web page, that the expected components are interactive, and that overall things appear to be in order without a detailed investigation.

How is Regression Testing Different from Smoke and Sanity Testing?

Regression testing goes deeper: the potentially impacted areas are thoroughly tested in the environment where the new changes have been introduced.

Existing stable features are rigorously tested on a regular basis to ensure their accuracy in the face of purposeful and unintended changes. 

Regression Testing Approaches

The techniques can be grouped into the following categories:

Partial Regression Testing

As the name suggests, partial regression testing is an approach where a subset of the entire regression suite is selected and executed as part of regression testing.

This subset selection results from a combination of several logical criteria as follows:

  • The cases derived from identifying the potentially affected feature(s) due to the change(s)
  • Business-critical cases
  • Most commonly used paths

Partial regression testing works excellently when the team successfully identifies the impacted areas and the corresponding test cases through proven ways like the Requirement Traceability Matrix (RTM henceforth) or any other form of metadata approved by the team.

The following situations are more conducive to partial regression testing:

  • The project has a solid test automation framework in place, with unit, API, integration, and acceptance tests in proportions that follow the test pyramid.
  • Changes to the source code are always being tracked and communicated within the cross-functional team.
  • Short-term projects tight on budget and time.
  • The same team members have been working on the project for a long period.

While this method is effective, it is possible to overlook issues if:

  • The impacted regions aren’t identified appropriately.
  • The changes aren’t conveyed to the entire team.
  • The team doesn’t religiously follow the process of documenting test scenarios or test cases.

Complete Regression Testing

In many cases, reasons such as significant software updates or changes to the tech stack demand that the team perform comprehensive regression testing to uncover new problems or problems introduced by the changes.

In this approach, the whole test suite is run to uncover issues every time new code is committed, or at agreed time intervals.

This is a significantly more time-consuming approach compared to the other techniques and should ideally be adopted only when the situation demands it.

To keep the feedback cycle fast, embrace automated testing; it is what makes complete regression testing productive for a team.

Which Regression Technique to Use?

Irrespective of the technique adopted, I always suggest that teams prioritize the most business-critical cases and the common use cases performed by end-users when it comes to execution. 

Remember, the main goal of regression testing is to ensure that the end-user is not impacted due to an unavailable/incorrect feature, which could affect business outcomes in many ways.

Best Practices for Regression Testing

To achieve better testing coverage of your application, plan your regression testing with a combination of technology and business scenarios. Apply the practices across the Test Pyramid. 

Leverage the Power of Visual Representation

Arranging the information in the form of a matrix enables the team to quickly identify the potentially impacted areas. 

  • In the RTM shown in the diagram below, any change made to REQ1 UC 1.3 tells us that we have to run test cases 1.1.2, 1.1.4 and 1.1.7.
  • Also, since test case 1.1.2 is also related to UC 1.2, we would immediately test that area for regression issues.
  • Of course, the RTM must be kept up to date at all times for the technique to work for the team; a small code sketch of the idea follows the diagram.

    (Image Source)
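
To make the idea concrete, here is a tiny sketch of how an RTM could be represented and queried in code; the requirement and test-case IDs mirror the example above and are otherwise illustrative.

```js
// A toy RTM: map each use case to the test cases that cover it (IDs are illustrative).
const rtm = {
  'REQ1-UC1.2': ['1.1.2'],
  'REQ1-UC1.3': ['1.1.2', '1.1.4', '1.1.7'],
};

// Given a set of changed use cases, collect the regression test cases to run.
function impactedTests(changedUseCases) {
  const tests = new Set();
  for (const uc of changedUseCases) {
    (rtm[uc] || []).forEach((tc) => tests.add(tc));
  }
  return [...tests];
}

console.log(impactedTests(['REQ1-UC1.3'])); // ['1.1.2', '1.1.4', '1.1.7']
```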

Alternatively, many test case management tools now provide built-in support for building a regression test suite with the help of appropriate tags and modules. These tools also let you systematically track and identify patterns in regression test execution so you can dig into related areas.

I have seen teams be most effective when they have automated most of their regression suite and organised the non-automatable tests in a meaningful way that allows quick filtering and surfaces useful information.

Test Data

We should leverage the power of automation to create test data instantly across different test environments. We need to ascertain that the updated feature is evaluated against both old and new data. 

For example, a new field added to a user profile should work consistently for both existing and newly created accounts.

Production Data

Production test data plays a vital role in identifying issues that might have been missed during the initial delivery.

Where possible, replicate the production environment to identify edge cases and add those scenarios to the regression test suite.

Using production data isn't always viable, and it can lead to compliance issues. Teams frequently mask sensitive information in production data and then use it for realistic scenario analysis.

Test Environments

If you have multiple environments, verify that the application works as intended in each of them.

Obtaining a Fresh Outlook

Every time a new person joins a team where development is already in progress, they ask meaningful questions about long-forgotten stable features. I also prefer young guns to be part of my regression team to get a raw and holistic testing perspective.

Automate

Automate the regression test suite! If you have the budget, great; if not, create supporting mechanisms to use the team's idle time to implement automated tests.

Simply automating the business-critical scenarios or the most used workflows is a good enough start. Initiate this activity and work incrementally.

Either tag/annotate your automated scenarios per feature or segregate them into appropriate folders so that you can run specific slices of the automated regression suite on demand.
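
As one concrete option, here is a sketch of how tagging might look with Cypress and the @cypress/grep plugin; the tag names and spec paths are illustrative and assume the plugin is installed and registered.

```js
// Tag scenarios by feature so a slice of the regression suite can be run on demand.
it('logs in with a Google account', { tags: ['@regression', '@login'] }, () => {
  // ...test body...
});

// Run only the regression-tagged tests:
//   npx cypress run --env grepTags=@regression
//
// Or, if you segregate specs into folders instead, filter with --spec:
//   npx cypress run --spec "cypress/e2e/regression/**/*.cy.js"
```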

Even though automated test execution is faster than manual testing, sequential execution won't scale as the number of test environments and permutations rises. Concurrent test execution across configurations is therefore required to meet scalability requirements. Selenium Grid and cloud solutions like the Applitools Ultrafast Test Cloud enable you to execute automated tests in parallel across different configurations.

In addition to adhering to best practices when creating the test automation framework, these tests must run at a high pace and in parallel to provide faster feedback.

Choose What Works for You

Always choose what works in your context. One cannot ignore business constraints and client demands around delivery, so adopt the regression testing techniques that best suit your situation.

Plan for Regression Testing in Sprints

I have seen regression backlogs take a long time to automate. To keep making progress, always account for regression testing effort explicitly when estimating sprint tasks, or you may be accumulating technical debt in the form of undiscovered bugs.

Communicate within the Cross-Functional Team

Changes are not always directly related to client needs, nor are they always conveyed. Internally, the development team continually optimises the code for reusability, performance, and other factors. Ensure that these source-code modifications are documented/tracked in a ticket so that the team can perform regression testing accordingly.

Regression Testing at Scale

An enterprise product is the result of multiple teams' contributions across geographies. While each team will independently conduct regression testing for its part, it mustn't happen only in silos. The teams must also set up cadences and processes to test all cross-team integration regression scenarios.

Crowdsourced Testing

Crowdsourced testing can help find brand-new flaws in the product, such as functionality, usability, and localization issues, thereby improving its quality.

Plan for Non-Functional Regression Testing

Non-functional elements like performance, security, accessibility, and usability must all be examined as part of your regression testing plan, in addition to functionality.

Benchmarking test execution results from past sessions and comparing them to test execution results after the most recent modifications is a simple but effective technique for detecting performance, accessibility, and other degradations.
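
As a toy illustration of that benchmarking idea (the metric names and values are made up for the example):

```js
// Compare the latest run's metrics against a stored baseline and flag degradations.
const baseline = { homepageLoadMs: 1200, contrastViolations: 0 };
const current = { homepageLoadMs: 1750, contrastViolations: 2 };

for (const [metric, baseValue] of Object.entries(baseline)) {
  if (current[metric] > baseValue) {
    console.warn(`Possible regression in ${metric}: ${baseValue} -> ${current[metric]}`);
  }
}
```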

Applications with the best functionality have either never made it to production or have been shelved after a successful launch because of substantial faults in non-functional areas.

In a similar vein, application security and accessibility issues have cost businesses millions of dollars in addition to a tarnished reputation.

The Need for an Automated Regression Test Suite

Regardless of your application architecture or development methodology, the importance of automating regression tests never fades. Be it a small-scale application or an enterprise product, having automated tests will save you time, energy, and money in the long run.

Let’s understand some reasons to automate the regression test suite:

Fast Feedback

Automated software verification is orders of magnitude faster than manual verification. Automated continuous testing in the CI/CD pipeline is a powerful approach for identifying regression bugs as close to their introduction as possible, thanks to the increased speed and frequency at which it runs.

Equally important is to look at the test results from each automated suite execution and take meaningful steps to get the product and the test suite progressively better.

Timely identification of issues avoids defect leakage into the most critical parts of the application and into later stages of testing.

Consequently, this shift left benefits the organisation in many ways beyond cost.

Automated Test Data Creation

Before getting to the actual testing, the testing teams spend a significant amount of time generating test data. Automation aids not only in the execution of tests but also in the rapid generation of large amounts of test data. The functional testing team may leverage data generated by scripts (SQL, APIs), allowing them to focus on testing rather than worrying about the data.
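
For instance, here is a small sketch of seeding data over an API before a regression run; the endpoint and payload are hypothetical.

```js
// Seed test data via an API so the tests start from a known state
// (the /api/test-users endpoint and its payload are illustrative).
before(() => {
  cy.request('POST', '/api/test-users', {
    username: 'regression-user-001',
    profile: { country: 'IN', newsletter: true },
  });
});
```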

Pagination, infinite scroll, tabular views, and app performance are a few examples of features where rapid test data generation gives the team instant test data.

Banking and insurance are regulated sectors with several complex operations and subtleties. A variety of test data is required to exercise their data models and flows, and the ability to automate test data management has proven to be a critical component of successful testing.

Address Scalability

Parallel execution of the automated test suite answers the need for faster feedback. With the right infrastructure, and provided the automated test suite is built to scale, teams can generate test results across a variety of environments, browsers, devices, and operating systems.

The Applitools Ultrafast Test Cloud is the next step forward in cross-browser testing. You run your functional and visual tests once locally using Ultrafast Grid (part of the Ultrafast Test Cloud), and the grid instantaneously generates all screens across whatever combination of browsers, devices, and viewports you choose. 

Use the Human Brain and Technology to Their Full Potential

Repetitive tasks are handled efficiently and consistently through automation. It does not make errors in the same way that people do.

It also allows humans to concentrate their ingenuity on exploratory testing, which machines cannot accomplish. You can deploy new features with a reduced time-to-market thanks to automation.

Maintenance of the Regression Test Suite

Now, let's complete the cycle by ensuring that the corresponding test cases (manual and automated) are updated immediately with every change request to any existing part of the application. These modified test cases should then become part of the regression suite.

Failing to adjust the test cases would create chaos for the teams involved. It could result in incorrect testing of the underlying application, unintended behaviour slipping through, and rollbacks.

Maintaining the regression test suite consists of adding new tests, modifying existing tests, and deleting irrelevant tests. These changes should be reflected in the manual and automated test suites.

Regression Testing Tools

There aren't separate testing tools categorised as "regression testing tools." Teams use their usual testing tools; in practice, many test automation tools are used to automate the regression test suite.

Depending on the project type, the following tools may be used in combination with the techniques mentioned in the previous section:

API Heavy Applications

APIs are the foundation of modern software development, especially as more and more teams abandon monolithic programmes in favour of a microservices-based strategy.

  • Contract-driven testing is gaining popularity, and rightly so, because it prevents regression issues from being committed to the repository in the first place, as opposed to identifying them later during the testing phase. Understand more about pacts/contracts here.
  • Specmatic is an excellent open-source solution that uses contracts available in OpenAPI spec format and turns them into executable specifications which the provider and consumer can use in parallel. It also allows you to check the contract for backward compatibility via CI.
  • Testing APIs is comparatively faster than verifying functionality through the user interface. Hence, for faster and more accurate feedback flowing across the groups, adopt automated API and API workflow testing using open-source solutions like REST-Assured, Postman, etc. A small sketch of such a check follows below.
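
As a hedged example of such an API-level regression check, here is what it might look like using Cypress's cy.request; the same idea applies in REST-Assured or Postman, and the endpoint and response fields are illustrative.

```js
// A quick API regression check on the existing login endpoint.
describe('Login API - regression', () => {
  it('still accepts username/password login', () => {
    cy.request('POST', '/api/login', {
      username: 'existing-user',
      password: 'correct-horse-battery-staple',
    }).then((response) => {
      expect(response.status).to.eq(200);
      expect(response.body).to.have.property('token');
    });
  });
});
```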

UI Heavy Applications

UI accuracy is unquestionably vital for a successful business because it directly impacts end users.

Even when utilizing the most extraordinary development processes and frontend technology, testing the UI is one of the most significant bottlenecks in a release.

Applitools is a pioneer in AI-powered automated visual regression testing. Their solution allows you to integrate Visual Testing with functional and regression UI automation and in turn get increased test coverage, quick feedback, and seamless scaling by using the Applitools Ultrafast Grid – all while writing less code. You can try out their solutions by signing up for a free account and going through the tutorials available here.


Support & Maintenance Portfolio

Teams responsible for testing legacy applications often need to explore the application first rather than blindly diving into the regression test suite.

Utilizing the results from your exploratory testing sessions to populate and validate your impact analysis documents and RTMs proves beneficial in making necessary modifications to the regression test suite.

Exploratory testing tools are incredibly valuable and can assist you in achieving your goal for the session, whether it’s to explore a component of the app, detect flaws, or determine the relationship between features.

Non-Functional Testing

Each of the following topics is a specialised field in and of itself, and it is impossible to cover them all in one blog post. This list should, however, get you thinking in that direction.

Performance Testing

  • Performance issues can occur at any tier of the software stack, including the operating system, network, disk, web, application, and database layers.
  • Open-source performance testing tools such as Apache JMeter, Gatling, Locust, Taurus, and others help detect performance issues such as concurrency, throughput, peak load, and performance bottlenecks throughout the software stack.
  • Application performance monitoring (APM) tools are also used by development teams to link coding practices to app performance throughout development.

Security Testing

  • Zed Attack Proxy (ZAP), Wfuzz, Wapiti, W3af, SQLMap, SonarQube, Nogotofail, Iron Wasp, Grabber, and Arachni are open source security testing tools that help with assessing security conditions such as Authentication, Authorization, Availability, Confidentiality, Integrity, and Non-repudiation. 
  • To reap the benefits of both methodologies, organisations combine static application security testing (SAST) with dynamic application security testing (DAST).

Accessibility Testing

  1. Use Applitools Contrast Advisor to identify contrast violations as part of your test automation execution. This solution works for native Android apps, native iOS apps, all Web Browsers including mobile-web, PDF documents and images.
  2. Screen readers – VoiceOver, NVDA, JAWS, Talkback, etc.
  3. WAT (Web accessibility toolbar) – WAVE, Colour Contrast Analyser, etc.

Summary

A well-thought-out regression testing plan will aid your team in achieving your QA and software development goals, whether the architecture is monolithic or microservices-based, and whether the application is new or old. You can learn about how Applitools can help with functional and visual regression testing here.

Editor’s Note: This post was originally published in January 2022, and has since been updated for completeness and accuracy.

A Guide to Automated Testing with Drupal and Applitools


Introduction

Traditionally, web applications on Drupal are built from entities like content types, blocks, and Layout Builder components, and the product is then made available to the end-user on the front end using HTML, CSS and JavaScript. The team usually starts with back-end stories related to building the various content types and the related roles and permissions, and the front-end team then picks it up to make the site more usable and accessible as per the design requirements.

Of course, with component libraries like Storybook, Fractal, and PatternLab, and with designs in place, the front-end team can start implementing the components in parallel, which are later integrated with Drupal.

Let’s talk about testing the application in the next section.

Automated Testing in Drupal

Behat, PHPUnit, Drupal Test Traits (DTT), and NightwatchJS are the most widely used tools for automating tests with Drupal. These tools are popular within the Drupal community for several reasons: they are PHP-based frameworks (apart from NightwatchJS), offer ready-to-use Drupal extensions/plugins, and have huge community support.

With these tools, one can automate unit, integration, and acceptance-level tests. But what about automating the visual tests? That's the missing tip of the pyramid, which we will address in this blog post.

We have used Cypress to automate the browser and Applitools for AI-based visual validation. Our reasons for using Cypress over other tools are many, including the following:

  1. One can quickly get started with writing actual tests with Cypress as compared to Selenium.
  2. Cypress enables fast-paced test execution.
  3. Our POC with Cypress + Drupal proved that testing the Drupal side of things can also be achieved with Cypress.
  4. Cypress offers harmonious integration with Applitools. Having said that, please note that Applitools also has SDKs for Selenium PHP, NightwatchJS, and many more, in case your existing functional automation suites are written using any of the other testing frameworks.
  5. Since Cypress is a JS-based framework, developers can also contribute to writing automated tests.

The site used to demonstrate the concept is Drupal Umami; the major advantage is that the site is already built, so we can focus directly on writing automated visual tests without having to worry about creating these pages.

NOTE: If you are completely new to the concept of automated visual validation testing, then please refer to the course “Automated Visual Testing: A Fast Path To Test Automation Success” on Test Automation University from Angie Jones.

Applitools and Drupal

Applitools provides an SDK for Cypress which makes it very easy to integrate automated visual validation tests into the same functional test suite created with Cypress. The steps to configure Applitools with Cypress are straightforward, and you can refer to the official documentation for more details. Let's take a look at the test written for the homepage, shown in the gist below:

View the code on Gist.
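
For readers who can't see the embedded gist, a minimal sketch of such a test with the Applitools Eyes SDK for Cypress might look like this; the app and test names are illustrative.

```js
// A sketch of the homepage visual test using the Applitools Eyes SDK for Cypress.
describe('Drupal Umami - Homepage', () => {
  it('renders the homepage correctly', () => {
    cy.eyesOpen({ appName: 'Drupal Umami', testName: 'Homepage' });
    cy.visit('/');
    cy.eyesCheckWindow({
      tag: 'Homepage',   // name of this visual checkpoint
      target: 'window',  // check the whole window rather than a single element
      fully: true,       // capture the full page, not just the current viewport
    });
    cy.eyesClose();
  });
});
```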

The test in the above example launches the homepage and verifies the UI using the “checkWindow()” command. The checkWindow() command takes in the following parameters:

  1. tag: The name of the test step.
  2. target: Whether to check the whole window or a particular element.
  3. fully: Whether to capture the entire page or only the current viewport.

That’s it! And you are ready to execute the first automated visual test. So, let’s execute it using the command `npx cypress run`, assuming that the baseline image was captured correctly on the first run.

Here’s a small screencast for the same.

Interpreting the automated test execution results

Now that the tests have been executed, let’s look at the execution results, which brings us to the Applitools dashboard. Here’s the passed test result.

The tests passing is a good thing. However, that's not the primary reason for having automated tests. You want the tests to correctly detect discrepancies as close to the point of introduction as possible. For the purpose of this demo, we have intentionally introduced a couple of CSS bugs on the homepage through Cypress's invoke command. Once the script launches the homepage, CSS changes are made at run time in the browser and then the screenshots are captured, as shown below:

View the code on Gist.
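
Again, since the gist may not render here, the run-time CSS tweaks might look roughly like the sketch below; the selectors are illustrative, not taken from the original gist, and the check would sit inside the same eyesOpen()/eyesClose() block as before.

```js
// Inject deliberate CSS bugs at run time via Cypress's invoke(), which calls
// jQuery's .css() on the selected elements (selectors are illustrative).
cy.visit('/');
cy.get('.block-search').invoke('css', 'margin-left', '40px');          // shift the search bar
cy.get('.banner a.button').invoke('css', 'color', '#444444');          // change the button font color
cy.get('footer a.read-more').invoke('css', 'text-decoration', 'none'); // alter the footer link
cy.eyesCheckWindow({ tag: 'Homepage', target: 'window', fully: true });
```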


Let’s re-execute our test to see how the tool catches these bugs. The tool has correctly highlighted the three errors (in pink color) below that we introduced on purpose:

  1. The search bar in the header has shifted from its original position.
  2. The font color of the “View recipe” button on the banner has changed.
  3. The “Find out more” link in the footer has changed.



Reference Image

Test run image

We confirm that these are indeed bugs, reject the image to mark the test as failed, and then report the bugs in Jira directly from Applitools. Additionally, the root cause analysis feature from Applitools helps us quickly identify the visual (UI) issues, in this case caused by CSS changes, as shown in the images below:

(Screenshot: root cause analysis highlighting the CSS change)

So far, this has been about a single browser. However, if we really want to leverage the automated tests written for validating the UI, the true benefit lies in being able to execute them across several browsers, operating systems, and devices. Verifying that the UI looks correct on one browser or device doesn't guarantee that it will look exactly the same on all other browsers and devices, because rendering can differ.

Cross-browser/device/OS testing using the Ultrafast Test Cloud

Now that we have seen and understood how automated visual testing is done with one browser, let’s discuss some points that need to be accounted for to scale up your automated testing:

  1. You need to configure your suite to execute automated tests across several browsers and devices. This not only increases test authoring time but also creates a dependency on technical staff, as the logic to run every test across the various browsers and devices has to be coded.
  2. Linear (sequential) automated test execution increases execution time, resulting in longer build times and delayed feedback to the team.
  3. You must maintain an in-house grid for parallel execution, or purchase additional subscriptions from cloud-based providers to run automated tests in parallel.

This brings us to discussing the Applitools Ultrafast Test Cloud which inherently takes care of the above points.

With the Applitools Ultrafast Test Cloud, you can execute automated visual validation tests across several browsers, operating systems, and devices of your choice at lightning speed: the test runs only once on, say, Chrome, while the pages are captured in parallel, in the background, for all the configured browsers and viewports.

So, let's write some more tests for the Articles and Recipes landing and listing pages on the site. Let's also execute these tests in parallel on several browsers/devices, as configured below using the Applitools Ultrafast Grid solution:

View the code on Gist.
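
One possible applitools.config.js for this setup is sketched below; the exact browser/device mix is a guess that simply matches the seven configurations mentioned in the results.

```js
// applitools.config.js - a sketch of an Ultrafast Grid configuration with
// seven browser/device combinations (the specific choices are illustrative).
module.exports = {
  testConcurrency: 5,
  batchName: 'Drupal Umami - Visual Regression',
  browser: [
    { width: 1280, height: 800, name: 'chrome' },
    { width: 1280, height: 800, name: 'firefox' },
    { width: 1280, height: 800, name: 'safari' },
    { width: 1280, height: 800, name: 'edgechromium' },
    { deviceName: 'iPhone X', screenOrientation: 'portrait' },
    { deviceName: 'Pixel 2', screenOrientation: 'portrait' },
    { deviceName: 'iPad Pro', screenOrientation: 'landscape' },
  ],
};
```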

Here are the Ultrafast Grid Test Results across several browsers and devices.

To be precise, here are the details:

  1. Number of browsers and devices = 7
  2. Total number of functional tests = 6
  3. Total number of visual tests = 7*6 = 42
  4. The time taken for complete execution on Ultrafast grid – 5 minutes 2 seconds

Execute once and see the results on so many browsers and devices. Now, that’s what I call truly automating your visual tests.

Also, notice that we have used the Applitools batch feature to logically group the test results, making them much easier to read.

Other tools in the market

There are many other open-source tools, such as BackstopJS, Shoov, Gemini, and the visual regression service for WebdriverIO, to name only a few, but none of them has the Applitools advantage. We will look at a few of the many reasons in the coming section.

The Applitools Advantage

  1. The AI-driven image comparison algorithm is incredibly accurate and avoids the false positives that otherwise occur in a pixel-to-pixel comparison. The time it would take to troubleshoot false positives, especially on full-page screenshots, would be cost-prohibitive. Pixel-based comparison is fine for verifying small components over a short period of time; beyond that, it breaks down.
  2. Seamless integration with your existing functional automation suite through the many web, mobile, screenshot, desktop, and codeless SDKs available for testing and automation frameworks like Cypress, Selenium, WebdriverIO, Nightwatch, and Appium, and for languages like PHP, Java, JavaScript, C#, and Python, to name only a few.
  3. With the help of Ultrafast Test Cloud, your entire web application can be tested for visual accuracy at a fast speed (as the tests run only once whereas the visual rendering validations happen in parallel on several browsers and devices in the background) with guaranteed reliability and security.
  4. Applitools also provides out of the box integration with the Storybook component library for React, Vue and Angular.
  5. Learn more about the following Applitools Eyes integrations on their site:
    1. Integration with GitHub
    2. Integration with Microsoft Azure
    3. Integration with GitLab
    4. Integration with BitBucket
    5. Integration with Jira
    6. Integration with Email and Slack for notifications

What next?

Sign up for a free account with Applitools and feel free to clone this repository to try it out on your own. Integrating automated visual validation tests into your project will help you build and release visually perfect web applications and websites confidently and at a faster rate.

Featured Image by Alex Wong on Unsplash
