Announcing Applitools Centra: UI Validation From Design to Implementation
https://applitools.com/blog/announcing-applitools-centra-ui-validation-from-design-to-implementation/
Mon, 10 Apr 2023


The user interface (UI) is the last frontier of differentiation for companies of all sizes. When you think about financial institutions, a lot of the services that they offer digitally are exactly the same. A lot of the services and the data they all tap into have been commoditized. What hasn’t been commoditized is the actual digital online experience – what it looks like and how you complete actions.

“Examined at an organizational level, a mature design thinking practice can achieve an ROI between 71% and 107%, based on a consistent series of inputs and outputs.”

The ROI Of Design Thinking, Forrester Business Case Report

The challenges of building UI

(Diagram: the idealized design-to-production flow)

Modern UIs today are built by a diverse set of teams that work together at different parts of the process. The pace at which these design, development, QA, operations, marketing, and product teams ship their work is continuing to accelerate – creating new challenges around communication, collaboration, and validation across the workflow.

(Diagram: the realistic design-to-production flow, with feedback loops)

Getting from design mock-ups in Figma to a live UI is a process that involves a lot of feedback and testing. It starts with the designer, who passes the mock-ups to the product manager for approval before the developer can start building. Feedback during development requires rework before the product manager can approve the result. All of this happens before the testing team has even started their review.

You can see the game of telephone played across these stakeholders on the way to production: at each handoff, what ships drifts slightly from the original design. This makes measuring what actually happened, and what actually needs to change, incredibly hard, placing a huge burden on teams trying to ship clean UIs at a fast pace. Some of our main challenges here are:

  • Lack of communication between the growing group of stakeholders
  • Breadth of technology during implementation causing inconsistencies
  • No continued source of truth across tooling as the app UI evolves

How Applitools Centra helps UI teams collaborate

Applitools’ newest product, Centra, is a collaboration platform for teams of all sizes that alleviates these challenges. Applitools Centra enables organizations to track, validate, and collaborate on UIs from design to production. Centra uploads application designs from tools like Figma to the Applitools Test Cloud. Then, Centra compares the designs against current baselines in local, staging, or production environments. Designers, developers, testers, and digital leaders can then validate that their application interface looks exactly as intended.

Benefits of using Applitools Centra

  • Less drift in the UI: By comparing design and implementation throughout the development lifecycle, teams can cut down on the amount of drift between design and production that occurs in their UI.
  • Design as documentation: Disseminate designs as a single source of truth across teams so that QA teams will know exactly what interfaces are supposed to look like during validation. 
  • Increased cross-functional collaboration: Teams from different functions across the design-to-experience process can all communicate over the interfaces that they are shipping. Product Managers, Designers, and Developers can all have equal visibility into what actually makes it to production.
  • Catching bugs earlier: Shift left into design and catch bugs earlier in the SDLC – right at the moment of implementation, when the cost to fix is at its lowest.

Start using Applitools Centra

Check out the full demo of Centra in our announcement webinar. Centra is free to use for teams, and you can sign up for the waitlist to start using it on your teams.

Getting Started with Localization Testing
https://applitools.com/blog/localization-testing/
Thu, 18 Aug 2022


Learn about common localization bugs, the traditional challenges involved in finding them, and solutions that can make localization testing far easier.

What is Localization?

Localization is the process of customizing a software application that was originally designed for a domestic market so that it can be released in a specific foreign market.

How to Get Started with Localization

Localization testing usually involves substantial changes to the application’s UI, including translation of all text into the target language, replacement of icons and images, and many other culture-, language-, and country-specific adjustments that affect the presentation of data (e.g., date and time formats, alphabetical sort order, etc.). Due to the lack of in-house language expertise, localization usually involves in-house personnel as well as outside contractors and localization service providers.

Before a software application is localized for the first time, it must undergo a process of Internationalization.

What is Internationalization?

Internationalization often involves an extensive development and re-engineering effort whose goal is to allow the application to operate in localized environments and to correctly process and display localized data. In addition, locale-specific resources such as text, images, and documentation files are isolated from the application code and placed in external resource files, so they can be replaced without further development effort.
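As a minimal, hypothetical sketch of this separation (the bundle shapes and the `t` lookup helper are illustrative, not any particular i18n library’s API), externalized strings might live in per-locale bundles:

```javascript
// Per-locale resource bundles kept outside the application code.
const bundles = {
  'en-US': { greeting: 'Hello', save: 'Save' },
  'de-DE': { greeting: 'Hallo', save: 'Speichern' },
};

// Look up a string for the active locale, falling back to English.
// A fallback showing up in the UI is exactly the "untranslated text"
// bug described above.
function t(locale, key) {
  const bundle = bundles[locale] || bundles['en-US'];
  return bundle[key] !== undefined ? bundle[key] : bundles['en-US'][key];
}

console.log(t('de-DE', 'save'));     // "Speichern"
console.log(t('fr-FR', 'greeting')); // no fr-FR bundle, falls back to "Hello"
```

Because the application code only references keys like `save`, adding a new locale means adding a new bundle, with no further engineering effort.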

Once an application is internationalized, the engineering effort required to localize it to a new language or culture is drastically reduced. However, the same is not true for UI localization testing.

The Challenge of UI Localization Testing

Every time an application is localized into a new language, the application itself changes, or the resources of an existing localization change, the localized UI must be thoroughly tested for localization and internationalization (LI) bugs.

Common Localization and Internationalization Bugs Most Testers can Catch

LI bugs that can be detected by testers who are not language experts include:

  • Broken functionality – the execution environment, data, or translated resources of a new locale may uncover internationalization bugs that prevent the application from running or break some of its functionality.
  • Untranslated text – text appearing in text fields or images of the localized UI is left untranslated. This indicates that certain resources were not translated or that the original text is hard-coded in the UI and not exported to the resource files.
  • Text overlap / overflow – the translated text may require more space than is available in its containing control, resulting in the text overflowing the bounds of the control and possibly overlapping or hiding other UI elements.
  • Layout corruption – UI controls dynamically adjust their size and position to the expanded or contracted size of the localized text, icons, or images, resulting in misaligned, overlapping, missing, or redundant UI artifacts.
  • Oversized windows and dialogs – multiple expanded texts and images can result in oversized tooltips, dialogs, and windows. In extreme cases, expanded dialogs and windows may only be partially visible at low screen resolutions.
  • Inadequate fonts – a control’s font cannot properly display some characters of the target language. This usually results in question marks or placeholder glyphs being displayed instead of the expected text.
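Several of these bugs (overlap, overflow, layout corruption) reduce to one geometric question: does the rendered text still fit inside its containing control? A minimal sketch of that check, using hypothetical rectangles in screen coordinates rather than a real rendering engine:

```javascript
// A rectangle in screen coordinates: { x, y, width, height }.
// Returns true if the text box sticks out of its container in any
// direction, i.e. a potential overflow/truncation bug.
function overflows(text, container) {
  return (
    text.x < container.x ||
    text.y < container.y ||
    text.x + text.width > container.x + container.width ||
    text.y + text.height > container.y + container.height
  );
}

const button  = { x: 0,  y: 0, width: 120, height: 30 };
const english = { x: 10, y: 5, width: 60,  height: 20 }; // "Save"
const german  = { x: 10, y: 5, width: 140, height: 20 }; // longer translation

console.log(overflows(english, button)); // false — fits
console.log(overflows(german, button));  // true — expanded text overflows
```

A visual testing tool performs this kind of comparison across every element of every screen, which is why it catches these bugs without a human inspecting each control.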

Localization and Internationalization Bugs Requiring Language Expertise

Other common LI bugs which can only be detected with the help of a language expert include:

  • Mistranslation – text that appears once in the resource files may appear multiple times in different parts of the application. The context in which the text appears can change its meaning and require a different translation.
  • Wrong images and icons – images and icons were replaced with wrong or inappropriate graphics.
  • Text truncation – the translated text may require more space than is available in its containing control, resulting in a truncated string.
  • Locale violations – wrong date, time, number, and currency formats, punctuation, alphabetical sort order, etc.
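Locale violations are easy to demonstrate, because the same date or number has a different canonical rendering in each locale. JavaScript’s built-in Intl API (shown here as an illustration; the outputs assume a runtime with full ICU locale data, the default in modern Node.js) shows the kind of differences a language expert checks for:

```javascript
const date = new Date(Date.UTC(2022, 7, 18)); // 18 August 2022

// Date order differs by locale: month/day/year vs. day.month.year.
console.log(new Intl.DateTimeFormat('en-US', { timeZone: 'UTC' }).format(date)); // "8/18/2022"
console.log(new Intl.DateTimeFormat('de-DE', { timeZone: 'UTC' }).format(date)); // "18.8.2022"

// Grouping and decimal separators are swapped between these locales.
console.log(new Intl.NumberFormat('en-US').format(1234.5)); // "1,234.5"
console.log(new Intl.NumberFormat('de-DE').format(1234.5)); // "1.234,5"
```

An application that hard-codes one of these formats instead of formatting per locale will exhibit exactly this class of bug.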

Localization and Internationalization Bugs are Hard to Find

An unfortunate characteristic of LI bugs is that they require a lot of effort to find. To uncover them, a tester (assisted by a language expert) must carefully inspect every window, dialog, tooltip, menu item, and any other UI state of the application. Since most of these bugs are sensitive to the size and layout of the application, tests must be repeated on a variety of execution environments (e.g., different operating systems, web browsers, devices) and screen resolutions. Furthermore, if the application window is resizable, tests should also be repeated for various window sizes.

Why is UI Localization Testing Hard?

There are several other factors that contribute to the complexity of UI Localization testing:

  • Lack of automation – most of the common LI bugs listed above are visual and cannot be effectively detected by traditional functional test automation tools. Manual inspection of a localized UI is also slower than of a non-localized UI, because the localized text is unreadable to the tester.
  • Lack of in-house language expertise – since many common LI bugs can only be detected with the help of external language experts, who are usually not testers and are not familiar with the application under test, LI testing often requires an in-house tester to perform tests together with a language expert. These experts often work on multiple projects for multiple customers in parallel, and their occasional lack of availability can substantially delay test cycles and product releases. Similar delays can occur while waiting for the translation of changed resources, or for translation bugs to be fixed.
  • Time constraints – localization projects usually begin at late stages of the development lifecycle, after the application UI has stabilized. In many cases, testers are left with little time to properly perform localization tests, and are under constant pressure to avoid delaying the product release.
  • Bug severity – UI localization bugs such as missing or garbled text are often considered critical, and therefore must be fixed and verified before the product is released.

Due to these factors, maintaining multiple localized application versions and adding new ones incurs a huge overhead on quality assurance teams.

Fortunately, there is a modern solution that can make localization testing significantly easier – Automated Visual Testing.

How to Automate Localization Testing with Visual Testing

Visual test automation tools can be applied to UI localization testing to eliminate unnecessary manual involvement of testers and language experts, and drastically shorten test cycles.

To understand this, let’s first understand what visual testing is, and then how to apply visual testing to localization testing.

What is Visual Testing?

Visual testing is the process of validating the visual aspects of an application’s User Interface (UI).

In addition to validating that the UI displays the correct content or data, visual testing focuses on validating the layout and appearance of each visual element of the UI and of the UI as a whole. Layout correctness means that each visual element of the UI is properly positioned on the screen, is of the right shape and size, and doesn’t overlap or hide other visual elements. Appearance correctness means that the visual elements are of the correct font, color, or image.

Visual Test Automation tools can automate most of the activities involved in visual testing. They can easily detect many common UI localization bugs such as text overlap or overflow, layout corruptions, oversized windows and dialogs, etc. All a tester needs to do is to drive the Application Under Test (AUT) through its various UI states and submit UI screenshots to the tool for visual validation.

For simple websites, this can be as easy as directing a web browser to a set of URLs. For more complex applications, buttons or links must be clicked, or forms filled in, to reach certain screens. Driving the AUT through its different UI states can be easily automated using a variety of open-source and commercial tools (e.g., Selenium, Cypress). If the tool is configured to rely on internal UI object identifiers, the same automation script can drive the AUT in all of its localized versions.
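The flow just described can be sketched in a few lines. Here `captureScreenshot` and `submitForValidation` are hypothetical stand-ins for your automation driver (e.g. Selenium or Cypress) and your visual testing API, not real library calls:

```javascript
// Pages of the AUT to drive through. Because navigation is by path
// rather than by visible text, the same list works for every
// localized version of the application.
const pages = ['/', '/ingredients', '/recipes/chocolate-cake'];

// Visit each page and submit its screenshot for visual validation.
async function runVisualChecks(captureScreenshot, submitForValidation) {
  const results = [];
  for (const path of pages) {
    const screenshot = await captureScreenshot(path);
    results.push(await submitForValidation(path, screenshot));
  }
  return results;
}

// Example run with stub implementations in place of real tooling:
runVisualChecks(
  async (path) => `screenshot-of-${path}`,
  async (path, shot) => ({ path, status: 'submitted' })
).then((results) => console.log(results.length)); // 3
```

The point of the sketch is the shape of the loop: all the visual assertions live in the validation service, so the script itself stays trivial and locale-independent.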

So, how can we use this to simplify UI localization testing?

How Automated Visual Testing Simplifies UI Localization Testing

  • Preparation – to give translators the context required to properly localize the application, screenshots of the application’s UI are often delivered along with the resource files to be localized. Manually collecting these screenshots is laborious, time consuming, and error prone. When a visual test automation tool is in place, updated screenshots of all UI states are always available and can be shared with translators at the click of a button. When an application changes, the tool can highlight only those screens (in the source language) that differ from the previous version, so that only those screens are provided to translators. Some visual test automation tools also provide animated “playbacks” of tests showing the different screens and the human activities leading from one screen to the next (e.g., clicks, mouse movements, keyboard strokes). Such playbacks provide much more context than standalone screenshots and are more easily understood by translators, who are usually not familiar with the application being localized. Employing a visual test automation tool can substantially shorten the localization project’s preparation phase and help produce higher quality preliminary translations, which in turn can lead to fewer and shorter test cycles.
  • Testing localization changes – visual test automation tools work by comparing screenshots of an application against a set of previously approved “expected” screenshots called the baseline. After receiving the translated resources and integrating them with the application, a visual test of the updated localized application can be automatically executed using the previous localized version as a baseline. The tool will then report all screens that contain visual changes and will also highlight the exact changes in each of the changed screens. This report can then be inspected by testers and external language experts without having to manually interact with the localized application. By only focusing on the screens that changed, a huge amount of time and effort can be saved. As we showed above, most UI localization bugs are visual by nature and are therefore sensitive to the execution environment (browser, operating system, device, screen resolution, etc.). Since visual test automation tools automatically execute tests in all required execution environments, testing cycles can be drastically shortened.
  • Testing new localizations – when localizing an application for a new language, no localized baseline is available to compare with. However, visual test automation tools can be configured to perform comparisons at the layout level, meaning that only layout inconsistencies (e.g., missing or overflowing text, UI elements appearing out of place, broken paragraphs or columns, etc.) are flagged as differences. By using layout comparison, a newly localized application can be automatically compared with its domestic version, to obtain a report indicating all layout inconsistencies, in all execution environments and screen resolutions.
  • Incremental validation – when localization defects are addressed by translators and developers, the updated application must be tested again to make sure that all reported defects were fixed and that no new defects were introduced. By using the latest localized version as the baseline with which to compare the newly updated application, testers can easily identify the actual changes between the two versions, and quickly verify their validity, instead of manually testing the entire application.
  • Regression testing – whenever changes are introduced to a localized application, regression testing must be performed to make sure that no localization bugs were introduced, even if no direct changes were made to the application’s localizable resources. For example, a UI control can be modified or replaced, the contents of a window may be repositioned, or some internal logic that affects the application’s output may change. It is practically impossible to manually perform these tests, especially with today’s Agile and continuous delivery practices, which dictate extremely short release cycles. Visual test automation tools can continuously verify that no unexpected UI changes occur in any of the localized versions of the application, after each and every change to the application.
  • Collateral material – in addition to localizing the application itself, localized versions of its user manual, documentation, and other marketing and sales collateral must be created. For this purpose, updated screenshots of the application must be obtained. As described above, a visual test automation tool can provide up-to-date screenshots of any part of the application in any execution environment. The immediate availability of these screenshots significantly reduces the chance of including out-of-date application images in collateral and eliminates the manual effort involved in obtaining them after each application change.

Application localization is notoriously difficult and complex. Manually testing for UI localization bugs, during and between localization projects, is extremely time consuming, error-prone, and requires the involvement of external language experts.

Visual test automation tools are a modern breed of test automation tools that can effectively eliminate unnecessary manual involvement, drastically shorten the duration of localization projects, and increase the quality of localized applications.

Applitools Automated Visual Testing and Localization Testing

Applitools has pioneered the use of Visual AI to deliver the best visual testing in the industry. You can learn more about how Applitools can help you with localization testing, or get started with Applitools today by requesting a demo or signing up for a free Applitools account.

Editor’s Note: Parts of this post were originally published in two parts in 2017/2018, and have since been updated for accuracy and completeness.

How to Simplify UI Tests with Bi-Directional Contract Testing
https://applitools.com/blog/how-to-simplify-ui-tests-bi-directional-contract-testing/
Wed, 22 Jun 2022


Learn how you can improve and simplify your UI testing using micro frontends and new Pactflow bi-directional API contract testing.

End-to-End Testing within Microservices

When you are writing end-to-end tests with Cypress, you want to make sure your tests are not flaky, run quickly, and are independent of any dependencies. What if you could add contract tests to stabilise, speed up, and isolate your UI tests? There’s a new kid on the block: this is now possible with Pactflow’s new feature, bi-directional contracts. UI tests offer confidence that the application works end-to-end, which is why utilising contract tests can eliminate some of the challenges mentioned above. To simplify the explanation of this testing approach, I’m using a recipe web application to describe the interactions between the consumer (the web app) and the provider (the API service). If you want to learn more about API contract testing, check out the Pactflow docs.

(Image: the recipe web app on an iPad)

The term “microservices” was first coined in 2011, and microservices have since become a popular way to build web services. With their adoption, testing techniques have had to adapt as well. Integration tests become really important when testing microservices, ensuring that changes don’t impact consuming services or applications.

Micro Frontends started being recognised around 2016. Often when building microservices you need a micro frontend to make the application truly independent. In this setup, the integration between the web app and the API service is much easier to test in isolation. An architecture that uses micro frontends and microservices together means you can release changes quickly and with confidence. Add contract testing to the mix, and you can apply the same independent approach to end-to-end testing as well.

Traditionally, running end-to-end tests looks a little something like this:

(Diagram: the traditional end-to-end test flow)

How to Simplify Your UI Tests with Contract Testing

With this traditional approach, integration points are covered by the end-to-end tests, which can take quite a while to run, are difficult to maintain, and are often costly to run within the continuous integration pipeline. Contract testing does not replace the need for integration tests, but it minimises the number of tests needed at that level.

The introduction of bi-directional contract tests means you can now generate contracts from your UI component tests or end-to-end tests. This is a great opportunity to utilise the tests you already have, and it provides confidence that the application works end-to-end without running a large suite of end-to-end tests. Once generated, the contracts can also be used as stubs within your Cypress tests.

In my podcast, I spoke to a developer advocate from Pactflow who told me they had realized there was a barrier to getting started with contract testing: engineers already had tools that defined contract interactions, such as mocks or pre-defined OpenAPI specifications. Duplicating that work by adding Pact code to generate contracts that had already been defined seemed like a lot of effort. Often development teams realise the potential of introducing contracts between services but don’t quite know how to get started or what the true benefits are.

What Benefits Do API Contract Tests Bring to Your UI Tests?

  • End-to-end tests can run in isolation, while retaining the confidence of fully integrated tests
  • Service providers verify any API changes before deploying, making dependent applications more stable
  • How the consumer app interacts with the API service is visualised and better understood as a result
  • Versioning and tagging contracts allows you to deploy safely between environments

In a world of micro frontends and microservices, it’s important to isolate services while ensuring quality is not impacted. By adding contract tests to your UI testing suite, you not only gain the benefits listed above, you also save time and cost. Running tests in isolation means your tests are faster, with a shorter feedback loop and no need to rely on a dedicated integration environment, reducing environment costs.

The Benefits of Bi-Directional Contract Testing

When building the example recipe app, two teams were involved in defining the API schema. An API contract was documented on the teams’ wiki, which presents the ingredients for a specific cake recipe. Both teams go away and build their parts of the application in line with the API documentation. 

The frontend team uses mocks to test and build the recipe Micro Frontend¹. They want to deploy their Micro Frontend to an environment to see whether they can successfully integrate with the ingredients API service². During development they also realized they needed another field in the ingredients service³, so they communicated with the API team, and a developer on that team made the change in the code, which generates a new Swagger/OpenAPI document⁴ (however, they didn’t update the wiki documentation).

From this scenario there are a couple of things to draw attention to (see numbers 1-4 above):

  1. Mocks are often used to test integrations which can be utilised within bi-directional contract testing as test scenarios
  2. With contract testing you don’t need a dedicated environment in order to test the interactions between web app and API service
  3. Specifications defined before development often change during implementation which can be documented and continuously updated within a centralised contract store such as Pactflow
  4. OpenAPI specifications generated by code can be uploaded to the Pact broker as well, where they can be compared directly with the frontend mocks

As mentioned earlier, the introduction of bi-directional contract testing allows you to generate contracts from your existing tests. Pactflow now provides adaptors which you can use to generate contracts from your mocks for example using Cypress:

describe('Great British Bake Off', () => {
    before(() => {
        // Name the consumer and provider recorded in the generated contract
        cy.setupPact('bake-off-ui', 'ingredients-api')
        // Stub the ingredients endpoint; each intercepted request/response
        // pair becomes an interaction in the contract
        cy.intercept('http://localhost:5000/ingredients/chocolate',
        {
          statusCode: 200,
          body: ["sugar"],
          headers: { 'access-control-allow-origin': '*' }
        }).as('ingredients')
    })

    it('Cake ingredients', () => {
        cy.visit('/ingredients/chocolate')
        cy.get('button').click()
        // Wait for the stubbed call and record it into the pact file
        cy.usePactWait('ingredients').its('response.statusCode').should('eq', 200)
        cy.contains('li', 'sugar').should('be.visible')
    })
})

Once you have generated a contract from your end-to-end tests, the interactions with the service are passed to the API provider via the contract store hosted in Pactflow. Sharing the contracts verifies that the web app’s actual behaviour after implementation aligns with the API service, and surfaces any changes that occur after initial development. Think of it as sharing test scenarios with the backend engineers, which they replay against the service they have built. The contract document looks similar to this:

{
    "consumer": {
        "name": "bake-off-ui"
    },
    "provider": {
        "name": "ingredients-api"
    },
    "interactions": [
        {
            "description": "Cake ingredients",
            "request": {
                "method": "GET",
                "path": "/ingredients/chocolate",
                "headers": {
                    "accept": "application/json"
                }
            },
            "response": {
                "status": 200,
                "body": [
                    "sugar"
                ]
            }
        }
    ]
}

Once the OpenAPI specification has been uploaded by the API service and the contracts have been uploaded by the web application to Pactflow, there is just one more step remaining: calling can-i-deploy, which compares both sides and checks that everything is as expected. Voila, the process is complete! You can now safely run tests that are verified by the API service provider and reflect the actual behaviour of the web application.
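The can-i-deploy check is typically run from CI using the Pact Broker CLI. A sketch of the invocation (the pacticipant name matches the example contract above; the version, broker URL, token, and target environment are placeholders for your own setup):

```shell
# Ask Pactflow whether this version of the web app is safe to deploy:
# it checks the consumer contract against the provider's verified
# OpenAPI specification.
pact-broker can-i-deploy \
  --pacticipant bake-off-ui \
  --version 1.0.0 \
  --to-environment production \
  --broker-base-url "$PACT_BROKER_BASE_URL" \
  --broker-token "$PACT_BROKER_TOKEN"
```

The command exits non-zero if the compatibility check fails, so it can gate a deployment step directly.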

Changing the Mindset of API Test Responsibility

I know it’s a lot to take in and can be a bit confusing to get your head around this testing approach, especially when you are used to the traditional way of testing integrations with a dedicated test environment or by calling the endpoints directly from within your tests. I encourage you to read more about contract testing on my blog, and to listen to my podcast where we talk about how to get started with contract testing.

When you are building software, quality is everyone’s responsibility and everyone is working towards the same goal. Looked at that way, the interactions between services are everyone’s responsibility too. I have often been involved in conversations where the development team building the API service said that what happens outside their code is not their responsibility, and vice versa. Introducing contracts to your UI tests helps break down this perception and start conversations in which you and the API development team speak the same language.

For me, the biggest benefit that comes from implementing contract tests is the conversations that come out of it. Having these conversations about API design early, with clear examples, makes developing microservices and micro frontends much easier.

The Benefits of Visual AI over Pixel-Matching & DOM-Based Visual Testing Solutions
https://applitools.com/blog/visual-ai-vs-pixel-matching-dom-based-comparisons/
Fri, 10 Jun 2022


The visual aspect of a website or app is the first thing end users encounter when using the application. For businesses to deliver the best possible user experience, having appealing and responsive websites is an absolute necessity.

More than ever, customers expect apps and sites to be intuitive, fast, and visually flawless. The number of screens across applications, websites, and devices is growing faster than ever, and the cost of testing is rising with it. Managing visual quality effectively has become a must.

Visual testing is the automated process of comparing the visible output of an app or website against an expected baseline image.

In its most basic form, visual testing, sometimes referred to as visual UI testing, visual diff testing, or snapshot testing, compares differences in a website page or device screen by looking at pixel variations. In other words, it tests a web or native mobile application by looking at the fully rendered pages and screens as they appear to customers.

The Different Approaches Of Visual Testing

While visual testing has been a popular solution for validating UIs, the traditional methods of getting it done have many flaws. Historically, there have been two traditional methods of visual testing: DOM diffs and pixel diffs. These methods have led to an enormous number of false positives and a lack of confidence from the teams that have adopted them.

Applitools Eyes, the only visual testing solution to use Visual AI, solves these shortcomings – vastly improving test creation, execution, and maintenance.

The Pixel-Matching Approach

This refers to pixel-by-pixel comparison, in which the testing framework flags literally any difference it sees between two images, regardless of whether the difference is visible to the human eye.

While such comparisons provide an entry point into visual testing, they tend to be flaky and can produce a lot of false positives, which are time-consuming to triage.

When working with the web, you must take into consideration that pages tend to render slightly differently between page loads and browser updates. If the browser renders the page off by one pixel due to a rendering change, your text cursor is showing, or an image renders differently, your release may be blocked by these false positives.

Pixel-based comparisons exhibit the following deficiencies:

  • A comparison passes ONLY if the checkpoint image and the baseline image are identical – every single pixel of every component placed in exactly the same way. 
  • These comparisons are extremely sensitive: if anything changes (font, colors, component size) or the page renders differently, you get a false positive.
  • As mentioned above, they cannot handle dynamic content, shifting elements, or different screen sizes, so the approach is a poor fit for modern responsive websites.

Take for instance these two examples:

  1. When a “-” sign in a line of text is changed to a “+” sign, many browsers add a few pixels of padding around the line based on formatting rules. This tiny change throws off your entire baseline and flags the whole page as one massive bug. 
  2. When your favorite browser updates, the engine it uses to transform colors often improves, introducing shifts in your UI's pixels that are not visible to the human eye. Colors with no perceptible change will still fail pixel-based visual tests.
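To make the failure mode concrete, here is a deliberately naive pixel-by-pixel comparison sketched in Python. This is not the code of any real tool; the tiny list-of-lists "images" are invented for illustration. A one-pixel horizontal shift of otherwise identical content flags every pixel as different:

```python
def pixel_diff(baseline, checkpoint):
    """Return the fraction of pixels that differ between two equal-sized images."""
    total = diffs = 0
    for row_a, row_b in zip(baseline, checkpoint):
        for a, b in zip(row_a, row_b):
            total += 1
            diffs += a != b
    return diffs / total

baseline = [[0, 1, 2, 3, 4]] * 4   # a 4x5 "image" of pixel values
shifted  = [[9, 0, 1, 2, 3]] * 4   # the same content shifted right by one pixel

print(pixel_diff(baseline, baseline))  # 0.0 -> identical images pass
print(pixel_diff(baseline, shifted))   # 1.0 -> every single pixel is flagged
```

In a real screenshot the effect is the same, just at the scale of millions of pixels: one cosmetic rendering shift marks the whole page as changed.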

The DOM-Based Approach

In this approach, the tool captures the DOM of the page and compares it with the DOM captured from a previous version of the page.

Matching DOM snapshots do not mean the output in the browser is visually identical. Your browser renders the page from the HTML, CSS, and JavaScript that make up the DOM. Identical DOM structures can produce different visual output, and different DOM structures can render identically.

Some differences that a DOM diff misses:

  •  IFrame changes but the filename stays the same
  •  Broken embedded content
  •  Cross-browser issues
  •  Dynamic content behavior (DOM is static)

DOM comparators exhibit three clear deficiencies:

  1. Code can change and yet render identically, and the DOM comparator flags a false positive.
  2. Code can be identical and yet render differently, and the DOM comparator ignores the difference, leading to a false negative.
  3. The impact of responsive pages on the DOM: if the viewport changes or the app loads on a different device, component sizes and locations may change, flagging yet another set of false positives.

In short, DOM diffing ensures that the page structure remains the same from page to page. DOM comparisons on their own are insufficient for ensuring visual integrity.

A combination of pixel and DOM diffs can mitigate some of these limitations (e.g., identifying DOM differences that render identically) but is still susceptible to many false-positive results.
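Both failure modes are easy to sketch. The toy comparison below stands in for a DOM-diffing tool; the markup snippets are invented for illustration:

```python
def dom_diff(old, new):
    """A naive DOM diff: flag any textual difference in the markup."""
    return old != new

# False positive: the markup changed, but both versions render identically.
old = '<p class="note">Hello</p>'
new = "<p class='note'>Hello</p>"   # only the attribute quoting style changed
print(dom_diff(old, new))           # True -> flagged, yet visually identical

# False negative: the markup is identical, but an external stylesheet changed,
# so the two pages render differently. The DOM diff sees nothing at all.
page = '<p class="note">Hello</p>'
print(dom_diff(page, page))         # False -> passes, yet may look different
```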

The Visual AI Approach

Modern approaches incorporate artificial intelligence – known as Visual AI – to see a page as a human eye would and avoid false positives.

Visual AI is a form of computer vision invented by Applitools in 2013 to help quality engineers test and monitor today's modern apps at the speed of CI/CD. It is a combination of hundreds of AI and ML algorithms that identify when things go wrong in your UI in ways that actually matter. Visual AI inspects every page, screen, viewport, and browser combination for both web and native mobile apps and reports back any regression it sees. It looks at applications the same way the human eye and brain do, but without tiring or making mistakes. It helps teams greatly reduce the false positives that arise from small, imperceptible differences – the biggest challenge for teams adopting visual testing.

Visual AI overcomes the problems of pixel and DOM comparison for visual validation, and is accurate enough – 99.9999% – to be used in production functional testing. It captures the screen image, uses AI to break it into visual elements, compares those elements against a baseline image broken down the same way, and identifies the visible differences.

Each given page renders as a visual image composed of visual elements. Visual AI treats elements as they appear:

  • Text, not a collection of pixels
  • Geometric elements (rectangles, circles), not a collection of pixels
  • Pictures as images, not a collection of pixels
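The shift in granularity can be sketched as follows. This is only a conceptual illustration in Python – the element model and the pixel tolerance are invented, and the real Visual AI algorithms are far more sophisticated:

```python
def element_diff(baseline, checkpoint, tolerance=2):
    """Compare screens as lists of (kind, content, bounding-box) elements."""
    issues = []
    for (kind_a, text_a, box_a), (kind_b, text_b, box_b) in zip(baseline, checkpoint):
        if kind_a != kind_b or text_a != text_b:
            issues.append(f"{kind_a} changed: {text_a!r} -> {text_b!r}")
        elif any(abs(a - b) > tolerance for a, b in zip(box_a, box_b)):
            issues.append(f"{kind_a} {text_a!r} moved: {box_a} -> {box_b}")
    return issues

baseline   = [("text", "Sign in", (10, 10, 80, 24)), ("button", "Go", (10, 40, 40, 20))]
checkpoint = [("text", "Sign in", (11, 10, 80, 24)), ("button", "Go", (10, 90, 40, 20))]

issues = element_diff(baseline, checkpoint)
print(issues)  # only the button that visibly moved is reported
```

The one-pixel nudge of the “Sign in” text – a classic pixel-diff false positive – falls inside the tolerance, while the button that jumped 50 pixels down the page is correctly reported.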

Check Entire Page With One Test

QA engineers can't reasonably test the hundreds of UI elements on every page of a given app, so they are usually forced to test a subset of those elements – leading to a lot of production bugs due to lack of coverage.

With Visual AI, you take a screenshot and validate the entire page. This limits the tester’s reliance on DOM locators, labels, and messages. Additionally, you can test all elements rather than having to pick and choose. 

Fine Tune the Sensitivity Of Tests

Visual AI identifies the layout at multiple levels, using thousands of data points for location and spacing. Within the layout, Visual AI identifies elements algorithmically. For any checkpoint image compared against a baseline, it identifies all the layout structures and all the visual elements, and it can test at different levels of strictness – from exact, pixel-precise validation down to differences in the layout alone, or in the content contained within that layout.

Easily Handle Dynamic Content

Visual AI can intelligently test interfaces that have dynamic content like ads, news feeds, and more with the fidelity of the human eye. No more false positives due to a banner that constantly rotates or the newest sale pop-up your team is running.
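The underlying idea can be sketched as a comparison that excludes named dynamic regions. The region names and contents here are invented for illustration; a real tool configures exclusions on the screenshot itself, not on a dict:

```python
def compare(baseline, checkpoint, ignore=()):
    """Report the regions that differ, skipping any marked as dynamic."""
    return [name for name in baseline
            if name not in ignore and baseline[name] != checkpoint[name]]

baseline   = {"header": "v1", "banner": "Sale: 20% off", "footer": "v1"}
checkpoint = {"header": "v1", "banner": "Sale: 50% off", "footer": "v1"}

print(compare(baseline, checkpoint))                     # ['banner'] -> fails
print(compare(baseline, checkpoint, ignore={"banner"}))  # []        -> passes
```

Marking the rotating banner as dynamic means the rest of the page is still fully validated, without the banner blocking every release.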

Quickly Compare Across Browsers & Devices

Visual AI also understands the context of the browser and viewport for your UI, so it can accurately test across them at scale. Visual testing tools that use traditional methods get tripped up by small inconsistencies between browsers and UI elements. Visual AI understands them and can validate across hundreds of different browser combinations in minutes.

Automate Maintenance At Scale

One of the unique features of Applitools is its automated maintenance capability, which removes the need to approve or reject the same change separately on every screen and device. This significantly reduces the overhead of managing baselines across different browser and device configurations.

When it comes to reviewing test results, this is a major time-saver for teams and testers: the same change is applied across a large number of tests at once and recognized in future test runs as well. Reducing the time these tasks require translates directly into reducing the cost of the project.

Use Cases of Visual AI

Testing eCommerce Sites

ECommerce websites and applications are some of the best candidates for visual testing, as buyers are incredibly sensitive to poor UI/UX. But historically, eCommerce sites had too many moving parts to be practically tested by visual testing tools that use DOM diffs or pixel diffs. Items constantly changing and going in and out of stock, sales running all the time, and the growth of personalization in digital commerce made these sites impossible to validate without AI – too many things get flagged on each change!

Using Visual AI, tests can omit entire sections of the UI, validate only the layout, or dynamically assert changing data. 

Testing Dashboards 

Dashboards can be incredibly difficult to test via traditional methods due to the large amount of customized data that can change in real-time.

Visual AI can not only test around these dynamic, data-heavy regions, but can actually replace many of the repeated, customized assertions used on dashboards with a single line of code. 

Let’s take the example of a simple bank dashboard below.

It has hundreds of potential assertions – the name, total balance, recent transactions, amount due, and more. With Visual AI, you can assign profiles to full-page screenshots, meaning the entire UI of Jack Gomez's bank dashboard can be tested with a single assertion. 
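As a rough sketch of what changes for the test author, compare the two styles below. A plain dict stands in for the rendered screen, and the field names and values are invented for illustration:

```python
def assert_dashboard_fields(dash):
    # The traditional approach: one hand-maintained assertion per element.
    assert dash["name"] == "Jack Gomez"
    assert dash["total_balance"] == "$12,403.00"
    assert dash["amount_due"] == "$250.00"
    # ...and hundreds more for transactions, charts, and labels.

def assert_dashboard_snapshot(dash, baseline):
    # The visual-testing approach: one comparison against an approved baseline.
    assert dash == baseline, "dashboard no longer matches the approved baseline"

baseline = {"name": "Jack Gomez", "total_balance": "$12,403.00", "amount_due": "$250.00"}
dash = dict(baseline)

assert_dashboard_fields(dash)            # many assertions to write and maintain
assert_dashboard_snapshot(dash, baseline)  # one assertion covers the whole screen
```

Every new widget added to the dashboard forces a change to the first function; the snapshot comparison picks it up automatically.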

Testing Components Across Browsers

Design Systems are a common way to have design and development collaborate on building frontends in a fast, consistent manner. Design Systems output components, which are reusable pieces of UI, like a date-picker or a form entry, that can be mixed and matched together to build application screens and interfaces.

Visual AI can test these components across hundreds of different browsers and mobile devices in just seconds, making sure that they are visibly correct on any size screen. 

Testing PDF Documents 

PDFs are still a staple of many business and legal transactions between businesses of all sizes. Many PDFs are generated automatically and would otherwise need to be checked manually for accuracy and correctness. Visual AI can scan through hundreds of pages of PDFs in just seconds, making sure they are pixel-perfect. 

Conclusion

DOM-based tools don't make visual evaluations; they identify DOM differences, which may or may not have visual implications. They produce false positives – differences that don't matter but require human judgment to dismiss – and false negatives, passing pages that are in fact visually different.

Pixel-based tools don't make evaluations either; they highlight pixel differences and are liable to report false positives for any pixel change on a page. In some cases, every pixel shifts because an element near the top of the page grew – pixel technology cannot distinguish elements as elements, so it cannot see the forest for the trees.

Automated visual testing powered by Visual AI can meet the challenges of digital transformation and CI/CD, driving higher test coverage while helping teams increase their release velocity and improve visual quality.

Be mindful when selecting the right tool for your team and/or project, and always take into consideration:

  • Your organization's maturity and opportunities for test tool support 
  • Appropriate objectives for test tool support 
  • How the tool's capabilities measure up against those objectives and your project constraints 
  • The cost-benefit ratio, estimated from a solid business case 
  • The tool's compatibility with the components of the current system under test

The post The Benefits of Visual AI over Pixel-Matching & DOM-Based Visual Testing Solutions appeared first on Automated Visual Testing | Applitools.

]]>
The Top Skills You Need To Become A Software Tester In 2022 https://applitools.com/blog/how-to-start-career-software-tester-top-skills/ Wed, 25 May 2022 19:49:33 +0000 https://applitools.com/?p=38610 What are the skills you need to begin your career as a software tester? Learn how to get started in software testing and what's really important.

The post The Top Skills You Need To Become A Software Tester In 2022 appeared first on Automated Visual Testing | Applitools.

]]>

What are the skills you need to begin your career as a software tester? Learn how to get started in software testing and what’s really important.

What is a Software Tester?

A software tester is someone who analyzes software to uncover bugs and defects, or unexpected functionality, in the software’s output.

They help ensure that the software’s functionality and performance are operating as they should. They use various testing techniques and test automation tools to achieve this.

Why Should You Become a Software Tester?

Getting your first role as a software tester can be extremely difficult. Unfortunately for people new to the field, a lot of employers only want people with experience.

But in fact, a lot of employers actually want someone with the right skills so that the new person can (eventually) add value to the team. 

By learning the new skills you need to become a software tester, not only do you make sure you have something to offer your employer, but you also get an idea of what it will be like to work as a software tester.

After all, why would you need to learn these skills if you weren't actually going to apply them in the workplace?

But which skills do you need to learn?

What are the Skills Required to Get Started as a Software Tester?

Here are some key skills to start off with when you want to get your first role as a software tester. 

We need to break this down into two categories: see diagram below.

It's a question of which skills employers are looking for and which skills will actually be useful once you start working as a software tester.

Venn diagram of software testing skills: skills that testers will find useful and skills that employers are often looking for.

It may be surprising that there can be a difference between what employers look for and what will be most useful to you. The difference can arise because the person writing the job ad might not always know what skills a software tester needs. 

What Employers Often Look for When Hiring Testers

  1. Experience with specific tools

You may have seen job ads looking for experience with specific tools such as API testing tools and test case management tools.

  2. Experience in writing test cases

For many people in the software testing industry, writing and executing test cases is the only way you can “properly” test software.

Most Useful Skills for a Software Tester

  1. Can Give Effective Actionable Feedback
  2. People Skills
  3. Able to assess risk
  4. Not afraid to ask questions

What Employers Often Look for that is also Useful as a Software Tester

  1. Test Automation Skills
  2. How To Work in an Agile environment
  3. Experience in Writing Bug Reports

How Can You Learn the Skills that are Most Useful to You as a Software Tester?

In this article, I will focus on learning the skills that you will find most useful as a software tester.

How to Start Learning Test Automation Skills

It can be overwhelming to try and figure out where to start. Analysis paralysis can cause you to keep on doing research into what you should learn, instead of actually spending time upskilling.

If you are currently on a project, I suggest you start by learning a programming language used on your project and, if a test automation framework is already in place, that framework as well.

If you are not currently on a project, I suggest you choose one of the following:

  1. Selenium WebDriver in Java
  2. Cypress in JavaScript
  3. API Testing in Python 
  4. Playwright in JavaScript

Aside from the fact that there are some great courses on Test Automation University covering these frameworks, they are very popular test automation frameworks with extensive documentation. Therefore, when you get stuck, you have plenty of resources online to get past any obstacles you may face.

More importantly, you need to know which tests you should automate.

Angie Jones has done an excellent talk on the topic.

Long story short, it depends.

According to her talk, there are some key factors you should consider, including:

  1. What is the risk?
    probability (how often would customers come across this?) vs impact (if broken, how would this affect the customers?)
  2. Value
    distinctness (does this test give us new information?) vs induction to action (how quickly would this failure be fixed?)
  3. Cost-Efficiency
    quickness (how quickly can we write this test?) vs ease (how easy would it be to write a test for this?)
  4. History
    similar to weak areas (have there been a lot of failures in similar areas?) vs frequency of breaks (how often would this test have failed if it was already in place?)

How to Work in an Agile Environment

Realistically, this is pretty hard to define, since there are many different flavors of Agile, and from a job advertisement it's hard to tell exactly how any particular company interprets working in an Agile environment.

Often, when people refer to “Agile,” they mean a Scrum team. But be wary: people may claim to work in an Agile environment just because they have daily standups.

It helps to ask questions, so you can better understand their expectations here.   

Getting Experience in Writing Bug Reports

A large part of being a software tester is writing clear, compelling, reproducible bug reports. 

I’ve written a blog post on how to write a bug report. 

A few key things to highlight when it comes to writing bug reports:

A bug report is a form of written communication – to write a good bug report you need to have good writing skills.

If you want to improve your bug reports you should look into:

  • Your use of words. 
  • Formatting, so that things are clearer. For example, I like to use bold formatting for subheadings in my bug descriptions. Bullet points can also be useful.  
  • Being clear, so it's easy for the reader to understand what you're trying to say. Don't expect the reader to read your bug report thoroughly before deciding what to do with it (assign it, start fixing it, reject it) – to be safe, assume your reader will scan it.

If you want to find a place to practice writing bug reports, you can sign up for crowdsourced testing sites like uTest.

If you are already on a project, ask for feedback on your bug reports – see what your team thinks could be done better. 

While it can be scary to ask for feedback, as you don’t know what you’ll find out (about yourself and others’ perception of you), it helps to know how your work is being received by your team. 

Giving Effective Actionable Feedback as a Software Tester

While it's important to ask for feedback so you can improve, it's also very important to be able to give effective feedback.

Many people associate Toastmasters with giving speeches. However, improving your public speaking skills isn’t the only way you can benefit from going to Toastmasters. Toastmasters is a great way to learn how to give effective, actionable feedback. At Toastmasters, you give people feedback on their speeches but then you, as an evaluator, also get feedback on how you delivered your feedback. 

You’ll get feedback on many aspects of your feedback including:

  • The structure
  • Your tone
  • The clarity

As an evaluator, your goal isn’t just to help the speaker improve, but also to help lift them up (you don’t want to drag someone down with your feedback). If we were to tie this back to the workplace, that is often the desired result of feedback as well.

Developing Your People Skills

It’ll be easier to do your job as a software tester if you have strong people skills.

While this isn’t an exhaustive list of things that will help you with your people skills, here are a few things I have found useful. 

  • Listen to what people are saying and then acknowledge that you had heard them. 
  • Be curious – get to know people. 
  • Ask those closest to you how they perceive you. This will give you an idea of how you come across to others. Make sure to ask people who you know will be open and honest with you. 

Understanding how to Assess Risk in Testing

Taking a risk-based approach to testing means you prioritize the areas and scenarios where failure is most probable and/or would have the highest impact.

To do this, you need to know where the risks lie.

This often comes with experience.

Some questions to ask yourself when considering risk include:

  • Which use cases are the most common?
  • Is there any payment involved in any use cases?
  • Which use cases would customers expect to work? (Where would they be not-so-forgiving if it did not work?)
  • Which areas have had problems in the past?

Don’t be Afraid to Ask Questions

You'll learn a lot by asking questions. You'll also be surprised by how many people shy away from asking questions out of fear of looking stupid.

Karen N. Johnson has done an excellent talk on The Art of Asking Questions.

Here are a few things to keep in mind when asking questions:

  • Timing matters. When and where you ask a question can impact what you get for a response. Keep this in mind before asking a question.
  • Be careful of relying on only one source for information. You may find that you asked the wrong person.

In Summary

There are a lot of ways in which you can upskill to become an effective software tester. While there are a few skills that are difficult to gain without experience as a tester, there are still plenty you can start learning as you look for your first role (or even after).

The post The Top Skills You Need To Become A Software Tester In 2022 appeared first on Automated Visual Testing | Applitools.

]]>
What is Visual AI? https://applitools.com/blog/visual-ai/ Wed, 29 Dec 2021 14:27:00 +0000 https://applitools.com/?p=33518 Learn what Visual AI is, how it’s applied today, and why it’s critical across many industries - in particular software development and testing.

The post What is Visual AI? appeared first on Automated Visual Testing | Applitools.

]]>

In this guide, we’ll explore Visual Artificial Intelligence (AI) and what it means. Read on to learn what Visual AI is, how it’s being applied today, and why it’s critical across a range of industries – and in particular for software development and testing.

From the moment we open our eyes, humans are highly visual creatures. The visual data we process today increasingly comes in digital form. Whether via a desktop, a laptop, or a smartphone, most people and businesses rely on having an incredible amount of computing power available to them and the ability to display any of millions of applications that are easy to use.

The modern digital world we live in, with so much visual data to process, would not be possible without Artificial Intelligence to help us. Visual AI is the ability for computer vision to see images in the same way a human would. As digital media becomes more and more visual, the power of AI to help us understand and process images at a massive scale has become increasingly critical.

What is AI? Background on Artificial Intelligence and Machine Learning

Artificial Intelligence refers to a computer or machine that can understand its environment and make choices to maximize its chance of achieving a goal. As a concept, AI has been with us for a long time, with our modern understanding informed by stories such as Mary Shelley’s Frankenstein and the science fiction writers of the early 20th century. Many of the modern mathematical underpinnings of AI were advanced by English mathematician Alan Turing over 70 years ago.

Since Turing’s day, our understanding of AI has improved. However, even more crucially, the computational power available to the world has skyrocketed. AI is able to easily handle tasks today that were once only theoretical, including natural language processing (NLP), optical character recognition (OCR), and computer vision.

What is Visual Artificial Intelligence (Visual AI)?

Visual AI is the application of Artificial Intelligence to what humans see, meaning that it enables a computer to understand what is visible and make choices based on this visual understanding.

In other words, Visual AI lets computers see the world just as a human does, and make decisions and recommendations accordingly. It essentially gives software a pair of eyes and the ability to perceive the world with them.

As an example, seeing “just as a human does” means going beyond simply comparing the digital pixels in two images. This “pixel comparison” kind of analysis frequently uncovers slight “differences” that are in fact invisible – and often of no interest – to a genuine human observer. Visual AI is smart enough to understand how and when what it perceives is relevant for humans, and to make decisions accordingly.

How is Visual AI Used Today?

Visual AI is already in widespread use today, and has the potential to dramatically impact a number of markets and industries. If you’ve ever logged into your phone with Apple’s Face ID, let Google Photos automatically label your pictures, or bought a candy bar at a cashierless store like Amazon Go, you’ve engaged with Visual AI. 

Technologies like self-driving cars, medical image analysis, advanced image editing capabilities (from Photoshop tools to TikTok filters) and visual testing of software to prevent bugs are all enabled by advances in Visual AI.

How Does Visual AI Help?

One of the most powerful use cases for AI today is to complete tasks that would be repetitive or mundane for humans to do. Humans are prone to miss small details when working on repetitive tasks, whereas AI can repeatedly spot even minute changes or issues without loss of accuracy. Any issues found can then either be handled by the AI, or flagged and sent to a human for evaluation if necessary. This has the dual benefit of improving the efficiency of simple tasks and freeing up humans for more complex or creative goals.

Visual AI, then, can help humans with visual inspection of images. While there are many potential applications of Visual AI, the ability to automatically spot changes or issues without human intervention is significant. 

Cameras at Amazon Go can watch a vegetable shelf and understand both the type and the quantity of items taken by a customer. When monitoring a production line for defects, Visual AI can not only spot potential defects but understand whether they are dangerous or trivial. Similarly, Visual AI can observe the user interface of software applications to not only notice when changes are made in a frequently updated application, but also to understand when they will negatively impact the customer experience.

How Does Visual AI Help in Software Development and Testing Today?

Traditional software testing methods often require a lot of manual effort. Even at organizations with sophisticated automated testing practices, validating the complete digital experience – which requires functional testing, visual testing, and cross browser testing – has long been difficult to achieve with automation. 

Without an effective way to validate the whole page, Automation Engineers are stuck writing cumbersome locators and complicated assertions for every element under test. Even after that’s done, Quality Engineers and other software testers must spend a lot of time squinting at their screens, trying to ensure that no bugs were introduced in the latest release. This has to be done for every platform, every browser, and sometimes every single device their customers use. 

At the same time, software development is growing more complex. Applications have more pages to evaluate and increasingly faster – even continuous – releases that need testing. This can result in tens or even hundreds of thousands of potential screens to test (see below). Traditional testing, which scales linearly with the resources allocated to it, simply cannot scale to meet this demand. Organizations relying on traditional methods are forced to either slow down releases or reduce their test coverage.

Screens in production at modern organizations: the market average is 81,480, and the average for the top 30% of the market is 681,296.
Source: The 2019 State of Automated Visual Testing

At Applitools, we believe AI can transform the way software is developed and tested today. That’s why we invented Visual AI for software testing. We’ve trained our AI on over a billion images and use numerous machine learning and AI algorithms to deliver 99.9999% accuracy. Using our Visual AI, you can achieve automated testing that scales with you, no matter how many pages or browsers you need to test. 

That means Automation Engineers can quickly take snapshots that Visual AI can analyze rather than writing endless assertions. It means manual testers will only need to evaluate the issues Visual AI presents to them rather than hunt down every edge and corner case. Most importantly, it means organizations can release better quality software far faster than they could without it.

Visual AI is 5.8x faster, 5.9x more efficient, 3.8x more stable, and catches 45% more bugs
Source: The Impact of Visual AI on Test Automation Report

How Visual AI Enables Cross Browser/Cross Device Testing

Additionally, thanks to its high accuracy and efficient validation of the entire screen, Visual AI opens the door to simplifying and accelerating cross browser and cross device testing. By ‘rendering’ rather than ‘executing’ tests across all the device/browser combinations, teams can get results 18.2x faster with the Applitools Ultrafast Test Cloud than with traditional execution grids or device farms.

Traditional test cycle takes 29.2 hours, modern test cycle takes just 1.6 hours.
Source: Modern Cross Browser Testing Through Visual AI Report

How Will Visual AI Advance in the Future?

As computing power increases and algorithms are refined, the impact of Artificial Intelligence, and Visual AI in particular, will only continue to grow.

In the world of software testing, we're excited to use Visual AI to move past simply improving automated testing – we are paving the way toward autonomous testing. For this vision (no pun intended), we have been repeatedly recognized by the industry and by our customers as a leader.

Keep Reading: More about Visual AI and Visual Testing

What is Visual Testing (blog)

The Path to Autonomous Testing (video)

What is Applitools Visual AI (learn)

Why Visual AI Beats Pixel and DOM Diffs for Web App Testing (article)

How AI Can Help Address Modern Software Testing (blog)

The Impact of Visual AI on Test Automation (report)

How Visual AI Accelerates Release Velocity (blog)

Modern Functional Test Automation Through Visual AI (free course)

Computer Vision defined (Wikipedia)

The post What is Visual AI? appeared first on Automated Visual Testing | Applitools.

]]>
Why Should Software Testers Understand Unit Testing? https://applitools.com/blog/why-should-software-testers-understand-unit-testing/ Wed, 22 Dec 2021 17:54:59 +0000 https://applitools.com/?p=33490 Learn why unit testing isn’t only for developers, the importance of unit testing to quality engineers, and how you can improve your skills by building better unit tests.

The post Why Should Software Testers Understand Unit Testing? appeared first on Automated Visual Testing | Applitools.

]]>

Learn why unit testing isn’t only for developers, the importance of unit testing to software testers and quality engineers, and how you can improve your skills by building better unit tests.

The responsibility for product quality frequently falls on software testers. Yet, software testers are often divorced or even excluded from conversations around the cheapest and easiest way to inject quality into the product and the entire software development lifecycle, right from the beginning: unit testing. In this article, we’ll explore why it’s important for software testers to be able to speak clearly about unit tests and how this can help deliver better quality.

Why Unit Tests Are Important

Unit tests form the solid base of the testing pyramid. They are the cheapest kinds of tests to run, and can be run frequently throughout the deployment pipeline. Unit tests allow us to find errors the soonest, and to fix them before they bubble up in other, more expensive kinds of testing like functional or UI tests, which take much longer to complete and run than unit tests.

Unit Testing Frameworks

Most developers know how to write unit tests in the language in which they develop, and most languages have several libraries to choose from, depending on the type and complexity of testing. For example, Python has pytest, PyUnit, unittest (inspired by Java’s JUnit), Nose2, and Hypothesis (for property-based tests, a non-example-based type of unit test). These are just some of the choices available, and every language has a number of possible unit testing frameworks to choose from.

You don’t need to know everything about a unit testing library, or even how to write unit tests, to get value from understanding the basics of the unit testing framework. A lot of value can be gained from knowing what framework is being used, and what kinds of assertions can be made within the framework. Also, does the framework support table tests or property-style tests? Understanding what is supported can help you better understand what aspects of your test design might be best handled in the unit-testing phase. 

Unit Testing Is the Developer’s Job

Yes, developers typically write unit tests. However, they are largely responsible for writing these tests to ensure that the code works – most developer tests are likely to cover happy-path and obvious negative cases. They may not think to write tests for edge or corner cases, as they are working to meet deadlines for code delivery. This is where software testers with unit test knowledge can help to make the unit tests more robust, and perhaps decrease testing that might otherwise be done at integration or functional levels.

The first step, if you are unfamiliar with the code, is to request a walkthrough of the unit tests. Understanding what developers have done and what they are testing will help you to make recommendations about what other tests might be included. Remember, adding tests here is the cheapest and fastest place to do it, especially if there are tests you want run quickly on every code change that a developer makes. 

If you are familiar with the codebase and version control systems, then you can also look for the unit tests in the code. These are often stored in a test directory, and typically named so it is easy to identify what is being tested. Quality teams can be coached to review unit tests, and compare those with their test plans. Once coached, teams can make recommendations to developers to improve unit tests and make test suites more robust. Some team members may even expand their skills by adding tests and making pull requests/merge requests for unit tests. There are many ways to participate in making unit tests more effective, involving writing no code or writing a lot of code; it’s up to you to decide what most benefits you and your team. 

But What if There Are No Unit Tests?

If you are responsible for software quality and you discover that your team or company is not doing unit testing, this can be painful, but it is also a great opportunity for growth. The first conversations around developing unit tests can revolve around the efficiency, efficacy, and speed of unit tests. The next step is building awareness and fluency about quality and testing as a part of development, which is a difficult task to tackle alone and may not work without buy-in from key people. However, if you can get understanding and buy-in on the importance of building testing and testability into the product, starting with unit tests as the foundation, further discussions about code quality can be opened up.

Better Quality is the Goal

At the end of the day, every member of the team should be responsible for quality. In practice, that responsibility rests with different people in different organizations, and often the person with the word “quality” in their title is the one ultimately held responsible. If you are responsible for quality, understanding the basics of how unit tests work in your codebase will help you have better discussions with developers about how to improve software quality in the fastest, cheapest way possible – directly from the code.

The post Why Should Software Testers Understand Unit Testing? appeared first on Automated Visual Testing | Applitools.

]]>
How to Test a Mobile Web App in Cypress https://applitools.com/blog/how-to-test-mobile-web-app-cypress/ Mon, 08 Nov 2021 19:00:32 +0000 https://applitools.com/?p=32376 Cypress is sometimes known as a tool for testing anything that runs in a browser. Here's how you can do mobile testing on the browser with mobile web apps.

The post How to Test a Mobile Web App in Cypress appeared first on Automated Visual Testing | Applitools.

]]>

Before we start, we need to clear up some areas that tend to confuse people when it comes to mobile testing. If you’d like to test a native mobile application built for iOS or Android, Cypress is not the right tool and you should probably reach for something like Appium (although there are some interesting examples by Gleb Bahmutov for React Native). Cypress is best known as a tool for testing anything that runs in a browser. So in order to use Cypress for your mobile testing, your app needs to be able to run in a browser. That brings forward a question: “What’s the difference between a desktop and a mobile web app?”

To answer this question, it’s best to look at the application under test from the perspective of a developer. As someone who creates a web application, you might want to consider a few traits that make a mobile app different from a desktop one:

  • screen size
  • touch interface
  • any other information about the device

Let’s go through them one by one and see how we can write a Cypress test that would ensure our application works as expected. If you want to follow along, you can clone my repository with the application and attached Cypress tests. The simple example application shows different messages based on viewport width, the presence of a touch interface, or the user agent.

Testing the Viewport Size

We might want to test a responsive CSS on different screen widths to see if the application renders correctly. This can be easily done by using the cy.viewport() command. You can specify width and height of the viewport, or choose from one of the predefined devices:

View the code on Gist.
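The embedded gist doesn’t render in this archive; assuming the standard `cy.viewport()` API, the commands look roughly like this:

```js
// Set an explicit width and height in pixels...
cy.viewport(320, 480);

// ...or use one of Cypress' built-in device presets
cy.viewport('iphone-6');
cy.viewport('ipad-2', 'landscape');
```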

Using these commands will trigger visibility of a “it’s getting pretty narrow here” message:

The text displays ‘it’s getting pretty narrow here’ on a narrow viewport.

CSS responsiveness can hide or show different content, like buttons or modals. A nice trick to test this is to run your existing tests with different resolution. To do that, pass a --config flag to your CLI command:

npx cypress run --config viewportWidth=320,viewportHeight=480

You can also set the viewport width and viewport height in cypress.json or programmatically via Cypress plugin API. I write about this on my personal blog in a slightly different context, but it still might be helpful in this case.

Depending on your application, you might want to consider testing responsiveness of your application by visual tests using a tool like Applitools. Functional tests might have a tendency to become too verbose if their sole purpose is to check for elements appearing/disappearing on different viewports.

Testing for Touch Devices

Your application might react differently or render slightly different content based on whether it is opened on a touch screen device or not. Manually you can test this with Chrome DevTools:

displays whether we’re viewing the page on a computer that is a touch device, or one that is not.

Notice how we can have a touch device that is actually not a mobile device. This is a nice case to consider when testing your app.

In Chrome DevTools, we are simply switching a mode of our browser. It’s like changing a setting to enable viewing our app as if it was opened on a touch device.

With Cypress we can do something very similar. There is an ontouchstart property which is present on a mobile browser. This is usually a very good clue for a web application to “know” that it is being opened on a touch device. In Cypress, we can add this property manually, and make our application “think” it is being opened on a touch device:

View the code on Gist.
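Again the gist doesn’t render here; a minimal sketch of the technique described above, assuming a root route of `/`:

```js
cy.visit('/', {
  onBeforeLoad: (win) => {
    // Add the property a touch-enabled browser would have,
    // so the app "thinks" it is opened on a touch device
    win.ontouchstart = true;
  },
});
```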

With the onBeforeLoad function, we tap into the window object and add the property manually, essentially creating a similar situation as we did in DevTools, when we toggled the “touch” option:

displays whether we’re viewing the page on a computer that is a touch device, or one that is not, in Cypress.

To go even further with testing a touch interface, I recommend using Dmitriy Kovalenko’s cypress-real-events plugin, which fires events using the Chrome DevTools protocol. Your Cypress API will then get augmented with cy.realTouch() and cy.realSwipe() commands, which will help you test touch events.

Testing with User Agent

Some applications use information from the user agent to determine whether the app is being viewed on a mobile device. There are some neat plugins out there that are commonly used in web applications to help with that.

Although User Agent might sound like a super complicated thing, it is actually just a string that holds information about the device that is opening a web application. Cypress allows you to change this information directly in the cypress.json file. The following setup will set the user agent to the exact same value as on an iPhone:

View the code on Gist.
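The gist isn’t viewable here; a sketch of what that cypress.json setup looks like (the user-agent string below is one possible iPhone value, abbreviated for readability):

```json
{
  "userAgent": "Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X) AppleWebKit/605.1.15 Mobile/15E148 Safari/604.1"
}
```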

However, you might not want to change the user agent for all of your tests. Instead of adding the user agent string to cypress.json, you can again tap into the onBeforeLoad event and change the user agent on your browser window directly:

View the code on Gist.
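The gist isn’t rendered here either; a sketch of the approach, assuming a root route of `/` and an abbreviated iPhone user-agent string:

```js
cy.visit('/', {
  onBeforeLoad: (win) => {
    // userAgent is not writable via plain assignment,
    // so redefine the property instead
    Object.defineProperty(win.navigator, 'userAgent', {
      value: 'Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)',
    });
  },
});
```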

The reason why we are not changing win.navigator.userAgent directly via the assignment operator (=) is that this property is not directly configurable, so we need to use the defineProperty method. Opening the application this way will, however, cause it to render a message that it is being viewed on a mobile device:

shows that we are viewing this page on a mobile

Conclusion

You cannot test native mobile apps with Cypress, but you can get pretty close with mobile web apps. Combining these three methods will help you narrow the gap between desktop and mobile testing. To understand what makes an app behave differently, it helps to look at your app as a developer, at least for a little while.

If you enjoyed this blogpost, make sure to head over to my personal blog for some more Cypress tips.

The post How to Test a Mobile Web App in Cypress appeared first on Automated Visual Testing | Applitools.

]]>
Get a Jump Into GitHub Actions https://applitools.com/blog/jump-into-github-actions/ Tue, 02 Mar 2021 16:24:17 +0000 https://applitools.com/?p=27330 On January 27, 2021, Angie Jones of Applitools hosted Brian Douglas, aka “bdougie”, Staff Developer Advocate at GitHub, for a webinar to help you jump into GitHub Actions. You can...

The post Get a Jump Into GitHub Actions appeared first on Automated Visual Testing | Applitools.

]]>

On January 27, 2021, Angie Jones of Applitools hosted Brian Douglas, aka “bdougie”, Staff Developer Advocate at GitHub, for a webinar to help you jump into GitHub Actions. You can watch the entire webinar on YouTube. This blog post goes through the highlights for you.

Introductions

Angie Jones serves as Senior Director of Test Automation University and as Principal Developer Advocate at Applitools. She tweets at @techgirl1908, and her website is https://angiejones.tech.

Brian Douglas serves as the Staff Developer Advocate at GitHub. Insiders know him as the “Beyoncé of GitHub.” He blogs at https://bdougie.live, and tweets as @bdougieYO.

They ran their webinar as a question-and-answer session. Here are some of the key ideas covered.

What Are GitHub Actions?

Angie’s first question asked Brian to jump into GitHub Actions.

Brian explained that GitHub Actions is a feature you can use to automate actions in GitHub. GitHub Actions let you code event-driven automation inside GitHub. You build monitors for events, and when those events occur, they trigger workflows. 

If you’re already storing your code in GitHub, you can use GitHub Actions to automate anything you can access via webhook from GitHub. As a result, you can build and manage all the processes that matter to your code without leaving GitHub. 

Build Test Deploy

Next, Angie asked about Build, Test, Deploy as what she hears about most frequently when she hears about GitHub Actions.

Brian mentioned that the term GitOps describes the idea that a push to GitHub drives some kind of activity. For example, a user adding a file should initiate other actions based on that file. External software vendors have built their own hooks to drive things like continuous integration with GitHub. GitHub Actions simplifies these integrations by using native code now built into GitHub.com.

Brian explained how GitHub Actions can launch a workflow. He gave the example of a team that has an existing JavaScript test suite in Jest, run with either npm test or jest. With GitHub Action workflows, the development team can automate actions based on a triggering event. In this case, pushing the JavaScript file can drive GitHub to execute the tests.

Get Back To What You Like To Do

Angie pointed out that this catchphrase, “Get back to what you like to do,” caught her attention. She spends lots of time in meetings and doing other tasks when she’d really just like to be coding. So, she asked Brian, how does that work?

Brian explained that, as teams grow, so much more of the work becomes coordination and orchestration. Leaders have to answer questions like:

  • What should happen during a pull request? 
  • How do we automate testing? 
  • How do we manage our build processes?

When engineers have to answer these questions with external products and processes, they stop coding. With GitHub Actions, Brian said, you can code your own workflow controls. You can ensure consistency by coding the actions yourself. And, by using GitHub Actions, you make the processes transparent for everyone on the team.

Do you want a process to call Applitools? That’s easy to set up. 

Brian explained that GitHub hosted a GitHub Actions Hackathon in late 2020. The team coded the controls for the submission process into the hackathon. You can still check it out at githubhackathon.com.

The entire submission process got automated to check for all the proper files being included in a submission. The code recognized completed submissions on the hackathon home page automatically.

Brian then gave the example of work he did on the GitHub Hacktoberfest in October. For the team working on the code, Brian developed a custom workflow that allowed any authenticated individual to sign up to address issues exposed in the Hackathon. Brian’s code latched onto existing authentication code to validate that individuals could participate in the process and assigned their identity to the issue. As the developer, Brian built the workflow for these tasks using GitHub Actions.

What can you automate? Inform your team when a user opens a pull request. Send a tweet when the team releases a build. Any webhook in GitHub can be automated with GitHub Actions. For example, you can even automate the nag emails that get sent out when a pull request review does not complete within a specified time.

Common Actions

Angie then asked about the most common actions that Brian sees users running.

Brian summarized by saying, basically, continuous integration (CI). The most common use is ensuring that tests get run against code as it gets checked in to ensure that test suites get applied. You can have tests run when you push code to a branch, push code to a release branch or do a release, or even when you do a pull request.

While test execution gets run most frequently, there are plenty of tasks that one can automate. Brian did something specific to assign gifts to team members who reviewed pull requests. He also used a cron job to automate a GitHub Action that opened a global team issue each Sunday (US time), which happens to be Monday in Australia, and assigned all the team members to it. Each member needed to explain what they were working on. This way, the globally-distributed team could stay on top of their work together without a meeting that would occur at an awkward time for at least one group of team members.
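A scheduled workflow along those lines might look roughly like this. This YAML is a hedged sketch, not Brian’s actual code: the workflow name, issue title, and cron schedule are placeholders, and it assumes the actions/github-script action for the API call.

```yaml
name: Weekly check-in

on:
  schedule:
    # Every Sunday at 20:00 UTC (Monday morning in Australia)
    - cron: '0 20 * * 0'

jobs:
  open-issue:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v3
        with:
          script: |
            await github.issues.create({
              ...context.repo,
              title: 'What is everyone working on this week?',
            });
```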

Brian talked about people coming up with truly creative use cases – like someone linking IoT devices to webhooks in existing APIs using GitHub Actions.

But the cool part is that most of these actions are open source: if a repo includes GitHub Actions, they’re searchable, and anyone can inspect them and, if they don’t like them, modify them.

On github.com/bdougie, you can see existing workflows that Brian has already put together.

Jump Into GitHub Actions – What Next?

I shared some of the basic ideas in Brian’s conversation with Angie. If you want to jump into GitHub Actions in more detail, you can check out the full webinar and the slides in Addie Ben Yehuda’s summary blog for the webinar. That blog also includes a number of Brian’s links, several of which I include here as well:

Enjoy jumping into GitHub Actions!

Featured Photo by Aziz Acharki on Unsplash

The post Get a Jump Into GitHub Actions appeared first on Automated Visual Testing | Applitools.

]]>
How to Setup GitHub Actions with Cypress & Applitools for a Better Automated Testing Workflow https://applitools.com/blog/github-actions-with-cypress-and-applitools/ Mon, 01 Mar 2021 20:37:37 +0000 https://applitools.com/?p=27315 Applitools provides a number of SDKs that allows you to easily integrate it into your existing workflow. Using tools like Cypress, Espresso, Selenium, Appium, and a wide variety of others,...

The post How to Setup GitHub Actions with Cypress & Applitools for a Better Automated Testing Workflow appeared first on Automated Visual Testing | Applitools.

]]>

Applitools provides a number of SDKs that allow you to easily integrate it into your existing workflow. Using tools like Cypress, Espresso, Selenium, Appium, and a wide variety of others, web and native platforms can get automated visual testing coverage with the power of Applitools Eyes.

But what if you’re not looking to integrate it directly to an existing testing workflow because maybe you don’t have one or maybe you don’t have access to it? Or what if you want to provide blanket visual testing coverage on a website without having to maintain which pages get checked?

We’ll walk through how we were able to take advantage of the power of Applitools Eyes and flexibility of GitHub Actions to create a solution that can fit into any GitHub-based workflow.

Note: if you want to skip the “How it Works” and go directly to how to use it, you can check out the Applitools Eyes GitHub Action on github.com. https://github.com/colbyfayock/applitools-eyes-action

What are GitHub Actions?

To start, GitHub Actions are CI/CD-like workflows that you’re able to run right inside of your GitHub repository.

GitHub Actions logs and pull request
Running build, test, and publish on a branch with GitHub Actions

Using a YAML file, we can set up our project to run tests or really any kind of script based on events such as a commit, pull request, or even on a schedule with cron.

name: Tests

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-node@v2
      with:
        node-version: '12'
    - run: npm ci
    - run: npm test

Simple workflows like the one above, which installs dependencies and runs tests, show how we can automate critical code tasks. But GitHub also gives developers a way to package up complex scripts that reach beyond what a configurable YAML file can do.

Using custom GitHub Actions to simplify complex workflows

When creating a custom GitHub Action, we unlock the ability to use scripting tools like shell and node to a greater extent, as well as the ability to stand up entire environments using Docker, which can allow us to really take advantage of our allotted environment just like we could on any other CI/CD platform.

GitHub Actions Build Logs
Building a container in a GitHub Action

In our case, we want to allow someone to run Applitools Eyes without ever having to think about setting up a test runner.

To achieve this, we can include Cypress (or another test runner) right along with our Action, which would then get installed as a dependency on the workflow environment. This allows us to hook right into the environment to run our tests.

Scaffolding a Cypress environment in a GitHub Action workflow with Docker

Setting up Cypress is typically a somewhat simple process. You can install it locally using npm or yarn where Cypress will manage configuring it for your environment.

How to install cypress
Installing Cypress with npm via cypress.io

This works roughly the same inside of a standard YAML-based GitHub Action workflow. When we use the included environment, GitHub gives us access to a workspace where we can install our packages just like we would locally.

It becomes a bit trickier, however, when trying to run a custom GitHub Action, where you would want to have access to both the project and the Action’s code to set up the environment and run the tests.

While it might be possible to figure out a solution using only node, Cypress additionally ships a variety of publicly available Docker images which lets us confidently spin up an environment that Cypress supports. It also gives us a bit more control over how we can configure and run our code inside of that environment in a repeatable way.

Because one of our options for creating a custom Action is Docker, we can easily reference one of the Cypress images right from the start:

FROM cypress/browsers:node12.18.3-chrome87-ff82

In this particular instance, we’re spinning up a new Cypress-supported environment with node 12.18.3 installed along with Chrome 87 and Firefox 82.

Installing Cypress and Action dependencies

With our base environment set up, we move to installing dependencies and starting the script. While we’re using a Cypress image that has built-in support, Cypress doesn’t actually come already installed.

When installing Cypress, it uses cache directories to store downloaded binaries of Cypress itself. When using Docker and working with different environments, it’s important to have predictable locations for these caches, so that we’re able to reference them later.

Cypress cache verification
Cypress verifies that it can correctly identify an installation path from cache

In our Docker file, we additionally configure that environment with:

ENV NPM_CACHE_FOLDER=/root/.cache/npm
ENV CYPRESS_CACHE_FOLDER=/root/.cache/Cypress

When npm and Cypress go to install, they’ll use those directories for caching.

We also need to set up an entrypoint which tells Docker what script to run to initiate our Action.

Inside of our Dockerfile, we add:

COPY entrypoint.sh /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

We’ll use a shell script to initiate our initial installation procedure so we can have a little more control over setting things up.

Finally, inside of our referenced shell script, we include:

#!/bin/sh -l

cd $GITHUB_WORKSPACE

git clone https://github.com/colbyfayock/applitools-eyes-action
cd applitools-eyes-action

npm ci

node ./src/action.js

This will first navigate into the GitHub workspace directory, to make sure our session is in the right location.

We then clone down a copy of our custom Action’s code, which is referencing itself at this point, but it allows us to have a fresh copy in our Workspace directory, giving us access to the script and dependencies we ultimately need.

And with our Action cloned within our working environment, we can now install the dependencies of our action and run the script that will coordinate our tests.

Dynamically creating a sitemap in node

Once our node script is kicked off, the first few steps are to find environment variables and gather Action inputs that will allow our script to be configured by the person using it.

But before we can actually run any tests, we need to know what pages we can run the tests on.

To add some flexibility, we added a few options:

  • Base URL: the URL that the tests will run on
  • Sitemap URL: this allows someone to pass in an existing sitemap URL, rather than trying to dynamically create one
  • Max Depth: how deep should we dynamically crawl the site? We’ll touch on this a little more in a bit

With these settings, we can have an idea on how the Action should run.

If no sitemap is provided, we have the ability to create one.

RSS xml example
Example sitemap

Specifically, we can use a Sitemap Generator package that’s available right on npm that will handle this for us.

// sitemap-generator (npm) crawls the site and writes out a sitemap file
const SitemapGenerator = require('sitemap-generator');

const generator = SitemapGenerator(url, {
  stripQuerystring: false,
  filepath,
  maxDepth
});

generator.start();

Once we plug in a URL, Sitemap Generator will find all of the links on our page and crawl the site just like Google would with its search robots.

We need this crawling to be configurable though, which is where Max Depth comes in. We might not want our crawler to drill down link after link, which could cause performance issues and could pull in pages, or even other websites, that we aren’t interested in including in our sitemap.

Applitools sitemap diagram
Reduced sitemap of applitools.com

With Max Depth, we can tell our Sitemap Generator how deep we want it to crawl. A value of 1 would only scrape the top level page, where a value of 2 would follow the links on the first page, and then follow the links on the second page, to find pages to include in our dynamically generated sitemap.
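To make the Max Depth idea concrete, here is an illustrative depth-limited crawl over an in-memory link graph. This is not the sitemap-generator internals, just a toy model: real crawlers fetch pages over HTTP, while here the “site” is a plain object mapping each page to its links, with made-up routes.

```javascript
// Illustrative only: depth-limited breadth-first traversal of a link graph.
function crawl(site, start, maxDepth) {
  const found = new Set([start]);
  let frontier = [start];
  // Depth 1 is the start page itself, so follow links (maxDepth - 1) times.
  for (let depth = 1; depth < maxDepth; depth++) {
    const next = [];
    for (const page of frontier) {
      for (const link of site[page] || []) {
        if (!found.has(link)) {
          found.add(link);
          next.push(link);
        }
      }
    }
    frontier = next;
  }
  return [...found];
}

const site = {
  '/': ['/features', '/pricing'],
  '/features': ['/features/eyes'],
  '/pricing': [],
};

console.log(crawl(site, '/', 1)); // only the top-level page: ['/']
console.log(crawl(site, '/', 2)); // ['/', '/features', '/pricing']
console.log(crawl(site, '/', 3)); // adds '/features/eyes'
```

A depth of 1 scrapes only the starting page; each extra level follows one more round of links, exactly as described above.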

But at this point, whether dynamically generated or provided to us with the Sitemap URL, we should now have a list of pages that we want to run our tests on.

Running Cypress as a script in node

Most of the time when we’re running Cypress, we use the command line or include it as a script inside of our package.json. Because Cypress is available as a node package, we additionally have the ability to run it right inside of a node script just like we would any other function.

Because we already have our environment configured and we’ve determined the settings we want, we can plug these values directly into Cypress:

const results = await cypress.run({
  browser: cypressBrowser,
  config: {
	baseUrl
  },
  env: {
	APPLITOOLS_APP_NAME: appName,
	APPLITOOLS_BATCH_NAME: batchName,
	APPLITOOLS_CONCURRENCY: concurrency,
	APPLITOOLS_SERVER_URL: serverUrl,
	PAGES_TO_CHECK: pagesToCheck
  },
  headless: true,
  record: false,
}); 

If you notice in the script though, we’re setting a few environment variables.

The trick with this is, we can’t directly pass in arguments that we may need inside of Cypress itself, such as settings for Applitools Eyes.

The way we can handle this is by creating Cypress environment variables, which end up working roughly the same as passing arguments into the function; we just need to access them slightly differently.

But beyond some Applitools-specific configurations, the important bits here are that we have a basic headless configuration of Cypress, we turn recording off as ultimately we won’t use that, and we pass in PAGES_TO_CHECK which is an array of pages that we’ll ultimately run through with Cypress and Applitools.

Using Cypress with GitHub Actions to dynamically run visual tests

Now that we’re finally to the point where we’re running Cypress, we can take advantage of the Applitools Eyes SDK for Cypress to easily check all of our pages.

describe('Visual Regression Tests', () => {
  const pagesToCheck = Cypress.env('PAGES_TO_CHECK');

  pagesToCheck.forEach((route) => {
	it(`Visual Diff for ${route}`, () => {

	  cy.eyesOpen({
		appName: Cypress.env('APPLITOOLS_APP_NAME'),
		batchName: Cypress.env('APPLITOOLS_BATCH_NAME'),
		concurrency: Number(Cypress.env('APPLITOOLS_CONCURRENCY')),
		serverUrl: Cypress.env('APPLITOOLS_SERVER_URL'),
	  });

	  cy.visit(route);
	  
	  cy.eyesCheckWindow({
		tag: route
	  });

	  cy.eyesClose();
	});
  });
});

Back to the critical part of how we ran Cypress, we first grab the pages that we want to check. We can use Cypress.env to grab our PAGES_TO_CHECK variable which is what we’ll use for Eyes coverage.

With that array, we can simply run a forEach loop, where for every page that we have defined, we’ll create a new assertion for that route.

Applitools Visual Regression Tests in Cypress
Running Applitools Eyes on each page of the sitemap

Inside of that assertion, we open up our Eyes, proceed to visit our active page, perform a check to grab a snapshot of that page, and finally close our Eyes.

With that brief snippet of code, we’re uploading a snapshot of each of our pages up to Applitools, where we’ll now be able to test and monitor our web project for issues!

Configuring Applitools Eyes GitHub Action into a workflow

Now for the fun part, we can see how this Action actually works.

To add the Applitools Eyes GitHub Action to a project, inside of an existing workflow, you can add the following as a new step:

steps:
- uses: colbyfayock/applitools-eyes-action@main
  with:
    APPLITOOLS_API_KEY: ${{secrets.APPLITOOLS_API_KEY}}
    appName: Applitools
    baseUrl: https://applitools.com

We first specify that we want to use the Action at its current location, then we pass in a few required input options such as an Applitools API Key (which is defined in a Secret), the name of our app, which will be used to label our tests in Applitools, and finally the base URL that we want our tests to run on (or we can optionally pass in a sitemap as noted before).

With just these few lines, any time our steps are triggered by our workflow, our Action will create a new environment where it will run Cypress and use Applitools Eyes to add Visual Testing to the pages on our site!

What’s next for Applitools Eyes GitHub Action?

We have a lot of flexibility with the current iteration of the custom GitHub Action, but it has a few limitations, like requiring an already-deployed environment that the script can access, and generally not yet supporting some of the advanced Applitools features customers would expect.

Because we’re using node inside of our own custom environment, we have the ability to provide advanced solutions for the project we want to run the tests on, such as providing an additional input for a static directory of files, which would allow our Action to spin up a local server and run the tests against it.

As far as adding additional Applitools features, we’re only limited to what the SDK allows, as we can scale our input options and configuration to allow customers to use whatever features they’d like.

This Action is still in an experimental stage as we try to figure out what direction we ultimately want to take it and what features could prove most useful for getting people up and running with Visual Testing, but even today, this Action can help immediately add broad coverage to a web project with a simple line in a GitHub Action workflow file.

To follow along with feature development, to report issues, or to help contribute, you can check out the Action on GitHub at https://github.com/colbyfayock/applitools-eyes-action.

The post How to Setup GitHub Actions with Cypress & Applitools for a Better Automated Testing Workflow appeared first on Automated Visual Testing | Applitools.

Thriving Through Visual AI – Applitools Customer Insights 2020 https://applitools.com/blog/customer-insights-2020/ Wed, 23 Dec 2020 20:48:10 +0000 https://applitools.com/?p=25318 In this blog post, we share what we learned about how Applitools helps to reduce test code, shorten code rework cycles, and shrink test time.

The post Thriving Through Visual AI – Applitools Customer Insights 2020 appeared first on Automated Visual Testing | Applitools.


In this blog post, I cover customer insights into the successes they achieve through Applitools. I share what we learned about how our users speed up their application delivery by reducing test code, shortening code rework cycles, and reducing test time.

Customer Insight – Moving To Capture Visual Issues Earlier

We now know that our customers go through a maturity process when using Applitools. A typical progression looks like this:

  1. End-to-end test validation on one application
  2. [OPTIONAL] Increasing the end-to-end validation across other applications (where they exist)
  3. Moving validation to code check-in
  4. Build validation
  5. Validating component and component mock development

End to End Validation

In automating application tests, our customers realize that they need a way to validate the layout and rendering of their applications through automation. They have learned which problems can escape when even a manual check does not occur. But manual testing is both expensive and error-prone.

Every one of our customers has experience with pixel diff for visual validation. They uniformly reject pixel diff for end-to-end testing. In their experience, pixel diff reports too many false positives to be useful for automation.

So, our customers begin by running a number of visual use cases through Applitools to understand its accuracy and consistency. They realize that Applitools will capture visual issues without reporting false positives. And, they begin adding Applitools to other production applications for end-to-end tests.

Check-In Validation

As Applitools users become comfortable with Applitools in their end-to-end tests, they begin to see inefficiencies in their workflow. End-to-end validation occurs well after developers finish and check in their code. To repair any uncovered errors, developers must switch context from their current task to rework the failing code. Rework impacts developer productivity and slows product release cycles.

Once users uncover this workflow inefficiency, they look to move visual validation to an earlier point in app development. Check-in makes a natural point for visual validation. At check-in, all functional and visual changes can be validated. Any uncovered errors can go immediately back to the appropriate developers for rework.

So, our customers add visual validation to their check-in process. Their developers become more efficient. And, developers become attuned to the interdependencies of their code with shared resources used across the team.

Regular Build Validation

As our customers appreciate the dependability of Applitools Visual AI, they realize that Applitools can be part of their regular build validation process. These customers use Applitools as a “visual unit test”, which should run green after every build. When Applitools fails with an unexpected error, they uncover an unexpected change. In this mode, our users generally expect passing tests.

At this level of maturity, end-to-end validation tests provide a sanity check. Our customers who have reached this level tell us that they never discover visual issues late in their product release process anymore.

Component and Mock Validation

Our most mature customers have moved validation into visual component construction and validation.

To ensure visual consistency, many app developers have adopted some kind of visual component library. The library defines a number of visual objects or components. Developers can assign properties to an object, or define a style sheet from which the component inherits properties.

To validate the components and show them in use, developers create mock-ups. These mock-ups let developers manipulate the components independent of a back-end application. Tools like Storybook serve up these mock-ups. Developers can test components and see how they behave through CSS changes.
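The property-inheritance model described above can be sketched in a few lines. This is an illustrative toy, not any real component library; the component, property, and style-sheet names are all assumptions:

```javascript
// Shared defaults, standing in for a style sheet the component inherits from.
const styleSheet = { color: '#0a6', padding: '8px' };

// A minimal "component": inherits the style sheet's properties unless a
// property is assigned directly on the instance.
function renderButton(props = {}) {
  const style = { ...styleSheet, ...props.style };
  return `<button style="color:${style.color};padding:${style.padding}">${props.label || 'OK'}</button>`;
}

// A mock-up can exercise the component independent of any back-end,
// overriding just the properties under test.
console.log(renderButton({ label: 'Pay now', style: { color: '#c00' } }));
```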

Applitools’ most mature customers use Visual AI in their mock-up testing to uncover unexpected behavior and isolate unexpected visual dependencies. They find visual issues much earlier in the development process – leading to better app behavior and reduced app maintenance costs.

Customer Insight – Common Problems

Our customers’ problems fall into three categories:

  • Behavior Signals Trustworthiness
  • Visual Functionality
  • Competitive Advantage

Behavior Signals Trustworthiness

When buyers spend money, they do so with organizations they trust. Similarly, when investors or savers deposit money, they expect their fiduciary or institution to behave properly.

Take a typical bill-paying application from a bank. The payees may be organized in alphabetical order, or by the amount previously paid. A user enters the amount to pay for a current bill. The app automatically calculates and reports the bill pay date. How would an online bank customer react to any of these misbehaviors:

  • Missing payees
  • Lack of information on prior payments
  • Inability to enter a payment amount
  • Misaligned pay date

How do buyers or investors react to these misbehaviors? As they tell it, some ignore issues, some submit bug reports, some call customer support. And, some just disappear. Misbehavior erodes trust. In the face of egregious or consistent misbehavior, customers go elsewhere.

App developers understand that app misbehavior erodes trust. So, how can engineers uncover misbehavior? Functional testing can ensure that an app functions correctly. It can even ensure that an app has delivered all elements on a page by identifier. But, functional testing can overlook color problems that render text or controls invisible, or rendering issues that result in element overlap.
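The gap described above can be made concrete: a functional assertion that passes while the element is unreadable to users. This sketch is illustrative only and not tied to any specific framework:

```javascript
// The element exists and has the right text, yet is invisible to users.
const button = {
  id: 'submit',
  text: 'Pay now',
  style: { color: '#ffffff', background: '#ffffff' }, // white on white
};

// A locator-based functional assertion would pass:
const functionalPass = button.id === 'submit' && button.text === 'Pay now';

// A visual check comparing foreground to background would not:
const visuallyReadable = button.style.color !== button.style.background;

console.log({ functionalPass, visuallyReadable });
// { functionalPass: true, visuallyReadable: false }
```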

When they realize they uncover too many visual issues late in development, app developers look for a solution. They need accurate visual validation to add to their existing end-to-end testing.

Visual Functionality

Another group of users builds applications with visual functionality. With these applications, on-screen tools let users draw, type, connect and analyze content. These applications can use traditional test frameworks to apply test conditions. The hard part comes when developers want to automate application testing.

Sure, engineers can use identifiers for some of the on-screen content. However, identifiers cannot capture graphical elements. To test their app, some engineers use free pixel diff tools to validate screenshots or screen regions. How do these translate to cross-browser behavior?  Or, how about responsive application designs on different viewport sizes?

At some point, all these teams realize they have wasted engineering resources on home-grown visual validation systems. So, they look for a commercial visual validation solution.

Competitive Advantage

The final issue we hear from our users involves competitive advantage. They tell us they seek an engineering advantage to overcome technical and structural limitations. For example, as teams grow and change, the newest members face the challenge of learning the dependencies that cause errors. Also, development teams build up technical debt based on pragmatic release decisions.

Over time, existing code becomes a set of functionality with an unknown thread of dependencies. Developers are loath to touch existing code for fear of incurring unknown defects that will result in unexpected code delays. As you might imagine, visual defects make up a large percentage of these risks.

Developers recognize the need to work through this thread of dependencies. They look for a solution to help identify unexpected behavior, and its root cause, well before code gets released to customers. They need a highly-accurate visual validation solution to uncover and address visual dependencies and defects.

Hear From Our Customers

A number of Applitools customers shared their insight in our online webinars in 2020. All these webinars have been recorded for your review.

Full End-To-End Testing

In a two-talk webinar, David Corbett of Pushpay described how they are running full end-to-end testing to achieve the quality they need. He described in detail how they use their various tools – especially Applitools. Later in that same webinar, Alexey Shpakov of Atlassian described the full model of testing for Jira. He described their use of visual validation. Both talks described the movement of quality validation to the responsibility of developers.

Alejandro Sanchez-Giraldo of Vodafone spoke about his company’s focus on test innovation over time – including the range of test strategies he has tried. As a veteran, he knows that some approaches fail while others succeed, and he recognizes that learning often makes the difference between a catastrophic failure and a setback. He describes the full range of Vodafone’s testing.

Testing Design Systems

Marie Drake of News UK explained how News UK had deployed a design system to make its entire news delivery system more productive. In her webinar, Marie explained how they depended on Applitools for testing, from the component level all the way to the finished project. She showed how the design system resulted in faster development. And, she showed how visual validation provided the quality needed at News UK to achieve their business goals.

Similarly, Tyler Krupicka of Intuit described their design system in detail. He showed how they developed their components and testing mocks. He described the design system as giving Intuit the ability to make rapid mock-up changes in their applications. Tyler explained how Intuit used their design system to make small visual tweaks that they could evaluate in A/B testing to determine which tweak resulted in more customers and greater satisfaction.

Testing PDF Reports

Priyanka Halder, head of quality at GoodRx, describes her process for delegating quality across the entire engineering team as a way to accelerate the delivery of new features to market. She calls this “High-Performance Testing.” In her webinar, she explains that one of the keys to GoodRx is its library of drug description and interaction pages. GoodRx uses Applitools to validate this content, even as company logos, web banners, and page functionality get tweaked constantly.

Similarly, Fiserv uses Applitools to test a range of PDF content generated by Fiserv applications.  In their webinar, David Harrison and Christopher Kane of Fiserv describe how Applitools makes their whole workflow run more smoothly.

These are just some of the customer stories shared in their own words in 2020.

Looking Ahead to 2021

As I mentioned earlier, we plan to publish a series of case studies outlining customer successes with Applitools in 2021.

When you read the published stories, you might be surprised by the results. For example, one company’s test suite for its graphical application today runs in 5 minutes. The home-grown visual test suite it previously used took 4 hours to complete. Instead of running their tests infrequently, they can now run their application test suite as part of every software build. That’s what a 48x improvement can do.

Another company had tests that every developer used to run, analyze, and evaluate on their own. Visual tests were incorporated into their suite and had to be validated manually. Today, the tests run automatically and require just a single engineer to review the results and either approve them or delegate them for rework.

You might be surprised to find competitors in your industry using Applitools. And, you might find that they feel guarded about sharing that information among competitors. Some companies see Applitools as a secret weapon in making their teams more efficient.

We look forward to sharing more with you in the weeks and months ahead.

Happy Testing, and Happy 2021.

Featured photo by alex bracken on Unsplash

Leading With Visual AI – Applitools Achievements In 2020 https://applitools.com/blog/applitools-achievements-2020/ Wed, 23 Dec 2020 07:41:04 +0000 https://applitools.com/?p=25300 As we complete 2020, we want to share our take on the past year. We had a number of achievements in 2020. And, we celebrated a number of milestones.

The post Leading With Visual AI – Applitools Achievements In 2020 appeared first on Automated Visual Testing | Applitools.


As we complete 2020, we want to share our take on the past year. We had a number of achievements in 2020. And, we celebrated a number of milestones.

Any year-in-review article must include the effects of the pandemic, along with threats to social justice. We also want to thank our customers for their support.

Achievements: Product Releases in 2020

Ultrafast Grid

Among our achievements in 2020, Applitools launched the production version of Ultrafast Grid and the Ultrafast Test Cloud Platform. With Ultrafast Grid, you can validate your UI across multiple desktop client operating systems, browsers, and viewport sizes using only a single test run. We take care of the validation and image management, and you don’t need to set up and manage that infrastructure.

Ultrafast Grid works so quickly because we assume your application uses a common server response for all your clients. You only need to capture one server response. Ultrafast Grid captures the DOM state in each snapshot and compares that snapshot in parallel across every client/operating system/viewport combination you wish to test. A single test run means less server time. Parallel validation means less test time. Ultrafast Grid simultaneously increases your test coverage while reducing both your test run time and infrastructure requirements.
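The single-capture, parallel-validation model can be sketched as follows. This is not the Applitools SDK; the target list and function names are assumptions for illustration only:

```javascript
// Environments to validate against, analogous to a grid configuration.
const targets = [
  { browser: 'chrome', width: 1280, height: 800 },
  { browser: 'firefox', width: 1280, height: 800 },
  { browser: 'safari', width: 390, height: 844 },
];

// Stand-in for server-side rendering plus Visual AI comparison.
async function validateSnapshot(snapshot, target) {
  return { target: target.browser, pagesChecked: snapshot.pages.length };
}

// One DOM capture fans out to every environment in parallel, so the
// application server is hit once no matter how many targets we add.
function runParallelChecks(snapshot) {
  return Promise.all(targets.map((t) => validateSnapshot(snapshot, t)));
}

runParallelChecks({ pages: ['home', 'checkout'] }).then((results) =>
  console.log(`${results.length} environments validated from one capture`)
);
```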

“Accelerating time to production without sacrificing quality has become table stakes for Agile and DevOps professionals, the team at Applitools has taken a fresh approach to cross browser testing with the Ultrafast Grid. While traditional cloud testing platforms are subject to false positives and slow execution, Applitools’ unique ability to run Visual AI in parallel containers can give your team the unfair advantage of stability, speed, and improved coverage. This modern approach to testing is something that all DevOps professionals should strongly consider.”

Igor Draskovic, VP, Developer Specialist at BNY Mellon

A/B Testing

We introduced a new feature to support A/B testing. As more of our customers use A/B testing to conduct live experiments on customer conversion and retention, Applitools now supports the deployment and visual validation of parallel application versions.

“A/B testing is a business imperative at GoodRx – it helps our product team deliver the absolute best user experience to our valued customers. Until now, our quality team struggled to automate tests for pages with A/B tests – we’d encounter false positives and by the time we wrote complex conditional test logic, the A/B test would be over. Applitools implementation of A/B testing is incredibly easy to set up and accurate. It has allowed our quality team to align and rally behind the business needs and guarantee the best experience for our end-users.”

Priyanka Halder, Sr. Manager, Quality Engineering at GoodRx

GitHub, Microsoft, and Slack Integrations

Applitools now integrates with Slack, adding to our range of application and collaboration integrations. Applitools can now send alerts to your engineering team members, including highlights of changes and the test runs on which they occurred.

As a company, we also announced integrations with GitHub Actions and Microsoft Visual Studio App Center.  The integrations allow developers to seamlessly add Visual AI-powered testing to every build and pull request (PR), resulting in greater UI version control and improved developer workflows. As we have seen, this integration into the software build workflow provides visual testing at code check-in time. Instead of waiting for end-to-end tests to expose rendering problems and conflicts, developers can use Applitools to validate prior to code merge.

“We’re excited to welcome Applitools to the GitHub Partner Program and for them to expand their role within the GitHub ecosystem. Applitools’ Visual AI powered testing platform and GitHub’s automated, streamlined developer workflow pair perfectly to support our shared vision of making it easier to ship higher quality software, faster.”

Jeremy Adams, Director of Business Development and Alliances at GitHub

Auto Maintenance and Smart Assist

We also introduced major improvements with Auto Maintenance and Smart Assist. With Smart Assist, we help you deploy your tests to address unique visual test challenges, such as dynamic data and graphical tests. With Auto Maintenance, you can validate an intended visual change in one page of your application and then approve that change on every other page where that change occurs. If you update your logo or your color scheme, you can validate identical changes across your entire application in one click. Smart Assist and Auto Maintenance reduce the time and effort you need to maintain your visual tests – saving hours of effort in your development and release process.

“We use Applitools extensively in our regression testing at Branch. Visual AI is incredibly accurate, but equally impressive are the AI-powered maintenance features. With the volume of tests that we run, the time savings that the AI auto-maintenance features afford us are extensive.”

Joe Emison, CTO at Branch Financial

Achievements: Milestones in 2020

Applitools also achieved a number of major milestones in 2020.

1,000,000,000 Page Images Collected

We recorded one billion page images collected across our customer base. Many of our customers now include Applitools validation as part of every CI/CD check-in and build. You will find out more in our customer insights discussion, below. We celebrated that achievement earlier in 2020.

Test Automation University

We launched Test Automation University (TAU) as a way to help expand test knowledge among practitioners. Among our achievements in 2020, TAU now has over 50 courses to teach test techniques and programming languages. You can take any of these courses free of charge. Whether you are an experienced test programmer or just getting started, you will find a range of courses to match your interests and abilities. We introduced 19 new courses in 2020. We also saw significant numbers of new students using Test Automation University. In early 2020, we announced that we had 35,000 students taking courses. Later in the year we celebrated reaching the 50,000 user milestone. Look forward to another announcement in early 2021.

Hackathons

In 2019, Applitools launched our Visual AI Rockstar Hackathon. Hackathon participants ran a series of test cases comparing legacy locator-based functional testing with Applitools visual validation. In 2020, we shared the results of that Hackathon. Engineers wrote tests faster, wrote test code that ran more quickly, and wrote tests that required less maintenance over time. We were able to showcase those achievements in 2020.

Also in 2020, we hosted a cross-browser test hackathon. Participant results demonstrated that Ultrafast Grid sets up more easily than a traditional farm of multiple browsers. The real value of Ultrafast Grid, though, comes with test maintenance as applications update over time. In November, we hosted a hackathon based on a retail shopping application. We look forward to sharing the insights from that hackathon in early 2021.

Future of Testing

Lastly, in 2020, Applitools launched the Future of Testing Conference. Applitools gathered engineering luminaries across a range of industries and companies – from brand names like Microsoft and Sony to tech leaders like GoodRx and Everfi. Their stories show how companies continue to deliver quality products more quickly by using the right approaches and the right tools. Applitools has planned more Future of Testing Conferences for 2021.

Achievement: Customer Growth In 2020

Another of our achievements in 2020 involved customers. We want to thank our customers for their commitment to using Applitools in 2020. Not only did we pass the 1,000,000,000 page capture mark, but we also learned about the many exciting ways our customers are using Applitools.

During the COVID-19 coronavirus pandemic, our customers have appreciated how we have worked to ensure that they continued to get full use and value from Applitools. Though our support team worked largely from home during the year, we used tools to ensure that our customers got the support they needed to succeed with Applitools.

We continued to see our existing customers use more and more page checks over time. A number of companies run Applitools to validate code check-in on daily, and even hourly, code builds. Our customers are also using Applitools to validate component libraries they are building and modifying in React, Angular, and Vue.

We also saw a large number of companies experimenting with and adopting Cypress for development validation. Some companies used Cypress in development to complement an existing Selenium test infrastructure. Others were starting their Cypress validation in new areas or on new products.

Our World in Review – 2020

While many issues affected the world in 2020, two dominated the Applitools world.

The first issue, the COVID-19 pandemic, required our team to work from home for much of the year. Dan Levy offered his suggestions on how to work from home more efficiently.  As we continued to work remotely, we saw how the pandemic affected our team and the world around us. At this point, all of us know people who have been infected. Some in our circles have been hospitalized. And, some have died.

As a company, we are fortunate that Applitools has provided its employees with the ability to work from home. As a company, we want to thank the first responders and health care workers who cannot shelter in safety. We thank them for risking their lives to make all of us safe.

And, we also share condolences with those of you who have lost family, friends, and other loved ones in 2020.

The second issue, social justice, has continued to capture the spirit of our company and its employees. For 8 minutes 46 seconds, the world saw one human’s casual indifference while kneeling on another human’s neck. While not the only incident of 2020, the video of George Floyd’s struggle to live affected all of us. How can there be justice if our civil guardians cannot treat all of us equally? If we want a just world, we need to support those who advocate for social justice.

Applitools and its employees support creating a more just world for all. We continue to encourage our employees to support social justice movements for all. They can support Black Lives Matter, or any other organization actively combatting racism and injustice.

We know there are some who sow division for their own gain. As a company, we think we are stronger together.

Next Up – Customer Insights in 2020

In our next blog post, learn more about Applitools customers. We will share some details we learned about Applitools driving our customers’ productivity. We will be sharing more details in 2021 with a series of customer success stories. Before we release those, read our next blog post on customer insights. Learn how your peers and colleagues benefit from using our highly-accurate Visual AI infrastructure.
