Best Practices Archives – Automated Visual Testing | Applitools
https://applitools.com/blog/tag/best-practices/

Applitools delivers the next generation of test automation powered by AI assisted computer vision technology known as Visual AI.

[Visual] Mobile Test Automation Best Practices
https://applitools.com/blog/visual-mobile-test-automation-best-practices/ (Thu, 06 May 2021)

Mobile testing is challenging. Mobile test automation – i.e., automating tests to run on mobile devices – is even more challenging, because it requires additional tools, libraries, and infrastructure setup before you can implement and run your automated tests on mobile devices.

This post will cover the various strategies and practices you should think about for your mobile test automation – including strategy, execution environment setup, automation practices and running your automated tests in the CI pipeline.

Lastly, there is also a link to a GitHub repository which can be used to start automation for any platform – Android, iOS, web, Windows. It also supports integration for Applitools Visual AI and can run against local devices as well as cloud-based device farms.

So, let’s get started.

Test Designing

The Test Automation Pyramid is not just a myth or a concept. It actually helps teams shift left and get quick feedback about the quality of the product-under-test. This then allows humans to use their intelligence and skills to explore the product-under-test to find other issues which are not covered by automation. 

To make test automation successful however, you need to consciously look at the intent of each identified test for automation, and check how low in the pyramid you can automate it.

A good automation strategy for the team would mean that you have identified the right layers of the pyramid (in context of the product and tech stack). 

Also, an important thing to highlight is that each layer of the pyramid has a certain type of impact on the product-under-test.

This means:

  • Each unit test will test a very specific part of the product logic.
  • Each UI / end-2-end test will impact the breadth of the product functionality.

In the context of this post, we will be focusing on the top layer of the Test Automation Pyramid – the UI / end-2-end Tests.

To make the web / mobile test automation successful, you need to identify the right types of tests to automate at the top-layer of the automation pyramid.

Mobile Test Automation Strategy

For mobile test automation, you need to have a strategy for how, when and where you need to run the automated tests. 

Automated Test Execution Strategy

Based on your device strategy, you also need to think about how and where your tests will run. 

If your product supports it, you would want the tests to run on browsers / emulators / devices. These could be on the local machine or on a browser / device farm, as part of manually triggered test execution and also as part of automatic triggers set up in your CI server.

In addition, you also need to think about fast feedback and coverage. For this there are different considerations – sequential execution, distributed execution and parallel (with distributed) execution.

  • Sequential execution: All tests will run, in any order, but 1 at a time
  • Distributed execution (across same type of devices): 
    • If you have ‘x’ devices of the same type available, each test will run on whichever of those devices is available at the time
    • It is preferable to have the tests distributed across the same type of device to prevent device-specific false positives / negatives
    • This will give you faster feedback
  • Parallel execution (across different types of devices, ex: One Plus 6, One Plus 7): 
    • If you have ‘x’ types of devices available, all tests will run on each of the device types
    • This will give you wider coverage
  • Parallel with Distributed execution: (combination of Parallel and Distributed execution types)
    • If you have ‘x’ types of devices available, and ‘y’ devices of each type: (ex: 3 One Plus 6, 2 One Plus 7)
      • All tests will run on each device type – i.e. One Plus 6 & One Plus 7
      • The tests will be distributed across the available One Plus 6 devices
      • The tests will be distributed across the available One Plus 7 devices

Your Test Strategy needs to have a plan for achieving coverage. Analytics data can tell you what types of devices or OS or capabilities are important to run your tests against.

Device Testing Strategy

Each product has different needs and requirements of the device capabilities to function correctly. Based on the context of your product, look at how you can identify a Mobile Test Pyramid that suits you.

The Mobile Test Pyramid allows us to quickly start testing our product without the need for a lot of real devices in the initial stages. As the product moves across environments, you can progressively move to using emulators and real devices.

Some important aspects to keep in mind here are to identify:

  • Whether you can use the browser for early testing
  • Whether your product works well (functionally and performance-wise) in emulators
  • What type of real devices you need, and how many. Is there an OS version limitation, or are specific capabilities required? Are there enough devices for all team members (especially since most of us work remotely these days)?

In addition, do you plan to set up your real devices in a local lab setup or do you plan to use a device farm (on-premise or cloud based)? Either approach needs to be thought through, and the solution needs to be designed accordingly.

Automating the Setup of the Mobile Test Execution Environment

Depending on your test automation tech stack, the setup can be daunting for people who are new to it. Also, depending on the versions of the tools / libraries being used, there can be differences in the execution results.

To help new people start easily, and keep the test execution environment consistent, the setup should be automated.

If you are using Appium from Linux or macOS machines, you can refer to this blog post for automating the setup – https://applitools.com/blog/automatic-appium-setup/

Mobile Test Automation Solution / Framework

Your automation framework should have some basic criteria in place:

  • Tests should be easy to read and understand
  • The framework design should allow for easy extensibility and scalability
  • Tests should be independent – this allows them to run in any sequence, and also in parallel with other tests

Refer to this post for more details on designing your automation framework.

The team needs to decide its criteria for automation, and its execution. Based on the identified criteria, the framework should be designed and implemented.

Test Data Management

Test Data is often an ignored aspect. You may design and plan for everything, but if you do not have a good test data strategy, then all the effort can go to waste, and you will end up with sub-optimal execution.

Here are some things to strive for:

  • It is ideal if your tests can create the data they need.
  • If the above is not possible, then have seeded data that your tests can use. It is important to have sufficient data seeded in the environment to allow for test independence and parallel execution.
  • Lastly, if test data cannot be created or seeded, add intelligence to your test implementation to “query” the data that each test needs, and use it intelligently and dynamically during execution.

OS Support

The tests should be able to be implemented and executed on whatever OS the team members are using, and on whatever OS is available on the CI agents. So if your team members are on Windows, Linux, and macOS, keep that in consideration when implementing the tests and their utilities, ensuring they work in all OS environments.

Platform Support

Typically, the product-under-test is available to end-users on various platforms. Ex: as an Android app distributed via the Google Play Store, as an iOS app distributed via Apple’s App Store, or via the web.

Whatever platforms your product is available on, your test framework should support all of them.

My approach for such multi-platform product-under-test is simple:

  • Tests should be specified once, and should be able to run on any platform, determined by a simple environment variable / configuration option

To that effect, I have built an open-source automation framework that supports the automation of web, Android, iOS, and Windows desktop applications. You can find it, with some sample tests, here – https://github.com/znsio/unified-e2e. You can refer to the “Getting Started” section.
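As a rough illustration of the idea – this is only a sketch, not the actual API of the framework above, and the environment variable, Appium URL, and capability values are assumptions – platform selection can be as simple as reading an environment variable when the driver is created:

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DriverFactory {

    // PLATFORM can be set to "android", "ios" or "web" before the run,
    // e.g. PLATFORM=android <your test command>
    public static WebDriver createDriver() throws Exception {
        String platform = System.getenv().getOrDefault("PLATFORM", "web");
        DesiredCapabilities caps = new DesiredCapabilities();

        switch (platform) {
            case "android":
                caps.setCapability("platformName", "Android");
                caps.setCapability("app", System.getenv("APP_PATH")); // path to the apk
                return new RemoteWebDriver(new URL("http://localhost:4723/wd/hub"), caps);
            case "ios":
                caps.setCapability("platformName", "iOS");
                caps.setCapability("app", System.getenv("APP_PATH")); // path to the ipa
                return new RemoteWebDriver(new URL("http://localhost:4723/wd/hub"), caps);
            default:
                return new ChromeDriver(); // plain web run on the local machine
        }
    }
}

The same test can then be pointed at Android, iOS, or the web purely through configuration, without touching the test code.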

Reporting

Having good test reports automatically generated as part of your test execution would allow you the following:

  • Know what happened during the execution
  • If the test fails, it is easy to do root-cause analysis, without having to rerun the test, and hope the same failure is seen again

From a mobile test automation perspective, the following should be available in your test reports:

  • For each test, the device details should be available – especially very valuable if you are running tests on multiple devices
  • Device logs for the duration of the test execution should be part of the reports
    • Clear the device logs before test execution starts; once the test completes, capture the logs and attach them to the report
  • Device performance details – battery, CPU, screen refresh / frozen frames, etc.
  • Relevant screenshots from test execution (the test framework should have this capability, and the implementer should use it as per each test context)
  • Video recording of the executed test
  • Ability to add tags / meaningful metadata to each test
  • Failure analysis capability and ability to create team specific dashboards to understand the tests results

In addition, reports should be available in real time. One should not need to wait for all the tests to have finished execution to see the status of the tests.
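As a concrete example of the device-log item in the checklist above, on Android the clear-and-capture flow can be scripted around adb. A minimal sketch (it assumes adb is on the PATH, a single connected device, and an illustrative reports directory):

import java.io.File;
import java.io.IOException;

public class DeviceLogCollector {

    // Clear the device log buffer before the test starts.
    public static void clearLogs() throws IOException, InterruptedException {
        new ProcessBuilder("adb", "logcat", "-c").inheritIO().start().waitFor();
    }

    // Dump everything logged since the last clear into a file that the
    // report can attach or link to.
    public static File captureLogs(String testName) throws IOException, InterruptedException {
        File logFile = new File("reports/" + testName + "-logcat.txt");
        logFile.getParentFile().mkdirs();
        new ProcessBuilder("adb", "logcat", "-d")
                .redirectOutput(logFile)
                .start()
                .waitFor();
        return logFile;
    }
}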

Assertion Criteria

Since we are doing end-2-end test automation, the tests we are automating are scenarios / workflows. It is quite possible that as part of the execution, we encounter different types of inconsistencies in the product functionality. While some of the inconsistencies mean there is no point proceeding with further execution of that specific scenario, there are many cases where we can proceed with the execution. This is where using hard asserts vs. soft asserts can be very helpful.

Let’s take an example of automating a banking scenario – where the user logs in, then sees the account balance, and then transfers a portion of the balance to another account.

In this case, if the user is unable to login, there is no point proceeding with the rest of the validation. So this should be a hard-assertion.

However, let’s say the test logs in, but the balance is 5000 instead of 6000. Since our test implementation takes a portion of the available balance – say 10%, for transferring to another account, the check on the balance can be a soft-assertion.

When the test completes, it should then fail with the details of all the soft-assertion failures found in the execution. 

This approach, which should be used very consciously, will allow you to get more value from your automation, instead of the test stopping at the first inconsistency it finds.
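Here is a minimal sketch of that banking scenario using TestNG’s SoftAssert; the login, getBalance, and transfer helpers are hypothetical stand-ins for whatever your framework exposes:

import org.testng.Assert;
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class TransferTest {

    @Test
    public void transferTenPercentOfBalance() {
        // Hard assertion – if login fails there is no point continuing.
        Assert.assertTrue(login("user", "password"), "Login failed");

        SoftAssert softly = new SoftAssert();

        double balance = getBalance();
        // Soft assertion – record the mismatch but keep going, since the
        // transfer logic works off the actual balance anyway.
        softly.assertEquals(balance, 6000.0, "Unexpected account balance");

        boolean transferred = transfer(balance * 0.10, "other-account");
        softly.assertTrue(transferred, "Transfer of 10% of the balance failed");

        // Fails the test now, reporting every soft-assertion failure collected above.
        softly.assertAll();
    }

    // Stubs standing in for real page-object / API calls.
    private boolean login(String user, String password) { return true; }
    private double getBalance() { return 5000.0; }
    private boolean transfer(double amount, String toAccount) { return true; }
}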

Visual Test Automation

Let’s take an example of validating a specific functionality in a virtual meeting platform. The scenario is: The host should be able to mute all participants.

Following are the steps to follow in a traditional automated test:

  1. Host starts a meeting
  2. More than ‘x’ participants join the meeting
  3. Host mutes all participants
  4. The assertion needs to be done for each of the participants to check if they are muted

Though feasible, Step 4 needs a significant amount of code to be written.

But what about a situation where there is a bug in the product and, while force muting, the video is also turned off for each participant? How would your traditional automated test validate this?

A better way would be to use a combination of functional and Applitools’ AI-powered visual testing in such a case, where the following would now be done:

  1. Host starts a meeting
  2. More than ‘x’ participants join the meeting
  3. Host mutes all participants
  4. Using an Applitools visual assertion, you will now be able to check the intended functionality as well as other existing issues, even ones not directly validated by / related to your test. This automatically increases your test coverage while reducing the amount of code to be written

In addition, you want to ensure that your app looks great, consistent and as expected on any device. So this is an easy to implement solution which can give you a lot of value in your quest for higher quality!
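For reference, the visual checkpoint in step 4 looks roughly like this with the Applitools Eyes Java SDK (a sketch – the app name, test name, and surrounding driver setup are placeholders):

import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;

public class MuteAllVisualCheck {

    public void verifyAllParticipantsMuted(WebDriver driver) {
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

        try {
            eyes.open(driver, "Virtual Meeting App", "Host mutes all participants");
            // One visual checkpoint replaces per-participant assertion code and also
            // catches unrelated regressions, such as video tiles turning off.
            eyes.checkWindow("All participants muted");
            eyes.close();
        } finally {
            eyes.abortIfNotClosed();
        }
    }
}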

Instrumentation / Analytics Automation

One of the key ways to understand how end-users actually use your product (web / app) is through analytics.

In the case of the web, if some analytics event capture is not working as expected, it is “relatively” easy for you to fix the problem and do a quick release, and you will be able to start seeing that data.

In the case of mobile apps though, once the app is released and the user has installed it, you will only see the data correctly after you release the app again (with the fix, of course) AND the user updates it. Hence you need to plan the release approach of your mobile apps very carefully. See the webinar by Justin and me, “Stop Testing (Only) The Functionality of Your Mobile Apps!”, on the different aspects of mobile testing and mobile test automation that one needs to think about and include in the testing and automation strategy.

Coming back to analytics, it is easily possible, with some collaboration with the developers of the app, to validate the analytics events being sent as part of your mobile test automation. There are various approaches to this, but that is a separate topic for discussion.

Running the Automated Tests

The value of automation is to run the tests as often as we can, on every change in the product-under-test. This helps identify issues as soon as they are introduced in the product code.

It can also highlight that the tests need to be updated in case they are out of sync with expected functionality. 

From a mobile test automation perspective, we need to do a few additional steps to make these automated tests run in a truly non-intrusive and fully automated manner.

Automated Artifact Generation, in Debug Mode

When automating tests for the web, you do not need to worry about an artifact being generated. When the product build completes, you can simply deploy the artifact(s) to an environment, update configuration(s), and your tests can run against it.

However for mobile test automation, the process would be different.

  1. You need to build the mobile app (apk / ipa) and have it point to a specific environment.

     To clarify this – the apk / ipa would point to backend servers via APIs. You need this to be configurable so the artifact can point to your test environment vs. the production environment.
  2. You also need the capability to generate the artifacts for each type of environment (ex: dev, qa, pre-prod, prod) from the same snapshot of the code.
  3. You need this artifact to be built in debug mode to allow the automated tests to interact with it.

Once the artifact is generated, it should automatically trigger the end-2-end tests.

Teams need to invest in artifacts with the above capabilities generated automatically. Ideally, these artifacts are generated for each change in the product code base – and subsequently tests should run automatically against it. This would allow us to easily identify what parts of the code caused the tests to fail – hence fix the problem very quickly.

Unfortunately, I have seen an antipattern in far too many teams – where the artifact is created from some developer machine (which may have unexpected code as well), and shared over email or some weird mechanism. If your team is doing this, stop it immediately, and invest in automating the mobile app generation via a CI pipeline.
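As a rough sketch of what such a pipeline can look like – the Gradle task, artifact paths, and test entry point below are assumptions and will differ per project – a GitHub Actions-style workflow could build a debug apk pointing at the test environment and hand it straight to the test job:

name: Build debug APK and run e2e tests

on: [push]

jobs:
  build-apk:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build a debug APK pointing at the QA environment
        run: ./gradlew assembleQaDebug   # hypothetical flavor / task name
      - name: Publish the APK for the test job
        uses: actions/upload-artifact@v2
        with:
          name: app-qa-debug
          path: app/build/outputs/apk/qa/debug/   # hypothetical output path

  e2e-tests:
    needs: build-apk
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/download-artifact@v2
        with:
          name: app-qa-debug
      - name: Run the automated tests against the fresh APK
        run: ./run_e2e_tests.sh   # hypothetical test entry point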

Running the Automated Mobile Tests as Part of CI Execution

Running the tests as part of CI needs a lot of thought from a mobile test automation perspective.

Things to think about:

  1. How are your CI agents configured? They should use the same automated script as discussed in the “Test Execution Environment” section
  2. Do you have access to the CI agents? Can you add real devices to those agents? There is a regular maintenance activity required for real devices and you would need access to the same.
  3. Do you have a Device Farm (on-premise or cloud-based)? You need to be able to control the device allocation and optimal usage
  4. How will you get the latest artifact from the build pipeline automatically and pass it to your framework? The framework then needs to clean up the device(s) and install this latest artifact automatically, before starting the test execution.

The mobile test automation framework should be fully configurable from the command line to allow:

  • Local vs. CI-based execution
  • Run against local devices or a device farm
  • Run a subset of tests, or the full suite
  • Run against any supported (and implemented-for) environment
  • Run against any platform that your product supports – ex: Android, iOS, web, etc.

How to Upgrade to Selenium 4
https://applitools.com/blog/how-to-install-selenium-4/ (Fri, 25 Sep 2020)


In this series on Selenium 4, we are providing use cases and code samples to explore the upcoming changes to the WebDriver API.

In the previous blog post, we discussed Migrating to Selenium 4 and today we’ll detail how to install Selenium 4 so that you can check out all of the new features!

While Selenium has various language bindings, Java and JavaScript are the most widely used, so this post will cover how to install these two bindings. 


Java

How to Upgrade from Selenium 3 to Selenium 4

If you are already using Selenium with Maven, you can upgrade to Selenium 4 by changing the Selenium version in your pom.xml dependency, as in the example below:
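The relevant change is the version on the selenium-java dependency; a minimal sketch (the 4.0.0 version number is illustrative – use the latest 4.x release):

<dependencies>
  <dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>4.0.0</version>
  </dependency>
</dependencies>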

If you are already using Selenium with Gradle, you can upgrade to Selenium 4 by changing the Selenium version in your build.gradle dependency, as shown in the example below:
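Again, the key change is the selenium-java coordinate in build.gradle (the version shown is illustrative):

dependencies {
    testImplementation 'org.seleniumhq.selenium:selenium-java:4.0.0'
}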

How to Install Selenium 4

Before beginning, make sure Java is installed on your machine and the JAVA_HOME environment variable is set in your system path.

The Selenium Java binding can be set up in two different ways: via build tools, or manually. I’ll outline the steps for both. However, you only need to choose one approach.

Install Selenium via build tools

There are various Java build tools available to manage the build and dependencies; the most popular and widely used ones are Maven and Gradle.

Install and setup Maven

Maven is a build and dependency management tool for Java based application development which helps with complete build lifecycle management.

Maven is also available with the Java IDEs (IntelliJ, Eclipse, etc) as a plugin.

However, if you are not using the plugin, you can install Maven manually.

Download Selenium using Maven

Maven looks for a pom.xml file for information on the project, its configuration, dependencies, plugins, etc.

Maven Central Repository is the place where all the dependencies/libraries for all versions can be found. We can search for a library and copy and paste the dependency into our pom.xml file to download them.

For example, Selenium 4 can be downloaded using Maven by adding the dependency to pom.xml as shown in the sample below:
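A minimal pom.xml along these lines (the group ID, artifact ID, and versions are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>selenium4-demo</artifactId>
  <version>1.0-SNAPSHOT</version>

  <properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.seleniumhq.selenium</groupId>
      <artifactId>selenium-java</artifactId>
      <version>4.0.0</version>
    </dependency>
  </dependencies>
</project>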

Test Setup

You are now all set to write tests and run them through the IDE or through the command line. You can test the setup by adding the following code snippet into a new Java class under the ‘src’ directory.
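A simple smoke test might look like this (it assumes chromedriver is available on your system path):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class FirstSelenium4Test {

    public static void main(String[] args) {
        // Assumes chromedriver is on the system PATH
        // (or configured via the webdriver.chrome.driver system property).
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://applitools.com/");
            System.out.println("Page title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}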

Install and setup Gradle

Gradle is another popular build tool used for Java-based applications these days. It’s open source, and the build scripts can be written in Groovy or Kotlin DSL.

Gradle can be also installed as a plugin in Java IDEs (IntelliJ, Eclipse, etc).

However, if you are not using the plugin, you can install Gradle manually.

Download Selenium using Gradle

Gradle looks for a build.gradle file where all the dependencies, plugins and build scripts are written.

Maven Central Repository is the place where all the dependencies/libraries for all versions can be found for Gradle as well. We can search for a library and copy and paste the dependency from the Gradle tab into our build.gradle file to download them.

For example, Selenium 4 can be downloaded using Gradle by adding the dependencies to the build.gradle file as shown in the sample below.
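A minimal build.gradle might look like this (versions are illustrative):

plugins {
    id 'java'
}

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'org.seleniumhq.selenium:selenium-java:4.0.0'
    testImplementation 'org.testng:testng:7.4.0'
}

test {
    useTestNG()
}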

Test Setup

You are now all set to write tests and run them through the IDE or through the command line. You can test the setup by adding the same smoke-test class shown in the Maven section into a new Java class under the ‘src’ directory, with the build.gradle file in the project root directory.

Install Selenium manually

Download Selenium

The jar files for Selenium 4 Java bindings can be downloaded from the official Selenium downloads page.

It will be downloaded as a zip file, unzip it.

Selenium scripts can be executed on various browsers, namely Firefox, Chrome, Internet Explorer, Safari, Opera, and Edge. To explore and install the drivers for each, refer to the documentation links under the “Browsers” section on the Downloads page.

Add Selenium to IntelliJ

Within IntelliJ:

Go to File → Project Structure → Modules → Dependencies → click the ‘+’ sign → select ‘JARs or directories’ → select all the Selenium jar files and click the OK button.

Also, add the downloaded browser drivers under the project directory. In our example we will add chromedriver to run our tests on Chrome browser.

Test Setup

Create a new Java file under the ‘src’ directory and add the same smoke-test code shown in the Maven section above.

JavaScript

You can download and manage the Selenium JavaScript bindings using npm.

Install NodeJs and NPM

As a prerequisite, Node.js needs to be downloaded and installed on the machine. To verify the installation, use the below two commands:

node -v
npm -v

Both should give the current version installed as an output.

Download and install Selenium Webdriver

Using the below command we will install Selenium with npm package manager on your machine.

npm install -g selenium-webdriver

Additionally, to run the tests on the browsers (Chrome, Firefox and IE etc), browser drivers need to be downloaded and set in the system path.

The drivers can be downloaded from the Downloads page and the links can be found under the “Browsers” section on the page. Windows users may add the driver path to the PATH variable under environment variables. Mac and Linux users need to add a driver path to the $PATH variable in the shell.

Once we have installed the Selenium libraries and set the path for the browsers, we need to add the package.json file to the project root directory. NPM requires this file to install the dependencies and plugins used to build the project along with running the tests.

package.json can be created either manually or by running the below command, which adds the file to the directory it is run from.

 npm init

The sample package.json file below can be used for installing Selenium 4 and running our tests. If you are already using the Selenium JavaScript bindings, upgrade to Selenium 4 by changing the version of the selenium-webdriver dependency, as shown in the sample below.
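A sample package.json along these lines (the name and versions are illustrative):

{
  "name": "selenium4-js-demo",
  "version": "1.0.0",
  "description": "Sample Selenium 4 tests with the JavaScript bindings",
  "scripts": {
    "test": "node googleSearchTest"
  },
  "dependencies": {
    "selenium-webdriver": "^4.0.0"
  }
}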

Once you have updated the Selenium 4 dependencies in the package.json file, run the below command to install all dependencies defined in your package.json file under dependencies section.

npm install

The test is written in a “googleSearchTest.js” file. Below is sample code which opens a Firefox browser, navigates to google.com, searches for “Selenium 4”, and hits ENTER.
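A sketch of such a test with the selenium-webdriver package (it assumes geckodriver is installed and on the system PATH):

// googleSearchTest.js
const { Builder, By, Key, until } = require('selenium-webdriver');

(async function googleSearchTest() {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('https://www.google.com');
    // Type the query into the search box and press ENTER.
    await driver.findElement(By.name('q')).sendKeys('Selenium 4', Key.ENTER);
    await driver.wait(until.titleContains('Selenium 4'), 10000);
    console.log('Results page title:', await driver.getTitle());
  } finally {
    await driver.quit();
  }
})();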

To execute the test use the below command

node googleSearchTest

Summary

We went through the setup instructions for Selenium 4 for the two most popular language bindings – Java and JavaScript – with ready-to-use sample tests and build files.

Now that we are ready with the Selenium 4 setup on our machine, in the next post of this series, we will explore the newly added features offered in Selenium 4 and the benefits they provide with working code samples.

How Do I Build A Culture of Quality Excellence?
https://applitools.com/blog/quality-excellence/ (Tue, 01 Sep 2020)


How Do You Build A Culture of Quality Excellence?

Greg Sypolt, VP of Quality Assurance at EverFi, led an hour-long webinar for Applitools titled “Building A Culture of Quality Excellence – Understanding the Quality Maturity Rubric.” Let’s review his key points and his core slides.

About Greg Sypolt

Greg has been writing blogs about his experiences as a quality advocate inside his company. He wants to learn how to make testing more efficient and share what he learns with others. He loves to geek out about testing and technology, but he’s also a sports fan, he loves doing DIY projects, and he’s a husband and father of three.

Overview

Greg’s webinar focuses on you – someone who is looking to improve the quality culture at your organization. No matter where you are on the journey, from just starting out to making significant progress, Greg has useful thoughts and ideas for you.  His webinar breaks into several key pieces:

  • Vision
  • Journey
  • Three Pillars of App Quality Excellence
  • How To Measure The Quality Maturity Rubric
  • Building A Quality Platform
  • Reaping The Savings

Vision

Greg’s key message starts with a key idea: Quality is a journey, not a destination. Your tools and software constantly evolve, and your processes and procedures evolve as well.

How do you build your culture of quality excellence? To start, Greg says, envision an organization where each member has the mindset that he, she, or they own quality. This vision differs drastically from the legacy idea of a “QA team”. By getting on that path towards broad quality ownership, Greg says, each team member can focus on becoming the voice of quality for the organization.

At the same time, Greg says, understand your long-term goals. You want to build an organization where quality aligns with your organization’s other goals. You want to provide both speed and efficiency, in addition to reliability. Your organization likely needs a higher level of collaboration. And, by speed, he means focusing on the speed of testing – making sure that your actual tests run quickly. Quality should not be your bottleneck.

Quality Journey Objectives

Next, Greg walks through the quality journey objectives, describing each step in turn:

  • Everyone understands how we make and test things. In this step, you have a good understanding that design requirements involve validation requirements. You know that you want unit tests, API tests, integration tests, and UI tests. You want to handle functional cases and failure cases. These cases need to be part of the design.
  • Building a quality mindset. Here, you recognize the importance of quality in every aspect of your process to produce a quality product.
  • Understanding our current testing coverage. Provide transparency to your team about what you cover and what you do not. Prioritize areas for test development.
  • Reduction of manual testing, advocating for automating the right things. Slow, inconsistent and inaccurate manual tests impede your testing speed. But prioritize. Know which tests benefit from automation.
  • Build a balanced testing portfolio. Understand how well you cover behavior across unit, API, integration, UI, performance, and security tests. Know your deficiencies and address them.
  • Shift-left mindset by embracing quality in every stage of the CICD pipeline. As you build, test. Ensure that your pipeline gates work well to check each code build against expected behavior.
  • Transitioning to exploratory charters and fully automated testing. This step gets you to automating the right things and beginning to expand your thinking to uncovered cases.
  • Building in quality velocity for developers. Now you deliver ways to help your development team create with a quality mindset by designing with test, testability, and validation results in mind.
  • Quality visibility and accountability. Create the tools and the feedback loop to show discovered defects and their origin – not as a path to punishment but to aid improvement.
  • Remove the traditional QA silo mentality. As part of all the other steps, quality becomes a team metric, rather than the responsibility of a subgroup tasked with assurance.
  • Not a destination. The journey continues – improvements continue along your path.

Greg’s Journey As An Example

Greg shows some data from his experience at his previous company. Greg had worked there for five years.  When he started, the company had no test automation. Each release took weeks to accomplish.


At the end of his journey, the company:

  • Ran 4000 or more builds per week
  • Regularly generated 600 Selenium, 55 Visual, and 75 Integration automated tests
  • Ran an average of 5,000,000 or more tests per month with an average 99.2% pass rate
  • Authored tests in less than 15 minutes and ran all their tests in less than 5 minutes.

By creating all this test infrastructure to go along with the software development builds, Greg’s team created the infrastructure to validate application behavior through test automation. They also built the tools to automate test generation. Finally, they created a discipline to build tests that would run quickly with comprehensive coverage, so testing would allow those 4000+ weekly builds.

Clearly, the process before and after required an evolution at the company. He is working with his team at Everfi with the goal of achieving a comparable outcome.

The Three Pillars of Application Quality Excellence

Next, Greg speaks about the core pillars of building a culture of quality.

First, Greg describes “Quality over quantity.” The first step here is to test the right things, but not everything. Every line of test code you write becomes test code you need to maintain. New features get added, and your test code will likely have to change. You need your developers to learn to think from an end-user’s perspective – how will a user use the application? Analytic tools can help your team understand the paths that users take through your application, so you know what’s important. That data can help you build a robust test infrastructure that targets how your users use your products.

Quality over quantity also focuses your team on transparency for quality. If a developer makes a change and kicks off a build, the build result provides immediate feedback to the developer and the team if the added feature causes test failures. Hold everyone accountable for the quality of their code.


Second, Greg describes “Work smarter not harder.” At Greg’s previous company, they had a dedicated developer solutions team providing self-service solutions for testing. That team made it easy for developers to build the app and tests at the same time. They also built a central repository for metrics, so testing results across behavior, visual, and integration tests could be viewed and analyzed in one place. They worked on tools to author tests more quickly and more effectively. The result made it easier to create the right tests.

They also looked at ways to include more than just developers in helping to both understand the tests and to help create tests.

Third, Greg explains what he means by “Set the right expectations.” He makes it clear that you need to set the right expectations with the development team. When developers begin to develop tests, they need to understand what you expect from them – both from a coverage as well as velocity perspective. You need to make sure that everyone understands who can ensure quality and performance of the product – and it’s the people who write the code who have the greatest influence on the quality of that code.

Greg also makes it clear that you need a clear plan to move to a more mature quality structure. Everyone needs to know what is expected, what will be measured, and what outcomes the team wants to achieve.

Measuring Quality Maturity Rubric

At this point, Greg shares his rubric for analyzing quality maturity.

He mentions that the metrics involve the people on your teams, the technology you use, and the processes you put in place. Your people need to be ready and organized appropriately. You require the right technology to increase your automation. And, you need to put processes in place that allow you to succeed at improving your quality maturity level.

As a preface, Greg describes these metrics and this rubric as based on his experience. He expresses an openness to update these based on the experience of others.

The Quality Maturity Rubric

His full rubric involves 23 different maturity attributes that each have four range values:

  • Beginner – getting started on maturity
  • Intermediate – taking steps to organize people, use technology, or apply processes
  • Advanced – organizing effectively to achieve the maturity measure outcome
  • Expert – having automated the process, with people used at their most efficient.

Greg goes through several attributes in detail, giving examples of beginner through expert for each.  For example:


Consider the “Culture” attribute:

  • Beginner – No shared quality ownership, siloed development and QA, losing sight of the larger quality picture.
  • Intermediate – Identify and define specific levels of quality involvement for individuals on teams, enable shared responsibilities
  • Advanced – Teams are finding problems together
  • Expert – Machines are identifying problems

To use this table, compare your organization to the description of each value that most closely matches yours. Give yourself a “1” if you’re a beginner, “2” if you’re Intermediate, “3” if you’re advanced, and “4” if you achieved expert.

Greg has created tables for each of the attributes. For example:


For the Environment attribute, the Advanced level involves automated builds and staging, with push-button deployment.

Using the rubric, you go through each attribute and assign a numeric value based on how close you are to one of the levels.

As you go through the analysis, you can also use the table to set your goals for improvement over time, as you look to increase the quality maturity of your organization.


Greg posted his grading rubric online – you can find it at:

http://bit.ly/qmc-grading-rubric

Once you use the grading rubric, you can start to figure out the next two steps:

  • Create an improvement map. You can’t move everything all at once. Focus on the key attributes that matter. Greg points out that culture allows you to move everything else. Figure out where you can go with culture maturity.
  • Move to implementation. Once you know what you want to do, move forward with appropriate steps. Do you have the right people? Do you have process changes you need to deploy? New technology? Move forward in steps appropriate to your organization.

Quality Roles

Now that you have people, processes and technology thought through, along with your approach to maturity, it’s time to think about your people and their skills.

Greg presents a map of QA role clarity that helps you think about your existing technical skills among your team. Greg created this table based on his experiences in quality engineering.

  1. Level 1 is a QA specialist, who focuses on manual and exploratory test
  2. The second level is an automation engineer, who has the Level 1 skills plus the ability to develop automation test scripts.
  3. At Level 3 you find an Industry SDET (software development engineer in test) who does the work of the first two levels, plus can write code and develop test algorithms.
  4. Level 4 defines the need of EverFi – an EverFi SDET. In addition to the first three levels, an EverFi SDET must be proficient with DevOps. Greg hopes to get his team to this level in the next 12 to 18 months. A DevOps SDET can help make the development team proficient integrating test with development.
  5. The top level, Level 5 – the Next Gen SDET, incorporates the prior four levels. On top of that, a Next Gen SDET brings proficiency in security, data science, and machine learning. This level is more aspirational. Greg expects that, over time, more quality engineers will obtain these skills.

Greg sees this table as another rubric you can use to evaluate your people. You can evaluate where you are today. And, you can start to think about skills you want to add to your team. The people on your team will help you execute your vision, so you need to know where you are and where you want to go.

You can look to hire people with these skills. You likely can find engineers with Level 3 skills of SDET. More than likely, though, you, like Greg, will be building these skills among your team over time.

Building A Quality Platform

Once you have thought through your quality maturity, reviewed your existing processes and technologies and begun to evaluate your team, Greg wants you to think about your current and future states as a “quality platform”.

He reminds you that your quality platform serves key outcomes:

  • Building a culture that embraces quality at every stage from intake, discovery, execution and release
  • Enabling and driving continuous improvement and adoption of quality practices
  • Giving teams the ability to lead with a sense of purpose, openness and trust.

The combination of people, processes, and technologies that make up your quality platform can help you deliver major quality improvements.


At the core of your quality platform is the Developer Solutions team – working to create quality solutions among the software development team.

Next, you need data insights that help create visibility across the organization.

Third, you develop turnkey solutions that simplify the deployment of key functions or processes. For example, you can easily deploy Selenium or Cypress test automation through a set of well-defined code structures, and you can easily build new tests and structures in your code. At EverFi, Greg has deployed Applitools to easily add visual validation to tests – simplifying overall test development.

Fourth – your platform team serves as quality ambassadors. They represent quality practices and endorse effective change within the organization.

Fifth – you focus on functional values of the platform, making testing better and easier for everyone.

Lastly, you focus on non-functional behaviors, like monitoring the critical paths inside your application. You make sure to understand the critical paths and ensure they work correctly.

Reaping The Savings

Finally, Greg gets to the bottom line.

How does the quality platform deliver savings to your organization?

Each element of the quality platform contributes savings.

Greg gives the example of Data Insights providing cost savings because they help you know where you can add tests, as well as providing data that can be consumed by the team. Developer Solutions helps you reduce the time to deploy pre-existing or new solutions. Functional tests can help improve your quality by ensuring you run tests on every pull request and get instant feedback, instead of waiting. Or, a turnkey session with an external vendor can help you get up to speed quickly with the vendor’s technology.

Building Your Own Quality Platform

As Greg points out, moving to your own quality platform results in a journey of constant improvement. You begin with your current state, envision a future state, and move to deliver that future state through people, technology, and process improvements. Each step along the way, you end up with measurable savings.

Here’s to your journey!

How Do I Link GitHub Actions with Applitools?
https://applitools.com/blog/link-github-actions/ (Mon, 27 Jul 2020)


At Applitools, we have tutorials showing you how to integrate Applitools with commonly-used CICD orchestration engines. You can already link Applitools with Jenkins, Travis, CircleCI. Each of these orchestration engines can automatically launch visual validation with Applitools as part of a build process.

What about GitHub Actions?

What is “GitHub Actions”?

GitHub Actions is an API for cause and effect on GitHub: orchestrate any workflow, based on any event, while GitHub manages the execution, provides rich feedback and secures every step along the way. With GitHub Actions, workflows and steps are just code in a repository, so you can create, share, reuse, and fork your software development practices.

GitHub Actions makes it easier to automate how you build, test, and deploy your projects on any platform, including Linux, macOS, and Windows. This provides Fast CI/CD for any OS, any language, and any cloud!

Applitools provides automated Visual AI testing. It is easy to integrate Applitools with GitHub Actions, letting developers test their code by running visual tests in a GitHub Actions workflow.

For this article, I have chosen a Cypress automation project with a single Applitools visual test, created earlier in my GitHub repo:

https://github.com/satishapplitools/Cypress-Applitools-GitActions.git

Configuring GitHub Actions workflow on the repository

Click on the Actions tab in the GitHub repository.

GitHub Actions offers a choice of several starter workflows based on programming languages, tools, technologies, and deployment options. The workflow setup prepares a YAML file at the end.

For this article, I have chosen “Skip this and set up a workflow yourself” in order to use an existing YAML file.

If you want to follow along, do the same. I named my file main.yaml. Copy the workflow code into your YAML file.
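A representative workflow looks roughly like this (the action versions and job names are illustrative, and it assumes a standard Cypress project with the Applitools Eyes SDK already configured):

name: Cypress visual tests with Applitools

on: [push, pull_request]

jobs:
  cypress-run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Installs dependencies and runs the Cypress tests.
      - name: Run Cypress tests
        uses: cypress-io/github-action@v2
        env:
          # Applitools API key stored as a GitHub secret (see below).
          APPLITOOLS_API_KEY: ${{ secrets.APPLITOOLS_API_KEY }}
          # The commit SHA is passed as the Applitools batch ID.
          APPLITOOLS_BATCH_ID: ${{ github.sha }}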

For the Applitools integration, we need the Applitools API key, and we pass the Git commit SHA as the Applitools batch ID. In the code example above, the Applitools API key is stored as a GitHub secret and passed as an environment variable in the GitHub Actions workflow.

To create a GitHub secret in the repo, follow these steps:

  • Click “Settings” on the GitHub repository
  • Navigate to “Secrets” in the left menu and click “New Secret”
  • Enter a name and a value. For this article, I have used “APPLITOOLS_API_KEY” as the secret name and given my API key. To obtain an Applitools API key, please visit https://applitools.com/
  • Set the GitHub commit SHA as the Applitools batch ID using environment variables

This is already done in the YAML file above. No changes required.


For more detailed information, see GitHub’s documentation on creating and storing encrypted secrets.

In addition to the above, we need to amend applitools.config.js to include the API key and batch ID, as shown below:
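With the eyes-cypress SDK, applitools.config.js can pick both values up from the environment; the exact keys may differ by SDK version, but the idea is:

// applitools.config.js
module.exports = {
  // Read the API key and batch ID injected by the GitHub Actions workflow.
  apiKey: process.env.APPLITOOLS_API_KEY,
  batchId: process.env.APPLITOOLS_BATCH_ID,
};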

Run the GitHub Actions workflow integrated with Applitools

Now it’s time to run the integration. Here are the steps:

  • Let us create a new branch from master and name it “feature branch”.
  • Make some changes in the feature branch and create a pull request to the master branch.
  • The GitHub Actions workflow detects that changes have been made to the code, automatically kicks off, and runs the Cypress tests integrated with Applitools.
  • Click on “Pull requests” to see the status; you should see “checks are in progress” while the workflow is running.

Let us wait for the workflow to complete. Workflow logs can be viewed while each step in the workflow is running.

  • After the workflow is complete, click on “Pull requests” and open the pull request to look at the commit details and the status of the Applitools visual tests.
  • Review the results from the GitHub Actions workflow integrated with Applitools.

Applitools’ SCM integration shows the results of the visual tests and compares the screenshots between the source and destination branches. If any visual tests have an unresolved status, the comparison with the baseline has differences that need review.

  • Click on the “test/applitools” details to check the results in the Applitools Test Manager dashboard.
  • If there are any unresolved steps, the “test/applitools” status shows that the visual tests detected diffs, and “scm/applitools” has a Pending status.
  • A user can review the results in the Applitools Test Manager dashboard and accept or reject the differences.

Assuming the user has accepted the differences, we should then see a “passed” status for the “test/applitools” step on the pull request / commit screen.

Click on “scm/applitools” to compare and merge screenshots between source and destination branches.


To know more about the SCM integration feature of Applitools for GitHub and Bitbucket, see the Applitools documentation.

A successful build with GitHub Actions integrated with Applitools shows all checks passed.

Key Benefits of this integration

  • The seamless integration of GitHub Actions and Applitools enables developers to test code faster and in parallel, reducing feedback time.
  • Applitools Visual AI tests the UI with less code and zero flakiness, saving developer time and increasing productivity.
  • Increases code quality and confidence to merge or deploy code.

7 Ways To Tidy Up Your Test Code
https://applitools.com/blog/7-ways-to-tidy-up-your-test-code/ (Wed, 15 May 2019)


Test code can get as messy as a network closet.

Your test code is a mess. You’re not quite sure where anything is anymore. The fragility of it is causing your builds to fail. You’re hesitant to make any changes for fear of breaking something else. The bottom line is that your tests do not spark joy, as organizing guru Marie Kondo would say.

What’s worse, you’re not really sure how you got to this point. In the beginning, everything was fine. You even modeled your tests after the ones demonstrated in the online tutorials from your favorite test automation gurus. You were so proud to see these initial tests running successfully.

But you didn’t realize your test code was built upon a rocky foundation. Those tutorials were meant to be a “getting started” guide, void of the architectural complexity that, to be honest, you weren’t quite ready for just yet.

This was never meant to be a template on which to pattern all of your subsequent tests, because as is, it doesn’t scale—it’s not extensible or maintainable.

But how could you know this? Unfortunately, you couldn’t, and now your tests are littered with code smells. A code smell is an implementation that violates fundamental design principles in a way that may slow down development and increase the risk of future bugs.

The code may very well work, but the way that it’s designed can lead to more problems than it solves.

Don’t worry. It’s time for some spring cleaning to tidy up your test code. Here are seven stinky code smells that may be lying around in your UI tests, and cleaning suggestions to get rid of them.


1. Long class

To properly execute your automated test, your code must open your application in a browser, perform the necessary actions on the application, and then verify the resulting state of the application.

While these are all needed for your test, these are different responsibilities, and having all of these responsibilities together within a single class is a code smell. This smell makes it difficult to grasp the entirety of what’s contained within the class, which can lead to redundancy and maintenance challenges.

The formula for cleaning this smell is to separate these concerns. Moving each of these responsibilities into their own respective classes would make the code easier to locate, spot redundancy, and maintain.

2. Long method

The long-method smell has a similar odor to the long-class smell. It’s where a method or function has too many responsibilities. The same symptoms—lack of readability, susceptibility to redundancy, and maintenance difficulty—all surface with this smell.

Many times a method does not start off being too long, but over time it grows and grows, taking on additional responsibilities. This poses an issue when tests want to call into a method to do one thing, but will ultimately have multiple other things executed as well.

The formula for cleaning this smell is the same as with the long-class smell: Separate the concerns, but this time into individual methods that have a single focus.

3. Duplicate code

For test automation, it’s especially easy to find yourself with the same code over and over. Steps such as logging in and navigating through common areas of the application are naturally a part of most scenarios. However, repeating this same logic multiple times in various places is a code smell.

Should there be a change in the application within this flow, you’ll have to hunt down all the places where this logic is duplicated within your test code and update each one. This takes too much development time and also poses the risk of you missing a spot, therefore introducing a bug in your test code.

To remove this code smell, abstract out common code into their own methods. This way the code exists only in one place, can be easily updated when needed, and can simply be called by methods that require it.

4. Flaky locator strategy

A key component to making UI automation work is to provide your tool with identifiers to the elements that you’d like it to find and interact with. Using flaky locators—ones that are not durable—is an awful code smell.

Flaky locators are typically ones provided by a browser in the form of a “Copy XPath” option, or something similar. The resulting locator is usually one with a long ancestry tree and indices (e.g., /html/body/div/div/div/div/div[2]/label/span[2]).

While this may work at the time of capture, it is extremely fragile and susceptible to any changes made to the page structure. Therefore, it’s useless for automation purposes. It’s also a leading cause of false negative test results.

To rid your test code of the flaky-locator smell, use reliable locators such as id or name. For cases where more is needed—such as with CSS selectors or XPath—craft your element locators with care.
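For example, compare a copied XPath with locators that target stable attributes (the element IDs and selectors here are hypothetical):

import org.openqa.selenium.By;

public class LoginLocators {

    // Fragile: breaks as soon as the page structure changes.
    static final By USERNAME_FRAGILE =
            By.xpath("/html/body/div/div/div/div/div[2]/label/span[2]");

    // Durable: tied to a stable attribute on the element itself.
    static final By USERNAME_DURABLE = By.id("username");

    // When an id isn't available, a targeted CSS selector is still far
    // better than a long ancestry chain.
    static final By SUBMIT_BUTTON = By.cssSelector("form#login button[type='submit']");
}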

5. Indecent exposure

Classes and methods should not expose their internals unless there’s a good reason to do so. In test automation code bases, we explicitly separate our tests from our framework. It’s important to stay true to this by hiding internal framework code from our test layer.

It’s a code smell to expose class fields such as web elements used in Page Object classes, or class methods that return web elements. Doing so enables test code to access these DOM-specific items, which is not its responsibility.

To rid your code of this smell, narrow the scope of your framework code by adjusting the access modifiers. Make all necessary class fields private or protected, and do the same for methods that return web elements.

6. Inefficient waits

Automated code moves a lot faster than we do as humans, which means that sometimes we have to slow the code down. For example, after clicking a button, the application under test may need time to process the action, whereas the test code is ready to take its next step. To account for this, you’ll need to add pauses, or waits, to your test code.

However, adding a hard-coded wait that tells the test to pause for x number of seconds is a code smell. It slows down your overall test execution time, and different environments may require different wait times. So you end up using the greatest common denominator that will work for all environments, thereby making your tests slower than they need to be in most cases.

To clean up this smell, consider using conditional waiting techniques. Many browser automation libraries, such as Selenium WebDriver, have methods of providing responsible ways to wait for certain conditions to occur before continuing with execution. Such techniques wait only the amount of time that is needed, nothing more or less.
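For instance, with Selenium 4’s WebDriverWait an explicit wait replaces a hard-coded sleep (the timeout and locator are illustrative):

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class Waits {

    public WebElement waitForResult(WebDriver driver) {
        // Instead of Thread.sleep(5000): wait only as long as the element
        // actually takes to appear, up to a 10-second ceiling.
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("result")));
    }
}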

7. Multiple points of failure

Again, separating test code from the framework is done to ensure that classes are focused on their responsibility and nothing more. Just as the test code should not be concerned with the internal details of manipulating the browser, the framework code should not be concerned with failing a test.

Adding assertions within your framework code is a code smell. It is not this code’s responsibility to determine the fate of a test. By doing so, it limits the reusability of the framework code for both negative and positive scenarios.

To rid your code of this smell, remove all assertions from your framework code. Allow your test code to query your framework code for state, and then make the judgment call of pass or fail within the test itself.
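A small sketch of that separation (the page class, locator, and expected value are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.testng.Assert;

// Framework layer: exposes state, never passes judgment.
class AccountPage {
    private final WebDriver driver;

    AccountPage(WebDriver driver) {
        this.driver = driver;
    }

    double getBalance() {
        String text = driver.findElement(By.id("balance")).getText();
        return Double.parseDouble(text.replace(",", ""));
    }
}

// Test layer: queries the framework and decides pass or fail.
class BalanceTest {
    void balanceShouldMatchLedger(WebDriver driver) {
        double balance = new AccountPage(driver).getBalance();
        Assert.assertEquals(balance, 6000.0, "Unexpected account balance");
    }
}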

Clean Is Best.


Smelly test code does not spark joy. In fact, it is a huge pain point for development teams. Set aside some time to tidy up your test automation by sniffing out these code smells and getting rid of them once and for all!

The original version of this post first appeared on TechBeacon.

 

Find Out More About Applitools

Find out more about Applitools. Set up a live demo with us, or if you’re the do-it-yourself type, sign up for a free Applitools account and follow one of our tutorials.

To read more, check out some of these articles and posts:

  1. How to Do Visual Regression Testing with Selenium by Dave Haeffner
  2. The ROI of Visual Testing by Justin Rohrman
  3. Webinar: DevOps & Quality in The Era Of CI-CD: An Inside Look At How Microsoft Does It with Abel Wong of Microsoft Azure DevOps
  4. How to Run 372 Cross Browser Tests In Under 3 Minutes by Jonah Stiennon
  5. Applitools Blogs about Visual Regression Testing

 

The post 7 Ways To Tidy Up Your Test Code appeared first on Automated Visual Testing | Applitools.

]]>
7 Production Testing Pitfalls https://applitools.com/blog/7-pitfalls-production-testing/ Wed, 08 May 2019 20:12:24 +0000 https://applitools.com/blog/?p=4786 This list of pitfalls to avoid when testing in production serve as my notes on Amber Race’s Webinar: The Joy of Testing in Production If you don’t already know her,...

The post 7 Production Testing Pitfalls appeared first on Automated Visual Testing | Applitools.

]]>

This list of pitfalls to avoid when testing in production serves as my notes on Amber Race’s webinar: The Joy of Testing in Production.

If you don’t already know her, Amber Race is a rockstar. She’s a senior software development engineer in testing (SDET) at Big Fish Games. Through her amazing experiences, she continues to gain wisdom on a subject often thought of as taboo in the world of software development: testing in production.

Now, some people think that testing in production is bad, which results in this common meme:

The famous testing in production meme.

In a recent webinar, Amber presented why testing in production is actually a good thing. These are seven key takeaways from her webinar.

But first, some background: BigFish uses API-delivered services to power its games. Amber knows that real-world use can create test conditions she could never recreate in her lab. This is why it’s essential to test in production — as well as in development, of course. Also, monitoring in production has helped her team pinpoint areas of inefficiency that ultimately led to better results.

Pitfall 1 – Willfully Disputing the Need for Testing in Production

The first pitfall to avoid when testing in production is ignoring the need. Amber noted that there is still a hope that testing ends once code reaches production. “There’s a traditional view that once you put your stuff out in the world, you’re done. But, of course, you’re not done because this is the time when everybody starts to use it. It’s really just the beginning of your work when you’re releasing to production.”


There will always be one that gets away…

Every time a customer opens an app or a webpage, they are testing to see if it opens properly, is responsive, etc. Amber notes that even if you *think* you are not testing in production, you already are: “Instead of thinking that testing and production is something that you should be doing, you should be thinking that it’s something that’s already happening every moment your application is being used.”

Every application depends on behaviors and environments that may fail when exercised. And, yet, the complexity of applications demands that they be tested “sufficiently” for behavior with a realistic understanding that testing will not cover every real-world use case.


The pitfall here is being dogmatic about correlating production outcomes with how well your QA team does their job. All unit and functional tests must be completed and show that the application works according to a set of environmental metrics. The team needs confidence that the design works as expected in hypothetical customer environments. That’s not the same as knowing the real world.

Pitfall 2 – Overscoping Tests Needed In Your Sandbox

Sandbox testing can give you a false sense of security. It can also consume lots of resources while providing little incremental value. Amber called this out as the next pitfall to avoid when testing in production.

Testing within a “sandbox” doesn’t fully address the complexity of the number of environments, devices, browsers, operating systems, languages, etc. that exist. Amber exemplifies this point: “I counted up on one of our devices and we had in a certain snapshot of time 50 different Android devices. Forty-five different iOS devices, the combination of the device itself and the iOS version. So there are almost 100 different kinds of devices alone. And, that’s not even considering all the network conditions – whether people are playing on a fast Wi-Fi network, or they’re on 3G or they’re on 4G, or they have some spotty connection that’s in and out. You cannot possibly cover all the client combinations in a sandbox.”


If you are thinking the cloud might help you better replicate a production testing environment, think again, Amber says: “Even if you’re able to replicate your production environment in the cloud, does your test user base match production? For example, in the case of the service we provide, there’s a constant load happening of a quarter of a million players every day playing our games constantly happening in the background. We can’t replicate that in a test environment.”

So the pitfall to avoid is thinking that the breadth of your sandbox testing covers all your functional and behavioral environments. Instead, ensure your coverage at the sandbox level exercises your code under load and one or more hypothetical production environments. To be clear, those must pass. But, your tests will never represent the array of environmental and load factors that will impact the behavior of the production application. If you are still expecting you will catch all issues in QA, you need to let go of that idea.

Pitfall 3 – Not building production monitoring

Amber pointed out that another pitfall to avoid in production testing involves proper tooling. Application developers need the right tools for production monitoring.

She talked about one API that her company was using on one of its largest production games, and this API was using a non-standard interface. It wasn’t JSON or SOAP – it was BSON. There weren’t any standard monitoring tools for this API, and the service’s behavior was jumping all over the place. There were failures in production that could not be explained by logs or other metrics. So she needed a way to help the team understand what was happening with the service.


She explained that code in test is like practicing your steps preparing for a night out dancing. You work on your moves, you look good, you check yourself out in a mirror. But once you get to the club, the environment turns into “a giant mosh pit of users doing all kinds of combinations and all kinds of configurations you would never ever think of.” What is the solution? Building monitoring into production.

Here are some top practices Amber suggests:

  • Monitor your areas of interest. Start small: “You can just start with one thing. Like, add counters onto your API usage and calculate the API usage patterns you observe.” (A minimal sketch of this first step appears after this list.)
  • Take it to the next level. Amber would calculate the usage by API and figure out API dependencies by determining the order in which they were called. “Just by observing what’s really happening you can prioritize your testing. You increase your knowledge of the software.”
  • Know which dependencies are in your control. “Once you have dependency information you can add more monitoring around what’s going on. For example, if you have an authentication solution and that is calling a third-party service you might start by looking at how long authentication takes.”
  • Be aware that monitoring has costs. All monitoring will consume resources that require processing, memory, and network.
  • And, to get to some metrics, monitoring may need to be built into the code at the development or the operational level.
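Picking up on the first bullet above, here is a minimal sketch of counting API calls in Java, assuming a hypothetical in-process MetricsClient; a real setup would forward these values to StatsD, Graphite, or a similar backend.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    // Hypothetical in-process metrics client; a real implementation would ship
    // these counts to StatsD, Graphite, or whichever backend your Ops team uses.
    public class MetricsClient {
        private final ConcurrentHashMap<String, LongAdder> counters = new ConcurrentHashMap<>();

        public void increment(String name) {
            counters.computeIfAbsent(name, k -> new LongAdder()).increment();
        }

        public long count(String name) {
            LongAdder adder = counters.get(name);
            return adder == null ? 0 : adder.sum();
        }
    }

    // Usage inside an API handler: count every call so usage patterns emerge.
    //   metrics.increment("api.login.calls");
    //   metrics.increment("api.inventory.calls");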

Pitfall 4 – Not Knowing the Monitoring Tools You Can Install

Yet another pitfall to avoid is not knowing the tools that can store production metrics. The easiest type of metric storage is a log file. Amber discussed that one of the most well-known log file search engines is Splunk. Another is the ELK (ElasticSearch, Logstash & Kibana) stack. The value of any of these tools depends on the metrics you need being found in the log files.


The next type of metric storage is a database. Graphite is a tool for storing and analyzing stats over time, and StatsD and InfluxDB are other common tools for collecting and storing metrics data. Know that this kind of monitoring will require additional coding.

Finally, Amber discussed both Grafana and Kibana as visualization tools. These are all tools that can be either purchased commercially or, in some cases, downloaded as freeware.

At BigFish, Amber and her team used Graphite and Grafana for API behavior visualization. An example of this work is below:

[Screenshot: a Grafana dashboard visualizing API metrics collected in Graphite]

Bigfish chose Graphite and Grafana for the following reasons:

  1. The stack was already in use by Ops. “Always ask the Ops team what they’re using,” says Amber.
  2. Instrumenting the code was easier than fixing the log writers
  3. There is a lot of flexibility in the Graphite query language
  4. Templates make Grafana really useful.
  5. “Pretty graphs are nice to look at.”

Again, the key is to look at the ability to add tools that are easiest, instrument the right metrics, and begin to monitor.

Obviously, you need to know what to monitor and how best to show the values that matter. For example, if you can install a timer for accessing a critical API, make sure you can graph the response time to show worst cases, statistically. Graphite can show you the statistics for that metric. The 95th percentile value will show you how the slowest 5% of your users experience the behavior, and the p999 latency will show you what the slowest 0.1% of your users are seeing.

Realize that 0.1% is 1 out of every thousand. If you only have 1,000 users, you might think you can live with one user having this experience. If you have a million users, can you live with 1,000 having this experience? And, if you have 1,000,000 users and they, on average, call this API 10 times during a typical session, they have more like a 1 in 100 chance of hitting this worst-case behavior. Can your business function if 10,000 customers get this worst-case behavior?
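To make the arithmetic concrete, here is a small, self-contained sketch of what p95 and p999 mean, using the nearest-rank method and made-up latency samples; Graphite computes these statistics for you in practice.

    import java.util.Arrays;

    public class LatencyPercentiles {
        // Returns the sample value below which the given fraction of samples fall.
        static long percentile(long[] latenciesMs, double fraction) {
            long[] sorted = latenciesMs.clone();
            Arrays.sort(sorted);
            int index = (int) Math.ceil(fraction * sorted.length) - 1;
            return sorted[Math.max(index, 0)];
        }

        public static void main(String[] args) {
            long[] samples = {12, 15, 11, 900, 14, 13, 16, 18, 10, 2500}; // hypothetical ms values
            System.out.println("p95  = " + percentile(samples, 0.95) + " ms");
            System.out.println("p999 = " + percentile(samples, 0.999) + " ms");
        }
    }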

The key is to know what you are measuring and tie it back to customers and customer behavior.

Pitfall 5 – Not Learning From Monitoring

When engineers insert production monitoring, said Amber, they may think they are done. This is the next pitfall to avoid when testing in production: failure to learn from monitoring metrics.

The key to learning from monitoring is knowing what is important to observe. For instance, when you want to observe performance, you need to monitor the latency of calls and their frequency. When you want to drive efficiency, you can look for frequently invoked calls (such as error-handling routines) that imply your code requires underlying improvements.

After implementing their own monitoring tools and processes, Big Fish Games was able to find concrete ways to improve their system. One of the big takeaways was discovering API calls that were generating errors, and then learning that those errors did not result in user problems.

Amber noted: ”We would have calls that the game was making that failed every time. The calls failed because maybe it was an older client or we had changed the API. Regardless, it was an invalid call. So that’s a waste of time for everybody, it’s a waste of time for our service. It’s a waste of time for the client. Obviously, this call was not necessary because nobody had complained about it before. So having them remove those calls it’s saving traffic on our site. It’s saving network traffic for the player. It’s savings for everybody to get rid of these things or to fix them.”

Additionally, Amber and her team found that certain API calls caused obvious load issues on the database.

“We had another instance where the game was calling the database every 30 seconds. And it turned out 95 percent of the time the user had no items because the items in question were only given out by customer service. So we’re making all of these database requests and most of the time there’s no data at all. We were able to put some caching in place, where we knew they didn’t have data we didn’t hit the database anymore and customer service would then clear that cache. The point is that this change alone caused a 30 percent drop in load on the database.”
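The fix Amber describes amounts to caching the negative result. A rough sketch of that pattern, with hypothetical type and method names, might look like this:

    import java.util.Collections;
    import java.util.List;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical types standing in for the real game services.
    interface Item {}
    interface ItemDatabase { List<Item> loadItems(String playerId); }

    public class PlayerItemService {
        // Players known to have no items; cleared when customer service grants one.
        private final Set<String> knownEmpty = ConcurrentHashMap.newKeySet();
        private final ItemDatabase database;

        public PlayerItemService(ItemDatabase database) { this.database = database; }

        public List<Item> itemsFor(String playerId) {
            if (knownEmpty.contains(playerId)) {
                return Collections.emptyList();   // skip the database entirely
            }
            List<Item> items = database.loadItems(playerId);
            if (items.isEmpty()) {
                knownEmpty.add(playerId);         // remember the negative result
            }
            return items;
        }

        // Called by the customer-service tool when it grants an item.
        public void invalidate(String playerId) {
            knownEmpty.remove(playerId);
        }
    }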

Pitfall 6 – Becoming Monitoring Heavy And Forgetting Value

The sixth pitfall to avoid in production testing involves failing to build high value, actionable monitoring solutions.

Once you have metrics, it’s tempting to begin making dashboards that show those metrics in action. However, don’t just dashboard for the sake of dashboarding, Amber advised. Instead, continue to make sure that the dashboards are meaningful to the operation of your application.

Make sure you are measuring the right things: “Always re-evaluate your metrics to make sure you are collecting the data that you want to see and your metrics are providing value.”


Amber explained a critical difference between monitoring and observability: “The difference between monitoring and observability is that with monitoring you might just be getting load average but you don’t know exactly what went into that average, whereas when something is completely observable you can identify and inspect specific calls that are happening within that moment of high-performance issues.”

The key, she concluded, is to value metrics and dashboards that let you act and not just wonder.

Pitfall 7 – Not Being Proactive about Performance, Feature, and Chaos Testing

Once you have instrumented your production code, you can use production as a test bed for new development. Amber noted that many companies are wary about running production as a testbed. That fear can lead to the last pitfall to avoid when testing in production – not using your production testbed proactively.

No one is suggesting to use production as a place to test a new user interface or graphics package. Still, especially for new services, production can become a great place to try out new code while real-world users are applying load and changing data.

Being able to do performance testing in production is key, said Amber: “The number one best thing about testing in production is doing performance testing in production so you can test before your clients are updated to use a new API. You can put it out on your production service and run a load test on that particular API while you have all the background noise going and you don’t have to guess.”

She shared her thoughts on the benefits of feature testing in production: “[With feature testing] you have the real hardware, the real traffic, and the real user experience is happening just in a box. If it’s something you can turn on and off easily then that makes for a very useful test.”

Even chaos testing belongs in production, Amber explained. “Chaos testing is good when you want to see what happens if a particular host goes down or a particular database is offline. With these sorts of network outage conditions, it’s important to see if your flows are working the way they should. You want to test these in a place where you have control rather than letting the real chaos take over. Who wants to be woken after midnight to figure out what’s going on?”

Final Food For Thought

Given that you’re paying attention and not falling into the pitfalls, get ready to add production monitoring and testing to your arsenal of application tests.

Here are a few more musings from Amber that will get you pumped to start testing in production:

  • “By observing you can look for new things to look for, and you can do this even without a so-called observability solution. Taking in this information is really important for baking more quality into your services and understanding your services and applications better.”
  • “The point is not to break things, it is to find out where things are broken. That’s why we’re doing the monitoring in production. That’s why we’re doing this testing: we want to see what’s happening with our actual users and where their issues actually are, and hopefully alleviate them.”
  • “Most importantly, explore without fear.”


To capture all of Amber’s inspiration on monitoring and testing in production, watch a webinar of her full presentation, “The Joy of Testing in Production”:

https://youtube.com/watch?v=8-ymeVdNxSE%3Frel%3D0%26showinfo%3D0

To visually test your apps before they go into production, check out our tutorials. Or, let us know if you’d like a demo of Applitools.

Other Posts that might interest you include:

  1. Test Your Service APIs – A review of Amber’s course on Test Automation University
  2. Automating your API tests with REST Assured – TAU Course by Bret Dijkstra
  3. Running Tests using REST API – Helpdesk article by Yarden Naveh
  4. Challenges of Testing Responsive Design – and Possible Solutions – by Justin Rohrman

The post 7 Production Testing Pitfalls appeared first on Automated Visual Testing | Applitools.

]]>
What Is Front End Testing in 2019? https://applitools.com/blog/what-is-front-end-testing-in-2019/ Tue, 09 Apr 2019 22:34:32 +0000 https://applitools.com/blog/?p=4578 What do you think are best practices for front end testing? On April 3, 2019, Adam Carmi, our CTO and Co-Founder, joined a panel discussed this topic on a This...

The post What Is Front End Testing in 2019? appeared first on Automated Visual Testing | Applitools.

]]>

What do you think are best practices for front end testing? On April 3, 2019, Adam Carmi, our CTO and Co-Founder, joined a panel that discussed this topic on a This Dot Media webinar titled “This.Javascript: State of Front End Testing.” Moderated by Tracy Lee of This Dot, the panel included:

  • Guillermo Rauch – Founder & CEO, Zeit (@rauchg, @zeithq)
  • Gleb Bahmutov – VP of Engineering, Cypress (@bahmutov, @cypress_io)
  • Simen Bekkhus – Developer, Folio (@SBekkhus, @folio, @fbjest)
  • Adam Carmi – Co-Founder and CTO, Applitools (@carmiadam, @applitools)
  • Kevin Lamping – Developer, Test Automation (@klamping)
  • Vikram Subramanian – Googler, Angular Universal (@vikerman)
  • Tracy Lee – Founder, This Dot & RxJS Core Team (@ladyleet, @thisdotmedia)

The panel covered a broad range of test issues. Adam began his segment by zeroing in on a simple problem: how small changes can have huge impacts on front end behavior. Specifically, how can developers account for changes in their CSS?

CSS changes can affect the look and feel of a set of pages, leaving them visually problematic for users. With an existing CSS, developers can create functional tests around expected behaviors and visualizations. But CSS changes can introduce unknown or unexpected changes that impact functional behavior.

How Do Front End Developers Test?

Adam started out by asking how most front end developers handle visual testing. He acknowledged the reality that most frontend developers don’t do testing, and those that do usually run unit tests. Some developers create functional UI tests with tools like Cypress, Webdriver.IO, Selenium, and others.

Adam gave an example of a typical login page test. The page requires that a user provide both an email and a password. If either or both are missing, the page should respond with “Required” to indicate that content is missing.

This kind of behavior is easy to test in Cypress. Adam showed a test that instructs Cypress to exercise the page and ensure that the screen shows the “Required” error for each box.

Functional Tests Miss UI Changes

After walking through the functional test approach, Adam asked,

“What happens if another developer on our team, or one on a different team, changes the CSS… “

What happens when the change unintentionally modifies pages with an existing functional test? The test would still pass because the functional elements haven’t changed. If we don’t have other means to verify how the UI looks, this change might go unnoticed and get pushed back to production.

Why Automated Visual UI Testing Matters

Then, Adam discussed the range of problems that result in visual UI issues. He said that the typical web page,

“… can have hundreds of elements and each element has dozens of attributes that affect the way that it is rendered.”

“How can we make sure that the application looks right and renders correctly on all the different form factors and viewport sizes that affect its layout? Especially when each browser uses a different layout engine, and each has a slightly different interpretation of the same DOM and CSS? Even if we don’t change a line of code, a browser version update might be incompatible with the application.”

Then, Adam talked about visual UI testing and how Applitools addresses these issues.

“The answer,” he said, “is visual testing and the way that you do it with Applitools. Basically, you just add a visual assertion, or a visual checkpoint, as we call it, which validates the entire window with one line of code. It captures a screenshot of the page, or a component (if you want to test a specific component) and compares that with a baseline image that defines the expected appearance of the page, and if there is any difference it fails the test.”
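For reference, a visual checkpoint in the Applitools Eyes Selenium Java SDK looks roughly like the sketch below; the app name, test name, and URL are placeholders.

    import com.applitools.eyes.selenium.Eyes;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class VisualLoginCheck {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            Eyes eyes = new Eyes();
            eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
            try {
                // open() starts a visual test; the names here are placeholders.
                WebDriver eyesDriver = eyes.open(driver, "Demo App", "Login page renders correctly");
                eyesDriver.get("https://example.com/login");   // hypothetical URL

                eyes.checkWindow("Login page");   // the one-line visual checkpoint
                eyes.close();                     // fails the test if a difference is found
            } finally {
                eyes.abortIfNotClosed();
                driver.quit();
            }
        }
    }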

Ultrafast Grid

Next, Adam described how Applitools new Ultrafast Grid lets engineers automatically run their tests on a range of target environments and devices.

“What we do is basically capture all the resources that are used by the browser to render the page locally. We upload the differences to our Ultrafast Grid, which is where we host our own devices and browsers. And then we simultaneously render the same resources on all these different combinations of environment details. You can run hundreds of visual tests in less than a minute – super powerful and super cool.”

Adam then demonstrated Applitools Eyes in action.

He noted that when Applitools Eyes detects a difference, you can see the difference highlighted on the screen. The dashboard lets you compare the checkpoint and baseline image. You can toggle between the two images and clearly see what changed. Applitools Eyes is simple enough to use that anyone on your team who knows your application can run these tests.

Root Cause Analysis

Adam then showed the Applitools Eyes Root Cause Analysis option.

“Beyond just seeing what’s changed, we can show you what caused the change,” he said. “For example, if we click on a difference, Root Cause Analysis highlights the code difference between the checkpoint and the baseline. Root Cause Analysis highlights changes in CSS that affect color or text case, and can even indicate misbehavior on a specific platform. Root Cause Analysis saves a lot of time by presenting the code differences, rather than requiring the developer to dig into the code.”

Ease of Use

Next, Adam spoke about ease of use.

“We have plenty of features in the product that allow you to scale up your tests without increasing your maintenance overhead,” he said. Maintaining a baseline is very simple – there is a workflow for accepting new screenshots. And, Applitools Eyes can isolate errors to a single root cause, so an error touching hundreds of pages will show up as one error.

Finally, Adam spoke about Applitools integration with development workflows and continuous integration tools.

“The last thing I’ll mention is that visual testing needs to integrate with your day-to-day workflow. We integrate with bug trackers and with continuous integration systems like Jenkins, Travis CI, and many others. We also integrate with code repositories like Bitbucket and GitHub. You can quickly view, initiate, and maintain visual test results directly from your pull requests.”

See The Whole Webinar

We have linked the entire webinar here. Enjoy!

Find out more about Applitools. Set up a live demo with us, or if you’re the do-it-yourself type, sign up for a free Applitools account and follow one of our tutorials.

The post What Is Front End Testing in 2019? appeared first on Automated Visual Testing | Applitools.

]]>
Integrate test results into Slack [step-by-step tutorial] https://applitools.com/blog/integrate-test-results-slack/ Thu, 04 Apr 2019 18:45:13 +0000 https://applitools.com/blog/?p=4559 Update: We have recently released a new, native Slack integration. For additional details please visit our step by step guide for Applitools & Slack integration. One of the great things...

The post Integrate test results into Slack [step-by-step tutorial] appeared first on Automated Visual Testing | Applitools.

]]>

Update: We have recently released a new, native Slack integration. For additional details please visit our step by step guide for Applitools & Slack integration.

One of the great things about Slack is the long list of integrations available. Whether it’s an integration as foundational as Google Drive, GitHub, or Zendesk — or as whimsical as Lunch Train — there are Slack integrations for a wide range of use cases.

Here’s an example of Slack’s GitHub integration:

[Screenshot: a GitHub notification posted to a Slack channel]

With that backdrop, we’d like to show you how to integrate your Applitools Eyes visual UI test results into one of your Slack channels — something that our customers have frequently requested.

This step-by-step tutorial uses the Slack incoming webhooks API.

First, some background:

Slack provides APIs to users to create applications and to automate processes, such as sending automatic notifications based on human input, sending alerts on specified conditions, and more. The Slack API has been noted for its compatibility with many types of applications, frameworks, and services.

Once you build this tutorial, you’ll be able to view test results on any Slack client: your laptop, your phone. Or even via notifications on your Apple Watch, if you’re stuck in a meeting and don’t want to open your phone. You’ll be the first to know if your app has a visual glitch that you’ll need to fix.

With that, let’s dive into what you need to do to bring Applitools into your Slack channel:

Step 1: Set up your Slack endpoint

  1. Per Slack recommendations, create a “sandbox” account for Slack for your experiments before integrating it with your team Slack account: https://slack.com/create
  2. Follow these instructions to see how you post your first “Hello world” message to your new Slack channel: https://api.slack.com/tutorials/slack-apps-hello-world
  3. Copy and keep your webhook URL (always better to add this webhook URL to your environment variables, but I’ll leave it to you).


Step 2: Getting Applitools Eyes Test Results

  1. Switch to whatever IDE you use to write automated test scripts that call the Applitools API, and open one of those scripts.
  2. Towards the end of your test scripts, you should have a call to eyes.Close(). Here’s its documentation for our Java, JavaScript, C#, and Python SDKs. For scripts written in Java, it will look something like this:
    TestResults res = eyes.close(false);
  3. This call returns a TestResults object. Additional information about Applitools TestResults can be found here.
    
    

Step 3: Connecting The Dots

The example below is in Java; however, you can figure out how to achieve this in any language you are using (with minor differences), as the TestResults object is also accessible in JavaScript, Python, Ruby, C#, and PHP. I’ve followed the instructions in this article to create the code below, but feel free to customize it to your team’s specific needs.

1. Create class EyesSlack and add the following code in it:

View the code on Gist.
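The original gist is not reproduced here, but a minimal sketch of what such a class might do is below: it formats the TestResults and POSTs them to the webhook URL. The message format, the environment-variable name, and the exact TestResults getters are assumptions and may vary by SDK version.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import com.applitools.eyes.TestResults;

    public class EyesSlack {
        // Webhook URL kept in an environment variable, as suggested in Step 1.
        private static final String WEBHOOK_URL = System.getenv("SLACK_WEBHOOK_URL");

        public static void sendTestResults(String testName, TestResults results) throws Exception {
            String text = String.format(
                    "Applitools results for *%s*: %d steps, %d matches, %d mismatches. Details: %s",
                    testName, results.getSteps(), results.getMatches(),
                    results.getMismatches(), results.getUrl());

            // Slack incoming webhooks accept a simple {"text": "..."} JSON payload.
            String payload = "{\"text\": \"" + text.replace("\"", "\\\"") + "\"}";

            HttpRequest request = HttpRequest.newBuilder(URI.create(WEBHOOK_URL))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        }
    }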

2. Now let’s see how we use this class in our tests:

View the code on Gist.
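Again, the gist itself is not shown here, but usage in a test could be as simple as this sketch (the test name is hypothetical):

    import com.applitools.eyes.TestResults;
    import com.applitools.eyes.selenium.Eyes;

    public class SlackReportingExample {
        // Call this at the end of a test, after the visual checkpoints have run.
        static void reportToSlack(Eyes eyes) throws Exception {
            // Passing false makes close() return results instead of throwing on a
            // mismatch, so the Slack message goes out either way.
            TestResults results = eyes.close(false);
            EyesSlack.sendTestResults("Login page test", results); // hypothetical test name
        }
    }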

3. All you need to do now is run it to see your Applitools Eyes Test Results right in your Slack channel.

Here is an example of the screen message in Slack:

[Screenshot: Applitools test results message posted in a Slack channel]

If you want to see my own experience putting together this blog post, check out this video:

Now that you have tested it in your sandbox, you are ready to merge it into your team Slack account: simply replace the webhook URL and you are good to go.

Looking for additional integrations with Applitools? Here are my favorites:

The post Integrate test results into Slack [step-by-step tutorial] appeared first on Automated Visual Testing | Applitools.

]]>
The Joy of Testing in Production – with expert Amber Race https://applitools.com/blog/testing-in-production-amber-race/ Mon, 01 Apr 2019 13:00:10 +0000 https://applitools.com/blog/?p=4492 “Watch this deep-dive session, where I demonstrate multiple methods you can use for collecting production data, as well as give real-life examples of how production data can help you find...

The post The Joy of Testing in Production – with expert Amber Race appeared first on Automated Visual Testing | Applitools.

]]>
Amber Race – Sr. SDET @ Big Fish Games

“Watch this deep-dive session, where I demonstrate multiple methods you can use for collecting production data, as well as give real-life examples of how production data can help you find bugs, and even enable running tests in production.” — Amber Race

Our software doesn’t run in a test environment with mock users — it runs out in the real world, being used by thousands or even millions of real people, with thousands of variations of devices, screen sizes, and operating systems.

Learn why you need to be collecting this data — and how you can make it actionable for continuous testing, learning and improving, as well as how it can help you make your own services more transparent and user-friendly to your customers.

Watch this webinar by expert Amber Race — Sr. SDET @ Big Fish Games — and learn:

  1. What monitoring and logging tools are available
  2. How to decide what to track
  3. Ways to create meaningful reports with monitoring results
  4. How to use production data to create more realistic tests
  5. Strategies for using real-time monitoring to run tests in production

 

Amber’s Slide Deck:

 

Full webinar recording: 

 

Additional reading and recommended resources:
  1. 16 reasons why to use Selenium IDE in 2019 (and 2 why not) – post by Al Sargent 
  2. Which programming language is most popular for UI test automation in 2019? – post by Angie Jones 
  3. The New Selenium IDE: See How It Can Turbo-charge Your Testing Efforts – webinar with Dave Haeffner and Tomer Steinfeld
  4. Collaborating with Developers: How-to Guide for QA Pros & Test Engineers – webinar by Gil Tayar
  5. Confident React – Kent C. Dodds Speaks about Frontend Testing – webinar with Kent C. Dodds
  6. Release Apps with Flawless UI: Open your Free Applitools Account, and start visual testing today.
  7. Test Automation U — the most-talked-about test automation initiative of the year: online education platform led by Angie Jones, offering free test automation courses by industry leaders. Sign up and start showing off your test automation certificates and badges!
— HAPPY TESTING — 

 

The post The Joy of Testing in Production – with expert Amber Race appeared first on Automated Visual Testing | Applitools.

]]>
Test Your Metal: Drawing Parallels Between Testing Concepts and Heavy Metal Music https://applitools.com/blog/testing-concepts-webinar/ Sun, 17 Mar 2019 11:07:09 +0000 https://applitools.com/blog/?p=4411 Unleash your inner rock-star tester, and listen to this hard-rockin’ session with expert Paul Grizzaffi, covering the most important testing concepts and test automation best practices. Paul Grizzaffi is not...

The post Test Your Metal: Drawing Parallels Between Testing Concepts and Heavy Metal Music appeared first on Automated Visual Testing | Applitools.

]]>
Paul Grizzaffi

Unleash your inner rock-star tester, and listen to this hard-rockin’ session with expert Paul Grizzaffi, covering the most important testing concepts and test automation best practices.

Paul Grizzaffi is not only a senior software engineer, test automation expert, and well-known industry thought leader; he is also a self-proclaimed metal-head and rock-a-holic, one of those people who is greatly affected by music.

Listen to this session, where Paul gave us a VIP ticket to his world tour, covering iconic heavy metal songs that frame memorable messages about testing and automation — worth remembering.

Among the featured testing concepts Paul covered: we make our own expectations; automation is programming, not magic; and there are important business aspects related to automation and testing that cannot be ignored.

At the end of the session, Paul took questions from our live audience, covering issues such as “is my job safe, or should I be very worried?”, “how do I make my value noticed in my team and my organization?”, and “what skills do you look for when hiring testers?” – just to name a few.

Full webinar recording:

 

Additional reading and recommended resources: 
— HAPPY TESTING —

 

The post Test Your Metal: Drawing Parallels Between Testing Concepts and Heavy Metal Music appeared first on Automated Visual Testing | Applitools.

]]>
Collaborating with Developers: How-to Guide for QA Pros & Test Engineers https://applitools.com/blog/collaborating-with-frontend-developers/ Thu, 31 Jan 2019 12:10:50 +0000 https://applitools.com/blog/?p=4135 Watch this in-depth session by Gil Tayar about how Test Engineers and QA pros can successfully collaborate with developers. This webinar includes an extensive overview on test methodologies and types...

The post Collaborating with Developers: How-to Guide for QA Pros & Test Engineers appeared first on Automated Visual Testing | Applitools.

]]>
Gil Tayar, Sr. Architect and Evangelist @ Applitools

Watch this in-depth session by Gil Tayar about how Test Engineers and QA pros can successfully collaborate with developers.

This webinar includes an extensive overview of test methodologies and types – especially for frontend testing – plus tips, tricks, and best practices on how to effectively test developer code and how to decipher developer lingo.

The full webinar recording and Gil’s slide-deck are below.

“I will give a recipe that you can follow to ease your fear of the unknown: writing tests for developer code. At the end of this session, I guarantee that you will gain a deeper understanding of different kinds of tests, know how to decipher developer terminology, and learn how to write unit, integration, browser, and E2E tests.” — Gil Tayar, Sr. Architect & Evangelist

Testing is shifting left, moving closer to testing the code itself. But while managers dictate a shift to the left, developers and testers are confused as to how exactly to test the code.

And while the backend world has established code-testing methodologies, we are still trying to figure out how to test frontend code, while ensuring effective testing procedures and processes.

This means testers need to step in and work with the frontend developers, but with an understanding of the frameworks by which frontend code is tested, the various kinds of testing that can be performed on frontend code, and which tools can be used for this.

In this hands-on session, Gil Tayar will demystify the frontend testing process, and guide us on how to work effectively with developers. He discusses various test methodologies, and how they fit together in a coherent way. Gil also includes sample code that you can use as a template in your own project — all in order to provide you with the knowledge and tools to approach and test developer code.

Topics include:

  • Get familiar with different types of tests
  • Understand developer terminology around frontend testing
  • Learn how to write unit tests, integration tests, and browser and E2E tests
  • Get acquainted with the tools of the trade – for testers and for developers

Gil’s full slide deck:

 

Full webinar recording:

Additional reading and recommended links:

— HAPPY TESTING —

 

The post Collaborating with Developers: How-to Guide for QA Pros & Test Engineers appeared first on Automated Visual Testing | Applitools.

]]>
How to upgrade your Frontend Testing in 2019 https://applitools.com/blog/upgrade-frontend-testing-2019/ Thu, 13 Dec 2018 19:23:58 +0000 https://applitools.com/blog/?p=3983 No frontend project can survive without an effective testing strategy. Why? Because frontend projects can be as complex as backend projects — but users still expect a flawless experience. And...

The post How to upgrade your Frontend Testing in 2019 appeared first on Automated Visual Testing | Applitools.

]]>
Speaker lineup for Applitools State of Testing 2019 webinar

No frontend project can survive without an effective testing strategy. Why?

Because frontend projects can be as complex as backend projects — but users still expect a flawless experience. And if those users complain to your management… it’s just bad.

So, more and more frontend developers are realizing that they need to bake automated testing into their development process.

But questions remain:

  • What testing should you do beyond unit testing?
  • How can you find the time to both code and build/run/maintain automated tests?
  • How do you maintain tests when requirements and features are constantly changing — and you’re pushing new code into production daily?
  • How do you convince your boss that testing isn’t a waste of time?

We want to help you out of this.

Please join us at State of Frontend Testing, a free online event where frontend testing experts share how to quickly build high-quality web and mobile apps in 2019. We’ll cover the testing strategies, tools and frameworks you should be using in 2019.

Who’s presenting?

Not a bad group, right?

State of Frontend Testing will stream live on Tuesday, December 18th at 10 am PST / 1 pm EST / 6 pm UK. This is a free, online event open to all developers.

Register here to see the event details and join the livestream.

We hope to see you on the 18th!

What will you do to upgrade your frontend testing in 2019?

The post How to upgrade your Frontend Testing in 2019 appeared first on Automated Visual Testing | Applitools.

]]>