CICD Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/cicd/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

Test Management within Continuous Integration Pipelines
https://applitools.com/blog/continuous-integration-test-management/
Fri, 23 Apr 2021 15:35:30 +0000
Techniques to manage a large suite of automated tests within a CI environment and yet still get fast feedback.

Once upon a time there were several software development teams that worked on a fairly mature product. Because the product was so mature, there were thousands of automated tests – a mix of unit and web UI tests – that covered the entire product. Any time they wanted to release a new version of the product, which was only a few times a year, they’d run the full suite of automated tests and several of the web UI tests would fail. This was because those tests were only executed during the specified regression testing period, and of course a lot had changed within the product over the course of several months.

The company wanted to be more competitive and release more often, so the various development teams began looking into continuous integration (CI), where the automated tests would run as part of every code check-in. But…there were thousands of tests. Still, the teams had been careful about choosing which tests to automate, so they were fairly confident that all of their tests provided value. So, they ran all of them – as part of every single build.

It didn’t take long for the teams to complain about how much time the builds took to execute. And rightfully so, as one of the benefits they were hoping to realize from CI was fast feedback. They were sold a promise that they’d be able to check in their code and, within only a few minutes, receive feedback on whether their check-in contained breaking changes. However, each build took hours to complete. And once the build was finally done, they’d also need to spend additional time investigating any test failures. This became especially annoying when the failures were in areas of the application that different teams worked on. This didn’t seem like the glorious Continuous Integration they had heard such great things about.

Divide and Conquer

Having a comprehensive test suite is good for covering the entire product, however, it posed quite the challenge for continuous integration builds. The engineers looked at how they themselves were divided up into smaller agile teams and decided to break their test suite up to better reflect this division.

Each area of the product was known as a work zone, and if anyone was working on a particular part of the application, that was considered an active work zone. Areas that were not under active development were considered dormant work zones.

The builds were broken up for the respective work zones. Each build would share common tests such as build verification tests and smoke tests, but the other tests in a given build would be only the ones related to that work zone. For example, the Registration feature of the application was considered a work zone, and therefore there was a build that would only run the tests that were related to Registration. This provided nicely scoped builds with relevant tests and reduced execution time.

In addition to the various work zone builds, there was still the main build with all of the tests, but this build was not used for continuous integration. Instead, it would run periodically throughout the day. This provided information about how changes may have impacted dormant work zones, which did not have active builds running.

Assigning tests to work zones

All web UI tests lived in a common repository, regardless of the specific functional area. This allowed tests to share common utilities. The teams decided to keep this approach and use tagging to indicate which functional area(s) a given test covered. For example, for a test that verified a product listing, this test would be tagged for the “Shopping” work zone. And for a test that adds a product to a cart, this one spanned multiple work zones and was therefore tagged as “Shopping” and “Cart”. Tests that were tagged for multiple work zones would run as part of multiple builds.

To tag the tests, the teams used the annotation feature of their test runner, such as TestNG or JUnit:

import org.testng.annotations.Test;

// Tagged for two work zones, so this test runs in both the Shopping and Cart builds
@Test(groups={WorkArea.SHOPPING, WorkArea.CART})
public void testAddProductToCart()
{
    ...
}

Test runners also typically provide a means to configure which tests run. The teams decided not to keep these configuration files in the code repository, because that did not allow for quick changes; they’d need to check the change in, have it reviewed, and so on. Instead, the configuration was done at the CI job level:

mvn test -Dgroups=cart

With this, if someone was checking in a feature that touched multiple work zones, they could quickly configure the build to pull in tests from all relevant zones. Also, it allowed for teams to change their build’s needs as their sprints changed. For example, the Shopping area may be an active work zone one sprint but a dormant work zone in the next. So, while the builds were focused on a specific work zone, they really were more aligned with the sprint team and their current needs at any given time.
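For example, assuming the Maven Surefire plugin runs the TestNG suite, a build that covers several active work zones could pass a comma-separated list of groups; the group names below are illustrative:

# run the tests for both the Shopping and Cart work zones in one build
mvn test -Dgroups=shopping,cart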

Limitations

While this approach eliminated the complaints of the build being too slow or containing unrelated test failures, there were still limitations.

Bugs can be missed

By reducing the scope of the build, the team was not testing everything, which means unintentional bugs in other work zones could creep in with a check-in. To mitigate this risk, remember, the teams kept the main build, which ran all tests several times a day. Initially they set this to run only once a day but found that wasn’t often enough, so they increased it to run every 6 hours. If this build failed, the failure would be from a check-in made within the last 6 hours, which helped narrow down the problem area.
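In most CI servers, that schedule is a one-line cron trigger on the main build’s job; as a sketch (the exact expression and syntax depend on the CI server):

# run the full-suite main build at minute 0 of every 6th hour
0 */6 * * *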

Also, this system relied heavily on the tests being tagged properly. If someone forgot to tag a test or mis-tagged it, it would not run as part of the appropriate work zone build. Usually such tests were caught by the main build, which gave the team an opportunity to fix the tagging.

Tests must be reliable

The web UI tests were not originally part of continuous integration. Instead, they were run periodically throughout the year (during the dedicated regression testing time) on someone’s local machine. That person would then investigate the failures and could easily dismiss flaky tests that failed without just cause, unbeknownst to the rest of the team.

However, this sort of immaturity is unacceptable when a test needs to run as part of continuous integration. It has to be reliable. So before this new CI process could work flawlessly, the team had to invest time into enhancing the quality of their tests so that they only failed when they were supposed to.

Not every test failure is a show-stopper

The teams went through the very important process of identifying the most valuable tests to automate, which would make you think that if any of those tests failed, the integration should be canceled. This sounds right in theory, but practice turned out differently.

Sometimes tests would fail, the team would investigate, then determine they still wanted to integrate the feature. So, they opened a bug, disabled the test, and integrated the feature.
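With TestNG, disabling a test while keeping it (and its bug reference) in the codebase is a one-attribute change; a minimal sketch, with an illustrative bug ID and test name:

// Disabled pending BUG-1234; the feature was integrated with this known failure
@Test(groups={WorkArea.CART}, enabled=false)
public void testRemoveProductFromCart()
{
    ...
}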

Is this wrong? Why have the test if you’re going to still integrate in the event of a failure?

The team decided that the information was still valuable to them. It told them about the risks they were taking, and they could discuss as a team whether they were willing to introduce the new feature knowing that it broke an existing one. In some cases it was worth it, and they opened bugs to eventually address those failures.

That’s the role of tests: to provide the team with fast feedback so that they can make informed decisions.

Happily Ever After

Preparing for continuous integration certainly took a fair amount of investment. The team learned a valuable lesson: you don’t just jump into CI. A proper testing strategy is needed to ensure you’re able to realize the benefits of CI, namely fast and reliable feedback. After experiencing bumps and bruises along the way, the team finally figured out a system that worked for them.

Get a Jump Into GitHub Actions
https://applitools.com/blog/jump-into-github-actions/
Tue, 02 Mar 2021 16:24:17 +0000

On January 27, 2021, Angie Jones of Applitools hosted Brian Douglas, aka “bdougie”, Staff Developer Advocate at GitHub, for a webinar to help you jump into GitHub Actions. You can watch the entire webinar on YouTube. This blog post goes through the highlights for you.

Introductions

Angie Jones serves as Senior Director of Test Automation University and as Principal Developer Advocate at Applitools. She tweets at @techgirl1908, and her website is https://angiejones.tech.

Brian Douglas serves as the Staff Developer Advocate at GitHub. Insiders know him as the “Beyoncé of GitHub.” He blogs at https://bdougie.live, and tweets as @bdougieYO.

They ran their webinar as a question-and-answer session. Here are some of the key ideas covered.

What Are GitHub Actions?

Angie’s first question asked Brian to jump into GitHub Actions.

Brian explained that GitHub Actions is a feature you can use to automate actions in GitHub. GitHub Actions let you code event-driven automation inside GitHub. You build monitors for events, and when those events occur, they trigger workflows. 

If you’re already storing your code in GitHub, you can use GitHub Actions to automate anything you can access via webhook from GitHub. As a result, you can build and manage all the processes that matter to your code without leaving GitHub. 

Build, Test, Deploy

Next, Angie asked about Build, Test, Deploy as what she hears about most frequently when she hears about GitHub Actions.

Brian mentioned that the term, GitOps, describes the idea that a push to GitHub drives some kind of activity. A user adding a file should initiate other actions based on that file. External software vendors have built their own hooks to drive things like continuous integration with GitHub. GitHub Actions simplifies these integrations by using native code now built into GitHub.com.

Brian explained how GitHub Actions can launch a workflow. He gave the example of a team that has created a JavaScript test in Jest – an existing test run with either npm test or jest. With GitHub Actions workflows, the development team can automate actions based on a triggering event. In this case, pushing the JavaScript file can drive GitHub to execute the test.
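As a minimal sketch of what such a workflow file might look like (the file name, branch, and Node version here are illustrative assumptions):

# .github/workflows/test.yml
name: Run Jest tests
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository and set up Node before running the suite
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: 14
      - run: npm ci
      - run: npm test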

Get Back To What You Like To Do

Angie pointed out that this catchphrase, “Get back to what you like to do,” caught her attention. She spends lots of time in meetings and doing other tasks when she’d really just like to be coding. So, she asked Brian, how does that work?

Brian explained that, as teams grow, so much more of the work becomes coordination and orchestration. Leaders have to answer questions like:

  • What should happen during a pull request?
  • How do we automate testing?
  • How do we manage our build processes?

When engineers have to answer these questions with external products and processes, they stop coding. With GitHub Actions, Brian said, you can code your own workflow controls. You can ensure consistency by coding the actions yourself. And, by using GitHub Actions, you make the processes transparent for everyone on the team.

Do you want a process to call Applitools? That’s easy to set up. 

Brian explained that GitHub hosted a GitHub Actions Hackathon in late 2020. The team coded the controls for the submission process into the hackathon. You can still check it out at githubhackathon.com.

The entire submission process was automated to check that all the proper files were included in a submission. Completed submissions were then recognized on the hackathon home page automatically.

Brian then gave the example of work he did on the GitHub Hacktoberfest in October. For the team working on the code, Brian developed a custom workflow that allowed any authenticated individual to sign up to address issues exposed in the Hackathon. Brian’s code latched onto existing authentication code to validate that individuals could participate in the process and assigned their identity to the issue. As the developer, Brian built the workflow for these tasks using GitHub Actions.

What can you automate? Inform your team when a user opens a pull request. Send a tweet when the team releases a build. Anything you can reach through a GitHub webhook, you can automate with GitHub Actions. For example, you can even automate the nag emails that get sent out when a pull request review does not complete within a specified time.

Common Actions

Angie then asked about the most common actions that Brian sees users running.

Brian summarized by saying: basically, continuous integration (CI). The most common use is running tests against code as it gets checked in, to ensure that test suites get applied. You can have tests run when you push code to a branch, push code to a release branch or do a release, or even when you open a pull request.

While test execution is the most frequent use, there are plenty of tasks that one can automate. Brian did something specific to assign gifts to team members who reviewed pull requests. He also used a cron job to automate a GitHub Action which opened a global team issue each Sunday US time (which happens to be Monday in Australia) and assigned all the team members to it. Each member needed to explain what they were working on. This way, the globally-distributed team could stay on top of their work together without a meeting that would occur at an awkward time for at least one group of team members.
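A rough sketch of such a scheduled workflow, assuming the actions/github-script helper; the schedule, title, and assignee are illustrative:

# .github/workflows/weekly-checkin.yml
name: Weekly check-in issue
on:
  schedule:
    - cron: '0 22 * * 0'   # every Sunday evening UTC
jobs:
  open-issue:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v6
        with:
          script: |
            // Open the weekly check-in issue in the current repository
            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: 'What is everyone working on this week?',
              assignees: ['bdougieYO'],
            });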

Brian talked about people coming up with truly novel use cases – like someone linking IoT devices to webhooks in existing APIs using GitHub Actions.

But the cool part of these actions is that most of them are open source and searchable. Anyone can inspect actions and, if they don’t like them, modify them. If a repo includes GitHub Actions, they’re searchable.

On github.com/bdougie, you can see existing workflows that Brian has already put together.

Jump Into GitHub Actions – What Next?

I shared some of the basic ideas in Brian’s conversation with Angie. If you want to jump into GitHub Actions in more detail, you can check out the full webinar and the slides in Addie Ben Yehuda’s summary blog for the webinar. That blog also includes a number of Brian’s links.

Enjoy jumping into GitHub Actions!

Featured Photo by Aziz Acharki on Unsplash

9 Test Automation Predictions for 2021
https://applitools.com/blog/9-test-automation-predictions-2021/
Thu, 07 Jan 2021 20:24:40 +0000

Every year, pundits and critics offer their predictions for the year ahead. Here are my predictions for test automation in 2021. (Note: these are my personal predictions)

Prediction 1: Stand-alone QA faces challenges of dev teams with integrated quality engineering

Photo by CHUTTERSNAP on Unsplash

Teams running Continuous Integration/Continuous Deployment (CICD) have learned that developers must own the quality of their code. In 2021, everyone else will figure that out, too. Engineers know that the delay between developing code and finding bugs produces inefficient development teams. Companies running standalone QA teams find bugs later than teams with integrated quality. In 2021, this difference will begin to become painful as more companies adopt quality engineering in the midst of development.

Prediction 2: Development teams will own core test automation

Photo by Sara Cervera on Unsplash

Dev owns test automation across many CICD teams. With more quality responsibility, more of the development teams will build test automation. Because they use JavaScript in development, front-end teams will choose JavaScript as the prime test automation language. As a result, Selenium JavaScript and Cypress adoption will grow, with Cypress seeing the most increase.

Prediction 3: Primary test automation moves to build

Photo by ThisisEngineering RAEng on Unsplash

In 2021, core testing will occur during code build. In past test approaches, unit tests ran independently of system-level and full end-to-end integration tests. Quality engineers wrote much of the end-to-end test code. When bugs got discovered at the end, developers had to stop what they were doing to jump back and fix code. With bugs located at build time, developer productivity increases, as developers fix what they just checked in, in real time.

Prediction 4: Speed+Coverage as the driving test metric

Photo by Marc Sendra Martorell on Unsplash

As more testing moves to build, speed matters. Every minute needed to validate the build wastes engineering time. Check-in tests will require parallel testing for the unit, system, and end-to-end tests. Sure, test speed matters. What about redundant tests? Each test must validate unique aspects of the code. Developers will need to use existing tools or new tools to measure the fraction of unexercised code in their test suites.

Prediction 5: AI assists in selecting tests to ensure coverage

Photo by Hitesh Choudhary on Unsplash

To speed up testing, development teams will look to eliminate redundant tests. They will look to AI tools to generate test conditions, standardize test setup, and identify both untested code and redundancy in tests. You can look up a range of companies adding AI to test flows for test generation and refactoring. Companies adopting this technology will attempt to maximize test coverage as they speed up testing.

Prediction 6: Visual AI Page Checks Grows 10x

Photo by Johen Redman on Unsplash

I’m making this prediction based on feedback from Applitools Visual AI customers. Each year, Applitools tracks the number of pages using Visual AI for validation. We continue to see exponential growth in Visual AI use among our existing customers. The biggest driver for this growth follows from the next two predictions about Visual AI utility.

Prediction 7: Visual tests on every check-in

Photo by Larissa Gies on Unsplash

When companies adopt visual testing, they often add visual validation to their end-to-end tests. At some point, every company realizes that bug discovery must come sooner. They want to uncover bugs at check-in, so developers can fix their code while it remains fresh in their minds. Visual AI provides the accuracy to provide visual validation on code build and code merge – letting engineers validate both the behavior and rendering of their code within the build process.

Prediction 8: Visual tests run with unit tests

Photo by Wesley Tingey on Unsplash

Engineers treat their unit tests as sanity checks.  They run unit tests regularly and only check results when the tests fail. Why not automate unit tests for the UI? Many Applitools customers have been running visual validation alongside standard unit tests. Visual AI, unlike pixel diffs and DOM diffs, provides high accuracy validation for visual components and mocks. With the Ultrafast Test Platform, these checks can be validated across multiple platforms with just a single code run. Many more Applitools customers will adopt visual unit testing in 2021.

Prediction 9: The gap between automation haves and have-nots will grow

Photo by Brett Jordan on Unsplash

As more development teams own test automation, we will see a stark divide between legacy and modern approaches. Modern teams will deliver features and capabilities more quickly with the quality demanded by users. Legacy teams will struggle to keep up; they will choose between quality and speed and continue to fall behind in reputation.

Where Do You See The Future?

These are nine predictions I see. What do you see? How will you get ahead of your competition in 2021? How will you keep from falling behind? What will matter to your business?

Each of us makes predictions and then sees how they come to fruition. Let’s check back in a year and see how each of us did.

Featured Photo by Sasha • Stories on Unsplash

How Visual UI Testing can speed up DevOps flow [Revisited]
https://applitools.com/blog/visual-testing-devops-revisited/
Fri, 11 Dec 2020 20:51:09 +0000

At its core, DevOps means applying best practices to automate the process between development and IT teams. In practice, DevOps requires automating the app builds, running tests to verify source code integrity, and automating the release process until the app is ready for deployment.

“With the advent of automated visual UI testing, people went about discovering the best way to integrate Automated Visual Testing in the DevOps workflow to speed up the process.”

— Bilal Haidar

Continuous Integration / Continuous Delivery (CI/CD) makes up the heart of the DevOps workflow that supports the processes, developers, testers, and clients.

With the advent of automated visual UI testing, people went about discovering the best way to integrate Automated Visual Testing in the DevOps workflow to speed up the process.

In this article, I will share the workflow I use to integrate my source code with a CI/CD service to build the code and run the automated visual UI tests. And I will show the end result – a full integration between the source code management system and CI/CD that reports any discrepancies or failures in either.

The other half of this article will demonstrate a practical example of implementing a recommended Visual Testing Workflow.

While doing so, we will be covering topics like CI/CD, Source Code Management systems, and much more.

Let’s start!

Source Code Repository Management System

Where do you store your source code? Many of the external source code repository management (SCRM) systems make your code available to your team wherever they are and whenever they need access. Popular SCRM systems include GitHub, BitBucket, GitLab, and Azure Repos. If you don’t already use one or more SCRM systems, I highly recommend you start.

SCRM underpins DevOps source management. For example, I can add or modify code and push the changes to the centralized repository. My colleagues elsewhere would do the same thing. Developers joining the team can clone the centralized up-to-date repository to begin coding instantaneously.

SCRM systems have developed best practices and workflows. For instance, the diagram below depicts the workflow of implementing a new feature in your application.

[Diagram: feature-branch workflow in an SCRM system]

Start by cloning or branching from the dev branch.

  1. Create a new branch with a name relevant to the feature you are adding.
  2. Add your code and commit the changes locally.
  3. Push the branch to an SCRM system – GitHub, for instance.
  4. Create a Pull Request (PR) based on the new branch.
  5. Invite your team members to review and comment.
  6. Incorporate any changes. If need be, approve and merge the PR.
  7. The dev branch now has the new feature and is ready to deploy.

I use this typical SCRM workflow on a daily basis. It’s quite simple and straight to the point, yet it organizes the development process across all team members without losing code or overlapping work.

Continuous Integration / Continuous Delivery (CI/CD)

Continuous Integration, or CI for short, is a fully-automated development methodology that seamlessly integrates your daily code changes into a centralized source code management system. CI ensures a high level of code integrity by enforcing your own software development process for any code changes.

When employing CI in your development workflow, creating a new PR in your SCRM system typically triggers the CI workflow via a webhook.

The CI workflow is depicted in this diagram:

[Diagram: CI workflow]

  • Start by pushing your branch to the SCRM.
  • Create a Pull Request.
  • Allow your colleagues to review and comment.
  • At the same time, creating a PR triggers a Build Pipeline inside the CI Provider.
  • You can configure the Build Pipeline to build your code and run the tests. In addition, you can add more tasks to the workflow as you see fit.
  • Finally, the CI Provider reports the results of building the code and running the tests back to the SCRM.
  • The results will be displayed on the PR page to signal success or failure.

The CI results will be either a success or failure. In case of a failure, you need to review the PR and fix any issues reported. Otherwise, you can safely merge the PR into your code.

On the other hand, Continuous Delivery, or CD, automates the process of generating a releasable package from your code. A successful CI run triggers a Release Pipeline inside the CD Provider. I won’t be covering CD in this article. To find out more about CD, you can read this detailed guide: What’s Continuous Integration and Continuous Delivery.

Visual Testing Workflow

Automated visual UI testing and CI/CD integrate well together! They complement each other and facilitate the work of DevOps.

Once you have an automated integration and delivery solution, you can easily integrate visual UI tests into your CI/CD workflow. Simply add an additional task in the workflow to run the visual UI tests. Typically, the CI Provider runs the automated tests and sends the results to the Visual Testing Provider. The Visual Testing Provider analyzes the testing results and reports them back to the SCRM on that specific PR.

The diagram below summarizes this whole workflow:

[Diagram: visual testing workflow]

  • Start by pushing your branch to the SCRM system.
  • Create a Pull Request.
  • Let your colleagues review and comment.
  • At the same time, creating a PR triggers a Build Pipeline inside the CI Provider.
  • The Build Pipeline builds your code and runs the tests (unit, integration, functional, and visual UI tests).
  • The CI Provider reports back the results to the SCRM system.
  • Meanwhile, the Visual Testing Provider collects the test run results, analyzes them and reports back success or failure to the SCRM system.

Every Visual Testing Provider supports an SCRM system integration to ease communication when running visual UI tests. For instance, Applitools offers GitHub, BitBucket, and other integrations with SCRM systems.

Let’s see this workflow in action in the demo section.

Demo

Now that we know where automated visual UI testing fits in the DevOps workflow, let’s work through an example by trying to automate the entire Visual Testing Workflow.

In this step-by-step guide, we’ll tackle the following:

  1. Cloning the example code.
  2. Configuring Applitools API Key.
  3. Connecting the GitHub repo with CircleCI.
  4. Configuring the GitHub repo.
  5. Connecting the GitHub repo with Applitools.
  6. Running a visual UI test locally.
  7. Running the Visual UI testing workflow.

For this article, I opted to use the source code that I developed for my previous article, Cross-browser Testing With Cypress.io. The code contains a single visual UI test that runs against the website of one of Applitools’ customers – https://www.salesforce.com – and fills in the contact form located at https://www.salesforce.com/uk/form/contact/contactme/.

1. Cloning the example code

Start by cloning the source code that we’re going to use for this demonstration by running this command:

git clone git@github.com:bhaidar/applitools-github-circle-ci.git

The command clones the GitHub repo to your local computer. Once the code is cloned, enter the applitools-github-circle-ci directory, and run the following command to install the packages:

npm install

This command installs all the necessary NPM packages used by the source code. The app is now ready!

2. Configuring Applitools API Key

Before you can locally run Applitools Eyes Cypress SDK, make sure you get an Applitools API Key, and store it on your machine as an environment variable.

To set the APPLITOOLS_API_KEY environment variable, you can use the export command on Mac or set command on Windows as follows:

export APPLITOOLS_API_KEY=<your_api_key>     # Mac/Linux
set APPLITOOLS_API_KEY=<your_api_key>        # Windows

3. Connecting the GitHub repo with CircleCI

For this tutorial, we’re going to integrate our GitHub repo with CircleCI. If you’re not familiar with CircleCI, check out their documentation and sign up for a free account with GitHub or Bitbucket.

Once you’ve created an account and logged in to the CircleCI website, click the Add Projects menu item.

Search and locate your GitHub repo and click the Set up Project button as shown in the diagram.

[Screenshot: setting up the project in CircleCI]

You are then redirected to the Project Settings page to complete the set up of your project.

On the Project Settings page, select the Linux operating system option and Node as a language. Scroll down the page until you see:

[Screenshot: CircleCI project setup instructions]

The CircleCI team offers you a step by step guide on how to integrate CircleCI into your application. Make sure you follow the steps listed as shown here:

  1. Create a folder called .circleci and add a file called config.yml
  2. Populate the config.yml file with the sample on the page
  3. Update the sample .yml to reflect your project’s configuration
  4. Push the change to GitHub.
  5. Start building! This will launch your project on CircleCI and its webhooks will listen for updates to your work.

Switch back to your code editor and let’s add the CircleCI requirements.

3a. Build in GitHub

Start by creating the dev git branch by running the commands below:

git checkout -b dev
git push --set-upstream origin dev

Then create a new folder named .circleci at the root folder of the app, and create a new file named config.yml inside it.

CircleCI provides a sample config.yml file that you can use to start working with CircleCI. For now, paste the following inside the config.yml file:

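The original Gist is no longer embedded here, so what follows is a close sketch reconstructed from the description in the next paragraphs; the cache key and working directory name are illustrative:

version: 2
jobs:
  build:
    docker:
      - image: cypress/base:10
    working_directory: ~/applitools-github-circle-ci
    steps:
      # 1. Check out the source code into the working directory
      - checkout
      # 2. Restore cached NPM packages, if any
      - restore_cache:
          keys:
            - deps-{{ checksum "package.json" }}
      # 3. Install all NPM package dependencies from package.json
      - run: npm install
      # 4. Store the packages in the cache for later runs
      - save_cache:
          key: deps-{{ checksum "package.json" }}
          paths:
            - node_modules
      # 5. Export the batch ID and run the visual UI test in one command
      - run:
          name: Run visual UI tests
          command: |
            export APPLITOOLS_BATCH_ID=${CIRCLE_SHA1}
            npx cypress run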

The “config.yml” defines a single build job. The build job defines some general configurations and some steps.

This job makes use of the cypress/base:10 Docker image. It also defines the working directory to be the name of the app itself.

The build job steps are:

  1. Checks out the source code to the working directory inside the Docker container.
  2. Restores the cached NPM packages if any.
  3. Installs all NPM package dependencies located in the package.json file.
  4. Stores the packages inside the cache to retrieve and use them later.
  5. Runs a multi-line command as follows:
    1. Exports the APPLITOOLS_BATCH_ID environment variable, set to the auto-generated CIRCLE_SHA1 value at runtime.
    2. Runs the single visual UI test with Cypress.io

It’s very important to include both commands – exporting the APPLITOOLS_BATCH_ID environment variable and running the visual UI test – in the same multi-line command inside the config.yml file.

3b. Integrate with Applitools

In addition, we need to amend the applitools.config.js file to include the batchId. The applitools.config.js file should now look as follows:

module.exports = {
    showLogs: false,
    batchName: 'Circle CI batch',
    // Group all test runs from the same commit into one Applitools batch
    batchId: process.env.APPLITOOLS_BATCH_ID
}

One task remains – adding the APPLITOOLS_API_KEY environment variable at the CircleCI project level, so the visual UI tests can run and connect with the Applitools servers. You have the choice to either place this environment variable inside the config.yml file or add it on the CircleCI website. The latter is the recommended option so you don’t share your API key on GitHub; you simply embed it at the CircleCI project level.

  1. Locate and click the Jobs menu item on CircleCI.
  2. Near the project name, click the gear icon.
  3. Click the Environment Variables menu item.
  4. Click on the Add Variable button.
  5. On the popup modal window, fill in the following fields:
    1. Name: The name of the environment variable in this case APPLITOOLS_API_KEY.
    2. Value: Copy the API Key from the Applitools website and paste it here.
  6. Click the Add Variable button to add this environment variable.

3c. Push Changes

Now let’s push these changes from our local copy of the source code to GitHub. CircleCI will automatically detect the change in the repo and run the job. Consequently, the Applitools visual UI test will run.

Run these commands to push your changes to GitHub:

git add .
git commit -m "Add CircleCI support to app"
git push

Switch back to CircleCI and check the status of the job.

[Screenshot: CircleCI build steps]

The diagram lists all the steps as defined in the config.yml file. CircleCI runs each step in the same order it appears in the config file.

The job fails! Let’s expand the last step to find out why.

[Screenshot: expanded log of the failed step]

If you read through the logs you will notice the following:

CypressError: Timed out retrying: Expected to find element: 'input[id="reg_form_1-UserFirstName"]', but never found it.

Cypress.io has generated an error as it cannot find an input with ID reg_form_1-UserFirstName on the contact form.

The Salesforce team has changed the IDs of the contact form since I wrote the visual test script in my previous article!

Don’t worry, we will fix this in the coming sections. More importantly, we have linked our GitHub repo with CircleCI.

Let’s move on and tackle other tasks.

4. Configuring the GitHub repo

Before we proceed, let’s configure our GitHub repo with the best practices.

  1. Locate the GitHub repo and go to the Settings page on GitHub.
  2. Click on the Branches menu section.
  3. Click the Add rule button to define the approval on new code pushed to this repo.
  4. Enter the name master into the Branch Name Pattern field. The rules we are going to add target only the master branch in this repo. You are free to decide on the branch you want.
  5. Usually, I opt to check the Require pull request reviews before merging option. For the sake of this demonstration, and since I am the only developer working on this code, I will keep it unchecked.
  6. Make sure to check the Require status checks to pass before merging option. This ensures CircleCI allows you to merge a PR into your master branch only after a success status outcome.
  7. Check the Require branches to be up to date before merging option, and then select the ci/circleci: build checkbox. In other words, you are specifying the CircleCI job to listen to for status checks.
  8. Locate and click the Create button to add this rule.

That’s it for GitHub!

Let’s switch to CircleCI and configure one option there.

  1. Locate and click the Jobs menu item on CircleCI.
  2. Click the gear icon on the top-right side of the page.
  3. Locate and click the Advanced Settings menu item.
  4. Scroll down the page until you hit the option Only build pull requests, and make sure to select On. This way, the CircleCI build process will be executed only when a new PR is created, and not on every push to the master branch.

That’s it for CircleCI!

5. Connecting the GitHub repo with Applitools

So far, we have managed to integrate CircleCI and GitHub successfully. Any status update on running the build job on CircleCI will be reflected on the GitHub repo.

Let’s now integrate GitHub with Applitools so that our GitHub repo will receive any status updates on whether the visual UI tests ran successfully or failed.

Go to the Applitools Test Manager website and log in with your GitHub credentials.

  1. Navigate to the Admin page.
  2. Locate and click the Teams tile.
  3. In my case, I am the only member of the team. I click the row that represents my own record.
  4. Click the Integrations tab.
  5. Locate and click the GitHub option.
  6. Click the Manage repositories button.
  7. Sign in to your GitHub account again as requested. Click the Sign in button and authorize Applitools to access your GitHub account.
  8. Locate your GitHub repository and click Apply.

That’s all! Now any status updates on visual UI tests on the GitHub repo you’ve selected will appear right away.

Now, if you navigate back to the GitHub repo Settings page and click the Webhooks menu item, you will see two Webhooks were added to your repo. One for CircleCI, and another for Applitools.

[Screenshot: GitHub repo webhooks for CircleCI and Applitools]

6. Running a visual UI test locally

In this section, I will fix the single visual UI test script by updating the ID selectors for all the fields on the Salesforce contact form so that the test runs successfully.

As mentioned previously in my articles, the recommended approach for doing visual testing with Cypress.io is to assign data-* attributes to fields. Such data attributes never change, and provide a consistent way to keep your visual tests running.

Since we don’t have control over the source code behind the contact form, I will simply create a new Git branch and fix the visual test code to reflect the new IDs being used.

Run the following commands to create a new branch in Git:

git checkout -b amend-visual-test-to-use-new-ids
git push --set-upstream origin amend-visual-test-to-use-new-ids

I’ll go through the script code and change the way I find input fields on the form:

// Fill First Name
cy.get('input[id="reg_form_1-UserFirstName"]')
  .type('John')
  .should('have.value', 'John');

To something like this:

// Fill First Name
cy.get('input[id^="UserFirstName"]')
  .type('John')
  .should('have.value', 'John');

The new script code now looks like this:

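The full Gist is no longer embedded; as a rough sketch of the shape of the updated spec (the fields beyond First Name and the exact Eyes commands in the original are assumptions):

describe('Salesforce Contact Us form', () => {
  it('fills the contact form and checks it visually', () => {
    // Start the Applitools Eyes session for this test
    cy.eyesOpen({ appName: 'Salesforce', testName: 'Contact Us form' });
    cy.visit('https://www.salesforce.com/uk/form/contact/contactme/');

    // Use prefix selectors, since the exact generated IDs change
    cy.get('input[id^="UserFirstName"]')
      .type('John')
      .should('have.value', 'John');

    // ...fill the remaining form fields the same way...

    // Take the visual checkpoint and close the session
    cy.eyesCheckWindow('Contact form');
    cy.eyesClose();
  });
});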

Let’s give it a shot: run the single test included in the source code locally and make sure it passes.

To run the test, we first need to open the Cypress Test Runner app by running this command:

npx cypress open

Click the salesforce-contactus.spec.js file and Cypress will run the test for you. The result is shown below:

[Screenshot: Cypress Test Runner results]

The test runs successfully! Let’s make sure the test results are displayed on the Applitools Test Manager website.

Let’s visit the Applitools Test Manager, and verify the results.

[Screenshot: test results in the Applitools Test Manager]

Great! Our test runs fine.

7. Running the Visual UI testing workflow

Now that our test runs successfully locally, what we need to do is push our new branch to GitHub, create a new PR and watch how CircleCI and Applitools will be triggered. Their status updates will appear on the PR itself on GitHub.

Run the following commands to push the new branch to Github:

git add .
git commit -m "Fix visual test script"
git push

Navigate to the GitHub repo, and create a new PR.

[Screenshot: new pull request with pending status checks]

Notice how the CircleCI build pipeline has started:

ci/circleci:build Pending -- CircleCI is running your tests

Let’s wait for CircleCI to finish running the job and for Applitools to report the results of running the tests on this PR:

[Screenshot: PR status checks from CircleCI and Applitools]

Both CircleCI and Applitools report success on running the single visual UI test!

Congratulations!

  • The ci/circleci:build job ran successfully
  • The scm/applitools job that compares the git branch to the baseline git branch ran successfully
  • The tests/applitools job that runs all the visual UI tests ran successfully

Only now can you go ahead and merge the PR into the source code.

Conclusion

This article is just the tip of the iceberg on how Applitools automated visual UI tests can seamlessly integrate with a CI engine to support and facilitate the DevOps workflow.

Now that you have a starting point, I will leave the floor to you. Explore, play, and configure! Be brave and try more advanced workflows.

qTest Integration with Applitools
https://applitools.com/blog/qtest-integration/
Thu, 05 Nov 2020 20:06:49 +0000

An agile setup should be seamless: at any time, tests can be tracked per requirement or story, automated or manual tests can be executed, and results are available for monitoring or auditing by key stakeholders. And automation execution shouldn’t be something only techies can do. Non-technical people on the team should know what the automated tests cover for each story.

Mature organizations with agile methodologies implement tooling to centralize testing, requirements, defects, and reporting in one place – both for better tracking and to make informed decisions about the software quality management process.

Recently, I have come across two tools, qTest and Applitools. Integrating them into the agile process solves the above challenges and makes the testing practice in an agile environment transparent, traceable, measurable, and efficient.

“In this article, I’m focussing on the automated tests to explain the benefits of qTest and Applitools integration.”

— Satish Mallela

qTest

Tricentis qTest offers a suite of Agile testing tools designed to improve efficiency and ensure collaboration on your journey to release the best software.

qTest has the following components to support agile testing methodologies:

  • qTest Manager – Powerful and Easy-to-Use Test Case Management
  • qTest Launch – Centrally Manage Open Source Frameworks & Commercial Test Automation Tools
  • qTest Explorer – Test Execution Recorder & Documentation Tools

Applitools

Applitools Visual AI provides the software industry’s only AI-powered, deep learning, computer vision engine that replicates the human eyes and brain to test rendered components and pages.

Applitools offers incredible efficiency. A single visual capture can replace dozens of text-based assertions per test. With Applitools, you create tests faster, and you execute tests in seconds on multiple device/browser combinations. Applitools helps you ensure perfection across all browsers, screens, and viewports. You can even include web accessibility compliance testing at the same time.

Applitools has 40+ SDKs for different programming languages and test runners, covering web apps, mobile apps, screenshot testing, PDF validations, and more.

Applitools Integration with qTest

How do we combine visual AI in Applitools with Test management capabilities in qTest to make software testing efficient in Agile methodology?

The common flow of test management in any agile environment is:

  • Write tests for user stories/requirements.
  • Execute tests manually or automatically.
  • Raise defects.

In this article, I’m focussing on the automated tests to explain the benefits of qTest and Applitools integration.

qTest supports multiple test runners – TestNG, Cucumber, JBehave, JUnit, NUnit, MSTest, etc. – that can be used with popular tools like Selenium to run automated tests.

Integrating Applitools with the automation tools and test runners is straightforward. Follow the tutorials for a step-by-step guide.

I have chosen the Cucumber-Java-Selenium and Applitools combination to explain the integration.

Cucumber is most commonly used in agile development for writing automated tests.

Cucumber is a BDD (Behavior-Driven Development) software tool which facilitates the use of an easy-to-learn language called Gherkin. The Gherkin language is used for writing user stories in a ubiquitous language which can also be parsed by the test runner software. For details on Cucumber and the Gherkin language, please refer to https://cucumber.io/docs.

Let’s take a look at a sample Cucumber scenario, shown below, that searches the Tricentis website.

Feature: Browser Test
  Scenario: Search qTest Launch
      Given I am on the Tricentis website
      When I search for "qTest Launch"
      Then I should see the qTest Launch item on the Search page

Applitools should be added to the implementation of the Cucumber steps. If the automation framework uses a page object model then Applitools should be added to page objects.

Applitools requires a Selenium WebDriver object to capture a screenshot of the application under test.

The following line of code should be added before capturing the DOM screenshot.

// Start the Eyes session: app name, test name, and viewport size
eyes.open(driver, "Demo App", "Smoke Test", new RectangleSize(800, 800));

Next is to add the Applitools check command to take a full-page screenshot.

// Visual checkpoint #1 - Check the login page.
eyes.checkWindow("Login Window");
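The test also needs to close Eyes at the end so the results are sent for comparison; a minimal sketch using the Java SDK’s standard close calls:

// Close Eyes and gather the comparison results;
// abort the session if the test ended before Eyes was closed
try {
    eyes.close();
} finally {
    eyes.abortIfNotClosed();
}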

Please refer to this tutorial for the remaining steps to complete the integration.

We have successfully added Applitools code to automated tests.

Running Tests

Let’s run the automated tests with qTest Launch or CI/CD tools like Jenkins.

Please refer to qTest Launch for more details.

During the test execution, Applitools takes DOM snapshots of the application under test (as shown below) and sends each snapshot to the Applitools server for comparison with Visual AI – emulating the human eye and brain.

[Screenshot: DOM snapshot of the application under test]

Please refer to Applitools for more details.

So far, we have prepared the tests and integrated Applitools. Now let’s see the results of the integration.

qTest automatically imports automation results into the qTest Manager dashboard if tests are executed by qTest Launch. Please refer to qTest Launch for more details.

For execution in Jenkins refer to qTest integrations with Jenkins.

Let’s dive into the results imported into qTest Manager, as shown below.

Please refer to qTest Manager for more details.

qTest Manager dashboard with automation run results.

As shown in the above screenshot, tests are marked passed or failed.

The advantage of integrating qTest with Applitools is that qTest marks the test as passed or failed depending on the status from Applitools.

Applitools assigns tests different statuses based on the comparison done by Visual AI.

Please refer to this article for more details.

Let’s click on an executed test and see the details of the execution in both the pass and failure scenarios.

Passed:

  • qTest marks the test as Passed because Applitools has taken a full-page screenshot of the application and observed no differences from the baseline snapshot of the same screen.
  • Applitools has tested the full page, emulating the human eye.

Failed:

  • qTest marks the test as Failed because Applitools has taken a full-page screenshot of the application and observed differences from the baseline snapshot of the same screen.
  • Applitools has tested the full page, emulating the human eye, and has found differences that a human eye would notice when comparing text, images, colour, and contrast.
  • To know more about Applitools comparison algorithms and human eye emulation, follow this link.
  • qTest also provides a link (click the log highlighted in green) to the Applitools Dashboard (sample below) to understand the differences and take action. Follow this link for more details.

“com.applitools.eyes.exceptions.DiffsFoundException: Test ‘qTest Integration’ of ‘qTest’ detected differences! See details at: https://eyes.applitools.com/app/batches/00000251801305860660/00000251801305860535?accountId=<accountID>”

The link sends you to the Applitools Test Manager dashboard for users to review results and take actions.

In the screenshot below, Applitools has compared the checkpoint (the present application state) with the expected application state (the baseline); any differences found between the two are highlighted in pink.

[Screenshot: checkpoint vs. baseline comparison with differences highlighted in pink]

To know more about Applitools capabilities for testing websites, mobile applications, or PDF documents, please follow the links.

So far, with this integration we have achieved the following in our agile testing practice:

  • Tests are written for user stories
  • Automated tests are linked to stories
  • Results have been centralised in one place

Now, all stakeholders and team members should be able to:

  • See the progress of given user story testing
  • Understand what is being tested in automated testing
  • See the current state of application
  • Collaborate and take informed decisions

Results are available at all times in one place for future reference or auditing purposes, giving agile teams transparency and the ability to trace, measure, and increase their efficiency.

Benefits of Integration

In broader terms, Applitools complements teams using qTest or any other automation tools to:

  • Test faster with less code or zero code – productivity gain by 70%
  • Cross Browser testing with no infrastructure cost – coverage increase by 90%
  • Automation results combined at one place – standardising tools
  • Fewer defects in production – 45% more bugs caught with Visual AI

Hope you liked this blog. Keep Testing!

Learn from Google – Use Google Cloud Build
https://applitools.com/blog/google-cloud-build/
Thu, 01 Oct 2020 20:51:44 +0000

We all want to catch and fix problems as early as possible. In software development, this mindset helps development teams create a culture of continuous improvement. To automate best practices, like Continuous Integration & Continuous Delivery (CI/CD), teams are adopting tools that help automate continuous improvement. Google has created a great tool in Google Cloud Build.

With Google Cloud Build, Google has created a platform that helps every software developer from every walk of life, on teams of every size. Google Cloud Build is a fully managed cloud application development environment. It includes an associated set of build and test management tools that also encompasses automation and workflow controls, along with privacy and vulnerability analysis intelligence.

It works regardless of the speed or scale at which teams build, test, and deploy software. This is very much aligned with the vision of Applitools, especially after the launch of the Ultrafast Visual Grid. An integration between Google Cloud Build and Applitools gives every software developer the ability to run their tests across all environments extremely quickly and get feedback seamlessly in seconds as part of the build process.

Integrate Applitools with Google Cloud Build

To get started, follow these steps to integrate Applitools into your Google Cloud Build.

  • Step 1: Create an account on Google Cloud Platform.
  • Step 2: Navigate to Cloud Build.

Create the trigger and select your source code management tool (in this example I have chosen GitHub) and connect your repository.

Allow the required access to perform the trigger runs on every commit on each branch or each pull request.

Edit your trigger and select the Build Configuration as cloudbuild.yaml or Dockerfile.

Once you make a small change in the application and push it up to GitHub, it will kick off the Cloud Build trigger you’ve just created.

Continue With Applitools Setup

Now follow these steps to configure your Applitools setup.

  • Step 1: Navigate to the admin panel in the Applitools Eyes dashboard.
  • Step 2: Go to Teams and select your team.
  • Step 3: Navigate to the Integrations section.
  • Step 4: Integrate your GitHub repo with Applitools.

Run Your Cypress Tests

Now, run your Cypress tests on Google Cloud Build.

  • Step 1: Let us clone the demo Cypress project from Applitools Cypress

https://applitools.com/tutorials/cypress.html 

  • Step 2: Create a cloudbuild.yaml file to pass the build steps.
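The Gist is no longer embedded here; a minimal sketch of such a cloudbuild.yaml, based on the description that follows (the image tag is illustrative), might be:

steps:
  # Build the Docker image that runs the Cypress tests,
  # passing the commit SHA through as a build argument
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '--build-arg'
      - 'COMMIT_SHA=$COMMIT_SHA'
      - '-t'
      - 'cypress-visual-tests'
      - '.'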

To run a Cypress end-to-end test, let’s use Docker.

In the Dockerfile we have defined the steps to pull the base Docker image, copy the files and the Cypress folder, add configurations, and execute the scripts.

Make sure to set the APPLITOOLS_API_KEY as an ENV variable or access it from secrets in the Dockerfile.

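Again, the Gist is not embedded; a rough sketch of such a Dockerfile under the assumptions above (the base image and file layout are illustrative, and the exact Applitools status-update call is omitted):

FROM cypress/base:10
WORKDIR /app

# Copy the project files, including the cypress folder and configuration
COPY . .

# Map the Cloud Build commit SHA to the Applitools batch ID
ARG COMMIT_SHA
ENV APPLITOOLS_BATCH_ID=$COMMIT_SHA

# APPLITOOLS_API_KEY should be injected from a secret, not baked into the image
RUN npm install
RUN npx cypress run
# ...followed by a curl POST call that updates the build status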

The Applitools build status is reported based on the COMMIT_SHA; hence we need to access the git COMMIT_SHA in the Docker build and assign it to the Applitools batch ID as an ENV variable.

Once the tests are complete, the curl command in the Dockerfile performs a POST call to update the build status.

Once you build your project, Applitools visually validates each commit and reports the build status.


Watch the video to try out the integrations, and keep testing!

What Not To Miss At Selenium Conf 2020
https://applitools.com/blog/selenium-conf-2020/
Wed, 09 Sep 2020 23:27:42 +0000

As a company that focuses on browser-based functional tests, Applitools stays on top of all things Selenium. We are excited to be Gold Sponsors of the virtual Selenium Conf 2020, which starts with online workshops tomorrow morning in India and tonight in the United States. There will be lots to learn, whether you are just getting started with Selenium or if you are a hard-core contributor.

Here is our list of five ‘what not to miss’ sessions and workshops at this year’s conference.

Simon Stewart’s Keynote

Simon Stewart talks about the State of Selenium 2020, including a discussion of what’s new in Selenium 4. Hear about contributors, lots of new contributions, and lots to discuss. Simon knows the project’s history and can help predict its future. Expect Simon’s keynote to cover lots of ground.

Friday, 11 September 2020, 18:00 IST

US Times:

Friday, 11 September 2020, 08:30 EDT

Friday, 11 September 2020, 05:30 PDT

Anand Bagmar’s Workshop – Get Up To Speed

Anand Bagmar is holding a “Selenium Deep Dive” workshop on Thursday, 10 September, from 10:00 to 18:00. Anand and Alexei will dive into the ins and outs of using Selenium. Anand hosts the Essence of Testing blog. He’s passionate about testing and test automation using Selenium, and he is a Quality Evangelist & Solution Architect at Applitools.

Thursday, 10 September 2020, 10:00-18:00 

US Times:

Thursday, 10 September 2020, 00:30-08:30 EDT

Wednesday, 09 September 2020, 21:30 – Thursday, 10 September 2020, 05:30 PDT

Gaurav Singh on Automation Frameworks

Gaurav Singh writes passionately about best practices in building test frameworks. He posts his blogs on automationhacks.io, and he has written a blog about test frameworks for Applitools. Gaurav cuts through all the information you can find on the Internet and gives you a robust way to think about modeling, building, and operating a test framework to make you more efficient.

Friday, 11 September 2020, 11:30-12:15 IST

US Times:

Friday, 11 September 2020, 02:00-02:45 EDT

Thursday, 10 September 2020, 23:00-23:45 PDT

Moshe Milman on Test Automation and CI/CD

Moshe Milman works for Applitools and has a broad knowledge of test and test strategies. He’ll focus on the prime issues in CICD automation – the combination of change volume and the increasing pace of changes. The solution involves optimizing your tests, looking at organization needs and tradeoffs, choosing the right tools, and deploying the right processes. Moshe will share tips and tricks as well.

Friday, 11 September 2020, 10:30-11:15 IST

US Times:

Friday, 11 September 2020, 01:00-01:45 EDT

Thursday, 10 September 2020, 22:00 – 22:45 PDT

Sujasree Kurapati on Accessibility Testing

Legal requirements and business opportunities drive organizations to ensure the accessibility of their web applications. Deque Software focuses on providing tools for software accessibility testing. Hear Sujasree Kurapati’s talk about a11y APIs and a11y testing that help organizations automate validation to ensure they meet accessibility goals.

Friday, 11 September 2020, 15:15-15:35 IST

US Times:

Friday, 11 September 2020, 05:45-06:05 EDT

Friday, 11 September 2020, 02:45-03:05 PDT

Jim Evans’s Keynote

After all is said and done, be sure to catch Jim Evans’s Saturday keynote, I’m Not Special. All projects have key people who contribute greatly to the project’s success. Jim wants you to understand what really matters when you contribute to a software project. He wants you to walk away with a different perspective on people, processes, projects, and software.

Saturday, 12 September 2020, 16:00-16:45 IST

US Times:

Saturday, 12 September 2020, 06:30-07:15 EDT

Saturday, 12 September 2020, 03:30-04:15 PDT

Wishing you all a good SeleniumConf 2020!

Learn More about Applitools and Selenium

Review – Automated Visual Testing With WebdriverIO https://applitools.com/blog/visual-testing-webdriverio/ Mon, 17 Aug 2020 22:08:24 +0000 https://applitools.com/?p=20596 I took Nyran Moodie’s course on Test Automation University: Automated Visual Testing with WebdriverIO. If you want the explicit link to the course, here it is: https://testautomationu.applitools.com/automated-visual-testing-javascript-webdriverio/index.html Course Introduction If you...


I took Nyran Moodie’s course on Test Automation University: Automated Visual Testing with WebdriverIO. If you want the explicit link to the course, here it is: https://testautomationu.applitools.com/automated-visual-testing-javascript-webdriverio/index.html

Course Introduction

If you use WebdriverIO regularly, and you are unfamiliar with the basics of using Applitools for automated visual testing, you will appreciate this course. Nyran focuses you on how to add visual test automation with Applitools to your WebdriverIO tests.

Nyran expects you to know WebdriverIO. If you want to learn WebdriverIO, check out Julia Pottinger’s excellent course – UI Automation with WebdriverIO on Test Automation University.

Nyran uses JavaScript for his examples and Visual Studio Code as his IDE. In case you didn’t know, you can use Applitools with a range of test languages. No matter what your test language of choice or IDE, I think you will find Nyran’s code and coding approach fairly intuitive.

Course Structure

Nyran breaks the course into eight main chapters. He gives this description in the course overview:

  • 1 – I am going to start by giving you an introduction to visual testing.
  • 2 – We are going to look at how we can get our environment set up to start using Applitools Eyes.
  • 3 – We are going to create and run our first visual test using Applitools.
  • 4 – I want to introduce you to the different match levels that Applitools has and the concept of viewport sizes.
  • 5 – I will be talking about checkpoints.
  • 6 – I will be looking at how we can organize our visual tests using batches.
  • 7 – We will look at how we can analyze our test results using the Test Manager.
  • 8 – We will be looking at the integrations and collaborations that Applitools provides.

Each of these chapters provides a methodical approach to getting going with Applitools. I’ll give a quick overview of each.

Chapter 1 – Why Visual Testing

If you read this review or take the course, you know why visual testing matters. Your UI and end-to-end tests result in rendered output. You can write all the functional tests that grab locators, enter data, effect action, and cause the appropriate output locators to have the appropriate values. But, until you actually look at the result, you cannot tell whether the input and output conform to design and usability expectations.

Nyran did not explain the most frequently-experienced reason for visual testing – unintended consequences of code changes over time. Our experience shows us that most expected application changes get tested, but unintended changes cause problems.

Chapter 2 – Getting Started With Applitools

Nyran does a nice job explaining how to get started. You need an Applitools API key, which you can get from the Applitools console. Nyran explains why you set up a local environment variable for your API key (so you do not need to include your API key in your test code directly). He also points to the github repo he uses for all the examples in the course.

Chapter 3 – Create And Run A Visual Test with Applitools

Chapter 3 involves the first coding examples for setting up Applitools. With a simple:

npm install @applitools/eyes.webdriverio

you get the Node package that installs Applitools Eyes into your WebdriverIO setup. After this, you can add the Applitools Eyes service to your tests. He shows code examples of what test code looks like when calling Applitools:

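The course’s screenshot isn’t reproduced here, but a first test along those lines might look roughly like this sketch, assuming the classic Eyes API (Eyes, Target) exported by the package installed above; the app name, test name, and URL are placeholders:

const { Eyes, Target } = require('@applitools/eyes.webdriverio');

describe('visual tests', () => {
    const eyes = new Eyes();

    it('looks right', async () => {
        // Start a visual test: WebdriverIO browser, app name, test name, viewport
        await eyes.open(browser, 'Demo App', 'home page', { width: 800, height: 600 });
        await browser.url('https://example.com');
        // Capture the window and compare it against the baseline
        await eyes.check('Home', Target.window());
        await eyes.close();
    });

    after(async () => {
        // Clean up if a test aborted before eyes.close() ran
        await eyes.abortIfNotClosed();
    });
});
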
Once he walks you through a test page example and fills in the tests, he gets code that looks like a full test.

https://github.com/NyranMoodie/WebdriverIO-Applitools/blob/master/test/specs/visual.js

Finally, he shows you the Applitools UI and how it highlights differences found during a test. To do this, he shows a test site with the ability to show different content, and he shows how Applitools highlights the differences.

Nyran makes it clear that Applitools can find and highlight all the visual changes on a comparison page (the checkpoint) versus the original capture (the baseline). And, he explains that Applitools lets you accept the checkpoint as the new baseline, or reject it.

Chapter 4 – Viewports and Match Levels

Nyran breaks chapter 4 into two sections. In Section 4.1, Nyran goes through the idea of viewport sizes. If you build a responsive app, you want to run your app from 4K down to mobile device sizes. How do you validate the different device size views? Applitools makes it easy to add a JavaScript file that specifies all the viewport sizes you want to validate. Applitools runs all the captures for you.

Next, Nyran writes about match levels. Applitools’ default match level, called “strict”, flags visually noticeable differences between a checkpoint and baseline. Strict uses Applitools Visual AI to break a page into elements that it then compares.

However, sometimes strict is too strict. In the “content” match level, Applitools checks text and structure but ignores color variations. This match level helps when you apply a global change to color and want to ensure that – color aside – no other changes have taken place. And, if they have, you want those changes highlighted quickly.

In the “layout” match level, Nyran shows, Applitools lets you validate pages that have dynamic content sharing a common structure. For example, you might have a retail shopping page that shows user-specific and hot items that change from test run to test run. Or, you have a news site that updates top stories regularly. Layout match level pays attention to the layout structure (the relationship of sections and text sizes) without comparing the specific contents of the elements within that structure.
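
In code, switching match levels is a one-liner. A sketch, assuming the MatchLevel enum is exported alongside Eyes in the SDK used above, and reusing the eyes instance from the earlier sketch:

const { MatchLevel, Target } = require('@applitools/eyes.webdriverio');

// Set a default match level for every checkpoint in the test...
eyes.setMatchLevel(MatchLevel.Content);

// ...or override it for a single checkpoint, e.g. a page whose
// dynamic content shares a stable structure
await eyes.check('News feed', Target.window().layout());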

Chapter 5 – Checkpoints

Nyran spends this chapter reviewing the ways Applitools lets you make visual captures. First, you can capture the visible page – the currently visible screen. Alternatively, you can capture the full page: Applitools scrolls through the app horizontally and vertically and stitches the images into a single full-page baseline and checkpoint.

Next, you can capture individual web elements. Finally, you can capture frames on a page. This flexibility in setting checkpoints gives you plenty of power to control the detail of your inspection at different points in your testing life cycle.
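
Using the same fluent API as above, the four capture styles look roughly like this sketch (the selector and frame name are placeholders):

// Visible viewport only
await eyes.check('Viewport', Target.window());

// Full page, stitched together from multiple captures
await eyes.check('Full page', Target.window().fully());

// A single web element, located by CSS selector
await eyes.check('Nav bar', Target.region('#nav'));

// Content inside a frame on the page
await eyes.check('Embedded frame', Target.frame('frame-name'));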

Chapter 6 – Batches

Batches provide a way of organizing common tests inside the Applitools Test Manager. Nyran explains how to code tests into batches. He also shows how batched tests get grouped inside the Test Manager. When you use batches, your test results become easier to interpret.

Nyran implies two things about batches. First, grouping tests into batches makes your testing much easier to understand. Second, if you want the benefit of batches, you need to code those batch definitions yourself.
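
A sketch of such a batch definition, assuming BatchInfo is exported by the SDK used above; the batch name is a placeholder:

const { Eyes, BatchInfo } = require('@applitools/eyes.webdriverio');

// Every test opened with this Eyes instance reports into the same
// batch, so the runs appear grouped together in the Test Manager
const eyes = new Eyes();
eyes.setBatch(new BatchInfo('Nightly visual suite'));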

Chapter 7 – Using the Test Manager

The Test Manager is the Applitools service user interface. Your user login provides you an API key, and tests run under your API key are collected in your view of the Test Manager.

Nyran shows you the basics, as well as some cool tricks in the Test Manager. He shows you how to compare differences between the checkpoint and the baseline. You can approve expected changes and update the baseline to the checkpoint. Or, you can reject changes and have them routed back to development as bugs.

Next, Nyran shows you how tests get grouped and how to use statistics. He also shows you how to override the existing match level on all or part of a test. Finally, he shows you Automated Test Maintenance.

Automated Test Maintenance gives you huge powers of scale for validating changes to your application. When you find a change in one checkpoint and approve it as a valid change, Applitools finds all other similar changes and gives you the power to approve those identical changes at the same time. For example – you change your menu bar at the top of your app, and the change affects 145 pages. Following your validation of a change on one page, Applitools asks you if you want to approve the other 144 pages with the identical change. That’s powerful.

Chapter 8 – Integrations

In the last chapter, Nyran shows how Applitools integrates with other tools in development – especially your CICD workflows.

Nyran shows the range of Applitools test framework and language SDKs that work on web and mobile test frameworks. Applitools lets you capture screenshots to use Visual AI if you use an unsupported framework. Applitools even lets you compare PDFs to ensure that your document generators behave correctly even as you update your application.

Next, you see how to link Applitools with Jira. You can link issues found in Applitools with incidents in Jira. Also, you can link GitHub and Applitools to tie your image comparisons to the source code GitHub pull requests. Finally, you see the standard approach to having Applitools link with your favorite CICD workflow manager.

Conclusion

Nyran wrote a nice course on how to use Applitools for automated visual testing. He makes clear that he used WebdriverIO and JavaScript because he knew these well. However, he knows the range of choices available to you.

Nyran really doesn’t cover:

  • Setting up tests in WebdriverIO,
  • Approaches for running and managing tests and test results
  • Managing WebdriverIO in your CICD workflow

You can find other courses to address these issues. I enjoyed taking Nyran’s course. Having taken other JavaScript testing courses, I think Nyran provides good examples of Applitools. He wants you to know how to get the most out of Applitools when you use WebdriverIO.

As always, I include my certificate of completion from Test Automation University:

For More Information

How Should I Organize My Automation Team? https://applitools.com/blog/organize-my-automation-team/ Tue, 11 Aug 2020 06:07:18 +0000 https://applitools.com/?p=20531 Joan Rivers once wrote, “I don’t want to order dinner by yelling into a plastic clown’s mouth”. We’ll get back to that shortly. There is a compelling discussion to have...


Joan Rivers once wrote, “I don’t want to order dinner by yelling into a plastic clown’s mouth”. We’ll get back to that shortly.

There is a compelling discussion to have about creating a dedicated team to “do all the automation”. Why not have all the automation developers or SDETs be on one team? After all, automation creation is a special skill like database administration and CI/CD/CT pipeline management, so shouldn’t we treat automators for testing in the same way? We can just hand them the requirements or acceptance criteria and they can bang out those automation scripts for us.

Some call the above situation the automation drive-thru; it’s an apt analogy. Like a fast-food drive-thru, you simply decide what you want to order, yell your order into a clown’s mouth, pay some fee, then get your food from the window. No muss, no fuss. Can’t automation development be similar?  Instead of a greasy burger, some chicken nuggets, or some delicious tacos, out comes an automated test script; again, no muss, no fuss, right? Sadly, it doesn’t always work out with no muss or fuss.

The drive-thru is but one way to organize teams for automation; there are, however, three ways: the dedicated automation team, the distributed automation team, and the hybrid approach.

Let’s explore!

What Does A Dedicated Automation Team Look Like?

I know of an organization in a Fortune 500 company that had (and perhaps still has) a dedicated team that creates their automation for at least one organization. Putting snark aside, the dedicated automation team was the working, professional, effective version of the automation drive-thru. This team would take in requests to create automation scripts based on test cases and then create automated scripts that would (give or take) do what was prescribed based on the test cases. In general, I would dismiss this as folly; no sufficiently experienced testing professional could think this was a good working model. But, they did, and it was. This organization was producing quality automation with actual value. The organization was succeeding. I know this because I was well acquainted with their former leader and with several of their automation engineers.

Dedicated Team Advantages

The main advantages of the dedicated team are the critical mass of automation expertise and the economy of scale.

Regarding critical mass, if you have all the automation expertise in one place, the automation endeavor has less friction when it comes to helping one another conquer automation challenges and deliver new automation capabilities. New automation engineers can be brought up to speed quickly because they have the rest of the automation team “at arm’s reach”. The walls between who automates what don’t need to exist (except when funding comes into play, but, to quote Alton Brown, that’s another show).

Following from, and dependent on, critical mass is the ability to gain an economy of scale. Economy of scale, for our purposes, means being more productive, efficient, effective, and therefore economical in the delivery of automation. Speaking practically, this means that a dedicated automation team can attain a consistency that scales because of the critical mass. The dedicated team can be indoctrinated with the organization’s standards, and the organization can preside over the guidelines; automation, regardless of framework or library, can be implemented based on a consistent ideology. When implemented judiciously, this approach can deliver economic gains to the organization and the company.

Dedicated Team Limitations

As is said, into every life some rain must fall. The main downside to a dedicated automation team is that the team, while experts in automation, likely lacks detailed domain knowledge. Of course, the automation team knows the basic operation of the system under test, but most current applications are large, distributed, multiple-serviced entities. It’s not realistic for a relatively small team of automation developers to be “sufficiently versed” in the esoterica of how each subsystem works; the details eventually get too small for this team to keep up. The limitation is further strained when the dedicated team is expected to provide automation across heterogeneous application environments. Having worked in an organization that was responsible for producing automation across disparate applications, I can attest that a lack of domain knowledge was a challenge to the value proposition.

What Does A Distributed Team Look Like?

Spinning 180 degrees from a dedicated automation team, we have the distributed automation team. Generally, a distributed automation team falls into one of three categories: in-team automation specialists, testers/QAs do the automation, developers do the automation.

In the case of automation specialists, often called software development engineers in test (SDETs) or quality engineers (QEs), the name describes the role. The automation specialists only do that: automation. In the microcosm, we have that economy of scale and specialization, but it’s a relatively small economy in the scope of the overall company because it’s exclusive to the team on which the automation specialists work; this is especially true if the company has many similar delivery teams. In this team implementation, the automation specialists are not testers/QAs; they work with the testers/QAs as well as the developers to develop the automation that helps the team be more effective or more efficient at testing.

It’s a great story to say that the testers/QAs do the automation. They automate instead of “doing it manually” and the team “goes faster”. To accomplish this, the testers/QAs need time to gain the skills necessary for automation; they also need time to create and maintain the automation. Often, the “whole team does automation” variant can be more effective. In this model, the automation work, like other work, is not assigned to one role; anyone on the team with the appropriate skill set is empowered to perform or help with any task. Developers know how to program, so they can be a natural fit to create automation or pair with a tester to create automation.

A third, and recently hot, approach is “we don’t need testers” because we can have the developers do the automation. This has a direct correlation to the testers/QAs do the automation approach, namely that automation is something you “just do” without training or special skills; since automation is programming and developers know how to program, it’s a natural fit for them. Like testers, however, developers need time to gain the appropriate skills to appropriately implement automation.

Advantages

Since the automators, whoever they happen to be, can focus their implementations on the app or app area on which their team works, they can have deep domain knowledge of the software. This allows the automators, and therefore the automation, to be very specific and specialized in the automation they create. Since the automation software doesn’t have to be generalized across all application teams, there can be some cost savings in some cases; the automaton software, the frameworks and reusable libraries, in particular, don’t have to be all things to everyone so the automation can be created faster and likely at a lower cost.

Additionally, the deep domain knowledge enables the automators to discover hitherto unthought-of paths in the code that merit testing. This is difficult to impossible without the deep domain knowledge this approach permits.

Distributed automation teams are especially appropriate when the teams are working on disparate products where there is little-to-no overlap between the application or system technologies.

Limitations

If automation specialists are creating the automation, these automators are often over-burdened or under-utilized. If there’s too much automation to create, not all of it gets created. This is not necessarily a bad thing, but it can be sub-optimal if critical or high-value automation can’t be implemented simply because the team “doesn’t have time”. Conversely, if there’s “not enough” automation work to fully occupy the automators, they may be idle and the team loses potential value.

That’s OK, let’s “just” have the testers/QAs do automation. Bzzt, play again. There is an unfortunate perception that automation is something that you “just do”. I just brush my teeth, I just eat lunch, I just run some test cases, and I just create some automation. As much as it would be great if this were true, it’s just not. What typically happens is that the testers/QAs are “tasked” with automation with little-to-no guidance or training and this effort is heaped on top of their existing work, which continues to grow; when was the last time a team released a new version of their software with fewer features? Unless the organization is tolerant of a “let’s slow down now to go faster later” strategy, this approach is fraught with peril and the likely outcome will be “automation can’t work here”.

Developers do the automation has a similar issue. Leadership wants this to happen with negligible impact to effort estimates even in the short term. For teams that can’t embrace the “let’s slow down now to go faster later” strategy, one of two outcomes is usually the result. One outcome is that the created automation is largely confirmatory; the automation shows that the software can work under very specific conditions and other conditions are not accounted for; the developers are often required to perform heroic efforts to accomplish even this due to needing to attain an on-time delivery. The other outcome is that automation takes “too long” so it doesn’t work here, and the automation endeavor is abandoned.

If the automation creation is truly distributed, it is difficult or impossible for the teams to leverage each other’s work. Even the most well-meaning teams that try to develop cooperatively will have trouble doing so in a consistent manner. The incentive for the individual teams is to achieve the goals set for those teams. This work is not purposely completed to the detriment of the other team, but there is usually little-to-no incentive for teams to cooperate at that level. Senior leadership is usually not thrilled to pin their organization’s success on the “goodwill” of another team; similarly, those same leaders are not likely to jeopardize their organization’s success by spending some of their team’s cycles to help another team’s automation implementation if the need comes at a critical time.

If code sharing is still desired, some of the hard dependencies can be mitigated by having a shared codebase, something like an “internal open source” repository. The primary challenge there is when conflicting changes are made to the shared code. How is that conflict resolved? Often, even in the best-intentioned companies, organizational incentives and directives reign so the teams agree to fork the shared automation repository with the well-intended agreement to merge after the next business milestone. Sadly, that seldom happens; there’s always another business milestone.

The Hybrid Approach – An Oft-Appropriate Middle Ground

So, let’s split the difference, shall we?

As noted above, there are advantages and disadvantages to each of the preceding approaches. What if we meld the two? Let’s take the strengths of each team layout and attempt to mitigate the disadvantages. The goal is to let each of the disciplines focus on its strengths.

In this approach, the automation development is divided into two competencies: the base team and the distributed teams.

The Base Team

The base team is responsible for all the shared infrastructure. Need a wrapper around Selenium’s WebElement? The base team handles that. Need an encapsulation around Microsoft’s API library? The base team handles that, too. Need some helper functions to reduce the effort required to create or maintain automation? That’s right, the base team handles that.

The base team comprises experienced automation developers who are in the best position to provide a general framework, platform, and infrastructure for automation creation. The base team is also tasked with the training, coaching, and guidance of the teams implementing the actual automation.

It’s also responsible for coaching on the appropriate practices for the company. This team is trained in automation strategy, implementation, and theory. They provide the base infrastructure implementation and many, if not all, of the shared components used across the distributed teams but they understand and can coach on the theory behind the strategy and implementation.
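
As a concrete, purely hypothetical example of the kind of shared component a base team might own, consider a small retry wrapper around a click:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverException;

// Shared helper owned by the base team; distributed teams call this
// instead of re-implementing retry logic in every suite.
public final class Clicks {
    public static void clickWithRetry(WebDriver driver, By locator, int attempts) {
        for (int i = 1; i <= attempts; i++) {
            try {
                driver.findElement(locator).click();
                return;
            } catch (WebDriverException e) {
                if (i == attempts) throw e; // give up after the final attempt
            }
        }
    }
}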

The Distributed Teams

The distributed teams consume and exploit the value provided by the base team. They are domain experts who have intimate knowledge of the software being tested and of how to use the shared automation created by the base team. This is the team that writes the automation that helps the testers.

Risks

This is not a panacea.

If the base team isn’t staffed with enough appropriate people to service all of the distributed teams, the base team will become a bottleneck, actually impeding progress instead of facilitating it. If getting enough appropriate base team staff members is a challenge, the previously mentioned internal open-source model can be valuable. This model has a higher chance of success with a base team than with a purely distributed model because the base team is the steward of the shared code. Care must be taken, however, to avoid the next risk.

A dedicated base team runs the risk of becoming or at least being viewed as an ivory tower, i.e., delivering intellectual solutions and directives as opposed to practical and workable ones. One way to mitigate this risk is to give the base team a directive to be collaborative as opposed to dictatorial, conversations as opposed to edicts. Another way is to include members of the distributed team as stakeholders, which they indeed are, in any prioritization and feature creation discussions.

Sadly, as in many other corporate aspects, politics often play a role when organizing a base team. The base team is seen as strategic: whoever controls the base team owns their own automation destiny. When one application team or organization controls the base team, the non-controlling teams and organizations are at a disadvantage when conflicts arise or when the controlling team must make difficult business decisions. One way to mitigate this situation is to organize the base team in a “shared services” organization, such as the organization that houses IT and security.

Recommendations?

There’s not really a wrong way to organize an automation team. The key is to be appropriate; let the context of what the company and organizations need and will tolerate guide your choice. The hybrid team can be very powerful and has most of the advantages of both the dedicated team and the distributed team approaches. That said, the hybrid approach is not appropriate for every company or organization. There is a large amount of trust and some organizational overhead with this approach; there’s also a funding facet. Choose an appropriate approach, then be prepared to evolve if the initial choice becomes inappropriate.

[Editor’s Note: We are grateful to Paul Grizzaffi for contributing his blog post. If you are looking for some real-world examples that echo Paul’s hypotheses and conclusions, check out Marie Drake’s webinar on testing a design system at News UK and Priyanka Halder’s webinar on high performance testing at GoodRX.]

For More Information

How Do I Link GitHub Actions with Applitools? https://applitools.com/blog/link-github-actions/ Mon, 27 Jul 2020 23:45:33 +0000 https://applitools.com/?p=20257 At Applitools, we have tutorials showing you how to integrate Applitools with commonly-used CICD orchestration engines. You can already link Applitools with Jenkins, Travis, CircleCI. Each of these orchestration engines...


At Applitools, we have tutorials showing you how to integrate Applitools with commonly-used CICD orchestration engines. You can already link Applitools with Jenkins, Travis, CircleCI. Each of these orchestration engines can automatically launch visual validation with Applitools as part of a build process.

What about GitHub Actions?

What is “GitHub Actions”?

GitHub Actions is an API for cause and effect on GitHub: orchestrate any workflow, based on any event, while GitHub manages the execution, provides rich feedback and secures every step along the way. With GitHub Actions, workflows and steps are just code in a repository, so you can create, share, reuse, and fork your software development practices.

GitHub Actions makes it easier to automate how you build, test, and deploy your projects on any platform, including Linux, macOS, and Windows. This provides Fast CI/CD for any OS, any language, and any cloud!

Applitools provides automated Visual AI testing. It is easy to integrate Applitools with GitHub Actions, letting developers test their code by running visual tests in a GitHub Actions workflow.

For this article, I have chosen a Cypress automation project with a single Applitools visual test, created earlier in my GitHub repo:

https://github.com/satishapplitools/Cypress-Applitools-GitActions.git

Configuring GitHub Actions workflow on the repository

Click on the Actions tab in your GitHub repository.

GitHub Actions has an option to choose from several starter workflows based on programming languages, tools, technologies, and deployment options. The workflow setup produces a YAML file at the end.

For this article, I have chosen “Skip this and set up a workflow yourself” to use an existing YAML file.

If you want to follow along, do the same. I named my file main.yaml. Copy the following code and paste the content into your YAML file.

View the code on Gist.
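
The Gist itself isn’t inlined here, but the shape of such a workflow is roughly the following sketch; the step versions and Node version are illustrative, while the secret and batch wiring match the description below:

name: visual-tests

on: [push, pull_request]

jobs:
  cypress-run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci
      # Run the Cypress tests, which include the Applitools visual test
      - run: npx cypress run
        env:
          # API key stored as a GitHub secret
          APPLITOOLS_API_KEY: ${{ secrets.APPLITOOLS_API_KEY }}
          # Tie the Applitools batch to this commit
          APPLITOOLS_BATCH_ID: ${{ github.sha }}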

For Applitools integration, we need the Applitools API key, and we pass the Git commit SHA as the Applitools batch ID. In the code example above, I have set up the Applitools API key as a GitHub secret, which is passed to an environment variable in the GitHub Actions workflow (as shown below). To learn more about GitHub Actions secrets, see the GitHub documentation.

To create a GitHub secret in the repo, do the following:

  • Click “Settings” on the GitHub repository.
  • Navigate to “Secrets” on the left menu and click “New Secret”.
  • Enter a name and value. For this article, I have used “APPLITOOLS_API_KEY” as the secret name and given my API key. To obtain an Applitools API key, please visit https://applitools.com/
  • Set the GitHub commit SHA as the Applitools batch ID using environment variables.

This is already done in the YAML file above. No changes required.

For more detailed information, see GitHub’s guide on creating and storing encrypted secrets.

In addition to the above, we need to amend applitools.config.js to include the API key and batch ID, as shown below:

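A sketch of that amendment, assuming the configuration keys of the eyes-cypress SDK and reading both values from the environment rather than hard-coding them:

// applitools.config.js
module.exports = {
  apiKey: process.env.APPLITOOLS_API_KEY,
  // Use the commit SHA exported by the workflow as the batch ID
  batchId: process.env.APPLITOOLS_BATCH_ID,
};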

Run the GitHub Actions workflow integrated with Applitools

Now it’s time to run the integration. Here are the steps:

  • Create a new branch from master and name it “feature branch”.
  • Make some changes in the feature branch and create a pull request to the master branch.
  • The GitHub Actions workflow detects that changes were made to the code, automatically kicks off, and runs the Cypress tests integrated with Applitools.
  • Click on “Pull requests” to see the status; we should see “checks are in progress” while the workflow is running.

Let us wait for the workflow to complete. Workflow logs can be seen while the step in the workflow is running.

  • After the workflow is complete, click on “Pull requests” and open the pull request to look at the commit details and the status of Applitools visual tests.
  • Review results from GitHub Actions workflow integrated with Applitools

Applitools SCM integration shows the results of the visual tests and compares the screenshots between the source and destination branches. If any visual tests have unresolved status, the comparison with the baseline has differences.

  • Click on “test/applitools” details to check the results on the Applitools Test Manager dashboard.
  • If there are any unresolved steps, the status of “test/applitools” shows that visual tests detected diffs, and “scm/applitools” has a Pending status.
  • A user can review the results in the Applitools Test Manager dashboard and accept or reject differences.

Assuming the user has accepted the differences, we should see the status “passed” for the “test/applitools” step on the pull request/commit screen.

Click on “scm/applitools” to compare and merge screenshots between source and destination branches.

To learn more about the Applitools SCM integration feature for GitHub and Bitbucket, see below.

A successful GitHub Actions build integrated with Applitools shows all checks passed.

Key Benefits of this integration

  • The seamless integration of GitHub Actions and Applitools enables developers to test code faster and in parallel, reducing feedback time.
  • Applitools Visual AI tests the UI with less code and zero flakiness, saving developer time and increasing productivity.
  • It increases code quality and confidence to merge or deploy code.

For More Information

UI Tests In CICD – Webinar Review https://applitools.com/blog/ui-tests-in-cicd/ Fri, 24 Apr 2020 23:34:46 +0000 https://applitools.com/?p=17909 What does it take to add UI tests in your CICD pipelines? On March 12, Angie Jones, Senior Developer Advocate at Applitools, sat down with Jessica Deen, Senior Cloud Advocate...


What does it take to add UI tests in your CICD pipelines?

On March 12, Angie Jones, Senior Developer Advocate at Applitools, sat down with Jessica Deen, Senior Cloud Advocate at Microsoft, for a webinar to discuss their approaches to automated testing and CI.

Angie loves to share her experiences with test automation. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, as well as writing tutorials and blog posts on angiejones.tech.

As a Master Inventor, Angie is known for her innovative and out-of-the-box thinking style which has resulted in more than 25 patented inventions in the US and China. In her spare time, Angie volunteers with Black Girls Code to teach coding workshops to young girls in an effort to attract more women and minorities to tech.

Jessica’s work at Microsoft focuses on Azure, Containers, OSS, and, of course, DevOps. Prior to joining Microsoft, she spent over a decade as an IT Consultant / Systems Administrator for various corporate and enterprise environments, catering to end users and IT professionals in the San Francisco Bay Area.

Jessica holds two Microsoft Certifications (MCP, MSTS), 3 CompTIA certifications (A+, Network+, and Security+), 4 Apple Certifications, and is a former 4-year Microsoft Most Valuable Professional for Windows and Devices for IT.

The Talk

Angie and Jessica broke the talk into three parts. First, Angie would discuss factors anyone should consider in creating automated tests. Second, Angie and Jessica would demonstrate writing UI tests for a test application.  Finally, they would work on adding UI tests to a CI/CD pipeline.

Let’s get into the meat of it.

Four Factors to Consider in Automated Tests

Angie first introduced the four factors you need to consider when creating test automation:

  • Speed
  • Reliability
  • Quantity
  • Maintenance

She went through each in turn.

Speed

Angie started off by making this point:

“When your team checks in code, they want to know if the check-in is good as quickly as possible. Meaning, not overnight, not hours from now.”

Angie points out that the talk covers UI tests primarily because lots of engineers struggle with UI testing. However, most of your check-in tests should not be UI tests because they run relatively slowly. From this, she referred to the testing pyramid idea:

  • Most of your tests are unit tests – they run the fastest and should pass (especially if written by the same team that wrote the code)
  • The next largest group is either system-level or business-layer tests. These tests don’t require a user interface and show the functionality of units working together
  • UI tests have the smallest number of total tests and should provide sufficient coverage to give you confidence in the user-level behavior.

While UI tests take time, Angie points out that they are the only tests showing user experience of your application. So, don’t skimp on UI tests.

Having said that, when UI tests become part of your build, you need to make sure that your build time doesn’t become bogged down with your UI tests. If your full test run takes over 15 minutes, that’s way too long.

To keep your testing time to a minimum, Angie suggests running UI tests in parallel. To determine whether you need to split one test run into several parallel runs, give yourself a time limit. Let’s say your build needs to complete in five minutes. Once you have a time limit, you can figure out how many parallel runs to set up – with the 15-minute example, you might need to divide into three or more parallel runs.

Reliability

Next, you need reliable tests. Dependable. Consistent. 

Unreliable tests interfere with CI processes. False negatives, said Angie, plague your team by making them waste time tracking down errors that don’t exist. False positives, she continues, corrupt your product by permitting the check-in of defective code. And, false positives corrupt your team because bugs found later in the process interfere with team cohesion and team trust. 

For every successful CICD team, check-in success serves as the standard for writing quality code. You need reliable tests.

How do you make your tests reliable?

Angie has a suggestion that you make sure your app includes testability – which involves you leaning on your team. If you develop code, grab one of your test counterparts. If you test, sit down with your development team. Take the opportunity to discuss app testability.

What makes an app testable? Identifiers. Any test runner uses identifiers to control the application. And, you can also use identifiers to validate outputs. So, a consistent regime to create identifiers helps you deliver consistency. 

If you lack identifiers, you get stuck with CSS selectors or XPath selectors. Those can get messy – especially over time.
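
For example, a dedicated test identifier keeps a locator trivial and stable, while its absence forces a structural XPath; the attribute name here is just a convention, not anything prescribed by the talk:

// With an identifier in the markup:
//   <button data-test-id="start-chatting">Start Chatting</button>
driver.findElement(By.cssSelector("[data-test-id='start-chatting']")).click();

// Without one, the locator couples the test to page structure
// and breaks whenever the layout shifts:
driver.findElement(By.xpath("//div[2]/form/div[3]/button[1]")).click();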

Another way to make your app testable, Angie says, requires code that lets your test set initial conditions. If your UI tests depend on certain data values, then you need code to set those values prior to running those tests. Your developers need to create that code – via API or stored procedure – to ensure that the tests always begin with the proper conditions. This setup code can help you create the parallel tests that help your tests run more quickly.

You can also use code to restore conditions after your tests run – leaving the app in the proper state for another test.

Quantity

Next, Angie said, you need to consider the number of tests you run.

There is a common misconception that you need to automate every possible test condition you can think about, she said. People get into trouble trying to do this in practice.

First, lots of tests increase your test time. And, as Angie said already, you don’t want longer test times.

Second, you end up with low-value as well as high-value UI tests. Angie asks a question to help triage her tests:

“Which test would I want to stop an integration or deployment? If I don’t want this test to stop a deployment, it doesn’t get automated. Or maybe it’s automated, but it’s run like once a day on some other cycle, not on my CICD.”

Angie also asks about the value of the functionality:

“Which test exercises critical, core functionality? Those are the ones you want in there. Which tests cover areas of my application that have a history of failing? You’re nervous anytime you have to touch that code. You want some tests around that area, too.”

Lastly, Angie asks, which tests provide information already covered by other tests in the pipeline? So many people forget to think about total coverage. They create repetitive tests and leave them in the pipeline. And, as many developers know, a single check-in that triggers multiple failures can do so because it was a single code error that had been tested, and failed, multiple times. 

“Don’t be afraid to delete tests,” Angie said. If it’s redundant, get rid of it, and reduce your overall test code maintenance. She talked about how long it took her to become comfortable with deleting tests, but she appreciates the exercise now. 

Maintenance

“Test code is code,” Angie said. “You need to write it with the same rules, the same guidelines, the same care that you would any production code.”

Angie continued, saying that people ask, “‘Well, Angie, why do I need to be so rigorous with my test code?’”

Angie made the point that test code monitors production code. In your CICD development, the state of the build depends on test acceptance. If you build sloppy test code, you run the risk of false positives and false negatives.

As your production code changes, your test code must change as well. The sloppier your test code, the more difficult time you will have in test maintenance. 

Writing test code with the same care as you write production gives you the best chance to keep your CICD pipeline in fast, consistent delivery. Alternatively, Angie said, if your test code stays a mess, you will have a tendency to avoid code maintenance. Avoiding maintenance will lead to untrustworthy builds. 

Writing UI Tests – Introduction

Next, Angie introduced the application she and Jessica were using for their coding demonstration. The app – a chat app – looks like this:

The welcome screen asks you to enter your username and click “Start Chatting” – the red button. Once you have done so, you’re in the app. Going forward, you enter text and click the “Send” button and it shows up on a chat screen along with your username. Other users can do the same thing.

With this as a starting point, Angie and Jessica began the process of test writing. 

Writing UI Tests – Coding Tests

Angie and Jessica were on a LiveShare of code, which looked like this:

From here, Angie started building her UI tests for the sign-in functionality. And, because she likes to code in Java, she coded in Java. 

All the objects she used were identified in the BaseTests class she inherited.

Her full code to sign-in looked like this:

public class ChattyBotTests extends BaseTests {
  private ChatPage chatPage;

  @Test
  public void newSession(){
     driver.get(appUrl);
     homePage.enterUsername("angie");
     chatPage = homePage.clickStartChatting();
     validateWindow();
  }

The test code gets the URL previously defined in the BaseTests class, fills in the username box with “angie”, and clicks the “Start Chatting” button. Finally, Angie added the validateWindow() method inherited from BaseTests, which uses Applitools visual testing to validate the new screen after the Start Chatting button has been clicked.

Next, Angie wrote the code to enter a message, click send message, and validate that the message was on the screen.

  @Test
  public void enterMessage(){
     chatPage.sendMessage("hello world");
     validateWindow();
  }

The inherited chatPage.sendMessage method both enters the text and clicks the Send Message button. validateWindow() again checks the screen using Applitools.

Are these usable as-is for CICD? Nope.

Coding Pre-Test Setup

If we want to run tests in parallel, these tests, as written, block parallel operation, since enterMessage() depends on newSession() being run previously.

To solve this, Angie creates a pre-test startSession() that runs before all tests. It includes the first three lines of newSession(), which go to the app URL, enter “angie” as the username, and click the “Start Chatting” button. Next, Angie modifies her newSession() test so all it does is the validation.

  @Before
  public void startSession(){
     driver.get(appUrl);
     homePage.enterUsername("angie");
     chatPage = homePage.clickStartChatting();
  }

  @Test
  public void newSession(){
     validateWindow();
 }

With this @Before setup, Angie can create independent tests.

Adding Multi-User Test

Finally, Angie added a multi-user test. In this test, she assumed the @Before gets run, and her new test looked like this:

  @Test
  public void multiPersonChat(){

     //Angie sends a message
     chatPage.sendMessage("hello world");

     //Jessica sends a message
     WindowUtils.openNewTab(driver, appUrl);
     homePage.enterUsername("jessica");
     chatPage = homePage.clickStartChatting();
     chatPage.sendMessage("goodbye world");
     WindowUtils.switchToTab(driver, 1);
     validateWindow();
  }

Here, user “angie” sends the message “hello world”. Then, Angie codes the browser to:

  • open a new tab for the app URL,
  • create a new chat session for “jessica”,
  • have “jessica” send the message “goodbye world”,
  • switch back to the original tab, and
  • validate the window.

Integrating UI Tests Into CICD

Now, it was Jessica’s turn to control the code. 

Before she got started coding, Jessica shared her screen from Visual Studio Code, to demonstrate the LiveShare feature of VS Code:

Angie and Jessica were working on the same file using LiveShare. LiveShare highlights Angie’s cursor on Jessica’s screen. 

When Angie selects a block of text, the text gets highlighted on Jessica’s screen.

This extension to Visual Studio Code makes it easy to collaborate on coding projects remotely. It’s available for download on the Visual Studio Code Marketplace. It’s great for pair programming when compared with remote screen share.

To begin the discussion of using these tests in CICD, Jessica started describing the environment for running the tests from a developer perspective versus a CICD perspective. A developer might imagine running locally, with IntelliJ or command line opening up browser windows. In contrast, CICD needs to run unattended.  So, we need to consider headless.

Jessica showed how she coded for different environments in which she might run her tests.

Her code explains that the environment gets defined by a variable called runWhere, which can equal one of three values (see the sketch after this list):

  • local – uses a ChromeDriver
  • pipeline – uses a dedicated build server and sets the --headless and --no-sandbox options for ChromeDriver (note: for Windows you add the option “--disable-gpu”)
  • container – instructs the driver to be a RemoteWebDriver based on the Selenium Hub remote URL and passes the --headless and --no-sandbox Chrome options
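
A sketch of how that switch might look in a base test class; the class name, hub URL, and flag handling are illustrative, not Jessica’s actual code:

import java.net.MalformedURLException;
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DriverFactory {
    public static WebDriver create() throws MalformedURLException {
        String runWhere = System.getenv("RUNWHERE");
        ChromeOptions options = new ChromeOptions();
        if ("pipeline".equals(runWhere)) {
            // Build agents have no display, so run headless
            options.addArguments("--headless", "--no-sandbox");
            return new ChromeDriver(options);
        } else if ("container".equals(runWhere)) {
            // Talk to the Selenium Hub service instead of a local browser
            options.addArguments("--headless", "--no-sandbox");
            return new RemoteWebDriver(new URL("http://selenium_hub:4444/wd/hub"), options);
        }
        return new ChromeDriver(); // local default
    }
}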

Testing Locally

First, Jessica needed to verify that the tests ran using the local settings.

Jessica set the RUNWHERE variable to ‘local’ using the command

export RUNWHERE=local

She had already exported other settings, such as her Applitools API key, so she could use Applitools.

Since Jessica was already in her visual test folder, she ran her standard Maven command:

mvn -f visual_tests/pom.xml clean test

The tests ran as expected with no errors. The test opened up a local browser window and she showed the tests running.

Testing Pipeline

Next, Jessica set up to test her pipeline environment settings. 

She changed the RUNWHERE variable using the command:

export RUNWHERE=pipeline

Again, she executed the same Maven tests:

mvn -f visual_tests/pom.xml clean test

She expected the tests to run using her pipeline server settings, meaning the tests would run without opening a browser window on her local machine.

This is important because whatever CICD pipeline you use – Azure DevOps, Github Actions, Travis CI, or any traditional non-container-based CICD system – will want to use this headless interaction with the browser that keeps the GUI from opening up and possibly throwing an error.

Once these passed, Jessica moved on to testing with containers.

Testing Containers

Looking back, the container-based tests used a call to RemoteWebDriver, which in turn called selenium_hub:

Selenium_hub let Jessica spin up whatever browser she wanted. To specify what she wanted, she used a docker-compose file, docker-compose.yaml:

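The compose file isn’t shown in the post; a minimal version for Selenium 3-era images might look like this sketch (image tags are illustrative):

version: '3'
services:
  selenium_hub:
    image: selenium/hub:3.141.59
    ports:
      - "4444:4444"   # expose the hub so it can be port-forwarded locally
  chrome_node:
    image: selenium/node-chrome:3.141.59
    environment:
      - HUB_HOST=selenium_hub   # register with the hub by service name
    depends_on:
      - selenium_hub
  firefox_node:
    image: selenium/node-firefox:3.141.59
    environment:
      - HUB_HOST=selenium_hub
    depends_on:
      - selenium_hub
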
These container-based approaches align with the current use of cloud-native pipelines for CICD. Jessica noted you can use Jenkins, Jenkins X for Kubernetes native, and CodeFresh, among others. Jessica decided to show CodeFresh. It’s a CICD pipeline dedicated to Kubernetes and microservices.  Every task runs in a container.

Selenium_hub let Jessica choose to run tests on both a chrome_node and a firefox_node in her container setup.

She simply needed to modify her RUNWHERE variable:

export RUNWHERE=container

However, before running her tests, she needed to spin up her docker-compose on her local system. And, because selenium_hub wasn’t something that her system could identify by DNS at that moment (it was running on her local system), she ensured that the selenium_hub running locally would port forward onto her local system’s 127.0.0.1 connection. Once she made these changes, and changed the container definition to use 127.0.0.1:4444, she was ready to run her maven pom.xml file.

When the tests ran successfully, her local validation confirmed that her tests should run in her pipeline of choice.

Jessica pointed out that CICD really comes down to a collection of tasks you would run manually.

After that, Jessica said, we need to automate those tasks in a definition file. Typically, that’s YAML, unless you really like pain and choose Groovy in Jenkins… (no judgement, she said).

Looking at Azure DevOps

Next, Jessica did a quick look into Azure DevOps.

Inside Azure DevOps, Jessica showed that she had a number of pipelines already written, and she chose the one she had set aside for the project. This pipeline already had three separate stages: 

  • Build Stage
  • Deploy to Dev
  • Deploy to Prod

Opening up the build stage shows all the steps contained just within that stage in its 74 seconds of runtime:

Jessica pointed out that this little ChattyBot application is running on a large cluster in Azure. It’s running in Kubernetes, and it’s deployed with Helm.  The whole build stage includes:

  • using JFrog to package up all the Maven dependencies and run the Maven build
  • JFrog Xray to make sure that the dependencies don’t introduce security issues
  • creating a Helm chart and packaging it
  • sending Slack notifications

This is a pretty extensive pipeline. Jessica wondered how hard it would be to integrate Angie’s tests into an existing environment.

But, because of the work Jessica had done to make Angie’s tests ready for CICD, it was really easy to add those tests into the deploy workflow.

First, Jessica reviewed the Deploy to Dev stage.

Adding UI Tests in Your CICD Pipeline

Now, Jessica started doing the work to add Angie’s tests into her existing CICD pipeline. 

After the RUNWHERE=container tests finished successfully, Jessica went back into VS Code, where she started inspecting her azure-pipelines.yml file.

Jessica made it clear that she wanted to add the tests everywhere that it made sense prior to promoting code to production:

  • Dev
  • Test
  • QA
  • Canary

Jessica reinforced Angie’s earlier points – these UI tests were critical and needed to pass. So, in order to include them in her pipeline, she needed to add them in an order that makes sense.

In her Deploy to Dev pipeline, she added the following:

   - bash: |
       # run a check to see when $(hostname) is available
       attempt_counter=0
       max_attempts=5
       until $(curl --output /dev/null --silent --head --fail https://"$(hostname)"/); do
         if [ ${attempt_counter} -eq ${max_attempts} ]; then
           echo "Max attempts reached"
           exit 1
         fi

         printf "."
         attempt_counter=$((attempt_counter+1))
         sleep 20
       done
     displayName: HTTP Check

This script checks whether the URL at $(hostname) is available, sleeping 20 seconds between attempts and giving up after five tries. Each attempt, it prints a “.” to show it is working. And, the name “HTTP Check” shows what the step is doing.

Now, to add the tests, Jessica needed to capture the environment variable declarations and then run the maven commands.  And, as Jessica pointed out, this is where things can become challenging, especially when writing the tests from scratch, because people may not know the syntax.   

Editing the azure-pipelines.yml in Azure DevOps

Now, Jessica moved back from Visual Studio Code to Azure DevOps, where she could also edit an azure-pipelines.yml file directly in the browser.

And, here, on the right side of her screen (I captured it separately) are tasks she can add to her pipeline. The ability to add tasks makes this process really, really simple and eliminates a lot of the errors that can happen when you code by hand.

One of those tasks is an Applitools Build Task that she was able to add by installing an extension.

Just clicking on this Applitools Build Task adds it to the azure_pipelines.yml file.  

And, now Jessica wanted to add her maven build task – but instead of doing a bash script, she wanted to use the maven task in Azure DevOps. Finding the task and clicking on it shows all the options for the task.

The values are all defaults. Jessica changed the address for her pom.xml file to visual_tests/pom.xml (the file location for the test file), set her goal as ‘test’ and options as ‘clean test’. She checked everything else, and since it looked okay, she clicked the “Add” button.  The following code got added to her azure-pipelines.yml file.

   - task: Maven@3
     inputs:
       mavenPomFile: 'visual_tests/pom.xml'
       goals: 'test'
       options: 'clean test'
       publishJUnitResults: true
       testResultsFiles: '**/surefire-report/TEST-*.xml'
       javaHomeOption: 'JDKVersion'
       mavenVersionOption: 'Default'
       mavenAuthenticationFeed: false
       effectivePomSkip: false
       sonarQubeRunAnalysis: false

Going Back To The Test Code

Jessica copied the Applitools Build Task and Maven task code back into the azure-pipelines.yml file she was already editing in Visual Studio Code.

Then, she added the environment variables needed to run the tests.  This included the Applitools API Key, which is a secret value from Applitools. In this case, Jessica defined this variable in Azure DevOps and could call it by the variable name.

Beyond the Applitools API Key, Jessica also set the RUNWHERE environment variable to 'pipeline' and the TEST_START_PAGE environment variable to the $(hostname) variable – the same one used elsewhere in her pipeline. All this made her tests dynamic.

The added code reads:

     env:
       APPLITOOLS_API_KEY: $(APPLITOOLS_API_KEY)
       RUNWHERE: pipeline
       TEST_START_PAGE: https://$(hostname)/
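
Since the env block appears on its own above, here is a sketch of how it attaches to the Maven task in azure-pipelines.yml. This is illustrative rather than Jessica's exact file, and it assumes the secret APPLITOOLS_API_KEY variable is defined in Azure DevOps:

  - task: Maven@3
    displayName: Run Visual Tests
    inputs:
      mavenPomFile: 'visual_tests/pom.xml'
      goals: 'test'
      options: 'clean test'
    env:
      # secret variables must be mapped explicitly into the step's environment
      APPLITOOLS_API_KEY: $(APPLITOOLS_API_KEY)
      RUNWHERE: pipeline
      TEST_START_PAGE: https://$(hostname)/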

So, now, the tests are ready to commit.

One thing Jessica noted is that Live Share automatically adds the co-author's ID to the commit whenever two people have jointly worked on code. It's a cool feature.

Verifying That UI Tests Work In CICD

So, now that the pipeline code had been added, Jessica wanted to demonstrate that the visual validation with Applitools worked as expected and caught visual differences.

Jessica modified the ChattyBot application so that, instead of reading:

“Hello, DevOps Days Madrid 2020!!!”

it read:

“Hello, awesome webinar attendees!”

She saved the change, double-checked the test code, saw that everything looked right, and pushed the commit.

This kicked off a new build in Azure DevOps. Jessica showed the build underway. She said that, with the visual difference, we expect the Deploy to Dev pipeline to fail. 

Since we had time to wait, she showed what had happened on an earlier build, one she had run just before the webinar. During that build, Deploy to Dev passed. She showed how Azure DevOps seamlessly links to the Applitools dashboard – assuming you were logged in, you would see the dashboard screen just by clicking on the Applitools tab.

Here, the green boxes on the Status column show that the tests passed.

Jessica drilled into the enterMessage test to show how the baseline and the new checkpoint compared (even though the comparison passed), just to show the Applitools UI.  

As Jessica said, were any part of this test to be visually different due to color, sizing, text, or any other visual artifact, she could select the region and give it a thumbs-up to approve it as a change (and cause the test to pass), or give it a thumbs-down and inform the dev team of the unexpected difference.

And, she has all this information from within her Azure DevOps build.

What If I Don’t Use Azure DevOps?

Jessica said she gets this question all the time, because not everyone uses Azure DevOps.

You could be using Azure DevOps, TeamCity, Octopus Deploy, Jenkins – it doesn't matter. You're still going to be organizing the same tasks. You will need an HTTP check to make sure your site is up and running. You will need access to your environment variables. And then, finally, you will need to run your Maven command-line test.

Jessica jumped into GitHub Actions, where she had an existing pipeline, and she showed that her deploy step looked identical.

It had an http check, an Applitools Build Task, and a call for Visual Testing. The only difference was that the Applitools Build Task included several lines of bash to export Applitools environment variables.

The one extra step she added, just as a sanity check, was to set the JDK version.
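
Her exact workflow file wasn't shown line by line, but a minimal GitHub Actions sketch of that deploy job could look like the following – the HOSTNAME_URL repository variable is an assumption for illustration, not a name from the webinar:

  jobs:
    visual-tests:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        # the extra "sanity check" step: pin the JDK version
        - uses: actions/setup-java@v4
          with:
            distribution: 'temurin'
            java-version: '11'
        - name: HTTP Check
          run: |
            for attempt in 1 2 3 4 5; do
              curl --output /dev/null --silent --head --fail "https://$HOSTNAME_URL/" && exit 0
              printf "."
              sleep 20
            done
            echo "Max attempts reached"; exit 1
          env:
            HOSTNAME_URL: ${{ vars.HOSTNAME_URL }}   # assumed repository variable
        - name: Run Visual Tests
          run: mvn -f visual_tests/pom.xml clean test
          env:
            APPLITOOLS_API_KEY: ${{ secrets.APPLITOOLS_API_KEY }}
            RUNWHERE: pipeline
            TEST_START_PAGE: https://${{ vars.HOSTNAME_URL }}/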

And, while she was in GitHub Actions, she referred back to the container scenario. She noted the challenges of spinning up Docker Compose and services. For this reason, when looking at container tests, she pointed to Codefresh, which is Kubernetes-native.

Inside her Codefresh pipelines, everything runs in a container.

As she pointed out, by running on Codefresh, she didn't need a huge server to handle everything. Each container handled just what it needed to handle. Spinning up Docker Compose required just Docker. She needed only the JFrog image for Artifactory. Helm lint – again, just what she needed.

Jessica showed her pipelines before adding the visual tests, and then the Deploy Dev pipeline with the same three additions.

There’s the HTTP check, the Applitools Build Task, and Running Visual Tests.

The only real difference is that the visual tests ran alongside the services being spun up for the test.

This is really easy to do in your codefresh.yml file, and the syntax looks a lot like Docker Compose.
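
She didn't step through her codefresh.yml in detail, but a rough sketch of such a step could look like this – the chattybot service name and the build step reference are assumptions for illustration:

  version: "1.0"
  steps:
    run_visual_tests:
      title: Running Visual Tests
      image: maven:3-openjdk-11
      commands:
        - mvn -f visual_tests/pom.xml clean test
      environment:
        - RUNWHERE=pipeline
        - TEST_START_PAGE=http://chattybot:8080/
        - APPLITOOLS_API_KEY=${{APPLITOOLS_API_KEY}}
      services:
        composition:
          chattybot:                 # app under test, spun up next to the step
            image: '${{build_app}}'  # assumed earlier build step
            ports:
              - 8080

Just as in Docker Compose, the service is addressable by name from the step, which is why TEST_START_PAGE points at http://chattybot:8080/.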

Seeing the Visual Failure

Back in Azure DevOps, Jessica checked in on her Deploy to Dev step.  She already knew there was a problem from her Slack notifications.  

The error report showed that the visual tests all failed.

Clicking on the Applitools tab, she saw the following.

All three tests showed as unresolved. Clicking into the multiPersonChat test, Jessica saw this:

Sure enough, the text change from “Hello, DevOps Days Madrid 2020!!!” to “Hello, awesome webinar attendees!” caused a difference. We totally expected this difference, and we would find that this difference had also shown up in the other tests.

The change may not have been a behavioral change your tests expected, so you might not have thought to assert on the "Hello…" text or check for its modification. Applitools makes it easy to capture any visual difference.

Jessica didn’t go through this, but one feature in Applitools is the ability to use Auto Maintenance. With Auto Maintenance, if Jessica had approved the change on this first page, she could automatically approve identical changes on other pages. So, if this was an intended change, it would go from “Unresolved” to “Passed” on all the pages where the change had been observed.

Summing Up

Jessica handed the presentation back to Angie, who shared Jessica's link for the code from the webinar:

All the code from Angie and Jessica’s demo can be downloaded from:

https://aka.ms/jldeen/applitools-webinar

Happy Testing!


The post UI Tests In CICD – Webinar Review appeared first on Automated Visual Testing | Applitools.

]]>
How To Ace High-Performance Test for CI/CD https://applitools.com/blog/how-to-ace-high-performance-test-for-ci-cd/ Thu, 26 Mar 2020 15:14:00 +0000 https://applitools.com/?p=17388 If you run continuous deployment today, you need high-performance testing. You know the key takeaway shared by our guest presenter, Priyanka Halder: test speed matters. Priyanka Halder presented her approach...

The post How To Ace High-Performance Test for CI/CD appeared first on Automated Visual Testing | Applitools.

]]>

If you run continuous deployment today, you need high-performance testing. You know the key takeaway shared by our guest presenter, Priyanka Halder: test speed matters.

Priyanka Halder presented her approach to achieving success in a hyper-growth company through her webinar for Applitools in January 2020. The title of her speech sums up her experience at GoodRx:

“High-Performance Testing: Acing Automation In Hyper-Growth Environments.”

Hyper-growth environments focus on speed and agility. Priyanka focuses on the approach that lets GoodRx not only develop but also test features and releases while growing at an exponential rate.

About Priyanka

Priyanka Halder is head of quality at GoodRx, a startup focused on finding all the providers of a given medication for a patient – including non-brand substitutes – and helping over 10 million Americans find the best prices for those medications.  Priyanka joined in 2018 as head of quality engineering – with a staff of just one quality engineer. She has since grown the team 1200% and grown her team’s capabilities to deliver test speed, test coverage, and product reliability. As she explains, past experience drives current success.

Priyanka’s career includes over a dozen years of test experience at companies ranging from startups to billion-dollar companies. She has extensive QA experience in managing large teams and deploying innovative technologies and processes, such as visual validation, test stabilization pipelines, and CICD. Priyanka also speaks regularly at testing and technology conferences. She accepted invitations to give variations of this particular talk eight times in 2019.

One interesting note: she says she would like to prove to the world that 100% bug-free software does not exist.

Start With The Right Foundation

Three Little Pigs

Priyanka, as a mother, knows the value of stories. She sees the story of the Three Little Pigs as instructive for anyone trying to build a successful test solution in a hyper-growth environment. Everyone knows the story: three pigs each build their own home to protect themselves from a wolf. The first little pig builds a straw house in a couple of hours. The second little pig builds a home from wood in a day. The third little pig builds a solid infrastructure of brick and mortar – and that took a number of days. When the wolf comes to eat the pigs, he can blow down the straw house and the wood house, but the solid house saves the pigs inside.

Priyanka shares from her own experience: she has encountered many wolves in hyper-growth environments. The only safeguard comes from building a strong foundation. Priyanka describes a hyper-growth environment and how high-performance testing works. She describes the technology and team needed for high-performance testing. And she describes what she delivered (and continues to deliver) at GoodRx.

Define High-Performance Testing

So, what is high-performance testing?

Fundamentally, high-performance testing maximizes quality in a hyper-growth startup. To succeed, she says, you must embrace the ever-changing startup mentality, be one step ahead, and constantly provide high-quality output without being burned out.

Agile startups share many common characteristics:

  • Chaotic – you need to be comfortable with change
  • Less time – all hands on deck all the time for all the issues
  • Fewer resources – you have to build a team where veterans are mentors and not enemies
  • Market pressure – teams need to understand and assess risk
  • Reward – do it right and get some clear benefits and perks

If you do it right, it can lead to satisfaction. If you do it wrong, it leads to burnout. So – how do you do it right?

Why High-Performance Testing?

Leveraging data collected by another company, Priyanka showed how the technology for app businesses has changed drastically over the past decade. These differences include:

  • Scope – instead of running a dedicated app, or on a single browser, today’s apps run on multiple platforms (web app and mobile)
  • Frequency – we release apps on demand (not annually, quarterly, monthly or daily)
  • Process – we have gone from waterfall to continuous delivery
  • Framework – we used to use single-stack, on-premises software; today we use open-source, best-of-breed, cloud-based solutions for developing and delivering.

The assumptions of “test last” that may have worked a decade back can’t work anymore. So, we need a new paradigm.

How To Achieve High-Performance Testing

Priyanka talked about her own experience. Among other things, teams need to know that they will fail early as they try to meet the demands of a hyper-growth environment. Her approach, based on her own experiences, is to ask questions:

  • Does the team appreciate that failures can happen?
  • Does the team have inconsistencies? Do they have unclear requirements? Set impossible deadlines? Use waterfall while claiming to be agile? Note those down.

Once you know the existing situation, you can start to resolve contradictions and issues. For example, you can use a mind map to visualize the situation. You can divide issues and focus on short term work (feature team for testing) vs. long term work (framework team). Another important goal – figure out how to find bugs early (aka Shift Left). Understand which tools are in place and which you might need. Know where you stand today vis-a-vis industry standards for release throughput and quality. Lastly, know the strength of your team today for building an automation framework, and get AI and ML support to gain efficiencies.

Building a Team

Next, Priyanka spoke about what you need to build a team for high-performance testing.


In the past, we used to have a service team. They were the QA team and had their own identity. Today, we have true agile teams, with integrated pods where quality engineers are the resource for their group and integrate into the entire development and delivery process.

So, in part you need skills. You need engineers who know test approaches that can help their team create high-quality products. Some need to be familiar with behavior-driven development or test-driven development. Some need to know the automation tools you have chosen to use. And some need to be thinking about design-for-testability.

One huge part of test automation involves framework. You need a skill set familiar with building code that self-identifies element locators, builds hooks for automation controls, and ensures consistency between builds for automation repeatability.

Beyond skills, you need individuals with confidence and flexibility. They need to meld well with the other teams. In a truly agile group, team members distribute themselves through the product teams as test resources. While they may connect to the main quality engineering team, they still must be able to function as part of their own pod.

Test Automation

Priyanka asserts that good automation makes high-performance testing possible.

In days gone by, you might have bought tools from a single vendor. Today, open source provides a rich set of automation solutions. Open source generally has lower maintenance costs, lets you ship faster, and expands more easily.


Open source tools come with communities of users who document best practices for using those tools. You might even learn best-practice processes for integrating with other tools. The communities give you valuable lessons so you can learn without having to fail (or learn from the failures of others).

Priyanka describes aspects of software deployment processes that you can automate.  Among the features and capabilities you can automate:

  • Assertions on Action
  • Initialization and Cleanup
  • Data Modeling/Mocking
  • Configuration
  • Safe Modeling Abstractions
  • Wrappers and Helpers
  • API Usage
  • Future-ready Features
  • Local and Cloud Setups
  • Speed
  • Debugging Features
  • Cross Browser
  • Simulators/Emulators/Real Devices
  • Built-in reporting or easy to plug in

Industry Standards

You can measure all sorts of values from testing. Quality, of course. But what else? What are the standards these days? What are typical run times for test automation?

Priyanka shares data from Sauce Labs about standards. Sauce surveyed a number of companies and discussed benchmark settings for four categories: test quality, test run time, test platform coverage, and test concurrency. The technical leaders at these companies set benchmarks they thought aligned with best-in-class industry standards.

In detail:

  • Quality – pass at least 90% of all tests run
  • Run Time – average of all tests run two minutes or less
  • Platform Coverage – tests cover five critical platforms on average
  • Concurrency – at peak usage, tests utilize at least 75% of available capacity

Next, Priyanka shared the data Sauce collected from the same companies about how they fared against the average benchmarks discussed.

  • Quality – 18% of the companies achieved 90% pass rate
  • Run time – 36% achieved the 2 minute or less average
  • Platform coverage – 63% reached the five-platform coverage
  • Concurrency – 71% achieved the 75% utilization mark
  • However, only 6.2% of the companies achieved the mark on all four.

Test speed became a noticeable issue. While 36% ran on average in two minutes or faster, a large number of companies exceeded five minutes – more than double.

Investigating Benchmarks

These benchmarks are fascinating – especially run time – because test speed is key to faster overall delivery. The longer you have to wait for testing to finish, the slower your dev release cycle times.

Sadly, lots of companies think they’re acing automation, but so few are meeting key benchmarks. Just having automation doesn’t help. It’s important to use automation that helps meet these key benchmarks.

Another area worth investigating involves platform coverage. While Chrome remains everyone’s favorite browser, not everyone is on Chrome.  Perhaps 2/3 of users run Chrome, but Firefox, Safari, Edge and others still command attention. More importantly, lots of companies want to run mobile, but only 8.1% of company tests run on mobile. Almost 92% of companies run desktop tests and then resize their windows for the mobile device.  Of the mobile tests, only 8.9% run iOS native apps and 13.2% run Android native apps. There’s a gap at a lot of companies.

GoodRx Strategies

Priyanka dove into the capabilities that allow GoodRx to solve the high-performance testing issues.

Test In Production

The first capability GoodRx uses is a Shift Right approach that moves testing into the realm of production.


Production testing? Yup – but it’s not spray-and-pray. GoodRx’s approach includes the following:

  • Feature Flag – Test in production. Ship fast, test with real data.
  • Traffic Allocation – gradually introduce new features and empower targeted users with data. Hugely important for finding corner cases without impacting the entire customer base.
  • Dog Fooding – use a CDN like Fastly to deploy, route internal users to new features.

The net result – this reduces overhead, lets the app get tested with real data sets, and identifies issues without impacting the entire customer base. So, the big release becomes a set of small releases on a common code base, tested by different people to ensure that the bulk of your customer base doesn't get a rude awakening.
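
GoodRx's actual flagging system wasn't shown, but a hypothetical flag configuration makes the mechanics concrete – every name below is made up for illustration:

  # hypothetical feature-flag config, not GoodRx's actual system
  flags:
    new_price_page:
      enabled: true
      rollout_percent: 5     # traffic allocation: start with 5% of users
      internal_users: true   # dogfooding: always on for employees

A deploy can then ship the new code dark, ramp rollout_percent up gradually, and remove the flag once the feature proves itself in production.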

AI/ML

Next, Priyanka talked about how GoodRx uses AI/ML tools to augment her team. These tools make her team more productive – allowing her to meet the quality needs of the high-performance environment.

First, Priyanka discussed automated visual regression – using AI/ML to automate the validation of rendered pages. Here, she talked about using Applitools – as she says, the acknowledged leader in the field. Priyanka talked about how GoodRx uses Applitools.

At GoodRx, there may be one page used for a transaction. But, GoodRx supports hundreds of drugs in detail, and a user can dive into those pages that describe the indications and cautions about individual medications.  To ensure that those pages remain consistent, GoodRx validates these pages using Applitools. Trying to validate these pages manually would take six hours. Applitools validates these pages in minutes and allows GoodRx to release multiple times a day.


To show this, Priyanka used an example: a kids' cartoon containing visual differences. Then she showed what happens if you do a normal image comparison – a pixel-based comparison.


A bit-wise comparison will fail too frequently. Using the Applitools AI system, the team can point Applitools at images that have already been approved and quickly validate the pages under test.


Applitools can complete a full visual regression – 350 test cases running 2,500 checks – in less than 12 minutes. Manually, it takes six hours.


Priyanka showed the kinds of real-world bugs that Applitools uncovered: one screenshot from her own site, GoodRx; a second from amazon.com; and a third from macys.com. She showed examples with corrupt displays – ones that Selenium alone could not catch.

ReportPortal.io

Next, Priyanka moved on to ReportPortal.io. As she says, when you ace automation, you need to know where you stand, and you build trust around your automation platform by showing how it is behaving – all your data: test times, bugs discovered, and so on. ReportPortal.io shows how tests are running at different times of the day. Another display shows the flakiest and longest-running tests to help the team release seamlessly and improve their statistics.

Any failed test case links its test results log directly into the ReportPortal.io user interface.

GoodRx uses behavior-driven development (BDD), and their BDD approach lets them describe the behavior they want for a given feature – how it should behave in good and bad cases – and ensure that those cases get covered.

High-Performance Testing – The Burnout

Priyanka made it clear that high-performance environments take a toll on people. Everywhere.

She showed a slide referencing an Atlassian blog about work burnout symptoms – and prevention. From her perspective, the symptoms of workplace stress include:

  • Being cynical or critical at work
  • Dragging yourself to work and having trouble getting started
  • Being irritable or impatient, lacking energy, finding it hard to concentrate, getting headaches
  • Lacking satisfaction from achievements
  • Using food, drugs, or alcohol to feel better or simply not to feel

So, what should a good team lead do when she notices signs of burnout? Remind people to take steps to prevent burnout. These include:

  • Avoid unachievable deadlines. Don't take on too much work. Estimate, add buffer, add resources.
  • Do what gives you energy – avoid what drains you
  • Manage digital distraction – the grass will always look greener on the other side
  • Do something outside your work – engage in activities that bring you joy
  • Say no to too many projects – gauge your bandwidth and communicate
  • Make self-care a priority – meditation/yoga/massage
  • Have a strong support system – talk to your family and friends, seek help
  • Unplug for short periods – it helps immensely

The point here is that hyper-growth environments can take a toll on everyone – employees, managers. Unrealistic demands can permeate the organization. Use care to make sure that this doesn’t happen to you or your team.

GoodRx Case Study

Why not look at Priyanka's direct experience at GoodRx? Her employer, GoodRx, provides price transparency for drugs. GoodRx lets individuals search for drugs they might need or use for various conditions. Once an individual selects a drug, GoodRx shows the prices for that drug at various locations, so the individual can find the best price.

The main customers are people who don't have insurance or have high-deductible insurance. In some cases, GoodRx offers coupons to keep prices low. GoodRx also provides GoodRx Care – a telemedicine consultation system – to help answer patient questions about drugs. Rather than requiring a doctor's visit, GoodRx Care charges anywhere between $5 and $20 for a consultation.

Because the GoodRx web application provides high value for its customers, often with high demand, the app must maintain proper function, high performance, and high availability.

Set Goals


The QA goals Priyanka designed needed to meet the demands of this application. Her goals included:

  • Distributed QA team with 24/7 QA support
  • Dedicated SDET team specializing in test
  • A robust framework that makes any POC super simple (plug and play)
  • Test stabilization pipeline using Travis
  • 100% automation support to reduce regression time by 90%

Build a Team


As a result, Priyanka needed to hire a team that could address these goals. She showed the profile she developed on LinkedIn to find people who met her criteria – dev-literate, test-literate engineers who could work together as a team and function successfully. Emphasis on test automation and coding abilities rose to the top.

Build a Tech Stack


Next, Priyanka and her team invested in tech stack:

  • Python and Selenium WebDriver
  • Behave for BDD
  • BrowserStack for a cloud runner
  • Applitools for visual regression
  • Jenkins/Travis and Google Drone for CI
  • Jira, TestRail for documentation

Their CICD success criteria came down to four requirements:

  • Speed and parallelization
  • BDD for easy debug and read
  • Cross-browser cross-device coverage in CICD
  • Visual validation

Set QA expectations for CI/CD testing

Finally, Priyanka and her team had to set expectations for testing.  How often would they test? How often would they build?

QA for CI/CD means that test and build become asynchronous. Regardless of the build state:

  • Hourly: QA runs 73 tests against the latest build to sanity-check the site.
  • On build: any new build runs 6 cross-browser tests and makes sure all critical business paths get covered.
  • Nightly: a 300-test regression suite runs on top of the other tests.

Some of these were starting points, but most got refined over time.
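
GoodRx ran this cadence on its own CI tools, but to make it concrete, here is how the hourly and nightly triggers might look expressed as cron schedules – shown in Azure Pipelines syntax purely for illustration:

  schedules:
    - cron: "0 * * * *"            # hourly sanity run (73 tests)
      displayName: Hourly sanity
      branches:
        include: [main]
      always: true                 # run even when nothing has changed
    - cron: "0 3 * * *"            # nightly regression (300 tests)
      displayName: Nightly regression
      branches:
        include: [main]
      always: true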

Priyanka’s GoodRx Quality Timeline

Next, Priyanka talked about how her team grew from the time she joined until now.

She started in June 2018. At that point, GoodRx had one QA engineer.

  • In her first quarter, she added a QA Manager, a QA Analyst, and a Senior SDET. They added offshore resources to support releases.
  • By October 2018 they had fully automated P0/P1 tests. Her team had added Spinnaker pipeline integration. They were running cross-browser testing with real mobile device tests.
  • By December 2018 she had added two more QA Analysts and one more SDET. Her team's tests fully covered regression and edge cases.
  • And, she pressed on. In early 2019, they had built automation-driven releases. They had added Auth0 support – her team was hyper-productive.
  • Then, she discovered her team had started to burn out. Two of her engineers quit. This was an eye-opening time for Priyanka. Her lessons about burnout came from this experience, and she learned how to manage her team through a difficult period.

By August 2019 she had the team back on an even keel and had hired three QA engineers and one more SDET.

And, in November 2019 they achieved 100% mobile app automation support.

GoodRx Framework for High-Performance Testing

Finally, Priyanka gave a peek into the GoodRx framework, which helps her team build and maintain test automation.

The browser base class provides access for test automation. Using the browser base class eliminates the need to call Selenium directly for actions like clicks.

The page class simplifies web element location: it assigns a unique XPath to each web element, giving the automation clean locators to work with.


The element wrapper class allows for behaviors like lazy loading.  Instead of having to program exceptions into the test code, the element wrapper class standardizes interaction between the browser under test and the test infrastructure.


Finally, for every third-party application or tool that integrates using an SDK, like Applitools, GoodRx deploys an SDK wrapper. As one of her SDETs figured out, an SDK change from a third party can mess up your test behavior; the wrapper insulates the tests from such changes. Using a wrapper is a good practice for handling situations when the service you use encounters something unexpected.

The framework results in a more stable test infrastructure that can rapidly change to meet the growth and change demands of GoodRx.

Conclusions

Hyper-growth companies put demands on their quality engineers to achieve quickly. Test speed matters, but it cannot be achieved consistently without investment. Just as Priyanka started with the story of the Three Little Pigs, she made clear that success requires investment in automation, people, AI/ML, and framework.



The post How To Ace High-Performance Test for CI/CD appeared first on Automated Visual Testing | Applitools.

]]>