Dave Piacente, Author at Automated Visual Testing | Applitools
https://applitools.com/blog/author/dave/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

AI and The Future of Test Automation with Adam Carmi | A Dave-reloper’s Take
https://applitools.com/blog/ai-and-the-future-of-test-automation-with-adam-carmi/
Mon, 16 Oct 2023


We have a lot of great webinars and virtual events here at Applitools. I’m hoping posts like this give you a high-level summary of the key points with plenty of room for you to form your own impressions.

Dave Piacente

Curious if the software robots are here to take our jobs? Or maybe you’re not a fan of the AI hype train? During a recent session, The Future of AI-Based Test Automation, CTO Adam Carmi discussed—in practical terms—the current and future state of AI-based test automation, why it matters, and what you can do today to level up your automation practice.

  • He describes how AI can be used to overcome common everyday challenges in end-to-end test automation, how the need for skilled testers will only increase, and how AI-based tooling can help supercharge any automated testing practice.
  • He also puts his money where his mouth is by demonstrating, with concrete examples (e.g., visual validation and self-healing locators), how the never-ending maintenance overhead of tests can be mitigated with AI-driven tooling that already exists today.
  • He also discusses the role that AI will play in the future, including the development of autonomous testing platforms. These platforms will be able to automatically explore applications, add validations, and fill gaps in test coverage. (Spoiler alert: Applitools is building one, and Adam shows a bit of a teaser for it: a real-time, in-browser REPL that automates the browser using ChatGPT-style natural language.)

You can watch the full recording and find the session materials here, and I’ve included a quick breakdown with timestamps for ease of reference.

  • Challenges with automating end-to-end tests using traditional approaches (02:34-10:22)
  • How AI can be used to overcome these challenges (10:23-44:56)
  • The role of AI in the future of test automation (e.g., autonomous testing) (44:57-58:56)
  • The role of testers in the future (58:57-1:01:47)
  • Q&A session with the speaker (1:01:48-1:12:30)

Want to see more? Don’t miss Future of Testing: AI in Automation.

Welcome Back, Selenium Dave!
https://applitools.com/blog/welcome-back-selenium-dave/
Tue, 05 Sep 2023

Dave Piacente

Let me tell you a story. It’s one I haven’t told before. But to do it, let’s first get acquainted.

Hi – I’m Dave Piacente. You may know me from a past life when I went by the name Dave Haeffner and my past works with Selenium. I’m the new DevRel and Head of Community at Applitools—Andy’s moved on to a tremendous bucket-list job opportunity elsewhere, and we wish him all the best! I’ve been working closely with him behind the scenes to learn the ropes to help make this a smooth transition and to ensure that all of the great work he’s done and the community he’s grown will continue to flourish. And to loosely paraphrase Shakespeare – A DevRel (or a Dave) by any other name would be just as sweet.

Now, about that story…

I used to be known for a thing – “Selenium Dave” as they would say. I worked hard to earn that rep. I had one aim: to be helpful. I was trying to solve a problem that vexed me early on in my career in test automation (circa 2009), when open-source test automation and grid providers were on a meteoric rise. The lack of clear and concise guidance on how to get started and grow into a mature test automation practice was profound. But the fundamentals weren’t that challenging to master (once you knew what they were), and the number of people gnashing their teeth as they white-knuckled their way through it was eye-popping.

So, back in 2011, after working in the trenches at a company as an SDET (back before that job title was a thing), I left to start out on my own with a mission to help make test automation simpler. It started simply enough with consulting. But then the dominos began to fall when I started organizing a local test automation meetup.

While running the meetup I realized I kept getting asked the same questions and offering the same answers, so I started jotting them down and putting them into blog posts which later became a weekly tip newsletter (Elemental Selenium, which eventually grew to a readership of 30,000 testers). Organically, that grew into enough content (and confidence) to write a book, The Selenium Guidebook.

I then stepped out of meetup organization and into organizing the Selenium conference, where I became the conference chair from 2014 to 2017. My work on the conference opened the door for me to become part of the Selenium core team. From there it was a hop-skip-and-a-jump to working full-time as a contributor on Selenium IDE at Applitools.

Underpinning all of this, I was doing public speaking at meetups and conferences around the world (starting with my first conference talk back in 2010). I felt like I had summited the mountain—I was in the best possible position to be the most helpful. And I truly felt like I was making a difference in the industry.

But then I took a hard right turn and stopped doing it all. I felt like I had accomplished what I’d set out to do – I had helped make testing simpler (at least for people using Selenium). So I stepped down from the Selenium project, I stopped organizing the Selenium conference, I stopped doing public speaking, I sold my content business (e.g., the newsletter & book) to a third party, and I even changed my last name (from Haeffner to Piacente – although for reasons unrelated to my work). By all marks, I had closed that chapter of my life and was happily focusing on being a full-time Software Developer in the R&D team at Applitools.

While I was doing that, the test automation space continued to grow and evolve as I watched from the sidelines. Seemingly every enterprise was now shifting left (not just the more progressive ones), alternative open-source test automation frameworks to Selenium continued to gain ground in adoption, some new-and-noteworthy entrants started popping up, and the myriad of companies selling their wares in test automation seemed to grow exponentially. And then, Generative AI waltzed into the public domain like the Kool-Aid man busting through a wall. “Oh yeah!”

I started to realize that the initial problem I had strived to make a dent in—making testing simpler—was a moving target. Some things are far simpler now than when I started out, but some are more complex. There are new problems constantly emerging, and the ground underneath our feet is shifting.

So perhaps my work is not done. Perhaps there is more that I can do to help make test automation simpler. To return to public speaking and content creation. To return to being helpful. But this time, with the full weight of a company behind me, instead of just as a one-man show.

I’m thrilled to be back, and I’m excited for what’s to come!

Why I Joined Applitools (again): Dave Haeffner
https://applitools.com/blog/why-join-applitools-again-dave-haeffner/
Wed, 02 Feb 2022


Everyone has their own opinions. And some of them? Preposterous. Others? Downright controversial! Like, for instance, thinking that The Bourne Legacy is the best of all the Bourne movies (change my mind). Or that vim is better than emacs… and VSCode (!). You get the idea, and this is nothing new.

But what never gets old is when you can find a group of people where you can connect regardless of your opinions and feel like you belong. To find that group of people who can take you for who you are (regardless of your poor taste in movies or questionable choice of text editor), riff on it, and (lovingly) rib you a bit for it. To do this in all facets of life is important, but most recently, I managed to find this group of people in my work life – having recently joined Applitools as a Software Developer. And ironically, this isn’t my first time working here.

In the not-too-distant past I took some time away from my career to focus on my family. My wife was looking to head back to work after a few years away to focus on raising our young children. I was looking to take a break from work and take on a different kind of challenge (family circus conductor). Fast-forward to the end of what I now affectionately refer to as my extended sabbatical, and things are different. My kids are older now and in preschool, my wife is back working, and I’m home twiddling my thumbs wondering “What to do?”

So I explored a few options.

Back into entrepreneurship? Sure, why not? So I looked into starting a new business. But man, that’s a lot of work! Last time I did that was in 2011. I did it by myself then and it was very hard. The conclusion? That’s a young man’s game. Nope, next.

Why not partner with someone instead of going it alone? Okay, sure. So I joined a friend’s startup as a technical co-founder. But that didn’t feel like the right fit either. So maybe freelance software developer? Done. That started out okay, but after a few months I realized it was lonely and not challenging me in the ways I was looking to grow. Hmm.

At the end of it all, a question crystallized for me, “What do I want to do and who do I want to do it with?” Me, a vim user with fantastic taste in films. Where I ended up was a headspace eerily similar to where I was in 2018.

Back then I decided that I wanted to make a go of being a full time software developer. To focus on the process of making high quality, high value software. But here was the rub. While I had experience working with a handful of programming languages (at least superficially through my work in software testing), I didn’t have a “proper” background. I don’t have a degree in computer science (I have a degree in network engineering, thank you very much), I’ve never worked as a developer for a company, and hilariously I failed my intro to programming course at university (I did much better the second time though!). But through my work in software testing I was able to parlay that into a position as a software developer working at Applitools, which turned out to be a life-changing experience for me. I got to work with immensely smart and talented people who welcomed me warmly, helped bring me along, and challenged me in ways that supercharged my growth (shoutout to Doron, Tomer, Gil, Amit, and Adam!). And it didn’t hurt that I got to work on fascinating, multi-faceted technical problems that forced me to grow my problem solving skills every day.

I remembered all of this fondly when searching for an answer to my question – “What do I want to do and who do I want to do it with?” Not only did I realize I wanted to continue my journey of software craftsmanship but I also wanted to go back to working alongside great engineers in a collegial environment. With people who both accept me as I am and challenge me to be better. To be in a place where I’m fed an endless supply of technical problems which are fascinating to a software geek like me.

On the tin, this is Applitools. And not for nothing, it also doesn’t hurt that they are building the most innovative stuff in the software testing space. I say this non-hyperbolically with over a decade in this industry (“I’ve seen things you people wouldn’t believe”).

So I was floored when I messaged my old manager to reconnect and tell him what I was thinking. Because very quickly this started a chain reaction of conversations which led to me ultimately answering the question “When can you start?”. Before I knew it, my start date was upon me. And you know what? I was welcomed back just as warmly as when I joined the first time. Now I’m well on the other side of my start date, back in the trenches, working alongside my fellow colleagues. And I gotta say, it’s great to be back!

Interested in joining? Come for the people and technical problems, but stay for the innovation that’s shaking up software testing (and maybe a movie recommendation or two :-)). Take a look at job openings here: http://www.applitools.com/careers

How to Do Visual Regression Testing with Selenium
https://applitools.com/blog/visual-regression-testing-selenium/
Sun, 17 Dec 2017

Dave Haeffner

The Problem

There is a stark misconception in the industry when it comes to visual testing: namely, that it is hard to automate. Even noted automaters in the field share this misconception, as this quote shows.

“Visual testing, something that is very difficult to automate.” – Richard Bradshaw (@FriendlyTester) [source]

As a result, people tend to turn a blind eye to it and miss out on a tremendous amount of value from something that is getting easier to implement every day. 

A Visual Regression Testing Primer

Visual Testing (a.k.a. visual checking or visual regression testing or visual UI testing) is the act of verifying that an application’s graphical user interface (GUI) appears correctly to its users. The goal of the activity is to find visual bugs (e.g., font, layout, and rendering issues) so they can be fixed before the end-user sees them. Additionally, visual testing can be used to verify content on a page. This is ideal for sites that have graphical functionality (e.g., charts, dashboards, etc.) since verification with traditional automated functional testing tools can be very challenging.

Given the number of variables (e.g., web browsers, operating systems, screen resolutions, responsive design, internationalization, etc.) the nature of visual testing can be complex. But with existing open source and commercial solutions, this complexity is manageable, making it easier to automate than it once was. And the payoff is well worth the effort.

For example, a single automated visual test will look at a page and assert that every element on it has rendered correctly, effectively checking hundreds of things and telling you if any of them are out of place. This will occur every time the test is run, and it can be scaled to each browser, operating system, and screen resolution you care about.

Put another way, one automated visual test is worth hundreds of assertions. And if done in the service of an iterative development workflow, then you’re one giant leap closer to Continuous Delivery.
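
To make that concrete, here is a brief sketch in Ruby contrasting the two approaches. The locators are hypothetical, and the expect/check_window calls use the RSpec and Applitools Eyes APIs that appear elsewhere on this blog:

# element-by-element functional checks: each assertion covers exactly one thing
expect(@driver.find_element(id: 'logo').displayed?).to eql true
expect(@driver.find_element(css: '.nav').displayed?).to eql true
# ...and dozens more assertions to approach full-page coverage

# a single visual check: validates the rendering of every element on the page
@eyes.check_window('Home Page')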

A Solution

By using an existing solution, you can get up and running with automated visual testing quickly. Here’s a list of what’s available (sorted alphabetically by programming language):

Name | Platform | Programming Language
Applitools Eyes | Selenium & Other | All
Fighting Layout Bugs | Selenium | Java
Selenium Visual Diff | Selenium | Java
CSS Critic | Other | JavaScript
Gemini | Selenium | JavaScript
Grunt PhotoBox | PhantomJS | JavaScript
PhantomCSS | PhantomJS & Resemble.js | JavaScript
Snap and Compare | PhantomJS | JavaScript
Specter | XULRunner | JavaScript
WebdriverCSS | Selenium | JavaScript
FBSnapshotTestCase | Other | Objective-C
VisualCeption | Selenium | PHP
dpxdt | PhantomJS | Python
Huxley | Selenium | Python
Needle | Selenium | Python
Wraith | PhantomJS | Ruby
Wraith-Selenium | Selenium | Ruby

NOTE: You may be wondering why Sikuli isn’t on this list. That’s because Sikuli isn’t well suited for automated visual testing. It’s better suited for automated functional testing — specifically for hard-to-automate user interfaces.

Each of these tools follows some variation of the following workflow (a bare-bones sketch of the loop follows the list):

  1. Drive the application under test (AUT) and take a screenshot
  2. Compare the screenshot with an initial “baseline” image
  3. Report the differences
  4. Update the baseline as needed
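
To see what that workflow boils down to, here is a minimal, tool-agnostic sketch of it in Ruby. It assumes the chunky_png gem, screenshots already captured by your driver, and placeholder file names; real tools layer smarter diffing, anti-aliasing tolerance, and diff-image reporting on top of this loop.

require 'chunky_png'
require 'fileutils'

baseline_file = 'baseline.png'
current_file  = 'current.png' # step 1: drive the AUT and take a screenshot

if File.exist?(baseline_file)
  baseline = ChunkyPNG::Image.from_file(baseline_file)
  current  = ChunkyPNG::Image.from_file(current_file)
  unless baseline.width == current.width && baseline.height == current.height
    raise 'Screenshot dimensions changed since the baseline was captured'
  end

  # step 2: compare against the baseline, pixel by pixel
  diffs = 0
  baseline.height.times do |y|
    baseline.width.times do |x|
      diffs += 1 unless baseline[x, y] == current[x, y]
    end
  end

  # step 3: report the differences as a percentage of the page
  mismatch = (diffs.to_f / (baseline.width * baseline.height)) * 100
  puts format('Mismatch: %.2f%%', mismatch)
else
  # step 4 (first run): adopt the current screenshot as the baseline
  FileUtils.cp(current_file, baseline_file)
end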

Let’s dig in with an example.

An Example

We’ll use WebdriverCSS, which works with Selenium WebDriver. Specifically, we’ll use the Node.js bindings (webdriverio) and the Selenium Standalone Server (which you can download the latest version of here).

Our application under test is a page on the-internet which has a menu bar with a button that will render in a slightly different location (e.g., 25 pixels in either direction) every time the page loads.

After starting the Selenium Standalone Server (e.g., java -jar selenium-server-standalone-2.44.0.jar from the command-line) we’ll need to create a new file. In it we’ll require the necessary libraries (e.g., assert for our assertions, webdriverio to drive the browser, and webdrivercss to handle the visual testing) and configure Selenium to connect to the standalone server (which is handled with desiredCapabilities).


// filename: shifting_content.js

// assert for assertions, webdriverio to drive the browser,
// and webdrivercss to handle the visual testing
var assert = require('assert');
var driver = require('webdriverio').remote({
  desiredCapabilities: {
    browserName: 'chrome'
  }
});
require('webdrivercss').init(driver, { updateBaseline: true });

This will provide us with a driver object that we can use to interact with the browser.

This object is a normal webdriverio instance with one enhancement — the addition of the .webdrivercss command. This command provides the ability to specify which parts of our application we want to perform visual testing on.

For this example, let’s keep things simple and focus on just the page body.

driver
  .init()
  .url('http://the-internet.herokuapp.com/shifting_content?mode=random')
  .webdrivercss('body', {
    name: 'body',
    elem: 'body'
  }, function(err,res) {
      assert.ifError(err);
      assert.ok(res.body[0].isWithinMisMatchTolerance);
    })
  .end();

After specifying our focus element we want to check that no visual changes have occurred since the last time we ran this test. This is handled with an assert, the focus element (e.g., res.body[0]), and the isWithinMisMatchTolerance property.

The mismatch tolerance is configurable (on a scale of 0 to 100) but defaults to 0.5. There are other configuration options with sensible defaults as well (e.g., the folder where screenshots are stored), but you can ignore them for now.

Each time this script is run WebdriverCSS will take a screenshot. The initial shot will be used as a baseline for future comparisons. Then on each subsequent run WebdriverCSS will check to see if the new screenshot is within the mismatch tolerance of the baseline image. If it’s not, the script will fail.

If the script fails enough times, then the baseline image can be automatically updated. But only if we tell WebdriverCSS to do it (which we already did in our initialization of it).


require('webdrivercss').init(driver, { updateBaseline: true });

Expected Behavior

If we save this file and run it (e.g., node shifting_content.js from the command-line) here is what will happen:

  • Open the browser
  • Navigate to the page
  • Take a screenshot of the page
  • Compare the screenshot against the baseline
  • Assert that the targeted area of the page is within the match tolerance
  • Fail and render a diff image to local disk (since the button on this page shifts on every load, the mismatch exceeds the tolerance)

Shifting Content

In Conclusion…

Hopefully this write-up has gotten you started off on the right foot with automated visual testing. There’s still more to consider when it comes to scaling your automated visual testing. But don’t fret, we’ll cover that in the following write-ups.

Until then, feel free to check out UIRegression. It’s another great online resource for learning about visual testing.

Read Dave’s next post in this series: How to Handle False Positives in Visual Testing

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

Automating Your Test Runs with Continuous Integration — CI Series by Dave Haeffner: Part 3/3
https://applitools.com/blog/automating-your-test-runs-with-continuous/
Fri, 25 Sep 2015


This is the final post in a 3-part series on getting started off with automated web testing on the right foot. You can find the first two posts here and here.
Automating Your Test Runs with Continuous Integration

You’ll probably get a lot of mileage out of your automated tests if you run things from your computer, look at the results, and tell people when there are issues in the application. But that only helps you solve part of the problem.

The real goal in test automation is to find issues reliably, quickly, and automatically – and ideally, in sync with the development workflow you’re a part of.

To do that we need to use a Continuous Integration server.

A Continuous Integration Server Primer

A Continuous Integration server (a.k.a. CI) is responsible for merging code that is actively being developed into a central place (e.g., “trunk” or “master”) frequently (e.g., several times a day, or on every code commit, etc.) to find issues early so they can be addressed quickly — all for the sake of releasing working software in a timely fashion.

With CI, we can automate our test runs so they can happen as part of the development workflow. The lion’s share of tests that are typically run on a CI Server are unit (and potentially integration) tests. But we can very easily add in our recently written Selenium tests.

There are numerous CI servers available for use today. Let’s step through an example of using Jenkins on CloudBees.

An Example

Jenkins is a fully functional, widely adopted, open-source CI and CD (Continuous Delivery) server. It’s a great candidate for us to step through. And DEV@cloud is an enterprise-grade hosted Jenkins service offered by CloudBees, the enterprise Jenkins company. It takes the infrastructure overhead out of the equation for us.

1. Quick Setup

We’ll first need to create a free trial account, which we can do here.

Once logged in we can click on Get Started with Builds from the account page. This will take us to our Jenkins server. We can also get to the server by visiting http://your-username.ci.cloudbees.com. Give it a minute to provision; when it’s done, you’ll be presented with a welcome screen.

Jenkins CI – welcome screen

NOTE: Before moving on, click the ENABLE AUTO-REFRESH link at the top right-hand side of the page. Otherwise you’ll need to manually refresh the page to see results (e.g., when running a job and waiting for results to appear).

2. Create A Job

Now that Jenkins is loaded, let’s create a Job and configure it to run our tests.

  1. Click New Item from the top-left of the Dashboard
  2. Give it a descriptive name (e.g., Login Tests IE8)
  3. Select Freestyle project
  4. Click OK
Jenkins CI – create a job

This will load a configuration screen for the Jenkins job.

Jenkins CI – configuration screen

3. Pull In Your Test Code

Ideally your tests will live in a version control system (like Git). There are many benefits to doing this, but the immediate one is that you can configure your job (under Source Code Management) to pull the test code from the version control repository and run it.

  1. Scroll down to the Source Code Management section
  2. Select the Git option
  3. Input the Repository URL (e.g., https://github.com/tourdedave/getting-started-blog-series.git)
Jenkins CI – source code management

Now we’re ready to tell the Jenkins Job how to run our tests.

4. Add Build Execution Commands

  1. Scroll down to the Build section
  2. Click on Add Build Step and select Execute Shell
Jenkins CI – Build Execution Commands

In the Command input box, add the following commands:

export SAUCE_USERNAME="your-sauce-username"
export SAUCE_ACCESS_KEY="your-sauce-access-key"
export APPLITOOLS_API_KEY="your-applitools-api-key"
gem install bundler
bundle install
bundle exec rspec
Jenkins CI – command input box

Since our tests have never run on this server we need to include the installation and running of the bundler gem (gem install bundler and bundle install) to download and install the libraries (a.k.a. gems) used in our test suite. And we also need to specify our credentials for Sauce Labs and Applitools Eyes (unless you decided to hard-code these values in your test already – if so, then you don’t need to specify them here).
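
For reference, here’s a minimal sketch of the setup lines that consume those variables. Ruby reads them straight from the process environment, so nothing Jenkins-specific is needed; the Eyes/Sauce wiring mirrors Part 1 of this series, and the variable names match the export lines above.

# a sketch of the relevant test setup (see Part 1 of this series for the full file)
@eyes = Applitools::Eyes.new
@eyes.api_key = ENV['APPLITOOLS_API_KEY'] # set by the build step above

sauce_url = "http://#{ENV['SAUCE_USERNAME']}:#{ENV['SAUCE_ACCESS_KEY']}" \
            "@ondemand.saucelabs.com:80/wd/hub"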

5. Run Tests & View The Results

Now we’re ready to save, run our tests, and view the job result.

  • Click Save
  • Click Build Now from the left-hand side of the screen

When the build completes, the result will be listed on the job’s home screen under Build History.

Jenkins CI – Build reports

You can drill into the job to see what was happening behind the scenes. To do that click on the build from Build History and select Console Output (from the left navigation). This output will be your best bet in tracking down an unexpected result.

Jenkins CI – Console Output

In this case, we can see that there was a failure. If we follow the URLs provided, we can see a video replay of the test in Sauce Labs (link) and a diff image in Applitools Eyes.

Visual Diffs presented on the Applitools Eyes Dashboard

The culprit for the failure here wasn’t a failure of functionality, but a visual defect with the image on the Login button.

A Small Bit of Cleanup

Before we can call our setup complete, we’ll want a better failure report for our test job. That way when there’s a failure we won’t have to sift through the console output for info. Instead we should get it all in a formatted report. For that, we’ll turn to JUnit XML (a standard format that CI servers support).

This functionality doesn’t come built into RSpec, but it’s simple enough to add through the use of another gem. There are plenty to choose from with RSpec, but we’ll go with rspec_junit_formatter.
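
Pulling the gem in is typically a one-line Gemfile addition. A minimal sketch, assuming Bundler manages the suite’s dependencies (as the bundle install build step above implies); a real Gemfile will list the rest of the suite’s gems:

# filename: Gemfile

source 'https://rubygems.org'

gem 'rspec'
gem 'rspec_junit_formatter'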

After we install the gem we need to specify some extra command-line arguments when running our tests. A formatter type (e.g., --format RspecJunitFormatter) and an output file for the XML (e.g., --out results.xml). And since this type of output is really only useful when running on our CI server, we’ll want an easy way to turn it on and off.

# filename: .rspec

<% if ENV['ci'] == 'on' %>
--format RspecJunitFormatter
--out results.xml
<% end %>

RSpec lets you store frequently used command-line arguments in a file (e.g., .rspec) that lives in the root of the test directory; the file is run through ERB, which is what makes the <% %> tags work. In it we specify the new arguments we want to use and wrap them in a conditional that checks an environment variable denoting whether or not the tests are being run on a CI server (e.g., if ENV['ci'] == 'on').

Now it’s a small matter of updating our Jenkins job to consume this new JUnit XML output file by adding a post-build action to publish it as a report.

Jenkins CI – publish JUnit test result report

Then we need to tell the Jenkins job where the XML file is. Since it ends up in the root of the test directory, we can just specify the file extension with a wildcard (e.g., *.xml).

Jenkins CI – post-build actions

Lastly, we need to update the shell commands for the build to set the ci environment variable to on (e.g., by adding export ci=on before bundle exec rspec).

Jenkins CI – update shell commands

Now when we run our test, we’ll get a test report which states which test failed. And when we drill into it, we get the URLs for the jobs in Sauce Labs and Applitools Eyes.

Jenkins CI – test results 01

Jenkins CI – test results 02

One More Thing: Notifications

In order to maximize your CI effectiveness, you’ll want to send out notifications to alert your team members when there’s a failure.

There are numerous ways to go about this (e.g., e-mail, chat, text, co-located visual cues, etc). And thankfully there are numerous, freely available plugins that can help facilitate whichever method you want. You can find out more about Jenkins’ plugins here.

For instance, if you wanted to use chat notifications and you use a service like HipChat or Slack, you would do a plugin search and find one of the following plugins:

Jenkins CI – notification plugins 01

Jenkins CI – notification plugins 02

After installing the plugin for your chat service, you will need to provide the necessary information to configure it (e.g., an authorization token, the channel/chat room where you want notifications to go, what kinds of notifications you want sent, etc.) and then add it as a Post-build Action to your job (or jobs).

Now when your CI job runs and fails, a notification will be sent to the chat room you configured.

Outro

If you’ve been following along through this whole series, then you should now have a test that leverages Selenium fundamentals, that performs visual checks (thanks to Applitools Eyes), which is running on whatever browser/operating system combinations you care about (thanks to Sauce Labs), and running on a CI server with notifications being sent to you and your team (thanks to CloudBees).

This is a powerful combination that will help you find unexpected bugs (thanks to the automated visual checks) and act as a means of collaboration for you and your team.

And by using a CI Server you’re able to put your tests to work by using computers for what they’re good at – automation. This frees you up to focus on more important things. But keep in mind that there are numerous ways to configure your CI server. Be sure to tune it to what works best for you and your team. It’s well worth the effort.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

 


How To Get Started with Automated Web Testing — CI Series by Dave Haeffner: Part 1/3
https://applitools.com/blog/how-to-get-started-with-automated-web-testing-ci/
Fri, 25 Sep 2015


In this 3-part series you will learn how to build simple and powerful automated web tests that will work on the browsers you care about and be configured to run automatically through the use of a continuous integration server (CI).

How To Get Started with Automated Web Testing 

The Problem

If you’re new to automated web testing there are only a few things you need to know to become effective quickly. But unless someone tells you what they are, you could easily head down the wrong path and end up wasting time on things that yield very few results.

A Solution

By learning the fundamentals of Selenium (a popular free and open-source browser automation tool), you can be up and running with cross-browser automated web tests rather quickly. And when combined with a third-party automated visual testing solution like Applitools Eyes, you can have an impressively high amount of test coverage with surprisingly little effort.

An Example

Selenium is built to mimic human action and it works with two pieces of information to do it: the element on the page you want to interact with and the action you want to take on that element.

If we take a common example like login for a website, here are the actions Selenium would take to step through it like a user would:

  • visit the page
  • find the username input field and type text into it
  • find the password input field and type text into it
  • find the submit button and click it

Let’s take the login example found on the-internet and write a Selenium test for it. We’ll use a scripting language (Ruby), since it is very approachable and reads a lot like English, along with RSpec, another open-source library, which will enable us to organize our tests easily and perform repeatable actions before and after each test.

Here is what the initial Selenium test looks like once written:

# filename: login_spec.rb

require 'selenium-webdriver'

describe 'Login' do

  before(:each) do
    @driver = Selenium::WebDriver.for :firefox
  end

  after(:each) do
    @driver.quit
  end

  it 'succeeded' do
    @driver.get 'http://the-internet.herokuapp.com/login'
    @driver.find_element(id: 'username').send_keys 'tomsmith'
    @driver.find_element(id: 'password').send_keys 'SuperSecretPassword!'
    @driver.find_element(css: 'button').click
  end

end

NOTE: RSpec has some syntax that is worth pointing out. A test case starts with the word describe, tests start with the word it followed by the test name (e.g., it 'succeeded' do), and you can specify actions to occur before and after each test with before(:each) and after(:each). The words do and end are Ruby code which signify the beginning and end of a chunk of code (a.k.a. a block). And test files in RSpec are known as “specs”, which need to end with _spec.rb in the filename (e.g., login_spec.rb).

At the top of the file we pull in the Selenium Ruby bindings (e.g., require 'selenium-webdriver') and declare our test case (e.g., describe 'Login' do). Before each test we will create an instance of Firefox and store it in an instance variable to be used during and after our test (e.g., @driver = Selenium::WebDriver.for :firefox). After each test we will close the browser (e.g., @driver.quit).

Next, we create our login test and fill it with our Selenium commands (which all work using the @driver variable created in before(:each)).

The first command of the test visits the login page with .get followed by the URL as a string (e.g., 'http://the-internet.herokuapp.com/login'). We then interact with the login form on the page by finding the username input field with .find_element and a locator for the element (e.g., (id: 'username')) along with the action we want to take (e.g., .send_keys 'tomsmith'). And we repeat the same approach again for the password field (e.g., @driver.find_element(id: 'password').send_keys 'SuperSecretPassword!'). For the last command we issue one more .find_element to find the submit button (e.g., (css: 'button')) and click it with .click.

If we save this file and run it (e.g., rspec login_spec.rb from the command-line), it will open the browser and complete the login on the page. But it wouldn’t tell us if the application behaved as expected. To handle this we need to add in automated visual checks with Applitools Eyes.
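
For comparison, a conventional functional assertion could verify a single element after login. Here is a minimal sketch; the .flash.success locator is borrowed from the BDD write-up on this blog and assumes the page renders a success notification:

# one functional assertion covers exactly one element
expect(@driver.find_element(css: '.flash.success').displayed?).to eql true

A visual check, by contrast, validates the rendering of the entire page in a single call.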

The Eyes Have It

NOTE: To use Applitools Eyes you need to grab a free account from here (no credit card required).

In order to incorporate Applitools Eyes into our test, we first need to make some modifications to our test setup and teardown.



# filename: login_spec.rb

require 'selenium-webdriver'
require 'eyes_selenium'

describe 'Login' do

  before(:each) do
    @browser = Selenium::WebDriver.for :firefox
    @eyes = Applitools::Eyes.new
    @eyes.api_key = ENV['APPLITOOLS_API_KEY']
    @driver = @eyes.open(app_name: 'the-internet', test_name: 'login', driver: @browser)
  end
# ...
  after(:each) do
    @eyes.abort_if_not_closed
    @driver.quit
  end
# ...

Once we pull in the Applitools Eyes Ruby SDK (e.g., require 'eyes_selenium') we update our test setup (e.g., before(:each)) by creating an instance of Applitools Eyes (storing it in another instance variable for later use), specifying our API key, and starting an Applitools Eyes session (which requires the name of the app, the name of the test, and the Selenium instance). After each test we make sure to abort the Eyes session if it was unable to close properly (e.g., @eyes.abort_if_not_closed) – more on that soon. And we do this before destroying the Selenium instance (since the Eyes instance relies on Selenium).

NOTE: In this example the API key is being passed through an environment variable. You can just as easily hard-code your API key value here.

Now we’re ready to add some visual checks to our test so we can verify that our application works as intended (and verify that the page renders correctly).



# ...
  it 'succeeded' do
    @driver.get 'http://the-internet.herokuapp.com/login'
    @eyes.check_window('Login Page')
    @driver.find_element(id: 'username').send_keys 'tomsmith'
    @driver.find_element(id: 'password').send_keys 'SuperSecretPassword!'
    @driver.find_element(css: 'button').click
    @eyes.check_window('Logged In')
    @eyes.close
  end

end

With the @eyes.check_window command we’re performing visual checks. We can specify a value with it or not (e.g., ('Login Page'), ('Logged In')). The value is optional, but it will appear in the test results in Applitools Eyes. It could add a helpful narrative for you (or other people on your team) to follow along with what the test was doing when the results are reviewed after the fact.

@eyes.close ends the Applitools Eyes session and triggers the final comparison of the visual checks.

Expected Behavior

When you save this file and run it (e.g., rspec login_spec.rb from the command-line) here is what will happen:

  • Selenium opens the browser
  • An Applitools Eyes session is created and connected with the Selenium instance
  • The test runs the Selenium commands and captures visual checks along the way
  • The Applitools Eyes session closes and performs validation on the visual checks
  • Selenium closes the browser
  • Failures (if any) are displayed in the console output

If Applitools Eyes finds a visual anomaly it will fail the test and provide a URL to the test results in the console output. When viewing the results in Applitools Eyes you can choose to accept or reject the results, which will impact the way Eyes validates the test going forward.

NOTE: A new test in Applitools Eyes will fail and prompt you to view the results. It’s recommended that you review the results, but it’s not mandatory; they will automatically be saved as a baseline image for subsequent runs.

A Small Bit of Cleanup

In order to make this example useful for more than one test, you need to make a small tweak to your test setup.


# filename: login_spec.rb
# ...
  before(:each) do |example|
    @browser = Selenium::WebDriver.for :firefox
    @eyes = Applitools::Eyes.new
    @eyes.api_key = ENV['APPLITOOLS_API_KEY']
    @driver = @eyes.open(app_name: 'the-internet', test_name: example.full_description, driver: @browser)
  end
# ...

Within RSpec you can access the currently running test in before(:each) by adding a block variable (e.g., before(:each) do |example|). You can then put it to use in @eyes.open when specifying the test name (e.g., test_name: example.full_description).

Now when you add another test to this file the name will be dynamically passed to Applitools Eyes.

Outro

By combining your Selenium tests with an automated visual testing solution you are armed with a powerful combination that gives you a staggering level of coverage (e.g., hundreds of assertions) for only a few lines of code. And as a bonus you’ve automated something that many people think can only be done manually (and as a result tends to get relegated to the end of a software development cycle) – moving you and your team one massive step closer to Continuous Delivery. Bravo!

In PART 2 of this series: “How To Run Your Automated Web Tests From Any Browser”, I’ll show you how to run your tests against whatever browser and operating system combination you need.

You can read the third and final post in this series: “Automating Your Test Runs with Continuous Integration”.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.


How To Do Cross-browser Visual Testing with Selenium
https://applitools.com/blog/how-to-do-cross-browser-visual-testing-with/
Tue, 28 Apr 2015


The Problem

It’s easy enough to get started doing visual UI testing with a single browser, and with Selenium you can quickly expand your efforts into different browsers. But it’s difficult to verify that your web application looks right on all of them (e.g., making sure elements are not missing, misaligned, or hidden), especially when small rendering inconsistencies between browsers (things like how an image gets rendered) can easily cause your visual tests to fail. This poses a real challenge since you can’t use traditional visual testing techniques like pixel comparison, so standard workarounds like modifying the match tolerance won’t work.

A Solution

By leveraging a solution like Applitools Eyes we can reliably run the same visual tests against multiple browsers (which we can gain access to through the use of Sauce Labs).

A Visual Matching Primer

Normally, you would use a strict visual match for visual regression testing. This is good for verifying the layout of an application in the same execution environment (e.g., same browser, screen size, device, etc.), and with exactly the same (or very similar) content. And if you want to cover multiple execution environments (e.g., multiple browsers, various screen sizes, etc.) you need to maintain several baseline images.

But with layout matching (a feature that Applitools Eyes offers) we can use a single baseline for validating multiple execution environments and/or sites with extremely dynamic content.

Let’s dig in with an example.

An Example

NOTE: This example builds on the test code from this previous write-up, which covers how to add visual testing to existing Selenium tests using Applitools Eyes and Sauce Labs. You can see the full code example from it here. In order to follow along with this example, you’ll want to familiarize yourself with the sample code. Also, if you want to play along at home, you’ll need accounts for both Applitools Eyes and Sauce Labs (they have free trial options).

To prepare ourselves for cross-browser testing, we’ll need to modify our setup() method. This is where we’ll focus all of our efforts for this post.

Here is where we left off with the setup() method from the previous post.

// filename: Login.java
// ...
    @Before
    public void setup() throws Exception {
        DesiredCapabilities capabilities = DesiredCapabilities.internetExplorer();
        capabilities.setCapability("platform", Platform.XP);
        capabilities.setCapability("version", "8");
        capabilities.setCapability("name", testName);
        String sauceUrl = String.format(
                "http://%s:%s@ondemand.saucelabs.com:80/wd/hub",
                "YOUR_SAUCE_USERNAME",
                "YOUR_SAUCE_ACCESS_KEY");
        WebDriver browser = new RemoteWebDriver(new URL(sauceUrl), capabilities);
        sessionId = ((RemoteWebDriver) browser).getSessionId().toString();
        eyes = new Eyes();
        eyes.setApiKey("YOUR_APPLITOOLS_API_KEY");
        driver = eyes.open(browser, "the-internet", testName);
    }
// ...

First, let’s modify the DesiredCapabilities instantiation so that we can more flexibly specify the browser name.

   @Before
    public void setup() throws Exception {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("browserName", "firefox");
        capabilities.setCapability("platform", Platform.XP);
        capabilities.setCapability("version", "36");
        capabilities.setCapability("name", testName);

Next, we’ll want to specify the name of the baseline so we can reuse it. If one doesn’t exist, it will be created. And if one exists, it will be used for comparison. This enables us to run our test in one browser, capture a baseline, and then rerun the same test against another browser, and compare the results between the browsers. And if we don’t do it, a separate baseline will automatically be created and used for each browser or screen size.

       eyes.setBaselineName(testName);
        driver = eyes.open(browser, "the-internet", testName);
    }

Now let’s save the file and run the test to capture the baseline (e.g., mvn clean test -Dtest=Login.java from the command line).

Once the test completes, Applitools will provide the URL to the job in the test output. You can review it to make sure it is what you expect; if it is, Accept and Save it. Alternatively, you can assume the baseline image is correct and proceed without checking it. It will automatically be used as the baseline on future test runs.

Now that we have a baseline, let’s change our browserName and version capabilities so our test will run against a different browser.

   @Before
    public void setup() throws Exception {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("browserName", "internet explorer");
        capabilities.setCapability("platform", Platform.XP);
        capabilities.setCapability("version", "8");
    // ...

When we save and run it (e.g., mvn clean test -Dtest=Login.java from the command-line) the test fails.

When we view the results we can see that there are no visual bugs between Firefox and Internet Explorer. Instead, there are differences in how the page was rendered (e.g., different screen sizes, different placement of text, etc.).

Cross-browser visual testing with Applitools

The test was considered a failure because the Applitools Eyes match level defaults to Strict mode, which performs a very exact match. For effective cross-browser visual testing, we’ll need to use a different match level called Layout2.

Let’s update our test code to use this match level instead.

import com.applitools.eyes.MatchLevel;

    // ...

        eyes.setBaselineName(testName);
        eyes.setMatchLevel(MatchLevel.LAYOUT2);
        driver = eyes.open(browser, "the-internet", testName);
    }

Now when we save our test and run it again (e.g., mvn clean test -Dtest=Login.java from the command-line) it will pass.

Cross-browser testing with Eyes

For Consistent Results

The test worked this time, but for consistent results we’ll want to specify the viewport size in our Applitools setup. This will help ensure that the viewport size of each browser is consistent regardless of the browser used and the system’s screen resolution.

If you don’t know what size to specify, go with a generic value like 1000x600.

import com.applitools.eyes.RectangleSize;

  // ...

        eyes.setBaselineName(testName);
        eyes.setMatchLevel(MatchLevel.LAYOUT2);
        driver = eyes.open(browser, "the-internet", testName, new RectangleSize(1000, 600));
    }

If the application you’re testing is responsive you can verify its layout by changing the viewport size to force the page layout to change. But regardless of the viewport size specified, Applitools Eyes will scroll through the page and stitch together a full page screenshot for validation on each test run.

A Small Bit of Cleanup

Rather than constantly modifying our capabilities by hand to change the browser, version, and platform let’s update the test setup to retrieve runtime properties specified on the command-line instead.

   @Before
    public void setup() throws Exception {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("browserName", System.getProperty("browser", "firefox"));
        capabilities.setCapability("platform", System.getProperty("platform", "Windows XP"));
        capabilities.setCapability("version", System.getProperty("browserVersion", "36"));
    // ...

With this approach we’re also able to set sensible defaults, which we’ve done. So now if we don’t specify anything, Firefox 36 will run on Windows XP.

To specify values we need to use the -D flag when running our tests on the command line. Here are some examples of it in use. Notice the use of double-quotes for values with spaces.

mvn clean test -Dtest=Login.java -Dbrowser="internet explorer" -DbrowserVersion=8
mvn clean test -Dtest=Login.java -Dbrowser="internet explorer" -DbrowserVersion=10 -Dplatform="Windows 8"
mvn clean test -Dtest=Login.java -Dbrowser=firefox -DbrowserVersion=26 -Dplatform="Windows 7"
mvn clean test -Dtest=Login.java -Dbrowser=safari -DbrowserVersion=8 -Dplatform="OS X 10.10"
mvn clean test -Dtest=Login.java -Dbrowser=chrome -DbrowserVersion=40 -Dplatform="OS X 10.8"

For a full list of available browser and operating system combinations, check out Sauce Labs’ platform list.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

How To Add Visual Testing To Your BDD Tests
https://applitools.com/blog/how-to-add-visual-testing-to-your-bdd-tests/
Tue, 10 Mar 2015


The Problem

If you’re new to Behavior Driven Development (BDD) tooling (e.g., Cucumber), it may not be obvious how the scenarios you’ve specified (using the Gherkin syntax) translate into test automation. If that’s the case, then adding visual testing to the mix might be a challenge as well.

A Solution

By using the built-in functionality of our BDD tool of choice, we can easily generate step definitions for our scenarios and add in test execution with a third-party provider like Applitools (for visual testing) and Sauce Labs (for access to additional browsers/devices).

NOTE: In order to do BDD well, don’t start with automation. You should focus on communication instead. This way everyone on the team (both business and tech alike) has a shared understanding of what needs to be done and what is actually being done. There are loads of great write-ups to help guide you on this. Pieces like “Step Away From The Tools” by Liz Keogh (link) and Specification Workshops from “Bridging The Communication Gap” by Gojko Adzic (link).

An Example

For this example, let’s use Cucumber to step through automating the login of a website. Scenarios for valid and invalid users would look something like this:

# filename: features/login.feature

Feature: Login

Scenario: Valid User
  Given a user with valid credentials
  When they log in
  Then they will have access to secure portions of the site

Scenario: Invalid User
  Given a user with invalid credentials
  When they log in
  Then they will not gain access to secure portions of the site

Gherkin scenarios are plain text files that end in .feature. And they live in a directory called features.

Inside of the features directory, there are some additional folders we’ll want to use (e.g., step_definitions and support).

├── features
│   ├── login.feature
│   ├── step_definitions
│   └── support

When we save the feature file and run it (e.g., cucumber from the command-line), Cucumber will see if there are any step definitions that match. If there aren’t, it will provide us with some code to get us started.

You can implement step definitions for undefined steps with these snippets:

Given(/^a user with valid credentials$/) do
  pending # express the regexp above with the code you wish you had
end

When(/^they log in$/) do
  pending # express the regexp above with the code you wish you had
end

Then(/^they will have access to secure portions of the site$/) do
  pending # express the regexp above with the code you wish you had
end

Given(/^a user with invalid credentials$/) do
  pending # express the regexp above with the code you wish you had
end

Then(/^they will not gain access to secure portions of the site$/) do
  pending # express the regexp above with the code you wish you had
end

We can copy this outputted code and paste it into a new file – login.rb in the step_definitions directory.

This is where we’ll place our test actions (e.g., Selenium commands, assertions, etc.). But before we do that, we’ll need to take care of setting up and tearing down our Selenium session.

That gets handled in the support directory. All files in this directory get executed before the tests. But there’s one file in particular that will get executed before anything else – and that’s env.rb. Let’s create this file and add in our Selenium configuration with access to Applitools and Sauce Labs.

# filename: support/env.rb

require 'selenium-webdriver'
require 'rspec/expectations'
include RSpec::Matchers
require 'eyes_selenium'

Before do |scenario|
  @eyes = Applitools::Eyes.new
  @eyes.api_key = 'your Applitools API key'
  caps = Selenium::WebDriver::Remote::Capabilities.internet_explorer
  caps.version  = '8'
  caps.platform = 'Windows XP'
  caps['name'] = scenario.title
  browser = Selenium::WebDriver.for(
    :remote,
    url: "http://your-sauce-username:your-sauce-access-key@ondemand.saucelabs.com:80/wd/hub",
    desired_capabilities: caps)
  @driver = @eyes.open(app_name: 'the-internet', test_name: scenario.title, driver: browser)
end

After do
  @eyes.abort_if_not_closed
  @driver.quit
end

At the top of the file we pull in our requisite libraries (e.g., selenium-webdriver to load and drive the browser, rspec/expectations and RSpec::Matchers to perform assertions, and eyes_selenium for visual testing with Applitools Eyes).

Next we specify our setup and teardown in Before and After blocks. Things specified here will occur before and after each scenario specified in our feature files.

In Before we create an instance of Applitools Eyes, configure the browser/operating system we want on Sauce Labs, and join the two together – storing the final outcome (which is a Selenium WebDriver instance that is now connected to both Applitools and Sauce Labs) in a @driver variable. This variable will automatically be made available for use in the step definitions.

In After we ensure that our Applitools connection closes (in addition to quitting the instance of Selenium).
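
One practical tweak worth calling out (a sketch of mine, not part of the original setup): rather than hard-coding your Applitools API key and Sauce Labs credentials, you can pull them from environment variables. The variable names below are assumptions; use whatever your CI system provides.

# filename: support/env.rb (sketch; the environment variable names are assumptions)

Before do |scenario|
  @eyes = Applitools::Eyes.new
  @eyes.api_key = ENV['APPLITOOLS_API_KEY']
  # ... caps get configured exactly as before ...
  browser = Selenium::WebDriver.for(
    :remote,
    url: "http://#{ENV['SAUCE_USERNAME']}:#{ENV['SAUCE_ACCESS_KEY']}@ondemand.saucelabs.com:80/wd/hub",
    desired_capabilities: caps)
  @driver = @eyes.open(app_name: 'the-internet', test_name: scenario.title, driver: browser)
end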

Now let’s wire everything up in our step definitions.

# filename: step_definitions/login.rb

Given(/^a user with valid credentials$/) do
  @user = {
    username: 'tomsmith',
    password: 'SuperSecretPassword!'
  }
end

Given(/^a user with invalid credentials$/) do
  @user = {
    username: 'tomsmith',
    password: 'badpassword'
  }
end

When(/^they log in$/) do
  @driver.get 'http://the-internet.herokuapp.com/login'
  @driver.find_element(id: 'username').send_keys(@user[:username])
  @driver.find_element(id: 'password').send_keys(@user[:password])
  @driver.find_element(id: 'login').submit
end

Then(/^they will have access to secure portions of the site$/) do
  @eyes.check_window('Logged In')
  expect(@driver.find_element(css: '.flash.success').displayed?).to eql true
  @eyes.close
end

Then(/^they will not gain access to secure portions of the site$/) do
  @eyes.check_window('Not Logged In')
  expect(@driver.find_element(id: 'login').displayed?).to eql true
  expect(@driver.find_element(css: '.flash.error').displayed?).to eql true
  @eyes.close
end

Our Selenium actions are simple – we find the form elements and input text (e.g., username and password), submit the form, and assert that the correct notification message appears.

With our Applitools commands (e.g., @eyes.check_window() and @eyes.close) we are capturing snapshots of the page and comparing them against the baseline image.
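
One thing this example doesn’t show: @eyes.check_window can be called more than once per session, so a single test can capture several visual checkpoints. A minimal sketch (the extra checkpoint and its tag are illustrative additions, not part of this example’s flow):

# filename: step_definitions/login.rb (sketch)

When(/^they log in$/) do
  @driver.get 'http://the-internet.herokuapp.com/login'
  @eyes.check_window('Login Page') # illustrative checkpoint of the form before submitting
  @driver.find_element(id: 'username').send_keys(@user[:username])
  @driver.find_element(id: 'password').send_keys(@user[:password])
  @driver.find_element(id: 'login').submit
end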

Expected Outcome

If we save this file and then run cucumber again (e.g., cucumber from the command-line), here is what will happen:

  • An instance of Selenium gets created on Sauce Labs
  • The Selenium instance gets connected to Applitools Eyes
  • The test actions run
  • Assertions (e.g., expect()) and image checks (e.g., @eyes.check_window()) occur
  • The Applitools and Sauce Labs sessions close
  • Test results get printed to the terminal
> cucumber
Feature: Login

  Scenario: Login Succeeded                                   # features/login.feature:3
    Given a user with valid credentials                       # features/step_definitions/login.rb:1
    When they log in                                          # features/step_definitions/login.rb:8
    Then they will have access to secure portions of the site # features/step_definitions/login.rb:16

  Scenario: Login Failed                                          # features/login.feature:8
    Given a user with invalid credentials                         # features/step_definitions/login.rb:22
    When they log in                                              # features/step_definitions/login.rb:8
    Then they will not gain access to secure portions of the site # features/step_definitions/login.rb:29

2 scenarios (2 passed)
6 steps (6 passed)
1m39.410s

If Applitools found a visual bug, it would fail the test by raising an exception and output the URL to the job – which would look like this:

> cucumber
Feature: Login

  Scenario: Login Succeeded                                   # features/login.feature:3
    Given a user with valid credentials                       # features/step_definitions/login.rb:1
    When they log in                                          # features/step_definitions/login.rb:8
    Then they will have access to secure portions of the site # features/step_definitions/login.rb:16
      'Login Succeeded' of 'the-internet'. see details at https://eyes.applitools.com/app/sessions/251976656790606 (Applitools::TestFailedError)
      ./features/step_definitions/login.rb:19:in `/^they will have access to secure portions of the site$/'
      features/login.feature:6:in `Then they will have access to secure portions of the site'

  Scenario: Login Failed                                          # features/login.feature:8
    Given a user with invalid credentials                         # features/step_definitions/login.rb:22
    When they log in                                              # features/step_definitions/login.rb:8
    Then they will not gain access to secure portions of the site # features/step_definitions/login.rb:29
      'Login Failed' of 'the-internet'. see details at https://eyes.applitools.com/app/sessions/251976656732861 (Applitools::TestFailedError)
      ./features/step_definitions/login.rb:33:in `/^they will not gain access to secure portions of the site$/'
      features/login.feature:11:in `Then they will not gain access to secure portions of the site'

Failing Scenarios:
cucumber features/login.feature:3 # Scenario: Login Succeeded
cucumber features/login.feature:8 # Scenario: Login Failed

2 scenarios (2 failed)
6 steps (2 failed, 4 passed)
2m9.743s

A Small Bit of Cleanup

Right now we’re explicitly calling eyes.close() to perform a comparison of our tests against their baseline images, but we don’t have to. We can easily make it so this automatically gets called at the end of every test run.

To do that, we’ll need to add @eyes.close to our After block in support/env.rb.

# filename: support/env.rb
...
After do
  @driver.quit
  @eyes.close
end

By placing @eyes.close here we can remove @eyes.abort_if_not_closed since it’s no longer necessary.
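
That said, if you’d rather keep a safety net for tests that error out before the visual check runs, a variant like this works too (my sketch, not from the original post):

# filename: support/env.rb (sketch)

After do
  begin
    @eyes.close # raises Applitools::TestFailedError when a visual diff is found
  ensure
    @eyes.abort_if_not_closed # nothing to abort if close already ran; cleans up otherwise
    @driver.quit
  end
end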

We can also remove @eyes.close from our Then step definitions. And since our visual validation effectively performs hundreds of checks per snapshot, the RSpec assertions are now redundant – we can remove those as well.

Then(/^they will have access to secure portions of the site$/) do
  @eyes.check_window('Logged In')
end

Then(/^they will not gain access to secure portions of the site$/) do
  @eyes.check_window('Not Logged In')
end

If we save these changes and run the tests one more time (e.g., cucumber from the command-line) they will execute just like before. And when there’s a failure, the test output will be correctly attributed to the failing scenario.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.


Dave Haeffner is the writer of Elemental Selenium – a free, once weekly Selenium tip newsletter read by thousands of testing professionals. He’s also the creator and maintainer of ChemistryKit (an open-source Selenium framework) and author of The Selenium Guidebook. He’s helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, Animoto, and Aquent. He’s also a founder/co-organizer of the Selenium Hangout and has spoken at numerous conferences and meet-ups about automated acceptance testing.


The post How To Add Visual Testing To Your BDD Tests appeared first on Automated Visual Testing | Applitools.

]]>
How To Handle Visual Testing False Positives – Part 2 https://applitools.com/blog/how-to-handle-visual-testing-false-positives-part/ Tue, 27 Jan 2015 16:52:00 +0000 http://162.243.59.116/2015/01/27/how-to-handle-visual-testing-false-positives-part/ The Problem In the last write-up I covered common issues that can lead to false positives in your automated visual tests along with some workarounds for them. While this was...

The post How To Handle Visual Testing False Positives – Part 2 appeared first on Automated Visual Testing | Applitools.

]]>

The Problem

In the last write-up I covered common issues that can lead to false positives in your automated visual tests along with some workarounds for them.

While this was a good start, it was incomplete.

It provided enough information to add initial resiliency to your visual tests, but it glossed over other common scenarios that will cause false positives and create gaps in your test coverage. Not only that, the tactics I’ve demonstrated thus far won’t hold up when faced with these new scenarios. 

A Solution

These are the sorts of challenges that give visual testing a bad rap. But by leveraging a more sophisticated image comparison engine we can save ourselves a lot of time and frustration by side-stepping these issues entirely.

But before we get too far ahead of ourselves, let’s step through these additional scenarios to make sure we know what we’re up against.

An Example

In the last write-up we stepped through how to work with small pixel shifts (e.g., an element moving 1 pixel left or right on subsequent page loads) by increasing the mismatch tolerance (the value that WebdriverCSS uses to determine if the page has changed significantly).

Here is the code from it.


// filename: false_positives.js
var assert = require('assert');
var driver = require('webdriverio').remote({
  desiredCapabilities: {
    browserName: 'firefox'
  }
});
require('webdrivercss').init(driver, {
  updateBaseline: true,
  misMatchTolerance: 0.20
  }
);
driver
  .init()
  .url('http://the-internet.herokuapp.com/shifting_content/menu?mode=random&pixel_shift=1')
  .webdrivercss('body', {
    name: 'body',
    elem: 'body'
  }, function(err,res) {
      assert.ifError(err);
      assert.ok(res.body[0].isWithinMisMatchTolerance);
    })
  .end();

NOTE: For a full walk-through you can read the previous post here.

To briefly recap — this code loads an instance of Firefox, visits the example application, captures a baseline image, then refreshes the page, captures another image, and checks to see if there is a significant change between the baseline image and the second image.

For this example, our test code works well. It passes when the menu button shifts by a pixel (avoiding a false positive), and it fails when it shifts by 20 pixels (a legitimate failure). But when the same approach is applied to a slightly different example (e.g., an image shifting by a single pixel) the test will get tripped up.

Let’s change the URL to point to this new image example and see for ourselves.


// filename: false_positives.js
driver
  .init()
  .url('http://the-internet.herokuapp.com/shifting_content/image?mode=random&pixel_shift=1')
...

If we save this file and run it (e.g., node false_positives.js from the command-line) it will fail when it should have passed. Here’s the diff image from the failure:

Visual Diffs

Compared to a text menu button, an image accounts for a far greater number of pixels on the page. So when the image in this example moves a single pixel from its original location, that’s enough to trip the mismatch tolerance (which causes the test to fail). Simply put, the failure occurred because more pixels changed, even though the shift was just a single pixel to the left or right.
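
For a rough sense of scale (these numbers are illustrative, not measured from the example app): a 1000×600 page contains 600,000 pixels, so a 0.20% tolerance allows roughly 1,200 of them to change. A 300×200 image contains 60,000 pixels; if a one-pixel shift changes even a tenth of them, that’s 6,000 changed pixels – five times the tolerance. A short text label, by contrast, might occupy only about a thousand pixels in total, so even a full shift stays under the threshold.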

And it’s not that the image we’re testing is overly detailed. The same thing will occur with a simpler image.


// filename: false_positives.js
...
driver
  .init()
  .url('http://the-internet.herokuapp.com/shifting_content/image?mode=random&pixel_shift=1&image_type=simple')
...

Here’s the diff image from the failure:

Shifting Content

Conventional wisdom posits that we could simply increase the mismatch tolerance even higher. This is a bad idea and should be avoided since it can quickly make your automated visual tests go off the rails by opening up holes in your coverage that you’re not aware of.

Another Example

And that’s not to say that we don’t already have holes in our coverage.

In addition to tricky false positives, we also run the risk of missing legitimate failures. For example, let’s point our test at an example that has an occasional typo and see if we can catch it.

And just to be certain, let’s also set the mismatch tolerance back to its original default (reverting it from 0.20 to 0.05) — making the visual comparison stricter.


// filename: false_positives.js
...
require('webdrivercss').init(driver, {
  updateBaseline: true
  }
);
driver
  .init()
  .url('http://the-internet.herokuapp.com/typos')
...

If we save the file and run it (e.g., node false_positives.js from the command-line) it will run and pass. But it should have failed. It missed the typo entirely.

Now, granted, if the typo occurred elsewhere (e.g., at the beginning of the sentence) the test might catch it. But not if the typo swapped in a character with a pixel width similar to the original’s.

Missing typos may seem like a contrived example, but regardless, it’s a hole in our visual test coverage. And if a typo can slip through the cracks, then something else could too.

A Better Solution

With a more sophisticated image comparison engine like Applitools Eyes we can easily avoid these issues, since it doesn’t rely on thresholds the way our previous examples did. It ignores changes that aren’t visible to the human eye, but still identifies valid changes (no matter how tiny) in pages of all sizes. And thankfully, WebdriverCSS comes with support for it baked in.

NOTE: For more info on Applitools Eyes, check out their Quick Start Guide and take a spin for yourself. They have SDKs for a bunch of different languages and are constantly adding to that list.

You just need to create a free Applitools Eyes account (no credit card required) and grab your API key from your account dashboard. Once you have that, update the test setup for WebdriverCSS – removing the existing init configuration and replacing it with your key.


// filename: false_positives.js
...
require('webdrivercss').init(driver, {
    key: 'your API key goes here'
  }
);
...

After that, there are a couple additional tweaks to make.

Previously we just used 'body' for all of the test parameters. But now we want the test to be more descriptive, since the results will show up in the Applitools dashboard and we’ll want to be able to tell which result belongs to which test across multiple runs.

So let’s specify the name of the site (e.g., .webdrivercss('the-internet')) and the page name (e.g., name: 'typos') — the focus element will stay the same (e.g., elem: 'body').


// filename: false_positives.js
...
driver
  .init()
  .url('http://the-internet.herokuapp.com/typos')
  .webdrivercss('the-internet', {
    name: 'typos',
    elem: 'body'
...

We’ll also need to update our assertion to use the Applitools Eyes comparison engine (instead of the WebdriverCSS mismatch tolerance).


// filename: false_positives.js
...
  }, function(err,res) {
      assert.ifError(err);
      assert.equal(res['typos'].steps, res['typos'].matches, res['typos'].url);
    })
  .end();
...

Just like with the assertions in previous examples, res is a collection of results for the elements used in the test. The main difference here is that we’re performing an assert.equal (instead of assert.ok) to check that the number of test steps equals the number of steps that matched the baseline stored in Applitools. We’ve also passed custom output into the assertion (so a permalink to the Applitools job gets printed for us on failure).

If we save the file and run it (e.g., node false_positives.js from the command-line) the test will run, fail, and provide us with a URL (to the job in Applitools Eyes) in the terminal output. This failure can be ignored since it’s not actually a failure. It’s a side-effect of the Eyes platform capturing a new baseline image. It’s the second test run that we’ll want to pay attention to.

NOTE: If you don’t want to save this initial test run as the baseline, open the test in Applitools Eyes (e.g., either through URL provided or through the dashboard), click Reject, Save, and then re-run your test.

Now let’s run the test again. This time it will be compared against the baseline, fail, and provide us with a URL to the job in Eyes. When we view the result page, we can see that the failure was legitimate because it caught the typo.

Here is an image of the failure result.

Diffs with Applitools

One Last Example

Now, for closure, let’s point our test back at our shifting image example and see how Applitools Eyes holds up.


// filename: false_positives.js
...
driver
  .init()
  .url('http://the-internet.herokuapp.com/shifting_content/image?mode=random&pixel_shift=1')
  .webdrivercss('the-internet', {
    name: 'shifting image',
    elem: 'body'
  }, function(err,res) {
      assert.ifError(err);
      assert.equal(res['shifting image'].steps,
        res['shifting image'].matches,
        res['shifting image'].url)
    })
  .end();

Just like before, we’ll want to run the test twice. And on the second test run it will pass — clearing the previous hurdle of a false positive from the image shifting by a single pixel.

In Conclusion…

It’s worth noting (although not demonstrated in these last two examples) that Applitools Eyes handles other false positive scenarios (e.g., anti-aliasing, dynamic data, etc.) that would normally trip up your tests. And these examples merely scratch the surface of what the Eyes platform can do for your visual testing efforts — which I’ll cover in more depth in future posts.

For now, I’d say you’re armed with more than enough information to dig into automated visual testing on your own; regardless of whether you go with an open source or proprietary solution.


The post How To Handle Visual Testing False Positives – Part 2 appeared first on Automated Visual Testing | Applitools.

]]>
How To Handle Visual Testing False Positives – Part 1 https://applitools.com/blog/how-to-handle-visual-testing-false-positives/ Mon, 05 Jan 2015 15:56:00 +0000 http://162.243.59.116/2015/01/05/how-to-handle-visual-testing-false-positives/ The Problem One of the biggest hurdles to successful automated testing is making sure that when your tests fail, they’re legitimate failures. Getting false positives is a slippery slope that...

The post How To Handle Visual Testing False Positives – Part 1 appeared first on Automated Visual Testing | Applitools.

]]>

The Problem

One of the biggest hurdles to successful automated testing is making sure that when your tests fail, they’re legitimate failures. False positives are a slippery slope that can quickly become a death knell for any testing initiative.

This holds true in automated visual testing as well, with common issues like elements shifting by a pixel, dynamic content, and layouts that need to hold up across different screen sizes.

If left unchecked, they can quickly erode any momentum you have with automated visual testing. 

A Solution

With some simple adjustments to our test code, we can tackle these issues head on.

Let’s dig in with some examples.

An Example

In Getting Started with Visual Testing write-up we stepped through an example where an element moves to the left or right by a large margin (e.g., 20+ pixels). This is an obvious visual defect.

Let’s use the same test code and point it at an example where the element only moves a single pixel and see what happens.

var assert = require('assert');
var driver = require('webdriverio').remote({
  desiredCapabilities: {
    browserName: 'firefox'
  }
});
require('webdrivercss').init(driver, {
  updateBaseline: true
  }
);
driver
  .init()
  .url('http://the-internet.herokuapp.com/shifting_content/menu?mode=random&pixel_shift=1')
  .webdrivercss('body', {
    name: 'body',
    elem: 'body'
  }, function(err,res) {
      assert.ifError(err);
    })
  .end();

In case you missed the last write-up, this test creates an instance of Selenium, visits the page specified, grabs a screenshot of the page, and compares it to an initial baseline image. If there is a discrepancy between them, the test will fail. For a more detailed explanation of what’s happening, read this.

If we run this test (e.g., node false_positives.js from the command-line), it will see the single pixel offset and fail. That’s because the mismatch tolerance (the thing that WebdriverCSS uses to determine if there is a visual anomaly) is too sensitive by default for such subtle shifts.

To address this, we can increase the mismatch tolerance from its sensible default (e.g., from 0.05 to 0.20).


// filename: false_positives.js
...
require('webdrivercss').init(driver, {
  updateBaseline: true,
  misMatchTolerance: 0.20
  }
);
...

Keep in mind that changing the mismatch tolerance introduces risk into our test. Raising the mismatch tolerance will solve our immediate issue, but it can introduce other issues by creating a possible gap in our coverage. For example, if a page we’re testing is corrupt within this new tolerance (e.g., typo, missing icon, etc.), then our test could miss it. So use this sparingly, and with caution.

Alternatively, we could leave the mismatch tolerance alone and narrow our test’s focus to just the element that’s moving (instead of the page body). This would enable us to test the element regardless of its location, but it would strip us of the benefit of overall page coverage — so let’s not do that. The best approach would be to use a more sophisticated image comparison engine (which I’ll cover in the next write-up).
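
For illustration only, the narrowed check would look something like this – the '#menu' selector is an assumption for the sake of the sketch, not taken from the example app’s markup.

// filename: false_positives.js (a sketch of the narrowed-focus alternative)
...
driver
  .init()
  .url('http://the-internet.herokuapp.com/shifting_content/menu?mode=random&pixel_shift=1')
  .webdrivercss('menu', {
    name: 'menu',
    elem: '#menu' // an assumed selector for the shifting element
  }, function(err,res) {
      assert.ifError(err);
      assert.ok(res.menu[0].isWithinMisMatchTolerance);
    })
  .end();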

Since we’re dealing with a reasonably small bump to the mismatch tolerance (e.g., raising it from 0.05% to 0.20%), let’s stick with our tweak, save our file, and run it (e.g., node false_positives.js from the command-line). Now our test will run without getting tripped up by single-pixel offsets. And if we point the test back at the original example (with the element that moves by 20+ pixels), the test will still catch the visual bug.


// filename: false_positives.js
...
driver
  .init()
  .url('http://the-internet.herokuapp.com/shifting_content/menu?mode=random')

Another Example

Now if we take the same test code and point it at something else (e.g., an example that loads content dynamically), it will fail.


// filename: false_positives.js
...
driver
  .init()
  .url('http://the-internet.herokuapp.com/dynamic_content')
...

We accounted for pixel offsets by adjusting the mismatch tolerance. But the elements in this new example don’t shift; the test fails because the contents of the elements change on each page load. To account for this, we’ll need to selectively ignore the pieces of the page with dynamic content.

This can be done one of two ways: we can either specify an element to exclude (e.g., with a CSS selector) or use X and Y coordinates on the page. The coordinate approach works when a page offers few usable locators; using locators is the simpler and more scalable way. Fortunately, our example has good semantic markup, so let’s use locators. (For reference, a sketch of the coordinate form follows.)
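
Roughly, the coordinate-based form looks like this. Treat the key names and region values as assumptions based on my reading of the WebdriverCSS docs; verify them against the project’s README before relying on them.

// filename: false_positives.js (sketch; the exclude key names and values are assumptions)
...
  .webdrivercss('body', {
    name: 'body',
    elem: 'body',
    // ignore a rectangular region by page coordinates instead of a selector
    exclude: [{ x0: 0, y0: 120, x1: 640, y1: 320 }]
  })
...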


// filename: false_positives.js
...
driver
  .init()
  .url('http://the-internet.herokuapp.com/dynamic_content')
  .webdrivercss('body', {
    name: 'body',
    elem: 'body',
    exclude: ['#content > .row']
...

With the exclude: parameter we can specify more than one element. But we just need to exclude each of the content rows, which we can easily do with the #content > .row CSS selector.
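
And if there were several independent regions to ignore, exclude accepts a list of entries – for example (the second selector is purely illustrative):

// sketch: excluding multiple regions (the second selector is illustrative)
exclude: ['#content > .row', '.some-other-dynamic-region']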

Now when we run our test (e.g., node false_positives.js from the command-line), it will pass. You can see a screenshot of what it looks like with the exclusion here:

Dynamic content

One More Example

If we want to run this test against different page sizes to see if it holds up, it’s a simple matter of adding one more parameter to our check with the screen widths we care about (e.g., screenWidth).


// filename: false_positives.js
...
    exclude: ['#content > .row'],
    screenWidth: [320,640,960]
...

If we save this and run it (e.g., node false_positives.js from the command-line), the test will resize the browser for each of the screen widths specified, exclude the elements with dynamic content, and then perform visual checks for each of the sizes using the mismatch tolerance we specified — and pass.

In Conclusion…

Now that we’ve stepped through the basics of automated visual testing and some of the common issues you’ll run into, we’re ready to dig into the more interesting bits.

Stay tuned!

Read Dave’s next post in this series: How to Handle False Positives in Visual Testing – Part 2


The post How To Handle Visual Testing False Positives – Part 1 appeared first on Automated Visual Testing | Applitools.

]]>