Angie Jones Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/angie-jones/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.
Fri, 01 Dec 2023 19:21:17 +0000

The Top 10 Test Automation University Courses
https://applitools.com/blog/the-top-10-test-automation-university-courses/
Thu, 10 Nov 2022 16:34:00 +0000

Test Automation University, powered by Applitools

Test Automation University (also called “TAU”) is one of the best online platforms for learning testing and automation skills. TAU offers dozens of courses from the world’s leading instructors, and everything is available for free. The platform is proudly powered by Applitools. As of November 2022, nearly 140,000 students have signed up! TAU has become an invaluable part of the testing community at large. Personally, I know many software teams who use TAU courses as part of their internal onboarding and mentorship programs.

So, which TAU courses are currently the most popular? In this list, we’ll count down the top 10 most popular courses, ranked by the total number of course completions over the past year. Let’s go!

#10: Selenium WebDriver with Java

Selenium WebDriver with Java course badge
Angie Jones

Starting off the list at #10 is Selenium WebDriver with Java by none other than Angie Jones. Even with the rise of alternatives like Cypress and Playwright, Selenium WebDriver continues to be one of the most popular tools for browser automation, and Java continues to be one of its most popular programming languages. Selenium WebDriver with Java could almost be considered the “default” choice for Web UI test automation.

In this course, Angie digs deep into the WebDriver API, teaching everything from the basics to advanced techniques. It’s a great course for building a firm foundation in automation with Selenium WebDriver.

#9: Python Programming

Python Programming course badge
Jess Ingrassellino

#9 on our list is one of our programming courses: Python Programming by Jess Ingrassellino. Python is hot right now. On whatever ranking, index, or article you find these days for the “most popular programming languages,” Python is right at the top of the list – often vying for the top spot with JavaScript. Python is also quite a popular language for test automation, with excellent frameworks like pytest, libraries like requests, and bindings for browser automation tools like Selenium WebDriver and Playwright.

In this course, Dr. Jess teaches programming in Python. This isn’t a test automation course – it’s a coding course that anyone could take. She covers both structured programming and object-oriented principles from the ground up. After two hours, you’ll be ready to start coding your own projects!
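To give a flavor of the two styles the course covers, here is a minimal sketch in Python (the names and data are illustrative, not taken from the course itself):

```python
# Structured programming: plain functions and control flow.
def average(scores):
    total = 0
    for score in scores:
        total += score
    return total / len(scores)

# Object-oriented programming: state and behavior bundled into a class.
class Student:
    def __init__(self, name, scores):
        self.name = name
        self.scores = scores

    def passed(self, threshold=60):
        return average(self.scores) >= threshold

dana = Student("Dana", [70, 85, 90])
print(dana.name, average(dana.scores), dana.passed())
```

Even a small exercise like this touches functions, loops, classes, and default arguments, which is roughly the ground the course covers.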

#8: API Test Automation with Postman

API Test Automation with Postman course badge
Beth Marshall

The #8 spot belongs to API Test Automation with Postman by Beth Marshall. In recent years, Postman has become the go-to tool for building and testing APIs. You could almost think of it as an IDE for APIs. Many test teams use Postman to automate their API test suites.

Beth walks through everything you need to know about automating API tests with Postman in this course. She covers basic features, mocks, monitors, workspaces, and more. Definitely take this course if you want to take your API testing skills to the next level!

#7: Introduction to Cypress

Intro to Cypress course badge
Gil Tayar

Lucky #7 is Introduction to Cypress by Gil Tayar. Cypress is one of the most popular web testing frameworks these days, even rivaling Selenium WebDriver. With its concise syntax, rich debugging features, and JavaScript-native approach, it’s become the darling end-to-end test framework for frontend developers.

It’s no surprise that Gil’s Cypress course would be in the top ten. In this course, Gil teaches how to set up and run tests in Cypress from scratch. He covers both the Cypress app and the CLI, and he even covers how to do visual testing with Cypress.

#6: Exploring Service APIs through Test Automation

Exploring Service APIs through Test Automation course badge
Amber Race

The sixth most popular TAU course is Exploring Service APIs through Test Automation by Amber Race. API testing is just as important as UI testing, and this course is a great way to start learning what it’s all about. In fact, this is a great course to take before API Test Automation with Postman.

This course was actually the second course we launched on TAU. It’s almost as old as TAU itself! In it, Amber shows how to explore APIs first and then test them using the POISED strategy.

#5: IntelliJ for Test Automation Engineers

IntelliJ for Test Automation Engineers course badge
Corina Pip

Coming in at #5 is IntelliJ for Test Automation Engineers by Corina Pip. Java is one of the most popular languages for test automation, and IntelliJ is arguably the best and most popular Java IDE on the market today. Whether you build frontend apps, backend services, or test automation, you need proper development tools to get the job done.

Corina is a Java pro. In this course, she teaches how to maximize the value you get out of IntelliJ – and specifically for test automation. She walks through all those complicated menus and options you may have ignored otherwise to help you become a highly efficient engineer.

#4: Java Programming

Java Programming course badge
Angie Jones

Our list is winding down! At #4, we have Java Programming by Angie Jones. For the third time, a Java-based course appears on this list. That’s no surprise, as we’ve said before that Java remains a dominant programming language for test automation.

Like the Python Programming course at spot #9, Angie’s course is a programming course: it teaches the fundamentals of the Java language. Angie covers everything from “Hello World” to exceptions, polymorphism, and the Collections Framework. Clocking in at just under six hours, this is also one of the most comprehensive courses in the TAU catalog. Angie is also an official Java Champion, so you know this course is top-notch.

#3: Introduction to JavaScript

Introduction to JavaScript course badge
Mark Thompson

It’s time for the top three! The bronze medal goes to Introduction to JavaScript by Mark Thompson. JavaScript is the language of the Web, so it should be no surprise that it is also a top language for test automation. Popular test frameworks like Cypress, Playwright, and Jest all use JavaScript.

This is the third programming course TAU offers, and also the top one in this ranking! In this course, Mark provides a very accessible onramp to start programming in JavaScript. He covers the rock-solid basics: variables, conditionals, loops, functions, and classes. These concepts apply to all other programming languages, too, so it’s a great course for anyone who is new to coding.

#2: Web Element Locator Strategies

Web Element Locator Strategies course badge
Andrew Knight

I’m partial to the course in second place – Web Element Locator Strategies by me, Andrew Knight! This was the first course I developed for TAU, long before I ever joined Applitools.

In whatever test framework or language you use for UI-based test automation, you need to use locators to find elements on the page. Locators can use IDs, CSS selectors, or XPaths to uniquely identify elements. This course teaches all the tips and tricks to write locators for any page, including the tricky stuff!
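To illustrate the idea outside of any particular framework, here is a sketch using Python's standard-library ElementTree, which supports a small subset of XPath (the markup below is hypothetical, standing in for a real page):

```python
import xml.etree.ElementTree as ET

# A simplified login form, standing in for a real page.
page = ET.fromstring("""
<form>
  <input id="username" type="text" />
  <input id="password" type="password" />
  <button class="btn primary">Log In</button>
</form>
""")

# Best case: the element has a unique ID.
username = page.find(".//input[@id='username']")

# Fallback: locate by another stable attribute when no ID exists.
login_button = page.find(".//button[@class='btn primary']")

print(username.get("type"), login_button.text)  # text Log In
```

The same thinking carries over to Selenium, Cypress, or any other UI tool: prefer unique IDs, and fall back to stable attributes before resorting to brittle positional XPaths.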

#1: Setting a Foundation for Successful Test Automation

Setting a Foundation for Successful Test Automation course badge
Angie Jones

It should come as no surprise that the #1 course on TAU in terms of course completions is Setting a Foundation for Successful Test Automation by Angie Jones. This course was the very first course published to TAU, and it is the first course in almost all the Learning Paths.

Before starting any test automation project, you must set clear goals with a robust strategy that meets your business objectives. Testing strategies must be comprehensive – they include culture, tooling, scaling, and longevity. While test tools and frameworks will come and go, common-sense planning will always be needed. Angie’s course is a timeless classic for teams striving for success with test automation.

What can we learn from these trends?

A few things are apparent from this list of the most popular TAU courses:

  1. Test automation is clearly software development. All three of TAU’s programming language courses – Java, JavaScript, and Python – are in the top ten for course completions. A course on using IntelliJ, a Java IDE, also made the top ten. They prove how vital good development skills are for successful test automation.
  2. API testing is just as important as UI testing. Two of the courses in the top ten focused on API testing.
  3. Principles are more important than tools or frameworks. Courses on strategy, technique, and programming rank higher than courses on specific tools and frameworks.

What other courses are popular?

Get a Jump Into GitHub Actions
https://applitools.com/blog/jump-into-github-actions/
Tue, 02 Mar 2021 16:24:17 +0000


On January 27, 2021, Angie Jones of Applitools hosted Brian Douglas, aka “bdougie”, Staff Developer Advocate at GitHub, for a webinar to help you jump into GitHub Actions. You can watch the entire webinar on YouTube. This blog post goes through the highlights for you.

Introductions

Angie Jones serves as Senior Director of Test Automation University and as Principal Developer Advocate at Applitools. She tweets at @techgirl1908, and her website is https://angiejones.tech.

Brian Douglas serves as the Staff Developer Advocate at GitHub. Insiders know him as the “Beyoncé of GitHub.” He blogs at https://bdougie.live, and tweets as @bdougieYO.

They ran their webinar as a question-and-answer session. Here are some of the key ideas covered.

What Are GitHub Actions?

Angie’s first question asked Brian to jump into GitHub Actions.

Brian explained that GitHub Actions is a feature you can use to automate actions in GitHub. GitHub Actions let you code event-driven automation inside GitHub. You build monitors for events, and when those events occur, they trigger workflows. 

If you’re already storing your code in GitHub, you can use GitHub Actions to automate anything you can access via webhook from GitHub. As a result, you can build and manage all the processes that matter to your code without leaving GitHub. 

Build Test Deploy

Next, Angie asked about Build, Test, Deploy as what she hears about most frequently when she hears about GitHub Actions.

Brian mentioned that the term, GitOps, describes the idea that a push to GitHub drives some kind of activity. A user adding a file should initiate other actions based on that file. External software vendors have built their own hooks to drive things like continuous integration with GitHub. GitHub Actions simplifies these integrations by using native code now built into GitHub.com.

Brian explained how GitHub Actions can launch a workflow. He gave the example of a team that has created a JavaScript test suite in Jest, run with either npm test or jest. With GitHub Actions workflows, the development team can automate actions based on a triggering event. In this case, pushing the JavaScript file can drive GitHub to execute the tests.
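A minimal workflow along those lines might look like this (the file name, action versions, and steps are illustrative, not taken from the webinar):

```yaml
# .github/workflows/test.yml
name: Run Jest tests
on: push          # the triggering event: any push to the repository
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2       # fetch the repository code
      - uses: actions/setup-node@v2     # install Node.js
        with:
          node-version: 14
      - run: npm ci                     # install dependencies
      - run: npm test                   # run the Jest suite
```

Committing this one file to the repository is all it takes; GitHub runs the job on every push and reports the result on the commit.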

Get Back To What You Like To Do

Angie pointed out that this catchphrase, “Get back to what you like to do,” caught her attention. She spends lots of time in meetings and doing other tasks when she’d really just like to be coding. So, she asked Brian, how does that work?

Brian explained that, as teams grow, so much more of the work becomes coordination and orchestration. Leaders have to answer questions like:

  • What should happen during a pull request? 
  • How do we automate testing? 
  • How do we manage our build processes?

When engineers have to answer these questions with external products and processes, they stop coding. With GitHub Actions, Brian said, you can code your own workflow controls. You can ensure consistency by coding the actions yourself. And, by using GitHub Actions, you make the processes transparent for everyone on the team.

Do you want a process to call Applitools? That’s easy to set up. 

Brian explained that GitHub hosted a GitHub Actions Hackathon in late 2020. The team coded the controls for the submission process into the hackathon. You can still check it out at githubhackathon.com.

The entire submission process got automated to check for all the proper files being included in a submission. The code recognized completed submissions on the hackathon home page automatically.

Brian then gave the example of work he did on the GitHub Hacktoberfest in October. For the team working on the code, Brian developed a custom workflow that allowed any authenticated individual to sign up to address issues exposed in the Hackathon. Brian’s code latched onto existing authentication code to validate that individuals could participate in the process and assigned their identity to the issue. As the developer, Brian built the workflow for these tasks using GitHub Actions.

What can you automate? Inform your team when a user opens a pull request. Send a tweet when the team releases a build. Anything with a webhook in GitHub can be automated with GitHub Actions. For example, you can even automate the nag emails that get sent out when a pull request review does not complete within a specified time.

Common Actions

Angie then asked about the most common actions that Brian sees users running.

Brian summarized by saying, basically, continuous integration (CI). The most common use is ensuring that tests get run against code as it gets checked in to ensure that test suites get applied. You can have tests run when you push code to a branch, push code to a release branch or do a release, or even when you do a pull request.

While test execution gets run most frequently, there are plenty of other tasks that one can automate. Brian did something specific to assign gifts to team members who reviewed pull requests. He also used a cron job to automate a GitHub Action which opened a global team issue each Sunday in the US (which is already Monday in Australia) and assigned all the team members to it. Each member needed to explain what they were working on. This way, the globally-distributed team could stay on top of their work together without a meeting that would occur at an awkward time for at least one group of team members.

Brian talked about people coming up with truly creative use cases – like someone linking IoT devices to webhooks in existing APIs using GitHub Actions.

But the cool part of these actions is that most of them are open source and searchable. Anyone can inspect actions and, if they don’t like them, modify them. If a repo includes GitHub Actions, they’re searchable.

On github.com/bdougie, you can see existing workflows that Brian has already put together.

Jump Into GitHub Actions – What Next?

I shared some of the basic ideas in Brian’s conversation with Angie. If you want to jump into GitHub Actions in more detail, you can check out the full webinar and the slides in Addie Ben Yehuda’s summary blog for the webinar. That blog also includes a number of Brian’s links, several of which I include here as well:

Enjoy jumping into GitHub Actions!

Featured Photo by Aziz Acharki on Unsplash

Intro to GitHub Actions for Test Automation — with Angie Jones and Brian Douglas [webinar recording]
https://applitools.com/blog/github-action/
Mon, 01 Feb 2021 11:42:26 +0000

Intro to GitHub Actions for Test Automation with Angie Jones and Brian Douglas

Watch this on-demand webinar to learn how the new GitHub Actions can help you build, test and deploy faster and easier!

Curious about GitHub Actions and how this feature can be used for test automation? So is test automation guru Angie Jones!

GitHub Actions makes it easy to automate all your software workflows, now with CI/CD. Build, test, and deploy your code right from GitHub. Make code reviews, branch management, and issue triaging work the way you want.

Watch this on-demand webinar, where Angie Jones chats with Brian Douglas — staff developer advocate at GitHub — about this exciting new offering and how it can be utilized for test automation.

Brian also showed a demo illustrating how easy it is to add automated tests to a project and kick them off with GitHub Actions!

Angie’s Slide Deck

https://slides.com/angiejones/github-actions

Full Webinar Recording

Additional Resources and Reading Materials

— HAPPY TESTING —

10 Portfolio Projects for Aspiring Automation Engineers
https://applitools.com/blog/project-portfolio-for-testers/
Thu, 03 Dec 2020 07:07:24 +0000

Angie Jones describes a portfolio of ten projects that help even novice test engineers demonstrate their skills to get the job.


Those looking to break into the test automation field have difficulty doing so because of lack of experience. One way to gain experience is, of course, to study and practice on your own. But how do you demonstrate your newfound knowledge to employers?

Other professionals, such as front-end developers, create portfolios to highlight their skills, and you can do the same as automation engineers!

Here are 10 projects for your test automation portfolio that will help you stand out among the competition.

1. Web browser automation

Web automation is by far the most common and sought-after form of test automation. If you’re looking to break into test automation, this is an absolute must-have for your portfolio.

Be sure to go beyond a basic login flow. Instead, show full scenarios that require your code to interact with multiple pages.

This project should demonstrate your ability to find element locators and interact with various types of elements such as dropdown menus, checkboxes, text fields, buttons, links, alerts, file upload widgets, and frames.

Also, be sure you’re writing clean test code and utilizing design patterns such as the Page Object Model or the Screenplay Pattern.

Sites to practice against:

2. Mobile automation

The demand for mobile test automation engineers has increased over the years as the popularity of mobile apps has soared. Having experience here can certainly work in your favor.

Your portfolio should demonstrate automated testing against both iOS and Android apps. Using Appium to create one project that works for both iOS and Android would be great. Using tools such as Apple’s XCUITest or Google’s Espresso is good as well. But if you go this route, I recommend doing at least two projects (one of each), since each supports only one mobile operating system.

No matter which framework you use, you’ll want to demonstrate the same element interactions as you did in your web automation project, but also mobile-specific gestures such as swiping and pinching.

Apps to practice with; download any of these to use in your project:

3. Visual automation

After making your web and mobile projects, fork them and add visual testing capabilities to them. You’ll quickly see just how much your tests are missing because they weren’t enabled to verify the appearance of your app.

Visual testing is a skill being listed on a number of job postings, and having this skill will really help you shine against the competition.

4. API automation

With the rise of microservices, IoT applications, and public-facing APIs, the demand for automation engineers who know how to test APIs has become substantial. So definitely add an API testing project to your portfolio. (Here’s a free class on how to test APIs to get you started.)

Within this project, be sure to demonstrate a variety of API methods, with GET and POST as a minimum. Use APIs that require parameters or request bodies, and also return complex responses with multiple objects and arrays.

For bonus points, use advanced verification techniques such as deserialization or approval testing. Also, demonstrating how to mock API responses would be a nice bonus.
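As a small illustration of verifying a deserialized response (the payload here is invented, and only Python's standard library is used):

```python
import json

# A hypothetical API response body with nested objects and arrays.
raw = """
{
  "user": {"id": 42, "name": "Ada"},
  "orders": [
    {"id": "A-1", "total": 19.99},
    {"id": "A-2", "total": 5.00}
  ]
}
"""

response = json.loads(raw)  # deserialize JSON into plain Python objects

# Assert on fields deep inside the structure, not just a status code.
assert response["user"]["name"] == "Ada"
assert len(response["orders"]) == 2
assert round(sum(o["total"] for o in response["orders"]), 2) == 24.99
```

Assertions that reach into nested objects and arrays like this are exactly the kind of verification that makes an API project stand out over one that only checks HTTP status codes.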

APIs to practice against:

5. BDD specification automation

Many teams are practicing behavior-driven development (BDD) and automating tests based on the specifications produced. You’ll want to demonstrate your experience with this and how you can jump in and hit the ground running.

For this portfolio project, be sure to not only show the mapping between feature files and step definitions, but also demonstrate how to share state between steps via dependency injection.

Also, be extremely careful when writing your feature files. Long, verbose feature files will hurt your portfolio more than help. Make the effort to write good, concise Gherkin.

6. Data-driven automation

Your practice projects may use only a small amount of test data, so it’s easy to store that data inside the source code. However, on production development teams, you’ll have hundreds or even thousands of automated tests. To keep up with all this data, many teams adopt a data-driven testing approach.

I recommend adding this to at least one of your projects to demonstrate your ability to programmatically read data from an external source, such as a spreadsheet file.

7. Database usage

Speaking of being able to access data from external sources, it’s a good idea to add a project that interacts with a database. I recommend writing queries within your code to both read and write from a database, and use this within the context of a test.

For example, you can read from the database to gather the expected results of a search query. Or you can write to a database to place your application in a prerequisite state before proceeding to test.
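Both directions can be sketched with Python's built-in sqlite3 module (the schema and data are invented, and an in-memory database stands in for the application's real one):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price REAL)")

# Write: put the application into a known state before the test runs.
db.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [("widget", 9.99), ("gadget", 24.50)],
)
db.commit()

# Read: gather the expected result of a search query from the database.
expected = [name for (name,) in
            db.execute("SELECT name FROM products WHERE price < 10")]
print(expected)  # ['widget']
```

In a real project, the connection string and schema would come from the application under test; the point is showing you can both seed state and derive expected results in code.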

8. Multiple languages and libraries

Writing all of your portfolio projects in one programming language is okay; however, automation engineers often need to dabble in multiple languages.

To make yourself more marketable, try using a different language for a few of your projects.

Also switch it up a bit and try a few other automation libraries as well as assertion libraries. For example, maybe do a project with Selenium WebDriver in Java and JUnit, and another project with Cypress in JavaScript and Mocha.

I know this sounds daunting, but you’ll find that some of the architecture and design patterns in test automation are universal. This exercise will really solidify your understanding of automation principles in general.

9. Accessibility automation

Automating accessibility testing has always been needed but recently has become extremely important for companies. There have been legal battles where companies have been sued because their websites were not accessible to those with disabilities.

Demonstrating that you are able to do test automation for accessibility will give you a great advantage when applying for jobs.

You can use the same sites/apps you used for your web and mobile projects to demonstrate accessibility testing.

10. Performance testing

Last but not least, you should consider adding a performance testing project to your portfolio.

Nonfunctional testing such as performance is a niche skill that many automation engineers do not have. Adding this to your portfolio will help you be perceived as a unicorn who really stands out from the crowd.

Presenting the Portfolio

GitHub

Be sure to put all of your projects on GitHub so employers can easily access and review your code. However, be careful to hide secret keys. This will give you bonus points, since it shows another level of maturity.

Website

Create a website that highlights each of your portfolio projects. You don’t have to build the website yourself; you can use a common CMS such as WordPress to quickly get your portfolio up and visible.

Each project highlight should include a paragraph or bullet points explaining what you’ve done in the project and the tools and programming language used.

Resume

Include a link to your portfolio on your resume, and feel free to list your portfolio projects under the “Experience” section of your resume.

While this is not traditional work experience, it shows that you are self-driven, passionate, and competent to break into the test automation field.

Interview

During your interviews, be sure to mention all of the projects you have worked on. Draw from your experiences with building the projects to be able to answer the questions. Also brush up on other concepts around testing and development that may come up during the interview.

Good luck!

The original version of this post can be found at TechBeacon.com.

Header Photo by Shahadat Rahman on Unsplash

Test Driving Selenium 4 – with Angie Jones [webinar recording]
https://applitools.com/blog/selenium-4-webinar/
Mon, 19 Oct 2020 11:36:21 +0000

Selenium 4 webinar - Angie Jones

The Selenium WebDriver maintainers have been hard at work on a brand-new version: Selenium 4!

Some of the features are already available in alpha mode — which allows us the opportunity to try it out, provide feedback, and even pitch in with development.

Watch this webinar, where Angie Jones explored some of the most promising new capabilities of Selenium 4.

Angie also took for a test-drive some of its new features such as Relative Locators, Chrome Devtools Protocol (CDP), Window Management, and more.

You know Angie – so you know code samples were provided! 🙂

Angie’s Slide Deck

https://slides.com/angiejones/selenium4/fullscreen

Full Webinar Recording

Additional Selenium 4 Resources

— HAPPY TESTING —

UI Tests In CICD – Webinar Review
https://applitools.com/blog/ui-tests-in-cicd/
Fri, 24 Apr 2020 23:34:46 +0000


What does it take to add UI tests in your CICD pipelines?

On March 12, Angie Jones, Senior Developer Advocate at Applitools, sat down with Jessica Deen, Senior Cloud Advocate at Microsoft, for a webinar to discuss their approaches to automated testing and CI.

Angie loves to share her experiences with test automation. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, as well as writing tutorials and blog posts on angiejones.tech.

As a Master Inventor, Angie is known for her innovative and out-of-the-box thinking style which has resulted in more than 25 patented inventions in the US and China. In her spare time, Angie volunteers with Black Girls Code to teach coding workshops to young girls in an effort to attract more women and minorities to tech.

Jessica’s work at Microsoft focuses on Azure, Containers, OSS, and, of course, DevOps. Prior to joining Microsoft, she spent over a decade as an IT Consultant / Systems Administrator for various corporate and enterprise environments, catering to end users and IT professionals in the San Francisco Bay Area.

Jessica holds two Microsoft Certifications (MCP, MSTS), 3 CompTIA certifications (A+, Network+, and Security+), 4 Apple Certifications, and is a former 4-year Microsoft Most Valuable Professional for Windows and Devices for IT.

The Talk

Angie and Jessica broke the talk into three parts. First, Angie would discuss factors anyone should consider in creating automated tests. Second, Angie and Jessica would demonstrate writing UI tests for a test application.  Finally, they would work on adding UI tests to a CI/CD pipeline.

Let’s get into the meat of it.

Four Factors to Consider in Automated Tests

Angie first introduced the four factors you need to consider when creating test automation:

  • Speed
  • Reliability
  • Quantity
  • Maintenance

She went through each in turn.

Speed

Angie started off by making this point:

“When your team checks in code, they want to know if the check-in is good as quickly as possible. Meaning, not overnight, not hours from now.”

Angie points out that the talk covers UI tests primarily because lots of engineers struggle with UI testing. However, most of your check-in tests should not be UI tests, because they run relatively slowly. From this, she referred to the testing pyramid idea:

  • Most of your tests are unit tests – they run the fastest and should pass (especially if written by the same team that wrote the code).
  • The next largest group is either system-level or business-layer tests. These tests don’t require a user interface and show the functionality of units working together.
  • UI tests have the smallest number of total tests and should provide sufficient coverage to give you confidence in the user-level behavior.

While UI tests take time, Angie points out that they are the only tests showing user experience of your application. So, don’t skimp on UI tests.

Having said that, when UI tests become part of your build, you need to make sure that your build time doesn’t become bogged down with your UI tests. If all your conditions run over 15 minutes, that’s way too long. 

To keep your test time to a minimum, Angie suggests running UI tests in parallel. To determine how to split your suite into parallel runs, give yourself a time limit. Let’s say your build needs to complete in five minutes. Once you have a time limit, you can figure out how many parallel tests to set up. With the 15-minute example, you might need to divide into three or more parallel runs.
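The arithmetic behind that split is simple; a quick sketch with the numbers from the example above:

```python
import math

serial_minutes = 15   # total UI test time if everything runs one after another
budget_minutes = 5    # how long the build is allowed to take

shards = math.ceil(serial_minutes / budget_minutes)
print(shards)  # 3 parallel runs needed to fit the budget
```

In practice you would add headroom for startup costs and uneven test lengths, so treat this as a lower bound.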

Reliability

Next, you need reliable tests. Dependable. Consistent. 

Unreliable tests interfere with CI processes. False negatives, said Angie, plague your team by making them waste time tracking down errors that don’t exist. False positives, she continues, corrupt your product by permitting the check-in of defective code. And, false positives corrupt your team because bugs found later in the process interfere with team cohesion and team trust. 

For every successful CICD team, check-in success serves as the standard for writing quality code. You need reliable tests.

How do you make your tests reliable?

Angie has a suggestion that you make sure your app includes testability – which involves you leaning on your team. If you develop code, grab one of your test counterparts. If you test, sit down with your development team. Take the opportunity to discuss app testability.

What makes an app testable? Identifiers. Any test runner uses identifiers to control the application. And, you can also use identifiers to validate outputs. So, a consistent regime to create identifiers helps you deliver consistency. 

If you lack identifiers, you get stuck with CSS selectors or XPath selectors. Those can get messy – especially over time.

Another way to make your app testable, Angie says, requires code that lets your test set initial conditions. If your UI tests depend on certain data values, then you need code to set those values prior to running those tests. Your developers need to create that code – via API or stored procedure – to ensure that the tests always begin with the proper conditions. This setup code can help you create the parallel tests that help your tests run more quickly.

You can also use code to restore conditions after your tests run – leaving the app in the proper state for another test.

Quantity

Next, Angie said, you need to consider the number of tests you run.

There is a common misconception that you need to automate every possible test condition you can think about, she said. People get into trouble trying to do this in practice.

First, lots of tests increase your test time. And, as Angie said already, you don’t want longer test times.

Second, you end up with low value as well as high-value UI tests.  Angie asks a question to help triage her tests:

“Which test would I want to stop an integration or deployment? If I don’t want this test to stop a deployment, it doesn’t get automated. Or maybe it’s automated, but it’s run like once a day on some other cycle, not on my CICD.”

Angie also asks about the value of the functionality:

“Which test exercises critical, core functionality? Those are the ones you want in there. Which tests cover areas of my application that have a history of failing? You’re nervous anytime you have to touch that code. You want some tests around that area, too.”

Lastly, Angie asks: which tests provide information already covered by other tests in the pipeline? Many people forget to think about total coverage. They create repetitive tests and leave them in the pipeline. And, as many developers know, a single check-in can trigger multiple failures simply because one code error is exercised – and fails – by several redundant tests.

“Don’t be afraid to delete tests,” Angie said. If it’s redundant, get rid of it, and reduce your overall test code maintenance. She talked about how long it took her to become comfortable with deleting tests, but she appreciates the exercise now. 

Maintenance

“Test code is code,” Angie said. “You need to write it with the same rules, the same guidelines, the same care that you would any production code.”

Angie continued, saying that people ask, “‘Well, Angie, why do I need to be so rigorous with my test code?’”

Angie made the point that test code monitors production code. In your CICD development, the state of the build depends on test acceptance. If you build sloppy test code, you run the risk of false positives and false negatives.

As your production code changes, your test code must change as well. The sloppier your test code, the more difficult time you will have in test maintenance. 

Writing test code with the same care as you write production gives you the best chance to keep your CICD pipeline in fast, consistent delivery. Alternatively, Angie said, if your test code stays a mess, you will have a tendency to avoid code maintenance. Avoiding maintenance will lead to untrustworthy builds. 

Writing UI Tests – Introduction

Next, Angie introduced the application she and Jessica were using for their coding demonstration. The app – a chat app – looks like this:

The welcome screen asks you to enter your username and click “Start Chatting” – the red button. Once you have done so, you’re in the app. Going forward, you enter text and click the “Send” button and it shows up on a chat screen along with your username. Other users can do the same thing.

With this as a starting point, Angie and Jessica began the process of test writing. 

Writing UI Tests – Coding Tests

Angie and Jessica were on a LiveShare of code, which looked like this:

From here, Angie started building her UI tests for the sign-in functionality. And, because she likes to code in Java, she coded in Java. 

All the objects she used were identified in the BaseTests class she inherited.

Her full code to sign-in looked like this:

public class ChattyBotTests extends BaseTests {
  private ChatPage chatPage;

  @Test
  public void newSession(){
     driver.get(appUrl);
     homePage.enterUsername("angie");
     chatPage = homePage.clickStartChatting();
     validateWindow();
  }

The test code gets the URL previously defined in the BaseTests class, fills in the username box with “angie”, and clicks the “Start Chatting” button. Finally, Angie added the validateWindow() method inherited from BaseTests, which uses Applitools visual testing to validate the new screen after the Start Chatting button has been clicked.

Next, Angie wrote the code to enter a message, click send message, and validate that the message was on the screen.

  @Test
  public void enterMessage(){
     chatPage.sendMessage("hello world");
     validateWindow();
  }

The inherited chatPage.sendMessage method both enters the text and clicks the Send Message button. validateWindow() again checks the screen using Applitools.

Are these usable as-is for CICD? Nope.

Coding Pre-Test Setup

If we want to run tests in parallel, these tests, as written, block parallel operation, since enterMessage() depends on newSession() having run previously.

To solve this, Angie creates a pre-test startSession() method that runs before each test. It includes the first three lines of newSession(), which go to the app URL, enter “angie” as the username, and click the “Start Chatting” button. Next, Angie modifies her newSession() test so all it does is the validation.

  @Before
  public void startSession(){
     driver.get(appUrl);
     homePage.enterUsername("angie");
     chatPage = homePage.clickStartChatting();
  }

  @Test
  public void newSession(){
     validateWindow();
  }

With this @Before setup, Angie can create independent tests.

Adding Multi-User Test

Finally, Angie added a multi-user test. In this test, she assumed the @Before gets run, and her new test looked like this:

  @Test
  public void multiPersonChat(){

     //Angie sends a message
     chatPage.sendMessage("hello world");

     //Jessica sends a message
     WindowUtils.openNewTab(driver, appUrl);
     homePage.enterUsername("jessica");
     chatPage = homePage.clickStartChatting();
     chatPage.sendMessage("goodbye world");
     WindowUtils.switchToTab(driver, 1);
     validateWindow();
  }

Here, user “angie” sends the message “hello world”. Then, Angie codes the browser to:

  • open a new tab for the app URL,
  • create a new chat session for “jessica”,
  • have “jessica” send the message “goodbye world”,
  • switch back to the original tab, and
  • validate the window.

Integrating UI Tests Into CICD

Now, it was Jessica’s turn to control the code. 

Before she got started coding, Jessica shared her screen from Visual Studio Code, to demonstrate the LiveShare feature of VS Code:

Angie and Jessica were working on the same file using LiveShare. LiveShare highlights Angie’s cursor on Jessica’s screen. 

When Angie selects a block of text, the text gets highlighted on Jessica’s screen.

This extension to Visual Studio Code makes it easy to collaborate on coding projects remotely. It’s available for download on the Visual Studio Code Marketplace. It’s great for pair programming when compared with remote screen share.

To begin the discussion of using these tests in CICD, Jessica started describing the environment for running the tests from a developer perspective versus a CICD perspective. A developer might imagine running locally, with IntelliJ or a command line opening up browser windows. In contrast, CICD needs to run unattended. So, we need to consider running headless.

Jessica showed how she coded for different environments in which she might run her tests.

Her code defines the environment via a variable called runWhere, which can take one of three values:

  • local – uses a ChromeDriver
  • pipeline – uses a dedicated build server and sets the options --headless and --no-sandbox for ChromeDriver (note: for Windows you add the option --disable-gpu)
  • container – instructs the driver to be a RemoteWebDriver based on the selenium_hub remote URL and passes the --headless and --no-sandbox Chrome options
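Jessica's actual code was shown only on screen. A rough sketch of the RUNWHERE branching might look like the following – only the option-building is shown, with the actual WebDriver construction (ChromeDriver vs. RemoteWebDriver) omitted so the sketch stays self-contained; the method name and structure are assumptions:

```java
import java.util.List;

// Rough sketch of the RUNWHERE branching Jessica described.
// Real code would feed these args into ChromeOptions and then
// construct either a ChromeDriver or a RemoteWebDriver.
public class RunWhere {
    static List<String> chromeArgs(String runWhere) {
        switch (runWhere) {
            case "local":
                return List.of(); // plain ChromeDriver, no extra flags
            case "pipeline":  // headless on a dedicated build server
            case "container": // RemoteWebDriver against selenium_hub, same flags
                return List.of("--headless", "--no-sandbox");
            default:
                throw new IllegalArgumentException("Unknown RUNWHERE: " + runWhere);
        }
    }

    public static void main(String[] args) {
        // Defaults to "local" when the variable is unset
        System.out.println(RunWhere.chromeArgs(
                System.getenv().getOrDefault("RUNWHERE", "local")));
    }
}
```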

Testing Locally

First, Jessica needed to verify that the tests ran using the local settings.

Jessica set the RUNWHERE variable to ‘local’ using the command

export RUNWHERE=local

She had already exported other settings, such as her Applitools API Key, so she could use Applitools.

Since Jessica was already in her visual test folder, she ran her standard maven command:

mvn -f visual_tests/pom.xml clean test

The tests ran as expected with no errors. The test opened up a local browser window and she showed the tests running.

Testing Pipeline

Next, Jessica set up to test her pipeline environment settings. 

She changed the RUNWHERE variable using the command:

export RUNWHERE=pipeline

Again, she executed the same maven command:

mvn -f visual_tests/pom.xml clean test

The tests ran as expected using her pipeline settings, meaning they completed without opening a browser window on her local machine.

This matters because whatever CICD pipeline you use – Azure DevOps, GitHub Actions, Travis CI, or any traditional non-container-based CICD system – will want this headless interaction, which keeps a GUI from opening up and possibly throwing an error.

Once these passed, Jessica moved on to testing with containers.

Testing Containers

Looking back, the container-based tests used a call to RemoteWebDriver, which in turn called selenium_hub:

Selenium_hub let Jessica spin up whatever browser she wanted. To specify what she wanted, she used a docker-compose file, docker-compose.yaml:
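The screenshot of Jessica's compose file isn't reproduced here, but a Selenium 3-era hub-and-nodes file along the lines she described might look roughly like this – the service names match her setup, while the image tags and environment variables are assumptions:

```yaml
# Hypothetical docker-compose.yaml along the lines Jessica described --
# not her exact file.
version: "3"
services:
  selenium_hub:
    image: selenium/hub:3.141.59
    ports:
      - "4444:4444"   # forwarded so local tests can reach 127.0.0.1:4444
  chrome_node:
    image: selenium/node-chrome:3.141.59
    environment:
      - HUB_HOST=selenium_hub
    depends_on:
      - selenium_hub
  firefox_node:
    image: selenium/node-firefox:3.141.59
    environment:
      - HUB_HOST=selenium_hub
    depends_on:
      - selenium_hub
```

Running `docker-compose up` with a file like this gives the RemoteWebDriver a hub on port 4444 with both browser nodes attached.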

These container-based approaches align with the current use of cloud-native pipelines for CICD. Jessica noted you can use Jenkins, Jenkins X for Kubernetes-native, and CodeFresh, among others. Jessica decided to show CodeFresh, a CICD pipeline dedicated to Kubernetes and microservices. Every task runs in a container.

Selenium_hub let Jessica choose to run tests on both a chrome_node and a firefox_node in her container setup.

She simply needed to modify her RUNWHERE variable:

export RUNWHERE=container

However, before running her tests, she needed to spin up her docker-compose services on her local system. And, because selenium_hub wasn't something her system could resolve by DNS at that moment (it was running on her local system), she made sure the selenium_hub running locally was port-forwarded onto her local system's 127.0.0.1 connection. Once she made these changes, and changed the container definition to use 127.0.0.1:4444, she was ready to run her maven tests.

When the tests ran successfully, her local validation confirmed that her tests should run in her pipeline of choice.

Jessica pointed out that CICD really comes down to a collection of tasks you would run manually.

After that, Jessica said, we need to automate those tasks in a definition file. Typically, that’s Yaml, unless you really like pain and choose Groovy in Jenkins… (no judgement, she said).

Looking at Azure DevOps

Next, Jessica did a quick look into Azure DevOps.

Inside Azure DevOps, Jessica showed that she had a number of pipelines already written, and she chose the one she had set aside for the project. This pipeline already had three separate stages: 

  • Build Stage
  • Deploy to Dev
  • Deploy to Prod

Opening up the build stage shows all the steps contained just within that stage in its 74 seconds of runtime:

Jessica pointed out that this little ChattyBot application is running on a large cluster in Azure. It’s running in Kubernetes, and it’s deployed with Helm.  The whole build stage includes:

  • using JFrog to package up all the maven dependencies and run the maven build,
  • running JFrog Xray to make sure the dependencies don't introduce security issues,
  • creating a Helm chart and packaging it, and
  • sending Slack notifications.

This is a pretty extensive pipeline. Jessica wondered how hard it would be to integrate Angie’s tests into an existing environment.

But, because of the work Jessica had done to make Angie’s tests ready for CICD, it was really easy to add those tests into the deploy workflow.

First, Jessica reviewed the Deploy to Dev stage.

Adding UI Tests in Your CICD Pipeline

Now, Jessica started doing the work to add Angie’s tests into her existing CICD pipeline. 

After the RUNWHERE=container tests finished successfully, Jessica went back into VS Code, where she started inspecting her azure-pipelines.yml file.

Jessica made it clear that she wanted to add the tests everywhere that it made sense prior to promoting code to production:

  • Dev
  • Test
  • QA
  • Canary

Jessica reinforced Angie’s earlier points – these UI tests were critical and needed to pass. So, in order to include them in her pipeline, she needed to add them in an order that makes sense.

In her Deploy to Dev pipeline, she added the following:

   - bash: |
       # run check to see when $(hostname) is available
       attempt_counter=0
       max_attempts=5
       until curl --output /dev/null --silent --head --fail "https://$(hostname)/"; do
         if [ ${attempt_counter} -eq ${max_attempts} ]; then
           echo "Max attempts reached"
           exit 1
         fi

         printf "."
         attempt_counter=$((attempt_counter+1))
         sleep 20
       done
     displayName: HTTP Check

This script checks whether the URL at $(hostname) is available, retrying up to five times and sleeping 20 seconds between attempts. Each try it prints a “.” to show it is working. And the name “HTTP Check” shows what it is doing.

Now, to add the tests, Jessica needed to capture the environment variable declarations and then run the maven commands.  And, as Jessica pointed out, this is where things can become challenging, especially when writing the tests from scratch, because people may not know the syntax.   

Editing the azure-pipelines.yml in Azure DevOps

Now, Jessica moved back from Visual Studio Code to Azure DevOps, where she could also edit an azure-pipelines.yml file directly in the browser.

And, here, on the right side of her screen (I captured it separately) are tasks she can add to her pipeline. The ability to add tasks makes this process really, really simple and eliminates a lot of the errors that can happen when you code by hand.

One of those tasks is an Applitools Build Task that she was able to add by installing an extension.

Just clicking on this Applitools Build Task adds it to the azure-pipelines.yml file.

And, now Jessica wanted to add her maven build task – but instead of doing a bash script, she wanted to use the maven task in Azure DevOps. Finding the task and clicking on it shows all the options for the task.

The values are all defaults. Jessica changed the address for her pom.xml file to visual_tests/pom.xml (the file location for the test file), set her goal as ‘test’ and options as ‘clean test’. She checked everything else, and since it looked okay, she clicked the “Add” button.  The following code got added to her azure-pipelines.yml file.

  - task: Maven
    inputs:
      mavenPomFile: 'visual_tests/pom.xml'
      goals: 'test'
      options: 'clean test'
      publishJUnitResults: true
      testResultsFiles: '**/surefire-reports/TEST-*.xml'
      javaHomeOption: 'JDKVersion'
      mavenVersionOption: 'Default'
      mavenAuthenticationFeed: false
      effectivePomSkip: false
      sonarQubeRunAnalysis: false

Going Back To The Test Code

Jessica copied the Applitools Build Task and Maven task code back into the azure-pipelines.yml file she was already editing in Visual Studio Code.

Then, she added the environment variables needed to run the tests.  This included the Applitools API Key, which is a secret value from Applitools. In this case, Jessica defined this variable in Azure DevOps and could call it by the variable name.

Beyond the Applitools API Key, Jessica also set the RUNWHERE environment variable to ‘pipeline’ and the TEST_START_PAGE environment variable to https://$(hostname)/ – the same value used elsewhere in her code. All this made her tests dynamic.

The added code reads:

     env:
       APPLITOOLSAPIKEY: $(APPLITOOLS_API_KEY)
       RUNWHERE: pipeline
       TEST_START_PAGE: https://$(hostname)/

So, now, the tests are ready to commit.

One thing Jessica noted is that LiveShare automatically adds the co-author’s id to the commit whenever two people have jointly worked on code. It’s a cool feature of LiveShare.

Verifying That UI Tests Work In CICD

So, now that the pipeline code had been added, Jessica wanted to demonstrate that the visual validation with Applitools worked as expected and found visual differences.   

Jessica modified the ChattyBot application so that, instead of reading:

“Hello, DevOps Days Madrid 2020!!!”

it read:

“Hello, awesome webinar attendees!”

She saved the change, double-checked the test code, saw that everything looked right, and pushed the commit.

This kicked off a new build in Azure DevOps. Jessica showed the build underway. She said that, with the visual difference, we expect the Deploy to Dev pipeline to fail. 

Since we had time to wait, she showed what happened on an earlier build she had done just before the webinar. During that build, the Deploy to Dev stage passed. She was able to show how Azure DevOps seamlessly linked to the Applitools dashboard – and, assuming you were logged in, you would see the dashboard screen just by clicking on the Applitools tab.

Here, the green boxes on the Status column show that the tests passed.

Jessica drilled into the enterMessage test to show how the baseline and the new checkpoint compared (even though the comparison passed), just to show the Applitools UI.  

As Jessica said, were any part of this test to be visually different due to color, sizing, text, or any other visual artifact, she could select the region and give it a thumbs-up to approve it as a change (and cause the test to pass), or give it a thumbs-down and inform the dev team of the unexpected difference.

And, she has all this information from within her Azure DevOps build.

What If I Don’t Use Azure DevOps?

Jessica said she gets this question all the time, because not everyone uses Azure DevOps.

You could be using Azure DevOps, TeamCity CI, Octopus Deploy, Jenkins – it doesn’t matter. You’re still going to be organizing tasks that make sense.  You will need to run an HTTP check to make sure your site is up and running.  You will need to make sure you have access to your environment variables. And, then, finally, you will need to run your maven command-line test.  

Jessica jumped into GitHub Actions, where she had an existing pipeline, and she showed that her deploy step looked identical.

It had an http check, an Applitools Build Task, and a call for Visual Testing. The only difference was that the Applitools Build Task included several lines of bash to export Applitools environment variables.

The one extra step she added, just as a sanity check, was to set the JDK version.

And, while she was in GitHub Actions, she referred back to the container scenario. She noted the challenges with spinning up Docker Compose and services. For this reason, when looking at container tests, she pointed to CodeFresh, which is Kubernetes-native.

Inside her CodeFresh pipelines, everything runs in a container.

As she pointed out, by running on CodeFresh, she didn't need a huge server to handle everything. Each container handled just what it needed to handle. Spinning up Docker Compose just requires Docker. She needed JFrog only for her Artifactory image. Helm lint – again, just what she needed.

The image above shows the pipelines before adding the visual tests. The below image shows the Deploy Dev pipeline with the same three additions.

There’s the HTTP check, the Applitools Build Task, and Running Visual Tests.

The only real difference is that the visual tests ran alongside the services spun up for the test.

This is really easy to do in your codefresh.yml file, and the syntax looks a lot like Docker Compose.

Seeing the Visual Failure

Back in Azure DevOps, Jessica checked in on her Deploy to Dev step.  She already knew there was a problem from her Slack notifications.  

The error report showed that the visual tests all failed.

Clicking on the Applitools tab, she saw the following.

All three tests showed as unresolved. Clicking into the multiPersonChat test, Jessica saw this:

Sure enough, the text change from “Hello, DevOps Days Madrid 2020!!!” to “Hello, awesome webinar attendees!” caused a difference. We totally expected this difference, and we would find that this difference had also shown up in the other tests.

The change may not have been a behavioral change expected in your tests, so you may or may not have thought to test for the “Hello…” text or check for its modification. Applitools makes it easy to capture any visual difference.

Jessica didn’t go through this, but one feature in Applitools is the ability to use Auto Maintenance. With Auto Maintenance, if Jessica had approved the change on this first page, she could automatically approve identical changes on other pages. So, if this was an intended change, it would go from “Unresolved” to “Passed” on all the pages where the change had been observed.

Summing Up

Jessica handed the presentation back to Angie, who shared Jessica's link for the code from the webinar:

All the code from Angie and Jessica’s demo can be downloaded from:

https://aka.ms/jldeen/applitools-webinar

Happy Testing!

For More Information

The post UI Tests In CICD – Webinar Review appeared first on Automated Visual Testing | Applitools.

5 Visual Testing Features that Foster Collaboration Between Remote Workers https://applitools.com/blog/remote-teams-testing/ Wed, 08 Apr 2020 14:56:00 +0000

COVID-19 has drastically affected the global workforce. Suddenly, development teams everywhere are now working from the safety of their homes. Many new remote workers are finding it challenging to effectively collaborate with their teammates. Fortunately, Applitools offers several features that make collaboration a breeze!

Remarks

All Applitools test results include images of your application, which makes for easier collaboration. As the old saying goes, “a picture is worth a thousand words.” Not only can all team members view the screenshots from test results, they can also add remarks to the images. These remarks can be reviewed and also serve as a discussion thread right at the point of question.

Bug Regions

Team members can also indicate which parts of their application’s screenshot contain bugs. Similar to Remarks, users can highlight certain areas of the image and leave comments about that area. By specifying it as a Bug Region, users can also fail the test and optionally snooze failures for this bug in future runs! This eliminates back-and-forth questions between the bug reporter and the assignee, as details regarding the bug are overlaid on top of the bug itself.

Root Cause Analysis

In addition to the image itself, Applitools also provides root cause analysis to determine the exact cause of the bug! While the image alone helps teams quickly identify issues with their application, the root cause analysis feature provides a fast and simple way to determine what HTML or CSS changed to cause the issue. Utilizing this feature helps teams collaborate more effectively and resolve bugs faster.

Jira

Applitools integrates nicely with Jira to allow you to associate your visual testing results with Jira tickets for better tracking. After integrating these two tools, the Applitools Bug Region modal now includes an option to create a new Jira issue. Once the issue is created, additional options are available in the Bug Region modal such as viewing the bug in Jira and linking to more Jira issues.

Slack Notifications

There is also an Applitools plugin available for Slack. By enabling this plugin for your Slack workspace, your team will automatically be notified of your test results. You can choose whether to receive Slack notifications every time an Applitools batch completes, or only when there are failures. This provides another mechanism for fast feedback in the place where your teams already collaborate.

Working remotely doesn’t have to negatively impact team collaboration. Tap into the hidden features of some of your favorite tools, like Applitools,  for fast and effective remote work.

The post 5 Visual Testing Features that Foster Collaboration Between Remote Workers appeared first on Automated Visual Testing | Applitools.

How To Remove Blind Spots With Visual Testing https://applitools.com/blog/remove-blind-spots/ Mon, 06 Apr 2020 15:00:00 +0000

Visual bugs are errors in the presentation of an application. They appear all the time, and frequently surface when applications are viewed in the various viewport sizes of our mobile devices (laptops, phones, tablets, watches).

What’s horrifying about these pesky visual bugs is that they cannot be caught by typical automated tests. This is because most test automation tools consult the DOM (document object model) to report on the state of the application.

Take this tweet for example:

The application is not showing the last digit of the price, yet the full price is there in the DOM. The first price is NZ$1250 but is showing as NZ$125. Big difference!


Sadly, test automation tools will tell you that the price is correct—even though that’s not what the users see. Here’s how visual testing can help you catch these problems.

What is visual testing?

In order to catch visual bugs, we need to add “eyes” to our automated scripts—an ability for our tests to see the application as our users do and verify its appearance.

Visual testing allows us to do just that. It works by taking a screenshot of your application when it’s in its desired state, and then taking a new screenshot every time the test is executed.

The screenshots from the regression run are compared against the approved screenshot. If any meaningful differences are detected, the test fails.

Because Applitools uses artificial intelligence (AI) to view the screenshots just as we would as humans, it is much more effective than basic pixel-based image comparison techniques.
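For contrast with Visual AI, the basic pixel-based technique mentioned above can be sketched in a few lines. This is a simplified illustration, not Applitools' algorithm – any single changed pixel (even anti-aliasing or rendering noise) fails the comparison, which is exactly why it is brittle:

```java
import java.awt.image.BufferedImage;

// Minimal sketch of naive pixel-by-pixel image comparison --
// the brittle baseline technique that Visual AI improves upon.
public class PixelDiff {
    static boolean identical(BufferedImage baseline, BufferedImage checkpoint) {
        if (baseline.getWidth() != checkpoint.getWidth()
                || baseline.getHeight() != checkpoint.getHeight()) {
            return false; // different dimensions: trivially different
        }
        for (int y = 0; y < baseline.getHeight(); y++) {
            for (int x = 0; x < baseline.getWidth(); x++) {
                if (baseline.getRGB(x, y) != checkpoint.getRGB(x, y)) {
                    return false; // a single differing pixel fails the check
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        BufferedImage a = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        BufferedImage b = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        System.out.println(PixelDiff.identical(a, b)); // two blank images: true
        b.setRGB(1, 1, 0xFF0000);                      // change one pixel
        System.out.println(PixelDiff.identical(a, b)); // now false
    }
}
```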

Do I have to start over?

When I tell engineers about the awesomeness of visual testing, they often ask if they will have to throw out all of their existing tests and start over. The answer is no!

Visual testing can be integrated into whatever you already have. Applitools supports all the major programming languages as well as automation frameworks such as Selenium WebDriver, Cypress, Appium, and more.

And while the visual assertions can be added in addition to your existing functional assertions, an awesome benefit is that visual testing encompasses a lot of the functional checks as well.

For example, in the Wellington Suite example from the tweet above, the existing tests probably include assertions for the prices. While you certainly can keep those assertions and add a visual check to make sure the prices appear correctly, the comparison done by visual testing algorithms already ensures that the price is correct and displayed properly. This means less test code needs to be written and maintained.

What if I don’t want to capture the entire screen?

While capturing a screenshot of the entire screen will help you catch bugs you didn’t think to assert against, it’s not always the right approach to visual testing. Sometimes, the screen that you’re testing against contains areas that are still under development and therefore frequently changing, or there just may be data on the screen that is irrelevant for your tests.

Ignore regions

Applitools will allow you to ignore certain regions of your screen. This is perfect for areas that are irrelevant to test, for example status bars, date lines, ads, etc. In this screenshot, I’ve specified to always ignore the status bar for my mobile tests.


Scope to regions

If there isn’t a specific area you’d like to ignore, but there is a specific area you want verified, you can do that with visual testing as well. By annotating the screenshot or programmatically specifying the locator of the region, you can scope your visual check to that area only.

Be mindful of gaps

Verifying an entire screen ensures that nothing is missed, because it inherently includes the verification of things you maybe wouldn’t have covered with traditional assertions. When narrowing the scope of your visual testing, you must be mindful of gaps you may be missing.

My advice is to broaden your scope as much as possible (maybe an entire section versus just a specific element within the section) and couple the visual assertions with traditional functional ones when needed.

What about dynamic data that changes every time the test is executed?

You may not think visual testing can handle applications where the data is different every time the test is run, but it actually can. By using AI, visual testing can determine the patterns that make up the layout of the screen and then use a different comparison algorithm to ensure that the layout is intact during regression runs, regardless of the actual data.

This technique is perfect for applications such as news sites, social media apps, and any other application where the data is not static.

Does visual testing work only on web applications?

Visual testing is not limited to just web applications. Visual testing can also be used for mobile apps, design systems such as Storybook, PDF files, and standalone images.

You can also accomplish cross-platform testing using visual grids. These grids enable faster, smarter visual testing across a plurality of devices and viewport sizes—prime locations for visual bugs!

Visual grids differ from other testing grids in that they do not have to execute your entire scenario on every configuration. Instead, a visual grid runs the scenario on one configuration, captures the state of your application, and then blasts that exact state across all of your supported configurations—thus saving time and effort in writing and executing cross-platform tests.

How to get started with visual testing

Applitools has a free-forever account option which is perfect for smaller projects. To learn how to do all of the cool things I’ve mentioned and much more, head on over to Test Automation University for my free course on visual testing!

[Note: A version of this blog was published previously at TechBeacon.]

For More Information

The post How To Remove Blind Spots With Visual Testing appeared first on Automated Visual Testing | Applitools.

]]>
Leverage The Power of Testers https://applitools.com/blog/power-of-testers/ Fri, 03 Apr 2020 15:05:27 +0000 https://applitools.com/?p=17374 If you’re a developer, you probably don’t appreciate the power of testers. In fact, you probably think negatively about some aspect of your testing team. They think of bugs that...

The post Leverage The Power of Testers appeared first on Automated Visual Testing | Applitools.

]]>

If you’re a developer, you probably don’t appreciate the power of testers. In fact, you probably think negatively about some aspect of your testing team. They think of bugs that you haven’t considered. They try doing things you never designed your product to handle. And, they file bugs when they encounter unexpected behavior. What a pain!

If you’re a tester, you may not get respect from your development teammates, but you don’t quite know why. After all, you’re good at your job. You put the product through its paces and do your best to expose issues in the product’s behavior. Your job, in fact, is to do this before someone outside the company tries the same thing.

At its core, the difference between developers and testers comes down to two factors: starting point and mindset.

Developer Power: Developers Make

A developer starts at nothing and creates something. Whether the developer creates an entirely new product from scratch, or simply adds a new feature, the end result didn’t exist previously. The development mindset involves filling the void where nothing existed previously.


A good developer thinks of all the ways a customer can use a product. In some cases, developers work to the product spec created by a product manager or product architect. In other cases, developers consider behaviors that a user might try and determine how to handle those behaviors.

Because developers venture into the unknown, they look to product experts to guide their development. Product managers often must specify the behavior of the product explicitly so developers can code effectively. And when developers ask "What if…" questions of the product experts (e.g. "What if the user does…"), they delegate those behavior decisions to the people who know what the product is intended to do.

Tester Power: Testers Break

In contrast to developers, who start with nothing, testers start with “completed” code. In their mindset, testers see code in an unfinished state until proven ready. Testers work to determine how a future user might get that code to misbehave.  In other words, the developer begins with blank space and creates a valuable behavior, but a tester begins with supposedly valuable behavior and finds all the ways a user or another actor could diminish that value.


Do you know those commercials where Mercedes demonstrates the safety of their cars by crashing them in collision tests? Testers love those commercials.

Good testers assume that users might be bad actors and attempt to do something that the developer might not expect. Much of the time, the product team had not considered that behavior, so they wrote no specification for it.

As a recovering product manager, I tried to draw a distinction between defining the behavior and requirements for the customer benefit versus the product design. I wasn't the product designer, I thought. However, while I worked with a bunch of competent developers, none of them were designers, either. Alas, I still got the questions about usability, security, and other non-functional specifications. In that situation, I did the only useful thing I could think of – rely on the power of the testers on the QA team. And, fortunately, they had prior test experience, so they could identify potential failure modes and help describe trade-offs in design.

School Of Hard Knocks

Nobody really thinks about what it takes to become a test engineer. Generally, it isn't something you can simply study your way into. When do you learn to spot failure? How can you see quicksand where other engineers see a flat path? You have to know how many possible failure modes actually exist, and how many of those your design team has considered.

In one of my favorite movies from the 1980s, Body Heat (Rated "R" for a reason), the protagonist, a public defender, gets asked by his girlfriend to help kill her husband. The public defender had previously defended an arsonist, who is now out of jail. The attorney figures a fire would help cover up the murder, so he meets with the former arsonist to ask what it would take to start a fire. To which, the arsonist says:

“Are you thinking about committing a crime, counselor? If so, ‘you have to realize there are probably 50 ways you could [make a mistake and get caught], and you’re a genius if you can think of 25. And you ain’t no [bleeping] genius.’

“Do you know who told me that?” the arsonist continues. “You did, counselor.”

It’s the same point for product designers. You often must experience design failures before you can spot potential future design failures. Often, however, the failure experience accumulates with test engineers.

If you want to hear two engineers discussing the power of testers in some detail, listen to Angie Jones of Applitools talk with Dr. Nicole Forsgren of Google in their webinar: Test Automation as a Key Enabler for High Performing Teams. Both of them explain how testers bring unique perspectives to their teams.

The Heartbleed OpenSSL Example


Do you remember the "Heartbleed" bug in OpenSSL? One of my favorite xkcd cartoons gives a great overview. To check that a connection was alive, the initiator sent a heartbeat request containing a payload plus a number that was supposed to be the payload's length. The receiver would echo back that many bytes. To exploit the bug, an attacker sent a length value that greatly exceeded the actual payload – and the receiver obligingly sent back adjacent memory from its SSL stack to the exploiter.
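The over-read is easy to simulate. In this toy sketch (the memory contents and sizes are invented for illustration; real Heartbleed involved the TLS heartbeat extension's binary format), the receiver trusts the attacker-supplied length instead of checking it against the actual payload:

```python
def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Adjacent process memory that happens to sit after the payload (invented).
    memory = payload + b" secret-key=hunter2 user=alice"
    return memory[:claimed_len]  # BUG: claimed_len is never checked against len(payload)

print(heartbeat(b"HAT", 3))   # b'HAT' -- an honest request just echoes the payload
print(heartbeat(b"HAT", 30))  # an inflated length leaks the adjacent "secret" bytes
```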

How would you know to look for that kind of bug?  It existed in the wild for years before the exploit finally got exposed.

The power of testers comes from imagining scenarios like this in a design and considering the risks to future designs.

Failure Modes

What failure modes do you consider for your testing? Designers design for the failure modes they can think of or have experienced. Testers test for failure modes they have experienced or understand. Generally, you defer to the most experienced person on the team.

Why experience? Because tools alone cannot help you. For example, you can measure execution coverage in your code.  But, coverage tools won’t always tell you when you have potential bugs in your code. For example, you can execute 100% of your code and exercise all the expected input cases, but an unexpected input causes an unanticipated result.
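Here is a small example of that blind spot. The assertions below execute every line of `discount` – 100% line coverage – yet an input outside the expected range still produces an unanticipated result (the function and values are invented for illustration):

```python
def discount(price, percent):
    """Apply a percentage discount to a price."""
    if percent > 0:
        return price - price * percent / 100
    return price

# Expected inputs: every line is executed and both assertions pass.
assert discount(100, 10) == 90.0
assert discount(100, 0) == 100

# Unexpected input the tests never tried: a percentage over 100
# silently yields a negative price instead of being rejected.
print(discount(100, 150))  # -50.0
```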

You can also run functional tests that become out of date but still pass.  A number of Applitools customers have experienced what happens when they don’t validate the visual behavior of their functional application tests after an app upgrade. New code comes out that can change the on-screen app rendering, but the underlying HTML and identifiers don’t reflect any change in the code. Tests pass, but the screen gets filled with visual errors. And woe to you if you release the app with those errors in the wild.

So, as failure modes continue to evolve, you need the combination of experience and tools to help you stay ahead.

Web Security Exploits

Web security exploits form their own class of design bugs. The Open Web Application Security Project (OWASP) keeps track of the top exploits.  The most recently published list came out in 2017. These include:

  • A1:2017-Injection
  • A2:2017-Broken Authentication
  • A3:2017-Sensitive Data Exposure
  • A4:2017-XML External Entities (XXE)
  • A5:2017-Broken Access Control
  • A6:2017-Security Misconfiguration
  • A7:2017-Cross-Site Scripting (XSS)
  • A8:2017-Insecure Deserialization
  • A9:2017-Using Components with Known Vulnerabilities
  • A10:2017-Insufficient Logging & Monitoring

OWASP even offers a testing guide for exploits:

https://www.owasp.org/index.php/OWASP_Testing_Project

Again, these are the kinds of issues one learns from experience.

Continuous Deployment Teams

One place where you can find developers and testers collaborating successfully is in strong agile development teams. On agile teams, a “shift-left” approach places quality engineers in the midst of development teams or pods. The quality engineer, as a coach, distributes the power of testers to help the developers design usable, testable code. In this type of organization, the whole team takes responsibility for testing code, as they know each build can, in fact, be released.


Elisabeth Hocke, an agile testing expert and principal agile tester at FlixBus, teaches a great course about The Whole Team Approach to Continuous Testing on Test Automation University. In this course, she explains how continuous delivery demands the team mindset for testability. Take the course, or read my summary of it.

Priyanka Halder, head of quality at GoodRX, gives a great webinar about High-Performance Testing – Acing Automation in Hyper-Growth Environments. She talks about how her team embeds into the development team as a practical application of continuous delivery.  You can watch her webinar and read my blog about it.

This may not be your organization. Just know that there are organizations where developers value the contributions of the test team, and quality engineers help coach the developers to build testable code.

Seek Wisdom In Experience

If you are thinking to yourself, “I have no idea how to find these kinds of issues in my designs,” you need the power of a tester with experience.

Realistically, quality engineers face a myriad of challenges that developers cannot always help solve. The OpenSSL Heartbleed exploit existed for years before someone understood the problem. Because developers can blind themselves to failure modes, quality engineers have to spend time imagining a horrible future.

New approaches help bridge the gap between development and test. New frameworks help developers build testability hooks into each of their applications. Standardized approaches help ensure that all changes can be anticipated.

At Applitools, we address a specific blindness. Web apps include built-in complexity that you might sometimes ignore because of standards. HTML, CSS, and JavaScript are standardized, so developers expect that they only need to code once for all platforms. Realistically, though, today's testers know that rendering engines behave differently across platforms, viewport sizes, browsers, and operating systems on both mobile devices and computers. In the end, pixel and DOM comparisons create so much extra work that testers limit their automation to validating expected behaviors – blinding themselves to unexpected behaviors.

Conclusion

Whether you primarily develop or primarily test, you will always feel a tension between your mindset and the mindset of the person on the other side. After all, they seem to be slowing you down or, alternatively, giving you more work to finish.  And, most people have difficulty seeing how more work today can eliminate rework tomorrow.

But, as the continuous delivery examples show – successful teams can come from these opposite views collaborating. Wherever you find yourself on this continuum, know that collaboration is likely in your future.

Further Reading

The post Leverage The Power of Testers appeared first on Automated Visual Testing | Applitools.

]]>
CI Test Automation Strategy https://applitools.com/blog/ci-test-automation/ Thu, 12 Mar 2020 22:40:20 +0000 https://applitools.com/blog/?p=7045 Elite teams add CI test automation early - and not full development database tests, but simple unit tests that can run in less than 10 minutes and validate code that has been written.

The post CI Test Automation Strategy appeared first on Automated Visual Testing | Applitools.

]]>

What happens when you put two top thinkers together to talk about DevOps, continuous integration, and CI test automation strategy?

That’s what happened in November 2019, when Dr. Nicole Forsgren, Google Cloud researcher and strategic thinker, and co-author of Accelerate: The Science of Lean Software and DevOps, joined a webinar with Angie Jones, test automation consultant and automation architect at Applitools.

If you haven’t heard two thoughtful people discussing test as an integral part of product delivery, you need to listen to this webinar.

 


Dr. Forsgren publishes an annual “State of DevOps” report, in which she uncovers trends that drive DevOps productivity.  So, when she sat down with Angie, Dr. Forsgren came prepared to discuss trends and results.


When discussing her State of DevOps reports, Dr. Forsgren said that a common refrain she hears is, “‘This isn’t for me, right?’ ‘It’s only for executives,’ or, ‘It’s only for developers,’ or, ‘It’s only for (fill in the blank).’” And, yet, her conclusions are based on reaching out to a range of stakeholders, which is what makes her report, and her discussion with Angie, so interesting.

Knowledge of DevOps changes constantly, and Dr. Forsgren points out that elite teams achieve levels of productivity that exceed peers who are still developing their DevOps expertise. So, let's dive in.

CI Automated Testing As A DevOps Foundation

Angie led into a discussion of the State of DevOps Report. She said,

"So in this year's State of DevOps report, you listed automated testing as a basic foundation for teams. Yet so many teams, they don't get around to this until they feel they have matured. So there is a belief that they just need to focus on building the thing right now. And after that thing is built, then they can add the nice-to-haves, like tests. So what does it mean to have automated tests and how can teams get started with this earlier?"


Dr. Forsgren spoke about how most immature teams fail to prioritize testing, and how her research showed that elite teams focus on building fast-running unit-tests into their build processes. Her research shows that effective unit test automation serves as a foundation to DevOps just as version control serves as an essential for release management.

One mistake she sees teams make is pushing test off to the end. Elite teams add CI test automation early – and not full development database tests, but simple unit tests that can run in less than 10 minutes and validate code that has been written. As new features get added, the unit tests make it easy to expose regressions.

Dr. Forsgren said,

“Sometimes I’ll talk to developers and they’ll say, oh, I want to skip tests because I only want to run them at the end of the day or once a week. And like my face does this contortion.”

“‘What are you talking about,’ I ask?”

"They say, 'Oh, well, it takes like nine hours, so we run it overnight.' And I'm thinking, 'Something is wrong. We want to run these small incremental tests that give us this good, fast [pass/fail] signal.'"

Test-Driven Development

Discussing the developer mindset about tests led to a quick discussion about test-driven development (TDD).


Dr. Forsgren noted that developers commonly say, "I'm a developer, not a tester." But testability helps drive development flow, she observed. Code designed with testing in mind drives efficiency in downstream processes, like system and functional tests. She also observed that testable code decomposes more easily and integrates more efficiently into team processes.

Angie agreed that developers rarely understand test. She said that developers would tell her,

“Yeah, I know I should be doing it, but I don’t know how.”

In part, Angie noted, universities that teach software skills often focus on development skills and rarely focus on testing. Angie pointed out that she never learned anything about testing while she was studying for her degree.

“So, when it comes to TDD, most developers can find it painful,” Angie said. “Lots of developers have an idea about what to build, and thinking about testability just slows them down. Some even find it a waste of time. It takes them a while to become comfortable with the approach and understand what TDD brings to the product delivery process.”

“And,” Angie noted, “I’m not pro-TDD or anti-TDD. I think that TDD serves a valid purpose and should be used when appropriate.”

Dr. Forsgren noted, “The real value comes when you’re trying to deliver completed code during a sprint…” and testability is one of your acceptance criteria.

Value of Testers

When she spoke about TDD, Dr. Forsgren said her advocacy for getting everyone involved in testing leads people to ask her,

“So, do you think we can do away with testers?”

Actually, quite the contrary, she says. By having a test-focused mindset in the development process, the test team can focus on issues further down the road. Dr. Forsgren refers to the test engineers as “test experts.”

Angie, who is one of those experts, echoed the tester’s perspective back.

"I just want people to test their stuff – test it early," Angie said. "Because there's nothing more annoying, and more of a time suck, than testing stuff where the most basic of scenarios don't work. Do you know what I mean?"

They shared perspectives on the value of testers in the product delivery process.

Team vs Centralized Test

Angie talked about the problems in centralizing test approaches.

"I've worked in companies where they might try to spread a testing strategy across, like, an organization," Angie said. "I'll be honest – it just felt heavier."

She talked about how trying to standardize testing approaches would devolve to discussions about which tool to use – and stop focusing on productivity.

Team Edits

In discussing DevOps teams, Angie asked the next relevant question:

“Last year’s report found that test automation had a significant impact on continuous delivery. But, you didn’t consider the relationship between these two. This year you did. And you found that the automated test positively impacted continuous integration by making it better.

"Now, I tend to hear teams say that integration tests slow down their C.I. The tests take too long to run. They're brittle. And so on. So as opposed to running them on every build, the teams are doing things like running them maybe once a day, right? So if they fail, their thought process here is, 'Well, we have a list of check-ins for that day, which is better than, like, that week or that month.' And they triage it this way.

“So, Nicole, do you see any drawbacks to this approach and how might people change what they’re currently doing to enable faster feedback?”

Dr. Forsgren gave an, ‘it depends’ reply, because she understood that each team has its own understanding of the kinds of tests they can and should run regularly.

“It can be tricky,” Dr. Forsgren said. “Because it kind of depends on what your test system looks like, what your build system looks like. If you have a whole bunch of tests and then your build system is tightly coupled – where if you’re just waiting for, like, a nine-hour build – that can be really, really difficult. But if you have a way to run small, smaller tests so you get fast feedback that that can be nice.”

“Those tests need to be designed for a C.I system,” she continued. “And, it can also help if you have things designed so that you don’t have branch merge conflicts and things like that happening.”

Angie’s View on Test in C.I.

Angie and Dr. Forsgren went back and forth on this topic – noting that the quality level of tests run in a C.I. system differs from that of tests run elsewhere. In C.I., a failing test becomes a process gate – so C.I. tests must be free of the incidental failures that might be tolerable in non-C.I. test runs.

For example, a test that depends on pre-set conditions must have those conditions set correctly before it runs. If that test fails because someone forgot to pre-set the conditions, that's an annoyance in an ad-hoc test run – but it would break a C.I. build.


Angie writes all about this in the DZone Guide to DevOps, and you can read more about her contribution in her blog.

Her core ideas are that there are four key requirements in automated CI tests:

  • Speed – CI tests must run quickly to give the team fast feedback when failures occur. Most should be unit tests. Some should be at the service level – to uncover failure modes that unit tests cannot. Only a few should be UI tests – they take the longest to write and the longest to run – and those should cover unique failure modes.
  • Reliability – Angie already mentioned reliability, and here she reminds people that the key to running automation in a CI environment is that a failure of an automated test must point to a real problem in the product – not a flaw in the test itself.
  • Quantity – since every test gets run in CI automation, be aware that each test takes time to run. Don't add tests without reason. Make sure each test captures a relevant failure mode.
  • Maintenance – your tests will need to be maintained as, or immediately after, each incremental coding task is completed.
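Those four requirements suggest a concrete suite shape: a fast gate on every build, with slower tests moved to a separate run. A minimal sketch, where the test names, tiers, and timings are all invented:

```python
# Each entry: (test name, tier, typical runtime in seconds) -- illustrative only.
TESTS = [
    ("unit_login_validation", "unit",    0.01),
    ("unit_cart_totals",      "unit",    0.02),
    ("service_checkout_api",  "service", 1.5),
    ("ui_full_purchase_flow", "ui",      45.0),
]

def select_for_build(tests, tiers):
    """Pick only the tests belonging to the given tiers."""
    return [name for name, tier, _ in tests if tier in tiers]

ci_gate = select_for_build(TESTS, {"unit", "service"})  # fast feedback on every build
nightly = select_for_build(TESTS, {"ui"})               # slow UI tests run once a day

print(ci_gate)  # ['unit_login_validation', 'unit_cart_totals', 'service_checkout_api']
print(nightly)  # ['ui_full_purchase_flow']
```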

Approaching Test Maintenance

When it comes to maintenance, Angie made it clear that her perspective on test maintenance has changed in her career.

“I’ve been doing automation for a very long time,” Angie said. “But this is something I’ve just learned in the last, maybe, five years. That, it’s OK to delete these things. Like, maybe this test was very important at the time that it was written. Right? But now the data is showing us this is no longer that important. Maybe it’s covered by something else. Maybe we thought this was gonna be the feature that, you know, saved the company, but nobody’s using it. So, maybe we don’t need this.”


Angie talked about the key being knowing what to automate.

“People think they can simply automate their manual tests,” Angie said. “When in doing that, you’re slowing down your C.I. build. Even if you run these in parallel, still, you’re slowing them down. And, it’s a lot of noise.”

"Let's say you just automate your manual tests," Angie continued. "Now, you make a change, and suddenly 50 tests fail. Why are 50 tests failing? Well, a developer has to go through all 50 failures to see what broke in every single one. And guess what? All of them are failing for the same reason. Why do I need 50 things to tell me that I broke this one thing?" You have to be thoughtful when you add automation, Angie added.

Continuing, Angie said, “Here’s a way to do maintenance when you’re not sure about the importance of a test. You have a failure. That test is gating your integration and your deployment. You can delete the test because it seems like it’s not that valuable. Or, if you still want to know this information, move it out to another build. You can have more than one build. Have your important tests run as part of your main build. And then you can have, like, alternate tests run once a day or something like that. So you’ll still have that information. But it’s not part of your C.I. build.”

“That’s my favorite pro tip for today,” commented Dr. Forsgren. “I learn something new every time I talk to you or work with you.”

Value of Designing for Test

Angie returned to discussing Dr. Forsgren’s findings from the State of DevOps 2019 report.

"So," Angie said to Dr. Forsgren, "your study showed designs that permit testing and deploying services independently help the teams achieve higher performance. It's really interesting that you found that design considerations should include testing, because so many people don't usually think of testing until after the features are developed – which makes automated testing much harder."

Angie then spoke briefly about Rob Meaney’s 10 Ps of Testability, which concludes that for a product to be testable, it must be designed to facilitate testing and automation at every level of the product. And that process should help the team decompose work into small, testable chunks.


Dr. Forsgren replied, “A good architecture contributes to performance. The way we define performance is speed and stability. This loosely coupled architecture, along with communication and coordination among teams, lets those teams test.”

"And it's fine-grained delivery without fine-grained communication and coordination," Dr. Forsgren continued. "That can be provisioning storage and compute. It can be provisioning test resources and conducting tests. But if you skip a piece of that process – let's say you provision compute and storage, but you skip the test section – you're going to have a constraint. You're going to have something that surprisingly blocks you."

“People often view testing as the bottleneck to actually get this feature out the door,” Angie said.

“And when you do it wrong, of course,” Dr. Forsgren replied.

“Exactly,” Angie said. “If you don’t build that testability in, then testing is going to be more difficult. Yes, of course it has become this bottleneck, but that’s not due to the fault of the testers, right? That’s because the application was not built to be tested.”

Making Time for Automation

Angie focused on the difference between the research results and the reality in many places.

“Your research concludes that automation is truly a sound investment,” Angie said. “It allows engineers to spend less time on manual work and instead they can focus on the new work.”

“Yet,” Angie continued, “what I’m hearing from practitioners is this:  ‘Time is only allocated for feature development. We don’t have the time energy. We don’t have the budget, Angie, to automate because it’s not considered a feature.’”

Angie asked Dr. Forsgren how to get buy-in to allocate time for automation.

Dr. Forsgren acknowledged the problem.

"I hear this constantly," Dr. Forsgren said. "Development is so much easier to justify. We can always justify features. We can always justify shipping something."

The HP LaserJet Test Case

Dr. Forsgren said, "There's this fantastic, fantastic case that I use all the time: the HP LaserJet firmware case. Gary Gruver and his team needed to find a way to improve quality for HP LaserJet firmware. For people who say, 'Automated testing is too hard' – you're probably working on software. Gary's team had a huge problem with their firmware."

“Gary’s team had so many constraints,” Dr. Forsgren continued. “Each printer could have multiple code versions. His teams spent just 5 percent of their development time writing new features. They were spending 15 to 20 percent of their time just integrating their code back into trunk. Because of the number of versions to check, for each build, it would take them a week to figure out if the build actually integrated successfully.”

“So they just started chipping away at the problem. Their goals were to free up the critical path and give themselves more time to innovate.”

“Their solution was to innovate in test automation. Today they spend something like 40 percent of their time just for automated testing because they had seen so much benefit in automated tests. Why? Because they had a huge quality problem. Before they made the automation investment, they were spending 25 percent of the time on product support. When you think about it, product support time comes from customer problems making it all the way to the field.”

As Dr. Forsgren pointed out, in the end, the HP LaserJet team used automation to improve quality throughout the process. Their builds ran much more quickly. They wiped out pretty much all the product support work that arose due to quality issues. And, they were able to spend time on innovation.

Tying Back to C.I.

Angie took the HP firmware story back to the C.I. case.

“I’m glad that you shared that story,” Angie said. “A lot of teams fail at this, and so you hear mumbles throughout the hallways of companies, or at conferences, where people who have been burned by attempting to do automation, they’re saying things like, ‘…this automation thing. I don’t see the return on investment.’”


“That’s why I really value the work that you’ve done, the stories that you share, what you’re seeing out there,” Angie continued. “The research is showing that there are teams who are able to do this properly. And when they are, they are definitely seeing a return on their investment. So I love that.”

“I love hearing the stories about what you see,” Dr. Forsgren said. “I love to hear where your experience mirrors the research, as well as where you see things a little differently. I especially enjoyed the tips that you shared on how to develop automated tests specifically for C.I., because so many more teams and organizations are doing continuous integration.”

"It's clear to me that, if you don't have your automated tests developed the right way, it's hard to pull those into your C.I. flow," Dr. Forsgren continued. "I'm going to reread your article because I'm super excited. I had somebody ask me for tips, so I can point them to the article."

“Definitely send them my way,” Angie said. “Thank you so much.”

For More Information

The post CI Test Automation Strategy appeared first on Automated Visual Testing | Applitools.

]]>
Test Automation as a Key Enabler for High-performing Teams – with Angie Jones and Google’s Nicole Forsgren [webinar] https://applitools.com/blog/test-automation-key-enabler-for-success/ Mon, 18 Nov 2019 13:29:24 +0000 https://applitools.com/blog/?p=6631 Angie Jones, a test automation architect, consults with development teams around the world to help them with their test automation and DevOps strategy. Nicole Forsgren, who does research and strategy...

The post Test Automation as a Key Enabler for High-performing Teams – with Angie Jones and Google’s Nicole Forsgren [webinar] appeared first on Automated Visual Testing | Applitools.

]]>
"Test Automation as a Key Enabler for High-performing Teams" - with Angie Jones and Google's Nicole Fosrgen [webinar]

Angie Jones, a test automation architect, consults with development teams around the world to help them with their test automation and DevOps strategy.

Nicole Forsgren, who does research and strategy at Google Cloud, is best known as the lead investigator on the State of DevOps reports and lead author of the book Accelerate.

In this on-demand webinar, Angie and Nicole examine the research of DORA's Accelerate State of DevOps Reports and what testing practices drive high performance. They also discuss the messy reality of test automation in DevOps and what Angie is seeing in the trenches.

For those unfamiliar with the research, the Report identifies specific capabilities to improve software delivery and focuses on the practices of high-performing teams.

But as we all know, identifying capabilities and executing on those capabilities are two different things.
The reality is that many teams are struggling to take these steps towards transformation.

Angie and Nicole will discuss concrete steps teams can take to make high performance with test automation a reality.

In this session, Angie and Nicole discuss:

  • What practices development teams are currently following
  • The debunking of common myths and misconceptions around test automation in DevOps
  • How development teams can adopt the guidelines found in the research, while facing the reality of their current constraints

Slide deck:

Full webinar recording:

Additional Resources and Recommended Reading:

— HAPPY TESTING —

The post Test Automation as a Key Enabler for High-performing Teams – with Angie Jones and Google’s Nicole Forsgren [webinar] appeared first on Automated Visual Testing | Applitools.

]]>
A/B Testing: Validating Multiple Variations https://applitools.com/blog/validating-multiple-variations/ Wed, 13 Nov 2019 17:32:01 +0000 https://applitools.com/blog/?p=6431 Many teams don't automate tests to validate multiple variations because it's "throw away" code. You're not entirely sure which variation you'll get each time the test runs. If you did write test automation, you may need a bunch of conditional logic in your test code to handle both variations. What if instead of writing and maintaining all of this code, you used visual testing instead? Would that make things easier?

The post A/B Testing: Validating Multiple Variations appeared first on Automated Visual Testing | Applitools.

]]>

When you have multiple variations of your app, how do you automate the process to validate each variation?

A/B testing is a technique used to compare multiple experimental variations of the same application to determine which one is more effective with users. You typically run A/B tests to get statistically valid measures of effectiveness. But, do you know why one version is better than the other? It could be that one contains a defect.

Let’s say we have two variations, Variation A and Variation B, and Variation B did much better than Variation A. We’d assume that’s because our users really liked Variation B.

But what if Variation A had a serious bug that prevented many users from converting?

The problem is that many teams don’t automate tests to validate multiple variations because it’s “throw away” code. You’re not entirely sure which variation you’ll get each time the test runs.

And if you did write test automation, you may need a bunch of conditional logic in your test code to handle both variations.

What if instead of writing and maintaining all of this code, you used visual testing instead? Would that make things easier?

Yes, it certainly would! You could write a single test, and instead of coding all of the differences between the two variations, you could simply do a visual check and provide baseline images of both variations. That way, if either variation appears without bugs, the test will pass. Visual testing simplifies the task of validating multiple variations of your application.

Let’s try this on a real site.

Here’s a website that has two variations.

There are differences in color as well as structure. If we wanted to automate this using visual testing, we could do so and cover both variations. Let’s look at the code.

I have one test here which is using Applitools Eyes to handle the A/B test variations.

  • On line 27, I open Eyes just as I normally would do
  • Because the page is pretty long, I make a call to capture the full page screenshot on line 28
  • There’s also a sticky header that is visible even when scrolling, so to avoid that being captured in our baseline image when scrolling, I set the stitch mode on line 29
  • Then, the magic happens on line 30 with the checkWindow call which will take the screenshot
  • Finally, I close Eyes on line 31
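Since the code screenshot itself isn’t shown here, the flow those five lines describe can be sketched as follows. Note that this uses a minimal stand-in `Eyes` class purely to keep the sketch self-contained and show the call sequence; the real Applitools SDK’s class and method signatures differ in detail.

```python
class Eyes:
    """Minimal stand-in for the Applitools Eyes client (illustration only)."""
    def __init__(self):
        self.calls = []
        self.force_full_page_screenshot = False
        self.stitch_mode = None

    def open(self, app_name, test_name):
        self.calls.append("open")

    def check_window(self, tag):
        self.calls.append(f"check_window:{tag}")

    def close(self):
        self.calls.append("close")

eyes = Eyes()
eyes.open("A/B Demo App", "Validate variations")  # line 27: open Eyes
eyes.force_full_page_screenshot = True            # line 28: capture the full page
eyes.stitch_mode = "CSS"                          # line 29: avoid capturing the sticky header while scrolling
eyes.check_window("Landing page")                 # line 30: the visual check
eyes.close()                                      # line 31: close Eyes
```

There is deliberately no variation-detection logic anywhere: one open, one check, one close, regardless of which variation the site serves.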

After running this test, the baseline image (which is Variation B) is saved in the Applitools dashboard. However, if I run this again, chances are that Variation A will be displayed, and in that event my visual check will fail because the site looks different.

Setting Up the Variation

In the dashboard, we see the failure which is comparing Variation A with Variation B. We want to tell Applitools that both of these are valid options.

To do so, I click the A/B button, which opens the Variation Gallery. From here, I click the Create New button.

After clicking the Create New button, I’m prompted to name this new variation, and then it is automatically saved in the Variation Gallery. The test is also now marked as passed. In future regression runs, if either Variation A or Variation B appears (without bugs), the test will still pass.
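Conceptually, the Variation Gallery turns the visual check into “pass if the screenshot matches any accepted baseline.” Here is a toy sketch of that matching logic; the byte strings and the hash-based comparison are invented stand-ins for real screenshots and real visual comparison.

```python
import hashlib

def fingerprint(image_bytes):
    # Stand-in for real visual comparison: hash the raw bytes.
    return hashlib.sha256(image_bytes).hexdigest()

class VariationGallery:
    """Toy model of a gallery of accepted baseline variations."""
    def __init__(self):
        self.baselines = {}  # variation name -> fingerprint

    def accept(self, name, image_bytes):
        # Mark this rendering as a valid variation.
        self.baselines[name] = fingerprint(image_bytes)

    def check(self, image_bytes):
        # Pass if the screenshot matches ANY accepted variation.
        return fingerprint(image_bytes) in self.baselines.values()

gallery = VariationGallery()
gallery.accept("Default (Variation B)", b"variation-b-pixels")
gallery.accept("Variation A", b"variation-a-pixels")

print(gallery.check(b"variation-a-pixels"))  # True: an accepted variation
print(gallery.check(b"variation-b-pixels"))  # True: the other accepted variation
print(gallery.check(b"variation-a-broken"))  # False: an unexpected rendering fails
```

The key design point is that the check is variation-agnostic: accepting a new variation updates the gallery, not the test code.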

Another thing we can do is rename the variations. Notice the original variation is named Default. By hovering over the variation, we see an option to rename it, to Variation B, for example.

I can also delete a variation. So, if my team decides to remove one of the variations from the product, I can simply delete it from the Variation Gallery as well.

See It In Action!

Check out the video below to see A/B baseline variations in action.

For More Information

The post A/B Testing: Validating Multiple Variations appeared first on Automated Visual Testing | Applitools.

]]>