future of testing Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/future-of-testing/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

Should We Fear AI in Test Automation?
https://applitools.com/blog/should-we-fear-ai-in-test-automation/ | December 4, 2023
Richard Bradshaw explores fears around the use of AI in test automation shared during his session—The Fear Factor—at Future of Testing.

At the recent Future of Testing: AI in Automation event hosted by Applitools, I ran a session called ‘The Fear Factor’ where we safely and openly discussed some of our fears around the use of AI in test automation. At this event, we heard from many thought leaders and experts in this domain who shared their experiences and visions for the future. AI in test automation is already here, and its presence in test automation tooling will only increase in the very near future, but should we fear it or embrace it?

During my session, I asked the attendees three questions:

  • Do you have any fears about the use of AI in testing?
  • In one word, describe your feelings when you think about AI and testing.
  • If you do have fears about the use of AI in testing, describe them.

Do you have any fears about the use of AI in testing?

Where do you sit?

I’m in the Yes camp, and let me try to explain why.

Fear can mean many things, but one of them is the threat of harm. It’s that which concerns me in the software testing space. But that harm will only happen if teams/companies believe that AI alone can do a good enough job. If we start to see companies blindly trusting AI tools for all their testing efforts, I believe we’ll see many critical issues in production. It’s not that I don’t believe AI is capable of doing great testing—it’s more the fact that many testers struggle to explain their testing, so to have good enough data to train such a model feels distant to me. Of course, not all testing is equal, and I fully expect to see many AI-based tools doing some of the low-hanging fruit testing for us.

In one word, describe your feelings when you think about AI and testing.

It’s hard to disagree with the results from this question—if I were to pick two myself, I would have gone with ‘excited and skeptical.’ I’m excited because we seem to be seeing new developments and tools each week. On top of that, though, we are starting to see developments in tooling using AI outside of the traditional automation space, and that really pleases me. Combine that with the developments we are seeing in the automation space, such as autonomous testing, and the future tooling for testing looks rather exciting.

That said, though, I’m a tester, so I’m skeptical of most things. I’ve seen several testing tools now that are making some big promises around the use of AI, and unfortunately, several that are talking about replacing or needing fewer testers. I’m very skeptical of such claims. If we pause and look across the whole of the technology industry, the most impactful use of AI thus far is in assisting people. Various GPTs help generate all sorts of artifacts, such as code, copy, and images. Sometimes the output is good enough on its own, but the majority of the time it is helping a human be more efficient. This use of AI, and such messaging, excites me.

If you do have fears about the use of AI in testing, describe them here.

We got lots of responses to this question, but I’m going to summarise and elaborate on four of them:

  • Job security
  • Learning curve
  • Reliability & security
  • How it looks

Job Security

Several attendees shared that they were concerned about AI replacing their jobs. Personally, I can’t see this happening. We had the same concern with test automation, and that never really materialized. Those automated tests don’t maintain themselves, or write themselves, or share the results themselves. The direction shared by Angie Jones in her talk Where Is My Flying Car?! Test Automation in the Space Age, and Tariq King in his talk, Automating Quality: A Vision Beyond AI for Testing, is AI that assists the human, giving them superpowers. That’s the future I hope for, and believe we’ll see: one where we are able to do our testing a lot more efficiently by having AI assist us. Hopefully, this means we can release even quicker, with higher quality for our customers.

Another concern shared was that skills we’ve spent years and a lot of effort learning could suddenly be replaced by AI, or made significantly easier by it. I think this is a valid concern but also inevitable. We’ve already seen AI deliver a significant benefit to developers with tools like GitHub Copilot. However, I’ve got a lot of experience with Copilot, and it only really helps when you know what to ask for—this is the same with GPTs. Therefore, I think the core skills of a tester will remain crucial, and I can’t see AI replacing those.

Learning Curve

If we are going to be adding all these fantastic AI tools into our tool belts, I feel it’s going to be important we all have a basic understanding of AI. This concern was shared by the attendees. For me, if I’m going to be trusting a tool to do testing for me or generating test artefacts for me, I definitely want that basic understanding. So, that poses the question, where are we going to get this knowledge from?

On the flip side of this, what if we become over-reliant on these new AI tools? A concern shared by attendees was that the next generation of testers might not have some of the core skills we consider important today. Testers are known for being excellent thinkers and practitioners of critical thinking. If AI tools are doing all this thinking for us, we run the risk of those skills losing focus and no longer being taught. This could lead to us becoming over-reliant on such tools, and to the tools biasing the testing that we do. But given that the community is focusing on this already, I feel it’s something we can plan to mitigate and ensure doesn’t happen.

Reliability & Security

Data, data, data. A lot of fears were shared over the use and collection of data. The majority of us work on applications where data, security, and integrity are critical. I absolutely share this concern. I’m no AI expert, but the best AI tools I’ve used thus far are ones that are contextual to my domain/application, and to do that, we need to train them on our data. That could lead to data bleeding and the exposure of private data, and that is a huge challenge I think the AI space has yet to solve.

One of the huge benefits of AI tooling is that it’s always learning and, hopefully, improving. But that brings a new challenge to testing. Usually, when we create an automated test, we are codifying knowledge and behavior to create something deterministic: we want it to do the same thing over and over again. This provides consistent feedback. However, an AI-based tool won’t always do the same thing over and over again—it will try to apply its intelligence, and here’s where the reliability issues come in. What it tested last week may not be the same this week, but it may give us the same indicator. This, for me, emphasizes the importance of basic AI knowledge, but also of using these tools as an assistant to our human skills and judgment.

How It Looks

Several attendees shared concerns about how these AI tools are going to look. Are they going to be a completely black box, where we enter a URL or upload an app and just click Go, and then the tool tells us pass or fail, or perhaps just goes and logs the bugs for us? I don’t think so. As per Angie’s and Tariq’s talks that I mentioned before, I think it’s more likely these tools will focus on assistance.

These tools will be incredibly powerful and capable of doing a lot of testing very quickly. However, what they’ll struggle to do is to put all the information they find into context. That’s why I like the idea of assistance, a bunch of AI robots going off and collecting information for me. It’s then up to me to process all that information and put it into the context of the product. The best AI tool is going to be the one that makes it as easy as possible to process the masses of information these tools are going to return.

Imagine you point an AI bot at your website, and within minutes, it’s reporting accessibility issues to you, performance issues, broken links, broken buttons, layout issues, and much more. It’s going to be imperative that we can process that information as quickly as possible to ensure these tools continue to support us and don’t drown us in information.

Visit the Future of Testing: AI in Automation archive

In summary, AI is here, and more is coming. These are very exciting times in the software testing tooling space, and I’m really looking forward to playing with more new tools. I think we need to be curious with these new tools, try them, and see what sticks. The more tools we have in our tool belts, the more options we have to solve our increasingly complex testing challenges.

Future of Testing: AI in Automation Recap
https://applitools.com/blog/future-of-testing-ai-in-automation-recap/ | November 28, 2023
Recap of the Future of Testing: AI in Automation conference. Watch the on-demand sessions to learn actionable steps to implement AI in your software testing strategy, key ethical and philosophical considerations, the importance of quality and security, and much more.

The latest edition of the Future of Testing events, held on November 7, 2023, was nothing short of inspiring and thought-provoking! Focused on AI in Automation, attendees learned how to leverage AI in software testing with top industry leaders like Angie Jones, Tariq King, Simon Stewart, and many more. All of the sessions are available now on-demand, and below, we take a look back at these groundbreaking sessions to give you a sneak peek of what to expect before you watch.

Opening Remarks

Joe Colantonio from TestGuild and Dave Piacente from Applitools set the stage for a thought-provoking discussion on reimagining test automation with AI. As technology continues to evolve at a rapid pace, it’s important for software testing professionals to adapt and embrace new tools and techniques. Joe and Dave encouraged attendees to explore the potential of AI in test automation and how it can enhance their current processes. They also touched upon the challenges faced by traditional test automation methods and how AI-powered solutions can help overcome them.

Dave shared one of our latest updates – the integration of Applitools Eyes with Preflight! Learn more about Preflight.

Keynote—Reimagining Test Automation with AI by Anand Bagmar

In this opening session, Anand Bagmar explored how to reimagine your test automation strategies with AI at each stage of the test automation life cycle, including a live demo showcasing the power of AI in test automation with Applitools.

Anand first introduced the transition from Waterfall to Agile software delivery practices, and while we can’t imagine going back to a Waterfall way of working, he addressed the challenges Agile brings to the software testing life cycle. Each iteration brings more room for error across analysis, maintenance, and validation of tests. This is why testers should turn toward AI-powered test automation, with tools like Applitools, to help ease the pain of Agile testing.

The session is aimed at helping testers understand the importance of leveraging AI technology for successful test automation, as well as empowering them to become more effective in their roles. Watch now.

From Technical Debt to Technical Capital by Denali Lumma

In this session, Denali Lumma from Modular dove into the concept of technical debt and proposed a new perspective on how we view it – technical capital. She walked attendees through key mathematical concepts that help calculate technical capital, as well as examples comparing PyTorch vs. TensorFlow, MySQL vs. Postgres, Frameworks vs. Code Editors, and more.

Attendees gained insights into calculating technical capital and how it can impact the valuation of a company. Watch now.

Automating Quality: A Vision Beyond AI for Testing by Tariq King

Tariq King of EPAM Systems took attendees on a journey through the evolution of software testing and how it has been impacted by generative AI. He shared his vision for the future of automated quality, one that looks beyond just AI to also prioritize creativity and experimentation. Tariq emphasized the need for quality and not just using AI to “go faster.” The more quality you have, the more productive you will be.

Tariq also dove into the ethical implications of using AI for testing and how it can be used for good or evil. Watch the full session.

Leveraging ChatGPT with Cypress for API Testing: Hands-On Techniques by Anna Patterson

In this session, Anna Patterson of EVERFI explored practical techniques and provided hands-on examples of how to harness the combined power of Cypress and ChatGPT to create robust API tests for your applications.

Anna guided us through writing descriptive and clear test prompts using HTML status codes, with a pet store website as an example. She showed in real-time how meaningful prompts in ChatGPT can help you create a solid API test suite, while also considering the security requirements of your company. Watch now.

PANEL—Testing in the AI Era: Opportunities, Hurdles, and the Evolving Role of Engineers

Joe Colantonio, Test Guild • Janna Loeffler, mParticle • Dave Piacente, Applitools • Stephen Williams, Accenture

As the use of AI in software development continues to grow, it is important for engineers and testers to stay ahead of the curve. In this panel discussion led by Joe Colantonio from Test Guild, panelists Janna Loeffler from mParticle, Dave Piacente from Applitools, and Stephen Williams from Accenture came together to discuss the current state of AI implementation and its impact on testing.

They talked about how AI is still in its early stages of adoption and why there may always be some level of distrust in AI technology. The panel emphasized the importance of first understanding why you might implement AI in your testing strategy so that you can determine what the technology will help to solve vs. jumping in right away. Many more incredible takes and insights were shared in this interactive session! Watch now.

The Fear Factor with Richard Bradshaw

The Friendly Tester, Richard Bradshaw, addressed the common fears about AI and automation in testing. Attendees heard Richard’s open and honest discussion on the challenges and concerns surrounding AI and automation in testing. Ultimately, he calmed many fears around AI and gave attendees a better understanding of how they can begin to use it in their organization and to their own advantage. Watch now.

Tests Too Slow? Rethink CI! by Simon Stewart

Simon Stewart from the Selenium Project discussed the latest updates on how to speed up your testing process and improve the reliability of your CI runs. He shared insights into the challenges and tradeoffs involved in this process, as well as what is to come with Selenium and Bazel.
Attendees learned how to rethink their CI approach and use these tools to get faster feedback and more reliable testing results. Watch now.

Revolutionizing Testing: Empowering Manual Testers with AI-Driven Automation by Dmitry Vinnik

Dmitry Vinnik explored how AI-driven automation is revolutionizing the testing process for manual testers. He showed how Applitools’ Visual AI and Preflight help streamline test maintenance and reduce the need for coding.

Dmitry shared the importance of test maintenance, no code solutions for AI testing, and a first-hand look at Applitools Preflight. Watch this session to better understand how AI is transforming testing and empowering manual testers to become more effective in their roles. Watch the full session.

Keynote—Where Is My Flying Car?! Test Automation in the Space Age by Angie Jones

In her closing keynote, Angie Jones of Block took us on a trip into the future to see how science fiction has influenced the technology we have today. The Jetsons predicted many futuristic inventions such as robots, holograms, 3D printing, smart devices, and drones. Angie explored these predictions and showed how far we have come regarding automation and technology in the testing space.

As technology continues to evolve, it is important for testers to stay updated and adapt their strategies accordingly. Angie dove into the exciting world of tech innovation and imagined the future for test automation in the space age. Watch now.


Visit the full Future of Testing: AI in Automation on-demand archive to watch now and learn actionable steps to implement AI in your software testing strategy, key considerations before you start, ethical and philosophical considerations, the importance of quality and security, and much more.

AI and The Future of Test Automation with Adam Carmi | A Dave-reloper’s Take
https://applitools.com/blog/ai-and-the-future-of-test-automation-with-adam-carmi/ | October 16, 2023

We have a lot of great webinars and virtual events here at Applitools. I’m hoping posts like this give you a high-level summary of the key points with plenty of room for you to form your own impressions.

Dave Piacente

Curious if the software robots are here to take our jobs? Or maybe you’re not a fan of the AI hype train? During a recent session, The Future of AI-Based Test Automation, CTO Adam Carmi discussed—in practical terms—the current and future state of AI-based test automation, why it matters, and what you can do today to level up your automation practice.

  • He describes how AI can be used to overcome common everyday challenges in end-to-end test automation, how the need for skilled testers will only increase, and how AI-based tooling can help supercharge any automated testing practice.
  • He also puts his money where his mouth is by demonstrating, with concrete examples (e.g., visual validation and self-healing locators), how the never-ending maintenance overhead of tests can be mitigated using AI-driven tooling that already exists today.
  • He also discusses the role that AI will play in the future, including the development of autonomous testing platforms. These platforms will be able to automatically explore applications, add validations, and fill gaps in test coverage. (Spoiler alert: Applitools is building one, and Adam shows a bit of a teaser for it using a real-time in-browser REPL to automate the browser which uses natural language similar to ChatGPT.)

You can watch the full recording and find the session materials here, and I’ve included a quick breakdown with timestamps for ease of reference.

  • Challenges with automating end-to-end tests using traditional approaches (02:34-10:22)
  • How AI can be used to overcome these challenges (10:23-44:56)
  • The role of AI in the future of test automation (e.g., autonomous testing) (44:57-58:56)
  • The role of testers in the future (58:57-1:01:47)
  • Q&A session with the speaker (1:01:48-1:12:30)

Want to see more? Don’t miss Future of Testing: AI in Automation.

Test Automation Video Winter Roundup: September – December 2022
https://applitools.com/blog/test-automation-video-winter-roundup-september-december-2022/ | January 9, 2023
Get all the latest test automation videos you need right here. All feature test automation experts sharing their knowledge and their stories.


Check out the latest test automation videos from Applitools.

We hope you got to take time to rest, unplug, and spend time with your loved ones to finish out 2022 with gratitude. I have been incredibly appreciative of the learning opportunities and personal growth that 2022 offered. Reflecting on our past quarter here at Applitools, we’ve curated our latest videos from some amazing speakers. If you missed any videos while away on holiday or finishing off tasks for the year, we’ve gathered the highlights for you in one spot.

ICYMI: Back in November, Andrew Knight (a.k.a. the Automation Panda) shared the top ten Test Automation University courses.

Cypress vs. Playwright: The Rematch

One of our most popular series is Let the Code Speak, where we compare testing frameworks in real examples. In our rematch of Let the Code Speak: Cypress vs. Playwright, Andrew Knight and Filip Hric dive deeper into how Cypress and Playwright work in practical projects. Quality Engineer Beth Marshall moderates this battle of testing frameworks while Andy and Filip compare their respective frameworks in the areas of developer experience, finding selectors, reporting, and more.


Automating Testing in a Component Library

Visually testing components allows teams to find bugs earlier, across a variety of browsers and viewports, by testing reused components in isolation. Software Engineering Manager David Lindley and Senior Software Engineer Ben Hudson joined us last year to detail how Vodafone introduced Applitools into its workflow to automate visual component testing. They also share the challenges and improvements they saw when automating their component testing.


When to Shift Left, Move to Centre, and Go Right in Testing

Quality isn’t limited to the end of the development process, so testing should be kept in mind long before your app is built. Quality Advocate Millan Kaul offers actionable strategies and answers to questions about how to approach testing during different development phases and when you should or shouldn’t automate. Millan also shares real examples of how to do performance and security testing.


You, Me, and Accessibility: Empathy and Human-Centered Design Thinking

Inclusive design makes it easier for customers with varying needs and devices to use your product. Accessibility Advocate and Crema Test Engineer Erin Hess talks about the principles of accessible design, how empathy empowers teams and end users, and how to make accessibility more approachable to teams that are newer to it. This webinar is helpful for all team members, whether you’re a designer, developer, tester, product owner, or customer advocate.


Erin also shared a recap along with the audience poll results in a follow-up blog post.

Future of Testing October 2022

Our October Future of Testing event was full of experts from SenseIT, Studylog, Meta, This Dot, EVERSANA, EVERFI, LAB Group, and our own speakers from Applitools. We covered test automation topics across ROI measurement, accessibility, testing at scale, and more. Andrew Knight, Director of Test Automation University, concludes the event with eight testing convictions inspired by Ukiyo-e Japanese woodblock prints. Check out the full Future of Testing October 2022 event library for all of the sessions.


Skills and Strategies for New Test Managers

Being a good Test Manager is about more than just choosing the right tools for your team. EasyJet Test Manager Laveena Ramchandani shares what she has learned from her experience about how to succeed in QA leadership. Some of Laveena’s strategies include how to create a culture that values feedback and communication. This webinar is great for anyone looking to become a Test Manager or for anyone who has newly started the role.


Ensuring a Reliable Digital Experience This Black Friday

With so much data and so many combinations of state, digital shopping experiences can be challenging to test. Senior Director of Product Marketing Dan Giordano talks about how to test your eCommerce application to prioritize coverage on the most important parts of your application. He also shares some common shopper personas to help you start putting together your own user scenarios. The live demo shows how AI-powered automated visual testing can help retail businesses in the areas of visual regression testing, accessibility testing, and multi-baseline testing for A/B experiments.


Dan gave a recap and went a little deeper into eCommerce testing in a follow-up blog post.

Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak!

Our popular Let the Code Speak webinar series focuses primarily on differences in syntax and features, but it doesn’t really cover how these frameworks hold up in the long term. In our new Let the Engineers Speak webinar, we spoke with a panel of engineers from Mercari US, YOOBIC, Hilton, Q2, and Domino’s about how they use Cypress, Playwright, Selenium, and WebdriverIO in their day-to-day operations. Andrew Knight moderated as our panelists discussed what challenges they faced and if they ever switched from one framework to another. The webinar gives a great view into the factors that go into deciding what tool is right for the project.


More on the way in 2023!

We’ve got even more great test automation content coming this year. Be sure to visit our upcoming events page to see what we have lined up.

Check out our on-demand video library for all of our past videos. If you have any favorite videos from this list or from 2022, you can let us know @Applitools. Happy testing!

A Journey to Better Automation with the Screenplay Pattern
https://applitools.com/blog/better-automation-screenplay-pattern/ | March 18, 2022
In this tutorial, you'll learn how to use the Screenplay Pattern with Boa Constrictor to make more reliable interactions for better test automation.

In this tutorial, we’ll examine the limitations of automation with raw WebDriver calls and the Page Object Model, before learning how to use the Screenplay Pattern with Boa Constrictor to make more reliable interactions for better test automation.

I welcome you to join me on a journey through a process of automating better interactions. Grab your adventure hat and your gear and let’s get started!

You can watch the talk I gave at Future of Testing: Frameworks 2022, or you can keep reading below.

In this journey, we will explore a design pattern that is useful for automating better interactions: the Screenplay Pattern. This pattern has been around for several years, but perhaps it is still new to many people in our industry. I want to raise awareness of how useful Screenplay really is. I believe that the Screenplay Pattern offers a better approach to automation than the ways which are traditionally used.

Our route on this journey includes several main points of interest:

  1. Automation with raw WebDriver calls
  2. Automation with the Page Object Model
  3. Introduction to the Screenplay Pattern
  4. Automation with the Screenplay Pattern using Boa Constrictor

We’ll use C# for code examples. Let’s go!

Interactions

Let’s begin this adventure by exploring a fundamental building block to automation: interactions. An interaction is simply how a user operates software. Here, we will explore Web UI interactions, such as clicking buttons and scraping text. Interactions are essential for testing.

Testing is interaction plus verification. You do something and then confirm that it works. It’s simple!

Now, take a moment to consider your own experiences with functional test cases. Each one was probably a step-by-step procedure, where each step had interactions and verifications. As an example, let’s walk through a simple DuckDuckGo search test. DuckDuckGo is a search engine, like Google. The steps for it should be straightforward:

  1. Open the search engine. (This requires navigation.)
  2. Search for a phrase. (This requires entering keystrokes and clicking the search button.)
  3. Verify the results.  (This requires scraping the page title and the result links from the new page.)

A diagram for a simple DuckDuckGo search test

Look at all those interactions!

It seems that handling automated Web UI interactions well is such a challenge in our industry. Usage of the available tools for interactions, like Selenium WebDriver, can vary by team. Also, where there are Web UI interactions, you’re most certainly able to find those pesky critters known as “code duplication” and “flakiness.” Let’s explore the common way that many people learn to start on their journey of automating interactions.

Automation With Raw WebDriver Calls

As people begin to code automated tests, they often start by writing raw Selenium WebDriver calls. If you have had any previous experience with the WebDriver API, you’re probably already familiar with these kinds of calls.

The following code uses raw WebDriver calls in C# to perform the simple DuckDuckGo search test:

IWebDriver driver = new ChromeDriver();

// Open the search engine
driver.Navigate().GoToUrl("https://duckduckgo.com/");

// Search for a phrase
driver.FindElement(By.Id("search_form_input_homepage")).SendKeys("eevee");
driver.FindElement(By.Id("search_button_homepage")).Click();

// Verify results appear
driver.Title.ToLower().Should().Contain("eevee");
driver.FindElements(By.CssSelector("a.result__a")).Count.Should().BeGreaterThan(0);

driver.Quit();

To prep the test, we first need to initialize the WebDriver object. We will use ChromeDriver for the Chrome browser. Then we need to navigate to DuckDuckGo. To do a search, we need to provide locators for the desired Web elements to send a search phrase and click search. Next, we want to verify the results of our search. To do this, we’ll make assertions on the title of the result page and its links. Before we complete our test, we need to make sure we call Quit on the WebDriver object. A good hiking rule is to leave nothing behind but footprints (or perhaps in the case of test automation, leave only logs to show that you were there).

For anyone who has traveled down this path before, perhaps you already spot the trouble up ahead in this code: race conditions! A race condition happens when automation tries to interact with a page or element before it’s fully loaded. In our test case, there are three places where the automation doesn’t properly wait. WebDriver methods aren’t equipped to wait automatically. This is a big reason why many tests are flaky: because they do not properly handle waiting.

One option would be to add an implicit wait for a target element. However, that isn’t viable in all situations, like the race condition in our assertion on the title.

Another option would be to use explicit waits. These give us more control over waiting in our test. To use them, let’s create a new WebDriverWait object named “wait” and give it a timeout value. Next, we’ll use the wait.Until() method to place the wait object before elements that need time to be ready. This method takes a function that returns true when the condition is reached.

IWebDriver driver = new ChromeDriver();
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));

// Open the search engine
driver.Navigate().GoToUrl("https://duckduckgo.com/");

// Search for a phrase
wait.Until(d => d.FindElements(By.Id("search_form_input_homepage")).Count > 0);
driver.FindElement(By.Id("search_form_input_homepage")).SendKeys("eevee");
driver.FindElement(By.Id("search_button_homepage")).Click();

// Verify results appear
wait.Until(d => d.Title.ToLower().Contains("eevee"));
wait.Until(d => d.FindElements(By.CssSelector("a.result__a")).Count > 0);

driver.Quit();

Look out! Adding explicit waits mitigated one problem but created more issues in the process. We can see that the search_form_input_homepage element is used multiple times. This is the “code duplication” pest. The other pest we encounter here is called “unintuitive code.” If the comments are removed, it becomes more difficult to quickly understand what the code is doing. Oh no!

IWebDriver driver = new ChromeDriver();
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
 
driver.Navigate().GoToUrl("https://duckduckgo.com/");
 
wait.Until(d => d.FindElements(By.Id("search_form_input_homepage")).Count > 0);
driver.FindElement(By.Id("search_form_input_homepage")).SendKeys("eevee");
driver.FindElement(By.Id("search_button_homepage")).Click();
 
wait.Until(d => d.Title.ToLower().Contains("eevee"));
wait.Until(d => d.FindElements(By.CssSelector("a.result__a")).Count > 0);
 
driver.Quit();

Automation With the Page Object Model

I would hate to scale that code out any further. The Page Object Model is a method people commonly turn to at this point. Let’s try it out!

To use the Page Object Model, we’ll need to set up a class that has locator variables for elements and methods for interactions. Let’s explore this option by creating a search page class:

public class SearchPage
{
    public const string Url = "https://duckduckgo.com/";
    public static By SearchInput => By.Id("search_form_input_homepage");
    public static By SearchButton => By.Id("search_button_homepage");
   
    public IWebDriver Driver { get; private set; }
   
    public SearchPage(IWebDriver driver) => Driver = driver;
   
    public void Load() => Driver.Navigate().GoToUrl(Url);
 
    public void Search(string phrase)
    {
        WebDriverWait wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(30));
        wait.Until(d => d.FindElements(SearchInput).Count > 0);
        Driver.FindElement(SearchInput).SendKeys(phrase);
        Driver.FindElement(SearchButton).Click();
    }
}

We will start by adding some variables and giving them intuitive names. We will add a constant for the search page URL and variables to hold locators for search input and search button. Then we’ll add a variable to store a reference to the WebDriver, which will get injected through the constructor.

Now for the methods. We’ll add a load method for navigating to the search page URL. We will also add a search method. It first initializes a wait object and then uses that to wait for the search input to appear. After that, it puts in the phrase and clicks the search button to complete the search.

So far, this looks better than raw WebDriver calls. We begin to see the concerns are being separated. This code is easier to read because the different pieces have meaningful names. They are also reusable which is a step towards dealing with the “code duplication” pest.

Let’s refactor our simple search test using this new SearchPage class. We can also apply the Page Object Model to the other steps. Here is the refactored code:

IWebDriver driver = new ChromeDriver();
 
SearchPage searchPage = new SearchPage(driver);
searchPage.Load();
searchPage.Search("eevee");
 
ResultPage resultPage = new ResultPage(driver);
resultPage.WaitForTitle("eevee");
resultPage.WaitForResultLinks();
 
driver.Quit();

This looks much better.
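
The ResultPage class used above isn’t shown in the original walkthrough. Here is a minimal sketch of what it might look like, assuming the same explicit-wait approach as SearchPage (the member names simply match the calls in the refactored test):

public class ResultPage
{
    public static By ResultLinks => By.CssSelector("a.result__a");

    public IWebDriver Driver { get; private set; }
    private WebDriverWait Wait { get; set; }

    public ResultPage(IWebDriver driver)
    {
        Driver = driver;
        Wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(30));
    }

    // Waits until the page title contains the given phrase (case-insensitive).
    public void WaitForTitle(string phrase) =>
        Wait.Until(d => d.Title.ToLower().Contains(phrase.ToLower()));

    // Waits until at least one result link is present on the page.
    public void WaitForResultLinks() =>
        Wait.Until(d => d.FindElements(ResultLinks).Count > 0);
}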

Okay, I know I said that page objects help cut down on that pesky code duplication. However, it doesn’t deal with all of it. To show you what I mean, let’s consider interaction methods. Our test has a method to click on one button, but what happens if there is another button on the search page we might want to click? Well, we would need to add another method to click that button, such as in the following code:

public class AnyPage
{
    // ...
    public void ClickButton()
    {
        Wait.Until(d => d.FindElements(Button).Count > 0);
        Driver.FindElement(Button).Click();
    }
   
    public void ClickOtherButton()
    {
        Wait.Until(d => d.FindElements(OtherButton).Count > 0);
        Driver.FindElement(OtherButton).Click();
    }
}

We would use the same code for both of those click methods. On top of that, if there were more buttons to click, the same thing would happen. We may have thought we left that pest “code duplication” behind, yet here it is popping up again. It’s so pesky!

Don’t fret yet! I have heard that a “base page” pairs well with page objects. We can abstract the common parts into a central parent class and then use them for any page object we want. Maybe that will help us deal with this pest. Check it out:

public class BasePage
{
    public IWebDriver Driver { get; private set; }
    public WebDriverWait Wait { get; private set; }
   
    public BasePage(IWebDriver driver)
    {
        Driver = driver;
        Wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(30));
    }
 
    protected void Click(By locator)
    {
        Wait.Until(d => d.FindElements(locator).Count > 0);
        Driver.FindElement(locator).Click();
    }
}

First, we will refactor the variables for WebDriver and wait objects into our new BasePage class. Next, we will add common interaction methods, like Click(). I’m so glad we found abstraction as a hiking stick to help us along! Any child class we would like to write can now inherit these methods. I hope you are not tired because we still have more hiking to do – our destination has not been reached yet!

The trouble with page objects is that they combine two concerns: page elements and their interactions. We would still need to write page object methods for every interaction an element might need. For example, every click, scrape, or appearance check is a new interaction. We would have to write all those interactions for other elements as well. Here is an example:

public class AnyPage : BasePage
{
    // ...
 
    public void ClickButton() => Click(Button);
    public void ClickOtherButton() => Click(OtherButton);
 
    public string ButtonText() => Text(Button);
    public string OtherButtonText() => Text(OtherButton);

    public bool IsButtonDisplayed() => IsDisplayed(Button);
    public bool IsOtherButtonDisplayed() => IsDisplayed(OtherButton);
}

Unfortunately, this issue is not mitigated by using a base page. The base page may abstract the specific interaction code, but it does not reduce the number of interactions we would need to write.

Yikes, this path looks like it could get steep!

One other frustration with page objects is that there isn’t a real structure. If you have been down the Page Object Path before, consider this: if a junior tester joined your team, would they be able to look at the code and easily determine the structure of it? If an experienced developer joined your team, would your implementation of the page object pattern look like the implementation they used previously? My guess is that the answer would be “no.” Why? Because there is no official version of the Page Object Pattern nor conformity to its design. What is enforcing a structure? Nothing that I know of. Page objects are more of a convention than a design pattern.

Isn’t there a better way to handle interactions?!

Introduction to the Screenplay Pattern

Aha, the Screenplay Pattern! Maybe this will have better interactions for better automation. Come on, friends, let the adventure continue!

Let’s first examine interactions through the lens of the Screenplay pattern. For an interaction to happen, it needs an initiator. Typically, this entity is a user. The Screenplay Pattern calls this role the “Actor.” The Actor is the one who takes actions like clicking buttons or sending keys.

Interactions need something to act upon, like elements on a web page. All those pieces are part of the product that’s being tested. Our journey is using a Web app for testing, but the product is not limited to just that. It could be other things like microservices or mobile apps.

Then we have the actual interaction itself. We have already seen simple ones like clicks and scrapes. Interactions can also be more complex, like what we saw when creating a Search method for our SearchPage class. The great thing is each interaction will operate the same on whatever target is given to it.

Finally, we need objects that will allow Actors to perform interactions. The Screenplay Pattern calls these “Abilities.” In our web test, we’re using Selenium WebDriver as the tool to automate browser interactions.

Components of the Screenplay Pattern

Actor, Ability, and Interaction – these three things are the crucial blocks of the Screenplay pattern. They each represent a different concern. Their relationship can simply be explained like this:

Actors use Abilities to perform Interactions.

If you only remember one thing from today’s journey, remember this point because it is the heart of the Screenplay Pattern.

As we started to discover earlier, page objects can become difficult to untangle because of how the concerns are combined. The Screenplay Pattern uses Actors, Abilities, and Interactions to nicely separate out the concerns, which allows for greater reusability and scalability.

Automation With the Screenplay Pattern Using Boa Constrictor

Now that we have been introduced to this pattern, let’s explore Screenplay further using Boa Constrictor.

Boa Constrictor is an open-source C# implementation of the Screenplay Pattern. It was developed under the leadership of Andy Knight at PrecisionLender as the cornerstone of PrecisionLender’s end-to-end test automation solution. The project can be used with any .NET test framework, like SpecFlow or NUnit.

Boa Constrictor – The .NET Screenplay Pattern

We will use Boa Constrictor to further evolve our simple DuckDuckGo search test. All the example code we are about to explore is copied directly from the Boa Constrictor GitHub repository with permission.

Setup

This demo requires installing the following NuGet packages and declaring dependencies:

// NuGet Packages:
//  Boa.Constrictor
//  FluentAssertions
//  Selenium.Support
//  Selenium.WebDriver
 
using Boa.Constrictor.Logging;
using Boa.Constrictor.Screenplay;
using Boa.Constrictor.WebDriver;
using FluentAssertions;
using OpenQA.Selenium.Chrome;
using static Boa.Constrictor.WebDriver.WebLocator;

Every Screenplay call begins with an Actor. The Actor’s job is to perform interactions. Typically, only a single Actor is required for most test cases. In our new test using Boa Constrictor, let’s initialize our Actor:

IActor actor = new Actor(name: "Sarah", logger: new ConsoleLogger());

The Actor class has two optional arguments. The first one is for naming the Actor to help describe who is acting. This is recorded in logged messages. The second one is for a logger, which logs messages from Screenplay calls to a target destination. The logger is required to implement Boa Constrictor’s ILogger interface. In this example, we are using the ConsoleLogger class, which logs messages to the system console. You can choose to define a custom logger instead; just make sure to implement ILogger.

Actors need Abilities to perform interactions. In this test, our Actor must have a Selenium WebDriver instance to click elements on a Web page, so we’ll give her the BrowseTheWeb Ability by using a method called Can():

actor.Can(BrowseTheWeb.With(new ChromeDriver()));

In plain English this line says, “The actor can browse the Web with a new ChromeDriver.” We can see that Boa Constrictor’s fluent-like syntax makes its call chains easy to understand.

BrowseTheWeb is an Ability that allows an Actor to initiate Web UI Interactions:

public class BrowseTheWeb : IAbility
{
    public IWebDriver WebDriver { get; }
 
    private BrowseTheWeb(IWebDriver driver) =>
        WebDriver = driver;
       
    public static BrowseTheWeb With(IWebDriver driver) =>
        new BrowseTheWeb(driver);
}

The With() method supplies a WebDriver object to the Actor, which can be retrieved from the Actor by any Web UI Interaction. Boa Constrictor supports all browser types.
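
For example, assuming Selenium’s FirefoxDriver is installed and imported (OpenQA.Selenium.Firefox), the same Ability could wrap a different browser:

actor.Can(BrowseTheWeb.With(new FirefoxDriver()));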

Every Ability needs to implement the IAbility interface. There is no limit to the number of Abilities an Actor can have.

Unlike the Page Object Model, the Screenplay Pattern requires separating page structure concerns from interaction concerns. This structure provides greater reusability because any element can be targeted by any interaction. To do that, we write models for the Web pages we want to test. These are needed so the Actor can call WebDriver interactions. These types of models should be static classes that contain element locators for the page, and they can also include URLs. Interaction logic does not belong in page classes; they should only be used to model structure. By following the Screenplay Pattern, we now have classes of elements that can be shared among any interaction. We no longer need to write a click or a scrape for each different element.

We’ll add two members in the SearchPage class:

public static class SearchPage
{
    public const string Url =
        "https://www.duckduckgo.com/";
   
    public static IWebLocator SearchInput => L(
        "DuckDuckGo Search Input",
        By.Id("search_form_input_homepage"));
}

For convenience, locators can be constructed using the statically imported L method. There are two parts to a locator. The first is a plain-English description that will be used by the logger. The second is a Query used to find the element. Boa Constrictor uses Selenium WebDriver’s By queries.
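
Note that the SearchDuckDuckGo Task shown later also references a SearchPage.SearchButton locator. A sketch of that member, following the same pattern (the description string is my own wording; the element ID comes from the earlier raw WebDriver example), would be:

public static IWebLocator SearchButton => L(
    "DuckDuckGo Search Button",
    By.Id("search_button_homepage"));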

Screenplay Interaction: Tasks

There are two types of interactions in The Screenplay Pattern: Tasks and Questions. We’ll explore a Task first. A Task is simply an action that does not return a value, like a click or a refresh.

In Boa Constrictor, there is a Task called Navigate, which is used to load a Web page for a given URL. Let’s add the following line to our test:

actor.AttemptsTo(Navigate.ToUrl(SearchPage.Url));

In plain English this line says, “The actor attempts to navigate to the URL for the search page.” We can easily see that this line will load a search page.

Every Task needs to implement the ITask interface. AttemptsTo() calls a Task. When the Actor calls AttemptsTo() on a Task, it calls the Task’s PerformAs() method:

public void AttemptsTo(ITask task)
{
    task.PerformAs(this);
}

Here is the Navigate Task:

public class Navigate : ITask
{
    private string Url { get; set; }
 
    private Navigate(string url) => Url = url;
   
    public static Navigate ToUrl(string url) => new Navigate(url);
 
    public void PerformAs(IActor actor)
    {
        var driver = actor.Using<BrowseTheWeb>().WebDriver;
        driver.Navigate().GoToUrl(Url);
    }
}

ToUrl() gives the specified URL. The Navigate Task’s PerformAs() method retrieves the WebDriver object from the Actor’s BrowseTheWeb Ability and then uses it to load the specified URL.

The SearchPage.Url parameter we gave ToUrl() in our test is from the SearchPage class. Like I said before, since it is in a page class, this URL is available to any interaction.

Screenplay Interaction: Questions

Now we will move on to explore Question Interactions. A Question can perform one or more actions and then return an answer, like scraping an element’s text or waiting for its appearance.

In Boa Constrictor, there is a Question called ValueAttribute, which is used to get the current value within a given input field. We will add it to our test:

actor.AskingFor(ValueAttribute.Of(SearchPage.SearchInput)).Should().BeEmpty();

In plain English this line says, “The actor asking for the value attribute of the search page’s search input element should be empty.”

Every Question needs to implement the IQuestion interface. AskingFor() calls a Question. There is also an equivalent method called AsksFor(). When the Actor calls either of these, it calls the Question’s RequestAs() method:

public TAnswer AskingFor<TAnswer>(IQuestion<TAnswer> question)
{
    return question.RequestAs(this);
}

Here is the ValueAttribute Question:

public class ValueAttribute : IQuestion<string>
{
    public IWebLocator Locator { get; }
   
    private ValueAttribute(IWebLocator locator) => Locator = locator;
   
    public static ValueAttribute Of(IWebLocator locator) => new ValueAttribute(locator);
 
    public string RequestAs(IActor actor)
    {
        var driver = actor.Using<BrowseTheWeb>().WebDriver;
        actor.AttemptsTo(Wait.Until(Existence.Of(Locator), IsEqualTo.True()));
        return driver.FindElement(Locator.Query).GetAttribute("value");
    }
}

Of() gives the specified Web element’s locator. The ValueAttribute Question’s RequestAs() method retrieves the WebDriver object, waits for existence on the page of the specified element, then scrapes and returns its value attribute.

The SearchPage.SearchInput parameter we gave Of() in our test is from the SearchPage class. It is the locator for the search input field.

Now that we have a value, our test can make assertions on it. Should().BeEmpty() is a Fluent Assertion that verifies if the search input field is empty after the page is first loaded.

Screenplay Interaction: Custom Interactions

Let’s explore creating custom interactions in this next step. There are two basic interactions involved when doing a search. The first is entering the phrase in the search input and the second is clicking the search button. It makes sense to create a custom interaction for this since searching is so common. We can do that by combining lower-level interactions.

We’ll call our custom Task “SearchDuckDuckGo” and give it a search phrase as an argument:

public class SearchDuckDuckGo : ITask
{
    public string Phrase { get; }
 
    private SearchDuckDuckGo(string phrase) =>
        Phrase = phrase;
   
    public static SearchDuckDuckGo For(string phrase) =>
        new SearchDuckDuckGo(phrase);
   
    public void PerformAs(IActor actor)
    {
        actor.AttemptsTo(SendKeys.To(SearchPage.SearchInput, Phrase));
        actor.AttemptsTo(Click.On(SearchPage.SearchButton));
    }
}

The two interactions it should call in its PerformAs() method are SendKeys() and Click().

You can see that we made the code more understandable by combining both interactions into a custom one. Another benefit is the ability to reuse this automation.

actor.AttemptsTo(SearchDuckDuckGo.For("eevee"));

In plain English this line is now short, sweet and to the point: “The actor attempts to search DuckDuckGo for eevee.”

Now we’re ready for our final assertion – verifying that the result links have appeared. We know from the previous versions of this test that this step contains a race condition: we still need to wait for the links to be displayed after the page loads. If the automation checks too early, the test case will fail. Do not despair my adventure companions, waiting can be easy when using Boa Constrictor!

actor.WaitsUntil(Appearance.Of(ResultPage.ResultLinks), IsEqualTo.True());

In plain English this line says, “The actor waits until the appearance of result page result links is equal to true.”

WaitsUntil() is a Boa Constrictor method that will call a Question repeatedly until the answer meets a specified condition or it reaches the timeout. The Question here is asking for the appearance of result links on the result page.

public static IWebLocator ResultLinks => L(
    "DuckDuckGo Result Page Links",
    By.ClassName("result__a"));

The waiting condition IsEqualTo.True() is waiting for the answer value to become “true.” This Question will return “false” before the links are loaded. It will return “true” after the links appear. Boa Constrictor comes with several conditions readily available. A few examples are equality and string matching. Custom conditions can be created by implementing the ICondition interface.

Asking a Question repeatedly until the expected answer is met is better than hard sleeps. If the expected answer is not received within the given timeout, an exception is raised. The good news is that waiting is already taken care of in many of Boa Constrictor’s WebDriver interactions! If an interaction has a target element, it will wait for the element’s existence before taking further action. We have already seen this when we used Click() and SendKeys(). Be mindful when using interactions that ask for appearance or existence, as those do not have waiting built in.
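
To make that distinction concrete, here is a small sketch contrasting the two styles, using only calls already shown in this tutorial (Appearance.Of is a Question whose answer becomes true once the element appears):

// Asks once and returns the current answer immediately; it may be false if the links have not loaded yet.
bool linksAppeared = actor.AskingFor(Appearance.Of(ResultPage.ResultLinks));

// Retries the Question until the answer is true or the timeout is reached.
actor.WaitsUntil(Appearance.Of(ResultPage.ResultLinks), IsEqualTo.True());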

We’re almost done! This last part is important – don’t forget to quit the browser. We can do this by using Boa Constrictor’s QuitWebDriver() Task.

actor.AttemptsTo(QuitWebDriver.ForBrowser());

Regardless of what framework you use, the best practice is to put this in a cleanup or teardown routine. That way, even if a test fails, the browser is still cleaned up. Remember, my adventure buddies, leave nothing but footprints.
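
A minimal sketch of that idea, assuming NUnit as the test framework (the attributes below are NUnit’s; only the Screenplay calls come from this tutorial):

[TestFixture]
public class DuckDuckGoSearchTests
{
    private IActor actor;

    [SetUp]
    public void SetUp()
    {
        actor = new Actor(name: "Sarah", logger: new ConsoleLogger());
        actor.Can(BrowseTheWeb.With(new ChromeDriver()));
    }

    [TearDown]
    public void TearDown()
    {
        // Runs even when a test fails, so the browser never leaks.
        actor.AttemptsTo(QuitWebDriver.ForBrowser());
    }
}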

Congratulations, we have reached this journey’s destination! Here we have a complete test case using the Screenplay Pattern with Boa Constrictor. It’s easy to understand, smartly handles race conditions, and separates concerns for better interactions.

IActor actor = new Actor(name: "Sarah", logger: new ConsoleLogger());

actor.Can(BrowseTheWeb.With(new ChromeDriver()));

actor.AttemptsTo(Navigate.ToUrl(SearchPage.Url));

actor.AskingFor(ValueAttribute.Of(SearchPage.SearchInput)).Should().BeEmpty();

actor.AttemptsTo(SearchDuckDuckGo.For("eevee"));

actor.WaitsUntil(Appearance.Of(ResultPage.ResultLinks), IsEqualTo.True());

actor.AttemptsTo(QuitWebDriver.ForBrowser());

Conclusion

Remember, the Screenplay Pattern is summarized in this statement: Actors use Abilities to perform Interactions. Simple.

I will sum up what we’ve discovered in five main points.

  1. The Screenplay Pattern offers rich, reusable, and reliable interactions. Specifically, Boa Constrictor comes with built-in Tasks and Questions for every type of WebDriver-based interaction.
  2. Screenplay interactions are composable. It is easy to combine interactions, which alleviates the issues caused by code that is difficult to understand and full of duplication.
  3. The Screenplay Pattern makes waiting easy using existing Questions and conditions. One of the most challenging parts of black box automation is the proper handling of waiting.
  4. Screenplay calls are understandable. Their fluent-like syntax reads more like prose than code.
  5. The Screenplay Pattern is a design pattern for any type of interaction, not just for Web UI. On this journey, we discovered how to use it for Web UI interactions, but it is also an option that can be applied to mobile, REST API, and other things. On top of that, you can even design your own interactions.

To support this, I would like to share a bit of my personal experience learning it. I was introduced to this pattern a little over a year ago. After I learned how to use Screenplay, I found that writing new automation with Boa Constrictor was intuitive. I also enjoyed how easy it was to understand the code, basically like plain English, which made debugging issues that much easier. In a short amount of time it helped me gain confidence in writing new tests and understanding my way around the test automation solution. It also provided a great structure for me to develop within. I’m proud to say the solution boasts over 2200 unique Web UI test cases and is still scaling well. We even published a case study with SpecFlow on it!

The Screenplay Pattern provides better interactions for better automation. Screenplay is a fantastic option for automating behaviors under test. As we already discovered, the Screenplay Pattern is simple. Actors use Abilities to perform Interactions. That’s all there is to it.

The journey does not have to end here. I invite you to check out the Boa Constrictor GitHub repository and to try the tutorials for yourself from the doc site.

I hope everyone had a fun and informative journey today (and that no one got stuck back there on the path of Page Object Models). Thanks for being my journey companions today. Happy automating!

The post A Journey to Better Automation with the Screenplay Pattern appeared first on Automated Visual Testing | Applitools.

]]>
Autonomous Testing: Test Automation’s Next Great Wave https://applitools.com/blog/autonomous-testing-test-automations-next-great-wave/ Tue, 08 Mar 2022 22:28:49 +0000 https://applitools.com/?p=35096 "Full" test automation is approaching. We are riding the crest of the next great wave: autonomous testing. It will fundamentally change testing.

The post Autonomous Testing: Test Automation’s Next Great Wave appeared first on Automated Visual Testing | Applitools.

]]>

The word “automation” has become a buzzword in pop culture. It conjures things like self-driving cars, robotic assistants, and factory assembly lines. Most people don’t think about automation for software testing. In fact, many non-software folks are surprised to hear that what I do is “automation.”

The word “automation” also carries a connotation of “full” automation with zero human intervention. Unfortunately, most of our automated technologies just aren’t there yet. For example, a few luxury cars out there can parallel-park themselves, and Teslas have some cool autopilot capabilities, but fully-autonomous vehicles do not yet exist. Self-driving cars need several more years to perfect and even more time to become commonplace on our roads.

Software testing is no different. Even when test execution is automated, test development is still very manual. Ironic, isn’t it? Well, I think the day of “full” test automation is quickly approaching. We are riding the crest of the next great wave: autonomous testing. It’ll arrive long before cars can drive themselves. Like previous waves, it will fundamentally change how we, as testers, approach our craft.

Let’s look at the past two waves to understand this more deeply. You can watch the keynote address I delivered at Future of Testing: Frameworks 2022, or you can keep reading below.

Test Automation's Next Great Wave

Before Automation

In their most basic form, tests are manual. A human manually exercises the behavior of the software product’s features and determines if outcomes are expected or erroneous. There’s nothing wrong with manual testing. Many teams still do this effectively today. Heck, I always try a test manually before automating it. Manual tests may be scripted in that they follow a precise, predefined procedure, or they may be exploratory in that the tester relies instead on their sensibilities to exercise the target behaviors.

Testers typically write scripted tests as a list of steps with interactions and verifications. They store these tests in test case management repositories. Most of these tests are inherently “end-to-end:” they require the full product to be up and running, and they expect testers to attempt a complete workflow. In fact, testers are implicitly incentivized to include multiple related behaviors per test in order to gain as much coverage with as little manual effort as possible. As a result, test cases can become very looooooooooooong, and different tests frequently share common steps.

Large software products exhibit countless behaviors. A single product could have thousands of test cases owned and operated by multiple testers. Unfortunately, at this scale, testing is slooooooooow. Whenever developers add new features, testers need to not only add new tests but also rerun old tests to make sure nothing broke. Software is shockingly fragile. A team could take days, weeks, or even months to adequately test a new release. I know – I once worked at a company with a 6-month-long regression testing phase.

Slow test cycles forced teams to practice Waterfall software development. Rather than waste time manually rerunning all tests for every little change, it was more efficient to bundle many changes together into a big release and test them all at once. Teams would often pipeline development phases: while developers wrote code for the features going into release X+1, testers tested the features for release X. If testing cycles were long, testers might repeat tests a few times throughout the cycle. If testing cycles were short, then testers would reduce the run to a subset of tests most aligned with the new features. Test planning was just as much work as test execution and reporting because of the difficulty in judging risk-based tradeoffs.

A Waterfall release schedule showing overlapping cycles of Design, Development, Testing and Release.
A typical Waterfall release schedule with overlapping phases

Slow manual testing was the bane of software development. It lengthened time to market and allowed bugs to fester. Anything that could shorten testing time would make teams more productive.

The First Wave: Manual Test Conversion

That’s when the first wave of test automation hit: manual test conversion. What if we could implement our manual test procedures as software scripts so they could run automatically? Instead of a human running the tests slowly, a computer could run them much faster. Testers could also organize scripts into suites to run a bunch of tests at one time. That’s it – that was the revolution. Let software test software!

During this wave, the main focus of automation was execution. Teams wanted to directly convert their existing manual tests into automated scripts to speed them up and run them more frequently. Both coded and codeless automation tools hit the market. However, they typically stuck with the same Waterfall-minded processes. Automation didn't fundamentally change how teams developed software; it just made testing better. For example, during this wave, running automated tests after a nightly build was in vogue. When teams planned their testing efforts, they would pick a few high-value tests to automate and run more frequently than the rest of the manual tests.

A table showing "interaction" in one column and "verification" in another, with sample test steps.
An example of a typical manual test that would have likely been converted to an automated test during this wave.

Unfortunately, while this type of automation offered big improvements over pure manual testing, it had problems. First, testers still needed to manually trigger the tests and report results. On a typical day, a tester would launch a bunch of scripts while manually running other tests on the side. Second, test scripts were typically very fragile. Neither the tooling nor the understanding needed for good automation had matured yet. Large end-to-end tests and long development cycles also increased the risk of breakage. Many teams gave up on test automation because of the maintenance nightmare.

The first wave of test automation was analogous to cars switching from manual to automatic transmissions. Automation made the task of driving a test easier, but it still required the driver (or the tester) to start and stop the test.

The Second Wave: CI/CD

The second test automation wave was far more impactful than the first. After automating the execution of tests, focus shifted to automating the triggering of tests. If tests are automated, then they can run without any human intervention, which means they can also be launched at any time without a human in the loop. What if tests could run automatically after every new build? What if every code change could trigger a new build that could then be covered with tests immediately? Teams could catch bugs as soon as they happened. This was the dawn of Continuous Integration, or "CI" for short.

Continuous Integration revolutionized software development. Long Waterfall phases for coding and testing weren’t just passé – they were unnecessary. Bite-sized changes could be independently tested, verified, and potentially deployed. Agile and DevOps practices quickly replaced the Waterfall model because they enabled faster releases, and Continuous Integration enabled Agile and DevOps. As some would say, “Just make the DevOps happen!”

The types of tests teams automated changed, too. Long end-to-end tests that covered “grand tours” with multiple behaviors were great for manual testing but not suitable for automation. Teams started automating short, atomic tests focused on individual behaviors. Small tests were faster and more reliable. One failure pinpointed one problematic behavior.

Developers also became more engaged in testing. They started automating both unit tests and feature tests to be run in CI pipelines. The lines separating developers and testers blurred.

Teams adopted the Testing Pyramid as an ideal model for test count proportions. Smaller tests were seen as “good” because they were easy to write, fast to execute, less susceptible to flakiness, and caught problems quickly. Larger tests, while still important for verifying workflows, needed more investment to build, run, and maintain. So, teams targeted more small tests and fewer large tests. You may personally agree or disagree with the Testing Pyramid, but that was the rationale behind it.

The Testing Pyramid, showing a large amount of unit tests at the base, integration tests in the middle and end-to-end tests at the top.
The Classic Testing Pyramid

While the first automation wave worked within established software lifecycle models, the second wave fundamentally changed them. The CI revolution enabled tests to run continuously, shrinking the feedback loop and maximizing the value that automated tests could deliver. It gave rise to the SDET, or Software Development Engineer in Test, who had to manage tests, automation, and CI systems. SDETs carried more responsibilities than the automation engineers of the first wave.

If we return to our car analogy, the second wave was like adding cruise control. Once the driver gets on the highway, the car can just cruise on its own without much intervention.

Unfortunately, while the second wave enabled teams to multiply the value they can get out of testing and automation, it came with a cost. Test automation became full-blown software development in its own right. It entailed tools, frameworks, and design patterns. The continuous integration servers became production environments for automated tests. While some teams rose to the challenge, many others struggled to keep up. The industry did not move forward together in lock-step. Test automation success became a gradient of maturity levels. For some teams, success seemed impossible to reach.

Attempts at Improvement

Now, these two test automation waves I described do not denote precise playbooks every team followed. Rather, they describe the general industry trends regarding test automation advancement. Different teams may have caught these waves at different times, too.

Currently, as an industry, I think we are riding the tail end of the second wave, rising up to meet the crest of a third. Continuous Integration, Agile, and DevOps are all established practices; the innovation that will define the next wave hasn't arrived yet.

Over the past few years, a number of nifty test automation features have hit the scene, such as screen recorders and smart locators. I'm going to be blunt: those are not the next wave; they're just attempts to fix aspects of the previous waves.

  1. Screen recorders and visual step builders have been around forever, it seems. Although they can help folks who are new to automation or don’t know how to code, they produce very fragile scripts. Whenever the app under test changes its behavior, testers need to re-record tests.
  2. Self-healing locators don’t deliver much value on their own. When a locator breaks, it’s most likely due to a developer changing the behavior on a given page. Behavior changes require test step changes. There’s a good chance the target element would be changed or removed. Besides, even if the target element keeps its original purpose, updating its locator is a super small effort.
  3. Visual locators – ones that find elements based on image matching instead of textual queries – also don’t deliver much value on their own. They’re different but not necessarily “better.” The one advantage they do offer is finding elements that are hard to locate with traditional locators, like a canvas or gaming objects.  Again, the challenge is handling behavior change, not element change.

You may agree or disagree with my opinions on the usefulness of these tools, but the fact is that they all share a common weakness: they are vulnerable to behavioral changes. Human testers must still intervene as development churns.

These tools are akin to a car that can park itself but can’t fully drive itself. They’re helpful to some folks but fall short of the ultimate dream of full automation.

The Third Wave: Autonomous Testing

The first two waves covered automation for execution and scheduling. Now, the bottleneck is test design and development. Humans still need to manually create tests. What if we automated that?

Consider what testing is: Testing equals interaction plus verification. That’s it! You do something, and you make sure it works correctly. It’s true for all types of tests: unit tests, integration tests, end-to-end tests, functional, performance, load; whatever! Testing is interaction plus verification.

At its core, testing is interaction plus verification

During the first two waves, humans had to dictate those interactions and verifications precisely. What we want – and what I predict the third wave will be – is autonomous testing, in which that dictation will be automated. This is where artificial intelligence can help us. In fact, it’s already helping us.

Applitools has already mastered automated validation for visual interfaces. Traditionally, a tester would need to write several lines of code to functionally validate behaviors on a web page. They would need to check for elements’ existence, scrape their texts, and make assertions on their properties. There might be multiple assertions to make – and other facets of the page left unchecked. Visuals like color and position would be very difficult to check. Applitools Eyes can replace almost all of those traditional assertions with single-line snapshots. Whenever it detects a meaningful change, it notifies the tester. Insignificant changes are ignored to reduce noise.
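As a rough illustration (the element ID, page title, and test names below are made up, and SDK setup such as API keys and the Eyes instance itself is omitted), compare a pile of element-by-element assertions with a single visual snapshot using the Eyes Selenium SDK's classic check:

// Traditional functional checks: one assertion per property, and color,
// layout, and position are still left unverified.
Assert.IsTrue(driver.FindElement(By.Id("results")).Displayed);
Assert.AreEqual("eevee at DuckDuckGo", driver.Title);

// Visual snapshot: one line captures the whole window for Visual AI comparison.
eyes.Open(driver, "DuckDuckGo Search", "Results page renders correctly");
eyes.CheckWindow("Results page");
eyes.Close();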

Automated visual testing like this fundamentally simplifies functional verification. It should not be seen as an optional extension or something nice to have. It automates the dictation of verification. It is a new type of functional testing.

The remaining problem to solve is dictation of interaction. Essentially, we need to train AI to figure out proper app behaviors on its own. Point it at an app, let it play around, and see what behaviors it identifies. Pair those interactions with visual snapshot validation, and BOOM – you have autonomous testing. It’s testing without coding. It’s like a fully-self-driving car!

Some companies already offer tools that attempt to discover behaviors and formulate test cases. Applitools is also working on this. However, it’s a tough problem to crack.

Even with significant training and refinement, AI agents still have what I call “banana peel moments:” times when they make surprisingly awful mistakes that a human would never make. Picture this: you’re walking down the street when you accidentally slip on a banana peel. Your foot slides out from beneath you, and you hit your butt on the ground so hard it hurts. Everyone around you laughs at both your misfortune and your clumsiness. You never saw it coming!

Banana peel moments are common AI hazards. Back in 2011, IBM's supercomputer Watson competed on Jeopardy and handily defeated two of the greatest human Jeopardy champions of that time. However, I remember watching some of the promo videos back then explaining how hard it was to train Watson to give the right answers. In one clip, it showed Watson answering "banana" to some arbitrary question. Oops! Banana? Really?

IBM Watson is shown defeating other contestants with the correct answer of Bram Stoker in Final Jeopardy.
Watson (center) competing against Ken Jennings (left) and Brad Rutter (right) on Jeopardy in 2011. (Image source: https://i.ytimg.com/vi/P18EdAKuC1U/maxresdefault.jpg)

While Watson's blunder was comical, other mistakes can be deadly. Remember those self-driving cars? Tesla autopilot mistakes have killed at least a dozen people since 2016. Autonomous testing isn't a life-or-death situation like driving, but testing mistakes could be a big risk for companies looking to de-risk their software releases. What if autonomous tests miss critical application behaviors that then break once deployed to production? Companies could lose lots of money, not to mention their reputations.

So, how can we give AI for testing the right training to avoid these banana peel moments? I think the answer is simple: set up AI for testing to work together with human testers. Instead of making AI responsible for churning out perfect test cases, design the AI to be a “coach” or an “advisor.” AI can explore an app and suggest behaviors to cover, and the human tester can pair that information with their own expertise to decide what to test. Then, the AI can take that feedback from the human tester to learn better for next time. This type of feedback loop can help AI agents not only learn better testing practices generally but also learn how to test the target app specifically. It teaches application context.

AI and humans working together is not just a theory. It’s already happened! Back in the 90s, IBM built a supercomputer named Deep Blue to play chess. In 1996, it lost 4-2 to grandmaster and World Chess Champion Garry Kasparov. One year later, after upgrades and improvements, it defeated Kasparov 3.5-2.5. It was the first time a computer beat a world champion at chess. After his defeat, Kasparov had an idea: What if human players could use a computer to help them play chess? Then, one year later, he set up the first “advanced chess” tournament. To this day, “centaurs,” or humans using computers, can play at nearly the same level as grandmasters.

Garry Kasparov staring at a chessboard across the table from an operator playing for the Deep Blue AI.
Garry Kasparov playing chess against Deep Blue. (Image source: https://cdn.britannica.com/62/71262-050-25BFC8AB/Garry-Kasparov-Deep-Blue-IBM-computer.jpg)

I believe the next great wave for test automation belongs to testers who become centaurs – and to those who enable that transformation. AI can learn app behaviors and suggest test cases that testers accept or reject as part of their testing plan. Then, AI can autonomously run the approved tests. Whenever changes or failures are detected, the autonomous tests give testers helpful results, like visual comparisons, to figure out what is wrong. Testers will never be completely removed from testing, but the grindwork they need to do will be minimized. Self-driving cars still have passengers who set their destinations.

This wave will also be easier to catch than the first two waves. Testing and automation were historically do-it-yourself efforts. You had to design, automate, and execute tests all on your own. Many teams struggled to make it successful. However, with autonomous testing and coaching capabilities, AI testing technologies will eliminate the hardest parts of automation. Teams can focus on what they want to test more than how to implement testing. They won't stumble over flaky tests. They won't need to spend hours debugging why a particular XPath won't work. They won't need to wonder which elements they should and shouldn't verify on a page. Any time behaviors change, they can rerun the AI agents to relearn how the app works. Autonomous testing will revolutionize functional software testing by lowering the cost of entry for automation.

Catching the Next Wave

If you are plugged into software testing communities, you’ll hear from multiple testing leaders about their thoughts on the direction of our discipline. You’ll learn about trends, tools, and frameworks. You’ll see new design patterns challenge old ones. Something I want you to think about in the back of your mind is this: How can these things be adapted to autonomous testing? Will these tools and practices complement autonomous testing, or will they be replaced? The wave is coming, and it’s coming soon. Be ready to catch it when it crests.

The post Autonomous Testing: Test Automation’s Next Great Wave appeared first on Automated Visual Testing | Applitools.

]]>
The Best Test Automation Framework Is… https://applitools.com/blog/what-is-the-best-test-automation-framework/ Tue, 19 Oct 2021 10:01:00 +0000 https://applitools.com/?p=31662 Catch a recap of my recent keynote, where I spoke about the context and the criteria required to make any test automation framework the “best.”

The post The Best Test Automation Framework Is… appeared first on Automated Visual Testing | Applitools.

]]>

In this social world, it is very easy to be swayed into believing that some practice is a best practice, or that some automation tool or framework is "the best." When anyone states "this <some_practice> is a best practice," or "this tool <name_of_tool> or framework <name_of_framework> is the best," two things come to mind:

  1. The person is promoting the practice / tool / framework as a "silver bullet" – something that will solve all problems, magically.
  2. The said practice / tool / framework actually worked best for them, in the context of the team adopting it.

So when I hear anyone say "best …," I get suspicious and wonder which category they belong to: "silver bullet" promoters, or knowledgeable folks who have done their study and determined what is working well for them.

Doing the study is extremely important to determine what is good or bad. In the context of Test Automation, there are a lot of parameters that need to be considered before you reach a decision about what tool or framework is going to become “the best tool / framework” for you. I classify these parameters as negotiables and non-negotiables.

I had the privilege of delivering the opening keynote at the recent Future of Testing event focused on Test Automation Frameworks on September 30th 2021. My topic was “The best test automation framework is …”. I spoke about the context, and the non-negotiable and negotiable criteria required to make any test automation framework the “best.”

How to Choose the Best Test Automation Framework…

Understanding the Context

Here are the questions to answer to determine the context:

Negotiable and Non-Negotiable Criteria

Once you understand the context, then apply that information to determine your non-negotiable and negotiable criteria.

Start Evaluating

Now that you understand all the different parameters, here are the steps to get started.

You can see the full mind map I used in my keynote presentation below.

You can also download the PDF version of this mind map from here.

Catch the Keynote Video

To hear more about how to choose the best test automation framework, you can watch the whole video from my keynote presentation here, or by clicking below.

The post The Best Test Automation Framework Is… appeared first on Automated Visual Testing | Applitools.

]]>
Future of Testing: Mobile Recap – All About Mobile Test Automation https://applitools.com/blog/future-of-testing-mobile-all-about-mobile-test-automation/ Fri, 30 Apr 2021 20:09:18 +0000 https://applitools.com/?p=28760 Applitools recently hosted a conference on the future of testing for mobile applications. Check out a recap of the event and watch the recordings.

The post Future of Testing: Mobile Recap – All About Mobile Test Automation appeared first on Automated Visual Testing | Applitools.

]]>

A few weeks ago, Applitools hosted a conference on the future of testing for mobile applications. Almost 4000 people registered for the event, creating a fun and exciting atmosphere in the chat for each session as well as for the live Q&A that followed. There was a lot to learn and it was a great opportunity to engage with the testing community on such an important topic.

The videos are all available now in our on-demand library and can be watched for free. If you want to dive right in, watch them right now, and skip this recap, go ahead – I won't blame you. You can check them all out at the link below.

The Path to Autonomous Testing – Gil Sever

The opening remarks were from Applitools CEO and co-founder Gil Sever. In this ten-minute presentation, Gil delivers a strong primer on what autonomous testing really is – and how machine learning can help assist humans and make testing much, much more effective. Tune in for a glimpse at the autonomous future.

On the Same Wavelength: Adding Radio to Your Testing Toolbox – Jason Huggins

Jason Huggins was the opening keynote speaker at the event, and he gave a fascinating talk on where testing is headed. As Jason says, “testing is getting weird,” and is increasingly about things you can’t even see. He argues that it’s time to move past an understanding of testing as just simulating what can be seen and tapped. What mobile testers are ultimately interested in today is the triggering of radio activity. That’s the essence of how your app truly performs, isn’t it?

Jason is a founder of Selenium, Appium, and Tapster Robotics, so he knows quite a bit about where testing has been and where it’s going. Check out his talk to hear what he has to say.

Appium 2.0: What’s Next – Sai Krishna, Srinivasan Sekar 

Appium is a very popular test automation framework, and the upcoming release of Appium 2.0 is highly anticipated. Sai and Srinivasan are both contributors to the Appium project as well as lead consultants at ThoughtWorks, and in their presentation you’ll find a preview of what’s coming with Appium 2.0. 

For example, today you need to install a large number of drivers when you install the Appium server – even ones you don't need. With Appium 2.0, you can install just the ones you need. Another example has to do with bug fixes: a lot of fixes land in betas, but many people don't install betas and miss out, so with Appium 2.0 the fixes will be attached to individual drivers and rolled out faster. There will be improved docs, and it'll be easier to build your own plugins… the list goes on.

Catch Sai and Srini’s presentation to learn all about it. And if you’re ready to try it out for yourself, read our blog post on Getting Started with Appium 2.0 Beta.

Coffee Break

You might think there's not much to recap during a coffee break, but during the first coffee break of the conference the brand-new Test Automation Cookbook was introduced to the world. This is a collection of bite-sized recipes you can use to answer a number of specific and common questions you may have about test automation. This "commercial break" was very well received by the audience.

Mobile App Testing & Release Strategy – Anand Bagmar

Your mindset needs to be mobile-first. That’s how Anand, a Quality Evangelist and Solution Architect at Applitools, opened his talk. He followed that up with an overview of the differences between web and mobile testing/releasing, including mobile test automation on a local/cloud device lab. Anand explains that even after all our hard work in continuous testing, sometimes visual tests can still come down to a game of manual “spot the difference.” 

Visual AI is a difference-maker there, as Anand explains. He talks about the difference between Visual AI and pixel comparisons and how you can apply it yourselves. Take a look at this talk for a great overview of mobile testing and releasing.

Next Generation Mobile Testing with Visual AI – Adam Carmi

Adam Carmi, co-founder and CTO of Applitools, picked up where Anand left off with a deeper dive into Visual AI. Adam walks through a live demo of Applitools Eyes so you can see it for yourself. He talks about the huge code reduction when you use Eyes – up to 80% – which also gives you increased coverage and no validation logic to maintain. He backs this up with hard data from a hackathon, highlighting the fact that many testers were completely new to Applitools and were able to pick it up quickly and get some really strong results.

Adam’s talk was full of examples of how Eyes can work in the kinds of scenarios you may be wondering about, including how Eyes deals with different mobile form factors and how it batches together similar errors that can be approved/rejected together. Check it out.

Expert Panel: State of the Mobile Frameworks

This panel gathered together three mobile development experts for a robust discussion of what life is like for developers using different mobile frameworks. Eran Kinsbruner, DevOps Chief Evangelist and Sr. Director, Product Marketing at Perforce Software; Eugene Berezin, iOS Developer at Nordstrom; and Moataz Nabil, Mobile Developer Advocate at Bitrise, shared a lot of great information about the frameworks they use, which included Flutter, Appium, KIF, EarlGrey, and of course XCUITest and Espresso.

The panel was moderated by Justin Ison, Sr. Software Engineer at Applitools. Justin led the panel through a conversation around framework limitations, how to make apps testable, and what could make mobile testing easier. You can check out the whole discussion below. And if you're curious for a quick comparison, be sure to take a look at a recent writeup on our blog that tackles Appium vs Espresso vs XCUITest.

The Future of Multi-Platform Integration Testing – Bijoya Chatterjee, Rajnikant Ambalpady

Bijoya and Rajnikant work on testing for the new Sony PlayStation 5, giving them a unique outlook on what it takes to deliver strong integration testing across platforms. In this talk, they describe the challenge of having many standalone apps that require automated testing when there aren't any off-the-shelf tools built to test a PlayStation! They ended up customizing Appium and making use of many other tools in their stack (this might be a good place to mention that Applitools is part of it, which I did not know until I heard Bijoya tell the audience).

They cover the challenges of testing numerous standalone components within apps that must talk to each other, as well as testing across platforms from console to web to mobile. For a discussion of the pros and cons of end-to-end integration testing and much more, be sure to check out this talk.

Let the Robots Test Your Flutter App – Paulina Grigonis, Jorge Coca

It's not easy to organize code so that it's A) maintainable and customizable by development teams, and B) still easily understood and readable by business stakeholders. In this presentation, Paulina and Jorge, who are respectively business and technical experts at Very Good Ventures, walk us through a methodology they call the Robot Pattern. This pattern separates the "What" from the "How" of testing and can result in some pretty spiffy code that is easy to read even for a non-technical user.

Want to learn how to implement this pattern in your own development? Check out their presentation below.

Your Tests Lack Vision: Adding Eyes to Your Mobile Tests – Angie Jones

As humans, we can only pay attention to so much at one time – and that means we miss things, even in plain sight. The closing keynote from Angie Jones makes this clear from the first moments with a great video clip. I won't spoil it, but it reminded me a lot of another video, so after you watch Angie's talk, go ahead and take a look at this one too if you want to laugh at yourself.

After helping us all understand our blind spots, Angie provides a lot of great examples of how visual bugs slip through traditional testing processes. She then walks us through a demo of a new app and shows us how Applitools Eyes can help us make sure it's visually perfect. In the live Q&A, Angie also answers a number of questions around handling multiple viewport sizes, scrolling, testing variations like light and dark mode, and dealing with pop-up notifications and alerts.

Angie also shared her inspiration behind launching the automation cookbook (hint: it's about making life easier for fellow testers). If you haven't taken a look at it yet, be sure to check out the automation cookbook here.

You can see Angie’s full talk below.

Thank You!

And with that (and a few closing remarks from host Joe Colantonio of TestGuild) the Future of Testing Mobile event ended. We want to extend a huge thanks to everyone involved in making this event such a success, from Joe for his incredible hosting to the amazing speakers for sharing their insights to every attendee for adding your voices and presence. The event could not happen without all of you.

If you liked these videos, you might also like our videos from our previous Future of Testing events too – all free to watch. Happy testing!

The post Future of Testing: Mobile Recap – All About Mobile Test Automation appeared first on Automated Visual Testing | Applitools.

]]>