AI Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/ai/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

Should We Fear AI in Test Automation?
https://applitools.com/blog/should-we-fear-ai-in-test-automation/ | Mon, 04 Dec 2023
Richard Bradshaw explores fears around the use of AI in test automation shared during his session—The Fear Factor—at Future of Testing.


At the recent Future of Testing: AI in Automation event hosted by Applitools, I ran a session called ‘The Fear Factor’ where we safely and openly discussed some of our fears around the use of AI in test automation. At this event, we heard from many thought leaders and experts in this domain who shared their experiences and visions for the future. AI in test automation is already here, and its presence in test automation tooling will only increase in the very near future, but should we fear it or embrace it?

During my session, I asked the attendees three questions:

  • Do you have any fears about the use of AI in testing?
  • In one word, describe your feelings when you think about AI and testing.
  • If you do have fears about the use of AI in testing, describe them.

Do you have any fears about the use of AI in testing?

Where do you sit?

I’m in the Yes camp, and let me try to explain why.

Fear can mean many things, but one of them is the threat of harm. It’s that which concerns me in the software testing space. But that harm will only happen if teams/companies believe that AI alone can do a good enough job. If we start to see companies blindly trusting AI tools for all their testing efforts, I believe we’ll see many critical issues in production. It’s not that I don’t believe AI is capable of doing great testing—it’s more the fact that many testers struggle to explain their testing, so to have good enough data to train such a model feels distant to me. Of course, not all testing is equal, and I fully expect to see many AI-based tools doing some of the low-hanging fruit testing for us.

In one word, describe your feelings when you think about AI and testing.

It’s hard to disagree with the results from this question—if I had to pick two words myself, I would go with ‘excited’ and ‘skeptical.’ I’m excited because we seem to see new developments and tools each week. On top of that, we are starting to see AI-based tooling emerge outside of the traditional automation space, and that really pleases me. Combine that with the developments we are seeing in the automation space, such as autonomous testing, and the future tooling for testing looks rather exciting.

That said, though, I’m a tester, so I’m skeptical of most things. I’ve seen several testing tools now that are making some big promises around the use of AI, and unfortunately, several that are talking about replacing or needing fewer testers. I’m very skeptical of such claims. If we pause and look across the whole of the technology industry, the most impactful use of AI thus far is in assisting people. Various GPTs help generate all sorts of artifacts, such as code, copy, and images. Sometimes the output is good enough on its own, but the majority of the time it is helping a human be more efficient—this use of AI, and such messaging, excites me.

If you do have fears about the use of AI in testing, describe them here.

We got lots of responses to this question, but I’m going to summarise and elaborate on four of them:

  • Job security
  • Learning curve
  • Reliability & security
  • How it looks

Job Security

Several attendees shared that they were concerned about AI replacing their jobs. Personally, I can’t see this happening. We had the same concern with test automation, and that never really materialized. Those automated tests don’t maintain themselves, or write themselves, or share the results themselves. The direction shared by Angie Jones in her talk Where Is My Flying Car?! Test Automation in the Space Age, and by Tariq King in his talk Automating Quality: A Vision Beyond AI for Testing, is AI that assists the human, giving them superpowers. That’s the future I hope, and believe, we’ll see: one where we are able to do our testing a lot more efficiently by having AI assist us. Hopefully, this means we can release even quicker, with higher quality for our customers.

Another concern shared was about skills that we’ve spent years and a lot of effort learning suddenly being replaced by AI, or made significantly easier by it. I think this is a valid concern but also inevitable. We’ve already seen AI bring a significant benefit to developers with tools like GitHub Copilot. However, I’ve got a lot of experience with Copilot, and it only really helps when you know what to ask for—this is the same with GPTs. Therefore, I think the core skills of a tester will remain crucial, and I can’t see AI replacing those.

Learning Curve

If we are going to be adding all these fantastic AI tools into our tool belts, I feel it’s going to be important we all have a basic understanding of AI. This concern was shared by the attendees. For me, if I’m going to be trusting a tool to do testing for me or generating test artefacts for me, I definitely want that basic understanding. So, that poses the question, where are we going to get this knowledge from?

On the flip side of this, what if we become over-reliant on these new AI tools? A concern shared by attendees was that the next generation of testers might not have some of the core skills we consider important today. Testers are known for being excellent thinkers and practitioners of critical thinking. If the AI tools are doing all this thinking for us, we run the risk of those skills losing focus and no longer being taught. This could lead to us being over-reliant on such tools, and also to the tools biasing the testing that we do. But given that the community is focusing on this already, I feel it’s something we can plan to mitigate and ensure doesn’t happen.

Reliability & Security

Data, data, data. A lot of fears were shared over the use and collection of data. The majority of us work on applications where data, security, and integrity are critical. I absolutely share this concern. I’m no AI expert, but the best AI tools I’ve used thus far are ones that are contextual to my domain/application, and to do that, we need to train them on our data. That could lead to data bleeding and the exposure of private data, and that is a huge challenge I think the AI space has yet to solve.

One of the huge benefits of AI tooling is that it’s always learning and, hopefully, improving. But that brings a new challenge to testing. Usually, when we create an automated test, we are codifying knowledge and behavior to create something deterministic: we want it to do the same thing over and over again. This provides consistent feedback. However, an AI-based tool won’t always do the same thing over and over again—it will try to apply its intelligence, and here’s where the reliability issues come in. What it tested last week may not be the same this week, yet it may give us the same indicator. This, for me, emphasizes the importance of basic AI knowledge, but also of using these tools as an assistant to our human skills and judgment.

How It Looks

Several attendees shared concerns about how these AI tools are going to look. Are they going to be a completely black box, where we enter a URL or upload an app and just click Go? Then the tool tells us pass or fail, or perhaps it just goes and logs the bugs for us. I don’t think so. As per Angie’s and Tariq’s talks mentioned before, I think it’s more likely these tools will focus on assistance.

These tools will be incredibly powerful and capable of doing a lot of testing very quickly. However, what they’ll struggle to do is to put all the information they find into context. That’s why I like the idea of assistance, a bunch of AI robots going off and collecting information for me. It’s then up to me to process all that information and put it into the context of the product. The best AI tool is going to be the one that makes it as easy as possible to process the masses of information these tools are going to return.

Imagine you point an AI bot at your website, and within minutes, it’s reporting accessibility issues to you, performance issues, broken links, broken buttons, layout issues, and much more. It’s going to be imperative that we can process that information as quickly as possible to ensure these tools continue to support us and don’t drown us in information.

Visit the Future of Testing: AI in Automation archive

In summary, AI is here, and more is coming. These are very exciting times in the software testing tooling space, and I’m really looking forward to playing with more new tools. I think we need to be curious with these new tools, try them, and see what sticks. The more tools we have in our tool belts, the more options we have to solve our increasingly complex testing challenges.

Future of Testing: AI in Automation Recap
https://applitools.com/blog/future-of-testing-ai-in-automation-recap/ | Tue, 28 Nov 2023
Recap of the Future of Testing: AI in Automation conference. Watch the on-demand sessions to learn actionable steps to implement AI in your software testing strategy, key considerations around ethics and philosophical questions, the importance of quality and security, and much more.


The latest edition of the Future of Testing events, held on November 7, 2023, was nothing short of inspiring and thought-provoking! Focused on AI in Automation, attendees learned how to leverage AI in software testing with top industry leaders like Angie Jones, Tariq King, Simon Stewart, and many more. All of the sessions are available now on-demand, and below, we take a look back at these groundbreaking sessions to give you a sneak peek of what to expect before you watch.

Opening Remarks

Joe Colantonio from TestGuild and Dave Piacente from Applitools set the stage for a thought-provoking discussion on reimagining test automation with AI. As technology continues to evolve at a rapid pace, it’s important for software testing professionals to adapt and embrace new tools and techniques. Joe and Dave encouraged attendees to explore the potential of AI in test automation and how it can enhance their current processes. They also touched upon the challenges faced by traditional test automation methods and how AI-powered solutions can help overcome them.

Dave shared one of our latest updates – the integration of Applitools Eyes with Preflight! Learn more about Preflight.

Keynote—Reimagining Test Automation with AI by Anand Bagmar

In this opening session, Anand Bagmar explored how to reimagine your test automation strategies with AI at each stage of the test automation life cycle, including a live demo showcasing the power of AI in test automation with Applitools.

Anand first introduced the transition from Waterfall to Agile software delivery practices, and while we can’t imagine going back to a Waterfall way of working, he addressed the challenges Agile brings to the software testing life cycle. Each iteration brings more room for error across analysis, maintenance, and validation of tests. This is why testers should turn toward AI-powered test automation, with the help of tools like Applitools, to help ease the pain of Agile testing.

The session is aimed at helping testers understand the importance of leveraging AI technology for successful test automation, as well as empowering them to become more effective in their roles. Watch now.

From Technical Debt to Technical Capital by Denali Lumma

In this session, Denali Lumma from Modular dove into the concept of technical debt and proposed a new perspective on how we view it: technical capital. She walked attendees through key mathematical concepts that help calculate technical capital, as well as examples comparing PyTorch vs. TensorFlow, MySQL vs. Postgres, frameworks vs. code editors, and more.

Attendees gained insights into calculating technical capital and how it can impact the valuation of a company. Watch now.

Automating Quality: A Vision Beyond AI for Testing by Tariq King

Tariq King of EPAM Systems took attendees on a journey through the evolution of software testing and how it has been impacted by generative AI. He shared his vision for the future of automated quality, one that looks beyond just AI to also prioritize creativity and experimentation. Tariq emphasized the need for quality and not just using AI to “go faster.” The more quality you have, the more productive you will be.

Tariq also dove into the ethical implications of using AI for testing and how it can be used for good or evil. Watch the full session.

Leveraging ChatGPT with Cypress for API Testing: Hands-On Techniques by Anna Patterson

In this session, Anna Patterson of EVERFI explored practical techniques and provided hands-on examples of how to harness the combined power of Cypress and ChatGPT to create robust API tests for your applications.

Anna guided us through writing descriptive and clear test prompts using HTML status codes, with a pet store website as an example. She showed in real-time how meaningful prompts in ChatGPT can help you create a solid API test suite, while also considering the security requirements of your company. Watch now.

PANEL—Testing in the AI Era: Opportunities, Hurdles, and the Evolving Role of Engineers

Joe Colantonio, Test Guild • Janna Loeffler, mParticle • Dave Piacente, Applitools • Stephen Williams, Accenture

As the use of AI in software development continues to grow, it is important for engineers and testers to stay ahead of the curve. In this panel discussion, led by Joe Colantonio from TestGuild, panelists Janna Loeffler from mParticle, Dave Piacente from Applitools, and Stephen Williams from Accenture came together to discuss the current state of AI implementation and its impact on testing.

They talked about how AI is still in its early stages of adoption and why there may always be some level of distrust in AI technology. The panel emphasized the importance of first understanding why you might implement AI in your testing strategy so that you can determine what the technology will help to solve vs. jumping in right away. Many more incredible takes and insights were shared in this interactive session! Watch now.

The Fear Factor with Richard Bradshaw

The Friendly Tester, Richard Bradshaw, addressed the common fears about AI and automation in testing. Attendees heard Richard’s open and honest discussion on the challenges and concerns surrounding AI and automation in testing. Ultimately, he calmed many fears around AI and gave attendees a better understanding of how they can begin to use it in their organization and to their own advantage. Watch now.

Tests Too Slow? Rethink CI! by Simon Stewart

Simon Stewart from the Selenium Project discussed the latest updates on how to speed up your testing process and improve the reliability of your CI runs. He shared insights into the challenges and tradeoffs involved in this process, as well as what is to come with Selenium and Bazel.
Attendees learned how to rethink their CI approach and use these tools to get faster feedback and more reliable testing results. Watch now.

Revolutionizing Testing: Empowering Manual Testers with AI-Driven Automation by Dmitry Vinnik

Dmitry Vinnik explored how AI-driven automation is revolutionizing the testing process for manual testers. He showed how Applitools’ Visual AI and Preflight help streamline test maintenance and reduce the need for coding.

Dmitry shared the importance of test maintenance and no-code solutions for AI testing, and gave a first-hand look at Applitools Preflight. Watch this session to better understand how AI is transforming testing and empowering manual testers to become more effective in their roles. Watch the full session.

Keynote—Where Is My Flying Car?! Test Automation in the Space Age by Angie Jones

In her closing keynote, Angie Jones of Block took us on a trip into the future to see how science fiction has influenced the technology we have today. The Jetsons predicted many futuristic inventions, such as robots, holograms, 3D printing, smart devices, and drones. Angie explored these predictions and showed how far we have come regarding automation and technology in the testing space.

As technology continues to evolve, it is important for testers to stay updated and adapt their strategies accordingly. Angie dove into the exciting world of tech innovation and imagined the future for test automation in the space age. Watch now.


Visit the full Future of Testing: AI in Automation on-demand archive to watch now and learn actionable steps to implement AI in your software testing strategy, key considerations before you start, ideas around ethics and philosophy, the importance of quality and security, and much more.

AI and The Future of Test Automation with Adam Carmi | A Dave-reloper’s Take
https://applitools.com/blog/ai-and-the-future-of-test-automation-with-adam-carmi/ | Mon, 16 Oct 2023
We have a lot of great webinars and virtual events here at Applitools. I’m hoping posts like this give you a high-level summary of the key points with plenty of...


We have a lot of great webinars and virtual events here at Applitools. I’m hoping posts like this give you a high-level summary of the key points with plenty of room for you to form your own impressions.

Dave Piacente

Curious if the software robots are here to take our jobs? Or maybe you’re not a fan of the AI hype train? During a recent session, The Future of AI-Based Test Automation, CTO Adam Carmi discussed—in practical terms—the current and future state of AI-based test automation, why it matters, and what you can do today to level up your automation practice.

  • He describes how AI can be used to overcome common everyday challenges in end-to-end test automation, how the need for skilled testers will only increase, and how AI-based tooling can help supercharge any automated testing practice.
  • He also puts his money where his mouth is by demonstrating how the never-ending maintenance overhead of tests can be mitigated using AI-driven tooling that already exists today, with concrete examples (e.g., visual validation and self-healing locators).
  • He also discusses the role that AI will play in the future, including the development of autonomous testing platforms. These platforms will be able to automatically explore applications, add validations, and fill gaps in test coverage. (Spoiler alert: Applitools is building one, and Adam shows a bit of a teaser for it using a real-time in-browser REPL to automate the browser which uses natural language similar to ChatGPT.)

You can watch the full recording and find the session materials here, and I’ve included a quick breakdown with timestamps for ease of reference.

  • Challenges with automating end-to-end tests using traditional approaches (02:34-10:22)
  • How AI can be used to overcome these challenges (10:23-44:56)
  • The role of AI in the future of test automation (e.g., autonomous testing) (44:57-58:56)
  • The role of testers in the future (58:57-1:01:47)
  • Q&A session with the speaker (1:01:48-1:12:30)

Want to see more? Don’t miss Future of Testing: AI in Automation.

Unlocking the Power of ChatGPT and AI in Test Automation Q&A
https://applitools.com/blog/chatgpt-and-ai-in-test-automation-q-and-a/ | Thu, 20 Apr 2023
Last week, Applitools hosted Unlocking the Power of ChatGPT and AI in Test Automation: Next Steps, where I explored how artificial intelligence, specifically ChatGPT, can revolutionize the field of test...


Last week, Applitools hosted Unlocking the Power of ChatGPT and AI in Test Automation: Next Steps, where I explored how artificial intelligence, specifically ChatGPT, can revolutionize the field of test automation. In this article, I’ll share the audience Q&A as well as some of the results of the audience polls. Be sure to read my previous article, where I summarized the key takeaways from the webinar. You can also find the full recording, session materials, and more in our event archive.

Audience Q&A

Our audience asked various questions about data and intellectual property when using AI, and about adding AI into their own test automation processes.

Intellectual property when using ChatGPT

Question: To avoid disclosing company intellectual property to ChatGPT, is it not better to build a “private” ChatGPT / large language model to use and train for test automation inside the company while securing privacy?

My response: The way a lot of organizations are proceeding is setting up private ChatGPT-like infrastructure to get the value of AI. I think that’s a good way to proceed, at least for now.

Data privacy when using ChatGPT

Question: What do you think about feeding commercial data (requirements, code, etc.) to ChatGPT and/or the OpenAI API (e.g., gpt-3.5-turbo) in terms of data privacy, given recent privacy issues like the Samsung incident, the exposure of ChatGPT chats, and so forth?

My response: Feeding public data is okay, because it’s out in the public space anyway, but commercial data could be public or it could be private, and that could become an issue. The problem is we do not understand enough about how ChatGPT is using the data or the questions that we are asking it. It is constantly learning, so if you feed it a very unique type of question that it has never come across before, the algorithm is intelligent enough to learn from that. It might give you the wrong answer, but it is going to learn based on your follow-up questions, and it is going to use that information to answer someone else’s similar question.

Complying with data regulations

Question: How can we ensure that AI-driven test automation tools maintain compliance with data privacy regulations like GDPR and CCPA during the testing process?

My response: It’s a tough question. I don’t know how we can ensure that, but if you are going to use any AI tool, you must make sure you are asking very focused, specific questions that don’t disclose any confidential information. For example, in my demo, I had a piece of code pointing to a website, and I asked a very specific question about how to implement it. The solution could draw on complex, freely available algorithms or anything else, but you take it, make it your own, and then implement it in your organization. That might be safer than disclosing anything more. This is a very new area right now. It’s better to err on the side of caution.

Adding AI into the test automation process

Question: Any suggestions on how to embed ChatGPT/AI into automation testing efforts as a process more than individual benefit?

My response: I unfortunately do not have an answer to this yet. It is something that needs to be explored and figured out. One thing I will add is that even though it may be similar to many others, each product is different. The processes and tech stacks are going to vary for all these types of products you use for testing and automation, so one solution is not going to fit everyone. Auto-generated code will go up to a certain level, but at least as of now, the human mind is still very essential to use it correctly. So it’s not going to solve your problems; it is just going to make solving them easier. The examples I showed are ways to make it easier in your own case.

Using AI for API and NFRs

Question: How effective would AI be for API and NFR?

My response: I could ask a question to give me a performance test implementation approach for the Amazon website, and it gives me a performance test strategy. If I ask it a question to give me an implementation detail of what tool I should use, it is probably going to suggest a few tools. If I ask it to build an initial script for automating this performance test, it is probably going to do that for me as well. It all depends on the questions you are asking to proceed from there, and I’m sure you’ll get good insights for NFRs.

Using AI in a dedicated private cloud instance

Question: Our organization is very particular about intellectual property protection, and they might deny us from using third-party cloud tools. What solution is there for this?

My response: Applitools uses the public cloud. I use a public cloud for my learning, training, and demos, but a lot of our customers actually use the dedicated cloud instance, which is hosted only for them. Only they have access to it, so that takes care of the security concerns that might be there. We also work with our customers from compliance and security perspectives to ensure that all their questions are answered and that everything conforms to their standards.

Using AI for mobile test automation

Question: Do you think AI can improve quality of life for automation engineers working on mobile app testing too? Or mostly web and API?

My response: Yes, it works for mobile. It works for anything that you want. You just have to try it out and be specific with your questions. What I learned from using ChatGPT is that you need to learn the art of asking the questions. It is very important in any communication, but now it is becoming very important in communicating with tools as well to get you the appropriate responses.

Audience poll results

In the live webinar, the audience was asked, “Given the privacy concerns, how comfortable are you using AI tools for automation?” Of 105 votes, over half of the respondents were somewhat or very comfortable with using AI tools for automation.

  • Very comfortable: 16.95%
  • Somewhat comfortable: 33.05%
  • Not sure: 24.59%
  • Somewhat uncomfortable: 14.41%

Next steps

You can take advantage of AI today by using Applitools to test web apps, mobile apps, desktop apps, PDFs, screenshots, and more. Applitools offers SDKs that support several popular testing frameworks in multiple languages, and these SDKs install directly into your projects for seamless integration. You can try it yourself by claiming a free account or requesting a demo.

Check out our events page to register for an upcoming session with our CTO and the inventor of Visual AI, Adam Carmi. For our clients and friends in the Asia-Pacific region, be sure to register to attend our upcoming encore presentation of Unlocking the Power of ChatGPT and AI in Test Automation happening on April 26th.

Unlocking the Power of ChatGPT and AI in Test Automation Key Takeaways
https://applitools.com/blog/chatgpt-and-ai-in-test-automation-key-takeaways/ | Tue, 18 Apr 2023
Editor’s note: This article was written with the support of ChatGPT. Last week, Applitools hosted Unlocking the Power of ChatGPT and AI in Test Automation: Next Steps, when I discussed...


Editor’s note: This article was written with the support of ChatGPT.

Last week, Applitools hosted Unlocking the Power of ChatGPT and AI in Test Automation: Next Steps, when I discussed how artificial intelligence – specifically ChatGPT – can impact the field of test automation. The webinar delved into the various applications of AI in test automation, the benefits it brings, and the best practices to follow for successful implementation. With the ever-growing need for efficient and effective testing, the webinar is a must-watch for anyone looking to stay ahead of the curve in software testing. This blog article recaps the key takeaways from the webinar. Also, you can find the full recording, session materials, and more in our event archive.

Takeaways from the previous webinar

I started with a recap of the takeaways from the previous webinar, Unlocking the Power of ChatGPT and AI in Testing: A Real-World Look. The webinar focused on two main aspects from a testing perspective: testing approach mindset (strategy, design, automation, and execution) and automation perspective. ChatGPT was able to help with automation by guiding me to automate test cases more quickly and effectively. ChatGPT was also able to provide solutions to programming problems, such as giving a solution to a problem statement and refactoring code. However, there were limitations to ChatGPT’s ability to provide answers, particularly in terms of test execution, and some challenges when working with large blobs of code.
If you didn’t catch the previous webinar, you can still watch it on demand.

What’s new in AI since the previous webinar

Since we hosted the previous webinar, there have been many updates in the AI chatbot space. A few key updates we covered in the webinar include:

  • ChatGPT has become accessible on laptops, phones, and Raspberry Pi, and can be run on your own devices.
  • Google Bard was released, but it is limited to the English language, cannot continue conversations, and cannot help with coding.
  • GPT-4 was released, which accepts image and text inputs and provides text outputs.
  • ChatGPT Plus was introduced, offering better reasoning, faster responses, and higher availability to users.
  • Plugins can now be built on top of ChatGPT, opening up new and powerful ways of interaction.

During the webinar, I gave a live demo of some of ChatGPT’s updates, where it was able to provide code implementation and generate unit tests for a programming question.

Using AI to address common challenges in test automation

Next, I discussed the actual challenges in automation and how we can leverage AI tools to get better results. Those challenges include:

  • Slow and flaky test execution
  • Sub-optimal, inefficient automation
  • Incorrect, non-contextual test data

Using AI to address flakiness in test automation

This section focused specifically on flaky tests related to UI or locator changes, which can be identified using consistent logging and reporting. I advised against using a retry listener to handle flaky tests and instead suggested identifying and fixing the root cause. I then demonstrated an example of a test failing due to a locator change and discussed ways to solve this challenge.

Read our step-by-step tutorial to learn how to use visual AI locators to target anything you need to test in your application and how it can help you create tests that are more resilient and robust.
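
To make the locator problem concrete, here is a generic Selenium-Java sketch of the kind of brittleness discussed (an illustration, not code from the webinar; the page and selectors are hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LocatorBrittlenessExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/login"); // hypothetical page

        // Brittle: a positional XPath breaks whenever the DOM structure shifts.
        // driver.findElement(By.xpath("/html/body/div[2]/div/form/input[1]")).click();

        // More resilient: tied to a stable, test-dedicated attribute, so cosmetic
        // or structural changes are far less likely to invalidate the locator.
        driver.findElement(By.cssSelector("[data-test-id='login-button']")).click();

        driver.quit();
    }
}
```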

Using AI to address sub-optimal or inefficient automation

Next, I discussed sub-optimal or inefficient automation and how to improve it. I used GitHub Copilot to generate a new test and auto-generate code. I explained how to integrate GitHub Copilot and JetBrains Aqua with IntelliJ and how to use Aqua to find locators for web elements. Then, I showed how to implement the code in the IDE and interact with the application to perform the automation.

Using AI to address incorrect, non-contextual test data

Next, I discussed the importance of test data in automation testing and related challenges. There are many libraries available for generating test data that can work in the context of the application. Aqua can generate test data by right-clicking and selecting the type of text to generate. Copilot can generate data for “send keys” commands automatically. It’s important to have a good test data strategy to avoid limitations and increase the value of automation testing.
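
As a quick sketch of what library-based test data generation can look like, here is JavaFaker, one example of such a library (my choice for illustration; the webinar demonstrated Aqua and Copilot instead, and the names below are hypothetical):

```java
import com.github.javafaker.Faker;

public class TestDataExample {
    public static void main(String[] args) {
        // Requires the com.github.javafaker:javafaker dependency.
        Faker faker = new Faker();

        // Realistic, contextual values instead of "test123" everywhere.
        String name = faker.name().fullName();
        String email = faker.internet().emailAddress();
        String street = faker.address().streetAddress();

        System.out.printf("Registering %s <%s> at %s%n", name, email, street);
        // e.g., feed a value into a Selenium "send keys" call:
        // driver.findElement(By.id("email")).sendKeys(email);
    }
}
```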

Potential pitfalls of AI

AI is not a magic solution and requires conscious and contextual use. Over-reliance on AI can lead to incomplete knowledge and lack of understanding of generated code or tests. AI may replace certain jobs, but individuals can leverage AI tools to improve their work and make it an asset instead of a liability.

Data privacy is also a major concern with AI use, as accidental leaks of proprietary information can occur. And AI decision-making can be problematic if it does not make the user think critically and understand the reasoning behind the decisions. Countries and organizations are starting to ban the use of AI tools like ChatGPT due to concerns over data privacy and accidental leaks.

Conclusion

At first, I was skeptical about AI in automation, but that skepticism has reduced significantly. You must embrace technology, or you risk being left behind. Avoid manual repetition, and leverage automation tools to make work faster and more interesting. The automation cocktail (using different tools in combination) is the way forward.

Focus on ROI and value generation, and make wise choices when building, buying, or reusing tools. Being agile is important, not just following a methodology or procedure. Learning, evolving, iterating, communicating, and collaborating are key to staying agile. Upskilling and being creative and innovative are important for individuals and teams. Completing the same amount of work in a shorter time leads to learning, creativity, and innovation.

Be sure to read my next article, where I answer questions from the audience Q&A. If you have any other questions, be sure to reach out on Twitter or LinkedIn.

AI-Generated Test Automation with ChatGPT
https://applitools.com/blog/ai-generated-test-automation-with-chatgpt/ | Mon, 06 Feb 2023
The AI promise: AI is not a new technology. Recently, the field has made huge advancements. Currently it seems like AI technology is mostly about using ML (machine learning) algorithms...


The AI promise

AI is not a new technology. Recently, the field has made huge advancements.

Currently it seems like AI technology is mostly about using ML (machine learning) algorithms to train models with large volumes of data and use these trained computational models to make predictions or generate some outcomes.

From a testing and test automation point-of-view, the question in my mind still remains: Will AI be able to automatically generate and update test cases? Can it find contextual defects in the product? Can it eventually inspect code and the test coverage to prevent defects getting into production?

ICYMI: Read my recent article on this topic, AI: The magical helping hand in testing.

The promising future

Recently I came across a lot of interesting buzz created by ChatGPT, and I had to try it out. 

Given I am a curious tester at heart, I signed up for it on https://chat.openai.com/ and tried to see the kind of responses generated by ChatGPT for a specific use case.

Live demo: Watch me demonstrate how to use ChatGPT for testing and development in the on-demand recording of Unlocking the Power of ChatGPT and AI in Testing: A Real-World Look.

What is the role of AI in testing?

I started with a simple question to ChatGPT. The answer was interesting, but it was nothing different from what I already knew.

So I thought of asking some more specific questions.

Create a test strategy using AI

I asked ChatGPT to create a test strategy for testing and automating the Amazon India shopping app and website.

I was blown away by the first response. Though I can’t use this directly, as it is quite generic, the answer was actually pretty good as a starting point.

I was hooked, and I had to keep going.

What test cases should I automate for my product?

Example: What test cases should I automate for Amazon India?

Like the test strategy, these are not at the level of detail I am looking for. But it’s a great start.

That said, at the top of the test automation pyramid, I do not want to automate test cases, but I want to automate test scenarios. So, I asked ChatGPT about the important scenarios I should automate.

What test scenarios should I automate for my product?

Though not the specific test scenarios I was expecting, there is a clear difference between the identified test cases and the test scenarios.

Code generation: the first auto-generated test

I asked ChatGPT to give me the Selenium-Java automation code to automate the “search and add a One Plus phone to cart scenario” for Amazon India website.

ChatGPT not only corrected my question, but also gave me the exact code to implement the above scenario.

I updated the question to generate the test using Applitools Visual AI, and voila, here was the solution:
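
The generated code itself did not survive in this page, so below is a sketch in the spirit of what ChatGPT produced, assuming Selenium-Java plus the Applitools Eyes Java SDK (the selectors are illustrative guesses, and note that, as discussed next, there are no waits):

```java
import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class AmazonSearchAddToCartTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

        try {
            eyes.open(driver, "Amazon India", "Search and add OnePlus phone to cart");
            driver.get("https://www.amazon.in");

            // Search for the phone.
            driver.findElement(By.id("twotabsearchtextbox")).sendKeys("OnePlus phone", Keys.ENTER);

            // Open the first result and add it to the cart. Note: no waits, so this
            // assumes each page is fully rendered as soon as the action completes.
            driver.findElement(By.cssSelector("div.s-result-item h2 a")).click();
            driver.findElement(By.id("add-to-cart-button")).click();

            eyes.checkWindow("Cart page"); // single visual checkpoint
            eyes.close();
        } finally {
            driver.quit();
            eyes.abortIfNotClosed();
        }
    }
}
```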

Is this code usable?

Not really!

If you run the test, there is a good chance it would fail. The reason is that the generated code assumes the page is “rendered” as soon as the actions are done. To make this code usable and consistent in execution, we need to update it with realistic and intelligent waits at relevant places to cater to the rendering and loading time required by the browser and, of course, your environment.

NOTE: Intelligent waits are important, as opposed to hard waits, to optimize the execution time. Also, being realistic in your wait durations is very important: you do not want to wait for a particular state of the application for longer than what is expected to happen in production.
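
A minimal sketch of such an intelligent wait in Selenium-Java (the locator is hypothetical):

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class IntelligentWaitExample {
    // Waits up to 10 seconds for the element to become clickable, but proceeds
    // immediately once it is -- unlike a hard Thread.sleep(10000), which always
    // burns the full 10 seconds even when the page is ready in 1.
    static void clickWhenReady(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
    }
}
```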

Also, typically in our test frameworks, we have various abstraction layers and utilities to manage different functionalities and responsibilities. But we can use snippets of this generated code and easily plug it in the right places in our frameworks. So it is still very valuable.

I also tried the same for some internal sites – i.e., sites not available to the general public. The tests were still created, but I am not sure about their validity.

Test data generation

Lastly, I asked specific questions around test data – an aspect unfortunately ignored or forgotten until the last minute by most teams.

Example: What test data do I need to automate test scenarios for my product?

Again, I was blown away by the detailed test data characteristics that were given as a response. It was specific about the relevant data required for product, user, payment, and order to automate the important test scenarios that were highlighted in the earlier response. 

I tried to trick ChatGPT to “search and add a product to cart” – but I didn’t specify what product.

It called my bluff and gave very clear instructions that I should provide valid test data.

Try it yourself: Can ChatGPT accurately translate your tests from one framework to another? Try it out and let us know how it went @Applitools.

Other examples

Generating content

Instead of thinking and typing this blog post, I could have used ChatGPT to generate the post for me as well.

I didn’t even have time to step away to get a cup of coffee while this was being generated!

Planning a vacation

Another interesting use case of ChatGPT – getting help to plan a vacation!

NOTE: Be sure to work with your organization’s security team and @OpenAI before implementing ChatGPT into the organization’s official processes.

What’s next? What does it mean for using AI in testing?

In my examples, the IDEs are very powerful and make it very easy for programmers and automation engineers to write code in a better and more efficient way.

36.8% of our audience participants answered that they were worried about AI replacing their jobs. Others see AI as a tool to improve their skills and efficiency in their job.

Tools and technologies like ChatGPT will offer more assistance to such roles. Individuals can focus on logic and use cases. I have doubts about the ability of these tools to provide answers based on contextual information. However, given very focused and precise questions, these tools can provide information to help implement the same in an efficient and optimal fashion.

I am hooked!

To learn more, watch the on-demand recording of Unlocking the Power of ChatGPT and AI in Testing: A Real-World Look. Look out for a follow-on event in early April, where I will go deeper into specific use cases and how ChatGPT and its use in testing are evolving. Sign up for an email notification when registration opens.

AI: The Magical Helping Hand in Testing
https://applitools.com/blog/ai-the-magical-helping-hand-in-testing/ | Tue, 24 Jan 2023
The AI promise: AI as a technology is not new. There are huge advancements made in the recent past in this field. Currently it seems like AI technology is mostly...


The AI promise

AI as a technology is not new, but huge advancements have been made in this field in the recent past.

Currently it seems like AI technology is mostly about using ML (machine learning) algorithms to train models with large volumes of data and use these trained computational models to make predictions or generate some outcomes.

From a testing and test automation point-of-view, the question in my mind still remains: Will AI be able to automatically generate and update test cases? Find contextual defects in the product? Inspect code and the test coverage to prevent defects getting into production?

These are the questions I have targeted to answer in this post.

The hype

Gartner publishes reports related to the Hype Cycle of AI. You can read the detailed reports from 2019 and 2021.

The image below is from the Hype Cycle for AI, 2021 report.

Based on the report from 2021, we seem to be some distance away from the technology being able to satisfactorily answer the questions in my mind. 

But there are areas where we seem to be getting out of what the Gartner report calls “Trough of Disillusionment” and moving towards the “Slope of Enlightenment”.

Based on my research in the area, I agree with this.

Let’s explore this further.

The promising future

Recently I came across a lot of interesting buzz created by ChatGPT, and I had to try it out. 
Given I am a curious tester at heart, I signed up for it on https://chat.openai.com/ and tried to see the kind of responses generated by ChatGPT for use cases related to test strategy and automation.

The role and value of AI in testing

I took ChatGPT for a spin to see how it could help in creating test strategy and generate code to automate some tests. In fact, I also tried creating content using ChatGPT. Look out for my next blog for the details of the experiment.

I am amazed to see how far we have come in making AI technology accessible to end-users.

The answers to a lot of the non-coding questions, though generic, were pretty spot on. Just as code generated by record-and-playback tools cannot be used directly and needs to be tuned and optimized for regular usage, the answers provided by ChatGPT can be a great starting point for anyone. You get a high-level structure, with the major areas identified, which you can then detail out based on context that only you are aware of.

But, I often wonder what is the role of AI in Testing? Is it just a buzzword gone viral on social media?

I did some research in the area, but most of the references I found were blog posts and articles about “how AI can help make testing better”.

Here is a summary from my research on trends and hopes for AI in testing.

Test script generation

This aspect was pretty amazing to me. The code generated was usable, though only in isolation. Typically, our automation frameworks have a particular architecture we conform to while implementing test scripts, so code generated for specific use cases will need to be optimized and refactored to fit the framework architecture.

Test script generation using Applitools Visual AI

Applitools Visual AI takes a very interesting approach to this. Let me explain in more detail.

There are 2 main aspects in a test script implementation.

  1. Implementing the actions/navigations/interactions with the product-under-test
  2. Implementing various assertions based on the above interactions to validate the functionality is working as expected

We typically only write the obvious and important assertions in the test. This itself can make the code quite complex and also affects the test execution speed. 

The less important assertions are typically ignored – for lack of time, or because they may not be directly related to the intent of the test.

For the assertions you have not added to the test script, you either need to implement different tests to validate those behaviors, or, worse yet, they are ignored.

There is another category of assertions that are important and that you need to implement in your test, but that are very difficult or impossible to implement in your test scripts (depending on your automation toolset). For example, validating colors, fonts, page layouts, overlapping content, images, etc. is very difficult (if possible at all) and complex to implement. For these types of validations, you usually end up relying on a human testing these specific details manually. Given the error-prone nature of manual testing, a lot of these validations are often (unintentionally) missed or ignored, and the incorrect behavior of your application gets released to production.

This is where Applitools Visual AI comes to the rescue.

Instead of implementing a lot of assertions, with Applitools Visual AI integrated into your UI automation framework, you can take care of all your functional and visual (UI and UX) assertions with a single line of code. That single line checks the full screen automatically, dramatically increasing your test coverage.

Let us see an example of what this means.

Below is a simple “invalid-login” test with assertions.
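
(The original code screenshot did not survive here; the sketch below is a representative reconstruction, assuming JUnit 5 and Selenium-Java with hypothetical locators, not the exact code from the post.)

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class InvalidLoginAssertionTest {
    WebDriver driver; // initialized in a @BeforeEach hook (omitted for brevity)

    @Test
    void invalidLoginShowsError() {
        driver.get("https://example.com/login"); // hypothetical app
        driver.findElement(By.id("username")).sendKeys("invalid-user");
        driver.findElement(By.id("password")).sendKeys("wrong-password");
        driver.findElement(By.id("log-in")).click();

        // Every property that matters must be asserted explicitly, one by one.
        assertTrue(driver.findElement(By.id("alert")).isDisplayed());
        assertEquals("Incorrect username or password.",
                driver.findElement(By.id("alert")).getText());
        assertTrue(driver.findElement(By.id("username")).isDisplayed());
        assertTrue(driver.findElement(By.id("log-in")).isEnabled());
        // ...and anything not asserted (layout, colors, overlap) goes unchecked.
    }
}
```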

If you use Applitools Visual AI, the same test changes as below:
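
(Again reconstructing the missing screenshot: a sketch of the Applitools version, in which a single eyes.check call stands in for the whole assertion block.)

```java
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class InvalidLoginVisualTest {
    WebDriver driver; // initialized in a @BeforeEach hook (omitted)
    Eyes eyes;        // eyes.open(...) in the before hook, eyes.closeAsync() in the after hook

    @Test
    void invalidLoginShowsError() {
        driver.get("https://example.com/login"); // hypothetical app
        driver.findElement(By.id("username")).sendKeys("invalid-user");
        driver.findElement(By.id("password")).sendKeys("wrong-password");
        driver.findElement(By.id("log-in")).click();

        // One visual checkpoint stands in for the whole block of assertions:
        // text, layout, colors, and images are all compared against the baseline.
        eyes.check(Target.window().fully().withName("Invalid login"));
    }
}
```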

NOTE: In the above example with Applitools Visual AI, the lines eyes.open and eyes.closeAsync are typically called from your before and after test method hooks in your automation framework.

You can see the vast reduction in code required for the same “invalid-login” test when you compare the regular assertion-based code we are used to writing with the Applitools Visual AI version. Hence, the time to implement the test improves significantly.

The true value is seen when the test runs between different builds of your product which has changes in functionality.

The standard assertion-based test will fail at the first error it encountered. Whereas the test with Applitools is able to highlight all the differences between the 2 builds, which include:

  • Broken functionality – bugs
  • Missing images
  • Overlapping content
  • Changed/new features in the new UI

All of the above is done without having to rely on locators, which could have changed as well and caused your test to fail for a different reason. It is all possible because of the 99.9999% accurate AI algorithms provided by Applitools for visual comparison.

Thus, your test coverage has increased, and it is impossible for bugs to escape your attention.

The Applitools AI algorithms can be used in any combination, as appropriate to the context of your application, to get the maximum validation possible.
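
For example, the SDKs expose match levels that control how strictly screenshots are compared. A sketch against the Java SDK (API details may vary by SDK version):

```java
import com.applitools.eyes.MatchLevel;
import com.applitools.eyes.selenium.Eyes;

public class MatchLevelExample {
    static void useStrict(Eyes eyes) {
        // Strict (the default) compares everything a human eye would notice.
        eyes.setMatchLevel(MatchLevel.STRICT);
    }

    static void useLayout(Eyes eyes) {
        // Layout ignores content changes and checks structure only -- useful for
        // pages with dynamic data. Match levels can also be set per checkpoint
        // via the fluent API, e.g. eyes.check(Target.window().layout()).
        eyes.setMatchLevel(MatchLevel.LAYOUT);
    }
}
```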

Based on the data from the survey done on “The impact of Visual AI on Test Automation”, we can see below the advantages of using Applitools Visual AI.

Visual AI: The Empirical Evidence

5.8x faster

Visual AI allows tests to be authored 5.8x faster compared to the traditional code-based approach.

5.9x more efficient

Test code powered by Visual AI increases coverage via open-ended assertions and is thus 5.9x more efficient per line of code.

3.8x more stable

Reducing brittle locators and labels via Visual AI means reduced maintenance overhead.

45% more bugs caught

Open-ended assertions via Visual AI are 45% more effective at catching bugs.

Increasing test coverage

AI technology is getting pretty good at suggesting test scenarios and test cases.

There are many products that are already using AI technology under the hood to provide valuable features to users.

Increasing test coverage using Applitools Ultrafast Test Cloud

A great example of an AI-based tool is the Applitools Ultrafast Test Cloud, which allows you to scale your test execution seamlessly without having to re-run the tests on every browser and device.

In the image below, you can see how easy it is to scale your web-based test execution across the devices and browsers of your choice as part of the same UI test execution using the Applitools Ultrafast Grid.
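
In code, pointing an existing test at the Ultrafast Grid is mostly configuration. Here is a sketch based on the Applitools Java SDK (package names and constructors vary between SDK versions):

```java
import com.applitools.eyes.BrowserType;
import com.applitools.eyes.selenium.Configuration;
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.visualgrid.model.DeviceName;
import com.applitools.eyes.visualgrid.model.ScreenOrientation;
import com.applitools.eyes.visualgrid.services.VisualGridRunner;

public class UltrafastGridSetup {
    static Eyes createEyes() {
        // The test runs once locally; the grid re-renders every checkpoint
        // on each configured browser/device in parallel.
        VisualGridRunner runner = new VisualGridRunner(10); // concurrency
        Eyes eyes = new Eyes(runner);

        Configuration config = new Configuration();
        config.addBrowser(1200, 800, BrowserType.CHROME);
        config.addBrowser(1200, 800, BrowserType.FIREFOX);
        config.addBrowser(1200, 800, BrowserType.SAFARI);
        config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
        eyes.setConfiguration(config);
        return eyes;
    }
}
```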

Similarly, you can use the Applitools Ultrafast Native Mobile Grid to scale test execution for your iOS and Android native applications, as shown below.

Example of using Applitools Ultrafast Native Mobile Grid for Android apps:

Example of using Applitools Ultrafast Native Mobile Grid for iOS apps:

Test data generation

This is an area that still has room to improve. My experiments showed that while actual test data was not generated, ChatGPT indicated all the important areas we need to think about from a test data perspective. This is a great starting point for any team, while ensuring no obvious area is missed.

Debugging

We use code quality checks in our code-base to ensure the code is of high quality. While the code quality tool itself usually provides suggestions on how to fix the flagged issues, there are times when it may be tricky to fix the problem. This is an area where I got a lot of help.

Example: In one of my projects, I was getting a sonar error related to “XML transformers should be secured”.

I asked ChatGPT how to fix this error.

The solution to the problem was spot-on and I was immediately able to resolve my sonar error.
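
For reference, the widely recommended fix for this Sonar rule is to disable external DTD and stylesheet access on the factory (whether ChatGPT’s answer matched this exactly isn’t shown here):

```java
import javax.xml.XMLConstants;
import javax.xml.transform.TransformerFactory;

public class SecureTransformerExample {
    static TransformerFactory newSecureFactory() {
        TransformerFactory factory = TransformerFactory.newInstance();
        // Disallow access to external DTDs and stylesheets -- exactly what the
        // "XML transformers should be secured" rule checks for.
        factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
        factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_STYLESHEET, "");
        return factory;
    }
}
```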

Code optimization and fixing

Given a block of code, I was able to use ChatGPT to suggest a better way to implement that code, without me providing any context of my tech stack. This was mind-blowing!

In one example, ChatGPT provided five suggestions, all of them very actionable and hence very valuable!

Analysis and prediction

This is an area where I feel tools are currently still limited. There is an aspect of product context: learning from and using product- and team-specific data to come up with very contextual analysis, and eventually getting to the predictive and recommendation stage.

What other AI tools exist?

While most of the examples here are based on ChatGPT, there are other tools in the works, from both a research perspective and an application perspective.

There is a lot of fascinating research and development going on in this area that is worth investigating further. Beyond the research, there are also tools that leverage AI and ML and are already providing value to users.

Testing AI systems

The tester in me is also thinking about another aspect. How would I test AI systems? How can I contribute to building and helping the technology and tools move to the Slope of Enlightenment in Gartner’s Hype Cycle? I am very eager to learn and grow in this exciting time!

Challenges of testing AI systems

Given I do not have much experience in this (yet), various challenges come to mind for testing AI systems. But by now I am getting lazy to type, so I asked ChatGPT to list the challenges of testing AI systems – and it did a better job of articulating my thoughts … it almost felt like it read my mind.

This answer seems quite to the point, but also generic. In this case, I also do not have any specific points I can think of, given my lack of knowledge and experience in this field. So I am not disappointed.

Why generic? Because I have certain thoughts, ideas, and context in my mind, and I am trying to build a story around that. ChatGPT does not have an insight into “my” thoughts and ideas. It did a great job giving responses based on the data it was trained on, taking complexity, adaptability, transparency, bias, and diversity into consideration.

For fun, I asked ChatGPT: “How to test ChatGPT?”

The answer was not bad at all. In fact, it gave a specific example of how to test it. I am going to give it a try. What about you?

What’s next? What does it mean for using AI in testing?

There is a lot of amazing work happening in the field. I am very happy to see this happen, and I look forward to finding ways to contribute to its evolution. The great thing is that this is not about technology, but how technology can solve some difficult problems for the users.

From a testing perspective, there is some value we can start leveraging from the current set of AI tools. But there is also a lot more to be done and achieved here!

Sign up for my webinar Unlocking the Power of ChatGPT and AI in Testing: A Real-World Look to learn more about the uses of AI in testing.

Learn more about visual AI with Applitools Eyes or reach out to our sales team.

How Applitools Eyes Uses AI To Automate Test Analysis and Maintenance at Scale
https://applitools.com/blog/automating-test-maintenance-and-analysis/ | Mon, 07 Nov 2022
As teams get bigger and mature their testing strategy alongside the needs of business, new challenges in their process often arise. One of those challenges is that the analysis and...


As teams get bigger and mature their testing strategy alongside the needs of business, new challenges in their process often arise. One of those challenges is that the analysis and maintenance of tests and their results at scale can be incredibly cumbersome and time-consuming.

While a lot of emphasis gets put on “creating” new tests and reducing the time it takes to run them across different environments, there doesn’t seem to be the same emphasis on dealing with the results and repercussions of them.

Let’s say you have a test that validates a checkout experience and you want to expand that testing to the top 10 browsers. Just two bugs along that test scenario would produce 20 errors that need to be analyzed and then actioned on. This entire back and forth can become untenable in the rapid CI/CD environments present in many businesses. We basically have to choose to ignore our test results at this point if we want to get anything productive done.

This is where Auto-Grouping and Auto-Maintenance from Applitools come in, as they allow AI to quickly and accurately assess results just as an army of testers would!

What Is Automatic Analysis?

Applitools Auto-Grouping helps group together similar bugs that occur in different environments like browsers, devices, screen sizes, etc. Applitools even allows you to group these bugs between entire test runs, test steps, or even specific environments allowing you to really fine-tune your automation. 

In the above scenario, let’s assume we found those 2 bugs across all 10 browsers for a total of 20 errors. When we enable Auto-Grouping, the errors are grouped together to present only 2 bugs – making it much easier to analyze what is actually going wrong in our interface and cutting down on chasing repeat bugs.

What is Automatic Maintenance?

Auto-Maintenance builds on Auto-Grouping by automating the process of updating tests based on their results. It also gives users granular control over what gets updated automatically across checkpoints, test runs, and more.

Again, looking at the example above: if we accepted a new baseline on one browser, we’d have to accept it on the other 9 browsers manually, which takes a ton of time. With Auto-Maintenance, when a new baseline is accepted, that acceptance is applied across all similar environments, saving you hours of review and re-baselining work.
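Auto-Maintenance runs the same idea in reverse: one human decision fans out to every matching environment. Again, this is a toy sketch under the same hypothetical “signature” model, not the product’s actual mechanism.

```python
def accept_everywhere(accepted_signature, failures, baselines):
    """Apply one reviewer's acceptance to every environment with the same diff."""
    for failure in failures:
        if failure["signature"] == accepted_signature:
            baselines[failure["env"]] = "updated"   # new baseline for this env
    return baselines

failures = [
    {"env": "chrome",  "signature": "price-label-shifted"},
    {"env": "firefox", "signature": "price-label-shifted"},
    {"env": "safari",  "signature": "price-label-shifted"},
]

# The reviewer accepts the change once, on Chrome...
baselines = accept_everywhere("price-label-shifted", failures, {})
print(baselines)   # ...and Firefox and Safari are re-baselined automatically.
```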

How Sonatype Saves Money & Time With Automatic Maintenance

Jamie Whitehouse and the rest of the development team at Sonatype spent time on each release uncovering and addressing new failures and bugs across different browsers. Often, this work took the form of spot checks across the 1,000+ pages of the application during development. In reality, this work, and the inherent risk of unintended changes, slowed the delivery of the product to market.

Now, if Sonatype engineers make a change in their margins across a number of pages, all the differences show up as highlights in Applitools. Features in Applitools like Auto-Maintenance make visual validation a time saver. Auto-Maintenance lets engineers quickly accept identical changes across a number of pages – leaving only the unanticipated differences. As Jamie says, Applitools takes the guesswork out of testing the rendered pages. 

Start Saving Your Testing Time

To get started with automatically maintaining and analyzing your tests, you can check out our documentation here.

You’ll need a free account, so be sure to sign up for Applitools.

The post How Applitools Eyes Uses AI To Automate Test Analysis and Maintenance at Scale appeared first on Automated Visual Testing | Applitools.

Myth vs Reality: Understanding AI/ML for QA Automation – webinar w/ Expert Jonathan Lipps https://applitools.com/blog/ai-qa-automation/ Sun, 02 Feb 2020 19:11:36 +0000 https://applitools.com/blog/?p=6904 Artificial Intelligence and Machine Learning (AI/ML) have seen application in a variety of fields, including automation of QA tasks. But what are they exactly? What distinguishes different instances and applications...

Jonathan Lipps - Architect and project lead for Appium; Founder of Cloud Grey and AppiumPro

Artificial Intelligence and Machine Learning (AI/ML) have seen application in a variety of fields, including automation of QA tasks. But what are they exactly? What distinguishes different instances and applications of AI, for example? What are the horizons of these technologies in the field of QA?

The promise of AI/ML must be understood correctly to be harnessed appropriately. As with any buzzword, many technologies and products are offered under the guise of AI/ML without satisfying the definition. The industry is reforming itself around the promise that AI/ML holds often without a clear understanding of the technical limitations that give the promise its boundaries.

In this webinar, test automation guru Jonathan Lipps gives a detailed overview of the concepts that underpin AI/ML and discusses their ramifications for the work of QA automation.

In addition to a discussion of AI/ML in general, Jonathan looks at examples from the QA industry. These examples help give attendees the basic understanding required to cut through the marketing language, so they can clearly evaluate AI/ML solutions and calibrate expectations about the benefits of AI/ML in QA, both as it stands today and in the future.

Jonathan’s slide deck, the full webinar recording, and additional resources are embedded in the original post.

— HAPPY TESTING —

The post Myth vs Reality: Understanding AI/ML for QA Automation – webinar w/ Expert Jonathan Lipps appeared first on Automated Visual Testing | Applitools.

Applitools Named to CB Insights’ 2018 AI 100 List https://applitools.com/blog/applitools-named-to-cb-insights-2018-ai-100-list/ Tue, 19 Dec 2017 05:40:27 +0000 http://blog.applitools.com/applitools-named-to-cb-insights-2018-ai-100-list/ We are thrilled to share that Applitools has been named to CB Insights 2018 AI 100. This recognition is part of a ranking of the 100 most promising private artificial...


We are thrilled to share that Applitools has been named to CB Insights’ 2018 AI 100.

This recognition is part of a ranking of the 100 most promising private artificial intelligence (AI) companies in the world, and it recognizes the teams and technologies that are successfully using AI to solve big challenges. We are honored to be recognized alongside many other organizations innovating in the realm of AI, machine learning, and deep learning.

On top of making the general list, Applitools Eyes is the stand-alone finalist in the software development category. With other award categories in retail, media, cybersecurity, agriculture, and more, it is clear that the ROI of AI and machine learning is real, and that organizations in every industry are finding innovative ways to leverage its benefits.

Want to give Applitools Eyes a try? Enjoy a free trial of our easy-to-use SaaS solution.

To determine the winners that made the AI 100 list, CB Insights uses its Mosaic scoring system, a quantitative framework for measuring the overall health and growth potential of private companies. The Mosaic score evaluates companies on three distinct factors: Momentum, Market, and Money. After review of Applitools’ Mosaic score, we were selected from the 1,000+ applicants to make the list!

Why Applitools?
In today’s increasingly digital world, it is imperative that every company have a digital presence, and a visually flawless one at that. With web teams buried in lines of code, it can be challenging for organizations to find visual discrepancies in their apps and sites, and finding and fixing these discrepancies often involves laborious manual processes.

Thanks to Applitools Eyes, our AI-powered visual testing and monitoring solution, customers are able to automatically test the look, feel and user experience of their apps and sites without having to sacrifice their existing test framework.

All the while, Applitools Eyes’ AI-powered cognitive vision is constantly learning and improving from what are now millions of collected data points. This saves organizations the countless hours and dollars otherwise spent on teams of manual testers tracking down visual imperfections and searching for flawed lines of code.

This journey has only just begun!
As digital transformation continues unabated, increasing pressure from business executives to protect the brand experience means we will see the emergence of the nascent Application Visual Management (AVM) category. This logical extension of Application Delivery Management (ADM) and Application Performance Management (APM) will become more of a focus for enterprises as we head into 2018.

Make sure to stay tuned in the coming year for some more exciting announcements, updates to our solutions and the latest trends in test automation. And, thanks again to the CB Insights team for the accolade!

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

The post Applitools Named to CB Insights’ 2018 AI 100 List appeared first on Automated Visual Testing | Applitools.

Not Only Cars: The Six Levels of Autonomous Testing https://applitools.com/blog/not-only-cars-the-six-levels-of-autonomous/ Tue, 24 Oct 2017 11:50:13 +0000 http://blog.applitools.com/not-only-cars-the-six-levels-of-autonomous/ There is something similar about driving and testing. While testing is an exercise in creativity, parts of it are boring — just like driving is. Regression testing is tedious in that...


There is something similar about driving and testing. While testing is an exercise in creativity, parts of it are boring — just like driving is. Regression testing is tedious in that you need to run the same tests over and over again, every time a release is created, just like your daily commute. And just like on your daily commute, doing something repetitively is a recipe for mistakes, so repetitive testing, like driving, is a dangerous activity, as can be seen from the various crash sites strewn along our commuter highways, or from the various bugs that slipped past us during regression testing.

Which is why we automate testing. We write code that runs our tests, and run them whenever we want. But even that gets repetitive. Another day, another form field to check. Another form, another page. And one gets the feeling that writing those tests is a repetitive process. That it too can be automated.

Can AI automate this process? Can we make some, or all, parts of our testing, and of writing our tests, autonomous? Can AI do for writing tests what it is doing for driving?

Autonomous Driving

On the first day of May 2012, in Las Vegas, Google passed Nevada’s self-driving test. This can be thought of as the opening shot in an all-out war for dominance over what appears to be the future of automobiles, a war that includes hi-tech giants such as Google, Apple, and Intel, but also automotive giants like Ford, Mercedes, and Volvo.

 

Credit: Google

What’s the big deal? Why is everybody in a panic to be a player in this market? Why are self-driving cars perceived to be the future of the automobile? Because everybody “knows” that, long term, nobody is going to be driving their own car. Because, as we said, driving is a boring activity. Everybody would rather teleport to the office than spend an hour driving there. And if teleporting is not an option (yet!), then having a driver take you there is almost as good. While they are driving, you can get work done, catch up with friends, or take a nap. And if the boredom is taken out of the equation, the accidents that come with it are taken out too.

Because of consumer appeal, the potential to save billions in insurance costs, and the possibility that self-driving cars will solve today’s traffic jams, car manufacturers and hi-tech companies are scrambling to implement this technology.

But what exactly defines a self-driving, autonomous car? Is every autonomous car like every other? How is Tesla’s driver assistance similar to Volkswagen’s automatic braking system? And how are they related to Google’s cute self-driving car that doesn’t even have a steering wheel? To this end, in 2014, SAE International published a classification system for autonomous cars, a classification that was adopted by the National Highway Traffic Safety Administration (NHTSA) and is the standard by which all autonomous cars are classified.

 

Levels of Driving Automation

Credit: Applitools

The classification defines six levels of driving automation, and a car is assigned a level depending on the amount of autonomy it exhibits.

The first level of driving automation is Level 0 — no automation whatsoever. Congratulations! You are already the proud owner of an autonomous car, Level 0. Level 1 is where we are today in some cars on the road. It is named “Driver Assistance”, and is where things like modern cruise control and automatic braking systems sit today. In this mode, the human is still in control — the AI only assists while the human is driving.

Level 2 is “Partial Automation”. This is the level where Tesla’s Autopilot and GM’s Super Cruise reside. At this level, the human is still in charge, but the AI takes care of the boring details of acceleration, deceleration, and steering, on the assumption that the human can take over at the first sign of trouble. We’re talking mostly highways here, not city driving.

Level 3 is “Conditional Automation”. This is the first level where the car drives itself. But only under certain conditions, and always under the assumption that the human can take over when the AI signals that it cannot handle a situation.

Level 4 is “High Automation”. At this level, the AI takes over all driving and does not assume that the human can take over. In other words, this car can drive without a steering wheel. Google’s cars are at this level. The caveat is that there are times and places where the car cannot be autonomous, like in a snowstorm or in fog.

Level 5 is the highest level — “Full Automation”. The car can be autonomous anytime and anywhere. We have yet to reach that level of automation, but we’re getting there.

Level 6 is “SkyNet Automation”. No human is even in the car, because the robots have killed all of them! (Just joking — this level doesn’t really exist.)

 

Autonomous Testing

So back to autonomous testing. My company, Applitools, has just hired a new guy, an addition to our algorithm and AI team. During lunch, we started talking about the similarities between driving and testing, and just brainstorming this idea of “levels of autonomous testing”. These are our thoughts on this.

But Caveat Emptor! There is a saying in Hebrew that since the destruction of the (second) temple, prophecy is only given to fools and children. So please take these levels with a grain of salt. On the other hand, while these thoughts reflect our fantasies, they also reflect where we think the testing world is going in terms of using AI.

Are you worried that AI will soon replace all the testers? If so, skip to the last section to get your answer. But if you like your rides with suspense, then fasten your seat belts, because we’re going on a ride through the 6 levels of autonomous testing.

The Six Levels of Testing Autonomy (Credit: Applitools)

Level 0 — “No Autonomy”

Congratulations! You’re there. You write code that tests the application, and you’re very happy because you can run the tests again and again on every release, instead of boringly repeating the same tests by hand. This is perfect, because now you can concentrate on the important aspect of testing: thinking.

But nobody’s helping you write that automation code. And writing the code itself is repetitive. Any field added to a form means adding a test. Any form added to a page means adding a test that checks all the fields. And any page added means checking all the components and forms in that page.

Moreover, the more tests you have, the more of them fail when there is a sweeping change in the application. Each and every failed test needs to be checked to verify whether it caught a real bug or just a new baseline.

 

Level 1 — “Driver Assistance”

An interesting thing we saw when we researched autonomous driving is that the better the vision system of an autonomous car, the better its autonomy. This is why we believe that the better the AI can see the app, the more autonomous it can be. This means the AI should be able to see not only a snapshot of the DOM of the page, but also a visual picture of the page. The DOM is only half the picture — the fact that there is an element with text on it does not mean that the user can see it: it may be obscured by other elements, or not in the correct position. The visuals enable you to concentrate on the data that the user actually sees.

Once the testing framework can see the page, and look at it holistically, whether through the DOM or through a screenshot, it can help you write those checks that you write manually.

If we take a screenshot of the page, we can holistically check all form fields and texts in the page in one fell swoop. Instead of writing code that checks each and every field, we can test all of them at once against the baseline of the previous test.

 

Checking the whole form in one fell swoop. Credit: Applitools

This level exists today in testing tools like Applitools Eyes. Previous tools that tried to do this failed because comparing images pixel by pixel never works. For it to work, the tool needs AI algorithms that can figure out which changes are not really changes and which are real.
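Concretely, this is roughly what a Level 1 check looks like in practice: a single visual checkpoint in place of dozens of per-field assertions. A minimal sketch, assuming the Applitools Python SDK; the URL and names are illustrative.

```python
from selenium import webdriver
from applitools.selenium import Eyes, Target

driver = webdriver.Chrome()
eyes = Eyes()   # reads the API key from the APPLITOOLS_API_KEY variable

try:
    eyes.open(driver, "Demo App", "Registration form")
    driver.get("https://example.com/register")   # hypothetical URL

    # Instead of one assertion per field:
    #   assert driver.find_element(...).is_displayed()
    #   assert driver.find_element(...).text == "First name"
    #   ...and so on for every field,
    # a single checkpoint compares the whole page against the baseline:
    eyes.check("Whole registration form", Target.window())

    eyes.close()   # raises if unreviewed differences were found
finally:
    eyes.abort()   # no-op if the test closed cleanly
    driver.quit()
```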

This is how AI today assists you in writing your test code, by writing all your checks for you. The human is still driving the tests, but some of the checks are done automatically by the AI.

Also, the AI can check that a test passes, but when it fails it still needs to notify the human to check whether the failure is a real one, or happened because of a correct change in the software. The human will then have to confirm that the change is good, or reject it because it’s a bug.

And having the AI “see” the application means that the AI can check the visual aspects of an application against a baseline. This is something that until now could only be covered by manual testing.

But the human still needs to assert every change.

 

Level 2 — “Partial Automation”

With Level 1 autonomy, the tester can hand the tedious work of writing checks for all the fields on a page over to the AI, by having the AI test against a baseline. And the AI can now test the visual aspects of the page.

But checking each and every test failure to verify whether it’s a “good” failure or a real bug can be tedious, especially if one change is reflected in many test failures. A Level 2 AI needs to understand the difference not in terms of bitmap differences, but in terms a human can understand — in terms the user of the application would understand. Thus, a Level 2 AI will be able to group changes from lots of pages, because it understands that, semantically, they are the same “change”. We’re starting to see tools like Applitools Eyes using AI techniques to do just that.

A Level 2 AI will be able to group these changes, tell the human that they are the “same” change, and ask them to confirm or reject all of the changes as a group.

 

Grouping similar changes in Applitools Eyes. Credit: Applitools

In other words, a Level 2 AI will assist the human in checking changes against the baseline and will turn what used to be a tedious effort into a simple one.

 

Level 3 — “Conditional Automation”

In Level 2, any failure or change detected in the software still needs to be vetted by a human. A Level 2 AI can help analyze the change, but it cannot tell whether a page is correct just by looking at it; it needs a baseline to compare against. A Level 3 AI can do that and much more, because it can apply machine learning techniques to the page.

For example, a Level 3 AI can look at the visual aspects of a page and figure out whether the design is off, based on standard design rules: alignment, whitespace use, color and font usage, and layout.

 

A Level 3 AI will autonomously determine that this is a bug. Credit: Applitools

And what about the data aspects? A Level 3 AI will also be able to look at the data and figure out that all the numbers seen so far in this field or that need to fall within a specific range, that this field is an email address, and that this one needs to be the sum of the fields above it. It can figure out that on this page, this table needs to be sorted by this column.

The AI can now look at the pages and determine whether the page is OK, without human intervention, just by understanding the design and data rules. And even if there was a change in the page, the AI can still understand that this page is OK, and not submit it to a human to vet.

And because that AI is looking at hundreds of test results and seeing how things change over time, it can apply machine learning techniques to detect anomalies in the changes, and only submit a test to a human to verify when such an anomaly is detected.
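A toy version of one such learned data rule: infer the expected range of a field from values seen across past runs, and only escalate to a human when a new value falls outside it. Plain Python, purely illustrative.

```python
def learn_range(history, slack=0.25):
    """Infer an expected numeric range from values seen in past test runs."""
    lo, hi = min(history), max(history)
    pad = (hi - lo) * slack
    return lo - pad, hi + pad

def is_anomaly(value, bounds):
    lo, hi = bounds
    return not lo <= value <= hi

cart_totals_seen = [19.99, 24.50, 31.00, 27.25]   # hypothetical history
bounds = learn_range(cart_totals_seen)
print(is_anomaly(26.00, bounds))    # False: within the learned range, auto-pass
print(is_anomaly(2600.0, bounds))   # True: anomaly, submit to a human to vet
```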

 

Level 4 — “High Automation”

Up to now, all the AI did was run the checks automatically. The human is still driving the test, and clicking on the links (albeit using automation software). Level 4 is where the AI is driving the test itself!

A Level 4 AI will be able to understand some pages by visually looking at them and recognizing what type of page each one is. Because it can look at a page and understand it as a human does, it will know that this page is a login page, this one a profile page, and that one a registration page or a shopping cart page.

 

A Level 4 AI will be able to look at user interactions over time, visualizing the interactions, and understanding the pages and the flow through them. Once the AI understands the type of page, using techniques like reinforcement learning, it can start driving the tests on it automatically, without human intervention. It will be able to write the tests themselves, and not just the checks for the tests. Obviously, it will use the visual (and other) techniques in Level 3 and 2 to find the bugs in each page.

 

Level 5 — “Full Automation”

This level is really sci-fi, and we’re starting to fly on wings of fantasy: at this level, the AI will be able to converse with the product manager, understand the application, and fully drive the tests itself!

Unfortunately, given that no human has ever been able to intelligently understand a product manager’s description of an application, AIs will need to be much smarter than humans to reach this level!

 

Level 6 — “Skynet Automation”

At this level, due to the Robot Apocalypse, no humans are left alive, so there’s nothing to test, is there? Then again, who’s testing the robot software? Hm…

 

Are We There Yet?

Are we there yet? Do we have to start looking for a new job because computers are going to automate all software testing?

Most emphatically, no! Software testing is similar to driving in some respects, but in others it is much more complicated, as it entails understanding complex human interactions. An AI today doesn’t really understand what it’s doing; it’s just automating tasks based on lots and lots of historical data.

Where are we now in terms of using AI? Advanced tools like Applitools Eyes are currently at Level 1, and progressing nicely toward Level 2 functionality. While companies are working on Level 2, a Level 3 AI will need a lot more work and research (but is definitely doable). A Level 4 AI is very far away in the future. This is good news — for the next decade or so, we will be able to enjoy the fruits of AI-assisted testing without the nasty side effects.

But will we one day, in the far, far future, be out of a job? Who knows — since the destruction of the (second) temple, prophecy is only given to fools and children!

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.


 

About the Author:

30 years of experience have not dulled the fascination Gil Tayar has with software development. From the olden days of DOS to the contemporary world of software testing, Gil was, is, and always will be a software developer. He co-founded WebCollage, survived the bubble collapse of 2000, and worked on various big cloudy projects at Wix.

His current passion is figuring out how to test software, a passion he has turned into his main job as Evangelist and Senior Architect at Applitools. He has religiously tested all his software, from his early days as a junior software developer to the current days at Applitools, where he develops tests for software that tests software, which is almost one meta layer too many for him.

In his private life, he is a dad to two lovely kids (and a cat), an avid reader of science fiction (he counts Samuel Delany, Robert Silverberg, and Robert Heinlein as favorites), and a passionate film buff (Stanley Kubrick, Lars von Trier, David Cronenberg, anybody?).

Unfortunately for him, he hasn’t really answered the big question of his life – he still doesn’t know whether static languages or dynamic languages are best.

The post Not Only Cars: The Six Levels of Autonomous Testing appeared first on Automated Visual Testing | Applitools.
