Anand Bagmar, Author at Automated Visual Testing | Applitools https://applitools.com/blog/author/anandbagmar/ Applitools delivers the next generation of test automation powered by AI assisted computer vision technology known as Visual AI.

Unlocking the Power of ChatGPT and AI in Test Automation Q&A https://applitools.com/blog/chatgpt-and-ai-in-test-automation-q-and-a/ Thu, 20 Apr 2023 16:14:13 +0000

ChatGPT webinar Q&A

Last week, Applitools hosted Unlocking the Power of ChatGPT and AI in Test Automation: Next Steps, where I explored how artificial intelligence, specifically ChatGPT, can revolutionize the field of test automation. In this article, I’ll share the audience Q&A as well as some of the results of the audience polls. Be sure to read my previous article, where I summarized the key takeaways from the webinar. You can also find the full recording, session materials, and more in our event archive.

Audience Q&A

Our audience asked various questions about the data and intellectual property when using AI and adding AI into their own test automation processes.

Intellectual properties when using ChatGPT

Question: To avoid disclosing company intellectual properties to ChatGPT, is it not better to build a “private” ChatGPT / large language model to use and train for test automation inside the company while securing the privacy?

My response: The way a lot of organizations are proceeding is setting up private ChatGPT-like infrastructure to get the value of AI. I think that’s a good way to proceed, at least for now.

Data privacy when using ChatGPT

Question: What do you think about feeding commercial data (requirements, code, etc.) to ChatGPT and/or the OpenAI API (e.g., gpt-3.5-turbo), given the data privacy concerns connected to recent incidents like the Samsung leak, the exposure of ChatGPT chats, and so forth?

My response: Feeding public data is okay, because it’s out in the public space anyway, but commercial data could be public or it could be private, and that could become an issue. The problem is we do not understand enough about how ChatGPT is using the data or the questions that we are asking it. It is constantly learning, so if you feed it a very unique type of question that it has never come across before, the algorithm is intelligent enough to learn from that. It might give you the wrong answer, but it is going to learn based on your follow-up questions, and it is going to use that information to answer someone else’s similar question.

Complying with data regulations

Question: How can we ensure that AI-driven test automation tools maintain compliance with data privacy regulations like GDPR and CCPA during the testing process?

My response: It’s a tough question. I don’t know how we can ensure that, but if you are going to use any AI tool, you must make sure you are asking very focused, specific questions that don’t disclose any confidential information. For example, in my demo, I had a piece of code pointing to a website, and I asked a very specific question about how to implement something. That solution could be built using some complex, freely available algorithms or anything else, but you take that answer, make it your own, and then implement it in your organization. That might be safer than disclosing anything more. This is a very new area right now, so it’s better to err on the side of caution.

Adding AI into the test automation process

Question: Any suggestions on how to embed ChatGPT/AI into automation testing efforts as a process more than individual benefit?

My response: I unfortunately do not have an answer to this yet. It is something that needs to be explored and figured out. One thing I will add is that even though it may be similar to many others, each product is different. The processes and tech stacks are going to vary across all these types of products you use for testing and automation, so one solution is not going to fit everyone. Auto-generated code will get you to a certain level, but at least as of now, the human mind is still essential to use it correctly. So AI is not going to solve your problems; it is just going to make solving them easier. The examples I showed are ways to make it easier in your own case.

Using AI for API and NFRs

Question: How effective would AI be for API and NFR?

My response: I could ask it to give me a performance test implementation approach for the Amazon website, and it gives me a performance test strategy. If I ask it what tool I should use for the implementation, it is probably going to suggest a few tools. If I ask it to build an initial script for automating this performance test, it is probably going to do that for me as well. It all depends on the questions you ask to proceed from there, and I’m sure you’ll get good insights for NFRs.

Using AI on a dedicated private cloud instance

Question: Our organization is very particular about intellectual property protection, and they might deny us from using third-party cloud tools. What solution is there for this?

My response: Applitools uses the public cloud. I use a public cloud for my learning, training, and demos, but a lot of our customers actually use a dedicated cloud instance, which is hosted only for them. Only they have access to it, so that takes care of the security concerns that might be there. We also work with our customers from compliance and security perspectives to ensure all their questions are answered and that everything conforms to their standards.

Using AI for mobile test automation

Question: Do you think AI can improve quality of life for automation engineers working on mobile app testing too? Or mostly web and API?

My response: Yes, it works for mobile. It works for anything that you want. You just have to try it out and be specific with your questions. What I learned from using ChatGPT is that you need to learn the art of asking the questions. It is very important in any communication, but now it is becoming very important in communicating with tools as well to get you the appropriate responses.

Audience poll results

In the live webinar, the audience was asked “Given the privacy concerns, how comfortable are you using AI tools for automation?” Of 105 votes, half of the respondents said they would be somewhat or very comfortable using AI tools for automation.

  • Very comfortable: 16.95%
  • Somewhat comfortable: 33.05%
  • Not sure: 24.59%
  • Somewhat uncomfortable: 14.41%

Next steps

You can take advantage of AI today by using Applitools to test web apps, mobile apps, desktop apps, PDFs, screenshots, and more. Applitools offers SDKs that support several popular testing frameworks in multiple languages, and these SDKs install directly into your projects for seamless integration. You can try it yourself by claiming a free account or requesting a demo.

Check out our events page to register for an upcoming session with our CTO and the inventor of Visual AI, Adam Carmi. For our clients and friends in the Asia-Pacific region, be sure to register to attend our upcoming encore presentation of Unlocking the Power of ChatGPT and AI in Test Automation happening on April 26th.

Unlocking the Power of ChatGPT and AI in Test Automation Key Takeaways https://applitools.com/blog/chatgpt-and-ai-in-test-automation-key-takeaways/ Tue, 18 Apr 2023 21:12:14 +0000

ChatGPT webinar key takeaways

Editor’s note: This article was written with the support of ChatGPT.

Last week, Applitools hosted Unlocking the Power of ChatGPT and AI in Test Automation: Next Steps, where I discussed how artificial intelligence – specifically ChatGPT – can impact the field of test automation. The webinar delved into the various applications of AI in test automation, the benefits it brings, and the best practices to follow for successful implementation. With the ever-growing need for efficient and effective testing, the webinar is a must-watch for anyone looking to stay ahead of the curve in software testing. This blog article recaps the key takeaways from the webinar. You can also find the full recording, session materials, and more in our event archive.

Takeaways from the previous webinar

I started with a recap of the takeaways from the previous webinar, Unlocking the Power of ChatGPT and AI in Testing: A Real-World Look. That webinar focused on two main aspects from a testing perspective: the testing approach and mindset (strategy, design, automation, and execution), and the automation perspective. ChatGPT was able to help with automation by guiding me to automate test cases more quickly and effectively. ChatGPT was also able to provide solutions to programming problems, such as giving a solution to a problem statement and refactoring code. However, there were limitations to ChatGPT’s ability to provide answers, particularly in terms of test execution, and some challenges when working with large blobs of code.
If you didn’t catch the previous webinar, you can still watch it on demand.

What’s new in AI since the previous webinar

Since we hosted the previous webinar, there have been many updates in the AI chatbot space. A few key updates we covered in the webinar include:

  • ChatGPT has become accessible on laptops, phones, and Raspberry Pi, and can be run on your own devices.
  • Google Bard was released, but it is limited to the English language, cannot continue conversations, and cannot help with coding.
  • GPT-4 was released, which accepts image and text inputs and provides text outputs.
  • ChatGPT Plus was introduced, offering better reasoning, faster responses, and higher availability to users.
  • Plugins can now be built on top of ChatGPT, opening up new and powerful ways of interaction.

During the webinar, I gave a live demo of some of ChatGPT’s updates, where it was able to provide code implementation and generate unit tests for a programming question.
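To give a flavor of the kind of output that demo produced – this is a hedged sketch, not the actual demo code – asking for an implementation plus unit tests for a simple coding question might yield something like the following JUnit 5 example (the method, class, and test names are hypothetical):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class PalindromeTest {

    // Hypothetical implementation of the kind ChatGPT might generate for a coding question.
    static boolean isPalindrome(String input) {
        String cleaned = input.replaceAll("[^A-Za-z0-9]", "").toLowerCase();
        return new StringBuilder(cleaned).reverse().toString().equals(cleaned);
    }

    @Test
    void recognisesASimplePalindrome() {
        assertTrue(isPalindrome("racecar"));
    }

    @Test
    void ignoresCaseAndPunctuation() {
        assertTrue(isPalindrome("A man, a plan, a canal: Panama"));
    }

    @Test
    void rejectsANonPalindrome() {
        assertFalse(isPalindrome("applitools"));
    }
}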

Using AI to address common challenges in test automation

Next, I discussed the actual challenges in automation and how we can leverage AI tools to get better results. Those challenges include:

  • Slow and flaky test execution
  • Sub-optimal, inefficient automation
  • Incorrect, non-contextual test data

Using AI to address flakiness in test automation

This section specifically focused on flaky tests related to UI or locator changes, which can be identified using consistent logging and reporting. I advised against using a retry listener to handle flaky tests and instead suggested identifying and fixing the root cause. I then demonstrated an example of a test failing due to a locator change and discussed ways to solve this challenge.

Read our step-by-step tutorial to learn how to use visual AI locators to target anything you need to test in your application and how it can help you create tests that are more resilient and robust.
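As a hedged, minimal sketch (hypothetical page and locators, not the code from the demo), fixing the root cause of a locator-related flaky test usually means replacing a brittle, structure-dependent locator with a stable one rather than retrying the test:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;

    // Brittle: breaks whenever the DOM structure or styling classes change.
    // private final By submitButton = By.xpath("//div[2]/form/div[3]/button[contains(@class,'btn-primary')]");

    // More resilient: anchored to a dedicated test attribute that survives UI refactoring.
    private final By submitButton = By.cssSelector("[data-test-id='login-submit']");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void submit() {
        driver.findElement(submitButton).click();
    }
}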

Using AI to address sub-optimal or inefficient automation

Next, I discussed sub-optimal or inefficient automation and how to improve it. I used GitHub Copilot to generate a new test and auto-generate code. I explained how to integrate GitHub Copilot and JetBrains Aqua with IntelliJ and how to use Aqua to find locators for web elements. Then, I showed how to implement code in the IDE and interact with the application to perform automation.

Using AI to address incorrect, non-contextual test data

Next, I discussed the importance of test data in automation testing and related challenges. There are many libraries available for generating test data that can work in the context of the application. Aqua can generate test data by right-clicking and selecting the type of text to generate. Copilot can generate data for “send keys” commands automatically. It’s important to have a good test data strategy to avoid limitations and increase the value of automation testing.
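As a hedged illustration of the contextual test data point – the demo used Aqua and Copilot for this, so treat the library choice here (JavaFaker, com.github.javafaker:javafaker) as an illustrative stand-in – a small data factory keeps generated values realistic and out of the test code:

import com.github.javafaker.Faker;

public class TestDataFactory {
    private static final Faker faker = new Faker();

    public static String fullName() {
        return faker.name().fullName();          // e.g. a realistic-looking person name
    }

    public static String email() {
        return faker.internet().emailAddress();  // valid-format, unique-looking address
    }

    public static String streetAddress() {
        return faker.address().streetAddress();
    }
}

Swapping hard-coded strings for calls like TestDataFactory.email() keeps tests independent of each other and closer to production-like data.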

Potential pitfalls of AI

AI is not a magic solution and requires conscious and contextual use. Over-reliance on AI can lead to incomplete knowledge and lack of understanding of generated code or tests. AI may replace certain jobs, but individuals can leverage AI tools to improve their work and make it an asset instead of a liability.

Data privacy is also a major concern with AI use, as accidental leaks of proprietary information can occur. And AI decision-making can be problematic if it does not make the user think critically and understand the reasoning behind the decisions. Countries and organizations are starting to ban the use of AI tools like ChatGPT due to concerns over data privacy and accidental leaks.

Conclusion

Overall, I was skeptical about AI in automation at first, but that skepticism has reduced significantly. You must embrace technology or risk being left behind. Avoid manual repetition, and leverage automation tools to make work faster and more interesting. The automation cocktail (using different tools in combination) is the way forward.

Focus on ROI and value generation, and make wise choices when building, buying, or reusing tools. Being agile is important, not just following a methodology or procedure. Learning, evolving, iterating, communicating, and collaborating are key to staying agile. Upskilling and being creative and innovative are important for individuals and teams. Completing the same amount of work in a shorter time leads to learning, creativity, and innovation.

Be sure to read my next article, where I answer questions from the audience Q&A. If you have any other questions, be sure to reach out on Twitter or LinkedIn.

AI-Generated Test Automation with ChatGPT https://applitools.com/blog/ai-generated-test-automation-with-chatgpt/ Mon, 06 Feb 2023 16:42:08 +0000

ChatGPT example code

The AI promise

AI is not a new technology. Recently, the field has made huge advancements.

Currently it seems like AI technology is mostly about using ML (machine learning) algorithms to train models with large volumes of data and use these trained computational models to make predictions or generate some outcomes.

From a testing and test automation point-of-view, the question in my mind still remains: Will AI be able to automatically generate and update test cases? Can it find contextual defects in the product? Can it eventually inspect code and the test coverage to prevent defects getting into production?

ICYMI: Read my recent article on this topic, AI: The magical helping hand in testing.

The promising future

Recently I came across a lot of interesting buzz created by ChatGPT, and I had to try it out. 

Given I am a curious tester by heart, I signed up for it on https://chat.openai.com/ and tried to see the kind of responses generated by ChatGPT for a specific use case.

Live demo: Watch me demonstrate how to use ChatGPT for testing and development in the on-demand recording of Unlocking the Power of ChatGPT and AI in Testing: A Real-World Look.

What is the role of AI in testing?

I started with a simple question to ChatGPT. The answer was interesting, but it was nothing different from what I already knew.

So I thought of asking some more specific questions.

Create a test strategy using AI

I asked ChatGPT to create a test strategy for testing and automating the Amazon India shopping app and website.

I was blown away by the first response. Though I can’t use this directly, as it is quite generic, the answer was actually pretty good as a starting point.

I was hooked now, and I had to keep going.

What test cases should I automate for my product?

Example: What test cases should I automate for Amazon India?

Like the test strategy, these are not at the level of detail I am looking for. But it’s a great start.

That said, at the top of the test automation pyramid, I do not want to automate test cases; I want to automate test scenarios. So I asked ChatGPT about the important scenarios I should automate.

What test scenarios should I automate for my product?

Though these are not the specific test scenarios I was expecting, there is a clear difference between the identified test cases and the test scenarios.

Code generation: the first auto-generated test

I asked ChatGPT to give me the Selenium-Java automation code to automate the “search and add a OnePlus phone to cart” scenario for the Amazon India website.

ChatGPT not only corrected my question, but also gave me the exact code to implement the above scenario.

I updated the question to generate the test using Applitools Visual AI, and voila, here was the solution:
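That generated solution isn’t reproduced here; as a rough, hypothetical sketch of the shape such a Selenium-Java + Applitools test takes (the locators, names, and exact Eyes API calls are assumptions and vary by SDK version), it looked something like this:

import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class AmazonSearchTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        // eyes.setApiKey(...) and other Eyes configuration omitted for brevity.
        try {
            eyes.open(driver, "Amazon India", "Search and add OnePlus phone to cart");
            driver.get("https://www.amazon.in");
            driver.findElement(By.id("twotabsearchtextbox")).sendKeys("OnePlus phone", Keys.ENTER);
            // Visual checkpoint of the search results instead of a long list of assertions.
            eyes.check("Search results", Target.window().fully());
            driver.findElement(By.cssSelector("div[data-component-type='s-search-result'] h2 a")).click();
            driver.findElement(By.id("add-to-cart-button")).click();
            eyes.check("Cart confirmation", Target.window().fully());
            eyes.closeAsync();
        } finally {
            driver.quit();
            eyes.abortAsync();
        }
    }
}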

Is this code usable?

Not really!

If you run the test, there is a good chance it will fail, because the generated code assumes the page is “rendered” as soon as the actions are done. To make this code usable and consistent in execution, we need to update it with realistic and intelligent waits at relevant places to cater to the rendering and loading time required by the browser and, of course, your environment.

NOTE: Intelligent waits are important, as opposed to hard waits, to optimise the execution time. Also, being realistic with your wait durations is very important: you do not want to wait for a particular state of the application for longer than it is expected to take in production.
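As a hedged illustration of the difference (with a hypothetical locator), an explicit wait polls for a condition and moves on as soon as it is met, while a hard wait always burns the full duration:

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class Waits {
    // Hard wait: always burns the full 10 seconds, and still fails if rendering takes 11.
    static void hardWait() throws InterruptedException {
        Thread.sleep(10_000);
    }

    // Intelligent wait: proceeds the moment the element is visible, fails fast after a realistic timeout.
    static void waitForSearchResults(WebDriver driver) {
        new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector("div.s-result-list")));
    }
}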

Also, our test frameworks typically have various abstraction layers and utilities to manage different functionalities and responsibilities. But we can take snippets of this generated code and easily plug them into the right places in our frameworks, so it is still very valuable.

I also tried the same for some internal sites – i.e. sites not available to the general public. The tests were still created, but I am not sure about their validity.

Test data generation

Lastly, I asked specific questions around test data – an aspect unfortunately ignored or forgotten until the last minute by most teams.

Example: What test data do I need to automate test scenarios for my product?

Again, I was blown away by the detailed test data characteristics that were given as a response. It was specific about the relevant data required for product, user, payment, and order to automate the important test scenarios that were highlighted in the earlier response. 

I tried to trick ChatGPT with “search and add a product to cart” – but I didn’t specify which product.

It caught my bluff and gave very clear instructions that I should provide valid test data.

Try it yourself: Can ChatGPT accurately translate your tests from one framework to another? Try it out and let us know how it went @Applitools.

Other examples

Generating content

Instead of thinking and typing this blog post, I could have used ChatGPT to generate the post for me as well.

I didn’t even have time to step away to get a cup of coffee while this was being generated!

Planning a vacation

Another interesting use case of ChatGPT – getting help to plan a vacation!

NOTE: Be sure to work with your organization’s security team and @OpenAI before implementing ChatGPT into the organization’s official processes.

What’s next? What does it mean for using AI in testing?

In my examples, the IDEs are very powerful and make it very easy for programmers and automation engineers to write code in a better and more efficient way.

36.8% of our audience participants answered that they were worried about AI replacing their jobs. Others see AI as a tool to improve their skills and efficiency in their job.

Tools and technologies like ChatGPT will offer more assistance to such roles. Individuals can focus on logic and use cases. I have doubts about the applicability of these tools for providing answers based on contextual information. However, given very focused and precise questions, these tools can provide information that helps you implement the same in an efficient and optimal fashion.

I am hooked!

To learn more, watch the on-demand recording of Unlocking the Power of ChatGPT and AI in Testing: A Real-World Look. Look out for a follow-on event in early April, where I will go deeper into specific use cases and how ChatGPT and its use in testing are evolving. Sign up for an email notification when registration opens.

AI: The Magical Helping Hand in Testing https://applitools.com/blog/ai-the-magical-helping-hand-in-testing/ Tue, 24 Jan 2023 16:59:28 +0000

Gartner Hype Cycle for Artificial Intelligence, 2021

The AI promise

AI as a technology is not new. Huge advancements have been made in this field in the recent past.

Currently it seems like AI technology is mostly about using ML (machine learning) algorithms to train models with large volumes of data and use these trained computational models to make predictions or generate some outcomes.

From a testing and test automation point-of-view, the question in my mind still remains: Will AI be able to automatically generate and update test cases? Find contextual defects in the product? Inspect code and the test coverage to prevent defects getting into production?

These are the questions I have targeted to answer in this post.

The hype

Gartner publishes reports related to the Hype Cycle of AI. You can read the detailed reports from 2019 and 2021.

The image below is from the Hype Cycle for AI, 2021 report.

Based on the report from 2021, we seem to be some distance away from the technology being able to satisfactorily answer the questions in my mind. 

But there are areas where we seem to be getting out of what the Gartner report calls “Trough of Disillusionment” and moving towards the “Slope of Enlightenment”.

Based on my research in the area, I agree with this.

Let’s explore this further.

The promising future

Recently I came across a lot of interesting buzz created by ChatGPT, and I had to try it out. 
Given I am a curious tester by heart, I signed up for it on https://chat.openai.com/ and tried to see the kind of responses generated by ChatGPT for use cases related to test strategy and automation.

The role and value of AI in testing

I took ChatGPT for a spin to see how it could help in creating a test strategy and generating code to automate some tests. In fact, I also tried creating content using ChatGPT. Look out for my next blog post for the details of the experiment.

I am amazed to see how far we have come in making AI technology accessible to end users.

The answers to a lot of the non-coding questions, though generic, were pretty spot-on. Just as code generated by record-and-playback tools cannot be used directly and needs to be tuned and optimized for regular use, the answers provided by ChatGPT can be a great starting point for anyone. You get a high-level structure, with the major areas identified, which you can then detail out based on context that only you are aware of.

But I often wonder: what is the role of AI in testing? Is it just a buzzword gone viral on social media?

I did some research in the area, but most of the references I found were blog posts and articles about “how AI can help make testing better”.

Here is a summary from my research on the trends and hopes for AI in testing.

Test script generation

This aspect was pretty amazing to me. The code generated was usable, though only in isolation. Typically, our automation frameworks have a particular architecture we conform to while implementing the test scripts, so the code generated for specific use cases will need to be optimized and refactored to fit into that framework architecture.

Test script generation using Applitools Visual AI

Applitools Visual AI takes a very interesting approach to this. Let me explain this in more detail.

There are 2 main aspects in a test script implementation.

  1. Implementing the actions/navigations/interactions with the product-under-test
  2. Implementing various assertions based on the above interactions to validate the functionality is working as expected

We typically only write the obvious and important assertions in the test. This itself can make the code quite complex and also affects the test execution speed. 

The less important assertions are typically ignored – for lack of time, or also because they may not be directly related to the intent of the test.

For the assertions you have not added in the test script, you either need to implement different tests to validate those aspects, or, worse yet, they are simply ignored.

There is another category of assertions which are important and need to be implemented in your test, but which are very difficult or impossible to implement in your test scripts (depending on your automation toolset). For example, validating colors, fonts, page layouts, overlapping content, images, etc. is very difficult (if at all possible) and complex to implement. For these types of validations, you usually end up relying on a human testing these specific details manually. Given the error-prone nature of manual testing, a lot of these validations are often (unintentionally) missed or ignored, and the incorrect behavior of your application gets released to production.

This is where Applitools Visual AI comes to the rescue.

With Applitools Visual AI integrated into your UI automation framework, instead of implementing a lot of assertions, it can take care of all your functional and visual (UI and UX) assertions with a single line of code. That single line checks the full screen automatically, thus exponentially increasing your test coverage.

Let us see an example of what this means.

Below is a simple “invalid-login” test with assertions.
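(The original code screenshot is not reproduced here; the following is a hedged sketch of such a test, with a hypothetical login page, locators, and expected message.)

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

public class InvalidLoginAssertionTest {
    private final WebDriver driver = new ChromeDriver();

    @Test
    public void invalidLoginShowsErrorMessage() {
        driver.get("https://example.com/login");   // hypothetical application URL and locators
        driver.findElement(By.id("username")).sendKeys("invalid-user");
        driver.findElement(By.id("password")).sendKeys("wrong-password");
        driver.findElement(By.id("log-in")).click();

        // Every detail we care about needs its own explicit assertion.
        Assert.assertTrue(driver.findElement(By.cssSelector("div.alert-warning")).isDisplayed());
        Assert.assertEquals(driver.findElement(By.cssSelector("div.alert-warning")).getText(),
                "Incorrect username or password.");
        Assert.assertTrue(driver.findElement(By.id("username")).isDisplayed());
        Assert.assertTrue(driver.findElement(By.id("log-in")).isDisplayed());
    }
}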

If you use Applitools Visual AI, the same test changes as below:
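(Again a hedged sketch rather than the original screenshot, using the same hypothetical page; the exact Eyes API calls depend on your SDK version.)

import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.Test;

public class InvalidLoginVisualTest {
    private final WebDriver driver = new ChromeDriver();
    private final Eyes eyes = new Eyes();   // eyes.open(...) / eyes.closeAsync() live in before/after hooks (see note below)

    @Test
    public void invalidLoginShowsErrorMessage() {
        driver.get("https://example.com/login");   // hypothetical application URL and locators
        driver.findElement(By.id("username")).sendKeys("invalid-user");
        driver.findElement(By.id("password")).sendKeys("wrong-password");
        driver.findElement(By.id("log-in")).click();

        // One visual checkpoint validates the error message, layout, and everything else on the screen.
        eyes.check("Invalid login", Target.window().fully());
    }
}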

NOTE: In the above example with Applitools Visual AI, the lines eyes.open and eyes.closeAsync are typically called from your before and after test method hooks in your automation framework.

You can see the vast reduction in code required for the same “invalid-login” test when you compare the regular assertion-based code we are used to writing with the version using Applitools Visual AI. Hence, the time to implement the test improves significantly.

The true value is seen when the test runs between different builds of your product which has changes in functionality.

The standard assertion-based test will fail at the first error it encounters, whereas the test with Applitools is able to highlight all the differences between the 2 builds, which include:

  • Broken functionality – bugs
  • Missing images
  • Overlapping content
  • Changed/new features in the new UI

All the above is done without having to rely on locators, which could have changed as well, and caused your test to fail for a different reason. It is all possible because of the 99.9999% accurate AI algorithms provided by Applitools for visual comparison. 

Thus, your test coverage has increased, and it is impossible for bugs to escape your attention.

The Applitools AI algorithms can be used in any combination appropriate to the context of your application to get the maximum validation possible.

Based on the data from the survey done on “The impact of Visual AI on Test Automation”, we can see below the advantages of using Applitools Visual AI.

Visual AI: The Empirical Evidence

5.8x faster

Visual AI allows tests to be authored 5.8x faster compared to the traditional code-based approach.

5.9x more efficient

Test code powered by Visual AI increases coverage via open-ended assertions and is thus 5.9x more efficient per line of code.

3.8x more stable

Reducing brittle locators and labels via Visual AI means reduced maintenance overhead.

45% more bugs caught

Open-ended assertions via Visual AI are 45% more effective at catching bugs.

Increasing test coverage

AI technology is getting pretty good at suggesting test scenarios and test cases.

There are many products that already use AI technology under the hood to provide valuable features to users.

Increasing test coverage using Applitools Ultrafast Test Cloud

A great example of an AI-based tool is the Applitools Ultrafast Test Cloud, which allows you to scale your test execution seamlessly without having to run the tests on other browsers and devices yourself.

In the image below, you can see how easy it is to scale your web-based test execution across the devices and browsers of your choice as part of the same UI test execution using the Applitools Ultrafast Grid.
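(The screenshot itself is not reproduced here.) As a hedged sketch – assuming the Applitools Java SDK with a VisualGridRunner, where exact enum and method names can vary by version – the browser and device matrix for the Ultrafast Grid is declared once in the Eyes configuration, and a single local run produces results for every entry:

Configuration config = eyes.getConfiguration();
// Render the same visual checkpoints across several desktop browsers and emulated mobile viewports.
config.addBrowser(1280, 800, BrowserType.CHROME);
config.addBrowser(1280, 800, BrowserType.FIREFOX);
config.addBrowser(1280, 800, BrowserType.SAFARI);
config.addBrowser(1280, 800, BrowserType.EDGE_CHROMIUM);
config.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
eyes.setConfiguration(config);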

Similarly, you can use the Applitools Ultrafast Native Mobile Grid to scale test execution for your iOS and Android native applications, as shown below:

Example of using Applitools Ultrafast Native Mobile Grid for Android apps:

Example of using Applitools Ultrafast Native Mobile Grid for iOS apps:

Test data generation

This is an area that still has room to get better. My experiments showed that while actual test data was not generated, the response indicated all the important areas we need to think about from a test data perspective. This is a great starting point for any team, while ensuring no obvious area is missed out.

Debugging

We use code quality checks in our code-base to ensure the code is of high quality. While the code quality tool itself usually provides suggestions on how to fix the flagged issues, there are times when it may be tricky to fix the problem. This is an area where I got a lot of help in fixing the problem.

Example: In one of my projects, I was getting a sonar error related to “XML transformers should be secured”.

I asked ChatGPT this question:

The solution to the problem was spot-on and I was immediately able to resolve my sonar error.
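The exact ChatGPT answer isn’t reproduced here, but a typical fix for that Sonar rule is to stop the transformer from resolving external DTDs and stylesheets, along these lines:

import javax.xml.XMLConstants;
import javax.xml.transform.TransformerFactory;

public class SecureTransformerFactory {
    public static TransformerFactory newSecureInstance() {
        TransformerFactory factory = TransformerFactory.newInstance();
        // Disallow access to external DTDs and stylesheets so untrusted XML cannot trigger XXE-style attacks.
        factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
        factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_STYLESHEET, "");
        return factory;
    }
}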

Code optimization and fixing

Given a block of code, I was able to use ChatGPT to suggest a better way to implement that code, without my providing any context about my tech stack. This was mind-blowing!

Here is an example of this:

ChatGPT provided 5 suggestions, all of them very actionable – and hence very valuable!

Analysis and prediction

This is an area where I feel tools are currently still limited. There is an aspect of product context – learning from and using product- and team-specific data – to come up with very contextual analysis and eventually get to the predictive and recommendation stage.

What other AI tools exist?

While most of the examples are based on ChatGPT, there are other tools in the works as well, both from a research perspective and an application perspective.

Here are some interesting examples and resources for you to investigate more of the fascinating work that is going on.

While the above list is not exhaustive, it is focused on the area of research and development.

There are tools that leverage AI and ML already providing value to users. Some of these tools are:

Testing AI systems

The tester in me is also thinking about another aspect. How would I test AI systems? How can I contribute to building and helping the technology and tools move to the Slope of Enlightenment in Gartner’s Hype Cycle? I am very eager to learn and grow in this exciting time!

Challenges of testing AI systems

Given I do not have much experience in this (yet), there are various challenges that come to my mind when testing AI systems. But by now I was getting lazy to type, so I asked ChatGPT to list the challenges of testing AI systems – and it did a better job of articulating my thoughts… it almost felt like it read my mind.

This answer seems quite to the point, but also generic. In this case, I also do not have any specific points I can think of, given my lack of knowledge and experience in this field. So I am not disappointed.

Why generic? Because I have certain thoughts, ideas, and context in my mind. I am trying to build a story around that. ChatGPT does not have an insight into “my” thoughts and ideas. It did a great job giving responses based on the data it was trained on, keeping the complexity, adaptability, transparency, bias, and diversity in consideration.

For fun, I asked ChatGPT: “How to test ChatGPT?”

The answer was not bad at all. In fact, it gave a specific example of how to test it. I am going to give it a try. What about you?

What’s next? What does it mean for using AI in testing?

There is a lot of amazing work happening in the field. I am very happy to see this happen, and I look forward to finding ways to contribute to its evolution. The great thing is that this is not about technology, but how technology can solve some difficult problems for the users.

From a testing perspective, there is some value we can start leveraging from the current set of AI tools. But there is also a lot more to be done and achieved here!

Sign up for my webinar Unlocking the Power of ChatGPT and AI in Testing: A Real-World Look to learn more about the uses of AI in testing.

Learn more about visual AI with Applitools Eyes or reach out to our sales team.

Modern Cross Device Testing for Android & iOS Apps https://applitools.com/blog/cross-device-testing-mobile-apps/ Wed, 13 Jul 2022 20:47:15 +0000


Learn the cross device testing practices you need to implement to get closer to Continuous Delivery for native mobile apps.

What is Cross Device Testing

Modern cross device testing is the system by which you verify that an application delivers the desired results on a wide variety of devices and formats. Ideally this testing will be done quickly and continuously.

There are many articles explaining how to do CI/CD for web applications, and many companies are already doing it successfully, but there is not much information available out there about how to achieve the same for native mobile apps.

This post will shed light on the cross device testing practices you need to implement to get a step closer to Continuous Delivery for native mobile apps.

Why is Cross Device Testing Important

The number of mobile devices used globally is staggering. Based on the data from bankmycell.com, we have 6.64 billion smartphones in use.

Source: https://www.bankmycell.com/blog/how-many-phones-are-in-the-world#part-1

Even if we are building and testing an app which impacts only a fraction of this number, that is still a very large number.

The below chart shows the market share by some leading smartphone vendors over the years.

Source: https://www.statista.com/statistics/271496/global-market-share-held-by-smartphone-vendors-since-4th-quarter-2009/

Challenges of Cross Device Testing

One of the biggest challenges of testing mobile apps is that, across all manufacturers combined, there are thousands of device types in use today. Depending on the popularity of your app, this means there is a huge number of devices your users could be using.

These devices will have variations based on:

  • OS types and versions
  • potentially customized OS
  • hardware resources (memory, processing power, etc.)
  • screen sizes
  • screen resolutions
  • storage with different available capacity for each
  • Wi-Fi vs. mobile data (from different carriers)
  • And many more

It is clear that you cannot run your tests on every type of device that may be used by your users.

So how do you get quick feedback and confidence from your testing that (almost) no user will get impacted negatively when you release a new version of your app?

Mobile Test Automation Execution Strategy

Mobile Testing Strategy

Before we think about the strategy for running your automated tests for mobile apps, we need to have a good and holistic mobile testing strategy.

Along with testing the app functionality, mobile testing has additional dimensions, and hence complexities as compared with web-app testing. 

You need to understand the impact of the aspects mentioned above and see what may, or may not be applicable to you.

Here are some high-level aspects to consider in your mobile testing strategy:

  • Know where and how to run the tests – real devices, emulators / simulators available locally versus in some cloud-based device farm
  • Increasing test coverage by writing less code – using Applitools Visual AI to validate functionality and user-experience
  • Scaling your test execution – using Applitools Native Mobile Grid
  • Testing on different text fonts and display densities 
  • Testing for accessibility conformance and impact of dark mode on functionality and user experience
  • Chaos & Monkey Testing
  • Location-based testing
  • Testing the impact of Network bandwidth
  • Planning and setting up the release strategy for your mobile application, including beta testing, on-field testing, and staged rollouts. This differs between the Google Play Store and the Apple App Store
  • Building and testing for Observability & Analytics events

Once you have figured out your Mobile Testing Strategy, you now need to think about how and what type of automated tests can give you good, reliable, deterministic and fast feedback about the quality of your apps. This will result in you identifying the different layers of your test automation pyramid.

Remember: It is very important to execute all types of automated tests on every code change and every new app build. The functional / end-to-end / UI tests for your app should also be run at this time.

Additionally, you need to be able to run the tests on a local developer / QA machine, as well as in your Continuous Integration (CI) system. In the case of native / hybrid mobile apps, developers and QAs should be able to install the app on the (local) devices they have available with them and run the tests against it. For CI-based execution, you need some form of device farm – available locally in your network, or cloud-based – to allow execution of the tests.

This continuous testing approach will provide you with quick feedback and allow you to fix issues almost as soon as they creep into the app.

How to Run Functional Tests against Your Mobile Apps

Testing and automating mobile apps has additional complexities. You need to install the app on a device before your automated tests can be run against it.

Let’s explore your options for devices.

Real Devices

Real devices are ideal for running the tests. Your users / customers are going to use your app on a variety of real devices.

In order to allow proper development and testing to be done, each team member needs access to relevant types of devices (which is subject to their user-base).

However, it is not as easy to have a variety of devices available for running the automated tests, for each team member (developer / tester). 

The challenges of having the real devices could be related to:

  • cost of procuring a good variety of devices for each team member, to allow seamless development and testing work.
  • maintenance of the devices (OS/software updates, battery issues, other problems the device may have at any point in time, etc.)
  • logistical issues like time to order and get devices, tracking of the devices assigned to the team, etc.
  • deprecating / disposing the older devices that are not used / required anymore.

Hence we need a different strategy for executing tests on mobile devices. Emulators and Simulators come to the rescue!

What is the Difference between Emulators & Simulators

Before we get into specifics about the execution strategy, it is good to understand the differences between an emulator and simulator.

Android-device emulators and iOS-device simulators make it easy for any team member to easily spin up a device.

An emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system

An emulator can mimic the operating system, software, and hardware features of an Android device.

A Simulator runs on your Mac and behaves like a standard Mac app while simulating iPhone, iPad, Apple Watch, or Apple TV environments. Each combination of a simulated device and software version is considered its own simulation environment, independent of the others, with its own settings and files. These settings and files exist on every device you test within a simulation environment. 

An iOS simulator mimics the internal behavior of the device. It cannot mimic the OS / hardware features of the device.

Emulators / simulators are a great and cost-effective way to overcome the challenges of real devices. They can easily be created as per requirements by any team member and can be used for manual testing as well as for running automated tests. You can also relatively easily set up and use emulators / simulators in your CI execution environment.

While emulators / simulators may seem like they will solve all the problems, that is not the case. As with anything, you need to do a proper evaluation and figure out when to use real devices versus emulators / simulators.

Below are some guidelines that I refer to.

When to use Emulators / Simulators

  • You are able to validate all application functionality
  • There is no performance impact on the application-under-test

Why use Emulators / Simulators

  • To reduce cost
  • Scale as per needs, resulting in faster feedback
  • Can use in CI environment as well

When to use Real Devices for Testing

  • If Emulators / Simulators are used, then run “Sanity” / focussed testing on real devices before release
  • If Emulators / Simulators cannot validate all application functionality reliably, then invest in Real-Device testing
  • If Emulators / Simulators cause performance issues or slowness of interactions with the application-under-test

Cases when Emulators / Simulators May not Help

  • If the application-under-test has streaming content, or has high resource requirements
  • Applications relying on hardware capabilities
  • Applications dependent on customized OS version

Cross-Device Test Automation Strategy

The above approach of using real devices or emulators / simulators will help your team shift left and achieve continuous testing.

There is one challenge that still remains – scaling! How do you ensure your tests run correctly on all supported devices?

A classic, or rather traditional, way to solve this problem is to repeat the automated test execution on a carefully chosen variety of devices. This means that if you have 5 important types of devices and 100 automated tests, you are essentially running 500 tests.

This approach has multiple disadvantages:

  1. The feedback cycle is substantially delayed. If 100 tests took 1 hour to complete on 1 device, 500 tests would take 5 hours (for 5 devices). 
  2. The time to analyze the test results increases by 5x 
  3. The added number of tests could have flaky behavior based on device setup / location, network issues. This could result in re-runs or specific manual re-testing for validation.
  4. You need 5x more test data
  5. You are putting 5x more load on your backend systems to cater to executing the same test 5 times

We all know these disadvantages, however, there is no better way to overcome this. Or, is there?

Modern Cross-Device Test Automation Strategy

The Applitools Native Mobile Grid for Android and iOS apps can easily help you to overcome the disadvantages of traditional cross-device testing.

It does this by running your test on 1 device, but getting the execution results from all the devices of your choice, automatically. Well, almost automatically. This is how the Applitools Native Mobile Grid works:

  1. Integrate Applitools SDK in your functional automation.
  2. In the Applitools Eyes configuration, specify all the devices on which you want to do your functional testing. Added bonus: you will be able to leverage the Applitools Visual AI capabilities to also get increased functional and visual test coverage.

Below is an example of how to specify Android devices for Applitools Native Mobile Grid:

Configuration config = eyes.getConfiguration();
// Configure the 15 devices we want to validate asynchronously
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Pixel_4, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_ULTRA, ScreenOrientation.PORTRAIT));
eyes.setConfiguration(config);

Below is an example of how to specify iOS devices for Applitools Native Mobile Grid:

Configuration config = eyes.getConfiguration();
// Configure the 15 devices we want to validate asynchronously
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_mini));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XS));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_X));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XR));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_8));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_7));
eyes.setConfiguration(config);   

  3. Run the test on any 1 device – available locally or in CI. It could be a real device or a simulator / emulator.

Every call to Applitools to do a visual validation will automatically do the functional and visual validation for each device specified in the configuration above.

  4. See the results from all the devices in the Applitools dashboard.
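As a hedged sketch of what a checkpoint in that single-device run looks like (assuming the Applitools Appium Java SDK; the app name, test name, and tags are placeholders):

eyes.open(driver, "My Mobile App", "Checkout flow");   // driver is your usual Appium driver
eyes.check("Home screen", Target.window().fully());    // rendered and validated on every configured device
// ... interact with the app via Appium as usual ...
eyes.check("Cart screen", Target.window().fully());
eyes.closeAsync();                                      // per-device results appear in the Applitools dashboard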

Advantages of using the Applitools Native Mobile Grid

The Applitools Native Mobile Grid has many advantages.

  1. You do not need to repeat the same test execution on multiple devices. This will save the team members a lot of time for execution, flaky tests and result analysis
  2. Very fast feedback of test execution across all specified devices (10x faster than traditional cross device testing approach)
  3. There are no additional test data requirements
  4. You do not need to procure, build and maintain the devices
  5. There is less load on your application backend-system
  6. A secure solution where your application does not need to be shared out of your corporate network
  7. Using visual assertions instead of functional assertions gives you increased test coverage while writing less code

Read this post on How to Scale Mobile Automation Testing Effectively for more specific details of this amazing solution!

Summary of Modern Cross-Device Testing of Mobile Apps

Using the Applitools Visual AI allows you to extend coverage at the top of your Test Automation Pyramid by including AI-based visual testing along with your UI/UX testing. 

Using the Applitools Native Mobile Grid for cross device testing of Android and iOS apps makes your CI loop faster by providing seamless scaling across all supported devices as part of the same test execution cycle. 

You can watch my video on Mobile Testing 360deg (https://applitools.com/event/mobile-testing-360deg/) where I share many examples and details related to the above to include as part of your mobile testing strategy.

To start using the Native Mobile Grid, simply sign up at the link below to request access. You can read more about the Applitools Native Mobile Grid in our blog post or on our website.

Happy testing!

The Best Test Automation Framework Is… https://applitools.com/blog/what-is-the-best-test-automation-framework/ Tue, 19 Oct 2021 10:01:00 +0000


In this social world, it is very easy to be biased into believing that some practice is the best practice, or that some automation tool or framework is “the best.” When anyone makes the statement “this <some_practice> is a best practice”, or “this tool <name_of_tool> or framework <name_of_framework> is the best”, there are 2 things that come to mind:

  1. The person is promoting the practice / tool / framework as a “silver bullet” – something that will solve all problems, magically.
  2. The said practice / tool / framework actually worked best for them, in the context of the team adopting it

So when I hear anyone saying “best …”, I get suspicious and think about which category they belong to – “silver bullet” promoters, or knowledgeable folks who have done their study and determined what is working well for them.

Doing the study is extremely important to determine what is good or bad. In the context of test automation, there are a lot of parameters that need to be considered before you reach a decision about which tool or framework is going to become “the best tool / framework” for you. I classify these parameters as negotiables and non-negotiables.

I had the privilege of delivering the opening keynote at the recent Future of Testing event focused on Test Automation Frameworks on September 30th 2021. My topic was “The best test automation framework is …”. I spoke about the context, and the non-negotiable and negotiable criteria required to make any test automation framework the “best.”

How to Choose the Best Test Automation Framework…

Understanding the Context

Here are the questions to answer to determine the context:

Negotiable and Non-Negotiable Criteria

Once you understand the context, then apply that information to determine your non-negotiable and negotiable criteria.

Start Evaluating

Now that you understand all the different parameters, here are the steps to get started.

You can see the full mind map I used in my keynote presentation below.

You can also download the PDF version of this mind map from here.

Catch the Keynote Video

To hear more about how to choose the best test automation framework, you can watch the whole video from my keynote presentation here, or by clicking below.

A Comprehensive Guide to Testing and Automating Data Analytics Events on Web & Mobile https://applitools.com/blog/guide-testing-automating-data-analytics-events-web-mobile/ Mon, 14 Jun 2021 21:07:07 +0000


I have been testing Analytics for the past 10+ years. In the initial days, it was very painful and error-prone, as I was doing it manually. Over the years, as I understood this niche area better and spent time understanding the reason for, and impact of, data Analytics on any product and business, I started getting smarter about how to test analytics events well.

This post will focus on how to test Analytics for Mobile apps (Android / iOS), and also answer some questions I have gotten from the community regarding the same.

What is Analytics?

Analytics is the “air your product breathes”.  Analytics allows teams to:

  • Know their users
  • Measure outcome and value
  • Take decisions

Why is Analytics important?

Analytics allows the business team and product team to understand how well (or not) the features are being used by the users of the system. Without this data, the team would (almost) be shooting in the dark about the ways the product needs to evolve.

The analytics information is critical data for understanding where in the feature journeys the user “drops off”, and the inference will provide insights into whether the drop is because of the way the features have been designed, because the user experience is not adequate, or, of course, because there is a defect in the way the implementation has been done.

How do teams use Analytics?

For any team to know how their product is used by its users, you need to instrument your product so that it can share meaningful (non-private) information about its usage. From this data, the team can try to infer context and usage patterns, which serve as inputs to make the product better.

The instrumentation I refer to above is of different types. 

This can be logs sent to your servers – typically these are technical information about the product. 

Another form of instrumentation is analytics events. These capture the nature of the interaction and the associated metadata, and send that information to (typically) a separate server / tool. This information is sent asynchronously and does not have any impact on the functioning or performance of the product.

This is typically a 4 step process:

  • Capture
    • You need to know what data you want, and why. 
    • Implement the capturing of data based on specific user action(s)
  • Collect
    • The captured data needs to be collected in a central server. 
    • There are many Analytics tools available (commercial & open-source)
    • Many organizations end up building their own tool based on specific customisations / requirements
  • Prepare data for Analysis
    • The collected data needs to be analysed and put in context to make meaning
  • Report
    • Based on the context of the analysed data, reports would be generated that show patterns with details and reasons
    • This allows teams to evolve the product in better ways for business and their users

How to implement Analytics in your product?

Once you know what information you want to capture and when, implementing Analytics into your product goes through the same process as for your regular product features & functionalities.

Implementing Analytics

Embedding and Triggering an Analytics Library

Step 1: Embed Analytics library

The analytics library is typically a very lightweight library, and is added as a part of your web pages or your native apps (Android or iOS).

Step 2: Trigger the event

Once the library is embedded in the product, whenever the user performs any specific, predetermined action, the front-end client code captures all the relevant information about the event and then triggers a call to the analytics tool being used, passing that information along.

Ex: Trigger an analytics event when the user “clicks on the search button”.

The data in the triggered event can be sent in 2 ways:

  1. As part of query parameters in the request.
  2. As part of the POST body in the request. This is a preferred approach if the data to be sent is large.

What is an Analytics Event?

An analytics event is a simple https request sent to the Analytics tool(s) your product uses. Yes, your product may be using multiple tools to capture and visualise different types of information.

Below is an example of an analytics event.

An example of an analytics event

Let’s dissect this call to understand what it is doing (a sketch of verifying such a request in code follows the list):

  • The request in the above example is from my blog, which is using Google Analytics as the tool to capture and understand the readers of my blog.
  • The request itself is a straightforward https call to the “collect” API endpoint. 
  • The real information, as shown in this call, is all the query parameters associated with the request. 
    • For the above request, here is a closer look at the query parameters
A look at the query parameters of our example
  • The query parameters are the collection of information captured to understand what the user (reader of my blog) did.
  • The name-value pairs of query parameters may seem cryptic – and that is not wrong. It is probably designed in this fashion for the following reasons:
    • To reduce the packet size of these requests – which reduces network load, and the eventual processing load on the analytics tool as well
    • To try and mask what information is being captured. This was probably more relevant in the http days. Ex: “dnt=1” may indicate that the user has set the preference “do-not-track=true”
    • The mapping is created based on the analytics tool
  • Even if the request is sent as part of the POST body, it would carry a similar payload
  • When the request reaches the analytics tool, the tool processes each request based on the mapping it had created, and generates reports and charts from the information received
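To make this concrete, here is a minimal Java sketch of how a captured analytics request could be split into name-value pairs and checked against expected values. The URL, parameter names, and expected values below are illustrative assumptions, not taken from a real capture.

import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class AnalyticsRequestChecker {

    // Split the query string of a captured analytics request into name-value pairs
    static Map<String, String> queryParams(String capturedUrl) {
        Map<String, String> params = new HashMap<>();
        String query = URI.create(capturedUrl).getQuery();
        if (query == null) {
            return params;
        }
        for (String pair : query.split("&")) {
            String[] parts = pair.split("=", 2);
            params.put(parts[0], parts.length > 1 ? parts[1] : "");
        }
        return params;
    }

    public static void main(String[] args) {
        // Hypothetical captured request - the parameter names mimic common Google Analytics conventions
        String captured = "https://www.google-analytics.com/collect?v=1&t=pageview&dnt=1&dl=https%3A%2F%2Fexample.com";

        Map<String, String> actual = queryParams(captured);

        // Expected name-value pairs for this user action (illustrative)
        Map<String, String> expected = Map.of("t", "pageview", "dnt", "1");
        expected.forEach((name, value) -> {
            if (!value.equals(actual.get(name))) {
                throw new AssertionError("Mismatch for '" + name + "': expected " + value + ", got " + actual.get(name));
            }
        });
        System.out.println("All expected analytics parameters matched.");
    }
}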

Different ways to test Analytics events

There are different ways to test Analytics events. Let’s understand the same.

Test at the source

Well, if testing the end report is too late, then we need to shift-left and test at the source.

During Development

Based on requirements, the (front-end) developers would be adding the analytics library to the web pages or native apps. Then they set the trigger points when the event should be captured and sent to the analytics tool. 

A good practice is for the analytics event generation and trigger to be implemented as a common function / module, which will be called by any functionality that needs to send an analytics event.

This will allow the developers to write unit tests to ensure:

  1. All the functionalities that need to trigger an event are collecting the correct and expected data (which will be converted to query parameters) to be sent to the common module
  2. The event generation module is working as expected – i.e. the request is created with the right structure and parameters (as received from its callers)
  3. The event can be sent / triggered with the correct structure and details as expected

This approach will ensure that your event triggering and generation logic is well tested. These tests can run on developer machines as well as in the build pipelines / jobs on your CI (Continuous Integration) server, so you get quick feedback in case anything goes wrong.
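To illustrate the kind of unit test described above, here is a minimal JUnit 5 sketch. The AnalyticsEventBuilder module, its method name, and the parameter names are hypothetical stand-ins for whatever common event-generation module your product has.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;
import org.junit.jupiter.api.Test;

class AnalyticsEventBuilderTest {

    // Hypothetical common module: builds the analytics request URL from the event data it is given
    static class AnalyticsEventBuilder {
        static String buildRequestUrl(String endpoint, Map<String, String> eventData) {
            String query = new TreeMap<>(eventData).entrySet().stream()
                    .map(e -> e.getKey() + "=" + e.getValue())
                    .collect(Collectors.joining("&"));
            return endpoint + "?" + query;
        }
    }

    @Test
    void searchClickEventHasExpectedStructureAndParameters() {
        Map<String, String> eventData = Map.of("t", "event", "ec", "search", "ea", "click");

        String url = AnalyticsEventBuilder.buildRequestUrl("https://analytics.example.com/collect", eventData);

        // The request should target the collect endpoint and carry every expected parameter
        assertTrue(url.startsWith("https://analytics.example.com/collect?"));
        assertTrue(url.contains("ec=search"));
        assertTrue(url.contains("ea=click"));
        assertEquals("event", queryValue(url, "t"));
    }

    private static String queryValue(String url, String name) {
        for (String pair : url.substring(url.indexOf('?') + 1).split("&")) {
            String[] parts = pair.split("=", 2);
            if (parts[0].equals(name)) {
                return parts[1];
            }
        }
        return null;
    }
}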

During Manual / Exploratory Testing

While unit testing is critical to ensure all aspects of the code work as expected, the context of dynamic data from real users cannot be understood from unit tests alone. Hence, we also need System Tests / End-2-End tests to understand if analytics is working well.

A sink next to a motion-sensing paper towel dispenser, arranged so that turning on the faucet automatically activates the dispenser. Titled "when you write 2 unit tests and no integration tests".

Reference: https://devrant.com/rants/754857/when-you-write-2-unit-tests-and-no-integration-tests

Let’s look at the details of how you can test Analytics Events during Testing in any of your internal testing environments:

  1. Requirements for testing
    1. You need the ability to capture / see the events being sent from your browser / mobile app
    2. For Browsers, you can simply refer to the Network tab in the Developer Tools
    3. For Native Apps, set up a proxy server on your computer and configure the device to route its traffic through that proxy. Then launch the app and perform actions / interact with the functionality. All API requests (including Analytics event requests) will be captured by the proxy server on your computer
  2. Based on the types of actions performed by you in the browser or the native app, you will be able to verify the details of those requests from the Network tab / Proxy server.

The details include the name of the event and the values of the query parameters.

This step is very important, and different from what your unit tests are able to validate. With this approach, you would be able to verify:

  • Aspects like dynamic data (in the query parameters)
  • If any request is repeated / duplicated
  • Whether any request is not getting triggered from your product
  • If requests get triggered on different browsers or devices

All of the above can be tested and verified even if you do not have the Analytics tool set up or configured as per business requirements.

The advantage of this approach is that it complements the unit testing, and ensures that your product is behaving as expected in all scenarios.

The only challenge / disadvantage of this approach is that it is manual testing. Hence, it is very possible to miss certain scenarios or details in every manual test cycle. It is also impossible to scale and repeat this approach consistently.

As part of Test Automation 

Hence, we need a better approach. Just as unit tests are automated, the above testing activity should also be automated. The next section talks about how you can automate the testing of Analytics events as part of your System / end-2-end test automation.

Test the end-report

This is unfortunately the most common approach teams take to test if analytics events are being captured correctly – and it often ends up happening in production, after the app has been released to its users. But you need to test early. Hence the above technique of Testing at the source is critical for the team to know whether the events are being triggered, and to validate them as soon as the implementation is completed.

I would recommend this strategy only after you have completed Testing at the Source.

A collection of charts and graphs for Testing the End Report

There are pros and cons of this approach.

Pros and Cons of Testing the End Report - pros include ensuring the report is set up correctly, cons include licensing, reports not yet set up, and validating all requests are sent / captured.

The biggest disadvantage though of the above approach is that it is too late!

The biggest problem with testing the end report is that it's too late!

That said, there is still a lot of value in doing this. It indicates that your Analytics tool is configured correctly to accept the data, and that you are able to set up meaningful charts and reports that can reveal patterns and allow you to identify and prioritise the next steps to make the product better.

Automating Analytics Events 

Let’s look at the approach to automate testing of Analytics events as part of your System / end-2-end Test Automation.

We will talk separately about Web & Mobile – as both of them need a slightly different approach.

Web

Assumptions

  • The below technique assumes you are using Selenium WebDriver for your System / end-2-end automation. But you could implement a similar solution based on any other tools / technologies of your choice.

Prerequisites

  1. You already have System / end-2-end tests automated using Selenium WebDriver
  2. For each System / end-2-end test automated, have a full list of the Analytics events that are expected to be triggered, with all the expected query parameters (name & value)

Integrating with Functional Automation

There are 2 options to accomplish the Analytics event test automation for Web. They are as follows:

  1. Use WAAT

I built WAAT – Web Analytics Automation Testing – in Java & Ruby back in 2010. Integrate it into your automation framework using the instructions on the corresponding GitHub pages.

Here is an example of how this test would look using WAAT.

A test shown using WAAT - Web Analytics Automation Testing.

This approach will let you find the correct request and do the appropriate matching of parameters automatically.

  2. Selenium 4 (beta) with Chrome DevTools Protocol 

With Selenium 4 almost available, you could potentially use the new APIs to query the network requests via the Chrome DevTools Protocol.

With this approach, you will need to write code to query the appropriate Analytics request from the list of captured requests, and compare the actual query parameters with what is expected.
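For illustration, here is a minimal sketch of that idea using the Selenium 4 Java DevTools API. The target URL, the "/collect" filter, and the asserted parameter are assumptions for the example; also note that the versioned devtools package name (v85 here) depends on the Selenium and Chrome versions you have installed.

import java.util.List;
import java.util.Optional;
import java.util.concurrent.CopyOnWriteArrayList;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v85.network.Network;

public class AnalyticsViaDevTools {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();
        devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));

        // Record the URL of every outgoing request; the analytics calls will be among them
        List<String> capturedUrls = new CopyOnWriteArrayList<>();
        devTools.addListener(Network.requestWillBeSent(),
                event -> capturedUrls.add(event.getRequest().getUrl()));

        driver.get("https://example.com");   // hypothetical application-under-test
        // ... perform the user actions that should trigger the analytics event ...

        // Pick out the analytics request and compare its query parameters with what is expected
        String analyticsRequest = capturedUrls.stream()
                .filter(url -> url.contains("/collect"))   // assumed analytics endpoint
                .findFirst()
                .orElseThrow(() -> new AssertionError("Expected analytics event was not triggered"));
        if (!analyticsRequest.contains("t=pageview")) {    // illustrative expected parameter
            throw new AssertionError("Unexpected analytics payload: " + analyticsRequest);
        }

        driver.quit();
    }
}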

That said, I will be working on enhancing WAAT with a Chrome DevTools Protocol based plugin. Keep an eye out for updates to the WAAT project in the near future.

Mobile (Android & iOS)

Assumptions

  • The below technique assumes you are using Appium for your System / end-2-end automation. But you could implement a similar solution based on any other tools / technologies of your choice.

Prerequisites

  1. You already have System / end-2-end tests automated using Appium
  2. For each System / end-2-end test automated, have a full list of the Analytics events that are expected to be triggered, with all the expected query parameters (name & value)

Integrating with Functional Automation

There are 2 options to accomplish the Analytics event test automation for Mobile apps (Android / iOS). They are as follows:

  1. Use WAAT

As described for the web, you can integrate WAAT – Web Analytics Automation Testing into your automation framework using the instructions on the corresponding GitHub pages.

On the device where the test is running, you would also need to do the additional setup described in the Proxy setup for Android device instructions.

This approach will let you find the correct request and do the appropriate matching of parameters automatically.

  2. Instrument the app 

This is a customized implementation, but can work great in some contexts. This is what you can do:

  • With help from the developers, instrument the app to write each analytics event as a log message, in a clear and easily identifiable way
  • For each System / end-2-end test you run, follow these steps
    1. Have a list of expected analytics events with query parameters (in sequence) for this test
    2. Clear the logs on the device
    3. Run the System / end-2-end test
    4. Retrieve the logs from the device
    5. Retrieve all the analytics events that would be added to the logs while running the System / end-2-end tests
    6. Compare the actual analytics events captured with the expected results

This approach will allow us to validate events as they are being sent as a result of running the System / end-2-end tests. 
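As a rough sketch of steps 4 and 5 above (using the Appium Java client), the device logs can be pulled and filtered for analytics entries like this. The "ANALYTICS_EVENT:" marker is an assumption – use whatever identifiable prefix your developers agree to log the events with.

import io.appium.java_client.android.AndroidDriver;
import java.util.ArrayList;
import java.util.List;
import org.openqa.selenium.logging.LogEntries;
import org.openqa.selenium.logging.LogEntry;

public class AnalyticsLogReader {

    // Assumed marker that the developers add to every analytics log line in debug builds
    private static final String ANALYTICS_MARKER = "ANALYTICS_EVENT:";

    // Returns every analytics event found in the device logs (steps 4 and 5)
    public static List<String> capturedAnalyticsEvents(AndroidDriver driver) {
        List<String> events = new ArrayList<>();
        LogEntries logcat = driver.manage().logs().get("logcat");
        for (LogEntry entry : logcat) {
            String message = entry.getMessage();
            int markerIndex = message.indexOf(ANALYTICS_MARKER);
            if (markerIndex >= 0) {
                events.add(message.substring(markerIndex + ANALYTICS_MARKER.length()).trim());
            }
        }
        return events;
    }
}

The returned list can then be compared with the expected events for the test (step 6).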

Differences in Analytics for Mobile Apps Vs Web sites

As you may have noticed in the above sections for Web and Mobile, the actual testing of Analytics events is really the same in either case. The differences are mainly in how the events are captured, and in the proxy setup that may be required.

There is another aspect that is different for Analytics testing for Mobile.

The Analytics SDK / library that is added to the Mobile app has an optimising feature – batching! This configurable feature (in most tools) lets you customise how many requests should be collected together. Once the batch is full, or on the trigger of certain specific events (like closing the app), all the events in the batch are sent to the Analytics tool and the batch is then cleared / reset.

This feature is important for mobile devices, as users may be on the move (or using the app in Airplane mode) and may not have internet connectivity while using the app. In such cases, if the device does not cache the analytics requests, that data may be lost. Hence it is important for the app to store the analytics events and send them later, when connectivity is available.

Batching also helps minimise the network traffic generated by the app.

So when automating Mobile Analytics events, after the test completes, ensure the events are actually flushed from the app (i.e. from the batch) – only then will they be seen in the logs or proxy server, and only then can the validation be done.

While batching can be a problem for Test Automation (since the events will not be generated / seen immediately), you could take one of these 2 approaches to make your tests deterministic (a simple polling sketch follows the list):

  • Configure the batch size to be 1, or turn off batching, so the events are triggered immediately. This can be done for your apps in non-prod environments or as part of debug builds.
  • Trigger the flushing of the batch through an action in the app (ex: closing / minimizing the app). Talk to the developers to understand what actions will work for your app.
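Whichever of the two approaches you choose, the test still has to wait until the flushed events actually appear in the logs or proxy before validating them. A plain-Java polling helper for that wait might look like this (the supplier of captured events, the marker fragment, and the timeout are all assumptions):

import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.function.Supplier;

public class AnalyticsEventWaiter {

    // Polls the captured-events source until the expected event shows up, or the timeout expires
    public static boolean waitForEvent(Supplier<List<String>> capturedEvents,
                                       String expectedEventFragment,
                                       Duration timeout) throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            boolean found = capturedEvents.get().stream()
                    .anyMatch(event -> event.contains(expectedEventFragment));
            if (found) {
                return true;
            }
            Thread.sleep(500);   // re-check every half second
        }
        return false;
    }
}

Here, capturedEvents could be backed by the proxy server's captured requests or by the log-based reader sketched earlier.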

A Comprehensive System / end-2-end Test Automation Solution

I like to have my System Tests / end-2-end Test Automation solution to have the following capabilities built in:

  • Ability to run tests on multiple platforms (web and native mobile – Android & iOS)
  • Run tests in parallel
  • Tests manage their own test data
  • Rich reporting
  • Visual Testing using Applitools Visual AI
  • Analytics Events validation

See this post on Automating Functional / End-2-End Tests Across Multiple Platforms for implementation details on building a robust, scalable and maintainable cross-platform Test Automation Framework.

Answers to questions from community

  • How do you add the automated event tests on Android and iOS to the delivery pipeline?
    • If you have automated the tests for Mobile using either of the above approaches, the same would work from CI as well. Of course, the CI agent machines (where the tests will be running) would need to have the same setup as discussed here.
  • How do you make sure that new and old builds are working fine?
    • The expected analytics events are compared with every new build / app generated. Any difference found there will be highlighted as part of your System / end-2-end test execution
  • Is UI testing mandatory to do event testing?
    • There are different levels of testing for Analytics. Refer to the Test at the source section. Ask yourself the question – what is the risk to the business team IF any set of events does not work correctly or relevant details are not captured? If there is a big risk, then it is advisable to do some form of System / end-2-end test automation and integrate Analytics automation along with that.
  • Any suggestions on a shift-right approach here?
    • We should actually be shifting left. Ensure Analytics requirements are part of the requirements, and build and test this along with the actual implementation and testing to prevent surprises later.
  • How do we make sure everything is working fine in Production? Should we consider an alerting mechanism in case of sudden spike or loss of events?
    • You could have a smoke test suite that runs against Production. These tests can validate functionality and analytics events.
    • Regarding the alerts, it is always good to have these set up. The alerts would depend on the Analytics tool that you are using, and their nature would depend on the specific functionality of the application-under-test. 
  • What happens when there are a lot of events to be automated? How do you prioritize?
    • Take help from your product team to prioritise. While all events are important, not all are critical. Do a cost / value analysis and start based on that.
  • Testing at source means only UI testing or native testing? You mentioned about a debug app file, so is it possible to automate the events with native frameworks like espresso and XCUITest or only with Appium?
    • There are 2 aspects of testing at the source – development & testing. Based on this understanding, figure out what unit testing can be done, and what will trigger the events in the context of integrated testing. If your automated tests using either Espresso or XCUITest can simulate the user actions – which will in turn trigger the events from the app when the test runs – then you can do Analytics automation at that level as well.
  • Once the events are sent to the Analytics tool, the data would be stored in the database. How do you ensure that events are saved in the database? Did you have any other end to end tests to verify that? How do we make sure that? Verifying the network logs alone doesn’t guarantee that events will be dispatched to database
    • The product / app does not write the events to the database. 
      • You are testing your product, and not the Analytics tool
      • The app makes an https call to send the event with details to an independent Analytics server – which chooses to put this in some data store, in their defined schema. 
      • This aspect in most likelihood will be transparent to you. 
      • Also, in most cases, no one will have access to the Analytics tools’s data store directly. So it does not make sense to verify the data is there in the database. 
    • Another thing to consider – you / the team would have chosen the Analytics tool based on the features it offers, its reliability and stability. So you should not need to “test” the Analytics tool; instead, focus on the integration to ensure everything your team is building is tested well.
    • So my suggestion is:
      • Test at the source (unit tests + System / end-2-end tests), for each new build
      • Test the end report to ensure final integration is working well, in test and production
  • Sometimes we make use of 3rd party products like https://segment.com/ to segregate events to different endpoints. As a result sometimes only a subset of the events (basis business rules / cost optimizations) might reach the target endpoint. How to manage these in an automation environment?
    • Same answer as above.

The post A Comprehensive Guide to Testing and Automating Data Analytics Events on Web & Mobile appeared first on Automated Visual Testing | Applitools.

]]>
Automating Functional / End-2-End Tests Across Multiple Platforms https://applitools.com/blog/automating-functional-end-to-end-tests-cross-platform/ Tue, 01 Jun 2021 20:06:00 +0000 https://applitools.com/?p=29024 This post talks about an approach to Functional (end-to-end) Test Automation that works for a product available on multiple platforms.  It shares details on the thought process & criteria involved...

The post Automating Functional / End-2-End Tests Across Multiple Platforms appeared first on Automated Visual Testing | Applitools.

]]>

This post talks about an approach to Functional (end-to-end) Test Automation that works for a product available on multiple platforms. 

It shares details on the thought process & criteria involved in creating a solution that covers how to write the tests and run them across multiple platforms without any code change.

Lastly, the open-sourced solution also has examples of how to implement a test that orchestrates multiple devices / browsers to simulate multiple users interacting with each other as part of the same test.

We will cover the following topics.

Background

How many times do we see products available only on a single platform? For example, Android app only, or iOS app only?

Organisations typically start building the product on a particular platform, but then they expand to other platforms as well. 

Once the product is available on multiple platforms, do they differ in their functionality? There would definitely be some UX differences, and in some cases the way to accomplish a piece of functionality would be different, but the business objectives and features would still be similar across the platforms. Also, one platform may be ahead of the other in terms of feature parity. 

The above aspects of product development are not new.

The interesting question is – how do you build your Functional (End-2-End / UI / System) Test Automation for such products?

Case Study

To answer this question, let’s take an example of any video conferencing application – something that we would all be familiar with in these times. We will refer to this application as “MySocialConnect” for the remainder of this post.

MySocialConnect is available on the following platforms:

  • All modern browsers (Chrome / Firefox / Edge / Safari) available on laptop / desktop computers as well as on mobile devices
  • Android app via Google’s PlayStore
  • iOS app via Apple’s App Store

In terms of functionality, the majority of the functionality is the same across all these platforms. Example:

  • Signup / Login
  • Start an instant call
  • Schedule a call
  • Invite registered users to join an on-going call
  • Allow non-registered users to join a call
  • Share screen
  • Video on-off
  • Audio on-off
  • And so on…

There are also some functionality differences that would exist. Example:

  • Safe driving mode is available only in Android and iOS apps
  • Flip video camera is available only in Android and iOS apps

Test Automation Approach

So, repeating the big question for MySocialConnect is – how do you build your Functional (End-2-End / UI / System) Test Automation for such products?

I would approach Functional automation of MySocialConnect as follows:

  1. The test should be specified only once. The implementation details should figure out how to get the execution happening across any of the supported platforms
  2. For the common functionalities, we should implement the business logic only once
  3. There should be a way to address differences in business functionality across platforms
  4. The value of the automation for MySocialConnect is in simulating “real calls” – i.e. more than one user in the call, interacting with each other

In addition, I need the following capabilities in my automation:

  • Rich reports
    • With on-demand screenshots attached in the report
    • Details of the devices / browsers where the test ran
    • Understand trends of test execution results
    • Test Failure analysis capabilities
  • Support parallel / distributed execution of tests to get faster feedback
  • Visual Testing support using Applitools Visual AI
    • To reduce the number of validations I need to write (less code)
    • Increase coverage (functional and UI / UX)
    • Contrast Advisor to ensure my product meets the WCAG 2.0 / 2.1 guidelines for Accessibility
  • Ability to run on local machines or in the CI
  • Ability to run the full suite or a subset of tests, on demand, and without any code change
  • Ability to run tests across any environment
  • Ability to easily specify test data for each supported environment 

Test Automation Implementation

To help implement the criteria mentioned above, I built (and open-sourced on GitHub) my automation framework – teswiz. The implementation is based on the discussion and guidelines in [Visual] Mobile Test Automation Best Practices and Test Automation in the World of AI & ML.

Tech Stack

After a lot of consideration, I chose the following tech stack and toolset to implement my automated tests in teswiz.

Test Intent Specification

Using Cucumber, the tests are specified with the following criteria:

  • The test intent should be clear and “speak” business requirements
  • The same test should be able to execute against all supported platforms (assuming feature parity)
  • The clutter of the assertions should not pollute the test intent. That is implementation detail

Based on these criteria, here is a simple example of how the test can be written.

The tags on the above test indicate that the test is implemented and ready for execution against the Android apk and the web browser. 

Multi-User Scenarios

Given the context of MySocialConnect, implementing tests that are able to simulate real meeting scenarios would add the most value – as that is the crux of the product.

Hence, there is support built-in to the teswiz framework to allow implementation of multi-user scenarios. The main criteria for implementing such scenarios are:

  • One test to orchestrate the simulation of multi-user scenarios
  • The test step should indicate “who” is performing the action, and on “which” platform
  • The test framework should be able to manage the interactions for each user on the specified platform.

Here is a simple example of how this test can be specified.

In the above example, there are 2 users – “I” and “you”, each on a different platform – “android” and “web” respectively.

Configurable Framework

The automated tests are run in different ways – depending on the context.

Ex: In CI, we may want to run all the tests, for each of the supported platforms

However, on local machines, the QA / SDET / Developers may want to run only specific subset of the tests – be it for debugging, or verifying the new test implementation.

Also, there may be cases where you want to run the tests pointing to your application for a different environment.

The teswiz framework supports all these configurations, which can be controlled from the command-line. This prevents having to make any code / configuration file changes to run a specific subset of tests.

teswiz Framework Architecture

This is the high-level architecture of the teswiz framework.

Visual Testing & Contrast Advisor

Based on the data from the study done on the “Impact of Visual AI on Test Automation,” Applitools Visual AI helps automate your Functional Tests faster, while making the execution more stable. Along with this, you will get increased test coverage and will be able to find significantly more functional and visual issues compared to the traditional approach.

You can also scale your Test Automation execution seamlessly with the Applitools UltraFast Test Cloud and use the Contrast Advisor capability to ensure the application-under-test meets the accessibility guidelines of the WCAG 2.0 / 2.1 standards very early in the development stage.

Read this blog post about “Visual Testing – Hype or Reality?” to see some real data on how you can significantly reduce effort while increasing test coverage by using Applitools Visual AI.

Hence it was a no-brainer to integrate Applitools Visual AI in the teswiz framework to support adding visual assertions to your implementation simply by providing the APPLITOOLS_API_KEY. Advanced configurations to override the defaults for Applitools can be done via the applitools_config.json file. 

This integration works for all the supported browsers of WebDriver and all platforms supported by Appium.

Reporting

It is very important to have good, rich reports of your test execution. These reports not only help pinpoint the reasons for a failing test, but also give an understanding of execution trends and the quality of the product under test. 

I have used ReportPortal.io as my reporting tool – it is extremely easy to set up and use, and it allows me to add screenshots, log files and any other information that may be important, along with the test execution, to make root cause analysis easy.

How Can You Get Started?

I have open-sourced this teswiz framework so you do not need to reinvent the wheel. See this page to get started – https://github.com/znsio/teswiz#what-is-this-repository-about

Feel free to raise issues / PRs against the project for adding more capabilities that will benefit all.

The post Automating Functional / End-2-End Tests Across Multiple Platforms appeared first on Automated Visual Testing | Applitools.

]]>
[Visual] Mobile Test Automation Best Practices https://applitools.com/blog/visual-mobile-test-automation-best-practices/ Thu, 06 May 2021 19:46:08 +0000 https://applitools.com/?p=28817 Mobile Testing is challenging. Mobile test automation – i.e. automating tests to run on mobile devices, is even more challenging. This is because is requires some added tools, libraries and...

The post [Visual] Mobile Test Automation Best Practices appeared first on Automated Visual Testing | Applitools.

]]>

Mobile Testing is challenging. Mobile test automation – i.e. automating tests to run on mobile devices – is even more challenging. This is because it requires additional tools, libraries and infrastructure setup before you can implement and run your automated tests on mobile devices.

This post will cover the various strategies and practices you should think about for your mobile test automation – including strategy, execution environment setup, automation practices and running your automated tests in the CI pipeline.

Lastly, there is also a link to a GitHub repository which can be used to start automation for any platform – Android, iOS, web, Windows. It also supports integration with Applitools Visual AI and can run against local devices as well as cloud-based device farms.

So, let’s get started. We will discuss the following topics:

Test Designing

The Test Automation Pyramid is not just a myth or a concept. It actually helps teams shift left and get quick feedback about the quality of the product-under-test. This then allows humans to use their intelligence and skills to explore the product-under-test to find other issues which are not covered by automation. 

To make test automation successful, however, you need to consciously look at the intent of each test identified for automation, and check how low in the pyramid you can automate it.

A good automation strategy for the team would mean that you have identified the right layers of the pyramid (in context of the product and tech stack). 

Also, an important thing to highlight is that each layer of the pyramid has a certain type of impact on the product-under-test. Refer to the gray-colored inverted pyramid in the background of the above image.

This means:

  • Each unit test will test a very specific part of the product logic.
  • Each UI / end-2-end test will impact the breadth of the product functionality.

In the context of this post, we will be focusing on the top layer of the Test Automation Pyramid – the UI / end-2-end Tests.

To make the web / mobile test automation successful, you need to identify the right types of tests to automate at the top-layer of the automation pyramid.

Mobile Test Automation Strategy

For mobile test automation, you need to have a strategy for how, when and where you need to run the automated tests. 

Automated Test Execution Strategy

Based on your device strategy, you also need to think about how and where your tests will run. 

If your product supports it, you would want the tests to run on browsers / emulators / devices. These could be on a local machine or on a browser / device farm, as part of manually triggered test execution and also via automatic triggers set up in your CI server.

In addition, you also need to think about fast feedback and coverage. For this there are different considerations – sequential execution, distributed execution and parallel (with distributed) execution.

  • Sequential execution: All tests will run, in any order, but 1 at a time
  • Distributed execution (across same type of devices): 
    • If you have ‘x’ devices of the same type available, each test will run on whichever of those devices is available
    • It is preferable to have the tests distributed across the same type of device to prevent device-specific false positives / negatives
    • This will give you faster feedback
  • Parallel execution (across different types of devices, ex: One Plus 6, One Plus 7): 
    • If you have ‘x’ types of devices available, all tests will run on each of the device types
    • This will give you wider coverage
  • Parallel with Distributed execution: (combination of Parallel and Distributed execution types)
    • If you have ‘x’ types of devices available, and ‘y’ devices of each type: (ex: 3 One Plus 6, 2 One Plus 7)
      • All tests will run on each device type – i.e. One Plus 6 & One Plus 7
      • The tests will be distributed across the available One Plus 6 devices
      • The tests will be distributed across the available One Plus 7 devices

Your Test Strategy needs to have a plan for achieving coverage. Analytics data can tell you what types of devices or OS or capabilities are important to run your tests against.

Device Testing Strategy

Each product has different needs and requirements of the device capabilities to function correctly. Based on the context of your product, look at how you can identify a Mobile Test Pyramid that suits you.

The Mobile Test Pyramid allows us to quickly start testing our product without the need for a lot of real devices in the initial stages. As the product moves across environments, you can progressively move to using emulators and real devices.

Some important aspects to keep in mind here is to identify:

  • If you can use the browser for early testing
  • Does your product work well (functionally and performance-wise) in emulators
  • What type of real devices and how many do you need? Is there an OS version limitation, or specific capabilities that are required? Are there enough devices for all team members (especially since most of us work remotely these days)

In addition, do you plan to set up your real devices in a local lab setup or do you plan to use a device farm (on-premise or cloud based)? Either approach needs to be thought through, and the solution needs to be designed accordingly.

Automating the set up of the Mobile Test Execution Environment

Depending on your test automation tech stack, the setup can be daunting for people new to it. Also, depending on the versions of the tools / libraries being used, there can be differences in the execution results. 

To help new people start easily, and keep the test execution environment consistent, the setup should be automated.

In case you are using Appium from Linux or Mac OS machines, you can refer to this blog post for automatic setup of the same – https://applitools.com/blog/automatic-appium-setup/

Mobile Test Automation Solution / Framework

Your automation framework should have some basic criteria in place:

  • Test should be easy to read and understand
  • Framework design should allow for easy extensibility and scalability
  • Tests should be independent – this allows them to be run in any sequence, and also in parallel with other tests

Refer to this post for more details on designing your automation framework.

The team needs to decide its criteria for automation, and its execution. Based on the identified criteria, the framework should be designed and implemented.

Test Data Management

Test Data is often an ignored aspect. You may design and plan for everything, but if you do not have a good test data strategy, all that effort can go to waste and you will end up with sub-optimal execution.

Here are some things to strive for:

  • It is ideal if your tests can create the data they need. 
  • If that is not possible, then have seeded data that your tests can use. It is important to have sufficient data seeded in the environment to allow for test independence and parallel execution
  • Lastly, if test data cannot be created or seeded, build intelligence into your test implementation to “query” the data that each of your tests needs, and use that data intelligently and dynamically during execution

OS Support

The tests should be able to be implemented and executed on the OS the team members are using, and on the OS available on the CI agents. So if your team members are on Windows, Linux and Mac OSX, keep that in mind when implementing the tests and their utilities, ensuring they work in all OS environments.

Platform Support

Typically the product-under-test would be available to the end-users on various platforms. Ex: as an Android app distributed via the Google Play Store, as an iOS app distributed via Apple’s App Store, or via the web.

Based on the platforms your product is available on, your test framework should support all of them.

My approach for such multi-platform product-under-test is simple:

  • Tests should be specified once, and should be able to run on any platform, determined by a simple environment variable / configuration option

To that effect, I have built an open-source automation framework, that supports the automation of web, Android, iOS, and Windows desktop applications. You can find that, with some sample tests here – https://github.com/znsio/unified-e2e. You can refer to the “Getting Started” section.

Reporting

Having good test reports automatically generated as part of your test execution would allow you the following:

  • Know what happened during the execution
  • If the test fails, it is easy to do root-cause analysis, without having to rerun the test, and hope the same failure is seen again

From a mobile test automation perspective, the following should be available in your test reports:

  • For each test, the device details should be available – especially very valuable if you are running tests on multiple devices
  • Device logs for the duration of the test execution should be part of the reports
    • Clear the device logs before test execution starts, and once test completes, capture the same and attach in the reports
  • Device performance details – battery, cpu, screen refresh / frozen frames, etc
  • Relevant screenshots from test execution (the test framework should have this capability, and the implementer should use it as per each test context)
  • Video recording of the executed test
  • Ability to add tags / meaningful metadata to each test
  • Failure analysis capability and ability to create team specific dashboards to understand the tests results

In addition, reports should be available in real time. One should not need to wait for all the tests to have finished execution to see the status of the tests.

Assertion Criteria

Since we are doing end-2-end test automation, the tests we are automating are scenarios / workflows. It is quite possible that, as part of the execution, we encounter different types of inconsistencies in the product functionality. While some inconsistencies mean there is no point proceeding with further execution of that specific scenario, in many cases we can proceed with the execution. This is where using hard asserts vs soft asserts can be very helpful.

Let’s take an example of automating a banking scenario – where the user logs in, then sees the account balance, and then transfers a portion of the balance to another account.

In this case, if the user is unable to login, there is no point proceeding with the rest of the validation. So this should be a hard-assertion.

However, let’s say the test logs in, but the balance is 5000 instead of 6000. Since our test implementation takes a portion of the available balance – say 10%, for transferring to another account, the check on the balance can be a soft-assertion.

When the test completes, it should then fail with the details of all the soft-assertion failures found in the execution. 

This approach, which should be used very consciously, will allow you to get more value from your automation, instead of the test stopping at the first inconsistency it finds.
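As an illustration of the banking scenario above, here is a minimal sketch that mixes one hard assertion with soft assertions using TestNG's SoftAssert. The login, balance, and transfer helpers are hypothetical stubs standing in for real page-object interactions.

import org.testng.Assert;
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class FundTransferTest {

    @Test
    public void transferTenPercentOfBalance() {
        SoftAssert softly = new SoftAssert();

        // Hypothetical helpers standing in for real page-object interactions
        boolean loggedIn = loginAs("testUser", "testPassword");
        Assert.assertTrue(loggedIn, "Login failed - no point continuing");      // hard assertion

        double balance = getAccountBalance();
        softly.assertTrue(balance == 6000.0, "Unexpected balance: " + balance); // soft assertion - keep going

        double amountToTransfer = balance * 0.10;
        boolean transferred = transferToOtherAccount(amountToTransfer);
        softly.assertTrue(transferred, "Transfer of " + amountToTransfer + " failed");

        softly.assertAll();   // fails the test now, reporting every soft-assertion failure together
    }

    // Stub implementations - replace with real interactions against the application-under-test
    private boolean loginAs(String user, String password) { return true; }
    private double getAccountBalance() { return 5000.0; }
    private boolean transferToOtherAccount(double amount) { return true; }
}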

Visual Test Automation

Let’s take an example of validating a specific functionality in a virtual meeting platform. The scenario is: The host should be able to mute all participants.

Following are the steps to follow in a traditional automated test:

  1. Host starts a meeting
  2. More than ‘x’ participants join the meeting
  3. Host mutes all participants
  4. The assertion needs to be done for each of the participants to check if they are muted

Though feasible, Step 4 needs a significant amount of code to be written.

But what about a situation where there is a bug in the product and, while force muting, the video is also turned off for each participant? How would your traditionally automated test catch this?

A better way would be to use a combination of functional and Applitools’ AI powered visual testing in such a case, where the following would now be done:

  1. Host starts a meeting
  2. More than ‘x’ participants join the meeting
  3. Host mutes all participants
  4. Using an Applitools visual assertion, you will now be able to check the functionality as well as other existing issues, even those not directly validated by / related to your test. This automatically increases your test coverage while reducing the amount of code to be written

In addition, you want to ensure that your app looks great, consistent, and as expected on any device. So this is an easy-to-implement solution which can give you a lot of value in your quest for higher quality!

Instrumentation / Analytics Automation

One of the key ways to understand how end-users use your product (web / app) is through Analytics. 

In the case of the web, if some analytics event capture is not working as expected, it is “relatively” easy to fix the problem and do a quick release, and you will start seeing that data. 

In the case of mobile apps though, once the app is released and the user has installed it, you will only see the data correctly after you release the app again (with the fix, of course) AND the user updates it. Hence you need to plan the release approach of your mobile apps very carefully. See this webinar by Justin and me on “Stop Testing (Only) The Functionality of Your Mobile Apps!” for the different aspects of Mobile Testing and mobile test automation that one needs to think about and include in the Testing and Automation Strategy.

Coming back to analytics, it is easily possible, with some collaboration with the developers of the app, to validate the analytics events being sent as part of your mobile test automation. There are various approaches to this, but that is a separate topic for discussion.

Running the Automated Tests

The value of automation is in running the tests as often as we can, on every change in the product-under-test. This helps identify issues as soon as they are introduced in the product code.

It can also highlight that the tests need to be updated in case they are out of sync with expected functionality. 

From a mobile test automation perspective, we need to do a few additional steps to make these automated tests run in a truly non-intrusive and fully automated manner.

Automated Artifact Generation, in Debug Mode

When automating tests for the web, you do not need to worry about an artifact being generated. When the product build completes, you can simply deploy the artifact(s) to an environment, update configuration(s), and your tests can run against it.

However for mobile test automation, the process would be different.

  1. You need to build the mobile app (apk / ipa) – and have it point to a specific environment

     To clarify this – the apk / ipa would talk to backend servers via APIs. You would need this to be configurable, to point to your test environment vs the production environment.
  2. The capability to generate the artifacts for each type of environment (ex: dev, qa, pre-prod, prod) – from the same snapshot of the code – is also needed.
  3. You would need this artifact to be built in debug mode, to allow the automated tests to interact with it

Once the artifact is generated, it should automatically trigger the end-2-end tests.

Teams need to invest in generating artifacts with the above capabilities automatically. Ideally, these artifacts are generated for each change in the product code base – and the tests then run automatically against them. This makes it easy to identify which parts of the code caused the tests to fail – and hence fix the problem very quickly.

Unfortunately, I have seen an antipattern in far too many teams – the artifact is created on some developer’s machine (which may contain unexpected code) and shared over email or some other ad-hoc mechanism. If your team is doing this, stop immediately and invest in automating the mobile app generation via a CI pipeline.

Running the Tests Automated for Mobile as Part of CI Execution

Running the tests as part of CI needs a lot of thought from a mobile test automation perspective.

Things to think about:

  1. How are your CI agents configured? They should use the same automated script as discussed in the “Test Execution Environment” section
  2. Do you have access to the CI agents? Can you add real devices to those agents? Real devices require regular maintenance, and you would need access to them for that.
  3. Do you have a Device Farm (on-premise or cloud-based)? You need to be able to control the device allocation and optimal usage
  4. How will you get the latest artifact from the build pipeline automatically and pass it to your framework? The framework then needs to clean up the device(s) and install this latest artifact automatically, before starting the test execution.

The mobile test automation framework should be fully configurable from the command line (a minimal sketch of reading such switches follows the list below) to allow:

  • Local Vs CI-based execution
  • Run against local or device-farm based execution
  • Run a subset of tests, or full suite
  • Run against any supported (and implemented-for) environment
  • Run against any platform that your product is supported on – ex: android, iOS, web, etc.
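One simple way to achieve this – a generic sketch, not the teswiz implementation – is to drive the run configuration purely through JVM system properties passed on the command line, so no code or config-file change is needed between runs. All the property names below are illustrative.

public class RunConfig {

    // Reads run-time switches, e.g. passed as JVM args:
    //   -DPLATFORM=android -DRUN_IN_CI=true -DTARGET_ENVIRONMENT=qa -DTAG=@smoke
    // All property names here are illustrative - pick names that suit your framework.
    public static final String PLATFORM = System.getProperty("PLATFORM", "android");
    public static final boolean RUN_IN_CI = Boolean.parseBoolean(System.getProperty("RUN_IN_CI", "false"));
    public static final boolean USE_DEVICE_FARM = Boolean.parseBoolean(System.getProperty("USE_DEVICE_FARM", "false"));
    public static final String TARGET_ENVIRONMENT = System.getProperty("TARGET_ENVIRONMENT", "qa");
    public static final String TAG = System.getProperty("TAG", "");   // empty means run the full suite

    public static void main(String[] args) {
        System.out.printf("Running %s tests on %s against %s (CI: %s, device farm: %s)%n",
                TAG.isEmpty() ? "all" : TAG, PLATFORM, TARGET_ENVIRONMENT, RUN_IN_CI, USE_DEVICE_FARM);
    }
}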

The post [Visual] Mobile Test Automation Best Practices appeared first on Automated Visual Testing | Applitools.

]]>
A Guide To Installing Appium -The Easy Way https://applitools.com/blog/easy-appium-setup-guide/ Fri, 19 Feb 2021 01:38:10 +0000 https://applitools.com/?p=27006 I have written a script that will help set the Appium-Android environment automatically for you. The script is available for Mac OSX and Ubuntu and will set up the latest available version of the dependencies & appium itself.

The post A Guide To Installing Appium -The Easy Way appeared first on Automated Visual Testing | Applitools.

]]>

Appium is an open source test automation framework for use with native, hybrid and mobile web apps.

Developers install Appium to drive iOS, Android, and Windows apps using the WebDriver protocol which gives you the ability to automate any mobile app from any language and any test framework.

Appium released its first major version almost 7 years ago. Since then, Appium has rolled out a lot of new features and its automation backend architecture has evolved quite a lot. 

That said, getting started with implementing automated tests and executing them using Appium for the first time can be a daunting experience.

There are a lot of dependencies that need to be set up – node, the Android command-line tools, Appium itself, etc.

Appium Install Scripts

In order to make this task easier, and to avoid having to do this manually one step at a time, I have written a script that will help set up the Appium-Android environment automatically for you. The script is available for Mac OSX and Ubuntu and will install the latest available version of the dependencies & Appium itself.

You can find the scripts here:

MacOSX: setup_mac.sh

Ubuntu: setup_linux.sh

The mentioned setup scripts install all the dependencies required for implementing / running tests on Android devices. To do the setup for iOS devices, run appium-doctor, see the list of dependencies that are missing, and install them.

To ensure the setup is working properly, you can clone / download this sample repository (https://github.com/anandbagmar/AppiumJavaSample) which has 2 tests that can be executed to verify the setup. 

Refer to the README file for specifics of the prerequisites to do the setup and also for running the tests.

To get full value of your functional test automation, add the power of Visual AI to your tests. You can refer to the relevant tutorial for adding Applitools Visual AI to your Appium tests from here.

Summary

Automate your Android and Appium test execution environment installation using an automated script on MacOSX (setup_mac.sh) and Ubuntu (setup_linux.sh).

The post A Guide To Installing Appium -The Easy Way appeared first on Automated Visual Testing | Applitools.

]]>
Visual Assertions – Hype or Reality? https://applitools.com/blog/visual-ai-hype-or-reality/ Thu, 21 Jan 2021 21:26:45 +0000 https://applitools.com/?p=25829 There is a lot of buzz around Visual Testing these days. You might have read or heard stories about the benefits of visual testing. You might have heard claims like,...

The post Visual Assertions – Hype or Reality? appeared first on Automated Visual Testing | Applitools.

]]>

There is a lot of buzz around Visual Testing these days. You might have read or heard stories about the benefits of visual testing. You might have heard claims like, “more stable code,” “greater coverage,” “faster to code,” and “easier to maintain.” And you might be wondering: is this hype or reality?

So I conducted an experiment to see how true this really is.

I used the instructions from this recently concluded hackathon to conduct my experiment.

I was blown away by the results of this experiment. Feel free to try out my code, which I published on Github, for yourself.

Visual Assertions – my learnings

Before I share the details of this experiment, here are the key takeaways I had from this exercise:

  1. Functional Automation is limiting. You only simulate known user behaviors, in predetermined conditions, and in the process only validate & verify conditions that you already know about. There has to be a more optimized and value-generating approach!
  2. The Automation framework should have advanced capabilities like soft assertions and good reporting to allow quick RCA.
  3. Save time by integrating Applitools’ Visual AI Testing with the Ultrafast Grid to increase your test coverage and scale your test executions.

Let us now look at details of the experiment.

Context: What are we trying to automate?

We need to implement the following tests to check the functionality of https://demo.applitools.com/tlcHackathonMasterV1.html

  1. Validate details on landing / home page
    This should include checking headers / footers, filters, displayed items
  2. Check if Filters are working correctly
  3. Check product details for a specific product

For this exercise, I chose Selenium-Java for the automation, with Gradle as the build tool.

The code used for this exercise is available here: https://github.com/anandbagmar/visualAssertions

Step 1 – Pure Functional Automation using Selenium-Java

Once I had spent time understanding the functionality of the application, I was quickly able to automate the above-mentioned tests.

Here is some data from the same.

Refer to HolidayShoppingWithSeTest.java
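For context – and this is an illustrative sketch, not the actual code from HolidayShoppingWithSeTest.java – a purely functional check tends to look something like the following: every validation needs its own locator and its own assertion, and the locators used here are assumptions.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class HolidayShoppingFunctionalSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://demo.applitools.com/tlcHackathonMasterV1.html");

            // Element-level checks: each one needs its own locator and its own assertion
            int filterCount = driver.findElements(By.cssSelector("#sidebar_filters li")).size();   // assumed locator
            if (filterCount == 0) {
                throw new AssertionError("Expected at least one filter on the home page");
            }
            int productCount = driver.findElements(By.cssSelector("#product_grid .grid_item")).size();   // assumed locator
            if (productCount == 0) {
                throw new AssertionError("Expected products to be displayed on the home page");
            }
            // ... many more locator + assertion pairs are needed to cover headers, footers, labels, etc.
        } finally {
            driver.quit();
        }
    }
}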

Activity – Data (Time / LOC / etc.):

  • Time taken to understand the application and expected tests: 30 min
  • Time taken to implement the tests: 90 min
  • Number of tests automated: 3
  • Lines of code (actual Test method code): 65 lines
  • Number of locators used: 23
  • Test execution time – Part 1, Chrome browser: 32 sec
  • Test execution time – Part 2, Chrome browser: 57 sec
  • Test execution time – Part 3, Chrome: 29 sec
  • Test execution time – Part 3, Firefox: 65 sec
  • Test execution time – Part 3, Safari: 35 sec

Observations

A few interesting observations from this test execution:

  1. I added only superficial validations for each test.
    • I only added validations for the number of filters and the number of items in each filter. I have not added validations for the actual content of the filters.
    • Adding validations for every aspect of the page would take 8-10x the time taken for my current implementation, and the number of locators and assertions would probably also increase by 4-6x of the current numbers.
    • That definitely does not seem worth the time and effort.
  2. The tests would not capture all errors based on the assertions added, as the first assertion failure causes the test to stop.
  3. In order to check everything, the framework would need to implement and support soft assertions instead of hard assertions.
  4. The test implementation is heavily dependent on the locators in the page. Any small change in the locators will cause the test to fail. Ex: in the Product Details page, the locator of the Footer is different from that of the home page.
  5. Scaling: I was limited by how many browsers / devices I could run my tests on. I needed to write additional code to manage browser drivers, and that only for the browsers available on my laptop.

Step 2 – Add Applitools Visual Assertions to Functional Automation

When I added Applitools Visual AI to the already created Functional Automation (in Step 1), the data was very interesting.

Refer to HolidayShoppingWithEyesTest.java
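Again for context – an illustrative sketch, not the actual code from HolidayShoppingWithEyesTest.java – adding an Applitools visual checkpoint to a Selenium-Java test typically looks like the following; the app name, test name, and viewport size are placeholders.

import com.applitools.eyes.RectangleSize;
import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class HolidayShoppingVisualSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

        try {
            eyes.open(driver, "Holiday Shopping", "Landing page looks right", new RectangleSize(1200, 800));
            driver.get("https://demo.applitools.com/tlcHackathonMasterV1.html");

            // One visual checkpoint replaces the many element-level assertions of the functional-only test
            eyes.checkWindow("Home page");

            eyes.close();               // throws if visual differences were found
        } finally {
            eyes.abortIfNotClosed();    // clean up if the test did not reach close()
            driver.quit();
        }
    }
}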

Activity – Data (Time / LOC / etc.):

  • Time taken to add Visual Assertions to the existing Selenium tests: 10 min
  • Number of tests automated: 3
  • Lines of code (actual Test method code): 7 lines
  • Number of locators used: 3
  • Test execution time – Part 1, Chrome browser: 81 sec (test execution) + 38 sec (Applitools processing)
  • Test execution time – Part 2, Chrome browser: 92 sec (test execution) + 42 sec (Applitools processing)
  • Test execution time – Part 3, Chrome + Firefox + Safari + Edge + iPhone X (using the Applitools Ultrafast Test Cloud): 125 sec (test execution) + 65 sec (Applitools processing)

Observations

Here are the observations from this test execution:

  1. My test implementation got simplified
    • Fewer lines of code
    • Fewer locators and assertions
    • The test became easier to read and extend
  2. The test became more stable
    • Fewer locators and assertions
    • It does not matter if the locators of elements on the page change, as long as the user experience / look and feel remains as expected. (Of course, locators on which I need to perform actions using Selenium need to stay the same)
  3. Comprehensive coverage of functionality and user experience
    • My test focuses on specific functional aspects – but with Visual Assertions, I got validation of the functional change from the whole page, automatically

See the below examples of the nature of validations that were reported by Applitools:

Version Check – Test 1:

Filter Check – Test 2:

Product Details – Test 3:

  4. Scaling test execution is seamless
    • I needed to run the tests on only 1 browser available on my machine. I chose Chrome
    • With the Applitools Ultrafast Test Cloud, I was able to get results of the functional and visual validations across all supported platforms without any code change, and in almost the same time as a single-browser execution.

Lastly, an activity I thoroughly enjoyed in Step 2 was deleting the code that became irrelevant once Visual Assertions were in place.

Conclusion

To conclude, the experiment made it clear – Visual Assertions are not hype. The table below summarizes the differences between the two approaches discussed earlier in the post.

Activity | Pure Functional Testing | Using Applitools Visual Assertions
--- | --- | ---
Number of tests automated | 3 | 3
Time taken to implement tests | 90 min (implement + add relevant assertions) | 10 min to add Visual Assertions to the existing Selenium tests (includes the time taken to delete the assertions and locators that became irrelevant)
Lines of code (actual Test method code) | 65 lines | 7 lines
Number of locators used | 23 | 3
Number of assertions in Test implementation | 16 – validates only specific behavior based on the assertions added; the first failing assertion stops the test, so the remaining assertions do not even get executed | 3 (1 for each test) – validates the full screen, capturing all regressions and new changes in 1 validation
Test execution time | 129 sec for Chrome + Firefox + Safari (3 browsers) | 125 sec (test execution time) + 65 sec (Applitools processing time) using the Applitools Ultrafast Test Cloud for Chrome + Firefox + Safari + Edge + iPhone X (4 browsers + 1 device)

Visual Assertions help in the following ways:

  • Make your tests more stable
  • Lower maintenance, as there are fewer locators to work with
  • Increase test coverage – you do not need to add assertions for each and every piece of functionality as part of your automation. With Visual Assertions, you get full functional & user-experience validation with 1 call
  • Scaling is seamless – with the Ultrafast Test Cloud, you run your test just once, and get validation results across all supported browsers and devices

You can get started with Visual Testing by registering for a free account here. You can also take the Test Automation University course "Automated Visual Testing: A Fast Path To Test Automation Success".

The post Visual Assertions – Hype or Reality? appeared first on Automated Visual Testing | Applitools.

]]>
Stop the Retries in Tests & Reruns of Failing Tests https://applitools.com/blog/uncover-flaky-tests/ Tue, 17 Nov 2020 05:54:24 +0000 https://applitools.com/?p=24703 The key is to try and find a pattern when the intermittent failure happens, and then dig deep into the RCA for the same.

The post Stop the Retries in Tests & Reruns of Failing Tests appeared first on Automated Visual Testing | Applitools.

]]>

Have you heard of “flaky tests”?

There are a lot of articles, blog posts, podcasts, and conference talks about what "flaky tests" are and how to avoid them.

Before we look at techniques to resolve flaky tests, let’s understand the typical reasons why tests are flaky / intermittent.  

Reasons for flakiness in tests

There are many reasons why tests can be flaky / intermittent.


Let’s take a look at some concrete examples of why tests can be flaky in each of the areas mentioned above:

  1. Synchronization / timing related issues

     

    • Examples:
      • page loading taking time because of front-end processing,
      • asynchronous processing,
      • load on backend APIs / server resulting in delayed response,
      • not using correct “wait” strategies in the test implementation (see the wait-strategy sketch after this list),
      • loading frames / heavy elements / content, etc.
  2. Date / Time related

     

    • Examples:
      • incorrect time calculations across time zones or around midnight
      • incorrect date calculations around month / year boundaries
      • using an incorrect date-time format / time zone for a specific service call
  3. Network issues

     

    • Examples:
      • packet drops in the network,
      • unstable network connectivity,
      • a heavily loaded network resulting in slow connection speeds,
      • inconsistent product behaviour when simulating slow network conditions (ex: 2G speeds), etc.
  4. Browser related issues

     

    • Examples:
      • each browser behaves differently and uses different resources.
      • the plugins / extensions installed in the browser may add to fluctuations in speed of rendering, memory usage, etc.
      • different locator identification strategy required based on the browser and its viewport size.
  5. Device related issues

     

    • Examples:
      • the device hardware specs can contribute to the device performance, and hence can impact the executing test as well
  6. Data dependencies

     

    • Dynamic data, changes in availability or validity of available data (because of other colleagues / tests using it / changing it / consuming it)
    • Caching strategy may be incorrect
    • Incorrect state of the application-under-test when test starts running
  7. Locator strategy

     

    • Examples:
      • brittle, hard-wired XPaths / locators which change due to data or small (unrelated) changes in the UI
      • Locators change based on responsive rendering of the application-under-test
  8. Application-under-test Environment issue

     

    • Examples:
      • Unstable components, ongoing deployments of components
      • Instability of integrated third-party components / systems in the environment
      • Other ongoing use of the environment (maybe other types of tests, performance testing, etc.) can cause the environment to slow down, impacting test execution
  9. Test execution machine issue

     

    • Examples:
      • Other processes / applications / browser sessions running in parallel / in the background end up competing for and sharing device / browser resources, impacting the test execution
      • Limitations of the machine where the test is running (processing speed, memory, etc.)
      • An inconsistent / incorrect version of the software required to execute the tests causes tests to fail on certain sets of machines
  10. Test execution sequencing issue – Does the test fail intermittently when the sequence changes?

     

    • Examples:
      • Tests are dependent on other tests (for setting up the data / state of the application-under-test)
  11. Parallel execution issue

     

    • Examples:
      • tests fail intermittently when run in parallel
  12. Actual issue / defects in the application-under-test

     

    • Examples:
      • load on the application
      • race conditions
      • intermittent connectivity to other systems / services / DBs
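
As referenced in reason #1 above, many synchronization-related failures come down to fixed sleeps instead of conditional waits. Here is a minimal sketch of an explicit wait with Selenium in Java (assuming Selenium 4's Duration-based constructor; the locator and timeout are illustrative):

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitStrategySketch {

    static WebElement waitForResults(WebDriver driver) {
        // Avoid Thread.sleep(5000) – it is either too short (flaky) or too long (slow).
        // Instead, wait for the specific condition the test actually depends on.
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(
                ExpectedConditions.visibilityOfElementLocated(By.cssSelector(".search-results")));
    }
}
```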

Antipatterns – Approach (many) teams use to handle test flakiness!

Antipattern #1: Rerun the failing test automatically

Retry once and pray that the test passes

Retry twice and pray some more

Retry thrice and pray a whole lot more

……

If the team is lucky, the test passes, and they report it accordingly – i.e. they sweep the dirt under the carpet and claim all is clean and good.

I, however, feel that the team is unlucky in this case – they have missed an opportunity to find and report the issue, and to get the fix implemented in the right place, at the right time!

Do you think that is a good approach?

I definitely do not agree with this approach – it is not going to help anybody.
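
For context, this antipattern often shows up as a blanket retry analyzer applied to every test. Here is a minimal TestNG sketch of what I am arguing against (the class name and retry count are illustrative):

```java
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

// The antipattern: every failed test is silently re-run, hiding the real root cause
public class BlindRetryAnalyzer implements IRetryAnalyzer {

    private static final int MAX_RETRIES = 3;
    private int attempts = 0;

    @Override
    public boolean retry(ITestResult result) {
        // Returning true tells TestNG to run the failed test again
        return ++attempts <= MAX_RETRIES;
    }
}

// Usage to avoid: @Test(retryAnalyzer = BlindRetryAnalyzer.class)
```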

Antipattern #2: Intelligent retries in tests

There are tools / solutions with interesting features that either automatically retry, or can be configured to automatically retry, certain operations in the test to handle intermittent failures. These operations could be clicks, checks for the visibility of elements, network / API calls, etc.

However, this approach is also not right, in my opinion.

What if the retry succeeds, but the first attempt failed because of a performance issue or some other problem? If your test encountered this issue, what is the likelihood of the end user also facing a similar issue? Will they do a retry? Think about this. In fact, all aspects of testing should consider what the end user will do with your product, and how they will react to / handle situations where their interactions with the application-under-test give an unexpected behaviour.

Unfortunately, the tools I am referring to highlight this as a great new feature – which, IMHO, is going to promote more poor / bad practices.

What’s next?

Stop the band-aid approach of fixing the symptom, i.e. the failing test. Focus on identifying the root cause, and then fix it with the correct approach.

How to find the reasons for intermittent / flaky tests?

Unfortunately, there is no straightforward answer to this question. The key is to try and find a pattern in when the intermittent failure happens, and then dig deep into the RCA for it. Many times, the reason could be a poor test implementation itself. Once you rule that out, the reason could be a bug / defect in the application-under-test that is exposed only in specific cases, a data-related issue, or an environment / network issue.

I follow various techniques to try and understand the reason for tests to be flaky. They are:

  1. Test implementation

     

    • Does the test execute consistently (as per expectation) only in a specific sequence?
    • Does the test fail intermittently if run out of sequence, or in parallel with other tests?
    • Does the test fail intermittently on specific browser(s) / device(s)?
    • Does the test fail intermittently in specific environments?
    • Is test data changing / invalid?
    • Is the intermittent failure related to timing / wait conditions?
  2. Environment / Application-under-test stability

     

    • Is there any deployment / maintenance activity happening when the test fails?
    • Is there a specific trend in the timing when the failures appear more often?
    • Any unusual load on the environment when the tests fail?
    • Any abnormal stats from the server resource usage?
  3. Network analysis

Work with the network team to identify

  • if there were any glitches in the network connectivity when the test failure occurred
  • what was the load on the network when the test failure occurred

Since you may not be able to replicate the failures easily, it is critical to have extensive logging enabled and available to help in such investigations.

Tips & Techniques to reduce flakiness in tests

Now that you have taken the steps to do proper investigation and RCA to find the reason for the intermittent failures, look at the context of your application, team skills, infrastructure, etc. to come up with the “right” solution to fix the problem at the source.

The worst thing you can do as a fix is to blindly increase the wait time or rerun the test to hope it passes. DO NOT DO THAT!!

The “right” approach to fix a flaky test could mean any or multiple of the following, and more –

  • Architecture review and change
  • Infrastructure setup and management
  • Configurations for services / databases / hardware
  • Practices to enable quick feedback
  • Processes to foster collaboration
  • Monitoring & observability to understand environment and status of deployed systems in real-time
  • Intelligent service virtualisation for external systems in lower environments, to give you more control, predictability, and the ability to test positive, negative, and edge-case scenarios & workflows involving external systems (see the sketch after this list)
  • Logging – in order to prevent / reduce having to rerun a test in the hope of reproducing the intermittent issue, having extensive logging enabled by default is crucial
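
As referenced in the service virtualisation point above, here is a rough sketch of stubbing an external dependency with WireMock in Java (the port, endpoint, and response are hypothetical examples):

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class PaymentServiceVirtualizationSketch {
    public static void main(String[] args) {
        // Stand-in for a flaky external payment service in a lower environment
        WireMockServer paymentStub = new WireMockServer(8089);
        paymentStub.start();

        // Deterministic response – the test now controls the external dependency
        paymentStub.stubFor(get(urlEqualTo("/payments/status/12345"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"status\":\"APPROVED\"}")));

        // ... point the application-under-test / test at http://localhost:8089 ...

        paymentStub.stop();
    }
}
```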

Coming back to logging, logs of the following forms and types are valuable, and hence required:

  • Test execution logs
  • Network logs – i.e. capture of network traffic as part of functional tests to understand any drop / issue in specific network calls. Ex: HAR (HTTP archive format) capture can give you a lot of insights
  • Device logs (if applicable)
  • Screenshots (if a video recording of the failed test is not available) – see the listener sketch after this list
  • Corresponding application logs
  • System health logs – i.e. server-side memory / CPU usage and response times
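
For the screenshots point above, one common way to make the capture automatic is a test listener that grabs a screenshot whenever a test fails. A minimal TestNG + Selenium sketch (how the listener obtains the WebDriver is an assumption – wire it into your own framework):

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class ScreenshotOnFailureListener extends TestListenerAdapter {

    // Hypothetical interface that test classes implement so the listener can reach the driver
    public interface HasDriver {
        WebDriver getDriver();
    }

    @Override
    public void onTestFailure(ITestResult result) {
        Object testInstance = result.getInstance();
        if (!(testInstance instanceof HasDriver)) {
            return; // nothing to capture if the test does not expose a WebDriver
        }
        WebDriver driver = ((HasDriver) testInstance).getDriver();
        try {
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            Path target = Path.of("screenshots", result.getName() + ".png");
            Files.createDirectories(target.getParent());
            Files.copy(shot.toPath(), target);
        } catch (Exception e) {
            System.err.println("Could not capture failure screenshot: " + e.getMessage());
        }
    }
}
```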

In many cases, the above investigation needs to be a collaborative effort between various roles, as any one role may not have the full context of the whole system.

Takeaways

  • Recognise the reasons why tests could be flaky / intermittent
  • Critique the band-aid approaches to fixing flakiness in tests
  • Use the techniques discussed to identify the reasons for test flakiness
  • Fix the root cause, not the symptoms, to make your tests stable, robust, and scalable!

Happy debugging to you!

Cover Photo by Julien Moreau on Unsplash

The post Stop the Retries in Tests & Reruns of Failing Tests appeared first on Automated Visual Testing | Applitools.

]]>