Events Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/category/events/
Applitools delivers the next generation of test automation powered by AI assisted computer vision technology known as Visual AI.

Should We Fear AI in Test Automation?
https://applitools.com/blog/should-we-fear-ai-in-test-automation/
Mon, 04 Dec 2023
Richard Bradshaw explores fears around the use of AI in test automation shared during his session, The Fear Factor, at Future of Testing.


At the recent Future of Testing: AI in Automation event hosted by Applitools, I ran a session called ‘The Fear Factor’ where we safely and openly discussed some of our fears around the use of AI in test automation. At this event, we heard from many thought leaders and experts in this domain who shared their experiences and visions for the future. AI in test automation is already here, and its presence in test automation tooling will only increase in the very near future, but should we fear it or embrace it?

During my session, I asked the attendees three questions:

  • Do you have any fears about the use of AI in testing?
  • In one word, describe your feelings when you think about AI and testing.
  • If you do have fears about the use of AI in testing, describe them.

Do you have any fears about the use of AI in testing?

Where do you sit?

I’m in the Yes camp, and let me try to explain why.

Fear can mean many things, but one of them is the threat of harm. It’s that which concerns me in the software testing space. But that harm will only happen if teams/companies believe that AI alone can do a good enough job. If we start to see companies blindly trusting AI tools for all their testing efforts, I believe we’ll see many critical issues in production. It’s not that I don’t believe AI is capable of doing great testing—it’s more the fact that many testers struggle to explain their testing, so to have good enough data to train such a model feels distant to me. Of course, not all testing is equal, and I fully expect to see many AI-based tools doing some of the low-hanging fruit testing for us.

In one word, describe your feelings when you think about AI and testing.

It’s hard to disagree with the results from this question—if I were to pick two myself, I would have gone with ‘excited and skeptical.’ I’m excited because we seem to be seeing new developments and tools each week. On top of that, though, we are starting to see developments in tooling using AI outside of the traditional automation space, and that really pleases me. Combine that with the developments we are seeing in the automation space, such as autonomous testing, and the future tooling for testing looks rather exciting.

That said, though, I’m a tester, so I’m skeptical of most things. I’ve seen several testing tools now that are making some big promises around the use of AI, and unfortunately, several that are talking about replacing or needing fewer testers. I’m very skeptical of such claims. If we pause and look across the whole of the technology industry, the most impactful use of AI thus far is in assisting people. Various GPTs help generate all sorts of artifacts, such as code, copy, and images. Sometimes it’s good enough, but the majority of the time it is helping a human be more efficient. This use of AI, and this kind of messaging, excites me.

If you do have fears about the use of AI in testing, describe them here.

We got lots of responses to this question, but I’m going to summarise and elaborate on four of them:

  • Job security
  • Learning curve
  • Reliability & security
  • How it looks

Job Security

Several attendees shared they were concerned about AI replacing their jobs. Personally, I can’t see this happening. We had the same concern with test automation, and that never really materialized. Those automated tests don’t maintain themselves, or write themselves, or share the results themselves. The direction shared by Angie Jones in her talk, Where Is My Flying Car?! Test Automation in the Space Age, and by Tariq King in his talk, Automating Quality: A Vision Beyond AI for Testing, is AI that assists the human, giving them superpowers. That’s the future I hope, and believe, we’ll see: one where AI assists us and we are able to do our testing a lot more efficiently. Hopefully, this means we can release even faster, with higher quality for our customers.

Another concern shared was about skills that we’ve spent years and a lot of effort learning suddenly being replaced by AI, or made significantly easier with AI. I think this is a valid concern but also inevitable. We’ve already seen AI bring significant benefits to developers with tools like GitHub Copilot. However, I’ve got a lot of experience with Copilot, and it only really helps when you know what to ask for; the same is true of GPTs. Therefore, I think the core skills of a tester will be crucial, and I can’t see AI replacing those.

Learning Curve

If we are going to be adding all these fantastic AI tools to our tool belts, I feel it’s going to be important that we all have a basic understanding of AI. This concern was shared by the attendees. For me, if I’m going to trust a tool to do testing for me or to generate test artefacts for me, I definitely want that basic understanding. So that poses the question: where are we going to get this knowledge from?

On the flip side of this, what if we become over-reliant on these new AI tools? A concern shared by attendees was that the next generation of testers might not have some of the core skills we consider important today. Testers are known for being excellent thinkers and practitioners of critical thinking. If the AI tools are doing all this thinking for us, we run the risk of those skills losing their focus and no longer being taught. This could lead to us becoming over-reliant on such tools, and also to the tools biasing the testing that we do. But given that the community is focusing on this already, I feel it’s something we can plan for, mitigate, and ensure doesn’t happen.

Reliability & Security

Data, data, data. A lot of fears were shared over the use and collection of data. The majority of us work on applications where data, security, and integrity are critical. I absolutely share this concern. I’m no AI expert, but the best AI tools I’ve used thus far are ones that are contextual to my domain/application, and to do that, we need to train them on our data. This could lead to data bleeding and the exposure of private data, and that is a huge challenge I think the AI space has yet to solve.

One of the huge benefits of AI tooling is that it’s always learning and, hopefully, improving. But that brings a new challenge to testing. Usually, when we create an automated test, we are codifying knowledge and behavior to create something deterministic: we want it to do the same thing over and over again. This provides consistent feedback. However, an AI-based tool won’t always do the same thing over and over again; it will try to apply its intelligence, and here’s where the reliability issues come in. What it tested last week may not be the same this week, but it may give us the same indicator. This, for me, emphasizes the importance of basic AI knowledge but also that we use these tools as an assistant to our human skills and judgment.

How It Looks

Several attendees shared concerns about how these AI tools are going to look. Are they going to be a completely black box, where we enter a URL or upload an app and just click Go? Then the tool will tell us pass or fail, or perhaps it will just go and log the bugs for us. I don’t think so. As per Angie’s and Tariq’s talks that I mentioned before, I think it’s more likely these tools will focus on assistance.

These tools will be incredibly powerful and capable of doing a lot of testing very quickly. However, what they’ll struggle to do is to put all the information they find into context. That’s why I like the idea of assistance, a bunch of AI robots going off and collecting information for me. It’s then up to me to process all that information and put it into the context of the product. The best AI tool is going to be the one that makes it as easy as possible to process the masses of information these tools are going to return.

Imagine you point an AI bot at your website, and within minutes, it’s reporting accessibility issues to you, performance issues, broken links, broken buttons, layout issues, and much more. It’s going to be imperative that we can process that information as quickly as possible to ensure these tools continue to support us and don’t drown us in information.

Visit the Future of Testing: AI in Automation archive

In summary, AI is here, and more is coming. These are very exciting times in the software testing tooling space, and I’m really looking forward to playing with more new tools. I think we need to be curious about these new tools, try them, and see what sticks. The more tools we have in our tool belts, the more options we have to solve our increasingly complex testing challenges.

Future of Testing: AI in Automation Recap
https://applitools.com/blog/future-of-testing-ai-in-automation-recap/
Tue, 28 Nov 2023
Recap of the Future of Testing: AI in Automation conference. Watch the on-demand sessions to learn actionable steps to implement AI in your software testing strategy, key considerations around ethics and philosophical considerations, the importance of quality and security, and much more.


The latest edition of the Future of Testing events, held on November 7, 2023, was nothing short of inspiring and thought-provoking! Focused on AI in Automation, attendees learned how to leverage AI in software testing with top industry leaders like Angie Jones, Tariq King, Simon Stewart, and many more. All of the sessions are available now on-demand, and below, we take a look back at these groundbreaking sessions to give you a sneak peek of what to expect before you watch.

Opening Remarks

Joe Colantonio from TestGuild and Dave Piacente from Applitools set the stage for a thought-provoking discussion on reimagining test automation with AI. As technology continues to evolve at a rapid pace, it’s important for software testing professionals to adapt and embrace new tools and techniques. Joe and Dave encouraged attendees to explore the potential of AI in test automation and how it can enhance their current processes. They also touched on the challenges faced by traditional test automation methods and how AI-powered solutions can help overcome them.

Dave shared one of our latest updates – the integration of Applitools Eyes with Preflight! Learn more about Preflight.

Keynote—Reimagining Test Automation with AI by Anand Bagmar

In this opening session, Anand Bagmar explored how to reimagine your test automation strategies with AI at each stage of the test automation life cycle, including a live demo showcasing the power of AI in test automation with Applitools.

Anand first introduced the transition from Waterfall to Agile software delivery practices, and while we can’t imagine going back to a Waterfall way of working, he addressed the challenges Agile brings to the software testing life cycle. Each iteration brings more room for error across analysis, maintenance, and validation of tests. This is why testers should turn toward AI-powered test automation, with the help of tools like Applitools, to help ease the pain of Agile testing.

The session is aimed at helping testers understand the importance of leveraging AI technology for successful test automation, as well as empowering them to become more effective in their roles. Watch now.

From Technical Debt to Technical Capital by Denali Lumma

In this session, Denali Lumma from Modular dove into the concept of technical debt and proposed a new perspective on how we view it: technical capital. She walked attendees through key mathematical concepts that help calculate technical capital, as well as examples comparing PyTorch vs. TensorFlow, MySQL vs. Postgres, frameworks vs. code editors, and more.

Attendees gained insights into calculating technical capital and how it can impact the valuation of a company. Watch now.

Automating Quality: A Vision Beyond AI for Testing by Tariq King

Tariq King of EPAM Systems took attendees on a journey through the evolution of software testing and how it has been impacted by generative AI. He shared his vision for the future of automated quality, one that looks beyond just AI to also prioritize creativity and experimentation. Tariq emphasized the need for quality and not just using AI to “go faster.” The more quality you have, the more productive you will be.

Tariq also dove into the ethical implications of using AI for testing and how it can be used for good or evil. Watch the full session.

Leveraging ChatGPT with Cypress for API Testing: Hands-On Techniques by Anna Patterson

In this session, Anna Patterson of EVERFI explored practical techniques and provided hands-on examples of how to harness the combined power of Cypress and ChatGPT to create robust API tests for your applications.

Anna guided us through writing descriptive and clear test prompts using HTML status codes, with a pet store website as an example. She showed in real-time how meaningful prompts in ChatGPT can help you create a solid API test suite, while also considering the security requirements of your company. Watch now.

PANEL—Testing in the AI Era: Opportunities, Hurdles, and the Evolving Role of Engineers

Joe Colantonio, Test Guild • Janna Loeffler, mParticle • Dave Piacente, Applitools • Stephen Williams, Accenture

As the use of AI in software development continues to grow, it is important for engineers and testers to stay ahead of the curve. In this panel discussion, led by Joe Colantonio from Test Guild, panelists Janna Loeffler from mParticle, Dave Piacente from Applitools, and Stephen Williams from Accenture came together to discuss the current state of AI implementation and its impact on testing.

They talked about how AI is still in its early stages of adoption and why there may always be some level of distrust in AI technology. The panel emphasized the importance of first understanding why you might implement AI in your testing strategy so that you can determine what the technology will help to solve vs. jumping in right away. Many more incredible takes and insights were shared in this interactive session! Watch now.

The Fear Factor with Richard Bradshaw

The Friendly Tester, Richard Bradshaw, addressed the common fears about AI and automation in testing. Attendees heard Richard’s open and honest discussion on the challenges and concerns surrounding AI and automation in testing. Ultimately, he calmed many fears around AI and gave attendees a better understanding of how they can begin to use it in their organization and to their own advantage. Watch now.

Tests Too Slow? Rethink CI! by Simon Stewart

Simon Stewart from the Selenium Project discussed the latest updates on how to speed up your testing process and improve the reliability of your CI runs. He shared insights into the challenges and tradeoffs involved in this process, as well as what is to come with Selenium and Bazel.
Attendees learned how to rethink their CI approach and use these tools to get faster feedback and more reliable testing results. Watch now.

Revolutionizing Testing: Empowering Manual Testers with AI-Driven Automation by Dmitry Vinnik

Dmitry Vinnik explored how AI-driven automation is revolutionizing the testing process for manual testers. He showed how Applitools’ Visual AI and Preflight help streamline test maintenance and reduce the need for coding.

Dmitry shared the importance of test maintenance, no code solutions for AI testing, and a first-hand look at Applitools Preflight. Watch this session to better understand how AI is transforming testing and empowering manual testers to become more effective in their roles. Watch the full session.

Keynote—Where Is My Flying Car?! Test Automation in the Space Age by Angie Jones

In her closing keynote, Angie Jones of Block took us on a trip into the future to see how science fiction has influenced the technology we have today. The Jetsons predicted many futuristic inventions such as robots, holograms, 3D printing, smart devices, and drones. She explored these predictions and showed how far we have come regarding automation and technology in the testing space.

As technology continues to evolve, it is important for testers to stay updated and adapt their strategies accordingly. Angie dove into the exciting world of tech innovation and imagined the future for test automation in the space age. Watch now.


Visit the full Future of Testing: AI in Automation on-demand archive to watch now and learn actionable steps to implement AI in your software testing strategy, key considerations before you start, other ideas around ethics and philosophical considerations, the importance of quality and security, and much more.

AI and The Future of Test Automation with Adam Carmi | A Dave-reloper’s Take
https://applitools.com/blog/ai-and-the-future-of-test-automation-with-adam-carmi/
Mon, 16 Oct 2023
We have a lot of great webinars and virtual events here at Applitools. I’m hoping posts like this give you a high-level summary of the key points with plenty of...


We have a lot of great webinars and virtual events here at Applitools. I’m hoping posts like this give you a high-level summary of the key points with plenty of room for you to form your own impressions.

Dave Piacente

Curious if the software robots are here to take our jobs? Or maybe you’re not a fan of the AI hype train? During a recent session, The Future of AI-Based Test Automation, CTO Adam Carmi discussed—in practical terms—the current and future state of AI-based test automation, why it matters, and what you can do today to level up your automation practice.

  • He describes how AI can be used to overcome common everyday challenges in end-to-end test automation, how the need for skilled testers will only increase, and how AI-based tooling can help supercharge any automated testing practice.
  • He also puts his money where his mouth is by demonstrating, with concrete examples (e.g., visual validation and self-healing locators), how the never-ending maintenance overhead of tests can be mitigated using AI-driven tooling that already exists today.
  • He also discusses the role that AI will play in the future, including the development of autonomous testing platforms. These platforms will be able to automatically explore applications, add validations, and fill gaps in test coverage. (Spoiler alert: Applitools is building one, and Adam shows a bit of a teaser for it using a real-time in-browser REPL to automate the browser which uses natural language similar to ChatGPT.)

You can watch the full recording and find the session materials here, and I’ve included a quick breakdown with timestamps for ease of reference.

  • Challenges with automating end-to-end tests using traditional approaches (02:34-10:22)
  • How AI can be used to overcome these challenges (10:23-44:56)
  • The role of AI in the future of test automation (e.g., autonomous testing) (44:57-58:56)
  • The role of testers in the future (58:57-1:01:47)
  • Q&A session with the speaker (1:01:48-1:12:30)

Want to see more? Don’t miss Future of Testing: AI in Automation.

Driving Successful Test Automation at Scale: Key Insights
https://applitools.com/blog/driving-successful-test-automation-at-scale-key-insights/
Mon, 25 Sep 2023
Scaling your test automation initiatives can be daunting. In a recent webinar, Test Automation at Scale: Lessons from Top Performing Distributed Teams, panelists from Accenture, Bayer, and Eversana shared their...


Scaling your test automation initiatives can be daunting. In a recent webinar, Test Automation at Scale: Lessons from Top Performing Distributed Teams, panelists from Accenture, Bayer, and Eversana shared their insights for overcoming common challenges. Here are their top recommendations.

Establish clear processes for collaboration.
Daily standups, sprint planning, and retrospectives are essential for enabling communication across distributed teams. “The only way that you can build a quality product that actually satisfies the business requirements is [through] that environment where you’ve got the different teams coming together,” said Ariola Qeleposhi, Test Automation Lead at Accenture.

Choose tools that meet current and future needs.
Consider how tools will integrate and the skills required to use them. While a “one-size-fits-all” approach may seem appealing, it may not suit every team’s needs. Think beyond individual products to the overall solution, advised Anand Bagmar, Senior Solution Architect at Applitools. Each product team should have a test pyramid, and tests should run at multiple levels to get real value from your automation.

Start small and build a proof of concept.
Demonstrate how automation reduces manual effort and finds defects faster to gain leadership buy-in. “Proof of concepts will really help to provide a form of evidence in a way to say that, okay, this is our product, this is how we automate or can potentially automate, and what we actually save from that,” said Qeleposhi.

Consider a “quality strategy” not just a “test strategy.”
Involve all roles like business, product, dev, test, and DevOps. “When you think about it as quality, then the role does not matter,” said Bagmar.

Leverage AI and automation as “seatbelts,” not silver bullets.
They enhance human judgment rather than replace it. “Automation is a lot, at least in this instance, it’s like a seatbelt. You don’t think you’ll need it, but when you need it you better have it,” said Kyle Penniston, Senior Software Developer at Bayer.

Build, buy, and reuse.
Don’t reinvent the wheel. Use open-source tools and existing frameworks. “There will be great resources that you can use. Open-source resources, for example, frameworks that might be there that you can use to quickly get started and build on top of that,” said Bagmar.

Provide learning resources for new team members.
For example, Applitools offers Test Automation University with resources for developing automation skills.

Measure and track metrics to ensure value.
Look at reduced manual testing, faster defect finding, test coverage, and other KPIs. “You need to get some metrics really, and then you need to use that from an automation side of things,” said Qeleposhi.

The key to building a solid foundation for scaling test automation is taking an iterative, collaborative approach focused on delivering value and enhancing quality. With the right strategies and tools in place, teams can overcome common challenges and achieve automation success. Watch the full recording.

Power Up Your Test Automation with Playwright
https://applitools.com/blog/power-up-your-test-automation-with-playwright/
Thu, 31 Aug 2023
As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust...

Locator Strategies with Playwright

As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust set of features to create fast, reliable, and maintainable tests.

In a recent webinar, Playwright Ambassador and TAU instructor Renata Andrade shared several use cases and best practices for using the framework. Here are some of the most valuable takeaways for test automation engineers:

Use Playwright’s built-in locators for resilient tests.
Playwright recommends using attributes like “text”, “aria-label”, “alt”, and “placeholder” to find elements. These locators are less prone to breakage, leading to more robust tests.
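
For illustration, here is a minimal sketch, in TypeScript with Playwright Test, of what these built-in locators look like in practice. The URL, labels, and text below are hypothetical stand-ins rather than examples from the webinar.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical login page used purely to illustrate user-facing locators.
test('logs in with resilient, user-facing locators', async ({ page }) => {
  await page.goto('https://example.com/login');

  // getByLabel matches the accessible name (a <label> or aria-label).
  await page.getByLabel('Email').fill('user@example.com');

  // getByPlaceholder targets the input's placeholder attribute.
  await page.getByPlaceholder('Password').fill('s3cret!');

  // getByRole pairs the ARIA role with the element's visible name.
  await page.getByRole('button', { name: 'Sign in' }).click();

  // getByText and getByAltText assert on what the user actually sees.
  await expect(page.getByText('Welcome back')).toBeVisible();
  await expect(page.getByAltText('User avatar')).toBeVisible();
});
```

Because these locators describe elements the way a user perceives them, refactors to markup structure or CSS class names are far less likely to break the test.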

Speed up test creation with the code generator.
The Playwright code generator can automatically generate test code for you. This is useful when you’re first creating tests to quickly get started. You can then tweak and build on the generated code.

Debug tests and view runs with UI mode and the trace viewer.
Playwright’s UI mode and VS Code extension provide visibility into your test runs. You can step through tests, pick locators, view failures, and optimize your tests. The trace viewer gives you a detailed trace of all steps in a test run, which is invaluable for troubleshooting.
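
As a rough sketch of how this fits together, the playwright.config.ts below (assuming a standard Playwright Test project) records a trace on the first retry so the trace viewer has something to replay; UI mode is then just a flag on the test command.

```typescript
// playwright.config.ts - minimal sketch, assuming a standard Playwright Test project.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Keep one retry so a flaky failure actually produces a trace to inspect.
  retries: 1,
  use: {
    // Record a trace the first time a test is retried.
    trace: 'on-first-retry',
  },
});

// Typical commands from the project root:
//   npx playwright test --ui            (interactive UI mode)
//   npx playwright show-trace trace.zip (open a recorded trace)
```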

Add visual testing with Applitools Eyes.
For complete validation, combine Playwright with Applitools for visual and UI testing. Applitools Eyes catches unintended changes in UI that can be missed by traditional test automation.
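
A minimal sketch of that combination might look like the snippet below. It assumes the @applitools/eyes-playwright SDK; the app name, test name, and URL are hypothetical, and the exact API surface should be confirmed against the current SDK documentation.

```typescript
import { test } from '@playwright/test';
// Assumed SDK import; verify against the @applitools/eyes-playwright docs.
import { Eyes, Target } from '@applitools/eyes-playwright';

test('home page has no unintended visual changes', async ({ page }) => {
  const eyes = new Eyes();
  await eyes.open(page, 'My App', 'Home page visual check'); // hypothetical names

  await page.goto('https://example.com'); // hypothetical URL

  // Capture the full page; Visual AI compares it against the stored baseline.
  await eyes.check('Home page', Target.window().fully());

  await eyes.close();
});
```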

Handle dynamic elements with the right locators.
Use a combination of attributes like “text”, “aria-label”, “alt”, “placeholder”, CSS, and XPath to locate dynamic elements that frequently change. This enables you to test dynamic web pages.

Set cookies to test personalization.
You can set cookies in Playwright to handle scenarios like A/B testing where the web page or flow differs based on cookies. This is important for testing personalization on websites.
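
Here is a minimal sketch of that idea; the cookie name, value, and URL are purely illustrative.

```typescript
import { test, expect } from '@playwright/test';

// Pin the A/B experiment to one variant so the test follows a deterministic flow.
test('checkout flow for experiment variant B', async ({ context, page }) => {
  await context.addCookies([
    { name: 'ab_variant', value: 'B', url: 'https://example.com' }, // hypothetical cookie
  ]);

  await page.goto('https://example.com/checkout');

  // With the variant pinned, the assertion no longer depends on a random bucket.
  await expect(page.getByRole('heading', { name: 'Express checkout' })).toBeVisible();
});
```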

Playwright provides a robust set of features to build, run, debug, and maintain end-to-end web tests. By leveraging the use cases and best practices shared in the webinar, you can power up your test automation and build a successful testing strategy using Playwright. Watch the full recording and see the session materials.

AI-Powered Test Automation: How GitHub Copilot and Applitools Can Help
https://applitools.com/blog/ai-powered-test-automation-how-github-copilot-and-applitools-can-help/
Tue, 22 Aug 2023
Test automation is crucial for any software engineering team to ensure high-quality releases and a smooth software development lifecycle. However, test automation efforts can often be tedious, time-consuming, and require...

Can AI Autogenerate and Run Automated Tests?

Test automation is crucial for any software engineering team to ensure high-quality releases and a smooth software development lifecycle. However, test automation efforts can often be tedious, time-consuming, and require specialized skills. New AI tools are emerging that can help accelerate test automation, handle flaky tests, increase test coverage, and improve productivity.

In a recent webinar, Rizel Scarlett and Anand Bagmar discussed how to leverage AI-powered tools like GitHub Copilot and Applitools to boost your test automation strategy.

GitHub Copilot can generate automated tests.

By providing code suggestions based on comments and prompts, Copilot can help quickly write test cases and accelerate test automation development. For example, a comment like “validate phone number” can generate a full regular expression in seconds. Copilot also excels at writing unit tests, which many teams struggle to incorporate efficiently.
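
As a purely illustrative sketch, the snippet below shows the kind of helper and unit tests such a prompt might yield. The exact code Copilot suggests will vary, and this particular regex only covers simple US-style formats.

```typescript
import { test, expect } from '@playwright/test';

// Example of what a "validate phone number" prompt might produce; not actual Copilot output.
export function isValidPhoneNumber(input: string): boolean {
  // Accepts forms such as 555-123-4567, 555.123.4567, or (555) 123-4567.
  return /^\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}$/.test(input.trim());
}

test('accepts common US phone formats', () => {
  expect(isValidPhoneNumber('555-123-4567')).toBe(true);
  expect(isValidPhoneNumber('(555) 123-4567')).toBe(true);
});

test('rejects malformed input', () => {
  expect(isValidPhoneNumber('12-34')).toBe(false);
  expect(isValidPhoneNumber('call me maybe')).toBe(false);
});
```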

Applitools Execution Cloud provides self-healing test capabilities.

The Execution Cloud allows you to run tests in the cloud or on your local machine. With self-healing functionality, tests can continue running successfully even when there are changes to web elements or locators. This helps reduce flaky tests and maintenance time. Although skeptical about self-healing at first, the presenters found that Applitools handled updates properly without clicking incorrect elements.

Together, tools like Copilot and Applitools can transform your test automation.

Copilot generates the initial test cases and Applitools provides a self-healing cloud environment to run them. This combination leads to improved productivity, reduced flaky tests, and increased coverage.

Applitools Eyes and Execution Cloud offer innovative AI solutions for automated visual testing. By leveraging new technologies like these, teams can achieve test automation at scale and ship high-quality software with confidence. To see these AI tools in action and learn how they can benefit your team, watch the full webinar recording.

3 Reasons to Attend Front-End Test Fest 2023
https://applitools.com/blog/3-reasons-to-attend-front-end-test-fest-2023/
Thu, 01 Jun 2023
Hey there, automation testing enthusiasts! Joe Colantonio here, founder of TestGuild and the author of the new book Automation Awesomeness: 260 actionable affirmations to improve your QA and automation testing...

FETF Upcoming Events

Hey there, automation testing enthusiasts! Joe Colantonio here, founder of TestGuild and the author of the new book Automation Awesomeness: 260 actionable affirmations to improve your QA and automation testing skills. I’m thrilled to bring you an incredible opportunity to level up your front-end testing game. Front-End Test Fest 2023 is just around the corner, and I’m here to share with you the top three reasons why you absolutely cannot miss this event!

Reason 1: Unleash Your Front-End Testing Superpowers

Front-End Test Fest, happening on June 7, 2023, is a one-day virtual event that promises to unlock the full potential of your front-end testing skills. In an ever-changing landscape, where front-end testing is evolving rapidly, this event will provide you with the latest trends, strategies, and practical tips from industry experts. From UI/UX design to component testing and AI-powered techniques, Front-End Test Fest covers it all. Discover how front-end testing is changing and equip yourself with the tools to become a front-end testing superhero!

See the full Front-End Test Fest program

Reason 2: Learn from Industry Experts

One of the most exciting aspects of Front-End Test Fest is the incredible lineup of industry experts who will be sharing their wisdom. We’ve gathered thought leaders like Filip Hric, Ramona Schwering, Colby Fayock, Andrew Knight, Jason Lengstorf, and more. These experts will dive deep into topics such as testing like a developer, self-healing tests, and the power of AI-accelerated release pipelines. Their knowledge and experience will empower you to overcome testing challenges and deliver high-quality front-end experiences in this rapidly evolving landscape.

Reason 3: Connect and Expand Your Professional Network

Front-End Test Fest is not just about learning; it’s also an opportunity to connect with a vibrant community of automation testing enthusiasts. Engage in live Q&A sessions, participate in interactive discussions, and build valuable relationships with professionals who share your passion for test automation. The networking opportunities available during the event will expand your horizons and provide a platform for collaboration and growth.

Front-End Test Fest 2023 is the ultimate event for unleashing the power of automation testing in front-end development. I’m co-hosting the event with Bekah Hawrot Weigel, so I want to personally invite you to join us on June 7, 2023. Don’t miss this free opportunity to level up your front-end testing superpowers, connect with a vibrant community, and move ahead in this ever-evolving field. Mark your calendars and grab your virtual seat. I look forward to seeing you there!

Unlocking the Power of ChatGPT and AI in Test Automation Q&A
https://applitools.com/blog/chatgpt-and-ai-in-test-automation-q-and-a/
Thu, 20 Apr 2023
Last week, Applitools hosted Unlocking the Power of ChatGPT and AI in Test Automation: Next Steps, where I explored how artificial intelligence, specifically ChatGPT, can revolutionize the field of test...

ChatGPT webinar Q&A

Last week, Applitools hosted Unlocking the Power of ChatGPT and AI in Test Automation: Next Steps, where I explored how artificial intelligence, specifically ChatGPT, can revolutionize the field of test automation. In this article, I’ll share the audience Q&A as well as some of the results of the audience polls. Be sure to read my previous article, where I summarized the key takeaways from the webinar. You can also find the full recording, session materials, and more in our event archive.

Audience Q&A

Our audience asked various questions about the data and intellectual property when using AI and adding AI into their own test automation processes.

Intellectual properties when using ChatGPT

Question: To avoid disclosing company intellectual properties to ChatGPT, is it not better to build a “private” ChatGPT / large language model to use and train for test automation inside the company while securing the privacy?

My response: The way a lot of organizations are proceeding is setting up private ChatGPT-like infrastructure to get the value of AI. I think that’s a good way to proceed, at least for now.

Data privacy when using ChatGPT

Question: What do you think about feeding commercial data (requirements, code, etc.) to ChatGPT and/or the OpenAI API (e.g., gpt-3.5-turbo), given data privacy concerns connected to recent privacy issues like the Samsung incident, the exposure of ChatGPT chats, and so forth?

My response: Feeding public data is okay, because it’s out in the public space anyway, but commercial data could be public or it could be private, and that could become an issue. The problem is we do not understand enough about how ChatGPT is using the data or the questions that we are asking it. It is constantly learning, so if you feed it a very unique type of question that it has never come across before, the algorithm is intelligent enough to learn from that. It might give you the wrong answer, but it is going to learn based on your follow-up questions, and it is going to use that information to answer someone else’s similar question.

Complying with data regulations

Question: How can we ensure that AI-driven test automation tools maintain compliance with data privacy regulations like GDPR and CCPA during the testing process?

My response: It’s a tough question. I don’t know how we can ensure that, but if you are going to use any AI tool, you must make sure you are asking very focused, specific questions that don’t disclose any confidential information. For example, in my demo, I had a piece of code pointing to a website and asked a very specific question about how to implement it. The answer might draw on some complex, freely available algorithms or anything else, but you take that solution, make it your own, and then implement it in your organization. That might be safer than disclosing anything more. This is a very new area right now. It’s better to err on the side of caution.

Adding AI into the test automation process

Question: Any suggestions on how to embed ChatGPT/AI into automation testing efforts as a process more than individual benefit?

My response: I unfortunately do not have an answer to this yet. It is something that needs to be explored and figured out. One thing I will add is that even though it may be similar to many others, each product is different. The processes and tech stacks are going to vary for all these types of products you use for testing and automation, so one solution is not going to fit everyone. Auto-generated code will go up to a certain level, but at least as of now, the human mind is still very essential to use it correctly. So it’s not going to solve your problems; it is just going to make solving them easier. The examples I showed are ways to make it easier in your own case.

Using AI for API and NFRs

Question: How effective would AI be for API and NFR?

My response: I could ask a question to give me a performance test implementation approach for the Amazon website, and it gives me a performance test strategy. If I ask it a question to give me an implementation detail of what tool I should use, it is probably going to suggest a few tools. If I ask it to build an initial script for automating this performance test, it is probably going to do that for me as well. It all depends on the questions you are asking to proceed from there, and I’m sure you’ll get good insights for NFRs.

Using AI in a dedicated private cloud instance

Question: Our organization is very particular about intellectual property protection, and they might deny us from using third-party cloud tools. What solution is there for this?

My response: Applitools uses the public cloud. I use a public cloud for my learning and training and demos that I do, but a lot of our customers actually use the dedicated cloud instance, which is hosted only for them. Only they have access to that, so that takes care of the security concerns that might be there. We also work with our customers, from compliance and security perspectives, to ensure that all their questions are answered and that everything conforms to their standards.

Using AI for mobile test automation

Question: Do you think AI can improve quality of life for automation engineers working on mobile app testing too? Or mostly web and API?

My response: Yes, it works for mobile. It works for anything that you want. You just have to try it out and be specific with your questions. What I learned from using ChatGPT is that you need to learn the art of asking the questions. It is very important in any communication, but now it is becoming very important in communicating with tools as well to get you the appropriate responses.

Audience poll results

In the live webinar, the audience was asked “Given the privacy concerns, how comfortable are you using AI tools for automation?” Of 105 votes, about half of the respondents said they would be somewhat or very comfortable using AI tools for automation.

  • Very comfortable: 16.95%
  • Somewhat comfortable: 33.05%
  • Not sure: 24.59%
  • Somewhat uncomfortable: 14.41%

Next steps

You can take advantage of AI today by using Applitools to test web apps, mobile apps, desktop apps, PDFs, screenshots, and more. Applitools offers SDKs that support several popular testing frameworks in multiple languages, and these SDKs install directly into your projects for seamless integration. You can try it yourself by claiming a free account or requesting a demo.

Check out our events page to register for an upcoming session with our CTO and the inventor of Visual AI, Adam Carmi. For our clients and friends in the Asia-Pacific region, be sure to register to attend our upcoming encore presentation of Unlocking the Power of ChatGPT and AI in Test Automation happening on April 26th.

Unlocking the Power of ChatGPT and AI in Test Automation Key Takeaways
https://applitools.com/blog/chatgpt-and-ai-in-test-automation-key-takeaways/
Tue, 18 Apr 2023
Editor’s note: This article was written with the support of ChatGPT. Last week, Applitools hosted Unlocking the Power of ChatGPT and AI in Test Automation: Next Steps, when I discussed...

ChatGPT webinar key takeaways

Editor’s note: This article was written with the support of ChatGPT.

Last week, Applitools hosted Unlocking the Power of ChatGPT and AI in Test Automation: Next Steps, when I discussed how artificial intelligence – specifically ChatGPT – can impact the field of test automation. The webinar delved into the various applications of AI in test automation, the benefits it brings, and the best practices to follow for successful implementation. With the ever-growing need for efficient and effective testing, the webinar is a must-watch for anyone looking to stay ahead of the curve in software testing. This blog article recaps the key takeaways from the webinar. Also, you can find the full recording, session materials, and more in our event archive.

Takeaways from the previous webinar

I started with a recap of the takeaways from the previous webinar, Unlocking the Power of ChatGPT and AI in Testing: A Real-World Look. The webinar focused on two main aspects from a testing perspective: testing approach mindset (strategy, design, automation, and execution) and automation perspective. ChatGPT was able to help with automation by guiding me to automate test cases more quickly and effectively. ChatGPT was also able to provide solutions to programming problems, such as giving a solution to a problem statement and refactoring code. However, there were limitations to ChatGPT’s ability to provide answers, particularly in terms of test execution, and some challenges when working with large blobs of code.
If you didn’t catch the previous webinar, you can still watch it on demand.

What’s new in AI since the previous webinar

Since we hosted the previous webinar, there have been many updates in the AI chatbot space. A few key updates we covered in the webinar include:

  • ChatGPT has become accessible on laptops, phones, and Raspberry Pi, and can be run on your own devices.
  • Google Bard was released, but it is limited to the English language, cannot continue conversations, and cannot help with coding.
  • GPT-4 was released, which accepts image and text inputs and provides text outputs.
  • ChatGPT Plus was introduced, offering better reasoning, faster responses, and higher availability to users.
  • Plugins can now be built on top of ChatGPT, opening up new and powerful ways of interaction.

During the webinar, I gave a live demo of some of ChatGPT’s updates, where it was able to provide code implementation and generate unit tests for a programming question.

Using AI to address common challenges in test automation

Next, I discussed the actual challenges in automation and how we can leverage AI tools to get better results. Those challenges include:

  • Slow and flaky test execution
  • Sub-optimal, inefficient automation
  • Incorrect, non-contextual test data

Using AI to address flakiness in test automation

This section specifically focused on flaky tests related to UI or locator changes, which can be identified using consistent logging and reporting. I advised against using a retry listener to handle flaky tests and instead suggested identifying and fixing the root cause. I then demonstrated an example of a test failing due to a locator change and discussed ways to solve this challenge.

Read our step-by-step tutorial to learn how to use visual AI locators to target anything you need to test in your application and how it can help you create tests that are more resilient and robust.

Using AI to address sub-optimal or inefficient automation

Next, I discussed sub-optimal or inefficient automation and how to improve it. I used GitHub Copilot to generate a new test and auto-generate code. I explained how to integrate GitHub Copilot and JetBrains Aqua with IntelliJ and how to use Aqua to find locators for web elements. Then, I showed how to implement code in the IDE and interact with the application to perform automation.

Using AI to address incorrect, non-contextual test data

Next, I discussed the importance of test data in automation testing and related challenges. There are many libraries available for generating test data that can work in the context of the application. Aqua can generate test data by right-clicking and selecting the type of text to generate. Copilot can generate data for “send keys” commands automatically. It’s important to have a good test data strategy to avoid limitations and increase the value of automation testing.
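
The webinar demo used Aqua and Copilot in IntelliJ, but the underlying idea is tool-agnostic. As a rough sketch, the TypeScript snippet below generates contextual test data with the @faker-js/faker library; the form fields and URL are hypothetical.

```typescript
import { test } from '@playwright/test';
// One of many data-generation libraries; any equivalent would do.
import { faker } from '@faker-js/faker';

test('sign-up form accepts a generated user', async ({ page }) => {
  // Fresh, realistic values on every run instead of hard-coded, stale strings.
  const user = {
    fullName: faker.person.fullName(),
    email: faker.internet.email(),
    company: faker.company.name(),
  };

  await page.goto('https://example.com/sign-up'); // hypothetical page
  await page.getByLabel('Full name').fill(user.fullName);
  await page.getByLabel('Work email').fill(user.email);
  await page.getByLabel('Company').fill(user.company);
  await page.getByRole('button', { name: 'Create account' }).click();
});
```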

Potential pitfalls of AI

AI is not a magic solution and requires conscious and contextual use. Over-reliance on AI can lead to incomplete knowledge and lack of understanding of generated code or tests. AI may replace certain jobs, but individuals can leverage AI tools to improve their work and make it an asset instead of a liability.

Data privacy is also a major concern with AI use, as accidental leaks of proprietary information can occur. And AI decision-making can be problematic if it does not make the user think critically and understand the reasoning behind the decisions. Countries and organizations are starting to ban the use of AI tools like ChatGPT due to concerns over data privacy and accidental leaks.

Conclusion

Overall, I was skeptical about AI in automation at first, but that skepticism has reduced significantly. You must embrace technology or you risk being left behind. Avoid manual repetition, and leverage automation tools to make work faster and more interesting. The automation cocktail (using different tools in combination) is the way forward.

Focus on ROI and value generation, and make wise choices when building, buying, or reusing tools. Being agile is important, not just following a methodology or procedure. Learning, evolving, iterating, communicating, and collaborating are key to staying agile. Upskilling and being creative and innovative are important for individuals and teams. Completing the same amount of work in a shorter time leads to learning, creativity, and innovation.

Be sure to read my next article, where I answer questions from the audience Q&A. If you have any other questions, be sure to reach out on Twitter or LinkedIn.

The Role of Automation in Mobile Continuous Testing Q&As
https://applitools.com/blog/the-role-of-automation-in-mobile-continuous-testing-qas/
Thu, 06 Apr 2023
My answers to questions I received at TAU Conference 2023. Recently it was a pleasure to give a talk about the role of automation in Mobile Continuous Testing at the...

The Role of Automation in Mobile Continuous Testing

My answers to questions I received at TAU Conference 2023

Recently it was a pleasure to give a talk about the role of automation in Mobile Continuous Testing at the TAU Conference powered by Applitools, and it was an excellent opportunity to meet different test automation experts and attendees from all over the world.

If you missed the session, you can find the on-demand recording here: applitools.info/kte

During the session, I got various questions about mobile testing and CI, so I wrote this blog to answer them all.

Let’s get started!

Question 1: Automating tests for different devices

Question: What is the most efficient approach when automating tests for the same mobile app on different (Android/iOS) devices? Automate with a tool that works for both or a tool for each one individually?

My answer: The best answer always depends on the team’s needs and goals. Using a tool that works for both platforms, such as Appium, is a good choice, because this approach saves time and effort in test automation: it allows you to write test scripts once and run them on multiple devices and platforms.

But sometimes the team decides to use platform-specific testing tools such as Espresso for Android and XCUITest for iOS, because they want the mobile developers to work closely with the test engineers in writing the UI tests, believing it’s a shared responsibility.

Additionally, they can use the same programming language, test scripts will live in the same repositories, and the native locators from the app will be directly accessible in this case.

Question 2: Testing multi-platform apps

Question: Is the approach to/process for testing a multi-platform app like Evernote (mobile, desktop, web) different from a mobile-only app?

My answer: Testing a multi-platform app like Evernote requires a different approach for the following reasons:

  • Each platform’s user interface, operating system, hardware, and software requirements (mobile, desktop, web) differ, so testing must account for these differences and ensure that the app operates as intended.
  • To ensure consistency across all platforms, including features like note-taking, search, sync, and collaboration, each platform must be tested separately, as well as its integration.
  • Testing the app across various platforms ensures the design, navigation, and features are consistent and intuitive.

In order to know how to deal with different platforms, these things should be included in the test plan and strategy at the beginning.

Question 3: Upcoming course content

Question: Is the outline described in your presentation today part of an existing course? Or will it be part of a future overhaul of your existing course on TAU?

My answer: No, it’s not in my existing courses, but maybe we can have it in the future.

Question 4: Manual testing for mobile CI

Question: When in this mobile CI workflow would manual testing fit in? Or is it suggested that manual testing is not needed?

My answer: If your team would like to achieve fully automated CI/CD to build, test, package, and release the mobile apps to the app stores, manual testing will not fit in the lifecycle, because the goal is to make the process as automated as possible. But if the target is to have continuous integration only, you can build and package the apps and then send them to the manual testers to do the testing activities. After that, if there are no issues in the build or the release, you can continue with the release process, either manually or automated.

It always depends on the team goal and the release cadence. Some teams or companies release every one or two weeks; in this case, manual testing will be a blocker and releases can take a long time.

Question 5: Manual testing with an automation setup

Question: Is there still a need for manual testing, even with an automation setup?

My answer: Yes, manual testing can be used to identify issues not covered by automated tests; sometimes, different scenarios require a human eye, interaction, or decision.

It always depends on the type of mobile app and the functionalities we need to cover; this can be added to the test plan and the test strategy from the beginning.

Question 6: Making apps testable

Question: Isn’t it also necessary that apps are made testable?

My answer: Yes, I totally agree. As a mobile development team, we should consider making the mobile app testable by following these tips:

  • By writing modular code, you can test individual components independently.
  • Make your app’s logic, data, and presentation layers separate using design patterns like MVVM or MVP. This will make testing easier.
  • Decouple dependencies between classes through dependency injection. This will make it easier to swap out components during testing (see the short sketch after this list).
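
As a rough illustration of that third tip, here is a minimal constructor-injection sketch, written in TypeScript for brevity; the same pattern applies in Kotlin, Swift, or React Native code, and every name below is hypothetical.

```typescript
// The dependency is described by an interface rather than a concrete class.
interface PaymentGateway {
  charge(amountCents: number): Promise<boolean>;
}

class CheckoutViewModel {
  // The gateway is passed in, not constructed inside the view model.
  constructor(private readonly gateway: PaymentGateway) {}

  async payNow(amountCents: number): Promise<string> {
    const ok = await this.gateway.charge(amountCents);
    return ok ? 'Payment accepted' : 'Payment failed';
  }
}

// A test can swap in a fake without touching real payment infrastructure.
class FakeGateway implements PaymentGateway {
  async charge(_amountCents: number): Promise<boolean> {
    return true;
  }
}

async function demo(): Promise<void> {
  const viewModel = new CheckoutViewModel(new FakeGateway());
  console.log(await viewModel.payNow(1999)); // "Payment accepted"
}

demo();
```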

Question 7: Using tools for mobile test automation

Question: If you’re not strong in mobile app automation, for something like a React Native app, would you stick with something like Appium (which is familiar with native automation/Selenium web), or would it be better to try the Maestro (or something similar)?


My answer: A POC (proof of concept) project is always the best approach, in my opinion. If you already have a React Native app, you can create a project and try to automate the app with Appium. As you know, Appium is a black-box testing tool, so in the end you only need the .apk or .ipa files to be able to automate the apps, which is no different with React Native apps, because in the end you already have two native apps (iOS and Android). Or you can try Maestro; it’s a new framework, and it may be beneficial in your case.

Question 8: Staying up-to-date on industry skills

Question: Does the coming of Flutter make the tester’s life easier?

My answer: I cannot guarantee that; as testers, we face new challenges every day. For instance, we must learn about new testing tools and technologies, which requires us to stay on top of our game. Meanwhile, as we can see these days, AI is booming and is affecting our daily work, which makes acquiring unique skills indispensable.

Question 9: Preparing for ISTQB Foundation Certification

Question: How best to prepare for an ISTQB Foundation Certification?

My answer: It depends on which ISTQB certificate you want to achieve. For instance, if it’s the Foundation Level, you need at least six months of experience with software testing to understand the fundamentals and the concepts in the Syllabus.


As you can see in the following image, the certificates have different tracks and levels. Each level has specific requirements and calls for a different amount of experience. You can find all the details on the official ISTQB website, including the Syllabus and the sample questions with answers.

ISTQB certification levels
Image source: ISTQB website

Besides reading the Syllabus, your background will be enough.

Wrapping up

I hope you enjoyed these answers to your questions, and I look forward to seeing you at the TAU and Applitools conferences in the near future.

Please do not hesitate to contact me if you have any other questions.

My LinkedIn: https://www.linkedin.com/in/moataz-nabil/

My Twitter: @moatazeldebsy

Thank you for reading.

Happy testing!

Upcoming Content for Test Automation University
https://applitools.com/blog/upcoming-content-for-test-automation-university/
Tue, 14 Mar 2023
Test Automation University (TAU) is the premier platform for learning about software testing and automation. Powered by Applitools, TAU offers dozens of courses on all testing topics imaginable from the...

Visual testing learning path

Test Automation University (TAU) is the premier platform for learning about software testing and automation. Powered by Applitools, TAU offers dozens of courses on all testing topics imaginable from the world’s best instructors for FREE.

In my previous article, I shared a breakdown of TAU by the numbers. I covered all the stats and metrics about TAU today. Now, in this article, I’d like to share the future by publicly announcing all the new content we have scheduled in our development pipeline!

New courses

Test Automation Design Patterns by Sarah Watkins

Up first is Test Automation Design Patterns by my good friend Sarah Watkins. Anyone who’s ever worked on a test project knows that good test cases don’t just happen – they’re built intentionally. In this upcoming course, Sarah teaches everything she knows about designing and delivering test automation projects at scale, from formulating tests to follow Arrange-Act-Assert, to using dependency injection for handling test inputs and other objects, to deciding between page objects or the Screenplay Pattern. This will be an advanced course to really help you become the ultimate SDET, or “Software Development Engineer in Test.”
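
As a purely illustrative sketch, and not material from the course, a test that follows Arrange-Act-Assert might look like this in TypeScript:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical class under test, kept tiny to keep the focus on test structure.
class ShoppingCart {
  private items: { name: string; priceCents: number }[] = [];

  add(name: string, priceCents: number): void {
    this.items.push({ name, priceCents });
  }

  totalCents(): number {
    return this.items.reduce((sum, item) => sum + item.priceCents, 0);
  }
}

test('cart totals its items', () => {
  // Arrange: build the objects and state the test needs.
  const cart = new ShoppingCart();
  cart.add('Keyboard', 4999);
  cart.add('Mouse', 2499);

  // Act: perform the single behavior under test.
  const total = cart.totalCents();

  // Assert: verify the observable outcome.
  expect(total).toBe(7498);
});
```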

GitHub Actions for Testing by Matthias Zax

Next up is GitHub Actions for Testing by Matthias Zax. Automation means much more than merely scripting test case procedures. It includes automating repetitive segments of your development process, like running tests and reporting results. GitHub Actions are a marvelous, lightweight way to add helpful workflows to manage your source code repositories. In this course, Matthias teaches how to create GitHub Actions specifically for testing and quality concerns. I know GitHub Actions have transformed how I run my own GitHub projects, and I’m sure they’ll make a big difference for you, too!

Behavior-Driven Development with SpecFlow by Bas Dijkstra

Next is a course for a topic very near and dear to my own heart: Behavior-Driven Development with SpecFlow by Bas Dijkstra. Behavior-Driven Development, or “BDD” for short, is a set of pragmatic practices that help you develop high-quality, valuable behaviors. Not only will Bas walk through the phases of Discovery, Formulation, and Automation, but he will show you how to use SpecFlow – one of the best BDD test frameworks around – to automate Gherkin scenarios. He will also teach how to integrate SpecFlow with popular libraries like Selenium WebDriver and RestAssured.NET.

I hope you’re excited for those three upcoming courses. Course development takes time, so we don’t have a hard publishing date set, but be on the lookout for them this year. I know Sarah, Matthias, and Bas are all hard at work recording their chapters.

New learning paths

That’s not all our new content. We have a lot more to announce today, so let’s keep going.

Visual Testing Learning Path

We have a new learning path to announce: Visual Testing! Applitools’ very own Matt Jasaitis is developing a three-part series on visual testing. The example code uses Selenium WebDriver with Java, but the concepts he teaches are universal. The first two courses in this learning path are already available. The third course, with the tentative title Expert Visual Testing Practices, will be available by April.
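If you want a feel for what visual testing code looks like before the courses land, here is a rough sketch using Selenium WebDriver with the Applitools Eyes Java SDK. The app name, test name, and URL below are placeholders, and exact method names can vary a little between SDK versions.

```java
import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VisualSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY")); // key supplied via environment variable

        try {
            eyes.open(driver, "Demo App", "Login page renders correctly");
            driver.get("https://demo.applitools.com"); // placeholder page under test
            eyes.checkWindow("Login page");             // capture and compare a full-window snapshot
            eyes.close();                               // end the test and report any visual differences
        } finally {
            eyes.abortIfNotClosed(); // clean up if the test exited early
            driver.quit();
        }
    }
}
```

The interesting part is what you do not have to write: instead of dozens of element-by-element assertions, a single check covers the whole window.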

Leadership Learning Path

Historically, Test Automation University has focused almost exclusively on testing and automation skills. As we grow in our careers as professionals, we often find we need to develop other kinds of skills as well. That's why I'm excited to announce that TAU will be developing a new learning path for leadership!

Managing Test Teams by Jenny Bramble

The first course in our new leadership track will be Managing Test Teams by Jenny Bramble. At some point in our careers, many of us face the question, “Should I become a manager?” Becoming a good manager requires a different skill set than becoming a good tester or a good engineer. It comes with a new set of challenges that you might not expect. In this course, Jenny will share everything she’s learned about leading teams of testers, holding crucial conversations, and deciding if a career path in management is right for you. Jenny is also one of our speakers later today, so be sure to join her session.

Building Up Quality Leaders by Laveena Ramchandani

The second course will be Building Up Quality Leaders by Laveena Ramchandani. While Jenny’s course will focus primarily on being a good manager, Laveena’s course will focus on attributes of leadership that apply to any role within software quality. Laveena will cover leadership skills like servanthood, charisma, decisiveness, and gratitude, and how they all apply to the testing world.

Creating Effective Test Strategies by Erin Crise

The third course will be Creating Effective Test Strategies by Erin Crise. The key to leading successful testing initiatives is a solid, well-grounded, well-balanced strategy. The strategy must balance automation with exploration. It must include a variety of areas such as visuals, accessibility, and performance. It must be communicated effectively across teams. And it must be agile enough to adjust to changes. With her wealth of experience from several software projects, Erin will teach how to plan and execute effective test strategies in this course.

Just like the other upcoming courses I just announced, these leadership courses aren't ready yet. Expect them to drop later this year. We are also planning a few more courses in this leadership path, so stay tuned for more details!

Refreshing old courses

Now, as we said earlier, TAU is over four years old. In the world of tech, that's a long time. I hate to say it, but some of our courses are becoming outdated. Tools and frameworks evolve – as they should. It's time to refresh many of our courses.

Last year, we published our first refresh courses: Introduction to Robot Framework by Paul Merrill and UI Automation with WebdriverIO v7 by Julia Pottinger. Both refresh courses were well received by our community of students.

After auditing our whole course catalog, I decided that it’s time to refresh some more courses!

Cypress courses by Filip Hric

Our intro and advanced Cypress courses are among our most popular. In fact, Introduction to Cypress is currently ranked #8 for monthly completions. This is no surprise given Cypress's incredible popularity with front-end developers. However, our mainstay courses were published before many of Cypress's current features existed, like component testing and the new directory structure introduced in Cypress 10.

Filip Hric will take up the challenge to refresh both the intro and advanced Cypress courses. He will redevelop them to flow together cohesively as parts 1 and 2. We will also create a new Cypress learning path for them once we publish them.

Playwright courses by Renata Andrade

As a Playwright Ambassador, I can’t let Cypress have all the fun. Playwright is so hot right now. The community is booming, the downloads are skyrocketing, and even within Applitools and TAU, we see its adoption growing significantly. That’s why we are not only refreshing our introduction-to-Playwright course, but we are also going to publish an all-new Advanced Playwright course!

Our instructor Renata Andrade, like Filip, will develop these courses to be parts 1 and 2. She will also develop them in TypeScript. And you’d better believe we’ll create a Playwright learning path for them as well.

Mobile Automation with Appium in Java by Moataz Nabil

Let’s not forget mobile test automation as well. Moataz Nabil is going to split his course, Mobile Automation with Appium in Java, into two parts. At six and a half hours, this course is currently the longest on TAU. By splitting it into two parts, we hope to make it easier for students to complete.

Web Element Locator Strategies by Andrew Knight

And finally, there’s one more course we plan to redevelop: my very first course, Web Element Locator Strategies. In my refresh, I intend to develop new subchapters to show how to write locators in different frameworks like Playwright and Cypress, not just in Selenium Java. I’ll include new types of locators as well.
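While that refresh is in the works, here is a quick reminder of what the Selenium Java side of the topic looks like: the same heading and link on the public example.com page, located in several different ways. The selectors are for illustration only and assume that page's long-standing markup.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class LocatorStrategiesDemo {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com");

            // The same <h1> heading, reached by three different strategies.
            WebElement byTag   = driver.findElement(By.tagName("h1"));
            WebElement byCss   = driver.findElement(By.cssSelector("div > h1"));
            WebElement byXPath = driver.findElement(By.xpath("//h1[contains(text(), 'Example')]"));

            // Link text works well for navigation elements with stable labels.
            WebElement moreInfo = driver.findElement(By.linkText("More information..."));

            System.out.println(byTag.getText() + " | " + byCss.getText()
                    + " | " + byXPath.getText() + " | " + moreInfo.getText());
        } finally {
            driver.quit();
        }
    }
}
```

A rule of thumb from the original course still applies: prefer the simplest, most stable locator the page offers, and save XPath for the cases the others cannot handle.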

Retired courses

We also needed to retire a few of our courses that are out of date:

  1. Automated Visual Testing with Java by Angie Jones
  2. AI for Element Selection by Jason Arbon
  3. From Scripting to Framework with Selenium and C# by Carlos Kidman

In case you missed it, we removed these courses from TAU at the beginning of March. I want to thank these instructors for making these courses.

A full curriculum

I know I’ve talked a lot about learning paths today. One of the most requested new features we receive is certificates for completing learning paths. I’m sorry to say that we are not going to add certificates for learning paths. Learning paths are somewhat dynamic. They can change over time as we publish new courses. So, giving certificates for completing them doesn’t make sense.

Instead, we are toying with a stronger idea: a curriculum. A learning path concentrates on one narrow testing topic, like leadership or visual testing. I want to create a comprehensive program of courses covering a wide breadth of testing topics, and to recognize completion of that program with something like a diploma. We have a rich catalog of courses for building a strong curriculum. Right now, this is still in the idea stage. I’m hoping to bring it into reality later this year.

One more thing…

TAU is four years old, and while I hate to admit it, the TAU platform itself is starting to show its age. With so many courses available, it’s becoming hard to find the ones you want. On the backend, it actually takes quite a bit of manual labor to publish a new course, too.

It’s time to refresh the TAU platform. It’s time for TAU version 2.

Over the coming months, Matt Jasaitis and I will redevelop the whole TAU web app. Yes, that’s the same Matt Jasaitis who is creating the Visual Testing learning path. Our goal is to build a sustainable learning platform to serve Test Automation University for years to come. We will also build a better support system into the platform. Unfortunately, this project is so new that I don’t even have mockups to show you yet, but at least I can announce that TAUv2 is coming.

So, what do y’all think about all this new content coming to TAU? We’d love to hear from you. Let us know through TAU Slack.

Test Automation University by the Numbers https://applitools.com/blog/test-automation-university-by-the-numbers/ Mon, 13 Mar 2023 15:25:00 +0000 https://applitools.com/?p=48470

TAU by the numbers

Test Automation University (TAU) is the premier platform for learning about software testing and automation. Powered by Applitools, TAU offers dozens of courses on all testing topics imaginable from the world’s best instructors for FREE.

In this article, I’d like to share a breakdown of Test Automation University by the numbers: all the stats and metrics that we gather for insights into the program. I also shared this information in my keynote address for Test Automation University’s 2023 conference, What’s New With TAU? Let’s dive in!

Students, courses, and achievements

Believe it or not, it’s been four years since Applitools launched TAU! Four years is a long time. That’s as long as a typical college undergraduate degree.

I remember feeling thrilled when Applitools first launched TAU. Before then, there wasn’t any cohesive, community-oriented education platform for testing and automation. Most schools barely included testing topics in their programs. While the Web had a wealth of information, it was difficult to know which resources were both comprehensive and trustworthy. Test Automation University provided a central platform that could be trusted with literally the best and the brightest instructors in the world.

Now, I can’t imagine a world without TAU. At least once a week, someone I’ve never met before slides into my DMs and asks me, “Andy, how can I start learning about testing and automation?” I literally just tell them, “Have you heard of Test Automation University?” Boom. Done. I don’t have to check my bookshelf for titles or do a quick search for blog articles. I just say go to TAU. They’re always grateful.

And when they do go to TAU, they join our ever-growing community of students! Just last month, we crossed 150 thousand registered “TAU-dents”! That’s a huge size for any community.

Every time we’ve hit a milestone, we’ve celebrated. When we hit 50 thousand students in June 2020, we threw our first TAU Party. Then, when we hit 100 thousand students, we hosted TAU: The Homecoming, our first virtual conference. That’s when I first joined Applitools. Now, at 150 thousand students, we’re hosting our second virtual conference.

Our growth is also accelerating. Historically, we have grown by 50 thousand students about every year and a half. This most recent milestone happened a little sooner than expected.

Every week, about 800 to 1000 new students sign up for TAU. That’s sustained, consistent growth. It’s awesome to see all the new students joining every week.

Over 4 years, we have published a total of 73 courses. That's a lot of content! A single course has at least 4 chapters with at least 30 minutes of video lessons. Most courses average about 8 chapters and over an hour of content, and our longest courses stretch to 6 hours! We try to publish a new course about once a month. And the best part is that every one of our courses is completely, totally, absolutely FREE!

Our courses would be nothing without the excellent instructors who teach them. To date, we have 48 different instructors who have produced courses for TAU. These folks are the real superstars. They are bona fide software testing champions. Let me tell you, it is no small effort to develop a TAU course.

When Applitools first launched TAU back in 2019, I remember thinking to myself as a younger panda, “Man, that’s awesome. I wish I could be part of that. Maybe someday I can develop a TAU course and be like them.” A few months later, Angie Jones slid into my DMs and asked me if I’d want to develop my first course: Web Element Locator Strategies. In all honesty, TAU is one of the main reasons I work at Applitools today!

Let’s look at achievements next. In total, all students have completed 162 thousand courses! That’s 162 thousand certificates and 162 thousand badges, averaging slightly more than 1 course completion per student.

The credit total is even more impressive. Our students have earned about 116 million course credits. That’s just under 800 credits per student, which correlates to about one course’s worth of credits. 116 million credits earned is a mind-blowing number, and that total is only going to increase with time.

Course completion analysis

Those are just some raw numbers on TAU. I also did some deeper analysis on our courses to learn what topics and formats are most valuable to our community. Let’s take a look at that together.

When evaluating courses, the main metric I measure is course completion – meaning, how many students completed the course and earned a certificate. Anyone can start a course, but it takes dedication to complete the course. Completion signals the highest level of engagement with TAU: a student must dedicate hours of study and coding to earn a course badge.

My goal for any course we publish – and how I determine if a course is “successful” in most basic terms – is to consistently hit 30 completions per month, or 1 completion per day. Very popular courses hit 2 completions per day, and our most popular courses hit at least 3 completions per day.

Our top 10 courses are all in that upper echelon of completion rate. Our most popular course by far is Angie Jones' Setting a Foundation for Successful Test Automation, followed by my course, Web Element Locator Strategies.

All three of our programming courses – Java, JavaScript, and Python – appear in the top 10. Rounding out the top 10 are courses on Selenium WebDriver, Cypress, API testing, and IntelliJ IDEA. If you want to see more details about the most popular TAU courses, check out the article I wrote about it a few months ago on the Applitools blog. Note that some small shifts have happened since I wrote that article, but the information is still mostly accurate.

When partitioning the full catalog of courses by the programming languages they use, we can see that Java and JavaScript dominate the landscape. This should be no surprise, since those two languages are by far the most popular languages for test automation. Applitools product data also backs this up. TAU offers a good number of Python and C# courses that are reasonably popular. I personally developed 3 of those 7 Python courses. TAU also offers a handful of courses that use Ruby and other languages. Unfortunately, however, we have discovered that those courses, especially for Ruby, have very low completion rates. Again, I surmise that this reflects broader industry trends.

I also analyzed the completion rate for a course versus its length. What I found is probably not surprising: shorter courses have higher completion rates, while longer courses have lower completion rates. The average completion rate for all courses is 57%. The breaking point appears to be at about two hours, with a sweet spot between 60 and 90 minutes. Since we want to encourage course completions, we will ask instructors to produce courses in the hour-to-hour-and-a-half range moving forward.

So, what’s next?

These numbers describe TAU as it is today. In my next article, I’ll share all the new plans we have for TAU, including upcoming courses!
